Overview
This page is dedicated to the Book Club I organized for the PhD students in epidemiology at Aarhus University, Department of Clinical Epidemiology (Klinisk Epidemiologisk Afdeling, KEA). I openly share the materials for each “What If” chapter, so you can read them, share them, learn from them, and find mistakes 🤓
Download the “What If” book
Follow this direct link to download the book version dated 30 March 2021, or visit Miguel Hernán's faculty website, where the most recent version of the causal inference book and supplementary learning materials are openly available.
Agenda
Part I
Chapter | Date | Notes | Slides link |
---|---|---|---|
Chapter 1. A definition of causal effect | 8 Oct, 2020 | Recap, discussion, scheduling Chapter 2 session | 📑 |
Chapter 2. Randomized experiments | 21 Oct, 2020 | Recap, discussion, scheduling Chapter 3 session | 📑 |
Chapter 3. Observational studies | 12 Nov, 2020 | Recap, discussion, Chapter 4 session on 26 Nov, 2020 | 📑 |
Chapter 4. Effect modification | 26 Nov, 2020 | Recap, discussion, Chapter 5 session on 10 Dec, 2020 | 📑 |
Chapter 5. Interaction | 10 Dec, 2020 | Recap, discussion, Chapter 6 session on 7 Jan, 2021 | 📑 |
Chapter 6. Graphical representation of causal effects | 21 Jan, 2021 | Recap, discussion, Chapter 7 session on 4 Feb, 2021 | 📑 |
Chapter 7. Confounding | 04 Feb, 2021 | Recap, discussion, Chapter 8 session on 18 Feb, 2021 | 📑 |
Chapter 8. Selection bias | 18 Feb, 2021 | Recap, discussion, Chapter 9 session on 4 March, 2021 | 📑 |
Chapter 9. Measurement bias | 4 March, 2021 | Recap, discussion, Chapter 10 session on 18 March, 2021 | 📑 |
Chapter 10. Random variability | 18 March, 2021 | Recap, discussion, Chapter 11 session on 15 April, 2021 | 📑 |
Part II
Chapter | Date | Notes | Link |
---|---|---|---|
Chapter 11. Why model? | 15 April, 2021 | Recap, discussion, Chapter 12 session TBD | 📑 |
Chapter 12. Inverse probability weighting and marginal structural models | 29 April, 2021 | Recap, discussion, Chapter 13 session is on May 27th | 📑 |
Chapter 13. Standardization and the parametric g-formula | 28 May, 2021 | Recap, discussion | 📑 |
Chapter 14. G-estimation | | | 📑 |
Chapter 15. Outcome regression and propensity scores | | | 📑 |
Chapter 16. Instrumental variables estimation | | | 📑 |
Chapter 17. Causal survival analysis | | | 📑 |
Chapter 18. Variable selection for causal inference | | | 📑 |
Other materials
Tweets on selection bias
- Couldn't agree more 😆
  Great thread on #selectionbias by Elena Dudukina.
  An up-to-date compilation + comment of papers on this topic, from decades ago to last week. https://t.co/zWqIy6KEat
  — Miguel Hernán (@_MiguelHernan) September 21, 2022
Tweets on random non-exchangeability
- Why is this idea so widely held (apparently) when it has been repeatedly shown to be wrong? The passage here is from a paper by philosopher @JonathanJFuller (https://t.co/at7bIh6baa). pic.twitter.com/Q9sxMiyuwn
  — Darren Dahly (@statsepi) June 5, 2020
- Randomization never ensures zero #confounding bias. It provides probabilistic bounds on confounding.
  Therefore, by bad luck, the effect estimates from some perfectly conducted randomized #trials are substantially confounded. But we don't know which ones!
  An eye-opening example:
  — Miguel Hernán (@_MiguelHernan) July 23, 2018
- "Random confounding" is like "genuine imitation" or "plastic silverware," but for causal identification.
  If it is confounding (a systematic error), then by definition it is not random; if due to chance (+ so goes away with increasing sample size) it is not confounding.
  — Daniel Westreich, PhD (@EpidByDesign) December 12, 2019
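The tweets above make a concrete, checkable claim: chance imbalance between arms of a randomized trial is never exactly zero, but it shrinks as the sample size grows. As a rough illustration (my own sketch, not part of the book club slides or the quoted threads; the covariate prevalence, assignment probability, and sample sizes are all arbitrary assumptions), the Python snippet below simulates randomized trials of increasing size and reports the average chance imbalance in a prognostic baseline covariate between arms.

```python
# Toy simulation of chance ("random") non-exchangeability.
# All names and parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2021)
n_trials = 2000  # simulated randomized trials per sample size

for n in (50, 500, 5000):
    imbalances = []
    for _ in range(n_trials):
        L = rng.binomial(1, 0.3, size=n)  # prognostic baseline covariate
        A = rng.binomial(1, 0.5, size=n)  # randomized treatment assignment
        # chance difference in covariate prevalence between the two arms
        imbalances.append(abs(L[A == 1].mean() - L[A == 0].mean()))
    print(f"n = {n:5d}: mean |imbalance in L| = {np.mean(imbalances):.3f}")
```

Under these assumptions the average absolute imbalance shrinks roughly like 1/√n: it is never exactly zero in any finite trial, which is the sense in which randomization gives probabilistic rather than absolute bounds on confounding.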
Tweets on inappropriate use of p-values
- I am teaching intro to epidemiology concepts to research year students at @DCEAarhus. One of the topics I mention is retiring null hypothesis testing. Here are some of my favorite materials on abolishing null hypothesis testing/p-value misinterpretation⤵️
  1/n #IYKYK #epitwitter
  — Elena Dudukina @evpatora@mastodon.social (@evpatora) August 29, 2022
- This is my P-value eureka example. pic.twitter.com/8CMj96bumn
  — Ken Rothman (@ken_rothman) February 26, 2018
- Speaking of "statistical significance", here is a candidate for the most outrageous use of the concept ever. I could not have made this up. pic.twitter.com/2f9YKJLwzC
  — Miguel Hernán (@_MiguelHernan) May 2, 2017
- Auspicious start for the new year: editors of 35 clinical journals offer guidance on the design/reporting of observational analyses for #causalinference.
  For example:
  - design analyses that emulate a #TargetTrial
  - don't rely on p-values
  It's happening. https://t.co/YOQ5jgomAr pic.twitter.com/oMTFVw6hXQ
  — Miguel Hernán (@_MiguelHernan) January 6, 2019
- Slightly tired of explaining to reviewers why I don't report P-values in large (here, n= 40,000) observational studies. Especially not in Table 1.
  Full disclosure: I've done this in the past, and I'm trying my best not to repeat the same mistake again... pic.twitter.com/95JCO975U8
  — Lucas Morin (@lucasmorin_eolc) February 23, 2021
- Time for a mid-year check-in of New Year's resolutions.
  Someone asked me: "Is it really possible to write for a major medical journal without using statistical significance to describe the results?" The answer is yes. https://t.co/4Q17y1jyRP
  Kudos to reviewers and editors @NEJM https://t.co/nICwWqE3IP
  — Miguel Hernán (@_MiguelHernan) June 29, 2018
- Sometimes people ask me why the use of #StatisticalSignificance is a bad idea for science.
  My shirt-based response is quoted below.
  (if we don't use an arbitrary dichotomization of $ for life decisions, why use an arbitrary dichotomization of p-values for scientific decisions?) https://t.co/3H5QR9Ty2q
  — Miguel Hernán (@_MiguelHernan) May 10, 2019
- More p-value silliness. HR 0.90, 95%CI 0.81-0.99 --> 'effect'; HR 0.89, 95%CI 0.78-1.0009 --> no 'effect' https://t.co/Xe5oakbxKR @ken_rothman pic.twitter.com/07sREcGbab
  — Alvaro Alonso (@alonso_epi) May 12, 2017
- Simple way for editors to improve science: If your journal still uses “statistical significance” in 2017, retire your statistical consultant pic.twitter.com/ZbPljE2OyP
  — Miguel Hernán (@_MiguelHernan) April 28, 2017
- ...and here is an illustration of why the p-value is not the probability that the null hypothesis is correct. pic.twitter.com/jZdrd76KAB
  — Ken Rothman (@ken_rothman) June 9, 2020
- The problem with a p-value is that it confounds effect size with precision (https://t.co/RsyecQGn0O). A CI separates the two: effect size is given by the location of the CI on the parameter scale, and the distance between the lower and upper limit indicates the precision. pic.twitter.com/iA2Am07exh
  — Ken Rothman (@ken_rothman) September 5, 2018
- Those who think of p-values and CIs as interchangeable are those whose analytic goal is a declaration about statistical significance.
  — Ken Rothman (@ken_rothman) September 6, 2018
- Berkson in 1942 explaining the faulty logic of significance testing: "Suppose I said, "Albinos are very rare in human populations, only 1 in 50,000. Therefore, if you have taken a random sample of 100 from a population and found in it an albino, the population is not human." pic.twitter.com/rOblWF2yN5
  — Ken Rothman (@ken_rothman) February 26, 2018
- Oh lord, calling your piece "P-values explained--it's easier than you think" and then explaining p-values wrongly.
  This is my 'biggest' fear in public writing. It's hard to simplify complex concepts to their essence. https://t.co/Mu3nkzoFSE
  — Cecile Janssens (@cecilejanssens) September 4, 2020
- Let's try something new: *7 days, 7 statistical misconceptions*.
  Over the next 7 days I'll post about 1 statistical misconception a day. Curious to hear your favorite #statsmisconceptions, so feel free to add your own to this thread
  — Maarten van Smeden (@MaartenvSmeden) October 14, 2018
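Rothman's point above that a p-value is not the probability that the null hypothesis is correct can be checked with a small simulation. The sketch below is my own illustration, not taken from any of the quoted threads; the 50/50 mix of null and non-null studies, the effect size, and the per-arm sample size are all assumptions chosen for convenience. It generates many two-arm studies and then looks only at those whose p-value lands just under 0.05: the share in which the null hypothesis is actually true comes out far larger than 5%.

```python
# Sketch: p ≈ 0.05 does not mean a 5% chance that the null hypothesis is true.
# Half of the simulated studies are truly null; we then check how often the
# null is true among studies whose p-value falls just below 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_studies, n_per_arm, effect = 20_000, 50, 0.3  # illustrative assumptions

null_true = rng.random(n_studies) < 0.5  # is the null true in this study?
p_values = np.empty(n_studies)
for i in range(n_studies):
    delta = 0.0 if null_true[i] else effect
    control = rng.normal(0.0, 1.0, n_per_arm)
    treated = rng.normal(delta, 1.0, n_per_arm)
    p_values[i] = stats.ttest_ind(treated, control).pvalue

borderline = (p_values > 0.04) & (p_values < 0.05)
print(f"studies with 0.04 < p < 0.05: {borderline.sum()}")
print(f"fraction of those with a true null: {null_true[borderline].mean():.2f}")
```

Under these particular assumptions roughly a quarter of the "borderline significant" studies are truly null, so reading p = 0.04 as "a 4% chance that the null is true" badly overstates the evidence; the exact fraction depends on the assumed prior mix, effect size, and sample size.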