
Causal inference


From internal validity to causal inference

  • Causal inference based on criteria
  • Causal pie or apportionment
  • Explore causation using DAGs
  • Counterfactual theories of causation

Simpson's paradox: why we need to study causality


Sir Austin Bradford Hill (1965): Hill's criteria


Bradford-Hill Criteria - 1

  • Strength of Association
  • Consistency of findings
  • Specificity
  • Temporality

Bradford-Hill Criteria - 2

  • Biological gradient
  • Plausibility
  • Coherence
  • Experiment
  • Analogy

Strength

  • The stronger an association, the more likely it is to reflect cause and effect
  • A stronger association is less likely to be explained away by confounding
  • For the same prevalence of exposure, a stronger association also implies a higher PAF% (worked through in the sketch below)
  • pE = 0.20, RR: 3.0, PAF: 28.57%
  • pE = 0.20, RR: 10, PAF: 64.29%
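A minimal sketch of the arithmetic behind these two figures, assuming Python and Levin's formula PAF = pE(RR − 1) / [1 + pE(RR − 1)]; the function name `paf` is ours.

```python
def paf(p_exposed: float, rr: float) -> float:
    """Population attributable fraction by Levin's formula."""
    excess = p_exposed * (rr - 1)
    return excess / (1 + excess)

for rr in (3.0, 10.0):
    print(f"pE = 0.20, RR = {rr}: PAF = {paf(0.20, rr):.2%}")
# pE = 0.20, RR = 3.0: PAF = 28.57%
# pE = 0.20, RR = 10.0: PAF = 64.29%
```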

Temporality

  • Cause must precede Effect

Biological Gradient

  • As the dose of exposure increases, so does the size of the effect

Sufficient and Component Cause Model


Directed Acyclic graphs for causal inference

  • Go to http://dagitty.net/dags.html#
  • Take out your papers and pencils and follow along (you may work in groups)
  • You must close all open backdoor paths
  • You must not open any closed backdoor paths

Backdoor paths

  • A backdoor path is open if every variable along it is a non-collider (e.g. a confounder) that has not been conditioned on
  • A backdoor path is closed if it contains a collider
  • Conditioning on a collider opens a closed backdoor path (illustrated in the sketch below)
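A small sketch of these rules, assuming Python with networkx (versions 2.8–3.4 provide `d_separated`; newer releases rename it `is_d_separator`). The DAG and the variable names L (confounder) and C (collider) are hypothetical.

```python
import networkx as nx

# Hypothetical DAG: confounder L causes both exposure A and outcome Y;
# collider C is caused by both A and Y. There is no direct A -> Y arrow,
# so any A-Y association must flow through these two paths.
g = nx.DiGraph([("L", "A"), ("L", "Y"), ("A", "C"), ("Y", "C")])

# Backdoor path A <- L -> Y is open when nothing is conditioned on.
print(nx.d_separated(g, {"A"}, {"Y"}, set()))       # False: open path via L

# Conditioning on the confounder L closes that backdoor path.
print(nx.d_separated(g, {"A"}, {"Y"}, {"L"}))       # True: all paths blocked

# Conditioning on the collider C re-opens the closed path A -> C <- Y.
print(nx.d_separated(g, {"A"}, {"Y"}, {"L", "C"}))  # False: collider path opened
```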

Counterfactual causality concept 1

  • Imagine the treatment A and the outcome Y are both binary, taking values 1 and 0
  • For A, the lower-case a denotes its counterfactual (potential) value
  • a = 0 or a = 1

Counterfactual causality concept 2

  • If everyone in the study were to receive the treatment (be exposed) simultaneously,
  • what would the outcome be?
  • P[Y_(a = 1) = 1]
  • the probability of the outcome Y under a = 1

Counterfactual causality concept 3

  • If everyone in the study were to receive the control condition (be unexposed) simultaneously,
  • what would the outcome be?
  • P[Y_(a = 0) = 1]
  • the probability of the outcome Y under a = 0

Causal Risk Ratio

  • P[Y_(a = 1) = 1] / P[Y_(a = 0) = 1]
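A toy sketch of these counterfactual quantities, assuming Python; the four subjects and their potential outcomes are entirely hypothetical. With a "god's-eye" table listing both potential outcomes for every person, the causal risk ratio can be read off directly.

```python
# Hypothetical potential outcomes: Y1 is the outcome under a = 1, Y0 under a = 0.
subjects = {
    "Ann":  {"Y1": 1, "Y0": 0},
    "Bob":  {"Y1": 1, "Y0": 1},
    "Cara": {"Y1": 0, "Y0": 0},
    "Dev":  {"Y1": 1, "Y0": 0},
}

p_y_a1 = sum(s["Y1"] for s in subjects.values()) / len(subjects)  # P[Y_(a=1) = 1]
p_y_a0 = sum(s["Y0"] for s in subjects.values()) / len(subjects)  # P[Y_(a=0) = 1]
print(p_y_a1, p_y_a0, p_y_a1 / p_y_a0)  # 0.75 0.25 3.0 -> causal risk ratio of 3
```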

What do our observations show us?

  • But we do not get to see this, instead

  • We see P[Y = 1 | A = 1]

  • That is, the probability of the outcome given treatment or exposure

  • And,

  • P[Y = 1 | A = 0]

  • That is, the probability of the outcome given the control condition


Associational Risk Ratio

  • P[Y = 1 | A = 1] / P[Y = 1 | A = 0]

  • If the causal risk ratio equals the associational risk ratio,

  • then association is causation; otherwise it is not
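A minimal simulation sketch of this comparison, assuming Python with numpy; the data-generating model (a single binary confounder L that raises both the probability of exposure and the probability of the outcome) is hypothetical. Because the simulation generates both potential outcomes for every subject, the causal risk ratio is directly computable, and it differs from the associational risk ratio because exposure is confounded.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

L = rng.binomial(1, 0.4, n)                       # hypothetical binary confounder
A = rng.binomial(1, np.where(L == 1, 0.7, 0.2))   # exposure is more likely when L = 1

p_y0 = 0.05 + 0.10 * L        # risk if (counterfactually) unexposed; L adds risk
p_y1 = 2.0 * p_y0             # risk if (counterfactually) exposed: true RR = 2
Y0 = rng.binomial(1, p_y0)    # potential outcome under a = 0
Y1 = rng.binomial(1, p_y1)    # potential outcome under a = 1
Y = np.where(A == 1, Y1, Y0)  # the outcome we actually observe

causal_rr = Y1.mean() / Y0.mean()               # P[Y_(a=1) = 1] / P[Y_(a=0) = 1]
assoc_rr = Y[A == 1].mean() / Y[A == 0].mean()  # P[Y = 1 | A = 1] / P[Y = 1 | A = 0]
print(round(causal_rr, 2))   # ~2.0 by construction
print(round(assoc_rr, 2))    # ~3.4, inflated by confounding through L
```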


How do we measure counterfactuals?

  • Inverse probability weighting
  • Standardisation
  • g-methods
  • Instrumental variables
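As a sketch of the first of these methods, inverse probability weighting can recover the causal risk ratio from confounded data like the simulation above; Python with numpy is assumed, and the weights here use the true (hypothetical) exposure probabilities, whereas in practice they would come from a fitted propensity-score model.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# The same hypothetical confounded data-generating process as above.
L = rng.binomial(1, 0.4, n)
p_A = np.where(L == 1, 0.7, 0.2)
A = rng.binomial(1, p_A)
p_y0 = 0.05 + 0.10 * L
Y = rng.binomial(1, np.where(A == 1, 2.0 * p_y0, p_y0))

# Inverse probability of treatment weights: 1 / P(A = observed value | L).
w = np.where(A == 1, 1.0 / p_A, 1.0 / (1.0 - p_A))

# Weighted risks estimate the counterfactual risks in the pseudo-population.
risk1 = np.average(Y[A == 1], weights=w[A == 1])   # estimates P[Y_(a=1) = 1]
risk0 = np.average(Y[A == 0], weights=w[A == 0])   # estimates P[Y_(a=0) = 1]
print(round(risk1 / risk0, 2))   # ~2.0: the causal risk ratio, confounding removed
```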

Conclusions

  • Moving from internal validity to causality is complex
  • Criteria-based approaches (the Bradford Hill criteria)
  • Counterfactual theories of causation
  • Causes can be conceptualised as sufficient and component causes
  • Next up: Study designs that best capture these relationships