You too can conduct a high-quality, valid observational study! Today, Dr. Miguel Hernan reminded us of some overlooked aspects of doing exactly this. When done correctly, an observational study can come close to the estimate a randomized controlled trial would give. When done incorrectly, we probably have not framed our thinking in the right way. Ultimately, observational studies can sometimes mimic RCTs (though not replace them). To even begin to achieve this, we need to approach our observational studies more like RCTs.
The thinking begins by asking an answerable question. Questions for RCTs involve interventions that are ‘randomly assignable’. ‘What is the effect of taking aspirin compared to a placebo on Y?’ ‘What is the effect of giving people a basic living wage compared to targeted benefits on Y?’ Inherent to these is a well-defined intervention that is the same across participants. In contrast, consider ‘What is the effect of weight loss on heart disease?’ This is not a well-defined causal question, because multiple interventions lead to weight loss (did you start exercising or start smoking? The effect will differ depending on which).
Next, consider eligibility and Time 0. Here is where even high-quality observational studies can go wrong. Common mistakes are accidentally (?) including prevalent users of an intervention rather than incident users, or inducing immortal time bias. A great example is the controversy between the RCT and observational findings on hormone replacement therapy, and its partial resolution once the time 0 and prevalent-user inclusion issues were addressed. To avoid such blunders, get to know your data. Very well. In contrast to an RCT, where we might build the data from scratch, we observational researchers often purchase pre-existing data and are thus at the mercy of its documentation. Thinking like a trialist and digging into the data reveals whether it is possible to answer your causal question, or some worthwhile version of it, after all.
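To make the incident-user and time 0 ideas concrete, here is a minimal sketch in Python. All patient IDs, dates, and the prescription-fill data structure are invented for illustration; real claims or pharmacy data would be messier and need a proper washout window.

```python
from datetime import date

# Hypothetical prescription records: (patient_id, fill_date); all made up.
fills = [
    ("p1", date(2019, 3, 1)), ("p1", date(2020, 1, 5)),   # p1 used the drug before study start
    ("p2", date(2020, 2, 10)),
    ("p3", date(2020, 6, 1)), ("p3", date(2020, 7, 1)),
]

study_start = date(2020, 1, 1)

# Group fill dates by patient
by_patient = {}
for pid, d in fills:
    by_patient.setdefault(pid, []).append(d)

# Incident users: first-ever observed fill falls on/after study start.
# Their time 0 is that first fill date, so eligibility, treatment start,
# and follow-up are aligned, and no "immortal" person-time precedes treatment.
time_zero = {}
for pid, dates in by_patient.items():
    first = min(dates)
    if first >= study_start:
        time_zero[pid] = first

print(time_zero)  # p1 is excluded as a prevalent user
```

The key design choice is that each included patient's clock starts at treatment initiation, not at some arbitrary calendar date before or after it.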
Third, minimize confounding biases, and consider estimating a per-protocol effect while you’re at it. Eliminating these biases means that the only important difference between the treatment and control groups is the treatment itself. There are methods, such as inverse probability of treatment weighting and propensity score matching, that help mimic random assignment. Implementing these methods using baseline covariate measures allows estimation akin to an RCT’s ‘intent-to-treat’ (ITT) effect (the effect of being assigned versus not assigned to treatment).
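As a toy sketch of inverse probability of treatment weighting: in practice the propensity scores would come from a model (e.g., logistic regression of treatment on baseline covariates); here they are made-up numbers, and the tiny cohort exists only to show the weighting arithmetic.

```python
# Toy cohort: each record is (treated, propensity_score, outcome).
# Propensity scores are invented for illustration.
cohort = [
    (1, 0.8, 1.0), (1, 0.6, 0.0), (1, 0.4, 1.0),
    (0, 0.8, 1.0), (0, 0.3, 0.0), (0, 0.2, 0.0),
]

p_treated = sum(t for t, _, _ in cohort) / len(cohort)  # marginal P(treated)

def stabilized_weight(treated, ps):
    # Weight each person by the inverse probability of the treatment
    # they actually received; stabilizing by the marginal treatment
    # probability reduces weight variability.
    if treated:
        return p_treated / ps
    return (1 - p_treated) / (1 - ps)

weights = [stabilized_weight(t, ps) for t, ps, _ in cohort]

def weighted_mean(arm):
    # Weighted outcome mean in one arm of the pseudo-population
    num = sum(w * y for (t, _, y), w in zip(cohort, weights) if t == arm)
    den = sum(w for (t, _, _), w in zip(cohort, weights) if t == arm)
    return num / den

# In the reweighted pseudo-population, measured baseline confounders are
# balanced, so this contrast mimics a randomized comparison.
effect = weighted_mean(1) - weighted_mean(0)
```

People who received a treatment their covariates made unlikely get up-weighted, which is what balances the two arms on those covariates.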
But remember, confounding is an issue in both RCTs and observational studies when studying the ‘per-protocol’ effect (the effect of actually taking versus not taking a treatment). In RCTs this problem is generally dealt with by ignoring it. Estimating a per-protocol effect requires more complex analyses using time-varying exposure and covariate measures. But given that per-protocol estimates are usually the actual effect of interest, and ITT estimates can be problematic, more effort should be made to present both ITT and per-protocol results. Dr. Hernan predicts these efforts will become commonplace in the near future.
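The ITT versus per-protocol distinction can be sketched in a few lines. All records below are fabricated, and the ‘per-protocol’ estimate shown is the naive restriction to adherent participants, included precisely to show why the more complex time-varying adjustments mentioned above are needed.

```python
# Hypothetical trial records: (assigned_to_treatment, actually_adhered_outcome...).
# Each tuple is (assigned, took_treatment, outcome); all numbers are made up.
trial = [
    (1, 1, 0.2), (1, 1, 0.1), (1, 0, 0.9),   # one treated patient never took the drug
    (0, 1, 0.3), (0, 0, 0.8), (0, 0, 0.7),   # one control obtained treatment anyway
]

def mean(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

# ITT: compare groups as assigned, ignoring adherence.
itt = (mean(y for a, _, y in trial if a == 1)
       - mean(y for a, _, y in trial if a == 0))

# Naive per-protocol: keep only people who followed their assignment
# (treated arm: took the drug; control arm: did not take it).
pp_naive = (mean(y for a, took, y in trial if a == 1 and took == 1)
            - mean(y for a, took, y in trial if a == 0 and took == 0))

# pp_naive is generally biased: adherence is not random, so restricting
# on it reintroduces confounding. Valid per-protocol estimation requires
# adjusting for the time-varying covariates that predict adherence.
print(itt, pp_naive)
```

The gap between the two estimates in even this tiny example is why presenting an unadjusted as-treated number next to the ITT result, without the time-varying analysis, can mislead.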