“Comparison of Population-Based Observational Studies with Randomized Trials in Oncology,” published in JAMA Oncology, concludes that observational oncology studies comparing treatment efficacy do not reach the same conclusions as randomized clinical trials comparing the same treatments. Here, I walk through some of the claims and reactions.
As the difficulty of funding randomized trials increases, more researchers are turning to population registry data to compare treatments. Although observational analysis is frequently taught as a hypothesis-generating tool, some organizations have proposed replacing randomized trials with results from observational data. The claim that both methods produce unbiased and reproducible results in oncology has not been previously examined. While several published papers on reproducibility have found poor agreement among randomized controlled trials, very little work compares observational study results with randomized controlled trial results.
“Statisticians themselves must also be cautious implementing these methods with population registry data if they hope to have an unbiased estimate of the treatment effect difference.”
Many researchers use observational data to make treatment efficacy comparisons, but we show that this can be dangerous: the results do not agree with those of randomized controlled trials. Further, we show that agreement does not improve regardless of the statistical methods used; more statistically sound methodology does not increase agreement with a matched randomized trial. This should motivate statisticians to better educate clinicians on the assumptions of the methods used (such as all confounders being correctly measured and included). Statisticians themselves must also be cautious implementing these methods with population registry data if they hope to have an unbiased estimate of the treatment effect difference. This work also motivates further development of causal inference methodology to better reflect the true state of observational data and the assumptions that can or cannot be met.
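To make the confounding assumption concrete, here is a minimal simulation sketch (my own illustration, not from the paper): a hypothetical unmeasured "severity" variable drives both who gets treated and the outcome. Even though the true treatment effect is exactly zero, the naive treated-vs-control comparison, the kind a registry analysis might report, shows a large apparent effect.

```python
import math
import random

random.seed(0)
n = 100_000

# True treatment effect is zero by construction: any apparent effect
# below comes entirely from the unmeasured confounder ("severity").
records = []
for _ in range(n):
    severity = random.gauss(0, 1)                  # unmeasured confounder
    p_treat = 1 / (1 + math.exp(-severity))        # sicker -> more likely treated
    treated = random.random() < p_treat            # non-random treatment assignment
    outcome = 2.0 * severity + random.gauss(0, 1)  # outcome has NO treatment term
    records.append((treated, outcome))

treated_outcomes = [y for t, y in records if t]
control_outcomes = [y for t, y in records if not t]
naive_effect = (sum(treated_outcomes) / len(treated_outcomes)
                - sum(control_outcomes) / len(control_outcomes))
print(f"naive estimated effect: {naive_effect:.2f} (true effect: 0)")
```

Randomizing treatment assignment (replacing `p_treat` with a coin flip) drives the estimate back to zero, which is precisely the protection a randomized trial provides and an observational study cannot guarantee.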
The results of this paper have not been without contention. Researchers using registry data to make treatment comparisons may feel that these results are an attack on their work. Within months of publication, the journal received a critique and asked us to write a rebuttal. We performed additional simulations and tests that further supported our original claim: observational studies cannot replace randomized trials in oncology. This conclusion may not be popular, but it is critical for progress. If patients receive worse treatments because of inaccurate conclusions drawn from population registry data, the consequences may be deadly. As statisticians, the burden is on us to improve the methodology. We must educate clinicians and other policy makers while being honest about the limitations of our work. To provide the best care to patients, we must use the best tool, and the best tool remains the randomized controlled trial.