In March 2019, we spoke with Jessica Franklin, Ph.D., lead of the RCT DUPLICATE Project, a large, comprehensive comparison of advanced observational real-world data (RWD) study approaches with randomized controlled trials (RCTs). The project aims to provide guidance on how to optimize the performance of causal inference methods applied to RWD for studying the comparative effectiveness and safety of medications. Mandated by the 21st Century Cures Act and funded by the FDA's Center for Drug Evaluation and Research, the project expects interim results by mid-2019 and full results by the end of 2020.
In the initial phase of the project, Dr. Franklin’s team is attempting to emulate 30 completed clinical trials using real-world data. In a recently announced extension of the project, Franklin will conduct seven new studies that predict the results of ongoing trials—those for which findings are not yet available. We ask Dr. Franklin about the impetus for expanding the scope of the project from emulating to predicting randomized controlled trials with real-world data.
Q: What first led you to using real-world data to emulate randomized controlled trials?
A: I was looking at a meta-analysis that examined a bunch of clinical questions. Each had a non-randomized study followed by a randomized trial that was published later. And I realized that the methodology they used for comparing the RCT estimate to the real-world data estimate was problematic, even though this methodology has been used for 25 years. The problem is publication bias: if you're doing a non-randomized study of the same clinical question as a published randomized controlled trial and you get a different answer, you might not believe your results and not want to publish them. Finding a better way to compare findings from non-randomized studies to randomized trials is how I initially got interested in this problem. And that's really how this project got started.
FDA wants to understand when and how sources of real-world data can produce evidence that is useful for decision making. So, our goal is to provide a model to FDA for how they could accept real-world data and real-world evidence to support regulatory decisions.
However, if we only emulate RCT results that have already been published, there may be continuing concerns about publication bias: that any non-randomized studies that conflicted with the RCT results may have been suppressed. By extending our duplications to seven ongoing randomized controlled trials, we're choosing an analysis and a design that we think will provide valid results matching the RCT before we see the trial's results. And we will publish our non-randomized, real-world data studies first, before the randomized trial results are released. That will show that our findings have been informed only by study power and patient characteristics.
That’s important because we’re thinking about how our results apply to future clinical questions in which a real-world data study may be done without a corresponding randomized controlled trial. In those situations, we need to be able to say: This is the design and analysis that's going to give us valid results and with which we can proceed with confidence.
Q: What do you expect to learn by comparing a non-randomized study to a randomized trial reference standard?
A: It's really the only way we have to assess the success of the entire research process: from identifying the clinical question of interest, to selecting an appropriate data source to answer that question, to selecting an epidemiologic study design and all of the design parameters, such as follow-up time and how the different treatment groups are handled. And then, finally, selecting the analyses that give us our final result.
Researchers are good at studying each one of those parts in isolation from the others. But how we end up with an analytic result depends on this full stream of processes. So, this is the only way that I know of where we can assess the success of the entire research process in combination.
Overall, comparing non-randomized studies with randomized controlled trials can provide an empirical evidence base for building confidence in real-world data studies. There's tons of literature arguing, yes, these non-randomized real-world data studies are useful, or no, they're not useful, but at this point it's really just opinion and personal experience. There isn't a lot of empirical data that either says, yes, this method is valid, or, no, it's not.
So, I think this is a really useful way to build an empirical evidence base that we can point to and say this is how we create valid studies. And here's the data that proves it. These duplication trials can help us—and FDA—learn which methods work better in real data and real clinical questions; to confirm which design and analytic choices make our database studies interpretable for decision-making. They can help us learn which questions can be answered with real-world data. And similarly, which questions should not be answered with real-world data; which questions do we actually need to run a randomized trial to answer validly.
Q: With tens of thousands of trials published each year, how did you figure out which you would emulate and which you would predict?
A: The FDA is interested in trials that contributed to a regulatory decision: regulatory-grade trials. So we wanted to know whether we could do regulatory-grade, non-randomized studies that match the results of those trials. We began the search with two databases of approvals that we maintain in the Division of Pharmacoepidemiology and Pharmacoeconomics at Brigham and Women's Hospital: one of prior approvals and the other of supplemental approvals. In those databases, we looked at the pivotal trials that were used for approval of the drug and searched for those we thought were reproducible. FDA staff also suggested trials they thought would be good candidates. That gave us a set of trials that achieved their primary endpoint: either showing superiority to placebo or another treatment, or non-inferiority to another treatment.
We also wanted to have some trials in our set where the trial was basically a failure: it failed to achieve its primary endpoint and therefore did not contribute to an approval. To find negative trials, we went to ClinicalTrials.gov and searched for published studies of medications that had negative findings and that were studying an outcome we thought would be reproducible.
Q: Which therapeutic areas are covered by these trials?
A: The 30 duplication studies focus on endocrine, musculoskeletal, cardiovascular, and pulmonary treatments. We haven't finalized the set of seven, but we can say they will be in similar areas to the 30. We want to map out the indications where real-world data studies can work: the therapeutic areas with characteristics that give us confidence that a non-randomized design applied to real-world data can support regulatory decision-making.
Q: What makes an RCT reproducible with real-world data?
A: The datasets that we have ultimately determine which trials are reproducible. We're working with three health insurance claims datasets: Optum Clinformatics, Truven MarketScan, and Medicare data. Some trials may be reproducible in some data sources but not in others. For example, these insurance claims datasets may not capture outcomes like rheumatoid arthritis joint problems. So we're not able to emulate any trials that have rheumatological outcomes as the primary outcome, because we don't have those outcomes measured in our datasets.
What we do have measured well are things like cardiovascular problems: heart attack, stroke. We have mortality measured well. In general, the data capture hard clinical endpoints; events that are definitely going to send you to the hospital: hip fracture, stroke, heart failure, asthma exacerbation. If that's the outcome in the trial, then we think that's a trial we can maybe emulate.
Q: Can you tell us more about the pilot RCT DUPLICATE study registered on clinicaltrials.gov? Are there any preliminary results you can share?
A: This study is predicting the findings of an ongoing phase IV trial that is comparing two drugs' impact on cardiovascular outcomes among patients with type 2 diabetes. The manuscript describing the RWE results was submitted in January 2019. In February 2019, preliminary trial results were announced. Full results from both the RWE study and the RCT will be released later this year. This particular demonstration focuses on a comparison of AFib patients who were on warfarin versus patients who were on novel oral anticoagulants (NOACs). It zeros in on a clinical trial that measured the difference in stroke rates between the warfarin patients and the NOAC patients, and then recreates the trial using the study population of all adults in the selected database.