In December 2018, the U.S. Food and Drug Administration unveiled its strategy to assess how real-world data can be used in regulatory decision-making. The Framework for FDA’s Real-World Evidence Program marks a new phase in fulfilling the requirements of the 21st Century Cures Act to accelerate medical product development and innovation, giving patients faster access to new treatments.

We spoke with Sebastian Schneeweiss, M.D., Sc.D., to learn the implications of the framework for biopharma across the life cycle of drug development. Dr. Schneeweiss is a leading researcher in risk-adjustment and rapid-cycle analytics of large longitudinal health care databases, Professor of Medicine and Epidemiology at Harvard Medical School, Chief of the Division of Pharmacoepidemiology and Pharmacoeconomics, Department of Medicine, Brigham & Women’s Hospital, and a co-founder of and advisor to Aetion.

Q: Why is the framework significant?
A:
The framework declares FDA’s commitment to embrace the exploration of real-world data (RWD) and real-world evidence (RWE) in its decision-making. They want to use the next three years to learn more about novel data sets and analytic solutions, to determine whether RWE can support regulatory decision-making from pre-approval to approval and post-approval, and to understand how these data can integrate with existing regulatory processes. The framework outlines how they would like to collect information to answer those questions.

Q: What is the most significant shift in FDA’s view of real-world data?
A:
In the framework, FDA addresses the use of real-world evidence for effectiveness decisions. And that’s important because these data have long been used for safety decision-making. But now FDA is zooming in on the fitness of these data for decision-making. They want to determine if we have the right study designs and analytic techniques to reach causal conclusions so that regulatory decisions can be based on RWE.

Q: What does the framework say about biopharma’s interactions with FDA?
A:
It indicates that we will see new processes within FDA by which its leadership evaluates RWE. Those involved with FDA may have noticed in the past that certain teams at certain points in time have been open to engagement on RWE, but that has not been consistent across the agency. Now we have a clear commitment from FDA to a uniform approach to RWE across the agency. More broadly, the framework acknowledges aligned goals across industry, academia, and the agency: We all want to make decisions about the effectiveness and the safety of medications based on causal relationships rather than associations. We are all asking the same questions about the validity of data, whether data are fit for purpose, and whether confounders are measured with sufficient accuracy to reach causal conclusions.

Q: How does FDA define real-world data and real-world evidence?
A:
It defines RWD as data relating to patients’ health status and/or the delivery of health care: the data routinely collected from EHRs, claims, and registries. The emphasis on “routinely collected” is important. FDA’s focus is on the mechanisms of collection rather than on defining what to measure. And RWE is simply defined as the analysis of such data in support of regulatory decision-making.

It is good to see FDA focus on definitions, because across the health care ecosystem we must look at these sources of data, how we test them, and how we define them so that we can communicate and collaborate beyond regulatory decision-making as well, in outcomes-based contracting, for example, and in other innovative approaches that link payment to how well a drug works for a specific population.

Q: Does FDA’s definition of real-world data exclude data from pragmatic trials?
A:
If you read the definition narrowly, you might come to that conclusion. I don’t believe that is what is intended, however, because FDA has stated that they embrace pragmatic randomized trials and baseline randomization. They are most concerned about the quality of the data beyond initial randomization and the blinding (or lack of blinding) that make these studies pragmatic. My sense is that when they reference “routinely collected” data, they are also referring to pragmatic, randomized trials.

Q: How is FDA addressing data standards?
A:
FDA wants to know if data are suitable to address specific regulatory questions. To answer that, they’ve introduced two terms: “data reliability” and “data relevance.” Within data reliability, they include “data accrual” and “data quality control,” which refer to making data generation mechanisms completely transparent. Are the data really what they seem to be? And how are these codes related to underlying biology? Data relevance concerns three key data dimensions: information on exposure, on outcomes, and on covariates.

When they speak of data relevance, FDA wants to know if a given data source addresses those three dimensions in a way that can lead to causal conclusions. I interpret this to mean FDA does not want to limit the universe to one type of data format or another. What is important is the assessment of data selection for specific studies. While certifying entire data sources could be perceived as an easy solution—i.e., “data source X is approved for regulatory submissions”—the source is not the point. The key question to ask is: Are the data fit for a specific purpose? Each data source must be assessed anew to answer that. A data source might be perfectly fit for one study question but not for another.

The document also says, however, that if you can show in a transparent way how data from a real-world source are generated, curated, manipulated—and have audit trails for each step—FDA may provide a form of certification that declares, yes, this is a data source with sufficient transparency and certainty in its path from raw data to data in the final analysis. And that can be done for an entire data source—particularly those specific to therapeutic areas rather than sources of general purpose data.

Q: Does FDA consider data derived from electronic health records fit for purpose?
A:
While the framework goes into detail on EHRs, the gist is that we don’t know as much as we would like to know about EHRs. FDA may be indicating that the time may not yet be right for these data sources to support regulatory decision-making because of the missing data, misinformation, and lack of continuity in patient records that they contain. All valid points.

The fact is we understand some EHR data sources well and need to learn more about others. FDA’s reservations, even pessimism, about EHRs lie in these platforms’ limited ability to consistently assess health outcomes such as pain levels, severity of chronic conditions, functional and cognitive status, and so on. These kinds of outcomes may be captured in free text, but not in the standardized forms that support data collection.

Q: How does FDA view data from new technologies?
A:
The framework mentions mobile technologies, biosensors, and electronic patient-reported outcome tools. So, they are open to assessing the range of disease-state measurement tools now becoming more prevalent. And that tells us that we must learn how to integrate the data collected by these devices and apps with real-world data analyses. Although the framework doesn’t specifically address medical devices and other new technologies, it sets a foundation for future guidance.

Q: Does the framework address the use of data from other countries?
A:
It does so in terms of the extent to which findings from other countries are transportable to U.S. regulatory decisions. And those questions arise due to differences in health care delivery systems and how those differences may affect the analysis of raw data.

What we learn from the framework, however, is likely to be relevant in all jurisdictions. Globally, other regulatory agencies are also evolving in terms of RWE. The European Medicines Agency is, in fact, further ahead, working on adaptive pathways, for example. Similar progress is seen in Japan with the PMDA, in China with the CFDA, and with Health Canada. They’re all asking the same questions in principle; they all want to get close to causal conclusions.

Q: How does FDA address pragmatic randomized controlled trials?
A:
They ask exactly the right question about pragmatic trials: What types of interventions and therapeutic areas are well-suited for these trials? In this area, too, FDA expresses concerns about data quality as well as how many patients can be accessed, and if certain settings are too limited for these kinds of mega trials. They also ask how we can address variations in clinical practice—and if those variations affect the validity of pragmatic trials.

Q: What are FDA’s expectations for outcome validation in data used for regulatory decision-making?
A:
Extrapolating from what FDA has required in the safety space, they are concerned about the lack of standardized outcome assessment in real-world data tools. They want assurance that we can assess relevant health outcomes with sufficient specificity and in a reproducible way across studies presented to regulators.

The key question is: Can this be done in a scalable form or do we need to do validation studies for every submitted study?

I see some evidence that once validation studies are done for certain outcomes in a data asset, that information can be re-used for other studies in the same data source, and even transported to other data sources generated in very similar ways. The open question is: To what extent can we borrow from earlier studies, or must we start from scratch with each study?

Q: What are “FDA Program Items”?
A:
This term refers to ongoing demonstration projects at Brigham and Women's Hospital and elsewhere. FDA is working with partners across the industry and in academia to launch demonstration projects that will inform the program going forward. In those demonstrations, we are testing methodologies to determine when it is appropriate—and when it is not appropriate—to use real-world evidence with confidence. These projects are how FDA solicits help and inputs to write forthcoming guidance on assessing reliability and relevance—and reinforce the message of FDA’s commitment to engaging with all of us around the community to advance these standards.

Q: What can biopharma companies do now to prepare before the guidance document comes out in 2021?
A:
With this framework, we can already make some assumptions about where FDA will look closely to see how biopharma is structuring processes. They will want to see how well companies do in reproducing, for example, a randomized trial previously done for their own product. They will ask if those findings can be reproduced with available real-world data assets. These “dry-run projects” will allow pharma to engage with regulators by demonstrating what is possible.

Scott Gottlieb, Janet Woodcock and other FDA leaders are, I believe, very serious when they say, “we want to hear from you, we want to learn together.” They have demonstrated this again and again over the past two years in the ways they’ve engaged with, for example, Friends of Cancer Research and other data partners in the oncology space. Together, they’ve analyzed those data sources with a unifying protocol to determine if there are variations in insights drawn from varied data sources.

Other demonstration projects are about transparency and how best to report all the study parameters that the regulator needs to know in order to fully understand how an analysis was conducted. I’m aware of multiple activities—in oncology, rare diseases, and the cardio-metabolic space—in which investigators are trying to reproduce RCT findings to see how reliable RWE will be and whether we can predict when findings will be valid. And FDA is looking for more activities that can generate real-world evidence for regulatory decision-making. So, plenty of interesting work ahead.