Eleventh Annual Sentinel Initiative Public Workshop

(people chattering) – Okay, great. Good morning, I’d like to welcome everyone with us here today
in person and on our webcast, to the 11th Annual Sentinel
Initiative Public Workshop which is being hosted by the Duke Margolis
Center for Health Policy and supported by a cooperative
agreement with the FDA. I’m Greg Daniel, deputy center director of the Duke Margolis Center and I’d like to welcome all of you. And as you know, this annual
public workshop is a gathering of the Sentinel community
and leading experts to share recent developments
within the Sentinel Initiative. This year marks the 11th annual workshop which will be a two-day event and I believe there’s
even a Regulators’ Day aimed at developing Sentinel-like
system capacity and tools. The continued growth of and interest in the Sentinel Initiative underscores the incredible progress made over the last decade, moving from the Mini-Sentinel pilot
project to a full-scale and core component of safety
surveillance at the FDA. These achievements are
only possible as a result of a unique partnership
and a diverse range of collaborations that comprise Sentinel. We’ll hear much more about this from key Sentinel leadership
at FDA, including Dr. Woodcock who will be discussing
priorities and strategic planning for Sentinel’s development
over the coming years. As many of you are aware,
CDER, the Center for Drugs, has also released a request for proposals, and while you may have many
questions about the RFP, we won’t be covering or addressing that in today’s discussion. Also, although the Sentinel team will not be fielding
questions about the RFP, I’m told there will be an
amendment made to the RFP to clarify questions
in the mid-April timeframe with additional information
that people can refer to ahead of the deadline
for submitting proposals. What will be in scope
for today’s discussion, is how FDA is collaborating
with stakeholder groups and effectively using Sentinel
tools and data resources to inform regulatory science and policy. We’ll be hearing about key developments, including CDER’s five-year strategy released earlier this year, CBER’s goals and roadmap
for the BEST program and CDRH’s continued work to leverage Sentinel’s
data and infrastructure. Each of these innovative
initiatives is contributing to the agency’s goal of
establishing Sentinel as a national resource that generates meaningful,
real-world evidence for a variety of purposes. We’re looking forward to
having a robust discussion on these topics and
others throughout the day. And I thank you all for joining us here and supporting
this important work. As I mentioned earlier,
today we’re honored to be joined by Dr. Janet Woodcock who will deliver our
opening keynote address. But before I introduce her, I want to spend a few minutes to quickly run through today’s agenda and cover a few housekeeping items. Following Dr. Woodcock’s address, we’ll hear from the Sentinel leadership, from across the centers,
at the Center for Drugs, Center for Biologics
and Center for Devices, who will highlight achievements and discuss center priorities
for the coming year. The next session will consider
CDER’s specific achievements within the Sentinel System itself and plans to enhance key
capabilities and data resources. This will lead us into our
first break for the day. And following that we’ll then hear from the Center for Biologics, the CBER team, on their development and use of the Sentinel Initiative, specifically the BEST Program. This will bring us to our
lunch at approximately 12:30. We’ll reconvene at 1:30 sharp to hear directly from key representatives from the coordinating
centers participating in the Sentinel Initiative and their experiences
developing Common Data Models, applying analytic tools, establishing partnerships, et cetera. We’ll then take a quick break at 2:30 and before our last session of the day we’ll consider broader uses of the Sentinel Initiative’s
data infrastructure for evidence generation beyond the use of the system for safety surveillance. Before I hand it over to Dr. Woodcock, just a few housekeeping items. We want to remind everyone
that this is a public meeting and that it is also being webcast live online. The recording of the
webcast will be available on the Duke Margolis
website following the event. This meeting is intended
to spur discussion and we’re not looking
to achieve consensus today, or to vote or anything like that, but rather to hear a variety of
perspectives on these issues, input from all of you,
the critical stakeholders, on how things are going within Sentinel and suggestions on future directions. To this end we’ve reserved
time throughout the day for participants in the
audience to make comments and ask clarifying questions
and pointed questions during the moderated discussions. As a heads up to speakers and panelists, Elizabeth Murphy and Kerra
Marcon will be keeping us all on schedule with signs to indicate how much time is left
in your presentation. And I emphasize that
stopping the presentation so that we can have moderated discussion and hear questions from all of
you is critically important. So don’t be offended,
speakers, if we cut you off. Lastly, this year’s workshop
will also feature a mobile app to provide easy reference to
all of the event materials in one place in order to improve your experience
at the workshop. I encourage you to download the app and download instructions are
available outside of this room via a big poster as well
as a handout leaflet about the app itself. Feel free to help yourself
to coffee and beverages outside of the room throughout the day and lastly, lunch will be on your own. There are a number of
restaurants in the area, the mobile app can help you. We also have a list of restaurants at our registration table as well. So with that, I’m very pleased
to introduce Dr. Woodcock, director of the Center for
Drug Evaluation and Research. As you know, Dr. Woodcock has
been a key leader and driver of the Sentinel Initiative from its very beginning, contributing greatly to its success back when it was only a pilot program, and even before that. We look forward to hearing more
about Dr. Woodcock’s vision for the Sentinel System and
its continued development as a national resource
for evidence generation. Welcome, Dr. Woodcock. (audience applauding) – Thanks, Greg and good
morning to everyone. Thank you for being here. And I want particular thanks
to Duke Margolis Center for helping us put this on. I think it’s very important
to consult with the community at intervals and have people
understand where we are. So it’s really a pleasure
to be with you all for this important
discussion about the past, the present and importantly the future of a really significant initiative at FDA. This year marks the
success of three full years of the Sentinel System
being a fully functional and integrated part of
the regulatory process at the agency. In a short amount of
time, Sentinel has proven to be a vital source of
new safety information that informs our
regulatory decision making and expand our knowledge of
how medical products perform once they’re widely used
in medical practice. Indeed, what started as
a Congressional mandate in the FDA Amendment Act of 2007, and actually a little bit before that as we were all thinking about,
we’d really want to do this, has given rise to one of the world’s premier
evidence generation platforms. The quality-checked, distributed
database has information on more than 67 million patients with federal or private health insurance and cumulatively on more
than 300 million patients across the time span of 2000 to 2018. This high-quality, curated
health care data system preserves privacy by sending
requests for analysis to the data partners who retain physical and operational control
over their own data, and it returns aggregated information to a trusted coordinating center that produces unified effect estimates across the multiple sites. And what this does, the
distributed database, is really the core concept
that enables Sentinel to preserve privacy and be the largest multi-site medical product safety
surveillance system in the world. None of this would have been possible without the enormous and
actually essential contribution of our data partners to
the success of Sentinel. And I thank the representatives of the data partners who are here today. We know that this has been a heavy lift and is somewhat difficult with various business
models that may exist and we really appreciate
everyone coming together to make something larger than the sum of its parts. Their sharing of data and scientific expertise has allowed FDA to fulfill its important
public health mission. By joining together,
they enable the creation of this invaluable
resource for public health and their continued
collaboration has allowed us to further expand and
improve the capabilities of Sentinel and really transform it into a true national resource. I think the Sentinel System, how it was built and hopefully how it will continue in the future, is a
model for how FDA can work with diverse elements of
the healthcare industry, researchers and patient
groups to accelerate the pace and impact of post-market
safety activities. Sentinel and the FDA-Catalyst program are also proving useful as a test
bed for demonstration projects like IMPACT-Afib that I’ll
talk about a little bit later, in creating opportunities to develop and evaluate the capabilities
of pragmatic trials, for example, and mobile applications to collect patient generated data that were prominently featured in FDA’s real-world evidence framework that we released earlier this year. We’re committed to continuing
this successful effort to help sustain and
enhance an infrastructure that produces the high-quality evidence that we need to guide FDA’s
decisions about drugs, biologics and medical devices that we’re charged with regulating. This tool must continue to evolve, though, to meet our growing evidentiary
and scientific needs through the availability
of rich data sources and reaching out to
additional data sources. Now, as far as the past. Okay, over the past three years, Sentinel has been used by FDA in a
variety of important ways and I would like to
share several examples. Sentinel data have been featured in numerous advisory committee meetings, providing important
information on, for example, the safety of gadolinium-based
contrast agents, a very controversial area where it’s difficult to
acquire correct data, the rates of diabetic
ketoacidosis that occur after various novel
anti-diabetic agents are used and contextualizing results that emerge from a post-market
cardiovascular clinical trial, to name a few. Often we will have clinical trials, or we’ll have a safety signal, and we really won’t understand the context of what happens within
the larger healthcare system. And Sentinel can give us
that contextual information, which can be very important. Now, FDA’s also used Sentinel to evaluate drug exposure
during pregnancy, to provide important
population level information about medical products
such as the TNF inhibitors. And contextualize enrollment in industry sponsored pregnancy registries of numerous drugs, including those to treat multiple sclerosis. So, we need to know,
often, how representative, how are these pregnancy outcomes similar to what we’ve seen in the
background rates and so forth. And it would be very difficult to gather that information otherwise. Sentinel also provided reassuring data on the safety of contraceptives,
a really important issue, and on seizures after certain
heart failure medicines. And it characterized pediatric respiratory
syncytial virus disease to inform the development
of novel RSV therapeutics. Sentinel also provided
important information about the utilization of opioids and contributed to FDA’s
strategy to intervene, to combat this public health crisis. And you’ll hear more about that today in Judy Staffa’s presentation as you’ll hear about many of
the things I’m gonna bring up. Sentinel was one of several
successful initiatives that FDA’s developed to drive innovation and incorporate emerging
technologies into everything we do to protect and promote public health. Initiatives such as Sentinel, the CBERs Biologic
Effectiveness and Safety System or BEST which is overall part of this, CDRH’s National Evaluation
System for Health Technology are data driven initiatives aimed at building rigorous
evidence generation systems to produce the knowledge we need at a fraction of the cost
required for previous efforts. And I can definitely attest to that, because I’ve been in charge of
CDER’s budget for a long time. And I would have to fund long-term studies that would take a decade
and be very expensive and we’d have to wait
an extremely long time to get the answer, meanwhile, suffering under tremendous
uncertainty and controversy. So, what Sentinel can do, often, is provide us rapid answers
at a fraction of the cost it would have taken in the past, and this is a tremendous benefit. So instead of launching
individual one-time studies, we invest in developing advanced sustainable analysis platforms that can be used multiple times to address a wide variety of regulatory and scientific questions. And so you’ll hear more about
many of these initiatives, like CBER’s and CDRH’s, in this program. Certainly one of the key questions you’ll be addressing today
is just how does the FDA and its collaborators build
on the successes to date. So what about the future. You will hear from Gerald
Dal Pan and Bob Ball about Sentinel’s 5-Year Strategic Plan. So we are trying to look out in the future and say where should
we be going with this. This strategic plan comes at an important time as we plan to award the third five-year Sentinel
contract this September. It’s clear that this
plan, if we can execute it, is bold
and will be transformative, preparing us to make the next
significant step forward. But it also deserves mentioning that the plan represents the
consensus of FDA’s scientific and organizational leadership
about where we should go next. We’re excited about the
possibilities that might emerge from establishing a new Sentinel center, charged with developing a
vigorous scientific community and another new Sentinel center dedicated to advanced analytics,
artificial intelligence and advancing the science of analysis and curation of electronic
health record data. And as you know, these
represent the next frontier in computational analytics
and a very big challenge. But something, I think, we should take on if we are going to able
to use these data fully, particularly from the
electronic health record. Both centers are proposed new additions to the main Sentinel operations
center where FDA conducts its routine regulatory
analysis and maintains that vital distributed database of high quality health care data. All three Sentinel
centers are being proposed in the new Sentinel contract this year. The open, transparent and
competitive solicitation process will ensure FDA’s access
to the best scientific minds to partner with and to
build upon these successes of the previous 10 years that
we have already achieved. In addition to the strategic plan, our work is also guided by
our PDUFA VI commitments, which of course we negotiated. And so we agreed with them. This includes increasing
the sophistication of our core capabilities
that we already have, especially in areas of pregnancy
safety, signal detection, analytic tools and filling
gaps in data sources. So these are all, sort of
deficiencies of the current system that we’ve identified and hope to enhance. We’ll also improve our communication with sponsors regarding Sentinel queries, and we really want to
continue to support efforts to facilitate public
researcher and sponsor access to Sentinel’s distributed data network, such as through the
Reagan-Udall’s IMEDS program, which you’ll hear more about later today. We’ve always hoped that this could be a
true national resource, not just for the FDA but
for other researchers, so that the best scientific minds around the world could be applied to addressing these problems. We’re also in the process of internally developing
center-wide comprehensive training and a standard operating procedure so everyone within the
FDA really understands what’s going on. But I think this issue of
making a true national resource for researchers, we haven’t, we’re not quite there yet (laughs) and we really need to
put our heads together and think how that could be accomplished. However, we’re also guided
by an even broader vision to create a national resource
for evidence development that also encompasses our
real-world evidence programs and safety surveillance science. We envision uses beyond the evaluation of medical product safety and efficacy, into biomedical science,
quality improvement, and perhaps a learning healthcare system. Some of the work that’s
being done on trying to look at how to improve
delivery of medical products, for example and utilization and so forth. For example, FDA’s committed to continue to build on our capacity in the generation and analysis of real-world evidence to inform regulatory decision making about our medical products
across the life-cycle. Our real-world evidence
framework that was published in December 2018, outlines our approach to implementation of this vision. It reflects a larger initiative
that we have to explore and pilot the utility of a variety of real-world evidence types
and technological innovations in evaluating medical
products, for example, apps, devices and so-forth. As FDA moves forward to evaluate the use of real-world evidence
for regulatory decisions about effectiveness, particularly of drug and biologic products, we will focus on three main issues. First, whether the real-world
data that would lead to real-world evidence is fit
for its intended purpose. Number two, whether the
trial or study design used to generate the real-world evidence provides adequate scientific
evidence to answer or help answer the regulatory
question that’s being posed. And whether the study conduct meets FDA regulatory requirements, say for the data collection and so forth. We see the Sentinel System
as being a critical part of the next phase of our
real-world evidence program. The Sentinel System, as I said, has been an unquestionable success and so we’ll continue to use Sentinel to test new methods and tools including in the real-world evidence area. Through our FDA-Catalyst program, we’re leveraging the Sentinel System to accelerate access to and broader use of real-world data for
real-world evidence generation for our regulatory decision
purposes, all right. So, a study called IMPACT-Afib, which most of you may know about, will determine whether a
patient education intervention, distributed by five data partners to over 40,000 patients increases the use of oral anticoagulants
and reduces the risk of stroke or TIA in patients with Afib. So, this is not something
the FDA usually does, (laughs) is test interventions
ourselves, right. However, we want to see
if we can use Sentinel as a vehicle for these
type of interventions. Then we can generalize about that to perhaps use of interventions, use of this type of data for
studying other interventions. Also the FDA MyStudies
is a mobile app designed to ease input of real-world patient data to support clinical trials, observational studies and registries. It’s open source and can be configured for several different therapeutic
areas and health outcomes. And it supports the auditing needed for compliance with FDA regulations. So people can get this app
and use it to collect data from patients in a variety of
studies and this is ongoing. Current demonstration projects
with the FDA MyStudies App include the LIMIT juvenile
idiopathic arthritis trial, the first use of FDA-Catalyst to support a pediatric trial. And the app will collect the primary outcome from ophthalmology appointments,
as well as adherence data, and information on adverse
events for the study drug with a drug diary as part of it. And then in the SPARC Inflammatory
Bowel Disease Registry, patients provide repeated
patient-reported outcomes, and responses from those
meeting the inclusion criteria will be included in a PCORI
comparative effectiveness study. So, in brief, we’re doing
experiments using a variety of tools, often including
using Sentinel to collect data to see how useful real-world
evidence actually can be in a regulatory context. We encourage others to do
this as well, obviously, but for us to have hands-on
experience on how this performs, I think will be very helpful for us in evaluating whatever’s submitted to us. And then there’s a RELIANCE trial, it’s a comparative effectiveness trial, a randomized real-world
trial with 1,600 adults in each arm and they
use two different drugs, each of which are
recommended by guidelines for COPD exacerbations. And we’ll use CMS linkage
for outcomes and exposure and we will test distributed
regression methods in this particular use of the technology. So, the Sentinel distributed
database and analytics, as well as the availability
of the data partners to contact individual patients
has really been a useful tool in some of these demonstration projects, intended to support our evaluation of real-world evidence
under 21st Century CURES. Collaboration between Sentinel and PCORI has also been
critical in the past year and we look forward to continued progress in that area because of course, PCORnet has a lot more health records, electronic health record information. So in summary of all this, this workshop provides an opportunity to discuss the
possibilities for the future and solicit ideas from the whole range of stakeholders that’s
represented here today. We really want to hear from you about how to achieve more ambitious goals over the next 10 years. What could we do that’s
really gonna move the dial. And I know that several of my colleagues will be addressing many of these points in more
detail during the session. But we are at an interesting
time where technology and the use of digital
data are coming together to really enable us to learn much more, much more rapidly than we did before. And the question is, how can
we best take advantage of this, what are the next steps
we should be taking. Today the Sentinel System
has become one of the largest and most successful uses
of big data in healthcare. A more efficient system that can generate much
more high quality evidence in shorter time, and I’d
like to say collectively with less money, which makes me very happy as the one having to do the budget. Although it continues to develop, we’ve already realized
many of the benefits that come from the ability to use real-world
observational data as a tool to monitor medical product safety and identify and evaluate concerns. So we have this successful platform that was just a hypothesis 10 years ago. Now it’s working, it
justifies its own existence, we don’t have to say, should we be investing in this anymore, we know this is a high
return on investment system. The question is, what more can we do over the next five years. How can we really nail
this and learn even more. I hope your discussion on this topic is engaging and productive. I again want to thank all the partners for their commitment to
this important effort. This was the work of a lot of people, not just one organization
or individual alone. Each of you is essential to its success and each of you is playing a vital role in finding new and important solutions to protecting and promoting the health of the people we serve. Thank you very much. (audience applauding) Hi, Mark.
– Hi. Hi, Janet. – So.
– Thank you. – [Janet] Should I answer
questions or am I done. – We have a, we are on a tight schedule. – Okay. – But if there’s a–
– I don’t need. – Okay.
– Okay. – Thank you very much for the– – Thank you all. – I want to thank Janet for framing today. I also want to thank all of you
and add my welcome to Greg Daniel and to Janet at this 11th
Annual Sentinel Meeting. As Janet said, “It’s a
great time to reflect “on what has happened with Sentinel, “but especially looking forward.” I can remember back in that
legislative concept stage in 2007, when we hoped that this system could
play a role in drug safety. And I think then, we
didn’t have an appreciation for just how transformative
Sentinel would be, not only for safety surveillance but as you heard from
Janet, to serve as a basis, a foundation for even broader
and more substantial efforts to have a digitally driven,
distributed approach to generating evidence, not just on safety but through extensions in BEST and NEST and IMEDS and potentially
a range of other areas. But while the trajectory’s good, and there’s a tremendous
potential, we are not there yet and today’s meeting will hopefully provide some further insights about how to take advantage of
all these opportunities. With that in mind, I’d like
to bring up the leadership of the Sentinel Initiative, now. So if the speakers for the
next session could come up, we’re gonna hear about
center specific achievements, about high priority uses
and about strategic planning for the continued development
of the data infrastructure and the capacity to build
that scientific community and build out the evidence generation uses of this general Sentinel
framework in the coming years. So I’d like to introduce Gerald Dal Pan, the director of the Office of
Surveillance and Epidemiology for the Center for Drug Evaluation
and Research, thanks, Gerald. Steven Anderson, the director of the Office of
Biostatistics and Epidemiology at the Center for Biologics
Evaluation and Research. Thanks for being here, Steven. And Danica Marinac-Dabic
who’s the director of the Division of Epidemiology in the Office of
Surveillance and Biometrics at the Center for Devices
and Radiological Health. Thanks, Danica. So we’re gonna hear some opening comments and reflections from
each of these panelists, then have a bit of time for discussion and questions including with all of you. Gerald, would you please kick us off. – Thanks Mark, and I’d like
to thank the organizers for setting up today’s meeting. As Janet mentioned, Sentinel really has
become a critical piece of our comprehensive post-market drug safety surveillance system at FDA. There’s a lot to talk about, I
don’t want to go over my time but you’ll be hearing from
my colleagues later today about some of these initiatives
in a lot more detail. But I think that our investments
over the last decade have really created a scientific capacity that is really different
from what we had in the past, much more efficient, much more timely and as Janet’s mentioned, much
more cost effective as well. We have access to a substantial amount of electronic health care
data that in some cases can go all the way back to the
original medical record and patient encounter. This, along with the infrastructure at Sentinel has allowed us to answer important drug safety questions in a matter of months and sometimes even in a matter of weeks. We’ve conducted over 325
analyses in the last three years, many of which have had a real direct and meaningful impact on our
regulatory decision making. You heard Janet talk about some of the ways
data from Sentinel has been included in the regulatory decision making process, informing advisory
committee deliberations, as well as contextualizing
post-market clinical trials, pregnancy registries and other activities. Let me add that we’ve deemed Sentinel to be sufficient in more than
18 different safety issues that would have otherwise resulted in industry-required post-market studies. Let me just expand on that for a minute. If you go back to the Amendments
Act of 2007 that set up, or that required us to set
up what we now call Sentinel, it also gave us the authority
to require companies to conduct post-market safety studies. And that’s an authority
we previously didn’t have. But it also put a limit around that and it said that we could
not require the companies to conduct such a study unless we determine that both our adverse event reporting data in the FAERS Database as well as the data in the system we now call
Sentinel were insufficient. So we’ve built this
sufficiency determination
into our processes, and now we’ve determined that 18 studies
industry required studies now are being down by
FDA through Sentinel. And we’ve used Sentinel data in more than 16 regulatory outcomes. These are posted online,
there’s a natural delay between when we get the
result and when you see it. That’s because there is an
internal deliberative process that we have to complete before we can post
something publicly online. The evidence that Sentinel
generates is based on a sophisticated data
quality assurance program, with over 1,000 different quality checks. These check completeness,
validity, accuracy, integrity, consistency
and trends over time. Each data partner
applies these data checks to their data each time
the data’s refreshed. That’s up to four times
a year for data partners and they’re both automated and manual checks
involved in this process. So when we go back to the beginning, the idea for Sentinel is based on the idea that science is rapidly changing and we need a program
that can change with that. So toward that end, the Sentinel
infrastructure allows us to keep evolving the type
of questions it can answer. Both with acquisition
of new types of data, as well as new analytic
methods put into it. And we monitor the kinds of
questions we’re able to answer, but also the kinds of questions
we’re not able to answer and then we build the system
to answer those questions. So let me talk about some
of the recent advances. In the last year we unveiled a new routinely refreshed cohort of more than four million
mothers and their babies to enable FDA investigators to study the effects of
medications used during pregnancy. We’ve added a new pregnancy
safety inferential analysis tool to conduct these studies
and control for confounding using propensity score matching methods. We’ve introduced a new analysis
tool to examine switching between medical products to
help advance our understanding both of generic drugs and biosimilars. And we’ve introduced tools
to examine the extent to which healthcare providers adhere to the recommended safe use conditions that are described in
medical products labels or in risk management strategies. This coming fall, we expect to build upon these new capabilities
with new tools, allowing us to evaluate the impact of FDA regulatory action using interrupted time series analysis
and also signal detection. Signal detection or
signal identification is a really core activity
of our organization. It’s traditionally been done using spontaneous adverse event reports. However, the Amendments Act of 2007 stated that FDA will create a robust system to identify adverse events and potential drug safety signals. So while we’ve used Sentinel to date to analyze existing signals, we’re extending the
scope of what we can do to move into signal identification. We held a public meeting on this topic in December of last year, and you’ll hear further
updates on this today, but in brief, we need a capability, not only to study the signals, but to identify them in the first place. So we’ve started evaluating
potential approaches including tree-based scan
statistics, information component, temporal pattern discovery and
sequence symmetry analysis. And we’ll be piloting
these and hope to learn from them over the coming year. We also held a public
meeting, in July of 2018, on another strategic priority in Sentinel and that’s to improve use of
electronic health care records, or EHR data in Sentinel. We did this by focusing on
a foundational question, how to improve the efficiency and accuracy of health outcome validation. Because Sentinel, I
think, as everyone knows, relies on data that’s not collected or captured or created
specifically for research purposes, it’s important that we
evaluate the algorithms it uses to measure the outcomes
that we’re interested in. And so we’re launching
specific review projects to validate algorithms for lymphoma, stillbirth and serious infections. We’re moving more into informatics, and evaluating how advanced analytics such as machine learning can address gaps when human expert
constructed algorithms based on coded data can’t fully meet the needs. This is important when we
know the data are available but not in a format that’s easily usable. So that would be some laboratory
data, radiology reports, pathology reports, things like that that are often very critical
for confirming a diagnosis. We’re launching some new
projects to leverage EHR and artificial intelligence
to solve these problems, targeting outcomes of anaphylaxis, acute pancreatitis and rhabdomyolysis. And we’ve recently
initiated a collaboration with the People-Centered
Research Foundation, PCRF, the successor to PCORnet, the largest network of
electronic health record data in the United States. We’ll be testing and evaluating how this new data source might contribute to the mission at FDA
and you’ll be hearing from a representative of PCRF later today. Our interest in these areas,
to acquire deeper data sources and expand our scientific
capability was described at length in our Sentinel strategic plan. This was published online in January and is embodied in our next
five-year Sentinel contract. And as Greg mentioned, we’re not here to talk about the RFP or any details, and FDA staff won’t answer
any questions about it. But let me, given its
importance, just take a moment to share some of the highlights
from the re-compete process. We started talking about
this over two years ago in our office and we
deliberately developed an open and transparent process with
extensive market research. We knew that a lot had changed, not only since the
first Sentinel contract, but also since the second one. Data are more widely available, computational science has advanced, statistical methods
are more sophisticated, there are new novel approaches
to distributed data networks that didn’t exist before,
for example blockchain. So to this end we issued a
public request for information, held two different public meetings to engage all potential offerors, met individually with 18 potential vendors to better understand their capabilities. This was all done in a consistent and open manner, and we
documented everything online. Earlier this month, we
even posted a draft request for proposal with an open end
question and answer period. And we did all this to ensure
open and fair competition and enable FDA to have access
to the best available data, resources, technology and partnership. But Sentinel is about more
than just technology and data. It’s clear that our
investment in these areas, to create a national resource, will involve further
collaborations and toward that end, the RFP includes a request
to establish a new center for community building and, importantly, for expanding our international
collaborations. At previous public meetings,
and perhaps at some meetings of the International Society
of Pharmacoepidemiology, you’ve heard about our growing partnership with Health Canada and
the Canadian Network for Observational Drug Effect
Studies, known as CNODES. We’ve also recently
begun another partnership with the United Kingdom’s Medicines Healthcare Regulatory Agency, MHRA, and its Clinical Practice
Research Data Link, or CPRD. So we’re expanding beyond
our national borders to make sure that we
coordinate and collaborate with our scientific and
regulatory partners globally. And toward that end, as Greg mentioned, we’ve added a third day
to the annual meeting which will be for our
international regulatory partners. So Sentinel’s become a real engine for growth and scientific
advancement in the FDA. You’ll hear a lot more about it from our collaborators today. I’ve focused largely on what we’re doing in the safety realm, ’cause that’s where
it’s traditionally been. But we’re also interested in seeing how this can establish
a foundation for the use of real-world evidence
for efficacy as well. So, I hope I haven’t gone over. – Hey, Gerald, thanks, you
covered a lot of material there. Thank you very much. Next is Steve. – All right, so I’m gonna
discuss the CBER accomplishments in the past year. And it’s sort of been a
whirlwind year for us, since we last met. So, BEST is our major program. So it stands for the
Biologics Effectiveness and Safety System. So the initial one-year program that we started in around 2016,
2017, was a great success. That first-year contract was a very simple pilot program, and we followed its success by awarding two five-year
contracts in September, 2018. And with that we went
from one partner in 2017 to three partners now working with us. So the three partners are Acumen, IBM Watson Health and IQVIA/OHDSI. Along with them came a variety of academic partners. So, what we’ve got is this
really diverse group of people. So, there’s power in diversity because they bring a
whole host of capabilities and data to the table that
we hadn’t seen before. So, what improvements
are we seeing with BEST and what are the innovations? I’m just gonna say, when we
launched into this, we said, “We’re going big into EHRs “and we’re going big into AI
and machine learning and NLP.” And so that’s where our
program really has started. So, we’re adding new EHR
data, analytics, experts, and support infrastructure
to establish the system. Our primary system really
is for queries, studies and surveillance, so that’s our core work that we want to do with one of the projects in the first contract. So, we’re excited about this
sort of new evolution of BEST. As we, again, have this
diversity of EHR data sources, those data cover about 75 million patients
across an array of different healthcare
settings, hospitals, inpatient, outpatient, clinics, skilled nursing facilities and the like. So, we have this diversity
of health care settings that we have access to that we may not have
had previous access to. So, even more exciting, I guess, that came out of the contracts
was we now have a new source of data that really links
EHR and claims data. So, the power of those two
systems coming together, really gives us a fuller picture of the patient encounter and experience. And so, we feel that that adds a richness. One of our partners brought
five million of those linked records, representing five
million patients, to the table. And we’re currently using those. We want to expand that ’cause we think that’s a huge growth area and a huge plus for these types of systems. And I will say also, what
does it provide us with? BEST
is providing CBER staff with more hands-on access
to de-identified data. So we’re doing this through what we’re calling contractor portals. We don’t directly bring the
data in-house, into FDA, but we have access to the
contractor’s servers, et cetera, where we can access a subset
that’s heavily de-identified. But that’s really valuable for us in helping us conduct
feasibility studies and analyses before we go into full-blown designs of larger studies. So that’s really been
an advantage to us too. We’re just starting to
use that capability, it’s come online recently and
we’re very excited about it. There’s a focus again on
the innovative methods, as I mentioned, like AI,
NLP and other technologies. A whole host of things that can be done, potentially with these. Our two things that we want to achieve with the contract though, in five years, is one is semi-automated chart review, because chart review has
been a slow point for us in many of our previous
experiences with claims data. And then automated
adverse event reporting, which means mining
adverse events from EHRs, then populating an adverse event form and then submitting that to us, so that we know the adverse event is
present and occurred. This is exciting technology. I think two years ago, Mark
McClellan asked me a question about where I thought the
future was going and I had said, just off the cuff, that it was automated
adverse event reporting. And so here we are two years later, turning a blue-sky response to a question into something
very new and novel. I will say that we believe
it’s very aspirational. We have no illusions
that this is simple to do, but we think it’s gonna push
us and be transformational and that it will push us into new areas, and it also complements
nicely the core activities of the first contract, which is
being able to do better studies because we’ll have access to
more adverse event information. Let’s see, so our goal at the
end of this sort of project with the adverse event reporting is really to stand up a straw man
system in the next year or two where we’re just looking at
one or two adverse events and then trying to make sure
that that process works. And then obviously, expanding
out the circle to many others. And we envision this
taking a long time, but going almost product
area by product area to execute this technology. And the same for the assisted or semi-automated chart review. We don’t expect to have
a full-blown tool ready to go immediately, but that’s something that we’re building over time and we hope to have a
beta version of that. And we have successes right now, where we’re starting to
build that beta version, hopefully within the next year we’ll have that operational for beginning to use. Again, the two contracts that
we have are complementary, with the latter providing
that engine of innovation to fuel the first contract where we have the surveillance
system that we built. So I think the question
that we always get is, why did you go with BEST? And the answer is that there are
many important differences between our biologic products
and the other types of drugs. So we found that claims-based
systems didn’t quite provide us with the clinical detail that we needed, or the speed to address some of the questions. So, we found that EHR
systems could provide us with at least rapid access
to medical chart data, but also with the granularity
of data that’s needed
to answer questions, for instance for transfusions. You want timestamps to
know which came first. Did the transfusion come first
and then the adverse event or was it the other way
around in which case the adverse event wasn’t
associated with the transfusion. So, knowing those types of things and having access to that
information is really critical. And also having access
to labs, to radiology and a variety of information to verify those transfusion
diagnoses is really critical. Again, we also want to speed up our access to chart data when we’re
looking at vaccine safety. If we get a signal on a
system, we want to be able to resolve that within a matter of days or weeks and not months. And so that’s another advantage
of having the EHR systems. So, just to foreshadow,
Dr. Azadeh Shoaibi
focus on the technical aspects of the overall BEST system
and operational activities in the 11:20 session that
starts later this morning. But remember, we’re still
in the pilot phases of BEST but we’re quickly moving to stand up, sort of a more mature,
fully operational system within the next year or so. I’ll say that there’s a lot more that needs to be done, obviously. So we’re focusing in these first few years on the technical, how
many minutes? (laughs) Oh, two minutes, okay. So CBER’s been developing
a road map as well. And I will say, the roadmap
focuses on those first two items but then it has a lot of, I think, the thinking between CDER and CBER are very similar in this respect. And so we’re focusing on
real-world evidence generation, which we think is really critical: real-world evidence generation and the ability to do evaluation. We have a long record of doing real-world
evidence generation work on vaccine effectiveness,
that work has been done with CMS data and it’s been led by Hector Izurieta
to do that type of work. Telba Irony in the session
this afternoon is going to be sharing more on
our efforts in that area. But as we proceed, we are
also building the community of users and stakeholders. We’ve got a relatively
small pilot system right now, so we’re obviously not focusing
as much on the community. We have our community of
contractors working with us but what we want to do
is expand the circle and bring in others. So our goal is to launch
efforts to do that. I’ll talk about more of that
in that 11:20 session too. Let’s see, beyond the
priorities of the roadmap, we’re discussing operations
in the talk that I’ll give. But there’s other activities
and as Gerald mentioned, there’s always this eye
on our other requirements such as PDUFA VI, which
has a big influence on the way we operate BEST. So among those I’m gonna
be talking later today about the sufficiency
process that we’re using for CBER Sentinel to conduct studies in lieu of post-market requirements. And I can say that I
was just hand counting how many we’ve had in just the past few months, and we probably have had
post-market requirements. So that’s been shown
to be of value overall. So the BEST roadmap
concludes with a summary of the current activities, plans for the next one to two years and for the next three to five years. Again, you’ll be seeing
that later this afternoon, I mean later this morning in our session. So just in summary, BEST
is starting to mature as its own surveillance system, real-world evidence generation system. Again, it’s an EHR based system, we’re going big into the AI aspects, the NLP and others that
will advance critical areas for things like case definition as well as phenotypic identification and automation of reporting
as we’ve mentioned. But again, we want to continue our efforts beyond those technical components to continue doing outreach
efforts, communication, community building, and then our goal ultimately
is to build a bigger and better system, always, we want larger and larger networks. This is a resource that we
want, not only to be available to FDA and CBER but we
also want to turn it back to the community and have
it available to them. So, the other thing we’re
doing is instilling a culture of innovation and constant improvement to advance the CBER BEST program to begin to kind of generate the
high-quality real-world data and evidence to support the requirements of PDUFA and then the
21st Century Cures Act. And then I just wanted to
thank my colleagues at CBER, the OBE Sentinel core team, my
colleagues within the office, my colleagues within the center and then our very valuable contractors who’ve been an extreme help
in launching this work. And as Janet mentioned, you can never forget the data partners, they play an extremely important role and we can’t forget the
contributions that they make. They’re very important and very valuable. And with that, I’ll stop. – Again, great. Thanks Steve, again, lots going on. Danica? – Oh good morning. So I’d like to start with the following. Regardless of where we sit
in the FDA structure, whether we are serving the
public in the capacity of CDRH, CBER or CDER, patients
are at the core of what we do. So I couldn’t see a more
extraordinary opportunity than to take a step back and
take a patient perspective on the fantastic achievements
that have been made up to this point by Sentinel and
BEST, and also talk a little bit about where we are going
in the devices space, in the evolution of the
national evaluation system for health technology, and to try to take that patient perspective to leverage those resources to make evidence generation, appraisal and synthesis better for our patients. The other glue that I see between the centers
more and more is really this very renewed focus on the real-world evidence generation and putting more emphasis in exploring those real-world data sources to help us make our decisions
better, faster and cheaper. That’s another very unifying force that is at the core of Sentinel,
at the core of BEST and, I think, at the core of NEST. But let me tell you a
little bit about devices. Devices really are a very diverse
spectrum of technologies. FDA regulates over 180,000
devices that are produced by over 18,000 firms across the globe, and even
more facilities than that. So you can imagine how
much diversity there is, both in terms of the
regulatory implications and in the methodology and level of detail that’s needed to do sophisticated analyses
using real-world evidence. So, even though devices are
such a diverse area, and even though they have not
been formally part of Sentinel, we have always been at the table,
participating in the growth of Sentinel and learning a lot from the
Sentinel Initiative, and I think the impact of medical
devices will continue to grow. And when you look at the recently published
Eric Topol report, which looked into the 10 areas with the greatest
opportunities to change healthcare, many of those technologies are regulated as devices. If you talk about the
biosensors, robotic surgery, artificial intelligence and
many other areas in imaging and automated imaging readings and others, we regulate them as devices. So, I would say, that we will have even
more impact moving forward. So, that having been said about the need and the potential
impact of us working together, I’d like to reflect a little bit on how we’ve been using
Sentinel up to this point. The group that
took the most advantage of the Sentinel program is our
signal management program. We formally launched the Device Signal Management
Program back in 2012. And since its inception, over 150 signals have been identified and over 50 safety communications have been issued, and many of them actually
were able to be developed with the active participation
of our colleagues from CDER and colleagues from Sentinel. I will name just a few. You may remember the
women’s health arena devices, where recently
a lot of signals were evaluated and a lot of actions taken in
terms of the panel meetings that led to particular decisions. Whether in the area
of morcellators, mesh for stress urinary incontinence or pelvic organ prolapse, sterilization devices or
orthopedic devices, we relied heavily on canvassing what we could learn from the Sentinel data.
device identification in the claims data. But again, there are
certainly some questions that can be addressed even
without specifically knowing the device-specific information. We also always sat, and still do, at the
table with the leadership of Sentinel and in the methodological committees,
and try, as I said, to learn from the
particular areas of expertise of our colleagues from FDA and also from Sentinel partners. Now let me tell you a little
bit about where CDRH is going in terms of the development of the NEST. And I would like then to reflect on what, from my perspective would be
probably the low-hanging fruit for moving forward on how we can work together in the future. So, back in 2010, FDA, CDRH, started thinking about
what kind of improvements we can make in the surveillance
system for medical devices. And at that time, we launched
an initiative called MDEpiNet, the Medical Device Epidemiology Network, which was envisioned as a test bed for exploring a variety of innovative approaches to infrastructure and
methodology development that can help inform the development of the national system
for health technologies. And since that time, the partnership has grown into an international
one, with several chapters across the globe, having
access to over 100 registries and 700-plus methodologists across the globe working in
this public-private partnership. It’s a really, truly
ecosystem-driven effort. And that partnership was the foundation that has been very helpful in helping CDRH
put together the vision for the establishment of the
national evaluation system for health technology. All these efforts, fast forward, culminated in 2017 when we
launched the actual work on establishing the
Coordinating Center of NEST, which is currently under the umbrella of the Medical Device Innovation Consortium. And since that time, this center is actually
marching fast forward toward establishing its data partners. There are currently 12 data partners, primarily large health systems that are also partners in PCORnet. And MDEpiNet has also been
incorporated as a data partner within NEST, primarily responsible for coordinating registry networks. And by that what I mean is
registries linked to claims data, linked to electronic health records and patient generated data. So, we’re planning to use the NEST for a variety of
regulatory decision making, not only for safety signals,
evaluation and management, but also to drive the
cost and the time down in terms of achieving the
faster, better, cheaper evidence for regulatory decision making, ranging from expanded indications, and
we’re envisioning even NEST in the clinical trials in the
context of these data sources. So now, moving forward,
how we can work together. There are many opportunities
under this NEST umbrella that can benefit heavily from
having input from Sentinel and, I would say, vice versa, especially now with this renewed interest
electronic health data sources. So, I would say that in the context of coordinated registry
networks in particular, where registries are very
granular in terms of exposure data but lack long-term follow-up, we can actually link
them very successfully with claims data and be able to provide a better way of collecting evidence long-term. Then there are the other areas, in which
registries do not exist, I would submit to you that again Sentinel might
have an even better role to play, because there might be questions in those areas where
it’s not really wise or reasonable to establish
new registries, and we might be able to learn from
what is in the claims data and what is in the next
generation of Sentinel. And finally, I would say
there is a lot of effort now to look into validating the claims data for a number of really
important endpoints. There are some really interesting studies that we’ve conducted in which actually claims data
become a very important tool, even for immunization in the
context of future studies. So now, those are the three areas in which I think we
can now start exploring collaborative efforts. But I would also like to conclude by noting that the gaps that are
ahead of us are still wide. Let’s not forget that our
country is still ranked 35th out of 169 in the recent Bloomberg report
based on the health index. So again, we are here to change that and we would like to make sure that the FDA is leading the charge into this ecosystem development. Also, recently, the Commonwealth
Fund report lists the U.S. as 11th out of 11 overall in terms of health
outcomes, efficiency and administration. And we still spend $11,000
per patient on healthcare, and yet our life expectancy is declining, and I don’t even want to go into infant mortality
and other indicators. So to say the least, FDA’s
in the best position, in fact, given where we sit in the
total product lifecycle, to actually lead the charge and also be very much part
of the ecosystem that can, in fact, contribute to our
country establishing and truly implementing the
learning healthcare system that many of you, here in this
room, helped actually envision. Thank you. – Great, thank you, Danica
and thank you all. You know, I think I need
to ask that question I asked a couple of years ago
of Steve, about what’s next. You all each laid out, not
only a tremendous amount of accomplishments but some
key gaps and opportunities and ways to fill them
over the next few years. We have a few minutes for questions from you all who are here. So if you could, if you have one, please head up to one of the microphones. While we’re waiting for people to do that, I was struck by a couple
of cross-cutting themes. One is the increased reliance
on electronic health records, coupled with use of AI and several of you mentioned natural
language processing too. Seems like the main applications there are around identifying or
confirming diagnoses, identifying cases reliably. Maybe if you all could,
if any of you want to, comment a little bit more
on that, since it does seem like a big next data frontier,
the biggest challenges and opportunities in that expansion out of Sentinel, BEST, NEST, data sources. – Let me just start, briefly. I think that coded data are wonderful because they’re organized and everything, but they don’t always tell
you exactly what you want. So I think there are nuances
of disease or subcategories or subgroups that the codes
aren’t gonna be sufficient for. They may be a starting
point, but it takes going into the medical record and sifting through the information there to more precisely get
the phenotype you want. I think there are also important covariates that should be in a
medical record but aren’t in claims data, such as body mass index, smoking status, things like that, that are really important determinants of health and health outcomes. And that might be some easy
low-hanging fruit to start with. And then I think as we
look to broader uses, defining not only the outcomes
but who the patients are in terms of genetic characteristics. I think if you look at a lot of
what’s going on in oncology, it’s really based not only on
tumor type, based on organ and pathology, but also on tumor markers, and on getting that kind of information to more precisely define a population.
so I guess I would say, accuracy, accuracy, accuracy. So, I think, you know the biggest, the first place we start is sort of, looking at the coded
and the structured data. So those are the easy
fields to kind of look at. But those don’t actually
always provide the level of information that you want. So it actually probably
is more informative to go to what we call the unstructured fields and those may be the physician’s notes, the nurse’s notes and those
particular types of areas. And so, and then trying
to use those technologies to pick out key terms from those fields. So that’s a real challenge, I think that’s a real morass right now and it’s something that
we’re all gonna have to, I think, work through and wade through. It’s the same with the reports
as well, so radiology reports, for instance;
those are almost, you know,
unstructured fields as well. The other thing too, is getting
the data into the records. So for instance, getting the
transfusion information into one place in the
record is pretty critical, because right now we can find that information in several parts of the record. So I think, it’s not only the technology but I think we’ve also
got to work with vendors to kind of say, let’s make fields for these particular areas of importance so that it’s easier to actually access those types of data too. So those were kind of the
major areas that I have. – So a couple of things
from my perspective. And again, I’d like to acknowledge the huge support that comes from the Office of the
Assistant Secretary for Planning and Evaluation and the PCOR
Trust Fund program, which enables us to undertake this initiative to put more
emphasis on harmonization and interoperability between
a variety of data sources. The reason why this is really important in the context of these
coordinated networks is that there is not going to be just one data source that they will be drawing
the information from. And we want, as much as possible, to
have minimum core data sets that are aligned between the registries, and to add electronic
health records to augment the information
that comes from the registries. And not to mention that
a lot of data will not be in the electronic health records, but will be in the patient generated data to which we also have to
put a lot of emphasis on. So I think the quality
still obviously is an issue. And you may say, you know, sometimes we hear the criticism that
registries are costly because of all these curation efforts, but there is no way
around cleaning the data. You can do it one way or another, but at some point you’re
gonna pay the price. So, quality is one, transparency
is another important one, and so is access to the data
sources, which has also been one of the shortcomings of registries that we hear about constantly. So these are the efforts
that we are trying to actually move forward under
this new grant opportunity from the PCOR Trust Fund. – Great comments and I
appreciate bringing up patient-generated data,
another frontier, as you said, Danica, for bringing
together what will, I think, keep getting to be more reliable sources of richer data for this
evidence generation effort. I want to thank all of
you, Gerald, Steve, Danica for the overviews of what’s
happened and what’s coming up. It’s been a great session,
thank you all very much. – Thank you, Mark.
– Thank you. (audience applauding) – And as I said, we have
a packed schedule today and we’re gonna get right
into our next session which will go into some more detail on some of the key achievements
and strategic directions for the Sentinel System, for all of these safety
surveillance systems and foundations for broader
evidence generation. So, this session’s gonna
focus on the achievements and CDER’s use of the Sentinel
System over the last year and plans to enhance the infrastructure as well as some of the
particular projects, some of the high-profile projects that the agency is undertaking related to combating the opioid
public health crisis. For this panel I’m very pleased to introduce Robert
Ball, the deputy director of the Office of Surveillance
and Epidemiology; Michael Nguyen, the FDA
Sentinel Program lead and deputy director of the
Regulatory Science Staff in the Office of
Surveillance and Epidemiology, Michael, thanks for being here. And Judy Staffa who’s
the associate director for Public Health Initiatives in the Office of Surveillance
and Epidemiology and I’m gonna turn it over to Bob for the first presentation. – Thanks, Mark. Can you bring up my slides? So, this morning I’m
gonna talk a little bit about the Sentinel System Strategic Plan and some of the initial steps we’re taking towards implementation. So first, a little background. Why did FDA develop this strategic plan? So there was a final assessment of the Sentinel System in 2017 that was one of our PDUFA V commitments. And one of the recommendations
from that evaluation was that FDA should clearly articulate a long-term strategy for Sentinel. So it sort of makes general sense. But there were other reasons. As it was mentioned already,
we have PDUFA VI commitments for the Sentinel System, focus improving how the system operates and
communication and training. And they already outlined
a kind of strategy that we were taking. So we wanted to take it the
next step and formalize that. Subsequent to that work, the 21st Century Cures
Act created requirements for FDA for a framework
for real-world evidence for efficacy evaluation. And Sentinel, as you heard,
has already been working in that area and FDA-Catalyst but we thought that this all needed to be integrated into a formal plan. You’ve also heard a lot
already about advances in data science technology and data types and we recognize we needed
to more systematically think about what we wanted to do in that space. And lastly because of the opportunity for the Sentinel System
contract re-compete, we thought it was a good
time to put out this plan as we were making that preparation. So the commissioner announced the Sentinel System 5-year
strategic plan in January. And I’ll just step through
it here at a high level. So the FDA vision is for
a sustainable system, a sustainable national
resource to monitor the safety of marketed medical products and expand the real-world
data sources used to evaluate medical product performance. As Dr. Woodcock said earlier this morning, that’s really been the
vision since Mini-Sentinel. To create this national resource that would broadly be used for
medical product performance. So we continue that here. So how are we approaching it? So the first part is to enhance
the foundation of Sentinel while maintaining FDA's 10-year investment. So we want to expand data
sources and linkages, improve data infrastructures and methods, enable more effective use
through operational improvements. So a lot of these are internal efforts that have been ongoing for a long time and will really just
continue those efforts in the next five years. We want to further enhance
safety analysis capabilities. You’ve already heard
about ARIA sufficiency and I’ll talk a little
bit more about that. And we also want to leverage
advances in data science and signal detection and
Michael Nguyen will be going into a fair amount of detail on that. We want to accelerate access to broader use of real-world data. So the key to this, as
you’ve already heard, is improving our access to
electronic health records and in particular, linkages
between claims and EHR data. And then we want to conduct
specific demonstration projects using real-world data to
generate evidence for efficacy. Fourth, we want to create
a national resource by broadening the user base. The key to this is really evolving the Sentinel System operating model. And you've heard mention
of that this morning. I’ll talk a little bit more about that. And then lastly, we want
to disseminate knowledge and advance regulatory science. So this is something
that we’ve tried to do from the beginning and we want
to redouble those efforts. In the last session
you heard already a lot about the innovation direction. And this really is a cross-cutting effort for all the main strategic
aims of the strategic plan and it really ends up being the linchpin for how we think the
system is going to evolve. So, we want to use natural
language processing and machine learning and our
initial focus is on outcomes but as Gerald Dal Pan mentioned we can also expand into
several other areas. Other types of advanced analytics, we’re specifically
mentioning machine learning but there’s many other possibilities. And we anticipate, given
the rapid pace of change over the next five years, that there will likely
be other technologies that we’ll want to evaluate. Novel data sources, EHRs in particular, but also patient generated data. Of course there’s a lot
of buzz about wearables and we’ll see if those
data sources become useful to us over time. Then interoperability, which Danica mentioned already, and this, I think, is one of the most fundamental issues that we face. Every time we talk about
how to incorporate new data into Sentinel, the Common
Data Model comes up. And the Common Data
Model is very important because it allows for the
efficient operation of the system, but it also can be constraining because it requires very intense quality checks and we're somewhat limited by what data is in the Common Data Model. So we have to think about how to improve the Common Data Model and come up with the next
generation version of it. And then there’s emerging
disruptive technologies, Blockchain always comes up in this space. Although we don’t necessarily
know if we’ll use Blockchain or if there will be other technologies that will be even more important. So I’m gonna take a little
bit of a deeper dive into two of the strategic aims. So Strategic Aim B, which is to further enhance our
safety analysis capabilities. We held two meetings with Duke Margolis in 2018 in this space. The first was focused on increasing the Active Risk Identification and Analysis, or ARIA, system's sufficiency, and Gerald already explained what that is. And the second was leveraging advances in data science and signal detection. So I'll talk about the first and Michael will talk about the second. So, the focus of this,
the meeting on next steps to advance the Sentinel System was how do we improve ARIA sufficiency. So just a brief recap of that. Before FDA can require
a post-marketing study of a company we have to ask
if Sentinel is sufficient to answer the question. And what that boils down to is, do we have the data on exposure,
on outcome, on covariates, do we have the necessary
methods to answer the question of interest with a level of precision that we’re comfortable with? We’ve done an analysis of all the issues that have come before us
in the last couple of years and found that Sentinel’s
sufficient about 50% of the time. And the primary, or the biggest, reason for lack of sufficiency is data inadequacy. And the single largest data inadequacy is outcomes. So, the first answer to this question is improving the efficiency
of outcome validation. So you heard a lot of discussion of that in the previous panel already. So, we’ve launched a number
of projects in this area. Just to highlight a couple of them. There are three big bucket areas. The first is chart review improvement activities. So we're doing chart review a little bit of the old-fashioned way, but with improved processes, and also data collection so the data can potentially be reused in an electronic fashion. So it's not just all on paper. We're also looking at how to
expand the Common Data Model to accommodate the data that is generated by these types of evaluations. And the third is looking
at advanced analytics. So, we launched a pilot project
a few years ago in Sentinel, looking at anaphylaxis, and
one of the interesting findings from that is even though we use NLP to extract information from narratives, the method didn’t really do much better than the traditional
claims based algorithms. So, the reason for that
has to do with the ability of the natural language
processing approach we use to extract the subtleties around clinical diagnosis. So this project is taking
a very deep dive into that. And we’re hoping it’ll
generalize into an approach that can be applied
across all different types of outcomes and give us a general system. So implementing Strategic Aim D, which is to create a national resource by broadening the user base. So you already heard
mention that in the RFP that the FDA has launched, we're asking for proposals around three
different types of centers. An Operation Center, an Innovation Center and a Community Building
and Outreach Center. The Operation Center we
envision will be similar to the operating centers that we have now which answer the questions in ARIA and also the FDA-Catalyst program but hopefully with improved data and advanced methods over time. The Innovation Center will initially focus on the technologies around
NLP and machine learning but over time if needed there can be additional Innovation Centers for new types of methods
or new types of questions. And Community Building and
Outreach Center is really, one of the things we’ve
learned over the last decade is that it’s a full-time
job just to communicate what FDA does with Sentinel. So, the starting point is really meeting our need for communication but also
hopefully to reach out to scientific communities
and elements of the public that haven’t really gotten
the message about Sentinel. And this just gives us the
opportunity to focus that effort. So, the other thing that this
new operating model gives us is the potential to evolve
the Sentinel System in a way that would help us fulfill this vision of creating a national resource
for evidence generation. So, that vision includes
having a broad range of Sentinel System users,
with open access by any party. So here you can see FDA as a central user, but academia, industry,
other government agencies, international regulators,
payers, providers, and patients. It can also create the opportunity for new types of analytic centers. So, for example, the
FDA’s analytic centers right now focuses on drug
safety and effectiveness. But there could be other uses that parties outside of the FDA might be interested in. And an Innovation Center could evolve to create that opportunity. It’s all based on the
Sentinel partner network with the FDA having its core use. And that would hopefully lead to this idea of a national resource that would be open to many users. So in summary then, some
key messages are that we were trying to maintain
and enhance the foundation of the Sentinel System
and FDA’s investment. We want to diversify data sources, especially for EHRs and claim linkage. Incorporate advanced analytics,
broaden our touchpoints for participating in Sentinel development across a wider community. And then create this
broader community of users that can use the Sentinel System and learn from the
projects that we conduct. So thank you. – Great, thanks very much, Bob. And next is Michael Nguyen. – Great, I’m delighted to be here to talk about a topic
I love to talk about, which is signal identification in Sentinel. So, broad outline is, we're gonna talk about three things today. I'm gonna give a little bit on the background and motivation for signal identification in Sentinel. I'll give a brief summary of the outcomes from the recent meeting we had a couple months
ago on signal detection. It was a public meeting held in December and it was a very important meeting in building support and
clarity for the program. And then I'll end by talking about next steps in launching the Sentinel signal identification program: specifically, that we are going to implement TreeScan on a limited set of drugs. We will develop internal
operating procedures and we will continue to
advance TreeScan capabilities and build a scientific
community around it. So, let’s start with what we have to start with which is the legislative mandate. And this is a non-trivial
thing, to create a robust system to identify adverse events and potential drug safety signals. And why is this important? It provides programmatic clarity for us and points to a future direction. So let’s define our terms before starting. So what is signal detection? It is a process of systematically evaluating
potential adverse events related to the use of medical products without pre-specifying
an outcome of interest. And that’s important. This is broad based signal detection. And we have several approaches to detect these new and
unsuspected safety concerns. But they all have one thing in common which is that they provide information about unexpected elevated frequencies of health outcomes after
a product exposure. The other thing to note,
is that they are not intended to establish causal associations between products and potential adverse events. And so they should always, always, always be followed by clinical review and a safety study specifically designed to quantify the magnitude of effect, targeted to that specific outcome of interest. So, more on this. So this is the basic
paradigm we’ve been operating with since we stood up Sentinel in 2016. There’s a signal of a serious risk, we investigate it in Sentinel and we contribute to the
regulatory decision making. It’s so easy and so deceptively simple that it took us seven years
to build such a system and then another three years proving that we could do it at scale for FDA. So let’s move this to the right. This is a right shift in what we’re doing. And what we’re doing now is not only investigating those signals but we’re also identifying
signals in Sentinel. And so there were two questions at this public meeting in December, and I won't go over them in depth, 'cause it was a full-day
meeting and I don’t have time. But the first question was,
can Sentinel both identify and investigate safety signals, ’cause now we’re introducing
not just a one-step evaluation but a two-step evaluation. I was fortunate, actually,
after college to participate in an archeology dig in Israel, and one of the things that you learn on an archeological dig is that, one, it's not as glamorous as it looks. (audience laughing) But two, it's a two-stage process. And the first stage
actually involves pick axes and metal shovels and machines to uncover and move large amounts of dirt. And that’s essentially what
the signal detection phase is, is moving a lot of dirt
using very rough shovels, using large tools so that you can scan across lots of outcomes
and find what you want. Then the second stage is when you use your precision hand brushes and your tools to really find
the things that you want. So that's what we're doing here. What we learned is that doing this, at least in epidemiology, raises some challenges. So when you identify
and investigate signals in the same database, you
have to be very careful. And one of the questions was, can we actually do this in the database? Can we use the same source of information to both find and clarify a safety issue? And what we found was that there was a strong
scientific basis for doing this, so long as you meet these criteria. And this is actually a slide from Mark Levenson, I'll give him credit for this. So it is valid if the goal is to reduce bias and not provide replication; it's valid if the investigators control for Type 1 and Type 2 errors at each analytic stage in the sequence, between identification and refinement and/or evaluation. And it's also valid if investigators pre-specify
the analysis plans. And this is important
so that you’re not seen as data dredging or intentionally trying to find what you want to find. And then finally, it’s valid
if the results are transparent. So the second part of
this scheme that changes when you introduce signal detection in Sentinel is about communication. We’re very good, already, at communicating
regulatory decision making. We have established processes. People are used to knowing and wanting to know
actionable information. What’s harder is communicating
signals of serious risk. And so you need to balance transparency with the inherent uncertainty
of that preliminary data that emerges from the signal
identification process. This balance between
waiting for actual evidence and communicating as early
as you can is not new. This is an article by
Gerald, back in 2012, that wrestled with this exact problem. And we came out saying, look, there are inherent uncertainties. To certain audiences
it can create problems if you have uncertain data. But we believe that that
transparency is vital to having a robust safety system. And more importantly, it fits and aligns with all of our other existing
transparency principles. We had Section 921 that
is related to FAERS and routinely publishing signals that are coming out of FAERS. The FDA’s tracked safety issue process, which is our established process for communicating with industry
about potential safety risks that we're actively investigating. We also have drug safety communications that communicate not only early findings but also followup findings. And you'll actually see this if you look historically back at our DSCs: you'll have DSCs that say very early on that we're still investigating this, and you'll see two, three or four other DSCs subsequently that show how the risk is evolving and how our investigation and our conclusions about that risk evolve. And then we have our own Sentinel-specific
transparency policies. But the bottom line of
all of these principles is that there’s the overall
concept of early communication to the public and industry
of the safety issues we identify at FDA. And so, the Sentinel System, even after we launch the
signal identification program, will continue posting analytic code, we’ll continue posting results of the signal identification analysis and we’ll continue to post
our regulatory determinations and outcomes online. And these will continue to evolve with all of the other activities
that are going on at FDA. So how will we do it? How will we do it? This is a slide that was
used back in December. I think it was fairly well supported and this is the template going forward. So you’ll see here that
it is a two-stage process. You have the first row showing the results of the signal identification. You’ll see there where the FDA will further investigate Outcomes
A and B in a Level 2 analysis, which means that there were two signals coming out of that process and we are going to further investigate them. And then the second row, hypothetically 10 months later, is the subsequent Level 2 analysis. It is the subsequent refinement or evaluation of Outcomes A and B, and then the results that are posted online. So, two-stage process,
two-stage communication process. So, how will we actually do this? This is another slide that
we showed back in December. We will take all of the
safety databases that we know, integrate them, and from there decide which products should undergo signal identification in Sentinel. We will choose one product going forward. We'll choose a study design and the tool we're gonna use, we'll conduct the analysis, we'll review and classify those statistical alerts and then we'll integrate the results with other sources of information. And there's nothing
new about this process. The only thing new about it is that Sentinel is now a tool in it. We already do this at FDA. And from there, if we identify any outcome for further evaluation, we will do it. Importantly, the signal
identification will be led by the Divisions of Pharmacovigilance and the Sentinel Program
Team at FDA, with support from the Office of New Drugs and the Division of Biometrics VII. And then the followup
investigations will be done by the Divisions of Epidemiology. All right. So what are the next steps? Before we go to the next steps, I just want to review briefly that we've been carefully developing this program over 10 years. Obviously we have the
multi-site distributed database, that’s privacy preserving. It includes private and
public insurance data. We developed signal identification tools. Then we developed alert
investigation tools and then we developed inferential tools for followup studies. All of this stuff has to be in place for this program to be successful. We also tested and developed those signal identification
tools in multiple ways. We started with simulated data sets where we injected signals and saw if the tool could find them, so that we knew there was ground truth and could test the method. Then we also did it empirically with real products. We did it on a whole wide variety of products that include vaccines, chronically administered drugs, and other preventative or prophylactic drugs. So, next steps then, building upon that momentum. We're gonna go forth in the next 12 months with limited implementation. We're gonna select at least
one or two, maybe more, products and from that early experience we’ll develop a framework
around study design, timing, comparator choices and things like that. And from that experience, we
will then develop processes, training, templates, communications. So the next year’s gonna be
important for this program. Meanwhile, we’re still
advancing the methods. This shows four important
projects in signal identification. Three are ongoing; one is planned and will be started later this year. The first two have protocols online. The first is a methods comparison: we are testing and contrasting TreeScan with two other signal detection approaches that use a self-controlled design. As I said before, the protocols are online. Can't wait for the results, really excited about 'em. Also, we know that in CDER, especially, we're gonna need active comparators. And we know how to do
propensity score studies, but we don’t know how to
do propensity score studies for 8,000 outcomes at once. That's the challenge. And so the second project here is to develop a global propensity score that can be used across 8,000 outcomes. And we're evaluating five different versions of it that combine expert-informed covariates with empirically derived covariates. And another one that I think is gonna be really, really helpful to us: the third is sequentializing TreeScan. And this is unbelievably important, because right now we face the challenge of when do we do a TreeScan study. You can do it too early, you can do it too late, depending on who you are. It's pretty hard to get it just right. And so if you develop TreeScan and you nest it inside
a sequential framework, which is what we do, then
you can run it multiple times in a statistically rigorous manner so that you don’t have to guess and choose one point in time. And then the last one
is leveraging TreeScan for pregnancy and birth outcomes. We have a lot, a lot of
products that end up having two post-market required
studies by industry. One is a database study and
one is a pregnancy registry. Uniformly, the goal of those
studies is signal detection. And so there’s a clear need for signal detection in pregnancy. So more to come. Lastly, we’re gonna build
a scientific community, as Bob talked about around this. We just updated the signal
identification website online. This is actually already
old, this screenshot, we updated it yesterday again and I'm behind the times already. But you can see on the left, we are going to keep building this out so that we faithfully
disseminate information, share best practices, and help
other people use the program, use the tools that are available. As part of that website,
this is just a nice table, I know you can’t see it here, it’s online, that contrasts the five different methods that we currently have available or that we’re already
investigating in Sentinel. Some of them are available. We have the Information Component Temporal Pattern Discovery, three versions of TreeScan and
Sequence Symmetry Analysis. And you can see, we organize it by how is the study designed different. How is the test statistic different? How is it controlled for multiple testing? And how does it control for trends in healthcare utilization? It’s a nice easy way to bring
yourself up to speed quickly. And we’ll continue to develop this. We also are developing
a TreeScan FAQ page that addresses the most frequently asked questions. It's up online, we'll
continue to evolve it. Topics include Tree
structure, can we use ICD-9, can we use ICD-10, can we use MedDRA, can we use whatever
tree you’d like to use. It talks about specific statistical and epidemiological concerns
like, what is a TreeScan alert? How does TreeScan control
for multiple testing? And the role of chance in
testing, and things like that. And then our historical approach to validating the method. And then, we'll have a page dedicated to other folks' use of TreeScan, again to build a scientific community around this. So, if you use it, and I or my team can find it, we'll put it on there, because how you use it and what you learn is
not only important to FDA but it’s important to others
who are using TreeScan. So this is my last slide, I’m
getting kicked out right now. (audience laughing) Signal identification is
both a legislative mandate and an important drug
safety priority at FDA. A strong scientific basis exists for conducting signal detection refinement and evaluation in Sentinel. As well as for communicating
safety information early. And FDA will continue to advance the Sentinel signal identification program through developing internal operating procedures, continuing to advance the tools, developing new tools and building a scientific community. Thank you very much. – Great, thanks.
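To give readers a concrete feel for the tree-based scan statistic described in this talk, here is a minimal, hypothetical sketch in Python. It is not FDA's implementation or the TreeScan software itself: the outcome tree, the event counts, and all function names are invented for illustration. It uses a simple binomial (exposed-vs-comparator) null, as in a 1:1 matched cohort, and a Monte Carlo adjustment for the multiple testing that comes from scanning many overlapping nodes of the tree at once.

```python
import math
import random

def llr_binomial(c, n, p):
    # One-sided log-likelihood ratio for a node with c exposed events out
    # of n total, against a null exposure probability p; 0 if no excess.
    if n == 0 or c <= n * p:
        return 0.0
    llr = c * math.log(c / (n * p))
    if c < n:
        llr += (n - c) * math.log((n - c) / (n * (1 - p)))
    return llr

def rollup(tree, leaf_counts, node):
    # Sum (exposed, comparator) event counts over all leaves under `node`.
    if node in leaf_counts:
        return leaf_counts[node]
    exposed = comparator = 0
    for child in tree[node]:
        e, c = rollup(tree, leaf_counts, child)
        exposed += e
        comparator += c
    return exposed, comparator

def tree_scan(tree, leaf_counts, p=0.5, n_sim=999, seed=0):
    # Scan every node ("cut") of the outcome tree for the maximum LLR,
    # then get a multiplicity-adjusted p-value by Monte Carlo: re-draw
    # each leaf's exposed count under the null and re-scan.
    nodes = list(tree) + list(leaf_counts)

    def max_llr(counts):
        best_score, best_node = -1.0, None
        for node in nodes:
            e, c = rollup(tree, counts, node)
            score = llr_binomial(e, e + c, p)
            if score > best_score:
                best_score, best_node = score, node
        return best_score, best_node

    observed, best = max_llr(leaf_counts)
    rng = random.Random(seed)
    exceed = 0
    for _ in range(n_sim):
        null = {}
        for leaf, (e, c) in leaf_counts.items():
            n = e + c  # total events at each leaf are held fixed
            ne = sum(rng.random() < p for _ in range(n))
            null[leaf] = (ne, n - ne)
        if max_llr(null)[0] >= observed:
            exceed += 1
    return best, observed, (exceed + 1) / (n_sim + 1)

# Hypothetical MedDRA-like hierarchy and (exposed, comparator) event
# counts from a 1:1 matched cohort, with an excess injected at "afib".
tree = {
    "cardiac": ["arrhythmia", "ischemia"],
    "arrhythmia": ["afib", "bradycardia"],
    "ischemia": ["mi", "angina"],
}
leaf_counts = {
    "afib": (30, 10),
    "bradycardia": (6, 5),
    "mi": (5, 6),
    "angina": (4, 4),
}

best, observed, p_value = tree_scan(tree, leaf_counts)
print(best, round(observed, 2), p_value)
```

Under a 1:1 match each event falls in the exposed group with probability 0.5 under the null, so the injected excess at the "afib" leaf produces the largest log-likelihood ratio, and the Monte Carlo p-value accounts for having scanned every leaf and every parent node simultaneously, which is the feature that distinguishes a tree scan from running thousands of separate tests.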
– Thanks, Mark. (audience applauding) – Great to see all the progress there. Next is Judy. – Good morning. I’d like to thank the organizers
for inviting me to come and share some of the work
we’ve been doing in Sentinel to try to help inform some
of our decision making in addressing the opioid crisis. So, unless you've been under a rock, you know that we have an incredibly destructive
health crisis going on in this country. And FDA is spending a lot
of time, energy and resources doing what we can to do our part to help resolve this crisis. So we all know that
over the past few years, the crisis has morphed in directions where we’re hearing less
about prescription opioids and more about the role
of heroin and fentanyl in opioid overdose deaths. However, I'd like to just point out and make the case that, as you'll see from the graph on the left, prescriptions for opioid analgesics have been dropping since their peak in 2012. But if you look on the right, as we all know, we're pretty devastated to continue to watch deaths going up. If you look at the blue
bar, that’s fentanyl deaths and I think we’re all
reading about that every day. But if you take a look at the green bar, that’s the role of opioid analgesics, prescription opioid analgesics. So I would argue that
they’re sill important, they’re still causing harm and we still have a lot of work to do to try to decrease that. And as we learn from our colleagues at the Substance Abuse Mental Health Services Administration, indeed prescription opioids
are the most abused class of pharmaceutical products that are reported in the surveys they do. And if you look at the second bullet, you can see how they actually dwarf heroin as a reported source of misuse and abuse. So, we have claims data everywhere. Administrative claims data,
FDA has a lot of experience. We’ve been using these data
to study drug safety concerns and continue to refine our
methods for many years. But there’s a lot of challenges. Opioid misuse, abuse, addiction, overdose and death is a very different animal from the typical safety
issues that we study, that many of us have spent
our careers studying. So as we turn to these claims
data that we are so used to relying on to help us
with drug safety issues, there’s a lot of challenges when we try to study opioid
exposures and outcomes. And again, I won’t be able
to go into a lot of detail, but just to highlight
some of the big ones. Although in claims data
we can get and we know, we get a very accurate
record of a prescription drug that is dispensed to a patient covered under an insurance plan, we don’t capture the
other opioid exposures that patient may have. For example, those they pay for in cash, those they might buy on the street, perhaps as they develop
opioid use disorder and their need for drug increases. These are also, remember,
activities that I may not want to bring to the attention of my healthcare provider
or my insurance plan. So again, there’s more of a
need to go outside the system. Trying to understand dosing. We often rely on claims
data to estimate dose. In the opioid realm, when you're talking about things that cause tolerance, when you're talking
about behaviors that lead to escalating drug taking and dose, we can’t always rely on
those algorithms anymore. And very importantly, many of the folks who are actually experiencing harm from prescription opioids are not folks who have prescriptions for those products. Indeed, in about half of overdose deaths, there is no evidence of
a prescription opioid in a patient’s record. So how do we understand
the exposures to people who are not actually
receiving the prescriptions but perhaps they’re in the household. And a lot of the policies we’re undertaking have
changed the coverage policies, the ways insurance companies
are actually paying for these products and we have to remember that as we interpret the results
we see out of claims data. And then, when we try to
look at the outcomes, again, since many of these behaviors occur outside the healthcare system, we’ve been challenged to try to develop and validate measures that actually measure
overdose, misuse, and abuse. And many of the details of the specific product that might be involved in that overdose or that episode are often not available to us. And finally, only a
small subset of deaths, which is the thing we
worry about the most, are actually captured in claims data. So again, a lot of
these deaths, as we know and read about every day in the paper, are occurring outside the healthcare system. So, there's some pretty
significant challenges with trying to use our typical
hammer to hit this nail. So, FDA has been focusing
a lot of our attention on a big problem we've seen, which is that a lot of the opioid that is prescribed for patients is not used, and is therefore hanging around in medicine cabinets and in homes. This summarizes some literature from folks who have been asking patients what they need: what have you been dispensed in relation to a particular surgery or procedure you've had, what were you prescribed, and how much did you use. And you can see, this graph,
the bars going across, actually represent the percentage of patients reporting leftover opioid. So anywhere between
60-ish and 90-ish percent of patients have extra opioid left over. There's a lot of reasons for
that, but the bottom line is, this opioid is hanging around. And to add insult to injury,
not only do folks hang on to this stuff, they’re
not storing it securely. It’s basically there in case
I need it again in the future. Why do we care about these unused opioids? Again, because from the National Survey on Drug Use and Health, we know that extra opioids in the home actually lend themselves to adverse outcomes, experimentation, and accidental exposures. And indeed, the majority of people
report getting the drugs either from their own prescriptions or from friends and family. So this is an area that we think is very ripe for actually taking some
public health action. So, a lot of folks in the country have done different things. We’re not the only ones who
have recognized this issue. So a lot of states, a lot of health plans, a lot of municipalities
have developed strategies to try to limit the amount of opioid that is prescribed or
dispensed to patients. A lot of this is not evidence based, it’s basically just a knee-jerk reaction to try to do something to
stop a serious problem. So, these one-size-fits-all
strategies can cause harm because as we learn from a lot of these patient reported
studies that I showed you before, not only do a lot of patients
have leftover opioid, that’s a theme, but what
patients actually need varies quite a bit between
patients and between procedures, depending on what kind of pain you have and why you have that pain. So these policies, although well intended, can lead to a lot of harm. So, our goal, what we've been working on and what we now have a lot of support for from the SUPPORT legislation, is trying to develop indication-specific
evidence-based guidelines. This is a mandate that’s now in our lap from the legislation passed
in October, but again, we were already working in this space to try to understand how
much opioid do patients actually need so that as
we develop guidelines, or use our new authorities
to develop packaging, we try to make sure, that
as we limit excess opioid, we try to hit that sweet spot where we actually give patients what they need to relieve pain. So, to get to the policy
questions, we said, well gee, can we use some
of our healthcare data to actually be able to support figuring out how much
opioid patients need. The literature is full of
studies on different procedures, different surgeries and asking
patients how much they need. And that’s very helpful, and that’s some of the most
accurate information we can get. However, those studies are laborious, they focus on individual surgeries and it can be tedious to do those across all the different reasons
one might need an opioid. So we were trying to do
something that might allow us to cast a broader net
and get more information. So finally, I’m at the
Sentinel portion of the talk. So, we turned to our
Sentinel colleagues based on some studies we’d
seen in the literature where they used healthcare claims data to try to look at the
occurrence of procedures which are pretty well
documented in claims data. And then look at the prescriptions
that were given to adults who didn’t have any
previous history of opioids in their claims and then
try to understand can we, based on the quantity of opioid in that first prescription
predict, put it in a model, put in other variables and
try to predict the probability of needing another prescription. ‘Cause that could be a proxy
for I didn’t get enough in my first prescription, I need more. I have more pain. So, the challenge here is
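The tabulation being described, treating a second fill as a proxy for inadequate initial supply, can be sketched in a few lines of Python. Everything here is invented for illustration; the actual Sentinel work used regression models with additional covariates, not a simple cross-tabulation.

```python
# Invented illustration: for opioid-naive patients, tabulate how often a
# second (refill) prescription follows a first fill of a given days supply.
# None of these numbers are Sentinel results.
from collections import defaultdict

# (days supplied in first fill, whether a second prescription was filled)
records = [
    (3, True), (3, True), (3, False), (3, False),
    (5, True), (5, False), (5, False), (5, False), (5, False),
    (7, True), (7, False), (7, False), (7, False), (7, False),
]

def refill_rates(records):
    """Observed probability of needing another prescription, by days supplied."""
    counts = defaultdict(lambda: [0, 0])  # days -> [refill count, total]
    for days, refilled in records:
        counts[days][0] += int(refilled)
        counts[days][1] += 1
    return {days: refills / total for days, (refills, total) in sorted(counts.items())}

print(refill_rates(records))  # {3: 0.5, 5: 0.2, 7: 0.2}
```

In a real analysis each cell would come from a model with patient- and procedure-level variables, but the cut-point logic discussed next operates on exactly this kind of refill-probability table.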
that we had to set a cut point and as you see on this graph, we decided that we would pick the point where 20% of patients
actually needed more. Meaning that that first
prescription actually seemed to meet the needs based on
the model of 80% of patients. So again, it’s not perfect,
there’s still patients that would need to come back
and get more prescriptions, however, the environment
now, as I showed you, we’ve got way too much opioid out there. So it’s a place to start
to rein things in. So, these were some of
the surgical procedures we were able to look at. You can see there’s quite a long list, given that we were using
the Sentinel database with a lot of claims
there, a lot of patients. So I’m gonna apologize in
advance to my FDA colleagues and my Sentinel colleagues, who did an awful lot of complicated work in a very short time. And I’m gonna distill
it down to a soundbite, so I’m gonna apologize
right now for doing that. But, the bottom line is,
if you look at what’s in the green box, these are
just some of the results, we were able to identify some surgeries where according to the model
there was only a small number of days’ worth of opioid that
most patients actually needed. So that can help us start to think about which procedures, which patients may only
need very small amounts of opioids compared to what’s
actually being dispensed to them currently. And then in this next slide,
if you look in the red boxes, we also found surgical
procedures where it looks as if what patients are receiving
may actually not be enough. These are patients that
actually need more. And this makes sense with
what we know clinically about complex procedures
often involving bone where there’s more pain
and patients may need more. Again, to be able to
start to support policies where we can customize
what we do to make sure that we’re not harming patients while we’re trying to rein in opioid use. So, these kinds of analyses, and again, I’m showing you small pieces, but they’re ways that we
are using these data to try to inform what we do in our
policies around acute pain. So we’re gonna be looking at variation, trying to support more
customized solutions in the area of evidence-based guidelines of how to treat acute pain with opioids as well as some of the work we’re doing on trying to figure out, well what size packaging
would we want companies to make available for opioids that might meet a lot of needs. And again, this is just one of the sources of data we’re using, but
it’s a very important source and it complements beautifully
the patient reported data. Because patient reported
data, as I mentioned, have limitations, these models, of course, have limitations like all models. But when we compare the
results we get out of the two, we can actually begin to learn more, ’cause one kind of supports or addresses some of the
limitations in the other. So, I wouldn’t be an epidemiologist
if I were actually happy with the data I have in front of me. So, I have to make my plea
to say we need better data. So, what do we need in the opioid space, and what would we like to see,
enhancing not just Sentinel, but all the administrative
claims data out there. We need standing linkage to death records, because if you can’t look at deaths, you don’t know what’s happening that’s adverse in relation
to opioids, right. You’re missing most of the
problem if you don’t have deaths. ‘Cause so many of these deaths occur outside of medical care. We have worked with our
industry colleagues, with some post-market required studies, to be doing a lot of validation work on some of the outcomes in claims data, to look at overdoses and misuse and abuse. They’ve done a fair amount of work, but there’s a lot more to do. So we really need to come
up with a set of metrics that we can all be comfortable
with that we can use and assess their portability across different claims environments. And then, I’m old enough to remember, when we first started
using insurance claims data and we only had family IDs,
we didn’t have individual IDs and I remember being really excited that we got to get individual IDs so we could look at individual
patients and insurance data. And I can’t believe I’m saying this, but we also need to now go back and pull in the family data, right? Because if we’re gonna understand
what might be happening in a household, it would be
very helpful to be able to look perhaps at an overdose event, or at some kind of a signal of abuse. If we don’t find a
prescription for that patient in the data, can we see a
prescription in that household. And again, better understand the patterns and what’s causing these events and the availability so
we can target what we do. And again, constantly
remembering the inadequacies of the data to study this problem. Can we be creative and start linking to other sources of data? Prescription drug monitoring programs that actually have
cash-paying prescriptions, looking at treatment
programs that have sources of other data like
methadone treatment programs to create a more complete picture of the exposures and outcomes. We’re partnering with some folks at Yale and actually working to link together data across the state of Connecticut. I know the state of
Massachusetts has done this too. But these are our poster children, right, these are our efforts to
show that we can do this at a state level and then
think about how we do this in broader ways so that
we can really begin to study this problem completely. Thank you very much for your time. (audience applauding)
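The household-level check described in this talk, looking for a prescription elsewhere in the family when an overdose case has none of their own, could be sketched roughly as below. All IDs, fills and field names are invented; real claims data would require proper person/family identifiers and privacy controls.

```python
# Toy sketch of a household-level lookup: if an overdose case has no opioid
# dispensing of their own in the claims, check other members of the same
# family/household. All IDs and fills here are invented.
family_of = {"p1": "f1", "p2": "f1", "p3": "f2"}   # person -> family ID
opioid_fills = {"p2": ["oxycodone 5 mg, qty 30"]}   # person -> dispensings

def likely_source(case_id):
    """Return the case's own fills, else any fills found in the household."""
    if case_id in opioid_fills:
        return ("own prescription", opioid_fills[case_id])
    for person, family in family_of.items():
        if family == family_of[case_id] and person != case_id and person in opioid_fills:
            return ("household prescription", opioid_fills[person])
    return ("no prescription found", [])

print(likely_source("p1"))  # ('household prescription', ['oxycodone 5 mg, qty 30'])
print(likely_source("p3"))  # ('no prescription found', [])
```

This is exactly the query that becomes possible once family IDs are restored alongside individual IDs, as the speaker argues.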
– Thank you. I’d like to thank you all
for some very thoughtful and actually pretty detailed presentations about applications and
paths forward for Sentinel. We have time for a few
minutes of questions, if anyone does have a
question please head up to the microphone and
let us know who you are. – [Ed] Hi, thanks very much. I’m Ed Bortnichak from Merck. Mark, you pointed out at the beginning, introducing this panel that
there are two broad lines that really have been
discussed this morning. And I heard that as well. One is, the concentration of work in AI and the other is the concentration of work on signal identification. What I didn’t hear, perhaps I missed it, I was looking for,
particularly in Michael’s talk, was the intersection between the two. If indeed you are
planning to pilot the uses of artificial intelligence to augment signal identification. Now I can understand
from Gerald’s comments how AI comes in for signal evaluation, but I’m just wondering, have
you seen connections there for signal identification as well? Thank you. – Thanks. So, I’ll start a little
broader than that question and then try to come back to answer that. So I think the FDA has said all along that Sentinel was not gonna
replace its other sources of data or other methods for signal identification and evaluation. And as you know, that’s very broad, clinical trials, adverse event reports to FAERS, medical
literature, other sources. So, focusing a little bit on
the adverse event reports, for a second, FDA receives now over two million reports a year. And we know that the companies that submit those reports have also large numbers of reports. And many of those reports
don’t add a lot of value but there’s a general desire
to cast a very broad net in signal identification. So the ability to create a system that would do signal identification in a population-based resource, is in some ways just
complementary to that. But once that full system comes into play, then we can use those other data sources, such as FAERS and literature
and things like that, in a more focused way, to use them for what they’re most valuable for. So, for example, spontaneous
reports are very valuable for detailed information from clinicians about unusual events that occur very soon after a product is used
in the marketplace. So, when thinking about
this whole process, we really have to think about
that sort of broad ecosystem. Now, where does AI, et
cetera, fit into all this? So, separately we have a
lot of effort in looking, applying AI-type
technologies to FAERS data, literature, how to find the
signals in those data sources. I think for the population based sources, our focus initially is trying
to do this in a rigorous way, focus primarily around
statistical evaluation. But, all of the improvements
that I mentioned in my talk, around ability to tap into EHR data, ability to have efficient
validation of outcomes. If we’re able to build
those types of systems within Sentinel, then the same types of, or improved approaches to
signal identification can also be applied to those
richer data sources. So I think it ends up being
a, all the data, the systems and the methods end up being reinforcing, but we’re at, right now we’re
at the initial baby steps. – Another question, yeah. – [Max] Max from Leidos. This question is to Michael. We really liked your analogy of oncology with signal detection. I think it welds very well
with it, well articulated. My question is around your statistic plan, to bring not just EHR
data, other data as well: social determinants of health data,
then bringing in all kinds of data. But when you’re trying
to bring those data, and your example of archeology of trying to remove the lines, it’s not one method, one statistical method will
fit for signal detection, because when you try to remove the noise, you need sort of a dynamic
part of the condition, or dynamic methods or techniques, because a model or a technique might work for one kind of signal detection. That’s my first question. The second one is, when you
talk about removing the noise, you get into sort of a
ripple of causations, you need to control that as well, I think five year strategic
plan talks about that as well. Have you thought about providing
any tools or techniques or policies or framework regarding that, because one leads to another. Vitamin K will lead to bleeding with warfarin, and which comes first
and things like that. So controlling the confounding
and with both causation and also dynamic patterning, thank you. – I think this will be the last question we have time for but, Michael, do you want to take that one on? – Hi. So the dynamic pattern. I’ll say that it is true that the data are certainly dynamic. The data are constantly
accumulating over time and we have to make sure that
the method accounts for that. And I think that’s why
we are being very careful in terms of moving TreeScan
into a one-time analysis first and then developing a sequential analytic
framework around that. Bob can talk a little bit more about more advanced techniques
involving network analysis and things like that
for pattern recognition, I think, is what you’re
really talking about. In terms of real-world causation, I think that’s why we
have a two-step process. And that’s why we also
started with signal evaluation and signal refinement first
and got that process down. You don’t want to have a system that can only generate signals and not be able to evaluate them. You start and you make the system be able to evaluate signals and
then you move it backwards and then you start
generating the signals so that the system is self-contained and can do the full spectrum. – Yeah, I’ll just add briefly, I think the current approach focused on TreeScan specifically
will work very well within the claims based environment because of the tree
structure of ICD codes. But as we obtain richer data from EHR, then I think we’ll have to think about, are there other approaches
that might look more generally around pattern discovery in data that perhaps is not in a tree structure. – And to talk about the
thing we’re not supposed to talk about which is a Sentinel RFP, that idea’s in there, in the Sentinel RFP. – To just a brief kind of,
or maybe briefly address, this does seem like a significant
addition of resources to, sort of buy the pick axes and develop the capabilities in data here. Can you comment, maybe briefly on how much this is gonna be a part of the overall Sentinel structure in terms of additional
efforts, tools and the like. – Well, I think, as Michael indicated, our first year is going
to be a limited engagement in this space because we
don’t really have a sense of
into a production system. And, so we’re going to learn. I think, you know, we look back at implementing the ARIA process and it seemed a very simple kind of thing, but partly because of the complexities of just internal management across so many different therapeutic areas and many different drugs
that FDA regulates, it can quickly ramp up. So, I think we’ll have to wait and see. – Right, well it will be a
very interesting year ahead. Thank you all for the presentations. Judy, thanks for the very relevant work on a national crisis. We are gonna take a 10 minute break now and restart again around 11:20 or so. Thank you all very much. (audience applauding) (people chattering) – Okay. Okay, great. A few more folks to settle in. Welcome back. In this next session
we’ll focus on development of Sentinel Initiative’s
safety surveillance programs and activities for CBER-
regulated medical products. As part of this discussion, we’ll explore key
implementation priorities and strategic plans to further develop
surveillance capabilities for these products. I’ll now introduce our panel. We’re very happy to be
joined by Azadeh Shoaibi, as the Sentinel lead at the Center for Biologics
Evaluation and Research. Alan Williams is associate
director for regulatory affairs in the Office of Biostatistics
and Epidemiology. And Steven Anderson is director of the Office of Biostatistics
and Epidemiology. I’ll now go ahead and turn
things over to Azadeh. – Good morning, everyone. And thank you for this
opportunity to talk about one of the new programs that
CBER has recently initiated. So I’d like to start by talking
about what biologics are because that’s usually not very clear. CBER regulates a diverse
array of medical products and they are called biologics. They include vaccines, blood components, blood derived products, human
and human, can you hear me? – [Man] Yes. – Okay, human tissues and
cellular products and others. And the kind of surveillance activities that CBER performs under different names, surveillance or Sentinel or BEST, those are all assessing the safety and effectiveness of
these medical products that are called biologics. So, CBER has certain
priorities with respect to its surveillance activities. And when we are building
a surveillance system, we need to pay attention
to these priorities, to include the infrastructure to accommodate these priorities. Some of the priorities I’m
listing here, for example, we have evaluating safety of
vaccination during pregnancy, use of natural language processing and artificial intelligence
for signal detection, which is a question that came
up at the previous session. Having a capability for
pandemic preparedness, which would require near real-time
surveillance capabilities and hence a shorter data
lag in our data sources. And also, emerging infectious
disease surveillance and monitoring. So, in October of 2017, CBER started to build a new active surveillance system specifically targeting the biologics, and I’ll explain the reason for that. This new system is called
Biologics Effectiveness and Safety Initiative. It’s the CBER Active Post-market
Surveillance Program. And it is a component of
the Sentinel Initiative which is an FDA-wide program. So, before initiating the BEST program we had been working with Harvard Pilgrim and using claims databases only and also the modular program tools that they have for our
surveillance activities. And our many years of working
with that system showed us that biologics have
certain characteristics and require certain accommodations and components in an
active surveillance system. And claims data with about
nine to 12 months data lag may not necessarily be able to accommodate and meet all of the needs for surveillance activities
of biologic products. So as a result, we started
to build a new system, also we have to take into account some of the new legislative
requirements for FDA such as PDUFA VI and
the 21st Century Cures Act. So, to keep up with these requirements, we decided to have some upgrading
done to the infrastructure of the active surveillance
system and start a new system. So the BEST Initiative
has two main objectives. The first one is to
build the data, analytics and infrastructure for
an active, large-scale and efficient surveillance
system for biologics. And then the second aim is to develop innovative methods
using the EHR data sources and also establishing an automated adverse
events reporting system that Steve mentioned. So I will be talking
about Aim 1 in my talk and my colleague, Alan Williams, will be talking about Aim 2. So, we have awarded a
few different contracts to different organizations
that form the BEST Initiative and we are working
currently with IBM, Acumen and IQVIA and OHDSI working together and their collaborators
including Regenstrief Institute, Columbia University,
University of Colorado, UCLA and Cerner. The BEST Initiative has access
to diverse data sources. IBM provides a large claims database that covers about 60 million patients. IBM also provides a linked
EHR claims database, covering about five million patients. Acumen provides another
large claims database, covering about 23 million patients. And then IQVIA and OHDSI provide another large claims database covering about 160 million patients
but also we have with IQVIA and OHDSI a distributed
network of EHR data sources that cover more than 50 million patients. So because our network has
access to diverse data sources, we pay very close
attention to data quality and data quality assessment processes. So at each site, after each data refresh, that is done usually quarterly or sometimes even on a monthly basis, depending on the data
source that each site has and whether the site uses
a common data model or not, a data quality assessment
process is executed after each data refresh. So I’d like to now talk about
some of the accomplishments of the BEST Initiative
that have been in place in the past year and a half. So, we have built a distributive
network of EHR data sources as well as claims and also
linked claims EHR databases. We have reduced the data lag for our near real-time
surveillance capabilities to about three to four months. We have analytic capabilities on demand. And that means that we
have computer programmers who would be able to write ad hoc and customized programming for all of our analytic capabilities, and we would not need to use
prepackaged programs that may introduce some
limitations to our analysis. Because we have access to,
particularly EHR networks, access to medical charts for validation and evaluation of outcomes
is much easier now. And we have built a
portal for some CBER staff to have access to data and analytics for feasibility analysis before we start large-scale studies. So overall, we have improved
the operational efficiency of our system and reduced
the turnaround time for all of our activities. So, with this BEST Initiative, CBER has built a new
modern surveillance system that is able to conduct queries and studies at different
complexity levels. So in order to test the
new system, the databases, the analytics, we have been
conducting exploratory analysis and also descriptive studies
on a wide range of biologics, outcomes and also special populations. For example, we have
looked at identification and exposure over time
to different vaccines, to different blood derived products and also use different algorithms, particularly with ICD-10 coding system, identifying outcomes
and special populations. Also, to test the system, we conducted, or we reproduced some
components of a vaccine study that had already been conducted
by the Vaccine Safety Datalink and published in 2010 by Klein et al. And the study objective
was to assess the risk of febrile seizures in children
receiving their first dose of the MMRV vaccine compared to having MMR and V vaccines
administered separately but on the same day. So here I’m showing the results. The middle column shows the results from the original VSD study and the column on the
right shows the results from the BEST reproduction of the study and the results were similar. CBER has been conducting
activities called hemovigilance and by that we mean,
we monitor development of adverse events in recipients
of blood transfusions. So by using claims data, CBER has been performing
hemovigilance studies for a long time. Our experience has shown
that claims data captures somewhere less than about 60%
of all the blood transfusions. And they also don’t provide
detailed or granular information about blood collection
and modification methods which are very much relevant
to safety of blood components. So, with access to EHR
databases, we found out that there is a relatively
new coding system used by blood banks and other blood
organizations called ISBT128 and that coding system
which is basically a barcode that is used in the organizations can capture blood transfusions
at a much higher level. So ISBT stands for the International Society of Blood Transfusion. So we have incorporated the ISBT coding system into our Common Data
Model of the EHR systems. And here I’m showing the results of an exploratory analysis
we did in this regard. The y-axis shows the number of
patients who received some kind of blood component and the
x-axis shows the years. The blue line shows
the capture of patients who received a blood
component using ISBT codes versus the orange line
using a billing code. And as you can see across
different components of blood and over time ISBT codes capture a larger proportion of transfusions. I would like to move on
to another capability that IBM has made available to us and that is availability of a linked EHR and claims database. So, IBM has a large database, a claims database called MarketScan. It also has a large EHR
database called Explorys. And these two databases were
merged, or I should say, linked deterministically
and the result is a database that has both components from the claims which provide the
longitudinal healthcare data and also the EHR database which provides detailed clinical data and covers about five million patients. So on this slide I’m
showing the comparison between the population in the CED database and the U.S. general
population from the census data with respect to gender
and age distribution. And as you can see, there are some similarities
and differences. So this linked claims
EHR database is important for our surveillance
activities because as I said, it both provides
longitudinal health care data as well as the clinical,
the granular clinical data that we need to more
robustly ascertain exposures, outcomes and other
covariates of the population. So as one of the priorities of CBER for surveillance activities, I mentioned building infrastructure for monitoring safety of
vaccines during pregnancy. So, in line with that priority we are currently running a
project using the CED Database for validation of pregnancy outcomes and gestational age using
ICD-10 code algorithms in this database. So the first objective of this study is to develop algorithms using ICD-10 diagnosis codes and also
HCPCS and CPT procedure codes to determine gestational age and also to classify pregnancy episodes as having one of the four
outcomes of full-term birth, pre-term birth, stillbirth
and spontaneous abortion. And then the next step, the
next objective of the study is to use the case definitions
developed by GAIA as a reference method to validate
estimated gestational age and outcomes classifications. So GAIA stands for Global Alignment of Immunization Safety
Assessment in Pregnancy. So using the GAIA case definitions, within the structured
components of the CED, EHR portion of the CED, the clinicians will be
adjudicating these outcomes derived from the ICD-10 algorithms
with the assistance of a semi-automated chart review tool. So here I’m showing
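The classification step being described, mapping an episode's codes plus a gestational-age estimate to one of the four outcomes, might look roughly like the sketch below. The codes and the 37-week full-term threshold are illustrative only; they are not the study's actual algorithm or the GAIA reference definitions.

```python
# Hedged sketch of a code-based pregnancy-outcome classifier. Z37.0/Z37.1
# and O03 are real ICD-10 categories, but this mapping and threshold are
# simplified for illustration.
OUTCOME_BY_CODE = {
    "Z37.0": "live birth",            # ICD-10: single live birth
    "Z37.1": "stillbirth",            # ICD-10: single stillbirth
    "O03":   "spontaneous abortion",  # ICD-10 category
}

def classify_episode(diagnosis_codes, gestational_age_weeks):
    """Map an episode's diagnosis codes (plus gestational age) to an outcome."""
    for code in diagnosis_codes:
        outcome = OUTCOME_BY_CODE.get(code) or OUTCOME_BY_CODE.get(code.split(".")[0])
        if outcome == "live birth":
            return "full-term birth" if gestational_age_weeks >= 37 else "pre-term birth"
        if outcome:
            return outcome
    return "unclassified"

print(classify_episode(["Z37.0"], 39))   # full-term birth
print(classify_episode(["Z37.0"], 33))   # pre-term birth
print(classify_episode(["O03.9"], 11))   # spontaneous abortion
```

The validation work described here is precisely about measuring how often such code-derived labels agree with clinician adjudication against the GAIA reference definitions.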
some preliminary results from this study in terms
of the study population. We looked in the claims portion of the CED and found about 35,000 pregnancy episodes. Then we eliminated pregnancy episodes that did not have
gestational age estimates in the claims portion of the data. Then we went to the
EHR portion of the data and eliminated pregnancy episodes that did not have gestational age or last menstrual period
dates or any outcomes in the form of SNOMED or LOINC
codes in the EHR portion. And we ended up with about 6,000 pregnancy
episodes for the study. So, the next step for the
study is for the clinicians to adjudicate these outcomes and the gestational age using the semi-automated chart review tool. So the tool has a built in questionnaire and it abstracts the data
from the structured components of the CED, EHR portion of it based on the GAIA case definitions. And then the clinicians use this semi-automated abstraction tool to adjudicate these pregnancy outcomes. So just in summary, CBER has built a new
active surveillance system that is able to run queries
and studies with considerations in terms of its priorities
and requirements. And this system provides a
more robust method for us to ascertain exposures,
outcomes and covariates. And some of the characteristics
of this new system that enable us to more
robustly ascertain exposures, outcomes and covariates
include having a large network of EHR databases as well as claims and also the linked EHR claims database. Access to EHR provides, as I mentioned, detailed clinical data and
also the blood coding system. And the EHR data provide us with a much faster and better
access to medical charts. We have reduced the data
lag to a few months, we have on-demand analytic capabilities and some CBER staff
have access to a portal to be able to do a feasibility analysis on the identified data and overall our operations have improved. I’d like to acknowledge a
large group of colleagues and collaborators who have made
all of this effort possible. Thank you. (audience applauding) – Great, thanks, Azadeh. So now we’ll go to Allen. – Hi, everyone. I’m gonna discuss the
BEST program number two for lack of a better name, which is development of
new and innovative methods for automated reporting for
CBER regulated products. I spoke at this meeting last year and because BEST was
really just starting up, I spent a lot of time describing
some of the challenges to conducting hemovigilance,
either passive or active surveillance, for
blood transfusion recipients. And it’s important because that served as one of the major foci for BEST in the first couple of years. And I just reiterated a
couple of the points here. That passive reporting to FDA for blood component AEs
is virtually nonexistent. Unlike other drugs and
biologic drugs, reporting is not in place and voluntary
reporting is really very low. There are some reasons for this. At the hospital level there
are a bunch of little entities without common data
systems for the most part. And there’s poor data interoperability and manual recording
typically is what’s required to generate a report. So there are some small
hemovigilance systems in the country but reporting is largely done manually. Also there are limited
resources for hemovigilance at the institutional level, so reporting tends to be burdensome. Blood adverse events,
nonetheless, are important. They can be serious and they can be fatal but they are rare and
diagnosis is complex. And building on one of
the comments Azadeh made, one of the reasons that claims data for blood transfusion exposures tends to lack sensitivity is many of the blood components are
reimbursed through DRGs, so it’s not there as a
separate billable item. So that doesn’t really work
for defining exposures. And then finally for other CBER products, where passive electronic reporting of product adverse events is required there are many, many reports
received, as Bob Ball mentioned, and there
are actually some AI processes in place to sort those reports by severity and kind of triage them
for medical officer review. So clearly, from many
aspects there’s a need to try to refine the data coming in and make it more accurate. So the BEST2 program goals are semi-automated data
extraction from EHR data to define CBER product exposures and then product related adverse events that meet the defined criteria. Then development and validation of descriptive case reports
reflecting adverse events. And then finally, electronic
submission of reports to FDA via the electronic gateway to either the FAERS drug or VAERS vaccine data systems. So the first work in BEST was a one-year pilot contract
awarded to IQVIA, working with the Georgia Tech Research Institute, or GTRI, and also with Columbia University, Stanford University and Regenstrief. And this ran one year, from
October ’17 to October of 2018. And the slides ahead,
which Rob referred to, cover this pilot program, and
I’ll go over those first. But I’ll also mention that
in the course of the talk I will also describe the current work which is being done under
five-year IDIQ contracts awarded to several firms but with a
one year task order awarded to IBM Watson Health which is the basis of the work I’m going to describe. So in the BEST2 Pilot, there
was a limited time period and a lot of work to do. So we actually ran five
parallel workstreams. The first, I think, is one
of the most demonstrative of what can be accomplished
by applying machine learning, specifically NLP to
some of these problems. And what we were able to do was look into improved sensitivity and granularity of transfusion exposures
compared to claims data alone. As Azadeh mentioned, we
recognized by this point that the ISBT128 labeling code was there, in the blood bank; we just
had to get to the data. And that pretty well defined when a transfusion exposure took place. But there’s a lot of information
that is not available through that code that
would be useful to know. For instance, the timing between a reaction and a transfusion. So, Columbia University, working
with Georgia Tech used NLP, applied this to transfusion nursing notes which are standard. Transfusions need to be
monitored in the U.S. and notes maintained during
the course of the transfusion. And using NLP we were able
to define the components from a different angle,
transfusion start and stop times, vital signs and transfusion reactions that occurred during the
course of the transfusion. These were assessed for
34,000 transfusion cases at Columbia University Medical Center. And manual chart review was done on 100 cases, just to look at them, which actually showed 100% accuracy. So this is, I think, really proof of concept that you can enlarge the available data for a key event, make use of it, and further define it in your surveillance system. The second workstream looked at a specific transfusion adverse event, known as
Transfusion-Associated Circulatory Overload or TACO. Stanford University took
the lead working on this, with its own medical center EHR data. And although it wasn’t completed, NLP-based queries were conducted and demonstrated approximately a 10% improvement in positive predictive value for identifying cases over just typical reporting. This work remains in progress; it’s not part of the current program, but we hope to be able to build on that foundation and continue it. And then similarly,
workstream three looked at post-transfusion sepsis. Sepsis is a very rare
event in transfusion, estimated based on the available data now to be about one in 100,000. And Regenstrief Institute worked on that and were able to identify
sepsis cases that were validated and again, more work needs to be done, but I think a pretty good foundation toward hopefully being
able to complete this in approximately six months
to a year of further work. Workstream four was really
related to NLP platforms because any sort of extensibility of this program would
require many of the processes to work in different EHR environments. And we found that working
with a common NLP platform, in this case CLARITY NLP which was developed at Georgia Tech, can support efficient
interactive NLP studies that saved about a month or
two in each of the processes. So, it ended up that
Georgia Tech was working with Columbia on a harmonized NLP platform and really made work quite efficient. Then finally, was to
develop infrastructure to support nationwide scale-up of computational phenotype, computable phenotype
based case identification and automated report generation, i.e., can we identify cases and then write them to an automated report, send them to FDA and demonstrate that
the case was received. And this was accomplished using data from the MIMIC test database: we took a published TACO case, put those characteristics
into the MIMIC database, pulled them back out,
wrote them to a report and sent them to FDA. So, the little diagram here shows a tool that was developed during that
part of the program called AESOP, which is now part of the OHDSI tool platform. AESOP is the center section there, which is able to query for data either from data coded to the OMOP Common Data Model, or from OMOP on FHIR, which actually looks at the FHIR database, or a FHIR server, which works independently of OMOP. Take that query, convert
it to a case report, regulate the whole process
through management APIs and then send that to the FHIR gateway. The group was successful in doing that, actually having the case received by VAERS and validated as meeting the criteria for an electronic report. So, I think importantly, this program showed proof of concept for many of the elements that
we were trying to accomplish. Future challenges identified by the pilot. One, would it be possible to
derive granular timestamps from all EHRs for events? Because the data should be
there somewhere in the EHR, it’s just a matter of
making sense out of it, can we establish time relationships. That would be very important. Then second is, iterative
computable phenotype development, where you’re going through an NLP process, then validation, then an NLP process again; working through a Common Data Model at each step is effective, but it could potentially be made more efficient. ‘Cause CBER regulates many products, each with unique AEs. So there’s potential
efficiency to be gained there. And then, there’s a need for collaboration with EHR vendors ’cause
each system will be unique, and clearly HL7 FHIR holds potential to support eventual scale-up. A larger group was involved in the pilot, and everybody did a really great job on it. So, in overview, the current work is to design and implement a scalable and interoperable active surveillance system using EHR data, and this is from MedStar Health, which is in the Baltimore-Washington area. To automate the detection,
validation and reporting of adverse events related to
CBER-regulated biologic products, and for this year specifically, blood components and vaccines. The mechanisms to be used are detection, using both unsupervised and
supervised machine learning of both structured and
unstructured data elements such as notes and lab reports
from EHR sources at MedStar. Validation based on semi-automated or automated chart review. And reporting through an ICSR, individual case safety report, mechanism to the FDA adverse event system. The overall scheme for
doing this is first, you’ll see on the left, current state is
preliminary data processing for MedStar goes through
a conversion process where a flat file from
MedStar EHR data is parsed into a FHIR encoded database and maintained on an
IBM central FHIR server. This sets up the conditions
where one can apply SMART on FHIR APIs to do the sorts
of work that I mentioned in the last slide, namely detection, chart review and reporting. In a future state, hopefully EHRs will have
FHIR encoding incorporated into the EHR and they
could potentially work from their own internal FHIR server and do the same thing
with EHR specific APIs. And then hopefully, once
these steps of detection, chart review and reporting are developed on developmental databases, this can be rolled out in modules to support other EHR
development of this work and approach a national
system in years to come. So what is SMART on FHIR API? It’s a standards-based
interoperable app platform for EHRs using FHIR. It extends the FHIR standard
to support the development of web and mobile apps that
can easily plug-and-play across clinical systems and it serves as a standardized platform
for provider networks to install and deploy applications on their EHRs in a standardized manner. SMART on FHIR APIs are actually part of the EHR data system, and they’re controlled by the EHR owner. Within BEST, the objectives will be to launch the BEST Automated
AE Reporting System as a SMART on FHIR app on
several EHR vendor app sites and this will significantly
reduce barriers to extension across healthcare
provider systems for both use and continued refinement
of the tools and updates. There is a chart review tool that has been under intensive development by the IBM group. This will enable semi-automated clinical assessment with an intuitive user interface, designed for physicians to use in reviewing candidate AE cases. This tool will serve as a mechanism for chart abstraction, allowing for simplified visualization of patient EHR information and facilitating clinical judgments. And the reviewers will then
document information related to cases of interest relevant to specific adverse event
pairings, including the exposure, the certainty of the adverse event, assessment of causality or imputability, and evidence for the conclusions,
which will inform part of the individual case reporting
and algorithm training. This is not something
you’ll be able to rate, it’s just a mock up of
the type of dashboard that a physician might use
in receiving an AE report, looking at other data,
supporting data that comes from the chart and making
a clinical judgment as to whether that’s a true report or not. So, for this program, as I said, it’s focused on blood transfusion and vaccine-related adverse events. There’s a small list of transfusion-related
adverse events here. Probably the program
will concentrate on two, maybe three of them,
such as allergic reactions and hypotensive transfusion reactions. And then vaccine-related adverse events, where detection might be facilitated by machine learning applications applied to some rare but
recognized adverse events such as febrile seizures,
allergic reactions, Bell’s palsy and intussusception. So, to just capture the summary
of the technical approach, the first step was infrastructure building which has been going on up to now. Which provides access to
standardized EHR data, efficient chart review tools, development of feature extractors for structured and unstructured EHR data, and then to identify the biologic-adverse event pairings of interest. The second major step, which has begun and is just getting underway in the second part of the program, is to build up machine learning algorithms. The first is foundational, where you actually take
the case descriptors for each of the adverse
events and apply those to queries of the database
and that’s actually combined with unsupervised learning
approaches, looking for data and at data relationships that might not have been
recognized previously, which forms the foundation for future supervised machine learning. And then the second step
there is really the major one where it’s iterative, step-by-step supervised
learning approaches which are based on models that are defined from prior iterations. Then deployment and maintenance, deployment’s rolling it
out to other systems. Obviously it’s very important to maintain the validation steps as well, with ongoing maintenance and validation throughout the program. And then automated and
semi-automated reporting. Automated reporting would only occur where the index of confidence is very high in a given result. Otherwise, it would
likely be semi-automated where it would receive
a physician adjudication as to whether it’s reportable or not. So, also some additional acknowledgements, as well as an acknowledgement
that EHR systems are many and complex, and not everyone
out there is gonna be as well characterized as
the development systems that have been worked
with on this program. So, certainly there
will be some challenges as this moves forward and
we don’t mean to imply that it will simply roll out to other EHRs without additional complexity,
which it certainly will have. Thank you. (audience applauding) – Great, thanks, Allen. So now we’ll turn to Steve. – All right, so let’s see how we do this. All right, so I’m just gonna
start with a disclaimer, ’cause I’m gonna be talking
about the CBER BEST roadmap which is still our roadmap under
construction, imagine that. So, it’s still in the works and it’s going through clearance and so, but what I can do and
what I’m gonna do is just give the overall conceptual plan, but obviously it may be subject to modification just given that
it’s still under clearance. The roadmap is our sort of
strategic plan counterpart; it provides direction for BEST for the coming five years. This is an overview of the BEST roadmap document, showing several of the components, and I just want to draw
your attention, I think, down to things like the scope which is number three and number five. We have five major priority areas and I’ll talk through
those in a bit more detail and then skim through
some of these sections. I’m not gonna represent and
talk about each section, just given that I have 20 minutes. So, the vision here is that
BEST is a preeminent resource for evaluating biologic product
safety and effectiveness. It leverages high-quality data, analytics and innovation that you’ve heard about today, to enhance our surveillance programs and real-world evidence generation, with clinical practice that benefits patients. The scope of the roadmap, okay, so the scope spans the next five years and is linked really
to our current contract which runs five years. The focus in this discussion
is gonna be on BEST with limited coverage
of our other programs including the CBER Sentinel
program or Sentinel Initiative. I’m gonna emphasize the
five major priority areas, ranging from expanding infrastructure all the way down to communication. Those five priority areas are here. I’m gonna go through them one by one, as quickly as possible. For some of them I’m gonna
provide a fair amount of detail, but I’m largely just
gonna provide highlights, just given the shortness
of time for the talk. So, priority one is really similar to what you saw for CBER’s strategic plan, which is really to grow and
enhance our BEST infrastructure, and that’s adding new data sources for millions of patients with an emphasis on really high-quality, regulatory-grade data. As Azadeh mentioned, we want to emphasize on-demand analytics for use in our analyses. We need to improve methods for key areas, and Azadeh touched on a few of those and I’ll touch on those in a moment again. And then we want improved access for FDA staff, so we’ve built portals for hands-on access to de-identified data, as I mentioned in my earlier comments, so that we can run feasibility analyses to better design larger studies that can be run in the system. You’ve seen most of this slide before. I’m highlighting the parts in red that are really the EHR-based data sources, and I want to just run
down to the bottom, to IBM. We also had that MedStar data, which covers five million persons, and then just above that is
that linked EHR in claims data which we believe is kind of the future for this type of program. But again, there’s also
claims data in there too, ’cause we don’t, we still need claims data for particular things. So we’re not abandoning that, we’re just putting more
emphasis on the EHR sources. All right, so we’re talking
about our priorities for improving the methods for some of these high-priority areas. One you heard about from Azadeh, which is evaluating the safety of vaccinations during pregnancy. That’s a really important issue, and it’s an important issue in the
international setting too. So, we’re continuing our
work on that with linkage up to the GAIA case definitions. So that’s, again, ongoing
work that’s been starting. We’re really excited about
that, ’cause we’ve got, I think, a really great group of experts and some good data by which to kind of start developing that method into fuller use. Signal detection: the use of NLP and artificial intelligence. I will just say that CBER
is probably not going to be using TreeScan to a large extent. What we’re focusing on is the use of NLP and artificial intelligence. And just by the nature of the work,
we’re using very simple types of signal detection because we’re looking at known adverse events in the
first two years of this work. So it’s a very simple model and I’ll talk more about that in a moment. Pandemic preparedness is
another important area and emerging infectious diseases as well. Number two is really
leveraging the EHR data using those innovative technologies,
again, AI, NLP and others. And then, the semi-automated
chart review work was really covered, I think, in
part by Azadeh and Allen, so I’m not gonna talk about that. The automation of the adverse
event reporting was talked about by Allen, so I’m not
going to cover that as well. So that’s two out of the five. (laughing) I’m standing between
you and lunch as well, so I’m well aware of that. So number three is really to support and advance real-world evidence,
to improve patient health. So in the real-world evidence generation and evaluation area, we actually have effectiveness work that we’ve been doing. We also want to talk about
BEST as an accessible resource and then patient engagement and input. I’m going to skip and I’m
gonna talk very briefly about the effectiveness work we’ve done. We’ve been doing vaccine
effectiveness work in CMS data for the past six or seven years. Again, as I mentioned,
that work’s been led by Hector Izurieta in my
office and Dr. Rich Forshee. They’ve done some great
work and here’s one study that is a comparison between high-dose and standard-dose influenza vaccine and those results from the
database align quite well with the sponsor study that was done. The sponsor study was 30,000 patients; the CMS study was a million in each arm. So you can see the power
of using these databases by having access to these
types of data sources. I wanted to highlight one success for the effectiveness work. It’s a study that was done for Merck’s Zostavax
vaccine for herpes zoster. It’s a prospective observational study, it was done by Kaiser Permanente. And information from that study on effectiveness was actually
incorporated into the label. So that’s one success where, real-world evidence
ended up in the labeling. So one regulatory action taken because of real-world evidence. All right, we need to talk about BEST as an accessible resource and
real-world evidence generation for clinical trials and
post-market studies. We’re trying to build
larger and larger networks of EHR and EHR-claims linked systems. The goal being, we want a system that eventually will be
accessible by stakeholders where they can use those
to run clinical trials or enhance their clinical
trials but also have the ability to conduct post-market studies as well. Again, we sort of have the mantra too, where we’re trying to build these systems but it’s sort of the better,
faster, cheaper approach, with the eye on improving efficiencies that really benefit patients. All right, so patient
engagement and input. I’m not gonna say much on
this ’cause Dr. Telba Irony’s gonna speak in this afternoon’s session. She’s my deputy in the
Office of Biostatistics and she’s gonna talk a bit about selection of patient cohorts and mobile platforms. All right, so that’s three down. We’re on number four. So building and engaging
our community of users, collaborators, partners, stakeholders for the BEST ecosystem
and other priorities. So building the community
and then accessing use of BEST is really a
critical function for us. We’re building the community
so we have a small community right now, of CBER scientists
and our contractors’ scientists, who have been working
collaboratively along with some of our academic partners over the past few years on BEST. But I will say, we’re trying
to build these capacities and build closer ties with groups like the OHDSI organization,
it’s a research community, so we’re in discussions with
them to kind of figure out how we can work more closely together. I think the one thing
that you have to realize between the two organizations is that, FDA’s regulatory and research and OHDSI’s really a research community. We’re dealing with any research community, there’s always gonna be
tension between those two, because we may have higher requirements for quality of data, standards, methods, et cetera, in
the regulatory setting, that may not quite match up with those in the research setting. So there’s always gonna be that tension. But we kind of need to work through that and find solutions to how
we work better together. So, again, it’s not only
building relationships with OHDSI and the research community, but building relationships with the professional societies too, and I listed several there as well. The next is really access and use of BEST. So, it’s sort of the same
thing that CDER had mentioned, I think Bob Ball in his
talk and Michael as well, users can work collaboratively
through the system, either through the Reagan-Udall Foundation or contact BEST contractors directly. The one thing is, I’ll go
back to the top of the slide, which is really that we’ll await the stand-up of the BEST production system. So we’re in a pilot phase, so in the next, probably, year or two, we’re
expecting to go to production, the production phase where we’re able to conduct query studies and surveillance and then at that point, then start to make it available to others. All right, we’re on number five. Phew. So, number five is really
enhancing transparency, communication and then doing outreach. So, transparency and communication, it’s really doing the things
that we’re doing a fair amount of now, which is posting code, protocols, and the results of analyses and studies. For BEST we need to do that as we sort of, again, move from that pilot phase into the fuller production phase. And so we’ll post things
on the CBER website. Publications, conduct public workshops, so we think that that’s an
important thing to be doing, so we’ll probably be
doing those on an annual or a semi-annual basis. And then training and outreach. It’s not only training and
outreach though of our staff at FDA, it’s training once we
get the system up and running, the collaborators, users
and stakeholders as well, to make it again, more accessible
in how to use the system. All right, so, I think I
got through five, but I also wanted to talk about
operations and other activities. So, Bob Ball had mentioned the
Sentinel sufficiency testing, and then also people have mentioned PDUFA VI and CBER Sentinel. So I wanted to talk a
bit about our process which does mirror the
CDER process quite well. So, as you can see in point
two, CDER has the ARIA process, but we use a broader process
which is using CBER Sentinel which includes BEST, the Sentinel System, CMS and any other data systems
that we have available to us. And then, just the requirements, in order to think about
doing a PMR which is, if there’s a known serious
risk, if there’s a signal or if there’s an unexpected serious risk that could potentially trigger a PMR, or post-market requirement. And then as others
mentioned, there are reasons for insufficiency in our data systems. You might not be able to
identify populations of interest and so on and so forth,
or the outcome, et cetera. So we’ve instituted and
implemented the process for CBER sufficiency within
our regulatory process. We’re updating our SOPPs and then we’re updating our
review memos, templates, which is for the Division of Epidemiology when they do their review
of Pharmacovigilance Plans. And then we have a separate
memo that we’re doing, which will be done by
the CBER Sentinel team to assess sufficiency. PDUFA VI, so a goal of ours is obviously to integrate Sentinel, greater into the regulatory processes. We also are tracking
uses of Sentinel, again, how it’s being used; I’m not gonna go into great detail on that. I will say though, in the last six months we’ve probably had,
at least counting back from my memory, at least
three post-market studies that were done in lieu of
sponsor PMRs, so again, showing that there is value to
having this system in place. Going to the summary; so in summary, this is probably the take-home for the entire talk. So if you drifted off, wake up. (hand rapping) (people laughing) So, our short-term goals
really are focusing on the things that Allen, I’m sorry, Azadeh and Allen had both talked about. It’s really kind of a technical
part of what we’re doing. And we as scientists have a tendency to focus on the technical, right? ‘Cause we like those things. But we do have other goals as well, so it’s the longer term goals in the next three to five years, we want to build this
large network, again. We want to support real-world
evidence generation and get that core production system. And then leverage NLP and AI again. Our goal is again,
bigger, better and more, basically is the message for
the next three to five years. The long-term goals over the, as well, are we’ve got to do those things like building the community of
BEST users and stakeholders. Building more high-quality partnerships, facilitating use of the system. We’ve also got to identify
barriers and reasons why people aren’t accessing
the system as well. So we would do that through
the public workshops and try to surface some of
those reasons and issues. And then try to find solutions to that. And then again, transparency,
communication and outreach. We have a very small presence
for BEST on the internet and we’re gonna be fixing that
within the next few months. So you’ll see more of a web presence for BEST in the coming months. And then obviously, training FDA staff and then external users, once again. I wanted to acknowledge
the many people involved in the project, our OHDSI collaborators, I dropped off the academic groups, but we’re very thankful to those as well as the data providers
and our data partners. So, again, thank you, and I’ll stop there. (audience applauding) – Thanks. Thanks, Steve and to
all of the presenters. We do have a few minutes left. I’m gonna ask one question and
then turn it to the audience, and if you permit, I’m gonna go off of the planned talking
points and based on something that you said in your talk, Steve, and it was around the use of
the data and infrastructure that you’re developing for effectiveness. And, linking back to what Dr.
Woodcock said this morning, she mentioned the FDA is
also, is building a program on how to use real-world evidence
for regulatory decisions. That group published a
framework on how it’s thinking about how real-world
evidence might be used in regulatory decisions. And one of the key areas of focus in that framework was data quality. Real-world data quality,
curation best practices, how we know that the data that are collected are of high quality. And it strikes me, as
you all have built BEST, based on the need to go outside of the more traditional
sources of real-world data, to get better data for
blood and blood products and associated adverse events. As you have dug into that data, what are you learning about
the quality of the data, the need for additional or new practices in data curation that might
actually be informative for other aspects of FDA as it considers using
real-world data more broadly. – So, I’m gonna turn that
over to Azadeh to answer since she’s the data person for the group. – So our work with a
large distributed network of EHR data providers has
shown that the community needs to come together and perhaps
develop certain standards. Because currently, different groups, based on their experiences or
the type of data they have, and whether they are using a data model or not, have set certain internal
standards for themselves. But I think this is a much larger issue that needs the collaboration
amongst many different groups to build a certain basic and
higher level of standards. – Yeah, maybe it goes to, I
think, one of the slides, Allen, that you indicated was
sort of collaborating more with EHR vendors to
sort of help in scaling up and it seems as you start,
as you continue your projects and some of the ones you’ve illustrated, using these new data
sources in different ways, you might even identify, oh, you know, there are data quality problems here and that might inform the community that you’ve assembled to
start new ways of looking at the data in terms of
quality and curation. So any additional comments. – So let me just say that. I sort of serve as the
real-world evidence lead for the office and the center and I just will remind
people that the framework document was one of the first documents to go out. I think there’s a half dozen more slated to go out, and one of those revolves around data sources, and quality is embedded in those. So, we’re not just relying on the public
about data quality, we’re also gonna provide some
guidance in those areas too. So I think that’s important to emphasize. – Yeah, and hopefully the
learnings here can help inform some of that, great. So, I’ll turn to the audience, go ahead. – [Rosalie] Hi, my name is Rosalie Bright and I work in the Office
of the Commissioner at FDA. And I have a question about
why the decision has been made to try to take adverse events
that have been detected in the EHRs and turn them
into adverse event reports to submit into the FDA’s
adverse event report system. Is it a management
decision, a policy decision, is there some science ideas behind it? I’m just curious to hear the thinking behind the idea of doing that. – So, when we thought about that, so, Allen may have ideas about that too, the idea was how do we get these reports. So, you’re right, we
don’t necessarily have to have that submitted as a report to FDA. We thought about having it
submitted as a table to us. But, what we wanted was to
have these data in perpetuity. So we thought the best way
to do that is have it filed within our adverse
event reporting systems. So that’s really our
underlying thinking in that. And maybe Allen or others
have thoughts about that. – If you are asking specifically
about one type of product, like blood transfusion, or about the whole realm: for current electronically submitted reports for non-transfusion biologics and drugs, I think it would benefit everyone to have these reports made not only more granular but more accurate. And because it is required reporting, I think developing a capable
system would benefit everyone as long as it is required reporting. In terms of transfusion, I
think it could produce data that’s provided to other
entities, either academics or professional organizations who could make use of the data. The FDA does have a draft
rule that was issued in 2003 for required reporting for transfusion. Who knows if that will be implemented, but if reporting ever
does become required, this hopefully will
provide a more accurate and least burdensome way to
do it without any sort of, you know, thinking about whether that’s gonna happen in the future or not. – [Gregory] Go ahead. – [Elizabeth] Thanks a lot. Elizabeth Firstenburg, AstraZeneca. So, Allen, this touches
exactly on that issue of blood transfusion as
the example I’m gonna use. So we’ve heard a great
deal about detection and maybe it’s early days, but not quite so much about validation. So for a public user,
and I’ll use an example from blood transfusion, you quoted figures on the number of
transfusion-related sepsis. So how would an outside user understand how those data are validated?
was not concomitant sepsis, or perhaps transfusion
related immune reaction, how is that actually done? And that’s not meant as a criticism, but more of a way for
us to say, at this stage in the development of these
very technological systems, how can a user differentiate or figure out just how big the grain of salt needs to be when we look at and analyze this data. – Perhaps inadvertently, you asked one that has a relatively straightforward answer, because for post-transfusion
sepsis, really the way to definitively call it
post-transfusion sepsis is to culture the same organism
out of the residual bag as out of the patient. So if you see that,
that’s, probability-wise, really a pretty good likelihood that the transfusion caused the sepsis. Beyond that, at one in 100,000 it’s a difficult diagnosis to make in light of all sorts of nosocomial infections going on. That one in 100,000 could be less if we get the systems to better define it. So, I think things like
that would help a physician really pin down truth. – [Elizabeth] So I chose
the wrong example obviously, oh well.
(people laughing) My question really is, a little
bit more general than that. Which is, this is difficult stuff, so how does an outside user look at this and get a sense for how far along we are in validating the signals
that we’re detecting and is that a question of quality or is that a different
question from validation? – I’m not sure entirely
how to answer that. I mean, one additional
challenge is the diagnoses for some of these things, like ones where there are respiratory implications, like TACO and TRALI. There’s been a lot of
work done for decades to define how to provide case
definitions, and even then, when you try to apply something like natural language processing, you find a lot of vagaries in what constitutes an
infiltrate on a radiologic exam. So, I think that’s got to evolve, we’re hoping that perhaps some
of the AI might actually help to fine tune on some of
these case definitions and provide a basis for validity, but in a certain sense there’s gonna remain some vagary until that happens. – Yeah, and I just want
to sort of add onto this, is that, a lot of the validation that typically gets done goes
back to the medical charts or the electronic medical record which you’re using in your data anyway. And so, in the example of sepsis, you’d be able to validate
that if it were documented in the chart, I presume,
that what was cultured from the bag was the same
thing as what was cultured from the patient, and so you’d
be looking with your tools, you’d be looking in your data source for those sort of confirmatory results or procedures that are sort
of associated with that. – But we wouldn’t just
be relying on the AI. So the AI will learn, obviously, and it will have a certain amount of accuracy. But we wouldn't just leave it alone at that. We wouldn't let it submit entirely on its own; it'll submit into systems, let's say, but then we would go back in and probably do samples, and check on and validate those results and make sure that it's maintaining the level of validity that we deem appropriate. And presumably that's quite high. – [Gregory] Okay, time
for one more question. – [Margie] I’m also interested
in outcome validation, Margie Goulding, CDER, Office of Surveillance and Epidemiology. Could you talk maybe about
spontaneous abortion, miscarriage and how you
validate that in the EHR? – So, we are using the
GAIA case definitions, and there are specific data elements and components and parameters, with different levels of certainty, that the GAIA definition provides. So we will be taking those data elements that would be in the EHR portion of the CED, and the cases that have been selected using the ICD-10 and procedure codes, and we would be comparing those cases with the GAIA definitions that the OB/GYN clinician adjudicators will be using, to see whether they are real cases or not. But the standard method we are using is the GAIA case definitions. – Okay.
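The selection-and-grading flow described here, pulling candidate cases by diagnosis code and then checking each against tiered criteria before the OB/GYN adjudicators see them, might be sketched roughly as below. The field names, the ICD-10 code set, and the criteria are illustrative placeholders, not the actual GAIA case definitions:

```python
# Hypothetical sketch of tiered case classification ahead of clinician
# adjudication. The code set and criteria below are placeholders, not
# the real GAIA definitions.

CANDIDATE_ICD10 = {"O03.9"}  # illustrative code for spontaneous abortion

def certainty_level(case):
    """Return an illustrative certainty level (1 = most certain) for one
    candidate case, or None if the case is not a candidate at all.

    `case` is a dict of data elements pulled from the EHR portion of the
    linked data set, e.g. {"icd10": "O03.9", "ultrasound_confirmed": True}.
    """
    if case.get("icd10") not in CANDIDATE_ICD10:
        return None           # never selected, never reaches adjudication
    if case.get("ultrasound_confirmed"):
        return 1              # imaging-confirmed: highest certainty
    if case.get("lab_confirmed"):
        return 2              # laboratory evidence only
    return 3                  # diagnosis code alone: lowest certainty

# Every selected case still goes to the OB/GYN adjudicators; the level
# just summarizes how much corroborating evidence was found for each.
queue = [
    {"icd10": "O03.9", "ultrasound_confirmed": True},
    {"icd10": "O03.9", "lab_confirmed": True},
    {"icd10": "O03.9"},
]
print([certainty_level(c) for c in queue])  # prints [1, 2, 3]
```

In the real workflow the levels would come from the published GAIA definitions, and the adjudicators' judgment, not the automated grade, is the final word on whether a case is real.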
– Starting in ICD-9? – Starting with ICD-10.
– ICD-10. – Starting with ICD-10
and procedure codes. – Yeah, okay. Great so that brings us
to the end of the session. I'd like to thank the panelists for a wonderful presentation that provided a lot of information about what's happening with BEST. As I said before, lunch will be on your own. There is a list of restaurants in the area; see Sarah Saprizi at our registration desk. Also, you can use the mobile app. We'll reconvene exactly at 1:30 to begin our next session. Thank you. (audience applauding) (people chattering) – Okay, let's go ahead and get started
for our afternoon sessions. If I could ask you to
start taking your seats. (people chattering) Okay, well, this morning we
heard a lot about and from FDA about how FDA is leveraging
the Sentinel Initiative for post-market safety
surveillance activities. This afternoon we’re gonna
change gears a little bit and hear, from a broader frame, a little bit more about how stakeholders outside of the FDA are building and utilizing this suite of tools and data resources. For this next session, we'll
hear from representatives from the Sentinel Initiative
coordinating centers who will discuss their
experiences developing and using these tools
and Common Data Models for safety surveillance purposes. I'd like to introduce our first set of speakers, which will include Christian Reich, vice president of Real World Analytic Solutions at IQVIA; Thomas MaCurdy, senior
research director at Acumen. And Timothy Stitely, vice president and partner at Federal Healthcare for IBM Global Business
Services Public Service. After that set of presentations,
we’ll pause briefly for some clarifying
questions from the group and then we’ll turn things
over to Richard Platt who’s a professor and chair of the Department of Population Medicine at Harvard Medical School
and executive director of the Harvard Pilgrim
Healthcare Institute. So, with that, I’ll turn
things right over to Christian. And I think we’re gonna go
ahead and speak from the table. So Christian, go ahead. We'll start with you. – We're starting with me. That's very good. I was intending to refer to what Rich would have said before me, but now I can't do that. So, I won't steal your thunder, Rich. So thank you very much for having us here. I represent IQVIA, the company formerly known as Quintiles IMS, if you haven't heard that. Which formerly was two companies, Quintiles and IMS, and since we merged we have the pleasure of having to explain these naming changes until really the whole market has finally noticed that it's still the same team. So we are running the research network for the BEST program. You've heard that before lunch. Azadeh and Steve and Allen gave a full, deep introduction of how that works, what the goals are, how far along we are, and
what has been achieved. So I'm not gonna repeat any of this. What I just wanted to give is a few impressions and some perspective from our side: what it is, how it works, how we see it evolving, and how we see it in the context of the other initiatives. So for the research network that is utilized for BEST, the question is, what about that one? There is already a network: the Sentinel Initiative, Mini-Sentinel, then Sentinel. You know, what's going on? How many of these networks can exist? And why do we need all these different networks? So generally, just to make that clear, it's very similar, okay. It is a distributed network, which means the data are distributed and stay with the data partners. They are not pooled and stored at the FDA. So for somebody who's in the clinical research arena, it's a little different, okay. It's very large. There are many millions of patients covered, and by law it has to be at least 100 million. All of these networks work in a privacy-preserving sense. That's also very important. There are standardized analytics, which means you keep doing things in a way that you can generate efficiency; you can kind of industrialize research by building tools and
characterizing these tools. And there is a community
and there are mechanisms to reach out to the community. So all these things we have
as well and we develop them and sometimes things take a
little time, but in general, that is the case; that's how these networks work. The difference in this particular case results from the needs that BEST has. You also heard this morning that there are not a lot of prescription drugs being studied. Which means that the established networks, based mostly on claims data, will not be able to answer the questions at all, or in a timely fashion, the way these networks work. So, our network is based on the OHDSI initiative, which is an open-source, open collaborative for doing large-scale standardized observational research. It contains a strong
focus on clinical data from patients, electronic
health records data. There is a large portion
of pre-adjudicated claims. Pre-adjudicated claims means
that it’s not the claims that the insurance company paid, it’s the claims that the
providers submitted to get paid. The advantage is you get ahead: you are not limited to one insurance provider or network, and you get the information a lot faster, before it goes through the whole adjudication process. Which is important for
some of the questions, for example, just think
about seasonal vaccines. We really don’t have a lot of time to understand the effects
of these products. And then the other thing is of course, if you have direct access
to the primary producers of the data, the institutions,
the provider institutions, you have an influence
over what is the content. So in this particular case, for example, we needed additional information from the blood bank
databases, coded in ISBT codes and we’re able to fairly
easily integrate that. So again, bottom line,
similar idea of a network, and I can't thank the agency enough for establishing the idea of
distributed data networks. That’s something that really
happened on a large scale and in full visibility
of the community due to the Sentinel program. And there are people in
the world who don’t believe that government can pull
something like that off. And they have. And it’s extremely, it’s very impressive and it’s an example for
the entire world, actually. One last word. We are also representing here the opinion, or the view, of the actual institutions, who are not invited right now to this. That was new to me; I wouldn't have known how they would respond to the idea of using a network, and utilizing it for doing safety and effectiveness research for the agency. It turns out, for research institutions, this is a very attractive proposition, okay. They love this. They feel they are contributing to an important cause. And it's very encouraging, and they are also impressed by the rigor with which this kind of research is conducted. I don't want to make any comments about the rigor that they usually apply, but that is certainly something that is mentioned, that is appreciated. And I think this is
also good for the future because we will need a
lot more in order to get to the depth that we need to answer all the various regulatory
questions that are gonna come up. And I think it's gonna be easier than maybe we were
thinking at the beginning. So, that’s all I have so far, thank you. – Thank you, let me just note that Acumen has really a
longstanding relationship with FDA. We began our work with FDA in
2008, supporting a broad range of their surveillance and
investigative activities. First of all, most of
this work is joint work with CMS and the FDA. So it’s a contract that
involves both of ’em. The main sources of
data that have been used for this earlier
collaboration are the universe of the Medicare and Medicaid data, which I’ll describe in
more detail a little later. And really, the analysis has consisted of three principal areas of work and studies. One is real-time analysis. Our group is the one that has been doing annual surveillance on the influenza vaccine since 2010, and I'll describe that a little later as well; mostly for GBS, but other issues as well. We also do very rapid response initial exploratory
investigations for the FDA, to the extent any kind
of safety signals arise. And then we support the FDA
in producing peer-reviewed scientific epidemiologic
studies, and since about 2010, Acumen has jointly authored about 50 of those studies with the FDA staff. With the advent of the BEST project, Acumen has expanded its
support to CBER in two ways. First of all we’ve added the
Blue Cross Blue Shield data. This is through Blue Health Intelligence and I’ll talk about that a little later. And then we’ve added capacities to support external research groups. As an overview of Acumen’s participation in the CBER Sentinel activities there’s really four areas that I want to quickly mention. One is I want to give you an
overview of our data resources. The second area I want to talk about is the
analytical infrastructure that Acumen has built up to support FDA in its various forms of studies. The third area I want to
mention is our capacities for performing medical record reviews. And our fourth area is, I want
to talk about our capacities for expanding to study
partners and sharing results. So let me start with the first area, the overview of data resources. The data are maintained
in a unified data system. We don't use a distributed data system. We have researchers within Acumen that know each one of these data sources pretty intricately. So, we don't really require a Common Data Model. If you take things like Medicaid, Common Data Models are problematic in any case. For the most part, you really have to understand the payment rules and the intricacies of the various sorts of programs. So when we have studies done, we have analytical teams that know the data sets, and they'll pull the data sets and do the analysis according to whichever data source FDA would prefer. Our data sources probably cover 150
million people annually. If you were to go over
the whole history it’s, I have no idea what it is, but I’m sure it’s above 300 million
because we have 100% of the Medicare data going back to 1991. We have the Medicare Advantage
data complete since 2013. We have the pharmacy
data, complete since 2006. Medicare data covers about 60 million beneficiaries per year. Our data are updated daily. The data that we pick up from Medicare is from the shared systems data, so it's the same data source that the MACs use. So we also get data before it's adjudicated: when it's first submitted, then there's a process where it's adjudicated and it goes into the IDR, and then eventually it will go into the CCW, which are various sources. So once again, we work with adjudicated data, but we also have the shared system data, which is the MAC data. So our data is updated on the fee-for-service side daily. Our encounter data is
either weekly or monthly, depending on the source. We also have the universe
of the Medicaid data. That data's updated monthly. We get downloads of that. That data system has changed. I don't know if you're aware of this, but it moved from a system
called MSIS to T-MSIS, and there have been some challenges with the switchover to T-MSIS data. And so, people are still
working on the reliability of that data and the
states are still getting that data kind of straightened out. For the most part, our
updates are monthly. Acumen also gets 100%
of the assessment data, which is the OASIS data and the MDS data. So those are detailed
assessments on individuals who are in nursing home and home health, irrespective of whether
on Medicare or Medicaid. So it’s anybody in any
of those, any payer. The patients are picked
up in those data sets and we have those monthly and all of our data systems are linked. Once again, with the
advent of the BEST data, we have Blue Health Intelligence data. It covers about 65 million patients over the past three years. In this first round, we basically acquire data for everyone who's received a CBER-regulated product. So there are about 24 million patients right now in the data systems that we have. What happens is, once these patients receive any kind of CBER-regulated product, we are able to pick up three years of past data for them, and then we get monthly updates for them. In this data system, we get claims starting a little less than eight weeks
after they’re initiated and the data’s fully completed
after about 16 weeks, based on service dates. So that data comes to us fairly rapidly. Also, to the extent that CBER was interested in doing some kind of control study, we can obviously reach out to Blue Health Intelligence to be able to pick a control cohort of some type if we wanted to do that. Let me just note that part of Acumen's data center
is an official CMS data center. So as a consequence we have
extremely rigorous processes for quality control. We're actually the group that does a lot of the quality control for the main CMS data center, the IDR. So we understand the data quality sorts of issues really, really well, and we apply those sorts of checks on the Medicare data, the Medicaid data and the Blues data; on the Blues data we're still working out all the quality issues there. The second area I wanted to mention was the analytical
capacities of Acumen. Acumen’s data systems run
about 2,000 queries a day. So for us to run a query in a day is not a big deal. The query can be fairly complicated. We do a lot of this kind of support for CMS directly anyway. Our query systems have built-in programs that automatically create report summaries and a variety of relevant statistics. They can pull and extract without any real problem. And we have staff with considerable expertise in the design and analysis of diverse forms of studies. So it can be a quick turnaround study or it can be a more
sophisticated study as well. These capacities essentially provide for three primary kinds of products or services for CBER in this area. The first is rapid response
initial investigations. Like I said, we can perform
real-time surveillance. If we’re given a safety
signal sort of issue, this happens in CMS all the time. We can basically put a study
into their hands in two days, with the most recent
data and all the data. Tracking at any particular point in time for any particular
subset of the population and we can easily cover
several thousand outcomes if that's what's really needed. So, that's one of the main capacities that we have: we're very fast on rapid response with the Medicare data. The Medicaid data, we can do
the analysis fairly quickly, but the data is much slower
in terms of coming in. So it would have to be something more at a three to four month sort of period. We also are, to the extent that
we do those initial studies, provide delivery of timely follow-up studies to do something a little
deeper to find out if in fact the signal was real. And then we also have,
as I mentioned earlier, the provision of more scientific epidemiologic studies that are more long-term, that we work on with FDA staff, which is all really kind of joint work. The third area is our capacities for performing medical record reviews. Acumen does have the capacity
to do medical record reviews for purposes of either validating claims or confirming the accuracy of claims. We can do these, the way we do the medical record reviews is, it’s literally by name of patient, sent out to the provider that served them. And this is done under FDA
public health authority. I’ll stop. I’m behind, so let me go
quick on the rest of the part. And let me just note that we can do this fairly inexpensively. It’s about $25 per record. We get about 85% of the
records that we send out within two months, with the remainder trickling in after, so it's rapid in that regard. Then the last part, the fourth area: the capacities that we're building for expanding to BEST partners and sharing. We're substantially
expanding our capacities to be able to work with
partner organizations, and it can be done in two ways. One is by working with and
directing Acumen staff, the way FDA does. And then we can deliver queries and analysis through secured portals. The other way is we’re
also, we have a capacity to open up secure remote-access enclaves that would be able to use the data. There would need to be DUAs in place and all that for it to be appropriate, but our own researchers work with enclaves, so it's easy for us to open those up. Sorry for going over. – Yeah, great, no problem, Tom. So now, Tim Stitely will
round out the presentations from the coordinating centers
on the CBER side of things. Go ahead, Tim. – Thanks, Greg. So, I’m here representing IBM, the global business services piece. You’ll hear as I go through this and I’ll try to keep it
quick, but efficient. And, we’re actually involved
in two separate contracts, one is around the active
surveillance system and the other is around creating the adverse
event detection, review and reporting tool to
automate that piece of it. So on contract one, the first one,
around active surveillance, I want to hit on three main areas. One is data diversity,
then speed and flexibility, followed by the dissemination
of the work itself. So around the data diversity
piece of it, again, the main push here is to make
sure that CBER has access to larger populations and richer data. To bring that to life,
you heard this morning that we are leveraging the IBM CED, which is, again, a linked claims-EMR asset. Our deterministically linked claims-EMR data, better known as CED, is unique to BEST as a data asset. It provides the combined value of longitudinality from claims and granularity from EMRs. For the Common Data Model, CED uses a robust data model with two main components. One is MarketScan, which
you also heard this morning, the other is Explorys. And MarketScan is the claims,
Explorys is from the EMR. And it allows the aggregation
across these diverse platforms from a payer-provider network. Another advantage is the ad hoc programming that works within the system, which uses the CED Common Data Model. And we have the flexibility to convert the CED to a CDM such as FHIR or OMOP, which will allow us to link
to other sources as well. Going one step lower into
the MarketScan piece. This claims database has 120
million patient lives in it. It's got a proven track record with thousands of citations across numerous epidemiological studies. The size of this database and its representativeness
makes it a robust asset for epidemiologic studies,
for post-market surveillance. And then lastly on MarketScan
is the mother-infant linkage. It provides the ability to
link the mothers and infants which facilitates the possible studies of birth outcomes associated with biologic exposures during pregnancy. The last piece on the
diversity of data is quality, I think we all talked about that. It’s something that
starts from the beginning and goes to the end. It has to be there throughout the process and within the CED and
MarketScan it’s all part of the data curation process. Speed and flexibility
is the next piece I want to talk about in the active
surveillance side. First, on lag: EMR data partners that feed into the CED provide access to structured EMR data with near-zero data lag. There are weekly and direct
feeds into Explorys. Also, experts on demand,
the analytical, clinical and epidemiologic capabilities are available on demand, so there's no need for tools. This allows ad hoc programming upon request, and the flexibility to run queries and epidemiological
studies relatively quickly. And then the closeness of data, in an effort to provide FDA
the ability to be close, we have created a portal for CBER staff to have access to the IBM databases. This allows for quick and easy
creation of sample cohorts for potential studies and to
better understand the kinds of studies that are
feasible with our data. And then dissemination,
on the dissemination piece to the larger scientific communities, CBER’s goal is to make the
surveillance system available to larger scientific
communities after it’s built. And we’re committed to these efforts by making our protocols,
methodology, code sets and resulting reports
available to the public. And then, moving on to
contract two which is more around the building of the automated adverse
events capability. Again, three areas of focus here. It’s diversity and richness of data, innovation and speed and flexibility. In the diversity and
richness area, notes and NLP: the access to unstructured components of the EMR provides immensely
valuable information that may not otherwise be available in the structured information. We’re partnering with MedStar
Health Research Institute to harness the information
contained within clinical notes to be able to create computable phenotypes for cohort identification
and to more accurately detect adverse events
related to biologics. We’re also utilizing
natural language processing and other methods in support
of our surveillance activities under the previous contract I mentioned. And we're leveraging feature
abstraction capabilities both on structured and
unstructured information that facilitates creation
of computable phenotypes in human readable format
that can be used in queries and epidemiological
studies in contract one. Again, on quality: data quality
assessment plans are in place for all of our data sources from the ground up while ensuring that our data partners
follow the best practices for EMR use. Around the innovation piece,
it’s around detection. Building the capability
to detect adverse events of interest using supervised learning. Then validation: building the capability to perform semi-automated or automated chart review for validation. The goal is to provide clinical reviewers with a friendly, intuitive UI to validate exposures and adverse reactions, collect evidence easily, and submit adverse event reports in the ICSR format. Then there's reporting: the capability to perform semi-automated or automated ICSR population and reporting to adverse event systems. And lastly is interoperability. The future of the distributed automated
adverse event reporting system is built upon the foundation
of interoperability. We are building the system
with interoperability in mind by making the system containerized and FHIR-friendly, to facilitate future adoption across the provider
network and EMR systems. Speed and flexibility, again,
having access to the experts for analytical, software engineering, and clinical capabilities
available on demand, reducing the need for packaged tools, lends the flexibility
to the FDA to be able to make informed regulatory
decisions relatively quickly. This flexibility is particularly valuable in a dynamic regulatory environment. All of this wouldn't be
possible without a village. So again, under the leadership of Dr. Steve Anderson's BEST team, there's the expertise and offerings that IBM brings, in addition to support in data expertise from MedStar, in surveillance and epidemiology from Johns Hopkins, and in consolidated research and longevity. And then for clinical expertise, many research institutes including Baylor, Kaiser Permanente, Vanderbilt and the University of
Washington, thank you. – Okay, great, thanks to all three of you, Christian, Tom and Tim for
some useful information about the coordinating centers
on the CBER side of things. Before I turn things over to Rich, I just want to ask if there’s,
we have about a minute or two for any clarifying
questions from the audience on the presentations that you just heard. Okay, so I'll turn things over. I think you all know Dr. Platt, the head of the coordinating
center on the CDER side, the Sentinel Systems coordinating
center and Rich, go ahead. – Okay, thank you. One of the advantages of speaking where I am in the program
is that I can draw on the presentations from this morning and also these three preceding ones. Lots of common themes in there. I say, especially the
presentations of Janet Woodcock, Gerald Dal Pan, Bob
Ball and Michael Nguyen make it really straightforward for me to anchor my comments. And my goal would be to
go just a level deeper into most of the things
that they have spoken about. So, let’s go back to 2011
when FDA put its marker down on a statement that was always part of the original frame and that is, the goal is to build Sentinel
not only to serve the FDA but to serve as a national resource for evidence development. So, I don’t know about you, but when I read things like this, I typically discount them substantially. That is, these are really
great promises to make and they typically sink below the waves. And so, if I have to tell you the thing that has most surprised me is how attentive FDA has been ever since then to making good on that commitment. The work that I’m going
to describe is based on the partnership with a very large number of organizations. The ones in the middle
group are scientific and data partners, that means they constitute the distributed
network that we have. And then there are a number
of scientific partners who bring expertise but don’t bring data. And the state of the distributed
data set now is this: close to 700 million person-years of longitudinal data for which there are both pharmacy
and medical care benefits. So, by the end of last year, we and our FDA colleagues, actually wrote a report on
how we have been progressing in developing this national resource. This is what that article looks like, and I'd like to take the next few minutes to read it with you. So, first of all, we said we built a distributed network with quality-checked
complete longitudinal data; we built it from what was a good idea at the time, but it hadn't really existed. Now, quality-checked complete
longitudinal data rolls off the tongue real quickly
and the best way to drive you to your email is for me to describe to you just how hard that is. So rather than take my word for it, I’m going to cite a national expert, that is Greg Daniel allows
me to use his slide. It says, curation is often
complex and hard to explain. Now, I think the only thing
wrong with this slide, Greg, is it's not often hard to explain, it's always hard to explain. So, I think every one of us who's talked about curation has said, "Oh, it's a lot of work," and so I won't keep whining about this. But I do want to say that it
occupies a lot of our thoughts. So we said, it informed
many regulatory decisions. As of yesterday, the Sentinel website page on how FDA has used ARIA analyses lists 24 products and 38 outcomes. And as Janet Woodcock said, the work that the Sentinel
team has done together is increasingly being used
by advisory committees. You can see that the
pace has really picked up in the last year or so. And several more advisory
committee meetings planned. The reason this is possible is early on we realized the limitations
of writing new code for every analysis that FDA
was interested in doing. And so we set on a course
of developing reusable tools that we call modular programs that can do specific kinds of analysis. I know the concept is
very straightforward; what is remarkable to me is how it has been an iterative process between the whole
Sentinel investigator team and the Sentinel FDA team to articulate the highest priority needs and then start to build the tools that can address those needs. And the tools have become
increasingly sophisticated over time and they’re now capable of doing a substantial amount of work. So we call those Level
1, Level 2, Level 3, doing increasingly sophisticated
work over on the left and you can see signal identification is now a topic of substantial scrutiny. But one level deeper,
analyses that look just at medical products, how are medical products used, or just at outcomes, how frequent are outcomes. I'm sorry, this isn't the
slide I was expecting. The level one programs typically
are the panel on the left and the level two and three programs are
the ones on the right. And let’s take a look at an example, this is one that Janet Woodcock
referred to this morning. A modular program analysis
comparing the risk of venous thromboembolism
for women exposed to continuous oral contraceptives versus cyclical contraceptives. This was a program designed
to answer the question, is the risk of venous
thromboembolism higher with the use of the continuous
combined contraceptives than with the cyclical ones. The data set identified a
little over 200,000 users of the continuous oral contraceptives and a little over half a million initiators of the cyclic contraceptives. Now, when we talk about initiators,
one of the important things about a longitudinal
data set is you can say, these are women who had
not been previously exposed to these in some period of time. So we believe that they are new users. We’re able to find 228
venous thromboembolism events in the continuous users. 297 in the cyclic users. Then if you look at those populations, you can see that they are
different in important ways. The women who received the continuous oral
contraceptives were older, they were more likely
to have cardiovascular or metabolic conditions, more likely to have
gynecologic conditions. So at baseline, one has no good reason to believe that they would
have the same VTE risk. And so the modular
program allowed the team to design a propensity
score matched analysis that matched on about 30 factors. And you can see what the
adjusted hazard ratio is here. It’s also possible because
it’s a defined population to look at the adjusted absolute risk, about one in 4,000 women
access the excess risk of VTE was about one in 4,000
or one in 3,000 person years. Allowing a regulatory conclusion that these findings don’t show
strong evidence supporting of VTE risk difference. So, this is the kind of study that would have taken a couple of years if we had started with
a blank piece of paper. The additional advantage of
these modular programs is they are extremely well vetted. It’s common when we write
code for the first time to find that there are aspects of it that don’t exactly capture
the investigators’ intent
in the distributed data system and these modular programs
are extremely well vetted. We talked about being able
to study patterns of use. I think the opioids examples
that Judy Staffa talked about are among the best examples we could talk about. Also among the patterns of use is this switching program that
we heard about this morning, that we designed with FDA staff for the particular interest of
the Office of Generic Drugs. It’s a program that finds
switches to a second product that is intended to be the same drug, and then it classifies people on a time-dependent basis into three groups: whether they switch to a different product, switch back to the first product, or don’t switch. And that allows very rapid
analysis, like these. It shows what happens when
extended-release lamotrigine, an anticonvulsant, went off patent. And you can see the yellow
line is the branded original, the black line is total use. So you can see total use went up, though the use of the original product
went down substantially. Interestingly there’s a
substantial difference between the generic products in the rate of switchbacks to the original product. And the total rate of switching
back was considerably higher than is seen with other products
like lipid-lowering drugs. So it’s a very straightforward way to do a standard kind of analysis that can substantially
inform understanding of whether generic drugs that should have the same
therapeutic effect appear to be being used in a
way that supports that. We went on to talk about
whether products were used in accordance with indications. Here’s an example where
the agency was interested to know whether two drugs, that have quite different
properties and intended uses but similar-sounding names might be being confused with one another. Brilinta is an anti-platelet drug and Brintellix an anti-depressant. The concern about potential naming confusion was sufficient that the name was changed, but we had the opportunity to see how often there might have been actual confusion. We assessed individuals’ claims profiles to identify individuals who appeared to be appropriate candidates for one of these drugs but were actually dispensed the other one. And there were remarkably
few events like that. Four apparent errors among
almost 17,000 Brilinta users and 16 errors among
21,000 Brintellix users. Janet Woodcock talked
about the great interest in studying exposures and outcomes in pregnancy. Here’s an example that FDA asked Sentinel to help evaluate, looking at the use of sildenafil
in pregnant women. This was a result of a clinical trial that was stopped last summer when there were 11 reported infant deaths. And so the question was, how often is this product being used by women who are pregnant. That’s a non-indicated look. So, we were able to use
these modular programs to look at the use of this
drug and related drugs. And looking at over three
million pregnancies would show that the actual number of
exposures was remarkably small. And that supported an agency conclusion that no additional
regulatory action was needed. Back at the very beginning, some of the most thoughtful
pharmacoepidemiologists I know observed that it was likely that a major benefit of creating this Sentinel System would
be to act as a signal quencher. Because when concerns
arise, it would be possible to look at a large data set
and draw some reassurance that a problem isn’t large enough to warrant a change in practice. And that has been our experience. Now, to be able to say that we’re looking at patterns of use in pregnant women, obviously raises the question of how do we know when
the pregnancy started. And we did a study that was supported by the Center for Biologics PRISM program to test the accuracy of
the claims-based codes for normal-duration pregnancy, premature delivery, and post-mature delivery. And we did that by linking the information in the Sentinel distributed data set to birth certificate records
for 83,000 pregnancies. So here are the results: 63% of pregnancies were within one week, plus or minus, of the birth certificate estimate and 87% were within two weeks. The reason I particularly wanted
to share this with you was because the work of linking several different Sentinel
data partners’ data to 10 different states’ birth
certificate registries was a very large amount of work. The technical work was non-trivial and the number of data
agreements and permissions that were required was
really quite substantial. And so we thank our FDA colleagues for really helping a lot
in making this possible. Another important reason that
we wanted to do this work was to be able to test the accuracy of being able to link moms and babies. I can remember in the 1980s talking about the surprising
fact that it’s difficult to link mothers’ records with infants’ records. And you might say, why should that be, because we always know who
the moms and the babies are. But it is. And so, when we use the data that each of our data partners has, we could compartmentalize
these mother-baby pairs; well, we could create three kinds of individuals. We could have the moms and the babies who we were pretty
confident we could link. We could have the deliveries for whom we couldn’t locate the infant and we could have infants for
whom we can’t find their moms. And we linked that to the
birth certificate data from these departments of health. And here’s the result. The takeaway is that we were not really able to improve on the matching
that the data partners could do on their own by adding
birth certificate data. You basically can’t see
the yellow contributors to those bar graphs because
there were so few deliveries where the birth certificate
data actually added meaningfully to the number that we could link. And that’s what leads to the fact that we now have this linked data set that Janet talked about with over four million linked deliveries. The fact that we could
link them doesn’t mean we can use them until we can bolt them into the Common Data Model. So the data model has expanded over time and the mother-infant linkage table is the newest addition to the data model. That brings its own new
curation activities. And I promised I wouldn’t
whine about curation. But curation is twice as hard for the linked mom-baby data set as it is for the others. The FDA is now using the Sentinel data set to evaluate medical countermeasures. This is work that we’re doing with the Office of Counterterrorism
and Emergency Response, looking at pandemic preparedness. We’re assessing complication
rates of influenza among people who are treated with an antiviral under
various conditions. And assessing confounding in observational studies of influenza antiviral effectiveness. There was a lot of discussion this morning
evidence portfolio, the framework that was released in December highlights the
role of Sentinel there. It got a good treatment this morning and we’ll hear more about it later today. I’ll spend just a second saying that among the projects that
FDA has us working on now under the RWE portfolio,
the work we’re doing in support of the RELIANCE Study, I think, has important implications, where our role is to assist in testing vertically
distributed regression analysis to link claims data to EHR data. For the foreseeable future, EHR data and claims data are gonna be separate. And the successes of the
distributed data model arise in large measure from the fact that we don’t ask individual organizations to part with their data. If we’re gonna do analyses that link EHR and claims data, we’re gonna have to find a way to do that that minimizes the demand for sharing personally identifiable data. And I think distributed
regression analysis is an extremely promising
approach to doing that. And one of the important first tests of that kind of analysis
is built into this work that we’re doing under this task order from the Office of Medical Policy. It was always the intent that Sentinel support
other national initiatives. And that’s happening. We’re working with the NIH Collaboratory, where we built the NIH Collaboratory
Distributed Research Network. The National Institute on
Aging has funded a study that is using Sentinel data partners to support a study of
polypharmacy in dementia. You’ll hear from Betty
Shenkman in just bit about work we have
contracted to do with PCORnet to start to learn about
how to use the EHR data. The Biologics and Biosimilars Collective Intelligence
Consortium is using the Sentinel platform to
do a number of studies. There are 10 studies that
have either been completed or are underway under that banner. And the Reagan-Udall Foundation
has done seven studies that have five different sponsors and I’m sure June Wasser
will talk more about that. Janet talked about the
international partnerships, so I can just say that
we are now at a level where we are code compatible, we believe, with the CNODES Network,
the Canadian network. So that a query that will run in one system should run in the other. There’s good complementarity in the sense that the Sentinel System is quite a lot larger; the CNODES system, we think, has more longitudinal data. So I think of them as complementary systems. And this is the Clinical Practice Research Datalink. I give FDA full credit for its commitment to transparency for the
studies that are done. On that page that I flashed, showing how ARIA analyses are used, there’s always a link to the analytic packages, and those are posted on a Git repository that we have set up, along with all the
tools and documentation. And finally, FDA has
created for us a robust set of activities to develop new data and new methods that
can enhance the system. And I’ll just sort of quickly run through the active projects. In the EHR arena, we’re developing
a set of quality metrics so that it would be possible to understand how various EHR systems perform
compared to one another. That’s still, it’s not
exactly the Wild West, but there’s a lot we have yet to learn about how best to use the different EHRs. We’re working on expanding
the Common Data Model. This is the work that
we’re doing with PCORnet, characterizing patients who
use anti-diabetic agents. Gerald referred to this. And we have a couple of
projects, machine learning and natural language processing projects around electronic phenotyping of health outcomes and validation of anaphylaxis using
machine learning techniques. Other kinds of activities
include validation of algorithms for health outcomes of interest that use ICD-10 codes for serious infections and for lymphoma. This is right on-the-ground
work, develop an algorithm, get the original records,
have experts adjudicate them, and understand what the performance characteristics of these algorithms are. Michael did a lovely
job describing our work in signal detection. And there’s ongoing work in
distributed regression analysis. Finally, one of the
real signs of progress, I think, is that there’s
now more than one network. And so we see a lot of value in being able to characterize what
different networks have so that potentially users can know where to go try to get
their questions answered. So we’ve designed a
cross-network discovery service so that it would be possible
to have high-quality metadata about a variety of systems so that potential users can quickly identify potential partners. Okay, so I’ve come back to this slide just because it lets me know
that this is the end of the sort of one-level deeper
set of comments that I have. I do want to say, this is the
work, that I’m describing, of several hundred people who’ve worked for a very long time to make this work. And so as I take my seat, I’ll just let you see their
names here, thank you. – Great, thanks Rich for that overview. Okay, so we have just
about a minute or two for this, so I just want to pause here and watch all of these names. It seems like a fun thing to do. (laughs) Any quick questions
regarding Rich’s presentation and what’s been going on at the Sentinel System
coordinating center? Question? Ah, okay, go ahead. – [Larry] Larry Mar
from Johnson & Johnson. Actually I have a two-part
question about the Common Data Model. Number one, I thought the Common
Data Model is the backbone of the Sentinel network, basically, all the data partners
must convert their data to the Common Data Model. So you can get a standard
in every respect. So when Tom shared directly from Acumen, I was surprised to hear his reservation about the Common Data Model. So can you elaborate on that? The second part of the question is, there are a number of Common
Data Models out there. Probably the two leading ones are the Sentinel Common Data Model and the OHDSI Common Data Model. Can you share your perspective
on the two Common Data Models and is there a synergy or
convergence for the future? Thank you. – All right, so I’ll go first
and then you can take over. If you’re working in a
single coherent system, you don’t need a Common Data Model. The issues arise when you’re working in a variety of different systems. And we built the Sentinel
Common Data Model from the perspective of
starting with the questions and the data that FDA told us were most important for it to have. And then, we worked with
our data partners to ask, of all the things that
for which you have data, which is an enormous
amount of information, which are the ones that you
think you can reliably extract, in a way that, not only is
reliable within your system, but means the same thing
from system to system. And so that was a consensus
process that we went through. And as data resources increase
and FDA’s needs increase, we’ve expanded the data
model to accommodate those. And that’s worked fairly well for the very large majority of activities. One of the things that we
committed to early on was to say, there’s a choice you can make in saying, there’s a lot of data, do we want to spend a lot of
time curating it up front, or are we prepared to array the data in a
form that’s pretty good, and we’ll pay attention to it
later when the need arises? And, there’s no right
answer to how to do that. It depends how you want to use the data. For the things where we want
to use modular programs, FDA doesn’t want to
stop for several months to clean the data and
so we do that up front. There are lots of potential
uses of large data sets where it doesn’t make sense to do that. It’s a very big lift even for
the fairly restricted number of data elements that
the Sentinel System uses. And you could well say, we’ll make sure that we have the data, it’s
appropriately labeled, and then we’ll clean it
up as we get to use it. In one of the examples we have, we spent a fair amount of time bringing
laboratory test result data into the Sentinel data model. And then we started to clean it and when we looked at platelet counts, we capture both the platelet
count and the unit of measure. Unit of measure should be
quite straightforward; the College of American Pathologists says, here’s what the unit of measure is. In fact, we had 63
different units of measure when we looked at that. So, we really backed off and said, until we have a need to use platelets, it’s not worth the effort to
standardize all of that data. And I think those are the kinds of trade-offs that one makes in deciding how extensive the data model should be and how much you should invest
upfront in using it. – I understand the value of
a distributed data model, and then you need a Common Data Model to essentially be able to use that. Our experience has been, I can give you a particular
kind of example: if you take, say, Part D, Part D would
seem like a Common Data Model. But unless you really
understand the step therapies that are used in various
formularies and the like, when you’re looking at, say, what drugs people take
context really matters a lot. And, I understand there’s
high cost to having people who are aware and can
work with that context. And with the Common Data Model, you’ll get to that context eventually; there’s advantages and disadvantages. In our case, I mean, Medicaid is another example. Medicaid, for even the
expanded Common Data Model for T-MSIS, there are maybe 10
eligibility categories. California has 225 categories. And depending on the category,
you get some services and you don’t get other services. And so that sort of context
is something you just need to know in order to be
able to use the data in the right kind of way. Now, if you’re using a
broader sort of perspective, you always get down to
those sorts of details. And it is high cost to maintain teams that can do that, and as a consequence we’re able to cover fewer data systems. But there are trade-offs. – I just want to underscore what you said. No matter what your
approach to the data is, it is critically important to have people who have deep expertise in the system that generated those data. Because it’s entirely
possible to get to a point where the data looks the same but actually means different things. And so we see great
value in the fact that, in this distributed data
set, we always have experts who are really sort of close
to the source of the data to help make sure that
it’s used appropriately. – Yeah, thanks for that, great question. And thanks to our presenters
for presentations that are chock-full of information on what you all have been
focusing your time on to improve the coordinating centers and use of these very important tools and capabilities for safety surveillance. We’re gonna go ahead and
transition to a break. But before you leave, I want
to plug this app one more time. There are a few survey
questions in the app that are aimed at soliciting
feedback from all of you on key priorities of Sentinel’s
continued development. And one of those questions
is focused on uses of Sentinel tools and
capabilities outside of the FDA and for things other
than safety surveillance. That is the topic of the next session, and the app will be asking for your thoughts on it, so keep that in mind as you sit through this next and final session. So, I’ll give you almost 15 minutes. We’ll come back at about 2:50 for our last session of the day. Thank you. (audience applauding) Nice job, thank you very much.
– Thank you. (people chattering) – [Man] We need to move to this side. – [Man] I’ll have to change
that, that’s a very good point. – [Woman] So we manage it here, right. – [Mark] Yeah, yeah, I
mean, they’re gonna set, they’ll set this up for you,
but you can use this as. – [Woman] Use this too. – [Mark] Come on up. We’re gonna get started
again in just a minute. So please get whatever you need
and head back to your seats. (people chattering) – Before, we were talking.
– Yeah, yeah. – Thanks for joining us.
– Good to see you. – Yes, yes.
– Are you all coming up? (people chattering) – [Man] Yes, yes. (people chattering) – All right, I’d like
to welcome everyone back for the final session of the day. You’ll only remember the
beginning of the morning, Janet Woodcock not only laid out some of the many important
achievements resulting from the Sentinel Initiative,
the foundation it’s laid for a much more extensive learning system around safety surveillance. But also moving beyond that and that’s what we’re
really gonna focus on in this last session. Extensions of Sentinel and
looking forward on Sentinel, NEST, BEST, other
activities have been a theme throughout the day. This session, though, is gonna
focus on the broader uses of the Sentinel Initiative infrastructure beyond post-market safety surveillance. For this purpose, and kind of indicative of how broad the potential applications of the Sentinel foundation may be, we have a very diverse set of stakeholders who have some different uses of evidence that can be generated through the data
infrastructure and resources. So, without further ado, I’m
gonna introduce our panel, you’re gonna hear all
of their perspectives and we’ll hopefully have a
bit of time for discussion if everybody stays on
schedule with their comments. First, next to me, is David Martin, associate director for Real World Evidence in
the Office of Policy at CDER. Next to him is June Wasser,
who’s the executive director of the Reagan-Udall
Foundation for the FDA. And Betsy Shenkman, professor and chair of the Department of Health Outcomes and Biomedical Informatics in the College of Medicine at
is senior vice president and head of the Data
Sciences Institute at Takeda. Next, Telba Irony, who’s deputy director of the Office of Biostatistics
and Epidemiology at CBER. And then last, but definitely
not least, Art Sedrakyan, who is associate professor of Health Care Policy and
Research at Weill Cornell Medicine and lead of the coordinating Science and Infrastructure Center for MDEpiNet which you heard about
earlier today as well. So each of them is gonna
provide an overview of their perspectives on
these additional uses, building on the Sentinel foundation. And then we’ll have some time for discussion and questions after that. And so I’d like to start with David. – Okay, thanks, Mark. So I think you mentioned keeping our time, so it’s seven minutes. – [Mark] That’s right. – Okay, we’re on it. So, I was actually very
grateful to Janet Woodcock who actually introduced most
of the issues this morning that I’ll introduce in this talk. But since enough hours have passed, hopefully some visuals
will help you recall some of those themes. So, the FDA’s Real-World
Evidence Program was stimulated by the passage of the 21st Century Cures Act, and it’s guided by this
recently released FDA framework which is publicly
available on our website. It was released in
December of this past year. And the key point of the Real-World Evidence Program is that it instructs FDA to
move beyond the traditional uses of real-world evidence,
primarily for safety, and also in some cases for rare diseases and some oncology use
cases, and really look at a lot of situations where the current standard is to use a traditional randomized controlled trial. And so, what this basically means is that you’re talking in regulatory speak about supplements, SNDAs and SBLAs, and you’re also talking about post-approval, post-marketing studies that
may have other purposes beyond traditional safety studies. So, just one key differentiator here, is that the framework applies
to drugs and biologics and CDRH has issued separate
real-world evidence guidance. So Janet mentioned these three key points. I won’t re-read them, you
can see them on the slide. But this is sort of how we
are guiding our evaluation of real-world evidence both in terms of demonstration projects
that we’re engaged in as well as when we’re actually
receiving submissions currently from industry. And then, in order to
talk about the linkage between the Sentinel Initiative
and the FDA framework, people have used the
term FDA-Catalyst today. And this is just a reminder
from the Sentinel website, that we consider sort
of the entire initiative to be the Sentinel Initiative
and that really covers all of the things you’ve
heard about today. And there are all of
these operational parts that collectively are
part of the Sentinel system, and chiefly have the purposes
of looking at safety issues. But FDA-Catalyst is what
FDA named the program that takes the data in
Sentinel, the tools, such as the analytics as well as actually
leveraging the data partners of the integrated delivery
systems that are part of Sentinel to have direct contact with patients or physicians. So, just want to mention
that some of this work under Catalyst actually
pre-dates 21st Century CURES and I think Janet was really, probably, the biggest proponent. Early on it was already mentioned how Sentinel should be
a national resource. And she was very interested in stimulating this IMPACT-Afib Trial. And in super brief, we
wanted to know, basically, could we engage in a large-scale
randomized controlled trial with five data partners
actually collaborating to make this work. Now obviously, that was a pretty big lift, so you had to think of something that wouldn’t be a super
controversial question at the time. So we chose an appropriate use
question of looking at people who had sort of a guideline-based need for oral anticoagulation
and weren’t receiving it, and could you have an
intervention and could you impact both use of oral anticoagulants and could you impact health
outcomes as part of that. So just a quick update. 40,000 patients have been contacted through the intervention arm of the trial and then there’s what’s referred to as the delayed intervention arm and we’re preparing to send
out those contacts right now. So, one of the things that was interesting about IMPACT-Afib, was that, we had a waiver of informed consent. So we didn’t have to
deal with the problem of how do you contact 40,000 people if you want to do something like this? And so, a bit like Janet had some thoughts
Sentinel infrastructure before there was 21st Century CURES, I also had some thoughts about that. And I was lucky enough to receive support from the Patient-Centered
Outcomes Research Trust Fund to oversee the development of a system to remotely collect
prospective data from patients or other observers or
reporters and actually be able to link it to the secondary data that we have in Sentinel
from EHRs and claims. And so, this is a system
that’s now open source that consists of a mobile
app which is reusable, so people can download the code, look at the technical documents and learn how to use the system. And it has a configuration
portal so you can use it for basically any kind of
health outcomes scenario. And it has a secure storage
environment associated with it and that can be used
in a distributed environment and, most importantly, can be used in traditional clinical trials and meets FDA standards
for that kind of use. So right now we know that
one essential e-CRO has already taken all this information and successfully spun up MyStudies in a test environment, and so they’re working on it right now. And that’s again, private
sector, no support from FDA. And some other groups
have been on the GitHub and taken everything that’s there. So we know some other
people are working on it. For those that are interested, we have a big webinar on
this, like four hours. So if you’re a developer
and you really want to know how to use this thing, tune in on May 9th. So, that’s just where you find it. Okay, so, I’ve probably
already used my seven minutes. Okay, (laughs), so now I’ll be fast. So Jacqueline and I,
Jacqueline Corrigan-Curay, who couldn’t be here today, is the head of the Office of Medical Policy. She came in, obviously we had the passage of the 21st Century CURES
Act, and we said look, we need to start looking at scenarios where we’re generating data that could potentially inform SNDAs and SBLAs, actual submissions. And so, we’re working
with the LimitJIA Trial. So, and I want to just step
back for a second and say, we’re very grateful to PCORI
and the PCORI investments that have been made in real-world
evidence infrastructure in our country, real-world data infrastructure. And now we’re able to leverage that. And so, we’ve joined the LimitJIA Trial, we’re confronting a lot of
regulatory considerations, we’re also dealing with
real-world data fitness for use and we’re helping with the use of the app and this is the first use in peds. The SPARC registry, Janet
actually did a great job going over that this morning. So, I’m actually not gonna
add anything to that. And then, the RELIANCE
trial was discussed, and Rich actually helped
me with that today because he mentioned the
distributed regression piece of this, which I think
would be very helpful. I’ll also mention that
this is also a good example of the fact that hybrid
approaches are really critical in the real-world evidence space, because one thing 21st
Century CURES did not do is change
our approval standards. So the reference standard remains a traditional clinical trial. And whatever you develop in
the real-world space needs to be able to provide
substantial evidence, just like that trial would have. So you can see there’s
just a lot of data sources we’re using over here
to try to back this up, patient report, EHR, and CMS
claims from FDA-Catalyst. Finally, the Office of
New Drugs is working with the Office of Medical
Policy on a program to look at the feasibility
of using just secondary data for effectiveness purposes. They’re engaged with the Sentinel System, with cohort characterization, development and preliminary validation
of potential endpoints. And there’s a plan to hopefully prospectively
pre-replicate the RELIANCE trial before the RELIANCE trial is done. And then there’ll be two additional observational comparative
effectiveness studies. So, in conclusion, I just say, the framework serves as our roadmap. Real-world evidence is
definitely a top FDA priority. We’re committed to
understanding the full potential and really, FDA-Catalyst in particular, but the Sentinel Initiative and Sentinel System more broadly
are just critical resources for our efforts to
understand these issues. And the knowledge that we’ve taken from that has actually
helped us in our review of actual supplements
that are coming in now because it really helps to review things when you have first-hand experience with the things you’re reviewing. Thanks. – Thank you very much, David. (audience applauding) June. Very good slice.
(man laughs) – Just get this lower, okay. Thank you, I’m really
happy to be here today and update everyone on what’s going on with the foundation’s IMEDS program. And I really want to congratulate
all the Sentinel folks, particularly those in the movie-credits slide, for building a scientific community. It’s great in itself, but also
really helps the foundation as far as doing our work as well. Sorry. Thanks. So just for those of you who
are not familiar with IMEDS, it’s a public-private partnership that the foundation coordinates. And as you heard many times,
when Sentinel was being built, FDA wanted it to be a broader resource and had looked at the
foundation to become an entry point for industry
and other researchers to make use of the data. So, mainly for the analysis
of medical products but the important part is
we really do collaborate with existing Sentinel partners, both the analytic center
and data partners. So just a little bit about
roles and responsibilities. It’s a little bit different
than some other collaborations and takes some getting used to for folks entering for the first time. So the operations center in this case is located at the foundation. We facilitate the projects,
we put together all the SOPPs and the contracts and the financing and the legal compliance and
do the initial project vetting, education and customer service. The analytics center is the same as Sentinel’s, at Harvard
Pilgrim Healthcare. And their responsibility is coordinating all the analytical activities
and providing a lot of the methodological expertise. And I should also
acknowledge that they have been great in helping me at the foundation
in the educational aspect of how the data really works. And then our group of data partners, a group of the Sentinel data partners who have an interest in collaborating in the types of studies that we do. And we’re moving on so that
they can have a larger role in the studies as well. And then there are the project sponsors, who are either industry or non-industry groups. And I should say currently, we're only working with industry groups. Here's a list of the IMEDS data partners. And I just wanted to mention in passing, we also have access to CMS
data for the right study, even though they’re not really contracted as a data partner per se. And in fact, we are doing our
first study with CMS data now. So there’s a lot of
different types of activities that can be conducted using IMEDS: protocol development and assessment; customized studies, including adjunct tools, and I would refer to the kind of example that David just gave with patient-reported outcomes tools or the MyStudies app; validating studies with patient records; drug utilization; effectiveness; comparing outcomes across patient groups; fulfilling regulatory obligations; and characterizing populations that are otherwise hard to reach. This is a quick overview. When we do a full IMEDS study, the first phase would be working with the sponsor and the data partners on protocol development and assessment. Once the protocol is accepted by the regulator, we would move on to a descriptive analysis and then on to an inferential or protocol-based assessment. It could be supplemented by
a validation study or a registry, depending on what the study is; every study is very customized. It isn't necessarily linear; sometimes these phases can overlap. And they don't all have to be done, though the descriptive
analysis is generally done before the inferential. So, as an update on what we have completed and are currently working on: most folks know that Pfizer did a pilot study, I think it was back in 2016. It was before I arrived at the foundation, and that has been made public. And last year, Eli Lilly did a descriptive analysis with this for a new drug resubmission. And in fact, that PI was up here last year to tell you about it. Currently we're doing three PMR and/or PASS studies. They're expected to last anywhere from three to five years, and we're contracting for
a couple of other studies now. I wanted to mention, we did a survey at the beginning of the year to get a sense of companies' thought processes when choosing a database, trying to figure out when companies might be interested in IMEDS and what the reasons might be for that. So, we had 12 options; number 12 was "other." And the top six, ranked by 15 companies, not surprisingly, are the size of the database and the quality of the data. I wasn't so sure people thought the ability to meet short timelines and deliverables was part of what we're offering, but they do. And I should say, our study teams go out of their way to really try and meet, say, a protocol deadline or so on. Obviously, the fact that FDA uses and has confidence in the system is a great motivating factor for industry. And it's great that people recognize the foundation has really worked hard in streamlining the processes. So when a company wants to use IMEDS, they contract just with the foundation. The groundwork is already
laid with contracts with all of our partners. So, you can plug in pretty easily. And then of course, there's the expertise of our partners, without which we would not have a program. So, looking forward, I think we feel confident that we've put together an initial process with these few studies that we took on. And we're looking forward
now to what’s next, just as Sentinel is. And so we want to consider ways to enhance and expand the program,
whether that be new methods, data, tools or partnerships. And we have an IMEDS steering committee that is providing advice
to the foundation, particularly in those areas. And we’re looking to them for advice really on
sustainability and growth. I also want to say that, aside from these regulatory required studies, I think IMEDS is in a unique position to provide opportunities for pre-competitive collaborations among multiple companies. And I want to give you an
example of something we’re in the process of doing right now. And I know there’s been
a lot of discussion today of health outcomes of interest. We're discussing a project on HOIs to harmonize the development of a validated set of outcomes in multiple databases, which can be done in a distributed database like ours. I'll try and rush a little bit. It would be standardized, high-quality research, which everyone wants. And it also meets the goal of our IMEDS partnership to use FDA's tools and the network to evaluate safety concerns. So as I said, it's a pre-competitive
collaborative project. We currently have a dozen companies who are interested in providing input. And, of course, one of the main reasons is to leverage Sentinel’s
validation experience. There are two phases to the study. The first would be a landscape analysis with a literature review, including Sentinel validations, ICD-9 and ICD-10 algorithms, and the portability of the ICD-9 algorithms. And then the second phase
would be a validation study of chart review and adjudication. So, with these dozen companies, we’re currently in the process of thinking about what sort of
outcomes we might look at. And these are the top six right now. They have not been vetted, and the environmental scan has not been done, but this gives you a flavor of what the group is looking at, and we're still discussing. The purpose would be to really help establish best practices on HOIs and play a key role in validating algorithms across the Sentinel network for a number of uses. And a lot of these companies will have their first experience working
with the Sentinel network and collaborating with our data partners. So, I think, Commissioner
Gottlieb said this very well when he testified in July: "FDA is confident that IMEDS sponsors will play a key role in shaping the future of evidence generation." And the foundation is very honored to help support FDA's work in this area, and we could not do it
without all of our partners at the analytics center
and our data partners who really have the commitment and expertise to support
us and our sponsors. Thank you. – Great, thank you very much, June. (audience applauding) And next is Betsy. – Okay, yes, so there we go. – Big green one.
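As an aside on the pre-competitive HOI project June describes: the core workflow, running a claims-based outcome algorithm and then checking it against chart review and adjudication, can be sketched roughly as below. All codes, names and numbers here are hypothetical illustrations, not actual Sentinel or IMEDS algorithms.

```python
# Minimal sketch of validating a claims-based health outcome of
# interest (HOI) algorithm against chart-review adjudication.
# The code set and claims are invented for illustration.

# Hypothetical ICD-10 code set for one HOI.
HOI_CODES = {"T78.0", "T78.2", "T88.6"}

def flag_cases(claims):
    """Return patient IDs whose claims contain any HOI code."""
    return {c["patient_id"] for c in claims if c["dx_code"] in HOI_CODES}

def positive_predictive_value(flagged, adjudicated_true):
    """PPV = chart-confirmed cases / algorithm-flagged cases."""
    if not flagged:
        return None
    return len(flagged & adjudicated_true) / len(flagged)

claims = [
    {"patient_id": "P1", "dx_code": "T78.0"},
    {"patient_id": "P2", "dx_code": "J45.9"},   # not an HOI code
    {"patient_id": "P3", "dx_code": "T88.6"},
    {"patient_id": "P4", "dx_code": "T78.2"},
]
flagged = flag_cases(claims)                     # {"P1", "P3", "P4"}
confirmed = {"P1", "P4"}                         # per chart adjudication
ppv = positive_predictive_value(flagged, confirmed)
print(round(ppv, 2))  # 0.67
```

In the real project, the landscape analysis phase would supply the candidate ICD-9/ICD-10 code sets, and the chart-review and adjudication phase would supply the confirmed cases used to compute metrics like PPV at each data partner.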
– Oops, I went too far. We need to go back. So, thank you, it's a pleasure to be here today and talk about PCORnet, the National Patient-Centered Clinical Research Network, which was funded initially through PCORI, and then the funding continued through the People-Centered Research Foundation. So this slide's showing
you the different sites that we have throughout the country. So the larger blue is
really showing the names of the networks, and then underneath that are the names of the sites. And I think there are a couple of really key points about this. One is the diversity of the settings. So when you think about the
framework of real-world evidence and the opportunity to be looking at data from diverse places,
diverse patient populations, you can see the coverage
and the varying types of networks ranging from federally
qualified health centers, primary care settings, regional hospitals and very large health systems, as well as academic institutions. We also have two health plan partners that we work very closely with. And one of the things that gives this type of network a distinct advantage is really being able to directly contact patients and having close relationships with clinicians, to really implement studies in a very pragmatic way, and I'll give you an example in just a minute. We have coverage for over 100 million individuals through PCORnet right now. So, when you think about the
strategic aims for Sentinel and the focus on the data
platform and the linkages, if you look at the puzzle pieces here, the green pieces are the core. We have a Common Data
Model that Rich Platt and others have referenced
throughout the day. And the green are the
electronic health record data that all of us bring in monthly; some people bring it in weekly or daily, it does vary depending on the data mart and the site. So we have lab data, blood pressure, a range of clinical information. We're also linked to claims data; the most prominent of course are the Medicare and Medicaid data linkages, but we also have some commercial data linkages, particularly with our health plan partners. Patients are geocoded to a nine-digit ZIP, and then we also have tumor registry data, social determinants of health data, genomic and other information. The grayed areas, though, indicate that that is not complete
through all the data marts. But in the aggregate, we
have very rich information, both linked to electronic health record and claims for 100 million individuals. I’m just showing a
mother-baby illustration. This came up a lot during the day because of maternal substance
use, infant exposure, this is just an example using the OneFlorida Clinical Research Network which is the network that I am the PI for. And this is illustrating
using that network, where if you look at the first map, that’s showing where mothers are that have used prescription opioid drugs, and this is using EHR and claims data for documentation around opioid use. And then the second map is showing where babies are born with
neonatal abstinence syndrome. Interestingly, or maybe not surprisingly, the colors correspond to more of our rural areas within Florida. But also because we have
linked mother-baby data, we also have substance
tests and the actual results for the mother and baby
by the type of drug that mom was taking and then
what we saw with the baby. And we also just added linkages now to our Early Steps program so that we can now look
at long-term outcomes. And looking at how
these children are doing during their early development as they move into the Early Steps. The other example I want to give, this is one that was a study that we undertook across all PCORnet. I want to thank my colleagues at Duke and Adrian Hernandez
for sharing these slides. This is looking at ADAPTABLE. In this study the goal was to enroll over 16,000 individuals nationally to look at doses of aspirin for people with coronary artery disease. And we right now are a little
over 13,000 people enrolled. And I like this slide and
the next one on ADAPTABLE because it illustrates a
lot of the capabilities within PCORnet that are a really good fit with a lot of the initiatives
around real-world evidence. The first is that we did
direct participant recruiting, because we were able to contact the patients and because of the close access we have to patients and clinicians. We also did recruiting in clinic settings. There was followup through portals, as well as warm contact approaches from research coordinators. Our primary endpoints were to look at mortality, bleeding, MI and stroke. So, we had the ability to really look at a robust set of outcomes. So what we did with
multiple data sources is we used our electronic health record data that we all have through
our different networks and that are in the
PCORnet Common Data Model. We also were able to work with CMS and have the Medicare data as
well as other health plan data through our health plan partners. And then algorithms were used to look at endpoint identification. And then that was fed into
the overall trial information. There also were patient
reported outcome events. And those were reconciled
against the EHR data and against the health
plan data that was provided either from CMS or from
health plan partners. And the interesting part, while the patient reported
outcomes are extremely important, the reconciliation of them is very interesting. So if you look at the purple, it says patient-reported event reconciliation: the reason for hospitalization and then the confirmation rate. So, for example, I was surprised by this, but there was only about 56% agreement between patients that had a CABG and the patient understanding or reporting that that's what they had. So the agreement rates were in some instances, I thought, surprisingly low. But this really gives a
very robust picture now for us to look at endpoints. So ADAPTABLE is just one illustration of the studies we’re
doing within the network. We do have a variety of
quality checks in place, as well as reusable tools, including quarterly data refreshes at a minimum, but many sites are doing weekly or monthly refreshes. We have 30 data quality checks that are verified through the coordinating center at Duke. And we have really had an ongoing process of improving data quality as well as enhancing the data elements. And we're now onto the
Common Data Model 5.0 which is very robust in terms of lab and clinical information. We have menu driven query options. We do have a variety of modular programs so that we can look at
very complex cohort logic. There’s an example here about looking at statin prescriptions that
we ran for a recent study and work that we’re doing
in the statin space. Then we also have modules
for cohort identification. And that allows us to
really be very specific and very precise about the
characteristics of the cohort and where to reach out to them as well as the clinicians caring for them, so that that direct
contact can follow through for pragmatic clinical trials. This is just an example of
the computable phenotype that we have done. And you saw many examples during the day, so I won’t spend a whole
lot of time on this, but this was looking at
resistant hypertension. It was conducted by our colleagues at the University of Florida, I’m in the College of Pharmacy. And so this is just showing,
looking at drug exposure. It was used initially in the OneFlorida data set, and then the query was developed and deployed throughout the PCORnet sites nationally. It was looking at blood pressure requiring four or more antihypertensive drugs, or a certain blood pressure level, and then looking at how often the patients met the criteria during certain intervals. And then this was also validated using medical record reviews. The graphic is also showing you i2b2. Some sites have centralized data models, and OneFlorida is one of them. And so there are instances of i2b2 that allow for self-service queries, to make an initial look at cohorts or issues very rapid and very accessible. So in terms of next steps, we have partnerships with
very engaged networks. Everywhere from the researchers
to patient representatives, to the clinicians, to the
health system leaders. And I think that is really important to facilitate the rapid research process and pragmatic research process. We have research ready data to allow for the design of studies
in a very effective way, along with identifying the cohorts. And our recruitment methods
that we’ve been using, and we’ve shown this
through multiple studies, I only highlighted ADAPTABLE
but we have been doing work for other trials including
INVESTED, RELIANCE, David mentioned some in his presentation. And we have been finding
that our recruitment methods, through both warm contact
approaches, electronic outreach, have really been much better compared to historical standards and
our retention of patients in studies has been excellent. We are really well-positioned in terms of both prospective and
observational studies, patient reported outcomes. And then we do have very
engaged health systems that have been really supportive
and actively participating in the studies that PCRF,
PCORI and our partners throughout PCORnet have
brought to the table. And we also have research ready data for surveillance and reporting. So those are our next steps
that we want to continue to develop and work
towards, and we thank you. – Thanks very much, Betsy. (audience applauding) All right, and thanks, Anne. – Excellent. Good afternoon, it's my pleasure to be here today to represent Takeda and give a voice to industry in this very important discussion. So I just joined Takeda quite recently, and I head up the Data Sciences Institute, which encompasses much of the data-driven science, as you might expect, within Takeda. And one of those groups is our Global Outcomes Research
and Epidemiology Group, headed by Kathleen Gondek. And I just want to acknowledge the support and help she and her group have given me in preparation for this talk today. Then these are my disclosures; of course my opinions are my own and don't represent anything official from Takeda. So Takeda has a very long and proud history. It was founded 230 years ago. And it really has been built on its values, and the values are called Takeda-isms; it took me a while to get used to that word. But at the heart of that is integrity, and integrity is fundamental to everything we all do here in this room. And interestingly, at Takeda, our priorities do drive much of our everyday work. And it's not just the four
priorities as you see here, but it’s the order of the priorities. And you can see that
actually putting the patient at the center comes first for Takeda. And we believe that if
we do that, the rest, trust, reputation and
business will follow. And so it’s with that context
we’re actually really excited by the expansion and the
evolution of the Sentinel network. Because we really believe
and agree with many of the other speakers that Sentinel’s expanding vision can really help serve patients better. And the strategic aims that
have been laid out repeatedly by different speakers today, really ultimately will help patients and their use of the drugs
that we provide for them. And there’s no need to
tell anybody in this room, but traditionally drug
development incorporates data coming from our pre-clinical systems, our clinical trial data and that’s what goes into our submission. But now with the expanded mandate, we hope to see real-world data coming into those submissions as well. And in so doing, we would
be helping the FDA meet their 21st Century Cures mandate by inclusion of those data in submissions and in labeling requirements. So, we are actually quite confident
in this way, because actually, we at Takeda have been doing
many of these things already. As you can see here from this slide, the majority of the data partners, over 70%, are currently used by Takeda through independent contracts during our drug development process, for many uses beyond safety. And so what I'd like to do in the next two slides is to cover two broad examples of how we're using these data partners within Takeda today. So the first broad example is really around choosing the
correct patient population, around patient diagnosis and
getting the right patients into our clinical trials. And this is particularly important for patients with rare diseases. As most of you know, and
was recently highlighted by the Global Commission to End the Diagnostic Odyssey for rare diseases, many of these patients take a very, very long time to get diagnosed. And that is an issue for the patient, it's also an issue for us in pharma, and it should be an issue for all of you in the room, this diagnostic odyssey that patients go on. And so what I'd like to share with you
is how we’ve used some of the newer technology
that has being talked about extensively today,
both artificial intelligence and natural language processing to better define patient populations. So in this particular case,
we were able to take data from a claims database and
use artificial intelligence to help pull out patients
there that were suffering from this rare disease,
hereditary angioedema. And when we used that data set, and looked for confirmatory cases, we actually turned up 24 subjects that we could confirm simply
using the claims database. However, when we took the data, the physicians' notes, and we used natural language processing to help read those notes, and we looked there for confirmatory evidence, we turned up 22 patients
all, when we were able to directly link these
two data sources together, we were now able to get
up to 133 confirmed cases of HAE which is a huge finding in the area of a rare disease. And what this means is that
we can use this technology to now find and recruit suitable
patients into our trials and make sure these patients
are getting the treatments that they need in the future. We’ve used similar techniques
in another disease, and this time not a rare disease but quite a common one: GERD. We used these techniques again to identify
suitable patient cohorts. In this case we had a prior hypothesis that patients that were partial responders to standard of care would
perform better and respond better to the new treatment than
those that were nonresponders. And so using artificial intelligence, we were able to define specific
patient characteristics that could distinguish
these two populations. And this led us, again, to being able to recruit a suitable patient population into our trial. And we think this linkage of the data sources gives us great opportunity, and it's something that we really see as real potential within the Sentinel System, to really use this for patient identification and for diagnosis. Now the second example I'd like to highlight is a
slightly different example and this time it’s focusing
on disease specific PRO data. And in this particular case, we again use multiple data sources. And you can see here the range
of data sources that we used. We used claims databases, EHRs, patient survey data and chart reviews. But very importantly, we
were able to get the registry from a patient advocacy group, where they have developed a PRO
for this particular disease. And so using all of these data sources, we were able to pull out
the most important aspects of the disease for the patient. So we were able to get the patient voice into our clinical trials
through development of a specific PRO for this disease. Now multiple data sources
allowed us to do this. But what we would really love is if all these multiple data sources were linked, either through a probabilistic or a deterministic model, to allow us to do even more with these data than we can already. And again, we see this as great potential within the Sentinel System if we are able to get to this stage. I could go on and on with
examples of what we’ve used and what we think it could be used for. We’ve listed here some of the areas that we think it could be important in. And many of these are actually covered in the FDA’s framework document
for real-world evidence. But one thing, just to highlight where we really see potential is in the very long-term
tracking of patients, be it through the conduct of pragmatic trials or in use to help look at potentially curative therapies. We think it really has real potential in those spaces. But in order for the Sentinel System to fulfill this whole potential, the evolution that we've been talking about today would need to
continue, it would need to happen. And we’ve highlighted here some areas that we think would need to
evolve in order for industry to be able to more fully
participate in this system. So first of all, in an ideal world, we would have direct linkage between all the different data sources. But failing that, we really need to have really robust methods to account for the patient journey through
the multiple data partners that we know happens amongst all the different data
sources here in the U.S. And this is particularly important, I’ve shared a couple of
rare disease examples, this is particularly
important in rare disease, where duplication of patients could have a profound effect on the outcome. The second area we want to
highlight is around governance. And so, up to now, everything in the Sentinel
System has been done in a very transparent manner. However, for industry to participate, we’ll need to consider the pre-competitive versus competitive landscape
that we need to operate in for our business model to survive. Secondly, we think there
needs to be an equality of voice at the table amongst all stakeholders, and we need to have a seat
at the table for that. And thirdly, we would like to advocate for the development of additional public-private partnerships and the use of CROs and
core coordinating centers, so that specialists and
more extensive methodologies and approaches could be used
for some of the analytics that need to be undertaken. And finally, we really
admire the FDA for the guidance and the deep expertise that they have within CBER, CDER and CDRH on this topic. But we really would ask you to put out clear and consistent guidance, so that we can follow it, and also, internally, that when we approach different divisions at different levels, we're able to get a consistent message for this novel technology that we really want to embrace. And lastly, I just want to
say thank you to the FDA for hosting this meeting today and giving us this opportunity to engage and we look forward to
engaging with you in the future as a collective group and
sharing our experience and expertise that
we’ve developed to date. Thank you.
– Thank you. (audience applauding) Thanks, Anne, so next is Telba. – Hi, good afternoon, I’m going
to talk to you a little bit about the efforts that
we are making at CBER in what we call capturing patient input to inform the regulation of medical products, and how we are leveraging Sentinel and BEST to do so. Oops. Oh. Okay, so in recent years, the FDA has been emphasizing
patient engagement and the use of patient input to inform regulatory decision making. So, there have been several
efforts in this direction from the patient-focused
drug development for drugs and biologics to the
quantitative elucidation of patient preferences
for medical devices. Several guidance documents
related to the capture of patient input are being
developed currently under PDUFA. So there are multiple aspects
to patient experience data, ranging from the patient journey, treatment burden, and patient and disease burden to the most sophisticated patient preferences. When we talk about patient preferences, we mean how patients
trade off the benefits of some treatments in exchange for risks of the same treatments. So we at the Office of
Biostatistics and Epidemiology at CBER have a program called the Science of Patient Input. That program aims to capture preference information for CBER products. So we define that as patient preference information, or PPI, and also patient-reported outcomes. So there are multiple
uses of patient input in the regulatory arena. You can use patient experience and patient input to help drive your design, and to help define and select endpoints for patient treatments. We can use it for regulatory reviews, to assess how patients trade off benefits and risks of certain treatments for approval of medical products, and also for post-market surveillance, including benefit-risk assessments. So, we at the Office of
Biostatistics and Epidemiology are conducting several
patient input projects using Sentinel and BEST information. Last year, 2018, we conducted a mobile app feasibility
project with Harvard Pilgrim. So our questions were whether we could identify a patient cohort that was of interest to CBER, related to some of the treatments that we had in-house. We also wanted to see if we could use the MyStudies App to collect patient input from this cohort. And finally, we wanted to know if the data partners would be interested in participating in a patient input study using the MyStudies App. So, at that time, our cohorts of interest were four types of patients. We were interested in
patients with hemophilia, patients who had sickle cell disease, given the gene therapy treatments for sickle cell disease, patients with hereditary retinal dystrophy, and also brittle diabetes, in other words, treatment-resistant diabetes. Of all these types of patients, we decided that hemophilia A and B was the only cohort in which it was feasible to use Sentinel to collect patient input, given the time and cost constraints and the available tools we had at that time. We couldn't use novel programming. So, what happened is we found a reasonable-sized cohort of patients with hemophilia, and we identified them. We tried to use the MyStudies App, and the app could display most of the PRO survey questions. However, it could not display the graphs that were necessary to conduct a patient preference study. Also, five out of the six data partners that we consulted were
interested in further discussions about the patient input study. Currently we are conducting three patient preference studies using BEST. One is for patients who have osteoarthritis of the knees; we are collaborating with RTI Health Solutions. We conducted a patient preference study with Duke University on sickle cell disease. And we are conducting a study on hard-to-control type 1 diabetes with the University of California San Francisco. So, basically, in these patient preference studies, what we are doing is defining attributes of the treatments that we have in-house, with CBER products, for osteoarthritis, for sickle cell disease and for hard-to-control type 1 diabetes. And we want to get input from the patients about the trade-offs between the benefits and risks of these treatments. So these studies are being conducted at this time; we don't have the results yet. So, to finalize our experience: what we need from Sentinel and from BEST in order to collect patient input is access to vulnerable populations such as the elderly and children. We need access to patients
with rare diseases; most of the therapies that we have in CBER, particularly the advanced therapies, deal with rare diseases. These patients are very hard to find, and we really need input from these patients, particularly when we are considering those treatments that have high benefits but very high risks. We need access to pregnant women and immunocompromised persons, and we need to identify appropriate cohorts by confirming the clinical diagnosis. So, basically, that's what I had to say, thank you very much. – Great, thank you very much, Telba. (audience applauding) And next is Art. – I wanted to thank Mark
and Greg for the invitation. I know in Mark’s head I’m
still an associate professor because you knew me 10 years ago. Thanks, FDA, of course, Greg
and Danica, and thank you, Adam, and thank Kara, for being patient with me
with last minute slides. I would like to share some of our infrastructure
developments within MDEpiNet, with its 10-year history now. Many of our networks are inspired by the Sentinel concepts and philosophy, and MDEpiNet grew as an organization and as a concept and as a discipline along with Sentinel, and I think it's really relevant and important for us to give credit to Sentinel tools and methodology. Next. Am I, oh, I can advance myself. So, what is MDEpiNet? It's an organization that is a global
public-private partnership that we envisioned 10 years ago to advance national and international patient-centered medical device evaluation and surveillance. In many ways we view MDEpiNet as an organization that will help us build the discipline of medical device outcomes research and epidemiology. We have been growing in the
past 10 years internationally, launching chapters in
Australia, in Europe, the UK, a lot of interest
in Japan this year. And we’re also trying to
get access collaboratively, with our chapters and our partners, to many big data sources within Europe and Australia and North America. So, among our current objectives, one of the major focus areas is to build coordinated registry networks. That's what I'll be talking about next. Really, this conceptual framework, which also comes out of inspiration from Sentinel, is to leverage the routinely collected data in a country, or national investments already made in a country, and facilitate creation of a research and surveillance network. And with the knowledge and
expertise of clinicians and surgeons and interventionists, to be able to look at the
safety and effectiveness of medical products,
particularly medical devices and related health technologies. We’re also very dedicated
to developing methodologies that will support this
use of real-world evidence and do studies that illustrate this concept. And finally, a very important component of our work is to support NEST, the National Evaluation System for health Technology, and its coordinating center in building NEST. We currently have 15 coordinated registry networks, national and international in scope, and you can see we're
targeting really those areas where it’s device heavy
or technology heavy. Orthopedics, vascular,
prostate ablation devices, women’s health, breast
implants, hernia repair devices, mostly mesh, related
cardiac devices, neurology, temporomandibular joint devices, catheters for major IV access, and we're also building
international consortia, we’re talking about two of them today. So, conceptually, what
we’re trying to build as a CRN is this community of
clinicians and methodologists and data owners who can come together and appreciate the conceptual framework here, which builds, again, on registries or related data sources as
a core and then linkages. But most importantly, the
process of engaging patients, engaging other stakeholders, and developing the community
of like-minded people who can work together to
develop this infrastructure. Because no matter how many data
sources you have access to, exciting new methodologies
that you can implement, if you can’t win the
hearts of the clinicians, surgeons, and interventionists, I don't think your process, your project, and your infrastructure will succeed. So we built it through a Delphi process: when we start targeting a particular clinical area, we develop core minimum data. We develop registries if needed, in certain areas that require registry building. And then, if there is a
registry already in place, we’ll start running the pilot process so everybody can learn
about the capacity and potential of this kind of network that we've built, and we expand and advance from that. Really important are linkages to data sources such as CMS data and private insurer data sources; we have been making important investments in accessing these data and working with partners. We heard in a discussion before our panel about the availability of these data. Unfortunately, it's not that easy and simple to access these data. There is still some delay, and the CMS contractors may be able to access them with very short delay, but they're not necessarily allowed to work with clinicians and academic institutions in setting up these analytic and surveillance projects. So I think there are a lot of challenges remaining in working with these data and partnering. We often acquired these data sources to create the infrastructure that needs to be implemented with CRNs,
Coordinated Registry Networks. As an example of the Delphi process, I'm sure many of you are familiar with it, but I view this as a community-building exercise. We bring analytic people and stakeholders together, we run the expert panel discussions and, importantly, involve all the society leaders and data owners and patient groups so that everyone is on the same page about what kind of data need to be collected, and why, and how. And then we provide feedback to them on what they responded and try to manage the process so that it leads to much more pragmatic coordination on data definitions, so that it doesn't end up with hundreds of elements and inefficiencies that won't really work out in real-world settings. So it's really important to
manage the process in a way that will lead to a
manageable number of elements, non-burdensome, and still pragmatic enough to address safety and effectiveness of health technologies and devices. Here’s an example of
learning from Sentinel, in fact, and developing a distributed network model. Working with international registries in orthopedics, we developed a distributed data analysis model; you already know about this model from Sentinel. This is a very specific implementation across six countries. We look at metal-on-metal hips in this particular example, examining outcomes after metal-on-metal hips are used versus commonly used metal-on-polyethylene implants, including highly cross-linked polyethylene implants. And within this international collaborative, ICOR, the International Consortium of Orthopedic Registries, we looked at the outcomes over time and, for the first time internationally I think, implemented a distributed analysis, which showed that events related to metal-on-metal implants start accumulating after two years, not immediately. So that was really informative for FDA as well in making decisions about
metal-on-metal implants. We also used claims data; I think we can be creative in many instances in using claims data to look at the safety and effectiveness of medical devices. I highlighted a few studies, shamelessly from our own group, looking at the use of claims data for mesh implants and re-operation rates, and also erosion rates associated with mesh implants, in a number of publications, and also related to long-term events, such as understanding whether mesh itself leads to some cancer or autoimmune disease. Sentinel tools that we
have been involved in, and in fact helped develop, include the Anonymous Linkage of Distributed Databases. That was specifically funded by Sentinel; Cornell was the lead for linking this data from an orthopedic registry to HealthCore data, which is one of the Sentinel partners, and we successfully
integrated hashing algorithms, and this was the early years
of anonymous linkages. I think there’s a lot more
progress in this space now. But the methodology itself led to our linkage model, such as in this case in the Vascular Coordinated Registry Network development, when we linked the registry data from the Vascular Quality Initiative, a registry quality system within the vascular community, to Medicare data and commercial claims data, including all-payer data from New York state. While the registry captures only one-year outcomes, we're able to get five-plus years of followup. Another instance led to the AJRR, the American Joint Replacement Registry, developing its own model of data linkages. We pilot tested the linkage
with the New York state data and we validated and created a methodology so that the registry can
implement this in the future. So, we have multiple uses for linkage. There are examples with Medicare claims data, there are state data linkages, there's linkage with EHR data such as the New York City CDRN data from a PCORI-funded CDRN, and there are also linkages with clinical trial data now, with registry data used to validate claims. Finally, we have been
investing in the HIVE Initiative, in a HIVE platform, so that we can have a flexible and high-capacity environment to offer our partners, to do multiple projects simultaneously and be able to work in a distributed and decentralized fashion. 'Cause again, from the MDEpiNet coordinating center perspective, you want all the partners to
be able to access the data and be able to implement research studies. Thank you very much. – Thank you. (audience applauding) All right, thank you all. It’s pretty impressive to see just how much of the large-scale work that’s going on broadly today around real-world evidence
generation is linked in some way to the Sentinel
foundation and ongoing work. Because our panelists, although we have a lot of them covering a lot of these Sentinel extensions, were so efficient, we actually do have some time
for questions and discussion. If you look at the agenda, it says we end this session at four, my closing remarks are definitely
not gonna take 15 minutes. So we will end on time. But I’d like to take
advantage of this unique group we have up here to talk a little bit more about Sentinel extensions and moving beyond safety surveillance. So if there are any questions, please head up to the microphone. In the meantime, I wanted to ask you all
more directly about that. We gave you each only seven minutes and you had a lot to cover just talking about what’s going on now and some next steps that are planned. But I wonder if any of you,
or as many of you as wish to, would like to comment on what you all see as important next
applications, next use cases, building on what you’ve done so far. And David, do you mind
if I start with you? – Sure, right, well I think
as I mentioned before, we’ve heard a lot today
from different groups about the importance of hybrid approaches that I think have really come up here. And so, I think, there’s a
lot of things in that area. One thing we heard about from Telba, obviously is patient reported outcomes and just people may not know this, but there was a survey done in
CDER of registrational trials over nearly a 10-year period. And 30% had clinician reported outcomes, or patient reported outcomes with specific prospective scales, so, you’re never gonna get that
out of a routine EHR visit in the outpatient environment,
where the doctor says, “Seen the patient, she’s
still playing golf, “she’s doing much better.” And what you really needed was some kind of rheumatology symptom severity
scale and it's not there. So, I think there's potentially a lot of development around some things that may be oriented towards clinicians, and we see a lot of that with SMART on FHIR apps in the EHR space. We may see development with, obviously, the mobile app space. And I love the way Telba
sort of went through with the mobile app, like, okay, do the data support our use case before we even think about the app. Yes for the one situation, then the app will do certain things because it’s actually based
on the Apple ResearchKit and Android ResearchStack frameworks, and it's intended to be used internationally by developers. But, gee, if we could just visualize, or tweak our visualizations in this way. So that's why a lot of this
open source is so great, because if someone wants
to make that investment, you can just build on that ResearchKit and
ResearchStack framework. So I think making things open
source as much as possible so these tools can be leveraged is great. So, I think, those are
probably a number of the areas and I think, just in general, you’ll find that when you do this work, a number of the speakers have highlighted, it’s really critical to involve patients at the beginning on the research teams. And I think that’s really critical. And then there’s just a lot of logistical issues
with prospective work. I’m gonna steal two seconds to
just make a pitch for FDA too which is–
– Please. – My colleague from Takeda said, we’d love to see harmonized guidance. And I did just want to
mention that guidance from CDRH will obviously be different from guidance under 21st
Century CURES for drugs and biologics because of
different approval standards and different, basically,
legal frameworks. But the guidance for drugs
and biologics that is due at the end of 2021, is
intended to be aligned and we're working very closely, both CDER and CBER, and despite the fact that there's a different framework, we still actually have liaisons with CDRH, we have CDRH representation on our work groups. So it's a great point and we just want to let you know that we're sensitive to that. – [Mark] And all on
schedule it sounds like too. Other comments. – I just, I know Art closed
talking about the importance of the clinicians and we
really have found this in a lot of the work that
we’re doing through PCORnet. And so that component
of clinician engagement and how the studies will fit
into their workflow, combined with a variety of strategies that we're using to engage patients. I think that's something
that’s really critical and that we can’t lose sight of. And so our engagement
strategies really are focused on patients, clinicians, as well as our health system leaders because the health systems have
to support the studies too. – If I can add, I think
that the NLP methodology that we implement within EHRs now to identify products often has limits, particularly for medical
devices and health technologies. I think more work is needed in that space. I know UDI implementation
is really critical. So we’re all trying to make the case for UDI to be implemented for devices. That will enable us, at
least to have, somewhere in the health system, readily available data that we can link to and take advantage of. Medical devices and health
technologies, I think, have not been a major part of Sentinel before, but I think MDEpiNet's experience and infrastructure building
can be really helpful now to collaborate and
bring that collaboration to the next level. I think we took some of
the tools to the next level with MDEpiNet achievements and I'd love to see that
happen in the next phase. – So first of all, thank you,
delighted to hear about that. And so what we would like to do is, we would love it when we
go to an agency meeting and we say we have met
your evidentiary standards. You laid out these three criteria in your real-world evidence framework,
we’ve met it, here we go, we want to use real-world
evidence in this way, for this clinical development program. We would love it if we had a
robust discussion on the spot and the door was open
for those discussions. That’s what we would really like to see. – [David] I’m sorry,
I’ll respond immediately. – Good for you.
(people laughing) – No, that’s great. And, just, so the way we’re
trying to manage that is we, I believe you mentioned, obviously, so CDER is the largest of
the medical product centers and, you know, culture change in any large organization
takes time and so there’s, we’ve heard about
variation among divisions. I'd like to mention, though, that it's not just an issue
of size and culture change, there are obviously
different information needs in different therapeutic areas and so, since the approval
standards haven’t changed, those information needs
also often don’t change. It’s a question of how do you translate those information needs into the real-world evidence environment. And so what we have done internally, is we have the Medical Policy
and Program Review Committee, which is a committee
that’s jointly chaired by the leader of the Office of New Drugs and the leader of the
Office of Medical Policy and it has representatives,
senior representatives from all of the offices
that you would expect to be involved in this type of question. So, biostatistics,
regulatory policy, new drugs, surveillance and
epidemiology, medical policy and we are actually engaging with every new real-world
evidence submission that comes in the door. We do anticipate there will
be some guidance coming out, so when you see the first
guidance coming from FDA, it’s not all of the 21st Century
CURES guidance telling you what to do, so don’t get too
excited and tweet about it. But, it is something that
is equally important, which is some guidance
where you can help us by flagging submissions that
contain real-world evidence at the point of entry to FDA. We also have a real-world evidence inbox; if you basically google FDA, CDER, real-world evidence, you'll hopefully get to this sort of group inbox, where you can sort of say, okay, we sent our submission in, but also email us to ping us and make sure we have all our systems and all our people alerted and involved. So we're trying to address
that, recognizing again, that the divisions have
legitimate information needs and sometimes different
therapeutic area considerations. – Telba, it’s a nice
goal or vision to have, and it's also impressive, from David's comments and others earlier today, Danica and others, just how much effort is going into trying to develop these capabilities. As you said, David, there are some traditional
standards out there and making sure that the
data, the methods, everything, the three components mentioned in the real-world evidence
framework are really being met and what that means to product developers. I think that’s gonna be
an area of obviously, continued focus in the
coming months in addition to that upcoming guidance and I’m sure more further
steps from CDER, CBER and CDRH. And another plug, we will
be doing another meeting later this year with FDA
on real-world evidence and some of the related activities underway too. There really are a lot of opportunities. Before I close up this
question, are there others who want to comment on
next steps and moving on. – Yes, I just wanted to add
to everything that’s done. You know, we at CBER are
particularly interested in rare diseases and
the small populations. And these tools can be
of tremendous support to build natural history studies, to build control groups,
that was mentioned. And we really need to
improve the capacity and capability of reaching these, what we call, hard-to-reach populations. They can provide a wealth of information to CBER and to all centers. And another possibility
is to use these trackers and mobile apps to follow patients even more than physicians, even more than the patients themselves. We have the possibility of
having something connected to the patient 24/7 and
that can also provide a lot of important information, not only for post-market
but for pre-market use. – [Mark] That’s great. Yes. – [Michael] Hi, I’m Michael
Vender from the Center for Drugs. This question is for Dr. Heatherington. I really loved your talk; you gave some nice concrete
recommendations at the end. And I just wanted to
say that we’re sensitive to this idea that industry
has a seat at the table. And I want to just give another chance to dig a little bit deeper into that and what would success
look like to you there, for industry participation in Sentinel? – So, I think it’s hard for
me to define that clearly. But certainly, first of
all, the conversation today is excellent, and I know this is building on other work that has been done. But continuing to engage like this. And then also, around the overall decisions about the infrastructure and about the development of
public-private partnerships, just being able to participate in those discussions and those decisions. I think that would be a good first step. I think it’s hard for me
to define overall success, but I think, just having a seat
at the table to start with, we would be happy with that to get going. – I’d like to mention for IMEDS, many of the issues came up
in terms of transparency and all of that we go through. These studies are proprietary; they want to hire us as a vendor, but we're not a vendor, it's a collaboration. The epidemiologists get it, but the legal counsel doesn't. But I do think that building
those larger collaboratives where industry and all
of us can work together, sort of help build that trust a bit more and when it works it’s great. – Yeah.
– Yeah. – I’m sorry, just to comment on, many of us have been involved in IMI and so we know these big
collaborations are possible. They require quite a
bit of infrastructure. But we know they’re possible. And I think that experience
will serve us well for the future. – It really is impressive
to see how much activity and interest there is and
when I asked about moving beyond the next opportunities, you all not only answered
that question pretty robustly, with Anne picking up on Michael's question too. I think, looking beyond those kinds of well-aligned, clear meetings on these topics with the FDA to actual additional real regulatory actions, 'cause there are some already, label expansion indications and so forth, based on a more substantial use of the potential of real-world evidence described
here is pretty impressive. But, I also appreciate the
very realistic approach that you all have taken on
practical issues ranging from outcome validation to
linkages across data sets, overlap, I guess it gets to
be a bigger and bigger problem as you have more and more data sets with millions of observations. As well as the importance
of both clinician and patient engagement. This is getting to be a
pretty robust foundation, real foundation for real-world evidence. I thank you all for that. And Danica, you’re gonna
get the last question. – [Danica] Thank you, Mark. My question has to do with
the international space. And I maybe will start with Art and if other folks would
like to contribute, that would be great as well. So, last December, MDEpiNet
International Chapter in Australia held the first
international meetings involving regulators and owners of administrative claims data internationally, to explore the boundaries around active surveillance in those. So, Art, what is the thinking in terms of the Sentinel capacity? I know we've engaged CMS a great deal in terms of linkages, but what are the next steps with regard
to the Sentinel capacity in the context of being part of international active surveillance? – I can comment about the device space. Well, you see, that's the place where we have been making investments. With the HIVE infrastructure
that we’re developing, we’re building servers
in multiple countries now so that we can also overcome the GDPR regulation problems in Europe: data stays in Europe, data stays in the U.S., data stays in Japan and Australia. But if we can come up with a way to combine the data, bring it up in a HIVE environment, and create distributed analytics, and a pilot is on the way for that, and then, after that, the data goes back to the same place, that's really a potential way to overcome some of the regulatory hurdles. I think that can really
be an exciting next step. Also maybe in a drug space. I mean, I don’t know how it works out for a drug environment. But clearly this is
something we identified as an important next step for international device surveillance and research. And also, I'm just highlighting another element internationally. The registry concept has been very much embraced internationally. I think we learned a
lot from the Scandinavians, from the UK, from the Australians. And I think that also has a lot of implications for rare diseases. As Telba highlighted, rare diseases are potentially the best target, maybe, for the registry system and then linkages. Rather than trying to capture 100% of a huge population, you can target the whole rare disease population and enroll almost everyone into a registry, conditional, of course, upon the patient's consent. So, I think those models for international developments
are probably something that are worth considering. – And then we have–
(woman speaking faintly) Okay, and then I did want to get a drug and biologic perspective on this too. It's come up a few times today. Yeah.
to give a credit to CBER for actually developing HIVE. HIVE as an infrastructure
was developed at CBER and in fact, it had been
piloted also in CDRH for a number of translational
research studies. And now, you can see how many fruitful developments are happening in the bio space internationally, thanks to the actual investments from CBER. – Very synergistic. David. – Sure, yeah, so just in terms
of your question, Danica, about the international involvement. Obviously, with actual submissions, you know that’s sort of
a bilateral relationship, directly between the sponsor
and the regulatory agency. However, there are longstanding agreements that facilitate liaison
around these issues that we’re all aware of between FDA and other international agencies. And so we have routine collaboration with the European Medicines
Agency in particular, on our drug and biologic
real-world evidence issues. It's handled through our cluster meeting system, which is a routine, broad system for how FDA and EMA exchange information. And then, I think probably our first contact informally
with PMDA was probably about a year ago and we’ve
actually just spent some time in the last few days
informally discussing, or I should say, on a
slightly more ad hoc basis, discussing formally, these
real-world evidence issues. So there is connection there. And then on the industry side, obviously, we see industry bringing submissions to us that involve international
real-world data sources. And I think there, our views on that are very aligned with
our views on clinical trials. We’re looking for adequate
and well-controlled studies. We’re looking for that in
traditional clinical trials. They may have international
data or international sites and the same thing is kind of the case in the real-world space. So, yes, so I think there’s definitely
active engagement there. – Great, and I’d like to
ask our panel to stay seated, and all of you, or as many of you as possible, to stay for one minute while I do a few things. I'd like to start by thanking this panel for covering a wide range of issues, both the opportunities and
the specific practical hurdles that need to be overcome for more progress on extending Sentinel
beyond safety surveillance. Thank you all very much. (audience applauding) And I'd like to extend those thanks to all of our speakers and panelists. We had a very rich day today; every year at this meeting it seems like there is a lot more going on than the year before, and I think that's actually the case. As Janet noted starting out this morning, work in this area on a Sentinel foundation, around improving safety surveillance, continues to expand the science and the empirical capabilities, and the evidence continues to improve. And as you just heard, there are a lot of extensions beyond safety as well. So thanks to all of our
speakers and participants for bringing those perspectives
together here in one place. Also want to thank all of
the partners we had at FDA in getting this event together,
Bob Ball, Steve Anderson, Michael Nguyen, Greg
Pappas, Sarah Osmebolarian, Daniela, Sueann Novic and Genella Ndeo; our team has worked closely with them over the past year to plan this event. And a special thanks to
Anissa Ferguson at FDA who drove the development
of the Sentinel Mobile App for this workshop, which I hope at this point most of you have downloaded; all of the workshop materials are available there, and some other useful information as well. And last, our team at Duke,
Greg Daniel, Kerra Mercon, Sarah Supsiri, Morgan
Romine, and especially Adam Aten, for getting all of this together. And finally, thanks to all of you for your participation here today. Thank you. (audience applauding) Safe travels, everyone, and best wishes on this continued journey
around post-market safety and better real-world evidence. (people chattering) – I enjoyed the talk.
– Oh, thanks so much. (people chattering)
