The Intersection of Precision Medicine and Implementation Science

>> Please note that today’s session is being
recorded. If you have any objections, you may disconnect
at this time. Thank you very much, and on behalf of the
National Cancer Institute, I wish to welcome everyone to the Precision Medicine and Population
Health Interest Groups Webinar. The Intersection between Precision Medicine
and Implementation Science. A brief word about logistics, and then we'll be
off. We ask that if you are not already on mute, to please
keep your phone on mute for the duration of the presentations. As mentioned, the session is being recorded,
and muting all lines will help us to avoid any background noise. We encourage questions, and they can be submitted
by using the Q&A feature on the right-hand side of your screen. Please type your question in the provided
field and hit submit. You’re free to submit your question at any
time, but we will be opening the session for questions when both presentations have finished. Without any further ado, it’s my pleasure to
turn the meeting over to Dr. Muin Khoury. >> Good afternoon everyone. This is Muin Khoury. I’m the director of the CDC Office of Public
Health Genomics, and I’ve been a consultant at the National Cancer Institute Division
of Cancer Control and Population Sciences. It’s my pleasure today to introduce our two
speakers, Dr. David Chambers and Dr. Geoff Ginsburg. You have their bios online, but I’ll
do a brief introduction. Just as a point of reference, this is part
of a regular, probably every other month webinar series. The next one will be on October 19. We will have more information on that on our
website. So there’s been a lot of conversation lately
about precision medicine, as the Precision Medicine Initiative was launched last year,
and how it might intersect with the field of implementation science, and we’re really
lucky today to have two pioneers, two leaders, one in implementation science and one in
precision medicine and genomics, and we’re hoping this dialogue will engender discussion. We’re giving them 20 minutes each, and the
last 20 minutes will be for questions. Feel free to send your questions online. We’ll pick as many as we can in the last 20
minutes. So without further ado, I’ll introduce the
two speakers quickly and then turn it over to David Chambers, who will speak first. So David is the Deputy Director for Implementation
Science in the Division of Cancer Control and Population Sciences here at NCI. He manages a team that focuses on efforts
to build and advance the field of implementation science through funding opportunities,
training, dissemination platforms, and enhanced partnerships. David has really been one of the driving forces
of implementation science across NIH and beyond, and he was at the National Institute of Mental
Health for many years before he joined NCI, and our second speaker, Dr. Geoff Ginsburg,
who’s the director of the Center for Applied Genomics and Precision Medicine at Duke University
School of Medicine is also connected with the School of Engineering. He’s a professor of medicine and pathology
and biomedical engineering. Dr. Ginsburg is a world leader in genomic
medicine interested in the translation and application of genomics in day-to-day health
care. You can check his full bio online if you want,
but he’s also well known for his work and leadership in many activities around NIH. He served on an advisory committee for the director
of NIH. He also chairs the National Academies’ Roundtable
on Genomics and Precision Health at the National Academy of Medicine. So without further ado, I’m going to turn
it to David to kick off the first 20 minutes, and then Geoff. Thank you. >> Okay, thanks Muin, and thanks everyone
for tuning in. What I’d like to do with the next 15 or 20
minutes is make the case that implementation science and precision medicine really have
opportunities to be mutually beneficial. Whether we think about implementation science
historically as the way in which we’re trying to improve care, bring evidence, and bring
the results of research to those who could benefit from it most, and we think of precision
medicine, really, along the same lines of trying to figure out how do we optimize care
and how do we optimize health and healthcare for each individual, and I think the two really
can go hand-in-hand. So what I’d like to do briefly is just give
some orientation for folks who are a little bit new to implementation science and what
we mean. I’ll talk a little bit about a few examples
of how implementation science might be considered within precision medicine and then finish
with some assumptions that I think at times set us back in terms of really trying to optimize
the use of evidence, and in this case, ultimately, the use of precision medicine findings for
the benefit of health and of healthcare. So what you see in front of you is an effort
by Balas and Boren back in 2000 to try and reflect what happens if we assume that the end product
for our research really is that publication and everything magical will happen down the
road toward making sure that publication has a positive benefit on people’s lives. So on the left side what you see walking through
are the steps in which we’re losing valuable research findings, whether positive or negative,
if we’re assuming that the publication pathway is the route to implementation. You’ll see that results may be negative from
research. We may submit papers for publication. They may not be accepted. They may be published in places where they
don’t join bibliographic databases. They may not be considered in systematic reviews
and guidelines in textbooks, and they may not make it toward implementation in the community
or in clinical settings. On the right side, what you see is just on
average, their effort to try to map out how long this process takes in a reasonably
best-case scenario to get from the conclusion of a research study to implementation. What you’re probably more familiar with is
this tagline that it takes 17 years to turn 14% of original research to the benefit of
patient care. A lot of times, people use the statistic
without knowing where it came from. For those of us who are focusing on implementation
science, it says to us that we can’t rely on our traditional methods of dissemination
to assume that findings, that evidence-based practices, will be implemented. Too often, on the scientific
end, we’re assuming this. You can probably, in your offices or wherever
you’re watching this, fill in the rest of the statement. We assume that if you build it, if you create
the best possible tools, interventions, and guidelines, then people will come, but as
you can see from this lovely but empty baseball diamond, it’s not enough to just bring things
and make things presumably accessible to people, but it’s about what the demand is for them
and it’s about whether anyone actually wants to use and is able to use what you are creating. So for genetic testing, even if we assume
that genetic testing in terms of the evidence reaches the point where we can identify the
optimal treatment for a particular illness or we can reduce risk for health problems,
we start to look at that pathway around implementation, we can see where the benefit starts to decline. So let’s start with the insurers who are able
to support the payment for care: let’s say only half of the insurers are choosing to
cover those wonderful tests. Let’s assume that only half of the systems
are going to go that next step and say, “How do we train clinicians so that we
know when to offer those genetic tests?” Let’s assume further that, you know, of
those clinicians in those systems who were trained to use these tests, only half
of them are ordering them, and then we’ll assume that of the eligible patients who might
benefit from this testing, only half of those patients that those clinicians see get tested. Now it seems like we’re making some pretty
strong assumptions about the limited use, but in many interventions, if we could get
half of insurers, half of systems, half of clinicians, and half of those patients to
get those evidence-based interventions, we’d be doing a lot better than we currently are,
but if we just assume that sort of threshold, and even assume within that scenario that there are no problems
with access, no problems in terms of the follow-up,
and no problems in terms of following the procedures for testing, we’re already
losing probably 94% of the benefit of that genetic test, because half of half of half of half leaves only about 6% of eligible patients tested. In other words, the amount of benefit that
we have from getting implementation right, may very well outstrip the amount of benefit
that we got just by assuming that that test exists, and we need to be better in following
that pathway and really understanding implementation so that we don’t do things to sit on the shelf
or to be located in tests to be located in a place where they are not used, they’re not
known when to be applied. So orient folks, this is how how think about
three terms that really underscore a lot of our work in the field. The umbrella term which we use within our
Implementation Science Team here at the NCI is implementation science: the broad study of how we promote
the integration of research findings and evidence into healthcare, health policy, and practice. We at NIH at times have chosen the words dissemination
and implementation research also to map the space, and the reason why we also use these
two terms is because we recognize that traditional dissemination methods, which are typically
passively providing information through a journal publication or a message, don’t
necessarily fit nicely to the audiences that might benefit from it, and what we need in
dissemination research is a much more robust knowledge base of how we target that distribution
of evidence, how we target that distribution of effective interventions to particular audiences
so that we’re able to spread, but importantly, to sustain knowledge and the interventions
that could benefit people. If we make a contrast between that and implementation
research, we see a much more intensive and active process of trying to bring interventions
and other tools to community and clinical settings, so that we can improve patient outcomes
and ultimately so we can benefit population health. I mention these because they’re terms that
are frequently used, but often, it’s not clear how they’re being used, and we want to make
sure in our efforts here that people know where we’re coming from. What do we mean when we say implementation
science? What do we mean when we say dissemination
and implementation research? So in dissemination research, we’re really thinking
about that process all around the creation, through the transmission, the reception, and
what happens with information as it’s translated ideally into positive action toward health
and health care. There are many different target audiences. We think about our consumers, patients, and
families. We think about our providers. We think about administrators, payers, etc.,
and we need to do a better job of understanding the implications
of how evidence is created, how it’s packaged, and then how it is transmitted to those who
might benefit from it. When we think about implementation, we’re really
trying to draw a contrast between often what we’re focusing on, which is what do we do
when confronted, in sort of an ideal situation, with the patient for whom we can try to help with
a health problem or potentially try to reduce risk? So often, as you look at this cascade of different
outcomes, we’re really focusing on the left end and the right end. We’re focusing on what do we do? What are those empirically supported treatments? What are those efforts to try and improve
the quality of care for the individual so that on the right end, we’re improving function. We’re reducing symptoms. We’re improving health status, and so in a
lot of our trials we really do see that usual study focusing on what is the intervention
and what are the health outcomes that it’s going to generate. What we end up missing is all of this important
activity that goes in between. Well, were not just saying what are the treatments? What are the preventative strategies, what
are the other methods and tools that we need to use for the individual, but how do we actually
get them into practice? And so the core of implementation is often
focusing on what are the implementation strategies that are enabling those evidence-based practices,
genetic tests, etc., to make their way to the benefit not just of individuals, but of
the population. And what we argue is that there are key implementation
outcomes, like the feasibility of an intervention being delivered within a particular setting,
penetration rate (how far it gets through a particular system, how many people within that system have access
to it), sustainability, and level of uptake of a particular
intervention, and our argument is that if we can improve implementation outcomes,
we’ll see knock-on benefits on the service system, and you’ll see in this picture the
IOM’s indicated standards of care: efficiency of the system, safety, effectiveness
overall, equity, etc., and the argument here is that if we can get implementation right
to improve the service system, then we see health outcomes improve not just one person
at a time but at that population level. So if we’re successful about implementation
it’s not just that we get these improved implementation outcomes for their own sake, but that we’re
ultimately seeing large drivers to improvements at the population, at the system level. Okay, dissemination implementation research
really does borrow tremendously from a whole range of fields. You’ll see some of them listed here in terms
of theories, in terms of the findings, in terms of the methods that we use. It often requires pretty intensive collaboration
with stakeholders from multiple practice settings. As you’ll hear from Geoff, some specific examples
of ways in which they’ve developed tools that really do cut across different stakeholders
within sort of the genetics and genomics area. It features a whole range of different methodological
approaches, and we specify that the methods need to be both rigorous but also relevant within
the settings where they’re being tested, and that studies of dissemination and implementation
should really focus on not just do we get successful uptake or improvement in the implementation
or sustainability of various interventions, but really trying to understand how and why
they work. So we do have current funding announcements
for anyone who hasn’t been aware of this. At NIH, it’s R03s, R01s, and R21s. The National Cancer Institute is at the helm,
but many other institutes, you’ll see here Fogarty, The National Institute of Mental
Health, the National Heart, Lung, and Blood Institute, and our Human Genome Research Institute collaborators, as well
as the Office of Behavioral and Social Sciences Research. It has been our way in terms of advancing
the research agenda for dissemination and implementation across the agency. Over 150 grants have been funded through these
announcements and others from elsewhere, and something that our Center for Scientific Review
at NIH has deemed important enough that there is a standing study section for anyone who
is interested in this. So whether you’re focusing on these specific
announcements or you’re identifying your own investigator-initiated research, in this case,
it cuts across implementation science and precision medicine. There is a logical and an expert home for
you for review. So I wanted to just walk through a couple
of examples. If we think about Lynch Syndrome and the recognition
that there are populations within this country and around the world that have elevated risk
for a range of different cancers, it’s not just enough to know that we have a particular
test. We need to be thinking about how do we really
enable a population benefit if we identify someone who carries a particular mutation
around Lynch Syndrome, and when we think about the implementation science challenges beyond
just the existence of the test, we’re really saying how do we take the power that we may
have to identify Lynch Syndrome and bring it to that broadest possible population, the population of everyone who has colorectal
cancer. So bring it to those who are members of the
family of those who have Lynch Syndrome, and how do we really think about the strategies
through which we’re implementing that screening, we’re thinking about cascading, we’re thinking
about ongoing screening and monitoring once someone with Lynch syndrome has been identified. So again, really thinking about those other
aspects that optimize reach, that optimize the sustainability of the system that’s able
to capture the population. Similarly, for BRCA1 and 2, beyond just recognizing
that there are elevated risks, how do we really bring that to the population
level? How do we think about some of that scale-up
across families’ decision making and those procedures? How do we think not just about the testing,
but think about the cascade of different services that need to be provided in order to make
sure that that test yields benefits to those individuals who may have that particular mutation,
and really thinking about what are the implications that it has if we’re thinking at that broader
level about screening to everyone who might benefit from it, monitoring, of preemptive
treatment if that ends up being the decision-making, and in all of these cases, how do we not just
think about the sort of undertreatment, the problem that we’re not getting enough access, but also the potential issues of overtreatment, where
you may be recognizing people and providing services that may not be as indicated for
them. In the third example, which is just broad
precision medicine, there’s a tremendous amount of interest and a tremendous amount of potential to really
try and say how do we do a much better job of customizing a treatment, customizing our
ideas about health and health care to every individual on the basis of of a whole range
of data that we’re now able to gather? But we really need to be thinking, or ideally,
we would argue, to be well-suited to be thinking about how clinical practice can incorporate these
different findings. How do we think in this case about an evidence base
that we know that’s going to be evolving? And how do we plan for a system that’s able
to receive that evidence as it evolves? Not just assume that either we’re waiting
for all answers to be in or that the answer that we get is going to be the final answer
on a particular question. How do we think about the workforce that ultimately
would be having to implement precision medicine findings? What are the services that are going to be
covered and paid for? All of these are the implementation science
challenges that go beyond that initial finding, and the more that we have a group of people
who are really thinking about anticipating that, we would argue the better off we’re
going to be. So I’m going to finish with just a couple
of the traditional assumptions that we have around evidence-based practices and argue
that maybe precision medicine is an opportunity to move beyond them. So often in our traditional way of thinking,
we create evidence. We create evidence-based practices, and they’re
more or less static. We manualize them, fix them in place, and now we have to implement
them. Now we have to apply them. Also that our systems, once we understand
how our health systems work, they’re more or less static. We think we know exactly what needs to be
done. That implementation is often this sort
of one practice or one test at one point in time. That for consumers and patients, while we sort of know there’s
a lot of heterogeneity, we’re often assuming that if there’s an evidence-based practice,
it’s kind of a one-size-fits-all, and we continue to assume that if someone chooses not to implement
various tests or various interventions, they may be irrational because we had the evidence. So the question here is whether these
assumptions relate nicely, if you’re thinking about implementing precision medicine where
we know there’s going to be a lot of change, ongoing discovery and ongoing need to figure
out how do we best apply that discovery? Choosing not to implement. This notion that it’s irrational says that
we’re not looking at questions about actionability, right? The degree to which a particular finding has
a particular action that can be taken to promote or improve someone’s health or maintain someone’s
health. It says that we’re not thinking about these
challenging issues about reimbursement. It’s that we’re not necessarily thinking about
whether we have the necessary resources around counseling. Whether our existing tests are optimal, given
that we’re seeing a whole range of new tests crop up month by month, year by year, and
it assumes that there’s a demand for the knowledge, a demand for the evidence, that
there may not be. I’ve also put up the slide which says there
might be some rethinking that we have to do in terms of sustainability. Classically, we’re thinking about sustainability
as trying to keep our interventions and our systems the same for as long as we possibly
can. On the left side, and you’ll see the link
at the bottom, what this is showing are the Beatles. So folks who know the Beatles are seeing the
Beatles’ haircuts. This is the musical group from 1963 to 1970,
and what it underscores is how radically their appearance and some would argue, their sound
changed over time. Yet from 1963 to 1970, most people would agree
that this was still the Beatles. Can we think about our interventions, and
specifically, genetic tests or other precision medicine tools, as things that are going to evolve, and allow
them to evolve? So should our existing interventions be those
that we’re assuming should be sustained in the same form? Or how do we think about the evolution of
these different tools over time, and how do our systems cope with the reality that this
field is constantly changing? Where do we go from here? So we’ve been, in the last couple of years,
trying to think conceptually, and we’d love to certainly hear from Geoff and from questions
and comments about how we make this conceptually
different approach to sustainability as concrete as possible. In this particular paper, which we did a couple
of years ago, it posits that the interventions are like puzzle pieces that we should expect
are going to change their shape over time. That the context, the setting in which we
are trying to use these particular tests is also changing over time, and rather than assuming
we have the best possible benefit from the beginning, are there ways in which we can
assume we’re improving quality at all points, and ideally, quality improved in one setting
can contribute to a broader evidence base. For us, this looks a lot like the learning
healthcare system, which we’ve seen in the National Academy of Medicine, or IOM, talk
about in recent years that a number of people embrace, which basically says can we not see
research and practice as separate entities, but how do we pull research and practice together? This particular paper that we wrote earlier
this year was specifically in the behavioral healthcare system context, but the argument
was that we now have technologies. We now have data that we are able to gather
in a real-time fashion, but if we can figure out the right way to be able to aggregate
that, to try to learn from the local lessons of how we apply these very challenging and
very personalized tests, might we ultimately have better supports for decision-making? Might we ultimately have better ways to use
that data to have accurate risk prediction at population levels? Can we do a better job of care management? Can we do a better job at assessing performance? A paper that Muin and Greg Feero and I had
published earlier this year in JAMA was just briefly trying to say that we think that implementation
science, that learning healthcare, and that precision medicine are three nice complementary
things that for too long have sort of existed in an isolated manner, right? That there are people who are advocating for
a learning healthcare system separate from those who are really thinking about the benefits
of studying implementation and separate from those who are thinking about precision medicine. What we really see is that there can be such
a wonderful opportunity for implementation scientists to use precision medicine as a
chance to really be thinking about the dynamism of our evidence base, the dynamism of our
evidence-based practices, and in turn, that precision medicine can look to implementation
science to ask the so-what questions. So we come up with this wonderful new insight. How does it actually tangibly make a difference
in the population? So the hope for the future is that we align with
those of you on the call, those of you who are interested in this area, to say how can
we most complement what each other is doing, and that, you know, as a final note, that
for implementation and precision medicine, we see that context matters in both cases. It’s multilevel. That we can’t just focus, in a pristine
setting, on whether the practice works, but on how well it can be delivered in a whole range
of settings. We do have a knowledge base around implementation
that continues to grow. It’s a team sport, and we need all of you
who are involved in this conversation and all those who are not yet, to really come
together around identifying both the questions and the studies that will get us further along. >> Thank you very much, David. Now we’d love to hear from Geoff Ginsberg. Thank you. >> Thank you. Can you hear me okay? >> Yep, you sound great, Geoff. >> Okay, perfect. Can I get my slides up? >> Yep, Geoff. If you want to go ahead and navigate, you
have control of the meeting. So you’re able to move those forward. >> Oh, okay, sorry. I didn’t realize it was one deck. >> Not a problem. >> Okay, perfect. So thank you Muin and team for having me here
today. I’m wearing three hats: one in my role at Duke in applied genomics
and precision medicine, also as a principal investigator in the NHGRI
IGNITE Network, and thirdly, as a co-chair for the Global Genomic Medicine Collaborative
out of the National Academies of Sciences, Engineering, and Medicine, and I’d like to
say that I really don’t consider myself to be an implementation scientist. So I’m really privileged to be on the same
panel as David Chambers, and what I hope to do in the next 20 minutes or so is highlight
some of the efforts that will bring David’s theoretical constructs that he so elegantly
told us of, you know, to practice. These are my disclosures. I have no conflict with the current presentation,
and I’ll start off by saying that I think probably most of us who are listening today have been
consumed and somewhat obsessed by the challenge of bringing genomic information about an individual
to their clinical care. This, to me, forms the basis of what we now
call precision medicine and health, and often, slides like this one depict this as a
nice, linear relationship going from genome sequence to health care, but I would argue,
and I think we’ve already seen evidence of, that it’s a decidedly nonlinear path by which
we can actually bring the knowledge of the genome into the care of individuals and populations. So this is a diagram that’s probably familiar
to many of you. It’s a framework for translational research
that was initially proposed out of the IOM by Crowley and colleagues illustrating two
blocks, two challenges in translational research, T1 and T2, and then was expanded on by Muin
Khoury and colleagues, depicting what once was a one-way highway as a virtuous cycle of translation
beginning with basic discoveries at the bench, moving to human applications, generating evidence
in T2, and disseminating and implementing that evidence-based practice in T3, leading
to public health and population benefit demonstrated through effectiveness and outcomes research,
but to put this into context, I would start, perhaps, with one of the earliest elements
of precision medicine, the use of risk calculators for the evaluation of patients for cardiovascular
risk, and it might be worth thinking about how perhaps something as validated and as
simple as this, how has this actually been brought into the practice of medicine and
how is it used? Several papers have evaluated the use of the
cardiovascular risk calculators by a variety of physician communities, and the results
are not as nice as we would like to see, with a paucity of primary care physicians
reporting that they actually read clinical guidelines very carefully, and an equally
small percentage of individuals who actually use such a highly validated risk calculator
in day-to-day practice, owing to variability in knowledge, beliefs, and practice patterns
among the family physicians. So this illustrates some barriers that we might
anticipate being relevant to genomic medicine and precision medicine having to do with the
knowledge and literacy, the understanding of the validity of the tools that are being
used, and also elements of integrating this into the clinical workflow, otherwise known
as being something that will take away from other potentially competing activities in
their practice. Perhaps another example illustrated here from
oncology. As many of you know, EGFR was discovered as
a biomarker for non-small cell lung cancer in the 1980s. It was associated with lung cancer prognosis
in the 1990s, and it was demonstrated to be a marker of treatment response
in non-small cell lung cancer in 2004, and studies have been done to look at how the
EGFR biomarker and testing has been implemented into practice, and while data from Boston,
the Dana-Farber Cancer Institute in particular, showed that many of the patients presenting
with non-small-cell lung cancers have actionable mutations, in the general community practices
or general oncology practices in the US, less than 20% of those patients are actually
having EGFR testing, and I guess that I would argue that these, at least, these two examples
may be examples of some failures that we have to truly implement appropriately to bring
these innovations to clinical care. Several studies may even point to one
of the reasons why this is the case. One issue might be the lack of recognition, until
recently, that implementation science is a true science and should be supported by
a national funding agency. These two studies looked at the distribution
of funding for genetic research across both the NCI and the NHLBI, and as
you can see, the vast majority of our research dollars are going to T0 and T1 research and
only a paucity of our research funding is going to research beyond T1. So therein may lie at least some of the explanation
for our failures to implement. Another issue may be having to do with payer
reimbursement for genomic technologies, and we often think about payers as being sort
of a singular entity, but in fact, these data show that there’s quite a diversity of strategies
and acceptance of evidence across the payer community. In this particular case, the adoption of reimbursement
policies for the Oncotype DX cancer prognosis test was highly variable, and the test only became
more widely adopted, as evidence accrued, over essentially a five-year period. So there is clearly variability in the thresholds
of evidence needed to implement across an important constituency in our clinical
care and health care delivery environment, and one might assume that FDA approval of
tests, as well, might be concordant with coverage decisions for tests, and these data indicate
that’s not necessarily the case. Again, across a number of cancer-related genomic
tests that are available commercially, one might even conclude that there is almost a
discordant relationship between FDA approval and coverage decisions, perhaps indicating
that the efforts to achieve one might be somewhat orthogonal or quite different from the other,
and I think it’s probably fairly clear that the evidence required for FDA approval and for payer reimbursement
are not necessarily the same, though perhaps some elements overlap. So, to examine the landscape of genomic medicine
implementation, in 2012 the NHGRI convened a meeting of United
States centers and institutes that are carrying out genomic medicine programs, most of which
developed essentially organically, really to find out what has been going on in this space
and to begin to decide on funding strategies based on these types of data. Just in qualitative terms, what we learned
through that first meeting was that there’s a lot happening in the space, but it’s largely
happening in isolation. There was a fair amount of redundancy, and
the organizers and participants of this meeting distilled the findings to lack of evidence
for value for genomic medicine technologies as being a significant barrier, along with the ability to interpret variants, the lack
of knowledge and expertise, the absence of standards, electronic medical record integration, and
the need for a financial model that might point to sustainability and easier adoption. Perhaps one of the most successful programs
in genomic medicine coming out of the NHGRI’s portfolio has been the Electronic Medical
Records and Genomics network, the so-called eMERGE Network, as illustrated here. While this started off as a discovery program,
I think it’s rapidly evolved into one that’s implementing, particularly owing to its emphasis
on clinical decision support. So it has used the EMR for GWAS discovery,
to construct electronic phenotyping algorithms, to streamline informed consent, to engage
the community, to standardize elements of data privacy, to reach the pediatric population
with a flagship program in pharmacogenetics and genomics, and educational platforms, and I
think central to all of this is the ability to use clinical decision support tools
as a primary strategy for implementation, and this is just an example of one of those
clinical decision support alerts that comes up in the Vanderbilt Health System when a
physician orders clopidogrel. This alert tells the treating physician or
provider what the genetic background of this patient is; in this case, this would be
somebody who is a poor metabolizer of clopidogrel and would be at risk for thrombosis
in the setting of percutaneous intervention. An alternative medication is actually recommended,
and in a learning health system type of strategy, if the alert is overridden, the alert actually
asks for the reason why. So the feedback can be obtained, and the need
for these types of tools to truly implement genomic medicine strategies has been voiced
by a number of communities leading to the development of an action collaborative at
the National Academy of Medicine, formerly IOM, focused on the integration of genomic
information into the electronic health record because this has been raised as such a critical
issue for implementation. So for the last two years, a group of electronic
health record vendors working closely with the academic community has really been working
on the standards for integration of genomic patient data with other healthcare data in
the electronic health record, so that it would become routine to deliver that information
to providers and patients in the context of patient care. About six or eight months ago, DIGITizE, which
is the acronym for this group, began to publish standards for genetic information integration
via the EMR. The left-hand side of this slide indicates the stakeholder groups
that developed these standards, and on the right-hand side of the slide is the first
implementation guide that was published, concerning the use of genetic variants for HLA-B*57:01
in the context of abacavir treatment or TPMT in the setting of mercaptopurine. So I think these types of guidances and the
ability for the communities to come together to develop these standards is one of the linchpins
for addressing some of the challenges that we see in the implementation of genomics into medical practice. The other major National Human Genome Research
Institute initiative for the implementation of genomic medicine is the establishment of the
IGNITE network, which stands for Implementing Genomics in Practice. You can see the logo on the lower right-hand
side of the slide. This is our first network-wide publication
describing the network and the various programs that are in the network. The goal is really to link genomic medicine
efforts, harkening back to our finding at that 2011 or 2012 meeting that we were working more in
isolation, to really develop best practices around implementation, particularly across
diverse geographic and population settings to begin to contribute to the evidence base
for outcomes that would encourage adoption and to disseminate best practices for implementation
of genomic medicine and its sustainability. So certainly embracing many of the elements
that David talked about as part of IGNITE. Within the network, I lead the Duke Program
in Family History along with Lori Orlando, and we’re also the coordinating center. The Mount Sinai group is working on APOL1
genetic variation in practice to manage hypertension and chronic kidney disease. There are several groups at Florida, Vanderbilt,
and also in Indiana that are doing different flavors of pharmacogenomics with germline,
as well as cancer-related work, and the University of Maryland is working on the use of genome
sequencing for diagnosing monogenic diabetes. So this network is clearly focusing on implementation
and effectiveness, and also on clinical utility, by using highly validated
genomic medicine tools to achieve this result. This is more or less a template for how many
of the studies are being done. Pragmatic trials are being conducted in community-based,
as well as academic-based practices, comparing genomically informed care with standard of
care and measuring a series of outcomes across the areas that I’ve already mentioned, and
this slide just illustrates the Duke Project on Family History. Just to give you a quick snapshot, we’ve developed
a web-based family history tool that encourages or requires the patient to enter information
about their family history prior to seeing their provider. These results are sent to a clinical decision
support engine that uses guideline-driven rules to provide a report to the patient and
to the physician, so that when the patient-provider interaction occurs, it’s really about understanding
risk and a strategy to actually undertake appropriate screening behaviors. What we’re particularly proud of in this particular
project and what’s been a linchpin for it is the use of the SMART on FHIR platform. SMART
stands for Substitutable Medical Applications, Reusable Technologies, so the SMART components are essentially apps. FHIR, which is Fast Healthcare Interoperability
Resources, is what really allows tools like this to be integrated across multiple electronic
medical records, and this particular program was highlighted at the White House last summer
at the Champions of Change event because of its use of the SMART on FHIR platform
and the ability to have patient-derived information appear essentially in the electronic
health record. So as David mentioned, the IGNITE network
is steeped in using implementation science strategies: pre-implementation assessments
of the status of the practices that the genomic medicine tools will be placed into, assessment
of a number of variables during implementation, and ongoing assessment in the post-implementation phase to understand the
true effectiveness of that implementation strategy. So this is really, to the best
of our knowledge, the first network-wide utilization of standardized implementation science methodologies,
most of which require the use of qualitative interviews and assessments of the providers,
patients, and their environmental surroundings. So I’m not going to go into the details of
this, and you’ve heard some of this from David, but we truly think about outcome measures
for implementation science, about how much we’ve reached the intended patient population,
how much the providers and clinics are actually using these strategies, and how appropriately
they are using them, and of course, measuring both the time and cost to implement. We also are measuring what we’re calling effectiveness
outcomes at the same time, and what this slide is indicating is that we’re really looking
at effectiveness at the patient, provider, and system level. We also have a number of different contexts,
ranging from more standard clinical and financial variables to psychosocial
and behavioral variables, some of which we can capture
directly from the electronic medical record. Those are illustrated in blue, and the others
we are capturing through qualitative interviews and surveys. So what IGNITE has at least identified initially
are several challenges and also has provided some solutions. Around clinical knowledge, we’re developing
a vast series of resources around educational materials for the targeted clinicians, the
clinicians that are using these tools. As I’ve mentioned, we’ve developed, in many
cases, integration with the EMR, or electronic health record, both on a more national scale,
working with the vendors, and also with local health system solutions that have been devised. To engage the diversity of patient and clinician
populations, most of the groups have developed a genomic
medicine advisory board that really is meant to link the stakeholders and engage them at
every step, and similar types of strategies have been used to help with a problem that
is not unique to genomics: patient recruitment into these studies. The other thing that IGNITE has done is develop
a tool box that can be found on the IGNITE website. The link is at the bottom of this slide, and
while this is not an exact representation of what the toolbox looks like today (it’s
been undergoing a number of important renovations), you will find knowledge and best practices
around testing protocols, about clinical implementation strategies in various environments, about
clinical decision support tools that have been developed in both the IGNITE and eMERGE
networks, and a suite of educational tools and knowledge for the various stakeholders in
the genomic medicine space. So finally, I just want to close in the next
couple of slides just to highlight the point that genomic medicine implementation is not
unique to the US, not surprisingly, and as part of the ongoing series from NHGRI’s Genomic
Medicine Working Group, we hosted a series of meetings of the international
genomic medicine community, the first of which was represented in this publication. We
have now formed the Global Genomic Medicine Collaborative, which is really meant to explore
the synergies and opportunities for collaboration and implementation of genomics into medicine
on a global scale and really think about how we not just advance genomic sciences for individual
health and population health but also for global health, and it’s important because
as we look across the globe, there are now several large-scale
national sequencing efforts, and this is just a snapshot. Of course, the PMI is something that looms
large for the United States, with the potential for a million genomes, and I’m sure people
are aware of Genomics England and the Regeneron-Geisinger sequencing program. The Middle East has programs in both Qatar
and Saudi Arabia, and there are other European initiatives, many of which were highlighted
in a meeting that the G2MC held late last year in Singapore, but to sum up, I think
this is a global challenge and a grand challenge for the implementation of genomic medicine. Clearly, on the horizon,
through the PMI and many of the initiatives that I just mentioned, we’re going to have
millions of genomes, obviously contributing an amazing amount to discovery research,
but this also needs to be linked on the international stage to precision medicine, population health,
and global health, and in the middle of this slide, you can see, I think, what at least
the global community is thinking about in terms of the five major challenges that need
to be addressed, several of which I, and David, have mentioned. Evidence generation and the economic model
that supports sustainability of genomic medicine. The ability to share data across institutions
that are working in this space, both data on implementation and data on effectiveness
and outcomes. The incentives to implement need to be aligned
across the stakeholders. As David mentioned, there’s a significant
emphasis that needs to be placed on developing the appropriate workforce, both as implementation
scientists but also genomic medicine outcomes researchers, and probably at the heart of
this is really the ability to engage the participants in being part of this research effort and
ensuring that they have sufficient trust with their genomic data in particular. With that, I think I’ll stop and turn it back
to the folks at CDC. Thank you. Muin, I’m not hearing anything. >> Thanks, Geoff. Sorry about that. First, a brief thanks to Geoff and David for their
wonderful presentations. We will now open the session for questions. As a reminder, questions can be submitted
using the Q&A feature on the right-hand side of your screen. Please type your question in the provided
field and hit submit. Slides to this presentation and the archive
will be found on our website, the direct link to which is in the chat box on the right-hand
side of your screen. If not, you can find our website at cancercontrol.cancer.gov. The Precision Medicine and Population Health
Initiative is found under cross-cutting areas on our homepage. And with that, I’m going to turn it over to
Muin. >> Thank you David and thank you Geoff for
wonderful presentations. As some of the questions come in, I
just want to initiate some discussion here, and, you know, I come from a world of public
health that has a healthy skepticism about genomics and precision medicine, and the question
I keep hearing in my circles is: is it too soon to think about implementation, given
the status of precision medicine? So that’s sort of part A to this, and the
part B, related to this question, that I’d like to ask both Geoff and David is about the Precision
Medicine Initiative per se. I mean, you both alluded to it. There will be a million or more people involved
in a cohort. This is primarily a discovery platform, but
from your perspectives, what kinds of opportunities are there for embedding implementation science directly
into that platform of discovery, based on the learning health system model that you talked
about? So, two parts to this: are we too early to
think about this implementation stuff when we’re still discovering new things, and number
two, what about the case of the precision medicine large cohort? Whoever wants to jump in, you’re welcome. >> Sure. So I can see Muin, so maybe it’s easier for
me to jump in. So, is it too early to
be thinking about implementation where precision medicine is concerned? I think if we don’t think about the ultimate
benefit of what I think is a potentially incredible discovery platform early enough, we’re going
to replicate where we failed in many, many other cases, assuming that the answer that
we create in a vacuum is going to be the answer that fits best for people’s lives,
for health systems, etc. So I don’t think it’s too early, and I think that just the
reality is that, in various ways and under various names, I would say, for a long time there
have been all sorts of efforts toward trying to improve the precision of medicine,
whether it’s specifically with genetics or other ways of trying to understand risk
and understand optimal treatment, and we should be learning from each and every one of those
opportunities. I think often there is that gulf between research
and practice that doesn’t seem to serve much good, I think, at least over time, if what
we want to see is the optimal benefit out of our research, and so the idea of being
able to look at ways, whether it’s just within genetics, of a whole range of different systems
that have been thinking about how do we set up a process through which we can optimize
the delivery and the use of particular tests, and to optimize the process by which we use that
information to drive decision making. We should be learning from those natural experiments
really trying to study them, and to your point in terms of what to think about, I think we
know from implementation science, and I’m sure from Geoff and others’ specific experience
with eMERGE and with IGNITE, how valuable it is to understand context, and so I saw
on one of Geoff’s slides where they talked about the variables that they’re already measuring
to really understand things at the provider level and the system level, and I think within
precision medicine, with that initiative, we have an opportunity to really understand
more about the context, not just where health care is being delivered, but where people
are living their lives. Our goal is to try and figure out not
just what to do but how best and where best to intervene. We’re likely going to have an enormous amount
of data that tells us where people are and, ideally, how best to reach them, and those,
I think, are questions or components that could be nicely added to a whole range of
different projects. >> Can I jump in on this a little bit too? So first, to your too-soon question, I want
to echo a little bit of what David just said and maybe say it slightly differently. So certainly, precision medicine is not new. Many have argued that that’s the way medicine
has been practiced, just the tools have been evolving to allow that to be more enhanced,
and of course, I showed one example of a tool that’s been around for over 50 years that
has not been adequately implemented, perhaps. Your own work, Muin, and that of others has identified
tier-one evidence for certain genomic technologies that should be implemented but largely
are not. So I guess I would say we should not let perfection
be the enemy of the good. We should at least use the most advanced tools
and evidence where evidence exists and really define that path for implementation that will
allow the ones behind it to move forward. I think I’m not sure exactly what we would
be waiting for to pull that trigger. Maybe Muin, you have something in mind, but
I think the more we begin to advance the earlier stage technologies that have the evidence
base, the shorter the time will be between now and when they have true public health
impact. Regarding the Precision Medicine Initiative,
I view the PMI, at least as it’s being constructed right now, as you said, as a massive engine
for discovery, but that engine for discovery is going to lead to a whole series of novel
insights for disease, and tools and technologies, as well, that should be implemented. So again, defining that pathway with the tools
that we have now in advance of the PMI will only make the PMI impact, in my mind, more
realizable, whether it’s through pilot demonstration projects of the type that IGNITE and eMERGE
are doing, and that others are doing even independently of federally funded programs,
but we have to create that framework and create the ecosystem that will allow the PMI discovery
engine to coexist with the implementation science engine, and what David and Muin have
published on this learning health system piece is a significant part of the equation. So we can’t wait for all these pieces to become
as mature as we might like. We should begin to assemble them now so that
the PMI can have the greatest impact. >> Great. Thank you. So we’re going to start taking some of our
online questions. The first reads, “Typically, evidence-based
interventions are based off of RCTs. This is not possible in genomic medicine. How can we balance different types of evidence,
i.e., observational studies, with evidence-based recommendations?” >> So I agree with the statement that, with
the deluge of genomic findings that are coming into play and the pragmatic issue
of the cost and feasibility of an RCT, we can’t expect to see RCT-level evidence
generation for everything. So we have to achieve that balance. In my mind, perhaps, the balance is really
around what the risk is around the tool or technology that’s in question. So, for example, should
we have an RCT for a genomic variant that may inform a lifestyle choice versus
a genomic variant that informs chemotherapeutic agent selection, where that might have a significant
impact on one’s livelihood and future, or life or death in many cases? So should we begin to align the evidence generation
structure with the level of risk of the technology that’s being considered? >> Yeah, I think that’s a great point, and
I also say that I think to some degree, one has to ask the question of why is one choosing
to focus on randomization. I think it ends up, I’ve heard a lot that
in order to have sort of precision or personalized medicine, your trials have to be, in order
to be RCTs have to be ridiculously large. It’s kind of that counterintuitive thing. If we’re focusing on the individual we get
bigger and bigger and bigger, but I guess it ends up being, to some degree, that we are
gathering all this information on pretty heterogeneous individuals. To what degree does it serve us to try and
figure out the appropriate grouping for randomization? Because we may have a lot of challenges there,
and so I guess, often, the question, and this is for our methodologists
to help us with, ends up being how we can optimally answer the questions that are posed, and whether
that’s through a randomized controlled trial in certain ways or to study natural experiments
through capturing dynamic information, I think we need to use the full range of approaches. >> And this is a brief note. For those who must disconnect at 3 p.m., thank
you so much for attending. If you are able to stick with us, we are going
to answer the last three questions that came into the meeting online. So I’m going to go ahead and read those out. The first reads, “Should those of us involved
in classic technology assessment in medicine think about the use and dissemination of genomic
technologies any differently than we have with non-genomic technologies/interventions? If so, why?” >> So I would say I agree with the sort of
what’s underlying this is that we shouldn’t necessarily consider genomic medicine as an
exclusive technology. As a matter of fact, I tend to think that
we have not thought about the same rigor of implementation
science with other technologies, such as imaging. But the only caveat here, I think, is the
issue around data security and privacy with respect to one’s genomic information. That is, genome sequencing information has
another layer on it that is going to have to be addressed to achieve, you know, full
implementation. >> And I think ideally we’re using the sort
of mutual advantages of implementation science conducted outside of genomics and genetics
to improve our understanding there and vice versa, and so I really see it as, yes, with
particular treatments, particular interventions, particular types of data, there are some specifics
that one has to be thinking about, but overall, we should be learning that some of the same barriers
that Geoff had mentioned, as well as some of the same strategies to overcome them, are
consistent, whether one is specifically focusing on genetics or genomics or thinking about
other interventions. >> Great. “Could you discuss a bit more about your thinking
about implementation science in precision medicine versus in prevention in population
approaches to cancer control?” >> So I think that’s a great question. I think that, overall, they’re
ultimately going to converge, because I think we’ve seen, even from the series
that this specific webinar is a part of, as well as from broader thinking about what precision
medicine is, that it shouldn’t preclude the idea of focusing on the prevention side. So I think the key is trying to get a sense
of what are the key questions in each area and how do we go about optimally answering
them? I don’t necessarily think that there is a
different world between implementation science in say genetics and genomics from that in
other areas, and I think the benefit of seeing the work and seeing the number of institutes,
for example, that are involved in our trans-NIH program announcement and involved in our
annual meeting is that you see common threads, common discussion across the board. So I don’t know specifically from my perspective
that there is a totally separate agenda, but rather, to say can we be coming together,
and what are the priorities? What are the capacities that we need to build,
and what are the questions that we need to answer? >> And I would just say I’ve learned a lot
from both Muin and David about this point. You know, the agenda for implementation science
into a health care system is one agenda, but the agenda for implementation science into
public health to affect populations is a somewhat separate agenda. They have certain commonalities. Whether they have to be done in sequence or
can be done in parallel, I tend to believe the latter, that we should be thinking about
how to do public health implementation of genomic medicine technologies while we’re
working with complex health delivery systems, and hopefully, as David said, the two will
converge. >> This is Muin here, just to add to what
Geoff and David said, this whole webinar series is part of a long-term dialogue between medicine
and public health focused on how to bring precision tools to the population level. So there’s really no different agenda. Sometimes concepts converge, but what we’re
trying to do is, at the end of the day, better cancer control and prevention or any other
disease control and prevention, and that would be a good segue for our next webinar on October
19 where we explore more fully the idea of precision public health. So stay tuned. So there’s one more question before we go. >> That was an excellent plug, Muin, and with
the last question of today’s session, this is regarding the Global Genomic Medicine Collaborative. Excuse me if I messed that up. The question reads, “Is the Million Genome
Project interoperable with the 100,000 Genomes Project? Are collaboration members collecting the same
family history data?” and continues, “That’s important in
rare diseases where our relatives overseas may hold genomic keys.” >> Yeah, thanks for that question. So I think the key word in this question
is interoperable. If I made anyone believe that there is a Million
Genomes Project, that’s not really what I meant to say. I meant to say that all of these independent
efforts are going to result in millions of genomes that should be interoperable in terms
of the types of data that can be shared across these nations, and the question about whether,
you know, all these groups are using the same clinical assessment tools, phenotyping tools,
family history tools, is a huge one. If we’re going to create the global knowledge
base for genome-based discovery or implementation, then we’ve got to get the global communities
to come together and really agree on the standards for acquiring the data, standards for disseminating
data, and of course, the tools that should be used for clinical phenotyping. So we’re a long ways off from that, but I
wanted to at least indicate that we had our toe in the water and that there was a lot
of interest and motivation on the part of the stakeholders across the globe to do this. >> Wonderful, and thank you so much, and thank
you to everyone for sticking with us. Again, I want to thank Muin, David, and Geoff
for their time today and to you for joining us. You may disconnect at this time. >> Thank you.
