The Digitalization of Healthcare: A Status Report for American Health Information Technology



Good afternoon, everybody. I'm Steve DeMello, head of healthcare here at CITRIS. Welcome to the Research Exchange, and welcome also to our web viewers today. Thank you particularly to the UC Berkeley attendees for your pre-registration; it really helps us with lunch setup and with logistics for the day. One other note before our speaker: the i4Energy talk this Friday, in room 250 Sutardja Dai Hall, is "Sustaining a Green Campus" by Lisa McNeilly, who is the UC Berkeley campus's first director of sustainability.

It's my great pleasure today to introduce Michael Minear, the chief information officer at the UC Davis Medical Center. He is a national leader in healthcare information technology with an extensive record of leading transformations of large, complex organizations in the use of medical IT. Mike is responsible for developing and executing a technology strategy that supports the health system's four missions of clinical care, research, education, and community engagement. Mike has worked in the healthcare industry for over 34 years. He has held the positions of senior vice president and CIO at the University of Maryland Health System, vice president and CIO at Park Nicollet Health Services in Minneapolis, CIO of the University of Minnesota Hospital and Clinic, and vice president of medical systems at Medicus, where he co-designed and managed the development of the first commercial executive information system for healthcare. So please join me in welcoming Michael Minear.

Can you hear me? Thank you. As was just shared, I'm the CIO at the University of California, Davis health system. I have also taught at the Johns Hopkins school of public health since 2001, and I just hit a milestone where I've taught over 500 grad students from 18 countries, so I've enjoyed that a lot. What I'm going to talk about today is a few slides of context on why we should care about bringing digital technology to health care, the current status of health IT, and then two of the many drivers of innovation and change in health IT we could discuss: the secondary use of EHR content and population data, and our push to create a more modern standard around phenotype data to link with genotype data. Then I have ten areas of ideas for research and innovation that you might be interested in.

Just to set some context: if you look at health expenditure per capita in this country compared to other countries, we're on the far right, much higher than the other countries, so the cost of our care is much higher. Unfortunately, the quality of our care, the quality of the outcomes, is often much lower. This cost really impacts people in this country. The bottom set of lines indicates workers' earnings in America and overall inflation, which track fairly well together, and the top two lines are health insurance premiums and workers' contributions to premiums. So the gap, even since 2000, between the cost of health care and the rise in wages and inflation is very dramatic, and very scary for our economy going forward. Further, the cost of health care is so high that the leading cause of personal bankruptcy in the United States is unpaid medical bills, and the death rate in any given year for someone without health insurance is twenty-five percent higher than for someone with insurance. That seems pretty striking. So a lot of what we're trying to do is bring efficiencies to healthcare, to get more quality outcomes for the investment we make, and hopefully to do this for less money, so
it doesn't have such a negative impact on the people in this country and on our economy. I thought it was interesting that right after World War II, in 1946, there was a catalyst funding program by the federal government for American health care called the Hill-Burton Act. It doesn't seem like a lot of money now, and these are old dollars, not current dollars, but about $4.6 billion in federal Hill-Burton grants and about one and a half billion in loans were given to the healthcare industry. Basically, through the Depression and World War II the physical plant of hospitals had really deteriorated, and this money was used to rebuild the hospitals of America. In return for these federal funds, the hospitals agreed to provide free or reduced-charge medical services to persons unable to pay, and to make services available to all people.

If you fast forward to 2009, part of the $787 billion ARRA legislation was the HITECH Act, which brought $22 billion to healthcare IT. Much like Hill-Burton, we're investing in America's health care, but rather than building bricks-and-mortar buildings we're building infrastructure such as networks and clinical software. In return for these federal funds, hospitals and providers agreed to acquire certified EHR technology. It's only in the last three or four years that you could even get a certified EHR, meaning that it provides certain functionality, has certain levels of security, and has the ability to exchange data with other EHRs. We as health providers also have to agree to create a complete clinical record. For example, you might be amazed at how many clinical records in the past did not include a patient's problem list or current medications, so there is now a long list of data that has to be included in a patient's record during an encounter, and a lot of that is essentially federal regulation. We also have to create a number of clinical measures, things like a diabetic clinical measure and many others built on National Quality Forum standards. In years past you would not have seen that clinical data in an EHR, and if you wanted a sense of a population's clinical measures, diabetic rates or whatever, you would have to do the sad task of a paper chart audit, which took months and months and was woefully inaccurate. And we have to agree to share EHR content with other providers; that was a fairly weak requirement in the early stages of this program, but it's now ratcheting up quite rapidly.

So roughly nineteen billion dollars of the 22 is going to hospitals, physicians, and a few other providers who achieve what I just summarized, and about two billion went to a number of other programs, some of which create health information exchange and so on. At the same time, the private market is investing massive amounts in health IT: it was estimated last year that $73.1 billion would be spent this year, and by 2014, $85 billion a year. So one of the questions is, with all this federal money and all the private money, will it do any good? Will it actually improve our industry, and will we actually get better data to support clinical care and research? This has had a significant impact on California. The stimulus money started in 2010 and will continue for up to seven, and in some cases a few more, years, but the impact on California healthcare could be in the billions of dollars, and a big chunk of that has been earned by hospitals and providers.
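Those clinical quality measures are exactly the kind of thing that becomes straightforward once the data is structured rather than sitting in paper charts. Here is a minimal, purely illustrative Python sketch of the idea; the rows, the toy measure logic, and the cutoff date are all hypothetical assumptions, and a real measure would follow the detailed NQF/CMS specification rather than this simplification.

    # Illustrative only: a toy "diabetic patients with a recent HbA1c" measure
    # computed from structured EHR extracts, in place of a manual chart audit.
    from datetime import date, timedelta

    problems = [                                   # hypothetical problem-list rows
        {"patient_id": 1, "icd10": "E11.9"},       # type 2 diabetes
        {"patient_id": 2, "icd10": "I10"},         # hypertension
        {"patient_id": 3, "icd10": "E11.65"},      # type 2 diabetes, hyperglycemia
    ]
    labs = [                                       # hypothetical lab results
        {"patient_id": 1, "loinc": "4548-4", "date": date(2012, 11, 2)},   # HbA1c
        {"patient_id": 3, "loinc": "4548-4", "date": date(2011, 1, 15)},
    ]

    def diabetes_a1c_measure(problems, labs, as_of=date(2012, 12, 31)):
        """Share of diabetic patients with an HbA1c result in the prior 12 months."""
        diabetics = {p["patient_id"] for p in problems if p["icd10"].startswith("E11")}
        window_start = as_of - timedelta(days=365)
        recent_a1c = {l["patient_id"] for l in labs
                      if l["loinc"] == "4548-4" and window_start <= l["date"] <= as_of}
        return len(diabetics & recent_a1c) / len(diabetics) if diabetics else 0.0

    print(f"{diabetes_a1c_measure(problems, labs):.0%} of diabetic patients have a recent HbA1c")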
While this slide is about consumer products, I think it's an interesting analysis. If you look at what are called the diffusion rates of technology, on the left you see electric service, in the middle a much longer adoption curve for washing machines, and the VCR on the right was adopted almost immediately. What's clear is that whatever technology you're looking at, it takes a while to achieve one hundred percent adoption in this country, and every technology has its own rate of adoption. Now look at what the federal government is trying to do: RAND estimated that by roughly 2002, at least in acute-care hospitals, a modern EHR was in use at about thirty-two percent, and from that 2002 estimate the federal government is trying to push adoption to one hundred percent by 2014. That is a steep curve of technology absorption. A lot of people have shared frustrations about how slowly healthcare adopts technology, but in reality that's a fairly steep adoption rate right now, although to be sure, in prior years the adoption rate was woefully slow.

There is some evidence that adoption is speeding up. One of the hardest parts of the healthcare industry to automate is physicians' offices; these might be one or two physicians, or twenty or thirty, not a hospital or a big clinic. Back in 2001 only 18.2 percent of American physicians' offices had any kind of EHR, an electronic health record system. If you look at 2009-2010, when the ARRA legislation funding started, there is a pretty dramatic increase, up to an estimated 71.8 percent in December of 2012. That's impressive, but the sad truth is that the green line shows that a basic system, meaning one that is not deeply deployed, is still the predominant use of EHRs within that 71 percent. It's one thing to deploy the technology; it's quite another to use it in a sophisticated way, which is part of where the federal regulations are pushing us.

There have been really dramatic changes in health IT. In my opinion it really started back in the late 1990s, when the Institute of Medicine published a number of reports, To Err Is Human among them, noting that hospitals and healthcare kill more people through medical error than the total of our car accident deaths in this country. A lot of efforts and a lot of groups came out of that recognition of errors and the damage they cause people: the Leapfrog Group has been pushing adoption of technology and safe practices, along with the Institute for Safe Medication Practices, the Joint Commission, and so on. We've also seen, in the last ten or so years, a major push on quality improvement; the Institute for Healthcare Improvement, IHI, out of Boston has done a lot, including a major campaign to reduce patient deaths and patient errors. A lot of federal effort now is focused on comparative effectiveness: if you give a healthcare treatment, is it really working, and is it better than other options? You've probably read, unfortunately, about privacy and security breaches in healthcare in just the last week; as we become more automated, a lot of people in health care have not really secured the information. And we've begun a whole new effort to certify our electronic health records, as I touched on earlier, which is not
perfect, to be sure. But before, you bought clinical software and it was really buyer beware; now, if you buy a certified system, much like seeing a UL label on an electrical appliance, you have some confidence that it has been tested and works at some level. We talked about the massive investment, and there really has been an advance in clinical software. Over the last 10 to 15 years the software we used to deploy in healthcare, to be frank, wasn't that good, and I could have a whole other discussion on why it wasn't very good. It's not perfect by any means, but it certainly has gotten a lot better.

And there's our ability, finally, to interoperate. Literally yesterday at the UC Davis health system we turned on an interface to formally become part of a national health information exchange network called the NwHIN Exchange, and on the first day we shared eight patient records with the Social Security Administration. I have some younger programmers, and of course most people are younger than I am, and they told me, well, the interface went live and we shared a few patient records. I said, come on, I've been trying to do that for 20 years. They seem to take it for granted, but we're actually sharing data now at a national level. It's still somewhat small, but we're finally starting to share. Interestingly, on the Social Security Administration example, why are we sharing data from Sacramento with the Social Security Administration? Someone has applied for a disability claim, and it used to take literally two to four months for that claim to be reviewed and approved, if it was approved. Now the Social Security Administration is doing that in roughly 48 hours. So the sharing of clinical data can have a huge impact on people.

That leads to a definition of the new reality we're working under. A lot of informatics issues we used to debate, such as whether we really should have problem lists and current medications in a patient record, are now essentially federal, and in some cases state, regulation, so my life is very different than it was 20 years ago. Most health organizations are now dependent on technology; in a sense, our clinical organization essentially runs on the disks in our data center. We went digital in terms of our clinical record in December of 2010, and like us, many hospitals and clinics struggle to be redundant enough and to make the investments that match the dependency we now have on this software and technology. At the UC Davis health system we've spent over six million dollars on security technology in the last four years, and we still have more to go; I grew up in health care, but there are days I feel like I work for the NSA with all the security technology we have to run. It's truly a different world. As mentioned, the federal government is a catalyst with its investments, and we're seeing, at least in the clinical research area, a new definition of how to compete for clinical research. A lot of grantors, the NIH included, are not really interested in giving a faculty investigator money to build an infrastructure of databases and EHRs; they feel you must already have that in place in many cases, and the grant then leverages the infrastructure that's there. So at any large academic health system there is a lot of pressure to have not only clinical technology in place, but to link it and also have
research technology in place. Even the accreditation requirements for medical schools, and for the residents we manage after medical school through different teaching organizations and opportunities, are really dependent on having sophisticated technology. And I'm going to talk later about data curation: one of the dirty secrets in healthcare is that the quality of the data we've collected is frankly not that good, and if we want to make use of the data to improve clinical care and support research, we have to do a much better job.

One of the big things we're focused on at UC Davis is secondary use of EHR content. Certainly you first have to have a digital record; in our case we started deploying the Epic EHR in 2002, and as I said, we went digital in 2010. UCLA, for example, will take Epic into production on March first, only seven weeks away. What we've learned is that you really have to do the hard work of deploying this modern technology, and only then can you stand back and ask how you can use the data differently. I'll share a few examples of how we've done that, and, in a very exciting way, how we're now doing it across the University of California together.

If you look at all the clinical care technology we've deployed over the last 11 years, the UC Davis health system has spent 163 million dollars, and UCLA, UCSF, almost anyone of that size, is spending equivalent amounts of money. Unfortunately, if you look at disease registries, quality management data sets, research data sets, and the partnerships we do in research, each sits in its own silo. For example, we have a partnership with one of our burn surgeons: she has 26 million dollars in American Burn Association grants, and we run a data coordinating center at Davis for 24 burn centers. Increasingly a lot of our research is multi-site, with all the complexity that goes with that, especially when different sites collect data differently: clinical care in its silo, clinical trials in another silo, and the same for how we share data with the government and so on. One of our goals is to take those lines and make them disappear, so that if data is created for any purpose it can be reused in support of our missions. AMIA, the American Medical Informatics Association, defined secondary use of data some years ago. Basically they said we need to increase transparency of data use; focus on data access and use versus ownership of data; set privacy policies for secondary use (for example, when Sutter Health shares data with us to care for a patient, Sutter has made it very clear that we cannot use that data at Davis for research, so whether data shared for care can be reused is a whole separate question); increase awareness of the benefits and challenges; and build a taxonomy for secondary use. We have a lot of challenges around the use of modern ontologies and vocabularies, period. But basically you take the EHR content, plus other data such as clinical images and content that doesn't specifically sit in the EHR, recognizing that EHRs typically store data in multiple ways; you do some level of data transformation; and then there are all kinds of secondary uses you can support.

To show you an example: we took our Epic EMR, whose native database is Caché, or MUMPS, and there's a copy of that data in SQL, Oracle in our case, which is more of a data repository data model. There are 2.1 million patients that we have in Epic. The transformation in this example is to de-identify the data, and we load it into an i2b2 application.
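As a flavor of what that de-identification step involves, here is a minimal Python sketch. The field names, the salt handling, and the date-shift scheme are hypothetical simplifications; a real pipeline would follow the HIPAA de-identification rules and keep the re-identification key with an honest broker rather than in the code.

    # Minimal sketch: strip direct identifiers before loading records into a
    # research repository. Field names and the salt are illustrative assumptions.
    import hashlib
    from datetime import date, timedelta

    SALT = "held-by-the-honest-broker"   # hypothetical secret, never stored with the data

    def deidentify(record: dict) -> dict:
        """Replace the MRN with a one-way hash, shift dates, and drop names."""
        pseudo_id = hashlib.sha256((SALT + record["mrn"]).encode()).hexdigest()[:16]
        # A per-patient date shift derived from the hash, so it stays consistent
        # across that patient's records but differs between patients.
        shift = timedelta(days=int(pseudo_id[:4], 16) % 61 - 30)
        return {
            "pseudo_id": pseudo_id,
            "birth_year": record["dob"].year,                 # keep year only
            "encounter_date": record["encounter_date"] + shift,
            "diagnoses": record["diagnoses"],                 # coded content passes through
        }

    raw = {"mrn": "000123456", "name": "Jane Doe", "dob": date(1970, 4, 2),
           "encounter_date": date(2012, 6, 14), "diagnoses": ["E11.9", "I10"]}
    print(deidentify(raw))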
i2b2 was written at Partners and Harvard in Boston from a 20 million dollar NIH grant. We put a different user interface over the same 2.1 million patients, but in this case, because the data is de-identified, we can give it to any investigator or any faculty member, and they can search for clinical cohorts all they want. We have an IRB approval over the whole process, so an investigator does not have to get IRB approval for every query. What this tool essentially does is find cohorts. If they find a cohort of interest and want to do a study, they need the identified data; we save the query, go back, and give them the identified data, and at that point they do need an IRB approval, which we've done many times. The user interface for the cohort discovery tool looks like this: using all the clinical data as search terms, we can apply Boolean logic, pull different parts of the clinical content, and create anything from very simple to very sophisticated queries. What cohort discovery fundamentally gives you is patients. Some of our clinicians build very intense algorithms or queries to find a cohort: I want this gender and age, I want a certain diagnosis, I want patients who took this medication, I want patients who did not take their medication. We also have a series of faculty champions: if we have new faculty, or grad students new to the tool, and we do this a lot for our grad students, then instead of having a programmer sit down with them, a faculty member volunteers to give people an overview.

In 2009, finalized in 2010, we published an article with the University of Washington, UC Davis, and UC San Francisco in our main informatics journal about a federated use of this technology. What we did was leave the data at those three universities, use the software Harvard wrote, called SHRINE (we actually used it in production before Harvard did, which was kind of interesting), do some minor data normalization, and run a query against those three databases. In something like 30 seconds we defined a cohort. When we demonstrated it at UC Davis in February of 2010, in an auditorium like this one, somebody raised their hand and said, thirty seconds, that's a long time. And I said, come on, you couldn't even do this before, and now 30 seconds is too long? But that was neat. What we've built from that is to do this across the University of California. We have five campuses with patient data, and four of us, five after March first when UCLA goes live, use Epic, which is one commercial vendor's EHR. That doesn't mean UCLA and UCSF and Davis have the exact same configuration of Epic, but it's pretty close; Irvine uses Eclipsys. And even though UCLA is still bringing up Epic, they do have a database they've created with some legacy data. So we have a cohort discovery query that runs against our five data sets, and again, the data stays local at each campus. We started with 8.1 million patients, and as we add UCLA it's now 11.8 million, which represents about one in every three Californians. We've been able to run queries against all five campuses, and obviously if we're competing for a grant, or we need volume of patients with a certain condition, we have a much better chance of doing it together. This is managed through our CTSAs and some other PIs, together with the Vice Chancellors for Research at our health campuses.
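Conceptually, a federated cohort count like that can be sketched in a few lines of Python. The endpoint URLs, query shape, and JSON response below are entirely hypothetical, and this is not the SHRINE API, but it shows the basic pattern: the query fans out, each site answers with a count computed locally, and only aggregate numbers ever leave the campus.

    # Conceptual sketch of a federated cohort count; URLs and payloads are made up.
    import json
    from urllib.parse import quote
    from urllib.request import urlopen

    CAMPUS_ENDPOINTS = {                      # hypothetical endpoints for illustration
        "UC Davis":     "https://davis.example.edu/cohort/count",
        "UCSF":         "https://ucsf.example.edu/cohort/count",
        "UCLA":         "https://ucla.example.edu/cohort/count",
        "UC San Diego": "https://ucsd.example.edu/cohort/count",
        "UC Irvine":    "https://uci.example.edu/cohort/count",
    }

    cohort = {                                # a simple Boolean cohort definition
        "include": [{"icd10": "E11*"}, {"medication": "metformin"}],
        "exclude": [{"age_lt": 18}],
    }

    def federated_count(endpoints, cohort):
        totals = {}
        for campus, url in endpoints.items():
            try:
                with urlopen(url + "?q=" + quote(json.dumps(cohort)), timeout=30) as resp:
                    totals[campus] = json.load(resp)["count"]   # only a count comes back
            except OSError:
                totals[campus] = None          # site unreachable; report the gap
        return totals

    counts = federated_count(CAMPUS_ENDPOINTS, cohort)
    print(counts, "total:", sum(c for c in counts.values() if c))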
One of the interesting things about California, and this is probably not a surprise to you, is that if you look on the left at the top 25 most diverse communities in America from the 2000 census, these red stars are in central California. The area around San Francisco, Vallejo, and Sacramento is essentially the most diverse area of America; there was an article in Time magazine four or five years ago that said Sacramento was the most diverse community in America. To be sure, we have 20 languages that our patients speak and that we have to supply interpreters for, and there is some indication that up to 40 languages exist in our patient population. So one of the unique things about these 11.8 million patients is that they represent some of the most diverse people in the country, and certain kinds of research require particular racial, ethnic, or other attributes in the population you're testing a drug on or trying to understand the impact of a genetic mutation in. That makes our data somewhat more interesting and valuable. We work with the Mayo Clinic, and their database is ninety-six percent Caucasian; we work with Vanderbilt, and their database is forty percent African-American; but none of these databases will likely ever be as diverse as ours, which has great value for clinical research.

Compare that to some commercial activities. The Cleveland Clinic spun out a company called Explorys; two years ago they had the goal of a database of 12 million people, it was only 9 million a year ago, and now they claim to have 31 million of what they call cared-for lives in their database. If you look at their site and talk to them, most of the patients come from community hospitals around the country, plus the Cleveland Clinic, which is obviously not a community hospital. One of the things that really disturbed us is that initially they were going to sell the data to drug companies for research. Some of that selling of data is now specifically restricted under the HITECH revision of the HIPAA regulation, and many of us have felt it would be wrong to do anyway. We may partner at UC with drug companies, but it's typically where our faculty are jointly doing research and trying to accomplish something together, as opposed to just selling data, which I think is something we could never do at Davis. There are many other efforts: Kaiser, for example, is very active in something called the HMO Research Network, which has about 9 to 10 million patients, and we're actively looking at joining that now with Kaiser locally. So I think the message is that if you want to do certain kinds of clinical research, you have to have large databases; they typically have to be de-identified, with the ability to be re-identified if needed and approved for certain kinds of research; and there is kind of an arms race now to see who can build a competitive position with these databases, because certain kinds of new research won't be possible without them.

We've done a second type of secondary use: in addition to the cohort discovery I've talked about, we are building a type of meta-registry at UC Davis. In addition to taking the 2.1 million patients from Epic, and in this case the data is identified, not de-identified, we also load legacy disease registries. There's a long story there, but a lot of these are essentially databases that were designed in the early 80s.
Some of them literally still run on FoxPro; many of you in the room are old enough to know what FoxPro is, but trust me, you're lucky if you never had to use it. So we're taking some of the knowledge and experience we gained and replicating it for a different purpose. We've created a type of tethered meta-registry: the term tethered means it's interfaced to and constantly updated by the modern EHR, again the 2.1 million patients, and it's a meta-registry, meaning we load all these disease registries into one data model, as opposed to creating a side database for each one as we've done for twenty or thirty years. We've already created a cancer registry; this is the same data we send to the state cancer registry, which goes on to the National Program of Cancer Registries managed by the CDC and the National Cancer Institute, and in the last six months UC Davis took over a grant under which we actually manage the State of California's cancer registry, so we're doing a lot of work in cancer. We've created a radiation dose registry, a registry of all patients who have a CT image, tracking basically how much radiation those people received; we actually found a patient who had 99 CTs, which seems like an awful lot. We have a sepsis registry and a diabetic registry, and we're building others. What's really interesting now is that we can start to look at the intersection of these registries, because they sit in the same data model: what about our cancer patients who got sepsis in the hospital? In the past, even if you had created that data, you often could not integrate it because of the siloed data models and approaches. Other campuses are taking different approaches; UCLA, for example, is reusing the data for clinical analysis and quality management in its own way.
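To make the shared-data-model point concrete, here is a toy illustration in Python with an in-memory SQLite table. The table layout and rows are hypothetical and far simpler than a real registry, but once everything lives in one model, a question like "which cancer patients also got sepsis" becomes a single join rather than a project to reconcile two siloed databases.

    # Toy illustration: several "registries" in one shared data model, so that
    # intersections are a simple query. Schema and rows are hypothetical.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE registry_entry (
            patient_id INTEGER,
            registry   TEXT,      -- 'cancer', 'sepsis', 'diabetes', 'ct_dose', ...
            onset_date TEXT
        );
        INSERT INTO registry_entry VALUES
            (1, 'cancer', '2011-03-01'), (1, 'sepsis', '2012-07-19'),
            (2, 'cancer', '2010-11-23'),
            (3, 'sepsis', '2012-01-05'), (3, 'diabetes', '2009-06-30');
    """)

    # Cancer patients who also appear in the sepsis registry: one join, no silos.
    rows = db.execute("""
        SELECT DISTINCT c.patient_id
        FROM registry_entry c
        JOIN registry_entry s ON s.patient_id = c.patient_id AND s.registry = 'sepsis'
        WHERE c.registry = 'cancer'
    """).fetchall()
    print("cancer AND sepsis:", [r[0] for r in rows])   # -> [1]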
To move on quickly to phenotype and genotype: certainly we all have genotype information, and it expresses itself in phenotype data. The genotype is the genetic constitution of an organism or cell; it also refers to the specific set of alleles inherited at a locus. I'm not a geneticist, but we are actually hosting, with the Beijing Genomics Institute, a very large sequencing lab we're building at the UC Davis health system. We have four next-gen sequencers running now in a temporary lab and will have 15 or more running in probably six months in the genetics lab we're building. There is a massive amount of data, and the next-gen sequencers are creating new kinds of data for which you have to rework a lot of your algorithms and bioinformatics. The phenotype is the observable physical and biochemical characteristics of the expression of a gene, the clinical presentation of an individual with a particular genotype, so much of healthcare data, you could say, is phenotype data. At our Davis campus we have mouse clinics and do a lot of mouse research; like Jackson Labs, we provide knockout mice to people around the world. They have the genetic material, or genetic sequencing, of the mice, and the mouse clinic basically collects phenotype data: color, behavior, attributes, and so on. It's basically the same kind of thing, but what we're trying to do is link genotype and phenotype more closely to create some breakthroughs, hopefully.

Technology development has certainly driven genetics, but a lot of technology challenges remain in the measurement of cell- and organism-level phenotypes. The truth is that in most labs, if they have a genotype data set they define their own phenotype data model, so if you want to share data across that research you have the siloed approach again, and typically not much sophistication in how the data is defined. The integration of genotype and phenotype content requires annotation and correlation of genetic and genomic information with high-quality phenotype data, and I think it's fair to say we're still defining what high-quality phenotype data would look like. Viable electronic health records, systems capable of handling family history and genetic data, are required; the way most EHRs store family history is simply not adequate for genetic family histories and family trees. So there are a lot of gaps between the clinical informatics we have now and how we can create and leverage genomic data for managing care at a personalized level. If we understand the genetic makeup of a person, then when we go to prescribe a drug we should increasingly know whether that drug will be effective for that person. Many of the drugs we use now literally do nothing of value for the patient, or worse, just give them side effects, so a lot of our interest is in prescribing medications that will actually work on that genome of one, the patient in front of us.

The NCBI has one tool I thought was interesting to show, a phenotype-genotype integrator. You can search for a trait; in this case I searched for Alzheimer disease. In the clinical world, when we talk about Alzheimer's disease we want to code it in some way: what's the ICD-9, what's the ICD-10, is there a SNOMED or LOINC definition, so we can be very precise. But you drill down for Alzheimer's and find traits, which are essentially diseases, and genes, and the way this NCBI tool defines a phenotype is a text summary and a set of terms. Again, I'm looking for the SNOMED code, the LOINC code, or the ICD-10 code so I know I'm being precise. And when they list the phenotype data, as they define it, for Alzheimer's, they have much richer data, frankly, on the genotype side; it goes on for pages and pages, actually getting down to the letters of the genetic sample. So the question is: can we tap this clinical infrastructure we've spent so much money on in this country, and can that be the phenotype data we link to the genotype data?

One effort that has some NIH funding is the PhenX Toolkit. PhenX has created a set of domains, kinds of diseases like cancer, diabetes, and cardiovascular; these aren't exactly how we would define them in health care, but they're close, and there are, I think, 20 of these domains. You can choose one, like cardiovascular, and drill down to some very simple text terms. These are not what I would call encoded or precise terms, and the question is which vocabularies are used to define them. There are probably 30 ways you can talk about angina or heart pain or heart attack or suspected heart attack, so these kinds of terms are very imprecise. The question is whether these data elements can be coded, and whether they are in the electronic health record. What we're trying to do, from the top down, is go from a modern electronic health record and create a refined phenotype data set, which in my opinion doesn't really exist yet.
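As a small sketch of what a more refined, coded phenotype element might look like compared to free text, here is an illustrative Python fragment. The tiny term map and the sample identifiers are hypothetical; real mappings would come from SNOMED CT, ICD-10, and LOINC terminology services, not a hand-built dictionary.

    # Illustrative only: turning a free-text phenotype term into a coded observation
    # that can be linked to a sequenced specimen. The TERM_MAP is a toy assumption.
    from dataclasses import dataclass
    from typing import Optional

    TERM_MAP = {
        "angina":       {"snomed": "194828000", "icd10": "I20.9"},
        "heart attack": {"snomed": "22298006",  "icd10": "I21.9"},
    }

    @dataclass
    class PhenotypeObservation:
        patient_id: str
        text: str                               # what was actually written
        snomed: Optional[str] = None            # coded meaning, if resolvable
        icd10: Optional[str] = None
        genotype_sample_id: Optional[str] = None   # link to a sequenced specimen

    def refine(patient_id: str, text: str, sample_id: Optional[str] = None) -> PhenotypeObservation:
        codes = TERM_MAP.get(text.lower().strip(), {})
        return PhenotypeObservation(patient_id, text, codes.get("snomed"),
                                    codes.get("icd10"), sample_id)

    print(refine("P-0042", "Angina", sample_id="SEQ-2013-0007"))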
Creating it will require a lot of change, and we need a lot of change here. From the bottom up, you have the raw sequence data, and you get to certain levels of summarization and visualization of that data; the question is where we can match a refined phenotype with, most likely, a higher-level visualization of that genetic data. This is done in some labs, in some narrow ways, but for the average patient it really is not done. What we think of as the innovation intersection is that you have to come from the EHR to a refined phenotype data set, matched against any number of options for how the genetic data could be summarized. It really means we have to go back to clinical documentation, where a lot of physicians still document pretty much as they did 20 years ago while some are doing it in a much more modern way, and use that documentation, which frankly we spend a lot of money to create, for things like research, clinical trials, and disease registries. For 20 or 30 years in American healthcare we've used this documentation essentially to get paid, and it's a sad statement that we really need to put much more emphasis not just on getting a bill out the door so the hospital can get paid, but also on supporting these other clinical and research goals. We need much more sophisticated clinical documentation: we have to eliminate the use of free text (a lot of EHRs are still mostly text), use far more encoded data elements, and leave no gaps; if we've documented a clinical encounter on a patient, it has to be completely filled out. We need new approaches to data quality, and we need to modernize the disease registries, as I've alluded to. New, or at least new to healthcare, are things like biocuration: since we have increasing dependence on this clinical data, how do we make sure it's accurate and complete? We have a long way to go. Some of what we've learned in the research world about making research data sets very accurate, very precise, and auditable we are now bringing back to the clinical care world.

So, some opportunities for research and innovation; I think these ten areas are ripe for research and development. Number one, enhance current clinical software to adequately support phenotype requirements, and then, once you have an adequate phenotype data set, create new kinds of linkages between that phenotype-enabled clinical system and genotype content. Number two, define data structures for biospecimen repositories linked with genotype and phenotype databases. To give you a sense of the complexity here: if you have bladder cancer and you come to the UC Davis health system, our physicians will probably take your bladder out, remove the tumors, and literally start to implant pieces of those tumors in knockout mice, so they grow your tumors in mice. A mouse will then have a large degree of similarity to you and your cancer, and they give those mice chemotherapy and other treatments to see what is effective. We may put a patient on a certain drug and he or she may do well, but a lot of these very effective drugs lose their effectiveness, and the oncologists then know, because of the work done on a number of these knockout mice, which drug would work best for you next. It's pretty amazing.
But the notion of keeping biospecimens, your tumor cells, keeping track that we've put them in certain knockout mice and are growing your tumors in those mice, and then linking that back to new specimens or new lab tests for that patient, is getting quite complex. To say there's a line between oncology care and research: it doesn't really exist anymore; it's all research and care at the same time.

Number three, transform patient-level clinical data into sophisticated population data sets; EHRs really do not create population data sets, they create records about one patient and one encounter at a time. Number four, enhance our clinical data with advanced evidence-based knowledge. We're starting to do that for things like identifying sepsis in an inpatient and alerting people; when we've identified a patient who is pre-septic or septic, and sometimes we have patients admitted to the ER with sepsis, we have evidence-based order sets and can quickly execute them. In our case we've reduced the incidence of sepsis over twenty-six percent in the last six months, which has been fairly dramatic. Our chief medical officer would say we had not been able to move some of our quality improvement needles for many years even though we tried, but with new evidence-based algorithms and digital clinical data we've been able to make some pretty dramatic improvements in how we treat patients. Number five, create and maintain a phenotype standard; there is really no standard for phenotype. Number six, create community research networks and integrate them with provider EHRs and research support infrastructure. Number seven, define new models for disease registries: if you want to study a disease, do you really want to recruit subjects and gather data for some time before you can start your study, or do you want to go to the EHR at UC Davis, or the EHRs at all the UC health campuses, and immediately find a cohort you might be able to do certain kinds of research on? That's really the path we're on right now. Number eight, link emerging public health surveillance and registries. BioSense, as an example, created and managed by the CDC, is a one-way street: we send demographic data to BioSense for syndromic surveillance, asking whether the next SARS or the next bioterrorism event is out there, but we don't get any data back, and there should be a two-way street. Number nine, add clinical content sophistication and knowledge to mobile and other technologies. When I look at the iPhone applications around health, I think most of them frankly are a joke, but the technology is very powerful. Eric Topol, one of the physicians who found the Vioxx damage when he was at the Cleveland Clinic, has done some amazing work with iPhones basically equipped to be hemodynamic monitors; he can run a heart rhythm strip on an iPhone, and as he says, that kind of capability eliminates hundreds of millions of dollars of wasted effort that goes into massive cardiology labs and testing that takes a lot of time and costs a lot of money. And finally, number ten, leverage mobile technologies for diagnostic testing. Some of the most advanced and best technology we have, if you go into our cardiology labs, for instance, where we spend tens of millions of dollars on amazing imaging technologies, is delivered by vendors who won't let us patch the operating system, so it's some of the most insecure stuff we have in our hospital. The FDA forces these vendors to test and certify their technology.
But that certification is very expensive, and these vendors don't use some modern technologies very well; I'm talking more about the ability to share data and code data, not the amazing imaging they can do. So I think those are ten areas we could focus on, and in some ways we are, but there's a lot of work yet to be done.

Thank you very much. We do have time for some questions.

Question: Is all of that data from the campuses? I didn't realize that many patients were going through the UC systems.

Yes, the estimate now is 11.8 million, and again that goes back ten or more years, so it's kind of a total; we wouldn't see that many in a given year. Other questions?

Question: From the standpoint of the patient, where is their involvement and what is the feedback, so the individual person can take more ownership and understand what their conditions are and what the possible alternatives might be? In the way of engaging the patient, are there any best practices you can suggest?

What we do, and many hospitals and clinics do now, is have what's called a tethered personal health record. I didn't have time to talk about that, and it's kind of a whole other topic, but a patient, or a family member if the patient approves it, can log into our EHR and get most of the available data on them. They can see it themselves, and they can also copy it in a standard form and take it with them, or put it into a personal version like Microsoft's HealthVault. There's a lot of other data too: the Continua Health Alliance, a standards body, is taking digital heart monitors, digital blood sugar devices, and many others and allowing the data captured on those home digital devices to be sent to an EHR or elsewhere. There is also a lot of debate now around genetic samples a patient has provided. Vanderbilt is building a huge genetic database, and patients can opt in or opt out, but if you're admitted to Vanderbilt they're going to do a genetic profile on you, and a lot of the debate is that they may find something when they sequence your genome but they don't always tell the patient. There's a big controversy now about whether, if we learn something, we should tell the patient, and in some cases that goes back to how it was consented. There is a lot of effort now to make consents such that if you take my genetic sample, do some research, and find that I'm at risk for some disease, you should come back and tell me, but that is often not done. So it's a whole other set of discussions, but I think you're very right that it's something we need to do a lot better on, and patients should have much broader access, I think, not only to the clinical data collected on them but also to the research. A lot of the technology to share data is in place; the problem is policy.

Question: One particular aspect of that, which you alluded to, was selling information to drug companies, and you said you don't like the concept of that. What's to stop publicly funded institutions from making these large data sets publicly available for everybody, free of charge? What are the issues in getting to that point?

I think it boils down to patient consent; technically there's no barrier to it. There is a lot of controversy: even a week ago there were articles published about supposedly open genetic databases where the patients had been de-identified but had contributed their genetic material.
People have been able to link them with ancestry databases you can find on the web and re-identify many of those people, and I think that is sending a shockwave through a lot of the ordinary people who contribute their samples. But I think it's inevitable, a matter of when, not if, that you're going to have massive databases. People will look back in ten years at what we're doing at UC and say, wasn't that a quaint example; we're going to have massive databases of genetic material and genetic data linked to high-quality phenotype data, and then a young investigator, whoever, will be able to access that for whatever they're studying. I think the technology needs a little improvement, the phenotype definition needs a lot of improvement, and the policy and the consenting are really the emotional barrier people have right now. Other questions?

Mike, thank you very, very much. Thank you.
