Performance Assessment in Competency-Based Medical Education: Implications for Psychiatry

We have a wonderful speaker today, John Young, and as John and I go way back I'm going to let Kaz do the formal Department of Education welcome, because this is a Grand Rounds focused on education. But I want to give a personal note of introduction. We met when I was an assistant professor at UCSF and John was a resident, and he already stood out in terms of both his clinical enthusiasm and his really thoughtful way of thinking about the learner experience. Shortly after that we had a chance to work together on an initiative that our chair had put into place, which was to rethink the curriculum for the residency, particularly with a focus on bringing in more of the developments that were occurring in neuroscience and weaving them throughout the four years. What fun it was to work with someone so innovative and enthusiastic, who could speak from the learner perspective but could also pan back and think about how the field was changing, creating new ways of learning important new material. It ended up being very productive; the product we came up with was very useful to the department and actually ended up being implemented in some new ways. So I have a lot of affection for John on a personal level, and to have seen his career go in the direction of applying research and methodological rigor to the learning experience in psychiatry, and then to see him now in this vice chair position, rethinking and redoing a residency program and other learning experiences, has been a real pleasure for me. But personal intro over, and now to his long list of accomplishments.

Good morning, everybody. Thank you for being here. I'm Kaz Nelson, vice chair for education in the Department of Psychiatry, and it's my extreme pleasure to be
introducing Dr. Young today. I know Dr. Young from our work together on the national Executive Council of the American Association of Directors of Psychiatric Residency Training (AADPRT), where we served together for four or five years, and I've been consistently impressed with him as a person and in his professional career, and I really look to the path he's paving in education research.

A little bit more about Dr. Young: he oversees education for the Department of Psychiatry at the Zucker School of Medicine and the Zucker Hillside Hospital, which was also responsible for Dr. Visor's training, so we're grateful for that connection. He also directs the psychiatry residency training program at Zucker Hillside and chairs the curriculum committee, which oversees the four-year curriculum at the School of Medicine. He received his BA magna cum laude from Harvard University with a double concentration in social studies and the comparative study of religion. He earned a master's degree in public policy from the UC Berkeley Goldman School of Public Policy and a PhD in health professional education from Utrecht University in the Netherlands. He obtained his MD from the University of California, San Francisco School of Medicine, where he also completed residency training in general adult psychiatry. Prior to his career in medicine he did human rights work in southern Africa and Asia and served as legislative director for an assemblywoman in the California State Legislature, where he helped lead several important health policy initiatives.

So all of that is his training, and with that training he has held a number of esteemed leadership positions and conducted a significant amount of education research focused on the intersection of medical education, patient safety and quality improvement, and performance assessment. In particular, he's been a national leader in handoffs education and research, cognitive load theory, the July effect, and
competency-based assessment, with publications in the top journals of the field. And as funding becomes more of an issue for research, particularly education research, for which there are significantly fewer grant mechanisms, he is one of the leaders in obtaining grant funding for educational research, with over $200,000 right now supporting his initiatives from the American Board of Psychiatry and Neurology. It just doesn't get much better than that, so we're delighted to hear from you today.

Thank you; I'm definitely blushing. I'm very excited to be here. As you heard, I'm a former resident of Sophia's, and she's really been an inspiring role model for me as a leader, researcher, and educator who somehow does it all, and it's really great to be here and see her in her new digs in Minneapolis, Minnesota. I've also had the chance to spend some time with the education team here so far, and I'm really impressed by your leadership, Kaz, and Laura and John, and then of course my former resident, who seems to be blossoming here. Your former residents you always feel a kind of parental responsibility for, and it's really good to see him and how well things are going.

Okay, so, I wish I had more disclosures; here you go. I have two topics I want to explore: one is reimagining GME in light of failures in our healthcare and medical education systems, and the second is future directions in competency-based assessment. I'm going to move through the first part fairly rapidly, but I think it's an important framework to situate the conversation around assessment. In the last four or five years we've had a number of reports from bodies like the IOM, regulators like the ACGME, stakeholders like the AAMC, the Lancet Commission, the Carnegie Foundation, and so on, all coming to a similar conclusion: there are critical deficits in our medical education programs, and the graduates leaving our programs are not
prepared to practice in the emerging care delivery models. Now I want to bring that conversation to the broader failure that's really driving all of this, which is the concern about our healthcare system overall. This is a value matrix that plots performance against spending. Ideally, where we want to be is in the upper left-hand quadrant: high performance, low spend. You can see where ten comparator countries are, all with similar spend but quite a bit of variability in performance. You don't want to be in the lower right-hand quadrant, because that's high spend, low performance, and that's where we sit as a country. You see this in all sorts of different metrics: we have the highest healthcare-amenable mortality rate among eleven countries, we have the highest rate of death due to mental illness and substance use, we have the highest rate of medical errors, and we spend two to three times more per capita than other countries. This has raised a lot of concerns about how we can better align our healthcare system with the health needs of our patients and communities; this is the idea of population health, and that's really what has led to the scrutiny of medical education programs as a key driver. Not the only driver, since a lot of things feed into the healthcare system and ultimately determine health outcomes, but our medical education system is an important driver of quality.

I see this as three types of misalignment between medical education programs and the health system: misalignment about the "what," the competencies that are learned and taught; misalignment around pedagogy, the instructional techniques that we use; and misalignment around purpose and meaning. I'm going to run through each of these fairly rapidly to set the stage for competency-based assessment. So we have this paradigm shift going on in healthcare, from a system that's historically been
focused on the episodic management of acute illness by siloed expert physicians, depending a lot on the hospital and really focused on one patient at a time. What's emerging is the idea of population health, with much more of a focus on the longitudinal management of chronic illness delivered by teams, so that the patient-physician relationship in some ways is being expanded into a patient-team relationship, delivered primarily in ambulatory settings, with a focus on the individual patient but also the health of the community. This requires a whole new set of competencies around team-based care, knowledge management, care coordination, patient-centered communication, and data-driven QI. Yet these are areas where study after study shows that our graduates have deficits, anywhere from shared decision-making with patients to clinical reasoning and the scourge of diagnostic errors, which are becoming more and more appreciated and account for a significant proportion of error-related deaths in the US as well as adverse events, and also deficits in patient safety, measurement-based care, and so on.

All of this leads to the recommendation that we're going about developing our curriculum wrong: we need to start with the needs of the population and the health system, define the competencies required to function and deliver value in that setting, and then devise a curriculum to support that. This is really quite different from how curriculum has been developed in the past, which has been more in a silo. I sit on the Review Committee for Psychiatry for the ACGME (the RC), and we're about to do a focused revision of the program requirements, and I'm struck that really the only organization we're going to consult early on is the program directors. The program directors are really important (I'm one, Kaz is one) and obviously an important stakeholder, but really
shouldn't it start with a conversation with the health system perspective and patient perspectives? How are we doing? How are psychiatrists performing right now? This is a shift I think we need to make, both at the local level and nationally.

So if we then define the psychiatrist of the future, and that's of course hazardous, it has something to do with chronic disease, something to do with really looking at population health and primary prevention, using technology like telepsychiatry and two-way texting and panel management, and functioning in integrated systems. If that's what we're trying to get at, then we need to reimagine our clinical curriculum, and there are a lot of obvious implications. One is moving training from predominantly inpatient, intensivist settings to ambulatory, like medicine and neurology training, and probably moving medicine so that ideally it occurs not just in the first year. That would require regulatory changes, but if you see psychiatry as a specialty of medicine, and the embedding of behavioral health in medicine as such a critical mandate, we probably want to see medicine training be more longitudinal. Emergency psychiatry, rather than being the domain of the ED as a setting, means really thinking about how we train to manage those urgencies and emergencies in urgent care clinics, hotlines, and so on. Telepsychiatry has a set of competencies associated with it; this is going to become a really prominent feature, I think, for most psychiatrists. The VA and some other systems that are more population oriented have been real leaders in this, but you're now seeing all healthcare systems look at it, using tele-consults and telepsychiatry for emergencies; where we need to get to is really routine ambulatory care. Similarly, consultation-liaison has traditionally been all about the hospital, and
this is probably one of the most important areas where we can add value to the healthcare system: how do we embed and manage behavioral health issues that present in primary care and different kinds of medical settings? Integrated care models have emerged, collaborative care being one of them, and this really needs to become, I think, the central part of CL training down the road. It's not about eliminating the hospital experience, but really incorporating the ambulatory experience in a robust way, and moving this out of the classroom into the core clinical curriculum.

Other implications: Minnesota I don't know really well, but California, for instance, has a history of more managed care, lower lengths of stay, and more systems approaches. Big picture, learning about acute psychiatric illness has traditionally happened on inpatient units with long lengths of stay, which provided a terrific context in which to follow patients for several weeks or months. But lengths of stay have really come down; when I was at UCSF I watched them come down from a couple of weeks to less than three days, and acute illness is managed more and more in partial programs and IOPs. Again, this is an area where there will probably need to be some regulatory relief, because right now the RC requires six months of inpatient, but if the goal is learning the acute management of illness, you're probably going to want to have residents in these other settings.

In terms of chronic disease, it's always been an important part of psychiatry, but it really needs to become the backbone, where care is team-based and you're learning panel management skills, ideally starting intern year (that too would require some regulatory relief), but certainly second year. And around that, the skills that are so critical to chronic disease, shared decision-making and motivational interviewing: we should be the experts at this, and our residents and graduates should be extremely skilled at transferring these skills to different settings. Often in programs you learn motivational interviewing in the context of addiction, but it never really gets transferred into routine medication management in the ambulatory setting or other contexts. So you can check it off, yes, we taught it, but it hasn't really been learned in a way that infuses practice across contexts.

Okay, and then of course we need what I call "pulse": policy, leadership, systems engineering. There are a lot of cool developments around how you create experiential opportunities for residents to learn patient safety and QI. There's a model patient safety rotation that was published in Academic Medicine last summer and has been featured by the AAMC: a two-week experience where residents learn by doing an actual root cause analysis for the hospital, doing work that would otherwise need to be done, and it has had really good outcomes associated with it.

Okay, now we get to pedagogy, the second misalignment. Sophia will recognize the book on the right, by David Irby, Molly Cooke, and Bridget O'Brien at UCSF: a Carnegie report published in 2010 on the hundredth anniversary of the Flexner report, which was 1910 and was really the prevailing paradigm for medical education for the last hundred years. These reports, in addition to the IOM and Lancet Commission, have all concluded that we basically rely on non-evidence-based, ineffective instructional techniques. Here, with your team and Bob Englander, you're probably very aware of this, but we tend to rely on the sage on the stage, doing what I'm doing today, with a lot of slides and a lot of information transfer; it's passive and it doesn't work. And in the clinical environments we tend either to have students passively observe, or to send residents off to do a lot of care without being observed and coached enough. All of this means that using
these ineffective pedagogies makes the training inefficient (it's probably longer than it needs to be), but more importantly and more concerning, ineffective, so that even if we're teaching the right content, it's not getting learned and it's not getting transferred to the workplace. Part of that is that, given the way medical education is structured, it really teaches a more passive approach, and our residents aren't graduating with the skills to regulate their growth and change practice as the evidence changes. This is partly why physician practices really don't change much after training, and why it takes a long time for new evidence, once it's been definitively established, to actually be adopted. Interestingly, some recent studies show that it takes even longer for us to de-adopt practices that are no longer supported, and that may actually be one of the most important challenges for us to get our heads around.

So this says, okay, let's go back to science and redesign medical education with what we know about how people learn. This is a slide that captures the different learning frameworks. On the far left are cognitive and behavioral approaches that focus on skill acquisition and use techniques like activating knowledge, making sure that we're integrating foundational science with experiential knowledge, and using scaffolds and deliberate practice. On the far right you've got more socio-cultural perspectives, which have really emerged in the last ten years, looking at how learning occurs in the workplace. It turns out that some of the really important features there are having a clinic or a team that's welcoming to learners, that identifies clear roles for them, and that has affordances, or supports and scaffolds, in place to help them move from the periphery to the center of the team with authentic responsibilities. These kinds of frameworks really emphasize the
importance of role modeling and the climate, as well as attending to motivation and self-directed learning. This is a study that I use a lot in my thinking, because we know that spacing of practice is really key to learning, but not all practice is equal. A lot of medical education tends to be in blocks, that binge-purge pattern: I'm doing inpatient this month and CL the next. But we know that people learn better when practice is spaced, so a half-day or a day a week for six months or a year can be better than full-time for a month. And yet the learners' perception, which is on the left, is that they learn better in the block format, whereas when you actually measure performance and learning, eighty percent do better in the interleaved or spaced format. This is one where I think training directors and associate deans of curriculum come up against resistance, because learners, and often faculty, are really tied to the block approach and are convinced that it's better, and yet the evidence is absolutely clear that that's not the case.

One learning framework that really applies to GME is called deliberate practice, and it comes out of the study of how novices become experts. I won't spend much time on this, just to say that when you look at that process, there's a lot of practice, there's a lot of direct observation, and there's a lot of feedback and coaching, because it turns out that we are not very good at self-assessment, and we need real help in interpreting data and making change in a way that's both supportive and accountable. Think about how many folks learned a musical instrument growing up, and how many folks have kids now taking lessons. Imagine if you took your kid to a lesson and the piano teacher said, okay, sit down, I want you to practice the four or five pieces that I gave you last week, and then went into another room and practiced their own piano in parallel, then comes
back twenty minutes later and says, hey Naomi (that's my daughter, who's taking piano), how'd you do? What are you doing well? What do you need to work on? Okay, work on that. And then went back to the other room and practiced their piano in parallel. Right? We wouldn't pay for that, and yet that's really how we have traditionally trained doctors: we rarely observe with intensive coaching, and we really don't know what we're graduating at the end.

All right, and then the third area of misalignment is purpose and meaning, and you have a very strong leader in Kaz on this. There's something fundamentally wrong with our medical education programs when you consider that matriculating students have higher indices of well-being, are more idealistic, and are more empathic than their age-similar peers, but within the first year it all changes: burnout increases, the curves flip, and it continues through residency into early practice. I'm sure you've heard a lot of discussion of this; a lot of the data has come out of Mayo. Forty to sixty percent of physicians experience burnout, and we know that not only is there the human cost for the physicians, but it leads to higher error rates, lower quality, higher turnover, and so on. There are a lot of efforts now on how we help trainees become more resilient and how we address the environment that drives burnout: things like RVU systems, and how we modify them so they account for mission-aligned but non-billable activity, which is an important part of meaning in work but is getting squeezed out as RVU targets become more intense; or the EMR, which in many contexts has turned physicians into glorified data entry clerks and is a huge dissatisfier. And it's not really a technical issue; you can make interfaces that are seamless and work. It's a matter of will and resources. But beyond those two frameworks, my own feeling is we
have got to come back to calling, to helping learners discover where the passion of their heart overlaps with a need in the world, and to go deep and go after that. If you look at physicians who spend twenty percent of their week doing something they're passionate about, they have a much lower probability of burnout. So from a curriculum standpoint, lots of programs are starting to do this with tracks and other mechanisms, and I know you have such an approach here. What we're doing is what we call pathways to expertise: interns apply, second-years get a month to do a lit review and come up with a project proposal, a study or an innovation, and then in third and fourth year you have a half-day to a day a week, in a mentored way, to implement, evaluate, and then publish or disseminate. This has just been incredibly successful for us; seventy percent of our residents are participating, and they leave with both an emerging expertise and some self-efficacy and a sense of purpose that's really cool to see. We're just finishing a qualitative study, using social cognitive career theory, in which we're interviewing graduates who have and haven't been in this program and looking at how it has impacted their career development.

Okay, so that's the first part of the talk, about these three misalignments. We need to address all three, and if we can do that, I think there's hope that we can both improve alignment and really produce physicians who are going to be lifelong learners and more competent. Now we transition to the second part. There's a catch in all this, and the catch is that none of this is likely to work unless we develop a robust system of assessment with two parts to it. One is promoting and driving self-regulated learning: formation, assessment for learning, feedback and coaching, so that learners graduate as self-regulated learners who can then
change as the evidence and context changes, which does not exist right now. The second piece is an assessment system that promotes competency, but in a way that is determined through a trustworthy process that has validity. That's critical, because otherwise we're never going to know if someone's ready for practice, and we're not going to be able to rapidly identify folks who are off trajectory; we'll never be able to move to time-variable training if we don't have this. And thirdly, beyond those two aspects, if we don't have meaningful educational outcome measures, we're not going to know which of our strategies work and which don't, and we're not going to drive continuous improvement.

So there's been a lot of focus in the last ten years on thinking about assessment in a different way. It used to be that we focused on the psychometric properties of the instrument used to assess; a lot of my research has been in that area, and I'll present some of it in a moment. But it turns out that validity doesn't reside in the instrument but in how the instrument is used and how the data is interpreted and applied. So a lot of the thinking internationally has been that to get validity, you need a program or system of assessment with a number of components. There's workplace-based assessment, which we'll talk about in a moment: moving from measures of knowledge to what the learner actually does with patients in the workplace. There's faculty development: how to directly observe, how to give feedback. And then there are dashboards, which you have here, I believe, in UME at least for the EPAC program, that capture, aggregate, and visualize performance data for each learner or resident, which a longitudinal coach can then use with that learner to identify areas for improvement and drive cycles of change. Dashboards can also be used by the clinical competency committee, which can be trained in how to use
that data to more rapidly identify folks who need help early on and to know when someone's ready to launch. I'm going to talk now about just two parts of this framework: one is choosing your assessment framework, and the second is workplace-based assessment. I'll say a few things about the other parts at the end, but leave those for another day.

Okay, so the first thing you have to do is choose your assessment framework. Just as there's been a paradigm shift in healthcare from the acute management of acute illness to chronic disease and population health, there's also been a huge paradigm shift in medical education, away from process-based education, where training is fixed in duration and we infer competence if you've done X months of this and Y months of that, with a focus on knowledge acquisition rather than how it's applied in the workplace. Assessment has traditionally been norm-referenced, asking how the resident or learner compares to peers (top 10 percent, average), versus criterion-referenced, which assesses against the objective competencies deemed necessary for a given task. And in the past there was much more focus on summative assessment and not as much on formative assessment, the assessment for learning. This has been nothing short of a revolution for those of us in education, because it's a completely different focus. It used to be that when we were getting a site visit, we were checking off all the things we teach; now it's not about what we teach but about what's learned. I find this incredibly exciting because it changes the conversation at all levels, and it also raises the possibility that time is no longer constant and a proxy for competence, but a resource to drive acquisition.

So here's our framework: six core competency domains, adopted by the ACGME in 1999 and implemented in 2001, with similar frameworks in other parts of the world, like CanMEDS, and in the UK, Europe, Australia, and New
Zealand, all working with similar outcome frameworks. Then in 2014 we implemented milestones, which were an effort to make the competencies more specific and workable. This is a picture of what Kaz has to do every six months for every single resident. You can see in the upper left that PBLI is the domain, practice-based learning and improvement; teaching is the subcompetency; there are two threads, A and B; and outlined in red are the milestones for thread A. We've got 22 subcompetencies, 66 threads, and several hundred milestones. So this has been an effort, working to the right, to start with the domains and make them more specific by splitting them into their component knowledge, skills, and attitudes. But this has met with a lot of challenges in implementation. I'll just show you a few headlines here that some of you might identify with. There have been a lot of challenges with the idea that the milestones are simultaneously too abstract and too many to pay attention to at once, so they're too granular: when you're observing someone in clinic, you can't pay attention to 22 subcompetencies or milestones. This has led to a lot of dissatisfaction with this framework, and partly also the validity evidence hasn't been that great.

That's where we come to this whole shift to the left: okay, let's see if we can move from this reductionist approach to a more holistic or synthetic approach. What are the key activities that a psychiatrist does in the workplace? Maybe we ought to make those the focus of assessment. That's what an EPA, an entrustable professional activity, is: a discrete unit of activity that a real doctor does in the workplace, something that can be entrusted to someone once they are ready to go. If you define the EPAs for a specialty, together they should capture the essence of that profession. It doesn't capture
everything, but it should capture a large part of what really defines that specialty. EPAs have really taken off internationally as a framework; lots of specialties have adopted them, particularly medicine, peds, OB, and anesthesia. Psychiatry, as we're often the last to join this sort of thing, is just now entering into the EPA frameworks. These are some examples of EPAs from other specialties: you can see "manage high-risk childbirth" from OB, or "manage a psychiatric emergency" from psychiatry. You can see these are much more synthetic or holistic. Milestones are person descriptors; EPAs are work descriptors. Milestones are really an exercise in deconstruction and splitting, which can be very helpful in certain contexts, while EPAs are much more holistic, and the unit of observation is the core activity. In general it's not an either/or, partly because in the US we're required to do milestones and EPAs are optional, so if you're going to go to EPAs, you still have to do milestones. We tend to think of these as complementary frameworks, where you can have EPAs as your assessment framework but you're mapping back to the relevant competencies and milestones.

With EPAs you have a different kind of scale. Rather than "does not meet expectations / meets expectations / exceeds expectations," or "meets the competency threshold / exceeds it," it's all about level of supervision. You're asking: based on this observation, or this series of data points, how much supervision do I think this trainee needs? This, in GME, would be a typical scale that's used. The initial evidence, or some of the early returns, has been very encouraging in terms of entrustment as an assessment construct. This is a study from anesthesia comparing the mini-CEX with the traditional nine-point competency scale against the supervision scale, and you can see that the variance is much better at discriminating between
trainees it reduced disagreement between Raiders and you needed far fewer observations in order to get to some kind of generalizability threshold and you have ad hoc and you have summative entrustment and the ad hoc is what we do every day in clinic you know how much autonomy you were gonna give the resident and then the summative is when you're ready to launch to a new less supervision or independence so I'm going to tell you a little bit now so about psychiatry so you have the CPA framework that's emerged and we now have our first set of proposed epa's for Enda trainee and this comes from work that I chaired in add pert which had it was had a task force which was charged with please develop EPA's for psychiatry so being dutiful folks we did that six of us and Catlin Hassler Eric Hong who Sophia knows from UCSF were involved in this Colin Stewart from Georgetown Andrea wise from Einstein and Nancy Williams from Iowa and we set out with we wanted EPA's that were essential clear and representative of the specialty and we have took a four phase process to this so in the first phase the six of us met for 90 minutes once a month for a year reviewed the literature looked at other specialties other countries and then we each generated our own list and then we went through an iterative process of kind of winnowing down and we started with the round actually it's about 75 EPA's and we got to 52 pretty rapidly and then at the end of this we had 24 proposed EPA's the second phase we consulted with non psychiatrists experts so this is part of our methodology the first step is pretty typical this step is fairly unique in it has something that a lot of other specialties and countries are now doing is we went to people outside of psychiatry who are experts in EPA's and we said look at this list tell us what you see are they clear do they make sense are they kind of the right type of scope and we got terrific feedback because they were able to identify some of the blind spots and 
They were able to identify some of the blind spots and assumptions in our thinking, and that got us to 14 EPAs. We then wrote out, for each one, a title, a description, all the competencies required for the activity, and cross-references to the milestones.

The third phase was a national Delphi survey, which I also think is a methodological strength. We identified 31 experts, defined as members of the Executive Council of AADPRT (the program directors' association) plus members of the original psychiatry milestones workgroup. We developed a video and a two-page article that every expert respondent watched and read, to make sure we all had a shared mental model of what an EPA is. We had a great response rate and ran two survey rounds. Our respondents were very experienced: about 23 years in practice on average, 50% from the Northeast and otherwise distributed across the country, with a lot of local and national leadership. The first survey round generated a profile for each EPA: scores for essentialness, clarity, and scope, plus comments in each category. We revised the EPAs based on those data and resurveyed.

In the end we had 13 proposed EPAs, which you can see here: manage psychiatric patients longitudinally (which includes panel management), manage psychiatric emergencies, perform a diagnostic evaluation, manage psychiatric conditions with medications, and so on. A couple of comments about these. We purposefully did not split by disorder. Some specialties have created different EPAs that way; for number four, managing psychiatric conditions with medications, you might have one EPA for bipolar disorder and another for schizophrenia. Some contexts have split by medication class: New Zealand has an EPA just on the use of clozapine, which is interesting. We also didn't split by patient complexity, whereas internal medicine has EPAs for the simple presentation and for the complex one. You could do that; in fact, there are lots of different directions you can take this. What we laid out are thirteen proposed EPAs for experimentation at the local level.

A couple of high points. I think our methodology was very strong for this space, particularly the non-specialty experts, the Delphi, and the frame-of-reference training. We had very stringent inclusion criteria, which I won't go into today, but we used something called the asymmetric confidence interval, which we're really excited about, though it's pretty technical. And you see these lumping-and-splitting decisions that we identified; we really encourage programs to experiment with them.

Okay. So you have EPAs as your framework, cross-referenced to milestones. The last part of my talk is about creating a technology-enabled workplace-based assessment program. Start with Miller's pyramid. In medical education we have traditionally focused on the "knows" levels with MCQs, or used an OSCE to figure out whether a student or resident can demonstrate a certain skill. But as we focus on preparedness for practice and on valid assessment, the emphasis has shifted to the "does": what does the learner actually do with patients in the workplace? This has led to the emergence of a whole field of assessment called workplace-based assessment, or WBA. Remember that our pedagogy of choice for this talk is deliberate practice, and the starting place is redesigning our clinical training experiences so that a lot of observation and feedback occurs. There are challenges to that, but it's one of the big hurdles facing GME. We know a lot about feedback; part of why you do direct observation and workplace-based assessment is that you want to generate feedback that enhances learning.
But not all feedback is equally helpful, and we've learned quite a bit about what works. It needs to come soon after the observation. It needs to be behaviorally specific. You want it to include self-assessment. A few things have really emerged lately: there needs to be an action step, and there needs to be accountability around it, which generally doesn't happen, in part because our feedback sessions are often one-offs; if you happen to see the learner a week later, or even two days later, you may forget what you decided on and not ask. Feedback needs to be both written and verbal, and it turns out that in the written space the narrative comments matter much more than the quantitative scores for learning (quantitative scores are important for summative judgments). And then there's this idea of a safe interpersonal space. That makes sense to us: as psychiatrists we appreciate the role of the therapeutic alliance in creating a safe place for clinical care. Feedback is no different; you need an educational alliance. We're shifting from models of information transfer to a bi-directional conversation situated in a longitudinal, supportive relationship.

Here's an example of a workplace-based assessment tool we've developed in psychiatry for one of our EPAs. We started on this before I even knew what an EPA was; it goes back twelve years, to a time when there was no competency-based assessment tool in psychiatry for anything, including medication management. So we developed the P-SCO, a tool aimed at promoting growth and assessing competence in medication management. It has 27 items. There's a checklist, built through an expert process of identifying the essential tasks of a med visit, and a scale. The scale started as done / not done, but in our pilot testing faculty really wanted an N/A option, because a lot of tasks that are essential aren't applicable at every visit, and they wanted a way to differentiate between done, done with suggestions for improvement, and done really well, so we changed the scale. There's also a space for narrative comments.

I have a whole program of research that I've pursued around this. Medical education has adopted the unitary model of validity, which understands validity in terms of multiple sources of evidence. We won't get into all of that here; I'm going to present some data on internal structure and correlation, and we have other data on content validity, response process, and consequences.

The dataset I'll briefly summarize is from a four-year implementation in a third-year outpatient medication clinic at UCSF. Each resident is in the clinic a half-day a week, with four residents and two faculty at a time. Over the four years we got 601 completed observations, a pretty big N for education research, from 64 residents and 11 different faculty. Our goal was eight observations per resident per year, thinking that was roughly a minimum threshold to be able to generalize.

In terms of internal structure, the factor analysis we did yielded three factors, three underlying constructs. We expected the first two. AT is affective tasks, those related to the interpersonal interaction in the room, such as establishing rapport and conveying hope. CT is cognitive tasks, the more technical tasks of information gathering, assessment, and treatment planning. HT is hard tasks, a surprise factor: we called them hard because these were tasks that residents, even at the end of their ambulatory psychopharmacology training, were not consistently performing.
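As an aside, for anyone curious how the internal-consistency statistic for a checklist like this is computed: Cronbach's alpha comes straight from an observations-by-items score matrix. The snippet below is purely illustrative; the toy scores are invented, not the study's data, and a real P-SCO analysis would first have to decide how to handle N/A ratings, which the formula itself doesn't address.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (observations x items) matrix of item scores."""
    k = scores.shape[1]                          # number of checklist items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy data: 6 observed visits x 4 checklist items, each scored 0-3
# (0 = not done ... 3 = done really well). Invented for illustration.
toy_scores = np.array([
    [3, 3, 2, 3],
    [2, 2, 2, 2],
    [1, 1, 0, 1],
    [3, 2, 3, 3],
    [0, 1, 1, 0],
    [2, 3, 2, 2],
])
print(f"alpha = {cronbach_alpha(toy_scores):.2f}")
```

Values around 0.9, like the one reported for the P-SCO below, indicate that the checklist items hang together well as a single scale.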
From a validity standpoint, our factor analysis was encouraging: we had a good 0.5 proportion of variance explained, which is good for this kind of instrument study, and a Cronbach's alpha of 0.9, so good internal consistency. We then mapped the factor scores over time, with the assumption that they should improve with experience. In this case we're showing the proportion of trainees with low mean scores, which we would expect to go down, and in fact for all three factors it did. We also did some more sophisticated regression analyses that supported the same finding. You can see here, though, that for HT, the hard tasks, 30% of trainees were still not performing consistently at the end of their training.

Some lessons from this. There's evidence for the feasibility of direct observation in a busy clinic. The underlying constructs and factor scores behaved more or less as we predicted, supporting the validity of the instrument. And the hard tasks raise real curricular questions for us: if this is generalizable, if the same phenomenon is true in other programs, we may be graduating psychiatrists who really don't attend to a set of tasks that are critical mediators of treatment response.

Okay, the second and final study is about the narrative comments. How good are the comments generated by this observation tool? What are the themes, and how do the scores relate to the comments? A lot of study in medical education has focused on end-of-rotation evaluations or multi-source feedback; nothing had really looked yet at direct observation tools and the quality of the comments they generate. We sampled 25% of the completed P-SCOs and coded the comments on three axes. Valence: is it corrective or reinforcing? Specificity: is it specific and actionable or general ("best resident in 30 years" would be general)? And content: what is the actual competency or theme being commented on?

Each P-SCO had, on average, five comments: three reinforcing, two corrective. Ninety-five percent were specific and actionable, and overall the split was 60 percent reinforcing, 40 percent corrective. This is really encouraging. End-of-rotation evaluations tend to be at best 40 to 50 percent specific, and often largely reinforcing; I don't know what your experience is here, but that's what studies at other institutions show.

Eight primary themes were identified. I won't spend time on them right now, except to say that they were all meaningful themes and the feedback being given was really rich. This slide gives you some example comments under "assesses adverse effects or response; uses scales." One reinforcing comment: "excellent how you followed up on the passive SI the patient had expressed at the last visit." On the corrective side: "suicidality: ask what the patient means by 'not yet,' granted the patient said it in a light-hearted manner," or, on adherence, "ask how many doses were missed rather than 'have you missed any?'" So really specific guidance is getting generated by the P-SCO.

The final thing we looked at is how the checklist scores relate to the comments, something that again has been examined in end-of-rotation evaluations but not so much in direct observation tools. We found that when a trainee was marked low or high on a checklist item, two-thirds of the time there was a comment expanding on it and explaining what was good or what needed improvement. On the other hand, of the comments, which were almost all either corrective or reinforcing, only half were accompanied by a corresponding checklist score. This gives us some insight into the role of the comments: they provide essential amplification of the checklist scores, so the learner knows what to continue or what to change, but they also add unique feedback that isn't actually conveyed by the checklist.
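To make the coding scheme concrete, here is a minimal sketch of how such a comment audit can be tallied. Everything below is hypothetical: the comment records and the resulting percentages are invented to exercise the code, not the study's actual figures.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class CodedComment:
    valence: str        # "reinforcing" or "corrective"
    specific: bool      # behaviorally specific and actionable?
    theme: str          # content area the comment addresses
    scored: bool        # did a matching low/high checklist score accompany it?

# Hypothetical coded sample (the study sampled 25% of completed P-SCOs).
sample = [
    CodedComment("reinforcing", True,  "assesses response",       True),
    CodedComment("corrective",  True,  "suicide risk assessment", False),
    CodedComment("reinforcing", True,  "rapport",                 True),
    CodedComment("corrective",  True,  "adherence",               True),
    CodedComment("reinforcing", False, "overall impression",      False),
]

n = len(sample)
valence_counts = Counter(c.valence for c in sample)
print(f"specific/actionable: {sum(c.specific for c in sample) / n:.0%}")
print(f"reinforcing: {valence_counts['reinforcing'] / n:.0%}")
print(f"backed by a checklist score: {sum(c.scored for c in sample) / n:.0%}")
```

In the actual dataset, this kind of tally is what yielded the 95% specific-and-actionable rate, the 60/40 reinforcing-to-corrective split, and the finding that only about half of the comments corresponded to a checklist score.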
The comments also helped us identify two constructs that we think aren't adequately captured by the checklist: engaging patients in shared decision-making, and time management, which is interesting because we didn't have those explicit in our checklist. Overall, this is one of the first studies to show that these direct observation tools can actually generate high-quality feedback.

All right, some implementation lessons from all of this. Set your bar high. When we started off we said, okay, we don't want to overburden faculty, we'll do one observation per month. But that never becomes a habit; you forget, and it never happens. When we changed the standard to "at least one in every clinic you supervise," we did remarkably better against it than against the one-per-month standard. Culture changed too: a real shift in the role of faculty, from asynchronous case guidance to being in the room coaching, observing, and giving feedback. We definitely had resistance initially, but once folks got into it, most faculty loved it; they felt they had a much better sense of what was going on with patients and a better sense of what their learners were struggling with. We've talked about how it improved feedback. The other big thing when you start doing this is that you realize what you're not teaching well. In our case we realized, wow, at least in our setting, our residents don't know how to assess adherence or monitor adverse effects, and they aren't screening well for substance misuse, so we then developed supporting workshops for those skills.

I'm going to finish by mentioning that we're now putting this on an app, which is really exciting. It uses a single entrustment scale, and it's on your phone. You open it up, choose the resident, choose the EPA (there's no checklist), choose the level of supervision the resident needs based on that observation, add one comment, and hit submit. It gets emailed simultaneously to all parties and uploaded to a run chart that shows progress over time; hover over any data point and it shows the comment attached to it. In our pilot study it takes 72 seconds on average to complete, and comments average about 65 characters: much shorter, but from a quality standpoint still very specific and behavioral, and faculty prefer it.

The issue now is thinking through the paper P-SCO with its checklist versus the phone app with just a global rating. Is there a role for both, or should we go to the app and forget the paper? There are real questions here about the role of the checklist. There's some evidence that it helps develop a shared mental model among faculty about what the essential tasks are, and some evidence that faculty observe differently when they're using a checklist. We have to look at this more, but there may be a role for both.

A big area of research is how residents assign credibility to feedback, because it turns out residents simply dismiss a lot of feedback. Patient feedback is number one; feedback from peers is given quite a bit of weight; and feedback from faculty is really mixed. So we need to understand which features of faculty feedback make it something trainees actually engage with and adopt. Early studies suggest it goes back to what we were saying earlier: the kind of relationship the feedback is situated in, the perception that the motive is to support, and whether the feedback is meaningful.

The final point, which we're not going to have time to cover fully, is that faculty development is key.
Faculty need to be trained on how to observe. They need to be trained on the performance dimensions and frames of reference; otherwise we're not all assessing the same thing. And they need to be trained on how to apply the scale: even if you agree on what constitutes a good med visit, how are you applying the standard? This needs to be done repeatedly over time; if you do it once, nine months later you're back to idiosyncratic ratings and poor interrater reliability. Narrative feedback, though not with the P-SCO as it turns out, tends in general to be vague and hard to decipher. Shiphra Ginsburg has done amazing research at the Wilson Centre at the University of Toronto using politeness theory, showing that feedback is often intentionally vague and that it's genuinely hard for residents to understand what's really being said. We probably don't want to get rid of politeness, but we need to train folks in how to write specific, valuable feedback, and to make the verbal part more of a conversation. And the biggest threat is trivialization: if this just becomes pro forma, something we do, it will lose all value, and there's a lot of discussion right now about how to keep it meaningful.

So that's it: misalignment (the what, how, and why), redesigning, and then this idea of EPAs and workplace-based assessment. Thank you. [Applause]

[Audience] I'm the director of the child and adolescent fellowship here at the U, so this is a very timely and relevant topic for us; we're doing our milestones this afternoon. We're struggling a little with the milestones, and in fact this morning we had conversations about whether we should switch to EPAs. For a subspecialty, have you looked at how much they would differ? You have something ready for the adult psychiatry program, but if we were to adapt it to child psychiatry, where development and all those aspects become important, would we have to make a lot of changes for it to be adaptable to that specialty?

[Young] Yes, I think so. You don't necessarily have to do what we did, but I think you have to go through the process of asking which EPAs are really specific to a child and adolescent fellowship. You can hopefully assume that folks entering fellowship have accomplished the end-of-training EPAs for general psychiatry, although that's not always the case, which raises the question of fast-tracking and the notion that we count one of the years. So yes, there are enough variables there that you have to think it through.

[Audience] Right, and that's what we ran into this morning: when a first-year CAP fellow enters, what should their baseline be, and where do we start? There's not even agreement among our faculty about where they should start or how we're scoring them. So I want to ask your reflections about how you went about that faculty development.
[Audience] This is the first year we've actually tried the dashboard, with Dr. Michael Cullen; we're doing a pilot project for the CAP fellowship. Can you reflect a little on what that faculty development involved, and how long it took for your faculty to get on the same page about the anchors and about how you score?

[Young] Michael Cullen is amazing; he's a real leader nationally, and we're big admirers of his work. There's faculty development for direct observation and feedback; there's development for the coaches, who are probably different people; and there's development for the CCC. Those are three big buckets that have to be attended to. For the feedback piece, we have a model we use. I teach in a competency-based assessment course that the ACGME offers program directors nationally, and we have an approach I think is quite effective: an OSTE with six stations, where you go through performance-dimension and frame-of-reference training, then watch a standardized resident do something, give feedback, and get feedback on your feedback. The other classic approach is to put up a video, watch it together, and then score it together. Mixing these up is good, but you have to do it over time; it can't be a one-off. We did one-off trainings, and faculty held to them for a couple of months, but then they reverted to idiosyncratic ratings. Like any learning, it needs to be repeated over time. Coaching using the dashboards, in our setting, is more for the coaches and the CCC than for the faculty doing direct observation, and that's a different skill set. I wouldn't say we've figured it out, but there's a lot of literature emerging on coaching, and we're about to develop a coaching program; maybe we could share notes. We use a lot of Joan Sargeant's work around R2C2, which is probably the model with the most traction and the strongest research base. Is that helpful?

[Audience] We haven't done any faculty development with our CCC yet, but that's our next step, so it's good for us to know that, and to realize there are resources out there.

[Young] I can send you some; I mean the resources that we're using in developing our coaching program. UCSF has a coaching program that they've rolled out in UME that is pretty decent. So there are models out there, but the important thing is that coaching is different from observation and feedback.

[Audience] You captured something very appropriate. When we were crafting the milestones, we really struggled with how to get granular enough without going too far, and I'm not surprised to hear that that's continuing to be reflected in their use. One thing we struggle with, and it seems like we'll continue to have a problem with under EPAs, is what could be called the weird stuff: specifically neuroscience, and the role of diagnostic testing, neuroimaging, genetics, and the emerging neuromodulation therapies. These are things most residents are probably not going to see or use a lot during their training, and yet over the 20 years of their practice they'd better be able to figure out how to handle them. With the milestones we ended up just punting and saying "know something and know how to refer." Is there any thought about how, in the EPAs, we handle those rare but essential-for-future-growth kinds of activities?

[Young] One thing we did is that we distinguished core EPAs from aspirational EPAs.
I think there are EPAs that certain settings could support but that aren't yet ready to be required, because there just isn't enough there yet. The other response I'd make is that I don't think EPAs capture everything. There are certain things, like practice-based learning and improvement, that probably aren't an EPA and yet are critical; this idea of self-assessment and incorporating data undergirds all of these units of activity. A lot of us think that's something you need to drive, have people learn, and assess, but an EPA framework may not be the best way to get at it. The same may be true for neuroscience. If you've decided as a program, as we have, that you really want your graduating residents to have some familiarity with basic frameworks (brain, cells, receptors, networks, and so on), even if it's not being used in day-to-day clinical work, that's an important value; you obviously structure it into the curriculum, but you're going to have to assess it with something other than EPAs. So my answer is that EPAs are incredibly useful for the units of activity we do in clinic, but they don't necessarily cover everything. QI is another example: it's the 13th EPA for UME, and our QI EPA was the one that performed the poorest in our process, though it still performed pretty well. There are strong believers in the importance of QI for physicians, but EPAs may not be the best framework to get at it; what's the discrete unit of activity? I suppose you could define it as doing a root-cause analysis or something. That would be my answer.

[Audience] Back in the 1990s, the board certification process changed from lifetime certification to being certified only for a period of ten years. My question has to do with after graduation, after this training we're reinventing: would you comment on maintenance of certification, now required by most specialties?

[Young] I love the question, and my biggest comment would be that everything we're talking about doing with residents, I think we should do in MOC. Why aren't we observing each other as peers, giving each other feedback, and using the EPAs as part of that? I think there's a lot of potential for applying the EPA framework to CME and to the whole FPPE and OPPE process. MOC started off with all these different requirements and is now being made simpler, back to the test, and we know the test probably has limited value. They're piloting now, I believe ABMS-wide, a model where you don't sit for the exam but instead do a certain number of questions every year, which is more consistent with how people learn. But I think we've got to get to what we actually do with patients. At Kaiser you get your patient evaluations, and they take that really seriously: if you don't perform well on them, you get coached, and really good coaching. So that's another model, multi-source feedback from co-workers and patients, but direct observation is key. Is anyone here using patient feedback with trainees?

[Audience] In a limited way, in our clinic. Press Ganey surveys are collected on the residents' care, but the challenge right now is that we're not able to capture Press Ganey at the resident level, so we have to administer our own.
[Audience] We have patients in the resident clinic complete our own survey twice a year: for about a week, every patient who comes in fills it out on paper. It's not ideal and it's very low-yield, so we need to figure that out and capture it properly, and actually we don't do it very well for attendings either. My other question I've been reserving: that study with the P-SCO scales was starting when I was there, and I remember people talking about it. The data are impressive, particularly the ability to give very specific feedback, including narrative feedback, and my guess is that it can only be valuable to the residents. Question one: do you have any quantitative or qualitative feedback from the residents about the process? And question two: I was curious why, given the richness of that particular instrument, the current app-based instrument is non-specific, if I was following you correctly. You're taking a moment in time and giving the learner feedback about something you've observed, rating where you think they are on the level of independence or entrustable action; it could be any number of things.

[Young] Both are based on observing a specific visit, resident with patient. One has the checklist with all those behaviors; the app is just a single level-of-supervision rating, a global assessment. You might say, "I noticed today you didn't give information about the SSRI": you rate them somewhere on that scale of independence and then give specific feedback about what you observed and what they could do better to get to the next level. What you end up getting is probably a comparable-quality comment, but it's one comment, maybe with a little less context and narrative to it. So the answer is, we don't know yet what's more helpful to learners: getting five comments with more context and thickness, three of them positive and two corrective, or a single, straight-up "this is something to work on." We're actually finishing a study using the Consolidated Framework for Implementation Research, interviewing faculty and residents about their experience with it, and I haven't seen the resident data yet. Faculty prefer the app because it's quicker and easier; however, they say the checklist actually helps them with their own practice, because it reminds them to do things that they otherwise wouldn't. If they're going to evaluate the resident on it, there's a sort of internal accountability: "I can't be pinging the resident on this if I don't do it myself."

[Audience] But then you're only going to see the errors that you yourself happen to recognize as errors; the things that you don't do.

[Young] Yes, I think that's spot-on. So I suspect the solution may be both: you train faculty with the checklist, and they continue to use it some of the time so that we stay aligned around the same mental model, but there are also spaces where you use the app, which is quicker and faster.

[Audience] Just one last thing, since we're near the end of our time, building off this discussion. While I'm not a proponent of the vague, polite feedback that we have in copious amounts in Minnesota, since that's kind of our usual communication style, I do worry that with frequent data points and frequent evaluations you start to get data skewed by the sometimes implicit biases of the evaluator, and you can get a sort of discriminatory drift toward trainees who are less like, or different from, the evaluator. I wonder how you've approached or thought about this.

[Young] I think it's a really critical issue. It goes to a type of validity problem: bias compromises validity, and it can be a way of othering, I guess, as you're suggesting. My thought is that this is why performance-dimension and especially frame-of-reference training is so important. You need to do a lot of compare-and-contrast to surface the assumptions faculty are making, because once those are articulated and brought out, you often realize they're probably not true. It's about developing precision prior to launching; we don't just launch this without some very specific training. In the frame-of-reference training we do, those of us leading the small groups listen and challenge assumptions; that's really our role. "How do you know that resident doesn't care?" "Because they were looking down at their phone; they were totally dissing this faculty member." Well, that's one possible explanation, but there are others, right? Drilling in on that, uncovering the assumptions and biases we bring in before we make judgments, is really important work. It's hard work, because bias is hard to change; heuristics are how our brains work. But in any event, that's why frame of reference is so important.

[Audience] That's helpful, thank you.

[Applause]
