Transfusion Medicine Informatics: ORCA Orders and Database Development

[MUSIC PLAYING] HAMILTON TSANG: Hello. Thank you for allowing me to talk today about some of the work that I’ve been doing at UW Medicine, at the intersection of transfusion medicine and clinical informatics. I have no conflicts of interest to disclose. Some commercial software is mentioned during this talk, as its specific idiosyncrasies at this institution are relevant to this discussion. The objectives are
available here and online. So patient blood management
is an evidence-based, multidisciplinary approach to
optimizing the care of patients who might need transfusion. Basically, this is with the
intent of using transfusion as a therapeutic modality only
when it’s in the patient’s best interest to do so. And using best available
evidence to guide transfusion. Blood utilization
review is one big part of patient blood management. And this touches the entire
process of transfusion, including physician ordering,
transfusion indications, thresholds, and monitoring
for adverse events. Informatics plays a huge
role in transfusion medicine, as well, and in every
aspect of blood bank. Therefore, the interplay
of transfusion medicine and informatics for
influencing blood utilization has many opportunities
for benefits and pitfalls. I really racked my
brain for a long time as to how best to
structure this talk. After all, I’ve
really been working on two parallel projects with
some significant intersections. I decided that
the best way to do this would be to first, go
over how transfusion medicine informatics works. Then segue into
one project where we have been developing
a data warehouse to extract and combine data
from various different databases within our institution. I will then circle back
around to a project that we completed in parallel
to update and streamline a lot of the
transfusion-related workflows in our electronic
medical record. These efforts
really come together when we start to talk about
quality metrics for both of these projects. And these quality
metrics will be a huge part of our
patient blood management strategy at our institution. So Seattle’s claim to fame was a
centralized transfusion service known as the Puget Sound
Blood Center, which was later rebranded as
Bloodworks Northwest. In 2011, Harborview
Medical Center was the first to
splinter off of this with their own hospital-based
transfusion service. And then in 2016, UW
Medicine followed suit, with our own
transfusion service. And in 2018, Seattle
Children’s opened its own hospital-based
transfusion service. So our transfusion service
is actually, relatively new. It’s about two to three
years old at this time. Within laboratory
medicine, I think that anyone would say
that their lab is special, but our lab is really,
really special. Although, I might
be a little biased. In most labs, a
specimen is received. Testing is performed. And the result is reported out. And this may or may not
have clinical significance, depending on the type of
testing that was performed. The way that the blood
bank is different is that we actually issue a
biologically-active product, much in the same way
that a pharmacy does. Each of these
products that goes out has a direct clinical impact. These are living cells
that are characterized as biologic products
and highly regulated by the FDA’s Center for Biologics Evaluation and Research and other accrediting agencies. And therefore, the Blood
Establishment Computer System– or BECS– regulates
the interaction between laboratory testing
and product allocation. For example– if ABO
testing of the sample does not match the compatibility
rules of the product that’s being issued, the
software is designed to prevent this from occurring. So let’s talk about
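To make that concrete, here’s a minimal sketch of that kind of compatibility check– in Python, purely for illustration, and not how any commercial BECS is actually implemented:

```python
# Illustrative ABO compatibility rules for red cell units: the recipient's
# ABO type maps to the set of donor ABO types they can safely receive.
RBC_COMPATIBILITY = {
    "O":  {"O"},
    "A":  {"A", "O"},
    "B":  {"B", "O"},
    "AB": {"AB", "A", "B", "O"},
}

def can_issue_rbc(recipient_abo: str, unit_abo: str) -> bool:
    """Return True only if the unit's ABO type is compatible with the recipient."""
    return unit_abo in RBC_COMPATIBILITY[recipient_abo]

# The issue step refuses to proceed on a mismatch:
if not can_issue_rbc("A", "B"):
    print("Incompatible unit: issue blocked")
```

A real BECS checks much more than ABO– Rh, antibody history, special attributes, expiration– but the principle is the same. So let’s talk about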
some definitions, here. An electronic medical record
contains notes and information that’s collected by
and for the clinicians in a clinic or hospital. And mostly used by providers
for treatment and diagnosis. The EMR at UW Medicine
is a customized instance of Cerner, that’s known
as ORCA, or Online Record of Clinical Activity. A laboratory
information system is a software system that records,
manages, and stores data for clinical laboratories. And at UW Medicine,
this system is Sunquest. Meanwhile, a blood establishment computer software is regulated software that’s designed to be used in diagnosis or in the prevention of disease, by preventing the release of unsuitable blood. And this is 510(k) regulated. Most people think of this as
an LIS or a blood bank LIS. But remember, the
other part of it is that it regulates blood
safety through prevention of release based on testing. I say this, but I might
use LIS interchangeably with BECS out of habit. At UW Medicine, this system is
currently a module of Sunquest. So a robust, up-to-date
tracking of transfusion metrics in our LIS and our EMR is
essential to accomplish appropriate blood
utilization review. The metrics are also
important for monitoring whether changes in blood
administration policy has its intended effect. Up until recently,
it’s been no easy task to track all of the
approximately, 60,000 units that’s transfused at UW
Medicine and at Harborview. In a way, we’ve kind of
been operating blindly, even lacking some baseline
data for benchmarking current operations. At the same time, we
had been experiencing long-standing inefficiencies
from legacy workflows that were carried over from prior to
the TSL establishment in 2016. So here comes the
first part– how does all of this stuff work? You might not realize it,
but information technology and informatics is concerned
with how information passes from one place to another. The clinical teams generally
interact mostly with the lab through the EMR through orders. And the LIS is mainly interacted
with by the TSL staff. So how does a message
go from a provider and end up with the patient receiving a unit of blood? For blood ordering,
the provider first selects a power plan, which is
a set of orders related to blood that may include
guidelines, premedication orders, and related laboratory
testing, along with blood product and transfusion orders. So what is a blood
product order? And what’s a transfuse order? And how do these all work
and fit in with this? Well, the provider
produces two orders– the product order, which goes
to the transfusion service. The transfuse order,
which goes to nursing. The transfuse order
tells the nurse to do all the things
necessary to get the patient ready to transfuse. They might have other tasks. They may have to
give medications. They may have to do all
sorts of other things. And originally, this was
printed because Sunquest– or LIS– was not
interfaced with ORCA. And then once
they’re ready, they would let the
transfusion service know that they were
ready to actually give the blood product. So they would send a blood
product release form– which is also a paper form– to the transfusion service. And then, there’s also
the blood product order. So the product order
tells the blood bank to do all the things
necessary to prepare the product, including
modifications to the product. If the unit has
special properties, then the unit might be
allocated to the patient, but not necessarily issued
until the team asks for it. And if there are no special
modifications or processing involved, then they might
allocate at the time that the team asks
for the unit so as not to tie up the inventory. The TSL staff does all of
these things in Sunquest. And once the unit
is allocated, it can be issued to nursing
in order to be transfused. So you guys got all of that? There’s going to be a
quiz on this later– you know I’m in charge of the quiz? You think I’m joking. So I don’t mean for you to
read all of this stuff that’s on this slide. Why am I bringing this up? This is mostly to
show that there’s a very complex workflow that’s
involved with issuing blood. And also, you might see
where some of the weaknesses could be in some
of this workflow. And why we need this
to be as streamlined as possible in order to issue
blood safely and with quality controls. One of the big problems
stemmed from some of the manual processes
which led to errors, which then became a regulatory
issue because we are so highly regulated. So this is the first branch– act one– which I call database
development– how to bin, trend, and influence SQL. So all of the interactions of
the clinicians, the patients, the nurses, and lab
staff– as aforementioned– are recorded by each of the
different source systems in their respective databases. Your ability to get data
from the EMR and LIS are affected by how these
database systems are setup. Traditionally, if
you had a question that you wanted to
answer about the data– you might query each of
these different databases separately for each
piece of information, especially if you’re
interested in stuff that goes across different databases. Alternatively, one
might hold the data in something that’s
called a data lake. That’s a huge storage
repository which basically holds a lot of this
information in its raw form until it’s needed. And if you wanted
to ask a question, then you would query
a subset of this data in order to process it
for further analysis. The problem with this is that
it could still be in its raw in the databases and this
data can be hard to work with. And to work with this data,
each of the different users would have to do the
processing themselves and this would result
in duplicated effort. And some of these
efforts could be singular to that working group. So you might not be
able to easily reference data across different
working groups. So around the time that
I joined and started exploring some of the work that
was being done at UW regarding blood utilization data– UW ITS analytics
was in the midst of developing an analytics
data warehouse where the data from the
source systems was extracted from the databases
and housed under one roof. You might know this as
Amalga, but it’s recently been rebranded to the EDW or
enterprise data warehouse. The data undergoes extraction,
transformation, and loading, which is used to blend
data from multiple sources. So during this process, data is taken from the source systems– extracted. Converted into a format which can be analyzed– transformed. And then loaded into the database– stored. A lot of the work of processing is front-loaded with this kind of data architecture, which allows for quicker analysis of the pre-processed data.
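As a rough sketch of what that pipeline looks like in code– with hypothetical table and column names, since the real EDW is far more involved:

```python
import sqlite3
import pandas as pd

def extract(conn: sqlite3.Connection) -> pd.DataFrame:
    # Extract: pull raw order rows out of a source system.
    return pd.read_sql("SELECT order_id, order_detail, ordered_at FROM orca_orders", conn)

def transform(raw: pd.DataFrame) -> pd.DataFrame:
    # Transform: parse timestamps and split the order detail into discrete fields.
    raw["ordered_at"] = pd.to_datetime(raw["ordered_at"])
    fields = raw["order_detail"].str.split(",", expand=True)
    raw["priority"] = fields[0].str.strip()
    raw["dose"] = fields[1].str.strip()
    return raw.drop(columns=["order_detail"])

def load(clean: pd.DataFrame, conn: sqlite3.Connection) -> None:
    # Load: store the pre-processed rows in a warehouse table for quick analysis.
    clean.to_sql("warehouse_orders", conn, if_exists="append", index=False)
```

So serendipitously, my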
contact in the data analytics department– sitting up here– was involved
in both blood utilization queries, as well as this effort
to create this data warehouse. At the time that
I contacted him, I was able to help him in terms
of providing clinical expertise and as a subject matter expert. You see, one of the
difficulties for a data analyst who is operating without clinical knowledge is knowing what’s
important or not. Knowing how to organize
things in a way that makes clinical sense. On the other hand, it’s
difficult for a clinician, without coding
knowledge, to explain how to best sort and
organize this kind of data. So here– take a look at this
fairly raw data from ORCA, which contains information
that’s from orders that’s sent by clinicians. If you take a look at
this long string of text, how would you split up and
name the discrete parts of this data? Think about that. So raw data, which comes
like this, is unprocessed or what you would call
non-discrete data. You might notice that
there’s some useful component parts in here, as well
as some free text. Without any particular
specialized knowledge, you could probably guess that
there’s a date and time here, that’s useful. Followed by something
about the priorities here. Followed by the dosing and
then a bunch of other stuff here– which could be
various different attributes or could be various other
important information here. So it’s more ideal to store this data discretely, at the lowest level of granularity of each of those component parts. And additionally, if this
data is coded or taken from just a
controlled vocabulary, then it can be a lot easier
to process this data. And converting this data
from this long texturing into discrete data is sometimes
referred to as parsing or basically, breaking up
that sentence into something that’s more usable. I considered torturing
you with a fairly long, in-depth parsing
problem, but suffice to say that, there’s a
lot of coding involved. But think about this– if you were to search these
strings for a pattern like two units, you’d come up,
sometimes, with other patterns that are interfering with that. If you’re trying to look
for a date in here– you could see that sometimes
if there’s free text here, some people might
enter in dates. So how would you best
approach this problem? Well, in order to fix this, we split it up based upon commas into its various different fields, knowing that each of those fields corresponded to something that was in that order set. And then we searched within each of those columns, or comma-separated values, in order to derive this kind of data. So once the data is robust, that’s when you can start analyzing it.
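In code, a simplified version of that parsing– with a made-up order string, since I won’t reproduce real order data here– might look something like this:

```python
import re

# A hypothetical raw, non-discrete order string (illustrative only).
raw = "10/31/19 14:05, Routine, 2 unit(s), Irradiated, transfuse over 2 hours"

# Split on commas so each field can be searched independently, rather than
# pattern-matching the whole string, where free text could interfere.
fields = [f.strip() for f in raw.split(",")]

record = {
    "datetime": fields[0],
    "priority": fields[1],
    # Search only the dose field for the unit count, so a stray "2" in the
    # free-text comments can't produce a false match.
    "units": int(re.search(r"(\d+)\s*unit", fields[2]).group(1)),
    "attributes": fields[3],
    "comments": fields[4],
}
print(record)
```

So what does this look like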
when we start blending data from the EMR and the LIS? Here, we can look
at service data from ORCA of where
the transfusion went. And how many units
from Sunquest. And we can see here that heme-onc is one of our biggest users– as would be expected. Followed by surgical services
combined as a close second. If we say that we’re interested
in surgical services, we might be
interested in looking at how these procedures
play a role in transfusion. One of the issues
that we ran into was that there’s
hundreds or thousands of different procedures
and procedure names. So after developing
a parsing schema, we sorted all of the
different procedures by how frequently they use
any sort of blood product. And the ones on the top are the
ones that you would expect– liver transplants, cardiac
procedures, lung transplants, et cetera. We binned the procedures that most frequently use blood at UWMC and Harborview, which resulted in about 50 major categories of procedures in about 13 different specialty categories.
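Once the data is discrete, the sorting step itself is conceptually simple– something like this, with illustrative column names rather than the actual warehouse schema:

```python
import pandas as pd

# Hypothetical merged table: one row per procedure that used any blood product.
df = pd.DataFrame({
    "procedure_category": ["liver transplant", "cardiac", "liver transplant",
                           "cardiac", "spinal fusion", "lung transplant"],
    "units_transfused": [8, 4, 12, 6, 10, 5],
})

# Frequency: how often each category uses any sort of blood product.
by_frequency = df["procedure_category"].value_counts()

# Volume: total units used, which can tell a different story than frequency.
by_volume = df.groupby("procedure_category")["units_transfused"].sum()

print(by_frequency.head(10))
print(by_volume.sort_values(ascending=False).head(10))
```

And the mix of important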
blood procedures at Harborview is different from that at UW. Obviously, we see a lot more trauma-related procedures such as incision and drainage (I&Ds), burn surgeries, and ortho categories such as fracture and spinal fusion. So then, what percent
of these frequent blood procedures or major blood
procedures use blood products? And we can see here, that
cardiac, liver, and thoracic– like lung transplants– tend to
use blood the most frequently out of this. So between 15% and 22%– roughly one in six to one in four– procedures will use
some form of transfusion during the procedure. We could look at
this a little bit differently because frequency
isn’t the same as volume. If we look at this another way, comparing the number of units that are transfused– this tells a slightly different story about how major ortho surgeries are, in that when they do use blood, they seem to use quite a bit of it. And at Harborview, cardiac procedures being among the most frequent users of blood and using a lot of it is also something that’s not necessarily surprising. So we can start asking some
more complex questions. If we’re interested in anemia management– is there a difference in the hemoglobin prior to the procedure when we stratify procedures based upon the number of RBCs that they received during a procedure? So this is combining LIS results, BECS results, as well as ORCA data.
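A sketch of that kind of blend– again with made-up column names and strata, not our actual schema:

```python
import pandas as pd

# Hypothetical per-encounter extracts: pre-procedure hemoglobin from the LIS,
# RBC units issued from the BECS, and procedure category from ORCA.
hgb = pd.DataFrame({"encounter_id": [1, 2, 3], "pre_op_hgb": [7.2, 9.1, 8.0]})
rbc = pd.DataFrame({"encounter_id": [1, 2, 3], "rbc_units": [6, 0, 2]})
proc = pd.DataFrame({"encounter_id": [1, 2, 3],
                     "procedure_category": ["cardiac", "cardiac", "cardiac"]})

merged = hgb.merge(rbc, on="encounter_id").merge(proc, on="encounter_id")

# Stratify encounters by how many RBC units they received, then compare
# the average pre-procedure hemoglobin across the strata.
merged["rbc_stratum"] = pd.cut(merged["rbc_units"], bins=[-1, 0, 4, 1000],
                               labels=["0 units", "1-4 units", "5+ units"])
print(merged.groupby("rbc_stratum", observed=True)["pre_op_hgb"].mean())
```

So we see here that looking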
only at cardiac procedures– we do see that, overall, there seems to be a clear linear pattern between transfusing five or more units and a lower hemoglobin. And to a lesser extent,
you can see this pattern when they transfused
one to three units over the course of an encounter. But you might ask– how
can we enact patient blood management through this blood
utilization review data? Well, one thing
that we could do is we could look at service
and encountered data and see if there’s any
difference in the number of units used in a case
by each different surgeon or anesthesiologist. And use this as a report card. So this shows the number
of units used in a case by each surgeon. And the percentage of times
that they used that many units. And of course,
the names here are redacted to protect the
names of the innocent. So let’s circle back– in act two, which I subtitle
ORCA orders ordeals– a whale of an overhaul. This was a massive undertaking. It affected many
different services with many different nuances. But I’ll mostly try to stick
to some of the highlights here. I mentioned earlier that there
are many legacy workflows that were problematic and
that we wanted to update. And some of them are pointed
out here, in general. Some of the common
themes from users were that the power plan
was not user-friendly. That it’s overpopulated. There are too many words. The notes were too long. The orders weren’t interfaced. There was lots of manual entry. And many chances to make
manual errors, as well as duplicated efforts
that people thought could be just transmitted
electronically without those kinds of errors. And there were separate product and transfuse orders. Sometimes people would forget
to add those orders together. People wanted transparency
for what was ordered. They wanted to know
what was transfused and the status of
the type and screen. They wanted more guidance for
certain kinds of ordering. For example– people were always very confused about ordering attributes. And they thought that there were too many types of attributes and didn’t know when to order them. So coincidentally, in November
and December of last year– around the time that I joined– we were informed that some resources had been set aside to update the ORCA orders, as ITS had resources freed up from another project that they were working on. So an eight-month
project timeline was planned with planning,
design, system build, testing, and training phases. The design team was
assembled including ITS, lab med IT, and informatics. Transfusion services
at Harborview and at UW Medicine, SCCA,
ICU, nursing, anesthesia, and others. And we had a decision escalation
path during the design sessions with each of the design
session participants, as well as the
steering committee and an executive sponsor. I’m going to go into a
handful of the changes that we enacted when we
started going into this, but this is not necessarily
all of the changes. So originally, this is the
old code sets, as well as some of the values that are in here. You can see that there’s
a lot of redundancy. You can see like emergency,
emergency AB is provided. Emergency cross-match,
emergency uncross-match, emergency O provided, et cetera. Like outpatient,
patient is waiting, a [? patient-patient ?]
is within two hours. Planned transfusion–
a bunch of this stuff. So we wanted to,
basically, take away a lot of the redundant
values and take away some of the complexity
with ordering. This is what the code set looks
like towards the end of this. We took the priority code set down to about four unique values, for about a 71% decrease. We took the attribute code sets down to about 14 unique values, for about a 48% decrease. And we took the number of units and transfuse orders down to 24 unique values, with a 57% decrease.
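Conceptually, the consolidation is just a many-to-one mapping from the legacy values onto the new ones– something like this, with an illustrative mapping rather than the final code set:

```python
# Map redundant legacy priority values down to a handful of unique ones.
LEGACY_TO_NEW = {
    "Emergency AB provided":    "Emergent",
    "Emergency O provided":     "Emergent",
    "Emergency uncrossmatched": "Emergent",
    "Patient is waiting":       "ASAP",
    "Patient within two hours": "ASAP",
    "Outpatient":               "Routine",
    "Planned transfusion":      "Scheduled",
}

def migrate_priority(legacy_value: str) -> str:
    """Map a legacy priority value onto the consolidated code set."""
    return LEGACY_TO_NEW.get(legacy_value, "Routine")  # sensible default

print(sorted(set(LEGACY_TO_NEW.values())))  # ['ASAP', 'Emergent', 'Routine', 'Scheduled']
```

We provided another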
avenue for the clinician to communicate important
information to the transfusion service through what we call a
blood bank requirements order. Remember that the
clinician mostly communicated to the blood
bank through the information on their ordering. So the blood bank
requirements order notified the transfusion
service of medically complicated patients and OR procedures. And we thought that
it was very important that we put these categories
forth in clinical terms rather than transfusion
processing terms for better comprehension
and usability. These were linked to special
processing requirements so it was a lot easier
for the clinicians to understand what kind
of information to give us. So this is some pop
ups that are occurring. So the blood banks
requirements field is a required field,
especially if it wasn’t previously documented. And so these
relevant requirements would carry over
from each encounter. And if the blood product attributes didn’t match a clinical indication that was documented, a hard stop would return the practitioner to the power plan in order to modify the attributes. And this would prevent providers from overlooking attributes.
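As a sketch of the logic behind that hard stop– with a couple of illustrative indication-to-attribute pairs, not our actual rule table:

```python
# Documented clinical indications and the special attributes they imply.
REQUIRED_ATTRIBUTES = {
    "stem cell transplant": {"irradiated"},
    "IgA deficiency": {"washed"},
}

def missing_attributes(indications: set, ordered_attributes: set) -> set:
    """Return the attributes that are required but absent from the order."""
    required = set()
    for indication in indications:
        required |= REQUIRED_ATTRIBUTES.get(indication, set())
    return required - ordered_attributes

missing = missing_attributes({"stem cell transplant"}, set())
if missing:
    # Hard stop: return the practitioner to the power plan to fix the order.
    print(f"Order blocked, missing attributes: {missing}")
```

So another thing was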
that this would then funnel useful information into the top of the power plans. Basically, this was a form of clinical decision support. We also populated this area with information about their type
and screen orders. Or if they had a cross-match
expiration that was coming up. And these attributes
and special instructions went across
different encounters. So another form of
clinical decision support that we instituted was
best practice alerts. So these best practice
alerts were for providers if they were ordering blood
products outside of clinically determined thresholds. And we instituted this for red blood cells in terms of hemoglobin and hematocrit. Platelets in terms of platelet count. Plasma in terms of INR and PT. As well as for cryo in terms of fibrinogen. And of course, this did require some negotiations with the clinical team in order to figure out which thresholds would be best.
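The alert logic itself is conceptually simple– a sketch, with illustrative threshold values rather than the ones we actually negotiated:

```python
# For each product, the lab test to check and the condition that fires an alert
# (values here are illustrative placeholders, not our negotiated thresholds).
THRESHOLDS = {
    "RBC":      ("hemoglobin",     lambda value: value >= 7.0),  # g/dL
    "Platelet": ("platelet_count", lambda value: value >= 10),   # x10^9/L
    "Plasma":   ("INR",            lambda value: value <= 1.5),
    "Cryo":     ("fibrinogen",     lambda value: value >= 100),  # mg/dL
}

def best_practice_alert(product: str, latest_value: float) -> bool:
    """Fire an alert when the latest result is outside the transfusion threshold."""
    _lab_name, outside_threshold = THRESHOLDS[product]
    return outside_threshold(latest_value)

print(best_practice_alert("RBC", 9.2))  # True: ordering RBCs at Hgb 9.2 alerts
```

This is how long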
order sets were. You can see that there’s
many different products. There’s a bunch of
different lab values that are associated with this. There’s a bunch
of different stuff that’s just a lot of clutter. And so we removed a bunch of defunct product types that people don’t generally order, such as autologous and directed blood products. And so this really
simplified our list. A lot of the text
in the guidelines was put into links so
that they were still available for reference. And these were just
generally streamlined for increased efficiency. A lot of the pre-medications and labs were removed from the normal adult order set, but were still available for services that actually use them, like BMT. So in terms of a
project overview, we updated and interfaced
ORCA blood product orders and transfusion
power plans in order to reflect in-house practices
and regulatory requirements. And this involved streamlining
and standardizing order sets. Streamlining power plans
to reduce visual noise. As well as to institute clinical
decision support and best practice alerts. We improved flow
and communication between clinicians and
laboratory systems. So blood bank requirement
attributes communicated context in order for lab staff to
understand what was going on. We improved workflows
for nursing staff– I didn’t really go into this
too much– but the nurses each received a transfusion
task for each blood product order to be transfused. It used to be lumped all
together under one transfuse order. So they would have to ask
multiple times for providers to add more transfusion
tasks for them if they didn’t transfuse the entire order. There was increased transparency
of blood product status. And a new transfusion
reaction reporting form, which was a really big
quality of life improvement or enhancement for
the nursing staff. And then through this,
we reduced the need for manual paper
blood product order forms with electronic
requests in ORCA. We interfaced ORCA
behind-the-scenes with lab systems where possible. So the go live was
completed on 8/14. There were about 9,000
technical updates that were made post-go live
where a combination of issues were resolved. And some quick enhancements
were delivered. There were about 20 issues that
were called into the help desk, primarily involving training
issues during the two weeks that we were monitoring
this after the go live. And about 1,000 different orders
were ordered during this time. In terms of the ITS statistics– they had about 18
people who were working on this project in terms of coding. Their forecasted work was about 2,900 hours, but their actual work was about 5,000 hours, with a variance of about 2,000 hours. And this was attributed
to the complexity of the ORCA design and clinical
workflows, including SCCA– which came in at
the last minute. And complexity of
custom code, as well as training requirements for two
e-learnings and multiple job aids. So you might have received a
thing for the e-learnings– did everyone do that? I don’t see a show of hands– single tear. So let’s bring this back
to transfusion metrics. Once this project was completed, the work that we had done in pursuing patient blood management metrics became important for monitoring ordering practices. This is easier to do because
of the data warehouse design. Our ability to
measure these metrics was dependent on our
access to this data. And we see here the number of products that were transfused against the week that they were transfused.
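The underlying query is a simple weekly binning of the issue log– sketched here with an illustrative schema:

```python
import pandas as pd

# Hypothetical issue log: one row per product transfused.
issued = pd.DataFrame({
    "issued_at": pd.to_datetime(["2019-08-01", "2019-08-02", "2019-08-09"]),
    "product": ["RBC", "Plasma", "RBC"],
})

# Count products per week to watch for changes after the go live.
weekly = issued.set_index("issued_at").resample("W")["product"].count()
print(weekly)
```

And we didn’t really see any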
appreciable change over time– at least overall,
but we haven’t really stratified these by service
to look at these, yet. We can compare this to
the volume of order types that we receive for
product orders over time. We see that as expected, certain
orders which were phased out go down to zero. And some of the other
order sets or order types pick up the slack here, in
terms of planned transfusion versus what used to
be planned procedures. And this was meant to be
merged into this form of order. We looked at order completion
and cancellation rates as markers of clinician
effort and workload. And as a surrogate
marker for the usability and intuitiveness of the system. We saw that there was an overall
increase in completed orders. And a decrease in discontinued
orders, both by the system and by the user. One of the new features of our update that we wanted to track– because there was a concern, when we were instituting this, that it would fire too often– was the number of transfusion alerts fired over time. We can see that the
blood bank requirements order fires quite often. And this would be
expected because it’s linked to the reminders
to the provider to order the proper
attributes for each product. So of the clinical
decision support alerts, hemoglobin alerts fire
the most often over here– which isn’t surprising
given that red blood cells are some of the most
frequently transfused product. Just for reference–
about one hemoglobin alert fires for every
three units of RBCs that were transfused
during a period of time. And one platelet alert fires for approximately every 10 platelet units that are transfused. One INR alert fires for about every 10 to 15 plasma units that are transfused. So we’re curious as to
whether or not we really changed ordering practices. So we looped back to the
average hemoglobins– so this is like average
hemoglobins per period of time, where people transfused an RBC. And we looked at this as an aggregate. We didn’t really see any
appreciable differences in terms of what hemoglobin
people tend to transfuse at, at the different institutions
with or without procedures performed during an encounter. So high hemoglobin values
might be skewing this high. I would probably, in the future, look at these again in terms of acuity of admission, or stratify these based upon service. So the only one that really
seemed to have any sort of effect or detectable
effect– at this point– was INR prior to
plasma transfusion. So here’s the average INR
string that average INRs prior to a transfusion
per period of time– pre and post go live. And at least at Harborview,
and maybe at UW Medicine– average INR of
transfusion seems to be slightly lower in encounters
with no procedure done. So the values for this is like– in case you can’t read this–
is 1.8 post-go live and about 2.3 pre-go life at Harborview,
during the year of 2018. And 2.3 post-go live and maybe,
like 2.6 pre-go live, in 2018. So we completed a major project
to update and streamline transfusion-related workflows
in our electronic medical record with hospital-wide involvement. We increased quality,
safety, and ease of use. Ordering patterns may
have changed slightly, but significant changes in blood
utilization remain to be seen. And so we’re still
monitoring this and maybe we can slice and dice
the data in a different way, but these quality
metrics are still important for monitoring
our current state. And will continue to be
important moving forward. What’s some of the lessons
that we took away from this? And this actually
wasn’t a lesson, but I already knew this– but
the hardest thing to change is its culture. The laboratory has to
be in such a process– very proactive in the entire
process of planning, design, and testing. We have to build bridges
and working relationships with clinical stakeholders. And we have to be willing
to negotiate and compromise in order to make progress
and move forward. We can’t let the perfect
be the enemy of the good. Another thing is that things that are out of scope don’t always stay that way. Especially if they are a core part of your process– a lot of the time, those things tend to bleed back in, even though people keep saying they’re out of scope. One important concept in
any sort of development is regression. So you could also think
of this as whack-a-mole. You like, fix one thing
and break another. And a lot of the time,
these are unknown unknowns. So you don’t really know
that doing one thing was going to have
an effect elsewhere. So what are we
talking about when we start to think about
challenges in the future? Clinical transformation, the Epic transition– this is what’s on a lot of people’s minds. In December, the 4th to the 11th, and for about 14 days or less– they’re supposed to do about 300 or 600 different design sessions, et cetera. So hopefully, this recent exercise will have helped to flesh out what some of the requirements and asks from the stakeholders are going to be. And we hope that this
will carry over into some of the design sessions. But it’s not all doom and gloom– because this is clinical transformation, we’ve been told that all workflows are going to be under review. This could be an opportunity
to affect other not-in-scope workflows, such as– oh, I don’t know– like the fact
that surgery orders everything manually and on
paper, but who knows? I don’t know. The truth of the matter
is that we don’t really know how the designers
or the coders are going to put all of
these sessions together. And what’s going to
be the outcome of this or how they’re going to
actually approach this. Although, certain people in
the audience might be able to speak more on this. So what’s in the future for
our database and metrics? We want to continue to validate
the data in the data warehouse. We want to develop tools
for more complex queries, as well as conduct
statistical analysis. And we want to do this
in order to continue to provide quality improvement. We want to develop dashboards
for patient blood management feedback to the
different services. And we want to accept and triage research projects that are leveraging this big data, which combines data from all of these various different sources. We can look at vital signs. We can look at the various
different procedures and a bunch of other data
that’s just floating out there. But I think the
important thing is knowing what question to ask. Because otherwise, we
could go in all sorts of different
directions with this. And maybe that’s a good thing. So in conclusion, informatics
plays an important role in every aspect of
the clinical pathology laboratory, blood bank, and the
transfusion medicine service. Being involved with
informatics workflows allows clinical stakeholders
to influence the quality of their data and metrics. We completed a major project
to update and streamline transfusion-related workflows
with hospital-wide involvement. And we have developed an
electronic data warehouse to extract and combine data from
our electronic medical record, laboratory information
system, and other databases within our institution. As a capping off
point, I would like to give some acknowledgments
to Monica Pagano and John Hess, as well as Patrick Mathias. I wouldn’t be able to do any of
this work without their support and help. The work of Ray
Bunnage and Joe Oates– sitting here in the audience–
have been absolutely essential, as they’re my contacts
in data analytics. As well as to the rest of
the people who are involved. Because this is something
that takes a really large team in order to accomplish. And it’s not something
that I did by myself– but I kind of did– but no. But really, it’s something
that really did require hospital-wide involvement. So I’ll take any
questions at this time. [APPLAUSE] AUDIENCE: So with
Amalga or EDW– or whatever it’s
called– can you look at outcomes after the
changes that you put in place? And ask that question or is
there not enough data in Amalga to ask that? HAMILTON TSANG: So
I think you can. It depends on what you’re looking for. If you’re looking for stuff
that’s in the actual notes– that can be a little
bit more difficult because, obviously, that’s
like a wall of free text– basically. But you can look at
like hemoglobin values. Stuff that’s already
in discrete data. To a certain
extent, you can also try to look for outcomes
if you do enough parsing. You can look in those notes
and look for certain phrases. You can look for
discharge outcomes. So I would say yes, it depends
on what the question is. So yes. AUDIENCE: I’m sure you got lots
of anecdotal feedback on this, but I’m wondering
if you’re planning any kind of formal survey of folks ordering blood, to find out how successful
you were from a user experience perspective. HAMILTON TSANG: Yeah,
that’s a really good point. I think that, that would
be great to do in order to follow-up and put
a capstone on this. Where I think we got
most of the feedback was just informal polling
of users who were using the system right after go live. But a more formalized
system of doing a survey would be a great idea. Monica. AUDIENCE: That was a very
good talk– thank you. So I have two questions. The number one is if you
can expand a little bit more on how these data actually is
going to apply to patient blood management. And how that translates
into clinical practice. And the second
question that I have is if you can comment,
also, in other institutions experience in terms of data
extraction and creating these kind of reports. HAMILTON TSANG: So I’ll answer
the first question first. So that question was how to
use this in patient blood management. So right now, I’ve mostly– or at least in this talk– I mostly talked about
how to get the data. And that was a big
problem because we– prior to this– had a
big issue with even just getting the data
in the first place. And a lot of my efforts
have been a lot more on mechanics of
getting the data, rather than the analytics part– which is what we’re hoping to transition into once we get the data validated. And that we can trust that the
data is going to be robust. But after getting this data– one of the things
that I talked about was giving feedback to
the different services. So right now, we have a
bunch of different requests from OB/GYN, from surgery,
from anesthesia for– how much blood are we using? And we don’t really
even have baseline data. So I think even just
getting the baseline data in the first place is
going to be very important. And then that way, we
can actually monitor. Because people are saying
like, we don’t even know how much we’re using now. How are we going to know when
we do some sort of change in blood utilization policy– how are we going
to monitor that? So this is going to
play a big role– I think– in monitoring that. And there’s some other options. For example, giving
report cards to like, surgeons or anesthesiologists
for how much blood they’re using as
compared to their peers– which is something that has been
done before by our own Patrick Mathias. So I think that it could
play a big role in that. And then, I forget
your second question– it was other institutions. AUDIENCE: [INAUDIBLE] HAMILTON TSANG: And what was it
about the other institutions? Sorry. AUDIENCE: [INAUDIBLE] HAMILTON TSANG: So
think of stuff like clinical decision support. Or places like Stanford, for example, have been doing things
where they have given report cards or feedback to various
different departments. And they report success
with changing practice based upon that. I guess, it’s like
your mileage may vary. I don’t know how that’s
going to translate over to our own institution. Here, we instituted something
like clinical decision support and we haven’t really seen
too many big changes or fluxes in terms of our
blood utilization. But maybe that’s having to do
with the granularity of how we’re looking at things. And a lot of the changes
that we did institute have been based
off of the models that we’ve seen in a lot
of different institutions. So I would have to
say that probably, every single institution
is different. But following one of
those models is something that we definitely should do. AUDIENCE: Hamilton,
one of the things that every time I go
onto ORCA and look back at somebody– it will tell
me if the patient is dead. And this gives us– in a sense– an output. The system follows,
very carefully, everybody who dies in
the state of Washington and feeds them back in. These kinds of
dichotomous outcomes would allow you to do logistic
regression against large data sets and come up with odds
ratios of what’s happening. How far are we from having these
kinds of engines available? HAMILTON TSANG: I think that,
that data is probably already there. And that we probably could
look at these outcomes. So I don’t– I think that it
probably is in there, but the problem is if they
die outside of the hospital. So that kind of data
might not necessarily be– I don’t know if that’s
all captured in there. Is it? AUDIENCE: Actually, it is
through the Social Security Administration system,
which maintains that information. And so they just keep constantly
feeding the Social Security numbers of our patient log
into the Social Security system and see who has died. HAMILTON TSANG: Yeah,
that’s really interesting. That would be a
great idea to do. And as I said, a lot of the
work that we’ve been doing has been to set this up. And we are now looking into moving on to more research kinds of things. But I think the important
thing that we’ve been really working
on is making sure that the data is very robust. And that all of the things
within this data warehouse make sense before
we move on to that. As they say– garbage
in, garbage out. So we want to make sure that
all of the data is validated. And that it all works. Anything else? Thank you. [MUSIC PLAYING]
