Investigating the Heart Pump Implant Decision Process:
Opportunities for Decision Support Tools to Help
Qian Yang1John Zimmerman1Aaron Steinfeld1Lisa Carey2James F. Antaki2
School of Computer Science1School of Biomedical Engineering2
Carnegie Mellon University, Pittsburgh PA, USA
{qyang1, johnz}@cs.cmu.edu {steinfeld, lcarey, antaki}@cmu.edu
ABSTRACT
Clinical decision support tools (DSTs) are computational systems that aid healthcare decision-making. While effective in laboratory settings, almost all of these systems have failed when moved into clinical practice. Healthcare researchers have speculated that this is most likely due to a lack of user-centered HCI consideration in the design of these systems. This paper describes a field study investigating how clinicians make a heart pump implant decision, with a focus on how best to integrate an intelligent DST into their work process. Our findings reveal a lack of perceived need for, and trust in, machine intelligence, as well as many barriers to computer use at the point of clinical decision-making. These findings suggest an alternative to the traditional use model, in which clinicians engage with DSTs at the point of making a decision. We identify situations across patients' healthcare trajectories when decision support would help, and we discuss new forms it might take in these situations.
Author Keywords
Clinical Decision Support Systems; Decision Support Tools;
Field Study; Qualitative Methods; Service Design.
ACM Classification Keywords
H.4.2. Information Systems Applications: Decision support
(e.g., MIS)
INTRODUCTION
The idea of leveraging machine intelligence in healthcare
in the form of decision support tools (DSTs) has fascinated
healthcare and AI researchers for decades. These tools
promise improved healthcare quality through complementary
insights on patient diagnosis, treatment options, and likely
prognosis. In recent years, the adoption of electronic medical
records along with advances in big data technologies has
created the perfect environment for algorithm-powered DSTs
to impact clinical practice.
Interestingly, almost all these tools have failed when
migrating from research to clinical practice [15, 24, 25]. In a
review of clinically deployed DSTs, healthcare researchers
ranked the lack of HCI consideration as the most likely reason for their failure. This includes a lack of consideration for clinicians'
workflow and the role AI currently plays in clinical practice
[33, 46]. Currently little to no work in the field of HCI has
investigated these issues or proposed how intelligent DSTs
should be integrated into care environments.
We, HCI researchers and Bioengineering researchers, are
collaborating on the design of a DST supporting a heart pump
implant decision. A VAD (ventricular assist device) is an implantable electromechanical heart pump used to partially replace the function of a failing heart. VADs were initially used to support heart failure patients until they could get a heart transplant. A few years ago, VADs were approved as a destination therapy: the last therapeutic treatment for people in end-stage heart failure [19]. VADs implanted as destination therapy were expected to extend patients' lives by several years. However, many patients who received VADs died shortly after the implant [7]. Additionally, many
patients who might benefit from a VAD did not appear to have
been offered this treatment. The decision to implant a VAD
seemed a perfect place to apply a DST, as these intelligent
systems could mine thousands of patient records, bringing
the collective intelligence of many physicians to each implant
decision.
Given the previous failures of DST deployment and the wide
gap between DST technology and clinical reality, we chose to
conduct a field study. We had two goals:
1. To understand the clinical decision process around a VAD
implant, including the participants, their work practices,
the contexts where the decisions get made, and other
critical factors that influence the decision;
2. To identify the key touch points where we might situate a
prognostic DST that clinicians would find useful in their
practice.
We interviewed and observed clinicians caring for VAD
patients at three different implant centers. We then analyzed
our data using affinity diagrams and a service blueprint of
the decision paths different patients follow. Our findings
reveal that for most cases, clinicians do not find the decision
process to implant a VAD challenging, and thus would not
likely engage with a DST to aid with the decision. However,
we did identify situations when decision support would help.
Clinicians would value support for emergent cases, when
they have very little data to predict how a critical patient
might respond to available therapies. In addition, the implant
clinicians would value a DST that worked in upstream clinics
and hospitals if it could prevent patients from arriving at an
implant center after their window for an implant had closed.
This study makes two contributions. First, our field
observations and interviews provide a rare description of
how an implant decision is reached across many clinician
roles and contexts. It provides a timely answer to healthcare researchers' call for a context-focused HCI approach. Second,
this work suggests an alternative perspective to the idea of a
DST as a system clinicians engage with at the point of making
a decision. This work suggests a DST might play a more vital
role if it followed clinicians over time and across their care for
heart failure patients.
RELATED WORK
Clinical Decision Support Tools (DSTs)
Clinical decision support tools (DSTs) are computational
systems that support one of three tasks: diagnosing patients,
selecting/recommending treatments, or making prognostic
predictions of the likely course of a disease or outcome of
a treatment [48].
Most DSTs aim to reduce human errors and help clinicians
make the right decisions. Others prepare patients to
make well-informed decisions, where patients’ preferences
and values play an important role in critical decisions
[48]. Integrating the positivistic doctor perspective and the humanistic patient perspective remains an open challenge in
medicine [6]. We found no DST that supports clinician and
patient collaboration in making well-informed decisions, and
no clinician-facing systems that include social support factors
in their prognostic predictions.
Output from DSTs can take several forms: discrete decisions, a set of ranked recommendations, predictions of likely outcomes, alerts of a potential problem, or lists of considerations that should be taken into account before making a decision [48]. The DST for the VAD implant decision will be a prognostic DST that uses data mining to help clinicians
make a good destination therapy choice. Like almost all
other prognostic DSTs, it currently takes a context-less,
prototypical form: It takes in a list of patient condition
measures and produces an individualized prediction of patient
trajectory, including likely survival and other post-surgical
risks [5].
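To make this prototypical form concrete, the following sketch shows the input-output shape such a tool is typically assumed to have. It is a minimal illustration only: the feature set, model choice, and synthetic training data are our own placeholder assumptions, not the actual VAD DST under development.

```python
# Minimal sketch of the prototypical, context-less prognostic DST form:
# a list of patient condition measures goes in, an individualized outcome
# prediction comes out. Features, model, and data here are placeholders.
from dataclasses import dataclass

import numpy as np
from sklearn.linear_model import LogisticRegression

@dataclass
class PatientMeasures:
    age: float
    creatinine_mg_dl: float
    albumin_g_dl: float
    inr: float

def predict_one_year_survival(model, p: PatientMeasures) -> float:
    """Return P(survival at 1 year) for a single patient."""
    x = np.array([[p.age, p.creatinine_mg_dl, p.albumin_g_dl, p.inr]])
    return float(model.predict_proba(x)[0, 1])

# A real tool would be fit offline on historical implant records;
# we fit on synthetic data purely so the sketch runs end to end.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 4)), (rng.random(200) > 0.4).astype(int)
model = LogisticRegression().fit(X, y)

print(predict_one_year_survival(
    model, PatientMeasures(age=63, creatinine_mg_dl=1.4,
                           albumin_g_dl=3.2, inr=1.1)))
```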
Despite success in labs, the vast majority of DSTs have failed when moved to clinical practice. Clinicians rarely use them
[13, 15, 47]. Healthcare researchers have speculated that
the lack of HCI consideration in the design of these systems
might be the main cause of these failures rather than poor
technical performance [38, 40]. These HCI-related issues
identified by these researchers include:
Poor workflow integration: Clinicians reported DSTs
are disruptive, time-consuming, and conflict with the
chaotic nature of clinical work [3, 25, 34, 36, 45, 46].
Some researchers suggested DSTs should be integrated
into Electronic Medical Records (EMR) so as to fit into
clinician workflow [35].
Poor social integration: Most DSTs have been designed
for use by a single user/decision maker; however, many
critical healthcare decisions are made by clinician teams
[10, 20, 27, 44]. Research has also begun investigating DSTs' social influence: a lab experiment showed that physicians are concerned patients would think less of them and their skills if they needed a tool to make medical decisions [37].
Poor concern for clinician needs: Clinicians often lack the
motivation to use a DST [22, 40]. They see it as getting in
their way and slowing them down. Other clinicians do not
trust that the outputs of these systems are informative for
the kinds of patients they care for [15, 47]. Finally, some
perceive DSTs as infringing on their autonomy and their
expertise [45].
Interestingly, drug ordering and preventive care reminder
systems are one type of DST that has worked very well in
clinical environments. These are rule-based systems. When
clinicians enter a prescription that falls outside the standard
of care, the DST issues an alert and it requires the clinician
to input a rationale for the deviation [26, 41]. These systems
prevent human errors, and they collect important information
when clinicians should deviate from the standard. They have
demonstrated relatively wide use because they have been
integrated into the tools clinicians already use, and they only
make their presence known to clinicians when an anomaly is
detected. This is quite different from the interaction of most
prognostic DSTs, which assume clinicians will recognize
they need help, walk up to the system, and seek advice for
the decision.
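The sketch below illustrates this interaction pattern in miniature: the system stays silent for unremarkable orders and surfaces only on anomalies, capturing a rationale when a clinician deviates. The drug names and dose limits are invented for illustration and are not drawn from any deployed system.

```python
# Toy rule-based prescription check: alert only when an order falls
# outside the standard of care, and require a rationale to proceed.
# The drug names and dose limits below are invented for illustration.
STANDARD_DAILY_LIMITS_MG = {"furosemide": 600, "lisinopril": 80}

def check_order(drug: str, daily_dose_mg: float):
    """Return None for an unremarkable order, else an alert message."""
    limit = STANDARD_DAILY_LIMITS_MG.get(drug)
    if limit is not None and daily_dose_mg > limit:
        return (f"ALERT: {drug} {daily_dose_mg} mg/day exceeds the "
                f"standard-of-care limit of {limit} mg/day.")
    return None  # the system stays invisible for in-range orders

def place_order(drug: str, daily_dose_mg: float, get_rationale):
    alert = check_order(drug, daily_dose_mg)
    if alert is None:
        return {"drug": drug, "dose_mg": daily_dose_mg, "rationale": None}
    # Deviation is allowed, but only with a documented rationale; the
    # rationale doubles as data on when deviating from the standard is right.
    rationale = get_rationale(alert)
    if not rationale:
        raise ValueError("An out-of-range order requires a rationale.")
    return {"drug": drug, "dose_mg": daily_dose_mg, "rationale": rationale}

order = place_order("furosemide", 720,
                    get_rationale=lambda alert: "diuretic resistance")
```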
Decision Support Tools in HCI
Little HCI research has investigated why DSTs fail in clinical
practice or studied the context of healthcare decision making
with a focus on how to best integrate and situate a DST.
Rather, research has focused on other critical issues including
better information presentation and visualization, accuracy of
risk communication, trust-worthiness, ease of use for medical
information, etc. [39, 43, 48]. The few studies that have investigated DSTs in use are lab studies, and these have often substituted undergraduate students for patients and medical students for clinicians [37].
While making important advances, this prior work offers few
insights into how to integrate intelligent systems into chaotic,
human-centered clinical environments. More work is needed
to capture how clinicians deliberate and reach a decision, to
document the contextual barriers to computer interaction, and
to understand clinicians’ perceptions of and expectations for
using intelligent tools to make care decisions.
A related strand of HCI research has looked at emergent,
intensive, and routine care settings with a focus on new tools
for care coordination [2, 9, 17], as well as communication
tools for multidisciplinary meetings [28]. While this body
of work provides valuable snapshots of clinical work, it has
a strong focus on clinician coordination, documenting the
times and places that many clinicians and healthcare activities
densely aggregate.
Healthcare Context and Decision-Making
Research in healthcare organizational decision-making has
focused on understanding clinical work and culture. We noted
two different themes: evidence-based medicine and chaotic
clinical environment.
An evidence-based principle dominates the clinical world.
VAD implant physicians, for example, are expected to follow
an eight-step approach to make a clinical decision [23]. They
are also expected to use the VAD implant decision tree and risk models [12, 18, 29], the multidisciplinary team model, patient communication guidelines [1, 30], and more. All of
these tools promote a standard of care and are expected to
capture and promote the best practices from across many
healthcare centers.
Interestingly, clinicians do not always follow best practices.
Empirical studies in clinical settings have repeatedly reported
chaotic workflows [34, 36, 42], communication breakdowns,
authority-based decision-making that diminishes or dismisses
the input of some team members [11, 21], overconfidence,
and preventable errors [16]. Very little empirical research has investigated the when, why, and how of clinical decision-making as it naturally occurs. Instead, most work
simply notes how it does or does not deviate from the
dominating best practice culture.
Our work attempts to bring these strands of related work together. Previous research in healthcare has identified many HCI-related adoption barriers and provided preliminary depictions of the clinical environment. Going beyond what has been done in the field of HCI, we apply an HCI approach to investigate clinical decision-making with an eye toward where and how DSTs could help.
FIELD STUDY DESIGN
We wanted to understand how the decision making process
to implant a VAD unfolds in the clinical environment. We
wanted to know who participates and where decision-making
happens, and to probe on when clinicians think an intelligent
system might offer support for their work. We wanted to
identify contextual barriers that might prevent people from
engaging with a DST and to identify the times and places it
might add the most value.
To address these needs, we chose to conduct a qualitative
field study consisting of observations and semi-structured
interviews. We chose an ethnographic approach so as
to capture the richness of context, and also because this
has become a standard HCI approach when designing new
software systems meant to improve work. We analyzed
our data using affinity diagrams [32] and by creating a
service blueprint [8] that documents the decision pathway for
individual patients. We chose affinity diagrams and service blueprints, methods from the HCI and service design fields, over the more conventional grounded theory because our focus is more on discovering opportunities for a technology to enhance future practice than on building a detailed theory of the present work situation.
The decision to implant a VAD involves participation from
both clinicians and patients. Clinicians need to assess the
medical necessity of this invasive therapy, and this was the
focus of our research. Patients participate in this decision by
deciding if they want to endure life with a VAD. While we
recognize the importance of the patient in the decision, this
phase of our work focuses exclusively on the clinician side of
the decision.
We carried out this research at 3 different implant hospitals
all in the United States, hospitals that regularly perform VAD
implantation. In two of the hospitals we performed interviews
and observations. In the third, we only performed interviews,
as we could not secure permission to make observations for
legal and privacy reasons. In general, concerns over access to
protected patient health information along with the general
sensitivity over this end of life decision has made getting
access to clinicians extremely difficult for HCI researchers
and practitioners.
The three facilities vary geographically and in scale. Their
performance rankings range from top 5 to top 60 in the United
States. Despite great inter-site differences we observed, we
report findings that all three facilities share.
Hospital 1: large-scale service performing over 60 heart
transplants and over 100 VAD implants per year;
Hospital 2: moderate-sized service performing over 20
heart transplants and over 30 VAD implants per year;
Hospital 3: relatively small service performing about 20 heart transplants and 40-50 VAD implants per year.
We conducted observations in two Advanced Heart Failure
services for 6 to 14 hours a day for 13 days. The observed
VAD teams cared for approximately 75 patients who were
formally or informally being considered for an implant. We
followed attending cardiologists across all decision-related
settings including morning rounds, clinician-patient
consultations, clinician-to-clinician conversations, and
weekly implant meetings. We observed out-patients from
both General and Advanced Heart Failure clinics and
in-patients from Advanced Heart Failure wards, Intensive
Care Units, and Emergency Rooms.
We conducted IRB-approved interviews with a total
of 24 VAD clinical team members from 3 hospitals,
covering many different roles and statuses that participate
in decision-making. Interviewees were chosen according to
their level of involvement in VAD decision-making. Our
research collaborators at each hospital recommended an
initial set of interviewees. We then expanded this set by
recruiting others we observed to play important roles in the
decision-making.
We confirmed our findings with a VAD cardiologist, a
mid-level resident intern, and a VAD coordinator. Field
notes were recorded using pen and paper. Interviews were
audio-recorded and transcribed.
FINDINGS
Findings from this study are threefold. We first give an
overview of the decision process around a VAD implant,
including the participants and their work practices. Next,
we highlight the decision-makers' needs for decision support
given the social and environmental contexts where the
decisions get made. Finally, we identify three pathways of the
decision-making process. We report findings based on shared
observations among all studied sites along with quotes from
the interviews, unless noted otherwise.
Overview of the Decision Landscape
The clinical decision to implant a VAD involves
many clinician roles and unfolds across many clinical
contexts. Table 1 provides a high-level abstraction of the
decision-makers and contexts.
The clinical environment is extremely hierarchical; however,
it is also collaborative across status levels. While many roles
contribute to and execute on the implant decision, only a
small and stable coalition has a final say. We refer to these
ultimate decision-makers as implant physicians. These are
mostly cardiologists, though at some sites surgeons and/or
senior nurse practitioners also participate. The midlevels refer
to other clinical members of the VAD team and also the
non-clinical members who focus on insurance, social support,
and VAD-related care coordination. The consults include
other support services and physicians outside of the implant
team.
Implant physicians function at the top of the hierarchy,
leading major decision-related activities. They decide who
transitions from clinic to hospitalization, and who gets classified as a difficult case and discussed at an implant meeting.
At clinics, implant physicians monitor out-patients and
hospitalize them for a formal VAD evaluation. When an
out-patient gets hospitalized and becomes an in-patient, a
group of clinicians visit the patient every morning during
rounds: they visit each patient after a brief deliberation in
the hallway outside the patient’s room, where they establish
a care plan for the day. The attending cardiologist of the
week picks and presents the “difficult” cases during a weekly
implant meeting, where all available clinicians can voice their
opinions. The attending cardiologist and surgeon take away
a collective decision for each presented case. If approved
for implant, they pick a surgery date. They may stop the
procedure if a patient’s condition changes prior to surgery.
We refer to the cardiologists who provide general heart
failure services as well as the cardiologists that work at local
hospitals as general cardiologists. We refer to non-VAD
implant hospitals as local hospitals. Note that all patients
visit a general cardiologist and most have been admitted to a
local hospital before they get admitted to an Advanced Heart
Failure ward and get evaluated for a VAD.
Motivation to Use a DST
Implant physicians perceived no need for a DST. They
view the decision to implant a VAD as easy. As long
as patients have no definitive exclusion conditions, they will all get a VAD after failing on an identical, escalating sequence of less aggressive treatments. Under this strategy, clinicians thoughtfully order tests to detect red flags, and then deliberately and iteratively adjust daily medications to resolve the red flags. They spend much more time on daily care decisions than on the implant decision itself.

Table 1. Clinicians of a VAD implant team and the activities (clinic, ward round, weekly meeting, procedure) in which they participate. Participation in routine decision-making activities is unequal: some clinicians lead or always attend an activity, while others attend occasionally or only at a subset of hospital sites. Implant physicians: cardiologists and surgeons. Medical midlevels: nurse practitioners, fellows and interns, physician assistants, registered nurses, and VAD coordinators. Social midlevels: finance coordinators, social workers, and palliative care. Consults (on demand): pharmacists, nutritionists, and other physicians.
“I am the VAD guy. They came to me for a VAD.”
(Cardiologist)
“He was on a decent amount of diuretics. It’s not really
working. He doesn’t tolerate [Medicine A] or [Medicine
B]. We don’t know what else to do. Then that’s maybe a
time that patient gets admitted for evaluation of LVAD.”
(Nurse Practitioner)
Implant physicians expressed no desire for a prognostic DST. Their tried-and-true precedents work for the majority of their cases. For the grey cases, implant physicians did not
imagine that algorithmic predictions would help. While all
physicians knew about the availability of VAD risk models,
none used them in practice. Physicians’ rationale for not
using these models presented a number of barriers that a
prognostic DST would likely face.
1) The implant physicians doubted data applicability. They
did not think the patient history data used to derive the risk
factors matched their grey case patients. Several pointed out
pre-selection biases of the models. Others noted that even if the estimated outcome fits a cohort, it is not clear which side of the probability an individual patient would fall on.
“All these scores are not ideal.” (Cardiologist)
“I would say right now, there’s no data to guide that
decision.”(Cardiologist)
“I will still take the risk, and we’re going to push
like crazy to get him through. And the reason we
do that is because a lot of those people get through.”
(Cardiologist)
2) The implant physicians perceived no need for risk
prediction support, even for the grey cases. They were
confident in their prediction; they did not believe a more
precise prediction would be helpful because there is no
clear-cut threshold between risky and too-risky, especially in
cases where clinicians had to choose between "VAD them" or "let them die". Predicting is easy; taking action is difficult.
“I can tell you who will struggle. That is easy.
The question is who will recover from that struggle.”
(Surgeon)
3) They do not value computation in clinical practice. During the interviews, several clinicians described computer science as "all logic and data" while clinical care is not. They
repeatedly emphasized fuzziness in medical decisions and
gave many examples to prove that experience matters more
than computation.
“We ordered two tests. One test was telling you one
thing, the other is telling you another...With the years of
experience I have, I can still make a reasonably accurate
decision.” (Cardiologist)
“Often what happened was the test did not correspond
to what the patient was saying. The test looked better
than the patient. We need to do a decision how to deal
with that.” (Cardiologist)
These excerpts confirmed and explained physicians’ lack of
motivation for using DSTs. They felt no need for support
in implant decisions and they did not trust that a DST could
provide valuable support. In interviews, they expressed
appreciation of prognostic DSTs as an educational tool for
patients or as a presentation tool for clinician meetings, but
never as a prognostic tool for medical decision-making.
Hospital Environment and Computers
Computers were both used and perceived primarily as
a documentation tool related to legal and financial
accountability. When asked how they use the EMR, many
replied that they do “documentations” after work, often from
home. Many check the EMR each morning from bed or when
eating breakfast to see if anything changed during the night.
When asked how long they spend using a computer each day,
the typical response was, “Too long”.
Interaction with computers presents many challenges in a
ward environment (Figure 1) where most in-patient clinical
decisions happen. During the 4-to-6-hour rounds, clinicians
visit more than 30 patient rooms. They are constantly moving
and conversing, logging in and out of the EMR. Everything
they have with them must fit into their pockets because before
and after visiting each patient’s room they must wash their
hands, and sometimes put on and take off disposable gowns and
gloves as well [31].
Figure 1. A field illustration of an in-patient rounding scene. Hospital environments pose unique restrictions on computer use: clinicians are constantly on the move, frequently putting on and taking off protective clothing, and logging in and out of different public computers in hallways.
These barriers naturally stratified across decision makers and
computer users. For example, cardiologists give oral orders
during meetings with patients, and a midlevel will take notes
and enter them into EMR at a later time. A few midlevels
would carry a computer with them when rounding. They
often skipped the in-room patient conversations because of
the hassle hand washing presented. As a result, almost no
decision-making ever takes place in front of a computer.
Social Decision Support
When faced with difficult cases, we observed implant
physicians turning to their colleagues. The consultative
collaborations were frequent and clinicians generally found
them efficient and effective.
Implant physicians relied on teamwork. Within a shift
cycle, one attending cardiologist cares for all in-patients:
often more than 40 patients per week. Each patient gets
assigned a primary nurse and resident intern who prepare
information and monitor unfolding situations. The nurse
and intern handle all reporting and documentation, and they
prevent patients from falling through the cracks. Cardiologists
also consult surgeons for surgical risks, and pharmacists for
nuanced medication changes. For patients with other organ
complications, they turn to physicians with corresponding
expertise.
Attending cardiologists fluently integrate inputs from
colleagues through various routine and ad-hoc activities.
During rounds, for example, they request midlevel
follow-ups right after visiting a patient; they call other
cardiologists whenever a problem emerges; they always
consult pharmacists right after rounds and before ordering
medications. Unlike EMR use, these collaborations happen
when and where decisions get made. The implant physicians
trust this social decision support process; they often
immediately act on their colleagues’ input.
Figure 2. An abstraction of the VAD decision path and patient journey. Despite variations in timespan and different escalations and patterns, patients basically followed one of three paths to the VAD decision. The standard path (black) illustrates a systematic escalation of care and leads to relatively robust VAD decisions. Patients who followed the late referral path (blue) have missed the implant window before arrival at an implant center. The emergency room path (yellow) usually comes with incomplete clinical and social evidence, making VAD clinicians' decisions difficult.
“We are rounding or doing something else, so it’s much
easier for me to call a surgeon and say there is a patient
in this room...” (Cardiologist)
“I asked the surgeon, would this condition be too risky
to operate on. He said no. Then he will do it.”
(Cardiologist)
Interviewer: What do you do when feeling uncertain?
Cardiologist: I look through medical record one more
time, making sure I did not miss anything, and I ask my
colleagues to see him.
Social decision supports happen at formal meetings, as
well as through phone calls and during impromptu hallway
chats. While midlevels were sometimes left out of the informal inner-circle conversations, neither physicians nor midlevels expressed any concern that this led to poor decisions.
Three Paths to a VAD Decision
We created a service blueprint to map the VAD decision process from narratives collected through observations and interviews. Consolidating the customer journeys revealed three decision paths that could take a patient towards a VAD. Each anchors and shapes the decision-making situation distinctively. We use the scenarios below to illustrate the paths, noting potential breakdowns and discussing design opportunities for DSTs along each one. A VAD cardiologist, a mid-level resident intern,
and a VAD coordinator confirmed the abstraction of these
three paths.
1. Standard Path (Black Line in Figure 2)
A heart failure patient stays at home on oral medication. The patient visits a local cardiologist regularly. As heart failure progresses, the doctor requests more frequent clinic visits for closer monitoring, and occasional hospitalization for intravenous medications. "You might need a mechanical heart in the future, but that's way down the road," the doctor tells the patient.
As heart failure continues worsening, the local cardiologist
refers the patient to an implant hospital. An implant
cardiologist talks to the patient and family at the clinic,
getting to know the medical history and social conditions.
The implant physician orders more medications and tests,
monitoring the patient's trajectory. Midlevels educate the patient regarding consequences and cautions of a VAD implant or heart transplant: "Quit smoking. Otherwise it will hurt your transplant candidacy." "Try to lower your BMI to 32." "Call your nurse practitioner if these symptoms appear."

As heart failure worsens, the patient gets hospitalized at the
implant hospital. The same implant team starts a formal
evaluation. All members talk to and evaluate the patient
during the first couple of days after hospitalization. At the
weekly meeting, everyone voices their opinions and agrees
on a decision.
The standard path depicts a systematic process of therapy
escalation and a staged unfolding of decision considerations.
Once all medication therapies prove ineffective, clinicians
initiate a VAD workup.
When following this path, the implant team has time to
get to know the patient, including their medical and social
conditions. They have exhausted all less aggressive therapies.
They are able to come to a decision easily and quickly.
“I’ve had 9 months to get know him, to do tests on, to
follow... It’s hard to say what else I will need. I had a lot
of time to think through things.”(Cardiologist)
Occasionally, on this decision path, the choice can become
more challenging for social or financial reasons. Patients with
no insurance at the time of admission, or who have no one at
home who can take on caregiver duties, raise non-medical
problems with choosing a VAD. While such issues can be
overcome, they add uncertainties to implant outcomes and
often require urgent problem solving by midlevels.
2. Late Referral Path (Blue Line in Figure 2)
Clinicians at local hospitals keep trying different therapies
and delay making a referral to an implant facility. When the implant team first meets the patient, the patient is already too sick to survive open-heart surgery.
The late referral path documents a major breakdown in the
VAD decision process: missing the implant window. In
a consolidated decision process, every physician involved
carefully monitors patient progression, and escalates care
or initiates referral in a timely manner. They cannot rush
or skip any step because of reimbursement restrictions and
ethical considerations. Facing this string of judgments,
local cardiologists and primary care doctors who lack
experience, knowledge, or even awareness of VAD candidacy
evaluation might find referring within the implantable
window challenging.
“They (general cardiologists) go through a process: Do
we think it’s even reasonable to think about transplant
or LVAD? And then if they think it’s possible they’ll call
one of us. They’re the gatekeepers.”(Surgeon)
“I think (they refer the patients) when they burn out
all options, when they can’t keep someone out of the
hospital. Unfortunately most of the time they refer
people who are extremely late in their clinical conditions
so that the choice that we have to make is not an easy
one.”(Cardiologist)
Currently, most referrals and VAD education happen among
established and stable clinician connections. New referral
relationships seem to grow extremely slowly across social
connections.
“Some cardiologists have relatively stable referring
relationships with us.” (Cardiologist)
A nurse practitioner spoke of giving her card to a newly implanted patient: "After you get home, ask your local cardiologist to call me. I'll tell him how to take care of VAD patients."
3. Emergency Room Path (Yellow Line in Figure 2)
Implant cardiologists meet a patient for the first time in the emergency room. The patient is "crashing and burning". The physicians put him in an induced coma and predict that if they do not implant soon, the patient will die. Blood tests suggest a heavy alcohol and substance use history, which almost automatically excludes the patient from implantation. The team cannot confirm this issue with the patient or the family. The decision has to happen fast.
“These are uncomfortable decisions.” (Cardiologist)
Although there is no definite time requirement for making a
VAD decision, clinicians often find emergency room cases difficult because such cases come with incomplete clinical evidence and tight time constraints. Clinicians collectively
described patients “we have not met before” as difficult cases.
They find it difficult to make a quality medical judgment
based solely on a snapshot of the patient’s condition. For
patients following the standard path, implant physicians
always call the referring doctors in addition to checking the
EMR. During hospitalization, detailed patient dynamics are
carefully monitored. Clinicians rely on such information
to differentiate minor side effects from notable signals of
complications as well as to adjust and plan medication
strategies.
“I know his trajectory and tests from EMR. But I
don’t know what has been tried and how his body
responded...” (Cardiologist)
When an in-patient experiences a sudden and steep decline,
clinicians sometimes have to make a less-than-informed
decision to avoid missing the implant window.
“We’ve got a patient that came in here on breathing
tubes. Families said go ahead, and patient woke up on a
mechanical pump.” (Cardiologist)
“We decided to implant him. If he didn’t have
a caregiver, then come up with a caregiver.”
(Cardiologist)
Clinicians expressed the need to slowly prepare patients for
the decision. They need time to build up a connection with
a patient before they can truly understand the social situation
and discuss this sensitive and fuzzy end-of-life decision. For
urgent cases, the process becomes over-simplified; it gets
turned into a social support checklist.
“Her husband was there. They were going through
a divorce. She would never tell me that. Patients
are always a bit intimidated by doctors. But they will
tell the coordinators, who created a level of comfort,
so they open up and tell them everything. That’s an
important piece of information, because if we put a VAD
in the patient... who will take care of the patient?”
(Cardiologist)
DISCUSSION
DSTs, despite compelling evidence of their effectiveness in
lab studies, have mostly failed in clinical practice, failing
to improve patient outcomes [24]. Healthcare researchers
suggest that a lack of user-centered HCI considerations in the
design of these systems plays a critical role in these repeated
failures. Our field study helps to confirm this speculation.
We identified many barriers that could negatively impact the use and perceived value of a prognostic DST situated in VAD implant hospitals. We observed a perceived lack
of need when making decisions and lack of trust in the
ability of intelligent systems to help with difficult cases.
We also observed many patterns in work practices and
decision-making, as well as contextual barriers to computer
interaction in the clinical environment that might prevent or
deter clinicians from accessing a DST.
These observations forced us to reflect on the traditional
forms most prognostic DSTs take. Most require clinicians
to recognize when computational advice would be useful and
then make an explicit effort to access a DST [33]. In addition,
most imagine a single decision maker participating in making
the decision at a single time and place [41].
Our findings suggest clinicians in VAD implant hospitals
are not likely to use such DSTs. Below we highlight four
barriers that emerged from our observation of VAD implant
decision-making. We suspect most if not all of these barriers
will generalize to other DSTs intended to support high-risk,
clinical decisions. We then reframe the VAD decision
process, identifying times and places DST support could be
helpful, and new forms DSTs might take to better integrate
into a clinical environment.
Barriers to DST Adoption
Attitudinal Barrier
First and most importantly, clinicians we interacted with have
no desire to use a DST. Our findings confirm much of what previous work reported in other clinical contexts [15, 22,
25]. We advance this previous work with observations of
a new context, VAD implant hospitals, and with a detailed
discussion of need barriers, social barriers, informational
barriers, and environmental barriers.
Need Barrier
Clinicians perceived no need for data support because they
felt that they know how to effectively factor patient conditions
into clinical decisions. Their experience with current tools
like the VAD risk models has in no way given them confidence that DSTs or other intelligent systems can provide valuable new data. It is unlikely that they would explicitly
use any DST until they perceive a need and until they trust
these systems can deliver value. A better DST would have the
explicit goal of helping clinicians feel they are doing better
work, and not necessarily automating the part of work that
makes them feel like an expert.
Social Barrier
The lack of a real consideration for social context in the
design of DSTs can be a significant deficit. The hierarchical
but collaborative clinical culture poses a two-fold challenge
for DST use. First, decision makers (physicians) and
computer users (the midlevel) rarely overlap at the point of
decision-making. Second, physicians have great trust in their
social network of other physicians, who help them make
more difficult decisions. It seems unlikely they will move
towards computational support and away from social support
when things are difficult. There may be an opportunity for
systems that improve the process of getting and receiving
social support as a core feature of the DST.
Previous field work in clinics reported that only junior clinicians use DSTs during ward rounds, concluding that many DSTs targeted the wrong users: senior physicians who are unlikely to be in front of a computer [4]. Our
observation echoes this. We suspect this comes from both a deeply rooted hierarchical workplace culture and younger personnel's generally higher facility with computing and new technology. DST design needs
to integrate and even leverage this layer of social context
in order to place the information in front of real decision
makers. DSTs could be designed such that younger clinicians
become a rich information channel through which the DST
recommendations are passed to more senior decision makers.
Furthermore, DSTs have to demonstrate their value to the decision makers because all decision support, social or computational, happens at their demand.
Informational Barrier
We observed a mismatch between clinicians' information needs and DSTs' information flow. The commonly assumed function of a prognostic DST is to predict the likely trajectory a patient will take based on a list of quantitative measures. Our patient trajectory paths demonstrate that none of the major decision breakdowns happens in this prediction step.
At the input end, DSTs take in quantitative and explicit
inputs, while challenging decisions are often characterized
by unavailable or ambiguous medical and/or social evidence.
Clinicians are unlikely to use a tool that only does the easiest part of their job: telling them a textbook case is "textbook". Even if they approach the system when facing a difficult case, they might find it difficult to fill in some of the blanks, such as a diagnosis for an emergency-room-path patient. They might find that the information that most concerns them is not captured in the prediction, such as the patient's home life and social support, critical and difficult factors most often not captured in the medical history.
In terms of DST output, physicians need support for action
taking. Consultation between cardiologist and surgeon best
captures this: Is this case too risky to operate on? No?
Ok, then do it. A probabilistic prediction offers little guidance on whether to execute a therapy or not, or whether to do it now or to "wait and see". DSTs only predict the outcome of conducting a therapy now, with little sense of waiting and seeing.
Environmental Barrier
Finally, hospital environments pose unique restrictions to
computer use. Clinicians are constantly on the move. They
frequently log in and out on different public computers in
hallways. They need to put on and take off protective gloves
and clothing, and many must wash their hands well more than
60 times per day. Collectively, these raise many concerns that
suggest the current WIMP (windows, icons, menus, pointer)
style interactions might always struggle in this environment.
Re-framing the VAD Decision
Our findings demonstrate that a VAD decision is anchored by
many small, unfolding healthcare decisions, including:
Patient condition clarification: disease progression
monitoring, tests, diagnosis, social evaluation etc.;
Daily care decisions: stabilizing a patient to buy time for
decision-making, optimizing patient condition to reduce
treatment risks;
Care escalation decisions: adjusting clinic visit frequency,
hospitalizing, escalating treatment, etc.
The decision paths illustrate how failing any of these decisions can harm the VAD decision. None of the major breakdowns in the three decision paths involved a failure to factor patient condition into a prognosis, the decision that most prognostic DSTs aim to support. This revealed a real need
to reframe the scope of a VAD implant decision. A new way
to see this decision is as a consolidated VAD decision process
unfolding in stages, over time, and across healthcare facilities
including clinics, local hospitals, and implant hospitals. We
believe this alternative view of clinical decisions will inspire
new possibilities for more effective DST designs. While
much related work has assumed decision support must be
delivered to the time and place of decision-making [24], we
see great potential in DSTs supporting healthcare trajectories
as patients move down pathways towards major decisions.
Implications for the Design of Effective Clinical DSTs
We draw three implications from our findings to inform
and inspire DST designers in addressing adoption barriers
as well as in exploring new design possibilities: embrace
context, make the decision process a design material, and
blend human and machine intelligence.
Embracing the Richness of Clinical Context
A core goal of our study was to explore how a DST might
better fit in clinical workflow and social context. Our study
revealed rich details illustrating the context, which open up
new opportunities for DST designs to integrate and leverage.
DSTs should be integrated into EMRs, or they should at least automatically take in EMR data as inputs. They need to minimize manual data entry, given clinicians' frequent hand washing and the little time they spend in front of a computer.
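As one plausible route to such zero-entry integration, the sketch below assumes the EMR exposes an HL7 FHIR endpoint and has the DST pull its own input measures; the server URL and patient ID are hypothetical placeholders.

```python
# Sketch: a DST pulling its own inputs from an EMR through an HL7 FHIR
# API, so clinicians never type patient measures in by hand. The server
# URL and patient ID are hypothetical; the LOINC codes are standard.
import requests

FHIR_BASE = "https://emr.example-hospital.org/fhir"  # placeholder endpoint

def latest_observation(patient_id: str, loinc_code: str):
    """Fetch the most recent value of one lab result for a patient."""
    resp = requests.get(
        f"{FHIR_BASE}/Observation",
        params={"patient": patient_id, "code": loinc_code,
                "_sort": "-date", "_count": 1},
        timeout=10,
    )
    resp.raise_for_status()
    entries = resp.json().get("entry", [])
    if not entries:
        return None  # no such observation on record
    return entries[0]["resource"]["valueQuantity"]["value"]

# LOINC 2160-0: serum creatinine; LOINC 1751-7: serum albumin.
creatinine = latest_observation("pat-123", "2160-0")
albumin = latest_observation("pat-123", "1751-7")
```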
The fact that seasoned physicians do not perceive a need for
decision support suggests that DSTs have to make an effort
before they can reach and convince these decision-makers.
Designers should leverage the midlevels who more frequently
use EMRs as a channel for delivery of decision support.
Designers should approach this with some caution, as this
may disrupt the hierarchical decision structure that is in place,
but it could also positively elevate the role midlevels play
in decision-making, thus encouraging them to participate.
Central to this point is that establishing credibility and value
across all members of the implant decision team should be a
primary design goal.
In addition to midlevels, we also view the weekly implant
meetings as an opportunity. A DST that wants to
demonstrate value might automate the process of preparing
patient information for this meeting. By automating the
tedious information retrieval tasks, a DST could ease its
recommendations into the discussion materials that the whole
implant team reviews.
Decision Process As Design Material
Our findings illustrate that a healthcare trajectory, as well
as its decision process, is pushed forward by a string of
treatment escalations. We believe this new perspective on
decision-making inspires a new theme in the DST design
space: Decision support along the trajectory.
A VAD implant decision is anchored by a set of many smaller
decisions that clarify and optimize patient conditions. Our
illustrations of patients’ journeys indicate that a breakdown
at any of these steps can limit therapeutic choices.
We see a real opportunity for DSTs to provide much more
integrated support. Currently, DSTs take only one form: they support making the right or a good decision; they help make a diagnosis, a treatment choice, or a prognosis.
Our findings prompt DST designers to consider combining a
range of DST components with various forms and functions
in order to support the many small decisions that often lead up to a major clinical decision.
We observed that initiating timely care escalation has a
crucial and direct impact on VAD decision quality. The
late referral path offers a perfect example. DSTs might be
able to improve patient outcomes by supporting physicians
across healthcare facilities with smart adjustment of patient
clinic visit intervals, timely consideration of hospitalization,
referral, and a formal workup for VAD. All along this process
an intelligent system could be monitoring to make sure
a patient does not arrive at an implant facility after it is
already too late for an implant. We see these functions as
particularly valuable for local or primary care doctors who
do not specialize in VAD, but care for the majority of heart
failure patients. In a broader sense, DSTs could help surface
newer care options to primary care and local doctors who are
not current on advances in sub-specialties.
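As a sketch of what such upstream monitoring might look like, the toy watchdog below flags patients whose escalating care suggests a referral conversation is overdue. The trajectory fields and thresholds are our invented placeholders; real criteria would come from heart failure referral guidelines, not from this sketch.

```python
# Hypothetical late-referral watchdog for upstream (non-implant) settings:
# it watches a heart failure patient's care trajectory and flags when a
# referral to an implant center looks overdue. Fields and thresholds are
# invented placeholders; real criteria belong to clinical guidelines.
from dataclasses import dataclass

@dataclass
class HeartFailureTrajectory:
    hf_hospitalizations_past_year: int
    on_max_tolerated_oral_meds: bool
    needed_iv_inotropes: bool
    referred_to_implant_center: bool = False

def referral_flags(t: HeartFailureTrajectory) -> list:
    """Return human-readable prompts for the local cardiologist."""
    flags = []
    if t.referred_to_implant_center:
        return flags  # already in an implant center's hands
    if t.hf_hospitalizations_past_year >= 2:
        flags.append("Repeated HF hospitalizations: consider referral.")
    if t.on_max_tolerated_oral_meds and t.needed_iv_inotropes:
        flags.append("Care escalated beyond oral therapy: the implant "
                     "window may be closing; consider referral now.")
    return flags

for msg in referral_flags(HeartFailureTrajectory(
        hf_hospitalizations_past_year=3,
        on_max_tolerated_oral_meds=True,
        needed_iv_inotropes=True)):
    print(msg)
```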
A better clinical DST could prompt both patients and care
providers to resolve fixable implant exclusions. DSTs could
flag behavioral factors such as smoking, drinking, and a
patient’s BMI. In addition, it could also prompt upstream
social workers, as well as patients and their families to
address a lack of effective social support needed for post
VAD life and a lack of insurance that would cover this
expensive procedure. Solving these issues earlier in the
decision process reduces the likelihood that a patient misses the implant window due to an exclusion criterion that could have been resolved.
Developing DSTs to support the management of a panoramic
healthcare decision process marks a clear space for future
research in both data science and HCI. We imagine prior
research on care planning could be leveraged in support
of this. We strongly encourage data scientists to explore healthcare process data and to predict longer-term treatment outcomes. We also suggest HCI researchers investigate the
decision-making activities in local or primary care settings
and further examine this concept.
Blending Human and Machine Intelligence
Our findings highlight the attitudinal/informational barriers
DSTs face. Currently, clinicians have little motivation to use DSTs and many barriers stand in their way. In our study,
implant physicians expressed no need or desire to use any
DST or risk model because they find no difficulty in making
a VAD candidacy judgment. They also found DST data
support for the VAD decision inferior to their colleagues' input. This provokes us to critically reconsider the role of DSTs in decision-making tasks.
In our study, VAD physicians reported that they know how
to make a VAD decision. As trivial as it sounds, this is a missing perspective in the DST literature, which has instead focused on clinicians as a source of errors, biases, overconfidence, and communication breakdowns. This assumption behind DST development and design, though not immediately evident in interfaces, perhaps seeds the attitudinal barrier.
Many of our participants implied that makers of current
prognostic systems want to replace their expertise with
inhuman technology. Taking a lesson from early HCI work
in participatory design, we need to make technical advances
that skill workers instead of de-skilling them [14].
Clearly there are opportunities for designing new interactions
between DSTs and clinicians that work to integrate the
abilities of both agents. One straightforward solution is to
focus on more pliable forms of interactions, such as alerts
and reminders. To date, one of the only successfully adopted forms of DST has been the alert system. These systems require minimal
user effort to manage and impose little disturbance on those who can make a correct decision.
At a deeper level, the attitudinal and informational barriers result from a simple fact: clinicians will not use DSTs for tasks they feel they can do better than a machine.
There is a real need for new DST designs that better allocate human and machine intelligence to different components of
healthcare decisions. Clinicians might make better judgments than algorithms when synthesizing clinical evidence and social evaluations; the patient situations reported by our participants suggest that these difficult situations are where data-centered systems are least likely to offer helpful advice.
In other decision tasks, such as clarifying and monitoring
patient condition as well as managing care escalation,
machine intelligence can and should help. We see emerging
opportunities in these spaces for DSTs to add value. For
example, when facing an emergency-room-path patient with
sparse data available, clinicians seem the most likely to
benefit from the collective intelligence that is collated across
many implant centers. Such cases might be the best
opportunities for DSTs to gain trust from clinicians by
addressing an actual situation where a need for support might
be present. Clinicians might also value computational support in referral management. We observed that inter-site clinician collaborations are not nearly as frequent or as rich as those between colleagues. Information technology could potentially match inter-site consultations much more precisely, and catalyze new referral relationships much faster, than the current manual methods.
We encourage DST designers to deliberately blend different clinicians and decision support components across the decision space.
Central to this implication is to make clinicians feel they are
becoming better at their job, and to enhance clinical decision
quality by leveraging advantages of both human and machine
intelligence.
CONCLUSION
In this paper, we have presented a field study to understand how clinicians collaboratively decide whether and when to implant a VAD in a patient. We expanded previous work on DSTs
by providing a rare description of the who, where and
how of clinical decision making in practice and identifying
opportunities where DSTs can add value. These findings
challenge the commonly assumed form of DSTs and suggest
an alternative perspective on DSTs’ role in decision-making.
Given the great potential of machine intelligence for improving healthcare, we strongly encourage HCI researchers to join in filling the gap between DST technologies and clinical contexts. DST development teams should work closely with HCI researchers and practitioners in search of near-term, pragmatic solutions that overcome acceptance barriers.
There is also a real need for HCI researchers to investigate
clinical decision-making in various healthcare settings to
enable the design of real-world-ready DSTs.
ACKNOWLEDGEMENT
This work was supported by grants from NIH, National Heart,
Lung, and Blood Institute (NHLBI) # 1R01HL122639-01A1.
We thank the research collaborators at each hospital for their
help in preparing and coordinating the studies. We thank
the participants in this work for their dedication, time, and valuable input.
REFERENCES
1. Larry A Allen, Lynne W Stevenson, Kathleen L Grady,
Nathan E Goldstein, Daniel D Matlock, Robert M
Arnold, Nancy R Cook, G Michael Felker, Gary S
Francis, Paul J Hauptman, and others. 2012. Decision
making in advanced heart failure a scientific statement
from the American Heart Association. Circulation 125,
15 (2012), 1928–1952.
2. Ofra Amir, Barbara J Grosz, Krzysztof Z Gajos,
Sonja M Swenson, and Lee M Sanders. 2015. From
Care Plans to Care Coordination: Opportunities for
Computer Support of Teamwork in Complex
Healthcare. In Proceedings of the 33rd Annual ACM
Conference on Human Factors in Computing Systems.
ACM, 1419–1428.
3. Joan S Ash, Marc Berg, and Enrico Coiera. 2004. Some
unintended consequences of information technology in
health care: the nature of patient care information
system-related errors. Journal of the American Medical
Informatics Association 11, 2 (2004), 104–112.
4. Melissa T Baysari, Johanna I Westbrook, Katrina L
Richardson, and Richard O Day. 2011. The influence of
computerized decision support on prescribing during
ward-rounds: are the decision-makers targeted? Journal
of the American Medical Informatics Association 18, 6
(2011), 754–759.
5. Riccardo Bellazzi and Blaz Zupan. 2008. Predictive data
mining in clinical medicine: current issues and
guidelines. International journal of medical informatics
77, 2 (2008), 81–97.
6. Jozien Bensing. 2000. Bridging the gap.: The separate
worlds of evidence-based medicine and patient-centered
medicine. Patient education and counseling 39, 1
(2000), 17–25.
7. Raymond L Benza, Dave P Miller, Robyn J Barst,
David B Badesch, Adaani E Frost, and Michael D
McGoon. 2012. An evaluation of long-term survival
from time of diagnosis in pulmonary arterial
hypertension from the REVEAL Registry. CHEST
Journal 142, 2 (2012), 448–456.
8. Mary Jo Bitner, Amy L Ostrom, and Felicia N Morgan.
2007. Service Blueprinting: A Practical Technique for
Service Innovation. (2007).
9. Claus Bossen, Lotte Groth Jensen, and Flemming Witt.
2012. Medical secretaries’ care of records: the
cooperative work of a non-clinical group. In
Proceedings of the ACM 2012 conference on Computer
Supported Cooperative Work. ACM, 921–930.
10. Caryn Christensen, James R Larson, and others. 1993.
Collaborative medical decision making. Medical
Decision Making 13, 4 (1993), 339–346.
11. Maureen Coombs and Steven J Ersser. 2004. Medical
hegemony in decision-making–a barrier to
interdisciplinary working in intensive care? Journal of
advanced nursing 46, 3 (2004), 245–252.
12. Jennifer Cowger, Kartik Sundareswaran, Joseph G
Rogers, Soon J Park, Francis D Pagani, Geetha Bhat,
Brian Jaski, David J Farrar, and Mark S Slaughter. 2013.
Predicting survival in patients receiving continuous flow
left ventricular assist devices: the HeartMate II risk
score. Journal of the American College of Cardiology
61, 3 (2013), 313–321.
13. Srikant Devaraj, Sushil K Sharma, Dyan J Fausto, Sara
Viernes, and Hadi Kharrazi. 2014. Barriers and
Facilitators to Clinical Decision Support Systems
Adoption: A Systematic Review. Journal of Business
Administration Research 3, 2 (2014), p36.
14. Pelle Ehn. 1993. Scandinavian design: On participation
and skill. Participatory design: Principles and practices
(1993), 41–77.
15. Glyn Elwyn, Isabelle Scholl, Caroline Tietbohl, Mala
Mann, Adrian GK Edwards, Catharine Clay, France
Légaré, Trudy van der Weijden, Carmen L Lewis,
Richard M Wexler, and others. 2013. Many miles to go:
a systematic review of the implementation of patient
decision support interventions into routine clinical
practice. BMC medical informatics and decision making
13, Suppl 2 (2013), S14.
16. John W Ely, Mark L Graber, and Pat Croskerry. 2011.
Checklists to reduce diagnostic errors. Academic
Medicine 86, 3 (2011), 307–313.
17. Anthony Faiola, Preethi Srinivas, Yamini Karanam,
David Chartash, and Bradley Doebbeling. 2014.
VizCom: a novel workflow model for ICU clinical
decision support. In CHI’14 Extended Abstracts on
Human Factors in Computing Systems. ACM,
1705–1710.
18. David Feldman, Salpy V Pamboukian, Jeffrey J
Teuteberg, Emma Birks, Katherine Lietz, Stephanie A
Moore, Jeffrey A Morgan, Francisco Arabia, Mary E
Bauman, Hoger W Buchholz, and others. 2013. The
2013 International Society for Heart and Lung
Transplantation Guidelines for mechanical circulatory
support: executive summary. The Journal of Heart and
Lung Transplantation 32, 2 (2013), 157–187.
19. Centers for Medicare and Medicaid Services. 2010.
Decision Memo for Ventricular Assist Devices as
Destination Therapy (CAG-00119R2). (2010).
https://www.cms.gov/medicare-coverage-database/details/nca-decision-memo.aspx?NCAId=243&ver=9&NcaName=Ventricular+Assist+Devices+as+Destination+Therapy+(2nd+Recon)&bc=BEAAAAAAEAAA&&fromdb=true
20. Dominick L Frosch and Robert M Kaplan. 1999. Shared
decision making in clinical medicine: past research and
future directions. American journal of preventive
medicine 17, 4 (1999), 285–294.
21. Eileen Gambrill. 1999. Evidence-based practice: An
alternative to authority-based practice. Families in
Society: The Journal of Contemporary Social Services
80, 4 (1999), 341–350.
22. Karine Gravel, France Légaré, and Ian D Graham. 2006.
Barriers and facilitators to implementing shared
decision-making in clinical practice: a systematic review
of health professionals’ perceptions. Implement Sci 1, 1
(2006), 16.
23. MG Myriam Hunink, Milton C Weinstein, Eve
Wittenberg, Michael F Drummond, Joseph S Pliskin,
John B Wong, and Paul P Glasziou. 2014. Decision
making in health and medicine: integrating evidence
and values. Cambridge University Press.
24. Monique WM Jaspers, Marian Smeulers, Hester
Vermeulen, and Linda W Peute. 2011. Effects of clinical
decision-support systems on practitioner performance
and patient outcomes: a synthesis of high-quality
systematic review findings. Journal of the American
Medical Informatics Association 18, 3 (2011), 327–334.
25. Kensaku Kawamoto, Caitlin A Houlihan, E Andrew
Balas, and David F Lobach. 2005. Improving clinical
practice using clinical decision support systems: a
systematic review of trials to identify features critical to
success. BMJ 330, 7494 (2005), 765.
26. Gilad J Kuperman, Anne Bobb, Thomas H Payne,
Anthony J Avery, Tejal K Gandhi, Gerard Burns,
David C Classen, and David W Bates. 2007.
Medication-related clinical decision support in
computerized provider order entry systems: a review.
Journal of the American Medical Informatics
Association 14, 1 (2007), 29–40.
27. James R Larson, Caryn Christensen, Ann S Abbott, and
Timothy M Franz. 1996. Diagnosing groups: charting
the flow of information in medical decision-making
teams. Journal of personality and social psychology 71,
2 (1996), 315.
28. Stephen J Leslie, Mark Hartswood, Catrin Meurig,
Sinead P McKee, Roger Slack, Rob Procter, and
Martin A Denvir. 2006. Clinical decision support
software for management of chronic heart failure:
Development and evaluation. Computers in biology and
medicine 36, 5 (2006), 495–506.
29. Wayne C Levy, Dariush Mozaffarian, David T Linker,
Santosh C Sutradhar, Stefan D Anker, Anne B Cropp,
Inder Anand, Aldo Maggioni, Paul Burton, Mark D
Sullivan, and others. 2006. The seattle heart failure
model prediction of survival in heart failure. Circulation
113, 11 (2006), 1424–1433.
30. Gregory Makoul and Marla L Clayman. 2006. An
integrative model of shared decision making in medical
encounters. Patient education and counseling 60, 3
(2006), 301–312.
31. Yatin Mehta, Abhinav Gupta, Subhash Todi, SN Myatra,
DP Samaddar, Vijaya Patil, Pradip Kumar Bhattacharya,
and Suresh Ramasubban. 2014. Guidelines for
prevention of hospital acquired infections. Indian
journal of critical care medicine: peer-reviewed, official
publication of Indian Society of Critical Care Medicine
18, 3 (2014), 149.
32. Bill Moggridge. 2007. Designing interactions. Vol. 14.
33. Mark A Musen, Blackford Middleton, and Robert A
Greenes. 2014. Clinical decision-support systems. In
Biomedical informatics. Springer, 643–674.
34. Zahra Niazkhani, Habibollah Pirnejad, Marc Berg, and
Jos Aarts. 2009. The impact of computerized provider
order entry systems on inpatient clinical workflow: a
literature review. Journal of the American Medical
Informatics Association 16, 4 (2009), 539–549.
35. Annette M O'Connor, John E Wennberg, France Légaré,
Hilary A Llewellyn-Thomas, Benjamin W Moulton,
Karen R Sepucha, Andrea G Sodano, and Jaime S King.
2007. Toward the tipping point: decision aids and
informed patient choice. Health Affairs 26, 3 (2007),
716–725.
36. M Peleg and S Tu. 2006. Decision support, knowledge
representation and management in medicine. Methods of
Information in Medicine 45 (2006), 72–80.
37. Victoria A Shaffer, C Adam Probst, Edgar C Merkle,
Hal R Arkes, and Mitchell A Medow. 2013. Why do
patients derogate physicians who use a computer-based
diagnostic support system? Medical Decision Making
33, 1 (2013), 108–118.
38. Dean F Sittig, Adam Wright, Jerome A Osheroff,
Blackford Middleton, Jonathan M Teich, Joan S Ash,
Emily Campbell, and David W Bates. 2008. Grand
challenges in clinical decision support. Journal of
Biomedical Informatics 41 (2008), 387–392.
39. Alan R Tait, Terri Voepel-Lewis, Brian J
Zikmund-Fisher, and Angela Fagerlin. 2010. The effect
of format on parents’ understanding of the risks and
benefits of clinical research: a comparison between text,
tables, and graphics. Journal of health communication
15, 5 (2010), 487–501.
40. Svetlena Taneva, Sara Waxberg, Julian Goss, Peter
Rossos, Emily Nicholas, and Joseph Cafazzo. 2014. The
Meaning of Design in Healthcare: Industry, Academia,
Visual Design, Clinician, Patient and Hf Consultant
Perspectives. In Proceedings of the Extended Abstracts
of the 32Nd Annual ACM Conference on Human Factors
in Computing Systems (CHI EA ’14). ACM, New York,
NY, USA, 1099–1104. DOI:
http://dx.doi.org/10.1145/2559206.2579407
41. Jonathan M Teich, Pankaj R Merchia, Jennifer L
Schmiz, Gilad J Kuperman, Cynthia D Spurr, and
David W Bates. 2000. Effects of computerized physician
order entry on prescribing practices. Archives of internal
medicine 160, 18 (2000), 2741–2747.
42. Bruce A Thyer and Laura L Myers. 1999. On science,
antiscience, and the client’s right to effective treatment.
Social Work (1999), 501–504.
43. Danielle Timmermans, Bert Molewijk, Anne
Stiggelbout, and Job Kievit. 2004. Different formats for
communicating surgical risks to patients and the effect
on choice of treatment. Patient education and
counseling 54, 3 (2004), 255–263.
44. Visith Uy, Suepattra G May, Caroline Tietbohl, and
Dominick L Frosch. 2014. Barriers and facilitators to
routine distribution of patient decision support
interventions: a preliminary study in community-based
primary care settings. Health Expectations 17, 3 (2014),
353–364.
45. Annette L Valenta, Margaret M Browning, Timothy E
Weddle, Greer WP Stevenson, Andrew D Boyd, and
Denise M Hynes. 2010. Physician perceptions of clinical
reminders. In Proceedings of the 1st ACM International
Health Informatics Symposium. ACM, 710–717.
46. Robert L Wears and Marc Berg. 2005. Computer
technology and clinical work: still waiting for Godot.
JAMA 293, 10 (2005), 1261–1263.
47. Jeremy C Wyatt and Douglas G Altman. 1995.
Commentary: Prognostic models: clinically useful or
quickly forgotten? BMJ 311, 7019 (1995), 1539–1541.
48. Qian Yang, John Zimmerman, and Aaron Steinfeld.
2015. Review of Medical Decision Support Tools:
Emerging Opportunity for Interaction Design. In IASDR
2015 Interplay Proceedings.
... "To consider how an implant decision was reached across many clinicians roles and contexts" (Yang et al., 2016) "Authors co-created with these senior users to engage them in the design process of AI-enabled mobility service" (Li & Lu, 2021) 5 (83.3%) ...
... Five papers adopted service design approaches to consider how various groups of users and stakeholders interacted within an AI application Li & Lu, 2021;Ghunaim et al., 2022;Vetch et al., 2021;Yang et al., 2016). For example, Yang et al. (2016) mapped out how implant decisions were reached across many clinicians' roles and contexts. ...
... Five papers adopted service design approaches to consider how various groups of users and stakeholders interacted within an AI application Li & Lu, 2021;Ghunaim et al., 2022;Vetch et al., 2021;Yang et al., 2016). For example, Yang et al. (2016) mapped out how implant decisions were reached across many clinicians' roles and contexts. Similarly, Li and Lu (2021) created a service blueprint to map out how senior users, student drivers, and elderly center managers formed a collaborative network within an AI reservation platform. ...
Conference Paper
Full-text available
Designing AI products presents novel challenges that traditional design methods may be insufficient to tackle. With a growing shift from design of "products" to "services", service design is a promising approach to facing these new and unique challenges. However, little research has been done to understand how service design may contribute, or to identify how it might have been adopted and used in existing projects where AI solutions were designed. This research performed a scoping review on extant publications that highlighted two things: challenges faced by designers that perceived a role of service design, and how service design has been adopted in existing AI products design processes. The review findings revealed how service design can foster new collaborations, and how service blueprints and journey maps are being used in existing AI projects. We discuss future design and research opportunities for designers to utilize service design in designing AI applications.
... "To consider how an implant decision was reached across many clinicians roles and contexts" (Yang et al., 2016) "Authors co-created with these senior users to engage them in the design process of AI-enabled mobility service" (Li & Lu, 2021) 5 (83.3%) ...
... Five papers adopted service design approaches to consider how various groups of users and stakeholders interacted within an AI application Li & Lu, 2021;Ghunaim et al., 2022;Vetch et al., 2021;Yang et al., 2016). For example, Yang et al. (2016) mapped out how implant decisions were reached across many clinicians' roles and contexts. ...
... Five papers adopted service design approaches to consider how various groups of users and stakeholders interacted within an AI application Li & Lu, 2021;Ghunaim et al., 2022;Vetch et al., 2021;Yang et al., 2016). For example, Yang et al. (2016) mapped out how implant decisions were reached across many clinicians' roles and contexts. Similarly, Li and Lu (2021) created a service blueprint to map out how senior users, student drivers, and elderly center managers formed a collaborative network within an AI reservation platform. ...
... • 50% of participants thought they would use the AI-CDSS for all of their patients with depression, with an additional 40% stating they would use it for more complex or treatment-resistant patients. Yang et al. (2016) Investigate how clinicians make a heart pump implant decision with a focus on how to best integrate an AI-CDSS into their work process. ...
... As shown in Figure 2A, of the 20 studies, eleven were conducted in the United States (Stevens et al., 2012;Yang et al., 2016Yang et al., , 2019Cai et al., 2019a,b;Chiang et al., 2020;Kumar et al., 2020;Romero-Brufau et al., 2020;Sendak et al., 2020;Jacobs et al., 2021;Lee et al., 2021), two each in China (Jin et al., 2020;Wang et al., 2021), Canada (Benrimoh et al., 2021;Tanguay-Sela et al., 2022), and Thailand (Hoonlor et al., 2018;Beede et al., 2020), and one each in Spain (Caballero-Ruiz et al., 2017), United Kingdom (Abdulaal et al., 2021), and Austria (Jauk et al., 2021). The reviewed articles were published between 2012 and 2022. ...
... The application areas of AI-CDSS in the reviewed studies varied. As summarized in Table 3, four studies focused on general medicine (Chiang et al., 2020;Jin et al., 2020;Kumar et al., 2020;Wang et al., 2021), three studies each focused on diabetes (Caballero-Ruiz et al., 2017;Beede et al., 2020;Romero-Brufau et al., 2020) and depressive disorders (Benrimoh et al., 2021;Jacobs et al., 2021;Tanguay-Sela et al., 2022), while two studies each focused on cancer diagnosis (Cai et al., 2019a,b) and heart pump implant (Yang et al., 2016(Yang et al., , 2019. The remaining studies focused on intensive care unit (ICU) monitoring (Stevens et al., 2012), sepsis (Sendak et al., 2020), rehabilitation (Lee et al., 2021), delirium (Jauk et al., 2021), snake envenomation (Hoonlor et al., 2018), and COVID-19 (Abdulaal et al., 2021). ...
Article
Full-text available
Introduction Artificial intelligence (AI) technologies are increasingly applied to empower clinical decision support systems (CDSS), providing patient-specific recommendations to improve clinical work. Equally important to technical advancement is human, social, and contextual factors that impact the successful implementation and user adoption of AI-empowered CDSS (AI-CDSS). With the growing interest in human-centered design and evaluation of such tools, it is critical to synthesize the knowledge and experiences reported in prior work and shed light on future work. Methods Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, we conducted a systematic review to gain an in-depth understanding of how AI-empowered CDSS was used, designed, and evaluated, and how clinician users perceived such systems. We performed literature search in five databases for articles published between the years 2011 and 2022. A total of 19874 articles were retrieved and screened, with 20 articles included for in-depth analysis. Results The reviewed studies assessed different aspects of AI-CDSS, including effectiveness (e.g., improved patient evaluation and work efficiency), user needs (e.g., informational and technological needs), user experience (e.g., satisfaction, trust, usability, workload, and understandability), and other dimensions (e.g., the impact of AI-CDSS on workflow and patient-provider relationship). Despite the promising nature of AI-CDSS, our findings highlighted six major challenges of implementing such systems, including technical limitation, workflow misalignment, attitudinal barriers, informational barriers, usability issues, and environmental barriers. These sociotechnical challenges prevent the effective use of AI-based CDSS interventions in clinical settings. Discussion Our study highlights the paucity of studies examining the user needs, perceptions, and experiences of AI-CDSS. Based on the findings, we discuss design implications and future research directions.
... Wang et al. conclude that while the AI system might be technically competent, it is designed to support a 'textbook' workow rather than clinical reality. Similarly, in a study on CDSS in the heart pump implant decision process, Yang et al. found that current decision support systems lack both clinician's trust and perceived need [70]. The authors point to the CDSS's sole focus on the nal decision making point, whereas decision support could prove more benecial when presented across the patients' entire healthcare trajectory. ...
... This points to a mismatch between clinicians' mental model and expectations of the AI's capabilities [62]. Similar observations were made by Yang et al. in their investigation of decision-making practices in the context of heart pump implants [70]. Here, the AI's quantitative predictions did not align with the information needs of the clinicians. ...
... In addition to the technological limitations highlighted by our participants, our results further point to the tension that may arise between the need for respecting clinicians' expertise and the need for mitigating cognitive biases in CDSS. Prior work on AI decision support has shown that clinical experts may choose not to use such support systems due to a lack of perceived need [66,70], especially among senior clinicians [61,70]. In contrast, enabling the clinicians to direct the CDSS rather than merely receiving its recommendations has improved clinicians' acceptance and eciency [9,26,26]. ...
Conference Paper
Full-text available
Clinical needs and technological advances have resulted in increased use of Artificial Intelligence (AI) in clinical decision support. However, such support can introduce new and amplify existing cognitive biases. Through contextual inquiry and interviews, we set out to understand the use of an existing AI support system by ophthalmologists. We identified concerns regarding anchoring bias and a misunderstanding of the AI’s capabilities. Following, we evaluated clinicians’ perceptions of three bias mitigation strategies as integrated into a mockup of their existing decision support system. While clinicians recognised the danger of anchoring bias, we identified a concern around the negative effect of bias mitigation on procedure time. Our participants were divided in their expectations of any positive impact on diagnostic accuracy, stemming from deviating levels of trust and reliance on the decision support. Our results provide insights into integrating bias mitigation in the clinical domain amidst a growing dependency on AI support systems.
... We conducted two ideation workshops to generate AI concepts. A major challenge for AI innovation in healthcare is ensuring clinician acceptance [40,92,95]. Thus, our goal was to produce ideas that are feasible and clinically relevant. ...
Article
Hypotension during perioperative care, if undetected or uncontrolled, can lead to serious clinical complications. Predictive machine learning models, based on routinely collected EHR data, offer potential for early warning of hypotension to enable proactive clinical intervention. However, while research has demonstrated the feasibility of such machine learning models, little effort is made to ground their formulation and development in socio-technical context of perioperative care work. To address this, we present a study of collaborative work practices of clinical teams during and after surgery with specific emphasis on the organisation of hypotension management. The findings highlight where predictive insights could be usefully deployed to reconfigure care and facilitate more proactive management of hypotension. We further explore how the socio-technical insights help define key parameters of machine learning prediction tasks to align with the demands of collaborative clinical practice. We discuss more general implications for the design of predictive machine learning in hospital care.
Article
Explainable AI (XAI) systems are sociotechnical in nature; thus, they are subject to the sociotechnical gap-divide between the technical affordances and the social needs. However, charting this gap is challenging. In the context of XAI, we argue that charting the gap improves our problem understanding, which can reflexively provide actionable insights to improve explainability. Utilizing two case studies in distinct domains, we empirically derive a framework that facilitates systematic charting of the sociotechnical gap by connecting AI guidelines in the context of XAI and elucidating how to use them to address the gap. We apply the framework to a third case in a new domain, showcasing its affordances. Finally, we discuss conceptual implications of the framework, share practical considerations in its operationalization, and offer guidance on transferring it to new contexts. By making conceptual and practical contributions to understanding the sociotechnical gap in XAI, the framework expands the XAI design space.
Article
In many real world contexts, successful human-AI collaboration requires humans to productively integrate complementary sources of information into AI-informed decisions. However, in practice human decision-makers often lack understanding of what information an AI model has access to, in relation to themselves. There are few available guidelines regarding how to effectively communicate aboutunobservables: features that may influence the outcome, but which are unavailable to the model. In this work, we conducted an online experiment to understand whether and how explicitly communicating potentially relevant unobservables influences how people integrate model outputs and unobservables when making predictions. Our findings indicate that presenting prompts about unobservables can change how humans integrate model outputs and unobservables, but do not necessarily lead to improved performance. Furthermore, the impacts of these prompts can vary depending on decision-makers' prior domain expertise. We conclude by discussing implications for future research and design of AI-based decision support tools.
Conference Paper
Full-text available
Over the last two decades great advances have been made in medical decision support tools (DSTs). Interestingly, as these systems move out of labs and into clinical practice, many fail due to a lack of interaction design and the considerations for a user's context that this discipline brings. Today design researchers and practitioners are beginning to be asked to collaborate on the design of these intelligent tools; however, few design patterns or design research exists to guide this work. To better understand the state of the art we conducted literature review. Our review of both technical and healthcare research documents the goals, forms, and audiences of these systems. These can function as starting places for design. We further identified two major opportunities for design research to impact this emerging area: 1) There is a great need for human-centered design research on the decision-making contexts. 2) There is a great need for research that envisions new roles for DSTs that enhance both clinician work practice and clinician-patient relationships. We strongly encourage interaction design practitioners and researchers to get involved in the design of these systems that promise to improve healthcare.
Article
Full-text available
The objective of the study was to identify potential barriers and facilitators to improve clinical practice using computer-based Clinical Decision Support System (CDSS). Studies published since 2000 were found using PubMed database, PsychInfo, CINAHL, EBSCO host database, and Google scholar. Twenty-six relevant publications were examined. Thirty-five unique barriers and twenty-five unique facilitators were identified in the literature as important determinants of CDSS’s adoption in clinical practice. The list of barriers and facilitators collected from each study were then organized under the four dimensions of The Unified Theory of Acceptance and Use of Technology (UTAUT) model: performance expectancy, effort expectancy, social influence, and facilitating conditions. Some of the important barriers to CDSS use include; lack of time or time constraints, economic constraints (e.g., finance and resources), lack of knowledge of system or content, reluctance to use system in front of patients, obscure workflow issues, less authenticity or reliability of information, lack of agreement with the system, and physician or user attitude toward the system. The study contributes immensely to the literature by identifying the important barriers and facilitators of CDSS.
Conference Paper
Full-text available
Children with complex health conditions require care from a large, diverse team of caregivers that includes multiple types of medical professionals, parents and community support organizations. Coordination of their outpatient care, essential for good outcomes, presents major challenges. Extensive healthcare research has shown that the use of integrated, team-based care plans improves care coordination, but such plans are rarely deployed in practice. This paper reports on a study of care teams treating children with complex conditions at a major university tertiary care center. This study investigated barriers to plan implementation and resultant care coordination problems. It revealed the complex nature of teamwork in complex care, which poses challenges to team coordination that extend beyond those identified in prior work and handled by existing coordination systems. The paper builds on a computational teamwork theory to identify opportunities for technology to support increased plan-based complex-care coordination and to propose design approaches for systems that enable and enhance such coordination.
Book
Decision making in health care involves consideration of a complex set of diagnostic, therapeutic and prognostic uncertainties. Medical therapies have side effects, surgical interventions may lead to complications, and diagnostic tests can produce misleading results. Furthermore, patient values and service costs must be considered. Decisions in clinical and health policy require careful weighing of risks and benefits and are commonly a trade-off of competing objectives: maximizing quality of life vs maximizing life expectancy vs minimizing the resources required. This text takes a proactive, systematic and rational approach to medical decision making. It covers decision trees, Bayesian revision, receiver operating characteristic curves, and cost-effectiveness analysis, as well as advanced topics such as Markov models, microsimulation, probabilistic sensitivity analysis and value of information analysis. It provides an essential resource for trainees and researchers involved in medical decision modelling, evidence-based medicine, clinical epidemiology, comparative effectiveness, public health, health economics, and health technology assessment.
Chapter
In Scandinavia we have for two decades been concerned with participation and skill in the design and use of computer-based systems. Collaboration between researchers and trade unions on this theme, starting with the pioneering work of Kristen Nygaard and the Norwegian Metal Workers’ Union, and including leading projects like DEMOS and UTOPIA, has been based on a strong commitment to the idea of industrial democracy. This kind of politically significant, interdisciplinary, and action-oriented research on resources and control in the processes of design and use has contributed to what is often viewed abroad as a distinctively Scandinavian approach to systems design. This Scandinavian approach might be called a work-oriented design approach. Democratic participation and skill enhancement, and not only productivity and product quality, are themselves considered objective of design. [Based on the two research projects, DEMOS and UTOPIA, I have elaborated this approach in detail in Work-Oriented Design of Computer Artifacts (1989). This paper is based on that work.] Two important features of participatory design shape its trajectory as a design strategy. The political one is obvious. Participatory design raises questions of democracy, power, and control in the workplace. In this sense it is a deeply controversial issue, especially from a management point of view. The other major feature is technical—its promise that the participation of skilled users in the design process can contribute importantly to successful design and high-quality products. Some experiences, perhaps most developed in Scandinavia, support this prediction and contribute to the growing interest in participatory design in the United States and other countries; by contrast, “expert” design strategies have too often turned out to be failures in terms of the usability of the resulting systems. These two features together suggest that there should be a strong link between the skill and product quality aspect of user participation and the democracy and control aspect, or else participatory design will be a deeply controversial issue from the point of view of the employees and trade unions. The trade-union-oriented democracy aspect of skill and participation in design is discussed in the first part of the chapter.
Article
After reading this chapter, you should know the answers to these questions:
Article
There is no agreement on “one way of knowing” in social work, and it is certainly not scientific reasoning that is accepted, as can be seen by examining the literature in social work on “different ways of knowing.”
Article
Healthcare technologies have a reputation for being overly-complex, difficult-to-use, uninspired, and unusable. There is a significant opportunity to incorporate human-centered design into healthcare through the synthesis of deep healthcare domain expertise, visual and industrial design, human factors theory and practice, and an understanding of the patient experience. In practice, the need for such synthesis brings together professionals who often have distinct understanding of the meaning of design in healthcare and the ways in which design can solve problems in healthcare. As such, each professional encounters distinct challenges in trying to realize good design in healthcare. This panel will bring together the contrasting perspectives of industry, academia, visual design, clinicians, patients, and human factors consultants on the role of user interface and technology design in healthcare. Each representative speaker will share his/her experience, lessons learned, and thoughts on the role and meaning of design in healthcare. The audience will then be engaged in a discussion that aims to bring a common understanding, inclusive of all perspectives.