Viewpoint paper

Health information technology: fallacies and sober realities

Ben-Tzion Karsh,1 Matthew B Weinger,2,3 Patricia A Abbott,4,5 Robert L Wears6,7

1 Department of Industrial and Systems Engineering and Systems Engineering Initiative for Patient Safety, University of Wisconsin, Madison, Wisconsin, USA
2 Center for Perioperative Research in Quality, Vanderbilt University School of Medicine, Nashville, Tennessee, USA
3 Geriatrics Research, Education, and Clinical Center, VA Tennessee Valley Healthcare System, Nashville, Tennessee, USA
4 Division of Health Sciences Informatics, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
5 Department of Health Systems and Outcomes, Johns Hopkins University School of Nursing, Baltimore, Maryland, USA
6 Department of Emergency Medicine, University of Florida, Jacksonville, Florida, USA
7 Clinical Safety Research Unit, Imperial College London, London, UK

Correspondence to Ben-Tzion Karsh, Department of Industrial and Systems Engineering and Systems Engineering Initiative for Patient Safety, University of Wisconsin, 1513 University Avenue, Room 3218, Madison, WI 53706, USA; bkarsh@engr.wisc.edu

This paper stemmed from the authors’ participation as external resources in a workshop sponsored by the Agency for Healthcare Research and Quality (AHRQ) entitled ‘Wicked Problems in Cutting Edge Computer-Based Decision Support’ held on March 26-27, 2009 at the Center for Better Health, Vanderbilt University, Nashville, Tennessee.

Received 30 April 2010; Accepted 1 September 2010

J Am Med Inform Assoc 2010;17:617-623. doi:10.1136/jamia.2010.005637
ABSTRACT
Current research suggests that the rate of adoption of health information technology (HIT) is low, and that HIT may not have the touted beneficial effects on quality of care or costs. The twin issues of the failure of HIT adoption and of HIT efficacy stem primarily from a series of fallacies about HIT. We discuss 12 HIT fallacies and their implications for design and implementation. These fallacies must be understood and addressed for HIT to yield better results. Foundational cognitive and human factors engineering research and development are essential to better inform HIT development, deployment, and use.
INTRODUCTION
Current research demonstrates that health information technology (HIT) can improve patient safety and healthcare quality, in certain circumstances.1-6 At the same time, other research shows that HIT adoption rates are low,7-10 and that HIT may not reliably improve care quality11 12 or reduce costs.13 A recent National Research Council report14 provided a hypothesis to explain these observations:

...current efforts aimed at the nationwide deployment of health care IT will not be sufficient to achieve the vision of 21st-century health care, and may even set back the cause if these efforts continue wholly without change from their present course. Specifically, success in this regard will require greater emphasis on providing cognitive support for health care providers and for patients and family caregivers... This point is the central conclusion of this report.

This is a stunning conclusion, especially in light of the new Meaningful Use rules.15 Yet, it is consistent with evidence of HIT failures and misuses.16-20 In this article, we argue that the twin issues of the failure of HIT adoption and of HIT efficacy can be understood by examining a series of misguided beliefs about HIT. The implications of these fallacies for HIT design and implementation need to be acknowledged and addressed for HIT use to attain its predicted benefits.
THE ‘RISK FREE HIT’ FALLACY
Many designers and policymakers believe that the risks of HIT are minor and easily manageable. However, because HIT is designed, built, and implemented by humans, it will invariably have bugs and latent failure modes.21 22 The deployment of HIT in high-pressure environments with critically ill patients poses significant risk.17-19 Fallible humans have learned to build generally reliable complex physical systems (eg, bridges, buildings, cars), but it took more than a century to understand and mitigate the myriad hazards of these systems. In contrast, we cannot yet design and deploy complex software systems that are on time, within budget, meet the specified requirements, satisfy their users, are reliable (bug free and available), maintainable, and safe.23 24 Edsger Dijkstra, a recognized leader in software engineering, lamented that:

...most of our systems are much more complicated than can be considered healthy, and are too messy and chaotic to be used in comfort and confidence. The average customer of the computing industry has been served so poorly that he expects his system to crash all the time, and we witness a massive worldwide distribution of bug-ridden software for which we should be deeply ashamed.23
There are two additional reasons why HIT failures are particularly problematic. First, they are often opaque to users and system managers alike; it can be very challenging to understand exactly how a particular failure occurred. Envisioning paths to IT failure in advance, so they might be forestalled, is particularly difficult.16 25 26 Second, HIT systems tend to have a magnifying property, wherein one exchanges a large number of small failures for a small number of large, potentially catastrophic failures. For example, instead of one pharmacist making a single transcription error that affects one patient, when a medication dispensing robot has a software failure it can produce thousands of errors an hour. Moreover, as different HIT systems become coupled (eg, when a CPOE system is directly linked to a pharmacy information system and that to an electronic medication administration record), errors early in the medication process can more quickly pass unscrutinized to the patient.

Currently, there are no regulatory requirements to evaluate HIT system safety even though these systems are known to directly affect patient care in both positive and negative ways.2 17 18 27-34 Thus, current HIT may:
- Have been developed from erroneous or incomplete design specifications;
- Be dependent on unreliable hardware or software platforms;
- Have programming errors and bugs;
- Work well in one context or organization but be unsafe or even fail in another;
- Change how clinicians do their daily work, thus introducing new potential failure modes.16 18 28 35-39
Decades of experience with IT in other hazardous industries has emphasized the importance of these problems40 41 and led to the development of methods for safety critical computing.42 43 Healthcare has been slow to embrace safety critical computing,44 and HIT software has commonly been identified as being among the least reliable.45 A recent National Academy of Science report concluded that IT should be considered ‘guilty until proven innocent’, and that the burden of proof should fall on the vendor to demonstrate to an independent certifier or regulator that a system is safe, not on the customer to prove that it is not.41 No other hazardous industry deploys safety critical IT without some form of independent hazard analysis (eg, a safety case); it is unwise for healthcare to continue to do so.
THE ‘HIT IS NOT A DEVICE’ FALLACY
An off-shoot of the risk free HIT fallacy is the belief that HIT can be created and deployed without the same level of oversight as medical devices. Currently, an FDA-approved drug (eg, an opioid) is delivered by an FDA-approved device (infusion pump) to a patient in pain. But none of the HIT that mediates and influences all of the critical steps between the clinician’s determination that pain relief is needed and the start of the opioid infusion (eg, order entry with decision support, pharmacy checking and dispensing systems, robotic medication delivery and dispensing systems, and bedside medication management systems) is subject to any independent assessment of its safety or fitness for purpose. The complexity of HIT systems and the risk of potentiating serious error is sufficiently significant to demand effective regulatory oversight.46 The Office of the National Coordinator has recognized this risk, and held hearings on HIT safety on 25 February 2010,47 but it is unlikely that any effective process of independent review of safety will be in place in the time frame produced by the HITECH Act.

The issue of regulation of HIT for safety and effectiveness is a difficult and contentious one. The need for some form of independent evaluation of HIT safety prior to market introduction has gained recent attention.47 Much has changed since the 1997 publication of Miller and Gardner’s consensus recommendations,48 most notably the recent, relatively rapid and semi-compulsory implementation of complex HIT systems under the HITECH Act in organizations without much prior experience in rolling out or managing such products. Many in the HIT industry and in academia argue that the institution of FDA-type regulation would be counterproductive, by, for example, slowing innovation, freezing improvement of current systems with risky configurations, and freezing out small competitors. However, the current approach can no longer be justified. A passive monitoring approach, as currently suggested by the Office of the National Coordinator,49 seems likely to be both expensive and ultimately ineffective. We believe that a proactive approach is required.

An alternative to FDA-type regulation would be a pre-market requirement for a rigorous independent safety assessment. This approach has shown some promise in proactively identifying and mitigating risks without unduly degrading innovation and necessary product evolution. Such an approach has been endorsed by international standards organizations50 51 and is beginning to be applied in Europe.52
THE ‘LEARNED INTERMEDIARY’ FALLACY
One of the drivers of the risk free HIT fallacy is the learned intermediary doctrine, the idea that HIT risks are negligible because the human alone ultimately makes the decision. It is believed that because a human operator monitors and must confirm HIT recommendations or actions, humans can be depended on to catch any system-induced hazards.53 Paradoxically, this fallacy stands a fundamental argument in favor of HIT on its head (ie, that HIT will help reduce human errors but we will rely on the human to catch the HIT errors). Moreover, this fallacy assumes that people are unaffected by the technology they use. However, it is well established that the way in which problems, information, or recommendations are presented to users by technology reframes them in ways that neither the users nor the designer may appreciate.39 54 Data presentation format will affect what the user perceives and believes to be salient (or not) and therefore affects subsequent decisions and actions.55 56 The clinician does not act or decide in a vacuum, but is necessarily influenced by the HIT. Users are inevitably and often unknowingly influenced by what many HIT designers might consider trivial design details: placement (information availability), font size (salience), information similarity and representativeness, perceived credibility (or authority), etc.57-59 For example, changing the order of medication options on a drop-down pick list will influence clinicians’ ordering behavior.

Empirical studies have demonstrated that people will accept worse solutions from an external aid than they could have conceived of, unaided.60 Because information presentation profoundly affects user behavior and decision-making, it is critical that information displays be thoughtfully designed and rigorously tested to ensure they yield the best possible performance outcomes. These evaluations must consider the full complexity of the context in which the system is to be used.61
THE ‘BAD APPLE’ FALLACY
It is widely believed that many healthcare problems are due primarily to human (especially clinician and middle manager) shortcomings. Thus, computerization is proposed as a way to make healthcare processes safer and more efficient. Further, when HIT is not used or does not perform as planned, designers and administrators ask, ‘Why won’t those [uncooperative, error-prone] clinicians use the system?’ or ‘Why are they resisting?’ The fingers are pointed squarely at front-line users.

However, human factors engineers, social psychologists, and patient safety researchers long ago debunked the bad apple theory of human error, replacing it with the more accurate and useful systems view of error,21 62 63 which is supported by strong theory and evidence from safety science, industrial and systems engineering, and social psychology.62 64 65 Thus, bad outcomes are the result of the interactions among systems components including the people, tools and technologies, physical environment, workplace culture, and the organizational, state, and federal policies which govern work. Poor HIT outcomes do not result from isolated acts of individuals, but from interactions of multiple latent and triggering factors in a field of practice.18 20 65 66
THE ‘USE EQUALS SUCCESS’ FALLACY
Equating HIT usage with design success can be misleading and may promulgate inappropriate policies to improve use.67 Humans are the most flexible and adaptable elements in any system, and will find ways to attain their goals often despite the technology. However, the effort and resilience of front-line workers is a finite resource, and its consumption by workarounds to make HIT work because its use is required reduces the overall ability of the system to cope with unexpected conditions and failures. Thus, the fact that people can use a technology to accomplish their work is not, in itself, an endorsement of that technology. Conversely, a lack of use is not evidence of a flawed system: clinicians may ignore features like reminders for legitimate reasons. Healthcare is a complex sociotechnical system where simple metrics can mislead because they do not adequately consider the context of human decisions at the time they are made. Thus, the promulgation of meaningful use may lead to undesirable consequences if such use is not contextually grounded and tied to improved efficiency, learning, ease of use, task and information flow, cognitive load, situation awareness, clinician and patient satisfaction, reduced errors, and faster error recovery.
THE ‘MESSY DESK’ FALLACY
Much of the motivation for HIT stems from the belief that something is fundamentally wrong with existing clinical work, that it is too messy and disorganized, and that it needs to be rationalized into something that is nice, neat, and linear.68 However, as a complex sociotechnical system, many parts of healthcare delivery are messy and non-linear. That is not to say that waste does not exist, nor does it mean that standardization is unwise. There exist processes within clinical care that require linearity and benefit from standardization. But, in many clinical settings, multiple patients are managed simultaneously, with clinicians repeatedly switching among sets of goals and tasks, continuously reprioritizing and replanning their work.69 70 In such settings, patient care is less an algorithmic sequence of choices and actions than an iterative process of sensing, probing, and reformulating intermediate goals negotiated among clinicians, patients, caregivers, and the clinical circumstances. Because of time constraints, many care goals, and the tasks or decisions needed to pursue those goals, are intentionally deferred until a future opportunity.

However, HIT designs often assume a rationalized model of healthcare delivery. Templates walk clinicians through a prescribed set of questions even though the questions and/or their order may not be relevant for a particular patient at that time. Similarly, some clinical decision support (CDS) rules force clinicians to stop and respond to the CDS, interrupting their work and substituting the designer’s judgment for that of the clinician.71 This mismatch between the reality of clinical work and how it is rationalized by HIT leads clinicians to perceive that these systems are disruptive and inefficient. Accommodating the non-linearity of healthcare delivery will require new paradigms for effective HIT design. Consistent and appropriate data availability and quick access may need to supplant integration into workflow as a key design goal.
THE ‘FATHER KNOWS BEST’ FALLACY
While HIT has been sold as a solution to healthcare’s quality and efficiency problems, most of the benefits of current HIT systems accrue to entities upstream from direct patient care processes72: hospital administrators, quality improvement professionals, payors, regulators, and the government.73 In contrast, those who suffer the costs of poorly designed and inefficient HIT are front-line providers, clerks, and patients. Thus, most HIT has been designed to meet the needs of people who do not have to enter, interact with, or manage the primary (ie, raw) data. This mismatch between who benefits and who pays leads to incomplete or inaccurate data entry (garbage in, garbage out), inefficiency, workarounds, and poor adoption.74 This fundamental principle has been expressed as Grudin’s Law, one form of which is: ‘When those who benefit from a technology are not those who do the work, then the technology is likely to fail or be subverted.’75

As noted by Frisse,76 HIT that focuses too much on the administrative aspects of healthcare (eg, complete and accurate documentation to meet authorization rules or to improve revenue) rather than on care processes and outcomes (ie, the actual quality of disease management) will result in a missed opportunity to truly transform care. Healthcare does not exist to create documentation or generate revenue; it exists to promote good health, prevent illness, and help the sick and injured.

Efforts currently underway to align incentives to enhance adoption of electronic health records (EHRs) are acknowledged and warranted. However, the definitions of meaningful use and certified systems, and how these milestones are to be measured, must be considered carefully. Otherwise, unintended consequences may occur, such as physicians and hospitals investing in HIT to the exclusion of what might actually be more effective local strategies (eg, use of nurse case managers or process redesign).
THE ‘FIELD OF DREAMS’ FALLACY AND THE ‘SIT-STAY’ FALLACY
The field of dreams fallacy suggests that if you provide HIT to clinicians, they will gladly use it, and use it as the designer intended. This fallacy is further reinforced by the belief that clinicians should rely on HIT because computers are, after all, smarter than humans (the sit-stay fallacy, explained below). The field of dreams fallacy is well known in other domains, where it is also referred to as the designer fallacy or designer-centered design.77 Here, if a system’s designer thinks the system works, then any evidence to the contrary must mean the users are not using it appropriately. In fact, designers sometimes design for a world that does not actually exist78 (also called the imagined world fallacy). For healthcare, the imagined world may be a linear, orderly work process used by every clinician.

Computers cannot be described as being inherently smart. Instead, computers are very good at repeatedly doing whatever they were told to do, just like a well-trained animal (ie, sit-stay). Computers implement human-derived rules with a degree of consistency much higher than human workers. This does not make them intelligent. Instead, computers are much more likely than humans to perform their clever and complex tricks at inappropriate times. Moreover, a computer, in its consistency, can perpetuate errors on a very large scale. People, on the other hand, are smart, creative, and context sensitive.79 Technology can be at its worst, and humans at their best, when novel and complex situations arise. Many catastrophes in complex sociotechnical systems (eg, Three Mile Island) occur in such situations, particularly when the technology does not communicate effectively with its human users.

Thus, HIT must support and extend the work of users,80 81 not try to replace human intelligence. Cognitive support that offers clinicians and patients assistance for thinking about and solving problems related to specific instances of healthcare14 is the area where the power of IT should be focused.
THE ‘ONE SIZE FITS ALL’ FALLACY
HIT cannot be designed as if there is always a single user, such as a doctor, working with a single patient. The one doctor-one patient paradigm has largely been replaced by teams of physicians, nurses, pharmacists, other clinicians, and ancillary staff interacting with multiple patients and their families, often in different physical locations. HIT designed for single users, or for users doing discrete tasks in isolated sessions, is misconceived. There are tremendous differences in the HIT needs of different clinical roles (nurse vs physician), clinical situations (acute vs chronic care), clinical environments (intensive care unit vs ambulatory clinic, etc), and institutions. The interaction of HIT with multiple users will influence communication, coordination, and collaboration. Depending on how well the HIT supports the needs of the different users and of the team as a whole, these interactions may be improved or degraded.80-82 To succeed in today’s team-based healthcare reality, HIT should be designed to: (a) facilitate the necessary collaboration between health professionals, patients, and families; (b) recognize that each member of the collaborative team may have different mental models and information needs; and (c) support both individual and team care needs across multiple diverse care environments and contexts. This will require more than just putting a new front end on a standard core; it needs to inform the fundamental design of the system.
THE ‘WE COMPUTERIZED THE PAPER, SO WE CAN GO PAPERLESS’ FALLACY
Taking the data elements in a paper-based healthcare system and computerizing them is unlikely to create an efficient and effective paperless system. This surprises and frustrates HIT designers and administrators. The reason, however, is that the designers do not fully understand how the paper actually supports users’ cognitive needs. Moreover, computer displays are not yet as portable, flexible, or well designed as paper.83

The paper persistence problem was recently explored at a large Veterans Affairs Medical Center84 where EHRs have existed for 10 years. Paper continues to be used extensively. Why? The paper forms are not simple data repositories that, once computerized, could be eliminated. Rather, such scraps of paper are sophisticated cognitive artifacts that support memory, forecasting and planning, communication, coordination, and education. User-created paper artifacts typically support patient-specific cognition, situational awareness, task and information communication, and coordination, all essential to safe, quality patient care. Paper will persist, and should persist, if HIT is not able to provide similar support.
THE ‘NO ONE ELSE UNDERSTANDS HEALTHCARE’ FALLACY
Designers of HIT need to have a deep, rich, and nuanced understanding of healthcare. However, it is misguided to believe that healthcare is unique or that no one outside of the domain could possibly understand it. This fallacy mistakes a condition that is necessary for success (ie, the design team must include clinicians in the design process) for one that is sufficient (ie, only clinicians can understand and solve complex HIT issues). Teams of well-intentioned clinicians and software engineers may believe that understanding of clinical processes coupled with clever programming can solve the challenges facing healthcare. But such teams typically will not have the requisite breadth and depth of theories, tools, and ideas to develop robust and usable systems. By seeing only what they know, such teams do not understand how clinical work is really carried out, what clinicians’ real needs are, and where the potential hazards and leverage points lie. As a result, problems have been framed too narrowly, leading to impoverished designs and disappointing solutions.85

Understanding what would help people in their complex work is not as simple as asking them what they want,86 an all-too-common approach in HIT design. People’s ideas for what should be part of HIT design are hypotheses based on their perceptions of the world.87 Like all hypotheses, some or many could be wrong. Furthermore, most clinicians are not experts in device design, user interface design, or the relationship between HIT design and performance. What clinicians say they want may be limited by their own understanding of the complexity of their work or even their design vocabulary. Thus, simply asking clinicians (or any end-user, for that matter) what they want and giving it to them is not a wise approach. What clinicians want and what will actually improve their work may be quite different.

Clinicians need to be studied so that the designer is aware of the complexities of their work: the tasks, processes, contexts, contingencies, and constraints. The results of observations, interviews, and other user research can best be analyzed by trained usability engineers and human factors professionals to properly inform design. Similarly, it takes special training and skills to evaluate a human-computer interface, assess the usability of a system, or predict the changes in communication patterns and social structures that a design might induce. Furthermore, design decisions should be based on test results, not user preferences. The involvement of human factors engineering, cognitive engineering, interaction design, psychology, sociology, anthropology, etc, in all phases of HIT design and implementation will not be a panacea, but could substantially improve HIT usability, efficiency, safety, and user satisfaction.80
WHAT SHOULD WE DO NOW?
HIT must be focused on transforming care and improving patient outcomes. HIT must be designed to support the needs of clinicians and their patients.62 As pointed out recently by Shavit,88 ‘It is health that people desire, and health technology utilization is merely the means to achieve it.’ The needs of users and the complexities of clinical work must be analyzed first, followed by evaluation of the entire scope of potential solutions, rather than examining the current array of available products and characterizing the needs that they might meet.88 We must delineate the key questions (based on the critical problems) before we arrive at answers. Unfortunately, insufficient contextual research has been conducted to support effective HIT design and implementation.14 Exemplary research on relevant topics has been carried out for several decades,89-98 but it does not seem that commercial HIT has benefited adequately from these findings. Much more foundational work is needed. We applaud the recent funding of the Strategic Health IT Advanced Research Project on cognitive informatics and decision making in healthcare by the Office of the National Coordinator,99 and hope that this represents the beginning of a sustained research effort on the safe and effective design of HIT.

As stated earlier, appropriate metrics for HIT success should not be adoption or usage, but rather impact on population health. The comparative effectiveness perspective must also be applied to HIT: what is the return on investment of each HIT initiative compared with alternative uses of these funds? Importantly, just as the structure of a single carbon group on a therapeutic molecule can make the difference between a miracle cure and a toxic substance, the details of HIT design and implementation100 in a specific context can make a huge difference to its effectiveness, safety, and real cost (ie, not just the purchase price but training costs, lost productivity, user satisfaction, HIT-induced errors, workarounds, etc).

There are fundamental gaps in the awareness, recognition, and application of existing scientific knowledge bases, especially related to human factors, and systems and cognitive engineering, that could help address some of HIT’s biggest problems. To that end, we recommend the following:
- These challenges will only be overcome by collaborating substantively with those who can contribute unique and important expertise, such as human factors engineers, applied psychologists, medical sociologists, communication scientists, cognitive scientists, and interaction designers. Pilots did not improve aviation safety, nor did nuclear power operators improve nuclear safety... by themselves. Rather, they worked closely with experts in cognitive, social, and physical performance and safety to improve safety. HIT stands to benefit in the same way.
- Humans have very limited insight into their own performance, and even more limited ability to articulate what might improve it. We need substantial research on how clinical work is actually done and should be done. Methods to accomplish this include cognitive field analyses101 (eg, cognitive work analysis,78 cognitive task analysis102), workflow and task analyses103 104 (eg, hierarchical task analysis, sequence diagrams), and human-centered design evaluations61 105-108 (eg, usability testing). The latter takes the results of domain studies and validates them. Validation of HIT cannot be achieved by asking a clinician if they like the design. Validation requires thorough experimental testing of the design based on well-defined performance criteria.
- Measurements of meaningful use15 are designed to facilitate payment of government incentives to physicians for adopting HIT. However, use may not truly be meaningful in a clinical sense until HIT truly supports users’ needs. During HIT development, vendors and healthcare organizations must focus on more meaningful measures of design success: clinician and patient ease of learning, time to find information, time to solve relevant clinical problems, use errors, accuracy of found information, changes in task and information flow, workload, situation awareness, communication and coordination effectiveness, and patient and clinician satisfaction.65 109-112 These measures should be applied to all members of the care team.
These steps alone will require a significant investment by vendors, healthcare organizations, and government funders. The path may seem daunting and the fruits of the investment distant, so a little perspective might help. In 1903, the first controlled powered airplane took flight. In 1947, Fitts113 published a paper in which he explained that:

...up to the present time psychological data and research techniques have played an insignificant role in the field... Particularly in the field of aviation has the importance of human requirements in equipment design come to be recognized. There probably is no other engineering field in which the penalties for failure to suit the equipment to human requirements are so great. With present equipment, flying is so difficult that many individuals cannot learn to pilot an aircraft safely, and even with careful selection and extensive training of pilots, human errors account for a major proportion of aircraft accidents. The point has been reached where addition of new instruments and devices... on the cockpit instrument panel actually tends to decrease the over-all effectiveness of the pilot by increasing the complexity of a task that already is near the threshold of human ability. As aircraft become more complex and attain higher speeds, the necessity for designing the machine to suit the inherent characteristics of the human operators becomes increasingly apparent.

Substitute ‘clinician’ for ‘pilot’ and ‘patient room’ for ‘cockpit’ and the text feels current. In the more than 60 years since that publication, commercial aviation has become very safe. While it may not take 60 years for HIT to become as safe, if we do not change from our current course, it never will be.

Throughout human history, significant innovations have always been associated with new perils. This is as much the case for fire, the wheel, aviation, and nuclear power as it is for HIT. HIT affords real opportunities for improving quality and safety. However, at the same time, it creates substantial challenges, especially during everyday clinical work. This paper is not a Luddite call to cease HIT development and dissemination. Rather, it is a plea to accelerate and support the design and implementation of safer HIT so that we need not wait as long as did aviation to see the fruits of innovation.

We must also consider the likely undesirable consequences of current HIT deployment policies and regulations. The ‘hold harmless’ clauses53 found in many HIT contracts are anathema to organizational learning, innovation, and safety because they stifle reporting and sharing of experiences and data (risk free, field of dreams, and father knows best fallacies). Current meaningful use rules and deadlines leave little time for HIT product improvement and testing, incentivizing rapid implementation of whatever is available (one size fits all fallacy). Despite compelling evidence that HIT works best (and is safest) when it is customized to local circumstances and workflows, the government-sponsored push for meaningful use may leave clinicians trying to adapt their care practices to suboptimal systems (field of dreams and sit-stay fallacies). Finally, the current functional usage measures of meaningful use will focus healthcare facilities and practices on meeting those measures (eg, a certain percentage of prescriptions must be generated by HIT systems) to the exclusion of others (eg, the incidence of inappropriate prescribing) that may be more important (use equals success fallacy). However, if put on the right path now, HIT will ultimately take its rightful place in healthcare, supporting and extending clinician and patient efforts to enhance human health and well-being.
Funding The authors’ time has been supported by grants R18SH017899 from AHRQ
and R01LM008923-01A1 from NIH to BK; IAF06-085 from the Department of
Veterans Affairs Health Services Research and Development Service (HSR&D) and
HS016651 from AHRQ to MBW; and R18HS017902 from AHRQ to RLW.
Competing interests None.
Provenance and peer review Not commissioned; externally peer reviewed.
REFERENCES
1. Bates DW, Leape LL, Cullen DJ, et al. Effect of computerized physician order entry and a team intervention on prevention of serious medication errors. JAMA 1998;280:1311-16.
2. Chaudhry B, Wang J, Wu SY, et al. Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med 2006;144:742-52.
3. Devine EB, Hansen RN, Wilson-Norton JL, et al. The impact of computerized provider order entry on medication errors in a multispecialty group practice. J Am Med Inform Assoc 2010;17:78-84.
4. King WJ, Paice N, Rangrej J, et al. The effect of computerized physician order entry on medication errors and adverse drug events in pediatric inpatients. Pediatrics 2003;112:506-9.
5. Mekhjian HS, Kumar RR, Kuehn L, et al. Immediate benefits realized following implementation of physician order entry at an academic medical center. J Am Med Inform Assoc 2002;9:529-39.
6. Poon EG, Keohane CA, Yoon CS, et al. Effect of bar-code technology on the safety of medication administration. N Engl J Med 2010;362:1698-707.
7. DesRoches CM, Campbell EG, Rao SR, et al. Electronic health records in ambulatory care: a national survey of physicians. N Engl J Med 2008;359:50-60.
8. Furukawa MF, Raghu TS, Spaulding TJ, et al. Adoption of health information technology for medication safety in U.S. hospitals, 2006. Health Aff (Millwood) 2008;27:865-75.
9. Jha AK, DesRoches CM, Campbell EG, et al. Use of electronic health records in U.S. hospitals. N Engl J Med 2009;360:1628-38.
10. Pedersen CA, Gumpper KF. ASHP national survey on informatics: assessment of the adoption and use of pharmacy informatics in US hospitals, 2007. Am J Health Syst Pharm 2008;65:2244-64.
11. Linder JA, Ma J, Bates DW, et al. Electronic health record use and the quality of ambulatory care in the United States. Arch Intern Med 2007;167:1400-5.
12. Zhou L, Soran CS, Jenter CA, et al. The relationship between electronic health record use and quality of care over time. J Am Med Inform Assoc 2009;16:457-64.
13. Himmelstein DU, Wright A, Woolhandler S. Hospital computing and the costs and quality of care: a national study. Am J Med 2010;123:40-6.
14. Stead WW, Lin HS, eds. Computational Technology for Effective Health Care: Immediate Steps and Strategic Directions. Washington DC: National Academies Press, 2009.
15. US Department of Health and Human Services. Final rule on meaningful use. http://edocket.access.gpo.gov/2010/pdf/2010-17207.pdf (accessed 14 Jul 2010).
16. Ash JS, Sittig DF, Dykstra R, et al. The unintended consequences of computerized provider order entry: findings from a mixed methods exploration. Int J Med Inform 2009;78(Suppl 1):S69-76.
17. Koppel R, Metlay JP, Cohen A, et al. Role of computerized physician order entry systems in facilitating medication errors. JAMA 2005;293:1197-203.
18. Koppel R, Wetterneck TB, Telles JL, et al. Workarounds to barcode medication administration systems: occurrences, causes and threats to patient safety. J Am Med Inform Assoc 2008;15:408-28.
19. Nebeker JR, Hoffman JM, Weir CR, et al. High rates of adverse drug events in a highly computerized hospital. Arch Intern Med 2005;165:1111-16.
20. Wears RL, Cook RI, Perry SJ. Automation, interaction, complexity, and failure: a case study. Reliability Engineering & System Safety 2006;91:1494-501.
21. Reason J. Human Error. New York: Cambridge University Press, 1990.
22. Reason J. Managing the Risks of Organizational Accidents. Aldershot: Ashgate, 1997.
23. Dijkstra EW. The end of computing science? Commun ACM 2001;44:92.
24. Sauer C. Deciding the future for IS failures: not the choice you might think. In: Currie W, Galliers R, eds. Rethinking Management Information Systems. Oxford, UK: Oxford University Press, 1999:279-309.
25. Ash JS, Berg M, Coiera E. Some unintended consequences of information technology in health care: the nature of patient care information system-related errors. J Am Med Inform Assoc 2004;11:104-12.
26. Ash JS, Sittig DF, Dykstra RH, et al. Categorizing the unintended sociotechnical consequences of computerized provider order entry. Int J Med Inform 2007;76(Suppl 1):S21-7.
27. Bates DW, Cohen M, Leape LL, et al. Reducing the frequency of errors in medicine using information technology. J Am Med Inform Assoc 2001;8:299-308.
28. Carayon P, Wetterneck TB, Schoofs Hundt A, et al. Evaluation of nurse interaction with Bar Code Medication Administration (BCMA) technology in the work environment. J Patient Saf 2007;3:34-42.
29. Chaudhry B. Computerized clinical decision support: will it transform healthcare? J Gen Intern Med 2008;23(Suppl 1):85-7.
30. Eslami S, Abu-Hanna A, De Keizer NF. Evaluation of outpatient computerized physician medication order entry systems: a systematic review. J Am Med Inform Assoc 2007;14:400-6.
31. Garg AX, Adhikari NKJ, McDonald H, et al. Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA 2005;293:1223-38.
32. Kaushal R, Shojania KG, Bates DW. Effects of computerized physician order entry and clinical decision support systems on medication safety: a systematic review. Arch Intern Med 2003;163:1409-16.
33. Kawamoto K, Houlihan CA, Balas EA, et al. Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. Br Med J 2005;330:765-8.
34. Sittig DF, Wright A, Osheroff JA, et al. Grand challenges in clinical decision support. J Biomed Inform 2008;41:387-92.
35. Beuscart-Zephir MC, Pelayo S, Anceaux F, et al. Impact of CPOE on doctor-nurse cooperation for the medication ordering and administration process. Int J Med Inform 2005;74:629-41.
36. Patterson ES, Cook RI, Render ML. Improving patient safety by identifying side effects from introducing bar coding in medication administration. J Am Med Inform Assoc 2002;9:540-53.
37. van Onzenoort HA, van de Plas A, Kessels AG, et al. Factors influencing bar-code verification by nurses during medication administration in a Dutch hospital. Am J Health Syst Pharm 2008;65:644-8.
38. Wears RL, Perry SJ. Status boards in accident and emergency departments: support for shared cognition. Theoretical Issues in Ergonomics Science 2007;8:371-80.
39. Holden RJ. Cognitive performance-altering effects of electronic medical records: an application of the human factors paradigm for patient safety. Cognition, Technology & Work. Published Online First: 2010. doi:10.1007/s10111-010-0141-8.
40. Jackson D. A direct path to dependable software. Commun ACM 2009;52:78-88.
41. Jackson D, Thomas M, Millett LI, eds. Software for Dependable Systems: Sufficient Evidence? Washington, DC: National Academy Press, 2007.
42. Leveson NG. Safeware: System Safety and Computers. Boston: Addison-Wesley, 1995.
43. Storey N. Safety-Critical Computer Systems. Harlow, UK: Pearson Education Limited, 1996.
44. Wears R, Leveson NG. "Safeware": safety-critical computing and healthcare information technology. In: Henriksen K, Battles JB, Keyes MA, et al, eds. Advances in Patient Safety: New Directions and Alternative Approaches. Vol 4. Technology and Medication Safety. AHRQ Publication No. 08-0034-4. Rockville, MD: Agency for Healthcare Research and Quality, 2008:1-10.
45. Johnson CW. Why did that happen? Exploring the proliferation of barely usable software in healthcare systems. Qual Saf Health Care 2006;15(Suppl 1):i76-81.
46. Hoffman S, Podgurski A. Finding a cure: the case for regulation and oversight of electronic health record systems. Harv J Law Technol 2008;22:104-65.
47. US Department of Health and Human Services. HIT Safety Hearing. http://healthit.hhs.gov/portal/server.pt?open=512&objID=1473&&PageID=17117&mode=2&in_hi_userid=11673&cached=true#2252010 (accessed 5 May 2010).
48. Miller RA, Gardner RM. Recommendations for responsible monitoring and regulation of clinical software systems. J Am Med Inform Assoc 1997;4:442-57.
49. Egerman P, Probst M. Memorandum to David Blumenthal: Adoption-Certification Workgroup HIT Safety Recommendations. http://healthit.hhs.gov/portal/server.pt/gateway/PTARGS_0_11673_911847_0_0_18/AdoptionCertificationLetterHITSafetyFINAL508.pdf (accessed 17 Aug 2010).
50. International Standards Organization. Health informatics: application of clinical risk management to the manufacture of health software. Geneva, Switzerland: International Standards Organization, 2008. ISO/TS 29321:2008.
51. International Standards Organization. Health informatics: guidance on the management of clinical risk relating to the deployment and use of health software systems. Geneva, Switzerland: International Standards Organization, 2008. ISO/TS 29322:2008(E).
52. Läkemedelsverket. Proposal for Guidelines Regarding Classification of Software Based Information Systems Used in Health Care. Stockholm, Sweden: Medical Products Agency, 2009 (revised 18 Jan 2010).
53. Koppel R, Kreda D. Health care information technology vendors' "hold harmless" clause: implications for patients and clinicians. JAMA 2009;301:1276-8.
54. Park S, Rothrock L. Systematic analysis of framing bias in missile defense: implications toward visualization design. Eur J Oper Res 2007;182:1383-98.
55. Hollnagel E, Woods DD. Joint Cognitive Systems: Foundations of Cognitive Systems Engineering. New York: CRC Press, 2005.
56. Zhang J, Norman DA. Representations in distributed cognitive tasks. Cogn Sci 1994;18:87-122.
57. Buckhout R. Eyewitness testimony. Sci Am 1974;231:23-32.
58. Cialdini R. Influence: The Psychology of Persuasion. New York: William Morrow, 1993.
59. Tversky A, Kahneman D. Judgment under uncertainty: heuristics and biases. Science 1974;185:1124-31.
60. Smith GF. Representational effects on the solving of an unstructured decision problem. IEEE Trans Syst Man Cybern 1989;19:1083-90.
61. Weinger M, Gardner-Bonneau D, Wiklund ME, eds. Handbook of Human Factors in Medical Device Design. CRC Press.
62. Karsh B, Alper SJ, Holden RJ, et al. A human factors engineering paradigm for patient safety: designing to support the performance of the health care professional. Qual Saf Health Care 2006;15(Suppl I):i59-65.
63. Reason J. A systems approach to organizational error. Ergonomics 1995;38:1708-21.
64. Carayon P, Hundt AS, Karsh B, et al. Work system design for patient safety: the SEIPS model. Qual Saf Health Care 2006;15(Suppl I):i50-8.
65. Karsh B. Clinical Practice Improvement and Redesign: How Change in Workflow can be Supported by Clinical Decision Support. Rockville, Maryland: Agency for Healthcare Research and Quality, 2009. AHRQ Publication No. 09-0054-EF.
66. Holden RJ, Karsh B. A theoretical model of health information technology behavior. Behav Inf Technol 2009;28:21-38.
67. Diamond CC, Shirky C. Health information technology: a few years of magical thinking? Health Aff 2008;27:W383-90.
68. Berg M. Rationalizing Medical Work. Cambridge, MA: MIT Press, 1997.
69. Ebright PR. Understanding nurse work. Clin Nurse Spec 2004;18:168-70.
70. Ebright PR, Patterson ES, Chalko BA, et al. Understanding the complexity of registered nurse work in acute care settings. J Nurs Adm 2003;33:630-8.
71. Bisantz AM, Wears RL. Forcing functions: the need for restraint. Ann Emerg Med 2008;53:477-9.
72. Simborg DW. Promoting electronic health record adoption. Is it the correct focus? J Am Med Inform Assoc 2008;15:127-9.
73. Greenhalgh T, Potts HW, Wong G, et al. Tensions and paradoxes in electronic patient record research: a systematic literature review using the meta-narrative method. Milbank Q 2009;87:729-88.
74. Abbott P, Coenan A. Globalization and advances in information and communication technologies: the impact on nursing and health. Nurs Outlook 2008;58:238-46.
75. Grudin J. Computer-supported cooperative work: history and focus. IEEE Computer 1994;27:19-27.
76. Frisse ME. Health information technology: one step at a time. Health Aff (Millwood) 2009;28:W379-84.
77. Hoffman RR, Militello LG. Perspectives on Cognitive Task Analysis. New York: Taylor and Francis, 2009.
78. Vicente KJ. Cognitive Work Analysis. Mahwah, NJ: Lawrence Erlbaum Associates, 1999.
79. Norman DA. Things That Make Us Smart: Defending Human Attributes in the Age of the Machine. New York: Basic Books, 1994.
80. Saleem JJ, Russ AL, Sanderson P, et al. Current challenges and opportunities for better integration of human factors research with development of clinical information systems. Yearb Med Inform 2009:48-58.
81. Sittig DF, Singh H. Eight rights of safe electronic health record use. JAMA 2009;302:1111-13.
82. Holden RJ. Physicians' beliefs about using EMR and CPOE: in pursuit of a contextualized understanding of health IT use behavior. Int J Med Inform 2010;79:71-80.
83. Sellen AJ, Harper RHR. The Myth of the Paperless Office. Cambridge, MA: MIT Press, 2002.
84. Saleem JJ, Russ AL, Justice CF, et al. Exploring the persistence of paper with the electronic health record. Int J Med Inform 2009;78:618-28.
85. Kaplan B. Evaluating informatics applications - some alternative approaches: theory, social interactionism, and call for methodological pluralism. Int J Med Inform 2001;64:39-56.
86. Andre AD, Wickens CD. When users want what's not best for them. Ergon Des 1995:10-14.
87. Woods DD. Designs are hypotheses about how artifacts shape cognition and collaboration. Ergonomics 1998;41:168-73.
88. Shavit O. Utilization of health technologies - do not look where there is a light; shine your light where there is a need to look! Relating national health goals with resource allocation decision-making; illustration through examining the Israeli healthcare system. Health Policy 2009;92:268-75.
89. Kushniruk A. Evaluation in the design of health information systems: application of approaches emerging from usability engineering. Comput Biol Med 2002;32:141-9.
90. Kushniruk AW. Analysis of complex decision-making processes in health care: cognitive approaches to health informatics. J Biomed Inform 2001;34:365-76.
91. Kushniruk AW, Patel VL, Cimino JJ. Usability testing in medical informatics: cognitive approaches to evaluation of information systems and user interfaces. Proc AMIA Annu Fall Symp 1997:218-22.
92. Patel VL, Kaufman DR. Medical informatics and the science of cognition. J Am Med Inform Assoc 1998;5:493-502.
93. Patel VL, Kaufman DR, Arocha JA, et al. Bridging theory and practice: cognitive science and medical informatics. Medinfo 1995:1278-82.
94. Ash JS, Gorman PN, Lavelle M, et al. Bundles: meeting clinical information needs. Bull Med Libr Assoc 2001;89:294-6.
95. Gorman P, Ash J, Lavelle M, et al. Bundles in the wild: managing information to solve problems and maintain situation awareness. Libr Trends 2000;49:266-89.
96. Gorman PN. Information needs of physicians. J Am Soc Inf Sci 1995;46:729-36.
97. Zhang JJ, Patel VL, Johnson TR. Medical error: is the solution medical or cognitive? J Am Med Inform Assoc 2002;9:S75-7.
98. Zhang JJ, Patel VL, Johnson TR, et al. A cognitive taxonomy of medical errors. J Biomed Inform 2004;37:193-204.
99. US Department of Health and Human Services. Strategic Health IT Advanced Research Projects (SHARP) Program. http://healthit.hhs.gov/portal/server.pt?open=512&mode=2&objID=1806&PageID=20616 (accessed 17 Aug 2010).
100. Karsh B. Beyond usability for patient safety: designing effective technology implementation systems. Qual Saf Health Care 2004;13:388-94.
101. Bisantz A, Roth E. Analysis of cognitive work. Reviews of Human Factors and Ergonomics 2008;3:1-42.
102. Schraagen JM, Chipman SF, Shalin VL, eds. Cognitive Task Analysis. Mahwah, NJ: Lawrence Erlbaum, 2000.
103. Diaper D, Stanton N, eds. The Handbook of Task Analysis for Human-Computer Interaction. Mahwah, NJ: CRC Press, Lawrence Erlbaum Associates, 2003.
104. Kirwan B, ed. A Guide to Task Analysis. Boca Raton, FL: CRC Press, 1992.
105. Nemeth C. Human Factors Methods for Design: Making Systems Human-Centered. New York: CRC Press, 2004.
106. Nielsen J. Usability Engineering. Boston: Academic Press, 1993.
107. Rubin JR. Handbook of Usability Testing. New York, NY: John Wiley & Sons, 1994.
108. Wiklund ME. Medical Device and Equipment Design: Usability Engineering and Ergonomics. Buffalo Grove, IL: Interpharm Press, 1995.
109. Agutter J, Drews F, Syroid N, et al. Evaluation of graphic cardiovascular display in a high-fidelity simulator. Anesth Analg 2003;97:1403-13.
110. Unertl KM, Weinger MB, Johnson KB, et al. Describing and modeling workflow and information flow in chronic disease care. J Am Med Inform Assoc 2009;16:826-36.
111. Wachter SB, Agutter J, Syroid N, et al. The employment of an iterative design process to develop a pulmonary graphical display. J Am Med Inform Assoc 2003;10:363-72.
112. Weinger MB, Slagle J. Human factors research in anesthesia patient safety: techniques to elucidate factors affecting clinical task performance and decision-making. J Am Med Inform Assoc 2002;9(6 Suppl):S58-63.
113. Fitts PM. Psychological research on equipment design in the AAF. Am Psychol 1947;2:93-8.