Health information technology: fallacies and sober realities
Ben-Tzion Karsh,1 Matthew B Weinger,2,3 Patricia A Abbott,4,5 Robert L Wears6,7
Current research suggests that the rate of adoption of
health information technology (HIT) is low, and that HIT
may not have the touted beneficial effects on quality of
care or costs. The twin issues of the failure of HIT
adoption and of HIT efficacy stem primarily from a series
of fallacies about HIT. We discuss 12 HIT fallacies and
their implications for design and implementation. These
fallacies must be understood and addressed for HIT to
yield better results. Foundational cognitive and human
factors engineering research and development are
essential to better inform HIT development, deployment, and use.
Current research demonstrates that health information technology (HIT) can improve patient safety and healthcare quality, in certain circumstances.1–6 At the same time, other research shows that HIT adoption rates are low,7–10 and that HIT may not reliably improve care quality11 12 or reduce costs.13 A recent National Research Council report14 provided a hypothesis to explain these findings:

…current efforts aimed at the nationwide deployment of health care IT will not be sufficient to achieve the vision of 21st-century health care, and may even set back the cause if these efforts continue wholly without change from their present course. Specifically, success in this regard will require greater emphasis on providing cognitive support for health care providers and for patients and family caregivers… This point is the central conclusion of this report.
This is a stunning conclusion, especially in light of the new Meaningful Use rules.15 Yet, it is consistent with evidence of HIT failures and misuses.16–20 In this article, we argue that the twin issues of the failure of HIT adoption and of HIT efficacy can be understood by examining a series of misguided beliefs about HIT. The implications of these fallacies for HIT design and implementation need to be acknowledged and addressed for HIT use to attain its predicted benefits.
THE ‘RISK FREE HIT’ FALLACY
Many designers and policymakers believe that the
risks of HIT are minor and easily manageable.
However, because HIT is designed, built, and
implemented by humans, it will invariably have
‘bugs’ and latent failure modes.21 22 The deployment of HIT in high-pressure environments with critically ill patients poses significant risk.17–19
Fallible humans have learned to build generally
reliable complex physical systems (eg, bridges,
buildings, cars), but it took more than a century to
understand and mitigate the myriad of hazards of
these systems. In contrast, we cannot yet design
and deploy complex software systems that are on
time, within budget, meet the specified require-
ments, satisfy their users, are reliable (bug free and
available), maintainable, and safe.23
Dijkstra, a recognized leader in software engineering, lamented that:
…most of our systems are much more complicated
than can be considered healthy, and are too messy
and chaotic to be used in comfort and confidence.
The average customer of the computing industry has
been served so poorly that he expects his system to
crash all the time, and we witness a massive
worldwide distribution of bug-ridden software for
which we should be deeply ashamed.23
There are two additional reasons why HIT fail-
ures are particularly problematic. First, they are
often opaque to users and system managers alike; it
can be very challenging to understand exactly how
a particular failure occurred. Envisioning paths to
IT failure in advance, so they might be forestalled,
is particularly difficult.16 25 26 Second, HIT systems
tend to have a ‘magnifying’ property, wherein one
exchanges a large number of small failures for
a small number of large, potentially catastrophic
failures. For example, instead of one pharmacist
making a single transcription error that affects one
patient, when a medication dispensing robot has
a software failure it can produce thousands of
errors an hour. Moreover, as different HIT systems
become coupled (eg, when a CPOE system is
directly linked to a pharmacy information system
and that to an electronic medication administration
record), errors early in the medication process can
more quickly pass unscrutinized to the patient.
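The trade described above, many small failures exchanged for a few large ones, can be made concrete with a toy calculation. Everything in this sketch (order volume, error and fault rates) is invented for illustration; the point is only that two processes with the same average error rate can have radically different worst cases.

```python
# Hypothetical back-of-envelope comparison of the two failure profiles:
# many small independent errors vs. a few large systemic ones.
# All rates and volumes below are invented for illustration.

orders_per_hour = 1_000

# Manual transcription: each order carries an independent 0.2% error chance.
manual_error_prob = 0.002
manual_expected = orders_per_hour * manual_error_prob  # expected errors per hour
manual_worst_event = 1  # a single slip affects a single patient

# Dispensing robot: faults are equally rare per hour, but one software fault
# corrupts every order processed until it is caught.
fault_prob_per_hour = 0.002
robot_expected = orders_per_hour * fault_prob_per_hour  # same expected errors per hour
robot_worst_event = orders_per_hour  # one fault -> 1,000 bad orders

print(manual_expected, robot_expected)        # identical on average: 2.0 2.0
print(manual_worst_event, robot_worst_event)  # but worst cases differ: 1 vs 1000
```

The averages are identical by construction; what changes is the shape of the risk, which is exactly what coupling HIT systems together magnifies.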
Currently, there are no regulatory requirements
to evaluate HIT system safety even though these
systems are known to directly affect patient care in
both positive and negative ways.2 17 18 27–34 Thus, current HIT may:
- Have been developed from erroneous or incomplete design specifications;
- Be dependent on unreliable hardware or software;
- Have programming errors and bugs;
- Work well in one context or organization but be unsafe or even fail in another;
- Change how clinicians do their daily work, introducing new potential failure modes.16 18 28 35–39
1Department of Industrial and Systems Engineering and Systems Engineering Initiative for Patient Safety, University of Wisconsin, Madison, Wisconsin, USA
2Center for Perioperative Research in Quality, Vanderbilt University School of Medicine, Nashville, Tennessee, USA
3Geriatrics Research, Education, and Clinical Center, VA Tennessee Valley Healthcare System, Nashville, Tennessee, USA
4Division of Health Sciences Informatics, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
5Department of Health Systems and Outcomes, Johns Hopkins University School of Nursing, Baltimore, Maryland, USA
6Department of Emergency Medicine, University of Florida, Jacksonville, Florida, USA
7Clinical Safety Research Unit, Imperial College London, London, UK

Correspondence to Ben-Tzion Karsh, Department of Industrial and Systems Engineering and Systems Engineering Initiative for Patient Safety, University of Wisconsin, 1513 University Avenue, Room 3218, Madison, WI 53706, USA

This paper stemmed from the authors’ participation as external resources in a workshop sponsored by the Agency for Healthcare Research and Quality (AHRQ) entitled ‘Wicked Problems in Cutting Edge Computer-Based Decision Support’ held on March 26–27, 2009 at the Center for Better Health, Vanderbilt University.

Received 30 April 2010
Accepted 1 September 2010

J Am Med Inform Assoc 2010;17:617–623. doi:10.1136/jamia.2010.005637

Decades of experience with IT in other hazardous industries have emphasized the importance of these problems40 41 and led to the development of methods for safety critical computing.42 43 Healthcare has been slow to embrace safety critical computing,44 and HIT software has commonly been identified as being among the least reliable.45 A recent National Academy of Science report concluded that IT should be
considered “guilty until proven innocent”, and that the burden
of proof should fall on the vendor to demonstrate to an inde-
pendent certifier or regulator that a system is safe, not on the
customer to prove that it is not.41 No other hazardous industry
deploys safety critical IT without some form of independent
hazard analysis (eg, a ‘safety case’); it is unwise for healthcare to
continue to do so.
THE ‘HIT IS NOT A DEVICE’ FALLACY
An off-shoot of the ‘risk free HIT’ fallacy is the belief that HIT
can be created and deployed without the same level of oversight
as medical devices. Currently, an FDA-approved drug (eg, an
opioid) is delivered by an FDA-approved device (infusion pump)
to a patient in pain. But none of the HIT that mediates and
influences all of the critical steps between the clinician’s deter-
mination that pain relief is needed and the start of the opioid
infusion (eg, order entry with decision support, pharmacy
checking and dispensing systems, robotic medication delivery
and dispensing systems, and bedside medication management
systems) is subject to any independent assessment of its safety
or fitness for purpose. The complexity of HIT systems and the
risk of potentiating serious error is sufficiently significant to
demand effective regulatory oversight.46 The Office of the National Coordinator has recognized this risk, and held hearings on HIT safety on 25 February 2010,47 but it is unlikely that any
effective process of independent review of safety will be in place
in the time frame produced by the HITECH Act.
The issue of regulation of HIT for safety and effectiveness is
a difficult and contentious one. The need for some form of
independent evaluation of HIT safety prior to market intro-
duction has gained recent attention.47 Much has changed since the 1997 publication of Miller and Gardner’s consensus recommendations,48 most notably the recent, relatively rapid and
semi-compulsory implementation of complex HIT systems
under the HITECH Act in organizations without much prior
experience in rolling out or managing such products. Many in
the HIT industry and academics argue that the institution of
FDA-type regulation would be counterproductive, by, for
example, slowing innovation, freezing improvement of current
systems with risky configurations, and ‘freezing out’ small
competitors. However, the current approach can no longer be
justified. A passive monitoring approach, as currently suggested
by the Office of the National Coordinator,49 seems likely to be
both expensive and ultimately ineffective. We believe that
a proactive approach is required.
An alternative to FDA-type regulation would be a pre-market
requirement for a rigorous independent safety assessment. This
approach has shown some promise in proactively identifying
and mitigating risks without unduly degrading innovation and
necessary product evolution. Such an approach has been
endorsed by international standards organizations50 51 and is
beginning to be applied in Europe.52
THE ‘LEARNED INTERMEDIARY’ FALLACY
One of the drivers of the ‘risk free HIT’ fallacy is the ‘learned
intermediary’ doctrine, the idea that HIT risks are negligible
because ‘the human alone ultimately makes the decision’. It is
believed that because a human operator monitors and must
confirm HIT recommendations or actions, that humans can be
depended on to catch any system-induced hazards.53 Paradoxically, this fallacy stands a fundamental argument in favor of HIT on its head (ie, that HIT will help reduce human errors but we will rely on the human to catch the HIT errors). Moreover,
this fallacy assumes that people are unaffected by the tech-
nology they use. However, it is well established that the way in which problems, information, and choices are presented to users by technology reframes them in ways that neither the users nor the designer may appreciate.39 54 Data
presentation format will affect what the user perceives and
believes to be salient (or not) and therefore affects subsequent
decisions and actions.55 56 The clinician does not act or decide in
a vacuum, but is necessarily influenced by the HIT. Users are
inevitably and often unknowingly influenced by what many HIT designers might consider trivial design details: placement (information availability), font size (salience), information similarity and representativeness, perceived credibility (or authority), etc.57–59 For example, changing the order of medication options on a drop-down pick list will influence clinicians’ choices. Empirical studies have demonstrated that people will accept worse solutions from an external aid than they could have conceived of, unaided.60
Because information presentation
profoundly affects user behavior and decision-making, it is
critical that information displays be thoughtfully designed and
rigorously tested to ensure they yield the best possible perfor-
mance outcomes. These evaluations must consider the full
complexity of the context in which the system is to be used.61
THE ‘BAD APPLE’ FALLACY
It is widely believed that many healthcare problems are due
primarily to human (especially clinician and middle manager)
shortcomings. Thus, computerization is proposed as a way to
make healthcare processes safer and more efficient. Further,
when HIT is not used or does not perform as planned, designers
and administrators ask, “Why won’t those [uncooperative, error-
prone] clinicians use the system?” or “Why are they resisting?”
The fingers are pointed squarely at front-line users.
However, human factors engineers, social psychologists, and safety scientists have discredited this ‘bad apple’ view of human error, replacing it with the more accurate and useful systems view of error,21 62 63 which is supported by strong theory
and evidence from safety science, industrial and systems engi-
neering, and social psychology.62 64 65Thus, bad outcomes are the
result of the interactions among systems components including
the people, tools and technologies, physical environment, work-
place culture, and the organizational, state, and federal policies
which govern work. Poor HIT outcomes do not result from
isolated acts of individuals, but from interactions of multiple
latent and triggering factors in a field of practice.18 20 65 66
THE ‘USE EQUALS SUCCESS’ FALLACY
Equating HIT usage with design success can be misleading and
may promulgate inappropriate policies to improve ‘use’.67
Humans are the most flexible and adaptable elements in any
system, and will find ways to attain their goals often despite the
technology. However, the effort and resilience of front-line
workers is a finite resource and its consumption by workarounds
to make HIT work because its use is required reduces the overall
ability of the system to cope with unexpected conditions and
failures. Thus, the fact that people can use a technology to
accomplish their work is not, in itself, an endorsement of that
technology. Conversely, a lack of use is not evidence of a flawed system; clinicians may ignore features like reminders for legitimate reasons. Healthcare is a complex sociotechnical system where simple metrics can mislead because they do not adequately consider the context of human decisions at the time they are made. Thus, the promulgation of ‘meaningful use’ may lead to misleading conclusions. Measures of HIT success should instead be defined by and tied to improved efficiency, learning, ease of use, task and information flow, cognitive load, situation awareness, clinician and patient satisfaction, reduced errors, and faster error recovery.
THE ‘MESSY DESK’ FALLACY
Much of the motivation for HIT stems from the belief that
something is fundamentally wrong with existing clinical work,
that it is too messy and disorganized. It needs to be ‘rationalized’
into something that is nice, neat, and linear.68 However, as
a complex sociotechnical system, many parts of healthcare
delivery are messy and non-linear. That is not to say that waste
does not exist nor does it mean that standardization is unwise.
There exist processes within clinical care that require linearity
and benefit from standardization. But, in many clinical settings,
multiple patients are managed simultaneously, with clinicians
repeatedly switching among sets of goals and tasks, continu-
ously reprioritizing and replanning their work.69 70 In such
settings, patient care is less an algorithmic sequence of choices
and actions than an iterative process of sensing, probing, and
reformulating intermediate goals negotiated among clinicians,
patients, caregivers, and the clinical circumstances. Because of
time constraints, many care goals, and the tasks or decisions
needed to pursue those goals, are intentionally deferred until
a future opportunity.
However, HIT designs often assume a rationalized model of care, for example, requiring clinicians to complete a prescribed set of questions even though the questions and/or their order may not be relevant for a particular patient at that
time. Similarly, some clinical decision support (CDS) rules force
clinicians to stop and respond to the CDS, interrupting their
work, substituting the designer’s judgment for that of the
clinician.71 This mismatch between the reality of clinical work
and how it is rationalized by HIT leads clinicians to perceive
that these systems are disruptive and inefficient. Accommo-
dating the non-linearity of healthcare delivery will require new
paradigms for effective HIT design. Consistent and appropriate
data availability and quick access may need to supplant ‘inte-
gration into workflow’ as a key design goal.
THE ‘FATHER KNOWS BEST’ FALLACY
While HIT has been sold as a solution to healthcare’s quality and
efficiency problems, most of the benefits of current HIT systems accrue to entities upstream from direct patient care processes72: hospital administrators, quality improvement professionals, payors, regulators, and the government.73 In contrast,
those who suffer the costs of poorly designed and inefficient
HIT are front-line providers, clerks, and patients. Thus, most
HIT has been designed to meet the needs of people who do not
have to enter, interact with, or manage the primary (ie, raw)
data. This mismatch between who benefits and who pays leads
to incomplete or inaccurate data entry (‘garbage in, garbage out’), inefficiency, workarounds, and poor adoption.74 This
fundamental principle has been expressed as Grudin’s Law, one
form of which is: “When those who benefit from a technology
are not those who do the work, then the technology is likely to
fail or be subverted.”75
As noted by Frisse,76 HIT that focuses too much on the
administrative aspects of healthcare (eg, complete and accurate
documentation to meet authorization rules or to improve
revenue) rather than on care processes and outcomes (ie, the
actual quality of disease management) will result in a missed
opportunity to truly transform care. Healthcare does not exist to
create documentation or generate revenue, it exists to promote
good health, prevent illness, and help the sick and injured.
Efforts currently underway to align incentives to enhance
adoption of electronic health records (EHRs) are acknowledged
and warranted. However, the definitions of ‘meaningful use’ and
‘certified systems’, and how these milestones are to be measured,
must be considered carefully. Otherwise, unintended conse-
quences, such as physicians and hospitals investing in HIT to the
exclusion of what might actually be more effective local strategies
(eg, use of nurse case managers or process redesign), may occur.
THE ‘FIELD OF DREAMS’ FALLACY AND THE ‘SIT-STAY’ FALLACY
The ‘field of dreams’ fallacy suggests that if you provide HIT to
clinicians, they will gladly use it, and use it as the designer
intended. This fallacy is further reinforced by the belief that
clinicians should rely on HIT because computers are, after all,
smarter than humans (the ‘sit-stay’ fallacy explained below).
The ‘field of dreams’ fallacy is well-known in other domains
where it is also referred to as the ‘designer fallacy’ or ‘designer-
centered design’.77 Here, if a system’s designer thinks the system
works, then any evidence to the contrary must mean the users
are not using it appropriately. In fact, designers sometimes
design for a world that does not actually exist78 (also called the
‘imagined world fallacy’). For healthcare, the imagined world
may be a linear orderly work process used by every clinician.
Computers cannot be described as being inherently ‘smart’.
Instead, computers are very good at repeatedly doing whatever
they were told to do, just like a well-trained animal (ie, ‘sit-
stay’). Computers implement human-derived rules and with
a degree of consistency much higher than human workers. This
does not make them intelligent. Instead, computers are much
more likely than humans to perform their clever and complex
tricks at inappropriate times. Moreover, a computer, in its
consistency, can perpetuate errors on a very large scale. People,
on the other hand, are smart, creative, and context sensitive.79
Technology can be at its worst, and humans at their best, when
novel and complex situations arise. Many catastrophes in
complex sociotechnical systems (eg, Three Mile Island) occur in
such situations, particularly when the technology does not
communicate effectively with its human users.
Thus, HIT must support and extend the work of users,80 81
not try to replace human intelligence. Cognitive support that
offers “clinicians and patients assistance for thinking about and
solving problems related to specific instances of healthcare”14 is
the area where the power of IT should be focused.
THE ‘ONE SIZE FITS ALL’ FALLACY
HIT cannot be designed as if there is always a single user, such as a doctor, working with a single patient. The one doctor, one patient paradigm has largely been replaced by teams of physicians, nurses, pharmacists, other clinicians, and ancillary staff
interacting with multiple patients and their families, often in
different physical locations. HIT designed for single users, or for users doing discrete tasks in isolated ‘sessions’, is misconceived.
There are tremendous differences in the HIT needs of different
clinical roles (nurse vs physician), clinical situations (acute vs
chronic care), clinical environments (intensive care unit vs
ambulatory clinic, etc), and institutions. The interaction of HIT
with multiple users will influence communication, coordination,
and collaboration. Depending on how well the HIT supports the needs of the different users and of the team as a whole, these interactions may be improved or degraded.80–82 To succeed in
today’s team-based healthcare reality, HIT should be designed
to: (a) facilitate the necessary collaboration between health
professionals, patients, and families; (b) recognize that each
member of the collaborative team may have different mental
models and information needs; and (c) support both individual
and team care needs across multiple diverse care environments
and contexts. This will require more than just putting a new
‘front end’ on a standard core; it needs to inform the funda-
mental design of the system.
THE ‘WE COMPUTERIZED THE PAPER, SO WE CAN GO PAPERLESS’ FALLACY
Taking the data elements in a paper-based healthcare system and
computerizing them is unlikely to create an efficient and effec-
tive paperless system. This surprises and frustrates HIT
designers and administrators. The reason, however, is that the
designers do not fully understand how the paper actually
supports users’ cognitive needs. Moreover, computer displays are
not yet as portable, flexible, or well-designed as paper.83
The paper persistence problem was recently explored at a large Veterans Affairs Medical Center84 where EHRs have existed for 10 years. Paper continues to be used extensively. Why? The
paper forms are not simple data repositories that, once
computerized, could be eliminated. Rather, such ‘scraps’ of paper
are sophisticated cognitive artifacts that support memory,
forecasting and planning, communication, coordination, and
education. User-created paper artifacts typically support patient-
specific cognition, situational awareness, task and information
communication, and coordination, all essential to safe quality
patient care. Paper will persist, and should persist, if HIT is not
able to provide similar support.
THE ‘NO ONE ELSE UNDERSTANDS HEALTHCARE’ FALLACY
Designers of HIT need to have a deep, rich, and nuanced
understanding of healthcare. However, it is misguided to believe
that healthcare is unique or that no one outside of the domain
could possibly understand it. This fallacy mistakes a condition that is necessary for success (ie, the design team must include clinicians in the design process) for one that is sufficient (ie,
only clinicians can understand and solve complex HIT issues).
Teams of well-intentioned clinicians and software engineers may
believe that understanding of clinical processes coupled with
clever programming can solve the challenges facing healthcare.
But such teams typically will not have the requisite breadth and
depth of theories, tools, and ideas to develop robust and usable
systems. By seeing only what they know, such teams do not
understand how clinical work is really carried out, what clini-
cians’ real needs are, and where the potential hazards and
leverage points lie. As a result, problems have been framed too
narrowly, leading to impoverished designs and disappointing results.
Understanding what would help people in their complex work
is not as simple as asking them what they want,86 an all too common approach in HIT design. People’s ideas for what should be built are, in effect, hypotheses about how a design will perform in the world.87 Like all hypotheses, some or many could be wrong.
Furthermore, most clinicians are not experts in device design, user interface design, or the relationship between HIT design and
performance. What clinicians say they want may be limited by
their own understanding of the complexity of their work or even
their design vocabulary. Thus, simply asking clinicians (or any users) what they want is insufficient: what they ask for and what would actually improve their work may be quite different.
Clinicians need to be studied so that the designer is aware of the complexities of their work: the tasks, processes, contexts,
contingencies, and constraints. The results of observations,
interviews, and other user research can best be analyzed by
trained usability engineers and human factors professionals to
properly inform design. Similarly, it takes special training and
skills to evaluate a human–computer interface, assess the
usability of a system, or predict the changes in communication
patterns and social structures that a design might induce.
Furthermore, design decisions should be based on test results,
not user preferences. The involvement of human factors engi-
neering, cognitive engineering, interaction design, psychology,
sociology, anthropology, etc, in all phases of HIT design and
implementation will not be a panacea, but could substantially
improve HIT usability, efficiency, safety, and user satisfaction.80
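The principle that design decisions should rest on test results rather than stated preferences can be illustrated with a minimal usability comparison. The task times, design labels, and the use of Welch’s t statistic here are all assumptions of this sketch, not data or methods reported in the article.

```python
# A minimal sketch of choosing between two interface designs on measured
# task performance. The timing data (seconds to complete the same charting
# task) are invented for illustration.
from math import sqrt
from statistics import mean, stdev

design_a = [41, 38, 45, 39, 44, 40, 43, 37]
design_b = [52, 49, 57, 50, 55, 48, 54, 51]

def welch_t(x, y):
    """Welch's t statistic for two independent samples (unequal variances)."""
    se = sqrt(stdev(x) ** 2 / len(x) + stdev(y) ** 2 / len(y))
    return (mean(x) - mean(y)) / se

t = welch_t(design_a, design_b)
print(f"mean A = {mean(design_a):.1f}s, mean B = {mean(design_b):.1f}s, t = {t:.2f}")
# A large-magnitude t suggests the difference is unlikely to be noise; a real
# study would also report degrees of freedom, a p value, and effect size.
```

Even if clinicians preferred design B aesthetically, data like these would argue for design A: the decision criterion is measured performance against a well-defined outcome, not opinion.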
WHAT SHOULD WE DO NOW?
HIT must be focused on transforming care and improving
patient outcomes. HIT must be designed to support the needs
of clinicians and their patients.62 As pointed out recently by Shavit,88 “It is health that people desire, and health technology
utilization is merely the means to achieve it.” The needs of
users and the complexities of clinical work must be analyzed
first, followed by evaluation of the entire scope of potential
solutions, rather than examining the current array of available
products and characterizing the needs that they might meet.88
We must delineate the key questions (based on the critical
problems) before we arrive at answers. Unfortunately, insufficient contextual research has been conducted to support effective HIT design and implementation. Research on relevant topics has been carried out for several decades,89–98 but it does not seem that commercial HIT has
benefited adequately from these findings. Much more founda-
tional work is needed. We applaud the recent funding of the
Strategic Health IT Advanced Research Project on cognitive
informatics and decision making in healthcare by the Office of
the National Coordinator,99 and hope that this represents the
beginning of a sustained research effort on the safe and effective
design of HIT.
As stated earlier, appropriate metrics for HIT success should
not be adoption or usage, but rather impact on population
health. The ‘comparative effectiveness’ perspective must also be applied to HIT: what is the return-on-investment of each HIT initiative compared with alternative uses of these funds?
Importantly, just as the structure of a single carbon group on
a therapeutic molecule can make the difference between
a ‘miracle cure’ and a toxic substance, the details of HIT design
and implementation100 in a specific context can make a huge
difference to its effectiveness, safety, and real cost (ie, not just
the purchase price but training costs, lost productivity, user
satisfaction, HIT-induced errors, workarounds, etc).
There are fundamental gaps in the awareness, recognition,
and application of existing scientific knowledge bases, especially
related to human factors, and systems and cognitive engineering,
that could help address some of HIT’s biggest problems. To that
end, we recommend the following:
- These challenges will only be overcome by collaborating substantively with those who can contribute unique and important expertise such as human factors engineers, applied psychologists, medical sociologists, communication scientists, cognitive scientists, and interaction designers. Pilots did not improve aviation safety, nor did nuclear power operators improve nuclear safety… by themselves. Rather, they worked closely with experts in cognitive, social, and physical performance and safety to improve safety. HIT stands to benefit in the same way.
- Humans have very limited insight into their own performance, and even more limited ability to articulate what might improve it. We need substantial research on how clinical work is actually done and should be done. Methods to accomplish this include cognitive field analyses101 (eg, cognitive work analysis,78 cognitive task analysis102), workflow and task analyses103 104 (eg, hierarchical task analysis, sequence diagrams), and human-centered design evaluations61 105–108 (eg, usability testing). The latter take the results of domain studies and validate them. Validation of HIT cannot be achieved by asking a clinician if they like the design. Validation requires thorough experimental testing of the design based on well-defined performance criteria.
- Measurements of meaningful use15 are designed to facilitate payment of government incentives to physicians for adopting HIT. However, use may not truly be meaningful in a clinical sense until HIT truly supports users’ needs. During HIT development, vendors and healthcare organizations must focus on more meaningful measures of design success: clinician and patient ease of learning, time to find information, time to solve relevant clinical problems, use errors, accuracy of found information, changes in task and information flow, workload, situation awareness, communication and coordination effectiveness, and patient and clinician satisfaction.65 109–112 These measures should be applied to all members of the care team.
These steps alone will require a significant investment by
vendors, healthcare organizations, and government funders. The
path may seem daunting and the fruits of the investment
distant, so a little perspective might help. In 1903, the first
controlled powered airplane took flight. In 1947, Fitts113 published a paper in which he explained that:
…up to the present time psychological data and research techniques have played an insignificant role in the field…
Particularly in the field of aviation has the importance of human
requirements in equipment design come to be recognized. There
probably is no other engineering field in which the penalties for
failure to suit the equipment to human requirements are so great.
With present equipment, flying is so difficult that many individuals
cannot learn to pilot an aircraft safely, and even with careful
selection and extensive training of pilots, human errors account for
a major proportion of aircraft accidents. The point has been reached
where addition of new instruments and devices… on the cockpit
instrument panel actually tends to decrease the over-all
effectiveness of the pilot by increasing the complexity of a task that
already is near the threshold of human ability. As aircraft become
more complex and attain higher speeds, the necessity for designing
the machine to suit the inherent characteristics of the human
operators becomes increasingly apparent.
Substitute ‘clinician’ for ‘pilot’ and ‘patient room’ for ‘cockpit’
and the text feels current. In the more than 60 years since that publication, commercial aviation has become very safe. While it may not take 60 years for HIT to become as safe, if we do not
change from our current course, it never will be.
Throughout human history, significant innovations have
always been associated with new perils. This is as much the case
for fire, the wheel, aviation, and nuclear power as it is for HIT.
HIT affords real opportunities for improving quality and safety.
However, at the same time, it creates substantial challenges,
especially during everyday clinical work. This paper is not
a Luddite call to cease HIT development and dissemination.
Rather, it is a plea to accelerate and support the design and
implementation of safer HIT so that we need not wait as long as
did aviation to see the fruits of innovation.
We must also consider the likely undesirable consequences of
current HIT deployment policies and regulations. The ‘hold
harmless’ clauses53 found in many HIT contracts are anathema
to organizational learning, innovation, and safety because they
stifle reporting and sharing of experiences and data (‘risk free’,
‘field of dreams’, and ‘father knows best’ fallacies). Current
meaningful use rules and deadlines leave little time for HIT
product improvement and testing, incentivizing rapid imple-
mentation of whatever is available (‘one size fits all’ fallacy).
Despite compelling evidence that HIT works best (and is safest)
when it is customized to local circumstances and workflows, the
government-sponsored push for meaningful use may leave
clinicians trying to adapt their care practices to suboptimal
systems (‘field of dreams’ and ‘sit-stay’ fallacies). Finally, the
current functional usage measures of meaningful use will focus
healthcare facilities and practices on meeting those measures (eg,
a certain percentage of prescriptions must be generated by HIT
systems) to the exclusion of others (eg, the incidence of inap-
propriate prescribing) that may be more important (‘use equals
success’ fallacy). However, if put on the right path now, HIT will
ultimately take its rightful place in healthcare, supporting and
extending clinician and patient efforts to enhance human health.
Funding The authors’ time has been supported by grants R18SH017899 from AHRQ
and R01LM008923-01A1 from NIH to BK; IAF06-085 from the Department of
Veterans Affairs Health Services Research and Development Service (HSR&D) and
HS016651 from AHRQ to MBW; and R18HS017902 from AHRQ to RLW.
Competing interests None.
Provenance and peer review Not commissioned; externally peer reviewed.
Bates DW, Leape LL, Cullen DJ, et al. Effect of computerized physician order entry
and a team intervention on prevention of serious medication errors. JAMA
Chaudhry B, Wang J, Wu SY, et al. Systematic review: Impact of health
information technology on quality, efficiency, and costs of medical care. Ann Intern Med.
Devine EB, Hansen RN, Wilson-Norton JL, et al. The impact of computerized
provider order entry on medication errors in a multispecialty group practice. J Am
Med Inform Assoc 2010;17:78e84.
King WJ, Paice N, Rangrej J, et al. The effect of computerized physician order
entry on medication errors and adverse drug events in pediatric inpatients.
Mekhjian HS, Kumar RR, Kuehn L, et al. Immediate benefits realized following
implementation of physician order entry at an academic medical center. J Am Med
Inform Assoc 2002;9:529e39.
Poon EG, Keohane CA, Yoon CS, et al. Effect of bar-code technology on the safety
of medication administration. N Engl J Med 2010;362:1698e707.
DesRoches CM, Campbell EG, Rao SR, et al. Electronic health records in
ambulatory care: a national survey of physicians. N Engl J Med
Furukawa MF, Raghu TS, Spaulding TJ, et al. Adoption of health information
technology for medication safety in U.S. Hospitals, 2006. Health Aff (Millwood)
Jha AK, DesRoches CM, Campbell EG, et al. Use of electronic health records in
U.S. hospitals. N Engl J Med 2009;360:1628e38.
Pedersen CA, Gumpper KF. ASHP national survey on informatics: Assessment of
the adoption and use of pharmacy informatics in US hospitals-2007. Am J Health
Syst Pharm 2008;65:2244e64.
J Am Med Inform Assoc 2010;17:617e623. doi:10.1136/jamia.2010.005637
Linder JA, Ma J, Bates DW, et al. Electronic health record use and the quality of
ambulatory care in the United States. Arch Intern Med 2007;167:1400e5.
Zhou L, Soran CS, Jenter CA, et al. The relationship between electronic health
record use and quality of care over time. J Am Med Inform Assoc 2009;16:
Himmelstein DU, Wright A, Woolhandler S. Hospital computing and the costs and
quality of care: a national study. Am J Med 2010;123:40e6.
Stead WW, Lin HS, eds. Computational Technology for Effective Health Care:
Immediate Steps and Strategic Directions. Washington, DC: National Academies Press.
US Department of Health and Human Services. Final rule on meaningful use.
http://edocket.access.gpo.gov/2010/pdf/2010-17207.pdf (accessed 14 Jul 2010).
Ash JS, Sittig DF, Dykstra R, et al. The unintended consequences of computerized
provider order entry: Findings from a mixed methods exploration. Int J Med Inform
Koppel R, Metlay JP, Cohen A, et al. Role of computerized physician order entry
systems in facilitating medication errors. J Am Med Inform Assoc
Koppel R, Wetterneck TB, Telles JL, et al. Workarounds to barcode medication
administration systems: occurrences, causes and threats to patient safety. J Am
Med Inform Assoc 2008;15:408e28.
Nebeker JR, Hoffman JM, Weir CR, et al. High rates of adverse drug events in
a highly computerized hospital. Arch Intern Med 2005;165:1111e16.
Wears RL, Cook RI, Perry SJ. Automation, interaction, complexity, and failure:
A case study. Reliability Engineering & System Safety 2006;91:1494e501.
Reason J. Human error. New York: Cambridge University Press, 1990.
Reason J. Managing the Risks of Organizational Accidents. Aldershot: Ashgate,
Dijkstra EW. The end of computing science? Commun ACM 2001;44:92.
Sauer C. Deciding the future for IS failures: not the choice you might think. In:
Currie W, Galliers R, eds. Rethinking Management Information Systems. Oxford, UK:
Oxford University Press, 1999:279e309.
Ash JS, Berg M, Coiera E. Some unintended consequences of information
technology in health care: the nature of patient care information system-related
errors. J Am Med Inform Assoc 2004;11:104e12.
Ash JS, Sittig DF, Dykstra RH, et al. Categorizing the unintended sociotechnical
consequences of computerized provider order entry. International Journal of Medical
Informatics 2007;76(Suppl 1):S21e7.
Bates DW, Cohen M, Leape LL, et al. Reducing the frequency of errors in medicine
using information technology. J Am Med Inform Assoc 2001;8:299e308.
Carayon P, Wetterneck TB, Schoofs Hundt A, et al. Evaluation of nurse interaction
with Bar Code Medication Administration (BCMA) technology in the work
environment. J Patient Saf 2007;3:34e42.
Chaudhry B. Computerized clinical decision support: will it transform healthcare?
J Gen Intern Med 2008;23(Suppl 1):85e7.
Eslami S, Abu-Hanna A, de Keizer NF. Evaluation of outpatient computerized
physician medication order entry systems: a systematic review. J Am Med Inform Assoc.
Garg AX, Adhikari NKJ, McDonald H, et al. Effects of computerized clinical decision
support systems on practitioner performance and patient outcomes - A systematic
review. JAMA 2005;293:1223e38.
Kaushal R, Shojania KG, Bates DW. Effects of computerized physician order entry
and clinical decision support systems on medication safety: a systematic review.
Arch Intern Med 2003;163:1409e16.
Kawamoto K, Houlihan CA, Balas EA, et al. Improving clinical practice using clinical
decision support systems: a systematic review of trials to identify features critical to
success. Br Med J 2005;330:765e8.
Sittig DF, Wright A, Osheroff JA, et al. Grand challenges in clinical decision
support. J Biomed Inform 2008;41:387e92.
Beuscart-Zephir MC, Pelayo S, Anceaux F, et al. Impact of CPOE on doctor-nurse
cooperation for the medication ordering and administration process. International
Journal of Medical Informatics 2005;74:629e41.
Patterson ES, Cook RI, Render ML. Improving patient safety by identifying side
effects from introducing bar coding in medication administration. J Am Med Inform Assoc.
van Onzenoort HA, van de Plas A, Kessels AG, et al. Factors influencing bar-code
verification by nurses during medication administration in a Dutch hospital. Am J
Health Syst Pharm 2008;65:644e8.
Wears RL, Perry SJ. Status boards in accident and emergency departments:
support for shared cognition. Theoretical Issues in Ergonomics Science.
Holden RJ. Cognitive performance-altering effects of electronic medical records:
an application of the human factors paradigm for patient safety. Cognition,
Technology & Work. Published Online First: 2010, doi: 10.1007/s10111-010-0141-8.
Jackson D. A direct path to dependable software. Commun ACM.
Jackson D, Thomas M, Millett LI, eds. Software for Dependable Systems:
Sufficient Evidence? Washington, DC: National Academy Press, 2007.
Leveson NG. Safeware: System Safety and Computers. Boston: Addison-Wesley,
Storey N. Safety-Critical Computer Systems. Harlow, UK: Pearson Education
Wears R, Leveson NG. “Safeware”: safety-critical computing and healthcare
information technology. In: Henriksen K, Battles JB, Keyes MA, et al, eds. Advances
in Patient Safety: New Directions and Alternative Approaches. Vol. 4. Technology
and Medication Safety. AHRQ Publication No. 08-0034-4 ed. Rockville, MD: Agency
for Healthcare Research and Quality, 2008:1e10.
Johnson CW. Why did that happen? Exploring the proliferation of barely usable
software in healthcare systems. Qual Saf Health Care 2006;15(Suppl 1):i76e81.
Hoffman S, Podgurski A. Finding a cure: the case for regulation and oversight of
electronic health record systems. Harv J Law Technol 2008;22:104e65.
US Department of Health and Human Services. HIT Safety Hearing. http://
mode=2&in_hi_userid=11673&cached=true#2252010 (accessed 5 May 2010).
Miller RA, Gardner RM. Recommendations for responsible monitoring and
regulation of clinical software systems. J Am Med Inform Assoc 1997;4:442e57.
Egerman P, Probst M. Memorandum to David Blumenthal: Adoption-Certification
Workgroup HIT Safety Recommendations. http://healthit.hhs.gov/portal/server.pt/
AdoptionCertificationLetterHITSafetyFINAL508.pdf (accessed 17 Aug 2010).
International Standards Organization. Health informatics: application of clinical
risk management to the manufacture of health software. Geneva, Switzerland:
International Standards Organization, 2008. ISO/TS 29321:2008.
International Standards Organization. Health Informatics: Guidance on the
Management of Clinical Risk Relating to the Deployment And Use of Health Software
Systems. Geneva, Switzerland: International Standards Organization, 2008. ISO/TS
Läkemedelsverket. Proposal for Guidelines Regarding Classification of Software
Based Information Systems Used in Health Care. Stockholm, Sweden: Medical
Products Agency, 2009 (revised 18 Jan 2010).
Koppel R, Kreda D. Health care information technology vendors’ “hold harmless”
clause: implications for patients and clinicians. JAMA 2009;301:1276e8.
Park S, Rothrock L. Systematic analysis of framing bias in missile defense:
implications toward visualization design. Eur J Oper Res 2007;182:1383e98.
Hollnagel E, Woods DD. Joint Cognitive Systems: Foundations of Cognitive Systems
Engineering. New York: CRC Press, 2005.
Zhang J, Norman DA. Representations in distributed cognitive tasks. Cogn Sci
Buckhout R. Eyewitness testimony. Sci Am 1974;231:23e32.
Cialdini R. Influence: The Psychology of Persuasion. New York: William Morrow,
Tversky A, Kahneman D. Judgment under uncertainty: Heuristics and biases.
Smith GF. Representational effects on the solving of an unstructured decision
problem. IEEE Trans Syst Man Cybern 1989;19:1083e90.
Weinger M, Gardner-Bonneau D, Wiklund ME, eds. Handbook of Human Factors in
Medical Device Design. CRC Press.
Karsh B, Alper SJ, Holden RJ, et al. A human factors engineering paradigm for
patient safety: designing to support the performance of the health care
professional. Qual Saf Health Care 2006;15(Suppl I):i59e65.
Reason J. A systems-approach to organizational error. Ergonomics
Carayon P, Hundt AS, Karsh B, et al. Work system design for patient safety: the
SEIPS model. Qual Saf Health Care 2006;15(Suppl I):i50e8.
Karsh B. Clinical Practice Improvement and Redesign: How Change in Workflow
can be Supported by Clinical Decision Support. Rockville, Maryland: Agency for
Healthcare Research and Quality, 2009. AHRQ Publication No. 09-0054-EF.
Holden RJ, Karsh B. A theoretical model of health information technology behavior.
Behav Inf Technol 2009;28:21e38.
Diamond CC, Shirky C. Health information technology: a few years of magical
thinking? Health Aff 2008;27:W383e90.
Berg M. Rationalizing Medical Work. Cambridge, MA: MIT Press, 1997.
Ebright PR. Understanding nurse work. Clin Nurse Spec 2004;18:168e70.
Ebright PR, Patterson ES, Chalko BA, et al. Understanding the complexity of
registered nurse work in acute care settings. J Nurs Adm 2003;33:630e8.
Bisantz AM, Wears RL. Forcing functions: the need for restraint. Ann Emerg Med
Simborg DW. Promoting electronic health record adoption. Is it the correct focus?
Journal of the American Medical Informatics Association 2008;15:127e9.
Greenhalgh T, Potts HW, Wong G, et al. Tensions and paradoxes in electronic
patient record research: a systematic literature review using the meta-narrative
method. Milbank Q 2009;87:729e88.
Abbott P, Coenen A. Globalization and advances in information and communication
technologies: the impact on nursing and health. Nurs Outlook 2008;58:238e46.
Grudin J. Computer-supported cooperative work: history and focus. IEEE Computer
Frisse ME. Health information technology: one step at a time. Health Aff
Hoffman RR, Militello LG. Perspectives on Cognitive Task Analysis. New York:
Taylor and Francis, 2009.
Vicente KJ. Cognitive Work Analysis. Mahwah, NJ: Lawrence Erlbaum Associates,
Norman DA. Things That Make Us Smart: Defending Human Attributes In The Age
Of The Machine. New York: Basic Books, 1994.
Saleem JJ, Russ AL, Sanderson P, et al. Current challenges and opportunities for
better integration of human factors research with development of clinical
information systems. Yearb Med Inform 2009:48e58.
Sittig DF, Singh H. Eight rights of safe electronic health record use. JAMA
Holden RJ. Physicians’ beliefs about using EMR and CPOE: In pursuit of
a contextualized understanding of health IT use behavior. International Journal of
Medical Informatics 2010;79:71e80.
Sellen AJ, Harper RHR. The Myth of the Paperless Office. Cambridge, MA: MIT Press.
Saleem JJ, Russ AL, Justice CF, et al. Exploring the persistence of paper with the
electronic health record. International Journal of Medical Informatics
Kaplan B. Evaluating informatics applications – some alternative approaches:
theory, social interactionism, and call for methodological pluralism. Int J Med Inform.
Andre AD, Wickens CD. When users want what's not best for them. Ergon Des
Woods DD. Designs are hypotheses about how artifacts shape cognition and
collaboration. Ergonomics 1998;41:168e73.
Shavit O. Utilization of health technologies: do not look where there is a light; shine
your light where there is a need to look! Relating national health goals with resource
allocation decision-making; illustration through examining the Israeli healthcare
system. Health Policy 2009;92:268e75.
Kushniruk A. Evaluation in the design of health information systems: application of
approaches emerging from usability engineering. Comput Biol Med 2002;32:141e9.
Kushniruk AW. Analysis of complex decision-making processes in health care:
cognitive approaches to health Informatics. J Biomed Inform 2001;34:365e76.
Kushniruk AW, Patel VL, Cimino JJ. Usability testing in medical informatics:
Cognitive approaches to evaluation of information systems and user interfaces. Proc
AMIA Annu Fall Symp 1997:218e22.
Patel VL, Kaufman DR. Medical informatics and the science of cognition. Journal of
the American Medical Informatics Association 1998;5:493e502.
Patel VL, Kaufman DR, Arocha JA, et al. Bridging theory and practice: cognitive
science and medical informatics. Medinfo 1995:1278e82.
Ash JS, Gorman PN, Lavelle M, et al. Bundles: meeting clinical Information needs.
Bull Med Libr Assoc 2001;89:294e6.
Gorman P, Ash J, Lavelle M, et al. Bundles in the wild: Managing information to
solve problems and maintain situation awareness. Libr Trends 2000;49:266e89.
Gorman PN. Information needs of physicians. J Am Soc Inf Sci
Zhang JJ, Patel VL, Johnson TR. Medical error: Is the solution medical or
cognitive? Journal of the American Medical Informatics Association
Zhang JJ, Patel VL, Johnson TR, et al. A cognitive taxonomy of medical errors.
J Biomed Inform 2004;37:193e204.
US Department of Health and Human Services. Strategic Health IT Advanced
Research Projects (SHARP) Program. http://healthit.hhs.gov/portal/server.pt?
open=512&mode=2&objID=1806&PageID=20616 (accessed 17 Aug 2010).
Karsh B. Beyond usability for patient safety: designing effective technology
implementation systems. Qual Saf Health Care 2004;13:388e94.
Bisantz A, Roth E. Analysis of cognitive work. Reviews of Human Factors and Ergonomics.
Schraagen JM, Chipman SF, Shalin VL, eds. Cognitive Task Analysis. Mahwah,
NJ: Lawrence Erlbaum, 2000.
Diaper D, Stanton N, eds. The Handbook of Task Analysis for Human-Computer
Interaction. Mahwah, NJ: CRC Press, Lawrence Erlbaum Associates, 2003.
Kirwan B, ed. A Guide to Task Analysis. Boca Raton FL: CRC Press, 1992.
Nemeth C. Human Factors Methods for Design: Making Systems Human-Centered.
New York: CRC Press, 2004.
Nielsen J. Usability Engineering. Boston: Academic Press, 1993.
Rubin JR. Handbook of Usability Testing. New York, NY: John Wiley & Sons, 1994.
Wiklund ME. Medical Device and Equipment Design: Usability Engineering and
Ergonomics. Buffalo Grove, IL: Interpharm Press, 1995.
Agutter J, Drews F, Syroid N, et al. Evaluation of graphic cardiovascular display in
a high-fidelity simulator. Anesth Analg 2003;97:1403e13.
Unertl KM, Weinger MB, Johnson KB, et al. Describing and modeling workflow and
information flow in chronic disease care. Journal of the American Medical
Informatics Association 2009;16:826e36.
Wachter SB, Agutter J, Syroid N, et al. The employment of an iterative design
process to develop a pulmonary graphical display. Journal of the American Medical
Informatics Association 2003;10:363e72.
Weinger MB, Slagle J. Human factors research in anesthesia patient safety:
Techniques to elucidate factors affecting clinical task performance and decision-
making. Journal of the American Medical Informatics Association 2002;9
Fitts PM. Psychological research on equipment design in the AAF. Am Psychol