IMIA Yearbook of Medical Informatics 2019
© 2019 IMIA and Georg Thieme Verlag KG
Yearb Med Inform 2019:14-5
http://dx.doi.org/10.1055/s-0039-1677892
Published online: 25.04.2019

The Price of Artificial Intelligence

Enrico Coiera
Australian Institute of Health Innovation, Macquarie University, Sydney, NSW, Australia

Abstract

Introduction: Whilst general artificial intelligence (AI) is yet to appear, today’s narrow AI is already good enough to transform much of healthcare over the next two decades.

Objective: There is much discussion of the potential benefits of AI in healthcare. This paper reviews the costs that may need to be paid for those benefits, including changes in the way healthcare is practiced, patients are engaged, medical records are created, and work is reimbursed.

Results: Whilst AI will be applied to classic pattern recognition tasks like diagnosis or treatment recommendation, it is likely to be as disruptive to clinical work as it is to care delivery. Digital scribe systems that use AI to automatically create electronic health records promise great efficiency for clinicians but may lead to very different types of clinical records and workflows. In disciplines like radiology, AI is likely to see image interpretation become an automated process with diminishing human engagement. Primary care is also being disrupted by AI-enabled services that automate triage, along with services such as telemedical consultations. This altered future may see an economic change where clinicians are increasingly reimbursed for value, and AI is reimbursed at a much lower cost for volume.

Conclusion: AI is likely to be associated with some of the biggest changes we will see in healthcare in our lifetime. To fully engage with this change brings promise of the greatest reward. To not engage is to pay the highest price.

Keywords: Artificial intelligence, electronic health record, radiology, primary care, value-based care
We are not ready for what is about to come.
It is not that healthcare will soon be run
by a web of artificial intelligences (AIs) that
are smarter than humans. Such general AI
does not appear anywhere near the horizon.
Rather, the narrow AI that we already have,
with all its flaws and limitations, is already
good enough to transform much of what we
do, if applied carefully.
Amara’s Law tells us that we tend to
overestimate the impact of a technology in
the short run, but underestimate its impact
in the long run [1]. There is no doubt that AI has
gone through another boom cycle of inflated
expectations, and that some will be disap-
pointed that promised breakthroughs have
not materialized. Yet, despite this, the next
decade will see a steadily growing stream
of AI applications across healthcare. Many
of these applications may initially be niche,
but eventually they will become mainstream.
Eventually they will lead to substantial
change in the business of healthcare. In
twenty years’ time, there is every prospect that
the changes we find will be transformational.
Such transformation, however, comes with
a price. For all the benefits that will come
through improved efficiency, safety, and
clinical outcomes, there will be costs [2]. The
nature of change is that it often seems to appear
suddenly. While we are all daily distracted try-
ing to make our unyielding health system bend
to our needs using traditional approaches,
disruptive change surprises because it comes
from places we least expected, and in ways we
never quite imagined.
In linguistics, the Whorf hypothesis says
that we can only imagine what we can speak
of [3]. Our cognition is limited by the concepts
we have words for. It is much the same in the
world of health informatics. We have devel-
oped strict conceptual structures that corral AI
into solving classic pattern recognition tasks
like diagnosis or treatment recommendation.
We think of AI automating image interpreta-
tion, or sifting electronic health record data
for personalized treatment recommendations.
Most of us, however, rarely think about AI automating
foundational business processes. Yet AI is
likely to be more disruptive to clinical work
in the short run than it will be to care delivery.
Digital scribes, for example, will steadily
take on more of the clinical documentation task
[4]. Scribes are digital assistants that listen to
clinical talk such as patient consultations. They
may undertake a range of tasks from simple
transcription through to the summarization of
key speech elements into the electronic record,
as well as providing information retrieval and
question-answering services. The promise of
digital scribes is a reduction in human docu-
mentation burden. The price for this help will
be a re-engineering of the clinical encounter.
The technology to recognize and interpret
clinical speech from multiple speakers, and
to transform that speech into accurate clinical
summaries is not yet here. However, if humans
are willing to change how they speak, for
example by giving an AI commands and hints,
then much can be done today. It is easier for
a human to say “Scribe, I’d like to prescribe
some medication” than for the AI to be trained
to accurately recognize whether the speech it
is listening to is past history, present history,
or prescription talk.
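A toy sketch makes the asymmetry concrete. The wake word and command phrases below are hypothetical conventions, far simpler than real speech understanding: with an explicit command, routing an utterance to a record section is near-trivial string matching, whereas unprompted speech would need a trained classifier.

```python
# Toy sketch: an explicit wake word and command (both hypothetical
# conventions, not a real product's grammar) reduce utterance routing
# to string matching, whereas free-form clinical speech would need a
# trained classifier to decide which record section it belongs to.

def route_utterance(utterance: str) -> str:
    """Return the record section an utterance should be filed under."""
    text = utterance.lower().strip()
    if text.startswith("scribe,"):
        # Command mode: the clinician has told the scribe what follows.
        command = text[len("scribe,"):]
        if "prescribe" in command:
            return "prescription"
        if "past history" in command:
            return "past_history"
        return "unknown_command"
    # Free-speech mode: no hint, so a statistical model would be needed.
    return "needs_classifier"

print(route_utterance("Scribe, I'd like to prescribe some medication"))
# -> prescription
```

The price of this simplicity is, as above, that the clinician must change how they speak to suit the machine.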
The price for using a scribe might also be
an even more obvious intrusion of technol-
ogy between patient and clinician, and new
risks to patient privacy because speech data
contains even more private information than
clinician-generated records. Clinicians might simply exchange today’s effort in creating records, where they have control over content, for new work in reviewing and editing automated records, where content reflects the design of the AI. There are also subtler
risks. Automation bias might mean that many
clinicians cease to worry about what should
go into a clinical document, and simply
accept whatever a machine has generated
[5]. Given the widespread use of copy and
paste in current-day electronic records [6],
such an outcome seems a distinct possibility.
At this moment, narrow AI, predominantly in the form of deep learning, is making
great inroads into pattern recognition tasks
such as diagnostic radiological image inter-
pretation [7]. The sheer volume of training
data now available, along with access to
cheap computational resources, has allowed
previously impractical neural network archi-
tectures to come into their own. When a price
for deep learning is discussed, it is often in
terms of the end of clinical professions such
as radiology or dermatology [8]. Human
expertise is to be rendered redundant by
super-human automation.
The reality is much more nuanced. Firstly,
there remain great challenges to generalizing
narrow AI methods. A well-trained deep
network typically does better on data sets
that resemble its training population [9]. The
appearance of unexpected new edge cases, or the implicit learning of features such as clinical workflow or image quality [10], can degrade performance. One remedy for this
limitation is transfer learning [11], retraining
an algorithm on new data taken from the
local context in which it will operate. So, just
as we have seen with electronic records, the
prospect of cheap and generalizable technol-
ogy might be a fantasy, and expensive system
localization and optimization may become
the lived AI reality.
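That remedy can be sketched in miniature. The example below is a minimal, synthetic illustration of the general transfer learning idea, not any particular clinical system: a stand-in "pretrained" feature extractor is kept frozen, and only a small output layer is retrained on data from the local site.

```python
import numpy as np

# Minimal synthetic sketch of transfer learning: a "pretrained"
# feature extractor is kept frozen, and only a small output head is
# retrained on data from the local site. All data here are synthetic;
# this stands in for retraining a deep network's final layers.

rng = np.random.default_rng(0)

def pretrained_features(x):
    # Stand-in for the frozen layers of a network trained elsewhere.
    W = np.array([[1.0, -0.5], [0.3, 0.8]])
    return np.tanh(x @ W)

# Synthetic "local" data, with a label rule specific to this site.
X_local = rng.normal(size=(200, 2))
y_local = (X_local[:, 0] + 0.5 * X_local[:, 1] > 0).astype(float)

# Retrain only the head (logistic regression on the frozen features).
feats = pretrained_features(X_local)
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # predicted probabilities
    w -= 0.5 * feats.T @ (p - y_local) / len(y_local)
    b -= 0.5 * np.mean(p - y_local)

acc = float(np.mean(((feats @ w + b) > 0) == (y_local > 0)))
print(f"local accuracy after retraining the head: {acc:.2f}")
```

Only the small head is fitted to local data; the expensive feature extractor is reused as-is, which is precisely why localization is cheaper than training from scratch, yet still far from free.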
Secondly, the radiological community
has reacted early, and proactively, to these
challenges. Rather than resisting change,
there is strong evidence not just that AI is
being actively embraced within the world
of radiology, but also that there is an under-
standing that change brings not just risks, but
opportunities. In the future, radiologists might
be freed from working in darkened reading
rooms, and emerge to become highly visible participants in clinical care. Indeed, in the
future, the idea of being an expert in just a
single modality such as image interpretation
may seem quaint, as radiologists transform
into diagnostic experts, integrating data from
multiple modalities from the genetic through
to the radiologic.
The highly interconnected nature of
healthcare means that changes in one part
of the system will require different changes
elsewhere. Radiologists in many parts of the
world are paid for each image they read. With
the arrival of cheap bulk AI image interpre-
tation, that payment model must change. The
price of reading must surely drop, and expert
humans must instead be paid for the value
they create, not the volume they process.
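A toy calculation, with entirely made-up fees, illustrates the arithmetic of that shift: once cheap AI reads collapse the per-image price, clinician income can only be sustained through a value-based component.

```python
# Toy arithmetic with entirely hypothetical fees (no real payer's
# schedule): comparing a volume-based radiology income with a
# value-shifted one once AI performs the bulk reads.

reads_per_day = 100

# Today: the radiologist is paid per image read.
human_fee_per_read = 30.0
volume_income = reads_per_day * human_fee_per_read  # 3000.0

# Tomorrow: AI does bulk interpretation cheaply; the human receives a
# smaller per-read oversight fee plus a value-based payment for the
# judgement and integration that only the expert provides.
ai_fee_per_read = 1.0
oversight_fee_per_read = 5.0
daily_value_payment = 2300.0  # hypothetical value-based component

value_income = reads_per_day * oversight_fee_per_read + daily_value_payment
system_cost = reads_per_day * (ai_fee_per_read + oversight_fee_per_read) + daily_value_payment

print(f"volume-era clinician income: {volume_income:.0f}")
print(f"value-era clinician income:  {value_income:.0f}")
print(f"value-era total system cost: {system_cost:.0f}")
```

Under these made-up figures, the system pays less in total while the expert's income is largely sustained by the value component rather than by the per-read fee.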
The same kind of business pressure is
being felt in other clinical specialties. In
primary care, for example, the arrival of
new, sometimes aggressive, players who base
their business model on AI patient triage and
telemedicine is already problematic [12, 13].
Patients might love the convenience of such
services, especially when they are technolog-
ically literate, young, and in good health, but
they may not always be so well served if they
are older, or have complex comorbidities [14].
Thus, AI-based primary care services might
end up caring for profitable low-cost and low-
risk patients, and leave the remainder to be
managed by a financially diminished existing
primary care system. One remedy to such a
risk is again to move away from reimburse-
ment for volume, to reimbursement for value.
Indeed, value-based healthcare might arrive
not as the product of government policy, but
as a necessary side effect of AI automation.
There are thus early lessons in the different
reactions to AI between primary care and
radiology. One sector is being caught by sur-
prise and playing catch up to new commercial
realities that have come more quickly than
expected; the other has begun to reimagine
itself in anticipation of becoming the ones
that craft the new reality. The price each
sector pays is different. Proactive preparation
requires investment in reshaping workforce,
and actively engaging with industry, con-
sumers, and government. It requires serious
consideration of new safety and ethical risks
[15]. In contrast, reactive resistance takes a toll
on clinical professionals who rightly wish to
defend their patients’ interests, as much as their
own right to have a stake in them. Unexpected
change may end up eroding or even destroying
important parts of the existing health system
before there is a chance to modernize them.
So, the fate of medicine, and indeed of
all of healthcare, is to change [15]. As change
makers go, AI is likely to be among the
biggest we will see in our time. Its tendrils
will touch everything from basic biomedical
discovery science through to the way we each
make our daily personal health decisions. For
such change we must expect to pay a price.
What is paid, by whom, and who benefits, all
depend very much on how we engage with
this profound act of reinvention. To fully
engage brings promise of the greatest reward.
To not engage is to pay the highest price.
References
1. Roy Amara 1925–2007, American futurologist. In: Ratcliffe S, editor. Oxford Essential Quotations. 4th ed; 2016.
2. Schwartz WB. Medicine and the Computer. The Promise and Problems of Change. N Engl J Med 1970;283(23):1257-64.
3. Kay P, Kempton W. What is the Sapir-Whorf hypothesis? Am Anthropol 1984;86(1):65-79.
4. Coiera E, Kocaballi B, Halamka J, Laranjo L. The digital scribe. NPJ Digit Med 2018;1:58.
5. Lyell D, Coiera E. Automation bias and verification complexity: a systematic review. J Am Med Inform Assoc 2017;24(2):423-31.
6. Siegler EL, Adelman R. Copy and paste: a remediable hazard of electronic health records. Am J Med 2009 Jun;122(6):495-96.
7. Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, et al. A survey on deep learning in medical image analysis. Med Image Anal 2017 Dec;42:60-88.
8. Darcy AM, Louie AK, Roberts LW. Machine learning and the profession of medicine. JAMA 2016;315(6):551-2.
9. Chen JH, Asch SM. Machine Learning and Prediction in Medicine - Beyond the Peak of Inflated Expectations. N Engl J Med 2017;376(26):2507-09.
10. Zech JR, Badgeley MA, Liu M, Costa AB, Titano JJ, Oermann EK. Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: A cross-sectional study. PLoS Med 2018 Nov 6;15(11):e1002683.
11. Pan SJ, Yang Q. A survey on transfer learning. IEEE Trans Knowl Data Eng 2010;22(10):1345-59.
12. McCartney M. General practice can’t just exclude sick people. BMJ 2017;359:j5190.
13. Fraser H, Coiera E, Wong D. Safety of patient-facing digital symptom checkers. Lancet 2018 Nov 24;392(10161):2263-4.
14. Marshall M, Shah R, Stokes-Lampard H. Online consulting in general practice: making the move from disruptive innovation to mainstream service. BMJ 2018 Mar 26;360:k1195.
15. Coiera E. The fate of medicine in the time of AI. Lancet 2018;392(10162):2331-2.
Correspondence to:
Enrico Coiera
Australian Institute of Health Innovation
Macquarie University
Level 6 75 Talavera Rd
Sydney, NSW 2109, Australia
E-mail: enrico.coiera@mq.edu.au
... Prediction of illness development [94][95][96][97][98] Improvements in treatment optimization and effectiveness [94,97,99,100] Evidence-based recommendations [60,98,101] Delegation of simple and repeating tasks to AI [96] Lower number of hospitalizations [95] Cost cutting [77,95,97] Less pressure on scarce HR in healthcare [102,103] Automatic recall and rescheduling of patients [98] Bigger potential of other digital innovations [68,104] Ability to process huge amounts of data [101] AI-biosensors (miniaturization, scalability, low power consumption, high sensitivity, multifunction, safety, non-toxicity, and degradation) [77] Incompatible with older infrastructure [105] Lack of understanding of AI functionality [68,106] Inefficient use of AI in day-to-day workflows [107,108] Potential conflict between human ability to act autonomously and the complicated, allegedly infallible machine logic (known as automation bias [69,100] Legal and ethical issues [68,95,100,101,104] Physicians' concern about AI (security, privacy, and confidentiality) [68,101] Missing multidisciplinary AI teams [98] [47,91] Declining patient self-discipline over time [91] Limited availability due to high production costs of some technologies [77] Internet of Things (IoT) ...
... Prediction of illness development [94][95][96][97][98] Improvements in treatment optimization and effectiveness [94,97,99,100] Evidence-based recommendations [60,98,101] Delegation of simple and repeating tasks to AI [96] Lower number of hospitalizations [95] Cost cutting [77,95,97] Less pressure on scarce HR in healthcare [102,103] Automatic recall and rescheduling of patients [98] Bigger potential of other digital innovations [68,104] Ability to process huge amounts of data [101] AI-biosensors (miniaturization, scalability, low power consumption, high sensitivity, multifunction, safety, non-toxicity, and degradation) [77] Incompatible with older infrastructure [105] Lack of understanding of AI functionality [68,106] Inefficient use of AI in day-to-day workflows [107,108] Potential conflict between human ability to act autonomously and the complicated, allegedly infallible machine logic (known as automation bias [69,100] Legal and ethical issues [68,95,100,101,104] Physicians' concern about AI (security, privacy, and confidentiality) [68,101] Missing multidisciplinary AI teams [98] [47,91] Declining patient self-discipline over time [91] Limited availability due to high production costs of some technologies [77] Internet of Things (IoT) ...
... Prediction of illness development [94][95][96][97][98] Improvements in treatment optimization and effectiveness [94,97,99,100] Evidence-based recommendations [60,98,101] Delegation of simple and repeating tasks to AI [96] Lower number of hospitalizations [95] Cost cutting [77,95,97] Less pressure on scarce HR in healthcare [102,103] Automatic recall and rescheduling of patients [98] Bigger potential of other digital innovations [68,104] Ability to process huge amounts of data [101] AI-biosensors (miniaturization, scalability, low power consumption, high sensitivity, multifunction, safety, non-toxicity, and degradation) [77] Incompatible with older infrastructure [105] Lack of understanding of AI functionality [68,106] Inefficient use of AI in day-to-day workflows [107,108] Potential conflict between human ability to act autonomously and the complicated, allegedly infallible machine logic (known as automation bias [69,100] Legal and ethical issues [68,95,100,101,104] Physicians' concern about AI (security, privacy, and confidentiality) [68,101] Missing multidisciplinary AI teams [98] [47,91] Declining patient self-discipline over time [91] Limited availability due to high production costs of some technologies [77] Internet of Things (IoT) ...
Article
Full-text available
Citation: Hospodková, P.; Berežná, J.; Barták, M.; Rogalewicz, V.; Severová, L.; Svoboda, R. Change Management and Digital Innovations in Hospitals of Five European Countries. Healthcare 2021, 9, 1508. https:// Abstract: The objective of the paper is to evaluate the quality of systemic change management (CHM) and readiness for change in five Central European countries. The secondary goal is to identify trends and upcoming changes in the field of digital innovations in healthcare. The results show that all compared countries (regardless of their historical context) deal with similar CHM challenges with a rather similar degree of success. A questionnaire distributed to hospitals clearly showed that there is still considerable room for improvement in terms of the use of specific CHM tools. A review focused on digital innovations based on the PRISMA statement showed that there are five main directions, namely, data collection and integration, telemedicine, artificial intelligence, electronic medical records, and M-Health. In the hospital environment, there are considerable reservations in applying change management principles, as well as the absence of a systemic approach. The main factors that must be monitored for a successful and sustainable CHM include a clearly defined and widely communicated vision, early engagement of all stakeholders, precisely set rules, adaptation to the local context and culture, provision of a technical base, and a step-by-step implementation with strong feedback.
... Two studies evaluated the efficacy of voice recognition software and determined that summarization technology is feasible in non-linear settings, and that computer-based systems can be successfully used in clinical practice [29,30]. Five studies detailed practicalities of digital scribe integration and its associated challenges; [16,19,[26][27][28] two of these detailed how the current way of practicing medicine must change in order to work with digital scribe technology [16,27], one detailed how implementation can be done in discreet steps allowing for physicians to train the digital scribe to ultimately work together [19], and two outlined details of implementation and potential barriers [26,28]. Finally, one study evaluated the cost effectiveness of implementing digital scribe technology into clinical practice [31]. ...
... Two studies evaluated the efficacy of voice recognition software and determined that summarization technology is feasible in non-linear settings, and that computer-based systems can be successfully used in clinical practice [29,30]. Five studies detailed practicalities of digital scribe integration and its associated challenges; [16,19,[26][27][28] two of these detailed how the current way of practicing medicine must change in order to work with digital scribe technology [16,27], one detailed how implementation can be done in discreet steps allowing for physicians to train the digital scribe to ultimately work together [19], and two outlined details of implementation and potential barriers [26,28]. Finally, one study evaluated the cost effectiveness of implementing digital scribe technology into clinical practice [31]. ...
... This time intensive process may require clinic modifications (e.g. reduced patient census, addition of trainees or superusers during initial implementation phases) or increased time spent outside of clinic hours learning how to use the technology, with or without compensation [27]. While the time spent learning how to interact with the digital scribe will be beneficial once perfected, learning how to use and incorporate a new technology can be a significant undertaking -a familiar experience for physicians who practiced through periods of initial EHR adoption [39,40]. ...
Article
Full-text available
Electronic health records (EHRs) allow for meaningful usage of healthcare data. Their adoption provides clinicians with a central location to access and share data, write notes, order labs and prescriptions, and bill for patient visits. However, as non-clinical requirements have increased, time spent using EHRs eclipsed time spent on direct patient care. Several solutions have been proposed to minimize the time spent using EHRs, though each have limitations. Digital scribe technology uses voice-to-text software to convert ambient listening to meaningful medical notes and may eliminate the physical task of documentation, allowing physicians to spend less time on EHR engagement and more time with patients. However, adoption of digital scribe technology poses many barriers for physicians. In this study, we perform a scoping review of the literature to identify barriers to digital scribe implementation and provide solutions to address these barriers. We performed a literature review of digital scribe technology and voice-to-text conversion and information extraction as a scope for future research. Fifteen articles met inclusion criteria. Of the articles included, four were comparative studies, three were reviews, three were original investigations, two were perspective pieces, one was a cost-effectiveness study, one was a keynote address, and one was an observational study. The published articles on digital scribe technology and voice-to-text conversion highlight digital scribe technology as a solution to the inefficient interaction with EHRs. Benefits of digital scribe technologies included enhancing clinician ability to navigate charts, write notes, use decision support tools, and improve the quality of time spent with patients. Digital scribe technologies can improve clinic efficiency and increase patient access to care while simultaneously reducing physician burnout. 
Implementation barriers include upfront costs, integration with existing technology, and time-intensive training. Technological barriers include adaptability to linguistic differences, compatibility across different clinical encounters, and integration of medical jargon into the note. Broader risks include automation bias and risks to data privacy. Overcoming significant barriers to implementation will facilitate more widespread adoption. Supplementary information: The online version contains supplementary material available at 10.1007/s12553-021-00568-0.
... Therefore, it is imperative to leverage digital technologies to further improve our understanding of disease pathogenesis, diagnosis, and therapy. Stakeholders in medicine need to believe that new technologies provide an advantage to traditional working structures and are effortless to apply before they will accept them [38,39]. Naturally, people fear that AI may replace clinicians or take their jobs. ...
Article
Full-text available
Digital technologies in health care, including artificial intelligence (AI) and robotics, constantly increase. The aim of this study was to explore attitudes of 2020 medical students’ generation towards various aspects of eHealth technologies with the focus on AI using an exploratory sequential mixed-method analysis. Data from semi-structured interviews with 28 students from five medical faculties were used to construct an online survey send to about 80,000 medical students in Germany. Most students expressed positive attitudes towards digital applications in medicine. Students with a problem-based curriculum (PBC) in contrast to those with a science-based curriculum (SBC) and male undergraduate students think that AI solutions result in better diagnosis than those from physicians (p < 0.001). Male undergraduate students had the most positive view of AI (p < 0.002). Around 38% of the students felt ill-prepared and could not answer AI-related questions because digitization in medicine and AI are not a formal part of the medical curriculum. AI rating regarding the usefulness in diagnostics differed significantly between groups. Higher emphasis in medical curriculum of digital solutions in patient care is postulated.
... The use of AI to ensure quality delivery is essential in the fight against diseases and there is no doubt that the next decade will see a growing stream of AI applications across healthcare. "We are not ready for what is about to come" Coiera tells us [1], a statement highlighting the need for healthcare practitioners and services to prepare for AI's adoption into health practice. With the progress of AI systems, it seems obvious that machine thinking will invade our workspace, literally and figuratively, in all areas of breast cancer care. ...
Article
Introduction: Early recognition of out-of-hospital cardiac arrest (OHCA) by ambulance service call centre operators is important so that cardiopulmonary resuscitation can be delivered immediately, but around 25% of OHCAs are not picked up by call centre operators. An artificial intelligence (AI) system has been developed to support call centre operators in the detection of OHCA. The study aims to (1) explore ambulance service stakeholder perceptions on the safety of OHCA AI decision support in call centres, and (2) develop a clinical safety case for the OHCA AI decision-support system. Methods and analysis: The study will be undertaken within the Welsh Ambulance Service. The study is part research and part service evaluation. The research utilises a qualitative study design based on thematic analysis of interview data. The service evaluation consists of the development of a clinical safety case based on document analysis, analysis of the AI model and its development process and informal interviews with the technology developer. Conclusions: AI presents many opportunities for ambulance services, but safety assurance requirements need to be understood. The ASSIST project will continue to explore and build the body of knowledge in this area.
Article
An increasing number of artificial intelligence (AI) systems are being proposed in ophthalmology, motivated by the variety and amount of clinical and imaging data, as well as their potential benefits at the different stages of patient care. Despite achieving close or even superior performance to that of experts, there is a critical gap between development and integration of AI systems in ophthalmic practice. This work focuses on the importance of trustworthy AI to close that gap. We identify the main aspects or challenges that need to be considered along the AI design pipeline so as to generate systems that meet the requirements to be deemed trustworthy, including those concerning accuracy, resiliency, reliability, safety, and accountability. We elaborate on mechanisms and considerations to address those aspects or challenges, and define the roles and responsibilities of the different stakeholders involved in AI for ophthalmic care, i.e., AI developers, reading centers, healthcare providers, healthcare institutions, ophthalmological societies and working groups or committees, patients, regulatory bodies, and payers. Generating trustworthy AI is not a responsibility of a sole stakeholder. There is an impending necessity for a collaborative approach where the different stakeholders are represented along the AI design pipeline, from the definition of the intended use to post-market surveillance after regulatory approval. This work contributes to establish such multi-stakeholder interaction and the main action points to be taken so that the potential benefits of AI reach real-world ophthalmic settings.
Article
Full-text available
Introduction The Royal Australian and New Zealand College of Radiologists (RANZCR) led the medical community in Australia and New Zealand in considering the impact of machine learning and artificial intelligence (AI) in health care. RANZCR identified that medical leadership was largely absent from these discussions, with a notable absence of activity from governments in the Australasian region up to 2019. The clinical radiology and radiation oncology sectors were considered ripe for the adoption of AI, and this raised a range of concerns about how to ensure the ethical application of AI and to guide its safe and appropriate use in our two specialties. Methods RANZCR’s Artificial Intelligence Committee undertook a landscape review in 2019 anddetermined that AI within clinical radiology and radiation oncology had the potential to grow rapidly and significantly impact the professions. In order to address this, RANZCR drafted ethical principles on the use of AI and standards to guide deployment and engaged in extensive stakeholder consultation to ensure a range of perspectives were received and considered. Results RANZCR published two key bodies of work: The Ethical Principles of Artificial Intelligence in Medicine, and the Standards of Practice for Artificial Intelligence in Clinical Radiology. Conclusion RANZCR’s publications in this area have established a solid foundation to prepare for the application of AI, however more work is needed. We will continue to assess the evolution of AI and ML within our professions, strive to guide the upskilling of clinical radiologists and radiation oncologists, advocate for appropriate regulation and produce guidance to ensure that patient care is delivered safely.
Chapter
Medicine is a human endeavor aided by a sophisticated set of diagnostic tools. Healthcare systems are challenged with incorporating new and unfamiliar technology into existing systems of practice. As diagnostic tools such as artificial intelligence have entered the realm of clinical practice, new opportunities have arisen to optimize healthcare delivery. Overreliance on AI may lead to the dehumanization of medicine. However, with appropriate implementation, AI can free up time and resources to allow healthcare providers to focus on aspects of care that are unique humanistic. Effective medical practice requires availability of data, application of information, and appropriate clinical judgement. A large portion of modern patient care takes place without the presence of the patient. AI has shown the potential to synthesize and summarize vast amounts of data from medical records, clinical trials, and best-practice guidelines. By tailoring all available data to each case, AI can serve as an asset in enhancing diagnostic accuracy and increasing the efficiency of healthcare delivery. However, clinical decisions made between patients and their physicians cannot be reduced to a set of parameters, code, or logic trees. Clinical judgment and the implementation of available information remains necessarily human tasks. Only through a strong therapeutic relationship built on trust and empathy can shared decision making and compliance be attained. We propose a framework through which AI and humanistic medicine can build on one another to create a symbiosis of the highest possible caliber of patient care and healthcare quality.
Article
Misdiagnosis by physicians is a common problem, affecting 5% of outpatients. There is growing interest in computerised diagnostic decision support systems for physicians, and increasingly for direct use by patients on mobile phones, termed symptom checkers (SC). These have the potential to improve the way in which health care is delivered and reduce the burden on GP services. However, claims have been made that the SC from Babylon Health is more accurate at diagnosis than physicians. Evaluations to date have primarily been conducted in controlled environments using clinician-generated scenarios, and surrogate outcomes such as diagnostic performance in lieu of clinical outcomes. Such results are unlikely to reflect real-world use and can be unrealistically optimistic. Patient use risks missing important diagnoses and/or increasing the burden on the health system. To avoid this, we advocate the use of multi-stage evaluation, building on many years of experience in health informatics and reflecting best practice in other areas of medicine.
Article
BACKGROUND: There is interest in using convolutional neural networks (CNNs) to analyze medical imaging to provide computer-aided diagnosis (CAD). Recent work has suggested that image classification CNNs may not generalize to new data as well as previously believed. We assessed how well CNNs generalized across three hospital systems for a simulated pneumonia screening task. METHODS AND FINDINGS: A cross-sectional design with multiple model training cohorts was used to evaluate model generalizability to external sites using split-sample validation. A total of 158,323 chest radiographs were drawn from three institutions: National Institutes of Health Clinical Center (NIH; 112,120 from 30,805 patients), Mount Sinai Hospital (MSH; 42,396 from 12,904 patients), and Indiana University Network for Patient Care (IU; 3,807 from 3,683 patients). These patient populations had an age mean (SD) of 46.9 years (16.6), 63.2 years (16.5), and 49.6 years (17), with a female percentage of 43.5%, 44.8%, and 57.3%, respectively. We assessed individual models using the area under the receiver operating characteristic curve (AUC) for radiographic findings consistent with pneumonia and compared performance on different test sets with DeLong's test. The prevalence of pneumonia was high enough at MSH (34.2%) relative to NIH and IU (1.2% and 1.0%) that merely sorting by hospital system achieved an AUC of 0.861 (95% CI 0.855-0.866) on the joint MSH-NIH dataset. Models trained on data from either NIH or MSH had equivalent performance on IU (P values 0.580 and 0.273, respectively) and inferior performance on data from each other relative to an internal test set (i.e., new data from within the hospital system used for training data; P values both
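The site-confounding effect reported in this abstract can be reproduced analytically: if a "classifier" does nothing but report which hospital an image came from, prevalence differences alone yield a high AUC. The sketch below is illustrative, not from the paper; the per-site case counts are approximations inferred from the reported cohort sizes and prevalences (MSH 34.2% of 42,396; NIH 1.2% of 112,120), and `site_only_auc` is an invented helper.

```python
# Sketch: why a site-confounded model can score a high AUC without
# detecting any pathology. Counts below are approximations inferred
# from the abstract's reported cohort sizes and pneumonia prevalences.

def site_only_auc(pos_a, neg_a, pos_b, neg_b):
    """AUC when the 'score' is simply site membership (1 for every
    image from site A, 0 for site B). Ties contribute 0.5, per the
    Mann-Whitney formulation of AUC."""
    num = pos_a * neg_b + 0.5 * (pos_a * neg_a + pos_b * neg_b)
    den = (pos_a + pos_b) * (neg_a + neg_b)
    return num / den

# MSH: 42,396 radiographs, ~34.2% pneumonia; NIH: 112,120, ~1.2%
msh_pos, msh_neg = 14_499, 27_897
nih_pos, nih_neg = 1_345, 110_775

auc = site_only_auc(msh_pos, msh_neg, nih_pos, nih_neg)
print(f"AUC from hospital identity alone: {auc:.3f}")  # ~0.857
```

This lands close to the paper's reported 0.861; the small gap reflects the rounded prevalences used here. The point stands: external validation is needed because a model can exploit site signatures rather than disease.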
Article
Current generation electronic health records suffer a number of problems that make them inefficient and associated with poor clinical satisfaction. Digital scribes, or intelligent documentation support systems, take advantage of advances in speech recognition, natural language processing, and artificial intelligence to automate the clinical documentation task currently conducted by humans. Whilst in their infancy, digital scribes are likely to evolve through three broad stages. Human-led systems task clinicians with creating documentation, but provide tools to make the task simpler and more effective, for example with dictation support, semantic checking, and templates. Mixed-initiative systems are delegated part of the documentation task, converting the conversations in a clinical encounter into summaries suitable for the electronic record. Computer-led systems are delegated full control of documentation and only request human interaction when exceptions are encountered. Intelligent clinical environments permit such augmented clinical encounters to occur in a fully digitised space where the environment becomes the computer. Data from clinical instruments can be automatically transmitted, interpreted using AI, and entered directly into the record. Digital scribes raise many issues for clinical practice, including new patient safety risks. Automation bias may see clinicians automatically accept scribe documents without checking. The electronic record also shifts from a human-created summary of events to potentially a full audio, video, and sensor record of the clinical encounter. Promisingly, digital scribes offer a gateway into the clinical workflow for more advanced support for diagnostic, prognostic, and therapeutic tasks.
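The "mixed-initiative" stage described above can be caricatured in a few lines: software drafts structured note fields from an encounter transcript, and the clinician reviews rather than types. This is a toy sketch under invented assumptions, not any real digital scribe; the section names, regex patterns, and `draft_note` function are all hypothetical, and a real system would use speech recognition and NLP rather than keyword matching.

```python
import re

# Toy mixed-initiative scribe: draft crude note sections from a
# (speaker, utterance) transcript; the clinician signs off on the draft.
SECTION_PATTERNS = {
    "symptoms":    re.compile(r"\b(pain|cough|fever|headache)\b", re.I),
    "medications": re.compile(r"\b(ibuprofen|amoxicillin|insulin)\b", re.I),
}

def draft_note(transcript):
    """Group utterances into note sections by simple keyword matching."""
    note = {section: [] for section in SECTION_PATTERNS}
    for speaker, utterance in transcript:
        for section, pattern in SECTION_PATTERNS.items():
            if pattern.search(utterance):
                note[section].append(f"{speaker}: {utterance}")
    return note

encounter = [
    ("patient", "I've had a cough and a mild fever since Tuesday."),
    ("doctor",  "Any headache? Are you taking anything for it?"),
    ("patient", "Some ibuprofen, twice a day."),
]
draft = draft_note(encounter)  # clinician reviews, edits, and signs off
```

Even this caricature shows where the abstract's safety concerns enter: if clinicians sign off without checking, any misclassified utterance flows straight into the record.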
Article
Big data, we have all heard, promise to transform health care. But in the “hype cycle” of emerging technologies, machine learning now rides atop the “peak of inflated expectations,” and we need to better appreciate the technology’s capabilities and limitations.
Article
Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed.
Article
Introduction: While potentially reducing decision errors, decision support systems can introduce new types of errors. Automation bias (AB) happens when users become overreliant on decision support, which reduces vigilance in information seeking and processing. Most research originates from the human factors literature, where the prevailing view is that AB occurs only in multitasking environments. Objectives: This review seeks to compare the human factors and health care literature, focusing on the apparent association of AB with multitasking and task complexity. Data sources: EMBASE, Medline, Compendex, Inspec, IEEE Xplore, Scopus, Web of Science, PsycINFO, and Business Source Premier from 1983 to 2015. Study selection: Evaluation studies where task execution was assisted by automation and resulted in errors were included. Participants needed to be able to verify automation correctness and perform the task manually. Methods: Tasks were identified and grouped. Task and automation type and presence of multitasking were noted. Each task was rated for its verification complexity. Results: Of 890 papers identified, 40 met the inclusion criteria; 6 were in health care. Contrary to the prevailing human factors view, AB was found in single tasks, typically involving diagnosis rather than monitoring, and with high verification complexity. Limitations: The literature is fragmented, with large discrepancies in how AB is reported. Few studies reported the statistical significance of AB compared to a control condition. Conclusion: AB appears to be associated with the degree of cognitive load experienced in decision tasks, and appears not to be uniquely associated with multitasking. Strategies to minimize AB might focus on cognitive load reduction.
Article
This Viewpoint discusses the opportunities and ethical implications of using machine learning technologies, which can rapidly collect and learn from large amounts of personal data, to provide individualized patient care. Must a physician be human? A new computer, “Ellie,” developed at the Institute for Creative Technologies, asks questions as a clinician might, such as “How easy is it for you to get a good night’s sleep?” Ellie then analyzes the patient’s verbal responses, facial expressions, and vocal intonations, possibly detecting signs of posttraumatic stress disorder, depression, or other medical conditions. In a randomized study, 239 probands were told that Ellie was “controlled by a human” or “a computer program.” Those believing the latter revealed more personal material to Ellie, based on blind ratings and self-reports.1 In China, millions of people turn to Microsoft’s chatbot, “Xiaoice,”2 when they need a “sympathetic ear,” despite knowing that Xiaoice is not human. Xiaoice develops a specially attuned personality and sense of humor by methodically mining the Internet for real text conversations. Xiaoice also learns about users from their reactions over time and becomes sensitive to their emotions, modifying responses accordingly, all without human instruction. Ellie and Xiaoice are the result of machine learning technology.