Attitudes and Acceptance Towards
Artificial Intelligence in Medical Care
Dana HOLZNERa,1, Timo APFELBACHERa,1, Wolfgang RÖDLEa,
Christina SCHÜTTLERa, Hans-Ulrich PROKOSCHa, Rafael MIKOLAJCZYKb,
Sarah NEGASHb, Nadja KARTSCHMITb, Iryna MANUILOVAc, Charlotte BUCHd,
Jana GUNDLACKe and Jan CHRISTOPH a,c,2
a Department of Medical Informatics, Friedrich-Alexander-Universität Erlangen-
Nürnberg, Erlangen, Germany
b Institute for Medical Epidemiology, Biometrics and Informatics, Interdisciplinary
Center for Health Sciences, Faculty of Medicine, Martin-Luther-University Halle-
Wittenberg, Halle, Germany
c Junior Research Group (Bio-)medical Data Science, Faculty of Medicine, Martin-
Luther-University Halle-Wittenberg, Halle, Germany
d Institute for History and Ethics of Medicine, Center for Health Sciences Halle,
Medical Faculty, Martin-Luther-University Halle-Wittenberg, Halle, Germany
e Institute of General Practice and Family Medicine, Center of Health Sciences,
Faculty of Medicine, Martin-Luther-University Halle-Wittenberg, Halle, Germany
Abstract. Background: Artificial intelligence (AI) in medicine is a highly topical issue.
Much remains to be explored regarding the attitudes and perspectives of the different
stakeholders in healthcare. Objective: Our aim was to determine attitudes towards, and
aspects relevant to the acceptance of, AI applications from the perspective of physicians
in university hospitals. Methods: We conducted individual exploratory expert interviews.
Low-fidelity mockups were used to show interviewees potential application areas of AI
in clinical care. Results: In principle, physicians are open to the use of AI in medical care.
However, they are critical of some aspects, such as data protection or the lack of
explainability of the systems. Conclusion: Although some trends in attitudes, e.g., on the
challenges or benefits of using AI, became clear, further research is necessary, as
intended by the subsequent stages of the PEAK project.
Keywords. Artificial Intelligence; Physicians; Attitude; Delivery of healthcare;
Expert Systems; Machine Learning; Neural Networks, Computer; Computers
1. Introduction
The principle of Artificial Intelligence (AI) goes back to the 1950s, when Alan Turing
described this type of technology [1]. Although it is not a new technology, the field of
AI appears to be developing very fast today.
1 These authors contributed equally to this work.
2 Corresponding Author, Jan Christoph, AG (Bio-)Medical Data Science, Faculty of Medicine, Martin-Luther-
University Halle-Wittenberg, Halle, Germany; E-mail: jan.christoph@uk-halle.de.
The PEAK project (prospects for the use and acceptance of artificial intelligence in
healthcare) aims to investigate the attitudes of physicians and patients regarding the use
of AI applications [2]. Current areas of use, possible further areas of use, and any gaps
in knowledge will be explored. Since there are many different definitions of AI, we
adopted a broad one for the PEAK project: "the ability to process external data systematically and learn from it to achieve
specific goals and tasks. AI involves using machines to simulate human thinking
processes and intelligent behaviors, such as thinking, learning, and reasoning, and aims
to solve complex problems that can only be solved by experts. As a branch of computer
science, the field of AI mainly studies the following contents: machine learning,
intelligent robot, natural language understanding, neural network, language recognition,
image recognition, and expert system.” [3]. In this sub-study of the PEAK project,
physicians were interviewed about their attitudes towards AI.
2. Methods
A total of twelve physicians aged 28 to 41 years, with experience in different medical
disciplines (anesthesiology, cardiovascular surgery, otolaryngology, general medicine,
hematology, internal medicine, neurorehabilitation, pediatrics, psychiatry, radiology,
radiotherapy, urology, and visceral surgery), were chosen at random with respect to their
affinity for AI and interviewed. There were eight assistant physicians, two specialists,
and two assistant medical directors, with professional experience ranging from 3.5
months to 12 years. The sample covered the entire spectrum from low to high technology
affinity.
The interviews were conducted between Nov. 15 and Dec. 14, 2021 by one interviewer,
using the "Zoom" web conferencing tool. After oral consent, the conversations were
recorded and transcribed by two team members for later analysis. The expert interviews
were evaluated using qualitative content analysis according to Kuckartz [4]. The software
MAXQDA was used for the transcription of the interviews as well as for all evaluation
steps and analyses [5]. The interview guide was tested in a pretest with four subjects.
Since only minor adjustments were made after pretesting and three of the four subjects
were physicians, these three pretest interviews were included in the overall evaluation.
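As a purely illustrative aside (the coding and counting in this study were carried out in
MAXQDA; the snippet below is not part of the original analysis), the frequency statements
in the Results, such as "7/12", conceptually correspond to counting in how many of the
twelve interviews a given code was assigned at least once. A minimal sketch in Python,
with hypothetical code labels:

```python
from collections import defaultdict

# Hypothetical coded segments as (interview_id, code) pairs.
# In the actual study, coding was done in MAXQDA; this is only an illustration.
coded_segments = [
    (1, "works independently"), (1, "makes own decisions"),
    (2, "works independently"),
    (3, "makes own decisions"), (3, "works independently"),
    (4, "technical system"),
]

TOTAL_INTERVIEWS = 12  # twelve physicians were interviewed in this sub-study

# Count each interview at most once per code, regardless of how often
# the code occurs within that interview.
interviews_per_code = defaultdict(set)
for interview_id, code in coded_segments:
    interviews_per_code[code].add(interview_id)

for code, interviews in sorted(interviews_per_code.items()):
    print(f"{code}: {len(interviews)}/{TOTAL_INTERVIEWS}")
```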
Before conducting the semi-structured interviews, a guide was developed. The process
of guide construction was based on the S²PS² method according to Kruse [6], which
consists of the phases "collect, sort, check, delete, subsume". After a brainstorming
phase, in which existing literature was incorporated [7-9], unsuitable questions were
deleted in discussion with the PEAK team. The remaining questions were subsumed in
a structured manner. To allow the broadest possible focus on potentially addressable
topics, the interview guide consisted primarily of qualitative open-ended questions.
As an introduction, the interviewees were asked about their understanding of the term
"Artificial Intelligence" and their perception of the current use of AI systems in everyday
clinical practice. In a second step, seven low-fidelity mockups, i.e., simplified visual
representations, were used to illustrate possible application scenarios of AI in the clinical
context. The mockups served as inspiration for the subsequent general qualitative part,
which consisted of acceptance-promoting, acceptance-inhibiting, and topic-specific
questions.
For clustering the mockups, four categories were formed, intended to cover the widest
possible range of uses of AI systems in the medical context: 1. AI in diagnostics, 2. AI
in therapy, 3. AI for prognosis/prediction, and 4. AI for process optimization in the
hospital. The mockups were presented in a PowerPoint presentation that was visible to
the participants via screen sharing.
3. Results
3.1. Understanding of the Term Artificial Intelligence and Areas of Application
When asked what they understood by the term Artificial Intelligence, the respondents'
ideas were similar. All twelve interviewees agreed that it is a technical system.
Furthermore, about half of the interviewees defined AI as a system that works
independently (7/12) and can make decisions on its own (6/12).
Half of the respondents reported not having used AI systems in clinical practice to date.
However, all but one person had already heard of AI in medical care. After the
demonstration of the mockups, it turned out that ten participants were in fact using AI
applications according to our definition, mostly in the form of medication alerts or voice
recognition.
3.2. General Attitudes Towards AI
We asked our twelve interview participants what benefits they believe AI can provide in
medical care. For six of the interviewees, the benefit lay in reducing errors in care and
thus increasing (patient) safety. Five of the interviewees cited the relief of medical work
through AI taking over repetitive or simple tasks, as well as the optimization of processes.
One-third of the interviewees saw the benefit of an AI system in improving medical care,
structuring data, or saving time, which, according to two interviewees, can in turn be
used for patient care.
In contrast to this, five of the interviewees saw seamless integration of systems into
existing settings as a major challenge to implementing AI in everyday medical practice,
in terms of both program functionality (3/12) and interoperability (2/12). A third of the
interviewees saw a risk in relying too much on the systems. Another important point
mentioned was the potential for manipulation or misuse of patient data (4/12). In addition,
aspects such as the endangerment of the doctor-patient relationship (2/12), the emergence
of legal challenges in dealing with AI decisions (2/12), the susceptibility of AI systems
to errors arising from the underlying data basis during development (3/12), and the need
for explainability and transparency of the systems (2/12) were also mentioned.
We further asked about weaknesses of AI. In this context, the interviewees mentioned
the lack of clinical experience, patient perception and interaction, and human instinct
(5/12). In addition, a human was considered faster than an AI in emergencies. The
strengths of AI mentioned included the objectivity of the systems (5/12), the matching
(1/12) and good presentation (1/12) of data, and that an AI does not miss anything (2/12),
has more knowledge (2/12), and does not get tired (2/12).
Regarding the topic of responsibility towards the patient, half of the interviewees stated
that the physician who uses AI merely as support for medical decisions still retains
responsibility (6/12). Three added that the physician, who must question and weigh the
AI's decisions, retains the final say (3/12). One subject regarded the responsibility as
increased, because a physician has to factor in an external decision in addition to their
own assessment.
When asked about a discrepancy between their own medical judgement and the AI's
judgement, most interviewees saw it as an impetus to critically question their own
decision and would investigate the exact cause of the discrepant results (9/12). Half of
the respondents would seek the advice of physician colleagues or supervisors in order to
weigh the opinions against human experience in a discussion.
Among the factors influencing trust in AI systems, one of the most frequently mentioned
was the data basis of the systems (7/12). In addition, the transparency of AI systems is of
great importance: it was important to six participants to be informed about the background
of the system, such as the nature of the underlying data, knowledge about the development
of the programs, and how the systems work. Other influences on trust included the testing
of AI systems in clinical trials (3/12).
Regarding physicians' freedom of decision-making, AI was seen merely as a decision
support tool that does not dictate how they should act. More than half (7/12) of the
respondents saw no influence on their scope of action. Two feared the subconscious
influence of relying more and more on the systems, and three interviewees could imagine
AI having a greater influence on decision-making freedom in the future.
4. Discussion
The use of exemplary AI scenarios as an introduction and discussion stimulus for the
interviews proved useful. Based on the lively participation and positive feedback, we
conclude that the subjects were positive about the methodological approach to
conducting the interviews. Although the physicians were asked to evaluate AI systems
from fields in which they were not specialists, new aspects arose that were thought-
provoking and provided valuable input.
The physicians were generally positive about the use of AI in the clinical context. A
similar conclusion was drawn in other studies, such as Maassen et al. 2021 [9]. An
important advantage of the increased use of AI in healthcare is the relief of medical staff
and the associated time savings. This saved time would then be available, for instance,
for better patient care. In the medium to long term, however, it is entirely conceivable
that even this theoretical time saving will fall victim to economic constraints. In contrast,
Maassen et al. found that most physicians do not even expect to have more time for
patients as a result of using AI [9].
The interviews revealed that it is important for physicians that AI is integrated into
everyday work, and thus into already existing systems, seamlessly and without problems.
At the same time, care must be taken to ensure that these tools are used wisely, supporting
physicians without allowing them to rely completely on the AI systems' recommendations.
In the end, as Braun et al. stated, it is a human who must make the decision, not the AI
[10]. Furthermore, according to the physicians, it would be helpful if users of AI could
understand why the system makes a decision. We assume that if the attending physician
understands why a decision was made, they can also communicate it better to the patient.
A limitation of this study is the small group of twelve experts. Nevertheless, it covers
a relatively broad clinical field of eleven medical disciplines. Further research is needed
on a much larger group of physicians as well as on patients and the general population.
This is planned for a later stage of the project, with a total of 800 physicians, 800 patients,
and 1,000 persons from the general population over the next 2.5 years.
5. Conclusion
Our initial findings show that physicians are generally open to AI in their everyday
clinical work. The interviewed physicians see the possibility of being supported by AI
systems in arriving at the right decision. Hence, the participants expect that better
decision-making will increase the quality of patient care. In addition, according to the
physicians' estimation, AI is well suited to assist with some kinds of repetitive tasks, as it
does not get tired and does not need a break. This applies especially to areas of factual
knowledge matching, such as pharmacological interaction checks, where AI has the
advantage of not overlooking anything. Furthermore, according to our interviewees, AI
can also make an important contribution to process and resource optimization, which can
ultimately lead to time savings. Nevertheless, further studies are needed to cover the
current need for more specific information, to clarify specific technical, legal, and ethical
issues, and to address the limitations of the small sample size and narrow age range of
this sub-study. This is exactly where the next stages of the PEAK project will start,
thereby contributing to research on this topic.
Acknowledgement
We thank all interviewees and all further members of the PEAK team namely Thomas
Frese, Jan Schildmann, Daniel Tiller, Susanne Unverzagt and Christoph Weber. PEAK
is funded by the Joint Federal Committee (G-BA) under the FKZ 01VSF20017. The
present work was performed in fulfillment of the requirements for obtaining the degree
"Dr. rer. biol. hum." from the Friedrich-Alexander-Universität Erlangen-Nürnberg.
References
[1] Ramesh AN, Kambhampati C, Monson JRT, Drew PJ. Artificial intelligence in medicine. Ann R Coll
Surg Engl. 2004;86(5):334–338.
[2] PEAK – Perspektiven des Einsatzes und Akzeptanz Künstlicher Intelligenz - G-BA Innovationsfonds
[Internet]. 2022 [updated 2022 Jan 20; cited 2022 Jan 20]. Available from: https://innovationsfonds.g-
ba.de/projekte/versorgungsforschung/peak-perspektiven-deseinsatzes-und-akzeptanz-kuenstlicher-
intelligenz.397.
[3] Liu R, Rong Y, Peng Z. A review of medical artificial intelligence. Global Health Journal. 2020; 4 (2):
42–45.
[4] Kuckartz U. Einführung in die computergestützte Analyse qualitativer Daten. 3rd ed. Wiesbaden: VS
Verl. für Sozialwiss; 2010. 92 ff. p. (Lehrbuch). ger.
[5] Rädiker S, Kuckartz U. Analyse qualitativer Daten mit MAXQDA. Wiesbaden: Springer Fachmedien
Wiesbaden; 2019.
[6] Kruse J. Qualitative Interviewforschung: Ein integrativer Ansatz. Weinheim, Basel: Beltz Juventa; 2014.
227 p. (Grundlagentexte Methoden). ger.
[7] Oh S, Kim JH, Choi S-W, Lee HJ, Hong J, Kwon SH. Physician Confidence in Artificial Intelligence:
An Online Mobile Survey. J Med Internet Res. 2019;21(3):e12422.
[8] Laï M-C, Brian M, Mamzer M-F. Perceptions of artificial intelligence in healthcare: findings from a
qualitative survey study among actors in France. J Transl Med. 2020;18(1):14.
[9] Maassen O, Fritsch S, Palm J, Deffge S, Kunze J, Marx G, Riedel M, Schuppert A, Bickenbach J. Future
Medical Artificial Intelligence Application Requirements and Expectations of Physicians in German
University Hospitals: Web-Based Survey. J Med Internet Res. 2021; 23 (3): e26646.
[10] Braun M, Hummel P, Beck S, Dabrock P. Primer on an ethics of AI-based decision support systems in
the clinic. J Med Ethics. 2020. doi:10.1136/medethics-2019-105860 Cited in: PubMed; PMID 32245804.