Attitudes and Acceptance Towards
Artificial Intelligence in Medical Care
Dana HOLZNERa,1, Timo APFELBACHERa,1, Wolfgang RÖDLEa,
Christina SCHÜTTLERa, Hans-Ulrich PROKOSCHa, Rafael MIKOLAJCZYKb,
Sarah NEGASHb, Nadja KARTSCHMITb, Iryna MANUILOVAc, Charlotte BUCHd,
Jana GUNDLACKe and Jan CHRISTOPH a,c,2
a Department of Medical Informatics, Friedrich-Alexander-Universität Erlangen-
Nürnberg, Erlangen, Germany
b Institute for Medical Epidemiology, Biometrics and Informatics, Interdisciplinary
Center for Health Sciences, Faculty of Medicine, Martin-Luther-University Halle-
Wittenberg, Halle, Germany
c Junior Research Group (Bio-)medical Data Science, Faculty of Medicine, Martin-
Luther-University Halle-Wittenberg, Halle, Germany
d Institute for History and Ethics of Medicine, Center for Health Sciences Halle,
Medical Faculty, Martin-Luther-University Halle-Wittenberg, Halle, Germany
e Institute of General Practice and Family Medicine, Center of Health Sciences,
Faculty of Medicine, Martin-Luther-University Halle-Wittenberg, Halle, Germany
Abstract. Background: Artificial intelligence (AI) in medicine is a highly topical issue,
yet much remains to be explored regarding the attitudes and perspectives of the
different stakeholders in healthcare. Objective: Our aim was to determine attitudes
towards, and aspects of the acceptance of, AI applications from the perspective of
physicians in university hospitals. Methods: We conducted individual exploratory
expert interviews. Low-fidelity mockups were used to show interviewees potential
application areas of AI in clinical care. Results: In principle, physicians are open to
the use of AI in medical care. However, they are critical of some aspects, such as
data protection or the lack of explainability of the systems. Conclusion: Although
some trends in attitudes, e.g., on the challenges or benefits of using AI, became clear,
further research is necessary, as intended by the subsequent PEAK project.
Keywords. Artificial Intelligence; Physicians; Attitude; Delivery of healthcare;
Expert Systems; Machine Learning; Neural Networks, Computer; Computers
1. Introduction
The principle of Artificial Intelligence (AI) goes back to the 1950s, when Alan Turing
described this type of technology [1]. Although it is not a new technology, the field of
AI is developing very fast nowadays.
1 These authors contributed equally to this work.
2 Corresponding Author, Jan Christoph, AG (Bio-)Medical Data Science, Faculty of Medicine, Martin-Luther-
University Halle-Wittenberg, Halle, Germany; E-mail: jan.christoph@uk-halle.de.
Challenges of Trustable AI and Added-Value on Health
B. Séroussi et al. (Eds.)
© 2022 European Federation for Medical Informatics (EFMI) and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).
doi:10.3233/SHTI220398
The PEAK project (prospects for the use and acceptance towards artificial
intelligence in healthcare) aims to investigate the attitudes of physicians and patients
regarding the use of applications of AI [2]. Current areas of use, possible further areas of
use and any gaps in knowledge will be explored. Since there are many different
definitions of AI, we settled on one for the PEAK project which encompasses a broad
scope: “the ability to process external data systematically and learn from it to achieve
specific goals and tasks. AI involves using machines to simulate human thinking
processes and intelligent behaviors, such as thinking, learning, and reasoning, and aims
to solve complex problems that can only be solved by experts. As a branch of computer
science, the field of AI mainly studies the following contents: machine learning,
intelligent robot, natural language understanding, neural network, language recognition,
image recognition, and expert system.” [3]. In this sub-study of the PEAK project,
physicians were interviewed about their attitudes towards AI.
2. Methods
The sample comprised a total of twelve physicians aged 28 to 41 years with experience
in different medical disciplines (anesthesiology, cardiovascular surgery, otolaryngology,
general medicine, hematology, internal medicine, neurorehabilitation, pediatrics,
psychiatry, radiology, radiotherapy, urology, and visceral surgery), who were chosen
randomly with respect to their affinity for AI and interviewed. There were eight
assistant physicians, two specialists, and two assistant medical directors, with
professional experience ranging from 3.5 months to 12 years. The survey covered the
entire spectrum from low to high affinity for technology.
The interviews were conducted between Nov. 15 and Dec. 14, 2021 by one
interviewer, using the "Zoom" web conferencing tool. After oral consent, the
conversations were recorded and transcribed by two team members for later analysis.
The expert interviews were evaluated using qualitative content analysis according to
Kuckartz [4]. The software MAXQDA was used for the transcription of the interviews
as well as for all evaluation steps and analyses [5]. The interview guide was tested in a
pretest with four subjects. Since only minor adjustments were made after pretesting
and three of the four subjects were physicians, those three pretest interviews were
included in the overall evaluation.
Before conducting the semi-structured interviews, a guide was developed. The
process of guide construction was based on the S²PS² method according to Kruse [6],
which consists of the phases "collect, sort, check, delete, subsume". After a brainstorming
phase, in which existing literature was incorporated [7-9], unsuitable questions were
deleted in discussion with the PEAK team. The remaining questions were subsumed in
a structured manner. To allow the broadest possible focus on potentially addressable
topics, the interview guide consisted primarily of qualitative open-ended questions.
As an introduction, the interviewees were asked about their understanding of the term
"Artificial Intelligence" and their perception of the current use of AI systems in
everyday clinical practice. In a second step, seven low-fidelity mockups, i.e.,
simplified visual representations, were used to illustrate possible application scenarios of
AI in the clinical context. The mockups served as inspiration for the subsequent general
qualitative part, which consisted of acceptance-promoting, acceptance-inhibiting, and
topic-specific questions.
To cluster the mockups, four categories were formed, covering the widest
possible range of uses of AI systems in the medical context: 1. AI in diagnostics,
2. AI in therapy, 3. AI for prognosis/prediction, and 4. AI for process optimization in the
hospital. The mockups were illustrated in a PowerPoint presentation that was visible to
the participants via screen sharing.
3. Results
3.1. Understanding of the Term Artificial Intelligence and Areas of Application
Regarding the question of what the respondents understand by the term Artificial
Intelligence, the ideas were similar. All twelve interviewees agreed that it is a technical
system. Furthermore, about half of the interviewees defined AI as a system that works
independently (7/12) and can make decisions on its own (6/12).
Half of the respondents reported not having used AI systems in clinical practice to date.
However, all but one person had already heard of AI in medical care. After the
demonstration of the mockups, it emerged that ten people were in fact using AI
applications according to our definition, mostly in the form of medication alerts or
voice recognition.
3.2. General Attitudes Towards AI
We asked our twelve interview participants what benefits they believe AI can provide in
medical care. For six of the interviewees, the benefit lay in reducing errors in care and
thus increasing (patient) safety. Five of the interviewees named the relief of medical
work through the takeover of repetitive or simple tasks, as well as the optimization of
processes. One-third of the interviewees saw the benefit of an AI system in improving
medical care, structuring data, or saving time, which, according to two interviewees,
can in turn be used for patient care.
In contrast, five of the interviewees saw seamless integration of the systems into
existing settings as a major challenge to implementing AI in everyday medical practice,
in terms of both program functionality (3/12) and interoperability (2/12). A third of the
interviewees saw a risk in relying too much on the systems. Another important point
mentioned was the potential for manipulation or misuse of patient data (4/12). In addition,
aspects such as the endangerment of the doctor-patient relationship (2/12), the emergence
of legal challenges in dealing with AI decisions (2/12), the susceptibility of AI systems
to errors arising during development from the underlying data basis (3/12), and the need
for explainability and transparency of the systems (2/12) were also mentioned.
Further, we asked about the weaknesses of AI. In this context, the interviewees
mentioned the lack of clinical experience, of patient perception and interaction, and of
human instinct (5/12). In addition, a human is faster than an AI in emergencies. The
strengths of AI mentioned included the objectivity of the systems (5/12), the matching
(1/12) and good presentation (1/12) of data, and that an AI does not miss anything (2/12),
has more knowledge (2/12), and does not get tired (2/12).
Regarding the topic of responsibility towards the patient, the physician who uses AI
merely as a support for medical decisions still retains responsibility (6/12). The physician,
who must question and weigh the AI's decisions, retains the final say (3/12). One subject
considered the responsibility to be increased, because a physician has to take an external
decision into account in addition to their own assessment.
Regarding a discrepancy between their own medical judgement and the AI's
judgement, most interviewees saw it as an impetus to critically question their own
decision and would get to the bottom of the exact cause of the discrepant results (9/12).
Half of the respondents would seek the advice of other physician colleagues or
supervisors to compare opinions with human experience in a discussion.
Moving on to the factors that influence trust in AI systems, one of the most
frequently mentioned points was the data basis of the systems (7/12). In addition, the
transparency of AI systems is of great importance: it was important to six participants
that they are informed about the background of the system, such as the nature of the
underlying data, knowledge about the development of the programs, and how the
systems work. Other influences on trust included the testing of AI systems in clinical
trials (3/12).
Regarding the freedom of decision-making for physicians, AI is seen merely as a
decision support tool that does not dictate how they should act. More than half (7/12) of
the respondents saw no influence on their scope of action. Two feared the subconscious
influence of relying more and more on the systems, and three interviewees could imagine
AI having a greater influence on decision-making freedom in the future.
4. Discussion
The use of exemplary AI scenarios as an introduction and discussion stimulus for the
interviews proved useful. Based on the lively participation and positive feedback, we
conclude that the subjects viewed the methodological approach to conducting the
interviews positively. Although the physicians were asked to evaluate AI systems from
fields they were not specialists in, new aspects arose that were thought-provoking and
provided valuable input.
The physicians were generally positive about the use of AI in the clinical context.
This conclusion was also drawn in other studies, such as Maassen et al. 2021 [9]. An
important advantage of the increased use of AI in healthcare is the relief of medical staff
and the associated time savings. This saved time would then be available, for instance,
for better patient care. In the medium to long term, however, it is entirely conceivable
that even this theoretical time saving will fall victim to economic constraints. In contrast,
Maassen et al. found that most physicians do not even expect to have more time for
patients as a result of using AI [9].
The interviews revealed that it is important to physicians that AI is integrated into
everyday practice, and thus into already existing systems, seamlessly and without
problems. At the same time, care must be taken to ensure that these tools are used
wisely, in the sense of supporting physicians without giving them the possibility to rely
completely on the AI systems' recommendations. In the end, as Braun et al. stated, it is a
human who must make the decision, not the AI [10]. Furthermore, according to the
physicians, it would be helpful if users of AI could understand why the system makes a
decision. We assume that if the attending physician understands why a decision was
made, they can also communicate it better to the patient.
A limitation of this study is the small group of twelve experts. Nevertheless, it covers
a relatively broad clinical field of eleven medical disciplines. Further research is needed
on a much larger group of physicians as well as on patients and the general population.
This is planned for a later stage of the study, with a total of 800 physicians, 800 patients,
and 1,000 persons from the general population over the next 2.5 years.
5. Conclusion
Our initial findings show that physicians are generally open to AI in their everyday
clinical work. The interviewed physicians see the possibility of getting support from AI
systems in reaching the right decision. Hence, the participants expect that better
decision-making will increase the quality of patient care. In addition, according to the
physicians' estimation, AI is well suited to assist with some kinds of repetitive tasks, as
it does not get tired and does not need a break. This applies especially to areas of factual
knowledge matching, such as pharmacological interaction checks, where AI has the
advantage of not overlooking anything. Furthermore, according to our interviewees, AI
can also make an important contribution to process and resource optimization, which can
ultimately lead to time savings. Nevertheless, further studies are needed to cover the
current need for more specific information, to clarify specific technical, legal, and ethical
issues, and to address the small sample size and narrow age range of this substudy.
This is exactly where the PEAK project will start and thereby contribute to research on
this topic.
Acknowledgement
We thank all interviewees and all further members of the PEAK team namely Thomas
Frese, Jan Schildmann, Daniel Tiller, Susanne Unverzagt and Christoph Weber. PEAK
is funded by the Joint Federal Committee (G-BA) under the FKZ 01VSF20017. The
present work was performed in fulfillment of the requirements for obtaining the degree
"Dr. rer. biol. hum." from the Friedrich-Alexander-Universität Erlangen-Nürnberg.
References
[1] Ramesh AN, Kambhampati C, Monson JRT, Drew PJ. Artificial intelligence in medicine. Ann R Coll
Surg Engl. 2004;86(5):334–338.
[2] PEAK – Perspektiven des Einsatzes und Akzeptanz Künstlicher Intelligenz - G-BA Innovationsfonds
[Internet]. 2022 [updated 2022 Jan 20; cited 2022 Jan 20]. Available from:
https://innovationsfonds.g-ba.de/projekte/versorgungsforschung/peak-perspektiven-deseinsatzes-und-akzeptanz-kuenstlicher-intelligenz.397.
[3] Liu R, Rong Y, Peng Z. A review of medical artificial intelligence. Global Health Journal. 2020; 4 (2):
42–45.
[4] Kuckartz U. Einführung in die computergestützte Analyse qualitativer Daten. 3rd ed. Wiesbaden: VS
Verl. für Sozialwiss; 2010. 92 ff. p. (Lehrbuch). ger.
[5] Rädiker S, Kuckartz U. Analyse qualitativer Daten mit MAXQDA. Wiesbaden: Springer Fachmedien
Wiesbaden; 2019.
[6] Kruse J. Qualitative Interviewforschung: Ein integrativer Ansatz. Weinheim, Basel: Beltz Juventa; 2014.
227 p. (Grundlagentexte Methoden). ger.
[7] Oh S, Kim JH, Choi S-W, Lee HJ, Hong J, Kwon SH. Physician Confidence in Artificial Intelligence:
An Online Mobile Survey. J Med Internet Res. 2019;21(3):e12422.
[8] Laï M-C, Brian M, Mamzer M-F. Perceptions of artificial intelligence in healthcare: findings from a
qualitative survey study among actors in France. J Transl Med. 2020;18(1):14.
[9] Maassen O, Fritsch S, Palm J, Deffge S, Kunze J, Marx G, Riedel M, Schuppert A, Bickenbach J. Future
Medical Artificial Intelligence Application Requirements and Expectations of Physicians in German
University Hospitals: Web-Based Survey. J Med Internet Res. 2021; 23 (3): e26646.
[10] Braun M, Hummel P, Beck S, Dabrock P. Primer on an ethics of AI-based decision support systems in
the clinic. J Med Ethics. 2020. doi:10.1136/medethics-2019-105860 Cited in: PubMed; PMID 32245804.
D. Holzner et al. / Attitudes and Acceptance Towards Artificial Intelligence in Medical Care72
... As part of the project, 12 doctors were asked what benefits they thought AI could have for medical care. The results were published by Holzner et al. [29] In the study, the benefits of AI were cited as reducing errors in care and thus increasing (patient) safety, reducing the workload of physicians by performing repetitive or simple tasks, optimizing processes, structuring data, or saving time. The respondents saw the strengths of AI primarily in the objectivity of the systems. ...
... One respondent felt that the responsibility was increased, as medical professionals must take into account an external decision in addition to their assessment. When asked about the discrepancy between their medical judgment and that of the AI, two-thirds of health professionals saw this as an incentive to critically question their own decision and, if necessary, to seek a second opinion from a colleague [29]. Among the challenges of implementing AI in everyday medical practice, respondents cited the seamless integration of systems into existing settings. ...
... However, AICDSS should fit as well as possible into users' existing workflows, e.g., through integration with the electronic health record (EHR), to minimize user burden and increase access to recommendations [30]. The potential for manipulation or misuse of patient data has also been criticized [29]. A recent survey on AI in the United States shows that privacy is seen as the most important issue when it comes to this technology [31]. ...
Chapter
Full-text available
In the healthcare sector in particular, the shortage of skilled workers is a major problem that will become even more acute in the future as a result of demographic change. One way to counteract this trend is to use intelligent systems to reduce the workload of healthcare professionals. AI-based clinical decision support systems (AICDSS) have already proven their worth in this area, while simultaneously improving medical care. More recently, AICDSS have also been characterized by their ability to leverage the increasing availability of clinical data to assist healthcare professionals and patients in a variety of situations based on structured and unstructured data. However, the need to access large amounts of data while adhering to strict privacy regulations and the dependence on user adoption have highlighted the need to further adapt the implementation of AICDSS to integrate with existing healthcare routines. A subproject of the ViKI pro research project investigates how AICDSS can be successfully integrated into professional care planning practice using a user-centered design thinking approach. This paper presents the design of the ViKI pro AICDSS and the challenges related to privacy, user acceptance, and the data base. It also describes the development of an AI-based cloud technology for data processing and exchange using federated learning, and the development of an explicable AI algorithm for recommending care interventions. The core of the AICDSS is a human-in-the-loop system for data validation, in which the output of the AI model is continuously verified by skilled personnel to ensure continuous improvement in accuracy and transparent interaction between AI and humans.
... A robust planning is particularly important in a healthcare environment where the stakes are high, and efcient management is crucial. Te positive attitudes expressed by the respondents are consistent with fndings from previous studies [38,39]. Maassen and colleagues reported that 70% of their participants had either a positive or very positive attitude toward AI in medicine, and Holzner and colleagues suggested that ML systems could bring substantial improvements in safety, quality and efciency. ...
Article
Full-text available
Introduction: The need for innovative technology in healthcare is apparent due to challenges posed by the lack of resources. This study investigates the adoption of AI-based systems, specifically within the postanesthesia care unit (PACU). The aim of the study was to explore staff needs and expectations concerning the development and implementation of a digital patient flow system based on ML predictions. Methods: A qualitative approach was employed, gathering insights through interviews with 20 healthcare professionals, including nurse managers and staff involved in planning patient flows and patient care. The interview data were analyzed using reflexive thematic analysis, following steps of data familiarization, coding, and theme generation. The resulting themes were then assessed for their alignment with the modified technology acceptance model (TAM2). Results: The respondents discussed the benefits and drawbacks of the proposed ML system versus current manual planning. They emphasized the need for controlling PACU throughput and expected the ML system to improve the length of stay predictions and provide a comprehensive patient flow overview for staff. Prioritizing the patient was deemed important, with the ML system potentially allowing for more patient interaction time. However, concerns were raised regarding potential breaches of patient confidentiality in the new ML system. The respondents suggested new communication strategies might emerge with effective digital information use, possibly freeing up time for more human interaction. While most respondents were optimistic about adapting to the new technology, they recognized not all colleagues might be as convinced. Conclusion: This study showed that respondents were largely favorable toward implementing the proposed ML system, highlighting the critical role of nurse managers in patient workflow and safety, and noting that digitization could offer substantial assistance. 
Furthermore, the findings underscore the importance of strong leadership and effective communication as key factors for the successful implementation of such systems.
... Studies show that health care professionals are generally aware AI will have organizational and professional impacts that they are not yet prepared for which may threaten to undermine the benefits of AI before its implementation [24,26,[74][75][76][77]. Despite this, AHPs in this study remain optimistic about the potential benefits of AI such as improving health care, clinical decision-making, and delivery of patient care, consistent with other studies [23,78,79]. These views are not dissimilar to those held in the previous digital health revolutions in which the rapid increase of the use of the internet and computers in health care delivery prompted an examination of the expectations, skills, and resources of users [80]. ...
Article
Full-text available
Background Artificial intelligence (AI) has the potential to address growing logistical and economic pressures on the health care system by reducing risk, increasing productivity, and improving patient safety; however, implementing digital health technologies can be disruptive. Workforce perception is a powerful indicator of technology use and acceptance, however, there is little research available on the perceptions of allied health professionals (AHPs) toward AI in health care. Objective This study aimed to explore AHP perceptions of AI and the opportunities and challenges for its use in health care delivery. Methods A cross-sectional survey was conducted at a health service in, Queensland, Australia, using the Shinners Artificial Intelligence Perception tool. Results A total of 231 (22.1%) participants from 11 AHPs responded to the survey. Participants were mostly younger than 40 years (157/231, 67.9%), female (189/231, 81.8%), working in a clinical role (196/231, 84.8%) with a median of 10 years’ experience in their profession. Most participants had not used AI (185/231, 80.1%), had little to no knowledge about AI (201/231, 87%), and reported workforce knowledge and skill as the greatest challenges to incorporating AI in health care (178/231, 77.1%). Age (P=.01), profession (P=.009), and AI knowledge (P=.02) were strong predictors of the perceived professional impact of AI. AHPs generally felt unprepared for the implementation of AI in health care, with concerns about a lack of workforce knowledge on AI and losing valued tasks to AI. Prior use of AI (P=.02) and years of experience as a health care professional (P=.02) were significant predictors of perceived preparedness for AI. Most participants had not received education on AI (190/231, 82.3%) and desired training (170/231, 73.6%) and believed AI would improve health care. 
Ideas and opportunities suggested for the use of AI within the allied health setting were predominantly nonclinical, administrative, and to support patient assessment tasks, with a view to improving efficiencies and increasing clinical time for direct patient care. Conclusions Education and experience with AI are needed in health care to support its implementation across allied health, the second largest workforce in health. Industry and academic partnerships with clinicians should not be limited to AHPs with high AI literacy as clinicians across all knowledge levels can identify many opportunities for AI in health care.
... Doctors recognize the need for the integration of AI tools with the existing system of medicine (86.5%). Some international studies report that doctors are open to the use of AI in the medical industry, [10] while some studies identified reserved attitudes of doctors about the prospect of AI. [11] The limitations of our study include less generalizability as the sample size is small, and also the possibility cannot be denied that respondents are more likely to hold stronger views on this issue than non-respondents. ...
Article
A BSTRACT Background Artificial intelligence (AI) has led to the development of various opportunities during the COVID-19 pandemic. An abundant number of applications have surfaced responding to the pandemic, while some other applications were futile. Objectives The present study aimed to assess the perception and opportunities of AI used during the COVID-19 pandemic and to explore the perception of medical data analysts about the inclusion of AI in medical education. Material and Methods This study adopted a mixed-method research design conducted among medical doctors for the quantitative part while including medical data analysts for the qualitative interview. Results The study reveals that nearly 64.8% of professionals were working in high COVID-19 patient-load settings and had significantly more acceptance of AI tools compared to others ( P < 0.05). The learning barrier like engaging in new skills and working under a non-medical hierarchy led to dissatisfaction among medical data analysts. There was widespread recognition of their work after the COVID-19 pandemic. Conclusion Notwithstanding that the majority of professionals are aware that public health emergency creates a significant strain on doctors, the majority still have to work in extremely high case load setting to demand solutions. AI applications are still not being integrated into medicine as fast as technology has been advancing. Sensitization workshops can be conducted among specialists to develop interest which will encourage them to identify problem statements in their fields, and along with AI experts, they can create AI-enabled algorithms to address the problems. A lack of educational opportunities about AI in formal medical curriculum was identified.
Article
Full-text available
Objectives This study aimed to systematically map the evidence and identify patterns of barriers and facilitators to clinician artificial intelligence (AI) acceptance and use across types of AI healthcare application and income levels of clinicians' geographic practice settings.
Design This scoping review was conducted in accordance with the Joanna Briggs Institute methodology for scoping reviews and reported using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews guideline.
Data sources PubMed and Embase were searched from 2010 to 21 August 2023.
Eligibility criteria The review included both empirical and conceptual studies published in peer-reviewed journals that focused on barriers to and facilitators of clinician acceptance and use of AI in healthcare facilities. Studies involving either hypothetical or real-life applications of AI in healthcare settings were included. Studies not written in English or focused on digital devices or robots not supported by an AI system were excluded.
Data extraction and synthesis Three independent investigators conducted data extraction using a pre-tested tool designed on the basis of the eligibility criteria and the constructs of the Unified Theory of Acceptance and Use of Technology (UTAUT) framework to systematically summarise the data. Two independent investigators then applied the framework analysis method to identify additional barriers to and facilitators of clinician acceptance and use in healthcare settings beyond those captured by UTAUT.
Results The search identified 328 unique articles, of which 46 met the eligibility criteria: 44 empirical and 2 conceptual studies. Of these, 32 studies (69.6%) were conducted in high-income countries and 9 (19.6%) in low-income and middle-income countries (LMICs). By healthcare setting, 21 studies examined primary care, 26 secondary care and 21 tertiary care. Overall, drivers of clinician AI acceptance and use were ambivalent, functioning as either barriers or facilitators depending on context. Performance expectancy and facilitating conditions emerged as the most frequent and consistent drivers across healthcare contexts. Notably, there were significant gaps in evidence examining the moderating effect of clinician demographics on the relationship between drivers and AI acceptance and use. Key themes not encompassed by the UTAUT framework included physician involvement as a facilitator, and clinician hesitancy and legal and ethical considerations as barriers. Other factors, such as conclusiveness, relational dynamics and technical features, were identified as ambivalent drivers. While clinicians' perceptions and experiences of these drivers varied across primary, secondary and tertiary care, there was a notable lack of evidence exclusively examining drivers of clinician AI acceptance in LMIC clinical practice.
Conclusions This scoping review highlights key gaps in understanding clinician acceptance and use of AI in healthcare, including the limited examination of individual moderators and of context-specific factors in LMICs. While universal determinants such as performance expectancy and facilitating conditions were consistently identified across settings, factors not covered by the UTAUT framework, such as clinician hesitancy, relational dynamics, legal and ethical considerations, technical features and clinician involvement, emerged with varying impact depending on the healthcare context. These findings underscore the need to refine frameworks such as UTAUT to incorporate context-specific drivers of AI acceptance and use. Future research should address these gaps by investigating both universal and context-specific barriers and expanding existing frameworks to better reflect the complexities of AI adoption in diverse healthcare settings.
Article
Background The rapid progress in the development of artificial intelligence (AI) is having a substantial impact on health care (HC) delivery and the physician-patient interaction.
Objective This scoping review aims to offer a thorough analysis of the current status of integrating AI into medical practice, as well as the apprehensions expressed by HC professionals (HCPs) about its application.
Methods This scoping review followed the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) guidelines to examine articles that investigated the apprehensions of HCPs about medical AI. After applying the inclusion and exclusion criteria, 32 of an initial 217 studies (14.7%) were selected for the final analysis. We aimed to develop an attitude scale that accurately captured the unfavorable emotions of HCPs toward medical AI, by selecting attitudes and ranking them by degree of aversion, from mild skepticism to intense fear. The final scale was: skepticism, reluctance, anxiety, resistance, and fear.
Results In total, 3 themes were identified through thematic analysis. National surveys among HCPs aimed to comprehensively analyze their current emotions, worries, and attitudes regarding the integration of AI in the medical industry. Research on technostress focused primarily on the psychological dimensions of adopting AI, examining the emotional reactions, fears, and difficulties experienced by HCPs when they encountered AI-powered technology. The high-level perspective category included studies that took a broad and comprehensive approach to evaluating overarching themes, trends, and implications of integrating AI technology in HC. We identified 15 sources of attitudes, which we classified into 2 distinct groups: intrinsic and extrinsic. The intrinsic group concerned HCPs' inherent professional identity, encompassing their tasks and capacities; the extrinsic group concerned their patients and the influence of AI on patient care. We then examined the shared themes, made suggestions to potentially address the problems identified, and analyzed the results in relation to the attitude scale, assessing the degree to which each attitude was represented.
Conclusions The solution to addressing resistance toward medical AI appears to center on comprehensive education, the implementation of suitable legislation, and the delineation of roles. Addressing these issues may foster acceptance and optimize AI integration, enhancing HC delivery while maintaining ethical standards. Given the current prominence of and extensive research on regulation, we suggest that further research be dedicated to education.
Article
Full-text available
In recent years artificial intelligence (AI), as a new segment of computer science, has also become increasingly important in medicine. The aim of this project was to investigate whether the current version of ChatGPT (ChatGPT 4.0) is able to answer open questions that could be asked in a German board examination in ophthalmology. After excluding image-based questions, 10 questions from 15 different chapters/topics were selected from the textbook 1000 questions in ophthalmology (1000 Fragen Augenheilkunde, 2nd edition, 2014). ChatGPT was instructed by means of a prompt to assume the role of a board-certified ophthalmologist and to concentrate on the essentials when answering. A human expert with considerable expertise in the respective topic evaluated the answers regarding their correctness, relevance and internal coherence. Additionally, the overall performance was rated using school grades, and it was assessed whether the answers would have been sufficient to pass the ophthalmology board examination. ChatGPT would have passed the board examination in 12 out of 15 topics. The overall performance, however, was limited, with only 53.3% completely correct answers. While the correctness of the results in the different topics was highly variable (uveitis and lens/cataract 100%; optics and refraction 20%), the answers always had a high thematic fit (70%) and internal coherence (71%). That ChatGPT 4.0 would have passed the specialist examination in 12 out of 15 topics is remarkable considering that this AI was not specifically trained for medical questions; however, there is considerable performance variability between the topics, with some serious shortcomings that currently rule out its safe use in clinical practice.
Article
Full-text available
Background The increasing development of artificial intelligence (AI) systems in medicine, driven by researchers and entrepreneurs, goes along with enormous expectations for the advancement of medical care. AI might change the clinical practice of physicians from almost all medical disciplines and in most areas of health care. While expectations for AI in medicine are high, practical implementations of AI for clinical practice are still scarce in Germany. Moreover, physicians' requirements and expectations of AI in medicine, and their opinion on the usage of anonymized patient data for clinical and biomedical research, have not been investigated widely in German university hospitals.
Objective This study aimed to evaluate physicians' requirements and expectations of AI in medicine and their opinion on the secondary usage of patient data for (bio)medical research (eg, for the development of machine learning algorithms) in university hospitals in Germany.
Methods A web-based survey was conducted addressing physicians of all medical disciplines in 8 German university hospitals. Answers were given using Likert scales and general demographic responses. Physicians were invited to participate locally via email in the respective hospitals.
Results The online survey was completed by 303 physicians (female: 121/303, 39.9%; male: 173/303, 57.1%; no response: 9/303, 3.0%) from a wide range of medical disciplines and work experience levels. Most respondents had either a positive (130/303, 42.9%) or a very positive attitude (82/303, 27.1%) towards AI in medicine. There was a significant association between the personal rating of AI in medicine and the self-reported technical affinity level (H4=48.3, P
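The association reported above between Likert-scale AI ratings and self-reported technical affinity (H4=48.3) is of the kind tested with a Kruskal-Wallis H test. As a minimal sketch of how such a statistic is computed, the function below implements the basic H formula on invented illustration data; the groups, values and the omission of the tie correction are assumptions for brevity, not the study's actual data or analysis.

```python
# Sketch: Kruskal-Wallis H statistic for comparing Likert ratings
# across several groups. Illustration only; tie correction omitted.

def kruskal_wallis_h(groups):
    """Return the Kruskal-Wallis H statistic for a list of sample groups."""
    pooled = sorted(v for g in groups for v in g)
    n_total = len(pooled)
    # Assign each distinct value its mid-rank (average rank over ties).
    ranks = {}
    i = 0
    while i < n_total:
        j = i
        while j < n_total and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        i = j
    # H = 12 / (N (N+1)) * sum(R_g^2 / n_g) - 3 (N+1)
    rank_sum_term = sum(
        sum(ranks[v] for v in g) ** 2 / len(g) for g in groups
    )
    return 12.0 / (n_total * (n_total + 1)) * rank_sum_term - 3 * (n_total + 1)

# Hypothetical Likert ratings (1-5) of attitude towards AI for three
# invented technical-affinity groups:
low = [2, 3, 2, 3, 3]
mid = [3, 4, 3, 4, 4]
high = [4, 5, 5, 4, 5]
print(round(kruskal_wallis_h([low, mid, high]), 2))  # → 9.68
```

A large H relative to the chi-squared distribution with (number of groups − 1) degrees of freedom indicates that the rating distributions differ between affinity groups; in practice one would use a library routine that also applies the tie correction.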
Article
Full-text available
Since the concept of "artificial intelligence" was introduced in 1956, it has led to numerous technological innovations in human medicine and completely changed the traditional model of medicine. In this study, we explain the application of artificial intelligence in various fields of medicine from four aspects: machine learning, intelligent robots, image recognition technology, and expert systems. In addition, we discuss the existing problems and future trends in these areas. In recent years, driven by globalization, research institutions around the world have conducted a number of studies on this subject. Medical artificial intelligence has therefore achieved significant breakthroughs and will show broad development prospects in the future.
Article
Full-text available
Making good decisions in extremely complex and difficult processes and situations has always been both a key task and a challenge in the clinic, and has led to a large number of clinical, legal and ethical routines, protocols and reflections intended to guarantee fair, participatory and up-to-date pathways for clinical decision-making. Nevertheless, the complexity of processes and physical phenomena, time and economic constraints and, not least, further endeavours and achievements in medicine and healthcare continuously raise the need to evaluate and improve clinical decision-making. This article scrutinises whether and how clinical decision-making processes are challenged by the rise of so-called artificial intelligence-driven decision support systems (AI-DSS). In a first step, it analyses how the rise of AI-DSS will affect and transform the modes of interaction between different agents in the clinic. In a second step, we point out how these changing modes of interaction also imply shifts in the conditions of trustworthiness, epistemic challenges regarding transparency, the underlying normative concepts of agency and its embedding into concrete contexts of deployment and, finally, the consequences for (possible) ascriptions of responsibility. Third, we draw first conclusions for further steps towards a 'meaningful human control' of clinical AI-DSS.
Article
Full-text available
Background: Artificial intelligence (AI), with its seemingly limitless power, holds the promise to truly revolutionize patient healthcare. However, the discourse carried out in public does not always correlate with the actual impact. We therefore aimed to obtain an overview of how French health professionals perceive the arrival of AI in daily practice, as well as the perceptions of the other actors involved, in order to gain an overall understanding of this issue.
Methods: Forty French stakeholders with diverse backgrounds were interviewed in Paris between October 2017 and June 2018, and their contributions were analyzed using the grounded theory method (GTM).
Results: The interviews showed that the various actors involved all see AI as a myth to be debunked. However, their views differed. French healthcare professionals, who are strategically placed in the adoption of AI tools, were focused on providing the best and safest care for their patients; contrary to popular belief, they do not always see a use for these tools in their practice. For healthcare industry partners, AI is a true breakthrough, but legal difficulties in accessing individual health data could hamper its development. Institutional players are aware that they will have to play a significant role in regulating the use of these tools. From an external point of view, individuals without a conflict of interest have significant concerns about the sustainability of the balance between health, social justice, and freedom. Health researchers specialized in AI take a more pragmatic point of view and hope for a better transition from research to practice.
Conclusion: Although some hyperbole has taken over the discourse on AI in healthcare, diverse opinions and points of view have emerged among French stakeholders. The development of AI tools in healthcare will be satisfactory for everyone only through a collaborative effort between all those involved. It is thus time to also consider the opinion of patients and, together, address the remaining questions, such as that of responsibility.
Article
Full-text available
Artificial intelligence is a branch of computer science capable of analysing complex medical data. Its potential to exploit meaningful relationships within a data set can be used in diagnosis, treatment and outcome prediction in many clinical scenarios. Medline and internet searches were carried out using the keywords 'artificial intelligence' and 'neural networks (computer)'. Further references were obtained by cross-referencing from key articles. An overview of different artificial intelligence techniques is presented in this paper, along with a review of important clinical applications. The proficiency of artificial intelligence techniques has been explored in almost every field of medicine. Artificial neural networks were the most commonly used analytical tool, whilst other artificial intelligence techniques such as fuzzy expert systems, evolutionary computation and hybrid intelligent systems have all been used in different clinical settings. Artificial intelligence techniques have the potential to be applied in almost every field of medicine. There is a need for further, appropriately designed clinical trials before these emergent techniques find application in real clinical settings.
Book
This book teaches, in an accessible way, the knowledge needed to analyse qualitative and mixed-methods data with MAXQDA. The authors draw on decades of research experience and cover a broad spectrum of methods in this book. They do not restrict themselves to individual research approaches but convey the know-how to implement various methods, from grounded theory through discourse analysis to qualitative content analysis, with MAXQDA. In addition, special topics are addressed, such as transcription, category building, visualisations, video analysis, concept maps, group comparisons and the creation of literature reviews. Contents: how to make optimal use of MAXQDA in every phase of your project; how data are transcribed, explored and paraphrased; how to code data and design category systems; how to work with memos, variables and summaries; how special types of data are analysed (focus groups, online surveys, bibliographic data, etc.); how to analyse data efficiently in a team with MAXQDA. The authors: Dr. Stefan Rädiker is a freelance consultant and trainer for research methods and evaluation, focusing on the computer-assisted analysis of qualitative and mixed-methods data with the analysis software MAXQDA. Dr. Udo Kuckartz is professor emeritus of empirical educational science and social research methods at the Philipps-Universität Marburg.
Book
The social-scientific analysis of qualitative data, as well as text and content analysis, can today be carried out very effectively with the support of computer programs. The use of QDA software promises greater efficiency and transparency of analysis. This book gives an overview of these new working techniques, discusses the underlying methodological concepts (including grounded theory and qualitative content analysis) and offers practical guidance for their implementation.
Book
This methods book provides a research-phase-oriented introduction to the central aspects of qualitative interview research that is both methodologically comprehensive and practice-oriented. It pursues an integrative approach that, across the various research phases and dimensions, tries not to lose sight of one central goal: openness towards the research object and the research process, against the background of the methodological challenges and problems of qualitative social and interview research. Reconstructive social research has become an indispensable part of the canon of empirical social research methods. In the course of its establishment it has differentiated enormously; this also applies to qualitative interview research, within which there is a multitude of research programmes and methodological approaches. Regarding their mutual compatibility, it is striking that there appear to be numerous methodological and research-political incompatibilities, under which the central basic principle of reconstructive social research itself often suffers: openness towards the research object and the research process.
Physician Confidence in Artificial Intelligence: An Online Mobile Survey
Oh S, Kim JH, Choi S-W, Lee HJ, Hong J, Kwon SH. Physician Confidence in Artificial Intelligence: An Online Mobile Survey. J Med Internet Res. 2019;21(3):e12422.