Attitudes and Acceptance Towards Artificial Intelligence in Medical Care
Dana HOLZNER a,1, Timo APFELBACHER a,1, Wolfgang RÖDLE a, Christina SCHÜTTLER a, Hans-Ulrich PROKOSCH a, Rafael MIKOLAJCZYK b, Sarah NEGASH b, Nadja KARTSCHMIT b, Iryna MANUILOVA c, Charlotte BUCH d, Jana GUNDLACK e and Jan CHRISTOPH a,c,2
a Department of Medical Informatics, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
b Institute for Medical Epidemiology, Biometrics and Informatics, Interdisciplinary Center for Health Sciences, Faculty of Medicine, Martin-Luther-University Halle-Wittenberg, Halle, Germany
c Junior Research Group (Bio-)medical Data Science, Faculty of Medicine, Martin-Luther-University Halle-Wittenberg, Halle, Germany
d Institute for History and Ethics of Medicine, Center for Health Sciences Halle, Medical Faculty, Martin-Luther-University Halle-Wittenberg, Halle, Germany
e Institute of General Practice and Family Medicine, Center of Health Sciences, Faculty of Medicine, Martin-Luther-University Halle-Wittenberg, Halle, Germany
Abstract. Background: Artificial intelligence (AI) in medicine is a highly topical issue. As far as the attitudes and perspectives of the different stakeholders in healthcare are concerned, there is still much to be explored. Objective: Our aim was to determine attitudes towards AI applications, and aspects of their acceptance, from the perspective of physicians in university hospitals. Methods: We conducted individual exploratory expert interviews. Low-fidelity mockups were used to show interviewees potential application areas of AI in clinical care. Results: In principle, physicians are open to the use of AI in medical care. However, they are critical of some aspects, such as data protection or the lack of explainability of the systems. Conclusion: Although some trends in attitudes, e.g., on the challenges or benefits of using AI, became clear, further research is necessary, as intended by the subsequent PEAK project.
Keywords. Artificial Intelligence; Physicians; Attitude; Delivery of healthcare;
Expert Systems; Machine Learning; Neural Networks, Computer; Computers
1. Introduction
The principle of Artificial Intelligence (AI) goes back to the 1950s, when Alan Turing described this type of technology [1]. Although it is not a new technology, the field of AI is developing very fast nowadays.
1 These authors contributed equally to this work.
2 Corresponding Author, Jan Christoph, AG (Bio-)Medical Data Science, Faculty of Medicine, Martin-Luther-
University Halle-Wittenberg, Halle, Germany; E-mail: jan.christoph@uk-halle.de.
The PEAK project (prospects for the use and acceptance of artificial intelligence in healthcare) aims to investigate the attitudes of physicians and patients regarding the use of AI applications [2]. Current areas of use, possible further areas of use, and any gaps in knowledge will be explored. Since there are many different definitions of AI, we settled on one for the PEAK project that encompasses a broad
scope: “the ability to process external data systematically and learn from it to achieve
specific goals and tasks. AI involves using machines to simulate human thinking
processes and intelligent behaviors, such as thinking, learning, and reasoning, and aims
to solve complex problems that can only be solved by experts. As a branch of computer
science, the field of AI mainly studies the following contents: machine learning,
intelligent robot, natural language understanding, neural network, language recognition,
image recognition, and expert system.” [3]. In this sub-study of the PEAK project,
physicians were interviewed about their attitudes towards AI.
2. Methods
Regarding the sample, a total of twelve physicians aged 28 to 41 years, with experience from different medical disciplines (anesthesiology, cardiovascular surgery, otolaryngology, general medicine, hematology, internal medicine, neurorehabilitation, pediatrics, psychiatry, radiology, radiotherapy, urology, and visceral surgery), were chosen randomly with regard to their affinity to AI and interviewed. There were eight assistant physicians, two specialists, and two assistant medical directors, with professional experience ranging from 3.5 months to 12 years. The sample covered the entire spectrum from low to high affinity for technology.
The interviews were conducted between Nov. 15 and Dec. 14, 2021, by one interviewer using the "Zoom" web conferencing tool. After oral consent, the conversations were recorded and transcribed by two team members for later analysis. For the evaluation of the expert interviews, qualitative content analysis according to Kuckartz was applied [4]. The software MAXQDA was used for the transcription of the interviews as well as for all evaluation steps and analyses [5]. The interview guide was tested in a pretest with four subjects. Since only minor adjustments were made after pretesting and three of the four subjects were physicians, three of the pretest interviews were included in the overall evaluation.
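As a purely illustrative aside, the frequency statements reported in the Results section (e.g., 7/12) correspond to counting, per code, the number of distinct interviews in which that code was assigned. The following minimal Python sketch shows one way such counts could be computed from a coded-segment table; the CSV file name and column names are assumptions made for this sketch and do not reflect the actual MAXQDA export format or analysis workflow used in the study.

```python
# Minimal sketch (assumption: a hypothetical CSV export of coded segments with
# one row per coded segment and columns "interview_id" and "code"; this is not
# the actual MAXQDA export format or analysis pipeline used in the study).
import csv
from collections import defaultdict

def tally_codes(path: str, n_interviews: int = 12) -> dict:
    """Count, for each code, the number of distinct interviews it was assigned in."""
    interviews_per_code = defaultdict(set)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            interviews_per_code[row["code"]].add(row["interview_id"])
    # Report frequencies in the "x/12" style used in the Results section.
    return {code: f"{len(ids)}/{n_interviews}"
            for code, ids in sorted(interviews_per_code.items())}

if __name__ == "__main__":
    for code, frequency in tally_codes("coded_segments.csv").items():
        print(code, frequency)
```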
Before conducting the semi-structured interviews, a guide was developed. The process of guide construction was based on the S2PS2 method according to Kruse [6], which consists of the phases "collect, sort, check, delete, subsume". After a brainstorming phase, in which existing literature was incorporated [7-9], unsuitable questions were deleted in discussion with the PEAK team. The remaining questions were subsumed in a structured manner. To cover the broadest possible range of potentially addressable topics, the interview guide consisted primarily of qualitative open-ended questions. As an introduction, the interviewees were asked about their understanding of the term "Artificial Intelligence" and about their perception of the current use of AI systems in everyday clinical practice. In a second step, seven low-fidelity mockups, i.e., simplified visual representations, were used to illustrate possible application scenarios of AI in the clinical context. The mockups served as inspiration for the subsequent general qualitative part, which consisted of acceptance-promoting, acceptance-inhibiting, and topic-specific questions.
For clustering the mockups, four categories were formed, intended to cover the widest possible range of AI system usage in the medical context: 1. AI in diagnostics, 2. AI in therapy, 3. AI for prognosis/prediction, and 4. AI for process optimization in the hospital. The mockups were shown in a PowerPoint presentation made visible to the participants via screen sharing.
3. Results
3.1. Understanding of the Term Artificial Intelligence and Areas of Application
Regarding the question of what the respondents understood by the term Artificial Intelligence, the ideas were similar. All twelve interviewees agreed that it is a technical system. Furthermore, about half of the interviewees defined AI as a system that works independently (7/12) and can make decisions on its own (6/12).
Half of the respondents reported not having used AI systems in clinical practice to date. However, all but one person had already heard of AI in medical care. After the demonstration of the mockups, it turned out that ten people were already using AI applications according to our definition, mostly in the form of medication alerts or voice recognition.
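To make this type of application concrete, the following minimal sketch illustrates how a simple rule-based medication alert, one of the AI application types named by the interviewees, can work in principle. The interaction table, drug names, and function are hypothetical and purely illustrative; they do not describe any system actually in use at the interviewees' sites.

```python
# Illustrative sketch of a simple rule-based medication interaction alert
# (hypothetical toy interaction table; not a system used by the interviewees).
INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"simvastatin", "clarithromycin"}): "increased risk of myopathy",
}

def interaction_alerts(prescribed):
    """Return an alert message for every known interacting pair in a medication list."""
    drugs = [d.strip().lower() for d in prescribed]
    alerts = []
    for i, first in enumerate(drugs):
        for second in drugs[i + 1:]:
            reason = INTERACTIONS.get(frozenset({first, second}))
            if reason:
                alerts.append(f"Alert: {first} + {second}: {reason}")
    return alerts

print(interaction_alerts(["Warfarin", "Aspirin", "Metoprolol"]))
# Prints: ['Alert: warfarin + aspirin: increased bleeding risk']
```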
3.2. General Attitudes Towards AI
We asked our twelve interview participants what benefits they believe AI can provide in medical care. For six of the interviewees, the benefit lay in reducing errors in care and thus increasing (patient) safety. Five of the interviewees named the relief of medical work through AI taking over repetitive or simple tasks, as well as the optimization of processes. One third of the interviewees saw the benefit of an AI system in improving medical care, structuring data, or saving time, which, according to two interviewees, can in turn be used for patient care.
In contrast, five of the interviewees saw the seamless integration of the systems into existing settings as a major challenge to implementing AI in everyday medical practice, in terms of both program functionality (3/12) and interoperability (2/12). A third of the interviewees saw a risk in relying too much on the systems. Another important point mentioned was the potential for manipulation or misuse of patient data (4/12). In addition, aspects such as the endangerment of the doctor-patient relationship (2/12), the emergence of legal challenges in dealing with AI decisions (2/12), the susceptibility to errors in the development of AI systems related to the underlying data basis (3/12), and the need for explainability and transparency of the systems (2/12) were also mentioned.
Further, we asked about the weaknesses of AI. In this context, the interviewees mentioned the lack of clinical experience, patient perception and interaction, and human instinct (5/12). In addition, it was noted that a human is faster than an AI in emergencies. The strengths of AI mentioned included the objectivity of the systems (5/12), the matching (1/12) and good presentation (1/12) of data, and that an AI does not miss anything (2/12), has more knowledge (2/12), and does not get tired (2/12).
Regarding responsibility towards the patient, half of the interviewees stated that the physician who uses AI merely as a support for medical decisions still retains responsibility (6/12). The physician, who must question and weigh the AI's decisions, retains the final say (3/12). One subject estimated the responsibility to be increased, because a physician has to take an external decision into account in addition to their own assessment.
Addressing the topic of a discrepancy between their own medical judgement and the AI's judgement, most interviewees saw it as an impetus to critically question their own decision and would get to the bottom of the exact cause of the discrepant results (9/12). Half of the respondents would seek the advice of other physician colleagues or supervisors in order to weigh the opinions against human experience in a discussion.
Moving on to the factors influencing trust in AI systems, one of the most frequently mentioned points was the data basis of the systems (7/12). In addition, the transparency of AI systems is of great importance: it was important to six participants that they be informed about the background of the system, such as the nature of the underlying data, knowledge about the development of the programs, and how the systems work. Other influences on trust included the testing of AI systems in clinical trials (3/12).
Regarding the freedom of decision-making for physicians, AI is seen merely as a
decision support tool that does not dictate how they should act. More than half (7/12) of
the respondents saw no influence on their scope of action. Two feared the subconscious
influence of relying more and more on the systems, and three interviewees could imagine
AI having a greater influence on decision-making freedom in the future.
4. Discussion
The use of exemplary AI scenarios as an introduction and discussion stimulus in the interviews proved useful. Based on the lively participation and positive feedback, we conclude that the subjects viewed this methodological approach to conducting the interviews positively. Although the physicians were asked to evaluate AI systems from fields in which they were not specialists, new aspects arose that were thought-provoking and provided valuable input.
The physicians were generally positive about the use of AI in the clinical context. This conclusion was also reached in other studies, such as Maassen et al. 2021 [9]. An important advantage of the increased use of AI in healthcare is the relief of medical staff and the associated time savings. This saved time would then be available, for instance, for better patient care. In the medium to long term, however, it is entirely conceivable that even these theoretical time savings will fall victim to economic constraints. In contrast, Maassen et al. found that most physicians do not even expect to have more time for patients because of using AI [9].
The interviews revealed that it is important for physicians that the implementation of AI into everyday practice, and thus into already existing systems, is seamless and without problems. At the same time, care must be taken to ensure that these tools are used wisely, supporting physicians without giving them the possibility to rely completely on the AI systems' recommendations. In the end, as Braun et al. stated, it is a human who must make the decision, not the AI [10]. Furthermore, according to the physicians, it would be helpful if users of AI could understand why the system makes a decision. We assume that if the attending physician understands why a decision was made, they can also communicate this better to the patient.
A limitation of this study is the small group of twelve experts. Nevertheless, it covers
a relatively broad clinical field of eleven medical disciplines. Further research is needed
on a much larger group of physicians as well as on patients and the general population.
This is planned at a later stage of the study with a total of 800 physicians, 800 patients
and 1,000 persons of the general population over the next 2.5 years.
5. Conclusion
Our initial findings show that physicians are generally open to AI in their everyday clinical work. The interviewed physicians see the possibility of getting support from AI systems in reaching the right decision. Hence, the participants expect that better decision-making will increase the quality of patient care. In addition, according to the physicians' estimation, AI is well suited to assist with certain repetitive tasks, as it does not get tired and does not need a break. This will be possible especially in areas of factual knowledge matching, such as the pharmacological interaction check, where AI has the advantage of not overlooking anything. Furthermore, according to our interviewees, AI can also make an important contribution to process and resource optimization, which can ultimately lead to time savings. Nevertheless, further studies are needed to cover the current need for more specific information, to clarify specific technical, legal, and ethical issues, and to address the limitations of the small sample size and narrow age range of this sub-study. This is exactly where the PEAK project will start and thereby contribute to research on this topic.
Acknowledgement
We thank all interviewees and all further members of the PEAK team namely Thomas
Frese, Jan Schildmann, Daniel Tiller, Susanne Unverzagt and Christoph Weber. PEAK
is funded by the Joint Federal Committee (G-BA) under the FKZ 01VSF20017. The
present work was performed in fulfillment of the requirements for obtaining the degree
"Dr. rer. biol. hum." from the Friedrich-Alexander-Universität Erlangen-Nürnberg.
References
[1] Ramesh AN, Kambhampati C, Monson JRT, Drew PJ. Artificial intelligence in medicine. Ann R Coll
Surg Engl. 2004;86(5):334–338.
[2] PEAK – Perspektiven des Einsatzes und Akzeptanz Künstlicher Intelligenz - G-BA Innovationsfonds [Internet]. 2022 [updated 2022 Jan 20; cited 2022 Jan 20]. Available from: https://innovationsfonds.g-ba.de/projekte/versorgungsforschung/peak-perspektiven-des-einsatzes-und-akzeptanz-kuenstlicher-intelligenz.397.
[3] Liu R, Rong Y, Peng Z. A review of medical artificial intelligence. Global Health Journal. 2020; 4 (2):
42–45.
[4] Kuckartz U. Einführung in die computergestützte Analyse qualitativer Daten. 3rd ed. Wiesbaden: VS
Verl. für Sozialwiss; 2010. 92 ff. p. (Lehrbuch). ger.
[5] Rädiker S, Kuckartz U. Analyse qualitativer Daten mit MAXQDA. Wiesbaden: Springer Fachmedien
Wiesbaden; 2019.
[6] Kruse J. Qualitative Interviewforschung: Ein integrativer Ansatz. Weinheim, Basel: Beltz Juventa; 2014.
227 p. (Grundlagentexte Methoden). ger.
[7] Oh S, Kim JH, Choi S-W, Lee HJ, Hong J, Kwon SH. Physician Confidence in Artificial Intelligence:
An Online Mobile Survey. J Med Internet Res. 2019;21(3):e12422.
[8] Laï M-C, Brian M, Mamzer M-F. Perceptions of artificial intelligence in healthcare: findings from a
qualitative survey study among actors in France. J Transl Med. 2020;18(1):14.
[9] Maassen O, Fritsch S, Palm J, Deffge S, Kunze J, Marx G, Riedel M, Schuppert A, Bickenbach J. Future
Medical Artificial Intelligence Application Requirements and Expectations of Physicians in German
University Hospitals: Web-Based Survey. J Med Internet Res. 2021; 23 (3): e26646.
[10] Braun M, Hummel P, Beck S, Dabrock P. Primer on an ethics of AI-based decision support systems in
the clinic. J Med Ethics. 2020. doi:10.1136/medethics-2019-105860 Cited in: PubMed; PMID 32245804.