Original Paper
Identity Threats as a Reason for Resistance to Artificial
Intelligence: Survey Study With Medical Students and
Professionals
Ekaterina Jussupow1*, PhD; Kai Spohrer2*, PhD; Armin Heinzl1*, Prof Dr
1University of Mannheim, Mannheim, Germany
2Frankfurt School of Finance & Management, Frankfurt, Germany
*all authors contributed equally
Corresponding Author:
Ekaterina Jussupow, PhD
University of Mannheim
L15 1-6
Mannheim, 68313
Germany
Phone: 49 621 181 1691
Email: jussupow@uni-mannheim.de
Abstract
Background: Information systems based on artificial intelligence (AI) have increasingly spurred controversies among medical
professionals as they start to outperform medical experts in tasks that previously required complex human reasoning. Prior research
in other contexts has shown that such a technological disruption can result in professional identity threats and provoke negative
attitudes and resistance to using technology. However, little is known about how AI systems evoke professional identity threats
in medical professionals and under which conditions they actually provoke negative attitudes and resistance.
Objective: The aim of this study is to investigate how medical professionals’ resistance to AI can be understood as a result of
professional identity threats and temporal perceptions of AI systems. It examines the following two dimensions of medical
professional identity threat: threats to physicians’ expert status (professional recognition) and threats to physicians’ role as an
autonomous care provider (professional capabilities). This paper assesses whether these professional identity threats predict
resistance to AI systems and change in importance under the conditions of varying professional experience and varying perceived
temporal relevance of AI systems.
Methods: We conducted 2 web-based surveys with 164 medical students and 42 experienced physicians across different
specialties. The participants were provided with a vignette of a general medical AI system. We measured the experienced identity
threats, resistance attitudes, and perceived temporal distance of AI. In a subsample, we collected additional data on the perceived
identity enhancement to gain a better understanding of how the participants perceived the upcoming technological change as
beyond a mere threat. Qualitative data were coded in a content analysis. Quantitative data were analyzed in regression analyses.
Results: Both threats to professional recognition and threats to professional capabilities contributed to perceived self-threat and
resistance to AI. Self-threat was positively associated with resistance. Threats to professional capabilities directly affected
resistance to AI, whereas the effect of threats to professional recognition was fully mediated through self-threat. Medical students
experienced stronger identity threats and resistance to AI than medical professionals. The temporal distance of AI changed the
importance of professional identity threats. If AI systems were perceived as relevant only in the distant future, the effect of threats
to professional capabilities was weaker, whereas the effect of threats to professional recognition was stronger. The effect of threats
remained robust after including perceived identity enhancement. The results show that the distinct dimensions of medical
professional identity are affected by the upcoming technological change through AI.
Conclusions: Our findings demonstrate that AI systems can be perceived as a threat to medical professional identity. Both
threats to professional recognition and threats to professional capabilities contribute to resistance attitudes toward AI and need
to be considered in the implementation of AI systems in clinical practice.
(JMIR Form Res 2022;6(3):e28750) doi: 10.2196/28750
KEYWORDS
artificial intelligence; professional identity; identity threat; survey; resistance
Introduction
Objective
Advances in machine learning and image recognition have
driven the implementation of systems based on artificial
intelligence (AI) in clinical practice. These systems are able to
perform tasks commonly associated with intelligent beings,
such as reasoning or learning from experience [1]. AI systems
based on machine learning gain insight directly from large sets
of data [2]. Such AI systems can, therefore, make more complex
decisions and complete more complex tasks than can rule-based
systems. Moreover, machine learning has the potential to
improve AI system capabilities with growing amounts of
training data. In medical disciplines such as radiology, AI
systems already diagnose specific diseases in a manner
comparable with that of experienced physicians [3]. Current AI
systems provide diagnosis and treatment recommendations,
facilitate access to information, and perform time-consuming
routine tasks of physicians, such as the segmentation of
radiological images [4]. In the light of an accelerated
technological development, AI systems in health care are
expected to pave the way for truly personalized medicine by
augmenting medical decisions and helping physicians cope with
increasing amounts of relevant data and growing numbers of
medical guidelines [5]. In fact, future AI systems are expected
to autonomously execute medical actions (eg, communicating
results and making diagnosis decisions) as efficiently and
accurately as expert physicians [3]. Thus, medical professionals
will be able to delegate more complex medical tasks to those
systems than to any traditional rule-based clinical decision
support system [6].
Despite their benefits and the great expectations toward their
future use, AI systems do not cause only positive reactions in
medical professionals. Although nearly all current applications
of AI are still limited to narrow use cases, the future potential
of this technology has resulted in a discourse driven by a duality
of hyped expectations and great fears [7]. Many physicians seem
to be concerned about the changes that AI systems impose on
their profession [8]. Such negative attitudes toward new
technology can manifest as resistance attitudes, resulting in
hesitation toward adopting a technology [9] and even in active
resentment against using a technology in clinical practice [10].
Although multiple studies have investigated attitudes toward
AI [11-13], they did not consider how resistance attitudes toward
AI are formed. Specifically, they did not examine whether
negative attitudes toward AI stem from the perception that AI
is threatening the medical professional identity. Therefore, we
aim to address the following research questions:
1. How is the medical professional identity threatened by AI
systems?
2. Under which conditions do medical professional identity
threats contribute to resistance to AI?
Theoretical Background
Professional identity refers to how professionals define
themselves in terms of their work roles and provides an answer
to the question “Who am I as a professional?” [14]. In general,
professionals strive to maintain a coherent and positive picture
of themselves [15], resulting in a tendency to interpret
experiences in identity-coherent ways. This helps individuals
to adapt to social changes, such as technological innovations,
and to create benefits from the positive identification with a
social group. Professional identity can be considered as a
combination of social identity [16] (ie, membership in a group
of professionals) and personal identity (ie, the individual
identification and enactment of professional roles) [15]. Medical
professionals are known for their strong commitment to their
professional identity, which is already developed in the early
phases of their socialization and later refined through practical
experience [17]. The medical profession’s group boundaries
are very rigid and shaped by strong core values and ideals [18].
Medical professionals can, therefore, be seen as a prototypical
profession [17], and their professional identity is particularly
resilient to change [19].
Identity threats are “experiences appraised as indicating potential
harm to the value, meanings, or enactment of an identity” [20].
Such experiences potentially decrease an identity’s value,
eliminating a part of the identity, or disturbing the connection
between an identity and the meaning the individual associates
with the threatened identity [20]. In the following, we distinguish
between two parts of individuals’ identity that can be threatened:
personal identity and professional identity.
First, self-threat describes a context-independent threat to
personal identity by challenging fundamental self-motives of
distinctiveness, continuity, generalized self-efficacy, and
self-esteem [21]. Self-threat can bias information processing,
result in avoidance of threatening information [22-24], and
adverse emotional reactions [25]. Recently, self-threat has been
identified as an antecedent of resistance to technology [26].
Second, medical professionals can be threatened along different
dimensions of their professional identity. Drawing on a synthesis
of prior work (Multimedia Appendix 1 [9,10,14,27-34]), we
differentiate between 2 dimensions of medical professional
identity threats. First, threats to professional recognition refer
to challenges to the expertise and status position of medical
professionals [10,27]. Second, threats to professional
capabilities refer to the enactment of roles related to the medical
work itself. The latter include threats to the care provider role
[28], autonomy [14,29-31], and professional control [9,32].
Multiple studies show that professional identity threats can
instigate resistance to new technologies and organizational
change [10,28,35]; however, it is unclear how these threats
manifest in the context of AI systems.
Multiple conditions influence how medical professionals experience identity threat from AI systems. In this paper, we focus on the following two conditions: professional experience and the perceived temporal distance of AI. First, professional experience influences how likely medical professionals believe it is that AI systems can replace parts of their work. In particular, more experienced physicians believe that they have unique skills that AI systems cannot substitute, whereas novices have yet to develop those skills. Second, medical professionals might have
different perceptions of how fast AI systems are implemented
in clinical practice. Perceiving AI systems as temporally close
and relevant in the immediate future suggests that AI systems
will be seen as more influential on concrete medical work
practices, thus threatening one’s professional capabilities.
Conversely, if AI systems are perceived to be relevant only in
the distant future, the perceived threat might be less specific to
medical work practices but more relevant for the long-term
reputation of medical professionals, thus threatening their
professional recognition. Hence, the perceived temporal distance
of AI systems could affect how relevant each dimension of the
professional identity becomes [36].
Methods
Survey Design
We collected data in 2 waves of a web-based survey. The first
wave survey mainly addressed medical students, whereas the
second survey focused on experienced physicians from different
specialties. All participants were provided with a vignette of an
AI system named Sherlock (Textbox 1) that was based on the
description of IBM Watson and was pretested with researchers
and medical students. We selected this vignette because it
depicts a general AI system with which the participants were
familiar because of the marketing efforts of the vendor. It
does not limit the abilities of the AI to a specific medical
specialty. The vignette was purposefully focused on the benefits
of the system and evoked expectations of high accuracy to
establish the picture of a powerful AI system that goes beyond
extant narrow use cases and, thus, has the potential to be
threatening. Control questions were included to ensure that participants associated AI with the vignette and perceived it as realistic. We then asked an open question
about the participants’ perceptions of the changes to their
professional role caused by AI systems to gain qualitative
insights into the perceived upcoming change of their identity.
Afterward, we asked participants to complete the provided
survey of experience, identity threat, resistance attitudes, and
perceived temporal distance of AI systems.
Textbox 1. Vignette Sherlock—a general artificial intelligence system.
What is an intelligent clinical decision support software?
Physicians often need to quickly analyze all the information provided to make diagnoses and decisions about treatments. These decisions have
far-reaching consequences for patients and yet often have to be made under time pressure and with incomplete information. This is where the
Sherlock decision support software comes in.
Sherlock can be used in different specialties but is presented here with the example of an oncological system.
Every physician has an iPad or a laptop through which he or she can access his or her electronic medical records. The “Ask Sherlock” button is
integrated into each medical record. When the physician asks Sherlock, he or she receives a 1-page treatment recommendation with up to 40
pages of explanations as backup.
With the help of artificial intelligence, Sherlock can integrate and compare millions of patient data to identify similarities and connections. In
addition, Sherlock has been trained by renowned experts. On the basis of the evidence base and the expert training, Sherlock then generates
treatment options. Sherlock then presents those treatment options ranked by appropriateness (recommended, for consideration, or not recommended)
alongside key information from the medical record and relevant supporting articles from current research. As a result, the practicing physician
can easily follow Sherlock's recommendations and directly access relevant articles or clinical data.
Sherlock is already in use in some clinics and tests have shown that Sherlock's recommendations are 90% in line with the decisions of renowned
tumor boards.
Measures
We used a 4-item measure of self-threat [21] on a 6-point Likert
scale and a 3-item measure of resistance [9]. Furthermore, we
asked participants whether they perceived the change from AI
systems as temporally close or distant by assessing the
agreement to the following statements: “Such systems will only
become relevant in the distant future,” “Such systems are
unlikely to be implemented technically,” and “Such systems
are too abstract and intangible for me.” We extended existing
measures for threats toward professional recognition and
professional capabilities following the procedure of MacKenzie
et al [37], as outlined in Multimedia Appendix 2 [9,28,29,37-46].
For medical students, we also assessed positive expectations
toward AI, which mirrored the negatively framed items that we
used for identity threat (identity enhancement). We also included
an open question about their expectations of how the medical
role would change with the introduction of AI systems. As a
control variable, we asked for participants’ familiarity with
clinical decision support systems. Except for items related to
self-threat, all items were measured on a 5-point Likert scale
from totally disagree to totally agree. Table 1 lists the survey
items. The items for identity enhancement can be found in
Multimedia Appendix 3 [47-50].
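Construct scores for analyses of this kind are typically formed by averaging a respondent's ratings on the items of each construct. As a minimal illustration (our own sketch with invented ratings, not the authors' procedure), using the item identifiers from Table 1:

```python
# Illustrative sketch (not the authors' code): Likert items are grouped into
# constructs following Table 1, and a respondent's construct score is the
# mean of the corresponding item ratings. All ratings below are invented.

def construct_score(responses, items):
    """Mean rating across the items belonging to one construct."""
    return sum(responses[i] for i in items) / len(items)

respondent = {
    "RC1": 4, "RC2": 5, "RC3": 3, "RC4": 4,  # resistance to AI (5-point scale)
    "ST1": 2, "ST2": 3, "ST3": 2, "ST4": 1,  # self-threat (6-point scale)
}

resistance = construct_score(respondent, ["RC1", "RC2", "RC3", "RC4"])
self_threat = construct_score(respondent, ["ST1", "ST2", "ST3", "ST4"])
print(resistance, self_threat)  # 4.0 2.0
```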
Table 1. Final list of items used for the hypothesis testing.a
Construct and item
Threats to professional recognition (self-developed from literature review)
Threat to expertise
E2: I fear that when using the system, physicians may lose their expert status.
E3: I fear that when using the system, certain physician specializations can be replaced.
Perceived threat to status position
S1: I fear that when using the system, physicians’ position in the hospital hierarchy may be undermined.
S2: I fear that when using the system, physicians may have a lower professional status.
S4: I fear that the status of physicians, who use the system, may deteriorate within the physician community.
Threats to professional capabilities
Perceived threat to autonomy (adapted from Walter and Lopez [29])
A1: I fear that when using the system physicians’ job autonomy may be reduced.
A3: I fear that physicians’ diagnostic and therapeutic decisions will be more monitored by nonphysicians.
Perceived threat to professional influence
I1: I fear that when using the system physicians may have less control over patient medical decisions.
I2: I fear that when using the system physicians may have less control over ordering patient tests.
I3: I fear that when using the system physicians may have less control over the distribution of scarce resources.
Perceived threat to being a care provider (self-developed from literature review)
C1: I fear that when using the system physicians have less influence on patient care.
C3: I fear that when using the system physicians are less able to treat their patients well.
Self-threat from AIb(adapted from Murtagh et al [21])
ST1: Using Sherlock undermines my sense of self-worth.
ST2: Using Sherlock makes me feel less competent.
ST3: Using Sherlock would have to change who I am.
ST4: Using Sherlock makes me feel less unique as a person.
Resistance to AI (adapted from Bhattacherjee and Hikmet [9])
RC1: I do not want Sherlock to change the way I order patient tests.
RC2: I do not want Sherlock to change the way I make clinical decisions.
RC3: I do not want Sherlock to change the way I interact with other people on my job.
RC4: Overall, I do not want Sherlock to change the way I currently work.
Temporal distance of AI (self-developed)
A1: Such systems will only become relevant in the distant future.
A2: Such systems are unlikely to be implemented technically.
A3: Such systems are too abstract and intangible for me.
Familiarity
F1: I have never heard of such systems to I have heard a lot about such systems.
F2: I have never used such systems to I have used such systems quite often.
F3: I have never dealt with such systems to I have dealt with such systems in great detail.
F4: I am not at all familiar with such systems to I am very familiar with such systems.
aItems with the identifiers E1, S3, A2, and C2 were removed because of measurement properties (see Multimedia Appendix 2).
bAI: artificial intelligence.
Scale Validation
We validated the scales of professional identity threats in the
sample of novices and experienced physicians and the
corresponding identity enhancement values, as outlined in
Multimedia Appendix 2. A confirmatory factor analysis with
all measurement scales resulted in a good model fit. All scales
displayed good psychometric properties, including reliability,
convergent validity, and discriminant validity (Multimedia
Appendix 2). The correlation between threats to professional
recognition and professional capabilities was 0.64, which was smaller than the lowest square root of the average variance extracted (0.77), indicating an acceptable level of multicollinearity. Similarly, multicollinearity between self-threat and threats to professional recognition was acceptable, with the correlation of 0.66 being lower than the square root of the average variance extracted of self-threat. We accounted for potential
common method bias in the survey design and through testing
for a common method factor. The results indicated that common
method bias is unlikely to have a strong impact on our results
(see Multimedia Appendix 2 for details). The items of identity
enhancement were combined into 1 factor because of the result
of the exploratory factor analysis. All analyses were performed
using SPSS (version 26; IBM Corporation) and Stata (version
16; StataCorp).
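The discriminant validity check applied above (each interconstruct correlation must stay below the square roots of the average variance extracted of the constructs involved) can be sketched as follows. The helper function is our own illustration, not the authors' code; the AVE values are recovered by squaring the reported square roots:

```python
import math

# Fornell-Larcker-style discriminant validity check, sketched with the
# square roots of the AVE reported in the paper. Illustrative only.

def discriminant_ok(corr, ave_x, ave_y):
    """True if the interconstruct correlation stays below both sqrt(AVE)s."""
    return corr < min(math.sqrt(ave_x), math.sqrt(ave_y))

# Squaring the reported sqrt(AVE) values recovers the AVEs.
ave = {"ProRec": 0.798 ** 2, "ProCap": 0.771 ** 2, "SelfThreat": 0.821 ** 2}

# Reported correlations: ProRec-ProCap = 0.64, ProRec-SelfThreat = 0.66.
print(discriminant_ok(0.64, ave["ProRec"], ave["ProCap"]))      # True
print(discriminant_ok(0.66, ave["ProRec"], ave["SelfThreat"]))  # True
```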
Sample and Participants
A total sample of 227 novice and experienced physicians
participated between fall 2017 and spring 2019. Participants
were recruited from medical social media groups and by
personal reference. After excluding participants because of
failed comprehension checks or poor data quality (ie, very fast
completion time or answers to the open question that were
completely unrelated to the question), a total data sample of
206 participants was used for data analysis. Of these 206
participants, 164 (79.6%) were medical students and 42 (20.4%) were medical professionals across different
specialties (see Table 2 for details on sample). We included
both medical students and trained physicians in our sample for
two reasons: First, especially inexperienced physicians are
susceptible to the influence of technology [51]. They may thus
provide valuable insight into the effects of AI systems. Second,
particularly medical students face strong, long-term career
consequences if AI systems alter the meaning of specific medical
disciplines such as radiology. They are thus likely to cognitively
engage with potential identity threats to make reasonable career
decisions, for example, with regard to the specialty they pursue.
Conversely, experienced medical professionals may have a more
pronounced professional identity and may experience threats
from AI systems differently.
Table 2. Sample properties of novice and experienced physicians.a

                                       Novice physicians    Experienced physicians
Total sample size, N                   182                  45
Sample used for data analysis, n (%)   164 (90.1)           42 (93.3)
Age (years), mean (SD)                 24.65 (3.23)         39.57 (13.14)
Gender (female), n (%)                 131 (72)             18 (40)
Experience (average)                   Eighth semester      12.6 years of job experience

aThe specialties of experienced physicians are presented at a later stage.
Statistical Analysis
Our main outcome variable measures participants’ resistance
toward using the Sherlock application. Because of differences
in sample size between novice physicians (n=164) and
experienced physicians (n=42), we conducted a nonparametric
Mann–Whitney U test to test the differences between the 2
samples regarding resistance attitude and self-threat. We
conducted seemingly unrelated regression analyses with
self-threat and resistance as the dependent variables because
we expected their error terms to be correlated. The predictors
were included in a stepwise approach to assess how much
additional variance they explain. To test the effect of perceived
temporal distance of AI, we incorporated 2 interaction terms
between temporal distance and the 2 dimensions of professional
identity threats into the regression analysis. We conducted a
mediation analysis following Hayes [52] with 10,000 bootstrap
samples to test the mediating role of self-threat on resistance
attitudes.
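The pipeline described above was run in SPSS and Stata; as a rough Python sketch on invented data (sample sizes mirror the study, but all scores and effect sizes here are assumptions), the group comparison and a simplified percentile-bootstrap test of the indirect effect might look as follows. Note that, for brevity, the b path below is estimated without simultaneously controlling for the direct effect, which Hayes' full procedure would include:

```python
import numpy as np
from scipy import stats

# Illustrative sketch only: invented data standing in for the study's samples.
rng = np.random.default_rng(0)

novices = rng.normal(3.6, 1.0, 164)      # e.g., resistance scores of novices
experienced = rng.normal(3.1, 1.0, 42)   # e.g., resistance scores of physicians

# Nonparametric group comparison (Mann-Whitney U test).
u_stat, p_value = stats.mannwhitneyu(novices, experienced,
                                     alternative="two-sided")

# Percentile-bootstrap test of an indirect effect
# (identity threat -> self-threat -> resistance) with 10,000 resamples,
# in the spirit of Hayes' approach (simplified: no direct-effect control).
n = 300
threat = rng.normal(3, 1, n)
self_threat = 0.5 * threat + rng.normal(0, 1, n)      # mediator
resistance = 0.4 * self_threat + rng.normal(0, 1, n)  # outcome

def slope(x, y):
    """OLS slope of y regressed on x (with intercept)."""
    return np.polyfit(x, y, 1)[0]

indirect = []
for _ in range(10_000):
    idx = rng.integers(0, n, n)                   # resample cases with replacement
    a = slope(threat[idx], self_threat[idx])      # path a: threat -> mediator
    b = slope(self_threat[idx], resistance[idx])  # path b: mediator -> outcome
    indirect.append(a * b)
lo, hi = np.percentile(indirect, [2.5, 97.5])
print(f"U={u_stat:.0f}, P={p_value:.3f}; indirect effect 95% CI [{lo:.2f}, {hi:.2f}]")
```

A confidence interval excluding zero would indicate a significant indirect effect, which is how the mediating role of self-threat is assessed.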
Ethics Approval
Ethics approval was not sought or obtained from the Ethics
Committee at the University of Mannheim, as an approval was
not required under the institution’s current ethics statute.
Results
Descriptives and Group Differences
We find that both experienced and novice physicians perceived
identity threats from the upcoming change from AI systems
(Table 3). Novice physicians showed relatively high resistance
to AI and self-threat, whereas experienced physicians showed
slightly lower resistance and self-threat from AI. The group
differences were significant for resistance (P=.005) and
self-threat (P<.001). Novices perceived equally strong threats
to their professional recognition (mean 3.05, SD 1.23) and
professional capabilities (mean 3.25, SD 0.97), whereas
experienced physicians perceived a stronger threat to their
professional capabilities (mean 2.72, SD 1.18) than to their
professional recognition (mean 2.38, SD 1.03). Group differences were statistically significant; novices experienced stronger threats to professional recognition (P=.001) and professional capabilities (P=.01) than experienced physicians.
Similarly, novices reported AI systems as slightly more
temporally distant than did experienced physicians (P=.08).
Moreover, in the sample of experienced physicians, the
experienced threats and resistance attitudes differed based on
the medical specialty (Table 4). The descriptive statistics show,
for example, that physicians in the psychiatry specialty reported
stronger threats to professional recognition than to professional
capabilities, whereas surgeons reported stronger threats to
professional capabilities than to professional recognition.
However, as the sample size was relatively small, more research
is needed to fully understand the effects of different specialties
on experienced identity threats.
Table 3. Mann–Whitney U test between novice and experienced physicians.a

                                      Novice physicians     Experienced physicians   Group differences
                                      (n=164), mean (SD)    (n=42), mean (SD)        Z value    P value
Resistance to AIb                     3.58 (0.97)           3.13 (0.97)              2.83       .005
Self-threat from AI                   2.78 (1.36)           1.92 (0.91)              3.85       <.001
Threats to professional recognition   3.05 (1.23)           2.38 (1.03)              3.36       .001
Threats to professional capabilities  3.25 (0.97)           2.72 (1.18)              2.62       .01
Perceived temporal distance of AI     2.23 (0.84)           2.03 (0.94)              1.75       .08
Familiarity with AI                   1.75 (0.89)           2.32 (0.77)              –4.28      <.001

aAll items except self-threat were measured on a 5-point Likert scale; self-threat was measured on a 6-point scale ranging from strongly disagree to strongly agree.
bAI: artificial intelligence.
Table 4. Means (SDs) by specialty of experienced physicians (n=42).

Specialty                         Self-threat    Resistance    Temporal          Threats to professional   Threats to professional
                                  from AIa       to AI         distance of AI    recognition from AI       capabilities from AI
Not specified (n=7)               2.32 (0.89)    3.71 (1.25)   1.71 (0.65)       2.48 (0.87)               3.18 (1.33)
Internal medicine (n=10)          2.05 (0.98)    3.15 (0.88)   1.83 (0.65)       2.28 (1.33)               2.62 (1.27)
General medicine (n=3)            1.92 (1.01)    2.92 (0.88)   2.78 (1.68)       3.06 (0.59)               3.57 (0.95)
Psychiatry (n=5)                  1.55 (0.82)    2.70 (0.97)   1.53 (0.61)       2.50 (0.90)               1.94 (0.73)
Pediatrics (n=5)                  2.05 (1.25)    2.80 (1.14)   2.13 (1.17)       2.97 (0.84)               3.00 (1.06)
Surgery (n=5)                     1.45 (0.45)    3.10 (1.10)   2.07 (0.64)       1.53 (0.63)               2.11 (1.44)
Anesthesiology (n=3)              1.75 (0.90)    3.58 (0.63)   1.78 (0.38)       2.94 (1.08)               2.96 (0.83)
Others such as neurology
and pathology (n=4)               1.94 (1.13)    2.81 (0.59)   3.08 (1.52)       2.40 (0.83)               2.96 (1.04)

aAI: artificial intelligence.
Regression Analyses
Testing the relationships between different types of identity
threat and resistance attitudes in the total sample (Tables 5 and
6; Multimedia Appendix 3), we found that perceived
professional identity threats directly affected resistance attitudes
and personal identity threat (self-threat). Both threats to
professional recognition (P<.001) and threats to professional
capabilities (P<.001) were significant predictors of self-threat
(model 4a, Table 6). Moreover, we found that both professional
identity threats contributed independently to resistance to
change. However, threats to professional recognition predicted
resistance only in isolation (Multimedia Appendix 2; P<.001)
but not in combination with threats to professional capabilities
(Multimedia Appendix 3, model 3b; P=.50). Hence, threats to
professional capabilities overruled the impact of threats to
professional recognition on resistance attitudes and significantly
increased resistance (Multimedia Appendix 3, model 3b;
P<.001). The findings suggest that threats to professional
recognition are more strongly related to personal identity,
whereas threats to professional capabilities are more strongly
and directly related to resistance to change.
Table 5. Descriptive statistics of the measurement model (N=206).a

| Variables | Mean (SD) | Square root of the AVEb | Resistance | Self-threat | Age | Gender | Familiarity | Temporal distance | ProCapc | ProRecd |
|---|---|---|---|---|---|---|---|---|---|---|
| Resistance | 3.49 (0.989) | 0.778 | (0.856) | | | | | | | |
| Self-threat | 2.606 (1.324) | 0.821 | 0.491f | (0.891) | | | | | | |
| Age | 27.689 (8.896) | —e | –0.131g | –0.287f | (—) | | | | | |
| Gender | 0.345 (0.476) | — | –0.073 | 0.058 | 0.107 | (—) | | | | |
| Familiarity | 1.869 (0.892) | 0.841 | –0.122g | –0.152h | 0.162h | 0.124g | (0.905) | | | |
| Temporal distance | 2.188 (0.864) | 0.673 | 0.264f | 0.306f | –0.111 | 0.111 | –0.209f | (0.713) | | |
| ProCap | 3.14 (1.037) | 0.771 | 0.529f | 0.621f | –0.132g | 0.006 | –0.054 | 0.295f | (0.910) | |
| ProRec | 2.915 (1.133) | 0.798 | 0.383f | 0.660f | –0.154h | –0.050 | –0.108 | 0.232f | 0.640f | (0.896) |

aValues in table are correlations between two variables. Values in parentheses are composite reliabilities.
bAVE: average variance extracted.
cProCap: threats to professional capabilities.
dProRec: threats to professional recognition.
eNot applicable.
fSignificance level: P<.001.
gSignificance level: P<.05.
hSignificance level: P<.01.
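The convention behind reporting the square root of the AVE alongside the correlations is the Fornell-Larcker criterion [43]: each construct's square root of AVE should exceed its correlations with every other construct. A minimal sketch checking this mechanically against the values reported in Table 5 (the dictionary keys are illustrative names, not the authors' code):

```python
# Fornell-Larcker discriminant validity check for the multi-item constructs
# in Table 5: sqrt(AVE) of each construct should exceed its correlations
# with all other constructs.
sqrt_ave = {
    "resistance": 0.778, "self_threat": 0.821, "familiarity": 0.841,
    "temporal_distance": 0.673, "pro_cap": 0.771, "pro_rec": 0.798,
}
# Absolute inter-construct correlations reported in Table 5.
correlations = {
    ("self_threat", "resistance"): 0.491,
    ("familiarity", "resistance"): 0.122,
    ("familiarity", "self_threat"): 0.152,
    ("temporal_distance", "resistance"): 0.264,
    ("temporal_distance", "self_threat"): 0.306,
    ("temporal_distance", "familiarity"): 0.209,
    ("pro_cap", "resistance"): 0.529,
    ("pro_cap", "self_threat"): 0.621,
    ("pro_cap", "familiarity"): 0.054,
    ("pro_cap", "temporal_distance"): 0.295,
    ("pro_rec", "resistance"): 0.383,
    ("pro_rec", "self_threat"): 0.660,
    ("pro_rec", "familiarity"): 0.108,
    ("pro_rec", "temporal_distance"): 0.232,
    ("pro_rec", "pro_cap"): 0.640,
}

def fornell_larcker_ok(sqrt_ave, correlations):
    """True if every construct's sqrt(AVE) exceeds all its correlations."""
    return all(r < min(sqrt_ave[a], sqrt_ave[b])
               for (a, b), r in correlations.items())

print(fornell_larcker_ok(sqrt_ave, correlations))  # → True
```

The binding case here is the ProRec-self-threat correlation (0.660), which stays below the smaller of the two diagonal values (0.798), so discriminant validity holds for all construct pairs.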
Table 6. Results of seemingly unrelated hierarchical regression analyses with self-threat and resistance to change as dependent variables (full model 4).

Model 4a with dependent variable self-threat

| Variable | Coefficient (SE; 95% CI) | Z value | P value |
|---|---|---|---|
| Step 1 (controls) | | | |
| Age | –0.034 (0.009; –0.052 to –0.016) | –3.650 | <.001 |
| Gender | –0.134 (0.134; –0.396 to 0.129) | –1.000 | .39 |
| Familiarity | –0.086 (0.072; –0.226 to 0.054) | –1.210 | .25 |
| Group (experienced and novice) | 0.317 (0.218; –0.109 to 0.744) | 1.460 | .15 |
| Step 2 (identity threats) | | | |
| ProReca | 0.503 (0.070; 0.366 to 0.641) | 7.170 | <.001 |
| ProCapb | 0.372 (0.080; 0.215 to 0.529) | 4.650 | <.001 |
| Step 3 (temporal distance of AIc, interactions) | | | |
| Temporal distance | 0.137 (0.077; –0.014 to 0.289) | 1.780 | .08 |
| Temporal distance × ProRec | 0.291 (0.078; 0.138 to 0.443) | 3.730 | <.001 |
| Temporal distance × ProCap | –0.203 (0.086; –0.372 to –0.034) | –2.350 | .02 |
| Intercept | 1.071 (0.276; 0.531 to 1.611) | 3.880 | <.001 |

Model 4b with dependent variable resistance

| Variable | Coefficient (SE; 95% CI) | Z value | P value |
|---|---|---|---|
| Step 1 (controls) | | | |
| Age | –0.001 (0.009; –0.018 to 0.016) | –0.130 | .90 |
| Gender | –0.134 (0.127; –0.382 to 0.114) | –1.060 | .29 |
| Familiarity | –0.066 (0.068; –0.198 to 0.067) | –0.970 | .33 |
| Group (experienced and novice) | –0.061 (0.206; –0.465 to 0.343) | –0.300 | .77 |
| Step 2 (identity threats) | | | |
| ProRec | 0.055 (0.066; –0.076 to 0.185) | 0.820 | .41 |
| ProCap | 0.400 (0.076; 0.251 to 0.548) | 5.270 | <.001 |
| Step 3 (temporal distance of AI, interactions) | | | |
| Temporal distance | 0.149 (0.073; 0.005 to 0.292) | 2.030 | .04 |
| Temporal distance × ProRec | 0.087 (0.074; –0.057 to 0.232) | 1.190 | .24 |
| Temporal distance × ProCap | –0.165 (0.082; –0.325 to –0.005) | –2.020 | .04 |
| Intercept | 0.237 (0.261; –0.274 to 0.748) | 0.910 | .36 |

aProRec: threats to professional recognition.
bProCap: threats to professional capabilities.
cAI: artificial intelligence.
We also analyzed how perceiving AI systems as temporally
close or distant interacted with perceived professional identity
threat. The perception of AI systems as temporally distant
interacted positively with threats to professional recognition
(P<.001) and interacted negatively with threats to professional
capabilities (P=.02) in predicting self-threat (Table 6, model
4a). In predicting resistance, the perception of AI systems as
temporally distant interacted negatively with threats to
professional capabilities (Table 6, model 4b; P=.04), whereas
the interaction with threats to professional recognition was not
significant (P=.24). Figures 1-4 show the moderating effects of
temporal distance on both dimensions of identity threat. The
findings suggest that experienced identity threats are closely
related to how temporally distant or close the technological
change from AI systems is perceived. Threats to professional
capabilities refer to more concrete and context-specific elements
of professional identity. Thus, these threats are more salient if
physicians believe that AI systems are temporally close and
relevant to clinical practice in the near future. Conversely,
threats to professional recognition require physicians to consider
their profession in a holistic way. Thus, these threats are more
salient if physicians perceive AI systems to be relevant only in
the distant future.
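A moderation analysis of this kind boils down to a regression with a product term, followed by simple slopes at low and high values of the moderator. The sketch below illustrates the mechanics on simulated data; the coefficients are loosely inspired by the sign pattern of model 4a, and the variable names are stand-ins, not the authors' data or estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 206
pro_rec = rng.normal(0, 1, n)    # threats to professional recognition (centered)
temporal = rng.normal(0, 1, n)   # perceived temporal distance of AI (centered)
# Self-threat with a positive ProRec x temporal-distance interaction,
# mimicking the sign pattern of model 4a.
self_threat = (0.50 * pro_rec + 0.14 * temporal
               + 0.29 * pro_rec * temporal + rng.normal(0, 1, n))
df = pd.DataFrame({"pro_rec": pro_rec, "temporal": temporal,
                   "self_threat": self_threat})

# "pro_rec * temporal" expands to both main effects plus their product.
model = smf.ols("self_threat ~ pro_rec * temporal", data=df).fit()

# Simple slopes of ProRec at -1 SD and +1 SD of temporal distance:
b = model.params
slope_low = b["pro_rec"] + b["pro_rec:temporal"] * (-1)
slope_high = b["pro_rec"] + b["pro_rec:temporal"] * (+1)
print(round(slope_low, 2), round(slope_high, 2))
```

With a positive interaction, the simple slope at high temporal distance exceeds the slope at low temporal distance, which is exactly the pattern the moderation plots in Figures 1-4 visualize.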
Figure 1. Moderating effect of temporal distance on the association of threats to professional recognition with self-threat. AI: artificial intelligence.
Figure 2. Moderating effect of temporal distance on the association of threats to professional recognition with resistance. AI: artificial intelligence.
Figure 3. Moderating effect of temporal distance on the association of threats to professional capabilities with self-threat. AI: artificial intelligence.
Figure 4. Moderating effect of temporal distance on the association of threats to professional capabilities with resistance. AI: artificial intelligence.
For the sample of novice physicians, we also collected data
about perceived identity enhancements through AI. We used
these data for robustness analysis to exclude the possibility that
our results regarding the effects of identity threat are biased by
hidden positive attitudes. The identity enhancement items
mirrored the wording of the identity threat items to capture any
positive expectations toward the change induced by AI systems
(Multimedia Appendix 3). The analysis shows that identity
enhancement reduced resistance to AI (P=.02) and was not
related to self-threat (P=.52). After including identity
enhancement as a control variable, the above-described effects
of perceived identity threat remained qualitatively unchanged.
This indicates that perceived professional identity threats have
a significant effect on resistance to AI over and beyond any
effects of perceived identity enhancement through AI.
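A robustness check of this kind, re-estimating the threat model with enhancement added as a control and verifying that the threat coefficient stays qualitatively unchanged, can be sketched as follows. The data are simulated and the variable names illustrative; only the procedure mirrors the analysis described above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 164  # size of the novice subsample
threat = rng.normal(0, 1, n)
enhancement = rng.normal(0, 1, n)  # independent of threat in this simulation
resistance = 0.4 * threat - 0.3 * enhancement + rng.normal(0, 1, n)
df = pd.DataFrame({"threat": threat, "enhancement": enhancement,
                   "resistance": resistance})

base = smf.ols("resistance ~ threat", data=df).fit()
robust = smf.ols("resistance ~ threat + enhancement", data=df).fit()

# Enhancement reduces resistance (negative coefficient), but the threat
# coefficient is qualitatively unchanged once enhancement is controlled for.
print(round(base.params["threat"], 2), round(robust.params["threat"], 2))
```

Because threat and enhancement are uncorrelated in this simulation, adding the control barely moves the threat coefficient, which is the "qualitatively unchanged" outcome the robustness analysis looks for.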
Finally, we conducted a content analysis of the qualitative
statements from novice physicians to validate that our measured
dimensions capture the experienced changes through AI systems.
The data set consisted of a total of 414 distinct statements by
176 participants. The content of all statements was classified
as positive or negative statements about AI systems by 2
independent coders (EJ and one student assistant; Table 7).
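When two coders independently classify the same statements, agreement is typically quantified with a chance-corrected statistic such as Cohen's kappa before reconciling discrepancies. A minimal stdlib sketch with toy labels (not the study's actual codings):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' nominal labels of the same items."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Toy example: 10 statements coded positive/negative by both coders.
a = ["neg", "neg", "pos", "pos", "neg", "pos", "neg", "neg", "pos", "neg"]
b = ["neg", "neg", "pos", "neg", "neg", "pos", "neg", "neg", "pos", "neg"]
print(round(cohens_kappa(a, b), 2))  # → 0.78
```

Here the coders agree on 9 of 10 items (90%), but kappa is lower (0.78) because some agreement is expected by chance given the skew toward negative labels.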
Table 7. Content analysis of the qualitative statements: dimensions of identity threat and categories identified in the data, with example statements from participants (survey ID).

Threats to professional recognition
- AIa replaces important knowledge tasks, leading to a loss in professional status: "Physicians appear to be less competent as patients believe that a small machine can also find solutions and solve their problem." (#98); "Physicians will be replaced by computers and will only fulfill an assistant job." (#894)
- Decreasing importance of professional knowledge and expertise: "Physicians are tempted to study less as they know that the system will have all data accessible." (#492)

Threats to professional capabilities
- Loss of professional autonomy in decision-making: "Physicians might stop to think for themselves and do not reflect on the results presented by the system which is a severe source of mistakes" (#98); "Less legal protection for the physician if he/she acts against the AI's recommendation due to experience" (#954)
- Pressuring the role of care provider: "A physician will be more someone who operates a machine than being a doctor taking care of his/her patient's individual needs." (#2426); "less patient oriented care when using technology" (#850)
- Loss of influence: "The management will then require even faster decisions which results in increased time pressure." (#719)

Perceived enhancements
- AI supports decision-making by increasing decision certainty: "AI is a relief for the physician and helps to gain security in diagnosing by providing a second opinion which either encourages the physician to reflect his diagnosis a second time or strengthens the certainty of having found the right diagnosis." (#47); "work more independently and provides them more security in their work." (#3159)
- AI supports decision-making by helping to stay up-to-date: "it is no longer necessary to know the smallest detail of every single disease." (#743); "AI stays abreast of the fast changes and developments in medical science." (#250)
- AI increases workflow efficiency: "AI saves time which can then be invested in the treatment of more patients or more personalized care." (#94)
- AI leads to better patient care: "AI increases security in diagnosing and might lead to better results. This again can increase the patient's trust towards the physician." (#2277)

aAI: artificial intelligence.
One-third of the negative statements (34/105; 32.4%) described
threats to professional recognition. The implementation of AI
systems was perceived as leading to a loss of status and prestige
for the occupational group of physicians and made participants
fear that physicians might become redundant and reduced to a
mere voice of the AI system. Moreover, participants feared that
expert knowledge would become less important as AI systems
incorporated more up-to-date knowledge than ever possible for
a human being. The statements also contained multiple threats
to professional capabilities. As such, participants feared that
physicians might lose their autonomy in decision-making as
they might trust the AI system more than appropriate, whereas
the system would perform tasks autonomously. In addition,
participants perceived that it would become more difficult to
be a care provider with AI systems in place, as these systems
would increase the distance between physicians and patients.
The participants also feared that liability issues would arise if
they disagreed with AI decisions. Conversely, 3 categories of
positive statements emerged from the content analysis. AI
systems were perceived as supporting decision-making through
reduced uncertainty and complexity in diagnostics. Moreover,
they were seen as facilitators of access to knowledge, supporting
especially novice physicians, by providing access to the newest
guidelines and empirical findings.
Discussion
Principal Findings
Our work contributes to the knowledge on the impact of AI and
the future of work in health care [53-55]. It shows that
professional identity threats from AI systems are indeed a
serious concern for novice and experienced physicians and
contribute to resistance to AI. AI systems threaten both
professional recognition and professional capabilities of medical
professionals. Threats to professional capabilities directly
contribute to resistance to AI, whereas the effect of threats to
professional recognition is mediated through self-threat.
Professional experience and perceived temporal distance of AI
systems influence the relationship between perceived identity
threats and resistance attitudes. Medical novices experience
stronger identity threats than medical professionals. In addition,
if AI systems are perceived as more relevant in the near future,
threats to professional capabilities are more profound. If,
however, AI systems are perceived as relevant in the distant
future, threats to professional recognition gain in importance.
Our findings have implications for the understanding of how
the medical professional identity changes with increasingly
powerful AI systems and how AI systems are integrated into
medical decisions. First, experienced identity threats influence
how physicians adapt their professional identity to the upcoming
change. For instance, study participants who indicated that “the
role of the physician will be more passive, since decisions will
be automated” might be less likely to choose specialties such
as radiology. This can lead to fewer physicians who actively
work with AI systems and develop the technological capabilities
to evaluate those systems. Furthermore, several participants
declared that they planned to focus on soft skills instead of
analytical decision-making skills, which would rather be
performed by an AI. Thus, instead of using AI systems as a
second opinion and engaging in elaborate decision-making,
physicians might end up delegating important tasks to AI
systems without considering them in detail.
Second, threats to the professional identity cause identity
protection responses [20] that directly impact technology use.
In health care, physicians are pivotal for developing the ground
truth for learning algorithms and for identifying relevant
explanations and predictive values. Furthermore, physicians
can make better diagnosis decisions with the support of trained
algorithms and use them as a second opinion [56]. However, if
they feel threatened in their identity, physicians are less likely
to engage in the active development and adaptation of AI
systems and resist their implementation. Moreover, identity
protection responses can lead to incorrect medical decisions
with AI systems if physicians reject AI advice as soon as it
contradicts their opinion and is perceived as threatening [51].
In particular, threats to professional capabilities play a focal
role in developing negative attitudes toward AI systems and
should, thus, be addressed through specific medical training in
interacting with AI systems.
Limitations and Future Research
This study has several limitations that can serve as a springboard
for future research. First, by using the survey method, we were
not able to capture how professional identity develops over a longer
period of time or whether medical students who perceived
stronger threats to their future from AI would switch to a
less threatened profession that requires more
interpersonal skills. In addition, it would be interesting to see
how the professional identity is affected in clinical practice
through a more intensive interaction with AI systems.
Furthermore, our study provides first insights into potential
differences in experienced identity threats across medical
specialties. Specialties such as radiology or pathology were
scarce in our sample, although those specialties often use AI in
medical practice. Consequently, a follow-up study that looks
at differences across specialties in more detail might provide
interesting insights. In addition, our sample consisted of
respondents who reported a relatively low degree of familiarity
with AI systems. This reflects the current situation in medical
education, in which medical novices are not trained in the use
of AI systems. However, whether a sample with more familiarity
would experience lower degrees of threat from AI systems needs
to be further researched. Second, as noted in the literature
[28,33,57] and underlined by the qualitative survey responses,
there are also positive appraisals of AI systems that can enhance,
rather than threaten, individuals’ identity. Given that there are
both strong positive and negative perceptions of the impact of
AI systems on the professional identity, future research should
consider the impact of ambivalence [58] on professional identity
formation and restructuring. Third, we presented an AI system
with a 90% accuracy rate. However, in clinical practice, the
accuracy rate is highly dependent on the context, that is, the
complexity of patient cases, and can be heavily disputed by
medical professionals. Furthermore, with lower perceived or
actual accuracy, physicians might develop more negative
attitudes toward the AI system. Finally, as our study examined
only 2 dependent variables, it is important to investigate how
professional identity threat from AI systems impacts other
variables, such as anxiety and long-term behaviors.
Conflicts of Interest
None declared.
Multimedia Appendix 1
Detailed overview of prior research on identity threats.
[DOCX File , 31 KB-Multimedia Appendix 1]
Multimedia Appendix 2
Overview of item development process.
[DOCX File , 38 KB-Multimedia Appendix 2]
Multimedia Appendix 3
Additional analysis details, including confirmatory factor analysis, common method bias analysis, regression analysis details for
models 2 and 3, robustness analysis with identity enhancement, and details on identity enhancement items.
[DOCX File , 41 KB-Multimedia Appendix 3]
References
1. Russell S, Norvig P. Artificial Intelligence: A Modern Approach. Upper Saddle River, NJ: Pearson Education Limited; 2010.
2. Mayo RC, Leung J. Artificial intelligence and deep learning - Radiology's next frontier? Clin Imaging 2018 May;49:87-88.
[doi: 10.1016/j.clinimag.2017.11.007] [Medline: 29161580]
3. Shen J, Zhang CJ, Jiang B, Chen J, Song J, Liu Z, et al. Artificial intelligence versus clinicians in disease diagnosis:
systematic review. JMIR Med Inform 2019 Aug 16;7(3):e10010 [FREE Full text] [doi: 10.2196/10010] [Medline: 31420959]
4. Hosny A, Parmar C, Quackenbush J, Schwartz LH, Aerts HJ. Artificial intelligence in radiology. Nat Rev Cancer 2018
Aug 17;18(8):500-510 [FREE Full text] [doi: 10.1038/s41568-018-0016-5] [Medline: 29777175]
5. Jha S, Topol EJ. Adapting to artificial intelligence: radiologists and pathologists as information specialists. JAMA 2016
Dec 13;316(22):2353-2354. [doi: 10.1001/jama.2016.17438] [Medline: 27898975]
6. Baird A, Maruping LM. The next generation of research on IS use: a theoretical framework of delegation to and from
agentic IS artifacts. MIS Q 2021;45(1):315-341. [doi: 10.25300/misq/2021/15882]
7. Willcocks L. Robo-Apocalypse cancelled? Reframing the automation and future of work debate. J Inf Technol 2020 Jun
10;35(4):286-302. [doi: 10.1177/0268396220925830]
8. Tang A, Tam R, Cadrin-Chênevert A, Guest W, Chong J, Barfett J, Canadian Association of Radiologists (CAR) Artificial
Intelligence Working Group. Canadian association of radiologists white paper on artificial intelligence in radiology. Can
Assoc Radiol J 2018 May 01;69(2):120-135 [FREE Full text] [doi: 10.1016/j.carj.2018.02.002] [Medline: 29655580]
9. Bhattacherjee A, Hikmet N. Physicians' resistance toward healthcare information technology: a theoretical model and
empirical test. Eur J Inf Syst 2017 Dec 19;16(6):725-737. [doi: 10.1057/palgrave.ejis.3000717]
10. Lapointe L, Rivard S. A multilevel model of resistance to information technology implementation. MIS Quarterly
2005;29(3):461-491. [doi: 10.2307/25148692]
11. Abdullah R, Fakieh B. Health care employees' perceptions of the use of artificial intelligence applications: survey study. J
Med Internet Res 2020 May 14;22(5):e17620 [FREE Full text] [doi: 10.2196/17620] [Medline: 32406857]
12. Oh S, Kim JH, Choi S, Lee HJ, Hong J, Kwon SH. Physician confidence in artificial intelligence: an online mobile survey.
J Med Internet Res 2019 Mar 25;21(3):e12422 [FREE Full text] [doi: 10.2196/12422] [Medline: 30907742]
13. Blease C, Locher C, Leon-Carlyle M, Doraiswamy M. Artificial intelligence and the future of psychiatry: qualitative findings
from a global physician survey. Digit Health 2020;6:2055207620968355 [FREE Full text] [doi: 10.1177/2055207620968355]
[Medline: 33194219]
14. Chreim S, Williams BE, Hinings CR. Interlevel influences on the reconstruction of professional role identity. Acad Manag
J 2007 Dec 1;50(6):1515-1539. [doi: 10.5465/amj.2007.28226248]
15. Stets JE, Burke PJ. Identity theory and social identity theory. Soc Psychol Q 2000 Sep;63(3):224-237. [doi: 10.2307/2695870]
16. Tajfel H. Social psychology of intergroup relations. Annual Rev Psychol 1982 Jan;33(1):1-39. [doi:
10.1146/annurev.ps.33.020182.000245]
17. Pratt MG, Rockmann KW, Kaufmann JB. Constructing professional identity: the role of work and identity learning cycles
in the customization of identity among medical residents. Acad Manag J 2006 Apr;49(2):235-262. [doi:
10.5465/amj.2006.20786060]
18. Hoff TJ. The physician as worker: what it means and why now? Health Care Manage Rev 2001;26(4):53-70. [doi:
10.1097/00004010-200110000-00006] [Medline: 11721310]
19. Reay T, Goodrick E, Waldorff SB, Casebeer A. Getting leopards to change their spots: co-creating a new professional role
identity. AMJ 2017 Jun;60(3):1043-1070. [doi: 10.5465/amj.2014.0802]
20. Petriglieri J. Under threat: responses to and the consequences of threats to individuals' identities. AMR 2011
Oct;36(4):641-662. [doi: 10.5465/amr.2009.0087]
21. Murtagh N, Gatersleben B, Uzzell D. Self-identity threat and resistance to change: evidence from regular travel behaviour.
J Environ Psychol 2012 Dec;32(4):318-326. [doi: 10.1016/j.jenvp.2012.05.008]
22. Seibt B, Förster J. Stereotype threat and performance: how self-stereotypes influence processing by inducing regulatory
foci. J Pers Soc Psychol 2004 Jul;87(1):38-56. [doi: 10.1037/0022-3514.87.1.38] [Medline: 15250791]
23. Campbell WK, Sedikides C. Self-threat magnifies the self-serving bias: a meta-analytic integration. Rev General Psychol
1999 Mar 01;3(1):23-43. [doi: 10.1037/1089-2680.3.1.23]
24. Sassenberg K, Sassenrath C, Fetterman AK. Threat ≠ prevention, challenge ≠ promotion: the impact of threat, challenge, and regulatory focus on attention to negative stimuli. Cogn Emot 2015;29(1):188-195. [doi: 10.1080/02699931.2014.898612]
[Medline: 24650166]
25. Stein M, Newell S, Wagner EL, Galliers RD. Coping with information technology: mixed emotions, vacillation, and
nonconforming use patterns. MIS Q 2015 Feb 2;39(2):367-392. [doi: 10.25300/misq/2015/39.2.05]
26. Craig K, Thatcher J, Grover V. The IT identity threat: a conceptual definition and operational measure. J Manag Inf Syst
2019 Mar 31;36(1):259-288. [doi: 10.1080/07421222.2018.1550561]
27. Kyratsis Y, Atun R, Phillips N, Tracey P, George G. Health systems in transition: professional identity work in the context
of shifting institutional logics. Acad Manag J 2017 Apr;60(2):610-641. [doi: 10.5465/amj.2013.0684]
28. Mishra AN, Anderson C, Angst CM, Agarwal R. Electronic health records assimilation and physician identity evolution:
an identity theory perspective. Inf Syst Res 2012 Sep;23(3-part-1):738-760. [doi: 10.1287/isre.1110.0407]
29. Walter Z, Lopez MS. Physician acceptance of information technologies: role of perceived threat to professional autonomy.
Decision Support Syst 2008 Dec;46(1):206-215. [doi: 10.1016/j.dss.2008.06.004]
30. Esmaeilzadeh P, Sambasivan M, Kumar N, Nezakati H. Adoption of clinical decision support systems in a developing
country: antecedents and outcomes of physician's threat to perceived professional autonomy. Int J Med Inform
2015;84(8):548-560. [doi: 10.1016/j.ijmedinf.2015.03.007] [Medline: 25920928]
31. Doolin B. Power and resistance in the implementation of a medical management information system. Inform Syst J 2004
Oct;14(4):343-362. [doi: 10.1111/j.1365-2575.2004.00176.x]
32. Nach H. Identity under challenge. Manag Res Rev 2015;38(7):703-725. [doi: 10.1108/MRR-02-2014-0031]
33. Jensen TB, Aanestad M. Hospitality and hostility in hospitals: a case study of an EPR adoption among surgeons. Eur J Inf
Syst 2017;16(6):672-680. [doi: 10.1057/palgrave.ejis.3000713]
34. Korica M, Molloy E. Making sense of professional identities: stories of medical professionals and new technologies. Human
Relations 2010 Sep 23;63(12):1879-1901. [doi: 10.1177/0018726710367441]
35. Zimmermann A, Ravishankar M. Collaborative IT offshoring relationships and professional role identities: reflections from
a field study. J Vocational Behav 2011 Jun;78(3):351-360. [doi: 10.1016/j.jvb.2011.03.016]
36. Luguri JB, Napier JL. Of two minds: the interactive effect of construal level and identity on political polarization. J
Experimental Soc Psychol 2013 Nov;49(6):972-977. [doi: 10.1016/j.jesp.2013.06.002]
37. MacKenzie SB, Podsakoff PM, Podsakoff NP. Construct measurement and validation procedures in MIS and behavioral
research: integrating new and existing techniques. MIS Q 2011;35(2):293-334. [doi: 10.2307/23044045]
38. Willis GB, Royston P, Bercini D. The use of verbal report methods in the development and testing of survey questionnaires.
Appl Cognit Psychol 1991 May;5(3):251-267. [doi: 10.1002/acp.2350050307]
39. Fisher RJ. Social desirability bias and the validity of indirect questioning. J Consum Res 1993 Sep;20(2):303-315. [doi:
10.1086/209351]
40. Bick M, Kummer T, Ryschka S. Determining anxieties in relation to ambient intelligence—explorative findings from
hospital settings. Inf Syst Manag 2015 Jan 08;32(1):60-71. [doi: 10.1080/10580530.2015.983021]
41. Moore GC, Benbasat I. Development of an instrument to measure the perceptions of adopting an information technology
innovation. Inf Syst Res 1991 Sep;2(3):192-222. [doi: 10.1287/isre.2.3.192]
42. Jarvis C, MacKenzie S, Podsakoff P. A critical review of construct indicators and measurement model misspecification in
marketing and consumer research. J Consum Res 2003 Sep;30(2):199-218. [doi: 10.1086/376806]
43. Fornell C, Larcker DF. Evaluating structural equation models with unobservable variables and measurement error. J Market
Res 2018 Nov 28;18(1):39-50. [doi: 10.1177/002224378101800104]
44. van der Heijden H. User acceptance of hedonic information systems. MIS Quarterly 2004;28(4):695-704. [doi:
10.2307/25148660]
45. Iacobucci D. Structural equations modeling: fit Indices, sample size, and advanced topics. J Consumer Psychol 2010
Jan;20(1):90-98. [doi: 10.1016/j.jcps.2009.09.003]
46. Straub D, Gefen D. Validation Guidelines for IS Positivist Research. CAIS 2004;13:380-427. [doi: 10.17705/1CAIS.01324]
47. Podsakoff PM, MacKenzie SB, Lee J, Podsakoff NP. Common method biases in behavioral research: a critical review of
the literature and recommended remedies. J Appl Psychol 2003;88(5):879-903. [doi: 10.1037/0021-9010.88.5.879] [Medline:
14516251]
48. Ho SY, Bodoff D. The effects of web personalization on user attitude and behavior: an integration of the elaboration
likelihood model and consumer search theory. MIS Q 2014 Feb 2;38(2):497-520. [doi: 10.25300/misq/2014/38.2.08]
49. Liang H, Saraf N, Hu Q, Xue Y. Assimilation of enterprise systems: the effect of institutional pressures and the mediating
role of top management. MIS Quarterly 2007;31(1):59. [doi: 10.2307/25148781]
50. Venkatesh V, Morris MG, Davis GB, Davis FD. User acceptance of information technology: toward a unified view. MIS
Quarterly 2003;27(3):425. [doi: 10.2307/30036540]
51. Jussupow E, Spohrer K, Heinzl A, Gawlitza J. Augmenting medical diagnosis decisions? An investigation into physicians’
decision-making process with artificial intelligence. Inf Syst Res 2021 Sep;32(3):713-735. [doi: 10.1287/isre.2020.0980]
52. Hayes AF. Introduction to Mediation, Moderation, and Conditional Process Analysis: A Regression-Based Approach. New
York, USA: Guilford; 2013.
53. Verghese A, Shah NH, Harrington RA. What this computer needs is a physician: humanism and artificial intelligence.
JAMA 2018 Jan 02;319(1):19-20. [doi: 10.1001/jama.2017.19198] [Medline: 29261830]
54. Fazal MI, Patel ME, Tye J, Gupta Y. The past, present and future role of artificial intelligence in imaging. Eur J Radiol
2018;105:246-250. [doi: 10.1016/j.ejrad.2018.06.020] [Medline: 30017288]
55. Fichman RG, Kohli R, Krishnan R. The role of information systems in healthcare: current research and future trends. Inf
Syst Res 2011;22(3):419-428. [doi: 10.1287/isre.1110.0382]
56. Cheng J, Ni D, Chou Y, Qin J, Tiu C, Chang Y, et al. Computer-aided diagnosis with deep learning architecture: applications
to breast lesions in us images and pulmonary nodules in CT scans. Sci Rep 2016 Apr 15;6(1):24454 [FREE Full text] [doi:
10.1038/srep24454] [Medline: 27079888]
57. Stein M, Galliers RD, Markus ML. Towards an understanding of identity and technology in the workplace. J Inf Technol
2013 Sep 01;28(3):167-182. [doi: 10.1057/jit.2012.32]
58. Piderit SK. Rethinking resistance and recognizing ambivalence: a multidimensional view of attitudes toward an organizational
change. Acad Manage Rev 2000 Oct;25(4):783-794. [doi: 10.5465/amr.2000.3707722]
Abbreviations
AI: artificial intelligence
Edited by A Mavragani; submitted 13.03.21; peer-reviewed by A Baird, F Alvarez-Lopez; comments to author 10.05.21; revised
version received 27.05.21; accepted 03.01.22; published 23.03.22
Please cite as:
Jussupow E, Spohrer K, Heinzl A
Identity Threats as a Reason for Resistance to Artificial Intelligence: Survey Study With Medical Students and Professionals
JMIR Form Res 2022;6(3):e28750
URL: https://formative.jmir.org/2022/3/e28750
doi: 10.2196/28750
PMID:
©Ekaterina Jussupow, Kai Spohrer, Armin Heinzl. Originally published in JMIR Formative Research (https://formative.jmir.org),
23.03.2022. This is an open-access article distributed under the terms of the Creative Commons Attribution License
(https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium,
provided the original work, first published in JMIR Formative Research, is properly cited. The complete bibliographic information,
a link to the original publication on https://formative.jmir.org, as well as this copyright and license information must be included.
ResearchGate has not been able to resolve any citations for this publication.
Article
Full-text available
Background The potential for machine learning to disrupt the medical profession is the subject of ongoing debate within biomedical informatics. Objective This study aimed to explore psychiatrists’ opinions about the potential impact innovations in artificial intelligence and machine learning on psychiatric practice Methods In Spring 2019, we conducted a web-based survey of 791 psychiatrists from 22 countries worldwide. The survey measured opinions about the likelihood future technology would fully replace physicians in performing ten key psychiatric tasks. This study involved qualitative descriptive analysis of written responses (“comments”) to three open-ended questions in the survey. Results Comments were classified into four major categories in relation to the impact of future technology on: (1) patient-psychiatrist interactions; (2) the quality of patient medical care; (3) the profession of psychiatry; and (4) health systems. Overwhelmingly, psychiatrists were skeptical that technology could replace human empathy. Many predicted that ‘man and machine’ would increasingly collaborate in undertaking clinical decisions, with mixed opinions about the benefits and harms of such an arrangement. Participants were optimistic that technology might improve efficiencies and access to care, and reduce costs. Ethical and regulatory considerations received limited attention. Conclusions This study presents timely information on psychiatrists’ views about the scope of artificial intelligence and machine learning on psychiatric practice. Psychiatrists expressed divergent views about the value and impact of future technology with worrying omissions about practice guidelines, and ethical and regulatory issues.
Article
Background: The advancement of health care information technology and the emergence of artificial intelligence have yielded tools to improve the quality of various health care processes. Few studies have investigated employee perceptions of artificial intelligence implementation in Saudi Arabia and the Arab world. In addition, limited studies have investigated the effect of employee knowledge and job title on the perception of artificial intelligence implementation in the workplace. Objective: The aim of this study was to explore health care employee perceptions and attitudes toward the implementation of artificial intelligence technologies in health care institutions in Saudi Arabia. Methods: An online questionnaire was published, and responses were collected from 250 employees, including doctors, nurses, and technicians at 4 of the largest hospitals in Riyadh, Saudi Arabia. Results: The results of this study showed that respondents feared artificial intelligence would replace employees (mean score 3.11 of 4) and had a general lack of knowledge regarding artificial intelligence. In addition, most respondents were unaware of the advantages and most common challenges of artificial intelligence applications in the health sector, indicating a need for training. The results also showed that technicians were the most frequently impacted by artificial intelligence applications due to the nature of their jobs, which do not require much direct human interaction. Conclusions: The Saudi health care sector presents an advantageous market potential that should be attractive to researchers and developers of artificial intelligence solutions.
Article
Artificial intelligence (AI) is rapidly moving from an experimental phase to an implementation phase in many fields, including medicine. The combination of improved availability of large datasets, increasing computing power, and advances in learning algorithms has created major performance breakthroughs in the development of AI applications. In the last 5 years, AI techniques known as deep learning have delivered rapidly improving performance in image recognition, caption generation, and speech recognition. Radiology, in particular, is a prime candidate for early adoption of these techniques. It is anticipated that the implementation of AI in radiology over the next decade will significantly improve the quality, value, and depth of radiology's contribution to patient care and population health, and will revolutionize radiologists' workflows. The Canadian Association of Radiologists (CAR) is the national voice of radiology committed to promoting the highest standards in patient-centered imaging, lifelong learning, and research. The CAR has created an AI working group with the mandate to discuss and deliberate on practice, policy, and patient care issues related to the introduction and implementation of AI in imaging. This white paper provides recommendations for the CAR derived from deliberations between members of the AI working group. This white paper on AI in radiology will inform CAR members and policymakers on key terminology, educational needs of members, research and development, partnerships, potential clinical applications, implementation, structure and governance, role of radiologists, and potential impact of AI on radiology in Canada.
Article
Systems based on artificial intelligence (AI) increasingly support physicians in diagnostic decisions, but they are not without errors and biases. Failure to detect those may result in wrong diagnoses and medical errors. Compared with rule-based systems, however, these systems are less transparent and their errors less predictable. Thus, it is difficult, yet critical, for physicians to carefully evaluate AI advice. This study uncovers the cognitive challenges that medical decision makers face when they receive potentially incorrect advice from AI-based diagnosis systems and must decide whether to follow or reject it. In experiments with 68 novice and 12 experienced physicians, novice physicians with and without clinical experience as well as experienced radiologists made more inaccurate diagnosis decisions when provided with incorrect AI advice than without advice at all. We elicit five decision-making patterns and show that wrong diagnostic decisions often result from shortcomings in utilizing metacognitions related to decision makers’ own reasoning (self-monitoring) and metacognitions related to the AI-based system (system monitoring). As a result, physicians fall for decisions based on beliefs rather than actual data or engage in unsuitably superficial evaluation of the AI advice. Our study has implications for the training of physicians and spotlights the crucial role of human actors in compensating for AI errors.
Article
Robotics and the automation of knowledge work, often referred to as AI (artificial intelligence), are presented in the media as likely to have massive impacts, for better or worse, on jobs, skills, organizations, and society. The article deconstructs the dominant hype-and-fear narrative. Claims of net job loss emerge as exaggerated, but there will be considerable skills disruption and change in the major global economies over the next 12 years. The term AI has been hijacked in order to suggest much more going on technologically than can be the case. The article critically reviews the research evidence so far, including the author's own, pointing to eight major qualifiers to the dominant discourse of major net job loss from a seamless, overwhelming AI wave sweeping fast through the major economies. The article questions many assumptions: that automation creates few jobs short or long term; that whole jobs can be automated; that the technology is perfectible; that organizations can seamlessly and quickly deploy AI; that humans are machines that can be replicated; and that it is politically, socially, and economically feasible to apply these technologies. A major omission in all studies is factoring in dramatic increases in the amount of work to be done. Adding in ageing populations, productivity gaps, and skills shortages predicted across many G20 countries, the danger might be too little, rather than too much, labour. The article concludes that, if there is going to be a Robo-Apocalypse, this will be from a collective failure to adjust to skills change over the next 12 years. But the debate needs to be widened to the impact of eight other technologies that AI insufficiently represents in the popular imagination and that, in combination, could cause a techno-apocalypse.
Article
Background: Artificial intelligence (AI) has been extensively used in a range of medical fields to promote therapeutic development. The development of diverse AI techniques has also contributed to early detection, disease diagnosis, and referral management. However, concerns about the value of advanced AI in disease diagnosis have been raised by health care professionals, medical service providers, and health policy decision makers. Objective: This review aimed to systematically examine the literature, in particular focusing on the performance comparison between advanced AI and human clinicians, to provide an up-to-date summary regarding the extent of the application of AI to disease diagnosis. By doing so, this review discussed the relationship between current advanced AI development and clinicians with respect to disease diagnosis, and thus therapeutic development in the long run. Methods: We systematically searched articles published between January 2000 and March 2019, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, in the following databases: Scopus, PubMed, CINAHL, Web of Science, and the Cochrane Library. According to the preset inclusion and exclusion criteria, only articles comparing the medical performance of advanced AI and human experts were considered. Results: A total of 9 articles were identified. A convolutional neural network was the most commonly applied advanced AI technology. Owing to the variation in medical fields, there is a distinction between individual studies in terms of classification, labeling, training process, dataset size, and algorithm validation of AI. Performance indices reported in the articles included diagnostic accuracy, weighted errors, false-positive rate, sensitivity, specificity, and the area under the receiver operating characteristic curve. The results showed that the performance of AI was on par with that of clinicians and exceeded that of clinicians with less experience.
Conclusions: Current AI development has a diagnostic performance that is comparable with that of medical experts, especially in image recognition-related fields. Further studies can be extended to other types of medical imaging, such as magnetic resonance imaging, and to other medical practices unrelated to images. With the continued development of AI-assisted technologies, the clinical implications underpinned by clinicians' experience and guided by patient-centered health care principles should be constantly considered in future AI-related and other technology-based medical research.
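The review above lists diagnostic accuracy, sensitivity, specificity, false-positive rate, and the area under the receiver operating characteristic curve (AUC) as its performance indices. As an illustrative sketch (not code from any of the reviewed studies), these indices can be computed from binary labels and model scores as follows; the function name and threshold are assumptions for the example:

```python
def diagnostic_metrics(y_true, y_score, threshold=0.5):
    """Compute accuracy, sensitivity, specificity, false-positive rate,
    and AUC from binary labels (0/1) and classifier scores."""
    # Binarize scores at the chosen decision threshold.
    y_pred = [1 if s >= threshold else 0 for s in y_score]

    # Confusion-matrix counts.
    tp = sum(1 for p, t in zip(y_pred, y_true) if p == 1 and t == 1)
    tn = sum(1 for p, t in zip(y_pred, y_true) if p == 0 and t == 0)
    fp = sum(1 for p, t in zip(y_pred, y_true) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(y_pred, y_true) if p == 0 and t == 1)

    sensitivity = tp / (tp + fn)   # true-positive rate (recall)
    specificity = tn / (tn + fp)   # true-negative rate
    accuracy = (tp + tn) / len(y_true)
    fpr = 1 - specificity          # false-positive rate

    # AUC via the Mann-Whitney formulation: the probability that a
    # randomly chosen positive case scores higher than a randomly
    # chosen negative case (ties count as 0.5).
    pos = [s for s, t in zip(y_score, y_true) if t == 1]
    neg = [s for s, t in zip(y_score, y_true) if t == 0]
    pairs = [(p, n) for p in pos for n in neg]
    auc = sum(1.0 if p > n else 0.5 if p == n else 0.0
              for p, n in pairs) / len(pairs)

    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "fpr": fpr, "auc": auc}
```

Note that sensitivity, specificity, and accuracy depend on the chosen threshold, whereas AUC summarizes performance across all thresholds, which is why the reviewed studies report it separately.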
Article
As individuals’ relationships with information technology (IT) grow more complex and personal, our understanding of the problem of resistance to IT continues to evolve. Current approaches to resistance are based on perceived threats to work tasks and social structure. This work enhances our understanding of resistance by developing a definition and measure of the IT Identity Threat, a new construct that integrates social, task-related, and personal factors of resistance. Grounded in identity theory, the IT Identity Threat offers a parsimonious means to explain and predict IT resistance behaviors. Using data from two independent studies conducted among students and faculty at a large university in the Southeastern United States, we validate an operational measure of IT Identity Threat as a second-order construct and demonstrate that it successfully predicts resistance to IT. Our findings provide support for the IT Identity Threat construct as a simple mechanism to study resistance to IT.
Article
Artificial intelligence (AI) is already widely employed in various medical roles, and ongoing technological advances are encouraging more widespread use of AI in imaging. This is partly driven by the recognition of the significant frequency and clinical impact of human errors in radiology reporting, and by the promise that AI can help improve the reliability as well as the efficiency of imaging interpretation. AI in imaging was first envisioned in the 1960s, but initial attempts were limited by the technology of the day. It was the introduction of artificial neural networks and AI-based computer-aided detection (CAD) software in the 1980s that marked the advent of widespread integration of AI within radiology reporting. CAD is now routinely used in mammography, with consistent evidence of equivalent or improved lesion detection and small increases in recall rates. Significant false-positive rates remain a limitation for CAD, although these have markedly improved in the last decade. Other challenges include the difficulty clinicians encounter in trying to understand the reasoning of an AI system, which may limit their confidence in its advice, and a question mark hangs over who should be liable if CAD makes an error. The future integration of CAD with PACS promises the development of more comprehensively intelligent systems that can identify multiple, challenging diagnoses, and a move towards more individualised patient outcome predictions based upon AI analysis.
Article
Artificial intelligence (AI) algorithms, particularly deep learning, have demonstrated remarkable progress in image-recognition tasks. Methods ranging from convolutional neural networks to variational autoencoders have found myriad applications in the medical image analysis field, propelling it forward at a rapid pace. Historically, in radiology practice, trained physicians visually assessed medical images for the detection, characterization and monitoring of diseases. AI methods excel at automatically recognizing complex patterns in imaging data and providing quantitative, rather than qualitative, assessments of radiographic characteristics. In this Opinion article, we establish a general understanding of AI methods, particularly those pertaining to image-based tasks. We explore how these methods could impact multiple facets of radiology, with a general focus on applications in oncology, and demonstrate ways in which these methods are advancing the field. Finally, we discuss the challenges facing clinical implementation and provide our perspective on how the domain could be advanced. Full text: https://rdcu.be/O1xz
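The convolutional neural networks discussed above build their pattern recognition out of one simple local operation. As a minimal sketch (purely illustrative, not drawn from the article), the following pure-Python "valid" 2D convolution, implemented as cross-correlation as in most deep-learning libraries, slides a small kernel over an image and sums element-wise products; this is how a CNN layer detects local features such as edges in imaging data:

```python
def conv2d(image, kernel):
    """'Valid' 2D cross-correlation: slide the kernel over the image
    (lists of lists of numbers) and return the response map."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            # Sum of element-wise products over the kernel window.
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out
```

For example, the kernel `[[1, -1]]` responds only where horizontally adjacent pixels differ, i.e. at vertical edges; a trained CNN learns many such kernels (and stacks the resulting response maps through further layers) rather than having them hand-designed.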