Generative AI in the Software Engineering Domain: Tensions of Occupational Identity and Patterns of Identity Protection
Anuschka Schmitt
a.schmitt2@lse.ac.uk
The London School of Economics and
Political Science
London, United Kingdom
Krzysztof Z. Gajos
kgajos@seas.harvard.edu
Harvard University
Cambridge, USA
Osnat Mokryn
omokryn@is.haifa.ac.il
University of Haifa
Israel
ABSTRACT
The adoption of generative Artificial Intelligence (GAI) in organizational settings calls into question workers’ roles, and relatedly, the implications for their long-term skill development and domain expertise. In our qualitative study in the software engineering domain, we build on the theoretical lenses of occupational identity and self-determination theory to understand how and why software engineers make sense of GAI for their work. We find that engineers’ sense-making is contingent on domain expertise, as juniors and seniors felt their needs for competence, autonomy, and relatedness to be differently impacted by GAI. We shed light on the importance of the individual’s role in preserving tacit domain knowledge as engineers engaged in sense-making that protected their occupational identity. We illustrate how organizations play an active role in shaping workers’ sense-making process and propose design guidelines on how organizations and system designers can facilitate the impact of technological change on workers’ occupational identity.
CCS CONCEPTS
• Human-centered computing → HCI theory, concepts, and models; • Human-centered computing → Empirical studies in HCI.
KEYWORDS
knowledge work, software engineering, occupational identity, agency,
control, generative AI
ACM Reference Format:
Anuschka Schmitt, Krzysztof Z. Gajos, and Osnat Mokryn. 2024. Generative
AI in the Software Engineering Domain: Tensions of Occupational Identity
and Patterns of Identity Protection. In ., 16 pages.
1 INTRODUCTION
The widespread availability of Generative Artificial Intelligence (GAI)-based models capable of performing highly complex tasks is expected to elicit a paradigm shift in the workforce [3, 26, 28].
However, the exact nature of GAI’s impact on the workforce is under debate. While some predictions go as far as suggesting the replacement of knowledge-intensive jobs in the near future [34, 39], GAI also offers the potential to promote equality and enable unparalleled productivity gains by complementing workers’ skills and expertise [2, 3, 32]. Empirical research found the augmentation of human work through technology to increase efficiency and productivity, e.g., for taxi drivers optimizing their route planning with AI [38], or novice customer support agents handling customer queries [18]. Especially for knowledge-intensive work, GAI has become relevant, e.g., in terms of saving time for creative [32], legal [42], and consulting tasks [23]. However, predicted productivity gains and workforce replacements introduce novel challenges around learning and the preservation of tacit domain knowledge in the workplace.
Scholars in Human-Computer Interaction (HCI) have emphasized the importance of considering worker perspectives in contexts prone to technological changes [41, 61]. As GAI increasingly defines, manages, or even replaces what workers do, it changes not only the nature of work processes and individual tasks. More so, it disrupts how workers make sense of their work and how they view themselves. Taking into perspective the individual perceptions of workers becomes particularly important in consideration of ‘the last mile problem’: while performance gains through novel technology are in theory possible, the adoption and appropriate use of such technology hinge on the individual [6, 11].
However, how workers perceive GAI to impact their role and work is not fully understood. A recent study with young professionals showed that some felt that working with GAI enhanced their sense of competence and autonomy, while others experienced a diminished sense of ownership and perceived a lack of challenge [43].
Leveraging the theoretical lenses of occupational identity and self-determination theory (SDT) enables us to explore the impact of GAI in the workplace from a fresh yet crucial perspective. Through their occupational identity, workers make sense of their occupation and their own role, providing workers with meaning and a sense of distinctiveness [5, 67]. According to SDT, satisfying the three psychological needs for competence, autonomy, and relatedness is key to workers’ sense-making process and intrinsic motivation [21, 22, 55, 69]. While extant findings shed light on the ontological feasibility of augmenting human work through GAI, it is unclear how this paradigm shift is experienced and made sense of by affected workers. In this study, we explore the effect of GAI on software engineers’ sense-making of their occupational identity by addressing the following two research questions (RQ):
RQ1: How does GAI affect software engineers’ need for competence, autonomy, and relatedness?
RQ2: Which underlying individual and organizational factors can help explain software engineers’ sense-making of GAI?
To address our research questions, we conducted a qualitative
study of engineers in a middle-sized software organization in the
arXiv:2410.03571v1 [cs.HC] 4 Oct 2024
United States. The context of our study enables us to explore a work domain where the use of GAI, theoretically, can influence the nature and outcome of tasks and occupations, yet where the workers currently choose whether to use GAI or not. Leveraging survey data and semi-structured interviews, we study engineers’ sense-making of GAI for their occupational identity, and its impact on work-related psychological needs.
We show that software engineers varied in their reactions to GAI. We consider software engineers’ varying levels of domain expertise as an important factor in shaping engineers’ sense-making process of GAI. We find that (i) GAI impacts software engineers’ need for competence, autonomy, and relatedness and that (ii) threats toward these three needs are experienced differently by junior and senior software engineers. In response to these threats, we unpack prevalent patterns of how software engineers protect their existing occupational identity, and thereby also preserve their skill development. Lastly, we identify how organizational measures and latent messages by management implicitly moderate workers’ sense-making process of their occupational identity.
Addressing our research questions is relevant for three key reasons. First, we contribute to occupational identity literature [12, 66, 68] by shedding light on how GAI as a new technology impacts software engineers’ occupational identity. If highly skilled knowledge work is prone to automation and augmentation, it is crucial to understand how knowledge workers themselves make sense of the advent of GAI for their occupation. Our findings challenge prevalent assumptions of how the introduction of GAI in the workplace will lead to a de-skilling of knowledge workers and a loss of tacit domain expertise. We shed light on the importance of the individual’s role in preserving tacit domain knowledge as workers engage in sense-making that protects their occupational identity. Second, we show that workers’ sense-making of their occupational identity is contingent on domain expertise. We find that juniors’ and seniors’ psychological needs can help explain the underlying reasons for engineers’ different reactions to GAI. Third, the introduction of GAI into work raises the question of how we should envision desirable and sustainable forms of technological change in the workplace, and AI augmentation, specifically [2, 3, 50]. We contribute to this discussion by demonstrating the importance of the organization’s role in sustaining knowledge workers’ agency in the form of (i) enacting accountability and responsibility as a senior knowledge worker, and (ii) exercising ownership to ensure skill development as a junior knowledge worker.
2 RELATED WORK
2.1 Knowledge Work, Occupational Identity and the Importance of Self-Determination
A body of work in organizational and information systems studies, as well as in CSCW and HCI, has looked at the introduction of digital technology in the workplace and its effect on knowledge work, specifically [6, 31, 40, 62, 70]. Knowledge work is distinguishable from other forms of work by requiring the application of theoretical, analytical, and tacit knowledge, and is thus highly contextualized and domain-specific [4, 8, 24, 52, 60]. Programmers, analysts, and researchers are common examples of knowledge workers [60].
Occupational identity can be understood as a worker’s identity and is “the overlap of ‘who we are’ and ‘what we do’” [48]. People rely on their occupational identity to shape and communicate their understanding of what their occupation is and what its members do. Occupational identity delivers a sense of coherence and distinctiveness, often in relation to other occupations [66, 68]. Several studies have noted the importance of knowledge work in shaping workers’ understanding of self, and in giving meaning to their work [17, 37]. Next to positive implications for an individual’s well-being, a strong occupational identity offers important organization-level implications. Beyond continuous learning and teaching on the job, productivity through knowledge work is contingent on a sense of feeling responsible for one’s contributions [4]. This is where the importance of workers’ self-determination comes into play.
According to self-determination theory (SDT), the satisfaction of the three psychological needs of competence, autonomy, and relatedness is key to helping explain workers’ intrinsic motivation and, relatedly, their well-being and further important work outcomes [21, 22]. The need for competence pertains to an individual’s perceived efficacy of doing their work [55]. Workers’ need for autonomy is related to their desire to be “agents of their own behavior” rather than feeling externally steered [50]. Lastly, the need for relatedness refers to workers’ need to feel connected with colleagues, supervisors, and the organization as a whole [21]. SDT literature has explored how the fulfillment of these three needs has been associated with greater job satisfaction, organizational commitment, and more proactive efforts in crafting one’s job [22, 55, 56, 63, 69]. With the introduction of new digital technology in the workplace, however, workers’ psychological needs may not be satisfied consistently and may even be challenged [29]. SDT has proven useful in understanding the why of behavior related to identity work [65]. We therefore consider the needs for competence, autonomy, and relatedness in understanding and explaining software engineers’ sense-making of their occupational identity in relation to GAI. We view our work as complementing and extending SDT: while the main model of this theory focuses on workers’ motivation, we focus on its conceptualization and understanding of competence, autonomy, and relatedness as key to a worker’s identity.
2.2 Technology as a Trigger for Identity Work
Shifting fulfillment of the key psychological needs of SDT and the resulting process of identity work can be conditioned by intra- and extra-organizational influences. As such, occupational identity is a dynamic and ever-evolving process [12, 66, 68]. Through identity work, we change the meaning of who we are in the workplace. This can result in us maintaining or strengthening our existing occupational identity, but also forming and revising our occupational identity [66].
Organizational influence can exacerbate or diminish how workers make sense of their occupational identity. An organization can unconsciously control how workers’ identity work is “triggered” or even exercise control by explicitly triggering workers’ identity work. This enactment of identity work can be referred to as “modes of identity work regulation” [5]. As such, organizations may purposefully induce identity work, e.g., through events or job promotions. But identity work can also occur unexpectedly, such as through the informal adoption of a new digital technology [5].
Literature has illustrated how technology can play a determining role in workers’ occupational identity [9, 20, 44, 49, 51, 64] by serving as an identity referent [53, 68] or by being viewed as a form of an extended self [49, 51]. Technology might also threaten one’s occupational identity as it might render an occupation obsolete [7], e.g., if a programming language a software engineer gained expertise in is replaced [68]. Some studies also illustrate how digital technologies can lead to novel occupational identities as workers frame their sense of self in relation to the IT they are using or explicitly refraining from using [64]. In a study exploring the impact of the internet [48], workers substituted internet search for previous search practices and extended aspects of internet search to other practices, ultimately redefining their occupational identity by leveraging a technology that was claimed to replace them. However, some workers missed innovation opportunities because of their deep knowledge of non-internet research. The authors coined this phenomenon a “paradox of expertise”, positing that workers with rich domain expertise may not necessarily be best positioned to leverage new technologies. This raises important questions, such as whether identity work is contingent on domain experience.
The aforementioned studies illustrate that occupations are deeply affected by digital technologies as knowledge workers do or do not use these technologies to do their work, and demonstrate that workers may need to engage in identity work [9, 44].
2.3 GAI as a Digital Technology Augmenting Knowledge Work
AI, and more recently, GAI specifically, challenge what digital technology means for occupations and their knowledge [18, 23]. Due to its multi-modal, generative, and cross-domain applicability, GAI’s economic potential has been explored in several knowledge-intensive domains such as consulting [23], customer services [18], and product development [14]. Exploring knowledge workers’ anticipations and expectations of GAI, Woodruff et al. [71] found that workers expected to outsource mundane tasks such as note-taking without forgoing any control, while rejecting predictions of workforce automation through GAI. Similarly, workers in a large international technology company reported the use of GAI for supporting activities, such as creating work documents, generating new ideas, finding information, and improving their writing [15]. While the incorporation of AI might be initially aimed at supporting workers and enhancing productivity, a nascent research stream also points towards the unintended and unconsidered consequences of GAI. Power dynamics within organizations might shift as the introduction of AI in the workplace can result in heightened managerial oversight and a reduced valuation of the workers’ practices [46].
GAI, like previous digital technologies, can trigger transformations in not only what workers do but in how workers perceive their roles and their work. Opacity and complexity of new technology can deprive workers of the ability to understand and master the technology they are relying on as part of their work [6, 72]. These trends are expected to be reinforced through GAI, as it might feel to workers as if AI is executing work on their behalf. Literature illustrates the importance of studying workers’ perspectives and the dynamics between workers within an organization to understand the transformations digital technologies can trigger, yet it is unclear how exactly GAI affects knowledge workers’ occupational identity and which underlying factors can help explain workers’ potentially varied reactions towards GAI.
3 RESEARCH CONTEXT AND SITE
This study seeks to understand software engineers’ sense-making of how GAI affects their occupational identity and their psychological needs. The case of software engineering is particularly compelling for two reasons. First, software engineering is a typical example of knowledge work where workers gain and apply analytical domain knowledge through formal training and extensive practice of, e.g., coding. Second, workers’ tacit and rich knowledge in the specific domain of software engineering enables them to potentially better understand the capabilities and limitations of GAI compared to knowledge workers from other, non-computational domains.
To generate a grounded understanding of how GAI impacts knowledge work in the software engineering domain, we conducted an abductive study through survey data collection and semi-structured interviews with engineers at a software organization, which we refer to as SoftCloud [36, 59]. SoftCloud runs a monitoring platform for on-premises and cloud-based applications, thereby offering its clients services around forecasting, anomaly detection, and enterprise IT maintenance.
Team members work together in preventing, identifying, and resolving issues clients have with their cloud IT systems and services. Junior engineers predominantly write scripts for debugging tickets or code user interface (UI) components for the monitoring platform. Senior engineers engage with the juniors by reviewing code and through team meetings, yet also engage in their own coding and research activities, e.g., by prototyping new features.
The way in which coding tasks are performed in software engineering has evolved considerably through increasingly sophisticated technology and, more recently, through the commercial availability of GAI. GAI can potentially subsume many of the repetitive coding tasks historically performed by junior engineers. Further, while monitoring tasks still rely on the same vendors and their platforms, GAI, such as ChatGPT, can quickly retrieve and summarize such vendors’ documentation. Because software engineers require specific domain knowledge yet can potentially automate or outsource parts of their work with GAI, we propose that software engineering offers an ideal setting for theory building about how knowledge workers come to understand and rely on GAI, and with what underlying reasons and consequences.
3.1 Data Collection
This study was approved by an institutional review board at X university¹. Data collection was conducted remotely via online surveys and Zoom due to geographical dispersion and to ensure participants’ privacy. For survey participation, participants were asked for written consent. For all interviews, the first author asked for verbal consent for participation, audio recording, and taking notes. The study phases and participants involved are summarized in Table 1.
¹Here and elsewhere in the manuscript, we remove any author-identifying information.
Table 1: Overview of data collection steps
Data Collection | Subjects
Phase 1: Informal Interviews | 2 executives, 1 junior
Phase 2: Online Survey | 35 software engineers
Phase 3: Semi-structured interviews | 11 software engineers
3.1.1 Understanding Knowledge Work: Informal Interviews. First data collection for this study was informed by an unstructured review of online material and documentation available on social media platforms such as Reddit or Twitter and the promotional descriptions and data policies of commercial GAI vendors [27]. Our initial work (December 2022 to March 2023) provided a foundation for understanding knowledge work augmentation through GAI and informed our case selection.
Subsequently, we conducted three informal interviews with knowledge workers from different domains: a junior employee working in consulting, a brand director working in marketing and creative services, and an engineering director working in software engineering. As our goal was not to gain a comprehensive understanding of all types of knowledge work, but rather to learn whether discussions around GAI in news and media were substantiated by actual knowledge workers in the field, these interviews provided valuable context. All interview partners shared the view that the use of GAI benefited their work by making it more efficient or even automating certain sub-tasks.
Interestingly, the engineering director mentioned that workers within the organization reacted differently to the new technology. Intrigued by the reported mixed reactions of workers within the same organization and the organization’s interest in pursuing the implementation of GAI, our research team was given access to reach out to the engineers within the software organization. As we reached out to the engineers directly, management was not aware of who participated in our study. We communicated our role to all study participants, including the engineers and management, during the consent process, the distribution of the online survey, and the one-on-one interviews, following approved institutional review board guidelines.
3.1.2 Understanding the Organization: Online Survey. To gather data in the context of one organization, we first elicited workers’ work processes, key tasks, and views on GAI through an online survey [45]. A recruitment email was sent to all employees in engineering-related positions, including management (May 2023 to June 2023, 50% response rate²). The survey included both structured questions (i.e., 5-point Likert scale from ‘strongly disagree’ to ‘strongly agree’, e.g., ‘Using ChatGPT or another GAI tool improves my work.’) and open-ended questions (e.g., ‘What 1-2 GAI tool(s) are you using most heavily or frequently? Please list the name of the tool(s) and a short description for which tasks you use them.’). Appendix A provides an overview of selected survey results.
3.1.3 Understanding the Knowledge Worker: In-Depth Interviews. The survey results raised questions about the perceived impact of GAI on engineers’ work. We were particularly interested in the differences between GAI users and non-users, and between engineers with varying levels of experience.
²The percentage of female workers is omitted for privacy reasons.
We conducted eleven semi-structured interviews, each lasting roughly 45 minutes (June 2023 to August 2023). Interview questions tapped the participants’ current work practices and activities (e.g., “How would you describe your role and tasks?”), general impressions and expectations around GAI (e.g., “What do you find rewarding/concerning about GAI tools such as ChatGPT?”), and more concrete questions about participants’ sense-making of GAI at work (e.g., “Has ChatGPT replaced any tools you previously used or modified your work processes? If yes, how?”). Depending on participants’ expressed use or non-use, we modified and extended questions to better understand the underlying reasons for their embrace of or resistance to GAI (see Appendix B for our semi-structured interview guideline). Over time, we also reallocated the focus of our interview questions (i.e., we reduced the future-oriented questions on potential use cases of GAI within the organization in favor of more worker-focused questions on individuals’ sense-making process when dealing with GAI for their work). Table 2 provides an overview of our participants, who are pseudonymized for data documentation, analysis, and presentation. All interviews were conducted in English. Interview transcription was conducted by the first author, with the second and third authors reviewing transcripts for completeness and understandability.
Table 2: List of interview participants. S prefix in ID stands for “senior”; J prefix stands for “junior”. To prevent the identification of individual employees through a combination of multiple characteristics, we provide an aggregated overview of work focus. All employees work in engineering (i.e., software and/or user interface).
ID | Domain Expertise (in years) | GAI User
S1 | >12 | Yes
S2 | >12 | No
S3 | >5, <10 | Yes
S4 | >5, <10 | Yes
S5 | >5, <10 | Yes
J1 | >3, <5 | Yes
J2 | >3, <5 | Yes
J3 | <3 | No
J4 | <3 | Yes
J5 | <3 | Yes
J6 | <3 | No
3.1.4 Data Analysis. To analyze and evaluate the interview data, we employed a qualitative and thematic analysis [16, 36, 54]. Our coding approach aimed to derive new insights and knowledge from a rigorous content analysis of our textual data. For the initial coding step, we followed an inductive content analysis [57, 59]. The two coders developed codes independently, based on terms or phrases used by the knowledge workers in the transcripts. We did so to manage our preconceptions and to avoid being prematurely biased by our kernel theory into overlooking subtle insights and interesting features early in the coding process. After discussing and comparing the codes, the two coders specified the rules of application in a common codebook and double-checked their coding against the common coding basis. Using our theoretical lens around occupational identity, we then reviewed codes and identified potential connections between concepts. Our theoretical lens was later extended to SDT. This structured analysis allowed us to generate clear definitions and names for our second-order concepts in line with our theoretical lens and to group our findings towards a more abstract, theoretical level. Both steps one and two were done by both coders in multiple iterations. In total, we conducted four rounds of coding. The second-order themes were discussed with the third author, who provided feedback on the data analysis and the connection to the theoretical lens of our study.
4 RESULTS
In the following, we explore our RQs centered around software engineers’ sense-making of GAI for their occupational identity, and its impact on work-related psychological needs.
4.1 GAI’s Influence on Software Engineers’ Identity Work
To better understand how software engineers made sense of the technological change induced by the introduction of GAI, we first briefly describe how software engineers framed their occupational identity. We then explore how engineers perceived GAI to impact their competence, autonomy, and relatedness.
4.1.1 Occupational Identity and GAI. When software engineers
talked about their work and role in general, they often referred
to the importance of domain experience and the distinctiveness
of the engineering occupation. Workers relied on domain-specic
vocabulary, such as abbreviations for programming languages. An
illustration of the pride and distinction from other professions was
given by J5: “I think if you’re going to get by it as an engineer, you
can tell when someone’s written the code and when someone has
not written the code, and you can also tell when somebody’s skills
are progressing.”
Both junior and senior software engineers mentioned the neces-
sity of domain expertise to execute their work successfully. The
distinctiveness of software engineering from other occupations and
the importance of domain expertise also became apparent when we
asked about workers’ general perception of GAI, as depicted by S2:
“It’s really, really hard, because you need to have domain knowledge
[. . . ] In our domain, sometimes you wouldn’t even understand the
answer [of GAI], just because you are not an expert.”
The software engineers also attributed their knowledge of GAI to their domain expertise and used their occupation as a signal of knowledge and rationale for making judgments about GAI. When asked how they deal with oftentimes raised accuracy issues with GAI, S2 talked about how “as an engineer, I know this problem will get solved.” A junior engineer, J4, distinguished themselves from general, public opinions about GAI: “I think that there are a lot of misconceptions about why [ChatGPT] is useful. But I definitely think it’s exciting. [...] It’s the first time that any new thing has come out since I’ve started my career as a software engineer.”
While we identified consistent themes around software engineers’ occupational identity, we found that software engineers differed in their responses when asked about how GAI affected their work. As summarized by one software engineer, S5, reactions to GAI seemed to be contingent on domain experience: “There’s mixed reaction. Some of my teammates, especially the younger ones, are very pro ChatGPT, love it. Some of the older, [...], think it’s just a parlour trick and it’s all just trickery and it doesn’t have much use in a way. So it’s been a mixed reaction. And anecdotally, the more experienced part of the team is more cautious, shall we say.” We were, thus, interested in better understanding how GAI impacted software engineers’ understanding of their own role as well as the underlying reasons for potentially varied reactions.
4.1.2 Need for Competence. In many ways, software engineers saw GAI positively affecting their competence (an overview of GAI’s impact on software engineers’ need for competence and the related identity work engineers engaged in can be found in Table 3). Software engineers highlighted efficiency gains as a major benefit of using GAI, particularly when stuck with a certain task, as illustrated by J5: “It’s just the speed. It’s very fast. Let’s pretend I’ll send it a line of code that I don’t understand [...] [GAI] saves you clicks, it saves you time typing, which I think is every engineer’s dream. It’s just time that you’re looking stuff up. I think just having that ability to reduce that time is very, very helpful.” S3 further elaborated on the competence gains through GAI: “It has overall increased my efficiency in the sense that when I am trying to work on something, previously something might have taken me a whole week to get up and running. Now with ChatGPT providing me with a shortcut to assimilating a lot of information and giving that to me, even considering the overhead of sifting through the bad or inaccurate responses, now I can, approximately, get that done in three, four days. It’s not like I can do other work. It’s more that I can do my regular work faster.”
However, the introduction of GAI also presented a major threat to juniors’ need for competence. Junior engineers expressed their concerns about not developing their skills sufficiently. For example, J1 highlighted the importance of not always relying on GAI in order to develop their domain expertise: “I try to only use [GAI] to speed things up. I think it could get dangerous if I don’t take the time to think through something thoroughly, and see if I can figure it out on my own without that resource. If I just dive straight into that, I think it’s taking away from expanding my brain in the way I want to learn things right. I want to know what my resources are, not just jump straight to ChatGPT [...] Even though it’s a lot quicker to do that, I think it harms me in the long run.” This quote shows tensions regarding the effect GAI has on software engineers’ competence, especially for junior software engineers. The junior’s hesitance can be viewed as a protection mechanism to develop skills independently and to maintain an (idealized) occupational identity.
Interestingly, we did not find that senior engineers felt similarly threatened in their competence. Senior software engineers expressed few suitable scenarios in which GAI would augment their work, with S2, for instance, responding to GAI by defending and strengthening their existing occupational identity: “I have lots of feelings on it. I am now [xx] years old. I studied some AI back in college [...]. And everyone keeps saying ‘It's going to come’. [...] Especially with GAI, it can make new things. But when I talk about all the things that we look at our company [...] We don't need random things created.”
, 2024, Anuschka Schmitt, Krzysztof Z. Gajos, and Osnat Mokryn

The quote depicts how the senior engineer stuck to their existing routines and viewed GAI as just another technology passing by but not directly impacting their work.

Table 3: Tensions of Competence and Related Patterns of Identity Protection

GAI's impact on engineers' need for competence
- Junior Software Engineers: Efficiency gains in tension with potential lack of skill development
- Senior Software Engineers: Limited productivity gains

Identity work to address threat to competence
- Junior Software Engineers: Recognizing the importance of developing own domain expertise; restraining use of GAI to mundane and limited tasks only; treating GAI as a tool with implications similar to those of other digital technologies
- Senior Software Engineers: Recognizing concerns of uncontrollable data security issues; viewing GAI as unsuitable to augment own work; recognizing the limitations of GAI
Next to recognizing the importance of establishing and developing one's own domain expertise, some junior software engineers also addressed the threat to competence by framing GAI as a tool, and by claiming the implications of GAI to be similar to those of other digital technologies. When asked about their general thoughts on GAI, J4 replied: “It hasn't disrupted my daily flow or anything like that. It's a tool.” They continued: “I think there's nothing specific that I've been like ‘Wow! I can't believe it came up with that’. I'm more just like ‘Wow! It came up with what I would have gotten from Google but 20 minutes faster.’ [...] It's just a tool.” J6 stressed the impact of GAI in a similar manner: “I am hesitant about [GAI], I guess. I don't heavily lean one way or another. Obviously it's been in the news a lot, and a lot of people in the industry are talking about it and are excited about applicable uses. [...] I don't think it's as invaluable as some people make it out to be. At the end of the day, it's just another tool.”
An overview of GAI’s impact on software engineers’ need for
competence and related identity work engineers engaged in can be
found in Table 3.
4.1.3 Need for Autonomy. Software engineers expressed the importance of agency and independence from GAI as an augmenting technology. S4 said, “More or less, my attitude towards GAI is that I don't want to let the AI lead me around by the nose.” The importance placed on notions of ownership and independence from GAI was particularly pronounced when talking to junior software engineers. J5 described the need for autonomy and how it was affected by GAI by describing their work: “It is what I do for 40 hours a week. You want to take some pride and have some ownership of it. And you don't want it to just all be spit out of the machine. [...] A lot of this job that I do is a lot of learning, and I try to learn something from it. So I feel like you're cutting yourself a little bit short when you're not doing these things.”
Senior engineers also felt their autonomy to be affected by GAI, yet for different reasons. Responsibility was a key aspect seniors referred to when discussing the impact of GAI. S2 expressed the importance of senior engineers' accountability and related threats of automation induced by GAI: “If [GAI] gives the junior people bad advice, I still feel like senior people are gonna have to be watching the advice it gives and be like, ‘Oh, no, no, that's not the right advice.’” They (S2) continued: “[GAI] will boost the needle, but not in a way that it frees up a ton of time. Our senior people spend maybe 10 to 15% of a year mentoring. And even if you drop that down to 5%, you have to have 20 senior people to even reduce one [person].” This quote is telling because it stresses the senior engineer's perspective on the continued importance of managing junior workers and how their own work and role cannot be automated. At the same time, the quote points towards the idea that GAI potentially also influences workers' relatedness and how workers of different domain expertise are connected. Table 4 contrasts the different impact GAI had on juniors' and seniors' need for autonomy, respectively.
Software engineers also grappled with the impact of GAI on their autonomy predominantly by deflecting from themselves as a path to protecting their occupational identity. One junior engineer's (J6) quote illustrates this: “I know some people ask ChatGPT to generate code or write scripts, which is, in fact, my entire job. But yeah, I guess a lot of my job is looking at pre-existing code or pre-existing libraries and then adding to it, so I can't really see a scenario where I would trust code generated by AI.” Another junior software engineer (J2) named entry-level engineers as likely to be affected by GAI but did not consider their own job to be threatened: “In terms of my own work, I think I'm concerned, not for myself, but for people who come after me, especially the younger entry-level people. Because from the little bits of code that I've had ChatGPT create personally, it does get about 95% of the way. There are some basic mistakes that it makes but I think over time, it's pretty simple for it to solve it. [...] I think it's going to hurt a lot of the entry-level people when someone with more experience can instantly create the code that those people would have been creating previously.” This quote ties back to the importance junior engineers placed on their skill development, as being competent is directly tied to not becoming obsolete.
Senior engineers also deflected from their own roles by identifying automation opportunities for customer support tasks and junior positions. S4 clearly distinguished themselves from junior engineers: “For what I'm doing. [...] I'm not having it actually write the functions that I'm doing. So for myself I'm not super worried about it.” When asked about automation use cases with GAI, engineers like S3 referred to service tasks: “For example, when we call customer support: [...] At first, you will have an AI chatbot that's trying to solve your problem. And if it doesn't, then you get to a human.”
Generative AI in the Software Engineering Domain: Tensions of Occupational Identity and Patterns of Identity Protection, 2024

Table 4: Tensions of Autonomy and Related Patterns of Identity Protection

GAI's impact on engineers' need for autonomy
- Junior Software Engineers: Threat to ownership and agency
- Senior Software Engineers: Threat to responsibility and control

Identity work to address threat to autonomy
- Junior Software Engineers: Stressing the importance of agency in relation to skill development; deflecting from oneself when thinking about the automation of junior positions
- Senior Software Engineers: Stressing the importance of non-transferable accountability and monitoring of juniors; deflecting from oneself by identifying automation opportunities with service tasks and junior positions
4.1.4 Need for Relatedness. The rise of GAI appeared to create stark differences between junior and senior engineers at SoftCloud in how they saw their relatedness to colleagues and other engineers impacted by GAI. When asked how colleagues and other engineers in the organization responded to GAI, S4 shared how they had to intervene in juniors' use of GAI: “We had to snap down on people using Copilot pretty early on, that we really can't have. Because we're writing proprietary code and so we can't have people shoving our proprietary code into a consumption engine so that everyone else can get to it.” Next to the issue of proprietary data, S4 added a second reason for their intervention: “We wanted to be sure that people weren't dumping a bunch of code that they didn't understand into the code base [...]. Ultimately, we want to make sure that people are writing their own code. Not because we think that the human code is necessarily better, but because if a human is writing it and understands it, they can go to take responsibility for it, and they can edit it when they need to.” The senior engineer expressed their concerns about juniors' skill development. The reaction ties back to seniors' need for responsibility while also suggesting the necessity for junior engineers to take responsibility. Senior engineers felt that they had to ensure that junior engineers were not using GAI in unintended ways that could potentially harm the organization, and feared that the use of GAI could threaten junior engineers' skill development, as shared by S2: “The [junior] tech people have no idea what they're monitoring. They want to just tell their manager they're monitoring something, and if they're like, ‘Oh, this AI program told me I should monitor these things’, they're like, ‘That's fine’. But if it's not really what you should monitor, that's a problem.”
Junior engineers viewed themselves as more open and adaptive, especially compared to more senior engineers, who were viewed as more stubborn and fixed in their existing routines. Without being prompted, J1 differentiated between junior and senior engineers: “I'm not stuck in my way, since I'm a junior, so I'm open to everything that's going to make my life quicker and me more efficient, and I know doing redundant things to me just takes away my time that I could be spending doing the things that ChatGPT can't do for us. [...] I think there are [different opinions]. [...] I just think there's an old-school way of programming, and once you're kind of stuck in it, you're just stuck in it. It's just kind of tricky to convince people.” When discussing the usefulness of GAI, J1 continued: “It is helpful but I just think it can get dangerous in a sense of convenience, [...] I just think it might be difficult to navigate in a team setting where one person is using ChatGPT and the other person's taking the old research method.” It becomes clear that the junior viewed this as a challenge to juniors and seniors working together. This common ground was perceived as particularly important and necessary for juniors' skill development. J3 described the importance of learning, and of learning from fellow engineers, in this way: “There are benefits to having a human teacher. I think ChatGPT being primarily text-based... You don't get little emotions, you don't get little jokes here and there that create an experience [...] I think in long term learning, I think a human is still better because of those little things.” Table 5 provides an overview of GAI's impact on software engineers' relatedness and the identity protection patterns particularly junior software engineers engaged in to preserve a common ground with the seniors.
4.2 Organizational Influences on Engineers' Identity Work
As mentioned earlier, the process of identity work can be conditioned by intra- and extra-organizational factors beyond individuals' sense-making, commonly referred to as regulation modes. In the context of our study, SoftCloud organized multiple company-internal hackathons on the theme of GAI. External influences on workers' sense-making, on the other hand, can occur as a by-product of informal, unstandardized activities, e.g., due to larger social and organizational factors. Software engineers discussed GAI over coffee with their colleagues, for instance. In the following, we review the regulation modes present at SoftCloud (for an overview, see Figure 1).
4.2.1 Strategic and managerial efforts. Strategic and managerial efforts are organization-level initiatives that can indicate an organization's strategy.
Company-internal GAI hackathons. Many software engineers talked about organization-wide hackathons on the theme of GAI, which they viewed as encouragement from management to explore the use of GAI. As one junior engineer (J1) put it: “It's still a meta opinion that our company is excited about AI. We had an internal hackathon where people tried to build something using AI and that was cool. So I think the higher up of engineering is really intrigued with it, and ultimately they want to use it. There's no denying that it would make programmers faster. [...] They're gonna start using AI, and they're gonna start integrating it into whatever they're
building.” This framing contrasts sharply with that of a junior engineer, J3, who did not use GAI: “They put in a hackathon on the theme of GAI. So I think there is pressure from above to think of something new and cool that's gonna sell and sit apart from other tech companies. I think it's normal for the hackathons, but in previous years it's been more like ‘Just build something cool.’ But this year they wanted us to use GAI. That was the theme of both the hackathons I went to. So they definitely pressured us to use those tools.”

Table 5: Tensions of Relatedness and Related Patterns of Identity Protection

GAI's impact on engineers' need for relatedness
- Junior Software Engineers: Loss of relatedness as seniors are viewed to stick to their existing work practices and routines
- Senior Software Engineers: Loss of relatedness as juniors are viewed to use GAI in mindless and unintended ways

Identity work to address threat to relatedness
- Junior Software Engineers: Stressing the importance of learning from fellow engineers; focusing on developing skills and conforming with existing practices that enable a common ground with senior engineers
- Senior Software Engineers: Stressing the importance of monitoring and controlling junior work
Proprietary data policy for using GAI. Engineers, e.g., S3, oftentimes referred to an internal policy that concerned the use of GAI for work: “Our official policy is to not use ChatGPT for any official work. And I believe that's also driven particularly by privacy and security concerns. Typically, as far as I know, [name of organization] employees do not use ChatGPT for their actual day-to-day work.” This policy was either used as a reason not to engage with GAI at all or prompted workarounds to still be able to use it. A non-user of GAI, S2, made the first path quite clear: “We had a statement from legal that said, ‘Do not put anything that's proprietary in ChatGPT. Either customers' data, HR data, anything that's proprietary. If you can't say it to a human being outside of our company, you cannot say it to ChatGPT’, which made it actually very clear. Because we go through extensive training on what you are and are not allowed to say to another human being outside of our company.” On the other hand, some engineers stated that they did not share proprietary data with GAI yet found a workaround to still use ChatGPT. S4 described this approach, stating, “we had a policy come down from the top that said that we weren't allowed to just copy stuff out of ChatGPT and paste it into the code, that we had to, you know, maturely modify it before we put it in”. Other engineers mentioned that they used GAI for general, data-unrelated purposes or for tasks outside of work at SoftCloud, such as J1: “I actually use it a lot. I don't use it at work because we have regulations.”
4.2.2 Managerial expectations. Next to more explicit initiatives that expressed SoftCloud's strategic aims regarding the use of GAI, more implicit actions such as unspoken norms and codes of conduct equally influenced engineers' identity work.
Shared norms and values. Software engineers at SoftCloud seemed to have a shared vocabulary, viewing the organization as agile and fast-moving, as described by S2: “A lot of the people on my team are new. So I have to coach them through learning the process, learning the tools, but then also how to do innovation. [...] We're a very lean organization. So you know how to do it efficiently.” Consequently, some engineers saw a natural, or even necessary, connection between the nature of the organization and the adoption of GAI, as the quote by J2 depicts: “I think it will definitely be a lot faster for us, we'll see the impact first with our company, because we're more dynamic and a younger company. So we can adapt a lot faster.”
We observed that, through latent messages from managers, an unspoken code of conduct was created around the exploration of GAI for work, as the following quote by S1 demonstrates: “I think, companies that will resist the change, this adopting, they will have a lot of problems. [...] It's a cultural thing. So if you are culturing ‘I need to adapt, I need to change. My engineers can move and do something different.’, it's awesome.”
Engineers were exposed not only to different types of organizational influences but also to conflicting and implicit messages from the top. The following quote by J4 summarizes the mixed signals sent through the policy and other efforts by the organization, as well as SoftCloud's attempt to actively influence engineers' use of GAI: “I would say, one, my manager is very excited about all this. [...] He just has been encouraging the AI hackathons and he's always talking about ChatGPT. But organization-wide, we've just gotten a memo from legal that's like ‘Hey, be careful, you know. Don't copy and paste. Don't blame your eyes.’ But there is unofficially, semi-officially, I guess, because it's coming from like upper levels of engineering, they're talking about it all the time, and definitely want us to use it. They want us to experience, or at least know a lot about it. I think you would seem ‘not in the know’ if you didn't have opinions or thoughts, and how best to use it right now.” A senior engineer, S5, also mentioned that the use of GAI is steered by the top: “Well, it's coming right from the top, I think. It's not grassroot, I'm afraid. I think in the beginning it was. [...] And now it's coming from the top that we can start looking at it.”
Changing (engineering) skills. For both users and non-users of GAI, there was a strong shared understanding of the nature of their occupation. Their comments about this in relation to the organization-internal hackathon articulate the importance placed on domain expertise and engineering skills, as mirrored by a user of GAI, J4: “We had a hackathon recently, and it was being led by this guy who's way smarter than me and had way more experience, and he made it in AWS and I'm trying to recreate it with the Azure components.” J4 similarly expressed this appreciation of engineering skills in relation to the use of GAI, saying, “At the beginning, I found some really cool uses [of GAI]. [...] It was the week that ChatGPT dropped and I just made an account and used it. I was like ‘Okay, guys, you gotta check this out.’ I had people come over and show them that. And that's probably happened two or three times since then.” The strong valuation of engineering skills dominated workers' pride at SoftCloud. The use of GAI as a means to further express one's skills hereby acted as a mode of regulation as GAI was informally introduced into the organization. The expectation of using GAI was embraced by some of the engineers, whereas others experienced the shift in expectations as pressure.

Figure 1: Overview of regulation modes of identity work at SoftCloud. Strategic and managerial efforts are organization-level initiatives that can indicate an organization's strategy. (Managerial) expectations position the organization in terms of its values and expectations toward its workers. Economic forces refer to larger organizational and economic forces that can transform ‘the way of doing things’ at an organization. This overview does not claim to be an exhaustive presentation of possible regulation modes but rather an overview of the regulation modes that moderated the impact GAI had on software engineers' identity work.
4.2.3 Economic and competitive landscape. Larger social, organizational, and economic forces provide a worker and an organization with particular conditions that can transform the “way of doing things” [5]. Such modes of regulation also triggered software engineers' identity work at SoftCloud.
A number of engineers, particularly users of GAI such as S3, shared this perspective on the need to evolve alongside large industry changes: “I think in our industry as a whole, whenever something this transformative comes along, you have to evolve your own self alongside it, and so you have to adopt it into your regular workflow. [...] I feel like a lot of companies might feel pressured to start adopting GAI.” A junior engineer, J1, also mentioned the necessity of adopting GAI in order not to fall behind: “So if we don't start considering that, we're behind, right? Because people already are doing it. [...] These are companies already incorporating it. And that's been going on for years. [...]” This perspective seems to be also driven by engineers observing the market and competitors, as the following quote by S5 depicts: “There was a really good PR video by another company which works in the same sort of space [described how the other organization is using GAI]. And that's how we would use it.” J2 shared how management also recognized the competitive landscape when adopting GAI: “I think the company will push entry-level people to explore using the AI more frequently, so it's sort of a hedging... just in case our competitors start using it and it is a huge competitive advantage”. Non-users also shared this perspective yet distanced themselves from it, as the following quote by J3 depicts: “I think the company wants to use [GAI] to stay relevant. [...] I mean, I'm not part of any management decisions or stuff. I'm just the developer, but it kind of feels that way.” This quote illustrates how the engineer does not necessarily acknowledge the usefulness of GAI for the organization or themselves but rather describes it as a “necessary evil”.
5 DISCUSSION
The adoption of GAI in organizations can be viewed as a crucial paradigm shift in the ever-evolving landscape of digital change in the workplace [19]. The conceptual model in Figure 2 provides an overview of the key results of this study, i.e., software engineers' sense-making of their occupational identity in relation to the introduction of GAI in the workplace. Our conceptual model hereby builds on seminal notions of SDT and occupational identity [22, 29, 55] and illustrates our findings on i) how GAI impacted software engineers' need for autonomy, competence, and relatedness, and, in turn, ii) how software engineers engaged in identity work to protect and strengthen their occupational identity. We also identify how software engineers' identity work is further influenced by implicit organizational measures and external forces, commonly referred to as “regulation modes”. While we are exploring the impact of technological change in the workplace, we do so within the boundaries of the software engineering domain and at a specific point in time, i.e., during the rise and the adoption of the first commercially available GAI-based models.
Figure 2: The impact of technological change on workers' occupational identity and key psychological needs. The conceptual model illustrates the effects of technological change on workers' sense-making of their occupational identity. Regulation modes and workers' domain expertise help explain these effects and can guide the impact of new technology. Highlighted in black is the individual sense-making of software engineers' identity, the main focus of our study's investigation.
5.1 Theoretical Implications
Our findings highlight the relevance of occupational identity and human self-determination for understanding the impact of GAI on knowledge work in the specific context of software engineering.
5.1.1 Occupational identity and self-determination theory.
Identity work. Our findings contribute to the body of literature exploring the (re)definition of occupational identity induced by the introduction of new technology [11, 37, 48, 68]. Software engineers at SoftCloud varied in how they responded to the introduction of GAI. Software engineers had a strong occupational identity that was framed by their expertise and their distinction from other domains, which they also relied on to justify their assessment of GAI. Our study is congruent with earlier studies showing that highly skilled workers with established routines and identity may feel threatened by the advent of a new technology [68] and might thereby, paradoxically, not fully leverage the potential benefits of these new technologies in order to preserve their existing occupational identity [48]. The literature has started to explore the underlying mechanisms of identity work, e.g., by better understanding the temporal dynamics of changing occupational identity [68]. As part of this study, we find that knowledge workers' level of domain expertise plays an important role in shaping identity work. Engineers' sense-making process served as a powerful mechanism to protect their occupational identity, enabling them to maintain their competence, autonomy, and relatedness in a changing professional landscape. We hereby challenge prevalent assumptions of how GAI threatens learning and skill maintenance in knowledge organizations, as we found that software engineers' patterns of identity protection enabled them to preserve and enhance domain-relevant skills and expertise.
Illustrated along the three psychological needs of competence, autonomy, and relatedness, the findings of this study i) identify prevalent patterns of identity protection and ii) unpack the nuances of how GAI impacted junior and senior software engineers' sense-making of GAI for their work.
Competence. GAI's impact on software engineers' sense of competence created a tension between the appreciation of GAI as a useful tool to increase efficiency and a worry about unsustainable over-reliance on GAI preventing the development of important domain skills. On the one hand, software engineers shared how using GAI helped them improve productivity and efficiency for selected tasks, such as replacing Google searches, thereby strengthening their perceived competence. At the same time, junior software engineers were concerned that an over-reliance on GAI could hinder necessary skill development. For senior engineers, implications for their competence were not as pronounced. Our findings illustrate how particularly junior engineers refrained from using GAI too much or for certain tasks in order to hone their domain-specific competencies and to maintain a common ground with their senior colleagues. While previous studies such as [6, 11] showed that the introduction of new technology (e.g., robots in medical surgery) can impair the skill development of more junior workers (e.g., surgeons), our findings call into question predominant assumptions of how the outsourcing of work to novel technology, i.e., GAI, might lead to a loss of knowledge and skills. To preserve their (idealized) occupational identity, junior software engineers' identity work served as a powerful mechanism to ensure the development of skills and preserve important tacit domain knowledge.
Autonomy. Software engineers predominantly engaged in deflection mechanisms in response to GAI's impact on their autonomy. Junior engineers acknowledged automation gains, e.g., by outsourcing tedious and redundant tasks, yet deflected from their own role when thinking about automation. Similarly, senior engineers acknowledged automation gains outside of their task area, e.g., in automating sales and service tasks and roles. Junior and senior engineers differed from one another in the coping mechanisms they engaged in to (re)claim their autonomy, which might be tied to the respective importance they placed on notions of ownership and responsibility. While junior engineers expressed their desire for ownership and pride in their work, seniors' need for autonomy was expressed through responsibility and accountability. It was important for them to continue exercising their managing activities, including training the junior engineers of the organization. They expressed fears around junior workers' training process and their use of GAI in an unsupervised and unintended manner.
Relatedness. Junior and senior engineers experienced and expected each other to behave and respond differently to GAI. This also led to discrepancies between juniors' and seniors' work practices, ultimately creating tensions of relatedness. Junior engineers experienced seniors as more resistant to GAI and fixated on existing work practices and routines. These perceptions ultimately resulted in juniors' fear of losing common ground with senior engineers. Senior engineers experienced a loss of common ground with their more junior counterparts as they felt that junior engineers used GAI in a more reckless and unconsidered manner, and worried how such use would threaten juniors' skill development. Engineers desired to close the gap between juniors and seniors, as expressed by the importance juniors placed on their skill development and the preservation of domain expertise. At the same time, engineers feared widening the gap between juniors and seniors, as expressed by seniors' (juniors') concern about junior (senior) engineers over-(under-)relying on GAI for their work.
Connecting occupational identity and SDT. In addition, our findings illustrate how the explicit consideration of domain expertise offers a fresh yet important perspective on extending our understanding of occupational identity and identity work. This perspective is enabled by integrating ideas of occupational identity with alternative theoretical frames, namely SDT. While work on SDT predominantly focuses on the role of self-determination for motivation, our work contributes to a richer understanding of the relevance of self-determination and its associated psychological needs of competence, autonomy, and relatedness for identity in organizational settings. Our study illustrates how threats regarding workers' psychological needs for self-determination can trigger important identity work.
5.1.2 Modes of regulation. Lastly, we examine the effect of conflicting organizational measures on engineers' identity work. Our interviews demonstrate how conflicted some of the engineers' opinions on GAI were. This cautious appreciation of GAI and mixed feelings might be driven by a general ambiguity and the constant developments accompanying GAI, but also by the tension created by internal policies and conflicting information and measures both driving and hindering the use of GAI. While not giving software engineers explicit orders or directions, changing values and new managerial expectations regarding GAI were consciously enacted as a form of organizational control. This became noticeable through the internal hackathons that were formally organized by SoftCloud, as well as the expectations and latent messages shared by top management. Our study thereby also contributes to the body of literature on identity regulation.
SoftCloud’s norms and values acted as modes of regulation that triggered identity work and hence further impacted workers’ occupational identity. This is consistent with findings in the literature indicating that organizational interventions and other external influences can impact workers’ occupational identity and identity work [5]. While we previously saw that workers informally introduced GAI into the workplace, management attempted to further influence and increasingly steer workers’ use of GAI. We found that the shared organizational values of SoftCloud as a competitive, lean company, as well as expectations regarding workers’ capabilities, induced the engineers, particularly juniors, to test and experiment with GAI. As such, the rise of GAI created new or modified expectations about what software engineers are capable of doing. However, some engineers, especially the non-users of GAI, experienced the expectations around GAI and activities like the hackathon as enforced, top-down pressure.
Larger organizational and economic forces also seemed to influence workers’ sense-making of GAI, such as competitors already deploying GAI, thereby inciting pressure on engineers to “not fall behind”. This is in line with the notion of “spillovers from technological change”, suggesting that changes in the larger, institutional environment (e.g., related occupations in the same domain) can enforce a need to adapt among individual workers [13, p. 609]. These observations strengthen the notion of occupational identity being in constant flux [68].
Congruent with the identity literature, our results show that these external forces and organizational measures exist in tension with one another, ultimately enabling a multiplicity of identity work practices. In other words, engineers regulated the introduction of GAI differently, e.g., by actively embracing or passively accepting GAI, despite being exposed to the same or similar organizational interventions. This finding raises interesting questions about the extent to which occupational identity and identity work i) are formed by personal, individual practices versus organizational measures, and ii) can be deliberately shaped versus implicitly influenced.
5.2 Design Guidelines for Identity Work
In our study, workers informally introduced and used GAI for their work, but also refrained from using GAI despite the encouragement of management to use it. We saw that it was important for engineers to preserve their occupational identity so that their needs for competence, autonomy, and relatedness were fulfilled (and not threatened). Research exploring the impact of technological innovations on the nature of work has raised the question of “whether and how technology designed for other purposes [...] can be deliberately designed to meet these core human needs [of SDT and] what can be done to influence the process to create more human-centered designs” [29, p. 388]. Organizations, management, as well as system designers should be aware of these identity-protecting sense-making mechanisms, as deployed technology cannot be expected to simply be adopted. Moreover, these stakeholders can take an active role in maintaining tacit knowledge within an organization and improving the skills and domain expertise of their workers by nurturing workers’ identity work.
5.2.1 Moderating technological changes through strategic measures. As organizations are invested in technological changes having a positive impact on workers’ productivity as well as their self-fulfillment, it is crucial for organizations to proactively co-design the adoption and use of the new technology. This becomes particularly important given the ambiguity and uncertainty accompanying emerging “General Purpose Technologies” such as GAI [26, 33]. The
rise of GAI has shown that novel technology is no longer necessarily introduced and controlled formally by the organization but can be informally introduced by workers themselves. It therefore becomes even more important for organizational leaders to design the adoption of technology in a proactive and transparent manner. Organizational measures hereby become relevant to steer and facilitate the use of novel technology, to offer workers common ground, and to reduce any ambiguity that accompanies the use of the technology within the organization. The adoption of a new technology, with its processes and tasks, can be facilitated in a variety of ways. The strategic and managerial level of an organization is key to defining a strategy regarding, e.g., the adoption of GAI, and to specifying the strategic objectives surrounding this adoption [10]. This can be done by explicitly delineating the goals as well as the managerial expectations of workers using the new technology. While not all workers feel the need to adopt a disruptive technology, coherent policies and strategic measures can encourage adoption. The implementation of new technology in work processes can also be designed in a participatory way so that the adoption of a technology is not experienced as top-down only. As an example from SoftCloud, workers realized that management was strategically interested in pursuing the use of GAI, e.g., by having their hackathons focus on the topic of GAI, thereby also offering engineers the opportunity to take part in the conversation about how and for which tasks GAI should be used at SoftCloud. However, the direction and the explicit expectations of management towards the workers were not very clear, as engineers also voiced ambivalence regarding the use of GAI and pressure from top management.
5.2.2 Designing for meaningful work. Beyond an organization’s strategic direction of technology adoption, crafting meaningful work, i.e., in terms of experiences, interactions, and tasks for an individual worker, can be initiated by top management [25]. Different from strategic measures, work design can but does not have to directly refer to a novel technology. Our interviews suggested that while some engineers leveraged their existing skills and developed new skills using GAI, others explicitly refrained from using GAI—either to preserve their existing work practices or as a means to ensure the development of specific skills. Organizations are also required to be sensitive to workers’ fears and judgement. Some of the engineers at SoftCloud mentioned having little to no exchange with their colleagues regarding the use of GAI, and others even expressed concerns regarding the quality and scope of interaction with and feedback from others. Forms of regulation can be used to empower workers, e.g., by providing additional opportunities to interact with others, or additional work programs concerned with the re- and upskilling of workers. These programs and trainings can but do not have to pertain to GAI-specific skills. Work design could also be co-enacted and changed by workers themselves. Research has found that workers proactively crafting their work, e.g., in terms of the skills used and learned, as well as leveraging disruptive events induced by novel technology, can have positive implications for workers’ sense-making and work environment [35]. Software engineers mentioned that the rise of GAI required them to flexibly adapt to the changes this technology would induce for their work. As workers felt threatened in their occupational identity in response to the rise of GAI, work design and occupational identity are expected to mutually influence each other.
5.2.3 Considering workers’ domain expertise. Feedback and quality interactions across rank levels and within teams are expected to be more important than ever. Our interviews illustrated how domain expertise moderated engineers’ sense-making process and engineers’ perceptions of being able to hone their skills and the outcomes of their work. Considering junior engineers’ fears of not fully developing their skills in the early phases of their job and thereby losing touch with more senior engineers, organizational efforts should pay particular attention to workers with less domain expertise. These efforts can entail the provision of sufficient development opportunities as well as ample interaction opportunities with senior workers within and outside of the team, similar to the hackathons SoftCloud organized. Junior engineers reflected on the fact that writing code on their own was necessary for them to gain the needed experience, whereas senior engineers voiced concerns about juniors using GAI in unforeseen or unintended ways, e.g., by learning from nonsensical information or from unreliable sources. Dedicated interaction opportunities would not only enable juniors to obtain the necessary skills and experience, but could also reduce biases seniors hold about juniors’ technology use (and vice versa).
Taking our proposed design guidelines and existing frameworks on work design [29] as a point of departure, we urge future research to explore how modes of regulation could be deployed to regulate workers’ identity work.
5.3 Limitations and Future Research
Our ndings should be interpreted in the light of certain boundary
conditions, pointing towards important avenues for future research.
It is important to note that our study captures a specic moment in
time, i.e., the early adoption of GAI, as part of which we explored
the dynamics of how people deal with change induced by novel
technology. Due to the ever-changing and transient nature of GAI,
our study by no means oers a holistic understanding of the impact
of GAI but rather provides a specic snapshot in time. Due to
the nature of our single case study, we focused on the domain of
software engineering, a domain that is likely to be strongly aected
by GAI due to its proximity to and reliance on digital technologies.
While previous work studying occupational identity aected by
new digital technologies considered comparable domains [
6
,
68
],
the question arises of how our ndings can be generalized to other
domains of knowledge work, as well as other types of work.
Taking our informal conversation with an executive in the creative industry as a point of departure, it would be interesting to better understand how GAI affects knowledge work that hinges on creativity, novel ideas, and generativity. While GAI has been claimed to offer emergent capabilities that appear after the initial design, more recent studies have pointed towards the idea that these emergent capabilities are limited or even non-existent [58]. In a different vein, work in HCI has considered how blue-collar work is affected by novel digital technologies [62]. As the values and norms of factory workers are shaped by the introduction of computing services, future research would be useful to better understand the impact of GAI on workers’ occupational identity beyond that of knowledge work.
Extant work exploring the impact of new digital technologies in the workplace has also shed light on distinct cultures and regions beyond the dominant understanding of how Western countries deal with technology [41]. Initial studies have pointed towards harmful bias and undesirable stereotypes in the output of GAI [1, 30, 47]. Another research design might compare knowledge workers in similar occupations across cultures and regions, offering a previously neglected yet important perspective on how workers deal with stereotypes, bias, and discrimination in the context of occupational identity.
6 CONCLUSION
Our qualitative interview study with junior and senior software engineers in a medium-sized software company enables us to observe engineers’ sense-making during the early days of the introduction of GAI, and to shed light on the conflicting forces and psychological needs of engineers within the organization. We report how engineers’ occupational identity is threatened differently by GAI, and how the engineers deal with these threats depending on their domain expertise. We discuss how the introduction of GAI is moderated by formal and informal forces in the workplace. We outline how organizations might regulate the introduction of GAI more consciously to enable a desirable and sustainable augmentation of knowledge work through GAI, as part of which workers feel strengthened in their occupational identity.
REFERENCES
[1] Abubakar Abid, Maheen Farooqi, and James Zou. 2021. Large language models associate Muslims with violence. Nature Machine Intelligence 3, 6 (June 2021), 461–463.
[2] Daron Acemoglu, David Autor, and Simon Johnson. 2023. Can We Have Pro-Worker AI? (2023).
[3] Ajay Agrawal, Joshua S Gans, and Avi Goldfarb. 2023. Do we want less automation? Science 381, 6654 (2023), 155–158.
[4] Ryan Allen and Prithwiraj (Raj) Choudhury. 2022. Algorithm-Augmented Work and Domain Experience: The Countervailing Forces of Ability and Aversion. Organization Science 33, 1 (Jan. 2022), 149–169.
[5] Mats Alvesson and Hugh Willmott. 2002. Identity regulation as organizational control: Producing the appropriate individual. J. Manag. Stud. 39, 5 (July 2002), 619–644.
[6] Callen Anthony. 2021. When Knowledge Work and Analytical Technologies Collide: The Practices and Consequences of Black Boxing Algorithmic Technologies. Adm. Sci. Q. 66, 4 (Dec. 2021), 1173–1212.
[7] David H Autor. 2003. Outsourcing at will: The contribution of unjust dismissal doctrine to the growth of employment outsourcing. Journal of Labor Economics 21, 1 (2003), 1–42.
[8] David H Autor. 2015. Why Are There Still So Many Jobs? The History and Future of Workplace Automation. J. Econ. Perspect. 29, 3 (Sept. 2015), 3–30.
[9] Michael Barrett and Geoff Walsham. 1999. Electronic Trading and Work Transformation in the London Insurance Market. Information Systems Research 10, 1 (March 1999), 1–22.
[10] M. R. Barrick, G. R. Thurgood, T. A. Smith, and S. H. Courtright. 2015. Collective organizational engagement: linking motivational antecedents, strategic implementation, and firm performance. Academy of Management Journal 58 (2015), 111–135.
[11] Matthew Beane. 2019. Shadow Learning: Building Robotic Surgical Skill When Approved Means Fail. Adm. Sci. Q. 64, 1 (March 2019), 87–123.
[12] Beth A Bechky. 2011. Making Organizational Theory Work: Institutions, Occupations, and Negotiated Orders. Organization Science 22, 5 (Oct. 2011), 1157–1167.
[13] Beth A Bechky. 2020. Evaluative Spillovers from Technological Change: The Effects of “DNA Envy” on Occupational Practices in Forensic Science. Adm. Sci. Q. 65, 3 (Sept. 2020), 606–643.
[14] Sebastian G Bouschery, Vera Blazevic, and Frank T Piller. 2023. Augmenting human innovation teams with artificial intelligence: Exploring transformer-based language models. J. Prod. Innov. Manage. (Jan. 2023).
[15] Michelle Brachman, Amina El-Ashry, Casey Dugan, and Werner Geyer. 2024. How Knowledge Workers Use and Want to Use LLMs in an Enterprise Context. In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems. 1–8.
[16] Virginia Braun and Victoria Clarke. 2021. Thematic Analysis: A Practical Guide. SAGE.
[17] Andrew D Brown. 2015. Identities and identity work in organizations. Int. J. Manag. Rev. 17, 1 (Jan. 2015), 20–40.
[18] Erik Brynjolfsson, Danielle Li, and Lindsey R Raymond. 2023. Generative AI at work. Technical Report. National Bureau of Economic Research.
[19] W Warner Burke. 2017. Organization Change: Theory and Practice. SAGE Publications.
[20] Michelle Carter and Varun Grover. 2015. Conceptualizing Information Technology Identity and its Implications. MIS Q. 39, 4 (2015), 931–958.
[21] Edward L Deci, Anja H Olafsen, and Richard M Ryan. 2017. Self-determination theory in work organizations: The state of a science. Annual Review of Organizational Psychology and Organizational Behavior 4 (2017), 19–43.
[22] Edward L Deci and Richard M Ryan. 2012. Self-determination theory. Handbook of Theories of Social Psychology 1, 20 (2012), 416–436.
[23] Fabrizio Dell’Acqua, Edward McFowland, Ethan R Mollick, Hila Lifshitz-Assaf, Katherine Kellogg, Saran Rajendran, Lisa Krayer, François Candelon, and Karim R Lakhani. 2023. Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality. (Sept. 2023).
[24] Hubert L Dreyfus and Stuart E Dreyfus. 1984. From Socrates to expert systems: The limits of calculative rationality. Technol. Soc. 6, 3 (Jan. 1984), 217–233.
[25] S. E. Humphrey, J. D. Nahrgang, and F. P. Morgeson. 2007. Integrating motivational, social, and contextual work design features: A meta-analytic summary and theoretical extension of the work design literature. Journal of Applied Psychology 92 (2007), 1332–1356.
[26] Tyna Eloundou, Sam Manning, Pamela Mishkin, and Daniel Rock. 2023. GPTs are GPTs: An early look at the labor market impact potential of large language models. arXiv preprint arXiv:2303.10130 (2023).
[27] Charles Ess and Steven Jones. 2004. Ethical Decision-Making and Internet Research: Recommendations from the AoIR Ethics Working Committee. In Readings in Virtual Research Ethics: Issues and Controversies. IGI Global, 27–44.
[28] Morgan R Frank, David Autor, James E Bessen, Erik Brynjolfsson, Manuel Cebrian, David J Deming, Maryann Feldman, Matthew Groh, José Lobo, Esteban Moro, et al. 2019. Toward understanding the impact of artificial intelligence on labor. Proceedings of the National Academy of Sciences 116, 14 (2019), 6531–6539.
[29] Marylène Gagné, Sharon K Parker, Mark A Griffin, Patrick D Dunlop, Caroline Knight, Florian E Klonek, and Xavier Parent-Rocheleau. 2022. Understanding and shaping the future of work with self-determination theory. Nat Rev Psychol 1, 7 (May 2022), 378–392.
[30] Deep Ganguli, Danny Hernandez, Liane Lovitt, Amanda Askell, Yuntao Bai, Anna Chen, Tom Conerly, Nova DasSarma, Dawn Drain, Nelson Elhage, Sheer El Showk, Stanislav Fort, Zac Hatfield-Dodds, Tom Henighan, Scott Johnston, Andy Jones, Nicholas Joseph, Jackson Kernion, Shauna Kravec, Ben Mann, Neel Nanda, Kamal Ndousse, Catherine Olsson, Daniela Amodei, Tom Brown, Jared Kaplan, Sam McCandlish, Christopher Olah, Dario Amodei, and Jack Clark. 2022. Predictability and Surprise in Large Generative Models. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (Seoul, Republic of Korea) (FAccT ’22). Association for Computing Machinery, New York, NY, USA, 1747–1764.
[31] Katy Ilonka Gero, Tao Long, and Lydia B Chilton. 2023. Social Dynamics of AI Support in Creative Writing. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (Hamburg, Germany) (CHI ’23, Article 245). Association for Computing Machinery, New York, NY, USA, 1–15.
[32] Karan Girotra, Lennart Meincke, Christian Terwiesch, and Karl T Ulrich. 2023. Ideas are dimes a dozen: Large language models for idea generation in innovation. Available at SSRN 4526071 (2023).
[33] A. Goldfarb, B. Taska, and F. Teodoridis. 2023. Could machine learning be a general purpose technology? A comparison of emerging technologies using data from online job postings. Research Policy 52, 1 (2023), 104653.
[34] Goldman Sachs. 2024. The US labor market is automating and becoming more flexible. https://www.goldmansachs.com/insights/articles/the-us-labor-market-is-automating-and-more-ex
[35] D. T. Hall. 2002. Careers In and Out of Organizations. SAGE.
[36] Hsiu-Fang Hsieh and Sarah E Shannon. 2005. Three approaches to qualitative content analysis. Qual. Health Res. 15, 9 (Nov. 2005), 1277–1288.
[37] Herminia Ibarra and Roxana Barbulescu. 2010. Identity As Narrative: Prevalence, Effectiveness, and Consequences of Narrative Identity Work in Macro Work Role Transitions. AMRO 35, 1 (Jan. 2010), 135–154.
[38] Kyogo Kanazawa, Daiji Kawaguchi, Hitoshi Shigeoka, and Yasutora Watanabe. 2022. AI, Skill, and Productivity: The Case of Taxi Drivers. Technical Report. National Bureau of Economic Research.
[39] Dominik K Kanbach, Louisa Heiduk, Georg Blueher, Maximilian Schreiter, and Alexander Lahmann. 2024. The GenAI is out of the bottle: generative artificial intelligence from a business model innovation perspective. Review of Managerial Science 18, 4 (2024), 1189–1220.
[40] Naveena Karusala, David Odhiambo Seeh, Cyrus Mugo, Brandon Guthrie, Megan A Moreno, Grace John-Stewart, Irene Inwani, Richard Anderson, and Keshet Ronen. 2021. “That courage to encourage”: Participation and Aspirations in Chat-based Peer Support for Youth Living with HIV. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI ’21, Article 223). Association for Computing Machinery, New York, NY, USA, 1–17.
[41] Naveena Karusala, Shirley Yan, Nupoor Rajkumar, and Richard Anderson. 2023. Speculating with Care: Worker-centered Perspectives on Scale in a Chat-based Health Information Service. Proceedings of the ACM on Human-Computer Interaction 7, CSCW2 (2023), 1–26.
[42] Daniel Martin Katz, Michael James Bommarito, Shang Gao, and Pablo Arredondo. 2023. GPT-4 passes the bar exam. Available at SSRN 4389233 (2023).
[43] Charlotte Kobiella, Yarhy Said Flores López, Franz Waltenberger, Fiona Draxler, and Albrecht Schmidt. 2024. “If the Machine Is As Good As Me, Then What Use Am I?”–How the Use of ChatGPT Changes Young Professionals’ Perception of Productivity and Accomplishment. In Proceedings of the CHI Conference on Human Factors in Computing Systems. 1–16.
[44] Roberta Lamb and Elizabeth Davidson. 2005. Information and Communication Technology Challenges to Scientific Professional Identity. The Information Society 21, 1 (Jan. 2005), 1–24.
[45] Jessica K Miller, Batya Friedman, Gavin Jancke, and Brian Gill. 2007. Value tensions in design: the value sensitive design, development, and appropriation of a corporation’s groupware system. In Proceedings of the 2007 ACM International Conference on Supporting Group Work (Sanibel Island, Florida, USA) (GROUP ’07). Association for Computing Machinery, New York, NY, USA, 281–290.
[46] Emmanuel Monod, Raphael Lissillour, Antonia Köster, and Qi Jiayin. 2023. Does AI control or support? Power shifts after AI system implementation in customer relationship management. Journal of Decision Systems 32, 3 (July 2023), 542–565.
[47] Moin Nadeem, Anna Bethke, and Siva Reddy. 2020. StereoSet: Measuring stereotypical bias in pretrained language models. (April 2020). arXiv:2004.09456 [cs.CL]
[48] Andrew J Nelson and Jennifer Irwin. 2014. “Defining What We Do—All Over Again”: Occupational Identity, Technological Change, and the Librarian/Internet-Search Relationship. Acad. Manage. J. 57, 3 (2014), 892–928.
[49] Daniel Nyberg. 2009. Computers, Customer Service Operatives and Cyborgs: Intra-actions in Call Centres. Organization Studies 30, 11 (Nov. 2009), 1181–1199.
[50] Wanda J Orlikowski. 1992. The Duality of Technology: Rethinking the Concept of Technology in Organizations. Organization Science 3, 3 (Aug. 1992), 398–427.
[51] W J Orlikowski and S V Scott. 2008. Sociomateriality: Challenging the Separation of Technology, Work and Organization. Acad. Manag. Ann. (2008).
[52] Michael Polanyi and Amartya Sen. 2009. The Tacit Dimension. University of Chicago Press.
[53] Davide Ravasi and Anna Canato. 2010. We are what we do (and how we do it): Organizational technologies and the construction of organizational identity. In Technology and Organization: Essays in Honour of Joan Woodward. Vol. 29. Emerald Group Publishing Limited, 49–78.
[54] Suzanne Rivard and Liette Lapointe. 2012. Information Technology Implementers’ Responses to User Resistance: Nature and Effects. MIS Q. 36, 3 (2012), 897–920.
[55] R M Ryan and E L Deci. 2000. Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. Am. Psychol. 55, 1 (Jan. 2000), 68–78.
[56] R M Ryan and E L Deci. 2007. Active human nature: Self-determination theory and the promotion and maintenance of sport, exercise, and health. Intrinsic Motivation and Self-Determination in Exercise (2007).
[57] Johnny Saldana. 2014. Thinking Qualitatively: Methods of Mind. SAGE Publications.
[58] Rylan Schaeffer, Brando Miranda, and Sanmi Koyejo. 2023. Are Emergent Abilities of Large Language Models a Mirage? (April 2023). arXiv:2304.15004 [cs.AI]
[59] Linda Schamber. 2000. Time-line interviews and inductive content analysis: their effectiveness for exploring cognitive behaviors. J. Am. Soc. Inf. Sci. 51, 8 (2000), 734–744.
[60] Ulrike Schultze. 2000. A Confessional Account of an Ethnography about Knowledge Work. MIS Q. 24, 1 (2000), 3–41.
[61] Anastasia V Sergeeva. 2023. A postphenomenological perspective on the changing nature of work. Computer Supported Cooperative Work (CSCW) 32, 2 (2023), 215–236.
[62] Alyssa Sheehan and Christopher A Le Dantec. 2023. Making Meaning from the Digitalization of Blue-Collar Work. Proc. ACM Hum.-Comput. Interact. 7, CSCW2 (Oct. 2023), 1–21.
[63] Yanyan Shen and Wencheng Cui. 2024. Perceived support and AI literacy: the mediating role of psychological needs satisfaction. Frontiers in Psychology 15 (2024), 1415248.
[64] Mari-Klara Stein, Robert D Galliers, and M Lynne Markus. 2013. Towards An Understanding of Identity and Technology in the Workplace. J. Inf. Technol. Impact 28, 3 (Sept. 2013), 167–182.
[65] S. M. Strachan, M. S. Fortier, M. G. Perras, and C. Lugg. 2013. Understanding variations in exercise-identity strength through identity theory and self-determination theory. International Journal of Sport and Exercise Psychology 11, 3 (2013), 273–285.
[66] Stefan Sveningsson and Mats Alvesson. 2003. Managing Managerial Identities: Organizational Fragmentation, Discourse and Identity Struggle. Hum. Relat. 56, 10 (Oct. 2003), 1163–1193.
[67] Emmanuelle Vaast. [n.d.]. When Digital Technologies Enable and Threaten Occupational Identity: The Delicate Balancing Act of Data Scientists. ([n. d.]).
[68] Emmanuelle Vaast and Alain Pinsonneault. 2021. When digital technologies enable and threaten occupational identity: The delicate balancing act of data scientists. MIS Q. 45, 3 (Sept. 2021), 1087–1112.
[69] Anja Van den Broeck, Joshua L Howard, Yves Van Vaerenbergh, Hannes Leroy, and Marylène Gagné. 2021. Beyond intrinsic and extrinsic motivation: A meta-analysis on self-determination theory’s multidimensional conceptualization of work motivation. Organ. Psychol. Rev. 11, 3 (Aug. 2021), 240–273.
[70] David Gray Widder, Laura Dabbish, James D Herbsleb, Alexandra Holloway, and Scott Davidoff. 2021. Trust in Collaborative Automation in High Stakes Software Engineering Work: A Case Study at NASA. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI ’21, Article 184). Association for Computing Machinery, New York, NY, USA, 1–13.
[71] Allison Woodruff, Renee Shelby, Patrick Gage Kelley, Steven Rousso-Schindler, Jamila Smith-Loud, and Lauren Wilcox. 2024. How knowledge workers think generative AI will (not) transform their industries. In Proceedings of the CHI Conference on Human Factors in Computing Systems. 1–26.
[72] Carlos Zednik. 2021. Solving the black box problem: A normative framework for explainable artificial intelligence. Philos. Technol. 34, 2 (June 2021), 265–288.
A SURVEY RESULTS
Table 6: Exemplary Job Titles of Junior and Senior Engineers

Domain Expertise   N    Exemplary Job Titles
Junior             11   Software Engineer, Software Developer, Software Architect
Senior             17   Director of Engineering, Senior Software Engineer, Manager
Table 7: Frequency of GAI Use of Junior and Senior Engineers

GAI Use      Frequency (%)   Junior   Senior
Never        13 (46.4%)      6        7
Rarely       7 (25.0%)       3        4
Sometimes    6 (21.4%)       2        4
Frequently   2 (7.1%)        0        2
Table 8: Types of GAI Tools Used by Junior and Senior Engineers

To what extent are you using (or have you used) the following GAI tools before?

Type of GAI Tool   Never used   Tried once   < Once a   1-3 times   Once a   > Once a
                   before       or twice     week       a week      day      day
ChatGPT            -            5            3          5           1        1
BART               -            15           -          -           -        -
Midjourney         12           3            -          -           -        -
Bard               12           1            -          1           1        -
Dall-E             12           1            1          1           -        -
Stable Diffusion   13           1            -          -           -        1
Table 9: Perceived Risks and Concerns of Using GAI of Junior and Senior Engineers

How true are the following concerns for you when thinking of the use of ChatGPT or a similar tool at work?

I worry that by using ChatGPT or another GAI tool...   Strongly disagree | Somewhat disagree | Neither agree nor disagree | Somewhat agree | Strongly agree
... my work becomes less authentic.   1 (4) | 1 (6) | 2 (2) | 6 (1) | 1 (2)
... no one can be held accountable if the information provided by the tool is wrong.   3 (2) | 3 (5) | 0 (1) | 0 (3) | 5 (4)
... my colleagues view me as less skilled or proficient if they knew I used such a tool.   2 (3) | 5 (5) | 3 (2) | 1 (5) | -
... my superiors view me as less skilled or proficient if they knew I used such a tool.   2 (3) | 4 (5) | 4 (1) | 1 (6) | -
... I view myself as less skilled or proficient.   1 (5) | 5 (6) | 3 (0) | 1 (1) | 1 (3)
... the information or data I submit to such a tool is used in a way I did not foresee.   0 (2) | 0 (1) | 0 (2) | 3 (4) | 8 (6)
... the information or data I submit to such a tool is shared and used by some other party.   - | 0 (1) | 0 (1) | 3 (2) | 8 (11)

Note: Numbers indicated as junior (senior), respectively. Survey participants were not required to respond to all questions.
Table 10: Perceived Benefits of Using GAI of Junior and Senior Engineers

How true are the following statements for you when thinking of the use of ChatGPT or a similar tool at work?

Using ChatGPT or another GAI tool...   Strongly disagree | Somewhat disagree | Neither agree nor disagree | Somewhat agree | Strongly agree
... improves my work.   1 (1) | 1 (1) | 1 (1) | 2 (3) | 0 (4)
... gives me confidence in my work.   2 (1) | 0 (3) | 1 (2) | 2 (2) | 0 (2)
... does not really help me with improving the quality of my work.   0 (1) | 2 (3) | 0 (2) | 2 (1) | 1 (3)
... does not really help me with improving the creativity of my work.   1 (1) | 1 (2) | 0 (2) | 0 (2) | 2 (2)
... does not really help me with improving the creativity of my work.   0 (2) | 1 (5) | - | 3 (0) | 1 (3)

Note: Questions only posed to users (N = 15). Numbers indicated as junior (senior), respectively. Survey participants were not required to respond to all questions.
B INTERVIEW GUIDELINE
Current Work Practices
To get a better idea of your role and your work: How would
you describe your role and related tasks?
, 2024, Anuschka Schmi, Krzysztof Z. Gajos, and Osnat Mokryn
Think about yesterday: Can you walk me through your key
work activities; from entering the oce / opening your lap-
top to leaving the oce / closing your laptop?
General Impressions and Expectations of GAI
What do you know about ChatGPT? Are there aspects about
ChatGPT you are unsure about / you would like to know
more about?
What do you nd rewarding about GAI tools such as Chat-
GPT in today’s day and age?
What do you nd concerning about GAI tools such as Chat-
GPT in today’s day and age? [e.g., why would you not use
it for your work] [Followup: Tell me why this is a concern]
[Followup: Are there any privacy-related concerns?]
GAI Use
Do you use GAI-based support tools such as ChatGPT for
your work? If yes, what kind of tools do you use?
Do you have any expectations around the use of ChatGPT
within your organization / within your work practices? If
yes, what are some of these expectations?
Does your organization do anything to enforce these expec-
tations?
Do you experience any ChatGPT-related discussions within
your organization? If yes, can you give an example of a
discussion you faced?
Can you think of a recent situation where you used ChatGPT
for work and walk me through it?
Once ChatGPT generates an output to your request, how do you integrate it into your existing task / workflow?
On what occasions do you find the AI to be useful? Why? (What’s working for you in regards to ChatGPT?) [Follow-up: What’s the best piece of work you’ve gotten from ChatGPT?]
On what occasions do you find the AI to be not useful? Why? (What’s not working?)
Have you changed the way you perform tasks? Which ones and how?
Would you carry out the same tasks without the use of ChatGPT again? Why / why not?
Envisioning Future Use of GAI
Has the use of ChatGPT created additional tasks or work for
you? Do you think it will create additional or new tasks or
work for you in the future?
Has the use of ChatGPT enabled you to take on other work-related activities (that you were previously not able to do)? Do you think it will enable you to do so in the future? If yes, what kind of activities?
How do you think GAI would potentially influence the structure and roles (within the organization)?
Closing Remarks
Do you want to share any other thoughts or comments you
have regarding ChatGPT? Did we forget an important aspect
of your work?
Do you have any questions regarding this research project
or interview?