Journalism Practice
ISSN: (Print) (Online) Journal homepage: www.tandfonline.com/journals/rjop20
One Size Fits Some: How Journalistic Roles Shape the
Adoption of Generative AI
Lynge Asbjørn Møller, Hannes Cools & Morten Skovsgaard
To cite this article: Lynge Asbjørn Møller, Hannes Cools & Morten Skovsgaard (31 Mar 2025):
One Size Fits Some: How Journalistic Roles Shape the Adoption of Generative AI, Journalism
Practice, DOI: 10.1080/17512786.2025.2484622
To link to this article: https://doi.org/10.1080/17512786.2025.2484622
© 2025 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group
Published online: 31 Mar 2025.
One Size Fits Some: How Journalistic Roles Shape the
Adoption of Generative AI
Lynge Asbjørn Møller (a), Hannes Cools (a,b) and Morten Skovsgaard (a,c)
(a) Digital Democracy Centre, University of Southern Denmark, Odense, Denmark
(b) Amsterdam School of Communication Research, University of Amsterdam, Amsterdam, The Netherlands
(c) Centre for Journalism, University of Southern Denmark, Odense, Denmark
ABSTRACT
The rise of generative artificial intelligence (AI) has sparked debate
about its implications for journalism and the roles of journalists. Yet,
the interplay between journalistic roles and AI adoption remains
underexplored. Drawing on a survey of Danish journalists (N =
299), our study addresses this gap by exploring how journalists’
professional role conceptions influence their adoption of
generative AI. The results reveal role-specific patterns that align
with traditional understandings of the respective role
conceptions, suggesting that professional identities shape how
journalists engage with new technologies. Journalists adhering to
mobilisation and entertainment roles express heightened
concerns about job security and work meaningfulness, while
those adhering to watchdog and detached observer roles instead
emphasise ethical and operational implications of generative AI
for journalism. Despite these concerns, entertainment journalists
actively employ generative AI to enhance content quality and
audience engagement, and watchdog journalists recognise its
potential to boost efficiency and accuracy. These variations across
journalistic roles underscore the need for academia and news
organisations to avoid oversimplified one-size-fits-all narratives
about the adoption of generative AI in the news industry.
Technology is simultaneously shaped by and shapes the
journalists who use it, highlighting how professional identities
and technological innovation co-evolve in modern journalism.
ARTICLE HISTORY
Received 15 September 2024
Accepted 17 March 2025
KEYWORDS
Artificial intelligence;
generative AI; journalistic
roles; role conceptions;
journalistic work; news
automation
Introduction
The emergence of generative artificial intelligence (AI) marks a significant shift in how AI
interacts with journalism. Earlier AI applications in news organisations were tailored for
specific, often hidden tasks such as data analysis (Stray 2021), template-based news gen-
eration (Carlson 2015), or personalised news distribution (Gulla et al. 2021). Despite their
impact on journalism, these AI systems often remained opaque to much of the journalistic
community due to their complexity and behind-the-scenes nature (Jones, Jones, and
© 2025 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/
licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly
cited. The terms on which this article has been published allow the posting of the Accepted Manuscript in a repository by the
author(s) or with their consent.
CONTACT Lynge Asbjørn Møller lymo@sam.sdu.dk
Luger 2022). In contrast, generative AI tools are accessible and easy to use. Generative AI
refers to machine learning systems that produce outputs mimicking human creativity by
learning from extensive training data. The most common of these tools, such as ChatGPT,
Gemini, DALL·E, and MidJourney, allow users to generate text or images based on simple
prompts. This accessibility and versatility distinguish generative AI from earlier systems,
bringing AI directly into journalists’ workows. Journalists are already employing genera-
tive AI across various stages of the reporting process (Diakopoulos et al. 2024), and many
news organisations have begun systematically integrating the technology into their daily
news operations (Beckett and Yaseen 2023).
Generative AI’s potential to transform journalistic practice is likely to evoke a wide
range of perspectives and experiences among journalists. With generative AI seemingly
promising to streamline news production in ways previously unimaginable, some journal-
ists may view the technology as an opportunity to enhance their work (Moran and Shaikh
2022). However, as generative AI begins to undertake tasks traditionally performed by
journalists, others may worry not only about job displacement (Lindén 2018) but also
about losing the sense of purpose and satisfaction that comes with their work (Olsen
2023). Moreover, generative AI introduces ethical challenges that further complicate its
adoption, including concerns about its lack of accuracy (Hicks, Humphries, and Slater
2024) and potential for bias (Singh and Ramakrishnan 2023). This suggests that generative
AI is not a one-size-fits-all solution for journalism. Instead, its adoption is likely to be
shaped by the diversity of journalists’ professional identities and experiences. However,
academic literature has yet to explore the influence of these factors on how journalists
respond to and use generative AI. Without a deeper understanding of these differences,
there is a risk of oversimplifying the responses to AI, creating a binary narrative of threat
versus opportunity that ignores the profession’s inherent diversity.
Our study addresses this gap by exploring the influence of journalists’ varying role con-
ceptions on their attitudes towards and usage patterns of generative AI. The study of pro-
fessional role conceptions in journalism is a longstanding body of research that examines
the different ways in which journalists perceive their normative roles in society (Hanitzsch
et al. 2019; Mellado 2015). These role conceptions shape journalists’ work processes,
ethical considerations, and overall approach to journalism (Donsbach 2008). Consequently,
journalists with differing role conceptions will face distinct ethical and operational
challenges when integrating AI into their work. By examining how these
professional identities shape the adoption of generative AI, this research offers a more
nuanced understanding of the complex dynamics at play within the profession. Such
an understanding is important not only as a contribution to academic literature but
also as a foundation for navigating the ongoing integration of AI in newsrooms. Failure
to recognise the diverse perspectives of journalists could lead to ineffective implementation
strategies and unaddressed ethical dilemmas, with broader implications for
journalism’s role in society.
Specifically, we explore the relationship between generative AI and journalistic roles by
drawing on a survey of Danish journalists that measures their role conceptions, self-
reported uses of generative AI, and their views on benefits and concerns related to the
adoption of generative AI. This approach enables us to identify and compare patterns
of AI use and attitudes across four different role conceptions that emerged from the
survey responses: the mobilisation role, the watchdog role, the detached observer role,
and the entertainment role. This analysis finds distinct differences between the roles that
largely align with traditional understandings of the respective role conceptions,
suggesting that journalistic roles influence how journalists interact with generative AI.
The survey is conducted in Denmark, a small but diverse media market characterised
by full digitisation (Sjøvaag et al. 2019) and an institutionalised professionalism rooted
in strong public interest traditions (Syvertsen et al. 2014). Danish news organisations
are already investing heavily in generative AI tools that are currently being implemented
in newsrooms (Newman et al. 2024). This provides an interesting setting to explore
the early adoption of generative AI among journalists who share a commitment to
high standards of journalistic practice, offering insights into variations within the pro-
fession that extend beyond the national context of the study.
Generative AI and News Automation
The public release of the text generation tool ChatGPT in November 2022 drew significant
attention to the capabilities of generative AI as a highly accessible AI technology. Genera-
tive AI systems typically feature conversational interfaces that allow users to ask questions
and receive responses in ways that mimic human interaction. Unlike the often hidden AI
technologies in journalism, generative AI can therefore be seamlessly integrated into
regular journalistic workflows, with recent research revealing use cases at every stage
of the reporting process, from the gathering to the production, verification, and distri-
bution of news (Beckett and Yaseen 2023; Diakopoulos et al. 2024). To understand the
potential uses of and attitudes towards generative AI in journalism, this section links its
applications to literature on journalistic responses to previous implementations of algor-
ithms and AI in newsrooms.
One of generative AI’s core strengths lies in its ability to produce complex narrative text
at scale, significantly expanding the potential for automated news writing compared to
earlier template-based approaches (Nishal and Diakopoulos 2023). Consequently, genera-
tive AI may amplify concerns among journalists about job displacement. Studies of news-
room innovation show a trend towards increasing efficiency under financial pressure, with
automation frequently adopted to reduce costs (Dickinson, Matthews, and Saltzis 2013).
As a result, many journalists perceive news automation as a threat to their job security and
satisfaction (van Dalen 2012). Research highlights an underlying fear among journalists
that automation technologies could either replace their roles entirely (Carlson 2015;
Jones, Jones, and Luger 2022; Kim and Kim 2018; Moran and Shaikh 2022) or undermine
the meaningfulness of their work by reducing task and skill variation (Olsen 2023).
Rather than being seen solely as a threat to jobs, AI and algorithms are sometimes por-
trayed as opportunities to improve journalism through the automation of routine tasks
(Jones, Jones, and Luger 2022). Generative AI effectively handles repetitive tasks such
as transcription, translation, and summarisation (Beckett and Yaseen 2023), thereby
freeing journalists to concentrate on the traditional but time-consuming tasks that
define their profession (Møller, Skovsgaard, and de Vreese 2024). Previous studies
suggest that journalists generally welcome automation technologies that reduce this
kind of routine work, enabling greater emphasis on in-depth reporting and investigative
journalism (Schapals and Porlezza 2020; van Dalen 2012; Wu, Tandoc, and Salmon 2019).
In line with these studies, Moran and Shaikh (2022, 1765) have found AI’s role in
journalism to be pitched as a “technological liberator of journalists which can untether
them from clutches of menial tasks that deprive them from producing quality work”.
Generative AI’s capacity to process, interpret, classify, and identify patterns in large and
complex datasets also demonstrates its potential as an augmentation tool that enhances
rather than replaces journalistic work (Nishal and Diakopoulos 2023). Such an augmenta-
tion approach has been found to deflate automation-related fears among journalists by
ensuring that the technologies remain under human control (Bucher 2017; Milosavljević
and Vobič 2019; Rydenfelt 2022). Moreover, it mitigates ethical concerns prevalent in the
literature on automated news writing, such as maintaining standards of verification and
balance (Thurman, Dörr, and Kunert 2017). However, generative AI introduces significant
new ethical challenges. For instance, generative AI technologies are often trained on out-
dated data (Alkaissi and McFarlane 2023), they are known to consistently produce false-
hoods (Hicks, Humphries, and Slater 2024), and they reproduce biases in their training
data (Singh and Ramakrishnan 2023). Recent research highlights growing awareness
among journalists of these ethical challenges (Diakopoulos et al. 2024).
This literature underscores the numerous opportunities and challenges that generative
AI presents for journalists. However, a significant gap remains in understanding how gen-
erative AI might be viewed and utilised differently across various segments of the journal-
ism profession. Current literature tends to approach its implications in broad terms,
overlooking the nuanced ways in which individual journalists’ values and beliefs shape
their perspectives and practices. Recognising these differences is essential for compre-
hending the broader implications of AI integration in newsrooms. To address this gap,
our study explores how journalists view and use generative AI depending on their pro-
fessional role conceptions.
Generative AI and Journalistic Roles
The ways in which journalists think about their societal roles is an important area of
research in journalism studies. Central to this understanding is the concept of journalistic
roles, which describe how journalists define their relationship with society and their pro-
fessional responsibilities within it. Role conceptions are significant because they shape
journalists’ professional behaviours and practices (Donsbach 2008), bridging journalism
as an institution with the individual choices and actions of its practitioners (Mellado
and van Dalen 2014). Therefore, the concept is often used to study variations in journal-
ists’ professional identities across different contexts. Research into journalistic roles often
relies on surveys or interviews, asking journalists to articulate the normative roles they
believe they should fulfil within society (Hanitzsch et al. 2019). Building on Cohen’s
(1963) distinction between “neutral” and “participant” journalistic roles, this approach
has enabled scholars worldwide to identify and categorise distinct role conceptions
that journalists consider important for their work.
Some role conceptions are rooted in traditional normative notions of how journalism
should serve the public interest. For instance, the “neutral” (Johnstone, Slawski, and
Bowman 1972) and “disseminator” roles (Weaver and Wilhoit 1996) depict journalists as
detached observers, dedicated to reporting facts objectively and impartially. Other nor-
matively rooted roles adopt a more antagonistic stance toward power, such as the
“watchdog” (Johnstone, Slawski, and Bowman 1972; Weaver and Wilhoit 1996) and
“monitorial” roles (Hanitzsch et al. 2019). These roles emphasise holding authority accoun-
table and safeguarding the public interest, often through investigative and critical report-
ing that uncovers hidden truths. In contrast, other roles are more audience focused. For
instance, the “infotainment” (Mellado 2015) or “accommodative” roles (Hanitzsch et al.
2019) frame the audience as spectators to be entertained, addressing their needs
through appealing and relatable content. Meanwhile, the “populist mobiliser” (Weaver
and Wilhoit 1996) or “interventionist” roles (Hanitzsch et al. 2019) seek to mobilise the
audience toward political, social, or cultural change. Journalists adhering to these roles
aim to influence public opinion, advocate for change, and facilitate development.
Given their impact on behaviour, role conceptions also influence how journalists adopt
emerging technologies. Previous research demonstrates that journalistic roles shape
online journalist–audience relationships (Holton, Lewis, and Coddington 2016) and significantly
affect how journalists respond to audience metrics (Belair-Gagnon, Zamith, and
Holton 2020). Similarly, role conceptions may explain how readily journalists adopt genera-
tive AI. For instance, journalists aligned with watchdog or detached observer roles might
resist generative AI due to concerns about editorial independence and the accuracy of
AI-generated content. However, journalistic role conceptions are not always reflected in
practice (Mellado and van Dalen 2014). External factors such as organisational routines,
technological pressures, and precarious working conditions often constrain journalists’
ability to act in line with their values and beliefs (Tandoc, Hellmueller, and Vos 2013). In
an era of increasing precarity in journalism, resisting new technology based on normative
ideals, for instance, poses a greater risk of obsolescence or job displacement (Örnebring
2018). Consequently, journalists adhering to more audience-oriented roles may feel com-
pelled to adopt generative AI to meet increasing content production demands.
These relationships between journalistic role conceptions and the adoption of genera-
tive AI remain largely unexplored in current literature. This article utilises journalistic role
conceptions as independent variables to examine and explain variations in both journal-
ists’ attitudes towards and usage patterns of generative AI. Specifically, we ask the follow-
ing research questions:
RQ1: How do journalistic roles influence journalists’ attitudes towards generative AI?
RQ2: How do journalistic roles influence journalists’ use of generative AI?
Methods
To explore these research questions, this study employs a survey of Danish journalists
conducted from March to June 2024. The Danish media system shares characteristics
with those of the other Nordic countries, such as inclusive and diverse press markets,
strong public service traditions, high journalistic professionalism, and full digitalisation
(Brüggemann et al. 2014; Sjøvaag et al. 2019; Syvertsen et al. 2014). Moreover, the
news industry in Denmark is under increasing financial pressure, and many news organ-
isations are investing heavily in the implementation of generative AI in journalistic prac-
tices (Newman et al. 2024). These characteristics make Denmark a well-suited case for
studying the influence of journalistic roles on the adoption of generative AI because ten-
sions with professional ideals are more likely to emerge in a highly professionalised media
system.
Surveying journalists is generally challenging: journalism’s ill-defined boundaries
complicate the definition of a target population, limited access makes it
difficult to reach sufficient members of that population, and the time pressures inherent in
journalistic work reduce respondents’ availability to participate (Molyneux and Zamith 2022).
To mitigate these challenges, we carried out the sampling and distribution in collabor-
ation with several major Danish media organisations, including regional media organisa-
tions Jysk Fynske Medier and Nordjyske Mediehus; national commercial television station
TV2 Denmark; and national newspapers Jyllands-Posten, Ekstra Bladet, B.T., Weekendavisen,
and Kristeligt Dagblad. Managers and editors within these organisations distributed the
survey to their journalists. This approach allowed us to gain access to a large number
of journalists and increased the visibility of our survey, ensuring that we captured the
adoption of generative AI among a wide range of professional journalists in Denmark.
The survey was distributed to approximately 1,300 journalists, based on estimates pro-
vided by the managers overseeing its distribution. However, the exact number of journal-
ists invited remains uncertain due to restricted access to the internal mailing lists of the
news organisations. Of those contacted, 299 journalists responded, with 211 fully com-
pleting the survey and 88 partially completing it, resulting in a response rate of 26.3%
according to AAPOR’s Response Rate 2 (AAPOR 2023). The sample consists of 60% men
and 40% women, with most respondents being reporters (55%), followed by editors
(14%) and interns (7%). Their experience in the news industry ranges from fewer than 3
years (14%) to more than 20 years (35%), with 26% having 3–10 years of experience
and 25% reporting 11–20 years. While this does not constitute a representative sample
of Danish journalists, it serves as a non-probability sample designed to explore variations
in the adoption of generative AI. This approach is justified in cases where obtaining a
representative sample is unfeasible, particularly given the substantial variation observed
within our final sample on the key variables of interest in the study.
Measures
Journalists’ attitudes towards and uses of generative AI are employed as dependent vari-
ables in this study. We measure the concerns and benefits related to the adoption of gen-
erative AI in journalistic work through 12 items developed based on qualitative research
into journalists’ responses to recent advancements in generative AI (Cools and Diakopou-
los 2024; Møller, van Dalen, and Skovsgaard 2024). Six items measure concerns by asking
respondents to indicate on a five-point Likert scale the extent to which they are con-
cerned that generative AI will (1) “cause job cuts in the news industry”, (2) “take my job
in the future”, (3) “take away specific tasks that I enjoy”, (4) “make my job less meaningful”,
(5) “make my job more boring”, and (6) “compromise my journalistic integrity”. The other
six items measure benefits by asking respondents to indicate on a five-point Likert scale
the extent to which they believe that generative AI will (1) “enhance my creativity”, (2)
“improve my efficiency”, (3) “enable me to pursue tasks that I enjoy”, (4) “enhance the
accuracy of my output”, (5) “improve the quality of my output”, and (6) “help my
output reach larger audiences”. The survey presents all 12 items in a single battery in
a randomised order. An exploratory factor analysis indicates that the six items measuring
concerns load onto one factor, while the six items measuring benefits load onto another
factor. Therefore, we have created two index scores with high internal reliability by
averaging the score across the six items measuring concerns (α = 0.83, M = 2.73, SD = 0.92)
and benefits (α = 0.86, M = 3.28, SD = 0.84).
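The index construction described above can be sketched in a few lines. This is an illustrative reconstruction rather than the authors' actual code, and the response matrix below is hypothetical rather than drawn from the survey:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def index_score(items: np.ndarray) -> np.ndarray:
    """Per-respondent index: the mean across the scale's items."""
    return items.mean(axis=1)

# Hypothetical five-point Likert responses to a six-item concern battery
responses = np.array([
    [2, 3, 2, 2, 1, 3],
    [4, 4, 5, 4, 4, 5],
    [1, 2, 1, 2, 1, 1],
    [3, 3, 4, 3, 2, 4],
], dtype=float)

alpha = cronbach_alpha(responses)     # scale reliability
concern_index = index_score(responses)  # one index score per respondent
```

The same averaging applies to the benefits items; only the item columns change.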
The survey also asks respondents about the importance of different ethical and oper-
ational issues related to the use of generative AI in journalism. On a five-point Likert
scale from 1 (not important) to 5 (extremely important), respondents are asked to rate
the importance of the following items: (1) “determining who is accountable for mistakes”,
(2) “creating fair data sharing policies between news organisations and tech companies”,
(3) “being transparent with the audience about the use of generative AI”, (4) “fact-checking
the output of generative AI”, (5) “training journalists in the use of generative AI”, (6) “creat-
ing internal guidelines for the use of generative AI in journalism”, (7) “regulating the use of
generative AI in journalism”, (8) “developing in-house generative AI tools rather than
relying on external companies”, and (9) “ensuring that journalists are involved in develop-
ing and implementing generative AI tools within news organisations”. In addition to
measuring the importance of the individual issues, we also combined them into an
index score by averaging the scores across all items to measure an overall score of the
rated importance of ethical and operational issues (α = 0.79, M = 4.17, SD = 0.57).
The use of generative AI is measured by asking respondents how often they use gen-
erative AI in their capacity as journalists in general and for specific applications on a five-
point Likert scale from 1 (never) to 5 (all the time). Specific applications include 12
different uses of generative AI found in existing research, such as brainstorming ideas,
processing data, and generating text (Beckett and Yaseen 2023; Diakopoulos et al.
2024). Additionally, the survey asks respondents to rate the future potential of generative
AI for different aspects of the journalistic work process on a five-point Likert scale from 1
(no potential) to 5 (huge potential). These include idea generation (brainstorming stories,
sources, interview questions), information analysis (summarising documents, processing
data), content creation (generating text and images), content editing (proofreading, feed-
back, image editing), reformatting content (news personalisation, reformatting for plat-
forms), and optimising content (audience metrics analysis, search engine optimisation).
Journalistic role conceptions are employed as independent variables to investigate
their influence on the dependent variables described above. To measure role con-
ceptions, we draw on a well-established battery of items from the Worlds of Journalism
survey (Hanitzsch et al. 2019). In total, the survey asks respondents to evaluate the impor-
tance of 15 statements about the role of the news media on a five-point Likert scale from 1
(not important) to 5 (extremely important). An exploratory factor analysis with varimax
rotation identified four underlying dimensions with distinct loadings and minimal
cross-loadings. We grouped items into these dimensions based on factor loadings and
employed Cronbach’s alpha to ensure the reliability of each dimension, evaluating the
contribution of each individual item. Four items that did not contribute positively to a
higher Cronbach’s alpha for their respective dimensions were excluded.
This process resulted in four different role conceptions for which index scores were
created by averaging the scores across the items that constituted each role conception.
The role conceptions include: the mobilisation role (α = 0.74, M = 2.28, SD = 0.85) includ-
ing four items (influence public opinion, advocate for social change, support national
development, motivate people to participate in political activity), the watchdog role (α
= 0.77, M = 3.34, SD = 1.19) including two items (monitor and scrutinise political leaders,
monitor and scrutinise businesses), the detached observer role (α = 0.73, M = 4.20, SD =
0.87) including two items (be a detached observer, report things as they are), and the
entertainment role (α = 0.55, M = 2.93, SD = 0.78) including three items (provide entertain-
ment and relaxation; provide the kind of news that attracts the largest audience; provide
advice, orientation, and direction for daily life).
Analytical Approach
We analysed the data using the statistical software Stata. All 299 responses were
included in the analysis where applicable. First, we conducted a descriptive analysis to
gain a more detailed account of journalists’ overall attitudes towards and use of genera-
tive AI. Next, we employed ordinary least squares (OLS) linear regressions to examine the
relationship between journalistic role conceptions and the adoption of generative AI,
while controlling for potential confounding factors that might influence this relationship.
Specifically, we included control variables measuring personal and professional characteristics
of the journalists and their environments. These include gender, which may
influence attitudes towards and adoption of technology in general; professional experience
(a four-category variable ranging from “Under 3 years” to “More than 20 years”),
which could shape perspectives on generative AI due to factors such as established routines
or hierarchical dynamics; and news organisation size (a five-category variable
ranging from “1–10 journalists” to “100+ journalists”), which may affect the resources
available for adopting generative AI. These controls help isolate the effects of journalistic
role conceptions on generative AI adoption, accounting for other factors that may
independently influence journalists’ attitudes and practices.
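The specification of each model can be outlined as follows. This is an illustrative sketch under assumed variable names (the published models were estimated in Stata), using simulated rather than survey data:

```python
import numpy as np

def ols(y: np.ndarray, X: np.ndarray) -> np.ndarray:
    """OLS coefficients via least squares; an intercept column is prepended."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta  # [intercept, b_role, b_gender, b_experience, b_orgsize]

# Simulated data: a concern index regressed on one role-conception score
# plus the three controls named in the text (all names hypothetical).
rng = np.random.default_rng(0)
n = 200
role = rng.uniform(1, 5, n)         # e.g. mobilisation-role index score
gender = rng.integers(0, 2, n)      # 0/1 indicator
experience = rng.integers(1, 5, n)  # four-category variable
org_size = rng.integers(1, 6, n)    # five-category variable
concern = 1.0 + 0.4 * role + 0.1 * gender + rng.normal(0, 0.3, n)

coefs = ols(concern, np.column_stack([role, gender, experience, org_size]))
```

One such model is fitted per role conception and per dependent variable, so the role coefficient can be read as the change in the attitude index per one-point increase in the role score, holding the controls constant.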
Results
This section presents the findings of the survey, focusing first on the journalists’ attitudes
towards the adoption of generative AI in journalism and second on their uses of generative
AI in their work. In each subsection, we begin with an overview of descriptive statistics
before exploring the influence of journalistic role conceptions. Coefficient tables for all
regression models can be found in the appendices.
Attitudes Towards Generative AI
The findings reveal that journalists generally perceive the benefits of adopting generative
AI more favourably than their concerns, a difference that is statistically significant. This
suggests a cautiously optimistic outlook among journalists towards the integration of
AI technologies in their profession. Figure 1 unpacks the responses to the individual
aspects of benefits and concerns. The highest-rated concern is that generative AI will
cause job cuts in the news industry in general, while concern regarding generative AI
replacing the journalists’ own individual jobs is notably lower. This result clearly indicates
a stronger concern for the broader impact of AI than for a direct threat to personal
employment. The concern that generative AI might compromise journalistic integrity is
also highly rated, while the concern that generative AI will make journalistic work
boring is rated the lowest. When it comes to benefits, the belief that generative AI
will improve efficiency receives the highest rating, followed by its ability to improve
output quality, while its potential to enhance output accuracy is rated the lowest among
the benefits. This pattern suggests that while journalists see generative AI as a tool for
improving efficiency and quality, they recognise the need for human oversight to
ensure accuracy.
Turning to the first research question, the regression analysis indicates that different
journalistic role conceptions influence attitudes towards generative AI (see regression
tables in Appendices 1 and 2). Focusing first on concerns, Figure 2 illustrates that journal-
ists who align with the mobilisation and entertainment roles tend to have a higher level of
concern about generative AI than those in the watchdog and detached observer roles.
This suggests that journalists adhering to the mobilisation and entertainment roles are
more apprehensive about AI’s impact on their work.
Figure 1. Descriptive statistics of journalists’ responses to individual benefits and concerns related to
the adoption of generative AI in journalistic work.
Figure 2. Regression models predicting how varying role conception scores influence concerns
related to the adoption of generative AI, controlling for gender, experience, and organisation size.
Unpacking specific concerns, the entertainment role is significantly associated with fears
that generative AI will take over their jobs, while the mobilisation role shows a borderline
significant relationship with concerns about job losses both generally and individually. The
potential of generative AI to automate tasks that are typically tied to these roles may pose a
greater threat to their job security. In contrast, journalists showing stronger adherence to
the watchdog and detached observer roles appear to feel relatively secure in their pos-
itions, though the negative association with job security concerns is only borderline signifi-
cant. Watchdog journalists likely view their investigative and accountability-focused work
as less susceptible to automation, while detached observers may see their role in delivering
objective analysis as similarly safe from the influence of generative AI.
Job meaningfulness also appears to be affected by the adoption of generative AI, particularly for certain journalistic roles. The entertainment role shows a significant positive
relationship with concerns that AI will take away the enjoyable aspects of their work. Simi-
larly, the mobilisation role is borderline significantly associated with fears that AI will make
their work less meaningful and more boring. This suggests that generative AI’s auto-
mation potential may disrupt roles where journalists strongly identify with the tasks at
risk of replacement. For instance, if generative AI automates or augments the writing
process, these roles might become less fulfilling and more monotonous, leading to
reduced job satisfaction.
Further, journalistic roles shape positive attitudes towards generative AI (see Figure 3).
While no significant relationships exist between the benefits of generative AI and the mobil-
isation or detached observer roles, both the watchdog and entertainment roles show a sig-
nificant association with higher-rated benefits. This indicates that journalists adhering to
these roles see greater advantages in incorporating AI technologies into their work.
However, the nature of the specific benefits differs notably between the two roles.
Figure 3. Regression models predicting how varying role conception scores influence benefits related
to the adoption of generative AI, controlling for gender, experience, and organisation size.
Specifically, the entertainment role shows a significant positive relationship with
beliefs that generative AI will improve the quality of their work and enable them to
reach larger audiences. This suggests that journalists aligning with this role see value in
generative AI assisting them in producing engaging content, which is arguably an impor-
tant part of entertainment journalism. Their relationship with the belief that generative AI
can enhance creativity, although only borderline significant, also fits within this understanding, as these journalists may see generative AI as a tool to enrich their content. Conversely, the watchdog role is significantly linked to beliefs that generative AI will improve efficiency and accuracy. Watchdog journalists may view generative AI as a means to enhance the precision and speed of their investigative work by allowing them to, for instance, process and analyse large datasets more effectively.
The survey also measures the importance of various issues for news organisations plan-
ning to implement generative AI (see Figure 4 for descriptive statistics). Overall, respon-
dents assign high importance to all issues, with fact-checking the output of generative AI
rated the highest, followed by the development of internal guidelines for its use. The issue
considered least important was building in-house generative AI tools instead of relying on
external providers.
Examining the influence of journalistic roles on these issues reveals significant relationships (see coefficient table in Appendix 3). The mobilisation role, watchdog role, and detached observer role are all significantly associated with higher index scores for the importance of ethical issues. This indicates that
these roles share a common concern about the ethical challenges posed by generative
AI in journalism. Conversely, the entertainment role does not show a significant relation-
ship with this index, suggesting that ethical implications are considered less immediately
critical among journalists showing stronger adherence to this role.
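The article does not spell out how the importance index is constructed; a simple unweighted mean of the item ratings, as sketched below with hypothetical values on an assumed 1–5 scale, is one common approach and should be read as an illustration rather than the authors' exact procedure.

```python
import statistics

# Hypothetical ratings by one respondent on the ethical and operational items
# (values and the 1-5 scale are assumptions for illustration only).
ratings = {
    "accountability": 5,
    "data_sharing": 4,
    "transparency": 5,
    "fact_checking": 5,
    "training": 4,
    "internal_guidelines": 4,
    "regulation": 3,
    "in_house_tools": 2,
    "journalist_involvement": 4,
}

# An unweighted mean collapses the item ratings into a single index score.
index_score = statistics.mean(ratings.values())
print(index_score)  # → 4.0
```

Under such a construction, the regression coefficients for the index (Appendix 3) summarise a role's association with the whole battery of items at once, while the item-level coefficients show where that association is concentrated.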
Figure 4. Descriptive statistics of journalists’ rated importance of various ethical and operational
issues related to the adoption of generative AI in journalism.
Responses to the specific issues reveal different priorities among the roles (see
Figure 5). The watchdog role exhibits significant positive relationships with a wide
range of issues, including accountability, data sharing policies, the development of in-
house tools, and journalist involvement in AI development. This underscores an emphasis
on preserving journalistic autonomy and traditional values in the adoption of generative
AI, perhaps driven by a desire to maintain professional distance from powerful platform
and technology companies. The detached observer role shows significant positive
relationships with priorities such as transparency, fact-checking, and the establishment
of internal guidelines. These findings reflect the detached observer role’s commitment
to objectivity and impartiality, which translates into a heightened sensitivity towards
maintaining public trust and upholding the integrity of journalistic practices in the adop-
tion of generative AI in journalism.
For the mobilisation role, significant relationships are observed only with the impor-
tance of accountability and guidelines. Similarly, the entertainment role only shows a bor-
derline significant positive relationship with the importance of generative AI training and
a borderline significant negative relationship with the importance of transparency. These
findings suggest that journalists adhering to these roles may place less emphasis on
ethical considerations in implementing generative AI, potentially prioritising audience
engagement and content creation over issues like accountability or transparency.
Figure 5. Regression models predicting how varying role conception scores influence the rated importance of ethical and operational issues related to the adoption of generative AI, controlling for gender, experience, and organisation size.
Use of Generative AI
Turning to the second research question, we examine how journalistic role conceptions shape the use of generative AI. Descriptive statistics show that over half of respondents use generative AI at least once per month in their capacity as journalists, with a quarter never using it and just over 10% using it daily. The most common applications involve brainstorming potential headlines and story ideas, while generating or editing illustrations is the least common use.
The regression analysis indicates that journalistic roles influence the use of generative AI, with particular contrasts between the detached observer and entertainment roles. These contrasting patterns likely reflect varying professional norms and values associated with each role. To illustrate these differences, Figure 6 displays only the regression coefficients for the two specific roles. All regression results can be found in a coefficient table in Appendix 4.
The entertainment role shows a significant positive relationship with the current use of
generative AI. Journalists aligning with this role are significantly more likely to use AI for
brainstorming story ideas, brainstorming headlines, generating text for articles, and gen-
erating or editing illustrations. These findings suggest that while entertainment journal-
ists are among the most concerned about the adoption of generative AI in their work,
they are actively integrating the technology into their workflow. This readiness to adopt AI likely reflects the role’s focus on reaching the largest possible audiences,
viewing generative AI as a valuable tool for boosting their content.
In contrast, journalists who identify with the detached observer role show a borderline
significant negative relationship with the current use of generative AI. Specifically, they
are significantly less likely to use AI for brainstorming story ideas, researching background
information, developing interview questions, brainstorming headlines, and generating
text for articles. This reluctance likely stems from concerns that generative AI may under-
mine the role’s commitment to objectivity, impartiality, and balanced reporting.
Figure 6. Regression models predicting how two role conception scores influence the use of genera-
tive AI for various journalistic tasks, controlling for gender, experience, and organisation size.
Looking ahead, journalists see the highest potential of generative AI in information
analysis and content optimisation. The lowest rated potential lies in content creation
tasks (see Figure 7 for descriptive statistics).
As Figure 8 illustrates, the regression analysis of potential applications highlights role-specific expectations (see coefficient table in Appendix 5).
Figure 7. Descriptive statistics of journalists’ responses to the potential of generative AI across different phases of the journalistic work process.
Figure 8. Regression models predicting how varying role conception scores influence the perceived potential of generative AI across different phases of journalistic work, controlling for gender, experience, and organisation size.
The entertainment role shows significant positive relationships with the potential of generative AI in the idea generation
phase as well as for news personalisation and platform formatting. These findings align
with the current reliance on generative AI for brainstorming among journalists adhering
to the role and reflect their belief in AI’s capacity to enhance the appeal and reach of
journalistic content. In contrast, the watchdog role is significantly positively related to
the potential of generative AI in the content creation and editing phases and shows a bor-
derline significant positive relationship for information analysis, such as summarising
documents and processing data. This is consistent with watchdog journalists’ belief
that generative AI can improve efficiency and accuracy, aligning with their focus on enhancing the precision and efficiency of their investigative work.
Discussion
In this article, we have explored how journalists’ alignment with different role conceptions influences their adoption of generative AI, finding significant variations across roles. The most notable differences emerge between the mobilisation and entertainment roles on the one hand, and the watchdog and detached observer roles on the other. These differences largely align with traditional conceptions of the respective roles, suggesting
that journalists’ professional identities play a key role in shaping their approach to
generative AI.
Broadly speaking, journalists adhering more to the mobilisation or entertainment roles
seem more concerned about generative AI’s impact on their work than about the various
ethical and operational issues related to its adoption in journalism. Specifically, they are
worried about generative AI causing job losses and diminishing job meaningfulness.
The mobilisation and entertainment roles are both inherently audience-oriented, focusing
on motivating and entertaining broad audiences (Mellado 2015). This sort of journalism
has been found to be more susceptible to economic influences than traditional news journalism (Hanusch 2019). Consequently, these journalists may feel more vulnerable to the
economic shifts driven by the automation of content creation through generative AI.
Despite this, entertainment journalists, unlike mobilisation journalists, also recognise
benefits in generative AI, such as enhancing content quality and reach, and they actively
incorporate the technology into their work. This reflects the professional values of entertainment journalists, who have been found to be increasingly profit-oriented and geared
towards audience engagement (Skovsgaard 2014).
In contrast, journalists adhering more to the watchdog or detached observer roles
exhibit less concern about generative AI’s direct impact on their job security and the
meaningfulness of their work, likely feeling more confident that their investigative and
analytical tasks cannot be as readily automated. However, they are significantly more con-
cerned with the ethical and operational implications of generative AI for journalism,
emphasising the need to uphold journalistic values and standards. Consequently,
detached observer journalists do not see much potential in generative AI and largely
avoid its use. These journalists may fear that the biases inherent in these technologies
will interfere with the values of objectivity and neutrality that their role is closely associ-
ated with (Weaver and Wilhoit 1996). On the other hand, journalists showing stronger
adherence to the watchdog role are more open to the integration of generative AI in
their work, recognising benefits such as improving efficiency and accuracy. However,
its use appears to be conditioned on the preservation of journalistic autonomy, with these
journalists advocating for news organisations to develop and control generative AI tools
with active journalist involvement. This reflects the professional legitimacy of the watchdog role, which is strongly anchored in journalism’s independent institutional position
vis-à-vis power structures (Hanitzsch and Vos 2018).
These results illustrate that technological advancements impact journalists differently
based on their individual role conceptions. This is particularly important to consider in the
context of generative AI, which more accurately replicates the craft skills at the core of the
journalistic profession compared to previous automation technologies. Roles closely associ-
ated with these craft skills view generative AI as a greater professional threat than roles requir-
ing deep investigative work and critical analysis, which tend to view generative AI more as a
tool to enhance these processes. However, the results reveal that these differences are more
complex. For example, those in entertainment roles, who express the greatest concern about
the adoption of generative AI in their work, also use the technology the most. There could be
several reasons for this. Perhaps it is the journalists who are most at risk of AI replacement that
are primarily encouraged to use it by management, which pushes for AI adoption to boost
productivity and efficiency. Or perhaps these journalists have more direct and tangible
experiences with the technology, allowing them to better recognise its immediate
benefits and potential drawbacks compared to those with more theoretical knowledge of
AI. Future research could investigate the nuances of the relationship between AI attitudes
and usage through qualitative approaches.
Ultimately, this study offers a deeper understanding of the varied and nuanced ways in
which journalists engage with AI, moving beyond binary tales of acceptance or resistance
to technology. By acknowledging the heterogeneity of the journalistic workforce, we can
better grasp the complex dynamics that shape journalism as it adapts to technological
advancements. Not all journalists are the same, and their interactions with new technol-
ogies vary in ways that both reect and reinforce their professional identities. Mapping
these interactions reveals how technological innovation and professional identities are
deeply intertwined in modern journalism. Technology does not merely shape journalism
from the outside; it evolves through its interaction with the identities and practices of the
journalists who use it. This reciprocal relationship highlights the active role journalists play
in the co-evolution of technology and their profession.
However, this perspective alone does not capture the full picture of the issue. Journalists’ ability to act in line with their role conception is influenced by the organisations they
work for (Tandoc, Hellmueller, and Vos 2013). Despite the widespread accessibility of gen-
erative AI tools, their procurement and deployment within newsrooms are often deter-
mined by upper management, whose decisions shape what technologies are available
and how they are used (Wu 2024). These decisions are often influenced by broader strategic goals, such as cost reduction, audience engagement, or competitive positioning,
which may not always align with journalistic values (Gutierrez Lopez et al. 2022). To
balance these considerations, many news organisations have introduced ethical guidelines that promote the responsible use of AI-based technologies, offering journalists
best practices and structured frameworks to navigate AI tools. However, many of these
guidelines are strikingly similar, potentially lacking the specificity needed to address
the distinct challenges and opportunities that arise for dierent journalists (de-Lima-
Santos, Yeung, and Dodds 2024).
This speaks directly to the practical implications of our study. The complexity highlighted by the varied attitudes towards and uses of generative AI across different journalistic roles demonstrates that a one-size-fits-all approach to its integration is insufficient. Tailored solutions are necessary to address the specific concerns and needs of different journalists. For instance, role-specific training programmes and ethical guidelines could
equip journalists with knowledge and skills to understand how generative AI aligns
with their own practices, bridging potential knowledge gaps and enabling informed
use of AI tools across journalistic roles. Furthermore, fostering cross-role collaboration
in the development and implementation of AI tools could help ensure that these technologies complement rather than disrupt the diverse practices and identities of different
roles. By embracing a nuanced and role-sensitive approach, news organisations can
better navigate the integration of generative AI, balancing eciency gains with the pres-
ervation of journalistic values and role diversity.
This study has certain limitations that should be acknowledged. Due to its focus on
Danish journalists and the sampling method employed, the findings may not be fully
applicable to journalists in other countries with different media environments. To
address this, future research could employ larger and more diverse samples to explore
whether the findings differ across various national and cultural contexts. Moreover, the
study captures a snapshot of attitudes and practices during a specific period, and the
rapid evolution of generative AI technologies suggests that these may evolve over
time. Longitudinal approaches that track changes in attitudes and practices as AI technol-
ogies progress would provide valuable insights into these dynamics. Despite these limit-
ations, this research adds novel insights into the previously underexplored intersection of
journalistic roles and AI, highlighting significant variations within the journalism pro-
fession that are relevant beyond the national context of this study. These variations are
important to consider in further academic explorations of the dynamic co-evolution of tra-
ditional journalistic roles and technological innovation.
Disclosure Statement
No potential conflict of interest was reported by the author(s).
ORCID
Lynge Asbjørn Møller http://orcid.org/0000-0002-1632-2253
Hannes Cools http://orcid.org/0000-0002-4626-1527
Morten Skovsgaard http://orcid.org/0000-0003-3577-0761
References
AAPOR. 2023. Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys. 10th ed. https://aapor.org/wp-content/uploads/2024/03/Standards-Definitions-10th-edition.pdf
Alkaissi, H., and S. I. McFarlane. 2023. “Artificial Hallucinations in ChatGPT: Implications in Scientific
Writing.” Cureus 15 (2): e35179. https://doi.org/10.7759/cureus.35179.
Beckett, C., and M. Yaseen. 2023. Generating Change: A Global Survey of What News Organisations are
Doing with Artificial Intelligence. The London School of Economics and Political Science. https://
www.journalismai.info/research/2023-generating-change.
Belair-Gagnon, V., R. Zamith, and A. E. Holton. 2020. “Role Orientations and Audience Metrics in
Newsrooms: An Examination of Journalistic Perceptions and their Drivers.” Digital Journalism 8
(3): 347–366. https://doi.org/10.1080/21670811.2019.1709521.
Brüggemann, M., S. Engesser, F. Büchel, E. Humprecht, and L. Castro. 2014. “Hallin and Mancini
Revisited: Four Empirical Types of Western Media Systems: Hallin and Mancini Revisited.”
Journal of Communication 64 (6): 1037–1065. https://doi.org/10.1111/jcom.12127.
Bucher, T. 2017. “Machines Don’t Have Instincts’: Articulating the Computational in Journalism.” New
Media & Society 19 (6): 918–933. https://doi.org/10.1177/1461444815624182.
Carlson, M. 2015. “The Robotic Reporter.” Digital Journalism 3 (3): 416–431. https://doi.org/10.1080/
21670811.2014.976412.
Cohen, B. C. 1963. Press and Foreign Policy. Princeton, NJ: Princeton University Press.
Cools, H., and N. Diakopoulos. 2024. “Uses of Generative AI in the Newsroom: Mapping Journalists’
Perceptions of Perils and Possibilities.” Journalism Practice 1–19. https://doi.org/10.1080/
17512786.2024.2394558.
de-Lima-Santos, M.-F., W. N. Yeung, and T. Dodds. 2024. “Guiding the Way: A Comprehensive
Examination of AI Guidelines in Global Media.” AI & SOCIETY 1–19. https://doi.org/10.1007/
s00146-024-01973-5.
Diakopoulos, N., H. Cools, C. Li, N. Helberger, E. Kung, and A. Rinehart. 2024. Generative AI in
Journalism: The Evolution of Newswork and Ethics in a Generative Information Ecosystem. https://
cdn.theconversation.com/static_files/files/3273/AP_Generative_AI_Report_April_2024.pdf
Dickinson, R., J. Matthews, and K. Saltzis. 2013. “Studying Journalists in Changing Times:
Understanding News Work as Socially Situated Practice.” International Communication Gazette
75 (1): 3–18. https://doi.org/10.1177/1748048512461759.
Donsbach, W. 2008. “Journalists’ Role Perception.” In The International Encyclopedia of
Communication, edited by W. Donsbach, 1st ed. Hoboken, NJ: Wiley.
Gulla, J., R. Svendsen, L. Zhang, A. Stenbom, and J. Frøland. 2021. “Recommending News in
Traditional Media Companies.” AI Magazine 42 (3): 55–69. https://doi.org/10.1609/aimag.v42i3.
18146.
Gutierrez Lopez, M., C. Porlezza, G. Cooper, S. Makri, A. MacFarlane, and S. Missaoui. 2022. “A
Question of Design: Strategies for Embedding AI-Driven Tools into Journalistic Work Routines.”
Digital Journalism 11 (3): 484–503. https://doi.org/10.1080/21670811.2022.2043759.
Hanitzsch, T., and T. P. Vos. 2018. “Journalism Beyond Democracy: A New Look into Journalistic Roles
in Political and Everyday Life.” Journalism 19 (2): 146–164. https://doi.org/10.1177/
1464884916673386.
Hanitzsch, T., T. P. Vos, O. Standaert, F. Hanusch, J. F. Hovden, L. Hermans, and J. Ramaprasad. 2019.
“Role Orientations: Journalists’ Views on Their Place in Society.” In Worlds of Journalism:
Journalistic Cultures Around the Globe, edited by T. Hanitzsch, F. Hanusch, J. Ramaprasad, and
A. S. de Beer, 161–197. New York, NY: Columbia University Press.
Hanusch, F. 2019. “Journalistic Roles and Everyday Life: An Empirical Account of Lifestyle Journalists’
Professional Views.” Journalism Studies 20 (2): 193–211. https://doi.org/10.1080/1461670X.2017.
1370977.
Hicks, M. T., J. Humphries, and J. Slater. 2024. “ChatGPT is Bullshit.” Ethics and Information Technology
26 (2): 1–10. https://doi.org/10.1007/s10676-024-09775-5.
Holton, A. E., S. C. Lewis, and M. Coddington. 2016. “Interacting with Audiences: Journalistic Role
Conceptions, Reciprocity, and Perceptions about Participation.” Journalism Studies 17 (7): 849–
859. https://doi.org/10.1080/1461670X.2016.1165139.
Johnstone, J. W. C., E. J. Slawski, and W. W. Bowman. 1972. “The Professional Values of American
Newsmen.” Public Opinion Quarterly 36 (4): 522. https://doi.org/10.1086/268036.
Jones, B., R. Jones, and E. Luger. 2022. “AI ‘Everywhere and Nowhere’: Addressing the AI Intelligibility
Problem in Public Service Journalism.” Digital Journalism 10 (10): 1731–1755. https://doi.org/10.
1080/21670811.2022.2145328.
Kim, D., and S. Kim. 2018. “Newspaper Journalists’ Attitudes Towards Robot Journalism.” Telematics
and Informatics 35 (2): 340–357. https://doi.org/10.1016/j.tele.2017.12.009.
Lindén, C.-G. 2018. “Algorithms are a Reporter’s New Best Friend: News Automation and the Case for
Augmented Journalism.” In The Routledge Handbook of Developments in Digital Journalism, edited
by S. A. Eldridge and B. Franklin, 237–250. London, UK: Routledge.
Mellado, C. 2015. “Professional Roles in News Content: Six Dimensions of Journalistic Role
Performance.” Journalism Studies 16 (4): 596–614. https://doi.org/10.1080/1461670X.2014.
922276.
Mellado, C., and A. van Dalen. 2014. “Between Rhetoric and Practice: Explaining the Gap between
Role Conception and Performance in Journalism.” Journalism Studies 15 (6): 859–878. https://
doi.org/10.1080/1461670X.2013.838046.
Milosavljević, M., and I. Vobič. 2019. “Human Still in the Loop.” Digital Journalism 7 (8): 1098–1116.
https://doi.org/10.1080/21670811.2019.1601576.
Møller, L. A., M. Skovsgaard, and C. de Vreese. 2024. “Reinforce, Readjust, Reclaim: How Artificial
Intelligence Impacts Journalism’s Professional Claim.” Journalism 1–18. https://doi.org/10.1177/
14648849241269300.
Møller, L. A., A. van Dalen, and M. Skovsgaard. 2024. “A Little of that Human Touch: How Regular
Journalists Redefine Their Expertise in the Face of Artificial Intelligence.” Journalism Studies 26
(1): 1–17. https://doi.org/10.1080/1461670X.2024.2412212.
Molyneux, L., and R. Zamith. 2022. “Surveying Journalists in the “New Normal”: Considerations and
Recommendations.” Journalism 23 (1): 153–170. https://doi.org/10.1177/1464884920935277.
Moran, R. E., and S. J. Shaikh. 2022. “Robots in the News and Newsrooms: Unpacking Meta-
Journalistic Discourse on the Use of Artificial Intelligence in Journalism.” Digital Journalism 10
(10): 1756–1774. https://doi.org/10.1080/21670811.2022.2085129.
Newman, N., R. Fletcher, C. T. Robertson, A. R. Arguedas, and R. K. Nielsen. 2024. Digital News Report
2024. Reuters Institute for the Study of Journalism. https://reutersinstitute.politics.ox.ac.uk/sites/
default/files/2024-06/RISJ_DNR_2024_Digital_v10%20lr.pdf.
Nishal, S., and N. Diakopoulos. 2023. “Envisioning the Applications and Implications of Generative AI
for News Media.” In CHI 2023 Workshop on Generative AI and HCI, 1–7. New York, NY: ACM.
Olsen, G. R. 2023. “Enthusiasm and Alienation: How Implementing Automated Journalism Affects
the Work Meaningfulness of Three Newsroom Groups.” Journalism Practice 19 (2): 1–17.
https://doi.org/10.1080/17512786.2023.2190149.
Örnebring, H. 2018. “Journalists Thinking About Precarity: Making Sense of the “New Normal”.” #ISOJ
Journal 8 (1): 109–126.
Rydenfelt, H. 2022. “Transforming Media Agency? Approaches to Automation in Finnish Legacy
Media.” New Media & Society 24 (12): 2598–2613. https://doi.org/10.1177/1461444821998705.
Schapals, A. K., and C. Porlezza. 2020. “Assistance or Resistance? Evaluating the Intersection of
Automated Journalism and Journalistic Role Conceptions”. Media and Communication 8 (3):
16–26. https://doi.org/10.17645/mac.v8i3.3054.
Singh, S., and N. Ramakrishnan. 2023. “Is ChatGPT Biased? A Review.” International Journal of
Engineering Research & Technology (IJERT) 12 (4). https://doi.org/10.17577/IJERTV12IS040122.
Sjøvaag, H., E. Stavelin, M. Karlsson, and A. Kammer. 2019. “The Hyperlinked Scandinavian News
Ecology: The Unequal Terms Forged by the Structural Properties of Digitalisation.” Digital
Journalism 7 (4): 507–531. https://doi.org/10.1080/21670811.2018.1454335.
Skovsgaard, M. 2014. “A Tabloid Mind? Professional Values and Organizational Pressures as
Explanations of Tabloid Journalism.” Media, Culture & Society 36 (2): 200–218. https://doi.org/
10.1177/0163443713515740.
Stray, J. 2021. “Making Artificial Intelligence Work for Investigative Journalism.” In Algorithms,
Automation, and News: New Directions in the Study of Computation and Journalism, edited by
N. Thurman, S. C. Lewis, and J. Kunert, 97–118. New York, NY: Routledge.
Syvertsen, T., G. Enli, O. J. Mjøs, and H. Moe. 2014. The Media Welfare State: Nordic Media in the Digital
Era. Ann Arbor, MI: University of Michigan Press.
Tandoc, E. C., L. Hellmueller, and T. P. Vos. 2013. “Mind the Gap: Between Journalistic Role
Conception and Role Enactment.” Journalism Practice 7 (5): 539–554. https://doi.org/10.1080/
17512786.2012.726503.
Thurman, N., K. Dörr, and J. Kunert. 2017. “When Reporters Get Hands-on with Robo-Writing:
Professionals Consider Automated Journalism’s Capabilities and Consequences.” Digital
Journalism 5 (10): 1240–1259. https://doi.org/10.1080/21670811.2017.1289819.
van Dalen, A. 2012. “The Algorithms Behind the Headlines: How Machine-Written News Redefines
the Core Skills of Human Journalists.” Journalism Practice 6 (5-6): 648–658. https://doi.org/10.
1080/17512786.2012.667268.
Weaver, D. H., and G. C. Wilhoit. 1996. The American Journalist in the 1990s: U.S. News People at the
End of an Era. Mahwah, NJ: Erlbaum.
Wu, S. 2024. “Journalists as Individual Users of Artificial Intelligence: Examining Journalists’ “Value-
Motivated Use” of ChatGPT and Other AI Tools Within and Without the Newsroom.” Journalism
14648849241303047. https://doi.org/10.1177/14648849241303047.
Wu, S., E. C. Tandoc, and C. T. Salmon. 2019. “Journalism Reconfigured: Assessing Human–Machine
Relations and the Autonomous Power of Automation in News Production.” Journalism Studies 20
(10): 1440–1457. https://doi.org/10.1080/1461670X.2018.1521299.
Appendices
Appendix 1. OLS regression coefficients predicting how varying role conception scores influence concerns related to the adoption of generative AI, controlling for gender, journalistic experience, and organisation size
Items: “I worry that generative AI will … ”  |  Mobilisation role  |  Watchdog role  |  Detached observer role  |  Entertainment role
“cause job cuts in the news industry”  |  0.20*  |  −0.06  |  −0.03  |  0.17
“take my job in the future”  |  0.20*  |  −0.13*  |  −0.21*  |  0.29**
“take away specific tasks that I enjoy”  |  0.17  |  −0.08  |  0.12  |  0.28**
“make my job less meaningful”  |  0.18*  |  −0.10  |  −0.10  |  0.05
“make my job more boring”  |  0.19*  |  −0.02  |  0.08  |  0.15
“compromise my journalistic integrity”  |  0.12  |  −0.04  |  0.02  |  −0.03
Index score  |  0.17**  |  −0.07  |  −0.02  |  0.15*
*p < .10, **p < .05, ***p < .01. Cell entries are OLS regression coefficients.
Appendix 2. OLS regression coefficients predicting how varying role conception scores influence benefits related to the adoption of generative AI, controlling for gender, journalistic experience, and organisation size
Items: “I believe that generative AI will … ”  |  Mobilisation role  |  Watchdog role  |  Detached observer role  |  Entertainment role
“enhance my creativity”  |  −0.01  |  0.05  |  0.01  |  0.20*
“improve my efficiency”  |  0.03  |  0.19**  |  0.12  |  0.10
“enable me to pursue tasks that I enjoy”  |  −0.16  |  0.15**  |  −0.02  |  0.10
“enhance the accuracy of my output”  |  0.11  |  0.13*  |  0.08  |  0.09
“improve the quality of my output”  |  0.10  |  0.10  |  −0.01  |  0.29***
“help my output reach larger audiences”  |  −0.03  |  0.05  |  −0.08  |  0.25**
Index score  |  0.01  |  0.11**  |  0.02  |  0.17**
*p < .10, **p < .05, ***p < .01. Cell entries are OLS regression coefficients.
Appendix 3. OLS regression coefficients predicting how varying role conception scores influence the rated importance of ethical and operational issues related to the adoption of generative AI, controlling for gender, experience, and organisation size
Items: “How important do you find … ?”  |  Mobilisation role  |  Watchdog role  |  Detached observer role  |  Entertainment role
Determining who is accountable for mistakes  |  0.25***  |  0.15**  |  0.13  |  0.13
Creating fair data sharing policies between news organisations and tech companies  |  0.11  |  0.14**  |  0.17*  |  −0.02
Being transparent with the audience about the use of generative AI  |  0.03  |  0.04  |  0.23***  |  −0.14*
Fact-checking the output of generative AI  |  0.04  |  0.09**  |  0.25***  |  0.07
Training journalists in the use of generative AI  |  0.11  |  0.06  |  0.08  |  0.15*
Creating internal guidelines for the use of generative AI in journalism  |  0.13**  |  0.12***  |  0.18***  |  0.09
Regulating the use of generative AI in journalism  |  0.04  |  0.08  |  0.18**  |  0.00
Developing in-house generative AI tools rather than relying on external companies  |  0.18*  |  0.20***  |  0.08  |  0.06
Ensuring that journalists are involved in developing and implementing generative AI tools within news organisations  |  0.10  |  0.20***  |  0.25***  |  0.07
Index score  |  0.11**  |  0.12***  |  0.17***  |  0.05
*p < .10, **p < .05, ***p < .01. Cell entries are OLS regression coefficients.
Appendix 4. OLS regression coefficients predicting how varying role conception scores influence the use of generative AI for various journalistic tasks, controlling for gender, experience, and organisation size

| Items: “How often do you use generative AI for … ” | Mobilisation role | Watchdog role | Detached observer role | Entertainment role |
|---|---|---|---|---|
| Overall use | 0.11 | 0.02 | −0.19* | 0.56*** |
| Brainstorming story ideas | 0.21* | −0.01 | −0.38*** | 0.53*** |
| Researching background information on a story | 0.02 | −0.05 | −0.20** | 0.20* |
| Processing vast amounts of data | 0.07 | 0.08 | −0.15 | 0.15 |
| Translating content from foreign languages | −0.08 | 0.04 | −0.14 | −0.15 |
| Brainstorming interview questions | 0.02 | −0.12 | −0.29** | 0.23* |
| Generating interview transcriptions | −0.03 | −0.01 | 0.04 | −0.20 |
| Generating data visualisations | 0.16** | 0.04 | −0.01 | 0.08 |
| Brainstorming potential headlines | 0.09 | −0.10 | −0.34** | 0.59*** |
| Generating text to use in articles | 0.18* | 0.00 | −0.26*** | 0.22** |
| Generating or editing illustrations for articles | 0.08 | −0.02 | −0.12* | 0.16** |
| Proofreading text | 0.02 | 0.07 | 0.08 | 0.17 |
| Providing feedback to colleagues | 0.05 | 0.04 | −0.11 | 0.10 |

*p < .10, **p < .05, ***p < .01. Cell entries are OLS regression coefficients.
JOURNALISM PRACTICE 21
Appendix 5. OLS regression coefficients predicting how varying role conception scores influence the rated potential of generative AI across different phases of journalistic work, controlling for gender, experience, and organisation size

| Items: “How do you imagine the potential … ?” | Mobilisation role | Watchdog role | Detached observer role | Entertainment role |
|---|---|---|---|---|
| Idea generation (brainstorming stories, sources, interview questions) | 0.16* | 0.10 | −0.02 | 0.37*** |
| Information analysis (summarising documents, processing data) | 0.17** | 0.11* | 0.01 | 0.16 |
| Content creation (generating text and images) | 0.08 | 0.12** | −0.13 | 0.14 |
| Content editing (proofreading, feedback, image editing) | 0.09 | 0.21*** | 0.12 | −0.03 |
| Reformatting content (news personalisation, reformatting for platforms) | 0.11 | 0.12* | 0.02 | 0.22** |
| Optimising content (audience metrics analysis, search engine optimisation) | 0.11 | 0.11 | 0.05 | 0.10 |

*p < .10, **p < .05, ***p < .01. Cell entries are OLS regression coefficients.