Kevin Crowston’s research while affiliated with Syracuse University and other places

Publications (323)


Boundaries of data journalism in U.S. public radio newsrooms
  • Article

March 2025 · 8 Reads · Journalism

Stan Jastrzebski · [...] · Jocelyn McKinnon-Crowley · Kevin Crowston

The recent addition of data journalists to several dozen U.S. public radio newsrooms has created multiple new hybridities in the form. No longer are numbers and large datasets “audio poison.” Instead, they are an essential tool for these journalists, who prize journalism’s interpretive function, expressing information in new ways and challenging conventions of broadcast newsroom employment. This study, which relies on semi-structured interviews with 13 public radio data journalists, uses Carlson’s boundary work typology to analyze the ways in which data journalists are expanding the boundaries of U.S. public radio journalism, as well as the ways in which they have pushed back against expulsionary pressures. This study’s findings problematize the idea that the results of boundary work must be expressed as an in-or-out proposition. Rather, U.S. public radio data journalists suggest their boundaries form a continuum along which they may be conditionally accepted by their colleagues, depending on deadlines and on the skills possessed by non-data journalists.


Figure: Summary of results from the literature.
Deskilling and upskilling with AI systems
  • Article
  • Full-text available

March 2025 · 52 Reads · Information Research: an international electronic journal

Introduction. Deskilling is a long-standing prediction of the use of information technology, raised anew by the increased capabilities of artificial intelligence (AI) systems. A review of studies of AI applications suggests that deskilling (or levelling of ability) is a common outcome, but systems can also require new skills, i.e., upskilling. Method. To identify which settings are more likely to yield deskilling vs. upskilling, we propose a model of a human interacting with an AI system for a task. The model highlights the possibility for a worker to develop and exhibit (or not) skills in prompting for, evaluating, and editing system output, thus yielding upskilling or deskilling. Findings. We illustrate these model-predicted effects on work with examples from current studies of AI-based systems. Conclusions. We discuss the organizational implications of systems that deskill or upskill workers and suggest future research directions.


The Role of Human Creativity in the Presence of AI Creativity Tools at Work: A Case Study on AI-Driven Content Transformation in Journalism

February 2025 · 23 Reads

[...] · Jocelyn McKinnon-Crowley · [...]

As AI becomes more capable, it is unclear how human creativity will remain essential in jobs that incorporate AI. We conducted a 14-week study of a student newsroom using an AI tool to convert web articles into social media videos. Most newsroom members treated the tool as a creative springboard, yet still had to edit many AI outputs. The tool enabled the team to publish successful content, receiving over 500,000 views. Yet creators sometimes treated the AI as an unquestioned expert, accepting flawed suggestions. Editorial critique was essential for spotting errors and guiding creative solutions when the AI failed. We discuss how AI's inherent gaps ensure that human creativity remains vital.


The Task Matters: The Effect of Perceived Similarity to AI on Intention to Use in Different Task Types

January 2025 · 16 Reads · 1 Citation

With the development of AI technologies, especially generative AI (GAI) such as ChatGPT, GAI is increasingly assisting people in various tasks. However, people may have different requirements for GAI when using it for different kinds of tasks. For instance, when brainstorming new ideas, people may want GAI to propose ideas that supplement theirs from different problem-solving perspectives, but for decision-making tasks, they may prefer that GAI adopt a problem-solving process similar to their own, reaching a similar or even identical decision. We conducted an online experiment examining how perceived similarity between GAI and human task-solving influences people's intention to use GAI, mediated by trust, for four task types (creativity, planning, intellective, and decision-making tasks). We demonstrate that the effect of similarity on trust (and thus intention to use GAI) depends on the type of task. This paper contributes to understanding the impact of task type on the relationship between perceived similarity and GAI adoption, with implications for future use of GAI in various task contexts.



Figure 1 Individuals' increase in capability, with the zone of proximal development (ZPD) in the centre of the figure.
Figure 2 Machine zone of proximal development (ZPD).
Figure 3 Co-augmentation of human and machine zones of proximal development (ZPD).
Figure 4 The Gravity Spy classification interface, showing a glitch to be classified on the left and the potential glitch classes on the right.
Supporting Human and Machine Co-Learning in Citizen Science: Lessons From Gravity Spy

December 2024 · 24 Reads · Citizen Science: Theory and Practice

We explore the bi-directional relationship between human and machine learning in citizen science. Theoretically, the study draws on the concept of the zone of proximal development (ZPD), which allows us to describe AI augmentation of human learning, human augmentation of machine learning, and how tasks can be designed to facilitate co-learning. The study takes a design-science approach to explore the design, deployment, and evaluation of the Gravity Spy citizen science project. The findings highlight the challenges and opportunities of co-learning, where both humans and machines contribute to each other’s learning and capabilities. Taking its point of departure in the literature on co-learning, the study develops a framework for designing projects in which humans and machines mutually enhance each other’s learning. The research contributes to the existing literature by developing a dynamic approach to human-AI augmentation, emphasizing that the ZPD supports ongoing learning for volunteers and keeps machine learning aligned with evolving data. The approach offers potential benefits for project scalability, participant engagement, and automation considerations, while acknowledging the importance of tutorials, community access, and expert involvement in supporting learning.

Project Archetypes: A Blessing and a Curse for AI Development

August 2024 · 48 Reads

Software projects rely on what we call project archetypes, i.e., pre-existing mental images of how projects work. They guide the distribution of responsibilities, planning, and expectations. However, with technological progress, project archetypes may become outdated, ineffective, or counterproductive by impeding more adequate approaches. Understanding the archetypes of software development projects is key to leveraging their potential. The development of applications using machine learning and artificial intelligence provides a context in which existing archetypes may become outdated and need to be questioned, adapted, or replaced. We analyzed 36 interviews from 21 projects between IBM Watson and client companies and identified four project archetypes that members initially used to understand the projects. We then derive a new project archetype, the cognitive computing project, from the interviews. It can inform future development projects based on AI development platforms. Project leaders should proactively manage project archetypes, while researchers should investigate what guides initial understandings of software projects.


Citations (79)


... As AI continues to evolve, these technologies have become increasingly capable of acting as autonomous team members, giving embodied AI the capability to collaborate with humans in various tasks (e.g., Liang et al., 2025; Zheng et al., 2025). Human-AI collaboration introduces new dynamics into traditional teamwork, and trust becomes a critical factor in determining the success of such partnerships (Cheng et al., 2025; Ju et al., 2025; Oberhofer, 2025). ...

Reference:

Introduction to the Minitrack on Collaboration with Intelligent Systems: Machines as Teammates
The Task Matters: The Effect of Perceived Similarity to AI on Intention to Use in Different Task Types

... Additionally, LLM-based topic modeling studies using BERTopic have been conducted in various fields such as healthcare [16], travel [17], and politics [18]. It is also noted that topic modeling studies of social media data with BERTopic have recently become available [19][20][21]. ...

Framing and feelings on social media: the futures of work and intelligent machines
  • Citing Article
  • April 2024

Information Technology and People

... Future research could investigate how organizations can design learning environments that promote active engagement with GenAI while maintaining sufficient domain expertise for critical evaluation of outputs, particularly given the risk of deskilling and overreliance on AI-generated solutions (Hannigan et al., 2024; Lindebaum & Fleming, 2024). Future studies may also investigate optimal approaches to algorithmic decision authority versus human discretion (Grote, Zürich, Parker, & Crowston, 2024; Hillebrand et al., 2025; Kim, Glaeser, Hillis, Kominers, & Luca, 2024), particularly given the need to maintain strategic alignment while enabling decentralized innovation through GenAI tools. pharmaceutics (Elbadawi, Li, Basit, & Gaisford, 2024). ...

Taming Artificial Intelligence: A Theory of Control-Accountability Alignment among AI Developers and Users
  • Citing Article
  • October 2024

Academy of Management Review

... HCI research on GAI has largely focused on designing and developing systems leveraging GAI to aid workers' tasks [118]. Other studies have used speculative design methods in controlled settings to gauge workers' preliminary perceptions and attitudes about tentative features to understand their potential benefits and/or harms in creative work processes [61,110,115]. In the context of writing, studies explore developing and implementing GAI to improve workers' outputs [50,89]. ...

ReelFramer: Human-AI Co-Creation for News-to-Video Translation
  • Citing Conference Paper
  • May 2024

... Gravity Spy [78][79][80][81] employs a CNN, a specialized deep learning algorithm designed for image recognition tasks, to classify glitches in LIGO detector data based on their time-frequency morphology. We utilized the dataset containing glitches detected by Gravity Spy in the O3a and O3b data, which can be found in [82], to minimize their influence on the background FAR calculated using AresGW. ...

Gravity Spy: lessons learned and a path forward

The European Physical Journal Plus

... Implementing AI solutions also involves overcoming unexpected challenges and establishing reliable meanings about technology and data. Dolata and Crowston (2024) argue that the sensemaking process is integral to addressing these challenges and ensuring the successful deployment of AI technologies. This involves a continuous cycle of interpretation and adaptation, which is essential for maintaining the relevance and efficacy of AI solutions. ...

Making Sense of AI Systems Development
  • Citing Article
  • January 2023

IEEE Transactions on Software Engineering

... In past decades, scientists produced and managed their own data, giving limited thought to their reusability. This is changing through generational turnover and continued diffusion of open science principles (Campbell et al., 2019; Borycz et al., 2023). The potential for misuse is a frequently noted barrier to scientists' sharing of data (Perrier et al., 2020), and further work is needed to build effective guardrails around ES knowledge reuse. ...

Perceived benefits of open data are improving but scientists still lack resources, skills, and rewards

Humanities and Social Sciences Communications

... Broussard [4] developed an AI system to analyze data and identify opportunities or newsworthy investigative ideas related to public affairs, while Park et al. [47] automated news articles' comments moderation and provided analytics for better storytelling. Similarly, Petridis et al. [48] worked on co-designing and prototyping a tool to explore different reporting angles using LLMs. Jamil and Rubaiat [32] enabled journalists to query data and extract insights from multiple online sources. ...

AngleKindling: Supporting Journalistic Angle Ideation with Large Language Models
  • Citing Conference Paper
  • April 2023

... It provokes intelligent algorithms to autonomously create and manipulate video content (Chen, Fu, and Lyu, 2023). In particular, emerging generative AI chatbots such as ChatGPT, powered by OpenAI, and Bard, powered by Google, have become popular among content creators for creating video content such as videos, reels, and shorts on social media platforms (Wang et al., 2023). These chatbots, based on pre-trained language models such as the Generative Pre-trained Transformer (GPT) and the Pathways Language Model (PaLM), can generate video scripts and provide video content ideas that help content creators save time and make innovative videos; GPT-4 in particular, an updated version of GPT, can read images and provide insightful descriptions of them (Olga et al., 2023). ...

ReelFramer: Co-creating News Reels on Social Media with Generative AI
  • Citing Preprint
  • April 2023

... We draw on many different sources of data collected throughout the project: interviews with the Laser Interferometer Gravitational-Wave Observatory (LIGO) and ML scientists (domain experts), interviews with volunteers, trace data documenting system use, participant observation, and our use of the system. Other publications provide more details about these data collection and analysis efforts (e.g., Crowston et al. 2023; Jackson et al. 2020a,b). ...

Design Principles for Background Knowledge to Enhance Learning in Citizen Science
  • Citing Chapter
  • March 2023

Lecture Notes in Computer Science