About
50 Publications · 9,065 Reads
990 Citations
Introduction
Elizabeth is an organizational sociologist. She studies the human factors of AI in organizations, using interview methods and digital ethnography to gather qualitative data on how workers respond to, talk about, and work with new tech in their jobs. Her dissertation research examines the introduction of facial verification security protocols into gig work platforms.
Current institution
Additional affiliations
June 2017 - July 2017
September 2015 - June 2020
Publications (50)
[Winner of the Best Paper Award] In the discourse on human perceptions of algorithmic fairness, researchers have begun to analyze how these perceptions are shaped by sociotechnical context. In thinking through contexts of work, a half-century of research on organizational decision-making tells us that perceptions and interpretations made within the...
A team at Duke University and Duke Health system developed Sepsis Watch, an AI system that uses deep learning to assess a patient's likelihood of developing sepsis, to support the Duke Emergency Department with caring for patients with sepsis. Sepsis is a deadly condition that develops from complications with an infection, and while treatable, it c...
The rise of biometric security changes how users make decisions about their privacy. As passwords give way to faces and fingerprints, the algorithmic nature of these processes creates new cognitive labor for users. When biometrics are used in spaces of algorithmic management, workers must negotiate tradeoffs between security, privacy, fairness, and...
This essay takes as its starting point Gernot Grabher and Jonas König's (2020) piece, "Disruption, Embedded. A Polanyian Framing of the Platform Economy," and suggests focusing on how digital platforms are realized on the ground. We propose that the people experiencing platformization have a strong influence over the futures that platforms can ev...
Agentic pipelines present novel challenges and opportunities for human-centered explainability. The HCXAI community is still grappling with how best to make the inner workings of LLMs transparent in actionable ways. Agentic pipelines consist of multiple LLMs working in cooperation with minimal human control. In this research paper, we present early...
While human-AI collaboration has been a longstanding goal and topic of study for computational research, the emergence of increasingly naturalistic generative AI language models has greatly inflected the trajectory of such research. In this paper we identify how, given the language capabilities of generative AI, common features of human-human colla...
In this short paper we address issues related to building multimodal AI systems for human performance support in manufacturing domains. We make two contributions: we first identify challenges of participatory design and training of such systems, and secondly, to address such challenges, we propose the ACE paradigm: "Action and Control via Explanati...
This paper introduces v0.5 of the AI Safety Benchmark, which has been created by the MLCommons AI Safety Working Group. The AI Safety Benchmark has been designed to assess the safety risks of AI systems that use chat-tuned language models. We introduce a principled approach to specifying and constructing the benchmark, which for v0.5 covers only a...
Trust is an important factor in people's interactions with AI systems. However, there is a lack of empirical studies examining how real end-users trust or distrust the AI system they interact with. Most research investigates one aspect of trust in lab settings with hypothetical end-users. In this paper, we provide a holistic and nuanced understandi...
Through intensive research on datasets, benchmarks, and models, the computer-vision community has taken great strides to identify the societal biases intrinsic to these technologies. Less is known about the last mile of the computer-vision machine-learning pipeline: on-the-ground integration into the real world. In this paper, I analyze facial veri...
We investigate the privacy practices of labor organizers in the computing technology industry and explore the changes in these practices as a response to remote work. Our study is situated at the intersection of two pivotal shifts in workplace dynamics: (a) the increase in online workplace communications due to remote work, and (b) the resurgence o...
Despite the proliferation of explainable AI (XAI) methods, little is understood about end-users' explainability needs. This gap is critical, because end-users may have needs that XAI methods should but don't yet support. To address this gap and contribute to understanding how explainability can support human-AI interaction, we conducted a study of...
Gig workers are typically thought of as individuals toiling in digitized isolation, not as communities of shared learning. While it’s accurate to say they don’t have the same information-sharing norms as people in traditional employment arrangements, some do gather, in part in digital communities. Online forums, in this space, have become popular s...
Scholars and industry practitioners have debated how to best develop interventions for ethical artificial intelligence (AI). Such interventions recommend that companies building and using AI tools change their technical practices, but fail to wrangle with critical questions about the organizational and institutional context in which AI is developed...
Central to a number of scholarly, regulatory, and public conversations about algorithmic accountability is the question of who should have access to documentation that reveals the inner workings, intended function, and anticipated consequences of algorithmic systems, potentially establishing new routes for impacted publics to contest the operations...
Computer scientists are trained to create abstractions that simplify and generalize. However, a premature abstraction that omits crucial contextual details creates the risk of epistemic trespassing, by falsely asserting its relevance into other contexts. We study how the field of responsible AI has created an imperfect synecdoche by abstracting the...
Human-centered artificial intelligence (AI) posits that machine learning and AI should be developed and applied in a socially aware way. In this article, we argue that qualitative analysis (QA) can be a valuable tool in this process, supplementing, informing, and extending the possibilities of AI models. We show this by describing how QA can be int...
This report maps the challenges of constructing algorithmic impact assessments (AIAs) by analyzing impact assessments in other domains—from the environment to human rights to privacy. Impact assessment is a promising model of algorithmic governance because it bundles an account of potential and actual harms of a system with a means for identifying...
Drawing on a detailed analysis of Grabher and König’s study of platformization (Grabher & König, 2020), this essay develops a revision of Actor-Network Theory by proposing how a Device, Representation, Actor and Network or a DRAN Approach can be more helpful in making sense of platform economic processes. First, it locates the ways in which Grabher...
The rise of social media platforms has produced novel security threats and vulnerabilities. Malicious actors can now exploit entanglements of once disparate technical and social systems to target exposed communities. These exploits pose a challenge to legacy security frameworks drawn from technical and state-based conceptions of referent objects an...
Journalism scholarship has for the last two decades grappled with a paradox: while the industry spent years mired in gloomy proclamations of falling ad revenue, shrinking newsrooms, and the death of local reporting, since the late 2000s the industry has also been caught up in a wave of jubilance about technology and innovation. After a tour through...
This paper examines the emerging contours of a new organizational form, in which firms move beyond the cooperative pacts of alliances to a radicalized, aggressive co-optation of external assets. Taking our point of departure from the literature on the "networked" firm, we point to an alternative to the make, buy, or cooperate decision: in the Möbiu...
Sensemaking is a common activity in the analysis of a large or complex amount of information. This active area of HCI research asks how people come to understand such difficult sets of information. The information workplace is increasingly dominated by high-velocity, high-volume, complex information streams. At the same time, understanding how sen...
Journalistic work is increasingly conducted using cooperative technologies. But while journalists need security and privacy just like professionals in sectors like health and education, constrained finances and missing legal requirements cause journalists to rely mostly on third-party platforms for their professional communications. In this study,...
Social media platforms make it easy to share and tough to conceal. Built on surveillance economics, they’re driven to harvest user data. When users want to share their data, opening their digital lives to collection, then the goals of the platform and the user are aligned. In the language of UX/UI designers, this is called a “success.” When a user...
Maintaining computer security in an organization requires navigating a thorny landscape of adversaries, devices, and systems. As organizations grow more complex, integrating remote workers and networked, third-party tools, security risks multiply, and become more difficult to fully comprehend. News organizations are exemplary of this type of risk-l...
Despite wide-ranging threats and tangible risks, journalists have not done much to change their information or communications security practices in recent years. Through in-depth interviews, we provide insight into how journalists conceptualize security risk. By applying a mental models framework, we identify a model of "security by obscurity"—one...
Humanity's desire to record events happening in time has spawned a lineage of moving-image transcription systems, from early cinematographs to contemporary digital camcorder equipment. These technologies have arisen, however, amid a concentrated discourse surrounding the nature of what it means to exist as a durational being, also hap...