Article (PDF available)

Abstract and Figures

Widespread use of machine learning (ML) systems could result in an oppressive future of ubiquitous monitoring and behavior control that, for dialogic purposes, we call "Informania." This dystopian future stems from the inherent design of ML systems, which are built from training data rather than from explicit code. To avoid this oppressive future, we develop the concept of an emancipatory assistant (EA): an ML system that engages with human users to help them understand and enact emancipatory outcomes amidst the oppressive environment of Informania. Using emancipatory pedagogy as a kernel theory, we develop two sets of design principles, one for the near term and one for the far term. Designers optimize the EA on emancipatory outcomes for an individual user; the EA protects the user from Informania's oppression, engaging in an adversarial relationship with its oppressive ML platforms when necessary. These principles should encourage IS researchers to enlarge the range of possibilities for responding to the influx of ML systems. Given the fusion of social and technical expertise that IS research embodies, we encourage other IS researchers to theorize boldly about the long-term consequences of emerging technologies on society and potentially change their trajectory.
Content may be subject to copyright.
Avoiding an Oppressive Future of Machine Learning: A Design Theory for Emancipatory Assistants

Kane, G. C., Young, A. G., Majchrzak, A., & Ransbotham, S. (2021). Avoiding an Oppressive Future of Machine Learning: A Design Theory for Emancipatory Assistants. MIS Quarterly, 45(1), 371-396.
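The abstract's central design move, an assistant optimized on the user's own emancipatory outcomes rather than on the platform's engagement objective, can be sketched as a toy re-ranking example. All item names and scores below are hypothetical illustrations, not from the paper:

```python
# Toy sketch: a platform recommender ranks items by predicted engagement,
# while a hypothetical emancipatory assistant (EA) re-ranks the same items
# by fit with the user's own stated goals.

items = [
    {"id": "outrage_clip", "engagement": 0.95, "user_goal_fit": 0.10},
    {"id": "skill_course", "engagement": 0.40, "user_goal_fit": 0.90},
    {"id": "local_news",   "engagement": 0.55, "user_goal_fit": 0.60},
]

def platform_rank(items):
    # The platform optimizes its own objective: predicted engagement.
    return sorted(items, key=lambda i: i["engagement"], reverse=True)

def ea_rank(items):
    # The EA optimizes the user's objective: fit with user-stated goals.
    return sorted(items, key=lambda i: i["user_goal_fit"], reverse=True)

print([i["id"] for i in platform_rank(items)])  # engagement-first ordering
print([i["id"] for i in ea_rank(items)])        # user-goal-first ordering
```

The two rankings diverge whenever engagement and user goals conflict, which is exactly the condition under which the paper's EA must act adversarially toward the platform.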








 ! 

Informania"
#

!$
$!


 !
%


& 


'
$
(

)
%

“A problem well stated is half
solved” *+,
+% 

 


(


*$


- ."/
 .

Emancipation 
#
!oppression#
%
 !


- .$

)
0%

1
#

2

3$
&

)
!(

450"
%
6
451" 
$!

452"4
$
7
453".

$


)design theory 




)$


-

$
$
6


$




8

9

-:
-:$



0"+
%

1":

$
2"+


3"/
$
0"


:-:




:-:


:-:
9

1"
%
;
:-:;
8;
 
:-:;
 
:-:
%
2"
%

:-:
9
:-:

:-:

:-:


3"



-:

-:


-:


4%

-<-:
-
-:
+



%!


 
$
-:
%9(
=(
)>.2!
.$3?"
"@@@#
@
"
"@@$@@
A!:(%$!:B,!C1D104%
$
$!International Journal of Information
Management
... For IPs, this critique is essential for understanding how these environments are designed and controlled by technology corporations with profit-driven agendas, reinforcing capitalist ideologies, consumerism, and digital surveillance (Zuboff, 2023). It also highlights how dominant paradigms in IS literature influence design philosophies and behavioral manipulations of users (Cecez-Kecmanovic, 2007; Kane et al., 2021; Myers & Young, 1997). ...
... IPs are inherently a type of social platform that requires algorithms to function. Algorithms, used to personalize user experiences, often exhibit bias stemming from non-representative training datasets and homogeneous design processes of proprietary algorithms, which perpetuates discrimination and inequality (Kane et al., 2021; Noble, 2018; O'Neil, 2017). Additionally, algorithms (e.g., content curation algorithms) can manipulate user experiences, which can lastingly alter social beliefs and influence real-world behavior (Kane et al., 2021; Li et al., 2022). ...
Article
Full-text available
Immersive platforms, accessed via VR and AR interfaces, offer a profound digital experience but raise significant privacy and ethical concerns. Current systems collect extensive user data; this enables manipulative advertising and behavior control, fosters self-censorship, and diminishes authentic self-expression – especially among marginalized communities. The unchecked development of immersive platforms threatens to create oppressive environments due to power imbalances and a lack of regulation. This paper addresses the urgent need for emancipatory design guidelines to prevent such outcomes. By integrating critical theory with Participatory Design, we propose Emancipatory Participatory Immersive Platforms. These platforms involve users in their design and dismantle oppressive structures through four emancipatory tools (i.e., agency, dialogue, inclusion, and rationality), substantiated by eight actionable design principles. Our framework aims to empower users, promote inclusivity, and ensure rational engagement – and thereby foster immersive platforms that maintain functional benefits while enhancing freedom and equity.
... The effectiveness of these systems depends not only on technical performance metrics but also on their ability to support positive psychological outcomes and meaningful human connections. This raises questions about how platforms can foster healthy human-AI relationships while ensuring that automation enhances rather than diminishes human flourishing [54,89]. ...
... The potential for GenAI to tailor digital environments to individual psychological profiles gives rise to questions about transparency, agency, and the likelihood of manipulative personalization strategies [29,57,69]. As GenAI-driven systems curate content without explicit user control, individuals may experience a gradual erosion of autonomy in decision-making, as digital environments nudge behaviors in ways that are not always transparent [54,60]. These challenges are further compounded by risks to privacy, since GenAI-driven personalization depends on large-scale behavioral profiling for optimizing engagement strategies [40]. ...
Article
Full-text available
The emergence of generative artificial intelligence (GenAI) represents a watershed moment in the evolution of digital platforms. The capabilities of this AI technology go beyond traditional AI systems, enabling the autonomous generation of novel outcomes with significant implications for platform value creation, architecture, governance, and stakeholder interactions. We develop an integrative conceptual framework that identifies four key mechanisms through which GenAI transforms digital platforms: intelligent automation, democratization, hyper-personalization, and collaborative innovation. Through intelligent automation, GenAI transforms boundary resources from passive interfaces into active, intelligent mediators of value creation. Democratization systematically lowers barriers to platform participation. Hyper-personalization enables dynamic, individual-level adaptation of platform content. Collaborative innovation transforms platform innovation by making GenAI an active participant in human-AI value co-creation. We use this framework to situate the papers in the special issue and develop a research agenda that explores the transformative impact of GenAI on platform stakeholder relationships.
... There are several strands of research in IR-adjacent fields that explicate prefigurative politics (Asad, 2019) and ground research in humanistic (Bardzell and Bardzell, 2015, 2016a; Werthner et al., 2024), anti-oppressive and emancipatory (Smyth and Dimond, 2014; Bardzell and Bardzell, 2016a; Kane et al., 2021; Monroe-White, 2021; Saxena et al., 2023), feminist (Wajcman, 2004, 2010; Bardzell, 2010; Bardzell and Bardzell, 2016b; Bardzell, 2018; D'Ignazio and Klein, 2020), queer (Light, 2011; Klipphahn-Karge et al., 2024; Guyan, 2022), postcolonial and decolonial (Irani et al., 2010; Philip et al., 2012; Dourish and Mainwaring, 2012; Sun, 2013; Ali, 2014, 2016; Akama et al., 2016; Irani and Silberman, 2016; Adams, 2021; Mohamed et al., 2020), anti-racist (Abebe et al., 2022), anti-casteist (Kalyanakrishnan et al., 2018; Sambasivan et al., 2021; Vaghela et al., 2022a,b; Shubham, 2022; Kanjilal, 2023), anti-ableist (Williams et al., 2021; Sum et al., 2024), anti-fascist (McQuillan, 2022), abolitionist (Benjamin, 2019; Barabas, 2020; Earl, 2021; Jones and Melo, 2021; Williams and Haring, 2023), post-capitalistic (Feltwell et al., 2018; Browne and Green, 2022), and anarchist (Keyes et al., 2019; Linehan and Kirman, 2014; Asad et al., 2017) epistemologies. Reviewing this full body of literature is out of scope for this work, but we briefly present a sample to draw from and motivate new IR research agendas for sociotechnical change. ...
... Bardzell and Bardzell (2016a) define humanistic HCI as "any HCI research or practice that deploys humanistic epistemologies (e.g., theories and conceptual systems) and methodologies (e.g., critical analysis of designs, processes, and implementations; historical genealogies; conceptual analysis; emancipatory criticism) in service of HCI processes, theories, methods, agenda-setting, and practices", and include emancipatory HCI as an aspiration of humanistic HCI. Kane et al. (2021) propose to incorporate emancipatory pedagogy (Freire, 2020) that does "not advocate the oppressed simply rise and overthrow their oppressors. Instead, [. . . ...
Article
Full-text available
Information retrieval (IR) technologies and research are undergoing transformative changes. It is our perspective that the community should accept this opportunity to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build inspired by diverse explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies needs to be explicitly articulated, and we need to develop theories of change in context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as social and political sciences, and should be co-developed with cross-disciplinary scholars, legal and policy experts, civil rights and social justice activists, and artists, among others. In this perspective paper, we motivate why the community must consider this radical shift in how we do research and what we work on, and sketch a path forward towards this transformation.
... Existing ML research has started to explore the interplay between humans and AI in a variety of contexts. This includes the adoption of ML (e.g., Boyacı et al., 2024; Chen et al., 2024), human-AI collaborations (e.g., Raisch & Fomina, 2024; Schuetz & Venkatesh, 2020), and emerging social challenges such as the ethical implications of AI (e.g., Kane et al., 2021; Marjanovic et al., 2022). Recently, scholars have emphasized the significant reciprocal interplay between humans and AI (e.g., Leavitt et al., 2021; Murray et al., 2021; Raisch & Fomina, 2024; Sturm et al., 2021a). ...
Conference Paper
Habits play a critical role in decision-making, affecting both our personal and professional lives. While good habits improve efficiency and decision quality, bad habits can reinforce biases and hinder adaptability. Artificial intelligence (AI) offers the potential of habit engineering: it can promote beneficial behavioral changes. This study examines how AI influences the habits of decision-makers and how organizations can use AI to foster effective routines. Through a case study at Allianz Global Investors (AGI), we analyze the impact of AI-driven advice on traders’ decision-making processes in a high-stakes environment. The results highlight AI’s ability to disrupt established habits, encouraging greater reflection and improved performance in trading decisions. This research contributes to the literature on habit engineering and organizational AI by highlighting the need for careful monitoring and evaluation of AI systems to balance efficiency and adaptability, ensuring that habits remain aligned with organizational goals and long-term success.
... For trainees, the boundaries of professionalism are less established, digitally and socially [5,6]. Residency programs increasingly grapple with questions like: ...
... In regions with limited technological infrastructure and low literacy, some recent research has raised concerns that AI could worsen the existing digital divide (Kane et al., 2021; Wessel et al., 2023). However, AI holds the potential to advance FI through strategically accessible and inclusive approaches tailored to the unique needs of these communities (Kshetri, 2021). ...
Article
Full-text available
The global commitment to advancing financial inclusion (FI) relies on technology to connect underserved communities with the formal financial sector. Existing traditional technologies have made some progress, but they often fail to adapt to the unique needs of these populations. Although artificial intelligence (AI) offers new possibilities to meet these limitations, its rapid advancement has outpaced the development of integrative studies, leaving its potential impact on financial access by the underserved largely unexplored. Existing research provides fragmented insights into how different technological interventions impact diverse groups. We develop a segment-outcome-focused analysis to structure a scoping review of 95 information systems studies to assess current technological advances for FI and explore how AI can address their limitations, including limited digital literacy, uneven and costly infrastructure, and service personalization. We then outline future research directions and conclude with theoretical contributions and practical implications, emphasizing the potential of AI solutions to advance FI.
Article
The rapidly growing amount and importance of data across all aspects of organisations and society have led to urgent calls for better, more comprehensive and applicable approaches to data governance. One key driver of this is the use of data in machine learning systems, which hold the promise of producing much social and economic good, but which simultaneously raise significant concerns. Calls for data governance thus typically have an ethical component. This can refer to specific ethical values that data governance is meant to preserve, most obviously in the area of privacy and data protection. More broadly, responsible data governance is seen as a condition of the development and use of ethical and trustworthy digital technologies. This conceptual paper takes the already existing ethical aspect of the data governance discourse as a point of departure and argues that ethics should play a more central role in data governance. Drawing on Habermas’s Theory of Communicative Action and using the example of neuro data, this paper argues that data shapes and is shaped by discourses. Data is at the core of our shared ontological positions and influences what we believe to be real and thus also what it means to be ethical. These insights can be used to develop guidance for the further development of responsible data governance.
Article
Purpose: The metaverse, through artificial intelligence (AI) systems and capabilities, allows considerable data analysis in the workplace, largely exceeding traditional people-analytics data collection. While concerns over surveillance and issues associated with privacy and discrimination have been raised, the metaverse has the potential to offer opportunities associated with fairer assessment of employee performance and enhancement of the employee experience, especially with respect to gender and race, inclusiveness, and workplace equity. This paper aims to shed light on the diversity, equity and inclusion (DEI) opportunities and challenges of implementing the metaverse in the workplace, and the role played by AI.
Design/methodology/approach: This paper draws on our past research on AI and the metaverse and provides insights addressed to human resources (HR) scholars and practitioners.
Findings: Our analysis of AI applications to the metaverse in the workplace sheds light on the ambivalent role of, and potential trade-offs that may arise with, this emerging technology. If used responsibly, the metaverse can enable positive changes concerning the future of work, which can promote DEI. Yet the same technology can lead to negative DEI outcomes if implementations occur quickly, unsupervised, and with a sole focus on efficiencies and productivity (i.e., collecting metrics, models, etc.).
Practical implications: Managers and HR leaders should try to be first movers rather than followers when deciding if (or, better, when) to implement metaverse capabilities in their organizations. How the metaverse is implemented will be strategic. This involves choices concerning the degree of invasive/pervasive monitoring (internal) as well as make-or-buy decisions concerning outsourcing AI capabilities.
Originality/value: Our paper is one of few (to date) that discusses AI capabilities in the metaverse at the intersection of the HR and information systems (IS) literatures and that specifically tackles DEI issues. We also take a "balanced" approach when evaluating the metaverse from a DEI perspective. While most studies either demonize or celebrate these technologies from an ethical and DEI standpoint, we aim to highlight challenges and opportunities, with the goal of guiding scholars and practitioners toward a responsible use of the metaverse in organizations.
Article
Websites are one of the most prolific forms of information and communication technology (ICT). Yet, their potential to contribute to the development of oppressed groups and marginalized populations remains largely understudied. The ICT for development literature often uses simplistic views of identity that ignore the diversity of human experiences arising from the interconnectedness of ethnicity, sexual orientation, and gender. These aspects of identity are interconnected, and ignoring their intersection disregards compounding inequalities contended by sexual and gender minorities who are also ethnic minorities. Motivated by this research gap, this study analyzes 33 Two-Spirit web pages to further our understanding of how websites can contribute to the development of Two-Spirit communities. Two-Spirit is a modern term that recognizes expressions of gender fluidity and sexual diversity advanced by Indigenous Peoples in North America for centuries. An interpretive approach revealed that websites can support the development of Two-Spirit communities through four affordances: open spaces for activism, life stories telling, initiative impact reporting, and enabling access to resources. This paper contributes to the ICT literature on development by explaining how these affordances coalesce into a visibilization mechanism whereby ICT mediates unconstrained expressions of the Two-Spirit identity free of prejudices and colonial distortion.
Chapter
Full-text available
Explanations, a form of post-hoc interpretability, play an instrumental role in making systems accessible as AI continues to proliferate in complex and sensitive sociotechnical systems. In this paper, we introduce Human-centered Explainable AI (HCXAI) as an approach that puts the human at the center of technology design. It develops a holistic understanding of "who" the human is by considering the interplay of values, interpersonal dynamics, and the socially situated nature of AI systems. In particular, we advocate for a reflective sociotechnical approach. We illustrate HCXAI through a case study of an explanation system for non-technical end-users that shows how technical advancements and the understanding of human factors co-evolve. Building on the case study, we lay out open research questions pertaining to further refining our understanding of "who" the human is and extending beyond 1-to-1 human-computer interactions. Finally, we propose that a reflective HCXAI paradigm, mediated through the perspective of Critical Technical Practice and supplemented with strategies from HCI such as value-sensitive design and participatory design, not only helps us understand our intellectual blind spots but can also open up new design and research spaces.
Conference Paper
Full-text available
Emancipation is a key concept in critical theories. Prior work suggests that emancipation is a complex and multi-faceted concept. Many conceptualizations of emancipation exist, and emancipation is defined in different ways. Existing empirical studies mainly focus on one or few components of emancipation. To have an integrated understanding of emancipation, we review the literature on emancipation in information systems (IS), with a view toward developing a typology of components of emancipation in the IS field. The typology of emancipation components consists of four components: freedom to act, freedom to express, freedom to belong and freedom to think. These components relate to the concepts of agency, dialogue, inclusion, and rationality, respectively.
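The four-component typology described in the abstract above amounts to a simple lookup structure; the sketch below merely restates it as one (labels taken from the abstract, not code from the paper):

```python
# The typology's four emancipation components and the concepts
# they respectively relate to, as a lookup table.
EMANCIPATION_COMPONENTS = {
    "freedom to act":     "agency",
    "freedom to express": "dialogue",
    "freedom to belong":  "inclusion",
    "freedom to think":   "rationality",
}

for component, concept in EMANCIPATION_COMPONENTS.items():
    print(f"{component} -> {concept}")
```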
Article
Full-text available
This article is based on a panel discussion at the 2019 International Conference on Information Systems (ICIS) held in Munich, Germany. This panel was concerned with the ethics and politics of engagement with Indigenous peoples in information systems research. As members of a research team that have been studying the use of social media by Indigenous peoples to collaborate and further their cause, we have recently become aware of some of the unintended consequences of IS research. Since others could easily appropriate our findings for political purposes, we believe that we as IS researchers need to become more sensitive to the ways in which we study and engage with “the Other.” Hence, the panelists discussed and debated the nature and extent of a researcher’s engagement when studying Indigenous peoples and their uses of IS/IT. The panel, chaired by Michael Myers, included three panelists who have been studying Indigenous peoples’ use of social media (Liz Davidson, Amber Young and Hameed Chughtai), and one panelist who is an Indigenous scholar studying Indigenous theories in IS (Pitso Tsibolane).
Article
Full-text available
Cognitive computing systems (CCS) are a new class of computing systems that implement more human-like cognitive abilities. CCS are not a typical technological advancement but an unprecedented advance toward human-like systems fueled by artificial intelligence. Such systems can adapt to situations, perceive their environments, and interact with humans and other technologies. Due to these properties, CCS are already disrupting established industries, such as retail, insurance and healthcare. As we make the case in this paper, the increasingly human-like capabilities of CCS challenge five fundamental assumptions that we—as IS researchers—have held about how users interact with IT artifacts. These assumptions pertain to the (i) direction of the user-artifact relationship, (ii) artifact’s awareness of its environment, (iii) functional transparency, (iv) reliability, and (v) user’s awareness of artifact use. We argue that the disruption of these five assumptions limit the applicability of our extant body of knowledge to CCS. Consequently, CCS present a unique opportunity for novel theory development and associated contributions. We argue that IS is well positioned to take this opportunity and present research questions that, if answered, will lead to interesting, influential and original theories.
Article
Full-text available
Peer argumentation, especially the discussion of contrary points of view, has experimentally been found to be effective in promoting science content knowledge, but how this occurs is still unknown. The available explanations are insufficient because they do not account for the evidence showing that gains in content knowledge are unrelated to group outcomes and are still evident weeks after collaboration occurs. The aim of this article is to contribute to the understanding of the relationship between peer-group argumentation and science content knowledge learning. A total of 187 students (aged 10 to 11 years) from 8 classrooms participated in the study, with the classrooms spread across 8 public schools, all located in Santiago, Chile. We conducted a quasi-experimental study randomized at school-class level. Four teachers delivered science lessons following a teaching program especially developed to foster dialogic and argumentative classroom talk (the intervention group), and four teachers delivered lessons in their usual way (the control group). Students were assessed individually using both immediate and delayed post-test measures of science content knowledge. The results showed no differences in pre- to post-immediate content knowledge between conditions. However, the intervention-group students increased their content knowledge significantly more than the control-group students between post-immediate and post-delayed tests. Hierarchical multiple regression analyses showed that, after controlling for school-level variables, time working in groups, and scores in the pretest, the formulation of counter arguments, although occurring in both groups, significantly predicted delayed gains in the intervention group only. Moreover, the frequency of counterarguments heard by students during the group work did not make a difference. 
A focal analysis of one small group's work suggests that teachers' instructional practice may have contributed to the consolidation of students' knowledge at the individual level in a post-collaborative phase.
Book
Why an organization's response to digital disruption should focus on people and processes and not necessarily on technology. Digital technologies are disrupting organizations of every size and shape, leaving managers scrambling to find a technology fix that will help their organizations compete. This book offers managers and business leaders a guide for surviving digital disruptions—but it is not a book about technology. It is about the organizational changes required to harness the power of technology. The authors argue that digital disruption is primarily about people and that effective digital transformation involves changes to organizational dynamics and how work gets done. A focus only on selecting and implementing the right digital technologies is not likely to lead to success. The best way to respond to digital disruption is by changing the company culture to be more agile, risk tolerant, and experimental. The authors draw on four years of research, conducted in partnership with MIT Sloan Management Review and Deloitte, surveying more than 16,000 people and conducting interviews with managers at such companies as Walmart, Google, and Salesforce. They introduce the concept of digital maturity—the ability to take advantage of opportunities offered by the new technology—and address the specifics of digital transformation, including cultivating a digital environment, enabling intentional collaboration, and fostering an experimental mindset. Every organization needs to understand its “digital DNA” in order to stop “doing digital” and start “being digital.” Digital disruption won't end anytime soon; the average worker will probably experience numerous waves of disruption during the course of a career. The insights offered by The Technology Fallacy will hold true through them all. A book in the Management on the Cutting Edge series, published in cooperation with MIT Sloan Management Review.