LAW, TECHNOLOGY AND HUMANS
https://lthj.qut.edu.au/
Volume 1 (1) 2019 https://doi.org/10.5204/lthj.v1.i1.1386
This work is licensed under a Creative Commons Attribution 4.0 International Licence. As an open access journal, articles are free to use with proper attribution. ISSN: 2652-4074 (Online)
© The Author/s 2019

Book Review

Virginia Eubanks (2018) Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: Picador, St Martin’s Press

Faith Gordon*
Law, Technology and Humans Book Review Editor
ISBN: 9781250215789
It is difficult to imagine a future in which big data and artificial intelligence will not be prominent. Since the dawn of the digital age, the decision-making processes in employment, finance, politics, health and human services have ‘undergone revolutionary change’.1 Decisions on offering opportunities such as employment, insurance and government services were previously made by humans; today, however, ‘much of that decision-making power’ over outcomes that significantly shape lives has been handed to ‘sophisticated machines’.2 As decision-makers become more dependent on big data analytics, people’s privacy and freedom often become more threatened. Additionally, this dependency amplifies the ‘digital divide’3 or, as Eubanks calls it, ‘the digital poorhouse’, yet little or no political debate or discussion is taking place about the negative consequences.4

There is a growing body of research and literature on automated decision-making, algorithmic accountability and the processes evolving as new forms of ‘digital discrimination’.5 The inequality in these decision-making processes and their discriminatory outcomes are the core issues explored in Virginia Eubanks’s groundbreaking new book, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. Eubanks builds on her previous work on technology and social justice by exploring the challenges and consequences of decision-makers giving machines the power to make decisions about human needs, public benefits and state interventions in the United States (US). The author argues that the same digital tracking that operates in ‘low rights environments’, in which ‘poor and working class’ individuals have ‘few expectations of political accountability and transparency’, is ‘used on everyone’ (p. 12).

Eubanks takes the reader on a journey, which is reflected in the structure of the book and in the pages of firsthand accounts of the realities of the ‘digital poorhouse’.6 The book is structured neatly into an introduction, five chapters and a conclusion, and draws on both documentary analysis and qualitative interviews.
* Dr Faith Gordon, Lecturer in Criminology at Monash University, Australia

1. Eubanks, Automating Inequality, 3.
2. Eubanks, Automating Inequality, 3.
3. Andrejevic, “The Big Data Divide.”
4. Eubanks, Automating Inequality, 12.
5. Eubanks, Automating Inequality, 231.
6. Eubanks, Automating Inequality, 12.
The introduction outlines the author’s first direct contact with ‘organizations working closely with families most directly impacted by the systems’ she explores.7 Eubanks introduces the context of the issue: ‘complex integrated databases collect … personal information’, predictive models and algorithms ‘tag’ people as ‘risky’ and ‘problematic’, and law enforcement and other agencies can then conduct surveillance.8 She sensitively documents the lived experiences of those who ‘are targeted by new tools of digital poverty management and face life-threatening consequences as a result’.9 The data analysis indicates that ‘these new systems have the most destructive and deadly effects in low-income communities of color’ and ‘impact poor and working-class people across the color line’.10 Eubanks asserts that such technological innovations ‘hide poverty from the professional middle-class public and give the nation the ethical distance it needs to make inhuman choices’.11

Such ‘inhuman choices’ deeply affect the lives of the most marginalised individuals, and many poignant examples feature in Chapters One to Five of the book, particularly from Indiana, California and Pennsylvania. Chapter One describes the introduction of a computerised registry of ‘every welfare, Medicaid, and food stamp recipient in the state’ and how the Temporary Assistance for Needy Families program ‘put into effect a wide array of sanctions to penalize noncompliance’.12 The chapter details how these new measures are ‘an expansion and continuation of moralistic and punitive poverty management strategies’ that have been in operation in the US since the 1820s. Eubanks convincingly argues that this use of technology, combined with restrictive ‘new rules’, has effectively ‘reversed the gains of the welfare rights movement’.13 While the rhetoric of ‘the digital poorhouse’ is effectively ‘framed as a way to rationalize and streamline benefits’, the author argues that in reality ‘the real goal’ has always been ‘to profile, police, and punish the poor’.14
Chapter Two introduces Sophie Stipes, the person to whom the book is dedicated. Sophie was born in 2002 and was diagnosed shortly afterwards with a range of disabling conditions, which ‘without Medicaid … would have been financially overwhelming’.15 At age six, Sophie received a letter stating that she would no longer receive Medicaid as she had ‘failed to cooperate’ in establishing her eligibility for the program.16 The delay in receiving the letter left Sophie’s family with ‘three days left’ to contact the agency, though they had not been informed about the necessary paperwork and were not given time to address or challenge the decision.17 Interviews with advocates refer to Sophie’s case as ‘particularly appalling’.18 The traditional model of face-to-face service was replaced by a digitally automated system with no designated human caseworker, leaving individuals like Sophie vulnerable to further isolation and hardship. Eubanks details how media pressure and a meeting with Lawren Mills, Governor Daniels’s policy director for human services, led to the restoration of Sophie’s Medicaid.19 However, as the chapters that follow illustrate, Sophie’s case and the Stipes family’s experience are not isolated, because these human-designed systems are established with the goal of reducing benefit claims wherever possible.
Chapters Three to Five draw on several people’s lived experiences to demonstrate that traditional judgemental and stereotypical assumptions about the working class and those experiencing poverty remain key drivers in the human design of algorithmic decision-making aids. For example, the Allegheny Family Screening Tool predicts which children may need the intervention of social services agencies. The system assesses the risk of a child being abused or neglected, placing them on a scale of 1 to 20. The ‘predictive’ risk assessment combines information on schooling, criminal justice, health, family services and other data from children’s lives, aggregated into a multi-agency database.20 The resulting score is then used when deciding whether to intervene. Eubanks’s analysis exposes how the tool draws on data about past events, including the childhood of a parent or grandparent, to inform decisions about future surveillance and interventions in the lives of families.21
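To make the kind of aggregation Eubanks describes more concrete, the toy sketch below scores a hypothetical family record on a 1 to 20 scale from cross-system indicators. It is purely illustrative: the indicator names, weights and threshold are invented for this review and are not drawn from the actual Allegheny Family Screening Tool, whose model and validation are far more complex.

```python
# Purely illustrative toy, NOT the actual Allegheny Family Screening Tool:
# the indicators, weights and threshold below are invented for this review.
from dataclasses import dataclass


@dataclass
class FamilyRecord:
    prior_welfare_referrals: int      # public benefits history
    parent_juvenile_justice: bool     # the parent's own childhood system contact
    school_absence_reports: int       # education data
    behavioural_health_contacts: int  # county mental health records


def screening_score(record: FamilyRecord) -> int:
    """Collapse heterogeneous administrative records into a single 1-20 score."""
    raw = (
        2 * record.prior_welfare_referrals
        + (5 if record.parent_juvenile_justice else 0)
        + record.school_absence_reports
        + 2 * record.behavioural_health_contacts
    )
    return max(1, min(20, 1 + raw))  # clamp to the 1-20 screening scale


if __name__ == "__main__":
    family = FamilyRecord(
        prior_welfare_referrals=3,
        parent_juvenile_justice=True,
        school_absence_reports=4,
        behavioural_health_contacts=1,
    )
    score = screening_score(family)
    # A call-screening worker might be prompted to investigate above some threshold.
    print(score, "-> investigate" if score >= 15 else "-> screen out")
```

The structural concern the sketch makes visible is the one Eubanks presses: a parent’s own childhood contact with public systems can raise the score attached to their children today.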
7. Eubanks, Automating Inequality, 231.
8. Eubanks, Automating Inequality, 11.
9. Eubanks, Automating Inequality, 11.
10. Eubanks, Automating Inequality, 12.
11. Eubanks, Automating Inequality, 13.
12. Eubanks, Automating Inequality, 36.
13. Eubanks, Automating Inequality, 36.
14. Eubanks, Automating Inequality, 38.
15. Eubanks, Automating Inequality, 41.
16. Eubanks, Automating Inequality, 42.
17. Eubanks, Automating Inequality, 43.
18. Eubanks, Automating Inequality, 45.
19. Eubanks, Automating Inequality, 45.
20. Eubanks, Automating Inequality, 127.
21. Eubanks, Automating Inequality, 152.
In addition to punishing contemporary families for circumstances experienced intergenerationally, the author examines the alarming levels of racially biased data within the system, which reflect traditionally racist attitudes towards African Americans and working-class individuals.22 Contrary to the common assumption that algorithmic decision-making aids produce fairer outcomes, the main body of the book demonstrates how systems designed, built and programmed by humans often entrench permanent, fixed notions rooted in traditional biases.
Eubanks exposes how traditional bias and the endemic targeting of people of a particular class and/or race unfold in new algorithmic decision-making aids; however, a minor aspect that requires further attention is the use of technology for social good and social change (e.g., the work of McNutt).23 While the author references social movements and campaigns such as Occupy Wall Street and Black Lives Matter,24 more could have been drawn out in the analysis, particularly on technology’s potential as a tool of resistance.
Eubanks’s conclusion calls for a ‘dismantling’ of ‘the digital poorhouse’ and acknowledges that altering ‘cultural understandings and political responses to poverty will be difficult, abiding work’; however, technological development surges on and will not wait for our new stories and visions to emerge.25 Eubanks’s key argument is that ‘we need to develop basic technological design principles to minimize harm’.26 The author poses a series of poignant questions and offers a first draft of a Hippocratic Oath for data scientists, systems engineers, hackers and administrative officials that centres on the ‘non-harm’ principle.27 Eubanks asserts that ‘our ethical evolution still lags behind our technological revolutions’ and that the ‘digital revolution has warped to fit the shape’ of what remains an ‘inequitable world’ because society has failed to address the ‘crucial challenges’ of ‘dismantling racism and ending poverty’.28
Automating Inequality makes a timely and significant contribution to the social justice field. The book exposes the often hidden yet extremely damaging social implications of technological developments in the US. It has clear international appeal, and its journalistic writing style ensures it is accessible to a diverse readership. Automating Inequality will be of interest to academics, practitioners, policymakers and students in fields such as law, socio-legal studies, social policy, political science, data science, digital society and sociology, as well as anyone with an interest in social justice, equality and the need for change.

The case studies in this book provide policymakers, and those working in the fields of technology and justice, with timely reminders of the social effects of these technological developments. For scholars of human rights, technology and social policy, and for advocates of social justice, the takeaway message is that in this ever-evolving world of technology, the experiences of individuals such as Sophie are stark reminders that when systems ‘prioritize efficiency over empathy, tasks over families’, they end up ‘degrad[ing] the extraordinary value of … emotional connection and commitments to each other’.29 Eubanks argues that such systems ‘are not designed to provide care or secure social justice’; rather, they are ‘built to manage the symptoms of austerity’.30 While technological developments are typically framed as innovative, Eubanks convincingly argues in the afterword that unless we design our digital, political and legal systems ‘from an unshakeable belief that everyone deserves … basic human rights’, we are inevitably ‘doomed to repeat the oppressive patterns of the past’.31 Automating Inequality presents a clear call to action, and the previously marginalised voices that run throughout this book deserve to be heard, listened to and acted upon.
22. Eubanks, Automating Inequality, 153.
23. McNutt, Technology, Activism, and Social Justice in a Digital Age.
24. Eubanks, Automating Inequality, 214-215.
25. Eubanks, Automating Inequality, 211.
26. Eubanks, Automating Inequality, 211.
27. Eubanks, Automating Inequality, 212-213.
28. Eubanks, Automating Inequality, 217.
29. Eubanks, Automating Inequality, 224.
30. Eubanks, Automating Inequality, 224-225.
31. Eubanks, Automating Inequality, 225.

Bibliography
Andrejevic, Mark. “The Big Data Divide.” International Journal of Communication 8, no. 17 (2014): 1673-1689.
Eubanks, Virginia. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: Picador, St Martin’s Press, 2018.
McNutt, John G. Technology, Activism, and Social Justice in a Digital Age. Oxon: Oxford University Press, 2018.