"The Computer Said So": On the Ethics, Effectiveness, and Cultural Techniques of Predictive Policing

Tero Karppi
University of Toronto Mississauga, Canada

Social Media + Society, April-June 2018: 1-9
© The Author(s) 2018
DOI: 10.1177/2056305118768296
Special Issue: Ethics as Method

Corresponding Author: Tero Karppi, University of Toronto Mississauga, Mississauga, Ontario L5L 1C6, Canada. Email: tero.karppi@utoronto.ca

Abstract
In this paper, I use The New York Times debate titled "Can predictive policing be ethical and effective?" to examine what are seen as the key operations of predictive policing and what impacts they might have on our current culture and society. The debate is substantially focused on the ethics and effectiveness of the computational aspects of predictive policing, including the use of data and algorithms to predict individual behavior or to identify hot spots where crimes might happen. The debate illustrates both the benefits and the problems of using these techniques and takes a strong stance in favor of human control and governance of predictive policing. Cultural techniques is used in the paper as a framework to discuss human agency and to further elaborate how predictive policing is based on operations that have ethical, epistemological, and social consequences.

Keywords
predictive policing, cultural techniques, data, agency, ethics
In 2015, The New York Times published a debate titled, “Can
predictive policing be ethical and effective?” Predictive
policing, as defined by RAND Corporation’s (Perry, McInnis,
Price, Smith, & Hollywood, 2013, p. xiii) report, is “the
application of analytical techniques—particularly quantita-
tive techniques—to identify likely targets for police inter-
vention and prevent crime or solve past crimes by making
statistical predictions.” The use of predictive policing in law
enforcement is part of a longer historical shift from reactive
policing to proactive policing. As described by Sarah Brayne (2017, p. 989), since the 1980s, police work has moved from chasing suspects of crimes toward proactively policing particular hot spots where crimes may occur. According to Brayne (2017), "[p]redictive policing is an extension of hot spots policing, made possible by the temporal density of big data (i.e., high-frequency observations)" (p. 989). As explained by PredPol, one of the service providers, predictive policing uses algorithmic analysis of "criminal behavior patterns" together with three data points, "past type, place and time of crime," to provide law enforcement agencies with "customized crime predictions for the places and times that crimes are most likely to occur" (PredPol, 2016).
The New York Times debate does not focus on particular
predictive policing technologies or service providers but dis-
cusses predictive policing on a general level. The debate
shows how predictive policing has been praised for its effec-
tiveness, while its ethics have been questioned. As Kami Chavis Simmons (2015), a former federal prosecutor and a professor and director of the criminal justice program at Wake Forest University School of Law, puts it in the debate, "some researchers claim [that predictive policing software] is better than human analysts at determining where crime is likely to occur and, thus, preventing it." The caveat, for example, she (Chavis, 2015) continues, is that
Algorithms cannot inform police about the underlying conditions
in the “hot spot” that contribute to crime in that area. A computer
cannot tell the police department that rival gangs are about to engage in a violent confrontation about territory, but a local
resident could.
While some of the commentators in the debate such as Sean
Young (2015), executive director of the University of
California (UC) Institute for Prediction Technology, believe
in the power of algorithmic sorting of data and are “develop-
ing a platform to analyze this social data and spit out real-
time predictions about future events and help public health
officials prevent disease outbreaks, stop violent crime and
reduce poverty,” others like Faiza Patel (2015), the
co-director of the Liberty and National Security Program at
the Brennan Center for Justice at New York University Law
School, are more skeptical about the effects, arguing that
“algorithms used to predict the location of crime will only be
as good as the information that is fed into them.” In her com-
mentary, Seeta Peña Gangadharan (2015), an assistant pro-
fessor in the Department of Media and Communications at
The London School of Economics and Political Science,
even states that “these technologies are fundamentally dis-
criminatory.” These examples illustrate how the ethics and
effectiveness of predictive policing is located somewhere in
between new technologies and techniques that go into their
use and consequently are produced by them.
Using The New York Times debate as an inspiration, I
examine the intertwining of the ethics and effectiveness of
predictive policing through a framework German media the-
ory has called kulturtechniken.1 Kulturtechniken, or cultural
techniques, as the concept is now being translated, offers a
way to understand how “media and things” provide “their
own rules of execution” and “structure our possibilities in
praxis" (L. C. Young, 2015). By looking at The New York Times debate, I ask what techniques are important to predictive policing. Like the debate, my approach does not focus on particular technologies or service providers but addresses the general principles according to which predictive policing is seen to operate. Following Bernhard
Siegert (2008), I want to highlight “the operations or opera-
tive sequences that historically and logically precede” and
help us to generate and understand “media concepts” such as
predictive policing (p. 29). The debate itself consists of six
brief commentaries. The commentators Kami Chavis
Simmons, Aderson B. Francois, Seeta Peña Gangadharan,
Andrew Papachristos, Faiza Patel, and Sean Young are
experts in their own fields, and each of them represents a
slightly different approach to predictive policing. Importantly,
I will focus on what the experts say about predictive policing
in this particular debate instead of looking at the corpus of
their actual research on the matter. What I aim to achieve by
this delimitation is a description of the fundamental recur-
sive operations of predictive policing as they are described
for the general audience and of the distinctions or effects that
these operations are seen to produce in our current culture
and society. In practice, each of the sections of this article
begins with an introduction to one of these operations as they
are mentioned in The New York Times debate. What I aim to
bring forward from these operations is what Siegert (2015, p. 3) calls the technical a priori: the technological conditions
which determine and define how the ethics and effectiveness
of predictive policing become possible in the first place.
Cultural Techniques or Ethics as
Method, Method as Ethics
In her article, "Method as ethic, ethic as method," Annette Markham (2006, pp. 50-51) describes method as a way of
getting something done and ethics as a dialogical process of
making sense of the world. Ethics, then, is neither a pre-established value system nor a set of beliefs about what is right or wrong but an active process of becoming. Getting something
done always involves ethical choices, and ethical choices are
made to get things done. Although Markham focuses specifically on researchers and their ethics and methods, I want to suggest that her understanding of ethics as method and
method as ethics could be extended to the non-human realm
as well.
If we remove both words “ethics” and “method,” we are
left with the idea of getting things done as a process that
makes sense of the world and vice versa. I posit both getting
things done and making sense of the world as particular
techniques. Techniques, here, are not merely different
human skills, aptitudes, and abilities but operations that are
co-constituted with non-human objects and the materialities
that enable them (Cf. Winthrop-Young, 2013, p. 6). This
take on techniques is specific to an approach called cultural
techniques—an approach that has been developed in
German media theory and only recently introduced to the
Anglo-American research tradition.
Now cultural techniques as a notion has different mean-
ings connected to different historical periods (see Siegert,
2013; Winthrop-Young, 2013), but here, I am relying on
Geoffrey Winthrop-Young’s (2015) recent formulation of
cultural techniques as
operative chains composed of actors and technological objects
that produce cultural orders and constructs which are
subsequently installed as the basis of these operations. At the
core of this [. . .] meaning of cultural techniques is the notion
that fairly simple operations coalesce into complex entities
which are then viewed as the agents or sources running these
operations. (p. 458)
Winthrop-Young exemplifies this statement by noting that people drew pictures before the concept of an image was conceived and played music before the concept of tonality existed. Concepts do not have ontological priority; rather, they emerge from practice (Winthrop-Young, 2015, p. 459).
In The New York Times debate, predictive policing is defined as a technique made of an incongruous mixture of systems, technologies, events, and actors. It is seen as a computationally based practice that has the capacity to prevent crime but also carries the capacity to bring about new unjust conditions. One interesting feature of the debate is that many of the critics of predictive policing focus on its computational techniques: the practices of using computational systems to make future predictions and of acting upon those predictions in the real. For example, Gangadharan (2015) in her commentary warns us that predictive policing can lead to ceding judgment to predictive software programs and to justifying actions because "the computer said so." In these critiques,
predictive policing is shaping our relations to the surround-
ing world autonomously and without human control.
Jussi Parikka (2015a, p. 30) notes that cultural techniques
have epistemological, organizational, and social conse-
quences. One particular technique of predictive policing is
focused on flagging hot spots based on the probability of
crime. As I will discuss in the following sections, these pre-
dictions are seen to have the capacity to change not only how
we understand certain locations as high-crime areas but the
flagging of particular areas as hot spots also changes our ori-
entation and, for example, law enforcement officers’ orienta-
tion toward those areas and people living there (Chavis,
2015). This exemplifies, what Siegert (2013, p. 57) means
when he notes that human is always a product of cultural
techniques and has no ontological priority. Cultural tech-
niques are means by which humans come to understand
themselves and are formed as subjects (Parikka, 2015a,
p. 30). This is the point where the approach of cultural tech-
niques not only bears similarities with many traditional takes
on posthumanism but also differs from them. Winthrop-
Young (2014, p. 386) points out that posthumanism argues
for the hybridization of the human through and with technol-
ogy, but for cultural techniques, the human never existed
without the non-human.
The idea that we are entering an era where things happen because the computer said so is very much in line with how the cultural techniques approach frames human–technology relations. For the cultural techniques approach, human mastery over technology is problematic. Cornelia
Vismann (2013), for example, maintains that
Cultural techniques define the agency of media and things. If
media theory were, or had, a grammar, that agency would find
its expression in objects claiming the grammatical subject
position and cultural techniques standing in for verbs.
Grammatical persons (and human beings alike) would then
assume the place assigned for objects in a given sentence. (p. 83)
What Vismann (2013, pp. 83-85) suggests is a change of
perspective where the sovereign subject or the autonomously
acting person is disempowered and situated within objects
and technologies, which have agency and as such determine
the possible courses of action. Her example, adapted from Wolfgang Schadewaldt, is of the bather and the spear thrower, who denote two different ways to think about agency.
According to Vismann (2013), “the bather is carried by the
water” and “the trajectory of bathing remains bound to the
medium of water,” while the hand that throws the spear only
initiates a process with a goal in mind (p. 85). However,
while the example of water as a medium is more in line with
cultural techniques, the difference in these two processes is
actually only superficial because all “things and media will
always function as carriers of operations, irrespective of
what is at stake in their execution” (Vismann, 2013, p. 86).
Like the water, the spear, too, determines an act, and the
operation produces “a subject, who will then claim mastery
over both the tool and the action associated with it” (Vismann,
2013, p. 83).
This is where the effectiveness of predictive policing
meets its ethics. Predictive policing, when executed, carries
specific operations. These operations are tied, for example,
to the computational power of processing large datasets and
analyzing information from various sources ranging from
criminal records to weather reports and in some cases even
social media data. Computers and algorithms are more effec-
tive than human beings in making connections between dif-
ferent data sets. If we look at the current techniques of
analyzing data in predictive policing from the perspective of
effectiveness, we can quite easily accept Friedrich Kittler’s
(2017) notion that, we are in a situation where “computers
and cybernetics” are becoming “increasingly necessary” and
the humans are becoming "increasingly random" (p. 13). If, for the sake of effectiveness, we are moving to computational cultural techniques, then the question is how ethical a machine or a computational technique can be. What is the role assigned to humans? In other words, how does ethics function through cultural techniques?
Location and Data
Geoffrey Winthrop-Young (2013) proposes that “Rather than
tackling the question ‘What are cultural techniques?’, it makes
more sense to ask: ‘What is the question to which the concept
of cultural techniques claims to be an answer?’” Following
this line of questioning, let us begin to explore The New York Times debate and see what particular cultural techniques are highlighted in the context of ethics and effectiveness and what questions they aim to answer. One particular question related to the effectiveness of predictive policing is: where will a crime potentially take place?
According to Josh Scannell (2015), many of the current
predictive policing technologies are based on disease or
weather prediction models: these technologies focus on loca-
tions and operate preventively by, for example, increasing
law enforcement presences in the areas of potential crime. In
other words, the mapping of a location is based on techniques
of data analytics and data visualization, which is then used to
control certain areas. PredPol (2016), for example, “enables
law enforcement to enhance and better direct the patrol
resources they have" by automatically generating 500 × 500 feet areas on the map as potential places where crime can occur "for each shift for each day." Here, we are reminded
of Siegert’s (2013) notion that spaces never “exist indepen-
dently of cultural techniques of spatial control” (p. 57). The
locations predictive policing distinguishes are very particular
because their borders are defined through predictive model-
ing rather than zip codes, street corners, or physical boundar-
ies of the place. These areas can be, for example, areas of
high crime and near-by areas which are determined as being
at risk for “subsequent crime” (Brayne, 2017, p. 989).
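To make this operation concrete, the spatial technique PredPol describes can be approximated as binning past crime events into a fixed grid and flagging the densest cells. The following is a minimal sketch under that assumption; the coordinate handling, counting, and ranking are illustrative stand-ins, not PredPol's actual (proprietary) model:

```python
from collections import Counter

CELL_FT = 500  # PredPol reports generating 500 x 500 feet boxes

def hot_spots(events, top_n=10):
    """Flag the grid cells containing the most recorded crime events.

    events: iterable of (x_ft, y_ft) positions of past crimes on a
    local planar projection (a simplifying assumption; deployed
    systems work with geographic coordinates and temporal weighting).
    Returns the top_n cells as ((cell_x, cell_y), count) pairs.
    """
    cells = Counter((int(x // CELL_FT), int(y // CELL_FT)) for x, y in events)
    return cells.most_common(top_n)

# Three events cluster in one cell; one falls elsewhere.
print(hot_spots([(120, 80), (300, 420), (450, 90), (2600, 2700)]))
```

Even this toy version makes Brayne's point visible: the cells the function flags are entirely a function of which past events were recorded and fed in.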
Spatial control, and the cultural techniques that are able to distinguish particular locations based on their crime potential, also receive significant attention in The New York Times debate. Many of the experts, including Patel, Gangadharan, and Chavis Simmons, raise the concern that the creation of "hot spots" is never an entirely objective process.
In his commentary, Aderson B. Francois (2015), professor of
law at Howard University and supervising attorney of its Civil
Rights Clinic, notes that “predictive models carry an inherent
risk of racial profiling." These authors point out that even if
predictive policing is seen as only mapping locations, it has
effects on the individuals, who are either passing through or
trying to live their lives in those locations. To be more explicit,
Patel (2015) argues that even if predictive policing technolo-
gies were using only locational crime data, they would hardly
be neutral: “If an algorithm is populated primarily with crimes
committed by black people, it will spit out results that send
police to black neighborhoods.”
What these authors maintain is that the hot spots are
actively produced by particular techniques and these tech-
niques have their own biases and problems. Francois (2015)
in his New York Times commentary points to the history of
crime prediction techniques:
Using data to forecast crime is not a new concept; in essence, it
relies on the ancient truism that criminals are creatures of habit
and, as such, will tend to commit the same crimes, at the same
times, in the same places.
Models that would predict the likelihood of a parolee's reof-
fending have been developed since the late 1920s, and data-
based risk assessment has been part of the justice system for
the past three decades (Brayne, 2017, p. 981). "What is new about modern predictive policing," Francois (2015) continues, "is the promise that, using so-called big data, law enforcement can use sophisticated objective statistical and geospatial models to forecast crime levels, thereby making decisions about when, where, and how to intervene." Gangadharan (2015) calls this promise a "myopic view of technology's role in public safety. A misguided belief in the objectivity and neutrality of predictive technologies permeates every step of the process." Francois, Patel, Chavis, and Gangadharan all
use racial profiling as an example of how the objectivity of
statistical and geospatial models is a fallacy. Francois (2015) points out that racial profiling was a problem in crime forecasting long before the computational analysis of data. Gangadharan (2015) maintains that
over-reporting of crime incidence by law enforcement in
minority communities—whether due to implicit or explicit
racial bias—will literally color the computational analysis,
designating these areas a “hot spot” for more policing, which
will probably lead to increased incarceration rates there.
If we think about predictive policing as a cultural tech-
nique rather than technology, our register immediately
moves from neutrality or objectivity toward acknowledging
that these systems are constantly drawing distinctions and
shaping our culture in their own ways. Techniques and
technologies are never neutral; they, for example, establish
and maintain “power-laden boundaries across race, gender,
and class" and have differing consequences for different people based not only on identity-related factors but also, for example, on access to the technologies and techniques in question (Noble & Roberts, 2016, pp. 2, 9). Drawing distinctions is not only a question of how techniques or technologies are being used by people. As Liam Cole Young (2015) notes,
“[t]he study of cultural techniques holds that media and
things are not simply passive objects to be activated at the
whim of an intentional (human) subject. Media and things
supply their own rules of execution.” He is here referring to
Siegert’s famous example of the door as a cultural tech-
nique. For Siegert, the door has particular affordances, which
limit its potential usage. One can, for example, open or close
the door, and when opened one can move from one space to
another. To rephrase, big data techniques used in predictive policing can be seen as solving some ethical problems, but while doing so, they open others. For example, in the con-
text of locational data, Brayne (2017, p. 997) points out that
big data crime prediction can eliminate some problems such
as the human tendency of utilizing stereotypes regarding
class or race when facing incomplete information of a poten-
tial suspect. However, referring to previous research, she (Brayne, 2017, pp. 997-998) is also quick to note that big data techniques only appear neutral on the surface and that crime data are often incomplete: crimes taking place in public places are overrepresented because they are more likely to be reported, crimes are not reported by groups and individuals who do not trust the police, and police attention is often focused on particular neighborhoods at a disproportionately high rate. "These social dynamics inform the historical crime data that are fed into the predictive policing algorithm," she (Brayne, 2017, p. 998) notes.
Many of the critics of predictive policing in The New York
Times debate seem to suggest that we cannot really know
where a crime will take place because the data are biased.
They highlight that using biased data in locating hot spots
may even create “tension and further destabilizes an area
most in need of police protection” (Chavis, 2015). Could we
solve the problem by building more extensive and compre-
hensive mechanisms for data analysis, which would then bet-
ter inform us about potential crimes? What the cultural
techniques approach would argue is that data are only part of
the problem. Paraphrasing Siegert (2008), cultural techniques
never merely communicate or exchange information, but
they are acts that create “order by introducing distinctions”
(p. 35). The larger epistemological and ontological problem,
then, is related to drawing distinctions between different
areas and people living in those areas in the first place, in the
Roman Empire with a plow, today with data, and tomorrow
who knows how. Drawing a line, mapping a location, or defining a hot spot is neither neutral nor objective but an ethical decision and a political act. It marks "the distinction between
inside and outside, civilization and barbarism, an inside
domain in which the law prevails and one outside in which it
does not,” as Siegert (2013, p. 60) puts it.
Individual
In her New York Times commentary, Patel (2015) notes that
sometimes predictive policing is used not only to “forecast
where crime will likely take place” but also “who is likely to
commit a crime.” This question moves us to cultural tech-
niques, which target individuals and operate based on future
potential. Here predictive policing introduces cultural tech-
niques that no longer evaluate the human based on his or her individual character or even his or her past history but on the future capacity to act. It is the predicted future, the poten-
tial that begins to condition human agency. What is, of
course, important here is that the predicted future is achieved
through particular cultural techniques. In The New York
Times debate, the particular cultural techniques that try to
predict individual behavior are discussed by Patel (2015) and
Sean Young (2015).
The data we produce of ourselves and by ourselves through social media sites have an increasing role in calculating an individual's potential future. Andrew Guthrie Ferguson (2017, pp. 1139-1140) notes that, for example, the Chicago Police
Department studies “social networks, and even social media”
to map the relationships between gang members of the city
and to defuse retaliatory violence. Sean Captain (2015), in a
news story discussing Hitachi’s Visualization Predictive
Crime Analytics software, notes that “Social media plays a
big role in predicting crime, they [Hitachi] say, improving
accuracy by 15%.”
The importance of social media data in predictive polic-
ing is further exemplified by Sean Young (2015), who in his
New York Times commentary argues that
Just five years ago, many people thought social media was a
pointless tech fad. But social media’s use is no longer in dispute.
It allows people to connect with others, express themselves and
advertise brands, of course, but it is more than a tool for business
and self-promotion. Social media can be used to help predict and
prevent crime.
Young (2015) references the school shooting at Marysville-Pilchuck High School in 2014 and notes that information that the shooter might harm himself and others circulated on social media a month prior to the events. For him, early detection and immediate treatment of the person-at-risk might have prevented the situation. Young (2015)
notes,
As predictive technology becomes more available and reliable,
it could be used to provide immediate treatment (through a
collaboration between law enforcement and mental health
professionals) for a person-at-risk to prevent deaths, as well as
provide services and information to those in danger.
For Sean Young, the power of predictive policing culmi-
nates in the idea of recognizing and identifying risk subjects.
These techniques are of course already in use in different
spaces; Louise Amoore’s research, for example, focuses on
explicating how border control is identifying risk subjects
with big data techniques. According to Amoore (2011, p. 27),
contemporary risk calculus is not seeking causal relation-
ships between data points but is based on calculating uncer-
tainty and opening the world of probabilities. As an example,
Amoore gives us an equation: “if *** and ***, in association
with ***, then ***” and explains that
[i]n the decisions as to the association rules governing border
security analytics, the equation may read: if past travel to
Pakistan and duration of stay over three months, in association
with flight paid by a third party, then risk flag, detain.
This is what Amoore calls a data derivative.
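The rule Amoore quotes has the shape of a simple conjunction over derived attributes. As a minimal sketch of that shape, with field names that are illustrative assumptions rather than any actual border-security schema:

```python
def risk_flag(record: dict) -> bool:
    """Derive a flag from associated traits, following the rule Amoore
    quotes: past travel, duration of stay, and third-party payment in
    association trigger the flag. No causal claim is made; the rule
    only associates data points."""
    return (
        record.get("past_travel") == "Pakistan"
        and record.get("stay_months", 0) > 3
        and record.get("paid_by") == "third_party"
    )

traveler = {"past_travel": "Pakistan", "stay_months": 4, "paid_by": "third_party"}
print(risk_flag(traveler))  # True -> "risk flag, detain"
```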
According to Amoore (2011, p. 27), a data derivative is a specific form of abstraction that is deployed in contemporary
risk-based security calculations. It is a technique that specu-
lates on future value, the potential, rather than actual value.
Data derivatives are techniques of identifying potential ter-
rorists at airports (Amoore, 2011), but they are also used for
targeted marketing and identifying potential audiences
(Arvidsson, 2016). Adam Arvidsson (2016), who builds on
Amoore’s work, argues that derivatives have two fundamen-
tal characteristics: first “derivatives operate with derived
qualities: qualities that have been derived from an underly-
ing entity, or simply an ‘underlying’” and second, derivatives
are paths “projected into the future” (pp. 5-6). If we combine
these two characteristics, what is sought with derivatives is
the future value of an underlying (assets, goods, etc.). What
is emphasized by both Amoore and Arvidsson is that the pro-
cess of de-constructing the underlying into qualities, con-
stituent elements, and attributes, and then re-constructing
them into derivatives itself constructs a reality of its own
without any necessary (representational) relation to the
underlying entity. The question is no longer “who we are, nor
even on what our data says about us, but on what can be
imagined and inferred about who we might be—on our very
proclivities and potentialities” (Amoore 2011, p. 28).
According to a story in The Washington Post, the predictive policing system Beware, for example, translates data into threat scores to inform police about the situation or person in
question:
As officers respond to calls, Beware automatically runs the
address. The searches return the names of residents and scans
them against a range of publicly available data to generate a
color-coded threat level for each person or address: green,
yellow or red. (Jouvenal, 2016)
Furthermore, Justin Jouvenal (2016), the reporter behind the story on Beware, notes that
Exactly how Beware calculates threat scores is something that
its maker, Intrado, considers a trade secret, so it is unclear how
much weight is given to a misdemeanor, felony or threatening
comment on Facebook. However, the program flags issues and
provides a report to the user.
What can be imagined and inferred about people connects to another cultural technique: how these imaginings and inferred pieces of information are mediated to the people who need them. The color-coded threat score is a result of data analytics but also of techniques of mediating and visualizing informa-
tion. The color-coded threat level indicator has a precedent in
the world after 9/11. Brian Massumi (2005) has shown how
the Department of Homeland Security developed and used a
color-coded threat alert system not only to inform people after
the terrorist attacks but also to “calibrate the public’s anxiety”
(pp. 31-32). What is important here for Massumi’s argument
is the idea that the color-coded threat charts do not describe
the content of the threat to the public. Massumi calls these
color-codes signals without signification. Threat charts and
threat scores make data relations perceptible but as Munster
(2013) notes, “[t]o make something perceptible [. . .] is not
the same as perceiving something” (p. 82).
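What such a signal looks like computationally can be shown with a toy example. How Beware actually calculates its scores is, as Jouvenal notes, a trade secret, so the thresholds below are invented for illustration only:

```python
def threat_color(score: float) -> str:
    """Reduce a numeric threat score to a Beware-style color code.

    The cutoffs are illustrative assumptions. The point is what the
    output omits: the color signals a level without describing the
    content of the threat, a signal without signification.
    """
    if score < 0.33:
        return "green"
    if score < 0.66:
        return "yellow"
    return "red"

print(threat_color(0.7))  # "red", with no account of why
```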
When predictive policing systems generate a color-coded chart to visualize the threat level rather than describing how the threat is conceived, the police are—from the perspective of epistemology—controlled by the technique. Parikka (2015a) calls this metaprogramming: "coding
the humans as computational aspects of an organization”
(p. 45). He (Parikka 2015a) is interested in organizations as
"software computerized environments" where the labor is trained to follow abstracted commands and to adjust to the patterns of organizational logic (p. 45). Metaprogramming
here relies on the psychological and physiological modula-
tion of a human being through different cultural techniques.
From the perspective of metaprogramming, the police offi-
cer using predictive policing becomes one operative chain
in the process where the technology is trying to get things
done. If the identity of the risk subject or potential criminal
is based on mathematical modeling, the capacity of the
police to act is based on a color in a diagram. Human agency
and the epistemological groundings for the capacity to act
are conditioned by the color-coded threat chart and the ways in which the police are trained to use the information they receive.2
Policing
Ross Coomber, Leah Moyle, and Myesa Knox Mahoney
(2017) use the concept “symbolic policing” to describe a
form of policing that does not address the sources of crime
directly but tries to prevent crimes by signaling that the areas are under control. Predictive policing, with its methods of highlighting hot spots to position police presence, not to stop a crime when it is happening but to prevent it before it has even started, or of identifying risk subjects and intervening before they are in harm's way, seems to fit very well under this definition.
Siegert (2015, p. 13) notes that the material and the symbolic often go hand in hand with cultural techniques. Cultural tech-
niques operate with distinctions, and through distinctions,
“the symbolic is filtered out of the real” and “conversely, the
symbolic is incorporated into the real.” To illustrate this pro-
cess, Vismann (2013, p. 84) refers to the Roman Empire and
the cultural techniques of using a plow to draw a line, which
marks the limits of the city. Inside the line lies the material and symbolic regime of the human, with walls and laws,
moral codes, and market places. What is left outside is nature,
which becomes the symbol of unruliness and barbarity. The
line the technique draws both materially and symbolically
differentiates us from them (Vismann, 2013; L. C. Young,
2015.). Similarly, predictive policing operates with the logic
of pre-emption, and it shows how the symbolic (the potential
for crime) is carved out from the real (the spatio-temporal
data), and after algorithmic filtering, the symbolic (the pres-
ence of police) is incorporated into the real (the street).
The effectiveness of predictive policing here is premised
upon not only the effective use of data but also the effective
use of police resources and the physical presence of human
bodies. Interestingly, Francois (2015) in The New York Times debate asks if using law enforcement is the most ethical way to respond to predictive modeling of crimes:
the deepest flaw in the logic of predictive policing is the
assumption that, once data goes into the model, what the model
predicts is the need for policing, as opposed to the need for any
other less coercive social tools to deal with the trauma of economic
distress, family dislocation, mental illness, environmental stress
and racial discrimination that often masquerade as criminal
behavior.
Papachristos (2015) notes that in cities like Chicago, potential criminals are being identified with predictive policing systems, but instead of arresting or judging these people, more subtle ways of steering them out of harm's way are used:
Police and community members sit down at the same table with
those at risk. The police warn of legal consequences; community
and family members raise a moral and compassionate voice
against gun violence; and service providers offer access to
employment and health services.
Policing here no longer refers to the duty of a police force to enforce the law but rather to techniques of making sense of the world by other means. Papachristos (2015), for
example, suggests a victim-centered public health approach;
here techniques of predictive policing—that is, risk assess-
ments and observations—would no longer be techniques of
only law enforcement but also something social services and
community members could use.
These views reveal the complexity of predictive policing.
On one hand, the debate shows that the ethics and effectiveness could be found in the technology. For example, Sean
Young (2015) states that “Technologies, whether they be
computer models or novel medical procedures, have risks
and benefits. [. . .] We, as a society, should continue to study
these ethical questions as we implement innovation.” On the
other, what Francois’ and Papachristos’ examples above
exemplify is that to be constructive, the criticism of predic-
tive policing needs to extend from the questions of technol-
ogy, data, and algorithms to the various physical
manifestations where that data can have a role, where it can
be used, and who uses it. This is a movement from technology to techniques, a perspective through which neither the effectiveness nor the ethics of predictive policing can be predicated on, for example, issues related to big data and algorithms alone; as Francois (2015) maintains, we also need to account for the roles and cultures of law enforcement and their current policies.
Coda: Ethics
“Can predictive policing be ethical and effective?”, The New
York Times debate asks. In his answer, Papachristos (2015)
notes that “algorithms might help narrow the focus and reach
of the justice system, leading to fewer and fairer contacts
with citizens. But it cannot happen if police and prosecutors
use data without oversight or accountability." A similar view is echoed by Chavis (2015), who also warns against over-reliance on predictive policing technologies and stresses that there is always a need for human analysis. Importantly, positioning humans as ethical governors or gatekeepers of predictive policing does not answer whether predictive policing can be ethical and effective but tries to figure out how it could be both. The more routinized these techniques become and the more widespread they are in the different fields of our society, the more our perspective changes: we are no longer asking whether predictive policing should be used but where and how it could be used. Predictive policing is becoming a cultural technique in its own right, and our ethical understandings need to adapt to this new technique.
In the existing scholarship, very little has been said about
the connection between ethics and cultural techniques, perhaps because the technical a priori seems to have little room
for human-based ethics. Yet, if “every choice one makes
about how to get something done is grounded in a set of
moral principles,” as Markham (2006, p. 50) notes, then also
ethics seem to have an important role in the discussion of
cultural techniques. When predictive policing is used, it sets
particular orders and constructs into the world. These orders
and constructs are cultural techniques through which ethics
function and ethical models are invented. These orders and
constructs are what Parikka (2015b) calls a “systematic rear-
ranging” of relations of sense and sensibilities which “are not
merely anymore expressed in what is directly perceivable by
the senses” (p. 181). These orders and constructs do not
emerge from the blue sky but are part of recursive chains of
operations; movements from reactive policing to proactive
policing are tied with the development of statistical analysis,
computational big data predictions, and even data visualiza-
tion. Proactive policing and predictive analytics bring with
them the technique of identifying and targeting hot spots,
which then could be transformed into techniques of identify-
ing and targeting individuals. These techniques can be
adapted from the fields of law and security into other fields
of our culture and society as well. As optimists like Sean
Young (2015) put it, “prediction technology gives us a class
of tools that were previously only accessible by secretive
agencies like the CIA and NSA. Let’s use them.”
The New York Times debate begins from an implication
that there is a distinction between ethics and effectiveness. In
the debate, if we follow this logic, we see that the effective-
ness is often defined by computational techniques (and effec-
tiveness here simply means that these techniques produce
something in the world rather than producing, for example,
accurate results) and ethics are located in the human realm.
Interestingly, The New York Times debate does not mention
that one proposed solution for the ethical governance of
computational systems is to imagine “the construction of
ethics, as an outcome of machine learning rather than a
framework of values" (Ganesh, 2017). Specifically, computational ethical applications have been discussed in the context of autonomous weapon systems, where researchers suggest
that ethical problems could be overcome by designing an
“ethical governor,” a system where moral decision-making
becomes a function of a machine (see Arkin, Ulam, &
Duncan, 2009).3 The effectiveness of this system is based not only on the ethical constraints that are coded into the component but also on the use of data and statistics for making predictions. In principle, these techniques could be used not only in the context of military technology but also in other fields of our culture where automation and computational power have an important role and ethical governance is needed and demanded (see Arkin, Ulam, & Wagner, 2012; Böhlen & Karppi, 2017, p. 13). The ethical governor component, thus,
points toward the possibility for a computational approach to
ethics. But before these systems become implemented, let us
return to the role of the human in the debate.
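To indicate what such a component might look like in outline, here is a minimal sketch of the governor pattern: a gate that evaluates a proposed action against coded constraints before the action is enacted. The constraint set and action fields are illustrative assumptions, not Arkin, Ulam, and Duncan's actual architecture:

```python
from typing import Callable

# Each constraint inspects a proposed action and returns True if permitted.
Constraint = Callable[[dict], bool]

CONSTRAINTS: list[Constraint] = [
    lambda action: action["target_confirmed"],          # no unconfirmed targets
    lambda action: action["expected_collateral"] == 0,  # proportionality, as a toy rule
]

def ethical_governor(action: dict) -> bool:
    """Permit the proposed action only if every coded constraint holds."""
    return all(constraint(action) for constraint in CONSTRAINTS)

proposed = {"target_confirmed": True, "expected_collateral": 1}
print(ethical_governor(proposed))  # False: the response is suppressed before enactment
```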
The “study of cultural techniques raises questions about
how things and media operate,” Vismann (2013, p. 87) argues.
On one hand, the more we know about how predictive polic-
ing operates, the more the demands of ethical human gover-
nance and control of computational systems start to seem like
a paradox: the effectiveness of these operations is based on
going beyond the human threshold. As such, predictive polic-
ing highlights what Vismann (2013) calls "the vantage point
of cultural techniques,” where “the sovereign subject becomes
disempowered, and it is things that are invested with agency
instead” (p. 86). But on the other, the demands for human-
based ethical governance of these systems are also a
manifestation of how we as humans are forced to find new
roles in a culture where many fundamental parts of the society
are being reorganized through computational techniques.
Paraphrasing Bruno Latour (2009, p. 174), we “are never
faced with people on the one hand and things on the other,” but
rather we “are faced with programs of action, sections of
which are endowed to parts of humans, while other sections
are entrusted to parts of nonhumans.” For Vismann (2013), the
sovereignty of a subject is not only limited by cultural tech-
niques which “determines the scope of the subject’s field of
action” (p. 84), but they also make sovereignty possible at
least in some form (2013, p. 88). In other words, if the sover-
eign human subject has become disempowered in the field of
big data and predictive analytics, maybe the field of ethics
could be a place where we as humans can find a new meaning-
ful role.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect
to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support
for the research, authorship, and/or publication of this article: This
research was partly funded by a research grant from the Kone
Foundation.
Notes
1. Siegert (2013) and Winthrop-Young (2013) give us three his-
torical definitions of cultural techniques. The first definition
contextualizes kulturtechniken or cultural techniques with agri-
culture. Kultur or culture “is that which is ameliorated, nur-
tured, rendered habitable and, as a consequence, structurally
opposed to nature” (Winthrop-Young, 2013, p. 5). Cultural
techniques here are those which turn nature into culture. The second definition comes from the 1970s and concerns the culturalization of technology; it relates to the skills and aptitudes needed to use and understand contemporary media technologies. The
third definition of cultural techniques relates to the academic
field of Kulturwissenschaften around the 2000s and connects the
previous discussions to philosophical and anthropological
traditions.
2. Jouvenal (2016) in The Washington Post coverage notes that
the color-coded threat chart has been found problematic, and
police “is working with Intrado to turn off Beware’s color-
coded rating system and possibly the social media monitoring.”
3. Ronald Arkin, Patrick Ulam, and Brittany Duncan (2009), in particular, have envisioned an "ethical governor component" with an inbuilt "responsibility [. . .] to conduct an evaluation of the ethical appropriateness of any lethal response that has been produced by the robot architecture prior to its being enacted." One element of the system is a high-level proportionality algorithm, which, using statistical data and environment information, evaluates the likelihood of target neutralization, including the possible damage to surroundings and the number of casualties caused by the possible action (Arkin et al., 2009).
References
Amoore, L. (2011). Data derivatives: On the emergence of security
calculus for our times. Theory, Culture & Society, 28, 24–43.
Arkin, R. C., Ulam, P., & Duncan, B. (2009). An ethical governor
for constraining lethal action in an autonomous system (CSE
Technical reports. 163). Retrieved from http://digitalcommons.
unl.edu/csetechreports/163
Arkin, R. C., Ulam, P., & Wagner, A. (2012). Moral decision making in autonomous systems: Enforcement, moral emotions, dignity, trust, and deception. Proceedings of the IEEE, 100, 571–589.
Arvidsson, A. (2016). Facebook and finance: On the social logic of
the derivative. Theory, Culture & Society, 33, 3–23.
Böhlen, M., & Karppi, T. (2017). Making of robot care.
Transformations Issue, 29, 2–22.
Brayne, S. (2017). Big data surveillance: The case of policing.
American Sociological Review, 82, 977–1008.
Captain, S. (2015, September 28). Hitachi says it can predict crimes
before they happen. Fast Company. Retrieved from https://
www.fastcompany.com/3051578/elasticity/hitachi-says-it-
can-predict-crimes-before-they-happen
Chavis, K. S. (2015, November 18). Technology shouldn't replace
community resources. The New York Times. Retrieved from
http://www.nytimes.com/roomfordebate/2015/11/18/can-pre-
dictive-policing-be-ethical-and-effective
Coomber, R., Moyle, L., & Mahoney, M. K. (2017). Symbolic
policing: Situating targeted police operations/“crackdowns”
on street-level drug markets. Policing and Society. Advance
online publication. doi:10.1080/10439463.2017.1323893
Francois, A. (2015, November 18). Data is not benign. The New York
Times. Retrieved from http://www.nytimes.com/roomforde-
bate/2015/11/18/can-predictive-policing-be-ethical-and-effective
Ganesh, M. I. (2017). Entanglement: Machine learning and human
ethics in driver-less car crashes. APRJA. Retrieved from http://
www.aprja.net/entanglement-machine-learning-and-human-
ethics-in-driver-less-car-crashes/
Gangadharan, S. (2015, November 18). Predictive algorithms are
not inherently unbiased. The New York Times. Retrieved from
http://www.nytimes.com/roomfordebate/2015/11/18/can-pre-
dictive-policing-be-ethical-and-effective
Ferguson, A. G. (2017). Policing predictive policing. Washington University Law Review, 94(5), 1109–1189.
Jouvenal, J. (2016, January 10). The new way police are surveil-
ling you: Calculating your threat “score.” The Washington
Post. Retrieved from https://www.washingtonpost.com/local/
public-safety/the-new-way-police-are-surveilling-you-cal-
culating-your-threat-score/2016/01/10/e42bccac-8e15-11e5-
baf4-bdf37355da0c_story.html
Kittler, F. (2017). Real time analysis, time axis manipulation.
Cultural Politics, 13(1), 1–18.
Latour, B. (2009). Where are the missing masses? The sociology of a few mundane artifacts. In D. J. Johnson & J. M.
Wetmore (Eds.), Technology and society, building our socio-
technical future (pp. 151–180). Cambridge: The MIT Press.
Markham, A. (2006). Method as ethic, ethic as method. Journal of
Information Ethics, 15(2), 37–55.
Massumi, B. (2005). Fear (the spectrum said). Positions, 13, 31–48.
Munster, A. (2013). An aesthesia of networks. Cambridge: The
MIT Press.
Noble, S. U., & Roberts, S. T. (2016). Through Google-colored glass(es): Design, emotion, class, and wearables as commodity and control. Media Studies Publications, Paper 13. Retrieved from http://ir.lib.uwo.ca/commpub/13
Papachristos, A. (2015, November 18). Use of data can stop crime
by helping potential victims. The New York Times. Retrieved
from http://www.nytimes.com/roomfordebate/2015/11/18/
can-predictive-policing-be-ethical-and-effective
Parikka, J. (2015a). Cultural techniques of cognitive capitalism: Metaprogramming and the labour of code. Cultural Studies Review, 20(1), 30–52.
Parikka, J. (2015b). Postscript: Of disappearances and the ontology
of media. In E. Ikoniadou & S. Wilson (Eds.), Media after kit-
tler (pp. 177–190). London, England: Rowman & Littlefield.
Patel, F. (2015, November 18). Be cautious about data-driven polic-
ing. The New York Times. Retrieved from http://www.nytimes.
com/roomfordebate/2015/11/18/can-predictive-policing-be-
ethical-and-effective
Perry, W. L., McInnis, B., Price, C. C., Smith, S. C., & Hollywood, J. S. (2013). Predictive policing: The role of crime forecasting in law enforcement operations. Santa Monica, CA: RAND Corporation.
PredPol. (2016). How PredPol works: We provide guidance on where and when to patrol. Retrieved September 3, 2016, from http://www.predpol.com/how-predpol-works/
Scannell, J. (2015). What can an algorithm do? Dis Magazine.
Retrieved from http://dismagazine.com/discussion/72975/josh-
scannell-what-can-an-algorithm-do/
Siegert, B. (2008). Cacography or communication? Cultural tech-
niques in German media studies. Grey Room, 29, 26–47.
Siegert, B. (2013). Cultural techniques: Or the end of the intellec-
tual postwar era in German media theory. Theory, Culture &
Society, 30, 48–65.
Siegert, B. (2015). Cultural techniques: Grids, filters, doors, and
other articulations of the real. New York, NY: Fordham
University Press.
Vismann, C. (2013). Cultural techniques and sovereignty. Theory, Culture & Society, 30(6), 83–93.
Winthrop-Young, G. (2013). Cultural techniques: Preliminary
remarks. Theory, Culture & Society, 30(6), 3–19.
Winthrop-Young, G. (2014). The Kultur of cultural techniques:
Conceptual inertia and the parasitic materialities of ontologiza-
tion. Cultural Politics, 10, 376–388.
Winthrop-Young, G. (2015). Discourse, media, cultural techniques:
The complexity of Kittler. MLN, 130, 447–465.
Young, L. C. (2015). Cultural techniques and logistical media: Tuning German and Anglo-American media studies. M/C Journal, 18(2). Retrieved from http://journal.media-culture.org.au/index.php/mcjournal/article/viewArticle/961
Young, S. (2015, November 18). Social media will help prevent
crime. The New York Times. Retrieved from http://www.
nytimes.com/roomfordebate/2015/11/18/can-predictive-polic-
ing-be-ethical-and-effective
Author Biography
Tero Karppi is assistant professor in the Institute of Communication,
Culture, Information and Technology at the University of Toronto
Mississauga. His research uses critical and non-human based
approaches to examine social media sites and the modes of con-
nectivity these platforms establish. His work has been published in
journals such as Theory, Culture, & Society, International Journal
of Cultural Studies, and Fibreculture. He is a co-editor of Affective
Capitalism special issue for the ephemera journal (2016).