“The Computer Said So”: On the Ethics, Effectiveness, and Cultural Techniques of Predictive Policing

Author: Tero Karppi, University of Toronto Mississauga, Canada (tero.karppi@utoronto.ca)

Abstract

In this paper, I use The New York Times’ debate titled, “Can predictive policing be ethical and effective?” to examine what are seen as the key operations of predictive policing and what impacts they might have in our current culture and society. The debate is substantially focused on the ethics and effectiveness of the computational aspects of predictive policing, including the use of data and algorithms to predict individual behaviour or to identify hot spots where crimes might happen. The debate illustrates both the benefits and the problems of using these techniques, and takes a strong stance in favor of human control and governance over predictive policing. The concept of cultural techniques is used in the paper as a framework to discuss human agency and to further elaborate how predictive policing is based on operations that have ethical, epistemological, and social consequences.
Keywords: predictive policing, cultural techniques, data, agency, ethics

Social Media + Society, April-June 2018: 1–9
© The Author(s) 2018
DOI: https://doi.org/10.1177/2056305118768296
journals.sagepub.com/home/sms
Special Issue: Ethics as Method

Creative Commons Non Commercial CC BY-NC: This article is distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 License (http://www.creativecommons.org/licenses/by-nc/4.0/), which permits non-commercial use, reproduction, and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access pages (https://us.sagepub.com/en-us/nam/open-access-at-sage).

In 2015, The New York Times published a debate titled, “Can
predictive policing be ethical and effective?” Predictive
policing, as defined by RAND Corporation’s (Perry, McInnis,
Price, Smith, & Hollywood, 2013, p. xiii) report, is “the
application of analytical techniques—particularly quantita-
tive techniques—to identify likely targets for police inter-
vention and prevent crime or solve past crimes by making
statistical predictions.” The use of predictive policing in law
enforcement is part of a longer historical shift from reactive
policing to proactive policing. As described by Sarah Brayne
(2017, p. 989), since the 1980s, police work has moved
from chasing crime suspects to the proactive policing of
particular hot spots where crimes may occur. According to
Brayne (2017), “[p]redictive policing is an extension of hot
spots policing, made possible by the temporal density of big
data (i.e., high-frequency observations)” (p. 989). As
explained by PredPol, one of the service providers, predic-
tive policing uses algorithmic analysis of “criminal behavior
patterns” together with three data points, “past type, place
and time of crime” to provide “law enforcement agency with
customized crime predictions for the places and times that
crimes are most likely to occur” (Predpol 2016).
The New York Times debate does not focus on particular
predictive policing technologies or service providers but dis-
cusses predictive policing on a general level. The debate
shows how predictive policing has been praised for its effec-
tiveness, while its ethicality has been criticized. As Kami
Chavis Simmons (2015), a former federal prosecutor and a
professor and the director of the criminal justice program at
Wake Forest University School of Law, puts it in the debate,
“some researchers claim [that predictive policing software]
is better than human analysts at determining where crime is
likely to occur and, thus, preventing it.” The caveat, she
(Chavis, 2015) continues, is that, for example,
Algorithms cannot inform police about the underlying conditions
in the “hot spot” that contribute to crime in that area. A computer
cannot tell the police department that rival gangs are about
to engage in a violent confrontation about territory, but a local
resident could.
While some of the commentators in the debate such as Sean
Young (2015), executive director of the University of
California (UC) Institute for Prediction Technology, believe
in the power of algorithmic sorting of data and are “develop-
ing a platform to analyze this social data and spit out real-
time predictions about future events and help public health
officials prevent disease outbreaks, stop violent crime and
reduce poverty,” others like Faiza Patel (2015), the
co-director of the Liberty and National Security Program at
the Brennan Center for Justice at New York University Law
School, are more skeptical about the effects, arguing that
“algorithms used to predict the location of crime will only be
as good as the information that is fed into them.” In her com-
mentary, Seeta Peña Gangadharan (2015), an assistant pro-
fessor in the Department of Media and Communications at
The London School of Economics and Political Science,
even states that “these technologies are fundamentally dis-
criminatory.” These examples illustrate how the ethics and
effectiveness of predictive policing are located somewhere
between new technologies and the techniques that go into their
use and are consequently produced by them.
Using The New York Times debate as an inspiration, I
examine the intertwining of the ethics and effectiveness of
predictive policing through a framework German media the-
ory has called kulturtechniken.1 Kulturtechniken, or cultural
techniques, as the concept is now being translated, offers a
way to understand how “media and things” provide “their
own rules of execution” and “structure our possibilities in
praxis” (L. C. Young, 2015). By looking at The New York
Times debate, I ask what techniques are important to pre-
dictive policing. Like the debate, my approach does not focus
on particular technologies or service providers but speaks
about the general principles according to which
predictive policing is seen to operate. Following Bernhard
Siegert (2008), I want to highlight “the operations or opera-
tive sequences that historically and logically precede” and
help us to generate and understand “media concepts” such as
predictive policing (p. 29). The debate itself consists of six
brief commentaries. The commentators Kami Chavis
Simmons, Aderson B. Francois, Seeta Peña Gangadharan,
Andrew Papachristos, Faiza Patel, and Sean Young are
experts in their own fields, and each of them represents a
slightly different approach to predictive policing. Importantly,
I will focus on what the experts say about predictive policing
in this particular debate instead of looking at the corpus of
their actual research on the matter. What I aim to achieve by
this delimitation is a description of the fundamental recur-
sive operations of predictive policing as they are described
for the general audience and of the distinctions or effects that
these operations are seen to produce in our current culture
and society. In practice, each of the sections of this article
begins with an introduction to one of these operations as they
are mentioned in The New York Times debate. What I aim to
bring forward from these operations is what Siegert (2015, p.
3) calls the technical a priori: the technological conditions
which determine and define how the ethics and effectiveness
of predictive policing become possible in the first place.
Cultural Techniques or Ethics as
Method, Method as Ethics
In her article, “Ethics as method, method as ethics,” Annette
Markham (2006, pp. 50-51) describes method as a way of
getting something done and ethics as a dialogical process of
making sense of the world. Ethics then is neither a pre-estab-
lished value system nor a set of beliefs of what is right or
wrong but an active process of becoming. Getting something
done always involves ethical choices, and ethical choices are
made to get things done. Although Markham focuses specifi-
cally on researchers and their ethics and methods, I want to
suggest is that her understanding of ethics as method and
method as ethics could be extended to the non-human realm
as well.
If we remove both words “ethics” and “method,” we are
left with the idea of getting things done as a process that
makes sense of the world and vice versa. I posit both getting
things done and making sense of the world as particular
techniques. Techniques, here, are not merely different
human skills, aptitudes, and abilities but operations that are
co-constituted with non-human objects and the materialities
that enable them (Cf. Winthrop-Young, 2013, p. 6). This
take on techniques is specific to an approach called cultural
techniques—an approach that has been developed in
German media theory and only recently introduced to the
Anglo-American research tradition.
Now cultural techniques as a notion has different mean-
ings connected to different historical periods (see Siegert,
2013; Winthrop-Young, 2013), but here, I am relying on
Geoffrey Winthrop-Young’s (2015) recent formulation of
cultural techniques as
operative chains composed of actors and technological objects
that produce cultural orders and constructs which are
subsequently installed as the basis of these operations. At the
core of this [. . .] meaning of cultural techniques is the notion
that fairly simple operations coalesce into complex entities
which are then viewed as the agents or sources running these
operations. (p. 458)
Winthrop-Young exemplifies this statement by noting that
people drew pictures before the concept of the image was
conceived and played music before the concept of tonality
existed. Concepts do not have ontological priority; rather,
they emerge from practice (Winthrop-Young, 2015, p. 459).
In The New York Times debate, predictive policing is
defined as a technique made up of an incongruous mixture of systems,
technologies, events, and actors. It is seen as a computation-
ally based practice, which has the capacity to prevent crime,
but it also carries a capacity to bring along new unjust condi-
tions. One interesting feature of the debate is that many of
the critics of predictive policing focus on its computational
techniques: the practices of using computational systems to
make future predictions and of acting upon those predictions
in the real. For example, Gangadharan (2015) in her com-
mentary warns us that predictive policing can lead to ceding
judgment to predictive software programs and justifying
actions because “the computer said so.” In these critiques,
predictive policing is shaping our relations to the surround-
ing world autonomously and without human control.
Jussi Parikka (2015a, p. 30) notes that cultural techniques
have epistemological, organizational, and social conse-
quences. One particular technique of predictive policing is
focused on flagging hot spots based on the probability of
crime. As I will discuss in the following sections, these pre-
dictions are seen to have the capacity to change not only how
we understand certain locations as high-crime areas but the
flagging of particular areas as hot spots also changes our ori-
entation and, for example, law enforcement officers’ orienta-
tion toward those areas and people living there (Chavis,
2015). This exemplifies what Siegert (2013, p. 57) means
when he notes that the human is always a product of cultural
techniques and has no ontological priority. Cultural tech-
niques are means by which humans come to understand
themselves and are formed as subjects (Parikka, 2015a,
p. 30). This is the point where the approach of cultural tech-
niques not only bears similarities with many traditional takes
on posthumanism but also differs from them. Winthrop-
Young (2014, p. 386) points out that posthumanism argues
for the hybridization of the human through and with technol-
ogy, but for cultural techniques, the human never existed
without the non-human.
The idea that we are entering an era where things hap-
pen because the computer said so is very much in line with
how the cultural techniques approach frames human–tech-
nology relations. For the cultural techniques approach, the
idea of human mastery over technology is problematic. Cornelia
Vismann (2013), for example, maintains that
Cultural techniques define the agency of media and things. If
media theory were, or had, a grammar, that agency would find
its expression in objects claiming the grammatical subject
position and cultural techniques standing in for verbs.
Grammatical persons (and human beings alike) would then
assume the place assigned for objects in a given sentence. (p. 83)
What Vismann (2013, pp. 83-85) suggests is a change of
perspective where the sovereign subject or the autonomously
acting person is disempowered and situated within objects
and technologies, which have agency and as such determine
the possible courses of action. Her example, adapted from
Wolfgang Schadewaldt, is of the bather and the spear thrower,
which denote two different ways to think about agency.
According to Vismann (2013), “the bather is carried by the
water” and “the trajectory of bathing remains bound to the
medium of water,” while the hand that throws the spear only
initiates a process with a goal in mind (p. 85). However,
while the example of water as a medium is more in line with
cultural techniques, the difference in these two processes is
actually only superficial because all “things and media will
always function as carriers of operations, irrespective of
what is at stake in their execution” (Vismann, 2013, p. 86).
Like the water, the spear, too, determines an act, and the
operation produces “a subject, who will then claim mastery
over both the tool and the action associated with it” (Vismann,
2013, p. 83).
This is where the effectiveness of predictive policing
meets its ethics. Predictive policing when executed carries
specific operations. These operations are tied, for example,
to the computational power of processing large datasets and
analyzing information from various sources ranging from
criminal records to weather reports and in some cases even
social media data. Computers and algorithms are more effec-
tive than human beings in making connections between dif-
ferent data sets. If we look at the current techniques of
analyzing data in predictive policing from the perspective of
effectiveness, we can quite easily accept Friedrich Kittler’s
(2017) notion that we are in a situation where “computers
and cybernetics” are becoming “increasingly necessary” and
the humans are becoming “increasingly random” (p. 13). If
for the sake of effectiveness, we are moving to computa-
tional cultural techniques, then the question is how ethical
can a machine or a computational technique be? What is the
role assigned to humans? In other words, how does ethics
function through cultural techniques?
Location and Data
Geoffrey Winthrop-Young (2013) proposes that “Rather than
tackling the question ‘What are cultural techniques?’, it makes
more sense to ask: ‘What is the question to which the concept
of cultural techniques claims to be an answer?’” Following
this line of questioning, let us begin to explore The New York
Times debate and see what particular cultural tech-
niques are highlighted in the context of ethics and effec-
tiveness and what questions they aim to answer. One
particular question related to the effectiveness of predictive
policing is where a crime will potentially take place.
According to Josh Scannell (2015), many of the current
predictive policing technologies are based on disease or
weather prediction models: these technologies focus on loca-
tions and operate preventively by, for example, increasing
law enforcement presence in the areas of potential crime. In
other words, the mapping of a location is based on techniques
of data analytics and data visualization, which is then used to
control certain areas. PredPol (2016), for example, “enables
law enforcement to enhance and better direct the patrol
resources they have” by automatically generating 500 feet ×
500 feet areas from the map as potential places where crime
can occur “for each shift for each day”. Here, we are reminded
of Siegert’s (2013) notion that spaces never “exist indepen-
dently of cultural techniques of spatial control” (p. 57). The
locations predictive policing distinguishes are very particular
because their borders are defined through predictive model-
ing rather than zip codes, street corners, or physical boundar-
ies of the place. These areas can be, for example, areas of
high crime and nearby areas that are determined to be
at risk for “subsequent crime” (Brayne, 2017, p. 989).
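To make the distinction concrete, the following sketch bins crime events into 500 × 500 feet grid cells and lets each event raise the risk score of neighboring cells as well, echoing the idea that areas near high-crime cells are treated as at risk of “subsequent crime.” The coordinates, the spillover weight, and the scoring are my own illustrative assumptions, not any vendor’s actual parameters.

```python
# A minimal sketch of drawing predictive boundaries on a grid rather than
# along zip codes or street corners. Coordinates are in feet; the 500 ft
# cell size echoes the PredPol example above, but the spillover logic and
# weights are illustrative assumptions, not any vendor's actual model.
CELL = 500  # feet

def to_cell(x_ft, y_ft):
    """Map a point to the 500 x 500 ft grid cell that contains it."""
    return (int(x_ft // CELL), int(y_ft // CELL))

def risk_map(recent_crimes, spillover=0.5):
    """Score cells: 1 point per recent crime, plus a fraction for neighbors.

    The neighbor bonus stands in for the idea that areas *near* high-crime
    cells are treated as at risk of "subsequent crime."
    """
    risk = {}
    for x, y in recent_crimes:
        cx, cy = to_cell(x, y)
        risk[(cx, cy)] = risk.get((cx, cy), 0) + 1.0
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if (dx, dy) != (0, 0):
                    neighbor = (cx + dx, cy + dy)
                    risk[neighbor] = risk.get(neighbor, 0) + spillover
    return sorted(risk.items(), key=lambda kv: kv[1], reverse=True)

crimes = [(1250, 760), (1310, 820), (2900, 2400)]  # invented coordinates
print(risk_map(crimes)[:3])  # highest-risk cells flagged for the next shift
```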
The importance of spatial control and of cultural techniques
that are able to distinguish particular locations based on their
crime potential is also a significant focus in The New York
Times debate. Many of the experts, including Patel,
Gangadharan, and Chavis Simmons, raise the concern that the
creation of “hot spots” is never an entirely objective process.
In his commentary, Aderson B. Francois (2015), professor of
law at Howard University and supervising attorney of its Civil
Rights Clinic, notes that “predictive models carry an inherent
risk of racial profiling”. These authors point out that even if
predictive policing is seen as only mapping locations, it has
effects on the individuals, who are either passing through or
trying to live their lives in those locations. To be more explicit,
Patel (2015) argues that even if predictive policing technolo-
gies were using only locational crime data, they would hardly
be neutral: “If an algorithm is populated primarily with crimes
committed by black people, it will spit out results that send
police to black neighborhoods.”
What these authors maintain is that the hot spots are
actively produced by particular techniques and these tech-
niques have their own biases and problems. Francois (2015)
in his New York Times commentary points to the history of
crime prediction techniques:
Using data to forecast crime is not a new concept; in essence, it
relies on the ancient truism that criminals are creatures of habit
and, as such, will tend to commit the same crimes, at the same
times, in the same places.
Models that would predict the likelihood of parolees’ reof-
fending have been developed since the late 1920s, and data-
based risk assessment has been part of the justice system for
the past three decades (Brayne, 2017, p. 981). “What is new
about modern predictive policing,” Francois (2015) continues,
“is the promise that, using so-called big data, law enforce-
ment can use sophisticated objective statistical and geospa-
tial models to forecast crime levels, thereby making decisions
about when, where, and how to intervene.” Gangadharan
(2015) calls this promise a “myopic view of technology’s role
in public safety. A misguided belief in the objectivity and
neutrality of predictive technologies permeates every step of
the process.” Francois, Patel, Chavis, and Gangadharan all
use racial profiling as an example of how the objectivity of
statistical and geospatial models is a fallacy. Francois (2015)
points out that racial profiling has been a problem in crime
forecasting long before the computational analysis of data.
Gangadharan (2015) maintains that
over-reporting of crime incidence by law enforcement in
minority communities—whether due to implicit or explicit
racial bias—will literally color the computational analysis,
designating these areas a “hot spot” for more policing, which
will probably lead to increased incarceration rates there.
If we think about predictive policing as a cultural tech-
nique rather than technology, our register immediately
moves from neutrality or objectivity toward acknowledging
that these systems are constantly drawing distinctions and
shaping our culture in their own ways. Techniques and
technologies are never neutral; they, for example, establish
and maintain “power-laden boundaries across race, gender,
and class” and have differing consequences for different
people based not only on identity-related factors but also, for
example, on access to the technologies and techniques in ques-
tion (Noble & Roberts, 2016, pp. 2, 9). Drawing distinctions
is not only a question of how techniques or technologies are
being used by people. As Liam Cole Young (2015) notes,
“[t]he study of cultural techniques holds that media and
things are not simply passive objects to be activated at the
whim of an intentional (human) subject. Media and things
supply their own rules of execution.” He is here referring to
Siegert’s famous example of the door as a cultural tech-
nique. For Siegert, door has particular affordances, which
limit its potential usage. One can, for example, open or close
the door, and when opened one can move from one space to
another. To rephrase, big data techniques used in predictive
policing can be seen as solving some ethical problems, but
while doing so, they open up others. For example, in the con-
text of locational data, Brayne (2017, p. 997) points out that
big data crime prediction can eliminate some problems such
as the human tendency of utilizing stereotypes regarding
class or race when facing incomplete information of a poten-
tial suspect. However, referring to previous research, she
(Brayne 2017, pp. 997-998) is also quick to note that big
data techniques only appear neutral on the surface, and
crime data are often incomplete; for example, crimes taking
place in public spaces are overrepresented because they are more
likely to be reported, crimes go unreported by groups and indi-
viduals who do not trust the police, and police attention is
often focused on particular neighborhoods at a disproportion-
ately high rate. “These social dynamics inform the historical
crime data that are fed into the predictive policing algo-
rithm,” she (Brayne 2017, p. 998) notes.
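Brayne’s point about patrol attention shaping the recorded data can be illustrated with a deliberately simplified simulation. In the sketch below, two areas have the same underlying offence rate, but because one receives more patrol visits it accumulates more recorded crime, and a “predictive” reallocation of patrols based on those records then widens the gap. All numbers are invented, and the model is a caricature of the feedback mechanism rather than an estimate of its real magnitude.

```python
import random

random.seed(1)

# Two areas with the SAME underlying offence rate; only the share of
# patrol attention differs. Recorded crime -- the input to a predictive
# model -- nonetheless diverges, and reallocating patrols on the basis of
# those records widens the gap further. All numbers are invented.
TRUE_RATE = 0.05                      # chance an offence occurs per patrol visit
patrol_share = {"A": 0.7, "B": 0.3}   # initial allocation of 1,000 visits
recorded = {"A": 0, "B": 0}

for step in range(5):
    for area, share in patrol_share.items():
        visits = int(1000 * share)
        # Offences are only recorded where officers are present to see them.
        recorded[area] += sum(random.random() < TRUE_RATE for _ in range(visits))
    # "Predictive" reallocation: send patrols where the records point.
    total = recorded["A"] + recorded["B"]
    patrol_share = {area: recorded[area] / total for area in recorded}
    print(step, dict(recorded), {a: round(s, 2) for a, s in patrol_share.items()})
```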
Many of the critics of predictive policing in The New York
Times debate seem to suggest that we cannot really know
where a crime will take place because the data are biased.
They highlight that using biased data in locating hot spots
may even create “tension and further destabilizes an area
most in need of police protection” (Chavis, 2015). Could we
solve the problem by building more extensive and compre-
hensive mechanisms for data analysis, which would then bet-
ter inform us about potential crimes? What the cultural
techniques approach would argue is that data are only part of
the problem. Paraphrasing Siegert (2008), cultural techniques
never merely communicate or exchange information, but
they are acts that create “order by introducing distinctions”
(p. 35). The larger epistemological and ontological problem,
then, is related to drawing distinctions between different
areas and people living in those areas in the first place, in the
Roman Empire with a plow, today with data, and tomorrow
who knows how. Drawing a line, mapping a location, defin-
ing a hot spot is neither neutral nor objective but an ethical
decision and a political act. It marks “the distinction between
inside and outside, civilization and barbarism, an inside
domain in which the law prevails and one outside in which it
does not,” as Siegert (2013, p. 60) puts it.
Individual
In her New York Times commentary, Patel (2015) notes that
sometimes predictive policing is used not only to “forecast
where crime will likely take place” but also “who is likely to
commit a crime.” This question moves us to cultural tech-
niques, which target individuals and operate based on future
potential. Here predictive policing introduces cultural tech-
niques, which no longer evaluate the human based on his or
her individual character or even his or her past history but on
the future capacity to act. It is the predicted future, the poten-
tial that begins to condition human agency. What is, of
course, important here is that the predicted future is achieved
through particular cultural techniques. In The New York
Times debate, the particular cultural techniques that try to
predict individual behavior are discussed by Patel (2015) and
Sean Young (2015).
The data we produce of ourselves by ourselves through
social media sites have an increasing role in calculating
an individual’s potential future. Andrew Guthrie Ferguson (2017,
pp. 1139-1140) notes that, for example, the Chicago Police
Department studies “social networks, and even social media”
to map the relationships between gang members of the city
and to defuse retaliatory violence. Sean Captain (2015), in a
news story discussing Hitachi’s Visualization Predictive
Crime Analytics software, notes that “Social media plays a
big role in predicting crime, they [Hitachi] say, improving
accuracy by 15%.”
The importance of social media data in predictive polic-
ing is further exemplified by Sean Young (2015), who in his
New York Times commentary argues that
Just five years ago, many people thought social media was a
pointless tech fad. But social media’s use is no longer in dispute.
It allows people to connect with others, express themselves and
advertise brands, of course, but it is more than a tool for business
and self-promotion. Social media can be used to help predict and
prevent crime.
Young (2015) references the school shooting at
Marysville-Pilchuck High School in 2014 and notes that
information that the shooter might harm himself and others
had circulated on social media a month prior to the events. For
him, early detection and immediate treatment of the person-
in-risk might have prevented the situation. Young (2015)
notes,
As predictive technology becomes more available and reliable,
it could be used to provide immediate treatment (through a
collaboration between law enforcement and mental health
professionals) for a person-at-risk to prevent deaths, as well as
provide services and information to those in danger.
For Sean Young, the power of predictive policing culmi-
nates in the idea of recognizing and identifying risk subjects.
These techniques are of course already in use in different
spaces; Louise Amoore’s research, for example, focuses on
explicating how border control is identifying risk subjects
with big data techniques. According to Amoore (2011, p. 27),
contemporary risk calculus is not seeking causal relation-
ships between data points but is based on calculating uncer-
tainty and opening the world of probabilities. As an example,
Amoore gives us an equation: “if *** and ***, in association
with ***, then ***” and explains that
[i]n the decisions as to the association rules governing border
security analytics, the equation may read: if past travel to
Pakistan and duration of stay over three months, in association
with flight paid by a third party, then risk flag, detain.
This is what Amoore calls a data derivative.
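Amoore’s schematic rule can be written out as a small piece of code. The attribute names and thresholds below follow her worked example of border security analytics; everything else is an illustrative assumption rather than a description of any real system.

```python
from dataclasses import dataclass

@dataclass
class TravelRecord:
    # Attribute names follow Amoore's worked example; the record itself
    # and the rule below are illustrative, not taken from any real system.
    past_travel_to: set
    stay_months: int
    ticket_paid_by_third_party: bool

def risk_flag(record: TravelRecord) -> str:
    """A data derivative in miniature: an association rule over fragments
    of data that projects a possible future ("risk") rather than
    representing the person behind the record."""
    if ("Pakistan" in record.past_travel_to
            and record.stay_months > 3
            and record.ticket_paid_by_third_party):
        return "risk flag, detain"
    return "no flag"

print(risk_flag(TravelRecord({"Pakistan"}, 4, True)))   # risk flag, detain
print(risk_flag(TravelRecord({"Spain"}, 1, False)))     # no flag
```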
According to Amoore (2011, p. 27), data derivative is a
specific form of abstraction that is deployed in contemporary
risk-based security calculations. It is a technique that specu-
lates on future value, the potential, rather than actual value.
Data derivatives are techniques of identifying potential ter-
rorists at airports (Amoore, 2011), but they are also used for
targeted marketing and identifying potential audiences
(Arvidsson, 2016). Adam Arvidsson (2016), who builds on
Amoore’s work, argues that derivatives have two fundamen-
tal characteristics: first “derivatives operate with derived
qualities: qualities that have been derived from an underly-
ing entity, or simply an ‘underlying’” and second, derivatives
are paths “projected into the future” (pp. 5-6). If we combine
these two characteristics, what is sought with derivatives is
the future value of an underlying (assets, goods, etc.). What
is emphasized by both Amoore and Arvidsson is that the pro-
cess of de-constructing the underlying into qualities, con-
stituent elements, and attributes, and then re-constructing
them into derivatives itself constructs a reality of its own
without any necessary (representational) relation to the
underlying entity. The question is no longer “who we are, nor
even on what our data says about us, but on what can be
imagined and inferred about who we might be—on our very
proclivities and potentialities” (Amoore 2011, p. 28).
According to a story in The Washington Post, the predictive
policing system Beware, for example, translates data into
threat scores to inform police about the situation or person in
question:
As officers respond to calls, Beware automatically runs the
address. The searches return the names of residents and scans
them against a range of publicly available data to generate a
color-coded threat level for each person or address: green,
yellow or red. (Jouvenal, 2016)
Furthermore, Justin Jouvenal (2016), the reporter behind the
story on Beware, notes that
Exactly how Beware calculates threat scores is something that
its maker, Intrado, considers a trade secret, so it is unclear how
much weight is given to a misdemeanor, felony or threatening
comment on Facebook. However, the program flags issues and
provides a report to the user.
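Because Intrado treats the calculation as a trade secret, any reconstruction is speculative; the sketch below only illustrates the general operation the passage describes, collapsing heterogeneous records into a single green, yellow, or red signal. The record types, weights, and thresholds are invented.

```python
# How Beware weighs its inputs is a trade secret, so every weight and
# threshold here is invented. The sketch only illustrates the operation the
# passage describes: collapsing heterogeneous records into a single
# green/yellow/red signal handed to the responding officer.
WEIGHTS = {"misdemeanor": 1, "felony": 3, "threatening_post": 2}

def threat_color(records):
    """Map a list of record types to a color-coded threat level."""
    score = sum(WEIGHTS.get(record, 0) for record in records)
    if score >= 5:
        return "red"
    if score >= 2:
        return "yellow"
    return "green"

print(threat_color([]))                                   # green
print(threat_color(["misdemeanor", "threatening_post"]))  # yellow
print(threat_color(["felony", "felony"]))                 # red
```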
What can be imagined and inferred about people connects
to another cultural technique, which is how the imaginations
and inferred information are being mediated to people who
need it. The color-coded threat score is a result of data analyt-
ics but also techniques of mediating and visualizing informa-
tion. The color-coded threat level indicator has a precedent in
the world after 9/11. Brian Massumi (2005) has shown how
the Department of Homeland Security developed and used a
color-coded threat alert system not only to inform people after
the terrorist attacks but also to “calibrate the public’s anxiety”
(pp. 31-32). What is important here for Massumi’s argument
is the idea that the color-coded threat charts do not describe
the content of the threat to the public. Massumi calls these
color codes signals without signification. Threat charts and
threat scores make data relations perceptible but as Munster
(2013) notes, “[t]o make something perceptible [. . .] is not
the same as perceiving something” (p. 82).
When predictive policing systems generate a color-
coded chart to visualize the threat level rather than describ-
ing how the threat is conceived, the police are—from the
perspective of epistemology—controlled by the technique.
Parikka (2015a) calls this metaprogramming: “coding
the humans as computational aspects of an organization”
(p. 45). He (Parikka 2015a) is interested in organizations as
“software computerized environments” where the labor is
trained to follow abstracted commands and adjust to the
patterns of organizational logic (p. 45). Metaprogramming
here relies on the psychological and physiological modula-
tion of a human being through different cultural techniques.
From the perspective of metaprogramming, the police offi-
cer using predictive policing becomes one operative chain
in the process where the technology is trying to get things
done. If the identity of the risk subject or potential criminal
is based on mathematical modeling, the capacity of the
police to act is based on a color in a diagram. Human agency
and the epistemological groundings for the capacity to act
are conditioned by the color-coded threat chart and the
ways in which the police are trained to use the information
which they receive.2
Policing
Ross Coomber, Leah Moyle, and Myesa Knox Mahoney
(2017) use the concept “symbolic policing” to describe a
form of policing that does not address the sources of crime
directly but tries to prevent crimes by signaling that the
areas are under control. Predictive policing, with its methods of
highlighting hot spots in order to position police presence not to stop a
crime when it is happening but to prevent it before it has
even started, or of identifying risk subjects and intervening before
they are in harm’s way, seems to fit very well under this
definition.
Siegert (2015, p. 13) notes that the material and the symbolic
often go hand in hand with cultural techniques. Cultural tech-
niques operate with distinctions, and through distinctions,
“the symbolic is filtered out of the real” and “conversely, the
symbolic is incorporated into the real.” To illustrate this pro-
cess, Vismann (2013, p. 84) refers to the Roman Empire and
the cultural techniques of using a plow to draw a line, which
marks the limits of the city. Inside the lines is the material
and symbolic regime of the human, with walls and laws,
moral codes, and marketplaces. What is left outside is nature,
which becomes the symbol of unruliness and barbarity. The
line the technique draws both materially and symbolically
differentiates us from them (Vismann, 2013; L. C. Young,
2015.). Similarly, predictive policing operates with the logic
of pre-emption, and it shows how the symbolic (the potential
for crime) is carved out from the real (the spatio-temporal
data), and after algorithmic filtering, the symbolic (the pres-
ence of police) is incorporated into the real (the street).
The effectiveness of predictive policing here is premised
upon not only the effective use of data but also the effective
use of police resources and the physical presence of human
bodies. Interestingly, Francois (2015) in The New York Times
debate asks whether using law enforcement is the most ethical way
to respond to predictive modeling of crimes:
the deepest flaw in the logic of predictive policing is the
assumption that, once data goes into the model, what the model
predicts is the need for policing, as opposed to the need for any
other less coercive social tools to deal with the trauma of economic
distress, family dislocation, mental illness, environmental stress
and racial discrimination that often masquerade as criminal
behavior.
Papachristos (2015) notes that in cities like Chicago,
potential criminals are being identified with predictive polic-
ing systems, but instead of arresting or judging these people,
more subtle ways of persuading them to stay out of harm’s way are
used:
Police and community members sit down at the same table with
those at risk. The police warn of legal consequences; community
and family members raise a moral and compassionate voice
against gun violence; and service providers offer access to
employment and health services.
Policing here no longer refers to the duty of a police force
to enforce the law but rather to the techniques of making
sense of the world with other means. Papachristos (2015), for
example, suggests a victim-centered public health approach;
here techniques of predictive policing—that is, risk assess-
ments and observations—would no longer be techniques of
only law enforcement but also something social services and
community members could use.
These views reveal the complexity of predictive policing.
On one hand, the debate shows that the ethics and effective-
ness could be found in the technology. For example, Sean
Young (2015) states that “Technologies, whether they be
computer models or novel medical procedures, have risks
and benefits. [. . .] We, as a society, should continue to study
these ethical questions as we implement innovation.” On the
other hand, what Francois’ and Papachristos’ examples above
exemplify is that to be constructive, the criticism of predic-
tive policing needs to extend from the questions of technol-
ogy, data, and algorithms to the various physical
manifestations where that data can have a role, where it can
be used, and who uses it. This is a movement from technol-
ogy to techniques, a perspective through which neither the
effectiveness nor the ethics of predictive policing can be
premised solely on, for example, issues related to big data and
algorithms; rather, as Francois (2015) maintains, we need to
account for the roles and cultures of law enforcement and
their current policies.
Coda: Ethics
“Can predictive policing be ethical and effective?”, The New
York Times debate asks. In his answer, Papachristos (2015)
notes that “algorithms might help narrow the focus and reach
of the justice system, leading to fewer and fairer contacts
with citizens. But it cannot happen if police and prosecutors
use data without oversight or accountability.” A similar view
is echoed by Chavis (2015), who also warns against over-
reliance on predictive policing technologies and stresses
that there is always a need for human analysis. Importantly,
positioning humans as ethical governors or gatekeepers of
predictive policing does not answer whether predictive policing can
be ethical and effective but tries to figure out ways in which it
could be both. The more routinized these techniques become
and the more widespread they become in the different fields of
our society, the more our perspective changes; we are no
longer asking if predictive policing should be used but where
and how it could be used. Predictive policing is becoming a
cultural technique in its own right, and our ethical under-
standings need to adapt to this new technique.
In the existing scholarship, very little has been said about
the connection between ethics and cultural techniques, per-
haps because the technical a priori seems to have little room
for human-based ethics. Yet, if “every choice one makes
about how to get something done is grounded in a set of
moral principles,” as Markham (2006, p. 50) notes, then also
ethics seem to have an important role in the discussion of
cultural techniques. When predictive policing is used, it sets
particular orders and constructs into the world. These orders
and constructs are cultural techniques through which ethics
function and ethical models are invented. These orders and
constructs are what Parikka (2015b) calls a “systematic rear-
ranging” of relations of sense and sensibilities which “are not
merely anymore expressed in what is directly perceivable by
the senses” (p. 181). These orders and constructs do not
emerge out of the blue but are part of recursive chains of
operations; movements from reactive policing to proactive
policing are tied with the development of statistical analysis,
computational big data predictions, and even data visualiza-
tion. Proactive policing and predictive analytics bring with
them the technique of identifying and targeting hot spots,
which then could be transformed into techniques of identify-
ing and targeting individuals. These techniques can be
adapted from the fields of law and security into other fields
of our culture and society as well. As optimists like Sean
Young (2015) put it, “prediction technology gives us a class
of tools that were previously only accessible by secretive
agencies like the CIA and NSA. Let’s use them.”
The New York Times debate begins from an implication
that there is a distinction between ethics and effectiveness. In
the debate, if we follow this logic, we see that the effective-
ness is often defined by computational techniques (and effec-
tiveness here simply means that these techniques produce
something in the world rather than producing, for example,
accurate results) and ethics are located in the human realm.
Interestingly, The New York Times debate does not mention
that one proposed solution for the ethical governance of
computational systems is to imagine “the construction of
ethics, as an outcome of machine learning rather than a
framework of values” (Ganesh, 2017). Specifically, computa-
tional ethical applications have been discussed in the context
of autonomous weapon systems, where researchers suggest
that ethical problems could be overcome by designing an
“ethical governor,” a system where moral decision-making
becomes a function of a machine (see Arkin, Ulam, &
Duncan, 2009).3 The effectiveness of this system is based
not only on the ethical constraints that are coded into the com-
ponent but also on the use of data and statistics for making
predictions. In principle, these techniques could be used not
only in the context of military technology but also in other
fields of our culture where automation and computational
power have an important role and ethical governance is needed
and demanded (see Arkin, Ulam, & Wagner, 2012; Böhlen &
Karppi, 2017, p. 13). The ethical governor component, thus,
points toward the possibility of a computational approach to
ethics. But before these systems are implemented, let us
return to the role of the human in the debate.
The “study of cultural techniques raises questions about
how things and media operate,” Vismann (2013, p. 87) argues.
On one hand, the more we know about how predictive polic-
ing operates, the more the demands of ethical human gover-
nance and control of computational systems start to seem like
a paradox: the effectiveness of these operations is based on
going beyond the human threshold. As such, predictive polic-
ing highlights what Vismann (2013) calls “the vantage point
of cultural techniques,” where “the sovereign subject becomes
disempowered, and it is things that are invested with agency
instead” (p. 86). But on the other hand, the demands for human-
based ethical governance of these systems are also a
manifestation of how we as humans are forced to find new
roles in a culture where many fundamental parts of the society
are being reorganized through computational techniques.
Paraphrasing Bruno Latour (2009, p. 174), we “are never
faced with people on the one hand and things on the other,” but
rather we “are faced with programs of action, sections of
which are endowed to parts of humans, while other sections
are entrusted to parts of nonhumans.” For Vismann (2013), the
sovereignty of a subject is not only limited by cultural tech-
niques which “determines the scope of the subject’s field of
action” (p. 84), but they also make sovereignty possible at
least in some form (2013, p. 88). In other words, if the sover-
eign human subject has become disempowered in the field of
big data and predictive analytics, maybe the field of ethics
could be a place where we as humans can find a new meaning-
ful role.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect
to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support
for the research, authorship, and/or publication of this article: This
research was partly funded by a research grant from the Kone
Foundation.
Notes
1. Siegert (2013) and Winthrop-Young (2013) give us three his-
torical definitions of cultural techniques. The first definition
contextualizes kulturtechniken or cultural techniques with agri-
culture. Kultur or culture “is that which is ameliorated, nur-
tured, rendered habitable and, as a consequence, structurally
opposed to nature” (Winthrop-Young, 2013, p. 5). Cultural
techniques here are those which turn nature into culture. The
second definition comes from the 1970s and is about the cultural-
ization of technology; it relates to the skills and aptitudes needed
to use and understand contemporary media technologies. The
third definition of cultural techniques relates to the academic
field of Kulturwissenschaften around the 2000s and connects the
previous discussions to philosophical and anthropological
traditions.
2. Jouvenal (2016) in The Washington Post coverage notes that
the color-coded threat chart has been found problematic, and
police “is working with Intrado to turn off Beware’s color-
coded rating system and possibly the social media monitoring.”
3. Ronald Arkin, Patrick Ulam, and Brittany Duncan (2009), in
particular, have envisioned an “ethical governor component”
with an inbuilt “responsibility [. . .] to conduct an evalua-
tion of the ethical appropriateness of any lethal response
that has been produced by the robot architecture prior to its
being enacted.” One element of the system is a high-level
proportionality algorithm, which, using statistical data and
environmental information, evaluates the likelihood of target
neutralization, including the possible damage to surround-
ings and the number of casualties caused by the possible
action (Arkin et al., 2009).
References
Amoore, L. (2011). Data derivatives: On the emergence of security
calculus for our times. Theory, Culture & Society, 28, 24–43.
Arkin, R. C., Ulam, P., & Duncan, B. (2009). An ethical governor
for constraining lethal action in an autonomous system (CSE
Technical reports. 163). Retrieved from http://digitalcommons.
unl.edu/csetechreports/163
Arkin, R. C., Ulam, P., & Wagner, A. (2012). Moral decision mak-
ing in autonomous systems: Enforcement, moral emotions, dig-
nity, trust, and deception. Proceedings of the IEEE, 10, 571–89.
Arvidsson, A. (2016). Facebook and finance: On the social logic of
the derivative. Theory, Culture & Society, 33, 3–23.
Böhlen, M., & Karppi, T. (2017). Making of robot care.
Transformations Issue, 29, 2–22.
Brayne, S. (2017). Big data surveillance: The case of policing.
American Sociological Review, 82, 977–1008.
Captain, S. (2015, September 28). Hitachi says it can predict crimes
before they happen. Fast Company. Retrieved from https://
www.fastcompany.com/3051578/elasticity/hitachi-says-it-
can-predict-crimes-before-they-happen
Chavis, S. K. (2015, November 18). Technology shouldn’t replace
community resources. The New York Times. Retrieved from
http://www.nytimes.com/roomfordebate/2015/11/18/can-pre-
dictive-policing-be-ethical-and-effective
Coomber, R., Moyle, L., & Mahoney, M. K. (2017). Symbolic
policing: Situating targeted police operations/“crackdowns”
on street-level drug markets. Policing and Society. Advance
online publication. doi:10.1080/10439463.2017.1323893
Francois, A. (2015, November 18). Data is not benign. The New York
Times. Retrieved from http://www.nytimes.com/roomforde-
bate/2015/11/18/can-predictive-policing-be-ethical-and-effective
Ganesh, M. I. (2017). Entanglement: Machine learning and human
ethics in driver-less car crashes. APRJA. Retrieved from http://
www.aprja.net/entanglement-machine-learning-and-human-
ethics-in-driver-less-car-crashes/
Gangadharan, S. (2015, November 18). Predictive algorithms are
not inherently unbiased. The New York Times. Retrieved from
http://www.nytimes.com/roomfordebate/2015/11/18/can-pre-
dictive-policing-be-ethical-and-effective
Ferguson, A. G. (2017). Policing predictive policing. Washington
University Law Review, 94(5), 1109–1189.
Jouvenal, J. (2016, January 10). The new way police are surveil-
ling you: Calculating your threat “score.” The Washington
Post. Retrieved from https://www.washingtonpost.com/local/
public-safety/the-new-way-police-are-surveilling-you-cal-
culating-your-threat-score/2016/01/10/e42bccac-8e15-11e5-
baf4-bdf37355da0c_story.html
Kittler, F. (2017). Real time analysis, time axis manipulation.
Cultural Politics, 13(1), 1–18.
Latour, B. (2009). Where are the missing masses? The sociology
of a few mundane artifacts. In D. J. Johnson & J. M.
Wetmore (Eds.), Technology and society, building our socio-
technical future (pp. 151–180). Cambridge: The MIT Press.
Markham, A. (2006). Method as ethic, ethic as method. Journal of
Information Ethics, 15(2), 37–55.
Massumi, B. (2005). Fear (the spectrum said). Positions, 13, 31–48.
Munster, A. (2013). An aesthesia of networks. Cambridge: The
MIT Press.
Noble, S. U., & Roberts, S. T. (2016). Through Google-colored
glass(es): Design, emotion, class, and wearables as commodity
and control. Media Studies Publications, Paper 13. Retrieved
from http://ir.lib.uwo.ca/commpub/13
Papachristos, A. (2015, November 18). Use of data can stop crime
by helping potential victims. The New York Times. Retrieved
from http://www.nytimes.com/roomfordebate/2015/11/18/
can-predictive-policing-be-ethical-and-effective
Parikka, J. (2015a). Cultural techniques of cognitive capitalism:
Metaprogramming and the labour of code. Cultural Studies
Review, 20(1), 30–52.
Parikka, J. (2015b). Postscript: Of disappearances and the ontology
of media. In E. Ikoniadou & S. Wilson (Eds.), Media after kit-
tler (pp. 177–190). London, England: Rowman & Littlefield.
Patel, F. (2015, November 18). Be cautious about data-driven polic-
ing. The New York Times. Retrieved from http://www.nytimes.
com/roomfordebate/2015/11/18/can-predictive-policing-be-
ethical-and-effective
Perry, W. L., McInnis, B., Price, C. C., Smith, S. C., & Hollywood,
J. S. (2013). Predictive policing: The role of crime forecast-
ing in law enforcement operations. Santa Monica, CA: RAND
Corporation.
Predpol. (2016). How PredPol works: We provide guidance on
where and when to patrol. Retrieved September 3, 2016, from
http://www.predpol.com/how-predpol-works/
Scannell, J. (2015). What can an algorithm do? Dis Magazine.
Retrieved from http://dismagazine.com/discussion/72975/josh-
scannell-what-can-an-algorithm-do/
Siegert, B. (2008). Cacography or communication? Cultural tech-
niques in German media studies. Grey Room, 29, 26–47.
Siegert, B. (2013). Cultural techniques: Or the end of the intellec-
tual postwar era in German media theory. Theory, Culture &
Society, 30, 48–65.
Siegert, B. (2015). Cultural techniques: Grids, filters, doors, and
other articulations of the real. New York, NY: Fordham
University Press.
Vismann, C. (2013). Cultural techniques and sovereignty. Theory,
Culture & Society, 30(6), 83–93.
Winthrop-Young, G. (2013). Cultural techniques: Preliminary
remarks. Theory, Culture & Society, 30(6), 3–19.
Winthrop-Young, G. (2014). The Kultur of cultural techniques:
Conceptual inertia and the parasitic materialities of ontologiza-
tion. Cultural Politics, 10, 376–388.
Winthrop-Young, G. (2015). Discourse, media, cultural techniques:
The complexity of Kittler. MLN, 130, 447–465.
Young, L. C. (2015). Cultural techniques and logistical media:
Tuning German and Anglo-American media studies.
M/C Journal, 18(2). Retrieved from http://journal.
media-culture.org.au/index.php/mcjournal/article/view-
Article/961
Young, S. (2015, November 18). Social media will help prevent
crime. The New York Times. Retrieved from http://www.
nytimes.com/roomfordebate/2015/11/18/can-predictive-polic-
ing-be-ethical-and-effective
Author Biography
Tero Karppi is assistant professor in the Institute of Communication,
Culture, Information and Technology at the University of Toronto
Mississauga. His research uses critical and non-human based
approaches to examine social media sites and the modes of con-
nectivity these platforms establish. His work has been published in
journals such as Theory, Culture, & Society, International Journal
of Cultural Studies, and Fibreculture. He is a co-editor of Affective
Capitalism special issue for the ephemera journal (2016).
... Predictive policing is the application of analytical techniques to identify targets for police intervention and to prevent crime or solve past crimes through statistical predictions. Its use characterises the shift from reactive policing to proactive policing (Karppi 2018). Predictive policing uses algorithms to analyse patterns of criminal behaviour together with data on the past type, place, and time of crime, enabling law enforcement agencies to predict the places and times at which crimes are most likely to occur (Predpol 2016). ...
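To make the place-and-time logic of such systems concrete, the minimal sketch below bins historical crime records (type, place, time) into a spatial grid and ranks cells by recent incident counts. This is an illustrative toy only, not PredPol's or any vendor's actual algorithm; the cell size, time window, and scoring rule are assumptions introduced for the example.

```python
from collections import Counter
from datetime import datetime, timedelta

def rank_hot_spots(records, cell_size=0.005, window_days=180, top_n=10, now=None):
    """Rank grid cells by recent crime counts -- a naive stand-in for a predictive model.

    records: iterable of (crime_type, x, y, timestamp) tuples, i.e. the
    "past type, place and time" data points (type is ignored in this toy score).
    """
    now = now or datetime.now()
    cutoff = now - timedelta(days=window_days)
    counts = Counter()
    for _crime_type, x, y, ts in records:
        if ts < cutoff:
            continue  # only recent history feeds the score
        cell = (round(x / cell_size), round(y / cell_size))
        counts[cell] += 1
    # Cells with the most recent incidents are flagged as candidate hot spots.
    return counts.most_common(top_n)
```

Even this toy version exposes the feedback loop critics worry about: cells flagged as hot spots receive more patrols, more crime is recorded there, and their scores rise further in the next run.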
Article
Full-text available
The next step in digital transformation is the adoption of artificial intelligence (AI), even though the technology itself is still evolving. Nevertheless, debates about the advantages and disadvantages of AI are lively, and managers are on the front line of deciding how best to introduce such changes. While large corporations are already familiar with AI, at least partially and for some processes, small and medium-sized enterprises (SMEs) face a double pressure: an uneven degree of digital maturity and everyday constraints on increasing competitiveness. SMEs from Central and Eastern Europe in particular operate within a complicated framework, and adopting AI, however difficult, may be one way to make progress in terms of efficiency. Even so, the risks of such an approach must be weighed carefully. Based on a semi-structured review of the literature, this article discusses the main risks that SME managers in the Central and Eastern European region should understand with regard to AI, and the resulting challenges of adopting it in business. Final considerations and directions for future research close the paper.
Chapter
Higher education is increasingly interested in utilizing data analytics to support all aspects of university operations, including enrollment management and learning outcomes. Despite potential benefits to improve results and resource efficiency, the use of student information and the creation of predictive models is a potential minefield which could undermine larger higher educational missions tied to civic responsibility and social mobility. Questions remain as to the impacts of predictive modeling on underrepresented communities like students of color and differently abled students. Emerging research on similar fields of analytics, including predictive policing, provides a window into the ethical considerations that must be made to use data analytics responsibly. This chapter uses the construct of social responsibility to propose a process model for the responsible use of data analytics in colleges and universities derived from Carroll's Pyramid of Corporate Social Responsibility.
Chapter
The pandemic escalated the need to adopt technology for human security and public service. Technological integration and digital transformation are central to strategies for recovering and reconstructing civic society post-pandemic across the globe, especially in the domains of healthcare, education, surveillance, and governance. Artificial intelligence (AI) is seen as benefiting society by building and assisting critical socio-technical systems. Automated decision-making through algorithms is widely debated for its limitations in tackling biases and its inability to discourage unintended consequences. Moreover, AI learns patterns from data, which by nature is biased due to existing socio-economic complexities. When implemented and integrated with social systems, the pervasive application of AI poses socio-ethical challenges such as the institutionalization of discrimination, biased decision-making, intrusiveness, low accountability, and mistrust. The threats and vulnerabilities imposed on human communities, such as natural disasters, health pandemics, and economic uncertainty, make the adoption of AI applications inevitable; such applications should mitigate socio-ethical challenges and adhere to human security principles. Current data protection laws seem insufficient to protect human rights in these scenarios. The literature advocates transparency, explainability, and auditability of AI models, but these alone may not lead to accountability and fairness. Embedding these socio-technical systems in broader institutional frameworks of regulation and governance can balance the risks without compromising the benefits of technological innovation. The socio-economic context in which an AI model is deployed requires responses that are local and context specific, and it requires an AI governance framework that is comprehensive and prevention-oriented while protecting and empowering human value and dignity. This chapter provides commentary on the social, ethical, and technical issues that AI can impose, along with various aspects that need to be considered when governing AI. Finally, an AI governance framework based on socio-administrative principles is proposed to help mitigate, manage, and govern threats to humans and uphold human security.
Chapter
Self-exciting spatio-temporal point processes offer a flexible class of models that have found success in a range of applications. They involve a triggering effect that accounts for the clustering patterns observed in many natural and sociological applications. In this work, we focus on the key step of inferring or designing the form of the triggering function. In the inference setting, we use a nonparametric approach to fit a process to a range of datasets arising in criminology. By analysing this public domain data we find that the inferred trigger shape varies across different categories of crime. Motivated by these observations, and also by hypotheses from the criminology literature, we then propose a variation on the classical Epidemic-Type Aftershock Sequences trigger, which we call the Delayed Response trigger. After calibrating both parametric models, we show that Delayed Response is comparable with Epidemic-Type Aftershock Sequences in terms of predicting future events, and additionally provides an estimate of the time lag before the risk of triggering is maximised.
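As a rough illustration of the triggering mechanism described above (a sketch under simplifying assumptions, not the chapter's Delayed Response model), a self-exciting point process can be specified through a conditional intensity in which each past event adds an exponentially decaying bump of risk; the parameters mu, alpha, and beta below are purely illustrative.

```python
import math

def conditional_intensity(t, event_times, mu=0.2, alpha=0.5, beta=1.0):
    """Hawkes-style intensity: background rate mu plus a decaying contribution
    alpha * beta * exp(-beta * lag) from every event that occurred before t."""
    excitation = sum(
        alpha * beta * math.exp(-beta * (t - s))
        for s in event_times
        if s < t
    )
    return mu + excitation

# Risk shortly after a burst of events is elevated relative to the background rate,
# then decays back toward mu as the triggering effect wears off.
history = [1.0, 1.2, 1.3]
print(conditional_intensity(1.5, history))   # well above mu: recent triggering
print(conditional_intensity(20.0, history))  # approximately mu: triggering has decayed
```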
Article
Full-text available
The policing of local drug markets in England often takes the form of specific, high-profile crackdown operations which are themselves mostly a generic, periodic response to particular criminality. Drawing on Innes’ (2004) concept of ‘control signals’ and Edelman’s (1985) notion of ‘symbolic policy’, we argue that ‘symbolic policing’ relates to activity that is principally about achieving symbolic aims – ‘being seen to be doing something’ – rather than preventing or solving crime. This article, focusing on police crackdown operations against heroin and crack cocaine ‘dealers’ in three English urban areas, considers the meanings of such operations and how they work, and suggests that, in relation to local suppliers, they may in fact have counterproductive enforcement outcomes whilst still achieving symbolic objectives. It is concluded that generic crackdown operations at the level of local drug markets are unhelpfully insensitive to local conditions and that, in certain circumstances, they can be antithetical to more considered enforcement and public health aims.
Article
Full-text available
This paper is based on driver-less car technology as currently being developed by Google and Tesla, two companies that amplify their work in the media. More specifically, I focus on the moment of real and imagined crashes involving driver-less cars, and argue that the narrative of ‘ethics of driver-less cars’ indicates a shift in the construction of ethics, as an outcome of machine learning rather than a framework of values. Through applications of the ‘Trolley Problem’, among other tests, ethics has been transformed into a valuation based on processing of big data. Thus ethics-as-software enables what I refer to as big data-driven accountability. In this formulation, ‘accountability’ is distinguished from ‘responsibility’; responsibility implies intentionality and can only be assigned to humans, whereas accountability includes a wide net of actors and interactions (in Simon). ‘Transparency’ is one of the more established, widely acknowledged mechanisms for accountability; based on the belief that seeing into a system delivers the truth of that system and thereby a means to govern it. There are however limitations to this mechanism in the context of algorithmic transparency (Ananny and Crawford).
Article
Full-text available
The health industry is investing in robotics because it has the potential to optimize workflows and reduce the workloads of healthcare professionals. However, these optimizations come at a cost. By looking at three different robot systems and their underlying control architectures, this paper will describe some of the dynamics generated by the migration of computational logic developed for industrial robot systems to the healthcare domain. We combine a reading of robot control systems with perspectives from cultural techniques to uncover dynamics that neither approach can detect independently.
Article
Theories of Kulturtechniken (‘cultural techniques’) have been vigorously debated in the German media theory milieu since the turn of the millennium. Multiple, often competing conceptualizations of this slippery term (and its exemplars) have emerged as Kulturtechniken morphed into its current, most theoretically sophisticated incarnation (see Geoghegan; Winthrop-Young, “Kultur”). Though only now beginning to percolate in Anglo-American media studies, these debates have been transformative for research in Germany. Bernhard Siegert has gone so far as to pronounce the “conceptual transformation of media into cultural techniques” complete (Siegert, “End” 53). The dissolution of media as a concept—if not the entire discipline of media studies—likely reads as a shock to English-speaking scholars and students for whom the concept remains a powerful institutional and intellectual frame. But can Siegert’s claim serve as a catalyst for rethinking some of the foundational concepts and categories of Anglo-American media and cultural studies? To address this question, I offer an overview of German debates that introduces key thinkers and concepts to English readers. I then consider recent Anglo-American work that resonates with German discussions—specifically, work by Jonathan Sterne on media formats (e.g. MP3) and by John Durham Peters and Ned Rossiter on logistical media. I conclude with a few notes toward synthesizing these traditions by returning to Harold Innis’s early theorization of media. I argue that such a synthesis produces theoretical and methodological tools well suited to account for contemporary issues in digital culture such as Big Data and state surveillance. Kulturtechniken German debates around cultural techniques began in the wake of Friedrich Kittler’s controversial establishment of media as the technical a priori of the human sciences. Kittler’s media ontology sought to correct and displace Foucault’s conception of the archive as historical a priori. To sum up this move in one sentence: Kittler went a layer deeper than Foucault’s archaeologies did or could, showing the archive and discourse to be themselves always structured by media technologies: no discourse without pens, paper, and typewriters, no archives without recording media and address systems, no governmentality without files. According to Kittler, Foucault’s understanding did not go far enough because he—“the last historian or first archaeologist” (5)—was unable to think beyond conventional alphanumeric writing systems. Kittler showed that before it can ever condition subjects, or even be articulated as language, power/knowledge is forged via the processing, storage, and transmission of signals. In Kittler’s wake, the concept of media proliferated, eventually becoming over-extended and totalizing. Many were troubled that important considerations about what precedes media devices and networks had been pushed aside in the fevered dream of 1980s media analysis, with its proclivity for lost media stories, devices, and engineers. Their claim was that too much baby had been thrown out with the bathwater in the rush to, in Siegert’s words, replace the critique of reason with a critique of media (“End” 49). So Siegert, Cornelia Vismann, and others like Thomas Macho, Sybille Krämer—even Kittler himself—sought a way to unloosen the problematic knot the concept of media had become. They did so by rediscovering an old agricultural concept, Kulturtechniken. 
‘Cultural techniques’ first emerged in the late 19th century to describe agricultural procedures like irrigation and draining, straightening riverbeds, or constructing water reservoirs (Winthrop-Young, “Preliminary” 4-5; “Kultur” 380-81; also described by Williams 87-89). Already we can see the Kultur in Kulturtechniken is a very far cry from ‘culture’ in the Anglo-American tradition, which describes—to use a crude heuristic binary—either the ‘best that has been thought and said’ (Arnold) or a ‘whole way of life’ (Williams). The culture in cultural techniques originally had to do with cultivation, nurturing, or rendering habitable. These are, after all, the etymological roots of the word (the Latin colere means: to tend, guard, cultivate, or till). This is culture in the sense of doing, handling, working; it involves hands, bodies, and tools, which converge to draw borders and process distinctions. Imported from agricultural science into media theory—after a brief stopover in mass media studies (see Winthrop-Young “Kultur” 381-82)—cultural techniques are “conceived as operative chains that precede the media concepts they generate” (Siegert, “End” 58). This approach starts not with totalizing concepts like ‘media,’ ‘network,’ or ‘power,’ but instead places at the basis of changes in cultural and intellectual history inconspicuous techniques of knowledge like card indexes, media of pedagogy like the slate, discourse operators like quotation marks, uses of the phonograph in phonetics, or techniques of forming the individual like practices of teaching to read and write. (Siegert, “Map” 14) Theories of Kulturtechniken hold that such techniques delineate and assemble the broader spatio-temporal infrastructures of societies (see Parikka 154). There is less emphasis on the devices, objects, or systems privileged by early German media analysis than on ontic operations that reproduce, displace, process or reflect the distinctions at the core of any society, e.g. inside and outside, subject and object, nature and culture, matter and form, etc. (Siegert, “Cacography”; “End”; Grids). At the level of ontics we observe the means by which humans and tools assemble basic categories of space, time and being. The concept of cultural techniques clearly and unequivocally repudiates the ontology of philosophical concepts. Humans as such do not exist independently of cultural techniques of hominisation, time as such does not exist independently of cultural techniques of time measurement, and space as such does not exist independently of cultural techniques of spatial control. (Siegert, “End” 56-57) By shifting the analytic gaze from the ontological to the ontic we are able to observe crucial distinctions in a process of becoming, rather than as a priori. Vismann puts it another way: “cultural techniques define the agency of media and things. If media theory were, or had, a grammar, that agency would find its expression in objects claiming the grammatical subject position and cultural techniques standing in for verbs” (“Sovereignty” 83). The study of cultural techniques holds that media and things are not simply passive objects to be activated at the whim of an intentional (human) subject. Media and things supply their own rules of execution—we do not choose how to open or close a door, to take one of Siegert’s favourite examples (see “Doors”). A door does not present us with an open horizon of possibility. We must act according to the rules it sets out for us: push or pull, open or close. 
A door has agency in the sense that it structures what is possible for praxis. Thinking of a door in this way shows the picture of agency we usually work with, as reserved for acting human subjects, to be insufficient. As Vismann reminds us, in an echo of Latour, “certain actions cannot be attributed to a person; and yet they are somehow still performed” (“Sovereignty” 84). Another famous example from literature on cultural techniques is the plough that draws a furrow in the earth to mark the threshold of a city that will be built. Inside this space there will be order, law, custom, exchange; outside will be chaos and barbarism. The furrow, and the door or gate that replaces it, is a cultural technique of hominisation: inside is the space of the human, outside the space of the beast (see Vismann, “Sovereignty”; Siegert, “Doors”; “End”). Entire moral, political, and ethical worldviews are built upon such distinctions; they are the fabric with which social orders are woven. According to Vismann,the agricultural tool determines the political act; and the operation itself produces the subject, who will then claim mastery over both the tool and the action associated with it. Thus, the Imperium Romanum is the result of drawing a line – a gesture which, not accidentally, was held sacred in Roman Law. (“Sovereignty” 84) Property still works like this. Ownership only comes to exist after the drawing of a boundary: a line on a map. In this way Vismann can claim the drawing of the furrow as a cultural technique not just of property and ownership, but sovereignty itself. This tradition is not interested in the content or meaning of media or things, historically the focus of Anglo-American media and cultural studies, only in ways of doing—counting, measuring, collecting, observing, playing, confessing, listing—because these engender systems of knowing and modes of social organization. ‘Media’ as we understand them (e.g. gramophones, telegraphs, and computers) communicate and order by encoding non-sense into sense (and vice versa). This is done via the recording or transmission of signals, or the translating of data into alphanumeric characters. Cultural techniques are the parasitic third: neither sense nor non-sense, but that which engenders the distinctions and operations required for media to do their communicative and ordering work (Siegert, “End” 61-62; Serres). Listing, for instance, is a cultural technique that precedes a whole host of media networks, from Ancient Sumerian clay tablets to contemporary computer code (Young; see also Vismann Files 5-10). A list draws a border around certain items, inscribing order on a field of possible data. When placed in a list, persons, words, or things become dynamic units available for processing, storage, or transmission. Listing is the cultural technique by which things from the world (or from imagination) become encoded into the symbolic order and thereby subject to manipulation, revision, erasure, or reversibility (see Krämer; Winkler). What is included in a list vs. excluded is a basic distinction upon which rests all kinds of second-order operations, speculations, and actions that comprise media networks of trade and circulation, whether in Ancient Sumeria (Goody), early modern Europe (Poovey; Vismann Files) or Wall Street in 2008. There are major political stakes in such operations: the form of protocol determines how computation unfolds; how a person is listed can determine his or her fate. 
Similar conceptual innovations are being pursued across the Atlantic, though they have not yet coalesced as a ‘movement.’ Geoffrey Winthrop-Young highlights connections between cultural techniques and the post-humanism of Haraway, Wills, Wolfe and Hayles (“Kultur” 386), while Jussi Parikka maps resonances "with a range of cross-disciplinary approaches that the Anglo-American academic world is interested in: post-humanities, the non-human, questions of materiality and objects, the affective turn, media archaeology, historical methods and archives, as well as the role of anthropology in media studies" (149). There is a French connection, as well; affinities with Bruno Latour's work—highly influential in Anglo-America—are receiving similar attention (Siegert, Grids). To contribute to such efforts, I turn now to research from the Anglosphere that focuses on those points at which media concepts, devices and networks are still in-formation, having not yet taken their final institutional or epistemological forms. Formats: Behind and beneath Black Boxes Jonathan Sterne’s recent call to “focus on the stuff beneath, beyond, and behind the boxes our media come in” (Sterne, MP3 11) is part of a broader shift in the humanities toward materiality (see e.g. Grusen on ‘the nonhuman turn’; Ernst on media archaeology; Dolphijn and van der Tuin on new materialism; Starosielski on infrastructure studies). Sterne develops the concept of format to describe the ‘layers’ of technical development and social practice that occur before media devices or networks emerge as such. His case study is MP3, and he traces a long history of experimentation (and failure) with audio compression modes that prefigure its standardization. Though formats like MP3 (and the media devices in which they operate) appear to users fully formed, Sterne’s point is that there are myriad historical, institutional, technical and other factors that precede their appearance. They do not fall from the sky. “Cross-media formats” like MP3 “operate like catacombs under the conceptual, practical, and institutional edifices of media” (16). The concept of format offers a corrective to a trend Sterne sees in media studies that misguidedly conflates under the concept of ‘media’ a vast array of processes, mechanisms, histories, techniques, practices, etc., that operate at distinct layers of any given medium (be they spatial, temporal, institutional, or imaginary):Format theory would ask us to modulate the scale of our analysis of media somewhat differently. Mediality happens on multiple scales and time frames. Studying formats highlights registers like software, operating standards, and codes, as well as larger infrastructures, international corporate consortia, and whole technical systems. (MP3 11)The MP3 research is an extension of Sterne’s earlier work on the development in the late 19th and early 20th centuries of audile technique, “a set of practices of listening that were articulated to science, reason, and instrumentality and that encouraged the coding and rationalization of what was heard” (Sterne, Audible 23). Through this concept Sterne links together practices of hearing from diverse social, cultural and technical environments. 
The shift in medical practice from listening to patients’ speech (descriptions about how they were feeling) to listening to their bodies (using technological apparatuses) encodes a technique of hearing that affects the technical development of everything from telegraphy to radio and headphones, and the social development of concepts like private space and private property. Sterne’s emphasis on the granularity of technique—how humans and their devices converge to establish ways of doing, hearing, seeing, and thinking that are the ground upon which concepts, desires, and institutions are built—resonates very strongly with cultural techniques as theorized by Siegert and others. However, Sterne is far less invested in questions of philosophy and ontics than in methodology. He is not seeking to replace media studies with ‘format studies,’ only to address the erroneous conflation of various ‘sedimentary’ layers under media devices and networks. Siegert’s analysis of media operations at the level of ontics should be seen as a media-philosophical compliment to Sterne’s rigourous historical research. From the other end, cultural techniques could use some of Sterne’s “historical grit – institutional politics, economic marketing, and social history” (Peters “Strange” 12). Logistical Media: Arrangements of Space and Time Both cultural techniques and format theory describe ontic operations that precede concepts. These are actions that have to do with handedness; the verbs of media theory that operate on its objects, to recall Vismann’s characterization. John Durham Peters similarly describes what he calls ‘logistical media,’ which “arrange people and property into time and space” (Peters, “Calendar” 40). These are “prior to and form the grid in which messages are sent […] Logistical media establish the zero points of orientation, the convergence of the x and y axis” (40). In ancient societies, technologies like the calendar and clock established grids through which time came to be experienced, measured and calculated (as Mumford understood in 1934). The tower established terrain as a visible field over which power could be exerted. Time and space converge in these objects: towers render the time required to move over terrain as a spatial horizon that can be processed by the eye; the discrete, spatialised movements of a clock’s hands freeze the ephemeral arrow of time; the calendar renders cultural cycles into a spatial form by which these can be standardized and canonized (for a discussion of media and ‘the geometry of time’ see Winkler). Ned Rossiter extends Peters’s concept to account for conditions of labour and life in contemporary network cultures. For him, logistical media “coordinate and control the movement of labour, people, and things situated along and within global supply chains” (Rossiter, “Coded”). They are devices, protocols, and structures that establish parameters within which movement occurs. According to Bratton, design—whether architectural, infrastructural, or computational—produces “logistical media for mobilization and its administration, technologies that consolidate territory into logistical field and enable a Modern governance based on the abstracted calculation over omnidirectional spaces and surfaces, from open oceans to shared spreadsheets” (8). Logistical capitalism is ‘omnidirectional’ in the sense that distributed computation enables operations to be synchronized in time and distributed over space. 
“Finance capital and supply chain operations intersect with labor-power through logistical technologies that measure productivity and calculate value using real-time computational procedures.” (Rossiter, “Coded” 135). Operations occur almost simultaneously (in human time) rather than sequentially. In this regard coordinated informational and logistical environments seek to emulate the frictionless “oceanic vectors from which [logistics] is born” (Bratton 12; see also Virilio). Even though they use the term ‘media’ to develop the concept of logistical media, Peters and Rossiter actually identify moments prior to media, in which devices and techniques process logistical distinctions that establish concepts like time, space, being and ‘media’ itself. Peters’s and Rossiter’s logistical media are cultural techniques by another name. In fact, conceiving of logistical media as a series of interrelated cultural techniques may be a more productive move. Cultural techniques open up the black boxes of media and remove the conceptual baggage that has accumulated over the years (or arguably was always there—we understand ‘media’ no better now than McLuhan did in 1964). Cultural techniques turn our gaze away from dominant media forms, industries, content, or some abstract notion of ‘the (mass) media,’ and toward protocols, standard operating procedures and operations that facilitate infrastructures that Rossiter argues “rob living labour of time” (“Logistical” 67).Concepts like cultural techniques, format and logistical media grapple with questions of space and time. In so doing, they offer tools well suited to reckon with not just history but contemporary algorithmic culture. New spaces and times have emerged that structure everything from the rhythms of life and labour to expectations regarding communication and commodity circulation. Human subjects, once administered by written formats (over which the state enjoyed a monopoly), have become users (self)regulated by proprietary enterprise systems and code. Our behaviour as consumers is continuously marshaled by computational processes that we cannot see and often do not understand. These problems are inherently logistical; they demand analyses properly tuned to the ways in which the cultural techniques of algorithmic culture structure what is possible for social, political, and imaginative life.Harold Innis’s ‘Civilizational’ Medium Theory In foregrounding issues of space and time, the above approaches bring us back to the ‘civilizational’ tradition of media studies most closely associated with Canadian scholars Harold Innis and Marshall McLuhan. Peters suggests that this tradition, which “ponders the civilizational stakes of media as a cultural complex,” has received less emphasis in the last 30 years than dominant streams (Peters schematises the latter as: textual and interpretive; social and explanatory; historical and institutional, see “Strange” 4-5). The more elusive fourth stream has to do with understanding the way that the biases of dominant media shape the character of civilizations, marshaling culture and politics toward certain tendencies: e.g. spatial conquest, as with Rome and its parchment administration, or temporal endurance, as with religions of the papyrus book (see Innis, Bias; Empire). Innis’s great insight was to suggest that an understanding of large-scale civilizational issues can be derived by observing both the granular techniques of e.g. 
memory and preservation, administration, or communication, and the knowledge practices that structure these realms (aside from medial bias, Innis’s most well known concept is probably ‘monopoly of knowledge’). Innis was after something like a theory of civilizational cultural techniques. Vismann argues that the Canadian’s distinction between space and time informs a similar conceptual split between cultural techniques that organize spatial categories (“border regimes and surveying techniques […] the act of drawing a line,”) and genealogical techniques, “which govern notions of duration, assign origins and secure the future: record-keeping, adoption and inheritance regulations, but also breeding and grafting” (“Sovereignty” 91-92). In Innis we can also anachronistically observe a convergence of the original terrain-based definition of Kulturtechniken and its more recent media theoretical incarnation. The concept, as noted above, originally described engineering processes aimed at cultivating land and rendering it habitable. Innis’s early ‘dirt research’ of the 1920s and 30s has a similar emphasis on terrain. The Fur Trade of Canada (1927), for instance, shows that colonial trade networks are shaped by habitation patterns of non-human actors like the beaver. The book’s famous opening gambit was that “it is impossible to understand the characteristic developments of the [fur] trade or of Canadian history without some knowledge of [the beaver’s] life and habits” (3). A detailed description follows of certain biological, geographical, material and even social characteristics of the beaver. These remarks grant context to the later discussion of temporary trade routes, sites of exchange, and navigational patterns established by colonial and indigenous fur trappers in the 16th and 17th centuries. Innis highlights, for instance, that alterations to hunting techniques (arising from encounters between these cultures), combined with the immobility of the beaver (its ‘heavy fixed capital’), played a crucial role in pushing trade and settlement further west.In the language of the economists, the heavy fixed capital of the beaver became a serious handicap with the improved technique of Indian hunting methods, incidental to the borrowing of iron from Europeans […] With [beaver] destruction in the easterly part of North America came the necessity of pushing to the westward and northwestward to tap new areas of the more valuable furs. The problem of the fur trade became one of organizing the transport of supplies and furs over increasingly greater distances (Innis, Fur Trade 5-6).The organization of economic activity around fur, a staple good relatively abundant in a peripheral colony (Canada) but highly desired in a central empire (France, later Britain), established techniques, infrastructure and fixed capital that would orient Canada’s emergent economy toward other staples in subsequent centuries, such as lumber, cod, grain, and oil (see Innis, Staples and Cod; Watkins). Innis’s description frames the fur trade and Canadian economic history in terms of cultural techniques and logistics: the movement of things, people, and data. He uses the language of his intellectual training ground, Economics, but the early studies went far beyond conventional economic histories. They trace both the cultural techniques of cultivating land for extraction of staple goods, and the process by which such techniques draw distinctions (in maps, settlements, trade routes, fixed capital, etc.). 
In later works, Innis extrapolated, seeking to understand the way such techniques and distinctions produce different civilizational ‘biases’ toward space and time (Empire; Bias). Innis’s early studies were archaeologies of trade and infrastructure that showed how nation states, economies, cycles of accumulation and circulation, and even national identities arise from encounters between humans, terrain, waterways, and fauna, and in response to problems of transportation and navigation, i.e. logistics. This was a radically new approach to understanding economic and civilizational history that emerged from Innis’s commitment to what he called ‘dirt research.’ From 1924 Innis traveled hundreds of miles by Canoe across Canada in order to gather first-hand observations about staples. He learned about the conditions of extraction and production, transportation, exchange, and so on (Creighton 61-64). Innis paid careful attention to terrain and what happens on its surface. He mapped the ebbs and flows not just of rivers, but also social encounters (conducting interviews with those involved in staple economies). He developed tools to understand what precedes not only networks of circulation and communication but also the cultural, political and institutional life built on such networks. Such work stands as a theory of cultural techniques avant la lettre. Conclusion Innis’s fourth, civilizational stream of media and cultural studies grapples with issues of infrastructure and logistics, which is where cultural techniques, format theory and logistical media brush up against one another. In conversation, what kind of advantages do these approaches offer in thinking through the dissolution of mass media into digital computation? And how might they help us to connect new modes of data organization and processing—analytics, algorithmic trading, state surveillance, etc.—with older modes, bringing seemingly divergent historical periods into contact? Big Data, for instance, is an orienting principle not just for state surveillance and corporate business plans, but everything from city planning (“smart cities”) and political campaigning (“data consultants”) to counterterrorism (“predictive policing”). How do the cultural techniques of Big Data compare with those of modernity’s earlier “datascapes”—techniques of surveillance and administration like the state census, or private sector efficiency, e.g. Taylorism? Furthermore, does Innis, the great political superego of the civilizational tradition (with his insistence that the key to peace and prosperity is balance amongst the biases of communication), offer a more productive political orientation than currently on offer in the approaches sketched above? Such questions remain for future dirt research. The modest aims of this paper are to introduce the emergent concept of cultural techniques to English readers and connect it with similar research happenings in the Anglosphere. More ambitiously, I suggest that synthesizing these traditions produces conceptual and methodological tools that are well equipped to account for contemporary developments in digital or ‘algorithmic’ culture. A benefit of these approaches is that they bring to light what precedes foundational concepts like network, system, nation, identity, even ‘media’ and ‘culture’ themselves. They also enable us to more clearly understand that justice in the age of algorithmic capital is as much an engineering problem as a political or philosophical one. 
Cultural techniques are means by which extant systems enframe life and labour, yet they are also means by which new, more just systems might be built.References Bratton, Benjamin. "Logistics of Habitable Circulation." Speed and Politics. By Paul Virilio. Los Angeles: Semiotext(e), 2006. Creighton, Donald. Harold Adams Innis: Portrait of a Scholar. Toronto: U of Toronto P, 1957. Dolphijn, Rick, and Iris van der Tuin, eds. New Materialism: Interviews & Cartographies. Open Humanities Press, 2012. Ernst, Wolfgang. Digital Memory and the Archive. Ed. Jussi Parikka. Minneapolis: U of Minnesota P, 2012. Foucault, Michel. The Order of Things: An Archaeology of the Human Sciences. London: Routledge, 2009. Geoghegan, Bernard Dionysius. “After Kittler: On the Cultural Techniques of Recent German Media Theory.” Theory, Culture, and Society 30.3 (2013): 66-82. Goody, Jack. The Domestication of the Savage Mind. Cambridge UK: Cambridge UP, 1977. Grusen, Richard, ed. The Nonhuman Turn. Minneapolis: U of Minnesota P, 2015. Innis, Harold A. The Cod Fisheries: The History of an International Economy. Toronto: The Ryerson Press, 1940. ———. The Fur Trade in Canada: An Introduction to Canadian Economic History. Toronto: U of Toronto P, 1973. ———. Staples, Markets, and Cultural Change. Ed. Daniel Drache. Montreal: McGill-Queens UP, 1995. ———. Empire and Communication. Toronto: Dundurn, 2007. ———. The Bias of Communication. 2nd Ed. Toronto: U of Toronto P, 2002. Kittler, Friedrich A. Gramophone, Film, Typewriter. Trans. Geoffrey Winthrop-Young and Michael Wutz. Stanford: Stanford UP, 1999. Krämer, Sybille. “The Cultural Techniques of Time-Axis Manipulation: On Friedrich Kittler’s Conception of Media.” Theory, Culture, and Society 23.7 (2006): 93-109. Latour, Bruno. Reassembling the Social: An Introduction to Actor-Network Theory. Oxford UK: Oxford UP, 2005. McLuhan, Marshall. Understanding Media. Toronto: U of Toronto P, 1964. Mumford, Lewis. Technics and Civilization. New York and Burlingame: Harbinger 1936. Parikka, Jussi. “Afterward: Cultural Techniques and Media Studies.” Theory, Culture, and Society 30.3 (2013): 147-159. Peters, John Durham. “Strange Sympathies: Horizons of German and American Media Theory.” American Studies as Media Studies. Eds. Frank Kelleter and Daniel Stein. Heidelberg: Universitätsverlag, 2008. 3-23. ———. “Calendar, Clock, Tower.” Deus in Machina: Religion and Technology in Historical Perspective. Ed. Jeremy Stolow. New York: Fordham UP, 2013. 25-42. Poovey, Mary. A History of the Modern Fact: Problems of Knowledge in the Sciences of Wealth and Society. Chicago: U of Chicago P, 1997. Rossiter, Ned. “Logistical Worlds.” Cultural Studies Review 20.1 (2014): 53-76. ———. “Coded Vanilla: Logistical Media and the Determination of Action.” South Atlantic Quarterly 114.1 (2015): 135-52. Serres, Michel. The Parasite. Trans. Lawrence R. Schehr. Minneapolis: U of Minnesota P, 2007. Siegert, Bernhard. “Cacography or Communication? Cultural Techniques in German Media Studies.” Trans. Geoffrey Winthrop-Young. Grey Room 29 (2008): 26-47. ———. “The Map Is the Territory.” Radical Philosophy 169 (2011): 13-16. ———. “Doors: On the Materiality of the Symbolic.” John Durham-Peters (Trans.). Grey Room 47 (2012): 6-23. ———. “Cultural Techniques: Or the End of the Intellectual Postwar Era in German Media Theory.” Trans. Geoffrey Winthrop-Young. Theory, Culture, and Society 30.3 (2013): 48-65. ———. Cultural Techniques: Grids, Filters, Doors and Other Articulations of the Real. Trans. Geoffrey Winthrop-Young. 
New York: Fordham UP, 2015. Starosielski, Nicole. The Undersea Network. Durham: Duke UP, 2015. Sterne, Jonathan. The Audible Past: Cultural Origins of Sound Reproduction. Durham: Duke UP, 2003. ———. MP3: The Meaning of a Format. Durham: Duke UP, 2012. Virilio, Paul. The Original Accident. Trans. Julie Rose. Cambridge UK: Polity. Vismann, Cornelia. Files: Law and Media-Technology. Trans. Geoffrey Winthrop-Young. Stanford: Stanford UP, 2008. ———. “Cultural Techniques and Sovereignty.” Trans. Ilinca Irascu. Theory, Culture, and Society 30.3 (2013): 83-93. Williams, Raymond. Keywords. New York: Fontana, 1983. Winkler, Hartmut. “Geometry of Time: Media, Spatialization and Reversibility.” Media Theory on the Move conference. Potsdam, Germany. 21-24 May 2009. Online Transcript. ‹http://www.uni-paderborn.de/~winkler/hase_e.pdf›. Winthrop-Young, Geoffrey. “Cultural Techniques: Preliminary Remarks.” Theory, Culture, and Society 30.3 (2013): 3-18. ———. “The Kultur of Cultural Techniques: Conceptual Inertia and the Parasitic Materialities of Ontolization.” Cultural Politics 10.3 (2015): 376-88. Young, Liam Cole. “On Lists and Networks: An Archaeology of Form.” Amodern 2 (2013). 15 Feb 2015. ‹http://amodern.net/article/on-lists-and-networks/›.
Article
This article examines the intersection of two structural developments: the growth of surveillance and the rise of “big data.” Drawing on observations and interviews conducted within the Los Angeles Police Department, I offer an empirical account of how the adoption of big data analytics does—and does not—transform police surveillance practices. I argue that the adoption of big data analytics facilitates amplifications of prior surveillance practices and fundamental transformations in surveillance activities. First, discretionary assessments of risk are supplemented and quantified using risk scores. Second, data are used for predictive, rather than reactive or explanatory, purposes. Third, the proliferation of automatic alert systems makes it possible to systematically surveil an unprecedentedly large number of people. Fourth, the threshold for inclusion in law enforcement databases is lower, now including individuals who have not had direct police contact. Fifth, previously separate data systems are merged, facilitating the spread of surveillance into a wide range of institutions. Based on these findings, I develop a theoretical model of big data surveillance that can be applied to institutional domains beyond the criminal justice system. Finally, I highlight the social consequences of big data surveillance for law and social inequality.
Article
This essay traces the advances in time axis manipulation brought about by the media switches from symbolic mediation (alphabet) to analogue recording (phonography and cinematography) and digital processing (computers). Special emphasis is on the mathematical dimension of the final stage. The Fourier transform enables the conversion of sound events into periodicities with numerical values that can then be manipulated and converted back into sound events, even if there was no original source involved. The media access frequencies and operate at speeds beyond all human thresholds. Kittler argues that the resulting ability to subvert and simulate human perception is the very definition of technical media.
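The operation Kittler describes can be illustrated with a short sketch: a signal is converted into numerical frequency components, those numbers are manipulated, and the result is converted back into a signal that never had an acoustic source. The crude bin-shifting below is only an illustration of frequency-domain manipulation, not a claim about how any particular audio technology works.

```python
import numpy as np

sample_rate = 8000
t = np.arange(0, 1.0, 1 / sample_rate)
signal = np.sin(2 * np.pi * 440 * t)        # a 440 Hz tone standing in for a "sound event"

spectrum = np.fft.rfft(signal)              # sound event -> periodicities as numerical values
shifted = np.roll(spectrum, 50)             # manipulate the numbers directly (shift frequency bins)
shifted[:50] = 0                            # discard the bins that wrapped around
new_signal = np.fft.irfft(shifted, n=len(signal))  # numbers -> a signal with no original source
```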
Article
This article suggests that Facebook embodies a new logic of capitalist governance, what has been termed the ‘social logic of the derivative’. The logic of the derivative is rooted in the now dominant financial level of the capitalist economy, and is mediated by social media and the algorithmic processing of large digital data sets. This article makes three precise claims: First, that the modus operandi of Facebook mirrors the operations of derivative financial instruments. Second, that the algorithms that Facebook uses share a genealogy with those of derivative financial instruments – both are outcomes of the influence of the ‘cyber sciences’ on managerial practice in the post-war years. Third, that the future potential of Facebook lies in its ability to apply the logic of derivatives to the financial valuation of ordinary social relations, thus further extending the process of financialization of everyday life.
Article
This article offers an introduction to the German concept of Kulturtechniken (cultural techniques), with a special focus on the term’s multilayered semantic career, as well as on the way old notions of Kultur are at play in the concept.