Geoff Cox
WAYS OF MACHINE SEEING:
AN INTRODUCTION
APRJA Volume 6, Issue 1, 2017
ISSN 2245-7755
CC license: ‘Attribution-NonCommercial-ShareAlike’.
You are looking at the front cover of the book
Ways of Seeing written by John Berger in
1972.[1] The text is the script of the TV series,
and if you’ve seen the programmes, you can
almost hear the distinctive pedagogic tone of
Berger’s voice as you read his words: “The
relation between what we see and what we
know is never settled.”[2]
The image by Magritte on the cover further emphasises the point about the deep ambiguity of images and the always-present difficulty of legibility between words and seeing.[3] In addition to the explicit reference to the “artwork” essay by Walter Benjamin,[4] the TV programme employed Brechtian techniques, such as revealing the technical apparatus of the studio, to encourage viewers not simply to watch (or read) in an easy way but to be forced into an analysis of elements of “separation” that would lead to a “return from alienation”.[5] Berger further reminded the viewer of the specifics of the technical reproduction in use and its ideological force in a similar manner:
But remember that I am controlling and
using for my own purposes the means
of reproduction needed for these
programmes […] with this programme
as with all programmes, you receive
images and meanings which are
arranged. I hope you will consider what
I arrange but please remain skeptical
of it.
That you are not really looking at the book as such but at a scanned image of a book (viewable by means of an embedded link to a server where the image is stored) testifies to the ways in which what, and how, we see and know is further unsettled through complex assemblages of elements. The increasing use of relational machines such as search engines is a good example of the ways in which knowledge is filtered at the expense of the more specific detail of how it was produced. Knowledge is now produced in relation to planetary computational infrastructures in which other agents such as algorithms generalise massive amounts of (big) data.[6]
Clearly algorithms do not act alone or
with magical (totalising) power but exist as
part of larger infrastructures and ideologies.
Some well-publicised recent cases have
come to public attention that exemplify a con-
temporary politics (and crisis) of representa-
tion in this way, such as the Google search
results for “three black teenagers” and “three
white teenagers” (mug shots and happy
teens at play, respectively).[7] The problem
is one of learning in its widest sense, and
“machine learning” techniques are employed
on data to produce forms of knowledge that
are inextricably bound to hegemonic systems of power and prejudice.

Figure 1: The Cover of Ways of Seeing by John Berger (1972). Image from Penguin Books.
There is a sense in which the world be-
gins to be reproduced through computational
models and algorithmic logic, changing what
and how we see, think and even behave.
Subjects are produced in relation to what
algorithms understand about our intentions,
gestures, behaviours, opinions, or desires,
through aggregating massive amounts of
data (data mining) and machine learning (the
predictive practices of data mining).[8] That machines learn is accounted for through a combination of calculative practices that help to approximate what will likely happen through the use of different algorithms and models. The difficulty lies in the extent to which these generalisations are accurate, or the degree to which the predictive model is valid, or “able to generalise” sufficiently well. Hence the “learners” (machine learning algorithms), although working at the level of generalisation, are also highly contextual and specific to the fields in which they operate, in a coming together of what Adrian Mackenzie calls a “play of truth and falsehood”.[9]
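The question of whether a learner is “able to generalise” is, in technical terms, conventionally assessed by comparing its performance on the data it was trained on with its performance on data held back from training. The following is a minimal sketch of that logic, using a hypothetical nearest-centroid classifier on synthetic two-class data, not any system discussed in this essay:

```python
import random

random.seed(0)

# Two synthetic classes of 2-D points scattered around different centres.
def sample(centre, n):
    return [(centre[0] + random.gauss(0, 1.0),
             centre[1] + random.gauss(0, 1.0)) for _ in range(n)]

data = [(p, 0) for p in sample((0.0, 0.0), 100)] + \
       [(p, 1) for p in sample((3.0, 3.0), 100)]
random.shuffle(data)
train, test = data[:150], data[150:]

# "Learning" here is just computing one centroid per class
# from the training split.
def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

centroids = {label: centroid([p for p, y in train if y == label])
             for label in (0, 1)}

# Prediction generalises: any point, seen or unseen, is assigned
# to the class whose centroid is nearer.
def predict(point):
    def dist2(c):
        return (point[0] - c[0]) ** 2 + (point[1] - c[1]) ** 2
    return min(centroids, key=lambda label: dist2(centroids[label]))

def accuracy(split):
    return sum(predict(p) == y for p, y in split) / len(split)

train_acc, test_acc = accuracy(train), accuracy(test)
print(f"train accuracy: {train_acc:.2f}")
print(f"test accuracy:  {test_acc:.2f}")
```

The gap between the two accuracies is one conventional measure of how well the generalisation holds; the point here is only that a model’s validity is always relative to the data the learner happened to be given.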
Thus what constitutes knowledge can be seen to be controlled and arranged in new ways that invoke Berger’s earlier call for skepticism. Antoinette Rouvroy is similarly concerned that algorithms begin to define what counts for knowledge as a further case of subjectivation, as we are unable to substantively intervene in these processes of how knowledge is produced.[10] Her claim is that knowledge is delivered “without truth” through the increasing use of machines, such as search engines, that filter it with no interest in content as such or in the detail of how knowledge is generated. Instead they privilege real-time relational infrastructures that subsume the knowledge of workers and machines into generalised assemblages as techniques of “algorithmic governmentality”.[11]
In this sense, the knowledge produced is bound together with systems of power that are more and more visual, and hence ambiguous, in character. And clearly computers further complicate the field of visuality, and ways of seeing, especially in relation to the interplay of knowledge and power. Aside from the totalising aspects (that I have outlined thus far), there are also significant “points of slippage or instability” of epistemic authority,[12] or what Berger would no doubt have identified as the further unsettling of the relations between seeing and knowing. So, if algorithms can be understood as seeing, in what sense, and under what conditions? Algorithms are ideological only inasmuch as they are part of larger infrastructures and assemblages.
Figure 2: The Ways of Seeing book cover image seen
through an optical character recognition program.
Created by SICV.
But to ask whether machines can see or not is the wrong question; rather, we should discuss how machines have changed the nature of seeing and hence our knowledge of the world.[13] In this we should not try to oppose machine and human seeing but take them to be more thoroughly entangled: a more “posthuman” or “new materialist” position that challenges the onto-epistemological character of seeing and produces new kinds of knowledge-power that both challenges and extends the anthropomorphism of vision and its attachment to dominant forms of rationality. Clearly there are other (nonhuman) perspectives that also illuminate our understanding of the world. This pedagogic (and political) impulse is perfectly in keeping with Ways of Seeing and its project of visual literacy.[14] What is required is an expansion of this ethic to algorithmic literacy, to examine how machine vision unsettles the relations between what we see and what we know in new ways.
Notes
[1] This essay was rst commissioned
by The Photographers Gallery for their
Unthinking Photography series, https://
unthinking.photography/themes/machine-
vision/ways-of-machine-seeing. The title is
taken from a workshop organised by the
Cambridge Digital Humanities Network, con-
vened by Anne Alexander, Alan Blackwell,
Geoff Cox and Leo Impett, and held at
Darwin College, University of Cambridge,
11 July 2016, http://www.digitalhumanities.
cam.ac.uk/Methods/waysofmachineseeing;
a subsequent workshop, Ways of Machine
Seeing 2017, is a two-day workshop
organised by the Cambridge Digital
Humanities Network, and CoDE (Cultures of
the Digital Economy Research Institute) and
Cambridge Big Data, to be held 26-28 June
2017, http://www.digitalhumanities.cam.
ac.uk/Methods/woms2017/woms2017CFP.
[2] Ways of Seeing, Episode 1
(1972), https://www.youtube.com/
watch?v=0pDE4VX_9Kk. The 1972 BBC
four-part television series of 30-minute lms
was created by writer John Berger and
producer Mike Dibb. Berger’s scripts were
adapted into a book of the same name,
published by Penguin also in 1972. The
book consists of seven numbered essays:
four using words and images; and three
essays using only images. See https://
en.wikipedia.org/wiki/Ways_of_Seeing.
[3] René Magritte, The Key of Dreams (1930), https://courses.washington.edu/hypertxt/cgi-bin/book/wordsinimages/keydreams.jpg. Aside from the work of Magritte, Joseph Kosuth’s One and Three Chairs (1965) comes to mind, which makes a similar point in presenting a chair, a photograph of the chair, and an enlarged dictionary definition of the word “chair”, https://en.wikipedia.org/wiki/One_and_Three_Chairs.

Figure 3: Code by The Scandinavian Institute for Computational Vandalism.
[4] The rst section of the programme/book
is acknowledged to be largely based on
Benjamin’s essay “The Work of Art in the
Age of Mechanical Reproduction” (1936),
https://www.marxists.org/reference/subject/
philosophy/works/ge/benjamin.htm.
[5] The idea is that “separation” produces a disunity that is disturbing to the viewer/reader, Brecht’s “alienation effect” (Verfremdungseffekt), and that this leads to a potential “return from alienation”. See https://en.wikipedia.org/wiki/Distancing_effect.
[6] To give a sense of scale and its consequences, Facebook has developed the face-recognition software DeepFace. With over 1.5 billion users having uploaded more than 250 billion photographs, it is allegedly capable of identifying any person depicted in a given image with 97% accuracy. See https://research.facebook.com/publications/deepface-closing-the-gap-to-human-level-performance-in-face-verification/.
[7] Antoine Allen, “The ‘three black teenagers’ search shows it is society, not Google, that is racist”, The Guardian (10 June 2016), https://www.theguardian.com/commentisfree/2016/jun/10/three-black-teenagers-google-racist-tweet.
[8] Adrian Mackenzie, “The Production of
Prediction: What Does Machine Learning
Want?,” European Journal of Cultural
Studies, 18, 4–5 (2015): 431.
[9] Mackenzie, “The Production of
Prediction,” 441.
[10] See, for instance, Antoinette Rouvroy’s
“Technology, Virtuality and Utopia:
Governmentality in an Age of Autonomic
Computing,” in The Philosophy of Law
Meets the Philosophy of Technology:
Computing and Transformations of Human
Agency, eds. Mireille Hildebrandt and
Antoinette Rouvroy (London: Routledge,
2011), 136–157.
[11] This line of argument is also close to what Tiziana Terranova has called an “infrastructure of autonomization”, making reference to Marx’s views on automation, particularly in his “Fragment on Machines”, as a description of how machines subsume the knowledge and skill of workers into wider assemblages. Tiziana Terranova, “Red Stack Attack! Algorithms, capital and the automation of the common”, Effimera (2014), accessed August 24, 2016, http://effimera.org/red-stack-attack-algorithms-capital-and-the-automation-of-the-common-di-tiziana-terranova/.
[12] Mackenzie, “The Production of
Prediction,” 441.
[13] I take this assertion from Benjamin once more, who considered the question of whether film or photography is art secondary to the question of how art itself has been radically transformed: “Earlier much futile thought had been devoted to the question of whether photography is an art. The primary question — whether the very invention of photography had not transformed the nature of art — was not raised. Soon the film theoreticians asked the same ill-considered question with regard to film.” https://www.marxists.org/reference/subject/philosophy/works/ge/benjamin.htm.
[14] Berger was associated with The Writers
and Readers Publishing Cooperative,
aiming to “advance the needs of cultural
literacy, rather than cater to an ‘advanced’
[academic] but limited readership” (From the
Firm’s declaration of intent). In this sense it
draws upon the Marxist cultural materialism
of Raymond Williams and Richard Hoggart’s
The Uses of Literacy (1966).
Works cited
Allen, Antoine. “The ‘three black teenagers’ search shows it is society, not Google, that is racist.” The Guardian (10 June 2016), https://www.theguardian.com/commentisfree/2016/jun/10/three-black-teenagers-google-racist-tweet. Web.
Benjamin, Walter. “The Work of Art in the
Age of Mechanical Reproduction.” (1936).
https://www.marxists.org/reference/subject/
philosophy/works/ge/benjamin.htm. Print.
Berger, John. Ways of Seeing. London:
Penguin, 1972. Print.
Cox, Geoff. “Ways of Machine Seeing.” Unthinking Photography. London: The Photographers Gallery, 2016. https://unthinking.photography/themes/machine-vision/ways-of-machine-seeing. Web.
Mackenzie, Adrian. “The Production of
Prediction: What Does Machine Learning
Want?” European Journal of Cultural
Studies, 18, 4–5 (2015): 431. Print.
Rouvroy, Antoinette. “Technology, Virtuality
and Utopia: Governmentality in an Age
of Autonomic Computing.” Eds. Mireille
Hildebrandt and Antoinette Rouvroy.
The Philosophy of Law Meets the
Philosophy of Technology: Computing and
Transformations of Human Agency. London:
Routledge, 2011. 136–157. Print.
Terranova, Tiziana. “Red Stack Attack! Algorithms, capital and the automation of the common.” Effimera (2014). http://effimera.org/red-stack-attack-algorithms-capital-and-the-automation-of-the-common-di-tiziana-terranova/. Accessed August 24, 2016. Web.