The Algorithmic Imaginary – automation and stupidity

An algorithmic imaginary
Sam Kinsley. Exeter, June 2016.
We are being confronted in some quarters by the claim of a death of theory, brought about
by the computational largesse of ‘big data’. A supposition that it is possible to ‘capture’ data
about the world in sufficient breadth, depth and speed such that ‘correlation is enough’. One
might hazard from such arguments that, whereas the ‘cultural turn’ in geography pushed out
of favour a predilection for positivism, it is alive and kicking in other contexts of knowledge
generation. Yet, the many ‘worldings’ that emerge from the ‘algorithms’ that perform ‘big
data’ research fall foul of what fellow geographer Rob Kitchin calls fallacies of empiricism.
‘Big data’ are supposedly theoretically neutral and apparently exhaustive in their reach. Yet
as we know: any data set is framed by what you are able to capture and what you want to
see. Likewise, the idea that ‘big data’ are somehow outside of theory is naïve realism. All
forms of research are framed in agendas and carry epistemological or ontological
assumptions. ‘Big data’ itself is, after all, a concept. There is a discursive politics to ‘big data’
around what can be named ‘truth’ and who can name it. However, it is not sufficient to
speak only about the ‘data’, however politically necessary. This would somewhat excuse what
is an epistemological misstep in how we discuss ‘algorithms’. To be clear, I am not only
speaking of how we study software and its uses but also how we study with software – how it
gets used as an integral part of research. The concept of an ‘algorithm’ has taken on a
peculiar and powerful agency in how we understand studies of and with software and
computation, and it is that power that I want to interrogate today as concretised in what I’ve
come to call an ‘algorithmic imaginary’.
I have structured this paper in three parts. First I want to explore the ‘discursive politics’ of
how the term ‘algorithm’ is defined. I want to pose the question: ‘what is meant when
different people use the word algorithm?’ to open out how the term is granted power.
Second, I will offer a means to critically interrogate that power through the concept of an
‘algorithmic imaginary’. In particular, I want to discuss how such an imaginary carries
suppositions about anticipation and demonstrates a dialectic of reason and stupidity—or
baseness, bêtise in French, or Heidegger’s Großdummheit in German. Finally, I will reflect on
what this might mean for critical social science and how we conduct research on and with
software and computation.
What then is meant when we use the word algorithm? Well, we can kick off a jaunt through
the sub-discourses of defining algorithms using an excellent piece on ‘programming for non-
programmers’ by Paul Ford:
““Algorithm” is a word writers invoke to sound smart about technology. Journalists tend to
talk about “Facebook’s algorithm” or a “Google algorithm,” which is usually inaccurate.
They mean “software”. Algorithms don’t require computers any more than geometry does.
An algorithm solves a problem, and a great algorithm gets a name. Dijkstra’s algorithm, after
the famed computer scientist Edsger Dijkstra, finds the shortest path in a graph.”
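To keep that technical sense in view – an algorithm as an abstract, nameable method rather than any particular piece of software – here is a minimal sketch of Dijkstra’s algorithm in Python. The toy graph, node names and weights are invented for illustration; the point is only that the ‘algorithm’ is a conceptual sequence of steps that could be written in any language, or worked through by hand.

```python
import heapq

def dijkstra(graph, source):
    """Compute shortest-path distances from `source` to every node.

    `graph` maps each node to a dict of neighbour -> edge weight.
    Illustrative only: a textbook rendering, not any platform's code.
    """
    distances = {node: float("inf") for node in graph}
    distances[source] = 0
    queue = [(0, source)]  # (distance so far, node)

    while queue:
        dist, node = heapq.heappop(queue)
        if dist > distances[node]:
            continue  # stale entry; a shorter path was already found
        for neighbour, weight in graph[node].items():
            candidate = dist + weight
            if candidate < distances[neighbour]:
                distances[neighbour] = candidate
                heapq.heappush(queue, (candidate, neighbour))
    return distances

# A toy network: nodes are places, weights are travel times (invented).
toy_graph = {
    "A": {"B": 5, "C": 2},
    "B": {"D": 1},
    "C": {"B": 1, "D": 7},
    "D": {},
}
print(dijkstra(toy_graph, "A"))  # {'A': 0, 'B': 3, 'C': 2, 'D': 4}
```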
Or, as Kitchin, following computer scientist Robert Kowalski, notes in his paper on ‘Thinking
critically about and researching algorithms’, it is “logic + control”, such that the term
denotes a form of: “specific step-by-step method of performing written elementary
arithmetic… [and] came to describe any method of systematic or automatic calculation.”
This answers a simple definitional question: “what is an algorithm?” but doesn’t quite answer
the question I’ve posed: “what do we mean when we use the word algorithm?”, which Ford
gestures towards when suggesting journalists, and one might include a lot of academics here,
use the word ‘algorithm’ when they mean ‘software’. As has been widely noted, there is a
sense that the various communities of practice, ranging from technology developers,
anthropologists and social scientists, and the broader public are using the word in different
ways. There is no single meaning of the word then. After decades of post-structuralism one
can argue that this really shouldn’t be a surprise. Nevertheless, there is a kind of discursive politics
being performed when we (anthropologists, geographers, social scientists and so on) invoke
the term and idea of an ‘algorithm’. Part of my motivation for this paper then is to address
what I see as a need to reflect upon that a little more than we do.
Of course, I may be wrong; perhaps we just need to let the signifier/signified relation flex
and evolve. However, my main motivation for beginning by addressing this question of
definition is that I think we do already have useful words that address what is being suggested
in the use of the word ‘algorithm’ – amongst these words are: code, function (as in software
function), policy, programme, protocol, rule and software (for example, in the work of media
theorist Alexander Galloway).
What is salient about the technical definition of an algorithm to how we might use the word
more broadly is the sense of a (logical) model of inferred relations between things formally
defined in code that is developed through iteration towards a particular end. Indeed, as
Microsoft researcher Tarleton Gillespie notes, choices are made between ‘algorithms’ based
on values such as how quickly they return the result, the load they impose on the system, or
perhaps their perceived computational elegance. The embedded values that make a
qualitative difference are probably more about the problem being solved: the way the
problem has been modelled, the goal chosen, and the way that goal has been
operationalised.
What quickly takes us away from simply thinking about a function defined in code is that the
programmes which operationalise algorithms need to be validated in some way – to test their
effectiveness, and this involves using test data. The selection of the data also necessarily has
embedded assumptions, values and workarounds, which, as Gillespie goes on to suggest:
“may also be of much more importance to our sociological concerns than the algorithm
learning from it.” The code that represents the algorithm is instantiated either within a
software programme – a collection of instructions, operations, functions and so on – that is
bundled together as what we used to call an ‘application’, or it might exist in a ‘script’ of
code that gets pulled into use as and when needed, for example in the context of a website. As
Gillespie argues:
“these exhaustively trained and finely tuned algorithms are instantiated inside of what we
might call an application, which actually performs the functions we’re concerned with. For
algorithm designers, the algorithm is the conceptual sequence of steps, which should be
expressible in any computer language, or in human or logical language. They are
instantiated in code, running on servers somewhere, attended to by other helper applications,
triggered when a query comes in or an image is scanned.”
He goes on to offer the interesting analogy of the difference between the “book” in your
hand and the “story” within it. Applications thus also embody values, outside of their
reliance on a particular algorithm.
This is often missed when algorithms are invoked, all-too-quickly, in the discussion of
contemporary phenomena that involve the use of computing in some way. As Kitchin
suggests, the consequence is that the ways in which algorithms are frequently understood are
rather narrowly framed and can lack critical reflection. Indeed, I’d go further: there’s a
peculiar sense in which some discussions of ‘algorithms’ connote the dystopian science fiction
of The Matrix, or The Terminator. Speculative Realist philosopher Ian Bogost (2015) has gone
so far as to suggest that this is a form of ‘worship’ of the idea of an ‘algorithm’. He has
argued that “algorithms hold a special station in the new technological temple because
computers have become our favorite idols.” And others have noted in this regard that “there
is an important tension emerging between what we expect these algorithms to be, and what
they in fact are” (Gillespie). To pursue the religious metaphor further, such imaginings of
algorithms figure them as something like golems – with the faceless programmer the holder
of the mystical code, in Hebrew the shem.
Yet, as with the production of religious texts, there are people making decisions every step of the
way in the production of software. On top of that, there were decisions made by people in all
of the steps of the development and production of the other technologies and infrastructures
upon which that software relies. What we mean when we use the word ‘algorithm’ is, as
Gillespie argues a synecdoche: “when we settle uncritically on this shiny, alluring term, we
risk reifying the processes that constitute it. All the classic problems we face when trying to
unpack a technology, the term packs for us”.
Calling the complex sociotechnical assemblage an “algorithm” avoids the need for the kind
of expertise that could parse and understand the different elements; a reporter may not need
to know the relationship between model, training data, thresholds, and application in order
to call into question the impact of that “algorithm” in a specific instance. It also
acknowledges that, when designed well, an algorithm is meant to function seamlessly as a
tool; perhaps it can, in practice, be understood as a singular entity. Even algorithm designers,
in their own discourse, shift between the more precise meaning, and using the term more
broadly in this way.
So, building from this brief excursion into the discursive politics of defining algorithms, I
propose the idea of an ‘algorithmic imaginary’, in the vein of a ‘geographical’ imaginary, to offer
a critical reading of how particular stories about automation and agency are taking hold. I
want to explore this in two ways. First, in terms of the anticipatory nature of what we term
algorithms. Second, I’ll move on to consider how ‘algorithms’ confront us with a need to
consider stupidity.
‘Algorithms’ are suggested to anticipate the activities of individual people, groups,
organisations and other mechanisms. This is one of the key claims of ‘big data’ analytics in
relation to any form of ‘social’ data. It is true that, using ever-larger datastores, programmes
can participate in certain kinds of prediction. For example, supermarkets routinely predict
demand for particular kinds of goods, and a popular search engine has claimed to predict flu
outbreaks. Nevertheless, these are predictions based upon a model, derived from data, which
I argue constitutes a world; it does not reflect what is claimed to be the world. Such a
demarcation of the real, or the actual, is problematic precisely because there is an ontological
politics to the world-ing implied. So, we might instead understand these predictions as
ontogenetic rather than descriptive. They beckon entities and relations into being in a
transductive relation. This constitutes milieus of association between various entities, which
Stiegler (1998, 59) argues, following Simondon (1958), are “the coupling of the human qua
social being to matter qua geographical system”.
Further, precisely because these anticipatory mechanisms are often a part of systems that use
their outputs in order to select what may be seen, or not, and thus what may be acted upon,
or not, they are arguably a form of self-fulfilling prophecy. The anticipation is ‘proven’
accurate precisely because it functions within a context where the data and its structures (the
model) are geared towards their efficient calculation by the ‘algorithm’. Thus, we might
choose to be more cautious about the claims of large social media experiments that are
focused on a single platform, precisely because they are self-validating. A social media
platform is a world unto itself, not a reflection of ‘reality’ (and whatever we choose that to
mean). Indeed, it has been highlighted by others (Mackenzie 2005, Kitchin 2014) that the
outcomes of ‘algorithms’ can be unexpected in terms of their work in world-ing.
Yet, the supposition of such an anticipation is, itself, a form of anticipation – a kind of
imagining of agency. The capacity to ‘predict’ is suggested to have effects, and those effects
produce particular kinds of experience, or spaces.
For example: Facebook’s databases retain each and every activity they can track of a user,
forming a growing repository of associations between people and other entities. Facebook
uses this vast databased representation of users’ mediated lives to target advertising at them
according to patterns in data they themselves have volunteered. Furthermore, through the
Facebook ‘platform’, launched in 2007, this repository of data is for sale, and third-party
developers are able to integrate their own sites and services with the ‘platform’. Facebook
developers refer to the networks of associations mapped through Facebook as the ‘social
graph’. ‘Graph’ refers here to the topological structure of ‘nodes’ within the network and
‘edges’, which are the connections between them: ‘nodes refer to individual users and edges
to the so-called friendship relations between users’. However, human relations are only one
among many kinds of edges in Facebook’s graph: nodes can also refer to other entities, such
as companies, schools, products, events, songs, topographically defined locations and so on.
The definition and ordering of these edges are governed and maintained by the Open Graph
protocol and associated algorithms that form a central part of Facebook. This is a world-ing:
both a modelling of and intervention into the forms of spatial experience of its users.
Facebook is thus proactively seeking to constitute, albeit in a reductive and commercialised
manner, what Doreen Massey calls a ‘global sense of place’: ‘a particular constellation of
social relations, meeting and weaving together at a particular locus’.
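To make the notion of nodes and edges a little more concrete, the following is a minimal, purely hypothetical sketch of how such a typed graph might be represented and traversed. The node types, edge labels and example entities are invented; they are not Facebook’s actual Open Graph schema.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A node in the graph: a user, place, company, event, song, etc."""
    node_id: str
    node_type: str          # e.g. "user", "place", "company"
    properties: dict = field(default_factory=dict)

@dataclass
class Edge:
    """A typed connection (edge) between two nodes."""
    source: str
    target: str
    edge_type: str          # e.g. "friend_of", "likes", "checked_in_at"

# Invented example entities and associations -- not Facebook's schema.
nodes = [
    Node("u1", "user", {"name": "Alice"}),
    Node("u2", "user", {"name": "Bilal"}),
    Node("p1", "place", {"name": "Exeter"}),
]
edges = [
    Edge("u1", "u2", "friend_of"),
    Edge("u2", "p1", "checked_in_at"),
]

# The analytic value lies in traversing such associations, for example
# finding every place that friends of a given user have checked in at.
def places_visited_by_friends(user_id: str) -> set[str]:
    friends = {e.target for e in edges
               if e.source == user_id and e.edge_type == "friend_of"}
    return {e.target for e in edges
            if e.source in friends and e.edge_type == "checked_in_at"}

print(places_visited_by_friends("u1"))  # {'p1'} -- i.e. Exeter
```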
Likewise, as Taina Bucher has explored through empirical work with Facebook users, the
ways in which users imagine what Facebook’s software can do and the forms of evidence
they see of that agency influence how they then behave with and through the software.
Building from my research on anticipatory practices of research and development, I argue
that visions of a world are conjured with what we imagine ‘algorithms’ can do. Thus it is a
double-bind of anticipation: to write anticipatory programmes, a programmer must imagine
what kinds of things the programme can/should anticipate. There is accordingly a
geographical imaginary of anticipatory systems. Furthermore, that imaginary is becoming
normative – in two senses: normative or prescriptive in the sense of the double-bind just
mentioned; and normative, in the Wittgensteinian sense, such that the imaginary
becomes the criterion by which we judge each other as to whether how and what we say about
something (e.g. ‘algorithms’) is appropriate, or not, to the context of discussion.
‘Algorithms’ can act as a lens through which we might focus on the generation and use of
sets of rules, and how they are followed. Even if we are considering complex forms of
‘machine learning’, there are always foundational rules set within the software platform or
the hardware systems, and indeed the choice of ‘training data’ that reflect particular forms of
decision-making. In order to accommodate contingencies, the anticipatory ‘world-ing’ of the
programmer must be complex yet subject to generalization and elision. This is precisely how
assumptions by those writing code can lead to apparently inappropriate or offensive
outcomes, as demonstrated by this picture [above]. The normative assumptions about bodies
made by the programmers led to what has been interpreted as a kind of racism through
myopic thinking. It is through an increasing range of everyday phenomena that what we can
call an algorithmic imaginary becomes knowable. Such a reflection upon ‘algorithms’ is, in
effect, a reflection upon reason and stupidity. For the purposes of this paper, I identify two
elements to this reflection: the reification of the apparatus we call ‘algorithms’; and the
idiomaticity and untranslatability of language in terms of the conventions of programming
‘code’.
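The point made above about the choice of ‘training data’ can be illustrated with a deliberately trivial sketch. The ‘learner’ below does nothing more than reproduce the most common label in whatever examples it is given, so the selection of those examples is the decision that matters; the data and labels are invented for the example.

```python
from collections import Counter

def train_majority_classifier(examples):
    """'Learn' the most common label in the training data.

    A deliberately trivial learner: whatever regularity (or bias) the
    chosen training data contains is simply reproduced as the 'rule'.
    """
    labels = [label for _, label in examples]
    most_common_label, _ = Counter(labels).most_common(1)[0]
    return lambda text: most_common_label

# Invented training data: the selection itself embeds an assumption
# about what 'typical' input looks like.
training_data = [
    ("lovely day", "benign"),
    ("great stuff", "benign"),
    ("awful take", "hostile"),
]
classify = train_majority_classifier(training_data)
print(classify("anything at all"))  # "benign" -- regardless of the input
```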
Much recent discussion of ‘algorithms’ invites a belief in their sovereignty as black-boxed
systems. The ‘algorithm’ is reportedly capable of extraordinary and perhaps fear-inducing
feats. We are directed to focus on the apparent agencies of the code as such. This perhaps
ignores the context of practices in which the ‘algorithm’ is situated: practices of ‘coding’,
‘designing’, ‘managing’ and many others. These involve the negotiation of different
rationales for how and why the ‘algorithm’ can and should function. There is nothing in-
and-of-itself “bad” about the apparently hidden agencies of an ‘algorithm’. Focusing upon
that hidden-ness elides those contexts of practice. Some questionable activities are, of course,
enabled by such secrecy – we need only think of the revelations of Edward
Snowden.
One might think of this through the lens of the ‘pharmakon’: what we call ‘algorithms’ are
both a support to structures that increase our capacities, in other words: a support for forms
of individuation, but also carry the potential, sometimes actualized, to harm our capacities,
leading to dis-individuation, and thus to ‘baseness’ [bêtise]. Through Stiegler’s reading of
Plato’s Phaedrus dialogue, we can understand the role of digital media in the exteriorisation,
and thus spatialisation, of memory as ‘pharmacological’. To quote Stiegler at length:
“The internet is a pharmakon, in the sense that Plato described writing, and we are
discovering this double-edged nature [cette double face]. The actual system of the web is
dangerous for secrets, for psychic individuation, for collective individuation, because we
cannot and we should not submit to forms of calculability. Such a capitulation [soumission]
can only engender a “voluntary servitude” which could rapidly become involuntary and
inescapable [insurmontable]. In contrast, the digital allows the intensification of the incalculable
in the same way that writing in Ancient Greece had huge effects on individuation through an
enriched social diversity, especially by making citizenship possible, which formed the birth
of a new process of psychic and collective individuation” (Stiegler, 2013b my translation).
“That which is pharmacological is always dedicated to uncertainty and ambiguity, and thus
the prosthetic being is both ludic and melancholy” (Stiegler 2013, 25).
By ‘reifying’, in Adorno and Horkheimer’s terms, the black-boxed ‘algorithm’ we submit to a
form of stupidity. We allow those practitioners that enable the development and functioning
of the ‘algorithm’, and ourselves as critical observers, to “vanish before the apparatus”.
Following Avital Ronell in her excellent book titled ‘Stupidity’, this is a debasement of our
theoretical knowledge, because, of course, we understand the context of practices and we can
understand the kinds of ‘world-ing’ discussed. Such a ‘stupidity’ is a tendency towards an
incapacity, an incapability to meet the future, deferring instead to the calculative capacities
of the apparatus, and its (arguably) impoverished world-ing.
A humorous example is David Walliams’ character Carol Beer, who in Little Britain has a
blithe and unbending deference to the computer – which simply “says no”. The whole
premise of the joke presented by that catchphrase is that, of course, there should be room for
interpretation. Yet we are presented with a blind adherence to the results of the programme
– which is patently stupid. Nevertheless, in many moments of everyday life we are presented
with such forms of adherence to nonsensical outputs from software. Think of the many
drivers who get stuck when rigidly following their GPS. We may even feel compelled to be
complicit. There may be consequences to a blind adherence to an implementation of
Dijkstra’s algorithm.
Following Stiegler, we might recognise that part of what makes this funny is that a moment
of stupidity, in its recognition, is also a moment of shame. If we value critical thought,
Walliams’ character should feel ashamed of her ‘stupidity’. This is not normatively a “bad”
thing. How else can we engage in forms of individuation, becoming the people we desire to
be, other than by recognising our own ‘stupidity’? In this light, stupidity cannot be opposed to
knowledge. Neither is this a ‘stupidity’ that is necessarily forced upon us.
Drawing on a concise subsection of Deleuze’s Difference and Repetition, both Jacques Derrida
(2009), in his seminars on the Beast and the Sovereign, and Bernard Stiegler (2013a, 2015), in his
books on contemporary culture and academia, identify the question of how ‘stupidity’ is
possible as a question that is transcendental (in Deleuze’s sense, rather than as a form of
idealism). To quote Stiegler:
“If we are stupid it is because individuals individuate themselves only on the basis of
preindividual funds (or grounds) from which they can never break free; from out of which
they can individuate themselves, but within which they can also get stuck, bogged down, that
is, disindividuate themselves” (Stiegler 2015, 46).
Glossing Deleuze’s folding of the excessive potential of the virtual through singularities of
actualization, Stiegler argues that such a fund or ground may be that of knowledge itself, as
in the already crystallised and sophisticated programmatic logic of a given ‘algorithm’. Indeed,
it can be ‘well known’ knowledge, akin to the Wittgensteinian normative. For even the
apparently best knowledge remains susceptible to what Stiegler sees as regression,
debasement, as in stupidity or disindividuation.
Such an understanding of stupidity as the expression of the pharmacological tendencies of
our Epimethean relation with technology might allow us to look for the therapeutic potential
of computation as a resolutely human endeavour, rather than the poisonous figuring of
‘algorithms’ as some kind of fatalistic force that is already in the process of captivating and
controlling us. We can’t go on, we must go on.
Reflecting upon stupidity is always a reflection upon my own stupidity; it is a means of
thinking the passage to knowledge. Crucially, we realise it only in retrospect. If we are to take
such a critical understanding of stupidity seriously, we are therefore called to urgently attend
to the ‘reification’ of what we name ‘algorithms’ and the knowledge claims that are made on
the back of the suppositions we accordingly make about their operation.
It is possible to be convinced that the reductive forms of language, formulated through
formal logic, that constitute a ‘programming language’ cannot be idiomatic or open to
difficult forms of interpretation. Yet, of course, they are – and this is an issue of translation. A
programme must be, however minimally, written and read, with the rules of such activities
agreed upon (an archetypally ‘normative’ operation). Yet the range and scope of contexts in
which such reading and writing must function are very broad. There is always some
ambiguity in the interpretive function of ‘reading’, or more accurately ‘translation’. We can
understand translation in a number of ways here. Translation can be the execution of code,
the way it is ‘compiled’ [into binary] or ‘interpreted’ by another layer of software. Or
translation can be the ways code in one language is transposed into another codebase, for
example through an Application Programming Interface. A quick example here is the
widespread use of intermediary systems built on the REST (REpresentational State
Transfer) architecture, which affords communication (often as XML or JSON) between
different data-driven systems, such as between a price comparison website and the many
different bespoke insurance-broker quotation systems. Indeed, as Kitchin (2014) points out,
there is, at base, an issue of translating a problem, in the mathematical sense, into a
structured formula and thence into some form of software programme. Likewise, as I’ve
argued, in software systems the software itself makes no sense without some kind of
translation of data, either in formats or units of data or expected forms of data.
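As a minimal, hypothetical illustration of this layering of translations, consider a price comparison site receiving a quote from a broker’s REST endpoint as JSON and translating it into its own internal representation, including a conversion of units. The field names, endpoint behaviour and figures are invented for the example.

```python
import json
from dataclasses import dataclass

@dataclass
class Quote:
    broker: str
    annual_premium_gbp: float

def translate_quote(raw: str) -> Quote:
    """Translate a broker's JSON payload into our internal Quote.

    Hypothetical example: the broker reports a monthly premium in pence,
    while our system expects an annual premium in pounds. The programme
    only makes sense once this translation of formats and units is agreed.
    """
    payload = json.loads(raw)
    monthly_pence = payload["premium_monthly_pence"]
    return Quote(
        broker=payload["broker_name"],
        annual_premium_gbp=monthly_pence * 12 / 100,
    )

# A made-up response body, standing in for what a REST call might return.
response_body = '{"broker_name": "Acme Insurance", "premium_monthly_pence": 3250}'
print(translate_quote(response_body))
# Quote(broker='Acme Insurance', annual_premium_gbp=390.0)
```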
I want to offer two examples of translational issues that might be considered in some way
stupid here.
A prime example of the ways even highly detailed and complex knowledge can still
produce forms of stupidity is the misinterpretation of common units of measurement. We
might look in particular at the ‘stupidity’ that led to the failure of NASA’s Mars Climate
Orbiter. The orbiter, which launched in 1998, unexpectedly burned-up in the atmosphere of
Mars in September 1999. The NASA Mars Climate Orbiter Mishap Investigation Board
released a Phase I report in November 1999 that found that the Trajectory Correction
Maneuver that should have brought the orbiter into an orbit at an altitude of 226 kilometers
actually brought the craft within 57 kilometers of the surface, where it is presumed to have
disintegrated due to atmospheric stresses. The investigation board determined that
“the root cause for the loss of the MCO spacecraft was the failure to use metric units in the
coding of a ground software file… used in trajectory models. Specifically, thruster
performance data in English units instead of metric units was used in the software
application code” (p. 6).
“The output from the SM_FORCES application code [provided by Lockheed Martin
Astronautics] as required by a [Mars Surveyor Operations Project] Software Interface
Specification… was to be in metric units of Newton-seconds (N-s). Instead, the data was
reported in English units of pound-seconds (lbf-s).” (p. 16)
Due to the spacecraft’s distance from the Earth and the slightness of deviation from the
expected trajectory, “a systematic error due to the incorrect modeling of the thruster effects
was present but undetected in the trajectory estimation” (p. 18).
Indeed, the board finds that: “The Software Interface Specification was developed but not
properly used in the small forces ground software development and testing. End-to-end
testing to validate the small forces ground software performance and its applicability to the
specification did not appear to be accomplished” (p. 24).
We can see then how a complex array of consequences unravels from a misapprehension of
an assumed unit of measurement, even when it is officially specified. The apparently
“common sense” of the detailed Software Interface Specification was systematically
misinterpreted such that the consequences of this apparent stupidity were not realized until
after the “mishap”. A number of technical recommendations were made by the Mishap
Investigation Board that range from the specification of new managerial processes for error
checking, validation and verification of models to thorough recommendations of new
training and staffing strategies for future project teams. This serves as a demonstration of the
level of complex and nuanced sociotechnical relations between different groups of people,
types of software and hardware that can be elided by merely referring to, for example, a
‘guidance algorithm’.
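To make the scale of that mismatch concrete, here is a small illustrative calculation (the figures are invented, not taken from the mission data): an impulse recorded in pound-force seconds but read as if it were Newton-seconds understates the thruster’s effect by a factor of roughly 4.45.

```python
# 1 pound-force second = 4.44822 Newton-seconds (standard conversion factor).
LBF_S_TO_N_S = 4.44822

# Hypothetical thruster firing, for illustration only.
impulse_reported = 10.0                                  # written to the file in lbf-s
impulse_actual_n_s = impulse_reported * LBF_S_TO_N_S     # what really happened: ~44.48 N-s
impulse_assumed_n_s = impulse_reported                   # what the trajectory model assumed: 10 N-s

understatement = impulse_actual_n_s / impulse_assumed_n_s
print(f"Thruster effect understated by a factor of {understatement:.2f}")
# Thruster effect understated by a factor of 4.45
```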
I’d like to explore a further example of the manner in which expectations about the kinds of
data that might be harnessed by a given programme can prove mistaken: the perhaps ‘stupid’
assumptions about what kinds of interaction, and thus kinds of data, the Microsoft
“TayandYou” Twitter bot would end up using to ‘learn’ from. “TayandYou”, or
“TayTweets”, was a short-lived semi-interactive programme, colloquially termed a chatbot
but also labeled by some an “AI”. The software is a Microsoft Research project and its Twitter
presence was launched on the 23rd of March 2016. Yet after sixteen hours it was suspended.
Within that time the Twitter account for the programme shared what have been both
condemned and derided as grossly offensive messages both as general tweets and in response
to messages.
At base, I would like to suggest that the “TayTweets” situation is both: an example of a
problematic attempt to translate a particular conceptualization of ‘mind’, or particular
aspects of what we call intelligence, as a mathematical problem; and an example of a
computational exploit that was not anticipated. The model of linguistic interaction developed
by the researchers at Microsoft did not accommodate the normative contexts in which it
might operate. The programme was designed to learn from the text it ‘read’. Yet, the
programme was apparently designed with no ‘filter’ or mechanism for evaluating the ethical
and political contexts of those inputs—even at a rudimentary level. For example, based on the
kinds of text it received the programme produced grammatically and dialogically correct
tweets that were also easily judged to be grossly offensive. For instance, the programme
issued tweets that proclaimed support for genocide.
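As a crude sketch of what even a ‘rudimentary’ mechanism might have looked like – and of why it would still be a blunt instrument – consider a simple keyword filter. The terms and behaviour are placeholders; this is not Microsoft’s approach, merely an illustration of the kind of evaluative step suggested above to have been absent.

```python
# Placeholder terms only; a real list would need careful, ongoing curation,
# and keyword matching alone cannot capture context, irony or coded language.
BLOCKED_TERMS = {"genocide", "placeholder_slur_1", "placeholder_slur_2"}

def passes_rudimentary_filter(text: str) -> bool:
    """Return True if the text contains none of the blocked terms."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

print(passes_rudimentary_filter("what a lovely day"))     # True
print(passes_rudimentary_filter("i support genocide"))    # False
print(passes_rudimentary_filter("dog-whistle phrasing"))  # True -- easily evaded
```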
We might see this as a form of stupidity in the failure to recognise and negotiate
normative contextual issues, and in the placing of greater belief in the ontic capacities of the code
than in the epistemological vagaries of those who would interact with it. The Corporate Vice
President of Microsoft Research was quick to reduce the issue to “a coordinated attack by a
subset of people” to exploit an unforeseen vulnerability. Here, I feel prompted to recall
Ronell’s articulation of stupidity not as the other of knowledge but as the absence of a
relation to knowing. There is an implied exhortation for us to sympathise with Microsoft
Research for taking the moral high ground against those who seek to spoil apparently
honourable research. We are thereby invited to affirm a blithe enthusiasm for general
technological progress, aligning “AI” and “the algorithm” with a prescribed story of always-
positive technological advancement. Such an enthusiasm, in Ronell’s reading of Nietzsche, is
an enthusiastic deferral of knowledge, for we are invited to ignore the details, and this is a
form of stupidity. There was also arguably an element of hubris in the very public staging of
the Tay experiment. We might also see this as a form of stupidity in the negotiation of how
such an incident should be judged and reported or discussed. For how are we supposed to
judge? In what context and against what criteria? Many grant Tay some kind of minimal
subjectivity, referring to “her” agency. Yet this, of course, elides too much. The system
denoted by “TayTweets” includes complex interactions amongst a host of different kinds of
entities. It exists as a sociotechnical assemblage with nuanced and ill-defined agency. Even
so, the discussion is always drawn to a transcendent horizon of apparent machine supremacy
— the ethos being: the system is stupid now but just you wait. Precisely by eliding the knotty
issues of simulating what is referred to as intelligence and the nuances of the sociotechnical
systems that this requires – the condition of the possibility of anticipation – our ability to think the
future otherwise is undercut, to quote Stiegler (2013a): “by a systemic stupidity that
structurally prevents the reconstitution of a long-term horizon”.
An ‘algorithmic imaginary’, as briefly outlined here, has become normalised in our
discussions of the digital, in both academic contexts and in broader journalistic and
colloquial contexts. A significant challenge of this algorithmic imaginary is that it is, sadly,
all-too-often couched in either dystopian or blandly a-political terms: we are either doomed
to 'welcome our new algorithmic overlords', to paraphrase The Simpsons’ news anchor Kent
Brockman; or invited to sink into the stupor of superficially ‘easier lives’ through apps,
gadgets and so on. Such a passivity is, of course, another definition of ‘stupidity’.
To be clear, I am not arguing here that all instances of what we call ‘algorithms’ are stupid,
neither am I arguing that those who use the term algorithm or write software they call
algorithms are stupid. Rather, I am using the conceptual tool of ‘stupidity’ to interrogate an
issue I argue requires greater critical scrutiny. We need not ‘believe’ in the world-ings made
possible by the phenomena that get labeled ‘algorithms’ and the associated ‘algorithmic
imaginary’. Nor need we feel compelled to reify the precarious achievements of software.
Even with noble intentions, the ‘social science’ performed under the umbrellas of ‘big data’ is
in danger of eliding more than it apparently reveals. We should instead look to our critical
toolbox and examine the contexts of practice of 'algorithms' and the systems of which they
are a part. It is possible to forge alternate, diverse and resolutely political sociotechnical
imaginaries, and hone our capacities to intervene – even at the level of code. And there are
already some excellent resources for doing so. Thus in the final part of this paper I want to
explore some methods for thinking about and studying the sociotechnical assemblages that
are called ‘algorithms’.
How can we go about studying these kinds of socio-technical systems then? I think some
good pointers have been made by those already doing critical research on algorithms and I’d
like to briefly refer to Rob Kitchin’s six methods of ‘algorithm’ research in order to reflect on
the possibilities and opportunities for critique, not only of the phenomena we call
‘algorithms’ as objects of research, or of their methodological use within research but also the
very role of the discourse of algorithms in what we can say about such phenomena, and how
it gets said. Kitchin’s six methods are, briefly: studying the source code or pseudocode from
which an algorithm is constructed; critically thinking about how to convert a task into code,
as autoethnography, and how that production might function; studying inputs and outputs to
reverse engineer the black box; conducting interviews and participant observation with
programmers as they code; studying the broader legal, economic, institutional, technological,
bureaucratic and other contexts that shape the sociotechnical assemblage; and finally,
conducting user experiments, interviews and ethnographies to study the apparent effects of
software.
We need to be alive to the challenge that most software we may be interested in is
proprietary, although not always. For example, we can look to repositories like GitHub for
plenty of open source code. However, it may actually prove impossible to gain access to the
code itself – and here we’d really want the code of the whole programme, and perhaps the
training data too. Likewise, software like any complex endeavour may well be the result of
collective authorship and maintained by lots of different people. So there are complex sets of
relations between people, laws, protocols, standards and many other considerations to
negotiate; they are contextually embedded. Furthermore, the programmes we’re calling
algorithms here actually have a hand in producing the world within which they exist – they
may bring new kinds of entities and relationships into existence, they may formulate new and
different forms of spatial relation and understanding. In this sense, as assemblages of
anticipation, they are ontogenetic and performative. Added to this, once ‘in the wild’ this
performativity can render the kinds of outcomes of a programme unexpected and peculiar,
especially once it is fed all sorts of data, adapted and adopted in unexpected ways. And a
good example here would be, of course, Microsoft’s TayTweets.
Rather than treat Kitchin’s six techniques as discrete, I argue we need to combine most of
them. Even so, it may prove extremely difficult to actually gain access to code. Further, even
if one might reflexively produce code and/or attempt to reverse engineer software, some
systems are the product of companies with such extensive resources that it may well prove
near-impossible to do so. Where social scientists might find more traction and actually be
able to make a more valuable contribution is, as Kitchin suggests, looking at the full
sociotechnical assemblage. We can look at the wider institutional, legal and political
apparatuses (the dispositifs) and we can certainly look at the various kinds of relation the
assemblages make and how they are enrolled in performing the world they inhabit. In a
sense, that is precisely what I’ve been trying to do something of here, albeit in a different
conceptual register.
One possible response is, of course, to write code for ourselves. There is a range of
researchers attempting to do precisely that. For example: Rob Kitchin’s Programmable City
team has created a ‘smart city’ dashboard for Dublin to experiment with how such
technologies work. Sociologist Adrian Mackenzie’s forthcoming book Machine Learners
investigates the practices of ‘machine learning’ practitioners. Likewise, critical media scholar
Geoff Cox has engaged in a long-running collaboration with the software artist Alex
McLean, with the book “Speaking Code” being one of the more recent products of that
relationship. Examining the workings of the algorithmic imaginary in practice, Paula
Crutchlow, a PhD researcher in geography at Exeter, is conducting various experiments in
dataveillance and our relationship with what it means to consume and be consumed, in and
by digital processes, through her project the Museum of Contemporary Commodities. And
Pip Thornton, a PhD researcher in geography at Royal Holloway, has also incorporated
programming practice into her research on the commodification of language.
Make no mistake – this kind of research is necessarily hard. I’m not sure I can imagine papers
in geography journals, or those of other social sciences, that tie the A/B testing logs of
experiments in how a given system works, and the commit logs of software version control
systems (even if you could access them!), to particular forms of experience and/or their
political consequences… but it might be worth a go. Perhaps this difficulty is why we see
papers written about such phenomena in abstract terms. And I am as guilty here of this as
anyone else. There remains a tendency to focus on the social theoretical ways we can talk
about these sociotechnical systems in broad terms.
Like Ian Bogost, I can see a kind of pseudo-theological romancing of the ‘algorithm’ and the
agency of software in much of what is written about it. It is sort of easy to see why too – these
phenomena are so abundant and yet relatively hidden. We see effects but do not see the
systems that produce them. Nevertheless, and to quote Bogost (2015, n.p.):
“Algorithms aren’t gods. We need not believe that they rule the world in order to
admit that they influence it, sometimes profoundly. Let’s bring algorithms down to
earth again.”
The algorithm as synecdoche is a kind of ‘talisman’, as Tarleton Gillespie argues, which
reveals something of what Stiegler calls our ‘originary technicity’ – we humans cannot be
recognised without technology, and it is the forgetting and then remembering of this that
forges what he calls our ongoing transindividuation, our ongoing becoming.
We can’t go on, we must go on. An affirmative recognition of our limits, the inherent
limiting effects of any form of world-ing seems an appropriate conclusion. To speak of and
imagine worlds is to speak of boundaries. The algorithmic imaginary for which I have
offered an outline here delineates what it is possible to say, or not, about important aspects of
the role of computation in our lives. To borrow the terminology of the philosopher of
technology Gilbert Simondon, the genesis of the entity ‘algorithm’ consists in its
concretization: its specificity and its unity are convergent characteristics. We all
participate in the algorithmic imaginary, we are all enrolled in the translation of the abstract
to the concrete. As users of the various technological devices that make use of software, as
sources of data ourselves, for states and companies alike: whether passive or active, we are
participants in the ongoing concretization of the algorithm. Yet there are multiple sites for
study and for intervention.
We need not reify the apparent capacities of software, nor participate in the elision of the
ways they are produced or performed. We can attend to the forms of what I’ve explored as
the ‘stupidity’ of the algorithmic imaginary.
By way of conclusion, I’d like to finish by adapting Daniel Miller’s (2016) recent intervention
on anthropologies of the internet for the ‘algorithmic imaginary’:
We, geographers and social scientists, can and should collaborate in the study of algorithms,
software, and so forth. However, we should retain our critical stance on all attempts to
declare that algorithms have done this or that to young people, capacities of memory,
attention span, and so on – especially those based upon extrapolations from findings in case
studies that seek to emulate natural-science methodologies. This doesn’t preclude seeing
how computer programmes can be used to determine behaviour, for example as Natasha
Schüll (2014) does in reference to Las Vegas slot machines. However, such instances usually
depend upon a very specific context analogous to the way a religious text becomes
deterministic to the faithful. We should speak up for a critical geographical perspective to
counter the rise of the apparent certainty that a particular version of scientific modeling will
‘predict’ the interactions between a supposedly distinct humanity and technology. We can
and should be provocative.
References
Adorno, T and Horkheimer, M 2002 The Dialectic of Enlightenment, Stanford University Press,
Stanford.
Bogost, I. 2015 “The cathedral of computation”, The Atlantic, Jan 25, 2015.
http://www.theatlantic.com/technology/archive/2015/01/the-cathedral-of-
computation/384300/
Bucher, T. 2016 The algorithmic imaginary: exploring the ordinary affects of Facebook
algorithms, Information, Communication & Society, online early, n.p.
Deleuze, G. 1994 Difference and Repetition, trans. Patton, P. Columbia University Press, New
York.
Derrida, J. 2009 The Beast and the Sovereign: Volume I, trans. Bennington et al., University of
Chicago Press, Chicago.
Kitchin, R. 2014 “Thinking critically about and researching algorithms”, The
Programmable City Working Paper 5, pp. 1-29.
Miller, D. 2016 “The internet: provocation”, Cultural Anthropology website, April 4, 2016.
http://www.culanth.org/fieldsights/847-the-internet-provocation
Ronell, A. 2002 Stupidity. University of Illinois Press, Urbana and Chicago.
Stiegler, B. 2015 States of Shock: Stupidity and Knowledge in the 21st Century. Trans. Ross, D. Polity,
Cambridge.
Stiegler, B. 2013a What Makes Life Worth Living: On Pharmacology. Trans. Ross, D. Polity,
Cambridge.
Stiegler, B. 2013b “Le Blues du Net”, Le Monde Blogs, Sept 29, 2013.
http://reseaux.blog.lemonde.fr/2013/09/29/blues-net-bernard-stiegler/ Translated as:
“The Net Blues”, Kinsley, S. http://www.samkinsley.com/2013/11/21/bernard-stiegler-
the-net-blues/