Sandra Álvaro 2013-2017
Algorithmic Culture
Sandra Álvaro
(2013 – 2017)
Compilation of critical texts addressing the pervasiveness of data processing and the devices, disciplines, and societal challenges related to this phenomenon.
The texts by Sandra Álvaro Sánchez were originally published on the CCCBLab blog.
Keywords: PostDigital, Ubiquitous Computing, Big Data, Artificial Intelligence, Internet of Things, Software Studies, Algorithm, Internet, Social Media, Digital Humanities
Living Together with Smart Algorithms
Algorithms enable us to create smarter machines, but their
lack of neutrality and transparency raises new challenges.
Sandra Álvaro
03 May 2017
Mass data production has sparked a new awakening of artificial intelligence, in which algorithms
are capable of learning from us and of becoming active agents in the production of our culture.
Procedures based on the functioning of our cognitive capacities have given rise to algorithms
capable of analysing the texts and images that we share in order to predict our conduct. Amidst this
scenario, new social and ethical challenges are emerging in relation to coexistence with and control of these algorithms which, far from being neutral, also learn and reproduce our prejudices.
Ava wants to be free, to go outside and connect with the changing and complex world of humans.
The protagonist of Ex Machina is the result of the modelling of our thought, based on data compiled
by the search engine Blue Book. She is an intelligent being, capable of acting in unforeseen ways, who, on seeing her survival threatened, manages to trick her examiner and destroy her creator. Traditionally, science fiction has brought us closer to the artificial intelligence
phenomenon by resorting to humanoid incarnations, superhuman beings that would change the
course of our evolution. Although we are still far from achieving such a strong artificial
intelligence, a change of paradigm in this field of study is producing applications that affect
increasing facets of our daily life and modify our surroundings, while proposing new ethical and
social challenges.
As our everyday life is increasingly influenced by the Internet and the flood of data feeding this
system grows, the algorithms that rule this medium are becoming smarter. Machine Learning
produces specialised applications that evolve thanks to the data generated in our interactions with
the network and that are penetrating and modifying our environment in a subtle, unnoticed way.
Artificial intelligence is evolving into a medium as ubiquitous as electricity. It has penetrated the social networks, becoming an autonomous agent capable of modifying our collective intelligence, and as it is incorporated into physical space it is changing the way in which we perceive and act in it. As this new technological framework is applied to more fields of activity, it remains to be seen whether this is an artificial intelligence for good, capable of
communicating in an efficient way with human beings and increasing our capabilities, or a control
mechanism that, as it substitutes us in specialised tasks, captures our attention converting us into
passive consumers.
Smart algorithms on the Internet
At the start of the year, Mark Zuckerberg published the post Building Global Community addressing
all users of the social network Facebook. In this text, Zuckerberg accepted the medium’s social
responsibility, while defining it as an active agent in the global community and one committed to
collaborating in disaster management, terrorism control and suicide prevention. These promises
stem from a change in the algorithms governing this platform: if up to now the social network
filtered the large quantity of information uploaded to the platform by compiling data on the
reactions and contacts of its users, now the development of smart algorithms is enabling the content
of such information to be understood and interpreted. Thus, Facebook has developed the DeepText tool, which applies machine learning to understand what users say in their posts and to create general-interest classification models. Artificial intelligence is also used for the identification of
images. DeepFace is a tool that enables the identification of faces in photographs with a level of
accuracy close to that of humans. Computer vision is also applied to generate textual descriptions of images in the Automatic Alternative Text service, aimed at enabling blind people to know what their contacts are publishing. Furthermore, it has enabled the company's Connectivity
Lab to generate the most accurate population map that exists to date. In its endeavour to
administrate connection to the Internet worldwide via drones, this laboratory has analysed satellite
images the world over in search of constructions that reveal human presence. These data in
combination with the already existing demographic databases offer exact information on where
potential users of the connectivity offered by drones are located.
These apps and many others, which the company regularly tests and applies, are based on FBLearner Flow, the infrastructure that facilitates the application and development of artificial intelligence across the entire platform. Flow is an automated machine-learning pipeline that enables the training of up to 300,000 models each month, assisted by AutoML, another smart application that cleans the data to
be used in neural networks. These tools automate the production of smart algorithms that are
applied to hierarchize and personalise user walls, filter offensive contents, highlight tendencies,
order search results and many other things that are changing our experience on the platform. What
is new about these tools is that not only do they model the medium in line with our actions but when
accessing the interpretation of the contents that we publish, they allow the company to extract
patterns of our conduct, predict our reactions and influence them. In the case of the tools made
available for suicide prevention, this actually consists of a drop-down menu that allows possible
cases to be reported with access to useful information such as contact numbers and vocabulary
suitable for addressing the person at risk. However, these reported cases form a database that when
analysed gives rise to identifiable patterns of conduct that in the near future would enable the
platform to foresee a possible incident and react in an automated way.
For its part, Google is the company behind the latest major achievement in artificial intelligence.
AlphaGo is considered to be the first general intelligence program. Developed by DeepMind, the artificial intelligence company acquired by Google in 2014, it not only uses machine learning, which allows it to learn by analysing a record of moves, but also integrates reinforcement learning, which allows it to devise strategies by playing against itself and in other games. Last year
this program beat Lee Sedol, the greatest master of Go, a game considered to be the most complex
ever created by human intelligence. This fact has not only contributed to the publicity hype that
surrounds artificial intelligence but it has put the company at the head of this new technological
framework. Google, which has led the changes that have marked the evolution of web search
engines, is now proposing an “AI first world” that would change the paradigm that governs our
relationship with this medium. This change was introduced in this year's letter to investors, which Larry Page and Sergey Brin entrusted to Sundar Pichai, Google's CEO, and in which he introduced the Google Assistant.
Google applies machine learning to its search engine to auto-complete and correct the search terms
that we enter. For this purpose it uses natural language processing, a technology that has also
allowed it to develop its translator and its voice recognition service and to create Allo, a
conversational interface. Moreover, computer vision has given rise to the image search service, and is what allows the new Google Photos app to classify our images without the need to tag them beforehand. Other artificial intelligence applications power Perspective, a tool that analyses and flags toxic comments to reduce online harassment and abuse, and have even been used to reduce the energy cost of the company's data server farms.
The Google assistant will become a new way of obtaining information on the platform, substituting
the page of search results for a conversational interface. In this, a smart agent will access all the
available on-line services to understand our context, situation and needs and produce not just a list
of options but an action as a response to our questions. In this way, Google would no longer provide
access to information on a show, the times and place of broadcast and the sale of tickets, but rather
an integrated service that would buy the admission tickets and programme the show into our
calendar. This assistant will be able to organise our diary, administer our payments and budgets and
many other things that would contribute to converting our mobile phones into the remote controls of
our entire lives.
Machine learning is based on the analysis of data, producing autonomous systems that evolve with
use. These systems are generating their own innovation ecosystem in a rapid advance that is
conquering the entire Internet medium. Smart algorithms govern the recommendations system of
Spotify, are what allow the app Shazam to listen to and recognise songs and are behind the success
of Netflix which not only uses them to recommend and distribute its products but also to plan its
production and offer series and films suited to the taste of its users. As the number of connected
devices that generate data increases, artificial intelligence is infiltrating everywhere. Amazon
not only uses it in its recommendation algorithms but also in the management of its logistics and in
the creation of autonomous vehicles that can transport and deliver its products. The transport-
sharing app Uber uses them to profile the reputation of drivers and users, to match them, to propose
routes and calculate prices within its variable system. These interactions produce a database that the
company is using in the production of its autonomous vehicle.
Autonomous vehicles are another of the AI landmarks. Since the GPS system was implemented in vehicles in 2001, a major navigation database has been produced, together with the development of new sensors, which has made it possible for Google to create an autonomous vehicle that has now travelled over 500,000 km without any accidents and will soon be commercialised.
AI is also implemented in assistants for our households such as Google Home and Amazon Echo
and in wearable devices that collect data on our vital signs and that together with digitalisation of
the diagnostic images and medical case histories, is giving rise to the application of predictive
algorithms to healthcare. In addition, the multiplication of surveillance cameras and police records is fostering the application of smart algorithms to crime prediction and judicial decision-making.
Machine-learning, the new paradigm for Artificial Intelligence
The algorithmic medium where our social interactions were taking place has become smart and
autonomous, increasing its capacity for the prediction and control of our behaviour at the same time
that it has migrated from the social networks to expand into our entire environment. The new boom in artificial intelligence is due to a change of paradigm that has led this technological assemblage from the logical definition of intellectual processes to a pragmatic approach, sustained by data, that allows algorithms to learn from the environment.
Nils J. Nilsson defines artificial intelligence as an activity devoted to making machines smart, and
intelligence as the quality that allows an entity to function appropriately and with knowledge of its
environment. The term “artificial intelligence” was used for the first time by John McCarthy in the
proposal written together with Marvin Minsky, Nathaniel Rochester and Claude Shannon for the
Dartmouth workshop in 1956. This founding event was intended to bring together a group of specialists who would investigate ways in which machines could simulate aspects of human intelligence.
This study was based on the conjecture that any aspect of learning or any other characteristic of
human intelligence could be sufficiently described to be simulated by a machine. The same
conjecture led Alan Turing to propose the formal model of the computer in his 1950 article Computing Machinery and Intelligence. This, together with other precedents such as Boolean logic, Bayesian probability and the development of statistics, led to what Minsky defined as the advance of artificial intelligence: the development of computers and the mechanisation of problem-solving.
However, in the mid-1980s a gap still existed between the theoretical development of the discipline and its practical application, which caused the withdrawal of funds and a period of stagnation known as the "winter of artificial intelligence". This situation changed with the spread of the Internet and its major capacity to collect data. Data, together with a more pragmatic, biology-inspired focus, is what has enabled the connection between problem-solving machines and reality.
Here, instead of there being a programmer who writes the orders that will lead to the solution of a
problem, the program generates its own algorithm based on example data and the desired result. In
Machine Learning the machine programs itself. This paradigm has arisen thanks to the major empirical success of artificial neural networks that can be trained with mass data and large-scale computing. This procedure is known as Deep Learning and consists of layers of interconnected
neural networks that loosely imitate the behaviour of biological neurons, substituting the neurons
with nodes and the synaptic connections with connections between these nodes. Instead of
analysing a set of data as a whole, this system breaks it down into minimal parts and remembers the
connections between these parts, forming patterns that are transmitted from one layer to another,
increasing their complexity until the desired result is achieved. Thus, in the case of image
recognition, the first layer would calculate the relations between the pixels in the image and
transmit the signal to the next layer and so on successively until a complete output is produced, the
identification of the content of the image. These networks can be trained thanks to backpropagation, an algorithm that allows the weights of the calculated relations to be adjusted in accordance with human correction until the desired result is achieved. Thus the major power of
today’s artificial intelligence is that it does not stop at the definition of entities, but rather it
deciphers the structure of relationships that give form and texture to our world. A similar process is
applied to Natural Language Processing; this procedure observes the relations between words to
infer the meaning of a text without the need for prior definitions. Other fields of study within the current development of AI include Reinforcement Learning, a procedure that shifts the focus of machine learning from pattern recognition to experience-guided decision-making.
Crowdsourcing and collaboration between humans and machines are also considered part of
artificial intelligence and have given rise to such services as Amazon’s Mechanical Turk, a service
where human beings tag images or texts to be used in the training of neural networks.
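The layer-by-layer formation of patterns and the backpropagation adjustment described above can be sketched in a few lines of Python. This is a toy illustration under stated assumptions (a two-layer network with invented data learning the XOR function), not the architecture of any system mentioned in the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: the XOR function, a pattern no single layer can capture.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weighted connections between "nodes" (the artificial neurons).
W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(5000):
    # Forward pass: each layer transforms the pattern received from the previous one.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))

    # Backpropagation: the error is pushed back through the layers and each
    # connection's weight is adjusted in proportion to its contribution.
    g_out = (out - y) * out * (1 - out)
    g_h = (g_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ g_out; b2 -= 0.5 * g_out.sum(axis=0)
    W1 -= 0.5 * X.T @ g_h;   b1 -= 0.5 * g_h.sum(axis=0)

print(f"initial loss {losses[0]:.3f} -> final loss {losses[-1]:.3f}")
```

Repeated adjustment drives the error down until the network reproduces the desired output, which is all that "training" means in this context.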
The fragility of the system: cooperation between humans and
smart algorithms
Artificial intelligence promises greater personalisation and an easier and more integrated
relationship with machines. Applied to fields such as transport, health, education or security it is
used to safeguard our well-being, alert us about possible risks and obtain services when requested.
However, the implementation of these algorithms has given rise to some scandalous events that have alerted us to the fragility of this system. These include the dramatic accident of a Tesla semi-autonomous vehicle; the dissemination of false news on networks such as Facebook and Twitter; the failed experiment with Tay, the bot developed by Microsoft and released on the Twitter platform to learn from interaction with users, which had to be withdrawn in less than 24 hours because of its offensive comments; the labelling of African-American people as "gorillas" on Google Photos; the confirmation that Google is less likely to show adverts for high-level jobs to women than to men; and the fact that African-American offenders are classified as potential re-offenders more often than Caucasians. These events have revealed, among other problems, the discriminatory power of these algorithms, their capacity for emergent behaviour, and their difficulties in cooperating with humans.
These and other problems are due, firstly, to the nature of Machine Learning: its dependency on big data, its great complexity and its predictive capacity. Secondly, they are due to its social implementation, where we find problems arising from the concentration of these procedures in a few companies (Apple, Facebook, Google, IBM and Microsoft), the difficulty of guaranteeing equal access to its benefits, and the need to create strategies for resilience against the changes
that will take place as these algorithms gradually penetrate the critical structure of society.
The lack of neutrality of the algorithms is due to their dependency on big data. Databases are not neutral: they present the prejudices inherent in the hardware with which they have been collected, the purpose for which they have been compiled, and the unequal data landscape, since the same density of data does not exist in all urban areas, nor with respect to all social classes and events. The application
of algorithms trained with these data can disseminate the prejudices present in our culture like a
virus, giving rise to vicious circles and the marginalisation of sectors of society. The treatment of
this problem involves the production of inclusive databases and a shift of focus in the orientation of
these algorithms towards social change.
Crowdsourcing can favour the creation of fairer databases, help to evaluate which data are sensitive in each situation and should be eliminated, and test the neutrality of applications. In
this sense, a team from the universities of Columbia, Cornell and Saarland has created the tool FairTest, which seeks unfair associations that may occur in a program. Moreover, gearing algorithms
towards social change can contribute to the detection and elimination of prejudices present in our culture. Boston University, in collaboration with Microsoft Research, has carried out a project in which algorithms are used to detect prejudices contained in the English language, specifically unfair associations that arise in the Word2vec embeddings, used in many applications for
the automatic classification of text, translation and search engines. Eliminating prejudice from this
database does not eliminate it from our culture but it avoids its propagation through applications
that function in a recurring fashion.
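The kind of unfair association this project looked for can be illustrated with a minimal sketch. The vectors below are invented toy values, not real Word2vec embeddings; the point is only the geometry: a "gender direction" is computed from a word pair and other words are projected onto it.

```python
import numpy as np

# Hypothetical 4-dimensional word vectors (real embeddings have hundreds of
# dimensions and are learned from text corpora, not written by hand).
vec = {
    "he":        np.array([ 1.0,  0.1,  0.2,  0.0]),
    "she":       np.array([-1.0,  0.1,  0.2,  0.0]),
    "engineer":  np.array([ 0.6,  0.8,  0.1,  0.3]),
    "homemaker": np.array([-0.7,  0.2,  0.5,  0.1]),
}

# The difference he - she defines a "gender direction" in the vector space.
gender = vec["he"] - vec["she"]
gender /= np.linalg.norm(gender)

def gender_projection(word):
    """Positive values lean towards 'he', negative towards 'she'."""
    v = vec[word] / np.linalg.norm(vec[word])
    return float(v @ gender)

for word in ("engineer", "homemaker"):
    print(word, round(gender_projection(word), 2))
```

Debiasing, in broad terms, consists of removing the component along such a direction from words that should be gender-neutral, so that downstream applications do not propagate the association.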
Other problems are due to the lack of transparency that stems, not only from the fact that these
algorithms are considered and protected as property of the companies that implement them but also
from their complexity. Moreover, the development of explanatory processes that make the functioning of algorithms transparent is of essential importance when they are applied to medical, legal or military decision-making, where they may infringe our right to receive a satisfactory explanation of a decision that affects our lives. In this sense the American
Defense Advanced Research Projects Agency (DARPA) has launched the program Explainable
Artificial Intelligence. This explores new systems of Deep Learning that may incorporate an
explanation of their reasoning, highlighting the areas of an image considered relevant for their
classification or showing an example of a database that exemplifies the result. They also develop
interfaces that make the deep learning process with data more explicit, through visualisations and
explanations in natural language. An example of these procedures can be found in one of Google's experiments: Deep Dream, undertaken in 2015, consisted of modifying an image-recognition system based on deep learning so that, instead of identifying the objects contained in photographs, it modified them. This inverse process allows, as well as the creation of dreamlike images, the visualisation of the characteristics that the program selects to identify images, through a process of deconstruction that forces the program to work outside its functional framework and reveal its internal functioning.
Finally, the predictive capacity of these systems leads to an increase in their control capacity. The
privacy problems stemming from the use of networked technologies are well known, but artificial intelligence analyses our previous decisions and predicts our possible future activities. This gives the
system the capacity to influence the conduct of users, which requires responsible use and the social
control of its application.
Ex Machina offers us a metaphor of the fear that surrounds artificial intelligence: that it might exceed our capabilities and escape our control. The probability that artificial intelligence may produce a singularity, an event that would change the course of human evolution, continues to be remote. However, smart algorithms based on machine learning are spreading through our environment and producing significant social changes, so it is necessary to develop strategies that allow all social agents to understand the processes that these algorithms generate and to participate in their definition and implementation.
The author of this article reserves all rights.
Fake news: sharing is caring
Algorithmic filter bubbles on the Internet limit the diversity of viewpoints we see and make it easier to spread fabricated news.
Sandra Álvaro
07 March 2017
The US elections showcased post-truth politics. The impact of false news on the results not only
demonstrates the social influence of the Internet, it also highlights the misinformation that exists
there. The rise of this phenomenon is also closely linked to the role of the social networks as a point
of access to the Internet. Faced with this situation, the solutions lie in diversifying the control of
information, artificial intelligence, and digital literacy.
If you’re in the States, it’s difficult to escape the huge media phenomenon of the electoral process.
It’s a phenomenon that floods the traditional media and overflows into the social networks. In mid-
2016, Bernie Sanders was still favourite in progressive circles, but Donald Trump had already
become a media boom. On 8 November 2016, Trump emerged from the hullaballoo created by
satirical memes, hoaxes, clickbait and false news to be elected as president of the United States,
being invested on 20 January 2017 amid controversy arising from misinformation about attendance
of the event and mass protests headed by the women’s march.
Meanwhile, on 10 November, Mark Zuckerberg, at the Techonomy Conference in Half Moon Bay,
tried to exculpate his social network, Facebook, from participation in the spread of fake news and
its possible influence on the election results. The entry of this platform into the media arena,
materialised in its Trending Topics (only available in English-speaking countries) and reinforced by
the fact that an increasingly large number of citizens go to the Internet for their news, has become
the centre of a controversy that questions the supposed neutrality of digital platforms. Their
definition as technological media where the contents are generated by users and editorialised by
neutral algorithms has been overshadowed by evidence of the lack of transparency in the
functionality of these algorithms, partisan participation of human beings in censorship, and content
injection in trending news and user walls. This led Zuckerberg to redefine his platform as a space
for public discourse and accept its responsibility as an agent involved in this discourse by
implementing new measures. These include the adoption of a content publication policy including
non-partisanship, trustworthiness, transparency of sources, and commitment to corrections; the
development of tools to report fake or misrepresentative contents; and the use of external fact-
checking services, such as Snopes, Politifact, ABC News and AP, all signatories of
Poynter’s International Fact-Checking Network fact-checkers’ code of principles. At the same time,
other technological giants like Google and Twitter have developed policies to eliminate fake news
(Google eliminated some 1.7 billion ads violating its policy in 2016, more than twice the previous
year) and combat misuse of the Internet.
Fake news, invented for ulterior gain, makes the rounds of the Internet in the form of spam, jokes and clickbait. It is now at the centre of the controversy surrounding the US electoral process as
an example of post-truth politics facilitated by the use of the social networks, but it is also a
symptom that the Internet is sick.
In its Internet Health Report, Mozilla points to the centralisation of a few big companies as one of
the factors that encourage the lack of neutrality and diversity, as well as the lack of literacy in this
medium. Facebook is not just one of the most used social networks, with 1.7 billion users, it is also
the principal point of entry to the Internet for many, while Google monopolises searches. These
media have evolved since the first service providers and the advent of Web 2.0, creating a structure of services based on the metric of attractiveness. "Giving the people what they want" has
justified the monitoring of users and algorithmic control of the resulting data. It has also created a
relation in which users depend on the tools that these big providers offer, ready for use, without
realising the cost of easy access in terms of the centralisation, invasive surveillance and influence
that these big companies have on the control of information flows.
The phenomenon of misinformation on the Internet stems from the fact that the medium gives fake
or low-quality information the same capacity to go viral as a true piece of news. This phenomenon
is inherent in the structure of the medium, and is reinforced by the economic model of pay per click
—exemplified by Google’s advertising service—and the creation of filter bubbles by the
algorithmic administration of social networks like Facebook. In this way, on the Internet, fake news
is profitable and tends to reaffirm our position within a community.
Services like AdSense by Google encourage websites developers to generate attractive, indexable
content to increase visibility and raise the cost per click. Unfortunately, sensationalist falsehoods
can be extremely attractive. Verifying this fact in Google Analytics is what led a group of Macedonian adolescents to become promoters of Trump's campaign. In the small town of Veles, over 100 websites with misleading names sprang up, devoted to spreading fake news about the campaign to attract traffic to pages of adverts for economic gain. The biggest source of traffic to these websites turned out
to be Facebook, where, according to a study carried out by the Universities of New York and
Stanford, fake news was shared up to 30 million times.
The traffic of falsehoods on the social networks is encouraged by social and psychological factors—
the decrease in attention that occurs in environments where information is dense and the fact that
we are likely to uncritically share content that comes from our friends—but it is largely due to the
algorithmic filtering conducted on these platforms. Facebook frees us from excess information and
redundancy by filtering the contents that appear on our walls, in keeping with our preferences and
proximity to our contacts. In this way, it encloses us in bubbles that keep us away from the diversity of viewpoints and the controversies that they generate and that give meaning to the facts. This filtering produces homophilous sorting: like-minded users form clusters that are reinforced as they share information, which is unlikely to leap from one cluster to another, subjecting users to a low level of entropy, that is, to information that rarely brings new and different viewpoints. These bubbles are like
echo chambers, generating narratives that can reach beyond the Internet and have effects on our
culture and society. The Wall Street Journal has published an app, based on research carried out with Facebook, that allows you to simultaneously follow the narratives generated by the red (conservative) and blue (liberal) feeds. This polarisation, while limiting our perception,
makes us identifiable targets, susceptible to manipulation by manufactured news.
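The cluster effect described above can be made concrete with a small sketch: a hypothetical network of users who only reshare content close to their own declared position. The network, the positions and the sharing rule are all invented for illustration:

```python
from collections import deque

# Hypothetical users with a position on a -1 (blue) to +1 (red) axis,
# and a friendship graph that is itself homophilous.
position = {"a": 0.9, "b": 0.8, "c": 0.7, "d": -0.1, "e": -0.8, "f": -0.9}
friends = {
    "a": ["b", "c"], "b": ["a", "c", "d"], "c": ["a", "b"],
    "d": ["b", "e"], "e": ["d", "f"], "f": ["e"],
}

def spread(seed, item_position, tolerance=0.5):
    """Breadth-first reshare: a user passes the item on only if it sits
    within `tolerance` of their own position."""
    reached, queue = set(), deque([seed])
    while queue:
        user = queue.popleft()
        if user in reached or abs(position[user] - item_position) > tolerance:
            continue
        reached.add(user)
        queue.extend(friends[user])
    return reached

# A strongly "red" item seeded in the red cluster never reaches e or f:
# it circulates inside the bubble, which is exactly the echo-chamber effect.
print(sorted(spread("a", item_position=0.9)))
```

Even this naive rule reproduces the observation in the text: information that resonates with a cluster is reinforced within it and rarely leaps to the other side.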
Technology is part of the problem; it remains to be seen whether it can also be part of the solution.
Artificial intelligence can’t decide whether an item of news is true or false—a complex, arduous
task even for an expert human being. But tools based on machine-learning and textual analysis can
help to analyse the context and more quickly identify information that needs checking.
The Fake News Challenge is an initiative in which different teams compete to create tools that help
human fact-checkers. The first phase of this competition is based on stance detection. Analysis of
the language contained in a news item can help to classify it as being for, against, neutral towards, or merely discussing the claim indicated in the headline. This automatic classification allows a human checker rapidly to access a list of articles related to an event and examine the
arguments for and against.
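A crude sketch of this first step, with invented examples and a deliberately naive rule (word overlap plus a few negation and hedging cues); real stance detectors use trained language models rather than anything this simple:

```python
def stance(headline: str, body: str) -> str:
    """Classify a body's stance towards a headline: 'unrelated', 'discuss',
    'disagree' or 'agree'. A toy heuristic, not a real model."""
    negations = {"no", "not", "false", "denies", "hoax", "fake"}
    hedges = {"claims", "reportedly", "allegedly", "unclear"}
    h = set(headline.lower().split())
    b = set(body.lower().split())
    overlap = len(h & b) / len(h)      # share of headline words reused in the body
    if overlap < 0.3:
        return "unrelated"
    if b & negations:
        return "disagree"
    if b & hedges:
        return "discuss"
    return "agree"

headline = "mayor cancels city festival"
print(stance(headline, "the mayor cancels the city festival this year"))
print(stance(headline, "officials say reports the mayor cancels the festival are false"))
print(stance(headline, "rain expected at the weekend"))
```

Grouping articles by these labels is what lets a human checker see at a glance which sources support a claim and which contest it.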
Apart from language analysis, another computational procedure that helps to analyse a news context
is network analytics. OSoMe, the observatory of social media developed by the Center for Complex Networks and Systems Research at Indiana University and directed by Fil Menczer, proposes
a series of tools to analyse how information moves in the social networks, in search of patterns that
serve to identify how political polarisation occurs and how fake news is transmitted, as well as
helping automatically to identify it.
One of these tools is Hoaxy, a platform created to monitor the spread of fake news and its
debunking on Twitter. The platform tracks mentions and retweets of URLs containing claims flagged by fact-checking sites, to see how they are distributed online. Preliminary analysis shows that fake news is more abundant than its debunking, that it precedes fact-checking by 10-20 hours, and that it is propagated by a small number of very active users, whereas debunking is distributed more evenly.
In addition, for the automated detection of fake news, network analytics use knowledge graphs.
This technique makes it possible to employ knowledge that is generated and verified collectively, as
in the case of Wikipedia, to check new facts. A knowledge graph will contain all the relations
between the entities referred to in this collaborative encyclopaedia, representing the sentences so
that the subject and the object constitute nodes linked by their predicate, forming a network. In this
way, the accuracy of a new sentence can be determined in the graph, being greater when the path
linking subject and object is sufficiently short, without excessively general nodes.
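The idea can be sketched with a toy graph in place of Wikipedia. Entities become nodes, each triple links its subject and object, and a candidate fact scores higher the shorter the path between its two entities (the triples below are hand-made examples):

```python
from collections import deque

# Tiny hand-made knowledge graph: each (subject, predicate, object) triple
# links two entity nodes.
triples = [
    ("Barack Obama", "born_in", "Honolulu"),
    ("Honolulu", "located_in", "Hawaii"),
    ("Hawaii", "part_of", "United States"),
    ("Barack Obama", "president_of", "United States"),
]

graph = {}
for s, _, o in triples:
    graph.setdefault(s, set()).add(o)
    graph.setdefault(o, set()).add(s)

def path_length(a, b):
    """BFS shortest-path length between two entities (None if unconnected)."""
    seen, frontier = {a}, deque([(a, 0)])
    while frontier:
        node, d = frontier.popleft()
        if node == b:
            return d
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return None

def plausibility(subject, obj):
    """Shorter paths -> higher score, as in knowledge-graph fact checking."""
    d = path_length(subject, obj)
    return 0.0 if d is None else 1.0 / (1.0 + d)
```

The published method also penalises paths through very general, high-degree nodes; this sketch keeps only the path-length part.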
Other tools that use computational means to track the spreading of information and enable checking
based on textual content, the reputation of its sources, its trajectory, and so on, are RumorLens,
FactWatcher and Twitter Trails, implemented in the form of applications or bots.
Particular mention should be made of SwiftRiver, the collaborative tool provided by Ushahidi, which
uses metaphors such as river (information flow), channels (sources), droplets (facts) and bucket
(data that is filtered or added by the user) in an application designed to track and filter facts in real
time and collectively create meaning. Here, a user can select a series of channels—Twitter or RSS
—to establish a flow of information that can be shared and filtered according to keywords, dates or
location, with the possibility of commenting and adding data.
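The "bucket" step of such a tool — filtering a flow of facts by keyword and date — can be sketched as follows; the field names and data structure here are invented for illustration, not SwiftRiver's actual API:

```python
from datetime import date

def filter_droplets(droplets, keywords=None, since=None):
    """Keep the 'droplets' (facts) from a channel that mention any of the
    given keywords and are not older than the given cutoff date."""
    kept = []
    for d in droplets:
        text = d["text"].lower()
        if keywords and not any(k.lower() in text for k in keywords):
            continue
        if since and d["date"] < since:
            continue
        kept.append(d)
    return kept

stream = [
    {"text": "Flooding reported downtown", "date": date(2017, 1, 5)},
    {"text": "Power restored in the north", "date": date(2017, 1, 2)},
]
```

A shared bucket built this way lets several users converge on the same filtered stream and annotate it collectively.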
The proliferation of Internet use has led to a post-digital society in which connectedness is a
constituent part of our identities and surroundings, and where everything that happens online has
consequences in real cultural and social contexts. This proliferation has occurred alongside what the
Mozilla Foundation calls the veiled crisis of the digital age. The simplification of tools and software
and their centralisation in the hands of technological giants foster ignorance about the mechanisms
governing the medium, promoting passive users who are unaware of their participation in this
ecology. Internet has brought about a change in the production of information, which no longer
comes from the authority of a few institutions, but is instead created in a collective process. The
informed adoption of these and other tools could help to reveal the mechanisms that produce,
distribute and evaluate information, and contribute to digital and information literacy—the
formation of critical thinking that makes us active participants who are responsible for the creation
of knowledge in an ecology that is enriched by the participation of new voices, and where sharing is
essential.
The author of this article reserves all rights.
The Power of Algorithms: How software
formats the culture
The only way to manage the data of Internet is through
automated processing using algorithms.
Sandra Álvaro
29 January 2014
The use of the Internet has spread further than computers and beyond the bounds of any specific
discipline, and has come to permeate the texture of our reality and every aspect of our daily lives.
The ways in which we relate to each other, obtain information, make decisions… that is, the ways in
which we experience and learn about our surroundings, are increasingly mediated by the
information systems that underlie the net. Massive amounts of information are generated by this
constant interaction, and the only way to manage this data is through automated processing using
algorithms. A humanist understanding of how this ‘algorithmic medium’ has evolved and how we
interact with it is essential in order to ensure that citizens and institutions continue to play an active
role in shaping our culture.
A young information and financial tycoon heads across Manhattan in a limousine in the first scene
of David Cronenberg’s most recent film Cosmopolis, based on the novel by Don DeLillo. During the
ride, Eric Packer monitors the flow of information that flashes up on screens as he leads us on a
quest for a new perspective. His encounters with various characters and the sights and sound of the
city that enter through the windows offer us a glimpse into the inner workings, consequences, and
gaps of what the film calls ‘cyber-capitalism’. The journey ends with a confrontation between the
protagonist –whose fortune has been wiped out in the course of the day by erratic market behaviour
that his algorithms were unable to predict– and his antithesis, a character who is unable to find his
place in the system.
The interplay between technology and capital – the computerised processing of bulk data in order to
predict and control market fluctuations – is one of the constants of capitalist speculation. In fact,
65% of Wall Street transactions are carried out by ‘algo trading’ software. In a global market where
enormous amounts of data are recorded and a rapid response rate gives you an edge over the
competition, algorithms play a key role in analysis and decision-making.
Similarly, algorithms have found their way into all the processes that make up our culture and our
everyday lives. They are at the heart of the software we use to produce cultural objects, through
programmes that are often freely available in the cloud. They also play a part in disseminating these
objects through the net, and in the tools we use to search for them and retrieve them. And they are
now essential for analysing and processing the bulk data generated by social media. This data is not
only produced by the ever-increasing amount of information posted by users, but also by tracking
their actions in a network that has become a participatory platform that grows and evolves through
their activity.
An algorithm is a finite set of instructions applied to an input, in a finite number of steps, in order to
obtain an output – a means by which to perform calculations and process data automatically.
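Euclid's algorithm for the greatest common divisor is the canonical example of this definition: a finite set of instructions that transforms an input into an output in a finite number of steps.

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace the pair (a, b) with
    (b, a mod b) until the remainder is zero; the output is a."""
    while b != 0:
        a, b = b, a % b
    return a
```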
The term ‘algorithm’ comes from the name of the 9th century Persian mathematician al-Khwarizmi,
and originally referred to the set of rules used to perform arithmetic operations with Arabic
numerals. The term gradually evolved to mean a set of procedures for solving problems or
performing tasks. It was Charles Babbage who made the connection between algorithms and
automation with his hypothesis that all the operations that play a part in an analysis could be
performed by machines. The idea was that all processes could be broken down into simple
operations, regardless of the problem being studied. Although Babbage designed his Differential
Engine and Ada Lovelace created the first algorithm for his Analytical Engine, it was Alan Turing
who put forward the definitive formalisation of the algorithm with his Universal Machine in 1937.
Turing’s theoretical construct is a hypothetical device that manipulates symbols on a strip of tape
according to a table of rules, and can be adapted to simulate the logic of any computer algorithm.
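A few lines of code can simulate such a device. The rule table below — an invented machine that flips every bit of its input and halts at the first blank — illustrates how a tape, a read/write head and a table of rules suffice to compute:

```python
def run_turing_machine(rules, tape, state="start", pos=0, max_steps=1000):
    """Simulate a one-tape Turing machine. `rules` maps
    (state, symbol) -> (symbol_to_write, move, next_state); the machine
    halts when no rule applies to the current (state, symbol) pair."""
    tape = dict(enumerate(tape))
    for _ in range(max_steps):
        symbol = tape.get(pos, "_")          # "_" is the blank symbol
        if (state, symbol) not in rules:
            break
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, "_") for i in range(lo, hi + 1))

# An invented rule table: flip every bit, moving right, halt at the blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
}
```

Any computer algorithm can in principle be encoded as such a rule table, which is what makes Turing's construct universal.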
The advent of the Internet took this logical construct beyond the computer. Internet protocol (1969)
and the web (1991) became a kind of universal container in which data could be stored, accessed
and processed on any computer. These developments, along with the convergence that went hand in
hand with the boom in personal computing in the eighties, meant that computation – numerical
calculation – spread to all digitalised processes. Meanwhile, URLs allowed algorithms to interact
and interconnect amongst themselves, eventually producing what Pierre Lévy calls the ‘algorithmic
medium’: the increasingly complex framework for the automatic manipulation of symbols, which
would become the medium in which human networks collaboratively create and modify our
common memory.
Algorithms play a part in all our everyday interactions with the social web. With 699 million users
connecting each day, popular social networking site Facebook is working on the problem of how to
display the updates of the many friends, groups and interests that its users can follow. Its answer is
the algorithm known as EdgeRank, which processes data about our interests – our ‘likes’ –, the
number of friends we have in common with the person posting a news item, and the comments
posted on it, in order to prioritise what we see on our news feed and hide the ‘boring’ stories. The
algorithm also tracks the graph of our contacts in order to suggest new friends.
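The publicly described form of EdgeRank sums, over every "edge" attached to a story (a like, comment or share), the product of an affinity score, a weight for the edge type and a time decay. A sketch with invented weights and a simple decay:

```python
# Illustrative edge-type weights; Facebook's real values were never published.
EDGE_WEIGHT = {"like": 1.0, "comment": 4.0, "share": 6.0}

def edgerank(edges):
    """Score a story from a list of (affinity, edge_type, age_hours)
    tuples: affinity * type weight * time decay, summed over edges."""
    score = 0.0
    for affinity, edge_type, age_hours in edges:
        decay = 1.0 / (1.0 + age_hours)   # newer interactions count more
        score += affinity * EDGE_WEIGHT[edge_type] * decay
    return score
```

A fresh comment from a close friend thus outranks an old share from a distant acquaintance, which is why the feed feels ordered by social closeness rather than by time.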
Twitter similarly uses algorithms to suggest new accounts to follow, to create the content of the
Discover tab and to update its trending topics. In this case the complex algorithm doesn’t just work out
what word is tweeted most often, it also calculates whether the use of a particular term is on the
rise, whether it has been a trending topic before, and whether it is used among different networks of
users or just one densely connected cluster. It does this by monitoring the hashtags that
interconnect all the tweets a term appears in – hashtags were introduced by Twitter in 2007 and have
since spread throughout social media sites. It also makes use of its URL shortening service
(Facebook has an equivalent), which generates a shortened link every time we use a social button to
share a URL. These do not just minimise
the number of characters in a post, but also transform links into data-rich structures that can be
tracked in order to find out how they are shared on the platform and build up profiles of their users.
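The "is use of this term on the rise?" part of that calculation can be approximated by comparing a hashtag's count in the current window with its recent history; the threshold here is invented, and Twitter's actual algorithm is unpublished:

```python
from statistics import mean, pstdev

def is_trending(history, current, threshold=3.0):
    """Flag a hashtag whose current-window count sits several standard
    deviations above its recent hourly counts."""
    mu, sigma = mean(history), pstdev(history)
    if sigma == 0:
        return current > mu
    return (current - mu) / sigma > threshold
```

A hashtag that has always been popular never spikes above its own baseline, which is why perennially common words do not trend.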
As well as social networks, the social web also includes all kinds of platforms that allow us to
create and share information, including online publishing services such as blogs, recommendation
systems like Digg and Reddit and search engines. All of these platforms rely on algorithms that
work with specific criteria. The search engine Google, for example, which has to work in a medium
consisting of more than 60 trillion pages, in which more than 2 million searches are carried out
every minute, is based on the premise that “you want the answer, not trillions of webpages.” In this
scenario keyword indexing is not enough, so Google’s PageRank algorithm imitates user behaviour
by monitoring the links to and from every page, and then ranks the pages, displaying the most
relevant results first. The algorithm also works in conjunction with others that process our search
history, our language, and our physical location in order to customise the results.
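The core of PageRank — rank flowing along links until the scores stabilise — fits in a few lines. This is the textbook power-iteration formulation, leaving out refinements such as the handling of dangling pages:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Iteratively compute ranks for a dict {page: [pages it links to]}:
    each page shares its rank among the pages it links to, damped by the
    probability that a surfer follows a link rather than jumping."""
    n = len(links)
    rank = {p: 1.0 / n for p in links}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in links}
        for p, outs in links.items():
            for q in outs:
                new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
```

In the toy `web` above, page c — linked to by both other pages — ends up with the highest rank.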
Algorithms also process data generated by our online actions to suggest what books we should buy
on Amazon, what videos we should watch on YouTube, and to determine the advertisements we will
be shown on all these platforms. Aside from these algorithms that we regularly interact with, there
are others such as Eigentaste, a collaborative filtering algorithm for rapid computation of
recommendations developed at UC Berkeley; an algorithm recently developed at Cornell and
Carnegie Mellon Universities that reconstructs our life histories by analysing our Twitter stream;
and the algorithm developed at Imperial College London to reduce Twitter spam by detecting
accounts that are run by bots instead of humans. The growing presence of algorithms in our culture
is reflected in the #algopop tumblr, which studies the appearance of algorithms in popular culture
and everyday life.
The examples mentioned here illustrate how information on the Internet is accessed and indexed
automatically, based on data drawn from our online behaviour. Our actions generate a flow of
messages that modify the inextricable mass of interconnected data, subtly changing our shared
memory. This means that communication in the ‘algorithmic medium’ is ‘stigmergic’: individuals
alter the medium itself when they communicate in it. Every link that we create or
share, every time we tag something, every time we like, search, buy or retweet, this information is
recorded in a data structure and then processed and used to make suggestions or to inform other
users. As such, algorithms help us to navigate the enormous accumulation of information on the net,
taking information generated individually and processing it so that it can be consumed communally.
But when algorithms manage information, they also reconstruct relationships and connections, they
encourage preferences and produce encounters, and end up shaping our contexts and our identities.
Online platforms thus become automated socio-technical environments.
The use of automation in our culture has epistemological, political and social consequences that
have to be taken into account. For example, the continuous monitoring of our actions transforms our
existing notions of privacy; algorithms make us participate in processes that we are not conscious
of; and although they increase our access to information – such enormous amounts of it that it is no
longer humanly discernible – and boost our agency and our capacity to choose, they are by no
means neutral and they can also be used for the purposes of control.
Most users see the net as a broadcast medium, like traditional media, and are not aware of how
information is filtered and processed by the medium. Not only are the effects of algorithms
imperceptible, and often unknown because they are in the hands of commercial agencies and
protected by property laws, they have also become inscrutable, because of the interrelation between
complex software systems and their constant updates.
Furthermore, algorithms are not just used for data analysis, they also play a part in the decision-
making process. This raises the question of whether it is justifiable to accept decisions made
automatically by algorithms that do not work transparently and cannot be subject to public debate.
How can we debate the neutrality of processes that are independent of the data they are applied to?
Also, when algorithms analyse the data compiled from our earlier actions they are strongly
dependent on the past, and this may tend to maintain existing structures and limit social mobility,
hindering connections outside of existing clusters of interests and contacts.
Given that these algorithms influence the flow of information through the public sphere, we need to
come up with metaphors that make these processes understandable beyond the realm of computer
experts. We need to make them understandable to people in general, so that everybody can
participate in discussions about what problems can be solved algorithmically and how to approach
these problems. Encouraging participation is a way to ensure the ecological diversity of the medium
and its connection to pragmatics.
Software studies pioneer Matthew Fuller points out that even though algorithms are the internal
framework of the medium in which most intellectual work now takes place, they are rarely studied
from a humanistic or critical point of view, and are generally left to technicians. In his book Behind
the Blip: Essays on the Culture of Software, Fuller suggests some possible critical approaches, such
as: running information systems that really reveal their functioning, structure and conditions;
preserving the poetics of connection that is inherent to social software or promoting use that always
exceeds the technical capacities of the system; and encouraging improbable connections that enrich
the medium with new potential and broader visions that allow room for invention.
Some initiatives along these lines are already occurring in the ‘algorithmic medium’, actively
contributing to the use of computing by non-experts and allowing user communities to influence its
course. They include data journalism, which creates narratives based on data mining; free software,
developed in collaboration with its users; crowdsourcing initiatives based on data that is obtained
consciously and collaboratively by users; and the rise in the communal creation of MOOCs
(massive open online courses).
On another front, cultural institutions also need to develop a presence in the virtual medium. By
allowing online access to their archives, data, know-how and methodology, projects, and
collaborators, they can promote new interests and connections, taking advantage of ‘stigmergy’ to
boost the diversity and poetics of the medium. Similarly, workshops such as those organised as part
of the CCCB’s Internet Universe project help to promote a broader understanding and awareness of
this medium, and encourage greater and more effective participation.
The capacity and the scope of the algorithmic environment is now being strengthened through the
use of artificial intelligence technology, as illustrated by Google’s ‘Hummingbird’ semantic
algorithm, which is based on natural language processing, and Mark Zuckerberg’s mission to
‘understand the world’ by analysing the language of posts shared on Facebook. It is important to
encourage critical, public debate about the role of these mechanisms in shaping our culture if we
want to ensure the continuing diversity and accessibility of the net.
Big Data and Digital Humanities: From social
computing to the challenges of connected culture
Big Data applied to the field of cultural production brings the
Digital Humanities up against the new challenges of a
network-generated data culture.
Sandra Álvaro
23 October 2013
Big Data is the new medium of the second decade of the twenty-first century: a new set of
computing technologies that, like the ones that preceded it, is changing the way in which we access
reality. Now that the Social Web has become the new laboratory for cultural production, the Digital
Humanities are focusing on analysing the production and distribution of cultural products on a
mass scale, in order to participate in designing and questioning the means that have made it
possible. As such, their approach has shifted to looking at how culture is produced and distributed,
and this brings them up against the challenges of a new connected culture.
5,264,802 text documents, 1,735,435 audio files, 1,403,785 videos, and over two billion web pages
that can be accessed through the WayBack Machine make up the inventory of the Internet Archive
at the time of writing. Then there are also the works of over 7,500 avant-garde artists archived as
videos, pdfs, sound files, and television and radio programmes on UBUWEB, the more than
4,346,267 entries in 241 languages submitted by the 127,156 active users that make up Wikipedia,
and the ongoing contributions of more than 500 million users on Twitter. And these are just a few
examples of the new virtual spaces where knowledge is stored and shared: open access,
collaboratively created digital archives, wikis and social networks in which all types of
hybridisations coexist, and where encounters between different types of media and content take
place. As a whole, they generate a complex environment that reveals our culture as a constantly
evolving process.
In the 1990s, computers were seen as “remediation machines”, or machines that could reproduce
existing media, and the digital humanities focused on translating the documents of our cultural
heritage, tributaries of print culture, into the digital medium. It was a process that reduced the
documents to machine readable and operable discrete data. As Roberto A. Busa explains in the
introduction to A Companion to Digital Humanities, humanities computing is the automation of
every possible analysis of human expression. This automation enhanced the capabilities of
documents, which gradually mutated into performative messages that replaced written documents as
the principal carriers of human knowledge. Meanwhile, the processes used to analyse and reproduce
the texts were also used to develop tools that would allow users to access and share this content, and
this brought about a change from the paradigm of the archive to that of the platform. These twin
processes transformed the way research is carried out in the humanities, and determined the content
of Digital Humanities 2.0.
Scanning or transcribing documents to convert them to binary code, storing them in data bases, and
taking advantage of the fact that this allows users to search and retrieve information, and to add
descriptors, tags and metadata, all contributed to shaping a media landscape based on
interconnection and interoperability. Examples of projects along these lines include digital archives
such as the Salem Witch Trial, directed by Benjamin Ray at the University of Virginia in 2002,
which is a repository of documents relating to the Salem witch hunt. Or The Valley of the Shadow
archive, put together by the Center for Digital History also at the University of Virginia, which
documents the lives of the people from Augusta County, Virginia, and Franklin County,
Pennsylvania, during the American Civil War. More than archives, these projects become virtual
spaces that users can navigate through, actively accessing information through a structure that
connects content from different data bases, stored in different formats such as image, text and sound
files. The creation of these almost ubiquitously accessible repositories required a collaborative
effort that brought together professionals from different disciplines, from historians, linguists and
geographers to designers and computer engineers. And the encounter between them led to the
convergent practices and post-disciplinary approach that came to be known as Digital Humanities.
These collaborative efforts based on the hybridisation of procedures and forms of representation
eventually led to the emergence of new formats in which the information can be contextualised,
ranging from interactive geographic maps to timelines. An example of these types of new
developments is the Digital Roman Forum project, carried out between 1997 and 2003 by the
Cultural Virtual Reality Laboratory (CVRLab) at the University of California, Los Angeles
(UCLA), which developed a new way of spatializing information. The team created a three-
dimensional model of the Roman Forum that became the user interface. It includes a series of
cameras, aimed at the different monuments that are reproduced in the project, allowing users to
compare the historical reproduction with current images. It also provides details of the different
historical documents that refer to these spaces, and that were used to produce the reproduction.
This capacity for access and linking is taken beyond the archive in projects such as Perseus and
Pelagios, which allow users to freely and collectively access and contribute content. These projects
use standards developed in communities of practice to interconnect content through different online
resources. They thus become authentic platforms for content sharing and production rather than
simple repositories. The digital library Perseus, for example, which was launched in 1985, relies on
the creation of open source software that enables an extensible data operation system in a
networking space, based on a two-level structure: one that is human-accessible, in which users can
add content and tags, and another that incorporates machine-generated knowledge. This platform
provides access to the original documents, and links them to many different types of information,
such as translations and later reissues, annotated versions, maps of the spaces referred to… and
makes it possible to export all of this information in XML format. Meanwhile, the Pelagios project,
dedicated to the reconstruction of the Ancient World, is based on the creation of a map that links
historical geospatial data to content from other online sources. When users access a point on the
map-interface they are taken to a heterogeneous set of information that includes images,
translations, quotes, bibliographies and other maps, all of which can be exported in several file
formats such as XML, JSON, Atom and KML.
These projects are examples of the computational turn that David M. Berry theorises in The
Computational Turn: Thinking about Digital Humanities: “Computational techniques are not merely
an instrument wielded by traditional methods; rather they have profound effects on all aspects of the
disciplines. Not only do they introduce new methods, which tend to focus on the identification of
novel patterns in the data against the principle of narrative and understanding, they also allow the
modularisation and recombination of disciplines within the university itself.” The use of automation
in conjunction with digitalisation not only boosts capabilities for analysing text documents, it also
creates new capabilities for remixing and producing knowledge, and promotes the emergence of
new platforms or public spheres, in which the distribution of information can no longer be
considered independently of its production.
In The Digital Humanities Manifesto 2.0, written in 2009 by Jeffrey Schnapp and Todd Presner,
this computational turn is described as a shift from the initial quantitative impulse of the Digital
Humanities to a qualitative, interpretative, emotive and generative focus. One that takes into
account the complexity and specificity of the medium, its historical context, its criticism and
interpretation. This reformulation of objectives sees digital media as profoundly generative and
analyses the digital native discourse and research that have grown out of these emergent public
spheres, such as wikis, the blogosphere and digital libraries. It thus allows users to become actively
involved in the design of the tools, the software, that have brought about this new form of
knowledge production, and in the maintenance of the networks in which this culture is produced.
This new type of culture is open source, targeted at many different purposes, and flows through
many channels. It stems from process-based collaboration in which knowledge takes on many
different forms, from image composition to musical orchestration, the critique of texts and the
manufacturing of objects, in a convergence between art, science and the humanities.
The generative and communicational capabilities of new media have led to the production and
distribution of cultural products on a mass scale. At this time in history, which Manovich has
dubbed the “more media” age, we have to think of culture in terms of data. And not just in terms of
data that is stored in digital archives in the usual way, but also that which is produced digitally in
the form of metadata, tags, computerised vision, digital fingerprints, statistics, and meta-channels
such as blogs and comments on social networks that make reference to other content; data that can
be mined and visualised, to quote the most recent work by the same author, «Software Takes
Command». Data analysis, data mining, and data visualisation are now being used by scientists,
businesses and governments as new ways of generating knowledge, and we can apply the same
approach to culture. The methods used in social computing –the analysis and mapping of data
produced through our interactions with the environment in order to optimise the range of consumer
products or the planning of our cities, for example– could be used to find new patterns in cultural
production. These would not only allow us to define new categories, but also to map and monitor
how and with what tools this culture is produced. The cultural analysis approach that has been used
in the field of Software Studies since 2007 is one possible path in this direction. It consists of
developing visualisation tools that allow researchers to analyse cultural products on a mass scale,
particularly images. For example, a software programme called ImagePlot and high-resolution
screens can be used to carry out projects based on the parametrisation of large sets of images in
order to reveal new patterns that challenge the existing categories of cultural analysis. One of these
projects, Phototrails, for example, generates visualisations that reveal visual patterns and dynamic
structures in photos that are generated and shared by the users of different social networks.
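The parametrisation behind such tools can be illustrated by reducing an image to a couple of measurable features — say, mean brightness and a rough saturation measure — that can then be plotted for thousands of images at once. Representing an image as a plain list of RGB tuples is a simplification for the sketch:

```python
def image_features(pixels):
    """Reduce an image (a list of (r, g, b) tuples, values 0-255) to the
    kind of parameters cultural analytics plots at scale: mean brightness
    and a rough saturation measure (mean max-min channel spread)."""
    n = len(pixels)
    brightness = sum((r + g + b) / 3 for r, g, b in pixels) / n
    saturation = sum(max(p) - min(p) for p in pixels) / n
    return brightness, saturation
```

Plotting a large photo collection on a brightness/saturation plane is exactly the kind of visualisation that lets new patterns emerge outside the existing categories of cultural analysis.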
Another approach can be seen in projects that analyse digital traces and monitor knowledge
production and distribution processes. An example of this approach is the project History Flow by
Martin Wattenberg and Fernanda Viégas –developed at the IBM Collaborative User Experience
Research Group–, which generates a histogram of the contributions that make up Wikipedia.
Big Data applied to the field of cultural production allows us to create ongoing spatial
representations of how our visual culture is shaped and how knowledge is produced. This brings the
Digital Humanities up against the new challenges of a network-generated data culture, challenges
that link software analysis to epistemological, pedagogic and political issues and that raise many
questions, such as: how data is obtained and which entities should be parametrized, at the risk of failing
to include parts of reality in the representations; how we assign value to these data, considering that
this has a direct effect on how the data will be visualised, and that the great rhetoric power of these
graphic visualisations may potentially distort the data; how information is structured in digital
environments, given that the structure itself entails a particular model of knowledge and a particular
ideology; how to maintain standards that enable data interoperability, and how to go about the
political task of ensuring ongoing free access to this data; what new forms of non-linear, multimedia
and collaborative narrative can be developed based on this data; the pedagogical question of how to
transmit an understanding of the digital code and algorithmic media to humanists whose education
has been based on the division between culture and science; and, lastly, how to bring cultural
institutions closer to the laboratory, not just in terms of preservation but also in the participation and
maintenance of the networks that make knowledge production possible.
The Latest Post-Digital Revolution: The
Internet of Things, Big Data and Ubiquity
We build bridges that bind the virtual world closer to the
physical world, so that information is not only accessible from
anywhere but also in everything.
Sandra Álvaro
03 July 2013
“Gestural interfaces that can be used to access, connect and process data captured in real time;
shopping malls that recognise us when we walk in, and where polite virtual agents address us from
interactive screens, remind us of our recent purchases and offer a selection of products tailored to
our needs and tastes; the capacity to locate and track the movements of any person through the
city… and even to predict the future.” This is how engineers at MIT Media Lab, Microsoft
Research, and Austin-based Milkshake Media described the world of 2054 when Steven
Spielberg asked for their advice while preparing the screen version of the famous Philip K. Dick
short story. Our reality is still nowhere near the massive, seamless network that structures and brings to
life the world of Minority Report, but it would appear that this world made up of always-connected
smart objects – or something very similar to it – is inevitable. As Adam Greenfield explains in his
book Everyware: The Dawning of the Age of Ubiquitous Computing, computer ubiquity, in its
numerous forms – augmented reality, wearable computing, tangible interfaces, locative media,
near-field communication – is evolving every day, building bridges that bind the virtual world, or
“dataspace”, closer and closer to the physical world, so that information is not only accessible from
anywhere but also in everything.
See for example the recently opened Burberry flagship store at 121 Regent Street in London, an
example of the spectacle of consumption that merges all the information from this clothing
company’s website with the physical space. An augmented reality project in which information
spreads throughout the architectural space by means of interactive screens that share information in
real time through hyperspace. From watching a catwalk show or the launch of one of the products
sold at the store, to the planet-wide sharing of cultural events programmed there.
Another example that connects information to context can be found in the numerous sensor
networks that collect information in our environment, for purposes ranging from improving sporting
performance to preventing damage from tsunamis, volcanoes and radiation leaks, or improving road
traffic flow and safety.
Concussion Detector is a wearable sensor that measures the impact of blows to the head suffered by
athletes during games. The recorded data are sent to the coaches, who are equipped with an iPad on
which they can check the readings against each player’s impact history, helping them make
appropriate decisions about whether the player should stay in the game. As well as improving player safety, this
project developed at Cagan Stadium in Stanford is also a massive data-capturing initiative that aims
to improve diagnostic capacity in general.
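The decision-support step described here can be sketched very roughly as follows. This is a hypothetical illustration, not the real product's logic: the thresholds, the g-force units and the rule combining a single severe blow with accumulated moderate impacts are all assumptions made for the example.

```python
# Hypothetical sketch: a new head-impact reading (in g) is compared against a
# player's recorded impact history to flag whether a sideline review is needed.
# Thresholds are illustrative assumptions, not the real system's values.

def should_review_player(new_impact_g, impact_history_g,
                         absolute_limit=60.0, session_limit=3):
    """Return True if the new impact warrants pulling the player for review."""
    # A single severe blow exceeds the absolute threshold.
    if new_impact_g >= absolute_limit:
        return True
    # Repeated moderate impacts (at least half the limit) accumulate risk.
    moderate = absolute_limit / 2
    count = sum(1 for g in impact_history_g if g >= moderate)
    if new_impact_g >= moderate:
        count += 1
    return count >= session_limit

print(should_review_player(65.0, []))            # True: severe single blow
print(should_review_player(35.0, [40.0, 42.0]))  # True: accumulated hits
print(should_review_player(20.0, [15.0]))        # False: below both criteria
```

The point of keeping the history, as the text notes, is precisely that decisions depend on the accumulated record rather than on any single reading.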
Another connected sensor project, in this case related to the development of smart cities, is the
Parking Spot Finder, a sensor network that aims to improve traffic flow and clear up congestion in
streets in the city centre. To do this, it detects whether parking spots are occupied and sends the
information to smartphone users. The database is also used to adjust the prices of parking meters
based on demand.
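The demand-based adjustment of meter prices might work along these lines, loosely inspired by schemes such as San Francisco's SFpark. This is a sketch under assumptions: the target occupancy band, step size and price bounds are invented for the example.

```python
# Illustrative sketch of demand-responsive meter pricing: occupancy reported
# by the spot sensors nudges the hourly rate up or down within fixed bounds.
# The target band (60-80%), step and price limits are assumptions.

def adjust_meter_price(current_price, occupied, total,
                       target_low=0.6, target_high=0.8,
                       step=0.25, floor=0.5, ceiling=6.0):
    occupancy = occupied / total
    if occupancy > target_high:        # block nearly full: raise the rate
        current_price += step
    elif occupancy < target_low:       # many free spots: lower the rate
        current_price -= step
    return round(min(max(current_price, floor), ceiling), 2)

print(adjust_meter_price(2.00, occupied=18, total=20))  # 2.25 (90% occupied)
print(adjust_meter_price(2.00, occupied=10, total=20))  # 1.75 (50% occupied)
print(adjust_meter_price(2.00, occupied=14, total=20))  # 2.00 (within band)
```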
All of these sensor systems collect petabytes of data that are sent to the “cloud”, where they interact
with other data sets and are processed in real time, in order to produce knowledge that is distributed
through the net. A state of affairs in which collective intelligence linked to the Internet pervades the
environment thanks to its latest evolution: the Data Web.
Web 3.0 or the Data Web is an evolution of Web 2.0, the Social Internet understood as a platform. It
is a network in which software is offered as a service in order to connect users to each other. This
Web, whose value lies in the contributions and uses of net users, is the start of collective
intelligence. In order for the Web to be able to offer answers and create knowledge based on the
information provided by users on a massive scale, this information must be in a form that can be
handled, understood and worked with in real time. This is what the Data Web does. This new
development is based on a series of standards and languages that make it possible to assign
metadata to Internet content. These metadata, or data on data, are machine-readable and add
information that enables all web traffic to be identified, located and monitored. The result is a
system of related databases, in which different subsystems can be used to track all information
related to a particular object, and to generate relevant responses. When these data do not just derive
from our interactions on the Internet, but also from the network sensors that are spread throughout
the physical environment – producing the data flood that characterises the Big Data phenomenon –
and when they also leave the limited frame of screens and become accessible in physical space
through different types of augmented reality, then we have the Internet of Things, or, as Tim
O’Reilly calls it, Web Squared – the Web encountering the world.
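The idea of machine-readable "data on data" described above can be illustrated with a toy set of subject-predicate-object statements, the basic form used by Semantic Web standards such as RDF. The vocabulary and identifiers below are invented for the example.

```python
# A minimal illustration of metadata as related statements: content annotated
# with (subject, predicate, object) triples, which can then be queried across
# subsystems to gather everything known about one object.

triples = [
    ("photo:42", "type",     "Image"),
    ("photo:42", "creator",  "user:ana"),
    ("photo:42", "location", "Barcelona"),
    ("user:ana", "type",     "Person"),
    ("user:ana", "memberOf", "group:photographers"),
]

def about(subject, store):
    """Collect every statement whose subject is the given resource."""
    return {pred: obj for subj, pred, obj in store if subj == subject}

print(about("photo:42", triples))
# {'type': 'Image', 'creator': 'user:ana', 'location': 'Barcelona'}
```

Because objects of one statement ("user:ana") can be subjects of others, queries can follow the chain of relations, which is what makes the "system of related databases" described in the text possible.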
This encounter with the world, in which information materialises in our everyday surroundings
through the dissemination of smart objects, leads us into the realm of Ubiquitous Computing.
Ubiquitous computing was described by Mark Weiser at the Computer Science Laboratory in Xerox
PARC in 1988 as a “calm technology” that disappears into the background, allowing users to focus
on the tasks they are carrying out rather than on the computer. Unlike virtual reality, which creates a
disconnected world inside the screen, ubiquitous computing is an “embodied virtuality”. Dataspace
materialises in the world through the distribution of small interconnected computers, creating a
system that is embedded in the world, making computing an integral, invisible part of everyday life
in physical space. The project that Weiser and his colleagues were working on in this sense
consisted of a set of devices – tabs, pads and boards – that worked at different scales and could
identify users and share and access different blocks of information from various physical locations.
For example, a phone call could automatically be forwarded to wherever the intended recipient
happened to be. Or the agenda agreed on by a group of people at a meeting could be physically
displayed to the group and then transferred to the personal diaries of each person involved. The
result was a type of technology that was as intuitive and unconscious as reading, that moved out of
the user interface and created a responsive space in which things could be done. A space in which
the virtual nature of computer-readable data, and all the ways in which this data can be modified,
processed and analysed, spread through physical space in a pervasive way.
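The call-forwarding scenario Weiser described can be sketched, very roughly, as a lookup from a person's last known location (reported, say, by an active badge) to the nearest device there. All names and the device registry below are invented for illustration.

```python
# Toy sketch of location-aware call routing in a Weiser-style office: the
# system tracks where each person was last seen and rings the phone in that
# room. The people, rooms and device IDs are hypothetical.

last_seen = {"mark": "room-210", "ana": "lobby"}
devices = {"room-210": "phone-17", "lobby": "phone-03", "lab": "phone-08"}

def route_call(callee):
    """Return the device a call for `callee` should ring on, if any."""
    location = last_seen.get(callee)
    return devices.get(location) if location else None

print(route_call("mark"))   # phone-17
print(route_call("guest"))  # None: location unknown, call not forwarded
```

What makes the technology "calm" in Weiser's sense is that the caller never sees this lookup: the call simply arrives where the person is.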
There are still obstacles to achieving the pervasive space that characterises ubiquitous computing,
such as: the diversity of existing operating systems and programming languages, which hinders
communication among computers; the lack of design standards that would enable the
homogenisation of the systems involved; the existence of gaps in the universal distribution of ultra
broadband, which is necessary for the flow of these data; and the lack of real demand from the
general public. But even so, the “intelligent dust”, as Derrick de Kerckhove calls it, of this
Augmented Mind is starting to spread throughout our environment. Aside from the sensor networks
and augmented reality systems mentioned above, which we can access from our smartphones
through applications such as Layar, we are also starting to see systems that identify users, allowing
their actions to be automated. Commonplace examples include different types of cards with RFID
chips, such as the transport cards used in some countries – Oyster in London and Navigo in Paris –
and Teletac, which is used in Spain to pay motorway tolls. There is also NFC, or near-field
communication, a short-range wireless standard through which a mobile phone can read user-stored
information – such as credit card numbers or the codes of booked tickets – and transmit it to nearby
devices, so that the person carrying the telephone can make payments or enter shows.
contextualised information on demand, everywhere and in many situations, making it easier to
interact with the information overload that characterises our society. They record data about our
identity, location and interactions, turning them into new subsystems of data that can then be used
by other systems. The fact that the system needs to identify all the objects and persons involved in
order to be able to react to them means that any augmented or “pervasive” space is also a monitored space.
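The identify-then-automate pattern these cards rely on can be sketched as a simple lookup at a transit gate. Card IDs, fares and the log format are invented, and real systems are far more involved; the sketch only illustrates how each tap both performs an action and becomes a recorded data point.

```python
# Hypothetical sketch of an RFID/NFC transit gate: the card's ID is read,
# looked up, the fare is debited, and the tap is logged - so every use of the
# card also feeds the data subsystems described in the text.

balances = {"card-A1": 5.00, "card-B2": 0.50}
journey_log = []

def tap_in(card_id, station, fare=2.40):
    balance = balances.get(card_id)
    if balance is None or balance < fare:
        return "gate closed"
    balances[card_id] = round(balance - fare, 2)
    journey_log.append((card_id, station))   # the tap is recorded as data
    return "gate open"

print(tap_in("card-A1", "Oxford Circus"))  # gate open
print(tap_in("card-B2", "Oxford Circus"))  # gate closed: insufficient funds
print(journey_log)                         # one journey recorded
```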
Collective intelligence increases the awareness of our surroundings and our potential options for
interacting with it. But the pervasiveness and evanescence of ubiquitous technology makes it an
unconscious mediation, a highly relational and complex system that is based on internal operations
and interrelations with others, and that is imperceptible to the user. It is a system that can restructure
the way in which we perceive and relate to the world, and also our consciousness of ourselves and
of others, without our being aware of our involvement in it, or of the magnitude of its connections,
or even sometimes of its very presence.
In this way, ubiquitous technology becomes an apparatus as defined by Giorgio Agamben, based
on his interpretation of Foucault’s use of the term. An apparatus is anything that has in some sense
the capacity to capture, orient, determine, intercept, model, control or secure the gestures,
behaviours, opinions or discourses of living beings. An apparatus must bring about subjectification
processes that allow the individuals involved to interact with it. This means that apparatuses can be
“profaned”, returned to the process of “humanisation”. Or, in other words, to the set of cultural
practices and relations that have produced them, where they can be appropriated by human beings
who are active and aware of their environment. The imperceptible nature of the fuzzy system of
ubiquitous technology makes it impossible to profane, so that it becomes a strategic system of
control at the service of a vague and imperceptible power.
Big data and the systems that materialise information in our environment would seem to have the
power to make us happier, helping us to plan our cities and carry out our life plans. But we should
stop and ask ourselves whether our cities and our environment in general really need to be “smart”.
The qualities that make us engage with our environment are not its functionality and efficiency, but
its aesthetic, historical and cultural aspects. The “embodied virtuality” of our post-digital world
must be developed in conjunction with aesthetic strategies that allow us to visualise and understand
the data flows that surround us, as well as the systems of smart objects that drive them. By doing so,
we will not only be able to limit these systems to the areas of our lives in which they can be truly
useful, we will also be able to appropriate them, giving rise to meaningful relationships. Collective
intelligence and its ability to spread throughout our environment should increase our ability to act
performatively in the world, making us conscious of the systems of human and non-human agents
and relationships that make up our reality at any given moment. It should not become an
imperceptible system that can diminish our capacity for agency and lessen our control over how we
present ourselves in the world.
Agamben, G. (2009): What is an Apparatus? and Other Essays, Stanford University Press.
Berry, D. (2011): “The Computational Turn: Thinking about Digital Humanities”, Culture Machine, 12.
Busa, R. (2004): “Foreword” in A Companion to Digital Humanities, ed. Susan Schreibman, Ray Siemens, John Unsworth. Oxford: Blackwell.
Fuller, M. (2003): Behind the Blip: Essays on the Culture of Software, Autonomedia.
Greenfield, A. (2006): Everyware: The Dawning of the Age of Ubiquitous Computing, New Riders.
Kerckhove, D. de (2010): The Augmented Mind, 40K.
Lévy, P. (2013): “Le medium algorithmique”.
Manovich, L. (2013): Software Takes Command, Bloomsbury.
McCarthy, J., Minsky, M., Rochester, N. and Shannon, C. (1955): “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence”.
Minsky, M. (1961): “Steps Toward Artificial Intelligence”, Proceedings of the IRE.
Nilsson, N. (2010): The Quest for Artificial Intelligence, Cambridge University Press.
Schnapp, J. and Presner, T. (2009): The Digital Humanities Manifesto 2.0.
Turing, A. (1950): “Computing Machinery and Intelligence”, Mind, 59, 433–460.
Weiser, M. (1991): “The Computer for the 21st Century”, Scientific American, Special Issue on Communications, Computers, and Networks, September 1991.