AI & SOCIETY
https://doi.org/10.1007/s00146-020-01100-0
BOOK REVIEW
Shoshana Zubo, The age ofsurveillance capitalism: theght
forahuman future atthenew frontier ofpower
New York: Public Aairs, 2019, 704 pp. ISBN 978-1-61039-569-4 (hardcover) 978-1-61039-270-
0 (ebook)
SorajHongladarom1
Received: 13 September 2020 / Accepted: 27 October 2020
© Springer-Verlag London Ltd., part of Springer Nature 2020
The Age of Surveillance Capitalism is a big tome of more
than 700 pages in the hardcover edition. My review copy,
however, is on my Kindle account, and it must take quite a
lot of space on my tablet too. In these densely argued and
richly detailed pages, Shoshana Zuboff, an emerita professor
of business at Harvard Business School, shows that there
is a new, insidious kind of control. While the traditional
media try to ‘control’ or ‘influence’ us through their words
and images alone, this new form goes much deeper, and it
exploits the very constitution of people’s mental makeup.
With the information left behind by the user when they surf
the Internet, software giants such as Google are able to pro-
duce a very accurate model of who we are—sometimes in
such high detail that we do not even know that those details
are parts of ourselves—and use it to predict our preferences
in myriad ways. The information is sold to advertisers, who
pay Google huge sums of money and feed us targeted advertisements that are almost guaranteed to work, since the ads are based on exactly who we are and where our thoughts and desires lie. According
to Zuboff, this ushered in a new kind of capitalism, surveil-
lance capitalism, where the currency is no longer the die-
sel engine, manufacturing prowess, or even electronic data
itself, but personal information. In this case, the users are no
longer consumers. The consumers are the advertisers who
buy up information from Google; we the users then are the
product.
Zuboff tells a story of how Google discovered a new way
of making money from the way people browse the Internet.
The usual method of creating a revenue stream for a search
engine website is to sell advertisements through keywords.
When a user types certain words to do a search, those words
trigger an algorithm whereby ads relevant to the search key-
words will be shown to the user. However, this method is not very efficient at capturing the user’s real intentions and preferences. Search terms vary, and it is plausible that the terms
that a user happens to type into the search engine might not
accurately reflect who she really is. She might only happen
to become interested in this particular search term for a par-
ticular purpose at that particular time. Zuboff tells us that in
the first few years after Google was founded, the website gained fame and praise from the industry for the quality of its search results. In that early phase, however, there was no reliable way of turning this quality into money. When the dot-com bust took place in the early 2000s, the company was hit hard.
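To make the mechanism concrete, here is a minimal sketch of keyword-triggered ad selection in Python. The ad inventory, names, and matching rule are my own illustrative assumptions, not a description of Google's actual system.

```python
# Hypothetical sketch of keyword-triggered advertising: ads are matched
# purely on the words in the current query, so they reflect the moment's
# search terms, not the person behind them.

AD_INVENTORY = {
    "running shoes": ["SprintFast Shoes", "MarathonGear Outlet"],
    "laptop": ["ByteBook Pro", "Refurb Laptop Depot"],
}

def select_ads(query: str) -> list[str]:
    """Return the ads whose trigger keyword appears in the search query."""
    query = query.lower()
    ads = []
    for keyword, sponsored in AD_INVENTORY.items():
        if keyword in query:
            ads.extend(sponsored)
    return ads

print(select_ads("best running shoes for flat feet"))
# ['SprintFast Shoes', 'MarathonGear Outlet']
```

Note how the selection is blind to everything the user has ever done before; this is the inefficiency that, on Zuboff's account, the discovery of behavioral surplus overcame.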
According to Zuboff, Google was saved from its demise
by the discovery of what she calls the ‘behavioral surplus’
(Chapter One, Section III). The topic had already been a
subject of research by Google engineers. In Zuboff’s words, “Amit Patel, a young Stanford graduate student with a spe-
cial interest in ‘data mining,’ is frequently credited with the
groundbreaking insight into the significance of Google’s
accidental data caches” (Chapter Three, Section II). The idea
is that the so-called digital trails inevitably left by a user as
she browses the Internet reveal in a very significant way
who that user really is—what she likes, what she dislikes,
what her aspirations are, and so on. This enabled Google to increase the quality of its search results, and in the wake of the dot-com bust the company found a way to use the behavioral surplus not to improve the quality of its search results, which benefits the user, but to predict
(and thus control) the behavior of each of its users. Instead of
choosing keywords that advertisers think are relevant to their
products, advertisers now are offered a chance by Google to
specifically “target” their products to individual potential
customers. The ads would be much more relevant because
the behavioral surplus obtained from the individual users
reveals a lot about the identity of each user, making it much easier for advertisers to target their wares. In a nutshell, as an individual user’s identity is revealed through her behavioral surplus (in other words, her digital traces), advertisements can be targeted directly at this particular individual qua unique individual.
Thus, advertising becomes much more powerful. Zuboff says that this has remained Google’s way of selling services and products to advertisers to this day, with the result that Google is now one of the richest and most powerful corporations on earth.
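As a contrast with the keyword sketch above, the following toy example shows how accumulated digital traces might be used to rank ads by who the user is rather than by what she just typed. The profile-building and scoring logic are illustrative assumptions on my part, not Zuboff's description of Google's method.

```python
# Hypothetical sketch of profile-based targeting: a user's browsing trail
# is aggregated into interest counts, and ads are ranked against that
# accumulated profile instead of the current query alone.

from collections import Counter

def build_profile(digital_traces: list[str]) -> Counter:
    """Aggregate a user's browsing trail into per-topic interest counts."""
    return Counter(digital_traces)

def rank_ads(profile: Counter, ads_by_topic: dict[str, str]) -> list[str]:
    """Order ads by how strongly their topic matches the user's profile."""
    topics = sorted(ads_by_topic, key=lambda t: profile[t], reverse=True)
    return [ads_by_topic[t] for t in topics]

traces = ["running", "marathon", "running", "running", "laptop"]
ads_by_topic = {"running": "SprintFast Shoes", "laptop": "ByteBook Pro"}
print(rank_ads(build_profile(traces), ads_by_topic))
# ['SprintFast Shoes', 'ByteBook Pro'] -- ranked by who the user is over time.
```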
Zuboff’s main argument throughout the book is that
Google’s new way of selling their services has resulted in
the loss of what she calls “the right to a future tense” for
all of its users (Chapter Two, Section VI). There is thus
a strong ethical element in her argument. One loses one’s
right to a future tense when one loses the ability to shape
one’s own future. It is a future tense for “me”—I will be in
such and such a state in the future, or I will become this or
that in the future. According to Zuboff, all this is lost when
the one who is talking about herself in the future tense is
already a Google user, which is practically everyone on the
planet today. By browsing the Internet and leaving one’s
own digital trails along the way, which are cleanly hoovered
up by Google, one thereby forfeits one’s right to a future
tense, leaving it to the company to decide. Instead of being
an autonomous individual who has the ability to realize the
vision of her own future, the user is trapped in Google’s
circle of apps and streams of advertisements targeted spe-
cifically to her. Her future tense is lost when all the future
she can envision now is already predicted by Google’s algo-
rithms. When one’s future can be accurately predicted, then
one loses one’s autonomy and with it one’s dignity. One
becomes a mere source of information to be sold by Google in order to increase its wealth and power even further.
It is intriguing to see that one’s own future and one’s own
choices can be very accurately predicted by these algorithms
that rely on our digital traces. So far, I have not seen any
theory that purports to explain why this is in fact the case.
That this is indeed the case appears not to be in doubt, as the streams of advertisements entering one’s Google account can attest. But the problem is why. Obviously, Google relies on
statistics; it is not this or that particular individual, taken in isolation, whose future is predicted from these digital traces or behavioral surpluses. In fact, if there were only one or two, or even a hundred individuals, this would not have been possible in the first place. On the contrary,
Google relies on millions and millions of its users and their
behavioral surpluses, and it is possible that, when viewed from afar, these millions of data points coalesce into predictable patterns.
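The reviewer's statistical point can be made concrete with a toy clustering experiment; the synthetic data and the k-means method below are my illustrative stand-ins, not a claim about the models Google actually uses.

```python
# Toy illustration: patterns invisible in any single trace emerge once
# many users' traces are viewed together. Synthetic data; k-means is an
# illustrative stand-in for whatever models are used in practice.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Simulate 9,000 users along two behavioral dimensions, drawn from three
# latent "types" that no individual row announces on its own.
centers = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 6.0]])
users = np.vstack([c + rng.normal(0.0, 0.7, size=(3000, 2)) for c in centers])

model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(users)
print(np.bincount(model.labels_))  # roughly 3,000 users per recovered type
# A new user can now be assigned to a type, and her future clicks guessed
# from what her cluster-mates already did.
```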
However, if that is indeed the case, then the prediction
is not actually about individual persons. My behavior can
then be predicted only if the digital traces that I leave behind
happen to fall under a recognizable pattern that the algo-
rithm can detect, and the algorithm can only detect such a
pattern if there are enough users who leave behind their
surpluses in much the same way as I do. The quantity of the
data points must be large enough so that a recognizable pat-
tern can emerge. This then points toward a way in which I
can confound Google’s system. If I do not behave like other
people—if I behave in a peculiarly unique way such that the
machine cannot compare my case with those of other people,
then it is possible that the machine cannot predict my behav-
ior. I can then preserve my right to a future tense this way.
But at what price? One might think that one’s behavior is odd enough to be unique. However, given the millions and millions of data points in Google’s hands, that might actually be wishful thinking. No matter how odd or strange one’s intended behavior may be, there could well be many others who act in the same way, leaving behind the same kinds of digital traces. In that case, my behavior can be predicted by the machine just like that of the others who act the same way as I do.
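A small numerical sketch suggests why the strategy of behaving oddly fails at scale; the dimensions and sample sizes here are arbitrary illustrative choices. As the crowd of users grows, even an unusual trace vector acquires close neighbors for the pattern-matcher to predict from.

```python
# Why "behaving oddly" fails at scale: with enough users, even an unusual
# trace profile has close neighbors, so a pattern-matcher still finds a
# crowd to predict me from. All numbers are illustrative.

import numpy as np

rng = np.random.default_rng(1)
me = rng.normal(0.0, 1.0, size=10)  # my deliberately "unique" trace vector

for n_users in (100, 100_000):
    others = rng.normal(0.0, 1.0, size=(n_users, 10))
    nearest = np.linalg.norm(others - me, axis=1).min()
    print(f"{n_users:>7} users -> nearest neighbor at distance {nearest:.2f}")
# The nearest-neighbor distance shrinks as the crowd grows: someone out
# there already behaves much like me.
```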
Zuboff’s argument, thus, points toward a startling claim
with regard to the age-old debate in metaphysics, namely
that of free will and determinism. If my behavior can be
accurately predicted, then is everything in my life deter-
mined? A possible solace for an opponent of determinism is
that, if an individual refrains from leaving her digital traces
from the beginning, then it of course would not be possible
for Google—or any other entity—to predict her behavior.
But that would mean opting out of the digital world alto-
gether, and for a vast majority of us throughout the world,
that actually is not a realistic option at all.
In any case, however, Zuboff’s book has opened a way
for philosophers to think about the problem of free will and
determinism in a new way. If the details of my behavior
can be predicted based on a huge database of indi-
vidual behaviors, this does not imply that we necessarily
have lost our freedom. Zuboff is scathing in her criticism
of Google’s practice, and she suggests toward the end of
the book a political solution where laws, regulations, and
democratic control of corporate practice must be in place.
I have no objection to her proposal in the least, and in fact I would like to suggest that it be expanded to cover political authorities themselves as well,
who might be tempted to use the same technique developed
by Google to control the behavior of their population. That
would be much scarier than control by the corporations, for it would be George Orwell’s Big Brother
writ large. The situation is particularly insidious because in
this case, the political control would be in the minds of the
people themselves.
Apart from the problems with political control, there is also a metaphysical conundrum to which the phenomenon of big data analytics has given rise. In a world where everyone’s
behavior can be predicted, how is it possible that one can
enjoy one’s freedom? One answer, of course, is that, at the
moment of your deliberation before choosing, you are aware
of your subjective experience in such a way that you know
that you are not being coerced by any outside influence,
and that you could have done otherwise. This has been the
standard response of those who wish to preserve a modicum of free will in a world completely determined by outside
forces. Perhaps philosophers will come to realize that the problem of free will has been an illusion, a pseudo-problem, from the beginning. In the past, it was thought that the possibilities
of moves on a chessboard were limitless, and thus, there
was corresponding freedom, which was also thought to be
limitless, in how one made the moves according to the rules.
But now we know full well that everything on a chessboard
is limited and has been captured without any remainder by
the machine. In a way, we have come to see what is there all
along—that there is no absolute freedom (infinite possibili-
ties) from the beginning. But that does not have to imply that
we thereby have lost our sense of responsibility. Nor does it
imply that we then become automata. Perhaps the trick lies
in figuring out what is best for us, what would be the kind of
life that would be the best for us to live, and arranging our
environment in such a way that it is conducive to bringing it
about. Perhaps, it is our very notion of freedom that should
be revisited in light of these developments.
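The chessboard point can even be made literal. As a toy example (mine, not the author's), enumerating a knight's legal moves shows that the options at any position are finite and fully computable, which is the sense in which the machine captures the game without remainder.

```python
# Toy enumeration: from any square, a knight's legal moves form a small,
# fully computable set -- a stand-in for the finiteness of the whole game.

def knight_moves(file: int, rank: int) -> list[tuple[int, int]]:
    """All squares a knight on (file, rank) can reach on an 8x8 board."""
    jumps = [(1, 2), (2, 1), (2, -1), (1, -2),
             (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    return [(file + df, rank + dr) for df, dr in jumps
            if 0 <= file + df < 8 and 0 <= rank + dr < 8]

print(len(knight_moves(0, 0)))  # 2 moves from a corner
print(len(knight_moves(3, 3)))  # 8 moves from the center
```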
Zuboff has pointed out to us an insidious form of control,
one that goes deep down to our subjective experiences and
preferences. It remains to be seen what the regulations and democratic control that she envisions will actually look like in practice. Furthermore, what makes this book also appropri-
ate for this special issue is that the digital traces left behind
by the users are taken up and in fact hermeneutically manip-
ulated by the algorithm. There is, thus, an untapped reser-
voir of questions and research topics that can be investigated
further. For example, what does it mean for an algorithm to
make sense of the trails of data? Is a hermeneutic process involved in its operation? And in what way?
To conclude, then, Zuboff’s book shows us that what
seems to be good about this predictive algorithm is that it
is a bonanza to advertisers. But that is not, and should not be, the best use of the technology. It needs to work for us, not for the giant corporations. We need to fight to cease being mere products and to take our
sense of agency back.
Publisher’s Note Springer Nature remains neutral with regard to
jurisdictional claims in published maps and institutional affiliations.