Journal of Cultural Economy
ISSN: 1753-0350 (Print) 1753-0369 (Online) Journal homepage: https://www.tandfonline.com/loi/rjce20
Algorithms, advertising and the intimacy of surveillance
Minna Ruckenstein & Julia Granroth
To cite this article: Minna Ruckenstein & Julia Granroth (2019): Algorithms, advertising and the intimacy of surveillance, Journal of Cultural Economy, DOI: 10.1080/17530350.2019.1574866
To link to this article: https://doi.org/10.1080/17530350.2019.1574866
Published online: 11 Feb 2019.
Algorithms, advertising and the intimacy of surveillance
Minna Ruckenstein (a) and Julia Granroth (b)
(a) Consumer Society Research Centre, Helsinki Center for Digital Humanities, University of Helsinki, Helsinki, Finland; (b) Social Anthropology, University of Helsinki, Helsinki, Finland
ABSTRACT
This article develops the notion of the intimacy of surveillance, a characteristic of contemporary corporate marketing and dataveillance fueled by the accumulation of consumers' economically valuable digital traces. By focusing on emotional reactions to targeted advertisements, we demonstrate how consumers want contradictory things: they oppose intrusive and creepy advertising based on tracking their activities, yet expect more relevant real-time analysis and probabilistic predictions anticipating their needs, desires, and plans. The tension between the two opposing aspects of corporate surveillance is crucial in terms of the intimacy of surveillance: it explains how corporate surveillance that is felt as disturbing can co-exist with pleasurable moments of being 'seen' by the market. The study suggests that the current situation, where social media users are trying to comprehend, typically alone with their devices, what is going on in terms of continuously changing algorithmic systems, is undermining public culture. This calls for collective responses to the shared pleasures and pains while living alongside algorithms. The everyday distress and paranoia to which users of social media are exposed is an indicator of failed social arrangements in need of urgent repair.
ARTICLE HISTORY
Received 31 March 2018
Accepted 25 December 2018
KEYWORDS
Corporate surveillance; the intimacy of surveillance; targeted advertising; corporate marketing; post-marketing; algorithm
Across various domains, in fields from health to communication, in political life and the private sphere, the tracking and surveillance of the daily actions of consumers is expanding and becoming ever more fine-grained (Pridmore and Lyon 2011, Zuboff 2015, Ruckenstein and Schüll 2017). Consumers contribute to data gathering when they purchase goods and services online, take part in customer loyalty programs, use online search engines, click advertisements, upload content to social media platforms, or sign into other services with their personal Google ID or Facebook profile. The capacity to gather and analyze individuals' behavioral and geo-locational data, combined with the digital economy's classificatory mechanisms, and backed by algorithmic techniques and large volumes of quantitative data, suggests a new kind of intertwining of consumers and market aims (Andrejevic 2014, Fourcade and Healy 2017). As Fourcade and Healy describe (2017, pp. 10–11), the use of data and rules for calculation and prediction has a longer history, but a shift can be detected in the way the market operates as a classifier: personal records and the scores and segments derived from them are now tradable objects that act back on people, shaping intimate experiences and promoting behavioral modification. 'Markets have learned to "see" in a new way, and are teaching us to see ourselves in that way, too,' as Fourcade and Healy observe (2017, p. 10).

This article takes a closer look at how the market 'sees' the consumer and how consumers react to that seeing by focusing on emotional responses to targeted advertisements in social media sites
© 2019 Informa UK Limited, trading as Taylor & Francis Group
CONTACT Minna Ruckenstein minna.ruckenstein@helsinki.fi Consumer Society Research Centre, PO Box 24, 00014 University of Helsinki, Helsinki, Finland
like Facebook and Instagram. The aim of the exercise is to posit digital marketing as a form of contemporary surveillance that employs consumption-oriented technologies and methods to render consumers as 'both known and knowable entities' (Pridmore and Lyon 2011, p. 115). We use the term 'dataveillance' to signal the shifts in the market, and in modes of surveillance that increasingly monitor users through social media and online communication technologies by means of sophisticated tracking technologies (Raley 2013, p. 124). Dataveillance aims at predictive and prescriptive outcomes in terms of consumer behavior, suggesting profound consequences for the market-consumer relationship. As van Dijck (2014, p. 205) argues, dataveillance penetrates 'every fiber of the social fabric,' going well beyond intentions of monitoring individuals for specific purposes.

We treat dataveillance as a product of the accumulation of data by the machinery of corporate marketing, which collects digital traces of consumers, including likes, downloads, shares, and eyeballs, that have potential economic value (Zuboff 2015). The actions of users of online services remain open and visible to the surveillers, service developers, and marketers but, in the everyday, practices of online tracking tend to fade into the background. Tracking technologies are tolerated and ignored despite their larger political-economy context of surveillance, characterized by privacy threats and opaque forms of datafied power; indeed, people have few options not to participate as data-generating subjects (Andrejevic 2014, Michael and Lupton 2016, Skeggs and Yuill 2016, Lupton and Michael 2017). Fourcade and Healy (2017, p. 17) describe how the 'matching' of consumer data traces and corporate goals feels natural and effortless when the market is a successful classifier: when the cues about behavior that are left behind and extracted from users of social media sites generate a superior service experience or become part of an improved digital infrastructure. Personalized recommendation systems, for example, are seen as useful because they allow people to navigate vast amounts of information and find music, movies, or restaurants that they would not learn about otherwise.
Ultimately, these digital economy developments introduce infrastructural changes, emphasizing larger aims in terms of societal and economic restructuring than advertising and sales. In this paper, however, we are particularly interested in what happens in the consumer-corporate relationship in terms of the intimacy of increasingly pervasive, market-connected surveillance techniques. As explorations of intimacy call for psychological and psychoanalytical assessments of individually charged fears and desires, the notion of intimacy might appear counterintuitive in the context of corporate marketing. The goal is to draw attention to intimacy in this context: not to promote or demonize intimacy in the course of surveillance, but to better understand connected practices. As will be explained further below, we use the term 'intimacy' in a non-individualistic sense; it is a spatial rather than a psychological concept, used for mapping the emotional territory of consumption-oriented corporate surveillance that either supports the alignment of market interests with those of consumers, or violates their privacy and self-understandings. By introducing and developing the notion of 'the intimacy of surveillance' (Berson 2015, p. 40), we can address the friction and ambivalence that accompany consumer reactions to corporate uses of personal data. Michael and Lupton (2016, p. 111) refer to the inconsistencies in opinions and the ambivalence that people feel towards corporate surveillance as 'oscillatory awareness,' emphasizing the co-existence of anxiety and routinized utility. Similarly, we demonstrate that consumer reactions to advertising and marketing practices remain various, situational, and deeply ambivalent.

The empirical data that we use for developing the notion of the intimacy of surveillance was gathered in a research project exploring everyday understandings of algorithms. We situate our analysis within the study of mundane experiences of data gathering and data analytics (Kennedy and Hill 2017, Lupton and Michael 2017, Kennedy 2018), highlighting shared understandings and reactions to ways in which corporately produced algorithms manifest in the everyday (Bucher 2017a, Seaver 2017). As Kennedy (2018) argues, research seeking to develop understanding of how ordinary people experience and live with data and algorithms is still rather limited. Critical public debate is intensifying over how data tracking by corporations undermines personal autonomy, liberty, and privacy, yet consumers continue to embrace the practices and products of social media and self-tracking. This 'new intimacy of surveillance,' as Berson (2015, p. 40) characterizes it, has been the focus of ethnographic inquiries into the social, narrative, and emotional dimensions of self-tracking and data practices (see Ruckenstein and Schüll 2017), yet, in terms of targeted advertising, little is known of how it might inform us about intimate aspects of corporate surveillance. Facebook's revenue is predominantly ad-driven, which makes it a powerful 'advertising oligopoly' with promises of 'accelerated time between product advert and sale' (Skeggs and Yuill 2016, p. 381). At the same time, however, Facebook is considered a personal and social space which, as we demonstrate, at least partly explains why consumers react to online advertisements within social media the way they do.
In the following, we first define what is meant by the intimacy of surveillance, then briefly outline the context for the advertisement encounters that we explore, before describing how we studied the 'seeing work' of the market and how it might be understood as either disturbing and unsuccessful, or relevant and pleasing. The work by market agents of seeing and knowing the tastes and social relations of consumers, particularly when conducted by mechanical forces, can remain imperceptible to those whose data traces are being used. The invisibility indicates that the digital infrastructure, of which personal data collection is an integral part, has become naturalized. At times, however, attempts to target consumer tastes and desires become observable as disturbing or unsuccessful, and it is these failures that are of particular interest in terms of the intimacy of surveillance. As other scholars have argued, a focus on failures and breakages rather than novelty and innovation is a fruitful starting point for thinking through the use and effects of digital technology (see Jackson 2014, Pink et al. 2018). We follow the failures of online marketing with the intent of underlining how the limitations of algorithm-driven ad buying and targeting are experienced. Skeggs and Yuill (2016) demonstrate that Facebook's algorithm is best at identifying 'attention-worthy' content for highly connected users who are part of an influential network, while for the less connected users the keywords used for placing advertisements might be completely off target. By examining incidents when marketing fails to match products and consumers in a satisfying manner, and the negative emotional reactions that are triggered, we cast light on the intimate aspects of consumption-oriented corporate surveillance, and reflect on what they suggest in terms of consumers, marketing, and future research.
The intimacy of surveillance
The concept of 'intimate surveillance' refers to the scrutiny of children and young people by parents, caretakers, and friends (Leaver 2015), or to the technologically enabled control of women by their husbands (Hannaford 2015). This focus has led to interest in the normalization of surveillance as care (Leaver 2017) and to suggestions that parental monitoring of babies and infants contributes to a surveillance culture wherein choosing not to survey can be read as a failure of good parenting. Rather than on the intra- or interpersonal dynamics of intimacy, however, the exploration promoted in this paper focuses on the intimacy of surveillance within the consumer-corporate relationship, arguing that the latter is also an important site of intimacy. As Jamieson (2011) notes in her historicization of the concept, intimacy refers to the quality of close connection between people and the process of building this.
By promoting the notion of the intimacy of surveillance, we build on the work of Moore (1999, p. 16, 2004), arguing for the usefulness of concept-metaphors to open up spaces within which truths, specifics, connections, and relationships can be presented and imagined. As a concept-metaphor, the intimacy of surveillance is defined in practice and in context; thus it is not a foundational concept, but a partial and perspectival framing device for exploring the surveillance aspects of contemporary corporate marketing. The intimacy of surveillance, when defined as a concept-metaphor, sensitizes us to the less stable and more ambivalent aspects of contemporary surveillance, facilitating the interrogation of practices that are seen as either supporting or violating personal autonomy and self-understandings. This kind of framing sheds light on where, how, and when corporate surveillance begins to threaten personal autonomy and is felt as intrusive, scary, or creepy (Lupton and Michael 2017, p. 267).
When intimacy is defined as a spatial concept, it is useful to focus on the boundaries: How does corporate marketing and related surveillance violate the consumer-corporate relationship by infringing on notions of personal autonomy? As we demonstrate, the boundary crossings become visible as emotional responses: people feel angry when Facebook monetizes their personal data (Skeggs and Yuill 2016, p. 387). They feel 'strange sensations' (Bucher 2017a, p. 35) when their actions are exposed to an outside surveiller: for instance, one visits a friend and immediately afterwards, that same friend appears on one's Facebook newsfeed. In order to identify where corporate marketing fails in its object, the moments that highlight misalignments between data uses and consumer aims are of key importance: following them we start to see how and why people feel powerless (Andrejevic 2014). What are the situations that invoke fear and threaten personal autonomy online? What kinds of data uses generate data insecurities? On the other hand, however, it is equally important to attend to the consensual and positive aspects of corporate surveillance. As Lupton and Michael (2017) describe, the cultural imaginary of surveillance celebrates intimate and intrusive forms of surveillance. Consumers also want to be intimately seen, known, and understood, which suggests that they embrace surveillance that is convenient and entertaining and digs deep into their social world (Ellerbrok 2011, Albrechtslund and Lauritsen 2013). From this perspective we ask: When does corporate marketing promote closeness and intimacy in the consumer-corporate relationship in a manner that makes surveillance tolerable or even stimulating and fun? Before introducing the empirical study, we outline the context for the targeted advertisement encounters that we explore, highlighting recent developments in the marketing field.
Marketing failures
The history of advertising makes it apparent that advertising has long been seen as a necessary means of encouraging consumption by creating demands and new markets (Dyer 1982). Over time, brokers of goods and services developed a number of scientific techniques to manipulate desires, tastes, consumption decisions, and purchases through advertising. The digital era is significantly increasing advertising and the use of data for sales purposes; online advertising space is expanding and much of this expansion is propelled by social media, the locus of the advertising discussed in this study. The advertising industry operates with the idea that as technology, software, and algorithms develop, the targeting of advertisements and content is becoming more sophisticated and effective (McStay 2016).
While commercial content and marketing messages have developed into an inescapable part of the digital media environment, marketers are also struggling with the weakening effects of traditional advertising. As Fourcade and Healy (2017, p. 23) describe, the old classifier of targeted advertising remains outside of the consumer's lifeworld, looking in: advertising is based on guessing likes and needs from general information, such as gender and age. Instead, the new classifier seeks to position itself inside consumers' lifeworlds, not remaining at the margins but entering the everyday, scouting and browsing it: 'the market sees you from within, measuring your body and emotional states, and watching as you move around your house, the office, or the mall' (Fourcade and Healy 2017, p. 23). Becoming a new classifier means abandoning the traditional advertising model, even highly targeted advertisements, in favor of one where consumers are more dynamically classified: 'The new idea is a personalized presence that is so embedded in daily routines that it becomes second nature' (Fourcade and Healy 2017, p. 23). In the advertising field, personalization translates into a heightened interest in automatic and involuntary consumer responses, emotional reactions, and behavioral clues that are seen to 'bypass cognition' (McStay 2016, p. 4).
Despite the recognition that traditional advertising might not be effective in the social media environment, a lot of the everyday marketing that people encounter is still based on traditional advertising principles. Thus, the context of the advertising discussed in this study is one in which, online, the so-called traditional advertising model is under pressure from two sides, the market and the consumer, both expecting improvements. Within this context, we can explore how the traditional advertising model and the move away from that model are experienced by focusing on how consumers feel about market agents that follow their actions in real time and proactively suggest actions and choices likely to be of interest to them. Online platforms and device-makers rely on algorithm-driven logics to distribute targeted advertisements. In social media platforms, the ads are physically placed either on the sidebar or within the newsfeed of the service; in the latter case, the ads are mixed with user-generated content. Machine-led targeting is supposed to support advertising goals, to seduce and manipulate in subtle ways that inspire and generate consumer desires and needs. Yet ads are also seen as completely irrelevant. Online marketing is frequently not appreciated or reacted to; it is merely digital noise or waste. A Reuters report notes that up to fifty percent of users install ad-blocking software on their devices and actively avoid sites where ads interfere with the content (Austin and Newman 2015). Participants in our study talk of blocking and actively ignoring advertisements. 'Facebook ads are a Turkish bazaar that I try to avoid,' as one of them put it.
By paying attention to online advertisements that trigger emotional responses, we can explore what distinguishes those advertisements or advertisement types from the usual flow of ads to the degree that people remember them and want to share their experiences. We follow the remembered with the idea that advertising that is recounted in interview situations operates as a fruitful entry point for thinking about the intimacy of surveillance. After presenting our empirical material we discuss what the findings of our study suggest in terms of the consumer-market relationship. We situate the findings within the framework of a recognized shift away from traditional advertising towards so-called 'post-marketing' (Zwick 2017) that attempts to respond to marketing failures by enhancing the intimacy of surveillance: the relationship between marketing, the self, and the everyday. In the post-marketing era, Zwick (2017) argues, marketing is not just recording, analyzing, and targeting; rather, it wants to become 'biopolitical,' a force that celebrates the productive value of life itself (Zwick and Bradshaw 2016, p. 95). This form of marketing is characterized by the interpenetration of the cultural and the economic that rejects any clear distinction between marketing and consumer (Zwick and Bradshaw 2016, p. 93). We argue that it is important to acknowledge that online marketing might be more likely to please consumers when it feels like post-marketing: the marketing efforts do not stand out, but are personalized in an inconspicuous manner, anticipating 'needs, desires, and plans before they are fully formed' (Fourcade and Healy 2017, p. 23). Our ambition here, however, is not to promote or celebrate post-marketing as the future of marketing, but to see it as an integral part of the consumer-market relationship that should be a subject of analysis.
The emotional as a methodological entry point
Inspired by the work of Bucher (2017a) on the 'algorithmic imagination', a way of thinking about how algorithmic systems are imagined and experienced and the kinds of imaginaries they enable and promote, our aim was to add to the scholarly work on how algorithms are experienced. The second author conducted semi-structured interviews in Helsinki, Finland, between June and August of 2017, which also offered time for the interviewees to discuss their feelings and observations concerning algorithmic systems. The interviews were conducted in people's homes, in cafes, or at the university, depending on what the research participants felt comfortable with. Initially, we used social media channels for recruitment, leading to snowball sampling beyond online networks. Overall, we had 25 research participants, 14 women and 11 men between the ages of 19 and 56; educational backgrounds ranged from primary school to a doctoral degree. The interviewees included students in various fields, a chef, an unemployed person, a lifestyle hippie, a post-doctoral researcher, a photographer, a radiographer, a practical nurse, a nutritionist, an internet marketer, and the product marketing manager of a security service.
Based on the empirical data, the term algorithm works well as a conversation opener. Much like Bucher's (2017a) research participants, ours did not know how Facebook's or Google's proprietary algorithms operate, but they clearly recognized the workings of algorithms online. All those who agreed to the interview knew, or thought they knew, what algorithms are. When algorithms are publicly discussed, the attention tends to focus on newsworthy revelations. In contrast, the everyday workings of algorithms are mostly observed alone, with associated feelings of astonishment or distress, particularly when their operating principles are not understood (Bucher 2017b). Seaver (2017) discusses the terminological confusion and anxiety around the algorithm: the term has drifted out of computer science into popular discourse, where it might refer to either a new kind of cultural phenomenon or authority, or a symbol of unwanted forces in the digital world (Gillespie 2016). Everyday understandings of algorithms are shaped by what is taught at school and discussed with friends or in the media; the interviewees were, for example, well-versed in terms of surveillance and privacy threats. The role of algorithms in political processes and the threat they pose for democratic decision-making and elections in the form of 'social media bubbles' and algorithmic biases were also discussed in detail, while future-oriented talk focused on the developing technical capacities and potentials of machine learning and artificial intelligence.
Interestingly, however, the interview material highlighted one unremarkable way in which algorithms become a part of everyday knowledge and experiences. Forms of social categorization exercised through algorithmic systems become 'known' and 'felt' through online marketing on social media sites. Yet the fact that targeted advertisements are one of the main ways by which people acquire firsthand knowledge about the workings of algorithms seemed such a banal finding that at first we ignored it. As we kept reading the interview transcripts, however, and arranging the material thematically, it became obvious that targeted advertisements constitute an opportunity to explore the intimacy of surveillance by way of emotional reactions to the workings of algorithms. Bucher (2017a, p. 35) talks about 'the intimate powers of algorithms,' and when we noted that the failures of machine-led marketing become a subject of critical commentary, we started seeing a pattern in how ads are understood to offer suggestions and evoke desires that do not align with personal aspirations and aims; this suggests that emotional reactions to ads, or at least how such reactions are shared in interviews, could operate as a fruitful methodological entry point for exploring how corporate marketing becomes an intrusion.
We follow the lead of Kennedy and Hill (2017) by treating narrated emotional reactions as a methodological aid in exploring everyday knowledge and experience. They situate reactions to data visualizations within the context of the sociology of emotions (Hochschild 2002, Bericat 2016), underlining the epistemological value of emotions in knowledge formation and the need to take emotions seriously in the analysis of social structures and arrangements. By tracing emotional reactions, we could distinguish three divergent emotional realms in our material – fear, irritation, and pleasure – that open a discursive vista into the ways marketing promotes or fails to promote closeness and intimacy in the consumer-corporate relationship. Emotional reactions to targeted advertisements are not only about the advertisements; rather, the emotional extends to discontents and pleasures connected with datafication, surveillance, market aims, identity pursuits, gender stereotypes, and self-understandings. By triggering similar responses, emotional reactions to ads reveal organizing principles shared by people of different backgrounds. Of course, as is typical of qualitative research, the findings of this study are limited to what our empirical material tells us. Yet the aspects of targeted advertising that the narrated emotional reactions expose are familiar to social media users; in that sense they are not remarkable or surprising in any way. When brought together, the narrated emotional reactions are not simply individual reactions; rather, they tell a more generalizable story of corporate surveillance's violation of notions of personal autonomy and privacy. Alternatively, when the line is not crossed, corporate surveillance may be seen as tolerable or even stimulating and entertaining. As Kennedy and Hill (2017, p. 12) argue, the emotional operates as an 'epistemic resource,' pointing towards the most important components for understanding what moves, annoys, or energizes people in terms of data uses and algorithms.
In light of our empirical material, the emotional reactions illustrate a situational field of positioning in terms of corporate marketing and surveillance. The emotional range of dataveillance is deeply ambivalent, explaining how the same person who criticizes the 'Big Brother' logic of the corporation can be pleasantly surprised by an advertisement that appears to inform her of exactly the right product. The fearful reactions are triggered by online targeted advertising that mimics the user's past behavior, thereby generating unpleasant sensations of being surveilled. From this perspective, corporate consumer surveillance appears as disturbance, a feeling of being followed, that somebody is peering over your shoulder. These experiences of surveillance align with the critical debate that focuses on how data tracking by corporations undermines personal autonomy and privacy.
The second area of emotional reactions, narrated as nuisance and irritation, is triggered by the machine logic of the ads that are shown. Interviewees complained that the ads operate in too general and mechanical a manner, relying on age-gender-location-based categories. Instead of subtly guiding consumers, classification schemes become visible as crude sorting mechanisms: young women complain that they are continually informed about beauty products and pregnancy tests; young men are targeted by dating sites and claims of 'hot singles near you.' Here the irritation with targeted advertisements is not directed at the dataveillance per se but, rather, at algorithmic operations, decisions, and choices that appear too rigid and rule-bound.
Finally, in contrast with the more negative responses, the third area of emotional reactions, the neutral and pleasurable, applies when algorithms operate the way people want, pleasantly surprising them with recommendations of music or movies. Overall, the pleasurable aspects of targeted advertisements were talked about in the interviews much less than the negative reactions; possibly, people take the pleasurable aspects for granted, or think that they are too apparent or light-hearted to be discussed in a research interview. As Fourcade and Healy (2017, p. 17) suggest, when consumer data traces and corporate goals align successfully, the work of the market is naturalized. The positive reactions that were shared, however, reveal how important the pleasurable experiences are in terms of understanding the intimacy of surveillance. We discuss the three areas in a more detailed manner in order to demonstrate their specific features; in people's talk, however, they are all aspects of the same dataveillance phenomenon.
Violating the intimacy of surveillance
In terms of felt violations of intimacy, fear is the emotional reaction that interviewees refer to most. Those interviewees who did not share fearful reactions in terms of dataveillance held it unlikely that their information would be misused. As one of the women explained: 'Admittedly, I do watch cat videos for close to two hours a day, which does say something about the quality of my life and the fact that I should get a life and so on.' At the same time, she emphasized that she has nothing to hide and hence no reason to be afraid. Another young woman explained that personally she is not bothered by data capture, but she does feel anxious about its societal expansion. Typically, fears are related to misuse of information, hacking, and identity theft. As Lupton and Michael (2017) demonstrate, it might not be the data-gathering itself that bothers people but, rather, damaging data movements and uses, particularly those that are not company-initiated: frauds and scams that violate both the consumer and the market. As one of their informants puts it: 'it depends on who's got the data.' The most pervasive fear reported by our interviewees, however, relates to the uncertainty about what and how much information is collected and what it is used for. They acknowledge that they have no way of knowing what is going on in terms of dataveillance: the corporate machinery operates like an intruder and a stalker, following people across online services like a shadow.
In the interviews, people described becoming exposed in unpleasant ways to the knowledge of
algorithms through cookies and targeted advertising. For instance, a practical nurse in her fifties
recounted that she had been thinking of buying a new cell phone and had compared phones on
the website of a chain store. As soon as she closed the site, she could see the same phones appearing
in her Facebook feed, something she found irritating, even a bit scary. She felt that her personal space
was being invaded and that she could not find peace online. A similar example was given by a man in his thirties, who talked about how, after buying flight tickets, advertisements offer hotels to the same
JOURNAL OF CULTURAL ECONOMY 7
destination. He called them 'wait a minute moments' that feel 'potentially dangerous'. The violation that these experiences speak of has to do with the awareness of 'being found' (Bucher 2017a, p. 35). They are everyday situations that offer glimpses of how the corporate surveillance machinery works.
Even if data analytics are not identifying certain individuals, but categorizing and classifying consumers and offering them marketing based on those classifications, the ads feel personally invasive. One of the research participants pondered how consumable things just appear on the screen. She says that the aptness of algorithms is scary at times. 'Do they know me completely?' she asks. The interviewees talk about situations that have startled them, or made them feel fearful and insecure, adding to a sense of paranoia about their safety online. Typically, these stories involve strange coincidences: one writes a Facebook update about a daughter's broken ankle and the next day there is a phone call from an insurance company offering accident insurance. The stories told reiterate how people suspect that their cell phones are tapped, private messages read, and their behavior and locations trailed. The way people discuss corporate surveillance underlines ad targeting as a public concern: people have no control over the data trails they leave behind and how they are taken advantage of. The sense of personal violation is intensified because of the impossibility of knowing how information is collected, who uses it, and for what purposes. In light of the concept-metaphor of the intimacy of surveillance, the moments of corporate surveillance that generate distress demonstrate how the sense of intimacy is injured or lost in the corporate-consumer relationship. If the new intimacy of surveillance is built on its acceptance and the closeness between the market and the consumer, corporate surveillance to which our research participants react negatively violates this social contract.
The market fails to see me
According to the participants of our study, targeted advertising is typically seen as superior to non-targeted, but it still tends to irritate social media users. One of them observed how, at first, he would press 'not a legal ad' to every advertisement offered. Then he started thinking that he should not block them all so as to receive more enjoyable examples than 'diaper ads', which annoyed him because he had no intention of having anything to do with babies in the near future. Irritation and nuisance follow the workings of algorithms: the interviewees repeatedly commented on how annoyed they were with the way they were categorized, emphasizing that statistically calculated groups and identities might be in conflict with how users of online services feel about and see themselves. One of the younger women complained that being shown pregnancy test ads because of her demographic profile irritates her, because she would like to continue to live a wild youth, but 'society is deciding that it is time to settle down'. She sees ads reflecting normative values that the algorithms are merely replicating; consequently she is 'known' as being of child-rearing age and encounters pictures of babies everywhere.
The algorithm-driven advertising that follows the grouping logic triggers annoyance because the
user is treated as a caricature and allocated an algorithmic identity based on one or two easily legible
characteristics, typically gender and age (Cheney-Lippold 2011). She would be interested in technology, one of the interviewees explained, but the ads she gets only feature 'women's technology', hair curlers and driers. Even if it is logical, following well-known gender stereotypes, to target a woman over forty with wrinkle cream ads, the participants of our study resisted this idea. Mostly women, but also some men, talked about how boring or infuriating it is that marketing is based on gender stereotypes, a reminder that the way people read advertisements reflects the larger values of local society: in Finland, a country with an established history of politics of gender equality (Holli 2003), women criticize gender stereotypes and this criticism extends to machine logics replicating those stereotypes.
Here, you could even argue that the machine is in conflict with society, suggesting that people are
very ready to probe machine logic if it limits or distorts their self-understandings. They might not
be able to work against algorithms, or design better ones, but at least they feel strongly that the
machine is not reading them right.
8 M. RUCKENSTEIN AND J. GRANROTH
Thus, in light of the intimacy of surveillance, ads irritate and feel dysfunctional because the digitally constructed categories of identity feel too crude, with the market failing to appreciate the many ways in which people pursue identities. The shared irritation felt about consumer grouping extends to a more general critique of the machine 'not seeing me'. A vegetarian might be offered meat dishes; a meat lover is targeted with a vegan cookbook. Failures of marketing are also felt in the form of a temporal lag: consumers are bombarded with ads for flights that are already bought and hotel rooms already booked. As one of the women complained, she gets ads for things she owns and not for things that she could buy. When targeted ads are not apt or relevant, they reflect life already lived and experiences already had. 'Algorithms reckon wrong', as one of the interviewees put it, emphasizing that the ads fail to anticipate needs or desires and replicate steps already taken. Here, the interviewees did not complain about corporate marketing because of violations of their personal autonomy or privacy. Instead, they talked about the discrepancy between the advertising machine and their lifeworld: in terms of their life, the machine is too slow and out of sync. They would prefer the machine-generated targeting to be more precise and personalized, agile and real-time, remaining ahead of them and predicting and anticipating where they are heading.
Pleasurable encounters with the market
The pleasurable aspects of targeted advertisement are linked to successful processing of information.
Advertisements can trigger a pleasurable feeling of recognition: the machine really knows me. When the classification work of the company feels successful, people talk about advertisements being timely and suitably personalized. A woman in her twenties talked about how she prefers aesthetically pleasing ads, clothes rather than sausages; sausages belong to a world that she wants to keep out of her social media. Successfully targeted ads market things that people want to see and that they could or will buy. A chef in her thirties was critical about cookies and corporate surveillance, but she enjoys 'apt' ads. She gave as an example a top and a t-shirt that she had bought after perfectly targeted marketing, because they had exactly the right slogans for her. They were pieces of clothing that she had to 'get' and despite problems with corporate surveillance, these purchases do not bother her in any way.
A research participant who generally enjoys algorithmic recommendations talked about how
market manipulation can become 'overly aggressive', a reminder of the balance needed to maintain the intimacy of surveillance. A part-time student/concierge enjoyed relevant ads, but also mentioned the risk of advertisements being too apt, arguing that it is a good thing that marketing is irrational: the less relevant the ads are, the more protected she is from desiring unneeded things. Marketing experiences are engendered within the larger context of corporate marketing, which is always potentially harmful in terms of privacy and personal autonomy and always potentially pleasurable in terms
of needs and desires.
For the five interviewees who had experience of self-initiated marketing online, satisfying encounters with the market extended to how they themselves had employed marketing techniques. Social
media marketing gives ordinary people the tools to set up their own marketing campaigns; they
can select where to post their ads and follow responses to them by way of data analytics. The
more consumers learn about online marketing, the more they can also behave like marketers (Abidin 2017). Digital forms of marketing, characterized by the interpenetration of the cultural and the economic, blur the boundaries between consumer and marketing, while marketing also promotes new consumer practices. Online, users are both producers and consumers of content, harnessing emotional energies to communicating things that are relevant for them.
Less and more surveillance
We have developed the notion of the intimacy of surveillance to study where, how, and when the line
is drawn beyond which corporate surveillance becomes too creepy and intrusive. We identified three emotional responses to corporate marketing, each revealing recurrent features of the consumer-corporate relationship. Not surprisingly, consumers have genuine concerns about the collection and
use of personal information; the powerlessness that people feel about corporate surveillance is well
documented, and concerns over personal data exploitation are not diminishing (Andrejevic 2014).
On the contrary, with the regular advent of new revelations about the extent of online surveillance
and consumer manipulation, consumers become even more concerned. Indeed, surveillance can
occur whether one actively uses social media services or not (Skeggs and Yuill 2016). Facebook infamously revealed that it keeps shadow profiles of people who have never signed up to the service. With layers of facial recognition being applied by social media platforms, simply being in a photograph, even if no tags or personal identifications are added, can be enough to leave a trail on a company database.
Within the culture of surveillance, it is easy to agree with interviewees who talk about our no longer being free or becoming slaves of the data giants (Andrejevic 2014, p. 1685). Yet consumers also tend to ignore and downplay losses of personal autonomy and privacy. The aim of advertisers is to be present in consumers' daily lives in ways that feel relevant and meaningful, and this is what the users of online sites might also be after: they talk about ads not reflecting their needs and interests, indirectly expressing a desire not for less, but for more, targeted advertising. Exploring the intimacy of surveillance suggests that consumers might criticize advertising that does not assist them personally to find objects and aims related to their lives; they do not want advertising that fails to touch them. From this perspective, consumers clearly want contradictory things: they oppose ad targeting based on tracking their activities and demographic profiles, but also want more real-time analysis and better probabilistic predictions. The pleasurable encounters with marketing, exemplified by the successful alignment of advertisements and pursued identities, are of key importance in terms of the intimacy of surveillance. Zwick (2017) discusses the 'affect of relevance' triggered by the pleasure of being seen by the market and of being unobtrusively guided and manipulated. Importantly, the affect of relevance overrides concerns about privacy and marketing manipulation; it is at the heart of the new intimacy of surveillance. As Leaver argues (2017, p. 4), 'the emotional impact is more effective than the relationship with more recognizable forms of knowledge'. When marketing is felt as relevant, consumers feel less reluctance in giving in and following marketing messages. They fail to see or feel that personal autonomy is being manipulated, or personal space invaded.
Our study underlines the importance of not letting the tension and ambivalence between the experienced positive and negative aspects of digital marketing fade into the background when seeking to understand dataveillance: it explains how corporate surveillance that is disturbing and unpleasant can co-exist with pleasurable moments of being 'seen'. The contradictory aspects of surveillance call for more research on the affective aspects of the intimacy of surveillance, which attends to how the affective becomes a key element in the process of normalization and intensification of surveillance (Leaver 2017, p. 3). We suggest that a fruitful direction in which to open discussion with research participants is how felt personal autonomy and liberty intertwine with the ways different kinds of consumers are sorted and classified by machinic agencies. This connects to the study of how machine classifications are perceived: whether people think that they have been appropriately recognized by algorithmic systems or simply offered a rerun of the past or caricatures and stereotypes that reveal how little the machines understand us despite all the talk about their classificatory powers.
Towards post-marketing
We have argued for the usefulness of focusing on failures of automated targeted advertising, exemplified by how crude machine logics disturb consumers online. The failures resonate with the prediction that the age of generic and even 'niche' marketing is 'slowly coming to an end' (Fourcade and Healy 2017, p. 23). Our study suggests that online marketing pleases consumers more when it feels like post-marketing. Replicating the logic of dataveillance, marketing becomes 'deeply inserted into, and increasingly indistinguishable from, the fabric of everyday life', as Zwick and Bradshaw (2016, p. 93) argue. As we have observed above, corporate marketing disturbs when ads feel too invasive and aggressive, triggering evaluations of whether the marketing is relevant. The remembered failures of marketing, when advertisements have been regarded as neither interesting nor seductive, demonstrate misalignments and moments of disillusionment associated with machine-led, human-technology interactions. In response to such misalignments, biopolitical or post-marketing calls for ways the market can 'make life' and better anticipate and parasitically latch onto consumer needs and desires (Zwick and Bradshaw 2016). One way of doing this is to engage proactive machines, such as recommendation engines, in the marketing work in order to promote consumer needs and desires that are embedded in everyday aims.
Based on the empirical material, the move towards post-marketing is awaited by those consumers
who wish that the market would perform better in terms of anticipating and molding their needs and
desires. The positive responses to marketing messages promoting relevant means for identity work
and identification, for instance in the form of a slogan t-shirt, are evidence that corporate surveillance is gradually preparing us for more intimate surveillance. The emotional reactions uncovered by focusing on the algorithmic imaginary suggest the longing for passive metrics to become more active, offering personalized and proactive action and variety, surprises, and suggestions beyond
stereotypes. The risks and harms of everyday surveillance are evident to consumers, but the promise
of commodifying life without consumer-corporation antagonism is seductive (Zwick and Bradshaw
2016, p. 96).
Concluding remarks
Post-marketing is a vision or a professional utopia that is highly unlikely to materialize fully in the
digital marketing world. The reality that consumers are facing online is that even if they were ready
for new visions of marketing, targeted advertisements continue to hit and miss their lifeworlds. As a
related development, proactive machines, in the form of search engines and recommendation systems, have started to behave like marketing devices. Biopolitical forms of marketing are also being promoted by consumers themselves. Marketing takes place beyond the advertising and marketing profession; online we can all become everyday marketers, promoting ourselves and our causes as 'influencers' (Abidin 2017). The market is teaching us to see ourselves not only as data-generating subjects for the corporate surveillance machinery, but also as everyday marketers, capable of segmenting and shaping the world with our own targeted ads.
When targeted ads become part of online experiences, they are not merely offering purchasing suggestions, but are transformed into shaping forces in how people see themselves, others,
and their online environment. This is what the participants of our study were also indicating
when they discussed the sociotechnical realities of which algorithmic systems are part. The
lack of knowledge about how their personal data traces are tampered with is distressing, making the desire for a more algorithm-aware society understandable. The current situation in which people are trying to comprehend, typically alone with their devices, what is going on in terms of continuously changing algorithmic systems, is clearly undermining public culture. This
calls for collective responses to the shared pleasures and pains of living alongside algorithms.
The way forward suggested in this paper takes seriously emotional reactions in terms of dataveillance and uses them as evidence of recurring failures. The everyday distress and paranoia
to which users of social media are exposed are indicators of failed social arrangements in
need of urgent repair.
Acknowledgements
We thank Detlev Zwick and Vassilis Charitsis for valuable comments and critiques of an earlier version of the article.
Special thanks also to Franck Cochoy and the organisers and participants of the Digitalising markets and consumption
workshop in Gothenburg.
JOURNALOFCULTURALECONOMY 11
Disclosure statement
No potential conflict of interest was reported by the authors.
Funding
This work was supported by The Helsingin Sanomat Foundation.
Notes on contributors
Minna Ruckenstein is an associate professor at the Consumer Society Research Centre and the Helsinki Center for
Digital Humanities, University of Helsinki. Her research focuses on the emotional, social, political and economic
aspects of datafication and current and emerging data practices.
Julia Granroth holds a BA in social anthropology from the University of Helsinki. She has worked as a researcher in a
project led by Ruckenstein that studies everyday understandings of algorithms.
ORCID
Minna Ruckenstein http://orcid.org/0000-0002-7600-1419
References
Abidin, C., 2017. #familygoals: family influencers, calibrated amateurism, and justifying young digital labor. Social Media + Society, 3 (2), 205630511770719.
Albrechtslund, A. and Lauritsen, P., 2013. Spaces of everyday surveillance: unfolding an analytical concept of participation. Geoforum, 49, 310–316.
Andrejevic, M., 2014. The big data divide. International Journal of Communication, 8, 1673–1689.
Austin, S. and Newman, N., 2015. Attitudes to sponsored and branded content (Native Advertising). Digital news
report 2015 [online]. Reuters Institute for the Study of Journalism. Available from: http://www.digitalnewsreport.org/essays/2015/attitudes-to-advertising/ [Accessed 27 March 2018].
Bericat, E., 2016. The sociology of emotions: four decades of progress. Current Sociology, 64 (3), 491–513.
Berson, J., 2015. Computable bodies: instrumented life and the human somatic niche. London: Bloomsbury.
Bucher, T., 2017a. The algorithmic imaginary: exploring the ordinary affects of Facebook algorithms. Information, Communication & Society, 20 (1), 30–44.
Bucher, T., 2017b. Neither black nor box: ways of knowing algorithms. In: S. Kubitschko and A. Kaun, eds. Innovative
methods in media and communication research. Cham: Palgrave Macmillan, 81–98.
Cheney-Lippold, J., 2011. A new algorithmic identity: soft biopolitics and the modulation of control. Theory, Culture &
Society, 28 (6), 164–181.
Dyer, G., 1982. Advertising as communication. London: Routledge.
Ellerbrok, A., 2011. Playful biometrics: controversial technology through the lens of play. The Sociological Quarterly, 52 (4), 528–547.
Fourcade, M. and Healy, K., 2017. Seeing like a market. Socio-Economic Review, 15 (1), 9–29.
Gillespie, T., 2016. Algorithm. In: B. Peters, ed. Digital keywords: a vocabulary of information society and culture.
Princeton, NJ: Princeton University Press, 18–30.
Hannaford, D., 2015. Technologies of the spouse: intimate surveillance in Senegalese transnational marriages. Global
Networks, 15 (1), 43–59.
Hochschild, A.R., 2002. The sociology of emotion as a way of seeing. In: G. Bendelow and S.J. Williams, eds. Emotions in social life. London: Routledge, 31–44.
Holli, A-M., 2003. Discourse and politics for gender equality in late twentieth century Finland. Acta Politica 23. Helsinki:
University of Helsinki.
Jackson, S.J., 2014. Rethinking repair. In: T. Gillespie, P. Boczkowski, and K. Foot, eds. Media technologies: essays on
communication, materiality and society. Cambridge, MA: The MIT Press, 221–239.
Jamieson, L., 2011. Intimacy as a concept: explaining social change in the context of globalisation or another form of
ethnocentricism? Sociological Research Online, 16 (4), 1–13.
Kennedy, H., 2018. Living with data: aligning data studies and data activism through a focus on everyday experiences
of datafication. Krisis: Journal for Contemporary Philosophy, (1), 18–30.
Kennedy, H. and Hill, R.L., 2017. The feeling of numbers: emotions in everyday engagements with data and their visualisation. Sociology [online]. Available from: https://doi.org/10.1177/0038038516674675.
12 M. RUCKENSTEIN AND J. GRANROTH
Leaver, T., 2015. Born digital? Presence, privacy, and intimate surveillance. In: J. Hartley and W. Qu, eds. Re-orientation: translingual transcultural transmedia. Studies in narrative, language, identity, and knowledge. Shanghai: Fudan University Press, 149–160.
Leaver, T., 2017. Intimate surveillance: normalizing parental monitoring and mediation of infants online. Social Media
& Society [online], 3 (2). Available from: https://doi.org/10.1177/2056305117707192 [Accessed 27 March 2018].
Lupton, D. and Michael, M., 2017. 'Depends on who's got the data': public understandings of personal digital dataveillance. Surveillance & Society, 15 (2), 254–268.
McStay, A., 2016. Empathic media and advertising: industry, policy, legal and citizen perspectives (the case for intimacy). Big Data & Society, 3 (2), 205395171666686.
Michael, M. and Lupton, D., 2016. Toward a manifesto for the public understanding of big data. Public Understanding of Science, 25 (1), 104–116.
Moore, H., 1999. Anthropological theory at the turn of the century. In: H. Moore, ed. Anthropological theory today.
Cambridge: Polity Press, 1–23.
Moore, H., 2004. Global anxieties: concept-metaphors and pre-theoretical commitments in anthropology.
Anthropological Theory, 4 (1), 71–88.
Pink, S., et al., 2018. Broken data: conceptualising data in an emerging world. Big Data & Society, 5 (1), 2053951717753228.
Pridmore, J. and Lyon, D., 2011. Marketing as surveillance: assembling consumers as brands. In: D. Zwick and J. Cayla,
eds. Inside marketing: practices, ideologies, devices. New York, NY: Oxford University Press, 115–136.
Raley, R., 2013. Dataveillance and counterveillance. In: L. Gitelman, ed. Raw data is an oxymoron. Cambridge: MIT
Press, 121–145.
Ruckenstein, M. and Schüll, N.D., 2017. The datafication of health. Annual Review of Anthropology, 46, 261–278.
Seaver, N., 2017. Algorithms as culture: some tactics for the ethnography of algorithmic systems. Big Data & Society, 4 (2), 205395171773810.
Skeggs, B. and Yuill, S., 2016. Capital experimentation with person/a formation: how Facebook's monetization refigures the relationship between property, personhood and protest. Information, Communication & Society, 19 (3), 380–396.
Van Dijck, J., 2014. Datafication, dataism and dataveillance: big data between scientific paradigm and ideology. Surveillance and Society, 12 (2), 197–208.
Zuboff, S., 2015. Big other: surveillance capitalism and the prospects of an information civilization. Journal of Information Technology, 30, 75–89.
Zwick, D., 2017. Digital marketing, biopolitical marketing and the utopia of a post-marketing world. In: Digital Consumption Research Network workshop, May 2017, Sweden: University of Gothenburg, 18–19.
Zwick, D. and Bradshaw, A., 2016. Biopolitical marketing and social media brand communities. Theory, Culture &
Society, 33 (5), 91–115.
... However, algorithmic feedback loops are not implemented merely to govern and order social action. In addition to 'harder' aims of control, a 'softer' aim is to build more intimate human-machine relations (Ruckenstein and Granroth, 2020). Scholars have pointed out how digital surveillance and algorithmic technologies are used to seek increasingly personalized and emotionally based connections. ...
... In addition to the more informed aspects of autonomy, research has called attention to the crucial role of attachments, vulnerability and emotional agency in and for autonomous personhood (Govier, 1993;Helm, 1996;Mackenzie, 2014b;Nedelsky, 1989). These aspects of autonomy are best understood as activated by a related but analytically separable development in human-algorithm relations, where the aim is not to control at a distance but to cultivate an increasingly close and emotionbased relationship with the consumer through the use of personalization-enabling algorithmic techniques (Ruckenstein and Granroth, 2020). As a result, in the everyday, algorithms are not responded to merely as cold, technical procedures but give rise to affective engagements. ...
... Whereas the quest for autonomy as breathing space is triggered by the way algorithmic technologies press against the intimate sphere, autonomy is also felt as enhanced by the personal nature of algorithmic systems. Successful personalization generates pleasurable feelings of being 'seen' and 'recognised' by the algorithmic system (Ruckenstein and Granroth, 2020). Here, the intimate horizon directs attention to sharing personal space and confidential information with algorithmic systems. ...
Article
Full-text available
This article reorients research on agentic engagements with algorithms from the perspective of autonomy. We separate two horizons of algorithmic relations – the instrumental and the intimate – and analyse how they shape different dimensions of autonomous agency. Against the instrumental horizon, algorithmic systems are technical procedures ordering social life at a distance and using rules that can only partly be known. Autonomy is activated as reflective and informed choice and the ability to enact one’s goals and values amid technological constraints. Meanwhile, the intimate horizon highlights affective aspects of autonomy in relation to algorithmic systems as they creep ever closer to our minds and bodies. Here, quests for autonomy arise from disturbance and comfort in a position of vulnerability. We argue that the dimensions of autonomy guide us towards issues of specific ethical and political importance, given that autonomy is never merely a theoretical concern, but also a public value.
... In the literature, the opacity of algorithmic governance has often steered comparisons with 13 Foucault's panopticist views of society (Cheney-Lippold 2011; Bucher 2012b). However, recent empirical works have highlighted how consumers actively attempt to make sense, in a bottom-up way, of the obscure functioning of platform-based technologies, such as the recommender algorithm of Spotify (Siles et al. 2020) or the advertising systems of Facebook and Instagram (Ruckenstein and Granroth 2020). ...
... Therefore, consumers' negotiations of misaligned algorithmic outputs may remain entirely implicit, such as when one distractedly hides a disturbing Facebook ad, or ignores an automated recommendation. Nonetheless, even these unreflective reactions produce datafied feedback that affects algorithmic behavior in a novel iteration of the circuit, thus pivoting the algorithmic articulation toward the consumer-empowerment end of the continuumfor example, by providing more 'relevant' and 'pleasing' content in the future (Ruckenstein and Granroth 2020). ...
Article
Full-text available
This article conceptualizes algorithmic consumer culture, and offers a framework that sheds new light on two previously conflicting theorizations: that (1) digitalization tends to liquefy consumer culture and thus acts primarily as an empowering force, and that (2) digitalized marketing and big data surveillance practices tend to deprive consumers of all autonomy. By drawing on critical social theories of algorithms and AI, we define and historicize the now ubiquitous algorithmic mediation of consumption, and then illustrate how the opacity, authority, non-neutrality, and recursivity of automated systems affect consumer culture at the individual, collective, and market level. We propose conceptualizing ‘algorithmic articulation’ as a dialectical techno-social process that allows us to enhance our understanding of platform-based marketer control and consumer resistance. Key implications and future avenues for exploring algorithmic consumer culture are discussed.
... technophobia) among the general public is one of the main concerns for AI developers (McClure 2018). Also, people are legitimately concerned about corporate surveillance and algorithmic bias (Katzenbach and Ulbricht 2019;Ruckenstein and Granroth 2020). Two-thirds of participants in a survey predict that AI will do much of the jobs currently done by human beings within five decades (Smith 2016). ...
Article
This study examined the possibility of cooperation between human and communicative artificial intelligence (AI) by conducting a prisoner’s dilemma experiment. A 2 (AI vs human partner) × 2 (cooperative vs non-cooperative partner) between-subjects six-trial prisoner’s dilemma experiment was employed. Participants played the strategy game with a cooperative AI, non-cooperative AI, cooperative human, and non-cooperative human partner. Results showed that when partners (both communicative AI and human partners) proposed cooperation on the first trial, 80% to 90% of the participants also cooperated. More than 75% kept the promise and decided to cooperate. About 60% to 80% proposed, committed, and decided to cooperate when their partner proposed and kept the commitment to cooperate across trials, no matter whether the partner was a cooperative human or communicative AI. Overall, participants were more likely to commit and cooperate with cooperative AI partners than with non-cooperative AI and human partners.
... These imaginaries and feelings were connected to the participants' desire for efficiency and convenience; an app that knew and understood the preferences and movements of its user as well as possible was favourably viewed. This aspect of our findings is similar to those from other Australian-based research (Lupton, 2020) as well as Finnish (Ruckenstein and Granroth, 2019) and Danish (Lomborg and Kapsch, 2019) studies on people's attitudes towards algorithmic personalisation and personal data profiling and their desire to be better known and understood by apps and other software. However, our findings extend this previous body of research by identifying new facets of app use which surface the unexpected ways in which diverse apps facilitated relational connections. ...
Article
Purpose In this article, the authors aim to explore mobile apps as both mundane and extraordinary digital media artefacts, designed and promoted to improve or solve problems in people's lives. Drawing on their “App Stories” project, the authors elaborate on how the efficiencies and affordances credited to technologies emerge and are performed through the specific embodied practices that constitute human–app relationships. Design/methodology/approach The project involved short written accounts in an online survey from 200 Australian adults about apps. Analysis was conducted from a sociomaterial perspective, surfacing the emotional and embodied responses to and engagements with the apps; the relational connections described between people and their apps or with other people or objects; and what the apps enabled or motivated people to do. Findings Findings point to three salient concerns about apps: (1) the need for efficiency; (2) the importance and complexity of human relationships and maintaining these connections; and (3) the complex relationships people have with their bodies. These concerns are expressed through themes that reflect how everyday efficiencies are produced through human–app entanglements; apps as relational agents; apps' ability to know and understand users; and future app imaginaries. Originality/value This project explores the affective and embodied dimensions of app use and thinks through the tensions between the extraordinary and mundane dimensions of contemporary techno-social landscapes, reflecting on how apps “matter” in everyday life. Our analysis surfaces the active role of the body and bodily performances in the production of app efficiencies and underlines the ways mobile apps are always situated in relation to other media and materialities.
... The non-neutral working of algorithms affects the way in which we perceive the technological infrastructure. For example, targeted ads can make it very evident to users that their online interactions are being monitored and consequently create a sense of distrust towards their devices (Ruckenstein & Granroth, 2020). In recent years, some researchers have attempted to "decode" how algorithms shape our behaviour, monitor our activity and remain constantly present, if invisible, elements of our lives influencing our self-perception. ...
Article
Full-text available
In this article, we address the case of self-tracking as a practice in which two meaningful backgrounds (physical world and technological infrastructure) play an important role as the spatial dimension of human practices. Using a (post)phenomenological approach, we show how quantification multiplies backgrounds, while at the same time generating data about the user. As a result, we can no longer speak of a unified background of human activity, but of multiple dimensions of this background, which, additionally, is perceived as having no pivotal role in the process, often being hidden, situated beyond human consciousness, or taken for granted. Consequently, the phenomenological experience of the background turns into a hermeneutic practice focused on the interpretation of representations and descriptions. By adopting a (post)phenomenological approach, we show the problems and limitations of quantification of human activities occurring in self-tracking and the theoretical problems associated with the scheme of human-technology relations.
... Since at least the inauguration of market research, organizations have had a keen interest in capturing, accumulating, and analyzing data related to existing and potential customers of their products and services to understand their interests, preferences, and experiences with a view to developing predictive capabilities about future demands and aspirations, effective targeted advertising, new product development and up- and cross-selling of products (Mariani and Wamba, 2020; Ruckenstein and Granroth, 2020). With the gradual but persistent transition of organization-consumer touchpoints to digital platforms and systems, organizations have had near free rein for several decades to design digital infrastructure and systems that not only capture consumers' digital traces, but also encourage them to share an increasing amount of information about themselves (e.g., see Cusumano et al., 2021). ...
Article
Full-text available
Organizations continue to create digital interfaces and infrastructure that are designed to heighten consumers’ online visibility and encourage them to part with their data. The way these digital systems operate and the rules they are governed by are often opaque, leaving consumers to deploy their own strategies for managing their online information sharing with organizations. In this study, we draw upon Erving Goffman’s metaphor of expression games and three forms of concealment or cover moves to explore how consumers, who have been well socialized as digital natives, engage in dynamic and game-like interactions with organizations in an attempt to manage their level of online visibility and information sharing in relation, inter alia, to the ‘convenience’ and ‘benefits’ that are afforded to them. Our research is based on in-depth interviews in combination with photo-elicitation with 20 participants. Based on the insight generated, we offer a new framework, ‘Propensity to Game’ (P2G), which presents the processual dynamics that characterize these consumers’ evolving and game-like engagements with organizations. These are Game Awareness, Rule Familiarization, Player Commitment and Game Play. Our work contributes new insight into how these consumers actively engage in the orchestration of their online visibility by surfacing the nuanced and multifaceted decision-making and thought processes that they engage in when they, situation-by-situation, decide on the tactics and methods to use in their efforts to manage the data and information they share with organizations.
... Yet, the automation logic is not the same everywhere, nor does it operate with the same kind of intensity on every occasion of use or in every geographical location. People can and do resist, and, indeed, they may call for more customisation and personalisation (Lupton, 2019; Ruckenstein and Granroth, 2020) or even an expansion of datafication of their lives so that their needs are better met (Milan and Treré, 2020). If we believe that human life can be limitlessly captured with datafying technologies, we are giving far too much credit to technologies and far too little to the human agencies involved. ...
Chapter
Full-text available
Everyday life is increasingly automated with the use of new and emerging digital technologies and systems. Discussion of these automated technologies is often shrouded with narratives which highlight extreme and spectacular examples, rather than the ordinary mundane realities that characterise the overwhelming majority of people's actual encounters with them.
Article
Full-text available
How is the ongoing “datafication” in society experienced by consumers? Critical discussions regarding the impact of datafication on consumers seldom study consumers’ actual experiences. Conversely, the studies of consumers’ experiences of datafication that do exist tend to take an individualistic approach, arguing that how consumers experience and respond to the ongoing datafication is the result of their individual psychological make-up or of processes of cost–benefit calculation. Against that background, this article will instead show that the ways in which consumers experience and respond to datafication are linked to a number of broader sociotechnical imaginaries. Based on in-depth user interviews and drawing on previous work on sociotechnical imaginaries, this article develops an analysis of consumers’ multiple imaginaries of data collection practices. Findings show that how consumers approach data collection operations is shaped by sociotechnical imaginaries that were both individually and collectively performed by consumers interacting with and using data-collecting devices.
Chapter
Datafication is a social and political process that has mainly been led by powerful commercial interests, leaving the citizens of datafied societies as mere bystanders. How could a datafied society become a welfare data society, a society that takes care of all citizens’ rights and wellbeing by providing them sufficient means to cope with a datafied everyday life? In this chapter, I claim that in a data society the rights and wellbeing of citizens are strengthened through education: by increasing the level of digital and data infrastructure literacy. While regulations such as the GDPR are much needed, they are only effective if citizens understand how to use the rights they grant them. Our workshops with users showed that, on average, people are capable of forming a considered opinion on fair data-gathering practices. Furthermore, they were able to discuss and even develop new ideas about how they would like data gathering to be organised and regulated after being introduced to data collection in practice. Basic education in European countries has already made efforts to improve digital literacy, but education on digital literacy and especially data infrastructure literacy should also reach older generations. In this chapter, I propose that public service media should also play a significant role in strengthening citizenship through education in a datafied society, as demonstrated by the Finnish public broadcasting company YLE, which has already taken on that role. The results from our cooperation with YLE Learning show that public service media (PSM) already possess inventive means through which different kinds of users can be reached. Still, more coordinated cooperation is needed among different public institutions and European PSM to increase the general level of data infrastructure literacy.
Article
People’s ideas and practices concerning their personal data and digital privacy have received growing attention in social inquiry. In this article, we discuss findings from a study that adopted the story completion method together with a theoretical perspective building on feminist materialism to explore how people make sense of and respond to digital privacy dilemmas. The Digital Privacy Story Completion Project presented participants with a set of four story prompts (‘stems’) for them to complete. Each introduced a fictional character facing a privacy dilemma related to personal data generated from their online interactions or app use. Our analysis surfaces how privacy is imagined as simultaneously personal and social, redolent with affective intensities, and framed through relational connections of human and nonhuman agents. While the story stems involved scenarios using digital technologies, participants’ stories extended beyond the technological. These stories offer insight into why and how the potential for and meaning of digital privacy unfolds into more-than-digital worlds.
Article
Full-text available
In this article, we introduce and demonstrate the concept-metaphor of broken data. In doing so, we advance critical discussions of digital data by accounting for how data might be in processes of decay, making, repair, re-making and growth, which are inextricable from the ongoing forms of creativity that stem from everyday contingencies and improvisatory human activity. We build and demonstrate our argument through three examples drawn from mundane everyday activity: the incompleteness, inaccuracy and dispersed nature of personal self-tracking data; the data cleaning and repair processes of Big Data analysis; and how data can turn into noise and vice versa when they are transduced into sound within practices of music production and sound art. This, we argue, is a necessary step for considering the meaning and implications of data as it is increasingly mobilised in ways that impact society and our everyday worlds.
Article
Full-text available
This article responds to recent debates in critical algorithm studies about the significance of the term “algorithm.” Where some have suggested that critical scholars should align their use of the term with its common definition in professional computer science, I argue that we should instead approach algorithms as “multiples”—unstable objects that are enacted through the varied practices that people use to engage with them, including the practices of “outsider” researchers. This approach builds on the work of Laura Devendorf, Elizabeth Goodman, and Annemarie Mol. Different ways of enacting algorithms foreground certain issues while occluding others: computer scientists enact algorithms as conceptual objects indifferent to implementation details, while calls for accountability enact algorithms as closed boxes to be opened. I propose that critical researchers might seek to enact algorithms ethnographically, seeing them as heterogeneous and diffuse sociotechnical systems, rather than rigidly constrained and procedural formulas. To do so, I suggest thinking of algorithms not “in” culture, as the event occasioning this essay was titled, but “as” culture: part of broad patterns of meaning and practice that can be engaged with empirically. I offer a set of practical tactics for the ethnographic enactment of algorithmic systems, which do not depend on pinning down a singular “algorithm” or achieving “access,” but which rather work from the partial and mobile position of an outsider.
Article
Full-text available
Over the past decade, data-intensive logics and practices have come to affect domains of contemporary life ranging from marketing and policy making to entertainment and education; at every turn, there is evidence of “datafication” or the conversion of qualitative aspects of life into quantified data. The datafication of health unfolds on a number of different scales and registers, including data-driven medical research and public health infrastructures, clinical health care, and self-care practices. For the purposes of this review, we focus mainly on the latter two domains, examining how scholars in anthropology, sociology, science and technology studies, and media and communication studies have begun to explore the datafication of clinical and self-care practices. We identify the dominant themes and questions, methodological approaches, and analytical resources of this emerging literature, parsing these under three headings: datafied power, living with data, and data–human mediations. We conclude by urging scholars to pay closer attention to how datafication is unfolding on the “other side” of various digital divides (e.g., financial, technological, geographic), to experiment with applied forms of research and data activism, and to probe links to areas of datafication that are not explicitly related to health.
Article
Full-text available
Parents are increasingly sharing information about infants online in various forms and capacities. To more meaningfully understand the way parents decide what to share about young people and the way those decisions are being shaped, this article focuses on two overlapping areas: parental monitoring of babies and infants through the example of wearable technologies, and parental mediation through the example of the public sharing practices of celebrity and influencer parents. The article begins by contextualizing these parental practices within the literature on surveillance, with particular attention to online surveillance and the increasing importance of affect. It then gives a brief overview of work on pregnancy mediation and monitoring on social media and via pregnancy apps, which is the obvious precursor to examining parental sharing and monitoring practices regarding babies and infants. The examples of parental monitoring and parental mediation then build on the idea of “intimate surveillance,” which entails close and seemingly invasive monitoring by parents. Parental monitoring and mediation contribute to the normalization of intimate surveillance to the extent that surveillance is (re)situated as a necessary culture of care. The choice not to surveil infants is thus positioned, worryingly, as a failure of parenting.
Article
Full-text available
Following in the celebrity trajectory of mommy bloggers, global micro-microcelebrities, and reality TV families, family Influencers on social media are one genre of microcelebrity for whom the “anchor” content in which they demonstrate their creative talents, such as producing musical covers or comedy sketches, is a highly profitable endeavor. Yet, this commerce is sustained by an undercurrent of “filler” content wherein everyday routines of domestic life are shared with followers as a form of “calibrated amateurism.” Calibrated amateurism is a practice and aesthetic in which actors in an attention economy labor specifically over crafting contrived authenticity that portrays the raw aesthetic of an amateur, whether or not they really are amateurs by status or practice, by relying on the performance ecology of appropriate platforms, affordances, tools, cultural vernacular, and social capital. In this article, I consider the anatomy of calibrated amateurism and how this practice relates to follower engagement and responses. While some follower responses have highlighted concerns over the children’s well-being, a vast majority overtly signal their love, support, and even envy toward such parenting. I draw on ethnographically informed content analysis of two groups of family Influencers on social media to illustrate the enactment and value of calibrated amateurism in an increasingly saturated ecology and investigate how such parents justify the digital labor in which their children partake to produce viable narratives of domestic life.
Article
Full-text available
Post-Snowden, several highly-publicised events and scandals have drawn attention to the use of people's personal data by other actors and agencies, both legally and illicitly. In this article, we report the findings of a project in which we used cultural probes to generate discussion about personal digital dataveillance. Our findings suggest the prevailing dominance of tacit assumptions about the uses and benefits of dataveillance as well as fears and anxieties about its possible misuse. Participants were able to identify a range of ways in which dataveillance is conducted, but were more aware of obvious commercial and some government actors. There was very little identification of the types of dataveillance that are used by national security and policing agencies or of illegal access by hackers and cybercriminals. We found that the participants recognised the value of both personal data and the big aggregated data sets that their own data may be part of, particularly for their own convenience. However, they expressed concern or suspicion about how these data may be used by others, often founded on a lack of knowledge about what happens to their data. The major question for our participants was where the line should be drawn. When does personal dataveillance become too intrusive, scary or creepy? What are its drawbacks and risks? Our findings suggest that experimenting with innovative approaches to elicit practices and understandings of personal digital data offers further possibilities for greater depth and breadth of social research with all types of social groups.
Article
In this paper I argue that there is an urgent need for more empirical research into everyday experiences of living with datafication, something that has not been prioritised in the emerging field of data studies to date. As a result of this absence, the knowledge produced within data studies is not as aligned to the aims of data activism as it might be. Data activism seeks to challenge existing, unequal data power relations and to mobilise data in order to enhance social justice, yet data studies has focused primarily on identifying the former and not on imagining the latter. To build a picture of what more just conditions of datafication might look like, I argue that it is important to take account of everyday experiences of datafication, of what people themselves say would enable them to live better with data and would constitute more just data arrangements, based on their experiences. I explore two possible approaches to this endeavour, both of which suggest the need for a vocabulary of emotions in researching everyday living with data.
Chapter
Bucher uses the concept of the black box as a heuristic device to discuss the nature of algorithms in contemporary media platforms, and how we, as scholars and students interested in this nature, might attend to and study algorithms, despite, or even because of, their seemingly secret nature. The argument is made that despite the usefulness in pointing out some of the epistemological challenges relating to algorithms, the figure of the black box constitutes somewhat of a distraction from other, perhaps more pressing, questions and issues. Moving beyond the notion that algorithms are black boxes, the chapter synthesizes and extends existing approaches and makes a case for using well-known methods to new domains, not only generating knowledge about emerging issues and practices, but contributing to (re)inventing methods.
Article
What do markets see when they look at people? Information dragnets increasingly yield huge quantities of individual-level data, which are analyzed to sort and slot people into categories of taste, riskiness or worth. These tools deepen the reach of the market and define new strategies of profit-making. We present a new theoretical framework for understanding their development. We argue that (a) modern organizations follow an institutional data imperative to collect as much data as possible; (b) as a result of the analysis and use of this data, individuals accrue a form of capital flowing from their positions as measured by various digital scoring and ranking methods; and (c) the facticity of these scoring methods makes them organizational devices with potentially stratifying effects. They offer firms new opportunities to structure and price offerings to consumers. For individuals, they create classification situations that identify shared life-chances in product and service markets. We discuss the implications of these processes and argue that they tend toward a new economy of moral judgment, where outcomes are experienced as morally deserved positions based on prior good actions and good tastes, as measured and classified by this new infrastructure of data collection and analysis.