Media and Communication (ISSN: 2183-2439)
2016, Volume 4, Issue 4, Pages X-X
DOI: 10.17645/mac.v4i4.579
Article
Facebook’s Emotional Contagion Experiment as a Challenge to
Research Ethics
Jukka Jouhki 1,*, Epp Lauk 2, Maija Penttinen 1, Niina Sormanen 2 and Turo Uskali 2
1 Department of History and Ethnology, University of Jyväskylä, 40014 Jyväskylä, Finland; E-Mails: jukka.jouhki@jyu.fi
(J.J.), maija.s.penttinen@student.jyu.fi (M.P.)
2 Department of Communication, University of Jyväskylä, 40014 Jyväskylä, Finland; E-Mails: epp.lauk@jyu.fi (E.L.), niina.sormanen@jyu.fi (N.S.), turo.uskali@jyu.fi (T.U.)
* Corresponding author
Submitted: 31 January 2016 | Accepted: 12 April 2016 | Published: in press
Abstract
This article analyzes the ethical discussion surrounding the Facebook emotional contagion experiment published in the Proceedings of the National Academy of Sciences in 2014. The massive-scale experiment manipulated the News Feeds of a large number of Facebook users and succeeded in demonstrating that emotional contagion also occurs in online environments. However, the experiment raised ethical concerns within and outside academia, mainly for two intertwined reasons: the first revolving around the idea of research as manipulation, and the second focusing on the problematic definition of informed consent. The article concurs with recent research that the era of social media and big data is posing a significant challenge to research ethics, the practices and views of which are grounded in the pre-social-media era and reflect the classical ethical stances of utilitarianism and deontology.
Keywords
Big data; emotional contagion; Facebook; informed consent; manipulation; methodology; privacy; research ethics; social media; user data
Issue
This article is part of the issue “Successes and Failures in Studying Social Media: Issues of Methods and Ethics”, edited
by Epp Lauk and Niina Sormanen (University of Jyväskylä, Finland).
© 2016 by the authors; licensee Cogitatio (Lisbon, Portugal). This article is licensed under a Creative Commons Attribu-
tion 4.0 International License (CC BY).
1. Introduction
In June 2014 the Proceedings of the National Academy
of Sciences (PNAS) published an article entitled “Exper-
imental Evidence of Massive-Scale Emotional Conta-
gion Through Social Networks". It was about an experiment (henceforth, "the Facebook experiment" or "the experiment") conducted by Adam D. I. Kramer of Facebook's Core Data Science Team together with Jamie E. Guillory and Jeffrey T. Hancock of Cornell University. The article provided experimental evidence of emotional contagion, a phenomenon that had been widely studied before, but mostly in offline environments. In January 2012, the research team manipulated the News Feeds of a massive number (N = 689,003) of Facebook users for a week, reducing the amount of emotional content in their feeds. After analyzing over three million posts and over 122 million words, the results showed that when the number of positive status updates published in users' News Feeds was reduced, users published more negative and fewer positive status updates. Conversely, when the number of negative status updates was reduced, users published more positive and fewer negative updates. Moreover, the less emotional content users were exposed to, the fewer words they used in their status updates (Kramer, Guillory, & Hancock, 2014).
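To make the scale of the analysis concrete, the sketch below illustrates the kind of dictionary-based word counting that such a study implies; as noted in Section 4, a linguistic software program, not human readers, processed the texts. The word lists and names here are our own minimal stand-ins, not the study's actual tooling.

```python
# Minimal sketch of dictionary-based emotion word counting (an illustrative
# stand-in, not the software used in the experiment).
POSITIVE = {"happy", "great", "love", "nice", "wonderful"}
NEGATIVE = {"sad", "angry", "awful", "hate", "terrible"}

def emotion_counts(update: str) -> tuple[int, int, int]:
    """Return (total words, positive words, negative words) for one update."""
    words = [w.strip(".,!?;:").lower() for w in update.split()]
    return (len(words),
            sum(w in POSITIVE for w in words),
            sum(w in NEGATIVE for w in words))

def emotion_rates(updates: list[str]) -> tuple[float, float]:
    """Percentage of positive and negative words across a set of updates."""
    counts = [emotion_counts(u) for u in updates]
    total = sum(t for t, _, _ in counts) or 1  # avoid division by zero
    return (100 * sum(p for _, p, _ in counts) / total,
            100 * sum(n for _, _, n in counts) / total)
```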
The research suggested that emotional states “can
be transferred to others via emotional contagion, lead-
ing people to experience the same emotions without
their awareness” (Kramer et al., 2014, p. 8788). Emo-
tional contagion had been proved earlier (e.g. Barsade,
2002; Huntsinger, Lun, Sinclair, & Clore, 2009; Kramer
et al., 2014, p. 8788 also refer to several other studies),
but proving that it happens “outside of in-person inter-
action” and particularly in the increasingly popular so-
cial media was new (see e.g. Ferrara & Yang, 2015 for a
similar but more recent study). Moreover, as there are
common conceptions about positive social media post-
ings making people sad or envious (e.g. Copeland,
2011), the experiment produced valuable information
to the contrary. The experiment suggested that peo-
ple’s “hearts and minds”, as Schroeder (2014, p. 3) puts
it, can be manipulated online, for good or ill. (See also
Shah, Cappella, & Neuman, 2015; Summers-Effler, Van Ness, & Hausmann, 2015, p. 472; cf. Parkinson & Manstead, 2015, p. 377.)
Academic and non-academic reactions to the
study—defined as ethically controversial (Ananny, 2015,
p. 101; Harriman & Patel, 2014; Pejovic & Musolesi,
2015, p. 18; Simon, 2014; Thorson & Wells, 2015, p.
10)—were mixed. On a broader view, the heterogeneity
of the views on the ethics of the experiment is a sign of
how contested and fluid the concept of privacy is (e.g.
Ess, 2013, p. 260). Moreover, as Facebook cooperates
with several universities such as Cornell, Stanford and
Harvard (see e.g. Cheng, Adamic, Dow, Kleinberg, &
Leskovec, 2014; Friggeri, Adamic, Eckles, & Cheng,
2014; Sun, Rosenn, Marlow, & Lento, 2009; for Stanford's collaboration with Facebook, see www.sserg.org/new-collaboration-with-stanford-university-and-facebook, and more about Facebook's partnerships at https://research.facebook.com; on Facebook's exclusive cooperation with a few universities granted access to its data, see e.g. Paolillo, 2015, p. 50), the exper-
iment has raised debate about whose research ethics
prevail in such joint ventures—those of a private com-
pany or those of an academic research institution. In
this article, we focus on the academic but also look to
some extent at the non-academic ethical commentary
on the Facebook experiment, and ask what it tells us
about ethical research issues in the current era of so-
cial media research.
The ethical discussion presented in this article is
founded on an integrative literature review (see e.g.
Card, 2010; Torraco, 2005) that we conducted by
searching major journal databases such as Science Di-
rect, Google Scholar, Sage Journals, and Ebsco Academ-
ic Search Elite for articles covering the experiment. As a
result we obtained articles from journals such as Re-
search Ethics; Big Data & Society; Media, Culture & So-
ciety; Nature; and Information, Communication &
Society. In addition to journal articles, we searched for
conference proceedings on the experiment as well as
scholarly analyses of the issue published in blogs and
other internet sites. Some news and magazine articles
as well as blog posts were also included in order to of-
fer some non-academic views on the issue.
Overall, our approach to the ethical discussion re-
volving around the Facebook experiment is essayistic in
nature (see e.g. Ceserani, 2010; Cornelissen, Gajewska-De Mattos, Piekkari, & Welch, 2012, pp. 198-199), which
means that we prefer exploring and discussing the top-
ic in a heuristic manner: we tend to concentrate on
raising questions rather than putting forward any definite
results based on empirical research. However, we do
argue that there are two crucial themes of debate
which sum up the ethical discussion revolving around
the experiment: research as manipulation (discussed in
Section 3) and the related informed consent (discussed
in Section 4). Moreover, we suggest that the debates
about the ethics of human-subject big data research,
while demanding a rethink of research ethics, still reflect
the classical divide between the utilitarian and the deon-
tological points of view. In the next section we will intro-
duce some key questions of research ethics in the era of
social media. Then we move on to present the Facebook
experiment and the ensuing ethical discussion. (This article is based on an unpublished conference paper by Jouhki et al., 2015.)
2. Research Ethics and the Human Subject
The views on research ethics generally put into practice
in any academic research can be seen as balancing be-
tween two classic moral philosophical stances. Utilitar-
ianism attempts to calculate the morality of an act by
estimating the total amount of happiness or suffering
produced by the act, while deontology views certain
actions as immoral or moral per se, regardless of their
consequences. Both these stances are applied, for ex-
ample, in social media research when scholars contem-
plate the effect of their study on the subjects’ privacy:
the utilitarian view of privacy might allow certain incur-
sions into privacy if the result is the greater good,
whereas from the deontological point of view, a certain
level of privacy is a right that should not be violated, for
example, by conducting a study without receiving the in-
formed consent of the subjects of the study (Ess, 2013,
pp. 256-262; Shrader-Frechette, 2000). Both stances are
problematic, and neither of them is applied in research
without any consideration of the other—or in moral de-
cision-making outside of academia, for that matter. At
any rate, the utilitarian emphasis on avoidance of harm
and the more deontological value of receiving informed
consent from research subjects are considered the two
most significant imperatives of research ethics in studies
with human participants (e.g. the British Psychological
Society, 2010). Actual policies as to how exactly the im-
peratives are defined and in what situations they apply
(e.g. in big data research) vary significantly.
One of the key ethical principles of the Association
of Internet Researchers (AoIR)—that the greater the
vulnerability of the subject of study, the greater the ob-
ligation of the researcher to protect the subject—is a
good example of how challenging it is to formulate
specific rules of ethical research (Markham & Buchan-
an, 2012, pp. 4-5). Obviously, protecting the research
subject depends on how one defines both the harm that might be inflicted on an unprotected person and what counts as a research subject in the first place. The context of any research
setting means that ethical codes are not so much strict
rules as incentives to individual researchers to reflect
on the moral ground of their research and make ethical
decisions using their own judgment of what is in fact
practicable in the circumstances. Especially when in-
formed consent cannot be obtained in human-subject
research, the benefits of the study should outweigh the
harm of any invasion of privacy.
Often anonymity is seen as enough to ensure the
no-harm rule in cases of non-experimental (e.g. purely
observational) research. In experiments that affect the
participants' behavior, the rules are stricter (see e.g. Vainio, 2012; Vanderpool, 1996). The level of sensitivi-
ty required for the decision-making to be ethically suf-
ficient is a constant topic of debate. For example, a
research institution or a commercial company engaging
in research might hold the view that obeying the law is
enough to make the research ethical (Hudson &
Bruckman, 2004, pp. 132-133). If the participants are
not harmed in any way during the data gathering, an
ethically sensitive researcher—whether working in a
private company or a university—might still take into
account the hypothetical situation that a research par-
ticipant at some point learns about his or her role in
the research and is offended (i.e. harmed) by having
been a participant without having given consent (e.g.
Hudson & Bruckman, 2004, pp. 136-138). Moreover, an
ethically sensitive researcher might treat public content
on the internet (e.g. tweets, blog posts) as intimate parts
of their creator’s personhood. Most researchers, how-
ever, use this content without securing informed con-
sent (Hesse, Moser, & Riley, 2015, p. 27).
The fact that data are accessible and public does
not necessarily mean that using them is not jeopardizing
privacy and is thus ethically justified (see e.g. boyd, 2010; Marx, 2013; Tinati, Halford, Carr, & Pope, 2014, p. 673; Zimmer, 2010; note that the author danah boyd wants her name written in lower case). The boundaries between private
and public information—especially on the internet—are
frustratingly ambiguous, contested and changing (Mark-
ham & Buchanan, 2012, p. 6; Ess, 2007, p. 499; see also
Rooke, 2013; Rosenberg 2010; Weeden, 2012, pp. 42-
43). Even when a researcher wants to have participants’
informed consent to take part in a study, it might be
impossible for him or her to obtain it if the research in
question concerns, for example, massive data mining
processes and projects. Moreover, big data researchers
often ignore the whole question of informed consent
because they define their data as either public or pro-
prietary (Paolillo, 2015, p. 49). Also, when there is no
direct contact between the researchers and their hu-
man subjects it is questionable whether the subjects
should even be called participants. Besides, when ex-
periments are conducted on them, it is unclear whether
they are to be subject to the same ethical research
scrutiny as human-subject study participants normally
are (Hutton & Henderson, 2015, p. 178; Kahn, Vayena,
& Mastroianni, 2014, p. 13677). Even if a researcher
did in such cases manage to obtain the participants’
consent, there would be no real guarantee that it was
indeed informed (Flick, 2016, pp. 15-17).
To problematize the issue further, even if informed
consent was verified and the researcher was allowed
to use the participants’ personal data, the data might
also include information about people (e.g. contacts of
the users) who had not given their informed consent
(Phillips, 2011, p. 32). Thus it is no surprise that a large
number of extensive data mining projects are carried
out without informing the groups or individuals target-
ed by the researchers; the only measure taken to pro-
mote the ethicality of the research is making sure that
the participants are anonymous, thus ensuring confi-
dentiality (Lindsay & Goldring, 2010; Zwitter, 2014, p.
5; see also Sormanen et al., 2016).
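As a concrete illustration of that minimal measure, the sketch below pseudonymizes user identifiers before analysis. It is our own illustration under assumed names, not any cited project's pipeline, and hashing identifiers alone does not rule out re-identification from the remaining data (cf. Zimmer, 2010).

```python
import hashlib

# Assumed, project-specific secret; it must be kept out of the published data,
# or the pseudonyms can be recomputed from known identifiers.
SALT = b"project-specific-secret"

def pseudonymize(user_id: str) -> str:
    """Replace a user identifier with a salted, truncated SHA-256 digest."""
    return hashlib.sha256(SALT + user_id.encode("utf-8")).hexdigest()[:16]

# Example record as it would enter the analysis data set.
record = {"user": pseudonymize("user-12345"), "posts_this_week": 42}
```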
In contrast, when conducting qualitative research
like virtual ethnography or, more specifically, partici-
pant observation, in smaller internet forums, obtaining
the consent of participants is technically relatively
easy. However, it is rarely done because of the possibil-
ity that knowing that they are being observed might
cause participants to act differently from usual, which
would skew the data. Then again, in practice, many
scholars do not seek informed consent because they
are afraid it would be denied (e.g. Hine, 2000, pp. 23-
24). Sometimes even covert participant observation is impossible (e.g. in the case of private discussion groups), so the researcher might engage in decep-
tion (e.g. an invented alias) in order to gain access to
the group of participants. As Brotsky and Giles (2007,
pp. 95-96) note, stating the rather obvious, covert
participant observation is “highly controversial from an
ethical position”, but as in most completed research
projects with ethical research challenges, it is ultimate-
ly justified by reference to the benefits brought by the
results. Sometimes even informed consent does not
create an authentic consensual atmosphere, for exam-
ple if the subjects of the research do not feel they have
been treated fairly or if the purpose of the research is
not felt to be morally valuable enough (Kennedy,
Elgesem, & Miguel, 2015). Lastly, even if informed con-
sent is received, there is the problem of the level of in-
formedness. How can a researcher be sure that the
research subject has sufficiently understood the pur-
pose and the consequences of the research? (E.g.
Escobedo, Guerrero, Lujan, Ramirez, & Serrano, 2007;
Svantesson, 2007, p. 72.)
3. The Facebook Experiment as Manipulation
On Facebook, the News Feed is practically a list of sta-
tus updates of the contacts in a user’s network. The
updates shown in or omitted from the News Feed de-
pend on “a ranking algorithm that Facebook continual-
ly develops and tests in the interest of showing viewers
the content they will find most relevant and engaging”.
Facebook is thus like any traditional media as it pro-
vides content to its users selectively, but where it dif-
fers from the old media is that the content is modified
individually according to what the medium evaluates to
be the optimally engaging experience. (Kramer et al.,
2014, p. 8788.) Users accept this practice when signing
up for Facebook.
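As a rough sketch of how such selective display can serve as an experimental manipulation, the code below withholds posts classified as emotional from a single rendering of a feed with some probability. The function and names are our own construction, not Facebook's code; the actual system is a proprietary ranking algorithm, and in the published design the omission probability was reportedly assigned per user.

```python
import random

def filter_feed(posts, is_emotional, omit_prob, rng=random):
    """Return one rendering of a feed with some emotional posts withheld.

    A withheld post is only omitted from this viewing; it is not deleted
    and remains visible elsewhere, e.g. on the friend's own timeline.
    """
    shown = []
    for post in posts:
        if is_emotional(post) and rng.random() < omit_prob:
            continue  # withheld from this render only
        shown.append(post)
    return shown

# Example: withhold roughly 40% of negative posts for one hypothetical user.
feed = ["great day!", "feeling awful", "lunch photos", "so sad today"]
is_negative = lambda p: any(w in p for w in ("awful", "sad"))
print(filter_feed(feed, is_negative, omit_prob=0.4))
```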
In their massive-scale experiment, Kramer et al.
(2014) tested the emotional engagement of Facebook
users by modifying their News Feed. The “experiment
on the manipulative power of Facebook feeds”, as Pea-
cock (2014, p. 8) described it, was criticized almost
immediately upon publication of the article. Bloggers
claimed Facebook made users “sad for a psych experi-
ment" (Grimmelmann, 2014) or that the company was us-
ing people as “lab rats” (a blogger quoted by Rushe,
2014). According to The Guardian’s poll (Fishwick,
2014), most people who read about the experiment
were not surprised that Facebook would experiment
on user data the way it did but, at the same time, they
declared they had now “lost trust” in Facebook and
were considering closing their account. The “secret”
experiment, as The Guardian called it, “sparked out-
rage from people who felt manipulated by the compa-
ny”. It can be speculated that had Facebook known
what the public reaction to their experiment was going
to be, they would not have published it. danah boyd
(2014; see also Paolillo, 2015, p. 49) suggests that the in-
tended PR outcome of the experiment from Facebook’s
point of view was to show that Facebook can downplay
negative content in their service and thus make custom-
ers happier. Presumably this was seen as better for users
and better for Facebook, as experimentation is how
websites make their services better (Halavais, 2015, pp.
589-590; Kahn et al., 2014, p. 13677).
It is possible that many people missed the benevo-
lent intention of the research team and concentrated
on the contestable ethics of their method. The criticism
about the experiment reached such levels that Face-
book’s researcher and the first author of the article,
Adam Kramer, defended the experiment on his own Fa-
cebook page, pointing to the minimal “actual impact
on people”. During the week of the experiment, he ex-
plained, the users who were affected “produced an av-
erage of one fewer emotional word, per thousand
words”. (Kramer, 2014.) The magnitude of the impact
was perhaps unknown to many critics of the experi-
ment, as many objected to it on the grounds that Face-
book was “controlling the emotions” of its users.
Moreover, regardless of the magnitude of the impact
of the experiment, the user agreement of Facebook
can be interpreted to mean that users of Facebook al-
low researchers to experiment on them.
Thus, many ethicists would agree with Meyer
(2014), who published a statement with five co-authors
and on behalf of 27 other ethicists “to disagree with
these sweeping condemnations” of Facebook’s ethics
in the experiment. She wrote that “the experiment was
controversial, but it was not an egregious breach of ei-
ther ethics or law.” If Facebook is permitted to mine
user data and study users for personal profit but aca-
demics are not permitted to use that information and
learn from it, it “makes no one better off” (Meyer,
2014). However, for many critics it was more a matter
of ethical principle than actual impact. For example,
Kleinsman and Buckley (2015, p. 180) rejected Meyer’s
statement and claimed that “[i]f an experiment is in
‘breach of either ethics or law,’ then whether it is an
‘egregious’ breach or not is irrelevant.” In this view,
there is no grey area in research ethics, and conse-
quently, a person as a subject of research is—in a bina-
ry way—either harmed or not harmed.
Many scholars were even more critical than Kleins-
man and Buckley (2015). Recuber (2016), for example,
noted how quick scholars were to draw analogies be-
tween the Facebook experiment and Milgram's (1963) infamous experiment analyzing obedience to
authority, as well as to the Stanford Prison experiment,
also known as the Zimbardo experiment (Haney, Banks,
& Zimbardo, 1973; Zimbardo, 1973), that studied the
psychological effects of becoming a prisoner or a
guard. According to Recuber, there were indeed some
similarities between the Facebook experiment and the
two notorious earlier experiments, one being
the fact that all three studied the researchers’ ability to
manipulate change in the participants’ behavior. How-
ever, the Facebook experiment was different in its fail-
ure to reflect on this aspect (Recuber, 2016, pp. 46-47).
The user reactions studied in the Facebook experiment
were caused by the observers but the power relations
between the experimenters and the experimentees
were downplayed or normalized, and not at all prob-
lematized. This, at least to Recuber, is a typical and in-
sidious element of contemporary big data research.
(Recuber, 2016.) When the number of research sub-
jects is so high, individually they tend to vanish in the
haze of the overarching term “big data”. However, the
“power” exerted per capita over the participants in the
Facebook experiment can be viewed as rather minimal
(albeit massive in scale). The experiments carried out
by Milgram and Zimbardo, on the other hand, caused
their participants to suffer severe physical and psycho-
logical stress.
The ethics of human-subject research is mainly
about protecting the subject. In this sense, the Face-
book experiment was found ethically questionable.
Strict assessments of the experiment conclude that the
study indeed “harmed” its participants (albeit almost
unnoticeably), because it changed the participants’
mood (e.g. Bryman & Bell, 2015, p. 141; Grimmelmann,
2014; Kleinsman & Buckley, 2015, p. 181). However, if
harming is defined as changing a participant’s mood,
then a vast quantity of empirical research on humans is
harmful, especially research that requires face-to-face
interaction. In general, big data studies or techniques to
test or predict personality or actions might not be legally
problematic but they do undermine a “sense of individ-
uality on a personal level”, claims Schroeder (2014, p. 7).
Facebook has experimented on its users before,
and has published research about it (see e.g. Bond et
al., 2012; Chan, 2015, p. 1081; Simonite, 2012). How-
ever, these experiments were explicit in their intention
to influence users. For example, in 2010 on the day of
the US congressional elections, Facebook encouraged
randomly assigned users to vote, managed to increase
voting activity, and afterwards published an article
about it in Nature (Bond et al., 2012). Moreover, in
2012 Mark Zuckerberg, the CEO of Facebook, used Fa-
cebook to encourage people to register as organ do-
nors, after which organ donor enrollment increased
significantly in the US (Simonite, 2012). These forms of
“manipulation” did not raise as much ethical debate as
the experiment we discuss here did. The reason for this
might be that people see explicit forms of intended ma-
nipulation as more acceptable than covert forms, even if
the explicit manipulation attempts to elicit significantly
greater change in the subject than the covert form.
Research ethics are often implemented more strict-
ly in the academic world than in the corporate research
environment. Then again, the ethical views of social
media users might be quite flexible, and a lot of how
users relate to being studied and experimented on by
researchers depends on the application of the results
(Kennedy et al., 2015, pp. 8-10). It seems that people do
not want to be experimented on for the sake of an ex-
periment but they are more likely to accept it if the ex-
periment might result in some kind of benefit for
themselves or others. Many people also do not mind
commercials or other manipulations—even outright
propaganda—as they are often part of the deal be-
tween users and service providers (cf. Searls 2015 on
ad blockers). In the case of the Facebook experiment,
even though scholars did not read any status updates,
some people still felt that their privacy was violated.
The problem in these kinds of cases is often the fact
that one has a feeling of being private while actually
being public (Kennedy et al., 2015, p. 13). According to
Chan (2015, p. 1080), the fact that neither Facebook
nor Cornell University—the two parties involved in
conducting the study—apparently anticipated the pub-
lic backlash they would face for the data manipulation
shows “the vast disconnect between the research cul-
ture of big data (whether based in corporate or aca-
demic institutions) and the general public’s cultural
expectations.”
4. The Problem of Informed Consent
The "informed" in informed consent is the other major ethical research issue in the experiment
that worried both the general public and academia (see
e.g. Kahn et al., 2014, p. 13677). Cornell University re-
searchers (Guillory and Hancock) analyzed the data af-
ter Facebook (Kramer) had collected them. The study
therefore did not go through an ethical review at Cor-
nell University, which might have been critical of how
the informed consent of the participants was going to
be secured (Paolillo, 2015, p. 50). In the article, re-
search ethics is discussed in two sentences (Kramer et
al., 2014, p. 8789). The first sentence states that the
researchers themselves did not read any of the texts
analyzed for the experiment as a linguistic software
program was used to analyze the data. The other sen-
tence declares that the data collection “was consistent
with Facebook’s Data Use Policy, to which all users
agree prior to creating an account on Facebook, consti-
tuting informed consent for this research.” In other
words, the authors interpreted Facebook’s user
agreement to mean informed consent.
In that case, the level of informedness is highly de-
batable, as most users of Facebook do not read or
completely understand the data use policy (Flick, 2016,
p. 17; see also Kennedy et al., 2015, pp. 10-15). When a
user accepts the terms and signs up for Facebook, he
or she is informed that the service provider will use the
personal data for all sorts of things (Facebook, 2015a).
The user might give their consent but is most likely not
well informed, since the description of the data use
policy is not very precise (see e.g. Grady, 2015, p. 885;
Sloan, Morgan, Burnap, & Williams, 2014, p. 16). For
example, at the time of the experiment, the research
use of personal data was not mentioned although, fol-
lowing the wide publicity the experiment received, it
has subsequently been added to the policy.
Kleinsman and Buckley (2015; see also Bail, 2015, p.
23) hold the view that because the authors of the Fa-
cebook experiment could have asked for proper in-
formed consent from the users, they should have done
so. It does not matter whether the research is unlikely
to cause harm or if it is beneficial or otherwise im-
portant: consent is always essential if it can be ob-
tained. The scholars should at least have informed
those users who were affected afterwards (Recuber,
2016, p. 54; see also McKelvey, Tiessen, & Simcoe,
2015, pp. 580-581). A month after the publication of
the experiment, PNAS’s Editor-in-Chief, Inder M. Ver-
ma (2014), added a foreword to the contested article.
It was entitled “Editorial Expression of Concern and
Correction” and it defended the authors’ ethical choic-
es by separating Facebook’s data collection process
from the actions of Cornell University. Readers were
reminded that it was a non-academic private company
(Facebook's Kramer) that gathered the data, and the academics (Cornell's Guillory and Hancock) only ana-
lyzed them. However, as the responsibility fell partly on
the journal (see e.g. Kahn et al., 2014, p. 13679), Verma
(2014, p. 10779; see also Schroeder, 2014, pp. 2-3) did
concede that perhaps everything was “not fully con-
sistent with the principles of obtaining informed con-
sent and allowing people to opt out.”
Many human-subject big data scientists know that a
strict interpretation of the opting-out option makes
their research extremely difficult. The problem is fur-
ther complicated by the fact that in many cases the da-
ta are not collected by academics but by third parties
such as Facebook. Should the data collectors abide by
the ethical research norms of academia? If they did,
there would be a lot of ethical problems, particularly
with data produced by third parties, such as filmed
footage, photographs, Google Street View data, tele-
vised rock concert recordings, and so on. Even if partic-
ipant anonymity was secured, the human subjects in
these cases could not opt out. If scholars did not have
to worry about opting out as an ethical norm, they could
team up with someone outside of academia to do their
“dirty work” (see e.g. Kahn et al., 2014, p. 13677; Wrzus
& Mehl, 2015, p. 264; cf. boyd, 2014). On the other
hand, one could say that a person can opt out of any Fa-
cebook experiment by not signing up for Facebook in the
first place—just like a potential participant in a psychol-
ogy experiment can decide not to take part in the experiment
if he or she does not want to be manipulated.
In general, an ethically pragmatic social media us-
er’s informed consent is more like meta-informedness,
or “implicit informed consent” (Bryman & Bell, 2015, p.
139), where the user knows that for example Facebook
will do various known and unknown things with its user
data but is unlikely to do anything that is morally too
dubious—although it has been observed that users
tend to underestimate the privacy risks involved when
they are excited about a social media application (Kehr,
Kowatsch, Wentzel, & Fleisch, 2015). For most users,
Facebook’s data policy is thus a reasonably informed
and fair trade-off between the user who gets to use the
service without a fee, and the service provider who gets
to sell the data to third parties such as advertisers (Ken-
nedy et al., 2015, p. 12; see also Hutton & Henderson,
2015, p. 178). This is actually the common logic of com-
mercial media, and the “ethical fig leaf” (O’Hara, Ngu-
yen, & Haynes, 2014, p. 4) of a social media researcher.
As Chan (2015, p. 1080; see also Aiken & McMahon,
2014, p. 4) notes, Facebook’s data use policy “enables
any user to potentially become an experiment subject
without need for prior consent”. In the end, a scholar
interested in research ethics might ask if there is any-
thing ethically new in the Facebook experiment. People
were studied without their knowing about it but they
had allowed it by signing up for Facebook. (Schroeder,
2014, p. 3; see also Zwitter, 2014, p. 1.) Certainly com-
panies have been doing experiments with only vaguely
informed consent before, as have psychologists, so
many people think the Facebook experiment is merely a
recent example of an old ethical research issue
(Schroeder, 2014, pp. 1-2; cf. Selinger & Hartzog, 2016).
In a way, the Facebook user agreement is similar to
the informed consent form the participants in most
psychological experiments have to fill out. Participants
are informed that they (and the data they will produce)
will be used for scientific purposes but the participant
might not know exactly what those purposes are. He or
she might even be deceived about the real purpose of
the study to which they have consented. The message
of informed consent is, “I trust you. Do what is need-
ed.” Perhaps the only new aspect in this case is that
there are over a billion people on Facebook every day.
It is an essential networking tool for a large number of
people, many of whom are dependent (to a greater or
lesser extent) on the service. This means that its user
agreement is not necessarily an ethical act between two
equal parties: opting out of an experiment becomes
equal to opting out of a significant part of one’s social
life (see e.g. Gertz, 2016). One might therefore suggest
that a participant might be sufficiently informed but the
question of consent is more controversial.
After multiple critical reviews of the experiment,
Mike Schroepfer, the Chief Technology Officer for Fa-
cebook, wrote an apologetic post for Facebook’s News-
room. According to him, they should have “considered
non-experimental ways” to do the research. Also, the
research would have “benefited from more extensive
review by a wider and more senior group of people”.
Schroepfer also noted that they did not inform the public about the experiment well enough (Schroepfer, 2014). In the same post, he introduced a new research framework that Facebook was going to implement. It in-
cluded clearer guidelines for researchers, a more ex-
tensive review stage, and training (including on privacy
and security matters), as well as the establishment of a
special research website (Facebook, 2015b).
Describing the new guidelines, Schroepfer announced that an enhanced review process
would be conducted prior to research if the intended
research focused on “studying particular groups or
populations (such as people of a certain age) or if it re-
lated to content that may be considered deeply per-
sonal (such as emotions).” Also, a further review would
be conducted if there was any collaboration with the
academic community. The statement ends by trying
to convince the reader—supposedly a daily Facebook
user—that Facebook wants to do research “in a way
that honors the trust you put in us by using Facebook
every day." (Schroepfer, 2014.) This seems to be Face-
book’s way of admitting that the experiment lacked in-
formed consent. Perhaps for PR reasons as well as due
to potential legal issues, Schroepfer could not say out-
right that the experiment failed to obtain informed
consent (cf. Verma, 2014).
5. Discussion
In this article we have shown how the debate around
the Facebook experiment brings up two crucial and in-
terrelated themes of research ethics: research as ma-
nipulation, and the problem of informed consent. The
debate around the experiment shows that the era of
big data research demands some rethinking of research
ethics. Although the two key issues presented here are
not unique to contemporary research but had been
debated for decades before big data research came in
(see e.g. Faden & Beauchamp, 1986; Roelcke, 2004),
the unprecedentedly large amount of human subjects
that are called for in such research has led to a need
for special scrutiny. At the same time, it seems that the
ethical evaluation of such experiments is based on the
classical ethical stances of utilitarianism or deontology.
The proponent of the former sees little or no harm
done in such an experiment and no loss of happiness
caused by it, while the proponent of the latter consid-
ers that, regardless of the degree of actual harm, hu-
man integrity has been violated (see e.g. Ess, 2013, pp.
256-262; Harman & Cornelius, 2015, p. 58; Shrader-
Frechette, 2000).
Reaching any ethical consensus about the Facebook
experiment is further impeded by disagreements over
the definition of key concepts such as the “harm” done
to human subjects, and their “informed consent”.
When academic research ethics is so vague, it might
seem simpler for scholars to leave it to the law and us-
er agreements to define the ethics of the research.
However, according to Chan (2015, p. 1082; see also
Paolillo, 2015, p. 50; Burgess & Bruns, 2015, p. 99),
commercial companies’ ethical research standards
should not be allowed to spread to the academic world.
Flick (2016; see also Halavais, 2015, p. 592) agrees and
thinks that the commercial and academic sectors should
negotiate and agree on standards, but without making
any concessions in the commercial companies’ favor.
However, as universities’ opportunities to cooperate
with private companies working with big data increase,
the opportunities to leave the problematic ethics of da-
ta collection to companies increase likewise.
Mike Schroepfer, the Chief Technology Officer of
Facebook, stated that Facebook should have communi-
cated “clearly why and how” they did the experiment
(Schroepfer, 2014). The statement implies that a per-
son is deprived of optimal well-being if the reasons and
methods of any actions carried out on him or her are not
properly communicated. On the other hand, one could
easily claim the opposite: a person suffers less when he
or she does not know or notice anything about such ac-
tions. As Stilgoe (2015, pp. 46-47) observes, the Face-
book experiment was rare in being openly published and
publicly scrutinized, since most such experiments are
conducted in secret. We can wonder if people were out-
raged about the experiment because Facebook altered
its users’ states of mind or because it reminded them
that their states of mind are being altered all the time by
all kinds of things, people and organizations (see e.g.
boyd, 2016; see also Kehr et al., 2015).
At the same time, Kennedy et al. (2015, p. 2) ob-
serve that there has been little research about what
social media users themselves actually think about be-
ing observed, studied and—we would add—
experimented on. This is rather disconcerting, given
the massive number of people that use social media
and are in some form or other observed and experi-
mented on by researchers. Perhaps surprisingly, the
social media users Kennedy et al. (2015, pp. 3-4) stud-
ied seemed to be concerned about privacy, but mainly
about social privacy. That is, they wanted to be sure
that they could choose which individuals in their net-
work have access to their personal information. They
were not so worried about institutional privacy, or “the
mining of personal information by social media plat-
forms, commercial companies and governments”. Alt-
hough we are talking about only one study, there is
reason to suggest that the ethical criticism of the Face-
book experiment made by academics might not reflect
users’ worries. This is a topic that should be further
studied, as it would be relevant for research ethics in
the era of social media to be more grounded in the us-
er level. A more holistic and inclusive ethical research
study would ensure that researchers do more than de-
fine what is morally optimal in big data research; or, as
Tama Leaver (2013) states, “Big Data needs Big Ethics,
and we don’t have them yet.”
If we go further into the ethical implications of so-
cial media experiments that aim to enhance user expe-
rience, we are faced with a more profound ethical
challenge than a discussion of manipulation and in-
formed consent reveals. If in Facebook we are fed im-
agery that further filters our experiences of the “real”
world, then what are the ethical ramifications of re-
searchers teaming up with companies that aim to give
people “the experience they want” (Simonite, 2012)?
Would the companies be in charge of the “hard ethical
choice…of what content to show…without oversight,
transparency, or informed consent” (boyd, 2014)? The
way media and new media influence our perceptions
of reality has already been widely studied (e.g. Fair-
clough, 1995; Macey, Ryan, & Springer, 2014) but there
has been little consideration so far of the ethics of aca-
demics taking part in these kinds of studies.
The way big data is “all at once essential, valuable,
difficult to control, and ubiquitous” seems to be re-
flected in our complex, context-dependent attitudes
toward it (Puschmann & Burgess, 2014, p. 1695). Gertz
(2016, p. 56) notes that despite the Facebook contro-
versy, the number of Facebook users is still growing. At
the same time, users’ autonomy seems to be diminish-
ing. From this it can be concluded that many users do
not mind the asymmetrical relationship they have with
the service provider. As Ess (2013, p. 254) notes, “our
engagements with new digital media appear to bring in
their wake important transformations in our sense of
self and identity.” Our “foundational conception of au-
tonomous self” that has legitimated concepts of priva-
cy that “modern liberal-democratic” states respect
seems to be changing. Perhaps the question we should
ask is primarily existential rather than ethical, as Gertz
(2016, p. 61) suggests. According to him, we should
first think about the increasingly significant role tech-
nology plays in our lives. If we accept it, then we can
have a more meaningful discussion on the ethics of
scholars experimenting with it.
Conflict of Interests
The authors declare no conflict of interests.
References
Aiken, M., & McMahon, C. (2014). A primer of research
in mediated environments: Reflections on cyber-
methodology. SSRN Working Papers. Retrieved from
http://papers.ssrn.com/sol3/Papers.cfm?abstract_id
=2462700
Ananny, M. (2015). Toward an ethics of algorithms: Con-
vening, observation, probability, and timeliness. Sci-
ence, Technology, and Human Values, 41(1), 93-117.
Bail, C. A. (2015). Taming big data: Using app technology
to study organizational behavior on social media. So-
ciological Methods & Research. Retrieved from
http://smr.sagepub.com/content/early/2015/05/15/
0049124115587825.full
Barsade, S. (2002). The ripple effect: Emotional conta-
gion and its influence on group behavior. Administra-
tive Science Quarterly, 47(4), 644-675.
Bond, R. M., Fariss, C. J., Jones, J. J., Kramer, A. D. I., Mar-
low, C., Settle, J. E., & Fowler, J. H. (2012). A 61-
million-person experiment in social influence and po-
litical mobilization. Nature, 489(7415), 295-298.
boyd, d. (2010, April). Publicity and privacy in web 2.0.
Keynote speech at WWW2010, Raleigh, USA. Re-
trieved from www.danah.org/papers/talks/2010/SX
SW2010.html
boyd, d. (2014). What does the Facebook experiment
teach us? Growing anxiety about data manipulation.
The Message. Retrieved from https://medium.com/
message/what-does-the-facebook-experiment-
teach-us-c858c08e287f
boyd, d. (2016). Untangling research and practice: What
Facebook’s “emotional contagion” study teaches us.
Research Ethics, 12(1), 4-13.
The British Psychological Society (2010). Code of human
research ethics. Leicester: The British Psychological
Society. Retrieved from www.bps.org.uk/sites/de
fault/files/documents/code_of_human_research_et
hics.pdf
Brotsky, S. R., & Giles, D. (2007). Inside the “Pro-ana”
community: A covert online participant observation.
Eating Disorders: The Journal of Treatment & Preven-
tion, 15(2), 93-109.
Bryman, A., & Bell, E. (2015). Business research methods.
Oxford: Oxford University Press.
Burgess, J., & Bruns, A. (2015). Easy data, hard data: The
politics and pragmatics of Twitter research after the
computational turn. In G. Langlois, J. Redden, & G.
Elmer (Eds.), Compromised data: From social media
to big data (pp. 93-111). New York: Bloomsbury Aca-
demic.
Card, N. A. (2010). Literature review. In N. J. Salkind
(Ed.), Encyclopedia of research design (pp. 726-729).
Thousand Oaks: Sage.
Ceserani, R. (2010). The essayistic style of Walter Ben-
jamin. Primerjalna književnost, 33(1), 83-92.
Chan, A. (2015). Big data interfaces and the problem of
inclusion. Media, Culture & Society, 37(7), 1078-
1083.
Cheng, J., Adamic, L. A., Dow, P. A., Kleinberg, J., &
Leskovec, J. (2014). Can cascades be predicted? Pro-
ceedings of the 23rd International Conference on
World Wide Web (pp. 925-936). New York: ACM. Re-
trieved from http://dl.acm.org/citation.cfm?id=2567
997
Copeland, L. (2011). The anti-social network. Slate. Re-
trieved from www.slate.com/articles/double_x/dou
blex/2011/01/the_antisocial_network.html
Cornelissen, J., Gajewska-De Mattos, H., Piekkari, R., & Welch, C.
(2012). Writing up as a legitimacy seeking process:
Alternative publishing recipes for qualitative re-
search. In G. Symon & C. Cassell (Eds.), Qualitative or-
ganizational research: Core methods and current
challenges (pp. 185-203). London: Sage.
Escobedo, C., Guerrero, J., Lujan, G., Ramirez, A., &
Serrano, D. (2007). Ethical issues with informed con-
sent. Bio-Ethics, 1, 1-8. Retrieved from
http://cstep.cs.ut
ep.edu/research/ezine/Ezine-ethicalIssueswithInfor
medConsent.pdf
Ess, C. M. (2007). Internet research ethics. In A. Joinson
(Ed.), Oxford handbook of internet psychology (pp.
481-502). Oxford: Oxford University Press.
Ess, C. M. (2013). Global media ethics? Issues, require-
ments, challenges, resolutions. In S. J. A. Ward (Ed.),
Global media ethics: Problems and perspectives (pp.
253-271). West Sussex: Wiley-Blackwell.
Facebook (2015a). Terms of service. Retrieved from
www.facebook.com/terms.php
Facebook (2015b). Research at Facebook. Retrieved
from https://research.facebook.com
Faden, R. R., & Beauchamp, T. L. (1986). A history and
theory of informed consent. Oxford: Oxford Universi-
ty Press.
Fairclough, N. (1995). Media discourse. London: Blooms-
bury.
Ferrara, E., & Yang, Z. (2015). Measuring emotional con-
tagion in social media. PLoS ONE, 10(11). Retrieved
from http://journals.plos.org/plosone/article?id=10.
1371/journal.pone.0142390
Fishwick, C. (2014, June 30). Facebook’s secret mood ex-
periment: Have you lost trust in the social network?
The Guardian. Retrieved from www.theguardian.
com/technology/poll/2014/jun/30/facebook-secret-
mood-experiment-social-network
Flick, C. (2016). Informed consent and the Facebook
emotional manipulation study. Research Ethics,
12(1), 14-28.
Friggeri, A., Adamic, L. A., Eckles, D., & Cheng, J. (2014).
Rumor cascades. Proceedings of the Eighth Interna-
tional AAAI Conference on Weblogs and Social Media
(ICWSM) (pp. 101-110). Palo Alto, CA: AAAI Press.
Retrieved from www.aaai.org/ocs/index.php/ICWSM
/ICWSM14/paper/view/8122
Gertz, N. (2016). Autonomy online: Jacques Ellul and the
Facebook emotional manipulation study. Research
Ethics, 12(1), 55-61.
Grady, C. (2015). Enduring and emerging challenges of
informed consent. The New England Journal of Med-
icine, 372(9), 855-862.
Grimmelmann, J. (2014). As flies to wanton boys. The
Laboratorium. Retrieved from http://laboratorium.
net/archive/2014/06/28/as_flies_to_wanton_boys
Halavais, A. (2015). Bigger sociological imaginations:
Framing big social data theory and methods. Infor-
mation, Communication & Society, 18(5), 583-594.
Haney, C., Banks, C., & Zimbardo, P. (1973). A study of
prisoners and guards in a simulated prison. Naval Re-
search Reviews, 30(9), 4-17.
Harman, L. B., & Cornelius, F. (2015). Ethical health in-
formatics. Burlington, MA: Jones & Bartlett.
Harriman, S., & Patel, J. (2014). The ethics and editorial
challenges of internet-based research. BMC Medi-
cine, 12, 124-127.
Hesse, B. W., Moser, R. P., & Riley, W. T. (2015). From
big data to knowledge in the social sciences. The An-
nals of American Academy of Political and Social Sci-
ence, 659(1), 16-32.
Hine, C. (2000). Virtual ethnography. London: Sage.
Hudson, J. M., & Bruckman, A. (2004). Go away: Partici-
pant objections to being studied and the ethics of
chatroom research. The Information Society, 20(2),
127-139.
Huntsinger, J. R., Lun, J., Sinclair, S., & Clore, G. L. (2009).
Contagion without contact: Anticipatory mood
matching in response to affiliative motivation. Per-
sonality and Social Psychology Bulletin, 35(7), 909-
922.
Hutton, L., & Henderson, T. (2015). “I didn’t sign up for
this!”: Informed consent in social network research.
Proceedings of the Ninth International AAAI Confer-
ence on Web and Social Media (ICWSM) (pp. 178-
187). Palo Alto, CA: AAAI Publications. Retrieved
from www.aaai.org/ocs/index.php/ICWSM/ICWSM1
5/paper/view/10493
Jouhki, J., Lauk, E., Penttinen, M., Rohila, J., Sormanen,
N., & Uskali, T. (2015, November). Social media per-
sonhood as a challenge to research ethics: Exploring
the case of the Facebook experiment. Paper present-
ed at the Social Media Research Symposium,
Jyväskylä, Finland.
Kahn, J. P., Vayena, E., & Mastroianni, A. C. (2014). Opin-
ion: Learning as we go: Lessons from the publication
of Facebook's social-computing research. Proceed-
ings of the National Academy of Sciences of the Unit-
ed States of America, 111(38), 13677-13679.
Kehr, F., Kowatsch, T., Wentzel, D., & Fleisch, E. (2015).
Blissfully ignorant: The effects of general privacy
concerns, general institutional trust, and affect in the
privacy calculus. Information Systems Journal, 25,
607-635.
Kennedy, H., Elgesem, D., & Miguel, C. (2015). On fair-
ness: User perspectives on social media data mining.
Convergence: The International Journal of Research
into New Media Technologies. Retrieved from
http://con.sagepub.com/content/early/2015/06/26/
1354856515592507.full
Kleinsman, J., & Buckley, S. (2015). Facebook study: A lit-
tle bit unethical but worth it? Journal of Bioethical
Inquiry, 12(2), 179-182.
Kramer, A. (2014). Facebook post, June 29, 2014. Re-
trieved from www.facebook.com/akramer/posts/10
152987150867796
Kramer, A. D. I., Guillory, J. E., & Hancock, J. T. (2014).
Experimental evidence of massive-scale emotional
contagion through social networks. Proceedings of
the National Academy of Sciences of the United
States of America, 111(24), 8788-8790.
Leaver, T. (2013, September). Birth, death and Facebook.
Paper presented at Adventures in Culture in Tech-
nology (ACAT) Seminar Series, Perth. Retrieved from
www.tamaleaver.net/2013/10/03/birth-death-and-
facebook
Lindsay, S., & Goldring, J. (2010). Anonymizing data for
secondary use. In A. J. Mills, G. Durepos, & E. Wiebe
(Eds.), Encyclopedia of case study research (pp. 25-
27). London: Sage.
Macey, D. A., Ryan, K. M., & Springer, N. J. (Eds.). (2014).
How television shapes our worldview: Media repre-
sentations of social trends and change. New York:
Lexington Books.
Markham, A., & Buchanan, E. (2012). Ethical decision-
making and Internet research. Recommendations
from the AoIR ethics working committee (version
2.0). Chicago: Association of Internet Researchers.
Retrieved from http://aoir.org/reports/ethics2.pdf
Marx, G. T. (2013). An ethics for the new (and old) sur-
veillance. In F. Flammini, R. Stola, & G. Franceschetti
(Eds.), Effective surveillance for homeland security:
Balancing technology and social issues (pp. 2-20).
Boca Raton, FL: Taylor & Francis.
McKelvey, F., Tiessen, M., & Simcoe, L. (2015). A consen-
sual hallucination no more? The Internet as simula-
tion machine. European Journal of Cultural Studies,
18(4-5), 577-594.
Meyer, M. N. (2014). Misjudgments will drive social trials
underground. Nature, 511(7509), 265.
Milgram, S. (1963). Behavioral study of obedience. Jour-
nal of Abnormal and Social Psychology, 67(4), 371-
378.
O’Hara, K., Nguyen, M.-H. C., & Haynes, P. (2014). Intro-
duction. In K. O’Hara, M.-H. C. Nguyen, & P. Haynes
(Eds.), Digital enlightenment yearbook 2014: Social
networks and social machines, surveillance and em-
powerment (pp. 3-24). Amsterdam: IOS Press.
Paolillo, J. C. (2015). Network analysis. In A. Geor-
gakopoulou & T. Spilioti (Eds.), The Routledge hand-
book of language and digital communication (pp. 36-
54). London: Routledge.
Parkinson, B., & Manstead, A. S. R. (2015). Current emo-
tion research in social psychology: Thinking about
emotions and other people. Emotion Review, 7(4),
371-380.
Peacock, S. E. (2014). How web tracking changes user
agency in the age of Big Data: The used user. Big Da-
ta & Society, 1(2), 1-11. Retrieved from http://
bds.sagepub.com/content/1/2/2053951714564228
Pejovic, V., & Musolesi, M. (2015). Anticipatory mobile
computing. A survey of the state of the art and re-
search challenges. ACM Computing Surveys, 47(3), 1-
29.
Phillips, M. L. (2011). Using social media in your re-
search. Experts explore the practicalities of observing
human behavior through Facebook and Twitter.
gradPSYCH, 9(4), 32.
Recuber, T. (2016). From obedience to contagion: Dis-
courses of power in Milgram, Zimbardo, and the Fa-
cebook experiment. Research Ethics, 12(1), 44-54.
Roelcke, V. (2004). Introduction: Historical perspectives
on human subjects research during the 20th century,
and some implications for present day issues in bio-
ethics. In V. Roelcke & G. Maio (Eds.), Twentieth cen-
tury ethics of human subjects research: Historical
perspectives on values, practices, and regulations
(pp. 11-18). Stuttgart: Franz Steiner Verlag.
Rooke, B. (2013). Four pillars of internet research ethics
with Web 2.0. Journal of Academic Ethics, 11(4), 265-
268.
Rosenberg, Å. (2010). Virtual world research ethics and
the private/public distinction. International Journal
of Internet Research Ethics, 3(1), 23-37.
Rushe, D. (2014, October 2) Facebook sorry—almost—
for secret psychological experiment on users. The
Guardian. Retrieved from www.theguardian.com/
technology/2014/oct/02/facebook-sorry-secret-psy
chological-experiment-users
Schroeder, R. (2014). Big Data and the brave new world
of social media research. Big Data & Society, 1(2), 1-
11.
Searls, D. (2015, November 6). Ad blockers and the next
chapter of the Internet. Harvard Business Review.
Retrieved from https://hbr.org/2015/11/ad-blocker
s-and-the-next-chapter-of-the-internet
Selinger, E., & Hartzog, W. (2016). Facebook’s emotional
contagion study and the ethical problem of co-opted
identity in mediated environments where users lack
control. Research Ethics, 12(1), 35-43.
Shah, D. V., Cappella, J. N., & Neuman, W. R. (2015). Big
data, digital media, and computational social science:
Possibilities and perils. The Annals of the American
Academy of Political and Social Science, 659(1), 6-13.
Shrader-Frechette, K. (2000). Ethics of scientific research.
London: Rowman & Littlefield.
Simon, J. R. (2014). Corporate research ethics: Whose
responsibility? Annals of Internal Medicine, 161(12),
917-918.
Simonite, T. (2012, June 13). What Facebook knows. MIT
Technology Review. Retrieved from www.technology
review.com/s/428150/what-facebook-knows
Sloan, L., Morgan, J., Burnap, P., & Williams, M. (2014).
Who tweets? Deriving the demographic characteris-
tics of age, occupation and social class from Twitter
user meta-data. PLoS ONE, 10(3), 1-20.
Sormanen, N., Rohila J., Lauk, E., Uskali T., Jouhki J., &
Penttinen M. (2016). Chances and challenges of
computational data gathering and analysis: The case
of issue-attention cycles on Facebook. Digital Jour-
nalism, 4(1), 55-74.
Stilgoe, J. (2015). Experiment earth: Responsible innova-
tion in geoengineering. London: Routledge.
Summers-Effler, E., Van Ness, J., & Hausmann, C. (2015).
Peeking in the black box: Studying, theorizing, and
representing the micro-foundations of day-to-day in-
teractions. Journal of Contemporary Ethnography, 44
(4), 450-479.
Sun, E., Rosenn, I., Marlow, C., & Lento, T. (2009). Ge-
sundheit! Modeling contagion through Facebook
news feed. Proceedings of the Third International
ICWSM Conference (pp. 146-153). Palo Alto, CA: AAAI
Publications. Retrieved from http://aaai.org/ocs/in
dex.php/ICWSM/09/paper/view/185
Svantesson, D. J. B. (2007). Private international law and
the Internet. Alphen aan den Rijn: Kluwer Law Inter-
national.
Thorson, K., & Wells, C. (2015). Curated flows: A frame-
work for mapping media exposure in the digital age.
Communication Theory. Retrieved from http://ssc.
sagepub.com/content/early/2015/10/19/089443931
5609528.refs
Tinati, R., Halford, S., Carr, L., & Pope, C. (2014). Big data:
Methodological challenges and approaches for socio-
logical analysis. Sociology, 48(4), 663-681.
Torraco, R. J. (2005). Writing integrative literature re-
views: Guidelines and examples. Human Resource
Development Review, 4(3), 356-367.
Vainio, A. (2012). Beyond research ethics: Anonymity as
‘ontology’, ‘analysis’ and ‘independence’. Qualitative
Research, 13(6), 685-698.
Vanderpool, H. Y. (1996). The ethics of research involving
human subjects: Facing the 21st century. Frederick:
University Publishing Group.
Verma, I. M. (2014). Editorial expression of concern and
correction. Proceedings of the National Academy of
Sciences, 111(29), 10779.
Weeden, M. R. (2012). Ethics and on-line research
methodology. Journal of Social Work Values and Eth-
ics, 9(1), 40-51.
Wrzus, C., & Mehl, M. R. (2015). Lab and/or field? Meas-
uring personality processes and their social conse-
quences. European Journal of Personality, 29(2), 250-
271.
Zimbardo, P. (1973). On the ethics of intervention in
human psychological research: With special refer-
ence to the Stanford prison experiment. Cognition,
2(2), 243-256.
Zimmer, M. (2010). “But the data is already public”: On
the ethics of research in Facebook. Ethics and Infor-
mation Technology, 12(4), 313-325.
Zwitter, A. (2014). Big Data ethics. Big Data & Society,
1(2), 1-6.
About the Authors
Jukka Jouhki (PhD, Docent) is a Cultural Anthropologist working as a Senior Lecturer of Ethnology at
the Department of History and Ethnology, and a member of the Social Media Research Institute at
University of Jyväskylä, Finland. Jouhki’s research interests include democracy, nationalism, imagined
communities, online gambling, old and new media, as well as various issues in human-technology re-
lations, and cultural phenomena related to them.
Epp Lauk (PhD) is Professor of Journalism and Head of the Department of Communication at the Uni-
versity of Jyväskylä, Finland. Her research and publications focus on journalism cultures and history,
media and journalism in Central and East European countries, media self-regulation and innovations
in journalism.
Maija Penttinen is an undergraduate student at the Department of History and Ethnology, University
of Jyväskylä. Her research interests include both political participation and civic action on social me-
dia, in addition to studying the integration of social networking sites as platforms for everyday activi-
ties. She is currently working on her Master’s thesis on everyday activities and experiences
manifested in research literature concerning the social networking site Facebook.
Niina Sormanen (MA) is a PhD candidate in Organizational Communication and Public Relations (PR)
at the University of Jyväskylä, Department of Communication. Her research interests include com-
municative behavior and power relations in the social media context. Her PhD thesis is focused on
the interplay of organizational and media professionals and individuals in the social media context
and uses of social media in building their communicative power.
Turo Uskali (PhD) is the Head of Journalism and Senior Research Scholar at the Department of Com-
munication, University of Jyväskylä, Finland. He leads several research projects focusing on innova-
tions in journalism. The most recent ones focus on mobile data journalism, and wearables. Uskali is
also an Associate Professor at the University of Bergen, Norway, and he has authored or co-authored
seven books about the evolution of global journalism and the changes in media industries.