Shared identity and shared information in social media:
development and validation of the identity bubble reinforcement scale
Markus Kaakinen, Anu Sirola, Iina Savolainen, and Atte Oksanen
Faculty of Social Sciences, University of Tampere, Tampere, Finland
Social media facilitates the formation of identity bubbles that reinforce shared identities, social homophily, and reliance on the information shared within the bubbles. Currently, researchers need more social psychological measures to assess how people perceive their relationships with online networks and information shared on social media. This article reports the development and Finnish and English validations of the identity bubble reinforcement scale (IBRS), consisting of subscales on social identification, homophily, and information bias. Studies 1 and 2 (N = 1,200, N = 160; age 15–25 years) validated the 6-item measure (IBRS-6) in Finland, and Study 3 replicated the results with both 6-item and 9-item (IBRS-9) measures in the United States (N = 501, age 18–25 years). Across all three studies and two countries, the IBRS showed good construct validity and correlated positively with social media use and group-behavior measures. The IBRS is an applicable measure of social media group behavior, and its subscales can also be used as separate measures.
The development of social media has quickly transformed social interactions
taking place every day. The concept of social media covers diverse types of
Web 2.0-based technologies consisting of user-generated content (Kaplan &
Haenlein, 2010). Different social networking sites are prominent examples of
social media platforms, which allow individuals to interact with other users,
as well as provide tools to maintain and form social networks (Boyd &
Ellison, 2007; Wilson, Gosling, & Graham, 2012). With the introduction of
social media sites and platforms, such as Facebook, Twitter, and Instagram,
people are able to reach other like-minded people at unprecedented rates.
Social media thus responds to a basic human need to interact and seek
companionship (Baumeister & Leary, 1995).
CONTACT Atte Oksanen firstname.lastname@example.org Faculty of Social Sciences, University of Tampere, Tampere
Supplemental data for this article can be accessed here.
© 2018 Taylor & Francis Group, LLC

The social selectivity and social media platforms' filtering technologies can lead to the formation of psychosocial bubbles that limit the diversity of social contacts and information exposure online. In earlier literature, these bubbles are referred to, for example, as echo chambers (i.e., social interaction limited to like-minded communication; see, e.g., Zollo et al., 2017) or filter bubbles (i.e., a customized online reality due to algorithmic filtering technology; Pariser, 2011). These perspectives were grounded in computer science and in structural measures of online behavior and social networks.
Currently, researchers need more social psychological measures to assess
how people perceive their relationships with online networks and informa-
tion shared on social media. The identity bubble reinforcement model
(IBRM) by Keipi, Näsi, Oksanen, and Räsänen (2017) was the first attempt
to integrate a social psychological perspective into the discussion of social
media bubbles. In contrast to previous attempts in computer science (e.g.,
Pariser, 2011; Zollo et al., 2017), the IBRM sought more specifically to understand human motivation and the social psychological aspects of social media bubbles.
According to the IBRM, the expanded possibilities for communication and
social network formation in social media allow individuals to search for social
interactions with others who share and validate their identities (Keipi et al.,
2017). This identity-driven online use can lead to identity bubbles, which are
manifested in three elements: identification with online social networks (social
identification), a tendency to interact with like-minded others (homophily),
and reliance on like-minded information on social media (information bias).
Within social media, these three elements are correlated and together they
reflect the identity bubble reinforcement process.
This theorization is supported by other research findings on online social
networks and their importance in shaping interaction, participation, and
information consumption online (Bakshy, Messing, & Adamic, 2015; Jost
et al., 2018). Furthermore, social media, indeed, tends to gather users into
social cliques of similar individuals. Earlier computational analyses have
shown how these social aggregates are formed around shared group member-
ships and ideologies (i.e., political factions) and they significantly influence
the way users engage with information online (Bakshy et al., 2015; Bessi et al.,
2015; Boutyline & Willer, 2017; Del Vicario et al., 2016; Zollo et al., 2017).
The IBRM, however, relies on the perspective that individual differences contribute to the tendency to be involved in these like-minded social cliques.
Over the past century, researchers in the social psychological research tradi-
tion have extensively studied group-forming behavior. People not only form
groups very quickly, even on minimal bases, but they also tend to prioritize
their own in-group. These basic ideas were already very well established in
Muzafer Sherif's experiments in boys' summer camps (Sherif, Harvey, White,
Hood, & Sherif, 1961). Later, Henri Tajfel and his colleagues established
a series of minimal group experiments, noting that it takes very little for
people to start identifying with their group and favoring it over the perceived
out-group (Tajfel, Billig, Bundy, & Flament, 1971).
The minimal group experiments led to the formation of social identity
theory proposed by Henri Tajfel and his colleague John C. Turner in the late
1970s (Tajfel & Turner, 1979; Turner & Reynolds, 2010). Social identity theory
proposes that individuals internalize a group membership so that it becomes
an aspect of their self-concept, thus making identity dependent upon connect-
edness to relevant social groups (Tajfel & Turner, 1979). An essential part of
a social identity approach is self-categorization theory, which emphasizes how
individuals cognitively describe themselves in terms of a group identity, rather
than a personal identity; that is, become depersonalized (Turner, 1985; Turner,
Oakes, Haslam, & McGarty, 1994).
The role of online social identification has been shown in a number of
studies on social media. Younger generations of online users have been
shown to identify with online groups and communities, sometimes even
more strongly than with traditional offline peer groups (Lehdonvirta &
Räsänen, 2011; Mikal, Rice, Kent, & Uchino, 2016). Those who strongly identify with their online social networks also tend to self-categorize; in other words, they perceive themselves as online community members, instead of having a personalized identity, in online interaction (Guo & Li, 2016; Jans,
Leach, Garcia, & Postmes, 2015).
Chung (2013) found that preferring online groups over offline contacts is
especially prominent among those who are dissatisfied with offline relations
and who tend to form intimate relationships online. Online social bonds and
strong social identification with online communities are consequently related
to increased use of the Internet (Flanagin, Hocevar, & Samahito, 2014; Mikal
et al., 2016). Social identification online can also be problematic because
people tend to feel obligated to increase their social media use in response to
increased group-level social activity, which may lead to compulsive use of the
Internet (Turel, 2015; Turel & Osatuyi, 2017).
Social identification has been operationalized and measured in various
manners (for review, see Leach, van Zomeren, Zebel, Vliek, & Ouwerkerk,
2008). In a widely referenced measure created by Leach et al. (2008), aspects
of social identification are considered under the dimensions of self-
investment and self-definition. The self-investment dimension of social iden-
tification consists of in-group-related solidarity, satisfaction, and centrality,
whereas the self-definition dimension includes individual self-stereotyping
and in-group homogeneity subdimensions. Of these dimensions, the self-
investment dimension can be seen as closer to the traditional social identity
theory approach, whereas self-definition resembles self-categorization (Leach
et al., 2008; Postmes, Haslam, & Jans, 2013).
MEDIA PSYCHOLOGY 3
People are likely to form social relationships with others who are similar to
them. This is called homophily (McPherson, Smith-Lovin, & Cook, 2001).
Homophily is based on shared background factors such as social or economic
status or geographic region, or attitudinal factors such as similar thoughts or
values (McCroskey, McCroskey, & Richmond, 2006). Homophily markedly
shapes human social networks (McPherson et al., 2001), and similar indivi-
duals seem to form social relationships that are more stable over time (Hartl,
Laursen, & Cillessen, 2015). Homophily in social relations tends to increase
when possibilities for social selectivity increase (Bahns, Pickett, & Crandall,
2011). Because social media provides new possibilities for social network
formation and management, it is not surprising that people often choose to
interact with similar others online (Kang & Chung, 2017; Oksanen, Hawdon,
& Räsänen, 2014). In addition, discussions taking place in social media often
emerge around shared ideologies or emotional valence (Himelboim, Smith, &
Shneiderman, 2013; Himelboim et al., 2016).
Personal networks on social media have an impact on what kind of
content and information users encounter online, even though the algorithmic
filtering technology of online platforms also plays a role (Bakshy et al., 2015;
Himelboim et al., 2013). Because the online experience is modified by one’s
social networks, homophilic social relations lead to reduced informational
diversity. Reduced diversity may contribute to the formation of homophilic
social aggregates, or echo chambers, which reinforce polarization and con-
flicts between different social and ideological cliques online (Boutyline &
Willer, 2017; Densley & Peterson, 2017; Zollo et al., 2017).
Perceived similarity also affects the way people interact online. The per-
ception of congruence between personally held and public opinion online
makes people more willing to participate in interaction and share their views
(Gearhart & Zhang, 2015; Liu, Rui, & Cui, 2017; Zerback & Fawzi, 2017).
Thus, it is likely that social homophily online is related to increased activity
on social media.
Homophily is typically measured as homogeneity in individuals' social networks (McPherson et al., 2001; Robinson & Aikens, 2010). In the perceived homophily scale developed by McCroskey, Richmond, and colleagues, homophily is assessed as individual perceptions of similarity in terms of background and attitudinal factors (McCroskey et al., 2006).
The third aspect of the IBRM is information bias. It consists of selective
exposure to opinion-congruent information and the perception that infor-
mation is reliable on social media. Both of these are influenced by social and
cognitive factors that shape the way people approach information and eval-
uate its credibility. As indicated in the motivated reasoning literature, indi-
viduals are more motivated to confirm their established attitudes than to
reach accurate conclusions (Westen, Blagov, Harenski, Kilts, & Hamann,
2006; Kunda, 1990). In addition, people tend to overestimate the popularity
of their attitudes and downplay opposing evidence (Kuru, Pasek, & Traugott, 2017).
Social media and technology enable high selectivity over consumed informa-
tion. Thus, social media is likely to enhance motivated reasoning, exposure to
like-minded information, and the tendency to ignore attitude-threatening information. In addition, motivated reasoning can be accentuated by online platforms' algorithmic filtering, which adjusts the social media environment according to users' former interests and preferences (Pariser, 2011). Social
media provides users practically unlimited information and the ability to make
decisions very rapidly. In these circumstances, people tend to trust other people’s
opinions and evaluations of information credibility (Flanagin, 2017; Hocevar,
Flanagin, & Metzger, 2014; Metzger, Flanagin, & Medders, 2010). Series of
experiments have shown that social endorsements play a major role in how
people consume social media content (Messing & Westwood, 2014).
Social identification and homophily both contribute to perceptions of online
information credibility. People who strongly identify with their social media
networks are likely to see online information as more reliable. In-group mem-
bers are in general seen as trustworthy and competent (see, e.g., Leach, Ellemers,
& Barreto, 2007). In addition, information coming from one’s in-group or
similar others is perceived as more reliable (Flanagin, 2017; Flanagin et al., 2014; Hocevar et al., 2014; Shin, Van der Heide, Beyea, Dai, & Prchal, 2017).
Furthermore, people are, in general, prone to overestimate the degree to
which others, and especially other in-group members, agree with their thoughts
and behaviors (Holtz & Miller, 1985; Jones, 2004; Rogers, Moore, & Norton,
2017; Ross, Greene, & House, 1977). According to the theory of in-group
projection (Mummendey & Wenzel, 1999), individuals tend to project the characteristics of their in-group onto more inclusive superordinate groups (Imhoff, Dotsch, Bianchi, Banse, & Wigboldus, 2011; Mummendey &
Wenzel, 1999). On social media, these social psychological mechanisms indicate
that users would overestimate the perceived homogeneity of their online in-
groups and further project group characteristics to a wider group of social media
users, leading to an inflated perception of similarity.
Together the perceived credibility and selective exposure elements of
information bias lead to online experiences in which users strongly rely on
like-minded social media communication. Thus, our presumption is that
social media information bias is manifested in individual tendencies to be
mainly exposed to information that agrees with one’s established views and
to trust and value social media as a source of information.
The aim of this series of studies was to develop and validate the IBRS. The
scale was developed during three studies that took place in 2017. The scale
was theoretically based on both the IBRM (Keipi et al., 2017) and empirical
and theoretical perspectives established within social psychology (e.g., Leach
et al., 2008; McCroskey et al., 2006) and research on computer-mediated
communication (Bakshy et al., 2015; Boutyline & Willer, 2017; Flanagin,
2017; Flanagin et al., 2014; Zollo et al., 2017). It measures the cognitive
process of relating oneself with online social networks.
Based on reviewed literature, we hypothesized that the IBRS will consist of
three correlated factors: social identification, homophily, and information bias.
Furthermore, these three social elements reflect a second-order factor (see
Burke Jarvis, MacKenzie, & Podsakoff, 2003) of an identity bubble (i.e.,
a personal tendency to relate oneself with online social cliques). Identity
bubbles can be viewed as self-reinforcing, as those who strongly relate them-
selves with online social cliques are also more likely to display social identifica-
tion, homophily, and reliance on similar-minded information in their future
behavior (Keipi et al., 2017; see also Hogg & Rinella, 2018; Jost et al., 2018).
As reviewed, high involvement in online social networks is related to
increased (Mikal et al., 2016), and even to compulsive (Turel, 2015; Turel & Osatuyi, 2017), Internet use, because those most involved in online communities often feel obligated to respond to others' activity. Thus, we hypothesized that the IBRS should be positively associated with increased social
media activity and compulsive Internet use. In addition, the IBRS should
be positively associated with reciprocity in group activity (Mikal et al., 2016;
Turel, 2015; Turel & Osatuyi, 2017) and with other forms of group behavior;
that is, self-categorization (Guo & Li, 2016; Jans et al., 2015) and group
influence (Flanagin, 2017; Flanagin et al., 2014; Hocevar et al., 2014; Shin
et al., 2017) in online group experiments. Furthermore, these forms of online group behavior should be correlated with the IBRS, especially when a shared group membership with similar others is made salient.
To test our hypotheses, we first investigated the validity and reliability of the
IBRS with three samples from two countries (Finland & the United States).
Then we analyzed the hypothesized associations between the IBRS and our
validation variables. The country selection was based on the fact that both the
United States and Finland are highly technologically developed countries, but at
the same time, societally and culturally distinct. Although some earlier studies have pointed out major cultural differences (Finns being quieter and more reserved than their American counterparts; e.g., Sallinen-Kuparinen, McCroskey, & Richmond, 1991), social media research has also pointed out major similarities, especially in youth behavior, in how social media is used among adolescents and young adults, and in what
kind of factors explain risks encountered online (Keipi et al., 2017; Näsi et al.,
2014). Theoretically and empirically, it was important to show whether the
IBRS could be validated in different cultural settings.
We first conducted a small pilot study by collecting a convenience sample (N = 60, M_age = 24.17, SD = 3.57, 86% female) consisting of university students to construct and validate the IBRS in Finnish. Studies 1 and 2 were targeted to Finnish young people aged 15 to 25. Participants of Study 1 were recruited from a pool of volunteer respondents administered by Survey Sampling International in March–April 2017. This demographically balanced sample (N = 1,200, M_age = 21.29, SD = 2.85, 50% female) matches the Finnish population aged 15 to 25 in terms of age, gender, and living area. Participants of Study 2 were recruited from Finnish discussion forums and social networking sites in April–June 2017 (N = 160, M_age = 22.48, SD = 2.58, 57% female).
Study 3 was targeted to respondents on Amazon's Mechanical Turk, which enables collection of high-quality data at low cost. Mechanical Turk data
have been increasingly used as a source of research participants in the social
sciences and psychology (Buhrmester, Kwang, & Gosling, 2011; Paolacci &
Chandler, 2014). Study 3 participants were 18- to 25-year-olds located in the
United States (N = 501, M_age = 22.84, SD = 1.85). The participants were
from 48 states, with the highest response rates coming from California
(10.16%) and New York (9.16%).
YouGamble online surveys measured social media use and online behavior
from a social psychological perspective. The surveys were part of a Finnish
research project on gambling among young people that expanded to
a comparative cross-national project. In this article, we only report data
and measures that are relevant for the validation of the IBRS.
The study format was approved by The Academic Ethics Committee of the
Tampere Region. All participants agreed voluntarily to take part in the
YouGamble online surveys regarding gambling, and they were informed
about the aims of the study.
The surveys were conducted using LimeSurvey software, and they were
optimized for computers and mobile devices. A pilot study was conducted before Study 1 to get comments from Finnish university students on the IBRS questions. All the students filled out the questionnaire anonymously,
and they were given an opportunity to comment on survey questions. We
also discussed questions with them in a group discussion.
Studies 1–3 were identical in layout and order of questions. Study 2 replicated
the Study 1 questionnaire, but four additional questions were added. Study 3 was
conducted for the purpose of validating the IBRS in English. Thus, it was
significantly shorter because it only included measures that were needed for
the validation. These validation measures were the same as those in Studies 1
and 2, and some additional questions were used as attention checks. The median
response time for Study 1 was 15.50 min and for Study 2 it was 17.83 min. The
shorter Study 3 had a median response time of 6.27 min.
In the first section of the surveys, respondents were asked about their back-
ground factors (e.g., age & gender) and their habits of social media use. After
that, respondents were assigned to a vignette experiment with a 2 × 2 × 2 × 2
factorial design. The design included one two-level between-subject factor
(group condition or control condition) and three two-level within-subject
factors (majority opinion, stance on gambling, and used narration).
In the vignette experiment, respondents were asked to evaluate how they
would react (i.e., like, dislike, or not react at all) toward the manipulated
gambling-related social media content presented to them in the vignette
scenarios. In addition, respondents were presented with six items in which
they were asked to indicate how interesting the content would appear to them
in a real social media situation, such as “How likely would you find the message interesting?” or “How likely would you share the link in social media?” The answer options had a scale of 1 (not at all likely) to 10 (very likely).
As a between-subject factor, respondents were randomly assigned to two
groups for the vignette experiment. For the salient group identity condition,
respondents were told they had been assigned to Group C, consisting of similar respondents, based on their answers to previous questions. Those in
the control condition were given no group information.
In the case of the within-subject factors, we manipulated the majority opinion,
expressed stance on gambling, and the used narration in the vignettes. In the
case of majority opinion, respondents were shown a manipulated distribution
of other respondents' earlier reactions. In half of the vignettes, a majority
(about 85%) had liked the content, whereas in the other half the majority had
disliked the content. The distribution of minority opinion (likes or dislikes)
in the vignettes was about 13%, and for those who had not stated an opinion
(no reaction) it was about 2%. For those in the salient identity condition, this
distribution was framed as their in-group members' earlier reactions.
We also manipulated the expressed stance on gambling (pro or anti) in the
vignettes. Half of the vignettes expressed a positive stance on gambling (i.e.,
discussed positive gambling-related phenomena), whereas the other half had
a negative stance (i.e., discussed negative gambling-related phenomena). In the case of the narration used in the vignette contents, half of the vignettes were manipulated to utilize subjective, experience-driven argumentation, whereas the other half utilized objective, fact-driven argumentation. The exact manipulations of the within-subject factors are presented in Table A1 (supplementary materials).
The 2 × 2 × 2 within-subject factorial design resulted in eight vignette
scenarios, of which each participant was shown four. The factorial structure
was designed so that each type of content (pro/anti, experience-driven/fact-
driven) was liked and disliked once (see Atzmüller & Steiner, 2010). Thus,
the group did not favor any form of gambling orientation or narration.
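The balanced assignment described above can be sketched as follows. This is an illustrative reading of the design, not the authors' materials; the factor names and level labels are ours.

```python
from itertools import product

# Illustrative sketch of the 2 x 2 x 2 within-subject vignette design
# (factor names and level labels are ours, not the authors' exact wording).
stances = ["pro-gambling", "anti-gambling"]
narrations = ["experience-driven", "fact-driven"]
majorities = ["majority liked", "majority disliked"]

# Full factorial: eight vignette scenarios.
scenarios = list(product(stances, narrations, majorities))
assert len(scenarios) == 8

def participant_set():
    """Return one balanced set of four vignettes: each stance x narration
    content type appears once, and within each stance (and each narration)
    one vignette is liked and one disliked by the majority."""
    content_types = list(product(stances, narrations))
    chosen = []
    for i, (stance, narration) in enumerate(content_types):
        # Diagonal assignment: content types 0 and 3 get 'liked',
        # 1 and 2 get 'disliked', balancing every factor level.
        majority = majorities[0] if i in (0, 3) else majorities[1]
        chosen.append((stance, narration, majority))
    return chosen
```

With this assignment, no stance or narration level is systematically paired with majority approval, which is the sense in which the group favors no gambling orientation or narration.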
Corresponding measures and controls
Based on the review of literature and existing instruments concerning the
IBRM, social identity, homophily, and information biases, 16 items were
created to measure the three dimensions of IBRS (Table A2 in supplementary
material). Many of the generated items were adapted from earlier measures, but
new items were also formulated on the basis of reviewed literature. Created
items were discussed and modified within the author group to reach consensus
and to enhance the face validity of created measures. All items had a scale ranging from 1 (does not describe me at all) to 10 (describes me completely). In
Study 3, the six-item version of IBRS (IBRS-6) was extended with three addi-
tional items (IBRS-9) to facilitate using each subdimension as separate mea-
sures. The three additional items were formulated (one for each IBRS
dimension) on the basis of earlier measures and reviewed literature.
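The composite scoring implied by the reported ranges (two 1–10 items per subscale, giving subscale scores of 2–20 and an IBRS-6 total of 6–60) can be sketched as below. The item keys are placeholders; the actual item wordings are in Table A2 of the supplementary materials.

```python
# Hedged sketch of IBRS-6 composite scoring as implied by the reported
# ranges. Subscale and item keys are ours, not the authors' labels.
SUBSCALES = {
    "social_identification": ["si1", "si2"],
    "homophily": ["ho1", "ho2"],
    "information_bias": ["ib1", "ib2"],
}

def score_ibrs6(responses):
    """Sum 1-10 item responses into three subscale scores (range 2-20
    each) and a total IBRS-6 score (range 6-60)."""
    scores = {}
    for name, items in SUBSCALES.items():
        for item in items:
            if not 1 <= responses[item] <= 10:
                raise ValueError(f"{item} outside the 1-10 response scale")
        scores[name] = sum(responses[item] for item in items)
    scores["ibrs6_total"] = sum(scores[s] for s in SUBSCALES)
    return scores
```

The IBRS-9 extends each subscale with a third item, shifting the subscale ranges to 3–30 and the total to 9–90, as in Table 1.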
Social media activity
Social media activity was measured with two items in which respondents were
asked to indicate how often they shared content on social media and how often
they posted pictures of themselves on social media. Both items had the
following reply options: 0 = never, 1 = less than once a year, 2 = at least once a year, 3 = at least once a month, 4 = more than once a month, 5 = once a week, 6 = more than once a week, and 7 = daily. The Cronbach's alpha
coefficients for these items were .76 in Study 1, .83 in Study 2, and .71 in Study
3, and they were summed up as a count variable for further analyses (Table 1).
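The alpha coefficients reported above follow the standard formula; a generic illustration (not the authors' software or data):

```python
# Cronbach's alpha for a k-item composite, computed from raw scores:
# alpha = k/(k-1) * (1 - sum(item variances) / variance of total score).
def cronbach_alpha(items):
    """items: list of k lists, each holding one item's scores across
    the same respondents, in the same order."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))
```

For two perfectly correlated items the formula yields 1.0; lower interitem correlations pull alpha down toward 0.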
Compulsive Internet use
Compulsive Internet use was measured with the compulsive Internet use
scale (Meerkerk, Van Den Eijnden, Vermulst, & Garretsen, 2009). This
measure consists of 14 items concerning excessive Internet use with responses ranging from 0 (never) to 4 (very often), with a higher score indicating excessive Internet use. All items in the measure were summed
up as a count variable (Table 1).
The self-categorization measure was adapted from Leach et al.'s (2008) group-level self-definition scale. The measure was included in Studies 2 and 3 after the
vignette experiment (self-categorization was not measured in Study 1). The
measure consists of components of individual self-stereotyping and in-group
homogeneity. Individual self-stereotyping was measured with two items: (a)
I have a lot in common with the average group member (or average respondent
in control condition), and (b) I am similar to the average group member. In-
group homogeneity was also measured with two items: (a) Group members have
a lot in common with each other, and (b) group members are very similar to
each other. All the items had a response range from 1 (strongly disagree) to 10
(strongly agree). The scale had good internal consistency (Cronbach’s alpha = .89
in Study 2 and .87 in Study 3), and items were summed up to a count variable for
further analyses (Table 1). The original self-categorization measure by Leach
et al. (2008), however, is two-dimensional. This should be noted even though the
scale showed good internal consistency.
Group influence and responding to group activity
Group influence and responding to group activity measures were derived
from the vignette experiment (Table 1).

Table 1. Descriptive statistics on measures.

Variables                          Range       Study 1         Study 2         Study 3
                                               M       SD      M       SD      M       SD
IBRS-6                             6–60        27.79   9.97    27.57   10.01   35.85   10.04
  Social identification            2–20        10.60   4.92    11.32   5.11    11.56   4.53
  Homophily                        2–20        9.09    4.20    8.91    4.50    14.06   3.78
  Information bias                 2–20        8.10    3.56    7.34    3.45    10.24   3.90
IBRS-9                             9–90        .       .       .       .       54.88   14.90
  Social identification            3–30        .       .       .       .       17.40   6.67
  Homophily                        3–30        .       .       .       .       20.87   5.49
  Information bias                 3–30        .       .       .       .       17.06   5.14
Social media activity              0–14        5.25    3.08    5.42    3.37    7.00    3.22
Compulsive Internet use            0–56        18.79   11.13   18.15   11.30   22.08   12.14
Self-categorization
  Group condition                  4–40        .       .       16.52   7.60    19.97   8.24
  Control condition                4–40        .       .       15.96   7.61    19.86   7.84
Group influence
  Group condition                  −108–108    2.14    11.95   0.94    9.77    3.07    13.85
  Control condition                −108–108    1.44    10.56   1.12    10.59   3.26    12.64
Responding to activity
  Group condition                  0–4         2.17    1.57    1.76    1.57    2.43    1.46
  Control condition                0–4         1.90    1.60    1.81    1.62    2.55    1.44

Categorical variables    Coding    Study 1 n (%)    Study 2 n (%)    Study 3 n (%)
Control                  0         563 (46.92)      85 (53.12)       252 (50.20)
Group                    1         637 (53.08)      75 (46.88)       249 (49.80)

Note. The descriptive analyses are based on the following sample sizes: Study 1: N = 1,200; Study 2: N = 160; Study 3: N = 501. M = mean; SD = standard deviation; IBRS = identity bubble reinforcement scale.

After each vignette, respondents were asked about the levels of their interest toward the presented vignette
content with six survey items (see the procedure section for further details).
All items had a scale ranging from 1 to 10, and these questions were summed
up to a composite variable. The within-subject effect of group influence was
calculated as the level of interest toward content that other group members
(respondents in the control condition) had liked, minus the interest in
vignettes that others had disliked (for a similar calculation, see Atzmüller &
Steiner, 2010). Thus, a higher value indicated higher interest in content that
others had evaluated in a positive manner. Responding to group activity was calculated as the sum of all responses (likes or dislikes) given to the presented vignettes. In all vignettes, other group members (or respondents) were shown as highly active in terms of liking or disliking the content, with only a small minority (about 2%) not indicating their positive or negative opinions. Thus, a small number of given responses indicated low reciprocity to others' group activity.
In the pilot study, principal component analysis was used to reduce the
number of items for the final scale. In the analysis, the number of
components was chosen on the basis of theory, eigenvalues (> 1), and
a scree plot test (Cattell, 1966). An oblique rotation (promax) was then
conducted with the suggested number of components. The item reduction was based on face validity, item loadings, explained variance, and interitem correlations.
In Studies 1–3, the construct validity of the IBRS was assessed by con-
ducting confirmatory factor analysis (CFA) with maximum likelihood esti-
mation. In the hypothesized three-factor models, all items were allowed to
load solely on the hypothesized factor, and error terms were not allowed to
correlate with each other. In all models, the three hypothesized IBRS factors
were allowed to correlate with each other.
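The specification above can be written compactly in standard measurement-model notation (a sketch; λ, δ, and Φ follow LISREL conventions, with f(i) denoting the hypothesized factor of item i):

```latex
% Three-factor CFA for the six IBRS items: error terms uncorrelated,
% factor covariances freely estimated.
\begin{aligned}
x_i &= \lambda_i \, \xi_{f(i)} + \delta_i, \qquad i = 1, \dots, 6,\\
\operatorname{Cov}(\delta_i, \delta_j) &= 0 \quad (i \neq j),\\
\Phi_{jk} &= \operatorname{Cov}(\xi_j, \xi_k) \ \text{freely estimated}, \qquad j, k \in \{\mathrm{SI}, \mathrm{HO}, \mathrm{IB}\}.
\end{aligned}
```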
To assess the fit of estimated CFA models, we report widely used fit
indices including root mean squared error of approximation (RMSEA)
with 90% confidence interval, standardized root mean squared residual
(SRMR), comparative fit index (CFI), and Tucker-Lewis index (TLI) esti-
mates (Table A3 in supplementary materials). Cut-off criteria used for these
estimates include values of .06 for RMSEA, .08 for SRMR, and .95 for CFI
and TLI, as suggested by Hu and Bentler (1999). In addition, the χ² statistics, along with degrees of freedom and corresponding significance tests, are reported. However, these measures are hard to interpret because they are highly dependent on sample size (Hu & Bentler, 1999). For model comparison, we also estimated competing CFA models with a one-factor solution (all
items loading on one general factor) and three different two-factor solutions
(with one original factor retained and the remaining two combined). We also
report the reliability of the IBRS scales and their subscales (Table A4 in
supplementary materials) as Spearman-Brown coefficients for two-item sub-
scales (see Eisinga, Te Grotenhuis, & Pelzer, 2013) and as McDonald’s omega
(total) coefficients for the multidimensional IBRS and its three-item subscales
(in Study 3; see Dunn, Baguley, & Brunsden, 2014; Watkins, 2017). In the
pilot study, however, we report scale reliabilities as Cronbach’s alphas due to
the relatively small sample size.
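The Spearman-Brown coefficient used for the two-item subscales follows the standard step-up formula; a generic illustration:

```python
# Spearman-Brown reliability for a two-item scale, computed from the
# interitem correlation r (see Eisinga et al., 2013): 2r / (1 + r).
def spearman_brown(r):
    if not -1.0 < r <= 1.0:
        raise ValueError("correlation must be in (-1, 1]")
    return 2 * r / (1 + r)
```

For example, an interitem correlation of .60 corresponds to a two-item reliability of .75.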
The convergent validity was estimated by assessing the hypothesized
associations between the IBRS and social media activity, compulsive
Internet use, and online group behavior (self-categorization, group influence,
and responding to group activity) in a social media vignette experiment (for
a correlation matrix and additional analyses for the IBRS subscales and our
validation variables, see Tables A5–A8 in supplementary material).
Associations were estimated as standardized regression coefficients (not
controlling for third factors), and for group behavior they were calculated
separately for respondents in the group condition and respondents in the control condition.
Finally, we tested the measurement invariance of the IBRS-6 between our
study samples. This was done by comparing three nested measurement models
(see, e.g., Little, Slegers, & Card, 2007; Putnick & Bornstein, 2016). In the first
model, no parameter constraints were set. In Model 2, all the factor loadings
were constrained to be equal between our study samples. In Model 3, factor
loadings and intercepts were set to be equal across our study samples. Thus,
Model 2 tested the metric invariance of our IBRS-6 (weak factorial invariance)
and Model 3 tested the scalar invariance (strong factorial invariance). For
Models 2 and 3, we analyzed the change in the chi-square statistic and CFI.
A nonsignificant change in the chi-square statistic (at the 95% confidence level) and a decrease of less than .01 in the CFI were considered evidence of the tested measurement invariance (Putnick & Bornstein, 2016).
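The nested-model comparison can be reproduced numerically. For an even difference in degrees of freedom (Δdf = 6 in all comparisons here), the chi-square p-value has a closed form; the Δχ² values below are those later reported in Table 4:

```python
import math

def chi2_sf(x, df):
    """Survival function P(X > x) of the chi-square distribution,
    via the closed form available when df is even."""
    assert df % 2 == 0 and df > 0
    k = df // 2
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= (x / 2) / i
        total += term
    return math.exp(-x / 2) * total

# Model 2 vs. Model 1 (metric invariance): values from Table 4
p_metric = chi2_sf(25.96 - 22.56, 24 - 18)
# Model 3 vs. Model 2 (scalar invariance)
p_scalar = chi2_sf(54.17 - 25.96, 30 - 24)
print(round(p_metric, 3), p_scalar < .001)  # 0.757 True
```

The recovered p = .757 for the metric-invariance step matches the value reported in the results below.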
Development and piloting
Sixteen items in total were formulated to measure the three hypothesized components of the IBRS (Table A2 in supplementary materials). In the pilot study, we used principal component analysis to test our hypothesized three-factor model and to reduce the item pool to the six items most appropriate for measuring the scale's three subdimensions. The principal component analysis supported our three-factor solution. The selection of the
remaining six items was based on face validity, item loadings, explained
variance, and interitem correlations. In other words, we chose items that
12 M. KAAKINEN ET AL.
best captured the essence of our theoretical model, loaded on hypothesized
factors, were sufficiently explained by the three-factor model, and resulted in
subscales with appropriate internal consistency. The resulting set of items
and their formulations are presented in Table 2. The three-component
solution accounted for 81% of the variance of the final six items, and factor
loadings on the components varied between .41 and .90. The Cronbach’s
alpha for the final scale was .77, and it was .78, .77, and .61 for the subscales
of social identification, homophily, and information bias, respectively.
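An item-reduction analysis of this kind can be sketched as a principal component analysis of the item correlation matrix. The data below are synthetic stand-ins for the pilot responses (the real pilot analyzed 16 candidate items), so the printed variance share is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in: 200 "respondents" answering 6 items on a 1-10 scale
data = rng.integers(1, 11, size=(200, 6)).astype(float)

# Principal components = eigenvectors of the item correlation matrix
corr = np.corrcoef(data, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]            # largest components first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()          # variance share per component
loadings = eigvecs * np.sqrt(eigvals)        # item loadings on components
print(round(float(explained[:3].sum()), 2))  # variance retained by 3 components
```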
Study 1

To test the construct validity of the three-factor solution suggested by our theoretical framework, CFA was performed on the six IBRS items selected in the pilot study (Table 2). According to fit statistics (Table A3 in supplementary materials), we observed a good fit between the data and our model, χ²(6) = 6.94, p = .327 (χ²/df = 1.16), RMSEA = .011, 90% CI [.000, .040], SRMR = .006, CFI = 1.000, and TLI = .999. The fit was better than for competing one- and
two-factor models. Standardized factor loadings (.66 at lowest) and factor
covariances are presented in Figure 1. The reliability estimates of the IBRS
and its subdimensions are reported in Table A4 (supplementary materials).
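The fit indices reported here can be reproduced from the model and baseline (null-model) chi-square statistics with the standard maximum-likelihood formulas. A minimal sketch using Study 1's model values; the baseline chi-square is an illustrative placeholder, not a value from the article:

```python
import math

def fit_indices(chi2_m, df_m, chi2_b, df_b, n):
    """RMSEA, CFI, and TLI from model (m) and baseline (b) chi-square
    statistics and sample size n, using the standard ML formulas."""
    # RMSEA: misfit per degree of freedom, scaled by sample size
    rmsea = math.sqrt(max(chi2_m - df_m, 0.0) / (df_m * (n - 1)))
    # CFI: improvement over the baseline model, bounded to [0, 1]
    nc_m = max(chi2_m - df_m, 0.0)
    nc_b = max(chi2_b - df_b, nc_m)
    cfi = 1.0 - nc_m / nc_b if nc_b > 0 else 1.0
    # TLI: relative fit per degree of freedom (may exceed 1)
    tli = ((chi2_b / df_b) - (chi2_m / df_m)) / ((chi2_b / df_b) - 1.0)
    return rmsea, cfi, tli

# Study 1 model values (chi2 = 6.94, df = 6, N = 1,200);
# chi2_b and df_b are hypothetical placeholders for the null model.
rmsea, cfi, tli = fit_indices(chi2_m=6.94, df_m=6, chi2_b=900.0, df_b=15, n=1200)
print(round(rmsea, 3))  # 0.011, matching the reported RMSEA
```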
Table 2. Item formulations for IBRS-6 and IBRS-9.

Social identification
  1. In social media, I belong to a community or communities that are an important part of my … (Leach et al., 2008)
  2. In social media, I belong to a community or communities that I'm proud of. (Leach et al., 2008)
Homophily
  3. In social media, I prefer interacting with people who are like me. (McCroskey et al., 1975)
  4. In social media, I prefer interacting with people who share similar interests with me.
Information bias
  5. In social media, I trust the information that is shared with me.
  6. In social media, I feel that people think like me. (Self-developed)

IBRS-9: added items

Social identification
  7. In social media, I belong to a community or communities that I can commit to. (Leach et al., 2008)
Homophily
  8. In social media, I prefer interacting with people who share my values.
Information bias
  9. In social media, I can keep myself well …

Note. IBRS = identity bubble reinforcement scale. The response scale for all items ranged from 1 (does not describe me at all) to 10 (describes me completely). References are provided only for items adapted from earlier measures.
The convergent validity of the IBRS was estimated by calculating its correlation
with social media activity, compulsive Internet use, group influence, and
responding to group activity in a social media vignette experiment (Table 3).
Figure 1. Standardized factor loadings and factor covariances for identity bubble reinforcement
scale (IBRS-6) in Study 1.
Table 3. Associations between the IBRS-6 and IBRS-9 and the validation measures, estimated separately for each validation variable.

                                Study 1 (IBRS-6)    Study 2 (IBRS-6)    Study 3 (IBRS-6)    Study 3 (IBRS-9)
Validation measures             β      95% CI       β      95% CI       β      95% CI       β      95% CI
Social media activity           .40*** [.35, .46]   .32*** [.18, .47]   .42*** [.34, .50]   .43*** [.35, .51]
Compulsive Internet use         .26*** [.21, .32]   .25*** [.10, .41]   .25*** [.16, .34]   .26*** [.17, .34]
Self-categorization
  Group condition               .                   .30**  [.07, .52]   .36*** [.24, .49]   .31*** [.18, .44]
  Control condition             .                   .21    [−.01, .44]  .31*** [.20, .42]   .33*** [.22, .44]
Group influence
  Group condition               .10*   [.02, .18]   .05    [−.18, .29]  .02    [−.12, .16]  −.00   [−.14, .13]
  Control condition             .04    [−.04, .11]  .18    [−.06, .43]  −.00   [−.12, .11]  −.00   [−.12, .11]
Responding to group activity
  Group condition               .14*** [.06, .22]   .21    [−.01, .42]  .16*   [.03, .29]   .18**  [.05, .31]
  Control condition             .13*** [.05, .22]   .07    [−.15, .30]  .10    [−.02, .22]  .10    [−.02, .22]

Note. IBRS = identity bubble reinforcement scale. Analyses are based on the following sample sizes: Study 1: N = 1,200; Study 2: N = 160; Study 3: N = 501. β = standardized regression coefficient; 95% CI = 95% confidence interval. ***p < .001. **p < .01. *p < .05.
We observed significant positive correlations between the IBRS and social media activity (β = .40, p < .001) and compulsive Internet use (β = .26, p < .001). In the vignette experiment, the IBRS was positively correlated with group influence in the group condition (β = .10, p = .019), but not in the control condition (β = .04, p = .309). In addition, the IBRS was positively correlated with responding to group activity in both the group condition (β = .14, p < .001) and the control condition (β = .13, p = .001).
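A standardized coefficient of this kind is simply the regression slope after z-scoring both variables (in the bivariate case it equals Pearson's r). A minimal sketch with made-up scores, not the study data:

```python
import statistics

def z_scores(values):
    """Standardize a list of scores to mean 0, SD 1 (population SD)."""
    m, s = statistics.mean(values), statistics.pstdev(values)
    return [(v - m) / s for v in values]

def standardized_beta(x, y):
    """Bivariate standardized regression coefficient (slope of zy on zx)."""
    zx, zy = z_scores(x), z_scores(y)
    return sum(a * b for a, b in zip(zx, zy)) / sum(a * a for a in zx)

# Hypothetical IBRS totals and social media activity scores
ibrs = [12, 25, 40, 18, 33, 52, 28, 45]
activity = [2, 3, 5, 2, 4, 6, 3, 5]
print(round(standardized_beta(ibrs, activity), 2))
```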
Study 2: replication of Study 1
Here, CFA was conducted with an additional data set to replicate the measurement model of Study 1. According to fit statistics (Table A3 in supplementary materials), the fit between the data and the three-factor model was good in the second sample as well, χ²(6) = 2.51, p = .867 (χ²/df = 0.42), RMSEA = .000, 90% CI [.000, .053], SRMR = .009, CFI = 1.000, and TLI = 1.030. The fit was better than for
competing models. Standardized factor loadings (.64 at lowest) and factor
covariances are presented in Figure 2, and reliability coefficients of the IBRS
and its subdimensions are reported in Table A4 (supplementary materials).
Figure 2. Standardized factor loadings and factor covariances for identity bubble reinforcement
scale (IBRS-6) in Study 2.
We observed significant positive correlations between the IBRS and social media activity (β = .32, p < .001) and compulsive Internet use (β = .25, p = .001; Table 3). The correlation between the IBRS and self-categorization was significant and positive for respondents in the group condition (β = .30, p = .011) and positive but nonsignificant for those in the control condition (β = .21, p = .062). However, the IBRS was not significantly correlated with group influence in either the group condition (β = .05, p = .651) or the control condition (β = .18, p = .138). The association between the IBRS and responding to group activity was positive but nonsignificant in both the group condition (β = .21, p = .065) and the control condition (β = .07, p = .513).
Study 3: English validation of the IBRS
Confirmatory factor analysis
In Study 3, we tested the validity of both the six-item version of the IBRS
(IBRS-6) and the nine-item version (IBRS-9) consisting of original items plus
one additional item for each subscale (Table 2). Based on earlier measures
and the reviewed theory, along with our reflection on the findings of our
pilot study, we hypothesized these items to enhance the face validity of our
measure and to facilitate the usage of the IBRS subdimensions as separate
measures in future studies (for which at least three indicator variables are
required per dimension).
The IBRS-6 showed an acceptable fit between the data and our three-factor model, χ²(6) = 13.11, p = .041 (χ²/df = 2.185), RMSEA = .049, 90% CI [.009, .085],
SRMR = .016, CFI = .994, TLI = .984, and the fit was better than for competing
models (Table A3 in supplementary materials). Standardized factor loadings
(.62 at lowest) and factor covariances are presented in Figure 3.
According to fit statistics, there was also an acceptable fit between the IBRS-9 and the data, χ²(24) = 50.77, p = .001 (χ²/df = 2.12), RMSEA = .047, 90% CI [.029, .065], SRMR = .027, CFI = .988, TLI = .982. For both the IBRS-6 and the IBRS-9, the χ² test was significant, but we interpret the overall model fit to be acceptable because all the less sample-size-dependent fit indices suggested by Hu and Bentler (1999) indicated an excellent fit. Here again,
the fit between the data and the hypothesized three-factor model was better
than that for competing models (Table A3 in supplementary materials).
Standardized factor loadings (.69 at lowest) and factor covariances are pre-
sented in Figure 4. The reliability estimates of the IBRS-6 and the IBRS-9 and
their subdimensions are reported in Table A4 (supplementary materials).
The IBRS-6 had a significant positive correlation with social media activity (β = .42, p < .001) and compulsive Internet use (β = .25, p < .001). For the IBRS-9, the corresponding coefficients were .43 (p < .001) for social media activity and .26 (p < .001) for compulsive Internet use (Table 3).
The IBRS-6 was positively associated with self-categorization in both the group condition (β = .36, p < .001) and the control condition (β = .31, p < .001). However, the IBRS-6 was not associated with group influence in either the group condition (β = .02, p = .781) or the control condition (β = .00, p = .934). There was, however, a significant positive correlation between the IBRS-6 and responding to group activity in the group condition (β = .16, p = .018) but not in the control condition (β = .10, p = .104). The same held for the IBRS-9, which was significantly correlated with responding to group activity only in the group condition (β = .18, p = .007).
Testing the measurement invariance of the IBRS-6 between the study samples
To test the IBRS-6's measurement invariance between our study samples, we estimated three nested measurement models. The first model placed no constraints on the parameters between groups. In the second model, all the factor loadings for our latent variables were constrained to be equal
Figure 3. Standardized factor loadings and factor covariances for identity bubble reinforcement
scale (IBRS-6) in Study 3.
between our samples, and in the third model all factor loadings and inter-
cepts were restricted to be equal. The corresponding fit indices for the model
comparisons are reported in Table 4. We found strong evidence for metric invariance because there was no significant change in the chi-square statistic (p = .757) after the factor loadings were set to be equal across samples (Model 2), and the decrease in the CFI statistic was less than .01. The evidence for scalar invariance, in turn, was mixed because the decrease in the CFI statistic
Figure 4. Standardized factor loadings and factor covariances for identity bubble reinforcement
scale (IBRS-9) in Study 3.
Table 4. The model comparison indices for measurement invariance estimation.

         df   χ²      χ² diff.  df diff.  pr(χ² diff.)  CFI    CFI diff.  RMSEA  RMSEA diff.  TLI    TLI diff.
Model 1  18   22.56   .         .         .             .999   .          .020   .            .997   .
Model 2  24   25.96   3.40      6         .757          .999   −.001      .011   −.009        .999   .002
Model 3  30   54.17   28.21     6         < .001        .994   −.006      .036   .016         .991   −.006

Note. df = degrees of freedom; χ² = chi-square statistic; χ² diff. = change in the chi-square statistic; df diff. = change in degrees of freedom; pr(χ² diff.) = statistical significance of the change in the chi-square statistic; CFI = comparative fit index; CFI diff. = change in the comparative fit index; RMSEA = root mean square error of approximation; RMSEA diff. = change in the RMSEA; TLI = Tucker–Lewis index; TLI diff. = change in the TLI.
was less than .01 after the factor loadings and intercepts were constrained to be equal (Model 3). However, the change in the chi-square statistic was significant (p < .001).
As the IBRS was administered after an experimental manipulation in our
vignette experiment, we also tested the measurement invariance between the
group identity and control conditions. The tests supported both metric
invariance and scalar invariance.
Discussion

This article is based on a series of studies that validated the IBRS in both Finnish and English. The development of the scale was connected to the
ongoing discussion about different online bubbles (Abisheva, Garcia, & Schweitzer, 2016; Bozdag, Gao, Houben, & Warnier, 2014; Nguyen, Hui, Harper, Terveen, & Konstan, 2014; Pariser, 2011; Zuiderveen Borgesius et al., 2016). The phenomenon of social media bubbles is by no means trivial, because social networks are major drivers of social media experiences and online media consumption (Bakshy et al., 2015; Jost et al., 2018).
2018). It has been well documented in previous studies that online inter-
actions tend to center around social cliques or bubbles that are manifested
in shared-group ideologies (e.g., political factions, world views) with
a strong reliance on similar-minded social networks as sources of reliable information (Bessi et al., 2015; Bakshy et al., 2015; Boutyline & Willer, 2017; Del Vicario et al., 2016; Zollo et al., 2017). The IBRM by Keipi et al.
(2017) further elaborates how these elements manifest at the individual
level as identity bubbles.
Although the earlier discussion of filter bubbles and echo chambers has focused on perspectives originally developed in computer science, our work is grounded in social psychology, with the IBRM (Keipi et al., 2017) as its starting point. Based on the earlier literature, we hypothesized that online identity bubbles are reflected in social identification with online networks, online social homophily, and a tendency to rely on like-minded information online (i.e., information bias). Our empirical approach sought
to create the first measure for examining individual points of view in relating
to social media networks. This kind of approach extends the existing view of
online bubbles by focusing on the social psychological side of the phenom-
enon, and it provides insight in understanding the underlying social factors
of online bubbles and their identity processes.
The three-factor structure of the IBRS was validated in three separate
studies. We also tested the measurement invariance of the IBRS-6 measure
between the study samples. We found strong evidence for configural invar-
iance (the same theory-based model showed a good fit in all study samples)
and metric invariance (equal factor loading between study samples) and
mixed evidence for scalar invariance. In addition, Studies 1 and 2 first
showed that the six-item version had good reliability in Finnish. Study 3
validated the IBRS-6 in English, with equally good reliability. We also
extended the IBRS-6 to a nine-item version in Study 3. Finally, the validated
IBRS-9 had good reliability and functional three-item subscales of social
identification, homophily, and information bias.
In all three studies, both the IBRS-6 and the IBRS-9 were associated with
social media activity and compulsive Internet use. We also found that the
IBRS-6 and the IBRS-9 were associated with self-categorization and responding to group activity in the case of a shared group identity in our vignette experiments. However, the IBRS was associated with group influence in the shared-group condition only in Study 1. This finding is interesting and demands
some reflection. In the vignette experiment, the shared group membership
was primed by a sentence indicating that respondents had been placed in
a group based on their earlier responses to the survey. A stronger group
priming, of course, could have induced a stronger effect. In addition, the
primed group membership was based on placement instead of self-selection.
In the case of IBRS, however, a self-selected group membership might have
been a more relevant factor. These points could also explain why there was
no significant difference in self-categorization between the group condition
and control condition in Study 2 and Study 3 (see the overlapping confidence
intervals in Table 3).
Overall, our findings underline the validity and potential usability of both
the IBRS-6 and the IBRS-9 scales. Our findings are consistent with prior
literature on social media and group identity processes (e.g., Flanagin, 2017;
Mikal et al., 2016; Shin et al., 2017; Turel & Osatuyi, 2017).
On social media, people often make fast decisions in communication and content evaluation, and they tend to rely on their friends' opinions and evaluations (Flanagin, 2017; Hocevar et al., 2014; Metzger et al., 2010).
Thus, online social networks and group memberships have an impact on
which content people trust and encounter on social media (Keipi et al., 2017).
Although this does not necessarily narrow people’s worldview, it may create
antagonism between different online groups (Abisheva et al., 2016). Hence, it
is important to understand differences among identity bubbles. In addition,
following Nobel laureate Daniel Kahneman's (2011) distinction between System 1 and System 2 thinking, we suggest that the characteristics of social media tend to reinforce fast and intuitive System 1 thinking rather than slow and calculative System 2 thinking. System 1 thinking may also lead to various spontaneous perceptual and logical errors.
As the IBRS measures individual experiences of one’s relation to social
media and online networks in general, its generalizability and applicability
are not limited to any particular type of social media platform. From a user’s
point of view, social media covers various kinds of platforms that are
typically not used in isolation. Platforms and their technological affordances
can also be interconnected, which makes it difficult to fully separate different
platforms. Also, preferring or being exposed to one-sided contacts or infor-
mation on some platforms does not necessarily indicate that this is the case
concerning an individual’s social media experiences in general.
The IBRS should be seen as an important addition to the literature concern-
ing social media and identity. Many social media sites and applications have
facilitated both social identification and homophily. People are also increasingly
dependent on information gained online. These aspects underline the general
need to understand processes of identity reinforcement in an online environ-
ment. The IBRS essentially measures the tendency to affiliate with and rely on others on social media at a single point in time in this reinforcement process. People
with high IBRS scores have a stronger sense of social identification and homo-
phily and stronger reliance on information provided by their friends.
Our results also showed that people with a high tendency for online identity
bubbles are active, and even compulsive, social media users. Together, these
results help us understand the phenomenon of identity bubbles in social media.
By tying together three underlying social mechanisms influencing individuals’
attitudes, relationships, and opinion forming, the IBR concept can provide
additional, well founded, and theoretically sound information on how different
social and interactional processes actualize in social media contexts. The con-
cept of an identity bubble might be applied to understanding a variety of
phenomena, ranging from subjective well-being and mental health to studies
on consumer behavior, politics, and social movements. Our study also demon-
strated that the phenomenon has cross-cultural relevance; social media usage
among young people is similar in different cultures and its outcomes function
in a consistent manner across cultures.
Limitations and strengths
Although our results were encouraging, there are some limitations. Because the IBRS measures the identity bubble reinforcement phenomenon at only one point in time, it will be important in the future to assess the temporal
stability of the IBRS in longitudinal settings to explore the dynamic reinfor-
cement process. It might also be important to analyze how self-reported
information gained from the IBRS is associated with naturally occurring
online behavior by using social media data sets. The developed scales are
relatively short because they only consist of six or nine items. However, our
aim here was to develop and validate a short scale because it may perform as
well as longer scales and have practical advantages. The applicability of the
IBRS for different social media studies is a major strength of the scale.
Another major strength is that we have already replicated our results with
three different studies in two countries. Further studies should continue this validation work.

Conclusion

Currently, social processes in social media, and their wider consequences, are
widely discussed. There is a need for social psychological measures examining
how individuals relate to and rely on their social media interactions. Together,
our three studies validate both the IBRS-6 (in Finnish and English) and the IBRS-9 (in English) as measures of social media users' tendency to become involved in online bubbles. The IBRS contributes to the research on online bubbles in at least two important ways: First, unlike earlier studies scrutinizing explicit social network structures in social media, the IBRS is the first measure to emphasize individuals' own perceptions of their relations to social media
networks. Second, the IBRS can be used in survey studies examining the social
media use of certain populations of interest in general instead of being limited to
specific online platforms and discussion topics. Perhaps the most comprehen-
sive understanding of current social mechanisms in social media could be
derived using the IBRS in combination with social network analysis. The IBRS
could be applied to a variety of research fields, including social sciences, psy-
chology, and economics. It is also possible to use subscales of social identifica-
tion, homophily, and information bias separately. Our results, based on
translating the IBRS-6 into English, are encouraging for its use in other lan-
guages as well.
Disclosure statement

No potential conflict of interest was reported by the authors.
Funding

This work was supported by the Finnish Foundation for Alcohol Studies [Problem Gambling and Social Media Project].
References

Abisheva, A., Garcia, D., & Schweitzer, F. (2016, May). When the filter bubble bursts: Collective evaluation dynamics in online communities. In Proceedings of the 8th ACM Conference on Web Science, Hannover, Germany (pp. 307–308). New York, NY: ACM.
Atzmüller, C., & Steiner, P. M. (2010). Experimental vignette studies in survey research.
Methodology,6(3), 128–138. doi:10.1027/1614-2241/a000014
Bahns, A. J., Pickett, K. M., & Crandall, C. S. (2011). Social ecology of similarity: Big schools, small schools and social relationships. Group Processes and Intergroup Relations, 15(1),
Bakshy, E., Messing, S., & Adamic, L. A. (2015). Exposure to ideologically diverse news and
opinion on Facebook. Science,348(6239), 1130–1132. doi:10.1126/science.aaa1160
Baumeister, R. F., & Leary, M. R. (1995). The need to belong: Desire for interpersonal
attachments as a fundamental human motivation. Psychological Bulletin,117(3), 497–529.
Bessi, A., Coletto, M., Davidescu, G. A., Scala, A., Caldarelli, G., & Quattrociocchi, W. (2015).
Science vs conspiracy: Collective narratives in the age of misinformation. PLoS ONE,10(2).
Boutyline, A., & Willer, R. (2017). The social structure of political echo chambers: Variation
in ideological homophily in online networks. Political Psychology,38(3), 551–569.
Boyd, D. M., & Ellison, N. B. (2007). Social network sites: Definition, history, and scholarship.
Journal of Computer-Mediated Communication,13(1), 210–230. doi:10.1111/j.1083-
Bozdag, E., Gao, Q., Houben, G. J., & Warnier, M. (2014). Does offline political segregation
affect the filter bubble? An empirical analysis of information diversity for Dutch and
Turkish Twitter users. Computers in Human Behavior,41, 405–415. doi:10.1016/j.
Buhrmester, M., Kwang, T., & Gosling, S. D. (2011). Amazon’s mechanical turk: A new
source of inexpensive, yet high-quality, data? Perspectives on Psychological Science,6(1),
Burke Jarvis, C., MacKenzie, S. B., & Podsakoff, P. M. (2003). A critical review of construct
indicators and measurement model misspecification in marketing and consumer research.
Journal of Consumer Research,30(2), 199–218. doi:10.1086/376806
Cattell, R. B. (1966). The scree test for the number of factors. Multivariate Behavioral
Chung, J. E. (2013). Social interaction in online support groups: Preference for online social
interaction over offline social interaction. Computers in Human Behavior,29(4),
Del Vicario, M., Bessi, A., Zollo, F., Petroni, F., Scala, A., Caldarelli, G., Stanley, H. E., & Quattrociocchi, W. (2016). The spreading of misinformation online. Proceedings of the National Academy of Sciences, 113(3), 554–559.
Densley, J., & Peterson, J. (2017). Aggression in groups. Current Opinion in Psychology,19,
Dunn, T. J., Baguley, T., & Brunsden, V. (2014). From alpha to omega: A practical solution to
the pervasive problem of internal consistency estimation. British Journal of Psychology,105
(3), 399–412. doi:10.1111/bjop.12046
Eisinga, R., Te Grotenhuis, M., & Pelzer, B. (2013). The reliability of a two-item scale:
Pearson, Cronbach, or Spearman-Brown? International Journal of Public Health,58(4),
Flanagin, A. J. (2017). Online social influence and the convergence of mass and interpersonal
communication. Human Communication Research,43, 450–463. doi:10.1111/hcre.12116
Flanagin, A. J., Hocevar, K. P., & Samahito, S. N. (2014). Connecting with the user-generated
Web: How group identification impacts online information sharing and evaluation.
Information, Communication and Society,17(6), 683–694. doi:10.1080/
Gearhart, S., & Zhang, W. (2015). “Was it something I said?”“No, it was something you
posted!”A study of the spiral of silence theory in social media contexts. Cyberpsychology,
Behavior, and Social Networking,18(4), 208–213. doi:10.1089/cyber.2014.0443
Guo, T.-C., & Li, X. (2016). Positive relationship between individuality and social identity in
virtual communities: Self-categorization and social identification as distinct forms of social
identity. Cyberpsychology, Behavior, and Social Networking,19(11), 680–685. doi:10.1089/
Hartl, A. C., Laursen, B., & Cillessen, A. H. (2015). A survival analysis of adolescent friend-
ships: The downside of dissimilarity. Psychological Science,26(8), 1304–1315. doi:10.1177/
Himelboim, I., Smith, M., & Shneiderman, B. (2013). Tweeting apart: Applying networks
analysis to explore selective exposure on Twitter. Communication Methods and Measures,7
(3), 169–197. doi:10.1080/19312458.2013.813922
Himelboim, I., Sweetser, K. D., Tinkham, S. F., Cameron, K., Danelo, M., & West, K. (2016).
Valence-based homophily on Twitter: Network analysis of emotions and political talk in
the 2012 presidential election. New Media and Society,18(7), 1382–1400. doi:10.1177/
Hocevar, K. P., Flanagin, A. J., & Metzger, M. J. (2014). Social media self-efficacy and
information evaluation online. Computers in Human Behavior,39, 254–262. doi:10.1016/
Hogg, M. A., & Rinella, M. J. (2018). Social identities and shared realities. Current Opinion in
Holtz, R., & Miller, N. (1985). Assumed similarity and opinion certainty. Journal of Personality and Social Psychology, 48(4), 890–898.
Hu, L., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis:
Conventional criteria versus new alternatives. Structural Equation Modeling,6,1–55.
Imhoff, R., Dotsch, R., Bianchi, M., Banse, R., & Wigboldus, D. H. J. (2011). Facing Europe:
Visualizing spontaneous in-group projection. Psychological Science,22(12), 1583–1590.
Jans, L., Leach, C. W., Garcia, R. L., & Postmes, T. (2015). The development of group
influence on in-group identification: A multilevel approach. Group Processes &
Intergroup Relations,18(2), 190–209. doi:10.1177/1368430214540757
Jones, P. E. (2004). False consensus in social context: Differential projection and perceived
social distance. British Journal of Social Psychology,43, 417–429. doi:10.1348/
Jost, J. T., Barberá, P., Bonneau, R., Langer, M., Metzger, M., Nagler, J., …Tucker, J. A.
(2018). How social media facilitates political protest: Information, motivation, and social
networks. Advances in Political Psychology,39(Suppl. 1), 85–118. doi:10.1111/pops.12478
Kahneman, D. (2011). Thinking, fast and slow. New York, NY: Farrar, Straus & Giroux.
Kang, J. H., & Chung, D. Y. (2017). Homophily in anonymous online community:
Sociodemographic versus personality traits. Cyberpsychology, Behavior, and Social
Kaplan, A. M., & Haenlein, M. (2010). Users of the world, unite! The challenges and
opportunities of Social Media. Business Horizons,53(1), 59–68. doi:10.1016/j.
Keipi, T., Näsi, M., Oksanen, A., & Räsänen, P. (2017). Online hate and harmful content:
Cross-national perspectives. New York, NY: Routledge.
Kuru, O., Pasek, J., & Traugott, M. W. (2017). Motivated reasoning in the perceived cred-
ibility of public opinion polls. Public Opinion Quarterly,81(2), 422–446. doi:10.1093/poq/
Leach, C. W., Ellemers, N., & Barreto, M. (2007). Group virtue: The importance of morality (vs. competence and sociability) in the positive evaluation of in-groups. Journal of Personality and Social Psychology, 93(2), 234–249.
Leach, C. W., van Zomeren, M., Zebel, S., Vliek, M. L. W., & Ouwerkerk, J. W. (2008).
Group-level self-definition and self-investment: A hierarchical (multicomponent) model of
in-group identification. Journal of Personality and Social Psychology,95(1), 144–165.
Lehdonvirta, V., & Räsänen, P. (2011). How do young people identify with online and offline
peer groups? A comparison between UK, Spain and Japan. Journal of Youth Studies,14(1),
Little, T. D., Slegers, D. W., & Card, N. A. (2007). A nonarbitrary method of identifying and
scaling latent variables in SEM and MACS models. Structural Equation Modeling,13(1),
Liu, Y., Rui, J. R., & Cui, X. (2017). Are people willing to share their political opinions on
Facebook? Exploring roles of self-presentational concern in spiral of silence. Computers in
Human Behavior,76, 294–302. doi:10.1016/j.chb.2017.07.029
McCroskey, J. C., Richmond, V. P., & Daly, J. A. (1975). The development of a measure of
perceived homophily in interpersonal communication. Human Communication Research,1
(4), 323–332. doi:10.1111/hcre.1975.1.issue-4
McCroskey, L. L., McCroskey, J. C., & Richmond, V. P. (2006). Analysis and improvement of
the measurement of interpersonal attraction and homophily. Communication Quarterly,54
(1), 1–31. doi:10.1080/01463370500270322
McPherson, M., Smith-Lovin, L., & Cook, J. M. (2001). Birds of a feather: Homophily in
social networks. Annual Review of Sociology,27(1), 415–444. doi:10.1146/annurev.
Meerkerk, G. J., Van Den Eijnden, R. J., Vermulst, A. A., & Garretsen, H. F. (2009). The
compulsive Internet use scale (CIUS): Some psychometric properties. Cyberpsychology and
Messing, S., & Westwood, S. J. (2014). Selective exposure in the age of social media:
Endorsements trump partisan source affiliation when selecting news online.
Communication Research, 41(8), 1042–1063. doi:10.1177/0093650212466406
Metzger, M. J., Flanagin, A. J., & Medders, R. B. (2010). Social and heuristic approaches to
credibility evaluation online. Journal of Communication, 60, 413–439. doi:10.1111/j.1460-
Mikal, J. P., Rice, R. E., Kent, R. G., & Uchino, B. N. (2016). 100 million strong: A case study
of group identification and deindividuation on Imgur.com. New Media and Society, 18(11).
Mummendey, A., & Wenzel, M. (1999). Social discrimination and tolerance in intergroup
relations: Reactions to inter-group difference. Personality and Social Psychology Review, 3.
Näsi, M., Räsänen, P., Oksanen, A., Hawdon, J., Keipi, T., & Holkeri, E. (2014). Association
between online harassment and exposure to harmful online content: A cross-national
comparison between the United States and Finland. Computers in Human Behavior, 41.
Nguyen, T. T., Hui, P. M., Harper, F. M., Terveen, L., & Konstan, J. A. (2014). Exploring the
filter bubble: The effect of using recommender systems on content diversity. In Proceedings
of the 23rd International Conference on World Wide Web (pp. 677–686). New York, NY:
ACM.
MEDIA PSYCHOLOGY 25
Oksanen, A., Hawdon, J., & Räsänen, P. (2014). Glamorizing rampage online: School shoot-
ing fan communities on YouTube. Technology in Society, 39, 55–67. doi:10.1016/j.
Paolacci, G., & Chandler, J. (2014). Inside the Turk: Understanding Mechanical Turk as
a participant pool. Current Directions in Psychological Science, 23(3), 184–188.
Pariser, E. (2011). The filter bubble: What the Internet is hiding from you. London, England:
Penguin Books.
Postmes, T., Haslam, S. A., & Jans, L. (2013). A single-item measure of social identification:
Reliability, validity, and utility. British Journal of Social Psychology,52(4), 597–617.
Putnick, D. L., & Bornstein, M. H. (2016). Measurement invariance conventions and
reporting: The state of the art and future directions for psychological research.
Developmental Review, 41, 71–90. doi:10.1016/j.dr.2016.06.004
Robinson, D. T., & Aikens, L. (2010). Homophily. In J. M. Levine & M. A. Hogg (Eds.),
Encyclopedia of group processes and intergroup relations (pp. 669–672). Thousand Oaks,
CA: Sage. doi:10.4135/9781412972017.n121
Rogers, T., Moore, D. A., & Norton, M. I. (2017). The belief in a favorable future.
Psychological Science, 28(9), 1290–1301. doi:10.1177/0956797617706706
Ross, L., Greene, D., & House, P. (1977). The “false consensus effect”: An egocentric bias in
social perception and attribution processes. Journal of Experimental Social Psychology, 13.
Sallinen-Kuparinen, A., McCroskey, J. C., & Richmond, V. P. (1991). Willingness to com-
municate, communication apprehension, introversion, and self-reported communication
competence: Finnish and American comparisons. Communication Research Reports, 8(1).
Sherif, M., Harvey, O. J., White, B. J., Hood, W. R., & Sherif, C. W. (1961). Intergroup conflict
and cooperation: The robbers cave experiment. Norman, OK: University Book Exchange.
Shin, S. Y., Van der Heide, B., Beyea, D., Dai, Y., & Prchal, B. (2017). Investigating
moderating roles of goals, reviewer similarity, and self-disclosure on the effect of argument
quality of online consumer reviews on attitude formation. Computers in Human Behavior,
76, 218–226. doi:10.1016/j.chb.2017.07.024
Tajfel, H., Billig, M. G., Bundy, R. P., & Flament, C. (1971). Social categorization and
intergroup behavior. European Journal of Social Psychology, 1(2), 149–178. doi:10.1002/
Tajfel, H., & Turner, J. C. (1979). An integrative theory of intergroup conflict. In
W. G. Austin & S. Worchel (Eds.), The social psychology of intergroup relations (pp.
33–47). Monterey, CA: Brooks Cole.
Turel, O. (2015). An empirical examination of the “vicious cycle” of Facebook addiction.
Journal of Computer Information Systems, 55(3), 83–91. doi:10.1080/
Turel, O., & Osatuyi, B. (2017). A peer-influence perspective on compulsive social networking
site use: Trait mindfulness as a double-edged sword. Computers in Human Behavior, 77.
Turner, J. C. (1985). Social categorization and the self-concept: A social cognitive theory of
group behavior. In E. J. Lawler (Ed.), Advances in group processes: Theory and research
(Vol. 2, pp. 77–122). Greenwich, CT: JAI.
Turner, J. C., Oakes, P. J., Haslam, S. A., & McGarty, C. (1994). Self and collective: Cognition
and social context. Personality and Social Psychology Bulletin, 20, 454–463. doi:10.1177/
Turner, J. C., & Reynolds, K. J. (2010). The story of social identity. In T. Postmes &
N. R. Branscombe (Eds.), Rediscovering social identity: Key readings (pp. 13‒32).
New York, NY: Psychology Press.
Watkins, M. W. (2017). The reliability of multidimensional neuropsychological measures:
From alpha to omega. The Clinical Neuropsychologist, 31(6–7), 1113–1126. doi:10.1080/
Westen, D., Blagov, P. S., Harenski, K., Kilts, C., & Hamann, S. (2006). Neural bases of
motivated reasoning: An fMRI study of emotional constraints on partisan political judg-
ment in the 2004 U.S. presidential election. Journal of Cognitive Neuroscience, 18(11).
Wilson, R. E., Gosling, S. D., & Graham, L. T. (2012). A review of Facebook research in the
social sciences. Perspectives on Psychological Science, 7(3), 203–220. doi:10.1177/
Zerback, T., & Fawzi, N. (2017). Can online exemplars trigger a spiral of silence? Examining
the effects of exemplar opinions on perceptions of public opinion and speaking out. New
Media and Society, 19(7), 1034–1051. doi:10.1177/1461444815625942
Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480–498.
Zollo, F., Bessi, A., Del Vicario, M., Scala, A., Caldarelli, G., Shekhtman, L., …
Quattrociocchi, W. (2017). Debunking in a world of tribes. PLoS ONE, 12(7), e0181821.
Zuiderveen Borgesius, F. J., Trilling, D., Moeller, J., Bodó, B., De Vreese, C. H., &
Helberger, N. (2016). Should we worry about filter bubbles? Internet Policy Review, 5(1).