Characterizing the Demographics Behind the #BlackLivesMatter Movement
Idiap and EPFL
The debates on minority issues are often dominated by or held among the concerned minorities: gender equality debates have often failed to engage men, while those about race fail to engage the dominant group. To test this observation, we study the #BlackLivesMatter movement and hashtag on Twitter, which emerged and gained traction after a series of events typically involving the deaths of African-Americans as a result of police brutality. We aim to quantify the population biases across user types (individuals vs. organizations) and, for individuals, across three demographic factors (race, gender and age). Our results suggest that more African-Americans engage with the hashtag, and that they are also more active than other demographic groups. We also discuss ethical caveats with broader implications for studies on sensitive topics (e.g. mental health or religion) that focus on users.
While the growing number of discussions about minority¹ issues, including gender (O'Brien and Kelly 2013), income (Moodie-Mills 2015), or race (Lashinsky 2015), is good news, empirical evidence suggests that they are held mainly among the discriminated group: women dominate the debate on gender (Royles 2014), while African-Americans dominate the one on race (Pettit 2006). Although social media has led to a paradigm shift for advocacy by increasing the effectiveness, speed, and outreach of social campaigns, many campaigns still fail to reach far beyond the communities for which they advocate.
In this paper, we explore this observation in the context of the #BlackLivesMatter movement² on Twitter. We want to gain insights into the level of involvement across user demographics. What can be said about the demographic composition of the communities engaged in the discussions? Does the discriminated group dominate the debate? Ultimately, engaging diverse stakeholder groups is beneficial for a social campaign's success (Ward 2013), and knowing the extent to which such groups contribute to the debate helps in learning how to tailor the message to appeal to them.
Copyright © 2016, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
¹Throughout the paper, by minority we refer to a group that is subordinate to a more dominant group in society.
#BlackLivesMatter is a movement (and a hashtag) created after the killing of Trayvon Martin in 2012, with over 1,000 demonstrations held since then.³ The hashtag has been used during a number of events involving disproportionate police violence against African-Americans, as well as the disproportionate reaction of mainstream media when terror attacks occur in Western countries compared to when they occur in African countries (Zuckerman 2015).
Contributions. Our main contribution is a demographic characterization of users involved in the #BlackLivesMatter movement on Twitter. Our findings suggest that African-Americans are both more numerous and more active than other demographic groups. Young women are more likely to actively engage in the debate than men, yet the proportions of white and African-American women are similar. Looking at male users, we see a slightly different pattern: young adults still dominate the discussions, but they are largely African-American. Contrasting individuals with organizations, which account for ∼5% of profiles, we see that organizations tweet at about three times the rate of individuals.
To run this study, we also created a collection of about 6,000 Twitter users annotated with demographic information such as race, age, and gender. In contrast with previous work that reports demographic information by automatically predicting demographic factors for each user based, e.g., on their profile picture or name (Minkus, Liu, and Ross 2015; Zagheni et al. 2014; Bakhshi, Shamma, and Gilbert 2014; Mislove et al. 2011), we crowdsourced these annotations. Although more expensive, we did so to work around known pitfalls of automated user classification such as low recall (Minkus, Liu, and Ross 2015) and classification errors (Yadav et al. 2014).
Limitations and Ethical Challenges. We note that such an endeavor is not without caveats. First, there are intrinsic issues with hashtag-based analyses, and with the reliance on a single media platform and public APIs (Tufekci 2014; Boyd and Crawford 2012): the hashtag we focus on does not cover all the discussions and contributions around the core issue. The movement and hashtag use are recent, so we cannot capture the long-term evolution of the demographics behind the core debate.

Second, there are important ethical challenges (Boyd and Crawford 2012): although publicly available, user profile data is inherently sensitive, as users might not anticipate a particular use of their data, especially when it was created in a sensitive context of space and time. This becomes even more delicate when explicitly analyzing their demographic attributes. We discuss these challenges as we detail our methods and their implications.

Table 1: Basic statistics of our dataset.

Movement           Tweets  Users  Start Day       End Day
#BlackLivesMatter  3.54M   0.88M  April 11, 2012  May 10, 2015
Data Collection and Annotation
The Movement On Twitter. The #BlackLivesMatter hashtag (whose usage over time is shown in Figure 1) was first used on Twitter in April 2012 in relation to the killing of Trayvon Martin (Graeff, Stempeck, and Zuckerman 2014). Yet, it grew into a movement only after the acquittal of George Zimmerman (the man who fatally shot Martin) in July 2013,⁴ and got consistent traction after the killing of Michael Brown and the Ferguson unrest.⁵ The movement gained momentum after the killing of Tamir Rice,⁶ a 12-year-old schoolboy, and the decision of a grand jury not to indict the officer that put Eric Garner in a chokehold.⁷ Since then, the movement has periodically regained public attention after events involving police brutality, including the deaths of Walter Scott⁸ and Freddie Gray.⁹
Collecting Tweets. To collect tweets published from the day before the first use of the hashtag¹⁰ until May 10, 2015, we crawled Topsy¹¹ in April-May 2015 (dataset figures are given in Table 1 and Figure 1). To maximize the coverage of our collection, we repeated the crawling with various time window sizes until its volume converged.
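The re-crawling strategy above can be sketched as a simple loop: repeat passes over the collection period with successively smaller windows until no new tweets appear. This is a minimal sketch under assumptions; `fetch_window` is a hypothetical stand-in for a paginated search call (the Topsy API is not specified in the text).

```python
# Sketch of the convergence heuristic: re-crawl the same period with
# shrinking time windows until the total volume stops growing.
from datetime import datetime, timedelta

def fetch_window(start, end):
    """Hypothetical API call: return the set of tweet ids in [start, end)."""
    return set()  # placeholder for a real search-API request

def crawl_until_converged(start, end, window_days=30, min_window_days=1):
    collected = set()
    days = window_days
    while True:
        before = len(collected)
        cursor = start
        while cursor < end:             # one full pass over the period
            nxt = min(cursor + timedelta(days=days), end)
            collected |= fetch_window(cursor, nxt)
            cursor = nxt
        if len(collected) == before or days == min_window_days:
            break                       # volume converged (or windows minimal)
        days = max(days // 2, min_window_days)
    return collected
```

With a real `fetch_window`, smaller windows work around per-query result caps, which is the usual reason repeated crawls recover additional tweets.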
User Data and Annotation. User data (public profile data and crowdsourced annotations) were collected in June 2015. User profiles were annotated according to the entity behind the Twitter account via the crowdsourcing platform CrowdFlower.¹² We asked crowdworkers to categorize users as individuals, governmental agencies, NGOs, media, or others; and then to categorize the individuals according to three perceived demographic attributes: race, age and gender. Crowdworkers were shown automatically generated screenshots of the upper part of users' public profiles, including the banner picture, the profile picture, the name and profile description, and the last one or two tweets. The screenshots were provided via short-lived URLs in order to limit access to user profile information and minimize the risk of privacy violations.
⁴http://en.wikipedia.org/wiki/Black_Lives_Matter
⁵http://en.wikipedia.org/wiki/Shooting_of_Michael_Brown
⁶http://en.wikipedia.org/wiki/Shooting_of_Tamir_Rice
⁷http://en.wikipedia.org/wiki/Death_of_Eric_Garner
⁸http://en.wikipedia.org/wiki/Shooting_of_Walter_Scott
⁹http://en.wikipedia.org/wiki/Death_of_Freddie_Gray
¹⁰First tweet containing the term, obtained via http://ctrlq.org/first/
We annotated about 6,000 users from 6 random samples with various characteristics (e.g. from all users, from highly active ones, from users tweeting about the topic even when media attention fades away). We showed crowdworkers 5-6 user profiles at a time, out of which one profile had been labeled by one of the authors (gold standard) and was used to control the quality of the annotations. Given that we collect perceived attributes, some of which might be subjective, the profiles picked as gold standards were selected to be obvious cases for each of the categories. For all annotation jobs, we collected at least 3 independent annotations for each profile and categorization criterion, and kept the majority label. About 100 crowdworkers participated in each task. Full annotation instructions are included in our data release.
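The aggregation step above (at least 3 independent annotations per profile, keeping the majority label) can be sketched minimally as follows; the example labels are hypothetical, and how ties were resolved is not stated in the text, so the sketch simply leaves them unresolved.

```python
# Minimal sketch of majority-label aggregation over crowdworker annotations.
from collections import Counter

def majority_label(annotations):
    """Return the most frequent label, or None on a tie for first place."""
    counts = Counter(annotations).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None  # tie: left unresolved rather than guessed
    return counts[0][0]

# Hypothetical example: three workers judging one profile's perceived race.
label = majority_label(["African-American", "African-American", "White"])
```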
The distribution of users according to their number of tweets¹³ is long-tailed (Figure 2): most users post only a few tweets on the topic (e.g. ∼62% of users have only one tweet in the collection), while only a few users post on the order of thousands of tweets (only 3 users have more than 10K tweets). This indicates that many users participate in the debate only incidentally. For our analysis, we split users according to their level of activity into 3 categories: a) non-active users (769,231 users with fewer than 5 tweets); b) moderately active users (96,905 users with 5 to 25 tweets); and c) highly active users (14,033 users with more than 25 tweets). We make this categorization as we conjecture that activity w.r.t. a topic is a proxy for a user's interest in the topic and her level of involvement, and we are interested in the interplay between activity level and user demographics.
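The three-way split above maps directly to a small binning function; the thresholds come from the text (fewer than 5, 5 to 25, more than 25 tweets), while the per-user counts in the example are hypothetical.

```python
# Sketch of the activity-level categorization described above.
def activity_level(n_tweets):
    """Bin a user's tweet count into one of the three activity categories."""
    if n_tweets < 5:
        return "non-active"
    elif n_tweets <= 25:
        return "moderately active"
    return "highly active"

# tweets_per_user maps user id -> number of #BlackLivesMatter tweets
# (hypothetical counts for illustration).
tweets_per_user = {"u1": 1, "u2": 12, "u3": 300}
levels = {u: activity_level(n) for u, n in tweets_per_user.items()}
```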
Further, by briefly exploring the triggers behind the peaks of attention received by the movement,¹⁴ we find that most of them are generated by events involving the killing of African-Americans by police in the US (when the debate focuses on the discrimination against African-Americans); see Figure 1. In addition, the attention peaks for a topic may be indicative of the topic entering and exiting the public debate: when the topic is in the spotlight, a larger community tends to get involved in the debate, yet, as the topic fades away, only the concerned community might care. To this end, we define a peak window (or spotlight interval) as a 4-day interval including the day of the peak, the day before the peak, and the two days after the peak. Using this definition, we found 611,871 users tweeting at peak times, as compared to less than half that number (268,298 users) being active before the topic “enters” or after it “exits” the public debate.
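The peak-window definition above (one day before the peak through two days after it) and the resulting user split can be sketched as follows; the data structures are hypothetical, assuming peak days have already been detected.

```python
# Sketch of the 4-day spotlight interval and the peak / non-peak user split.
from datetime import date, timedelta

def peak_window(peak_day):
    """Days in the spotlight interval: day before, peak day, two days after."""
    return {peak_day + timedelta(days=d) for d in range(-1, 3)}

def split_users(user_days, peak_days):
    """Partition users by whether they tweeted inside any peak window.
    user_days: {user_id: set of days on which the user tweeted}."""
    spotlight = set()
    for p in peak_days:
        spotlight |= peak_window(p)
    in_peak = {u for u, days in user_days.items() if days & spotlight}
    return in_peak, set(user_days) - in_peak
```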
To study the demographic composition of users involved in the debate, we extracted 6 random samples¹⁵: 2,000 users sampled from all users in our dataset, and 5 samples of 1,000 users each from: users tweeting during peak times, users tweeting outside peak times, highly active users, moderately active users, and non-active users. The samples were labeled in two rounds: the first annotation task aimed to distill the accounts of individuals from those of organizations, while the second task was designed to categorize the accounts of individuals along 3 demographic criteria: race, gender and age.

¹³For simplicity, by tweet(ing) we refer both to the creation of an original tweet and to passing on content, i.e. re-tweeting.
¹⁴To detect peaks we used a readily available implementation:
¹⁵Due to technical limitations related to how the screenshots were displayed (resulting in profiles not being shown correctly for annotation), we were able to label only 5,976 users.

Figure 1: The distribution of the volume of tweets for #BlackLivesMatter per day over time.

Figure 2: The distribution of the number of tweets per user.

Table 2: Accounts of organizations vs. of individuals across samples. Asterisks indicate statistically significant differences w.r.t. the distribution of all users at p<0.01 (**) and p<0.05 (*).

        All Users  Peak   Non-Peak  High Activ.  Mod. Activ.  Low Activ.
Org.    5.0%       4.6%   4.9%      11.1%        5.5%         4.2%
Indiv.  95.0%      95.4%  95.1%     88.9%        94.5%        95.8%
        ** * **
Accounts of Organizations. Looking at the fraction of organization accounts w.r.t. those of individuals (Table 2), we notice that the sample drawn from highly active users contains twice as many organization accounts as the other samples. The fraction of organization accounts seems typically higher among more active users: e.g. there are more organization accounts among moderately active users than among non-active users. This is largely explained by a higher fraction of accounts associated with NGOs (7.4%, 3.6% and 1% for highly active, moderately active and non-active users, and 2.2% across all users) and media organizations; the latter, however, attain their highest fraction among moderately active users (a possible artifact of the fact that media organizations tweet about many topics, while NGOs are typically focused on a handful of causes). Finally, accounts associated with governmental agencies account for less than half a percent in all samples.
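The significance asterisks compare each sample's distribution to that of all users, but the paper does not name the test used. One plausible sketch, assuming a two-proportion z-test with a normal approximation (the counts below are hypothetical, chosen only to mirror the reported fractions):

```python
# Two-sided two-proportion z-test: is the proportion in a sample
# significantly different from the proportion among all users?
from math import sqrt, erf

def two_proportion_ztest(k1, n1, k2, n2):
    """H0: p1 == p2. Returns (z statistic, two-sided p-value)."""
    p1, p2 = k1 / n1, k2 / n2
    p = (k1 + k2) / (n1 + n2)                       # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))      # pooled standard error
    z = (p1 - p2) / se
    # Phi(x) = 0.5 * (1 + erf(x / sqrt(2))); two-sided tail probability:
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: 111 of 1,000 highly active profiles are organizations
# vs. 100 of 2,000 profiles in the all-users sample.
z, p = two_proportion_ztest(111, 1000, 100, 2000)
```

A chi-square test on the full category table would be an equally reasonable choice; the z-test is shown because the table compares one proportion per cell.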
Figure 3: Race, age, and gender distribution across samples: (a) distribution of users' age per sample; (b) distribution of users' race per sample; (c) distribution of users' gender per sample. Asterisks indicate statistically significant differences w.r.t. the distribution of all users at p<0.01 (**) and p<0.05 (*). (Best seen in color.)

User Demographics. For individuals, we looked at the distribution of demographic factors. (Age) Figure 3(a) shows that the fraction of young adults is lower among highly active users, while the fraction of adults between 30 and 64 years old is lowest outside peak times; these users engage with the hashtag more actively during peak times, when the topic is in the public spotlight. (Race) Figure 3(b) shows the user distribution across racial groups and samples. We notice that the fraction of African-Americans is highest within the sample of highly active users, and smallest among the non-active users or during peak times. (Gender) Finally, in Figure 3(c) we see that the user distribution according to gender is relatively stable across samples.
Next, we looked at the distribution of users across age and race per gender; see Figure 4.¹⁶ We notice that the most active users are white and African-American adults between 18 and 64 years old. However, while for male African-American users the fraction of young adults (18 to 29 years old) is higher, for white users it is lower. Inspecting the differences between genders (Figure 4(b)), we see that women younger than 29 years old are more active than men in the same age category, while among users older than 30, men tend to tweet more about the movement.

Figure 4: Race and age distribution for female vs. male users (best seen in color). (a) Distribution of male users as a function of age and race; all cells sum to 100%. (b) Male (M) to female (F) ratio; red (resp. blue) indicates a higher fraction of female (resp. male) users w.r.t. the overall distribution (∼0.78, marked by white in the colorbar). The percentages indicate the overall distribution of users:

     <17 years  18-29 years  30-64 years  >65 years
     0.7%       32.5%        14.4%        0.1%
     1.2%       26.4%        19.2%        0.1%
     0.4%       3.2%         0.5%         0.0%
     0.1%       0.7%         0.5%         0.0%
User Involvement. Finally, we checked whether users belonging to specific demographic groups tend to be more vocal, or, in other words, whether they generate more content on average. First, we find that organizations are more active than individuals (7:2). Then, depending on the demographic criterion, we see that: (a) African-Americans are the most active, followed closely by white users; (b) women are more active than men (3.8:2.6); and (c) adults between 30 and 64 years old are the most active, followed by young adults (3.9:2.6:2).
We started this study after one of the related events, the shooting of Walter Scott, and based on empirical evidence we hypothesized that the debate would be held largely among African-Americans. While our findings support this premise, with African-Americans being the largest group (up to 60% among highly active users), overall, whites make up about 40% of individuals and Asians 4%. Future work naturally includes an analysis of demographic factors across various movements related to minority-group issues in order to validate and broaden the observations we make here.
Parting Thoughts on Ethics. Although important, studies investigating social media to understand public opinion and the various narratives on minority issues across stakeholders are scant, but growing. One reason lies in the limits of collecting and annotating users accurately and at scale (either manually or automatically). Yet, as we learn to work around these limits, we also need to develop protocols to mindfully study such user collections while protecting the users.

¹⁶This is based on users annotated along all demographic factors, as only some factors may be perceptible based on user profile information.
Data Release. The list of tweet ids is available for research purposes at http://crisislex.org/. The list of annotated users is available upon signing an agreement not to use it to study users in isolation or to single them out based on their demographic attributes.
Acknowledgements. We thank Carlos Castillo for feedback
on an early draft. A.O. was partially supported by the grant
Sinergia (SNF 147609).
References

Bakhshi, S.; Shamma, D. A.; and Gilbert, E. 2014. Faces engage us: Photos with faces attract more likes and comments on Instagram. In CHI.
Boyd, D., and Crawford, K. 2012. Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon. Information, Communication and Society.
Graeff, E.; Stempeck, M.; and Zuckerman, E. 2014. The battle for 'Trayvon Martin': Mapping a media controversy online and offline. First Monday.
Lashinsky, A. 2015. Seven signs you are clueless about income
Minkus, T.; Liu, K.; and Ross, K. W. 2015. Children seen but not heard: When parents compromise children's online privacy. In
Mislove, A.; Lehmann, S.; Ahn, Y.-Y.; Onnela, J.-P.; and Rosenquist, J. N. 2011. Understanding the demographics of Twitter users.
Moodie-Mills, D. 2015. Black lives matter: A tale of two
O'Brien, S., and Kelly, T. 2013. Gender equality won't happen unless men speak up.
Pettit, J. 2006. Can we talk about race? A few rules of engagement. http://articles.baltimoresun.com/2006-08-01/news/0608010135 1 racial-inequality-political-change-
Royles, D. 2014. What's missing from the debate about women leaders in the NHS? Men. http://www.theguardian.com/healthcare-
Tufekci, Z. 2014. Big questions for social media big data: Representativeness, validity and other methodological pitfalls. In
Ward, J. A. 2013. The next dimension in public relations campaigns: A case study of the It Gets Better project. Public Relations
Yadav, D.; Singh, R.; Vatsa, M.; and Noore, A. 2014. Recognizing age-separated face images: Humans and machines. PLoS ONE.
Zagheni, E.; Garimella, V. R. K.; Weber, I.; et al. 2014. Inferring international and internal migration patterns from Twitter data. In WWW Companion.
Zuckerman, E. 2015. Paying attention to Garissa.