Cameras Everywhere Revisited: How Digital
Technologies and Social Media Aid and
Inhibit Human Rights Documentation and
Advocacy
Sam Gregory*
Abstract
Pessimism currently prevails around human rights globally, as well as about the im-
pact of digital technology and social media in supporting rights. However, there have
been key successes in the use of these tools for documentation and advocacy in the
past decade, including greater participation, more documentation, and growth of new
fields around citizen evidence and fact-finding. Governments and others antagonistic
to human rights have caught up in terms of weaponizing the affordances of the inter-
net and pushing back on rights actors. Key challenges to be grappled with are consis-
tent with ones that have existed for a decade but are exacerbated now—how to pro-
tect and enhance safety of vulnerable people and provide agency over visibility and
anonymity; how to ensure and improve trust and credibility of human rights docu-
mentation and advocacy campaigning; and how to identify and use new strategies
that optimize for a climate of high media volume, declining trust in traditional sources,
and active strategies of distraction and misinformation. All of these activities take
place primarily within a set of platforms that are governed by commercial imperatives
and attention-based algorithms, and that increasingly use unaccountable content
moderation processes driven by artificial intelligence. The article argues for a prag-
matic approach to harm reduction within the platforms and tools that are used by a di-
verse range of human rights defenders, and for a proactive engagement on ensuring
that an inclusive human rights perspective is centred in responses to new challenges
at a global level within a multipolar world as well as specific areas of challenge and
opportunity such as fake news and authenticity, deepfakes, use of artificial intelligence
to find and make sense of information, virtual reality, and how we ensure effective sol-
idarity activism. Solutions and usages in these areas must avoid causing inadvertent
as well as deliberate harms to already marginalized people.
Keywords: artificial intelligence (AI); deepfakes; Facebook; fake news; social media; trust
* The author is Program Director of WITNESS (https://www.witness.org), which supports anyone, anywhere using video and technology to fight for human rights; from 2010 to 2018 he taught at the
Harvard Kennedy School.
© The Author(s) 2019. Published by Oxford University Press. All rights reserved.
For permissions, please email: journals.permissions@oup.com
Journal of Human Rights Practice, 2019, 1–20
doi: 10.1093/jhuman/huz022
The violence is not new, it’s the cameras that are new.
—Ta-Nehisi Coates (Goodman 2015)
Over the past decade the rapid expansion of mobile and broadband connectivity, online so-
cial networks, and visual image technology has created a set of opportunities for individuals
and communities who work to advance justice and accountability around the world. In
today’s information ecosystem, these digital tools have demonstrated the capacity to sup-
port increased civic engagement and participation—particularly for marginalized and vul-
nerable groups—enabling activists and ordinary people to document abuse, speak truth to
power, make their voices heard, and protect and defend their rights. Yet there is now a
backlash against the role and value of digital communications and social media for human
rights. In this article I explore the value of the explosion of participatory communication
and identify where we can make the case for optimism, and where we must push back
against very real reasons for pessimism.
We have seen significant advances over the past decade in terms of the ability of ordi-
nary people, people’s movements, and NGOs to document police violence, expose war
crimes at scale, track environmental pollution, and mobilize sentiment and action around
shared causes and local human rights priorities. These include advances in multiple areas of
human rights practice: from initial documentation of violations, to verification and cura-
tion of this information, to its dissemination and sharing in raw and processed formats for
news, advocacy and mobilizing and organizing within movements, to archiving and preser-
vation for usage as evidence as well as retrospective reconstruction of events. In this article
I primarily focus on the role of digital technologies and social media in documentation, ver-
ification, dissemination and deployment of evidence of rights abuses.
An increased technology-enabled ability to create and share grassroots documentation
of abuses has diversified the human rights field by enabling a greater range and diversity of
voices and issues to be publicly visible and a greater participation in human rights fact-
finding (see Land 2016). By reducing the need to convince human rights gatekeepers that a
topic or pattern of violations is worthy of focus, this has de-colonized control of who gets
to speak and determine what matters. Within my own work at WITNESS I have seen re-
peatedly the desire of traditional and non-traditional human rights movements to use the
tools in their and their communities’ hands for positive purposes—for example, in docu-
menting military and police impunity towards favela residents in Rio de Janeiro (Shaer
2015). The evolution of open-source fact-finding approaches, often based around this
grassroots documentation, has permitted extensive real-time documentation of atrocities in
Syria via ongoing projects like Syria Tracker (https://www.humanitariantracker.org/syria-
tracker) as well as newsgathering by groups like Storyful. There has been the concomitant
growth of a field of improved citizen evidence-gathering and archiving that is fuelling inves-
tigations (Rajagopalan 2018). In the case of Al-Werfalli, a militia commander in Libya, for
the first time an arrest warrant by the International Criminal Court was issued based
largely on evidence found on social media (Irving 2017). Amnesty International has used
social media footage to identify widespread extrajudicial killings in Nigeria and the combi-
nation of geospatial imagery and shaky citizen footage to confirm massacres in Burundi
(Koettl 2017). An increasing number of human rights groups now have the skills and capac-
ities to do this type of work.
Local activists have also skipped around traditional media gatekeepers to present alter-
native narratives and grassroots advocacy to the world, such as the courageous Raed Fares
and the Kafranbel media collective in Syria with their satirical photos, social media posts
and videos (Stack 2014), while creative remix has characterized many distant witnesses’
solidarity participation in popular movements (Gregory 2016). At the intersection of grass-
roots documentation and movement organizing, the relentless succession of citizen-shot
videos of the killings of unarmed African-American men and boys cast a spotlight on pervasive police misconduct in the USA, and the shaping of narrative around these accounts by activists and media has contributed to the mobilization of the Movement for Black Lives
(Freelon et al. 2016).
In the space of organizing and movement-building, the Bersih protests in Malaysia chal-
lenged the long-time ruling party using Facebook and other tools to link ‘individuals from
different social groups, bridging the diverse public in interconnected conversations’ (Lim
2017: 225). Hashtags (and memes) have helped bind together conversations globally and
locally on human rights issues including #MeToo and #Kony2012 (Xiao Mina 2019). Gezi
Park, Tahrir Square and Occupy provide examples of new forms of temporally-bound rapid
horizontal organizing and mobilizing (as articulated by Zeynep Tufekci (2017) and others),
and of the ability of social media and messaging to—at times—provide connective power
and capacity to marginalized groups (including both those for and those against human
rights). Online communications enable them to synchronize opinion, move discursive activ-
ism into the streets (often alongside an impetus provided by mainstream media coverage),
and conduct lateral organizing and route tasks to the ready people within a movement.
Tufekci also notes the weaknesses in these innovations—how the speed of online organizing
may conceal fundamental organizational weaknesses and gaps in capacity for decision-
making and engagement with other actors, and confuse expected signals of movement
power by implying greater strength and coherence than actually exists.
However, many sectors of the human rights movement are currently feeling profound
pessimism. The rise of populism on a global scale, often empowered by use of social media,
has involved an assault on rights and organized civil society in both illiberal democracies
and in authoritarian societies, while economic inequality has called into question the focus
of many human rights groups (Alston 2017 and others in the same issue of this journal).
The formal human rights sector is in a crisis of confidence around its advocacy strategies,
its approaches to institutional, legalistic and elite engagement, and its response to populism
(Rodriguez-Garavito and McAdams 2016). Like other institutions of truth and accuracy it
is also grappling with the ‘post-truth’ movement (Rodriguez-Garavito and Gomez 2018) in
which its message is frequently drowned out in a sea of oppositional information and ambi-
ent volume of content.
This pessimism extends to the field of technology and particularly the role of digital
technologies and social media. As a recent journal edition of ‘Social Media and Society’
asked: ‘Social Media for Good or Evil’? (Hemsley et al. 2018). The technology darlings of
the early 2010s such as Facebook are now seen as enabling ‘fake news’, privacy violations
and electoral disruption in democracies such as the USA and hate speech and genocide in
Sri Lanka and Myanmar (Taub and Fisher 2018). The reality of the double-edged sword of
these technologies is also clear. Digital tools have expanded not only the space for rights
dialogue and for more voices of historically marginalized people to be heard, but also the
so-called Overton window, the space for the ideas that are acceptable in public discourse,
allowing malicious and anti-rights dialogue to proliferate. Actors hostile to human rights,
from states to individuals, use the same technologies and platforms as human rights acti-
vists and community and grassroots journalists in order to spread misinformation,
disinformation, and malinformation,1
to undermine civil society and democracy, and to
perpetuate hate speech. The identifying digital footprint of activists and the digital power
of trolls is used to find, silence, and direct physical violence against dissenting voices and
human rights activists. Both mainstream journalists and rights activists face similar patterns
of attack, as well as critiques on the basis of ‘fake news’. While social media has facilitated
broader information-sharing by more people with fewer explicit and visible gatekeepers,
there are plenty of less visible gatekeepers. These include not only armies of content moder-
ators on platforms looking for violations of opaque content moderation rules (Gillespie
2018) but also states and their electronic armies of trolls, censors and bots that are used for
harassment, swamping information and misinformation (Monaco and Nyst 2018). Most
governments have become savvy at how to turn the affordances of the internet and social
media to their own ends and many have made a concerted push-back against the use of this
connective power for human rights, as well as taking approaches to co-opt it for authoritar-
ian and repressive purposes. They watch closely and respond with demonization of new
mediums, and by blocking and flooding spaces online with supporters, trolls or strategic
distraction (see King et al. 2017 on the Chinese government’s strategic use of social media).
Mobilization of publics on social media as allies to states which are authoritarian and an-
tagonistic to human rights—for example, in Turkey or the Philippines—occurs via phenom-
ena such as state-sponsored trolling. In the Philippines, Maria Ressa, a prominent
investigative journalist and critic of President Duterte, continues to be targeted for online
harassment and violence. In some countries this harassment is state-executed (and state-funded, as with the so-called ‘50 Cent Army’ in China); in others it is state-directed, incited or endorsed (Monaco and Nyst 2018).
Additionally, formal legal and policy push-backs include the trend worldwide to crimi-
nalize protest and online expression. This includes efforts to target bloggers and those who
post on social media under laws on ‘fake news’, registration, and counter-terrorism. It also
incorporates the push-back on the right to record and share images of violations, which
reflects states’ recognition of how people increasingly turn to the option to film and share
as their most basic choice to exercise freedom of expression and show abuses of their rights.
The internet and social media are no longer seen so much as a threat by governments but as
a safety valve, a surveillance mechanism, and an attack vector on dissidents. At a systemic
level we are seeing the rise of what Freedom House (Shahbaz 2018) has called ‘digital au-
thoritarianism’—a model in which people’s participation online facilitates censorship and
surveillance: ‘governments control their citizens through technology, inverting the concept
of the internet as an engine of human liberation’. The clearest example of this is China,
which is increasingly exporting this model and its related technologies to other countries
worldwide.
1. Misinformation is information that is false, but not intended to cause harm. For example, individuals who don’t know a piece of information is false may spread it on social media in an attempt to be helpful. Disinformation is false information that is deliberately created or disseminated with the express purpose to cause harm. Producers of disinformation typically have political, financial, psychological or social motivations. Malinformation is genuine information that is shared to cause harm. This includes private or revealing information that is spread to harm a person or reputation. See First Draft, Information Disorder, Part 1: The Essential Glossary (https://medium.com/1st-draft/information-disorder-part-1-the-essential-glossary-19953c544fe3, referenced 5 July 2019).
The aspiration of social media as a tool of sousveillance—the idea that those in power would be monitored by ‘watching from below’, rather than citizens being controlled by surveillance from above—has given way to a recognition that social media is as much a tool of continued surveillance, if not ‘omniveillance’, as of watching from below.
My observations in this article are grounded in the work of WITNESS (http://www.witness.org), a human rights network with a mission to support anyone, anywhere using video
and technology to protect and defend human rights. We work at the intersection of the con-
sumer technologies of mobile, internet and video with emerging cultures of civic participa-
tion. We work closely with local communities who are using these tools to help them do it
better and to have the relevant literacies and tools to be more confident, ethical and effec-
tive. We go on to share what is learned in other similar contexts (for example, of war crimes
documentation or police violence monitoring) and we also engage with key global plat-
forms—based on the diverse grassroots experience of the people we work with—to ensure
their products and policies support both human rights users and human rights values. We
also prototype and intervene in emerging technologically-driven areas of challenge and op-
portunity for human rights work, such as the growing use of artificial intelligence (AI) and
its intersection with media manipulation, disinformation and rising authoritarianism. In
this article, I talk about what has changed and evolved in the last years of experience of dig-
ital technologies and social media, our own experience at WITNESS of grappling with these
changes, what is happening now, where technology trends may lead, and share some mani-
festo ideas on how the human rights community should respond.
As noted earlier, I focus primarily on the stages of the human rights documentation and
advocacy life cycle that include the use of social and digital media as documentation, its
preservation, and its usage in advocacy, organizing and as evidence. The rich scope of social
media as a movement-building tool is too broad to do full justice to it here.
Any conversation about ‘technology and human rights’ has multiple implied layers.
There is the direct impact of technology on the protection and realization of human
rights—for example, in the use of digital tools for censorship or facial recognition for sur-
veillance, or the deployment of artificial intelligence to enable better media manipulation
(as discussed below in relation to so-called ‘deepfakes’)—and how companies, and local
and international law, policy and norms shape and respond to that. There is also how, or if,
human rights values such as freedom of expression and privacy are embedded in the devel-
opment and productization of technology. This links closely to the ‘Silicon Valley bubble’
problem of who builds and markets technologies, and for whom. Related to this, and ger-
mane to this article, are the ways in which human rights actors proactively and reactively
influence technology development to be more ethical and responsible—for example, as coa-
litions have done in response to the failures of Facebook’s content moderation policies to
push for transparency and change, and as WITNESS has pushed for with better privacy
protections such as a blurring feature on YouTube. Additionally, there are the explicit uses
of niche and dual-use technology in human rights work and by human rights defenders as
well as others in related organizing and solidarity work. An ecosystem of tools has devel-
oped in the past decade around mobile documentation, evidence-gathering and personal
protection for activists. The best of these tools are well aligned with the actual human rights
ecosystems of skills, collaborations and work in concert with established and new partici-
pants and in relation to the lifecycles of human rights documentation, evidence-gathering
and advocacy. The worst are poorly designed, built with limited consultation, and poorly maintained.
At WITNESS we focus both on the importance of human rights-specific tools and also
on the reality that the burgeoning number of people who create and share digital media on-
line will do it using the tools they have available to them—like Facebook, Twitter,
WhatsApp and YouTube. Any approach to digital communication tools and social media
for human rights has to focus as much on these tools, and promoting their better usage and
harm reduction around their failings, as on dedicated human rights community tools.
Although in this article I focus primarily on consumer technology, social media and human
rights, I take to heart the caution by Enrique Piraces (2018) who, in his writing about the
future of human rights technology, notes the over-emphasis on information communication
technologies at the expense of other areas.
What challenges and opportunities shape the use of digital
communication tools and social media for human rights?
A series of key ongoing questions shape an understanding of human rights and digital me-
dia technology.
Firstly, how do we understand safety and security when engaging online and with digital
tools and who defines what this means, and for whom? Secondly, how do we think about
trust and credibility—from the individual level of a single video shared from a crisis context
right through to the credibility of institutions engaged in this space who rely on a reputation
for accuracy but operate in a world of uncertain information and ‘fake news’? Thirdly, how
do we redefine effective human rights documentation and advocacy in an era of informa-
tion overload, information confusion, deliberate misinformation and disinformation, and
of increasing volume and velocity of data and media? As a fundamental backdrop to this
question and the issues of safety and trust, we must also ask how we reconcile and navigate
the dichotomy between how technology enables distributed power and yet increasingly re-
centralizes that distributed power in a few key platforms that both serve primarily commer-
cial imperatives and also exist under increasing government pressure. There is a further
backdrop—which I address less in this article—of how we must address the increasing
trends towards ‘digital authoritarianism’ where social media is at the service of govern-
ments. In the sections below I focus on each of these three questions—around visibility and
safety, trust, and how the media environment of volume intersects with consolidated plat-
form power, outlining a status quo and what comes next.
However, the lens through which we consider these questions is important. None of
these challenges are abstract. All of these challenges are experienced very directly at a hu-
man level by individuals who speak out for their rights, and organizations who support hu-
man rights. How are these individuals and organizations to be found, trusted and believed, and how can they connect to others in a way that is safer from retaliation and harm, and more likely to lead
to change which has a positive impact on human rights? How do people have agency to
make effective choices around staying hidden and being visible (Gregory 2015)?
Centring the vulnerable person and community at the heart of how we think about the
positive and negative sides of digital technologies and human rights is critical. The practices
of WITNESS over the past decade have focused on people-centric guidance and tools to in-
crease the odds that digital media protect rather than compromise human rights—for exam-
ple, guidance on ethical sharing of footage found online so that it does not raise risks for
people seen in it. This centring is also key to the ongoing debate about the effectiveness of
niche ‘human rights technology’ and who it serves. This approach also relates to what my
colleague Dia Kayyali has framed as a ‘harm reduction’ approach to the negative impacts
of major commercial tools, recognizing that people will continue to use them because they
provide affordances and communication opportunities they and their communities need
(Kayyali 2018a). Recent work by WITNESS on closed messaging apps such as WhatsApp
(Kayyali 2018b, 2018c) includes steps both that companies should take and that individuals
can take to reduce harms knowing that human rights defenders choose to continue to use
services that are not optimized for their work or their values.
Most of these concerns are not new. They have remained consistent over the past dec-
ades of my work in this space. In an article in this journal in 2010, I described how ‘these
concerns—which could also be expressed in terms of questions of authenticity, efficacy
for action and safety—have only been magnified in an environment of radically increased
participation in visual documentation and testimony of human rights violations’
(Gregory 2010). In a 2011 WITNESS report, ‘Cameras Everywhere’ (Padania et al.
2011), we highlighted the problems emerging at that time under similar categories that
included: ‘privacy and safety’; ‘technology providers as human rights facilitators’; and
‘information overload, authentication and preservation’. What has changed is the scope
of the concerns, and the extent to which they are being weaponized against rights
defenders.
Visibility and safety
The rhetoric of the early 2010s from the dominant social media platforms was of a trend to-
wards openness, transparency, disclosure, and of a decline in the expectations of privacy.
Yet even in that moment it was clear that if you centred vulnerable people and human
rights defenders at the core of thinking, ‘the realities of human rights risks on the ground
... are not connected to these changing online norms, whether they may be real or imag-
ined. It remains as risky as ever to challenge power or to speak out against injustice, and
power holders continue to trample people’s rights’ (Gregory 2012a). It was also clear that
‘consent—emerging from established human rights practices and traditions of documentary
ethics and social science, and grounded in a recognition of real dangers on the ground—is
central, but needs to be re-grounded in new communities of practice’ and new tools like
YouTube (Gregory and Zimmerman 2010).
The last five years have demonstrated the risks facing individuals using consumer infor-
mation communication technologies, and the continuing failure of platforms to adequately
recognize the privacy harms that occur to both ordinary users and vulnerable people. As
ever, complex networked technology appears ‘ever simpler for users to operate, but not to
control’ (Padania et al. 2011). By ‘virtue’ of their design for ease of use and monetization as
well as by our habits of usage, mobile and internet technologies generate voluminous data.
In a human rights context of vulnerable defenders, there is a set of resulting risks.
Omnivorous data collection leads to the ability to actively deanonymize and track individu-
als online and via their mobile connectivity. The intersection of cloud computing, artificial
intelligence and computer vision with facial and gait recognition allows for the identifica-
tion of individuals through their images, visages and movement. As a result, most methods
of thinking about protection and consent in individual images or any individual moment
are rendered far less effective. The promise that ubiquitous cellphone cameras would pro-
vide sousveillance or bottom-up watching of those in power has been weaponized against
people as they become potentially another data source for surveillance. Sousveillance has
also been complicated by explicit efforts to stop citizens’ uses of technologies to document,
access and share information safely, by means of attacks on end-to-end encryption in mes-
saging, bans on the use of VPNs (virtual private networks), and crackdowns on the ‘right to
record’ in public. Governments argue that they need more explicit surveillance powers both
at the street level with body-worn cameras on police, and at the network level. Aspirations
of sousveillance are now as likely to reflect a reality of continued surveillance or at best
what Jamais Cascio (2013) describes as ‘ambiveillance’. Meanwhile new harms to privacy
and safety have also emerged, including the way that states and trolls have learned to mobi-
lize the affordances of platforms like Twitter and Facebook to target dissidents and swamp
their speech (Monaco and Nyst 2018).
What comes next and what should we do?
So what does the future look like and what might our response look like? Harm reduction
tools need to evolve to reflect new realities of the power of artificial intelligence and sur-
veillance, and the ever-increasing discrepancy in power and agency between technology platforms and governments on the one hand and individuals, particularly vulnerable people and dissidents, on the other.
Visual anonymity, for example, faces new challenges when our own self-directed sharing
of selfies combines with ubiquitous government and commercial monitoring of us in pub-
lic and private spaces. Companies need to protect critical rights to anonymity and pseudo-
nymity and provide products that facilitate these (Gregory 2012b), such as better tools
for blurring faces. However, we also need to think out of the box on the new privacy tools
for the future. One proactive solution would relate both to the ready access to ubiquitous
surveillance and to emerging threats of deepfakes (realistic simulation of an individual’s
likeness or voice) and other ways to realistically synthesize plausible video and audio, dis-
cussed further below. These deepfakes will likely be used to conduct credibility attacks,
to muddy the waters of investigations, and to incite violence against journalists and hu-
man rights defenders (Gregory 2018a, 2018b; Edwards and Livingston 2018). They can
be created and used at low cost and with few consequences for those who use them and
incite real-world violence in online spaces. However, they also rely on access to large
quantities of ‘training data’ to build them—typically, images of an individual’s face.
Platforms and search engines are the repositories of much of this content for people who
are in the public eye. They should be pursuing opt-in solutions—such as the use of so-
called adversarial perturbations, invisible changes to images that disrupt the ability of
computer vision to recognize an image—to allow people to remove their face-print from
being easy training data in publicly searchable databases such as search engines (see
Gregory 2018a).
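To make the idea of adversarial perturbations concrete, here is a minimal sketch in Python with PyTorch of the widely known fast-gradient-sign method, using a generic pretrained classifier as a stand-in for a face-recognition model; the function name, parameters and usage are illustrative assumptions, not any platform's actual implementation.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# A generic pretrained classifier stands in for the kind of computer-vision
# model (for example, a face-recognition system) the perturbation would disrupt.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def perturb(image: torch.Tensor, label: int, epsilon: float = 0.01) -> torch.Tensor:
    """Fast-gradient-sign perturbation: a small, near-invisible change that
    pushes the image away from the model's current (correct) prediction."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image.unsqueeze(0)), torch.tensor([label]))
    loss.backward()
    # Step in the direction that most increases the model's error,
    # keeping pixel values in the valid [0, 1] range.
    return (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()

# Hypothetical usage: `photo` is a 3x224x224 tensor scaled to [0, 1] and
# `predicted_class` is the label the model currently assigns to it.
# protected = perturb(photo, predicted_class)
```

An opt-in service built on this idea would apply such a perturbation before a face image becomes publicly searchable, degrading its value as training data while leaving it visually unchanged to people.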
One temptation is to build only for our human rights silos—while the reality is that the
growth in human rights documentation, advocacy and organizing has mainly happened
with people using commercial and consumer tools. It is why calls to #DeleteFacebook, as
Dia Kayyali (2018a) has noted, often neglect the network effect, the established networks
of usage and the economic trade-offs for people with fewer resources that lead to them con-
tinuing to use these platforms. Many of these users will be among the broader networks of
non-formal human rights documenters, media activists and civic witnesses. We need to
keep up the pressure on technology developers and companies to take seriously how their
tools affect vulnerable users and how they will be manipulated by governments—a critical
example here would be the current debate around to whom and how a company like
Amazon should sell facial recognition technology (NBC News 2018). This attention needs
to begin right from the start of the product cycle and be about inclusive design and human
rights, not just ethics. It cannot be applied just as a band-aid after products have been re-
leased under a ‘fail early, apologize later’ approach. It should centre human rights and vul-
nerable groups at the heart of its process.
In terms of the ability to leverage human rights into the debate, some grounds for opti-
mism already exist. There have been a growing number of civil society actors engaging di-
rectly on these questions, including the development in the last decade of a field around
technology and human rights. While professionalization and ‘fields of practice’ carry dan-
gers in terms of who they exclude and include, they also carry potential for consolidated
action and learning. While real power continues to lie in the hands of governments and
corporate non-state actors like the platforms and social media networks, civil society is de-
veloping a better level of preparedness and a stronger ability to challenge techno-
utopianism that assumes human rights will follow naturally from the tech, or blatant ne-
glect in handling human rights impacts. Early seeds and shoots of this can be seen in the
organizing that happens around conferences and convening spaces like RightsCon and
the Internet Freedom Festival, in the movement towards public interest technology
among US foundations, and in the pressure on Facebook to respond to civil rights com-
plaints in the USA and to review how they could improve their responses to coordinated
hate speech and incitement to war crimes in Myanmar (though this did not actually in-
clude doing a human rights impact assessment on the harms they had already caused)
(Allison-Hope 2018; UN OHCHR 2018). We are now seeing the development of coalitions like the Next Billions grouping, representing many countries that Facebook and other companies have failed to treat as equally valuable humans and
users in their ecosystem. The current critical lens on artificial intelligence and human
rights comes in many cases from researchers and academics who have actively led efforts
around inclusion, diversity and identifying harms for many years, often in alignment with
civil society organizations. Alongside this, the current debate, at both a regulatory level
and in the public sphere, about platform company responsibilities and their power pro-
vides opportunities to push a human rights perspective and has led many other civil soci-
ety groups to focus more explicitly on challenging platform power (for example, the
organizing group Color of Change in the USA). These external challenges increasingly
align with organizing by employees in key technology companies—for example the way
that Google employees pushed back on Google re-entering the Chinese market or on the
choice of artificial intelligence and ethics advisers for the company (Googlers Against
Transphobia 2019). Within these intersections of new allies, and internal and external
pressures, lies some increased potential to advocate for more rights-respecting technology
platforms.
Concomitantly, at a policy level, human rights actors need to support government
regulation that is actually rights-respecting and promotes privacy and free expression—
because the internet is becoming a set of internets, under competing national regimes
(McDonald and Xiao Mina 2018) and the models that will be generalized for govern-
ment and corporate regulation will be even worse if they originate in a Chinese ‘digital
authoritarian’ model or make authoritarian usage of new ‘fake news’ laws. One clear
example of where this government regulation is necessary is in the area of facial
recognition.
Trust
We are in a real and manufactured crisis of trust in sources of credible information, includ-
ing human rights organizations. The pressures of ‘fake news’ (and the variety of phenomena
within this that Wardle and Derakshan (2017) describe as ‘information disorder’) and the
manipulation of the idea of ‘fake news’ by those in power have pushed many people to-
wards a default either of disbelief or of scepticism about mainstream sources. These trust
issues are at multiple levels—not only ‘how do I trust anything?’ but also ‘how do I trust
this video of human rights violations I am watching now?’. Attacks on trust target both
credible mainstream media sources as well as content from human rights defenders and out-
siders, whose efforts are often, if not always, amplified in symbiosis with mainstream me-
dia. Any response to attacks on trust must reflect holistically how to reinforce both
freedom of the press and trust in media institutions, while ensuring not to do so in a way
that excludes new human rights voices.
This attack on trust threatens one of the most significant advances driven by the profu-
sion of digital technologies and social media in the past decade—the explosion described
above of the use of open-source intelligence for human rights and war crimes investigations.
Open-source materials have always had their place in investigations. However, since the
2010–11 Arab Spring, international war crimes investigators, human rights advocates, and
journalists have developed increasingly effective practices for how to use the videos and
photos and social media accounts shared by purposeful citizen documenters, by bystanders
and by perpetrators to drive forward investigations, prosecutions and advocacy (Koenig
et al. 2019 (forthcoming), Gregory 2018c, Matheson 2015, and WITNESS, no date).
Current responses focused on building stronger trust in digital documentation and imagery
from non-traditional sources, or discovered via open-source searching, have given us
the ‘open-source intelligence’ (OSINT) movement and the inclusion of verification practices
into both human rights training—for example, the Amnesty International Citizen Evidence
Lab (https://citizenevidence.org) or the WITNESS Media Lab (https://lab.witness.org)—as
well as into the practices of journalists covering human rights and war crimes—for example,
the New York Times Visual Investigations Team and Bellingcat (https://www.bellingcat.com).
It has also led to the growth of the field of ‘forensic architecture’ (Weizman 2018) and spatial
analysis compositing together multiple forms of evidence and data to provide in-depth recon-
struction and analysis of particular human rights incidents such as the work of Situ Research
(https://situ.nyc/research) and Ukrainian groups in documenting the Euro-Maidan shootings
in Ukraine (Schwartz 2018), often based on archives of digital content. These technology-
enabled approaches try to find new ways to challenge distrust and scepticism, and provide
new ways to present evidence. They intersect with similar emerging practices in the field of
journalism, including coalition-based approaches to making these assessments such as the
work of First Draft with Comprova in Brazil (https://firstdraftnews.org/project/comprova).
The work of WITNESS in this area focuses on the facilitation of choices around visibil-
ity and protection. We recognize that people confronting injustice need as much control as
possible to manage constant decisions around the friction between staying hidden and
wanting to be found and that commercial tools do not easily facilitate this. Our work has
included collaborating with the Guardian Project to develop niche tools that human rights
documenters and citizen witnesses can choose to use to increase trust in their images and
content—for example, the CameraV and ProofMode (https://github.com/guardianproject/proofmode/blob/master/README.md; Kayyali 2017) tools for enhancing the metadata
around an image, and cryptographically signing it so it can be confirmed as coming from a
particular user, as well as adding a hash that will allow someone to confirm if the image
has been manipulated or changed since it was captured. These tools exist within a small
ecosystem of human rights-centric documentation apps which attempt to turn ‘metadata
from a surveillance risk into a method for the production of public proof’ (van der Velden
2015), such as Eyewitness to Atrocities and MediCapt (Naimer et al. 2017). They also include
more commercially driven apps such as TruePic. Yet, in line with our recognition that niche
tools have value but also reach only a small percentage of the people documenting rights
violations in their communities, we use our tools as ‘reference designs’ to make the case to
companies like YouTube and Google as to how they should think about supporting users to
opt in to include in their media more signals of authenticity, provenance and trust, and for
an approach to ethical technology adoption at scale that looks at ‘scaling by theft’, when
larger entities and commercial platforms steal and appropriate our ideas.
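As a rough illustration of the underlying technique (content hashing plus cryptographic signing), rather than a description of ProofMode's actual format or key handling, a minimal Python sketch using the standard hashlib module and the widely used cryptography library might look like this; the filename and key management are hypothetical.

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def hash_file(path: str) -> bytes:
    """SHA-256 digest of the media file; any later edit changes the digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.digest()

# The documenter's device holds a private key; the matching public key is
# shared with whoever will later need to verify the footage (hypothetical setup).
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

media_hash = hash_file("incident.mp4")    # hypothetical filename
signature = private_key.sign(media_hash)  # attests who captured or asserted it

# Later, a verifier re-hashes the received file and checks the signature;
# verify() raises an exception if the file or signature has been altered.
public_key.verify(signature, hash_file("incident.mp4"))
```

The design choice matters: the signature and hash travel with the media as optional, user-controlled metadata, so a documenter can add verifiability without being forced to reveal identity or location.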
What comes next?
Placing human rights concerns, human rights defenders, grassroots journalists and margin-
alized communities at the centre of responses to new threats to trust is key. I will illustrate
this via two examples.
The above examples of tools for validating images, videos or audio recordings from
the moment of capture through the content’s lifecycle and use—from TruePic to
WITNESS/Guardian Project’s ProofMode—relate to a growing set of commercial entities
and journalistic coalitions that are pushing for a greater reliance on supposedly ‘credible
news sources’ and the ability to validate content from source. Many of these tools seek to
use rich metadata, signing of images at origin, controlled image-capture processes, and distributed ledger technologies such as blockchains to track origins and edits. These innovations will have a profound impact on
how we assess the credibility of the world of participatory communication. If we ap-
proach this from a human rights values and human rights defender perspective, we need
to recognize the potential here for validating evidence of war crimes, providing verifiable
tracking in supply chains, and facilitating signals to aid verification by fact-finders, social
media platforms and journalists. Yet we also need to be cautious about the negative im-
pact these approaches might have on citizen journalists and community media as well as
marginal or dissident voices who cannot safely choose to embed all this additional data,
others who need anonymity or the ability to remove content from the public domain
when security risks change, and people with limited access and connectivity or who use
older devices. We also need to consider the broader impact on society that these solutions
conceptually propose—implying that a ‘blue tick’ or a watermark is necessary to be be-
lieved. What is the impact of a potential move to ‘disbelief as default’ on images and vid-
eos, and who gets included and who is left out by this? Our technological solutions to the
collapse of trust may end up exacerbating the problem and doubling down on the exclu-
sion of the accountability activists and human rights defenders who have decades of expe-
riences of already being called ‘fake news’ and of those in power using every option to
discredit and delegitimize their stories. We will have just given those in power another
weapon against us.
Our voices are stronger if we are at the table earlier. In the case of the sophisticated, per-
sonalized audio and video manipulations called ‘deepfakes’ that I mentioned above, there is
a critical need to have a pragmatic human rights perspective at the table before we are in
the middle of the real explosion of usage of these deepfakes as well as other forms of artifi-
cial intelligence-generated and manipulated ‘synthetic media’ and malicious edits to video
and audio (Gregory 2018b). These forms of manipulation have the potential to amplify, ex-
pand, and alter existing problems around trust in information, verification of audio-visual
materials, and weaponization of online spaces. However, the conversation around them
has largely been apocalyptic in nature, technical in focus, and primarily concerned with consequences in the global North (ibid.). For example, it largely does not reflect the reality that many human rights defenders have been confronted with a volume of ‘shallowfakes’ for most of the last decade (see Gregory 2016 and Bair and Maglio 2014).
By shallowfakes I mean the tens of thousands of videos circulated with malicious intent
worldwide right now—crafted not with sophisticated artificial intelligence, but often sim-
ply relabelled and re-uploaded, claiming an event in one place has just happened in another.
Or else videos that have simple edits or a new audio track. Videos like this showing horrible
atrocities crop up repeatedly, recycled between countries. Notorious examples include the
miscontextualized lightly edited videos that have incited mob violence in India via closed
messaging groups (BBC News 2018) and the video claiming to show migrant violence in
Europe that President Trump re-tweeted from British far right sources (BBC News 2017).
Any response to deepfakes needs to look at not only the particular threats they pose to hu-
man rights defenders and already marginalized communities globally and not just in the
USA and Europe, but also the expertise that already exists in detecting and combating fak-
ery. It also needs to account for the central but complicated role and expectations of plat-
forms like Facebook and YouTube and how they will respond to this threat—will it be with
sufficient transparency, a response to clear violations of privacy and bodily autonomy, yet
include concern for free speech and satire? We need a diverse range of global civil society,
human rights activists and media activists to be at the table for this decision-making.
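One existing, relatively low-tech piece of that anti-fakery expertise is checking whether ‘new’ footage is actually recycled. Below is a minimal sketch of that idea in Python, using the Pillow and ImageHash libraries to compare perceptual hashes of keyframes against an archive of previously dated or debunked clips; the reference filenames and distance threshold are illustrative assumptions, not any fact-checker's actual pipeline.

```python
from PIL import Image
import imagehash  # pip install ImageHash

# Hypothetical reference set: keyframes extracted from clips that have
# already been dated, located, or debunked by fact-checkers or an archive.
REFERENCE_FRAMES = ["archived_clip_2013.png", "debunked_clip_2016.png"]
reference_hashes = {name: imagehash.phash(Image.open(name)) for name in REFERENCE_FRAMES}

def matches_known_footage(frame_path: str, threshold: int = 8) -> list:
    """Return archived clips whose keyframes are perceptually close to this
    frame, i.e. likely the same footage relabelled for a new time or place."""
    candidate = imagehash.phash(Image.open(frame_path))
    # Subtracting two ImageHash objects gives a Hamming distance between hashes.
    return [name for name, h in reference_hashes.items() if candidate - h <= threshold]

# Hypothetical usage:
# matches_known_footage("keyframe_from_viral_video.png")
```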
Volume and consolidated platform power
The volume of online media and communications keeps growing and growing. In the time
it will have taken the average reader to have reached this far in this article, at least five
thousand hours of video were uploaded to YouTube and seven million hours watched
(YouTube for Press, no date). Just this one platform’s growth in the past five years gives a
sense of the trajectory of creation, sharing and consumption on many of the major plat-
forms like Facebook, Instagram, Twitter and Snapchat. In 2010–11, YouTube was touting
48 hours a minute uploaded; in 2012, YouTube was exceeding 100 hours a minute; in 2014, 300 hours a minute; in 2015, 400 hours ... Of course, the majority of this is not ei-
ther explicit or implicit human rights content. It’s the mundane, the offensive, the violent,
as well as the small percentage but still high volume of evidence and documentation of hu-
man rights violations.
The volume creates opportunities and problems. As an opportunity, in that volume there
is both a data set of information and a host of compelling individual stories and realities.
The availability of more local, more proximate, more varied human rights stories creates
possibilities for a more varied human rights movement (Xiao Mina 2018) (see also Land
(2016) on the possibilities of participatory human rights fact-finding). Local stories of hu-
man rights issues also reflect the characteristics of the internet that rely more on networked
authenticity than authoritative sources, favouring what you hear from sources that you
trust and know (Xiao Mina 2018). However, there is also the reality that critical stories
will get lost like needles in the haystack. The realities of the risk of documenting a human
rights violation have not diminished—for example, if you are a favela resident in Rio de
Janeiro documenting an extrajudicial killing outside your home—yet the likelihood of your
video or photos making a difference may have diminished. In ubiquity, even the strongest
human rights evidence can be easily lost in the volume of content, particularly if the as-
sumption of success is widespread visibility or content going viral (see Gregory 2012a).
Your media must compete against other content for public attention—both cat videos and
celebrities, but also deliberate attempts to discredit human rights material or distract audi-
ences (Xiao Mina 2019). And even as we consider how accounts are lost in the volume we
need to recognize that the digital divide is still prevalent as an issue on a global scale, and
along dividing lines such as wealth, gender and rural/urban within societies. Not everyone
has access to the means to provide this form of documentation of human rights abuses.
This reinforces what Patrick Ball has noted on the availability of data and data darkspots
on human rights issues: ‘the absence of evidence is not evidence of absence’ (quoted in
Guberek and Silva 2014: 26). Perversely, we also see how increasingly the expectation of
readily available social media or visual evidence of rights violations is weaponized against
human rights advocates who do not have this content—in the absence of this documentation, media and courts dismiss claims in countries like Israel. Put colloquially, for multiple
reasons, ‘pics or it didn’t happen’ is still not a good rule for human rights documentation.
A range of strategies have evolved for grappling with volume in human rights advo-
cacy. One approach is for professionalized fact-finders and advocates to get better at
finding the needle in the haystack. Here, alongside the explosion of OSINT techniques
for searching for and finding information, both hidden and publicly visible
(see Aronson 2018), we see the growing usage of artificial intelligence, natural language
processing and computer vision to scour online material—for example, the archives of
the Syrian war as analysed via the E-Lamp project from Carnegie Mellon University
(https://aladdin1.inf.cs.cmu.edu/human-rights/E-LAMP), and the VFRAME project of
Adam Harvey (https://vframe.io) and Mnemonic/the Syrian Archive (https://syrianarchive.org).
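To give a sense of what this kind of machine-assisted triage involves, without claiming to reproduce E-LAMP's or VFRAME's actual pipelines, here is a sketch in Python with OpenCV and a generic pretrained torchvision detector; a real project would substitute a model trained on its own classes of interest (for example, specific munitions), and the sampling rate and confidence threshold are illustrative.

```python
import cv2
import torch
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

# A generic pretrained object detector; the scanning loop stays the same
# when a project swaps in its own custom-trained model.
weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
detector = fasterrcnn_resnet50_fpn(weights=weights).eval()

def flag_frames(video_path: str, every_n: int = 30, min_score: float = 0.8):
    """Yield (frame index, detected labels) for sampled frames of a video."""
    capture = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_n == 0:
            # Convert the BGR OpenCV frame to a normalized RGB tensor.
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
            with torch.no_grad():
                detections = detector([tensor])[0]
            labels = [
                weights.meta["categories"][label]
                for label, score in zip(detections["labels"].tolist(),
                                        detections["scores"].tolist())
                if score >= min_score
            ]
            if labels:
                yield index, labels
        index += 1
    capture.release()
```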
Another approach is to focus on how to ensure that those on the ground who create hu-
man rights content are better placed to make their content stand out in the crowd, or make
it relevant to particular types of usages and audiences. For WITNESS this emphasis on the
broad-based skills and literacy side of tools and technology deployment is key. An example
of this work would be the work by WITNESS in the field of ‘video as evidence’ (https://vae.witness.org, discussed further in Gregory 2018c), focusing on how to ensure that individuals
and groups documenting war crimes and other crimes under international law understand
the types of content that are most useful to investigations and prosecutions, and document
material in ways that enhance credibility and the ability to utilize it.
Another approach is to get better at compositing together multiple sources to tell spatial
and temporal stories that self-corroborate as well as explain complexity—for example, the
Euro-Maidan work of Situ Research cited above, documenting the killings of
protestors by paramilitaries in Kiev (see, in this journal, Aronson et al. 2018). In our own
work at WITNESS we have also in our Media Lab pursued projects that try to make sense
of the haystack and use the sheer scope of video out there as data points. In the ‘Capturing
Hate’ project (WITNESS Media Lab 2017) WITNESS analysed the hundreds of videos
shared online on YouTube and niche video sites showing transphobic violence, shot and
shared for malice, entertainment and monetization. We did not share or re-post the vid-
eos—we used the volume and the engagement data to demonstrate the depth of transphobic
hatred and direct violence, and the complicity of platforms in allowing it.
Crowd-mapping and curation have been prevalent strategies for presenting a range of narratives and information since the launch of Ushahidi (https://www.ushahidi.com), a crowd-mapping tool for soliciting information about incidents, in the aftermath of the disputed 2007 Kenyan elections. These strategies also include more structured curation proj-
ects such as the work by WITNESS with Storyful curating the Human Rights Channel on
YouTube (https://www.youtube.com/user/humanrights), and in projects such as Watching
Western Sahara (https://lab.witness.org/projects/citizen-video-in-western-sahara) that
highlighted multiple citizen accounts of rights violations in that occupied, closed territory.
There are also more polyvocal approaches to curation practices that try to highlight multi-
ple stories rather than privileging one voice—a notable example here is the ‘It Gets Better’
project around LGBT youth (https://itgetsbetter.org/stories). Many of these strategies over-
lap with other emergent fields based on using the sensors inherent in mobile devices, or that
can be easily added, as part of citizen science projects (see Piraces 2017).
What comes next?
Human rights defenders—across the spectrum, not just the elite national and international
organizations—will need to get better at using these volume-optimized strategies and tools.
As an example, to date many technical tools for sorting content utilizing artificial intelli-
gence have not been mainstreamed, but initiatives like HURIDOCS Uwazi (https://www.uwazi.io) are looking to make machine learning more accessible across the human rights
defenders’ community.
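As an illustration of the kind of machine learning that can be made accessible to documentation groups (a generic sketch in Python with scikit-learn, not Uwazi's actual implementation; the example snippets and labels are invented), a simple text classifier can suggest thematic tags for incoming testimony or reports:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled snippets: short excerpts of testimony or reports
# that a documentation group has already tagged by theme.
texts = [
    "police fired on unarmed protesters outside the courthouse",
    "the village well was contaminated by runoff from the mine",
    "detainees report being held without charge for months",
    "crops destroyed after a chemical spill into the river",
]
labels = ["police_violence", "environment", "detention", "environment"]

# TF-IDF features with a linear classifier: simple, inspectable, cheap to run,
# and retrainable by a small team as new tagged material accumulates.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
classifier.fit(texts, labels)

# Suggest a tag for a new, unlabelled report.
print(classifier.predict(["soldiers detained three journalists at the checkpoint"]))
```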
However, to deal with the underlying question of volume requires the human rights
community to be an advocate at a systems level on the mechanisms behind the scenes that
determine what gets seen and what does not. The ability of human rights defenders and
marginalized communities to have their voices seen, heard, found, and preserved is related
to the consolidated power of feed-based platforms, video-sharing sites and search engines.
Close to a decade ago, the internet commentator and scholar Ethan Zuckerman noted that
‘hosting your political movement on YouTube is a little like trying to hold a rally in a shop-
ping mall. It looks like a public space, but it’s not—it’s a private space, and your use of it is
governed by an agreement that works harder to protect YouTube’s fiscal viability than to
protect your rights of free speech’ (Zuckerman 2010). But we can expand this further—it’s
not just that YouTube controls what content you share, but also that it controls what other
people see. Recommendation algorithms based on attention drive people towards more ex-
tremist content, and throttling practices that can reduce the visibility of a piece of content
in a newsfeed or content stream exercise hidden censorship power on platforms like
Facebook and Twitter. For the average user, personalization and social search determine
what you see, while proprietary ranking systems for authoritativeness and relevance are
opaque. Fifteen years into the social media revolution, the tensions of using private infra-
structure for our public sphere are only getting more explicit.
For human rights media, content moderation policies often rapidly remove evidence
from platforms before it is seen by human rights advocates or can be analysed for curation,
presentation or evidence—a notable example related to the increasing use of artificial intel-
ligence was the loss of hundreds of thousands of videos documenting the Syrian civil war
on YouTube in 2017 (Asher-Schapiro 2017; Rajagopalan 2018). Dissident content is often
taken down or challenged on spurious grounds. As a response to platform content modera-
tion and the increasing use of artificial intelligence without transparency or adequate over-
sight we need concerted civil society action to keep the pressure on companies to align their
content moderation with human rights principles. Emergent coalitions like the Next
Billions that reflect broader constituencies outside the global North are a key part of this.
The UN Special Rapporteur on Freedom of Expression, David Kaye, has also recently out-
lined how human rights-based content moderation might be done—including respect for
human rights standards and push-back on authoritarian demands, as well as principles of
transparency, accountability and redress (UN Human Rights Council 2018). We will need
to be clear where we want and demand humans in the loop of algorithmic accountability.
YouTube content is taken down by users as well as by the company—it is estimated
one-third of content from the Iranian Green Revolution of 2009 was gone within a few years, while roughly one-third of the Arab Spring content tracked by Storyful had gone within
two to three years of the events (Malachy Browne, personal conversation), and close to ten
per cent of the content WITNESS tracked on the Human Rights Channel was gone within a
year or two. Closed messaging apps, such as WhatsApp where many videos are shared, are
not visible to outsiders who are not part of the groups and are ephemeral in terms of con-
tent preservation, forcing activists using the tool in situations like northern Myanmar and
in Rohingya communities there to develop workarounds (Ng and Ivens 2018). These losses
place even greater emphasis on the need to think about digital archiving practices and to
revalorize both the role of the archivist and community documentation efforts that help
preserve critical content, as well as new initiatives like Mnemonic and WITNESS’s own ar-
chival programmes (https://archiving.witness.org) that look for new tools and approaches
to supporting a greater diversity and accessibility of both practices and templates for man-
aging digital archives.
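As a minimal illustration of one such practice, the sketch below (in Python, under assumed file names and an invented manifest format) records fixity information, a cryptographic hash, together with basic provenance metadata at the point of preservation, so that later copies of a video can be checked against the original. It is a generic example of the idea, not the tooling used by Mnemonic or WITNESS's archival programmes.

```python
# Illustrative sketch only: recording fixity (hash) and provenance metadata for archived
# videos. File names and the manifest format are invented; this is not any specific
# organization's archival tooling.
import hashlib
import json
import pathlib
from datetime import datetime, timezone

def sha256_of(path: pathlib.Path) -> str:
    """Hash the file in chunks so large videos need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def add_to_manifest(video: pathlib.Path, manifest: pathlib.Path, source_url: str) -> None:
    """Append a record so later copies can be verified against this one."""
    records = json.loads(manifest.read_text()) if manifest.exists() else []
    records.append({
        "file": video.name,
        "sha256": sha256_of(video),
        "source_url": source_url,                      # where the content was first posted
        "archived_at": datetime.now(timezone.utc).isoformat(),
    })
    manifest.write_text(json.dumps(records, indent=2))

# Hypothetical usage:
# add_to_manifest(pathlib.Path("protest_2019-03-01.mp4"),
#                 pathlib.Path("manifest.json"),
#                 "https://example.org/original-post")
```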
Human rights defenders will also have to get better at dealing with the changes in the
underlying narrative environment as previous forms of advocacy become less effective.
The social media ecosystem is driven by credibility based on perceived authenticity rather than institutional trust, by the over-production of information and misinformation, and by attention-based algorithms. These present potential challenges to practices of strategic communication and to what WITNESS has described as video advocacy (Gregory 2012a), in which 'smart narrowcasting' targets particular audiences within the continuum of a campaign. Amid this volume, reaching the right audience still remains key; otherwise human rights content is simply lost in the deluge. But communication practices also need to recog-
nize that networked authenticity comes from peer attention as much as from pushing con-
tent to an explicit audience. Additionally, adversaries have developed ways to confuse
and challenge even these strategies—for example, the ‘firehose of falsehood’ model that
Russia deployed in Ukraine around rights violations, barraging Ukrainians with an array
of contradictory but plausible narratives around atrocities (Paul and Matthews 2016), or
the piling-on of state-driven or funded trolls onto dissident narratives and voices
(Monaco and Nyst 2018). This is compounded by the challenges of maintaining compas-
sion amid overwhelming volume. No one wants to watch images of atrocity non-stop,
and there is a fundamental question of whether the demand side for human rights imagery
and information matches the supply side. Recognizing that reality, we have to ensure that
as purposeful actors in the human rights space we enable compassion, solidarity, understanding and action in response to social media content on human rights, rather than facilitating vicarious trauma, psychic numbing, compassion fatigue, forensic voyeurism or vicarious witnessing, or enabling online trolling and violence by anti-rights actors (Gregory 2018b).
Virtual reality (VR), immersive experience and augmented reality (AR) are starting to move beyond novelty and the film festival circuit into increasingly routine use in human rights journalism, fundraising and messaging. As they do so, they accentuate
many of the same issues as other forms of human rights digital media. This is one area that
WITNESS has tried to explore through its work on the sense of ‘co-presence’ that can be fa-
cilitated via livestreaming or certain forms of virtual reality and immersive experience (see,
for example, the Mobil-Eyes Us project trying to convey the lived reality and opportunities
to act in solidarity alongside favela residents facing violence in Brazil) (WITNESS 2018). It
also relates back to a focus on supporting people from the moment of creation to create
and share with audience and purpose in mind rather than just launching content out into
the social web. Key questions raised by the growth of virtual reality and augmented reality
will relate back to existing concerns, often magnified. From an audience engagement per-
spective, how will we ensure that these media channel solidarity, provide safe spaces and mobilize action rather than facilitate vicarious spectating in immersive spaces (Gregory 2016, 2018a)? How will we deal with misinformation-based multi-sensory and personalized environments in virtual reality and augmented reality? And how will the voracious data collection in virtual reality, of our body movements, our eye movements, our gaze, be weaponized by governments and others to track us in the real world or to create deceptive simulacra of us using artificial intelligence?
Recognizing the reasons for pessimism, making the case for optimism
Human rights documentation and advocacy is in flux sector-wide. Our intersection with
digital media and technologies is a part of this. But it also reflects seeds of hope in a recognition that human rights will thrive if the field is relevant and inclusive. That is still the promise of a more participatory public sphere. Many of the challenges are the same ones that have been present since the start of the current explosion of digital communication. As I noted previ-
ously, the phenomenon of increased public voice in societies is filled ‘with emancipatory po-
tential in terms of securing accountability for rights abuses. But this is only as long as we
can make sure that the footage that circulates helps facilitate voice, action and change,
rather than enabling apathy, or, at worst, social control, public humiliation, and state re-
pression’ (Gregory 2012a).
To secure a more positive future we need an approach to human rights, digital media and technology that prioritizes a number of factors. It needs to be inclusive and rights-centred, and to argue its case against both the multipolarity of a Chinese internet of 'digital authoritarianism' and the laissez-faire, complacent bubble of Silicon Valley. It needs to argue for systemic change but also for 'harm reduction' from the start and
on an ongoing basis. Companies need to be pressured to remember that in a global world,
as a Sri Lankan government official noted to a New York Times journalist, ‘We’re a soci-
ety, we’re not just a market’ (Taub and Fisher 2018)—a sentiment that many people and
activists in places like Myanmar would echo. This will require advocacy efforts by a greater
range of external voices from communities most affected (often working in coalition for in-
creased power), companies listening to those communities meaningfully in designing prod-
ucts and policies, as well as increased internal action by employees who see the discrepancy
between rhetoric and reality. Our advocacy needs to grapple with the human rights-abusing and self-serving power of new government regulation, for example around 'fake news', while also supporting the role of legitimate regulatory and public bodies to set standards and make laws.
Our approach needs to proactively engage with the new tools that can serve to better fa-
cilitate documentation, sense-making and advocacy, such as artificial intelligence. We can
use those for our work, but we also need to be critics at the table to ensure that these tools
are not used to perpetuate harms and discrimination and that the right values are embedded
and verified on an ongoing basis in their usage. We must make the argument for agency, in-
clusion and meaningful control for people over their own visibility and privacy, and ensure
that solutions around trust and credibility do not cause inadvertent harms to the most vul-
nerable and to human rights defenders who have always been called ‘fake news’. And we
must be responsible innovators in what human rights advocacy looks like, how it includes
many more voices, and how it responds to the growing threat to credibility-based systems
of trust and to the need to communicate in a loud, crowded environment.
Acknowledgements
I thank the WITNESS team and our collaborators and partners worldwide whose courage and
willingness to try and try again informs my optimism in the face of world events.
References
Allison-Hope, D. 2018. Our Human Rights Impact Assessment of Facebook in Myanmar: How
Can Social Media Platforms Respect Freedom of Expression While Protecting Users from
Harm? BSR Blog. https://www.bsr.org/en/our-insights/blog-view/facebook-in-myanmar-hu
man-rights-impact-assessment (referenced 22 June 2019).
Alston, P. 2017. The Populist Challenge to Human Rights. Journal of Human Rights Practice
9(1): 1–15.
Aronson, J. 2018. The Utility of User-Generated Content in Human Rights Investigations. In J.
Aronson and M. Land (eds), New Technologies for Human Rights Law and Practice, pp.
129–48. Cambridge University Press.
Aronson, J., M. Cole, A. Hauptmann, D. Miller, and B. Samuels. 2018. Reconstructing Human
Rights Violations Using Large Eyewitness Video Collections: The Case of the Euromaidan
Protestor Deaths. Journal of Human Rights Practice 10(1): 1–20.
Asher-Schapiro, A. 2017. YouTube and Facebook are Removing Evidence of Atrocities,
Jeopardizing Cases Against War Criminals. The Intercept. https://theintercept.com/2017/11/02/
war-crimes-youtube-facebook-syria-rohingya (referenced 27 December 2018).
Bair, M., and V. Maglio. 2014. Video Exposes Police Abuse in Venezuela (Or is it Mexico? Or
Colombia?). WITNESS Blog. http://blog.witness.org/2014/02/video-exposes-police-abuse-vene
zuela-mexico-colombia (referenced 29 December 2018).
BBC News. 2017. Donald Trump Retweets Far-Right Group’s Anti-Muslim Videos. https://www.
bbc.co.uk/news/world-us-canada-42166663 (referenced 4 January 2019).
———. 2018. India WhatsApp ‘Child Kidnap’ Rumours Claim Two More Victims. https://www.
bbc.co.uk/news/world-asia-india-44435127 (referenced 4 January 2019).
Cascio, J. 2013. Twitter. https://twitter.com/cascio/status/364112024818556928 (referenced 30
May 2019).
Edwards, S., and S. Livingston. 2018. Fake News is About to Get a Lot Worse. That Will Make it
Easier to Violate Human Rights—and Get Away With it. Washington Post. 3 April.
Freelon, D., C. D. McIlwain, and M. D. Clark. 2016. Beyond the Hashtags: #Ferguson,
#Blacklivesmatter, and the Online Struggle for Offline Justice. Center for Media and Social
Impact. School of Communication, American University, Washington DC. https://cmsimpact.
org/resource/beyond-hashtags-ferguson-blacklivesmatter-online-struggle-offline-justice (refer-
enced 27 December 2018).
Gillespie, T. 2018. Custodians of the Internet: Platforms, Content Moderation, and the Hidden
Decisions that Shape Social Media. New Haven, CT: Yale University Press.
Goodman, A. 2015. Ta-Nehisi Coates on Police Brutality: ‘The Violence Is Not New, It’s the
Cameras That Are New’. Interview. Alternet. https://www.alternet.org/2015/09/ta-nehisi-
coates-police-brutality-violence-not-new-its-cameras-are-new (referenced 4 January 2019).
Googlers Against Transphobia. 2019. Google Must Remove Kay Coles James from its Advanced
Technology External Advisory Council (ATEAC). Medium. https://medium.com/@against.trans
phobia/googlers-against-transphobia-and-hate-b1b0a5dbf76 (referenced 2 April 2019).
Gregory, S. 2010. Cameras Everywhere: Ubiquitous Video Documentation of Human Rights and
Considerations of Safety, Security, Dignity and Consent. Journal of Human Rights Practice
2(2): 191–207.
———. 2012a. The Participatory Panopticon and Human Rights: WITNESS’ Experience
Supporting Video Advocacy. In M. McLagan and Y. McKee (eds), Sensible Politics: Visual
Cultures of Nongovernmental Politics. Cambridge, MA: MIT Press.
———. 2012b. Visual Anonymity and YouTube’s New Blurring Tool. WITNESS Blog. https://
blog.witness.org/2012/07/visual-anonymity-and-youtubes-new-blurring-tool (referenced 3
January 2019).
———. 2015. Technology and Citizen Witnessing: Navigating the Friction between Dual Desires
for Visibility and Obscurity. The Fibreculture Journal 26. http://twentysix.fibreculturejournal.
org/fcjmesh-005-technology-and-citizen-witnessing-navigating-the-friction-between-dual-desires-
for-visibility-and-obscurity (referenced 22 June 2019).
———. 2016. Immersive Witnessing: From Empathy and Outrage to Action. WITNESS Blog.
https://blog.witness.org/2016/08/immersive-witnessing-from-empathy-and-outrage-to-action
(referenced 28 December 2018).
———. 2018a. Deepfakes and Synthetic Media: Survey of Solutions against Malicious Usages.
WITNESS Blog. https://blog.witness.org/2018/07/deepfakes-and-solutions (referenced 3
January 2019).
———. 2018b. Heard about Deepfakes? Don’t Panic. Prepare. World Economic Forum Agenda
Blog. https://www.weforum.org/agenda/2018/11/deepfakes-video-pragmatic-preparation-wit
ness (referenced 5 January 2019).
———. 2018c. Ubiquitous Witnessing in Human Rights Activism. In S. Ristovska and M. Price
(eds), Visual Imagery and Human Rights Practice, pp. 253–73. Global Transformations in
Media and Communication Research—a Palgrave and IAMCR (International Association for
Media and Communications Research) Series. Palgrave Macmillan.
Gregory, S., and P. Zimmermann. 2010. The Ethical Engagements of Human Rights Social
Media. WITNESS Blog. http://blog.witness.org/2010/11/the-ethical-engagements-of-human-
rights-social-media (referenced 27 December 2018).
Guberek, T., and R. Silva. 2014. Human Rights and Technology: Mapping the Landscape to
Support Grantmaking. Prima. https://www.fordfoundation.org/media/2541/prima-hr-tech-re
port.pdf (referenced 21 May 2019).
Hemsley, J., J. Jacobson, A. Gruzd, and P. Mai. 2018. Social Media for Good or Evil: An
Introduction. Social Media and Society July–September 2018: 1–5.
Irving, E. 2017. And So It Begins ... Social Media Evidence in an ICC Arrest Warrant. Opinio
Juris. http://opiniojuris.org/2017/08/17/and-so-it-begins-social-media-evidence-in-an-icc-arrest-
warrant/#undefined.uxfs (referenced 4 January 2019).
Kayyali, D. 2017. Set Your Phone to Proofmode: Prove Human Rights Abuses to the World.
WITNESS Blog. https://blog.witness.org/2017/04/proofmode-helping-prove-human-rights-
abuses-world (referenced 4 January 2019).
———. 2018a. Delete Facebook? Not Just Yet. WITNESS. https://witness.org/delete-facebook-
not-just-yet (referenced 5 January 2019).
———. 2018b. Harm Reduction for WhatsApp. WITNESS Blog. https://blog.witness.org/2018/
11/harm-reduction-whatsapp/ (referenced 4 January 2019).
———. 2018c. What’s Up, WhatsApp? WITNESS Blog. https://blog.witness.org/2018/11/whats-
up-whatsapp (referenced 4 January 2019).
King, G., J. Pan, and M. Roberts. 2017. How the Chinese Government Fabricates Social Media
Posts for Strategic Distraction, Not Engaged Argument. American Political Science Review
111(3): 484–501.
Koenig, A., S. Dubberley, and D. Murray (eds). 2019 (forthcoming). Digital Witness: Using Open
Source Information for Human Rights Documentation, Advocacy and Accountability. Oxford
University Press.
Koettl, C. 2017. Sensors Everywhere: Using Satellites and Mobile Phones to Reduce Information
Uncertainty in Human Rights Crisis Research. Genocide Studies and Prevention: An
International Journal 11(1): 36–54.
Land, M. 2016. Democratizing Human Rights Fact-Finding. In P. Alston and S. Knuckey (eds),
The Transformation of Human Rights Fact-Finding, pp. 399–424. Oxford University Press.
Lim, M. 2017. Digital Media and Malaysia’s Electoral Reform Movement. In W. Berenschot, H.
Schulte Nordholt, and L. Bakker (eds), Citizenship and Democratization in Southeast Asia, pp.
211–37. Leiden: Brill.
Matheson, K. 2015. Video as Evidence: Basic Practices. WITNESS Blog. http://blog.witness.org/
2015/02/video-as-evidence-basic-practices (referenced 3 January 2019).
McDonald, S., and A. Xiao Mina. 2018. The War-Torn Web. Foreign Policy. 19 December
(online).
Monaco, N., and C. Nyst. 2018. State-Sponsored Trolling: How Governments are Deploying
Disinformation as Part of Broader Digital Harassment Campaigns. Institute for the Future
Digital Intelligence Futures Lab. http://www.iftf.org/statesponsoredtrolling (referenced 28
December 2018).
Naimer, K., W. Brown, and R. Mishori. 2017. MediCapt in the Democratic Republic of the
Congo: The Design, Development, and Deployment of Mobile Technology to Document
Forensic Evidence of Sexual Violence. In Information and Communication Technologies in
Mass Atrocities Research and Response, Genocide Studies and Prevention: An International
Journal 11(1): 25–35.
NBC News. 2018. Lawmakers Demand Answers from Amazon on Facial Recognition Tech.
https://www.nbcnews.com/tech/tech-news/eight-lawmakers-demand-answers-amazon-facial-
recognition-tech-n942476 (referenced 4 January 2019).
Ng, Y., and G. Ivens. 2018. How to Export Content from WhatsApp. WITNESS Blog. https://
blog.witness.org/2018/12/export-content-whatsapp (referenced 5 January 2019).
Padania, S., S. Gregory, Y. Alberdingk-Thijm, and B. Nunez. 2011. Cameras Everywhere: Current
Challenges and Opportunities at the Intersection of Human Rights, Video and Technology.
WITNESS. https://technology.witness.org/tools/cameras-everywhere (referenced 21 May 2019).
Paul, C., and M. Matthews. 2016. The Russian ‘Firehose of Falsehood’ Propaganda Model: Why
it Might Work and Options to Counter it. RAND Perspective. https://www.rand.org/pubs/per
spectives/PE198.html (referenced 21 May 2019).
Piraces, E. 2017. A Blueprint for Optimism: Civil Society, Open Technology and Transnational
Solidarity. Open Technology Initiative blog. https://www.newamerica.org/oti/blog/blueprint-op
timism-civil-society-open-technology-and-transnational-solidarity (referenced 24 March 2019).
———. 2018. The Future of Human Rights Technology. In J. Aronson and M. Land (eds), New
Technologies for Human Rights Law and Practice, pp. 289–308. Cambridge University Press.
Rajagopalan, M. 2018. The Histories of Today’s Wars Are Being Written on Facebook and
YouTube. But What Happens When They Get Taken Down? Buzzfeed. https://www.buzzfeed
news.com/article/meghara/facebook-youtube-icc-war-crimes (referenced 28 December 2018).
Rodríguez-Garavito, C., and K. Gomez. 2018. Rising to the Populist Challenge. Dejusticia. https://www.dejusticia.org/en/publication/rising-to-the-populist-challenge (referenced 5 January 2019).
Rodríguez Garavito, C., and S. McAdams. 2016. A Human Rights Crisis? Unpacking the Debate of the Future of the Human Rights Field. https://ssrn.com/abstract=2919703 (referenced 21 May 2019).
Schwartz, M. 2018. Who Killed the Kiev Protestors? A 3-D Model Holds the Clues. New York
Times. 30 May.
Shahbaz, A. 2018. Fake News, Data Collection, and the Challenge to Democracy. Freedom on the
Net 2018: The Rise of Digital Authoritarianism. Freedom House. https://freedomhouse.org/re
port/freedom-net/freedom-net-2018/rise-digital-authoritarianism (referenced 26 December 2018).
Shaer, M. 2015. ‘The Media Doesn’t Care What Happens Here’. New York Times. 22 February.
Stack, L. 2014. Syria’s Conflict Told Through a Caustic Wit. New York Times blog. 11 January.
https://thelede.blogs.nytimes.com/2014/01/11/syrias-conflict-told-through-a-caustic-wit (refer-
enced 3 January 2019).
Taub, A., and M. Fisher. 2018. Where Countries are Tinderboxes and Facebook the Match. New
York Times. 21 April.
Tufekci, Z. 2017. Twitter and Tear Gas: The Power and Fragility of Networked Protest. New
Haven, CT: Yale University Press.
UN Human Rights Council. 2018. Report of the Special Rapporteur on the Promotion and
Protection of the Right to Freedom of Opinion and Expression. A/HRC/38/35.
UN OHCHR (Office of the High Commissioner for Human Rights). 2018. Myanmar: UN
Fact-Finding Mission Releases its Full Account of Massive Violations by Military in Rakhine,
Kachin and Shan States. https://www.ohchr.org/EN/NewsEvents/Pages/DisplayNews.aspx?
NewsID=23575&LangID=E (referenced 22 June 2019).
Van der Velden, L. 2015. Forensic Devices for Activism: Metadata Tracking and Public Proof. Big
Data & Society July–December 2015: 1–14.
Wardle, C., and H. Derakhshan. 2017. One Year On, We’re Still Not Recognizing the Complexity
of Information Disorder Online. First Draft. https://firstdraftnews.org/coe_infodisorder (refer-
enced 5 January 2019).
Weizman, E. 2018. Forensic Architecture: Violence at the Threshold of Detectability. Zone Books.
WITNESS. No date. Video as Evidence project. https://vae.witness.org (referenced 3 January
2019).
———. 2018. Introduction to Mobil-Eyes Us. WITNESS Blog. https://blog.witness.org/2018/11/
introduction-to-mobil-eyes-us (referenced 28 December 2018).
WITNESS Media Lab. 2017. Capturing Hate project. https://lab.witness.org/projects/transgen
der-violence (referenced 5 January 2019).
Xiao Mina, A. 2018. The Death of Consensus, Not the Death of Truth. Nieman Lab. http://www.
niemanlab.org/2018/12/the-death-of-consensus-not-the-death-of-truth (referenced 8 January
2019).
———. 2019. Memes to Movements: How the World’s Most Viral Media is Changing Social
Protest and Power. Boston, MA: Beacon Press.
YouTube for Press. No date. https://www.youtube.com/intl/en-GB/yt/about/press (referenced 3
January 2019).
Zuckerman, E. 2010. Public Spaces, Private Infrastructure. Open Video Conference. 1 October.
My Heart’s in Accra blog. http://www.ethanzuckerman.com/blog/2010/10/01/public-spaces-pri
vate-infrastructure-open-video-conference (referenced 28 December 2018).