Citizen Science, Galaxies and Tropes
Knowledge Creation in Impromptu Crowd Science Movements
Cosima Rughiniș
Faculty of Sociology and Social Work
University of Bucharest
Bucharest
cosima.rughinis@sas.unibuc.ro
Abstract— This paper analyses two impromptu crowd science
movements, comparing the TVTropes wiki community with the
Bechdel Wallace test movement. Impromptu crowd science
contributors engage in methodical knowledge creation, in the absence of academic scientists’ involvement. The two examples
illustrate the potential of these movements to create data,
analyses and concepts relevant for social and humanistic
disciplines, and to inspire scholarly research.
Keywords—impromptu crowd knowledge; computer supported
collaborative work; Tvtropes.org; Bechdel Wallace test
I. INTRODUCTION
This paper aims to document a novel form of collaborative
knowledge creation, the impromptu crowd science of media
tropes, by studying the digitally coordinated work of the
community of contributors to the TVTropes.org wiki, in
comparison with the Bechdel-Wallace test movement.
The concept of ‘impromptu crowd science’ has been
advanced in [1] to describe the distributed research work of
people interested in the Bechdel-Wallace test of gender
representation in films. We argue that we can identify a
second instance of impromptu crowd science in the
TVTropes.org wiki. In what follows we present the specific
features of this community of practice, and we discuss its
relevance in relation to current discussions of crowd science,
with a focus on social and humanistic disciplines.
The paper is structured as follows: the next section
discusses varieties of crowd science and outlines research
questions. The following section presents knowledge creation
activities in the Bechdel Wallace test and TVTropes
movements, from data production to data analysis, conceptual
work and contributor specialization. The final section
concludes the paper, discussing the specificity of the two
movements and their relevance for academic research.
II. VARIATIONS OF CROWD SCIENCE
With the advance of online collaborative platforms and social media, the scientific community has sought ways to engage digital crowds in contributing to large scale scholarly challenges. Citizen science, or crowd science, includes projects in which large numbers of lay contributors (who, as a rule, do not know each other) take part in various tasks, coordinating their work through digital interfaces. Typical citizen science projects are initiated and organized by teams of scientists, who supervise citizen contributors’ inputs and are in charge of the final stages of data analysis and reporting. Still, some contributors from the ‘crowd’ may be involved in all stages of the research project.
The two defining features of crowd science projects,
according to Franzoni and Sauermann [2], are openness in
project participation and openness in the disclosure of
intermediate inputs (such as methods, data, analyses). This
combination differentiates crowd science from innovation contests or crowdsourcing in scientific projects, which rely on open participation but keep the scientific activity, including methods, datasets, and preliminary findings, largely closed.
The impromptu crowd science movements discussed in
this paper, centered on the Bechdel Wallace test and the
TVTropes.org wiki, share both features: anybody is welcome
to participate, and there is wide public availability of methodological guidelines, datasets and various analysis results. The key
difference, in relation to the typical citizen science projects,
consists in their independence from scholarly institutions.
They are not organized by scientific teams, and their results
are not published in academic venues. They are taken forward
by contributors whose scientific expertise is unknown. They
have also been largely ignored as topics of scholarly research, with few exceptions, such as Selisker’s discussion of the Bechdel-Wallace test [3] or Börzsei’s thesis on literary
criticism in new media [4]. Unlike Wikipedia, a collaborative
movement which aims to synthesize existing, published
knowledge in an encyclopaedic style, the two impromptu
crowd-science movements aim to create new knowledge,
largely independently from any professional scientific project.
Yet, they include both explicit1 and implicit dialogues with ideas and concepts from academic fields. TVTropes pages can often be read as respecifications of concepts from scientific fields, for the purpose of understanding creative works. As an example, the “Establishing Character Moment”2 can be considered a discursive respecification of the concept of “first impressions” from social psychology [5], accomplished with the community’s specific focus and methods.
1. See for example the TVTropes “Books on Trope” page: http://tvtropes.org/pmwiki/pmwiki.php/Main/BooksOnTrope
2. The TVTropes “Establishing Character Moment” page: http://tvtropes.org/pmwiki/pmwiki.php/Main/EstablishingCharacterMoment
From a disciplinary perspective, both movements are
related to the humanities and social sciences and are focused
on a descriptive mission, identifying facts, types and
tendencies3. The Bechdel-Wallace movement pursues a
critical agenda, but TVTropes.org strives for ideological
neutrality. Yet, their knowledge-making and falsification
infrastructure is substantially less developed than in
professional scientific programmes. Overall, the two
movements are significant contributors to knowledge through
their descriptive work, creating comprehensive representations
of mediated communication.
This paper aims to identify and compare the specific
contributions to knowledge of the two impromptu crowd
science movements. We will discuss the types of data,
analyses and findings of the two communities, and the role of
collaborative and collective contributions, focusing on the
TVTropes.org wiki. The background for our discussion on the
Bechdel Wallace test movement can be found in [1].
A. Variations in the division of labor
Franzoni and Sauermann [2] propose a classification of
crowd science projects according to the complexity of tasks
delegated to contributors, and their expected level of skills.
Most citizen science projects outsource well-structured
independent subtasks, which require only basic skills. Galaxy
Zoo4 is a typical instance, in which contributors classify
galaxies, displayed in online photographs, according to their
shapes. This task requires only a couple of mouse clicks per
classification. These projects represent the “distributed coding” type of citizen science. Other crowd science projects
are similar in task structure but require higher levels of skills
for identifying the right specimens, amounting to “distributed
data collection” projects – such as the ability to identify birds
from a given species. If tasks also become more complex, we
find “distributed sub-problems” projects, such as Foldit5 – in
which contributors were required to solve complex visual
puzzles contributing to the study of protein folding. Even
more complicated and less pre-structured tasks lead to
“collective problem-solving” projects, such as in Polymath6,
where contributors with professional expertise share their
efforts to tackle difficult challenges.
Starting from this classification of contributors’
involvement, the paper aims to answer the following question:
What kinds of discovery work, learning and expertise are
encouraged in the two impromptu crowd science movements
under study?
B. Disciplinary variations
Kullenberg and Kasperowski [6] review Web of Science
(WoS) indexed academic publications referring to crowd
science to identify patterns in its distribution and evolution.
They conclude that the two largest threads of research relying
3. There are also some attempts at formulating explanations, particularly relating gender representations in film to the gender composition of the production team.
4. The Galaxy Zoo project: https://www.galaxyzoo.org/
5. The Foldit project: https://fold.it/portal/
6. The Polymath project: https://polymathprojects.org/
on crowds to complete scientific work consist in: a) biology,
conservation and ecology, where contributors collect and
classify empirical data; b) geographic information research,
where contributors collect geographic information. Astronomy
and bioscience have emerged as the fields that took advantage
of the breakthrough of digitally based crowd science projects
in the 2000s (ibid). The authors remark on the virtual absence of projects in the social sciences and humanities, at least from
the population of projects generating scientific publications in
WoS.
Still, digital humanities have also explored this novel form
of distributed scholarly work. Carletti et al. [7] review a
variety of humanities projects that rely on crowds,
distinguishing initiatives that ask contributors to optimize an
existing collection, or to create new resources. The first category includes projects of crowd-based curation (through tagging, selection or classification), revision (through transcription or correction), and location (through mapping or other forms of geographical matching). The public may also
contribute to creating novel items in collections, by
documenting private or public events, or by adding value to
locations through storytelling.
Starting from this disciplinary distinction, we observe that
both instances of impromptu crowd science that we discuss,
the Bechdel Wallace test movement and the TVTropes.org
community, are relevant for social and humanistic disciplines.
They are related to media critique and also to a variety of
social fields that inform the study of creative works, such as
philosophy, sociology, psychology, gender studies etc. This
paper will thus try to address the following question:
What is the relevance of collaborative and collective
participation in these two knowledge creation movements,
given their disciplinary orientation towards social and
humanistic research?
Last but not least, in order to specify the added value of these two impromptu crowd science movements for scholarly research, we will discuss how academic researchers can build on these movements’ knowledge creation:
How can scholarly research enter into a direct dialogue
with impromptu crowd science findings?
III. TWO EXAMPLES OF IMPROMPTU CROWD SCIENCE
The concept of impromptu crowd science [1] was
proposed to describe digital crowds who take part in
distributed knowledge-creation activities methodically, in the
absence of any scientific team to initiate or coordinate their
work. Such contributors rely on concepts, methods and
instruments to generate empirical data, to analyze it and to
report findings, usually in publications of broad interest such
as blogs, wikis, or digital magazines.
We argued that the Bechdel-Wallace test has become the
instrument of choice for a thread of digitally-mediated,
popular media critique. A brief summary: the Bechdel-
Wallace test, introduced in 1985 by graphic novelist Alison
Bechdel in her comic strip “The Rule”[8], asks whether a film
satisfies three conditions:
1) There are at least two women characters7;
2) who talk with one another…
3) … about something other than a man.
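The three conditions amount to a simple coding scheme, which can be sketched as a short program. This is a minimal illustration under assumed inputs (a hypothetical set of named women characters and a list of labeled conversations); it is not the bechdeltest.com data model.

```python
# Minimal sketch of the Bechdel-Wallace coding task (hypothetical data model).
def bechdel_score(named_women, conversations):
    """Count how many of the three conditions a film satisfies.

    named_women:   set of named women characters (in current practice,
                   condition 1 counts only characters who have names)
    conversations: list of (participants, topic) pairs, where the string
                   topic == "a man" marks talk about a man (condition 3)
    """
    if len(named_women) < 2:
        return 0                                   # fails condition 1
    women_talks = [(p, t) for p, t in conversations
                   if len(p & named_women) >= 2]   # talks between two women
    if not women_talks:
        return 1                                   # fails condition 2
    if any(t != "a man" for _, t in women_talks):
        return 3                                   # passes all conditions
    return 2                                       # all such talk is about a man

# The Star Wars: The Force Awakens case discussed below hinges on how
# the coder labels the topic of a single conversation.
print(bechdel_score({"Rey", "Maz Kanata"},
                    [({"Rey", "Maz Kanata"}, "Rey's destiny")]))  # prints 3
```

The sketch makes visible why the third condition concentrates most of the interpretive work: it is the only step where a coder must label a conversation topic rather than count characters.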
The test, remarkable for its simplicity and for its unexpectedly high fail rate for Hollywood productions [9]–[12], gained popularity in the 2000s and, since then, it has
enabled thousands of people to assess films, record their
conclusions on a collaborative archiving platform [13] and,
occasionally, analyze available data [14] [15] and discuss the
current state and the evolution of gender representation in
films [16] [12]. People who use the Bechdel Wallace test
publish, discuss and archive their findings on a variety of
online sites and platforms. In contradistinction, the second example, TVTropes.org, offers a more centralized instance of an impromptu crowd science movement.
TVTropes.org is a wiki platform founded in 2004 by a
programmer working under the name of Fast Eddie8. It is
currently owned by three partners, and managed by a team of
five members. In December 2014 TVTropes successfully launched a Kickstarter campaign to redesign the site, raising a total of USD 105,186 from 3,109 backers9. This is an indicator
of the popularity of the site and the commitment of its
contributors and readers.
A. Data production in the two impromptu crowd science
movements
TVTropes.org is dedicated to the identification and
classification of media tropes. The platform definition for
tropes is: “a storytelling device or convention, a shortcut for
describing situations the storyteller can reasonably assume the
audience will recognize. Tropes are the means by which a
story is told by anyone who has a story to tell. We collect
them, for the fun involved”10. To quote Madrugada, one of the
project administrators interviewed in 2011 by Meeks [17]:
“What we do here? Pattern-spotting. Pattern analysis.
Ingredient identification. By that last one, I’m talking about
treating fiction the way some folks treat a food they’ve never
had before; tasting it and then trying to figure out what all
went into making it”. The platform encourages a certain
detachment and analytical stance towards creative works,
offering participants both a set of inputs for their judgments
(specific tropes and a local theory of tropes), and a modulation
of their relationship with fiction [18].
The quest for tropes is by no means unique to this wiki. Börzsei [19] identifies a functional similarity between TVTropes’ use of the ‘trope’ concept and Umberto Eco’s
7. In current practice the first condition refers only to characters who have names.
8. TVTropes “About us” page: http://tvtropes.org/pmwiki/about.php
9. The TV Tropes Revitalization Project: https://www.kickstarter.com/projects/tvtropes/the-tv-tropes-revitalization-project
10. TVTropes “Tropes” page: http://tvtropes.org/pmwiki/pmwiki.php/Main/Tropes
definition of “intertextual archetypes” (p. 3-4). She traces the scholarly roots of TVTropes to the early 20th century, through the literary movements of Russian Formalism and archetypal
literary criticism (p. 6). The wiki also includes a dedicated
section for “Books on tropes”, introducing it with the
observation “We here at TV Tropes are not the first to collect
tropes and try to put them in some semblance of order”11.
These acknowledged conceptual connections situate
TVTropes in a position of dialogue with a scholarly
community interested in media analysis and critique.
Unlike scholarly work, involving one or several highly
qualified individuals as the authors of analyses and
publications, TVTropes engages thousands of contributors in
the distributed, collaborative work of trope identification and
classification. How can we understand the complexity of
contributors’ tasks, in order to map TVTropes in the
continuum of crowd science projects outlined by Franzoni and
Sauermann [2]?
1) Data production on TVTropes.org
The main task of “tropers”, the name under which
TVTropes contributors self-identify, is to pinpoint tropes,
describe them, and match them with creative works. Tropes
are patterns – but not all patterns are tropes. For a given
pattern to be a trope, it needs to convey meaning to its
audience, to function as a resource for authors in
communicating with their publics. This means that the basic
unit of knowledge created on this wiki, the trope, is usually
not an easily identifiable, self-evident item. Whether a certain
recurrent situation amounts to a trope, or not, is often a matter
of controversy. This is why TVTropes includes a complex set
of guidelines and an online debate area for tropers aiming to
pinpoint a specific trope and put it into words: the “Trope
Launch Pad”12.
The methodological guidelines include pages such as:
What is a trope?13 What is not a trope14? How can a trope be
used in a given work15? For example, a trope can be played
straight – but it can also be inverted, subverted, parodied,
averted, played for laughs, or played for drama – among
others.
Contributors who have found a candidate for a trope are
encouraged to submit it to the Trope Launch Pad, and to
discuss in the community whether it is “tropable” or not. This
is also the place to find the trope “a snappy name16” and to
identify illustrative examples, in collaboration with other
tropers.
11. TVTropes “Books on Trope” page: http://tvtropes.org/pmwiki/pmwiki.php/Main/BooksOnTrope
12. The Trope Launch Pad: http://tvtropes.org/pmwiki/tlp_activity.php?interval=.25
13. TVTropes “Trope” page: http://tvtropes.org/pmwiki/pmwiki.php/Main/Trope
14. TVTropes “Not a trope” page: http://tvtropes.org/pmwiki/pmwiki.php/Administrivia/NotATrope
15. TVTropes “Playing with a trope” page: http://tvtropes.org/pmwiki/pmwiki.php/Main/PlayingWithATrope?from=Main.MetaTropeIntro
16. The Trope Launch Pad: http://tvtropes.org/pmwiki/tlp_activity.php?interval=.25
This construction of guidelines and collaborative settings indicates that the first step in the production of empirical data in the TVTropes community is a complex task. Moreover, it is a task that relies both on collaborative, dialogical decision making, and on cumulative contributions from individuals who do not interact directly – which we have termed ‘collective contributions’. Collaboration is required, among others, to make common decisions on whether a pattern deserves to become a trope, and to fine-tune its title. Collective
contributions include examples on the trope page, which a
contributor usually adds without requiring debate or collective
validation.
If we examine data production as a process of trope
discovery, then guidelines for trope formulation are the
equivalent of disciplinary conditions of truth. As such, they
are critical for the evolution of this community of knowledge
making, since the explicit and implicit rules of trope
identification will determine what counts as the basic fact in
this line of inquiry17.
After a trope has been formulated, validated and has
received its own page on the wiki, contributors continue to
improve on this description by adding examples of works that
illustrate the trope, and by indexing the trope with old or new
indexes. Each trope includes, on its page, a series of examples
classified by media – ranging from several items to hundreds
of examples. There are also hundreds of indices that
contributors can use to classify tropes, many of them listed
under the Index Index18. Matching tropes with specific works
and with indexes is, as a rule, an individual contribution that
does not require collective approval, although others may
disagree and require argumentation and possibly conflict
arbitration, as in any wiki authorship system.
2) Data production in the Bechdel-Wallace test movement
Unlike the work of identifying and formulating tropes, the task of assessing whether a film passes or fails the Bechdel-Wallace test is more straightforward. The first two conditions – whether there are two named women characters in a film, and whether they talk to one another – can be a matter of interpretation, but this happens quite rarely. The third condition more often raises divergent opinions, because whether a conversation
counts as being “about a man” or about something else is
sometimes not obvious. For example, the bechdeltest.com
contributors for Star Wars: The Force Awakens rated it as
passing all three conditions, noticing that “Maz Kanata and
Rey talk about Rey's Destiny.” Still, was this discussion not about Luke and Vader, who are men? The outcome was
decided by the site admin on the basis of collaborative
argumentation on the movie page19. In similar cases, coding a
film as a pass or a fail for the third condition depends on
collaborative interpretation. In other instances, the first contributors may omit a conversation that would qualify the film as a pass; the diagnosis is updated when a later
contribution invokes a dialogue sequence that is accepted as
valid (see for example the discussion on the Batman v
17. I am grateful for this insight from an anonymous reviewer.
18. TV Tropes Index Index page: http://tvtropes.org/pmwiki/index_report.php
19. Star Wars: The Force Awakens on bechdeltest.com: http://bechdeltest.com/view/6610/star_wars:_the_force_awakens/
Superman film20). In such instances, the final verdict relies on
the collective, distributed effort of observing the film,
memorizing relevant sequences, and contributing them on the
bechdeltest.com platform.
TABLE I. CROWD CONTRIBUTIONS

Data production – collaborative interpretation:
• Bechdel-Wallace test movement: collaborative diagnosis for the 3rd criterion.
• TVTropes.org community: identifying “tropable” patterns; formulating the trope; specifying trope use (“playing with”).

Data production – collective contributions:
• Bechdel-Wallace test movement: collective data (movie sequences) for the pass/fail diagnosis.
• TVTropes.org community: creating links, by matching tropes with examples and indexing tropes.

Data analysis:
• Bechdel-Wallace test movement: individual or group authorship, on individually or collectively elaborated datasets.
• TVTropes.org community: collaborative authorship of trope indexes, relationships, patterns.

Conceptual work:
• Bechdel-Wallace test movement: individual or group authorship; wiki page presentations on Wikipedia, TVTropes.
• TVTropes.org community: collaborative authorship of meta-discussion pages; collaborative emotion work: ethical positioning, humor, self-reflexivity.
B. Data analysis in the two impromptu crowd science
movements
As discussed in [1], people interested in the Bechdel-
Wallace test have taken the opportunity to analyze the rich
datasets generated by bechdeltest.com or by other forms of
coding. The approach has been mostly quantitative, involving
forms of data visualization, formulation of trends, and
correlating pass/fail information with data about films’
production teams [14] or financial success [16].
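The flavor of these quantitative analyses can be sketched in a few lines of code: aggregating crowd-coded pass/fail verdicts over time. The film records below are invented for illustration only; actual analyses draw on the bechdeltest.com archive.

```python
# Sketch of a typical crowd analysis: pass rate by decade (invented records).
from collections import defaultdict

films = [
    {"year": 1995, "passes": False},
    {"year": 1998, "passes": True},
    {"year": 2012, "passes": True},
    {"year": 2015, "passes": True},
    {"year": 2016, "passes": False},
]

def pass_rate_by_decade(records):
    totals = defaultdict(lambda: [0, 0])      # decade -> [passes, total]
    for r in records:
        decade = (r["year"] // 10) * 10
        totals[decade][0] += int(r["passes"])
        totals[decade][1] += 1
    return {d: round(p / n, 2) for d, (p, n) in sorted(totals.items())}

print(pass_rate_by_decade(films))  # prints {1990: 0.5, 2010: 0.67}
```

Visualizations and correlations with authorship or box-office data follow the same pattern: once films are coded as pass/fail, the analysis itself requires only generic statistical skills.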
By contrast, the approach to working with tropes on TVTropes.org is predominantly qualitative. Tropers’ work consists, to a large extent, in establishing links between tropes, between tropes and creative works, and between tropes and classification devices such as indices, meta-tropes and trope patterns. This makes TVTropes.org a very densely interconnected site, at least in comparison with other notable wikis such as Wikipedia [17]. Contributors’ analysis of one or multiple tropes therefore includes operations such as:
• Trope classification through indices, either among those already formulated, or by creating new indices;
• Identifying relations between tropes – such as meta-
tropes, sub-tropes, sister tropes, or other related tropes;
• Identifying patterns among tropes, such as ‘sorting algorithms’ or ‘sliding scales’21.
20. Batman v Superman on bechdeltest.com: http://bechdeltest.com/view/6788/batman_v_superman:_dawn_of_justice/
21. TVTropes Sorting Algorithm of Tropes page: http://tvtropes.org/pmwiki/pmwiki.php/Main/SortingAlgorithmOfTropes
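This link-making work can be pictured as building a graph between tropes and indices, over which simple analyses become possible, such as suggesting related tropes through shared indices. The trope-to-index assignments below are hypothetical examples for illustration, not the actual TVTropes index tree.

```python
# Sketch: tropes and indices as a bipartite graph (hypothetical assignments).
trope_indices = {
    "Establishing Character Moment": {"Characterization", "Openings"},
    "Cold Open":                     {"Openings"},
    "Grey and Gray Morality":        {"Morality Tropes"},
    "Black and White Morality":      {"Morality Tropes"},
}

def related_tropes(trope, links):
    """Return tropes that share at least one index with the given trope."""
    shared = links[trope]
    return {other for other, indices in links.items()
            if other != trope and indices & shared}

print(related_tropes("Establishing Character Moment", trope_indices))
# prints {'Cold Open'}
```

External analyses such as Meeks’ study of the wiki’s network geometry [17] work on exactly this kind of link structure, at much larger scale.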
TABLE II. KNOWLEDGE CREATION

Use of digital infrastructure:
• Bechdel-Wallace test movement: distributed (archival platform, blog and journal articles, wiki pages).
• TVTropes.org community: centralized (the TVTropes.org wiki and similar wikis).

Main empirical data:
• Bechdel-Wallace test movement: pass/fail results for specific films.
• TVTropes.org community: tropes; trope examples in various works.

Approach in dealing with data:
• Bechdel-Wallace test movement: mostly quantitative.
• TVTropes.org community: mostly qualitative.

Analysis:
• Bechdel-Wallace test movement: visualizations; evolutions; correlations with film authorship and financial indicators.
• TVTropes.org community: trope classifications; trope patterns; trope evolution; trope reception.

Conceptual work:
• Bechdel-Wallace test movement: discussion of the test in relation to women’s representation in media; impasse in identifying the value of the test.
• TVTropes.org community: what are tropes? what are the trope patterns specific to various topics (gender, race, death, morality etc.)?

Contributors’ gradual specialization:
• Bechdel-Wallace test movement: no clear learning trajectory or community.
• TVTropes.org community: ‘tropers’ are expected to learn the troping practice by participation in the community.
There are also other forms of analysis, such as studying the
evolution in time of a certain trope (on the page “Discredited
trope”22), or the public reception of tropes, especially as they
travel to a different cultural context, across space or time (see
the page on “Unfortunate Implications”23).
The results of these analyses are published on the wiki, and
are tightly interlinked with relevant tropes and creative works.
There are also several external publications that rely on data
created by TVTropes.org. Such works include Harris’ Periodic Table of Storytelling [20], which uses trope titles and popularity; Meeks’ analysis of network patterns on the wiki [17], based on links; and Assogba, Ros and Vallandingham’s Stereotropes [21], analyzing the ‘always female’ and ‘always male’ tropes.
C. Conceptual work in the two impromptu crowd science
movements
As discussed in [1], the conceptualization effort related to
the Bechdel Wallace test has been focused on clarifying the
meaning of the test. If it is an indicator of the quality of
women representation in films, what precisely does it
indicate? Contributors have offered a range of answers,
without finding a distinctive value for the test. We argue in [1]
that the dialogue between scholarly research and the Bechdel
test movement has managed to overcome this impasse.
Specifically, Selisker [3] argues that the test highlights the
quality of feminine characters’ networks, rather than the
quality of characters themselves.
TVTropes contributors are engaged in a broader
conceptualization work. As discussed before, we find on the
22. TVTropes Discredited Trope page: http://tvtropes.org/pmwiki/pmwiki.php/Main/DiscreditedTrope
23. TVTropes Unfortunate Implications page: http://tvtropes.org/pmwiki/pmwiki.php/Main/UnfortunateImplications
platform a growing conceptualization of tropes, including: a
discussion of what is a trope and what is not a trope; a
formulation of types of tropes and possible relationships
between tropes; a typology of tropes’ possible uses in a
creative work.
Since tropes cover a wide range of topics, we also find on
the platform emerging conceptualizations of field-specific
representations in media. Based on lists, classifications and
patterns of tropes, contributors interact with an evolving
conceptualization of gender and race representations in a
variety of media – as well as representations of morality,
beauty, or death, among others. All these discussions result
from collaborative authorship and cumulative, collective
contributions.
This diverse collection of trope patterns, on different
topics, makes TVTropes a useful resource for scholars
interested in detailed studies of a specific creative work. As an
example, we can examine Ubisoft’s game Valiant Hearts: The
Great War [22]. Researchers have studied this game to
understand the role of video games in digital commemoration
[23] and in representing historical events [24]–[26]. The game
is discussed on TVTropes.org on a dedicated page24, including
a list of in-game tropes, ordered alphabetically, identified by
platform contributors. Several tropes are relevant for
examining the game’s portrayal of soldiers – including those
dedicated to combatants from colonies (“Hero of another
story”), to representations of generals, sergeants, deserters,
and heroes, and also to the portrayal of the “Dog hero”. The
page also includes the “Grey and gray morality” trope25,
framing the game in a broader landscape of approaches to
morality in creative works, as discussed in pages such as the
“Shades of conflict” scale26 and the Neutrality Index27.
D. Contributor specialization
The Bechdel Wallace test movement relies on two main
tasks: production of data, and analysis. Production of data does not require any special individual skills; the collective accumulation of evidence on a given film, with occasional collaborative decision making, leads to a pass/fail diagnosis. Analyses of pass/fail datasets require statistical skills, which are not specific to this topic. We can conclude that this movement does not present a clear path of skill evolution or topic-specific learning.
In contradistinction, the TVTropes movement offers a clear path of evolution for the would-be troper. Contributors have much to learn, starting from the initial tasks of trope identification and formulation, to more complicated tasks of creating new indices, finding novel trope patterns and relationships between tropes, or discussing the relationships between tropes and real life.
24. TVTropes Valiant Hearts page: http://tvtropes.org/pmwiki/pmwiki.php/Videogame/ValiantHearts
25. TVTropes Grey and Gray Morality page: http://tvtropes.org/pmwiki/pmwiki.php/Main/GreyAndGrayMorality
26. TVTropes Shades of Conflict page: http://tvtropes.org/pmwiki/pmwiki.php/Main/ShadesOfConflict
27. TVTropes Neutrality Index page: http://tvtropes.org/pmwiki/pmwiki.php/Main/NeutralityIndex
Unlike other knowledge making and sharing platforms,
such as Stack Exchange or Wikipedia, TVTropes does not
offer contributors a reputation system. User profiles are
minimalistic, and there is no badge architecture to ‘talk’ about
users’ achievements and skills [27], [28], [29]. There is no
gamification to support user engagement through simple
gameplay [30]. Still, the quizzical and often self-referential
writing style creates a playful atmosphere that supports
enjoyment and challenges writers to live up to the
community’s style. This also opens a secondary path of user
specialization, in learning the community rhetoric.
IV. CONCLUSIONS
This paper discusses the concept of impromptu crowd
science and illustrates it through a comparison between two
instances: the TVTropes wiki community and the Bechdel
Wallace test movement. Table I and Table II present a
synthetic comparison of crowd contributions and knowledge
creation in the two movements.
Both movements create new knowledge, in the field of
media critique, relevant for social and humanistic disciplines.
While the Bechdel-Wallace movement relies on relatively straightforward data production and predominantly quantitative analyses, the reverse is true of TVTropes.org. Trope
specification is a relatively complex task, and contributors are
also engaged in broader activities of qualitative analysis and
conceptualization. Collaboration is essential for tropers’
activity, because tropes are identified through their capacity to
resonate with a public – thus requiring a shared validation.
The two knowledge creation movements work
independently of any scholarly research project or institution.
At the same time, there is considerable potential in linking
academic research with the work of impromptu crowd science
contributors. TVTropes.org offers a comprehensive and detailed conceptualization of tropes in various media, and a broad list of field-specific and work-specific tropes. These
results of tropers’ collaborative and cumulative contributions
offer scholars a valuable starting point for analyses of specific
creative works.
REFERENCES
[1] C. Rughiniș, R. Rughiniș, and B. Humă, “Impromptu Crowd Science
and the Mystery of the Bechdel-Wallace Test Movement,” in
Proceedings of the 2016 CHI Conference Extended Abstracts on Human
Factors in Computing Systems - CHI EA ’16, 2016, pp. 487–500.
[2] C. Franzoni and H. Sauermann, “Crowd science: The organization of
scientific research in open collaborative projects,” Res. Policy, vol. 43,
no. 1, pp. 1–20, Feb. 2014.
[3] S. Selisker, “The Bechdel Test and the Social Form of Character
Networks,” New Lit. Hist., vol. 46, no. 3, pp. 505–523, 2015.
[4] L. K. Börzsei, “Literary Criticism in New Media,” Loránd Eötvös
University, 2012.
[5] B. Humă, “Enhancing the authenticity of assessments through grounding
in first impressions.,” Br. J. Soc. Psychol., Oct. 2014.
[6] C. Kullenberg and D. Kasperowski, “What Is Citizen Science? – A
Scientometric Meta-Analysis,” PLoS One, vol. 11, no. 1, pp. 1–16,
2016.
[7] L. Carletti, D. McAuley, D. Price, G. Giannachi, and S. Benford,
“Digital Humanities and Crowdsourcing: An Exploration,” 2013.
[8] A. Bechdel, “The Rule,” Dykes to Watch Out For, 2005. [Online]. Available: http://dykestowatchoutfor.com/wp-content/uploads/2014/05/The-Rule-cleaned-up.jpg.
[9] K. McKinney, “Almost half of 2015’s top movies failed the Bechdel
test,” fusion.net, 2015. [Online]. Available:
http://fusion.net/story/250292/2015-movies-bechdel-test-sexism/.
[10] Contributors of bechdeltest.com, “Bechdel Test Movie List - Stats and
Graphs,” 2016. [Online]. Available: http://bechdeltest.com/statistics/.
[11] A. Sarkeesian, “The Oscars and The Bechdel Test,” Feminist Frequency,
2012. [Online]. Available:
https://www.youtube.com/watch?v=PH8JuizIXw8.
[12] Silk Team, “Women in Film: A Data Analysis of 1500 Movies on Bechdel Test Criteria,” 2015. [Online]. Available: http://women-in-film.silk.co/.
[13] Contributors of bechdeltest.com, “Bechdel Test Movie List,” 2015.
[Online]. Available: http://bechdeltest.com/.
[14] L. Friedman, M. Daniels, and I. Blinderman, “Men making movies
about men,” 2015. [Online]. Available: http://poly-graph.co/bechdel/.
[15] D. Mariani, “Visualizing the Bechdel Test,” 2013. [Online]. Available: http://tenchocolatesundaes.blogspot.com.br/2013/06/visualizing-bechdel-test.html.
[16] W. Hickey, “The Dollar-And-Cents Case Against Hollywood’s Exclusion of Women,” FiveThirtyEight, 2014. [Online]. Available: http://fivethirtyeight.com/features/the-dollar-and-cents-case-against-hollywoods-exclusion-of-women/.
[17] E. Meeks, “TVTropes Pt 1: The Weird Geometry of the Internet,” 2011. [Online]. Available: https://dhs.stanford.edu/social-media-literacy/tvtropes-pt-1-the-weird-geometry-of-the-internet/.
[18] R. Rughiniș, “Serious Games as Input versus Modulation: Different
Evaluations of Utility,” in 26th Conference on People and Computers
BCS-HCI 2012, 2012, pp. 175–184.
[19] L. K. Börzsei, “Literary Criticism in New Media,” Loránd Eötvös
University, 2012.
[20] J. Harris, “Periodic Table of Storytelling,” 2011. [Online]. Available:
http://jamesharris.design/periodic/.
[21] Y. Assogba, I. Ros, and J. Vallandigham, “Stereotropes.” [Online].
Available: http://stereotropes.bocoup.com/.
[22] Ubisoft, “Valiant Hearts. The Great War.” 2014.
[23] R. Rughiniș and Ș. Matei, “Play to Remember: The Rhetoric of Time in Memorial Video Games,” in Human-Computer Interaction: Interaction Technologies, Lecture Notes in Computer Science vol. 9170, 2015, pp. 628–639.
[24] C. Kempshall, “Pixel Lions – the image of the soldier in First World
War computer games,” Hist. J. Film. Radio Telev., vol. 35, no. 4, pp.
656–672, 2015.
[25] Ş. Matei, “Games as a Genre of Historical Discourse. The Past on Fast
Forward,” in Proceedings of DiGRA 2015: Diversity of play: Games –
Cultures – Identities, 2015.
[26] C. Kempshall, “‘They Will Not Be Able To Make Us Play It Again
Another Day’ — The End in First World War Games,” in The First
World War in Computer Games, London: Palgrave Macmillan UK,
2015, pp. 82–95.
[27] R. Rughiniș, “Talkative Objects in Need of Interpretation. Re-Thinking
Digital Badges in Education,” in CHI ’13 Extended Abstracts on Human
Factors in Computing Systems, 2013, pp. 2099–2108.
[28] R. Rughinis and S. Matei, “Badge Architectures as Tools for Sense-
Making and Motivation in Engineering Education,” International Journal
of Engineering Pedagogy (iJEP), vol. 5, no. 4. pp. 55–63, 2015.
[29] J. Antin and E. Churchill, “Badges in Social Media: A Social Psychological Perspective,” in Proceedings of the 2011 annual conference on Human factors in computing systems CHI 11, 2011, pp. 1–4.
[30] R. Rughiniș, “Gamification for Productive Interaction. Reading and
Working with the Gamification Debate in Education,” in The 8th Iberian
Conference on Information Systems and Technologies CISTI 2013,
2013, pp. 1–5.