
Big data meets open political science: an empirical assessment of transparency standards 2008–2019


Abstract

Over the last decade, the field of political science has been exposed to two concomitant developments: a surge of Big Data (BD) and a growing demand for transparency. To date, however, we do not know the extent to which these two developments are compatible with one another. The purpose of this article is to assess, empirically, the extent to which BD political science (broadly defined) adheres to established norms of transparency in the discipline. To address this question, we develop an original dataset of 1,555 articles drawn from the Web of Science database covering the period 2008–2019. In doing so, we also provide an assessment of the current level of transparency in empirical political science and quantitative political science in general. We find that articles using Big Data are significantly less likely than other, more traditional works of political science to share replication files. Our study also illustrates some of the promises and challenges associated with extracting data from Web of Science and similar databases.
Introduction
We have witnessed a strong push to make political science research more transparent in
recent years. There are several reasons for this. For one thing, increased pressure from public
authorities is forcing academics and publishers to consider a variety of open-access
publication alternatives (see, e.g., May 2005 and Bull 2016). For another, growing awareness
of the potential for fraud, manipulation and/or simple mistakes is encouraging political
scientists to call for greater transparency (e.g., Elman and Kapiszewski 2014; Lupia and
Elman 2014).1 As a result, journal editors have begun to publish datasets, along with analyses
files, so that the resulting publications can be scrutinized and studied more thoroughly. These
developments have spearheaded much critical discussion, and this (in turn) has led to more
awareness of the need to protect sensitive sources, and to be aware of concerns about
propriety/ownership. Consequently, the discipline is engaged in a healthy dialogue about the
need for (and limits to) transparency.
At the same time, some of the most popular fields of political science find themselves
under increased critical scrutiny. Once powerful approaches to political behaviour have had
difficulty predicting a series of recent and surprising election results—e.g., the 2015 UK
general election, the Brexit referendum, and the 2016 and 2020 US presidential elections.
These failures have drawn attention to a number of challenges to traditional polling methods
and models (Milliband 2016), sparking concerns about the reliability of election surveys,
their representativeness (e.g., Bethlehem 2017), and the possibility of social desirability bias
(Coppock 2017; Lax et al. 2016), among other things.2
Data scientists have raced to fill the resulting void, employing “Big Data” (BD). The
digital exhaust being generated by some of the largest and most influential social media sites
and search engines (e.g., Facebook, Twitter, Weibo, Google, and YouTube) can be—and has
been—used to provide alternative accounts of voter preferences and attitudes. While we can
disagree about whether BD will ever be a reliable alternative to surveys, there can be no
doubt that these data are new and different, and these differences matter.
Although we think there is significant potential for using BD in political science (PS)
inquiry, we are concerned about how its use may challenge the trajectory of mainstream
social science. In particular, we are concerned that much of Big Data Political Science
(BDPS) may be occurring beyond the critical gaze of practicing social scientists (i.e., it is
not being published in traditional PS venues), and that the nature of BD (and the venues in
which BDPS articles are published) has the potential to threaten the trend of greater openness and
transparency. After all, what does replication entail with a dataset that includes “billions of
interactions” (Blumenstock et al. 2015, 1073), or which is the legal and secret property of a
multinational enterprise?
In this paper, we seek to assess the extent to which BDPS articles provide full
replication materials (dataset, code, or any other material necessary to verify the empirical
analyses). To do this we develop and analyse an original dataset of 1,555 articles drawn from
the Web of Science in the period 2008-2019, which are coded for type (theoretical or
empirical), research design (qualitative, quantitative, or mixed methods), and transparency.
Using these data,3 we provide an empirical mapping of the growing field of BDPS; this
provides us with a survey of where this new research is being published. We then compare
trends in transparency among BDPS articles against a sample of core PS articles that do not
employ BD; these articles provide the baseline or benchmark against which BDPS
articles can be compared. Our analysis confirms that much of the new BDPS work is being
published outside traditional PS journals and does not adhere to the transparency standards in
the discipline.
The remainder of the paper is structured as follows. We begin with a mapping of
recent developments in political science, to demonstrate our increased reliance on both
transparency (especially in large-N studies) and BD. This discussion is followed by a
description of our research design, including the construction of our dataset. In the
subsequent section we outline the main trends in BDPS and assess the level of transparency
relative to conventional PS articles. The final section includes a discussion and some
concluding remarks.
Transparency and Big Data trends
The past two decades have seen a growing concern about transparency in political science,
but the need for, and desirability of greater transparency is not new. Already in 1995, PS
published a symposium on replication in the social sciences, which addressed issues of
transparency (September 1995 issue), and which concluded with a varied and influential
group of contributors promoting replication and transparency as essential elements of
scientific study.4 In 2003, another symposium—this time in International Studies Perspectives
(Bueno de Mesquita et al. 2003)—took a closer look at the state of replication in the (sub)
discipline, and the authors were clearly disappointed by the lack of progress. By 2001, a little
less than one fifth of political science journals had some sort of replication policy.5
Concerns for transparency have accelerated in the wake of several high-profile
examples of fraud, error, and deceit. Over the past decade, the media have revealed stories
about the politically-convenient sloppiness of Reinhart and Rogoff (2010; cf. Herndon et al.
2013) and the astonishing example of deceit and manipulation by LaCour and Green (2014;
cf. Broockman et al. 2015 and Singal 2015). As a result, there is growing recognition among
political scientists about the need and desire for greater transparency (for recent reviews, see
Elman et al. 2018 and Laitin and Reich 2017; see also Miguel et al. 2014). Aware of the costs
of being associated with ‘fake’ science, political scientists doubled down: a new transparency
movement took hold, in which mainstream PS journals aimed to increase transparency and
facilitate replication (Janz 2018).
In response, a growing number of political science journals have embraced the need
for greater transparency, replication (and the data depositories this would require), and pre-
registration. This movement was sparked by the 2010 decision of an ad hoc Committee of the
American Political Science Association (APSA) to launch the Data Access and Research
Transparency (DA-RT) group. This group approached leading PS journal editors in an
attempt to have them sign their Journal Editors Transparency Statement (JETS).6
Consequently, 29 PS journals—including EPS—have committed themselves to greater data
access and research transparency (see the Online Supplementary Material (OSM), D.1) and to
implementing policies that would make their publications more transparent and accessible. This
public push for greater transparency can be seen across the social sciences, and in journals
that extend beyond the JETS list.7
At the same time, we have seen a phenomenal rise in political scientists’ use of “Big
Data” (BD) to generate alternative measures of individual preferences, attitudes, and
behaviour. We provide evidence of this rise below, in Figure 4. Drawing from a
wide variety of technologies—e.g., data-processing hardware and software and a plethora of
digitized apparatuses—and a remarkably broad array of sources (e.g., public, commercial,
proprietary), political scientists have been able to map multiple aspects of political life by
wading through the data produced by these many technologies and sources.
The most famous, or infamous, example comes from the Cambridge Analytica
scandal, where millions of Facebook users found that their data were being used, without their
knowledge, to aid the campaigns of conservative candidates in the 2016 US presidential
election, including Donald Trump (see, e.g., Wylie 2019 and Berghel 2018).
This example is just one of many political analyses employing data from numerous platforms
(e.g., Twitter, Facebook, Weibo, Google, Flickr, YouTube…) and techniques (e.g., automatic
content analysis; scraping/web crawling; network analysis; sentiment analysis and topic
modelling; machine learning, etc.). See OSM: B.3 for an overview of platforms and
approaches uncovered in our research.
While it should not be controversial to note the rise in BDPS research, this claim is
difficult to confirm empirically. While it is common to define BD with reference to the 3Vs
(velocity, volume, variety),8 this definition is hard to transform into something measurable.
Our solution is to focus on the way that BD employs large, repurposed datasets.9 This
repurposing provides social scientists with access to (e.g.) sensor data, satellite and
measurement data, enormous digital archives, social media exhaust and geotags, that can be
“taken” from their original mission and repurposed for subsequent social scientific analysis.
While neither large datasets nor repurposed data are new to political science, the scope of this
BD collection is novel, as is the fact that the datasets produced are often too large for
traditional approaches and programs.
Such a definition taps into the new tools that are being used to repurpose data (e.g.,
scraping and machine-learning that allow us to use “digital footprints” in real time, over a
number of platforms), and allows us to reflect on the enormous scale (hence “Big”) that this
repurposing provides (in terms of volume, velocity and variety). But it also introduces at
least three new challenges for political scientists who employ Big Data, each of which is
relevant to our recent embrace of transparency.10
The first challenge concerns proprietary control. Many BD companies (e.g.
Facebook, Google…) keep their data and algorithms as proprietary trade secrets, resulting in
analytical black boxes, over which subsequent users have very little control or insight. In
effect, data scientists often work with data controlled by a monopoly holder. Buyers/users of
the data do not know the algorithms that go into their parsing and cannot know the details of
how that data was created (Dalton and Thatcher 2015, 6).
Another important challenge concerns the selection criteria or sampling techniques.
Traditional social science data collection is often publicly funded, and data is generally
collected with the explicit aim of securing a representative sample from a clearly defined
population so findings can be generalized. Think of census data, and the effort that lies
behind its collection. Unlike traditional social science data, some forms of BD do not need to
(and make no effort) to be representative, and we know that the collected data are often
strongly biased with regard to class, language, and use of technology.
This uniquely commercial nature of the data, in turn, raises several additional issues
related to replication and access. The way that BD is often gathered and used makes it nearly
impossible for researchers to emulate (let alone replicate) the approach and its results
(Longley 2012). Both the structure of the data, and the way they are accessed (through
networked streams of data), determine what can be known from that data (Burgess and Bruns
2012). Most firms are leery of sharing their data, whatever the cost—and these costs can be
substantial. Independent researchers, outside of key companies that control the data, do not
have access to the core proprietary algorithms that process and interpret the data.11 They are
forced to navigate blind.
Our third and final concern lies at the heart of this paper: we are worried that the
nature of BD (and the venues in which BDPS are published) threatens the trend of greater
openness and transparency.
Research design
Our research proceeded in three steps, as outlined in Figure 1. We began by identifying
articles as being examples of Political Science (PS), Big Data (BD) or both (BDPS). We then
screened the results, to ensure the quality of the resulting samples. Finally, we directed those
articles deemed to be BDPS (n=355) into our working database, and those articles that were
classified as PS but not BD (n=1,200) into a baseline for comparison. The existence of this
benchmark will allow us to compare the relative level of transparency in traditional PS vis-à-
vis BDPS articles. The remainder of this section describes how the two search strings were
developed and how we proceeded to construct the two datasets from the search results. More
details are provided in the OSM. The details of the benchmark are described below, in a
subsequent section.
Figure 1 about here
Assessing the degree of transparency in the new BDPS literature presents several empirical
challenges. The most fundamental challenge is to operationalize both BD (and non-BD) and
PS in a consistent manner. To the extent that there is an emerging BDPS literature, it may be
conducted by data scientists with no formal background in political science and published
outside traditional political science journals.
Figure 2 illustrates the problem. To construct our dataset, we essentially had to
identify, from the pool of all scientific articles, those articles that employ BD (horizontally-
striped oval) to address a political science question (vertically-striped oval). The resulting
overlap (the hatched intersection) constitutes the world of BDPS.
Figure 2 about here
For practical purposes, we limit ourselves to journal articles indexed in the Web of
Science, published in 2008-2019 and written in English.13 This database represents the pool
of all scientific articles illustrated in Figure 2.
Because we are searching for PS articles that are not necessarily published in traditional PS
venues, we needed to develop an independent means of categorizing work in the
field of political science. Consequently, we developed two different search strings to identify
PS and BD articles. The first search was for articles that included “political science” AND “big
data”, as described below. This search serves to identify the hatched intersection of BDPS
articles in Figure 2. The raw search result (n=8,745) was then manually coded, and the
majority of the articles were excluded as irrelevant (not PS, not BD). We refer to the set of
included articles as the BDPS dataset (n=355).
The second search was for articles that included “political science” NOT “big data”.
For this search, we limited ourselves to articles published in journals categorized as political
science journals in the Web of Science. The search results represent the vertically-striped area
in Figure 2. From this search we drew a quasi-random sample of articles, stratified by year
(n=1,200). This sample serves as a baseline or benchmark, against which the BDPS data
is compared. Both searches were conducted as a “topic search” (TS) in Web of Science,
which returns results from the article title, abstract, and keyword fields.14
The PS search string
Because we are looking for works in political science that may not be published in traditional
PS journals, we cannot rely on the most common means for defining the discipline. Most of
us understand “political science” to be work that is produced and discussed in explicit PS
communities, such as PS conferences and/or PS departments, and/or work that is published in
journals classified as PS journals. As we sought to identify all BDPS articles, independent of
journal outlet, we had to operationalize PS in a Boolean search string, without referencing or
selecting from the underlying publication venue. To develop the search string, we surveyed the
most influential journals in PS, sampled a set of abstracts from each journal, collected the
most frequently used words from the selected abstracts, and combined the words in a search
string.
To identify key journals in the discipline, our starting point was the journal rankings
provided by Giles and Garrand (2007, table 2), but we updated these to include the SCImago
Journal Rank (SJR) indicator.15 This provided us with a total of five rankings for comparison
(see OSM: A.1). Twenty-two journals appeared in at least two of the rankings. From these,
we selected all journals that were included in at least three of the rankings, i.e., the top 13
journals. In an effort to avoid a bias towards international relations and North American
journals, and to ensure sufficient variety in search terms, we added four journals that were
included on two rankings,16 as well as two renowned journals in important sub-fields in the
discipline.17 Table 1 shows the list of journals from which we sampled abstracts. Although
readers may differ about the inclusion (or exclusion) of any particular journal on this list, we
think it offers a fair summary of mainstream PS.
Table 1 about here
We sampled abstracts from the first ten empirical articles each journal published in 2017. To avoid
oversampling of specific issues, we included only the first article in special issues. These 190
abstracts (19 x 10) were extracted and then analysed for salient terms using NVivo.
Salient terms that did not describe a PS topic (e.g., ‘data’, ‘survey’, ‘and’, and ‘or’) were
excluded. Figure 3 shows the resulting word cloud of the 50 most common words from our
sample of abstracts with terms that cover the discipline of political science.18
Figure 3 about here
The corresponding list of most frequent terms was then analysed for words that could
be combined using stemming or wildcard techniques (e.g., ‘support*’ covers supporter,
supporters, and supported). In the end, we selected the 22 most relevant terms/stems: soci*,
(politic* OR policy OR policies), (state OR states), conflict$, (election$ OR elector*),
economic$, media, countr*, war$, govern*, (party OR parties), democra*, institution$,
(civil OR civic), power, citizen$, gender, nation*, interest$, labo$r, and opposition$.
To maximize the possibility that a given article indeed addresses a PS question, we
operationalize “PS” as an article in which the “topic”, i.e., title, abstract, and keyword field,
includes at least three of the terms listed above, e.g., (soci* AND democra* AND
institution$) OR (soci* AND democra* AND power), and so forth. Doing so yields
1,771 possible combinations of terms that were used in the search string, as documented in
OSM: A.2.19
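For illustration, the following sketch shows how such an OR-of-ANDs topic clause could be generated programmatically. It is a minimal sketch rather than our actual tooling: the term list is abbreviated, the helper name is hypothetical, and the real query had to be split across several iterations because of Web of Science length limits (see OSM: A.2).

```python
# A minimal sketch (not the authors' actual tooling) of how a topic clause
# requiring at least three co-occurring PS terms could be assembled.
# The term list below is abbreviated for illustration.
from itertools import combinations

PS_TERMS = [
    "soci*", "(politic* OR policy OR policies)", "(state OR states)",
    "conflict$", "(election$ OR elector*)", "economic$", "media",
    "democra*", "institution$", "power",
]

def build_ps_clause(terms):
    """Join every three-term combination into one OR-of-ANDs clause."""
    triples = ["({} AND {} AND {})".format(*trio) for trio in combinations(terms, 3)]
    return " OR ".join(triples)

ps_clause = build_ps_clause(PS_TERMS)
print(len(list(combinations(PS_TERMS, 3))), "three-term combinations")  # 120 with this abbreviated list
```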
The BD search string
Operationalizing BD in a consistent manner was more difficult than operationalizing PS, as
we could not draw on an established literature or foundation to the same extent. The common
definition of BD with reference to 3Vs (velocity, volume, variety) is exceedingly hard to
transform into something measurable. As a result of this, our “BD” search is much less
precise than the “PS” search. Consequently, most of the work to identify “true” BDPS articles
took place after the search had been conducted, through a manual coding of all the articles
that resulted from the search (see below). Through trial and error, we limited our
BD search string to simply include ‘social media’ OR ‘twitter’ OR ‘facebook’ OR ‘google’
OR ‘algorithm’ OR ‘Web 2.0’. We recognize this operationalization process produces a rather
truncated set of BD articles, in that it selected papers that repurpose data collected primarily
from big tech firms. We think this tendency is predominant in the literature but hasten to
note that the sample was not limited to papers that draw data from such firms. While the
search string could certainly have been more comprehensive, it still yielded a very large
result. A broader search would have resulted in an unfeasibly large pool of records that had to
be screened manually.
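As a rough sketch, the two searches can be thought of as the following topic-search queries, reusing the combinatorial PS clause shown above. The exact field tags and category filtering accepted by the Web of Science interface may differ from what is written here.

```python
# Illustrative query strings only; the real searches were entered through the
# Web of Science interface and split across iterations due to length limits.
# "ps_clause" is the combinatorial PS clause from the previous sketch.
BD_TERMS = ['"social media"', '"twitter"', '"facebook"',
            '"google"', '"algorithm"', '"Web 2.0"']
bd_clause = " OR ".join(BD_TERMS)

# Search 1 ("political science" AND "big data"): candidate BDPS records.
bdps_query = "TS=(({}) AND ({}))".format(ps_clause, bd_clause)

# Search 2 ("political science" NOT "big data"): the benchmark pool, further
# restricted to journals categorised as Political Science in Web of Science
# (sketched here with the WC field tag; an assumption, not a verified query).
benchmark_query = "TS=(({}) NOT ({})) AND WC=(Political Science)".format(ps_clause, bd_clause)
```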
The records resulting from the search “political science” AND “big data”
(n=8,745)20 were downloaded and cleaned for duplicates (n=91). The remaining records
(n=8,654) were subsequently screened for eligibility: to what extent did a given article
address a PS research question using BD? A large share of records – more than 95% – were
excluded; these include both records that clearly do not address a PS research question and
records that address a PS research question, but without using BD. In the end, 355 records were deemed
valid cases of BDPS. These observations were then coded for transparency (see below) and
constitute the BDPS dataset.
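A minimal sketch of the de-duplication step is shown below, assuming the records were exported to a CSV file; the file and column names are hypothetical, and the actual Web of Science export format may differ.

```python
# Hypothetical file and column names; shown only to illustrate the cleaning step.
import pandas as pd

records = pd.read_csv("wos_bdps_export.csv")
records["norm_title"] = records["title"].str.lower().str.strip()

# Drop records whose normalised titles coincide (duplicate downloads).
unique_records = records.drop_duplicates(subset=["norm_title"])
print(len(records) - len(unique_records), "duplicate records removed")
```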
Screening the 8,654 retrieved records for inclusion in the BDPS dataset was done in
several iterations. As already noted, it was difficult to develop a search string that adequately
captured BD, which means that the bulk of the work to classify a BDPS article took place at
the screening stage (see Figure 1). As discussed above, we moved beyond the 3Vs (velocity,
volume, and variety) to focus on repurposing and size, i.e., the datasets are usually too large
for conventional statistical modelling.
To develop the coding practice, we undertook a pilot coding where both authors coded
the same random sample of 100 abstracts. Subsequently we compared the coding results and
discussed discrepancies until we agreed on all coding decisions. A research assistant (RA)
was then employed to code all cases manually. We used a single coder here to ensure
consistency. All observations were then coded into one of four mutually exclusive
categories: include (the article is both PS and BD), doubt (needs further reading), exclude
(PS, but not BD), and irrelevant (completely unrelated).21 Nearly 60% of our results were
found to be PS but not BD (exclude), and another 31.8% were deemed completely
irrelevant.22 We provide more details about this screening in OSM: B.3.
For the included observations, the RA coded additional information about the type of
data (source and size) employed in the analysis. When the RA had finished, the authors went
through the doubts and recoded them into include (BDPS) or exclude (not BD and PS). The
include observations constitute what we refer to as the BDPS dataset.
Constructing the benchmark group
In order to gauge the relative transparency of this new BDPS work, we need to establish the
level of transparency enjoyed by other types of published work in political science. To do this
we created a benchmark or baseline for comparison: we selected a random sample of non-BD
political science articles (using the same PS selection criteria noted above). We then had two
new coders trace their degree of transparency, following the procedure described below. The
results provide us with a baseline transparency level for all political science articles
published, against which we can compare the BDPS transparency levels.
As noted above, our second Boolean search was for articles where the topic included
“political science” NOT “big data”, published in journals classified as PS in Web of
Science. This search yielded 44,014 observations from 171 different journals (see OSM:
C.1.). The large number of records made it necessary to draw a sample to be able to code the
observations for transparency. In the next step, we drew a representative sample of articles,
stratified by year (n=1,200). For each publication year, we sorted the observations by
relevance23 and selected the first 100 articles published each year (100 x 12 years).24
These records constitute the observations in the benchmark or baseline against which the
BDPS data are compared, and they were subsequently
coded for transparency and merged with the BDPS dataset.
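The year-stratified selection can be sketched as follows, assuming the exported records sit in a data frame with hypothetical year and relevance-rank columns; in practice the relevance sorting was done inside Web of Science before export.

```python
# Hypothetical column names; illustrates the "first 100 per year" stratification.
import pandas as pd

pool = pd.read_csv("wos_benchmark_export.csv")

benchmark_sample = (
    pool.sort_values(["year", "relevance_rank"])  # most "relevant" records first within each year
        .groupby("year")
        .head(100)                                # first 100 articles per publication year
)
print(benchmark_sample["year"].value_counts().sort_index())  # roughly 100 records per year, 2008-2019
```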
Coding transparency
Having assembled the two datasets (the BDPS dataset and the benchmark),
two additional RAs coded all observations in both datasets for transparency. Both datasets
were randomly split in two and assigned to an RA, so that each RA coded half of the
observations in each dataset. These coders were trained to read and code the materials in a way
that makes subsequent replication possible (see OSM: D.2). In particular, each RA followed a four-step
process to code each article (see OSM: D.3 for details):
1. The article was traced back to its publication site to search for supporting replication
files;
2. The article was skimmed to determine whether it contained empirical analyses (or if it
was a theoretical contribution, a literature review, or similar);
3. Empirical articles were further coded as quantitative, qualitative, or mixed methods
depending on the research design employed; and
4. Empirical articles were subsequently coded across five categories (yes or no):
a) Dataset is available;
b) Code or script to reproduce analyses is available;
c) Article states that the replication material is available;
d) Replication material is available on request; and
e) Author(s) state that there are restrictions on the shared material (e.g., ethical
concerns).
In short, coders conducted a thorough search of each article's publication site, as well as
the homepage of the corresponding author, to search for replication materials. Coders also
combed through the articles to search for explicit references to how (or if) replication
materials were available on request, or at an explicit third-party site.
Comments and doubts were then checked and revised by the authors. The two
complete datasets were subsequently combined, and a dummy variable indicates whether an
observation is BDPS or belongs to the benchmark. Finally, we added two
dummy variables at the journal level: PS journal (yes or no)25 and JETS signatory (yes or
no),26 and checked for reliability (see OSM: D.4). To measure transparency, we created a
dummy variable that takes the value of 1 if all necessary replication material is available
(both dataset and code/script).
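For illustration, the transparency indicator and the journal-level dummies can be constructed along the following lines. The variable names are stand-ins for the actual codebook (OSM: D.1-D.3), and the JETS list shown is partial and illustrative.

```python
# Illustrative variable names only; see OSM: D.1-D.3 for the actual codebook.
import pandas as pd

coded = pd.read_csv("coded_articles.csv")   # merged BDPS + benchmark observations

# Full transparency requires that both the dataset and the code/script are available.
coded["transparent"] = ((coded["dataset_available"] == 1) &
                        (coded["code_available"] == 1)).astype(int)

# Journal-level dummies: PS journal (per the Web of Science category) and JETS signatory.
jets_journals = {"American Political Science Review",
                 "American Journal of Political Science"}   # partial list, for illustration
coded["jets"] = coded["journal"].isin(jets_journals).astype(int)
coded["ps_journal"] = (coded["wos_category"] == "Political Science").astype(int)
```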
In the analysis, we focus exclusively on empirical (i.e., quantitative, qualitative, and
mixed research designs) political science. This excludes 181 observations that do not contain
any form of empirical analysis (e.g., theoretical work). We believe this constitutes the most
appropriate benchmark for comparison, as the BDPS articles are empirical (and quantitative) by
definition. In some of the analyses, we also limit ourselves to articles with a quantitative
research design only, as this group most closely resembles the BDPS articles.
Results
Our analysis is mainly visual, and we rely largely on predicted transparency scores with
corresponding confidence intervals.27 To corroborate the visual analysis, we rely on a series
of logistic regression models that allow us to test our propositions more formally. The tests
also include a model using coarsened exact matching (Iacus et al. 2012), where ‘treated’ (i.e.,
BDPS) observations and the benchmark are matched on publication year,
journal, and coder id (i.e., which of the two RAs coded the article for
transparency).
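As an illustrative analogue of the models reported in Table 3, the core logistic regressions could look as follows, using the variables sketched above. This is a sketch under stated assumptions rather than our exact specification: the analysis reported in the paper was run in Stata, and the coarsened exact matching model is not shown here.

```python
# Sketch only: not the authors' exact specification or software.
import statsmodels.formula.api as smf

# Model 1: BDPS indicator, controlling for publication year.
m1 = smf.logit("transparent ~ bdps + year", data=coded).fit()

# Model 2: the same model, with standard errors clustered by journal.
m2 = smf.logit("transparent ~ bdps + year", data=coded).fit(
    cov_type="cluster", cov_kwds={"groups": coded["journal"]})

# Model 3: adding the JETS-signatory dummy.
m3 = smf.logit("transparent ~ bdps + year + jets", data=coded).fit(
    cov_type="cluster", cov_kwds={"groups": coded["journal"]})

print(m2.summary())
```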
The section presents our results in two parts. First, we describe trends in BDPS
publishing during the last decade. Then we describe transparency trends in the benchmark,
i.e., conventional political science, against which we compare
transparency among BDPS articles.
Trends in BDPS publishing 2008-2019
Figure 4 displays when and where the identified BDPS articles were published. Here we are
struck by a sharp increase in the total number of BDPS articles from 2014 onwards. In
the first part of the period, we find hardly any BDPS articles at all. Notably, the first BDPS
article in our material appears in 2009. Before 2014, there are only 4 cases of BDPS in our
dataset.
Figure 4 about here
Secondly, we note the relative irrelevance of traditional PS journals. In particular, it
would seem that the rapid growth in BDPS publications is essentially taking place outside of
the discipline’s core journals. Of the 355 observations, only 54 records, or about 15 percent,
were published in journals classified as PS; the remaining 301 records were published in
journals not categorized as PS. In total, only 19 articles were published in a JETS journal.
A closer look at the non-PS journals reveals significant heterogeneity. The 301
articles are published in 152 different journals. In our material, the six most common
publication outlets published about 21% of all BDPS articles, and about 25% of BDPS
articles in non-PS journals. Table 2 lists the six most frequent PS and non-PS journals.28
Table 2 about here
It is worth noting, however, that even if most BDPS articles are published outside of
mainstream PS outlets, many are published in interdisciplinary journals or in social science
journals broadly defined.29 We doubt that many political scientists are
regular readers of Social Science Computer Review and PLOS ONE, but these are clearly the
most fruitful outlets for this type of work, even if their focus is tangential to the
discipline. Producers and consumers of BDPS would be well advised to follow these outlets
closely. On the other hand, a number of BDPS articles appear in journals far from the typical
political scientist’s selection of staple journals (e.g., EPJ Data Science, IEEE Access, and
Nuclear Engineering and Technology, to mention but a few).
Next, we turn to transparency among BDPS and non-BDPS articles.
Transparency among BDPS and non-BDPS articles
Starting with the baseline or benchmark group, Figure 5 shows transparency trends
among empirical articles published in PS journals. We break down the results to focus on
articles with a quantitative (including mixed methods) research design and articles published
in so-called JETS journals.
The trends in Figure 5 are clear: While there seems to be an overall increase in
transparency—defined by the availability of full replication materials—in empirical PS, the
increase is mainly driven by quantitative work. This is as expected, given that it is easier
(practically as well as ethically) to share quantitative replication materials. The finding also
echoes the concerns voiced about non-quantitative data and confidentiality that we described
above. More interesting, however, is the gap between journals that have signed the JETS
statement and those that have not. Among the non-BD articles with a purely quantitative
research design, complete replication materials were provided in 35% of the cases. If we
distinguish between JETS and non-JETS journals, the share is about 66% and 16%,
respectively. In other words, compared to articles in non-JETS journals, JETS articles are
about four times more likely to provide full replication materials.31
Figure 5 about here
As argued above, the best baseline for evaluating the transparency of BDPS articles is
a subsample of observations that employ purely quantitative research designs.
In the remainder of the analysis, we therefore limit ourselves to this group, while also
keeping in mind the difference between JETS signatories and other PS journals.
Figure 6 compares transparency trends among BDPS articles and non-BDPS articles,
as well as for the subset of non-BDPS articles that are published in JETS journals.32 The
figure clearly demonstrates that BDPS articles are less likely to provide full replication
materials. If anything, this discrepancy is growing over time as PS journals increasingly
expect replication materials to accompany the published work.
Figure 6 about here
As Figure 5 showed, there is a notable difference between PS journals that are part of
the JETS initiative and those that are not. In the lower panel of Figure 6, we split the
benchmark group in two, depending on whether the article is published in a JETS
journal (or not). The results indicate that while articles published in non-JETS PS journals do
not follow the same strict transparency standards (as articles published in JETS journals),
they are more likely to provide replication material than the average BDPS article.
To corroborate the visual analyses, we conducted a series of more formal tests using
logistic regression. The results are displayed in Table 3. Model 1 is the simplest model, which
estimates the difference between BDPS and non-BDPS articles controlling for publication
year only. Model 2 repeats Model 1, but with standard errors clustered by journal.
Model 3 we also control for whether the journal is a JETS signatory. Model 4 is identical to
Model 2 except it excludes non-empirical work, while Models 5 and 6 are limited to articles
with a purely quantitative research design. In Model 6, we employ coarsened exact matching,
matching observations by publication year, journal, and coder identity. All models confirm
the results already illustrated above. No matter how the model is specified, BDPS articles are
significantly less likely to provide full replication material. This finding holds also when
controlling for JETS journals (Model 3). As expected, the difference between BDPS and
mainstream PS articles increases if we focus on empirical work, and particularly purely
quantitative work.
Table 3 about here
Discussion
Our study illustrates some of the opportunities and challenges associated with extracting data
from the Web of Science and similar publication databases, and our results are both
surprising and worrisome for the discipline. To begin with, we demonstrate a significant rise
in BDPS after 2014. Much of this new research is not being published in traditional PS
journals. This trend may offer political scientists greater
opportunities to collaborate across disciplines, as long as we are aware of where this work is
being published. There is clearly much potential to leverage technical and methodological
expertise in ways that provide new approaches to solving old problems. The opportunity to
introduce a broader diversity of perspectives to both BD and PS is clearly exciting (see e.g.,
D’Ignazio and Klein, 2020). But it is also clear that much of this work is being published
beyond the gaze of mainstream political science, and we need to ensure that this work
receives the critical attention it deserves.
At the same time, the discipline risks surrendering part of its role as a
gatekeeper to knowledge, and/or experts on record, on issues related to political science. To
the extent that BDPS encroaches on core areas of PS research (and influence)—such as in
areas of voter behavior and opinion—this development poses a significant challenge to the
discipline and our flagship journals.
In addition, we demonstrate that much of the BDPS work does not abide by
the high transparency standards that our discipline has tried to encourage over
recent decades. Indeed, our results indicate there is substantial variation in the employment of
the discipline’s transparency standards—both within the mainstream discipline journals
(generally); and relative to BDPS articles (in particular). The first variation, within
mainstream PS journals, is not particularly surprising and can even be expected: there is
significant variation in commitment to transparency and replication across articles that are
empirical and quantitative, as opposed to theoretical and/or qualitative. Recognizing this, we
have been very careful in how we have defined our control cases (i.e., with an eye on
focusing mostly on similarly empirical and quantitative approaches). In the doing, we
uncovered substantial variation in commitment to the needs of transparency and replication
across articles published in JETS (as opposed to non-JETS) journals. We are both surprised
and bothered to find that BDPS articles are significantly less likely to be accompanied by full
replication materials.
We conclude by hoping that our findings can spark a discussion within the
discipline about how to deal with the transparency challenge in the face of a growing wave of
BD research. Much of the new work is being undertaken by researchers untrained in political
science. In this light, we end our article with four challenges uncovered by our research.
These challenges deserve our attention as we consider the role of political science, and our
flagship journals, in the future of BDPS research:
1. We think it is important for the discipline to maintain its gatekeeper role, and to be
able to influence research developments from the perspective of social science and social
theory, and in a manner that is consistent with the needs of science.
2. We think it is necessary to deal with these transparency challenges head on. In doing
so, we need to develop a strategy for handling new types of data (repurposed, costly,
proprietary, opaque). In particular, political scientists need to consider whether there is a
threshold beyond which some types of data or data analyses should not be included as
part of the scientific discussion.
3. Alternatively, the discipline may want to re-think its transparency standards
in light of these new developments. Is it unreasonable to expect transparency, data-
sharing and pre-registration at a time when some of the most plentiful data are
opaque, proprietary and repurposed?
4. Finally, the discipline needs to decide on a strategy for how, or whether, we should try to
entice BDPS articles into traditional PS journals. Only then can the discipline
maintain its important role in monitoring and ensuring high quality research in
political science.
Notes
1 Over the past decade, the media have revealed stories about the politically-convenient sloppiness of Reinhart and
Rogoff (2010; cf. Herndon et al. 2013) and the astonishing example of deceit and manipulation by LaCour and Green
(2014; cf. Broockman et al. 2015 and Singal 2015).
2 In the wake of the 2020 US Presidential elections, there was already much media speculation about what might have
gone wrong. See, e.g., Bump (2020), Cohen (2020) and Tufekci (2020).
3 Replication material for the paper can be found at XXXXXXXXXX. Scholars who disagree with our coding choices
are invited to dialogue, and we will update the data accordingly.
4 Gary King’s (1995) contribution is probably the best known of these.
5 Gleditsch and Metelits (2003) list 27 journals in political science. See
https://academic.oup.com/isp/article/4/1/72/1930641#29683436 for a list and brief discussion.
6 See the Journal Editors Transparency Statement, https://www.dartstatement.org/2014-journal-editors-statement-jets.
7 It should be noted that the move toward greater transparency is not embraced by everyone, and a long string of
prominent political scientists, and earlier presidents of the APSA, wrote a letter to journal editors to ask them to think
carefully about signing the JETS. See Powell et al. (2016).
8 See Laney (2001).
9 More detailed coding rules and procedures can be found in the OSM.
10 We have a fourth concern, but it is not directly linked to the discussion of transparency. This is a concern about the
role of theory in a context of apparently boundless reams of data. Some advocates of BD suggest that the sheer
amount of data can mean an “end of theory” (e.g., Anderson 2008). In other words, some BD analysts use this
quantitative bonanza as an excuse to avoid theory, and the theoretical awareness that usually accompanies a
trained social scientist. To the extent that BD lends itself to naïve induction, it represents yet another challenge to
contemporary social science.
11 We would be negligent if we didn’t mention Social Science One in this context: Harvard’s attempt to access and share
Facebook data. See https://socialscience.one/blog/social-science-one-announces-access-facebook-dataset-publicly-
shared-urls. For a more critical view, see Tromble (2021) and Dommett and Tromble (2022).
12 Replication material for the paper can be found at XXXXXXXXXX. Scholars who disagree with our coding choices
are invited to dialogue, and we will update the data accordingly.
13 Our first search covered 2008-2018 but was later updated to also include 2019.
14 See https://clarivate.libguides.com/woscc/searchtips .
15 This platform takes its name from the SCImago Journal Rank (SJR) indicator, developed by SCImago from the
widely known algorithm Google PageRank™. This indicator shows the visibility of the journals contained in the
Scopus® database from 1996. See Jensen and Moses (2020) for a discussion of challenges associated with using
SCImago to rank PS journals.
16 Journal of Political Economy; Political Geography; British Journal of Political Science and Comparative Politics.
17 Political Psychology and Political Communication. Political Psychology appears in one of the rankings.
18 For a similar approach, see Cooper et al. (2009).
19 Ideally, we would have combined more than three words and included more terms. However, through trial and error
we found that this search string was the most complicated that the Web of Science search engine could handle. Even so,
we had to conduct the search in several iterations.
20 For a distribution of the results over time, see OSM: B.1.
21 For examples of records coded as irrelevant and excluded, see OSM: B.2.
22 Given the tedious nature of this work, we experimented with machine learning to develop an algorithm that could take
our manually-coded results and use them to learn how to read an abstract and decide for itself when it employed “ PS”
and “BD”. In the end, the experiment failed, as the training dataset included a share of BDPS observations that was too
small for the machine’s needs. For more information, see OSM: B.4.
23 Recall that our search string was generated by choosing the most common words used in the abstracts from a list of
prominent journals in the field. By giving preference to “relevant” records, we choose those articles that best meet our
definition/operationalization of PS. We think this is an appropriate approach, as it allows us to focus on what we might
think of as the gold standard in the discipline, from the most cited journals. To the extent that our definition (and, quite
possibly, the discipline) prioritizes quantitative approaches, then the resulting benchmark group will tend to be
biased in the direction of including a higher share of transparent articles than the profession at large. In other words,
the benchmark group is likely to include a higher share of transparent articles—providing a higher benchmark for
comparisons with the BDPS data. As noted by one of the anonymous reviewers, we recognize that a stronger conceptual
framework—one that does a better job of linking the intersection of actual data qualities and our hypotheses—could
have made our search and our failed attempt at machine-reading more viable.
24 Given the limitations in the Web of Science search platform, a completely random sample would have been
very cumbersome. Web of Science only allowed us to download 50 records at a time, so downloading 44,014
observations would require about 880 separate downloads.
25 Extracted from Web of Science.
26 Source for the JETS coding was the DA-RT statement, available at https://www.dartstatement.org/2014-journal-
editors-statement-jets; see also OSM: D.1.
27 We use Stata’s twoway qfitci function, which plots predicted values of the dependent variable (in this case,
transparency) as a linear function of X and X² (here: year and year²) with a corresponding confidence interval.
28 An extended table with all journals in our material is available in OSM: E.1.
29 We recognize that the Web of Science categorization of ‘political science journals’ can appear somewhat arbitrary.
For reasons of consistency, we choose to rely on Web of Science’s classification throughout the data collection and
analysis.
30 We recognize that the Web of Science categorization of ‘political science journals’ can appear somewhat arbitrary.
For reasons of consistency, we choose to rely on Web of Science’s classification throughout the data collection and
analysis.
31 For detailed descriptive statistics, see OSM: E.2.
32 Note that the first BDPS article in the dataset appears in 2009. The wide confidence interval for the BDPS articles
during the first part of the period reflects the fact that there are hardly any observations in the dataset prior to 2012.
References
Anderson, C. 2008. The end of theory: The data deluge makes the scientific method obsolete.
Wired, 23 June.
Berghel, H. 2018. Malice domestic: The Cambridge analytica dystopia. Computer 51(5): 84-89.
Bethlehem, J. 2017. The Representativity of Election Polls. Statistics, Politics and Policy 8: 1-12.
Blumenstock, J., G. Cadamuro and R. On. 2015. Predicting poverty and wealth from mobile phone
metadata. Science 350(6264): 1073-6.
Broockman, D., J. Kalla and P. Aronow. 2015. Irregularities in LaCour (2014). 19 May. Available
from: http://stanford.edu/~dbroock/broockman_kalla_aronow_lg_irregularities.pdf.
(accessed 14 March 2018).
Bueno de Mesquita, B., N.P. Gleditsch, P. James, G. King, C. Metelits, J.L. Ray, B. Russett, H.
Strand, and B. Valeriano. 2003 Symposium on Replication in International Studies
Research. International Studies Perspectives 4: 72–107.
Bull, M. J. 2016. Introduction; open access in the social and political sciences: threat or
opportunity? EPS 15: 151-7.
Bump, P. 2020. It’s important to ask why 2020 polls were off. It’s more important to ask what will
happen next. The Washington Post, 16 November.
Burgess, J. and A. Bruns. 2012. Twitter archives and the challenges of ‘Big Social Data’ for media
and communication research. M/C Journal 15(5).
Cohen, N. 2020. What went wrong with Polling? Some early theories. The New York Times, 10
November.
Cooper, C., T.A. Collins, and H.G. Knotts. 2009. Picturing political science. PS 42: 365-365.
Coppock, A. 2017. Did Shy Trump Supporters Bias the 2016 Polls? Evidence from a Nationally-
representative List Experiment. Statistics, Politics and Policy 8: 29-40.
Dalton, C. and J. Thatcher. 2015. Inflated granularity: Spatial ‘Big Data’ and geodemographics. Big
Data & Society (July-December): 1-15.
D'Ignazio, C. and L.F. Klein. 2020. Data Feminism. Cambridge: MIT Press.
Doctorow, C. 2008. Big data: Welcome to the petacenter. Nature 455: 16–21.
Dommett, K. and R. Tromble. 2022. Advocating for Platform Data Access: Challenges and
Opportunities for Academics Seeking Policy Change. Politics and Governance 10(1): 220-
229.
Elman, C. and D. Kapiszewski. 2014. Data access and research transparency in the qualitative
tradition. PS 47: 43-47.
Elman, C., D. Kapiszewski and A. Lupia. 2018. Transparent Social Inquiry: Implications for
Political Science. Annual Review of Political Science 21: 29-47.
Giles, M.W. and J.C. Garrand. 2007. Ranking Political Science Journals: Reputational and
Citational Approaches. PS 40: 741-51.
Gleditsch, N.P. and C. Metelits. 2003. Symposium on Replication in International Studies Research.
International Studies Perspectives 4: 72-107.
Herndon, T., M. Ash and R. Pollin. 2013. Does High Public Debt Consistently Stifle Economic
Growth? A Critique of Reinhart and Rogoff, April 15. PERI Working Paper No. 322.
Iacus, S.M., G. King and G. Porro. 2012. Causal Inference without Balance Checking: Coarsened
Exact Matching. Political Analysis 20: 1–24.
Janz, N. 2018. Replication and transparency in political science—did we make any progress?
Political Science Replication blog. 14 July. Available from:
https://politicalsciencereplication.wordpress.com/2018/07/14/replication-and-transparency-
in-political-science-did-we-make-any-progress/
(accessed 15 June 2022).
Jensen, M.R. and J.W. Moses. 2021. The state of political science, 2020. EPS 20: 14-33.
King, G. 1995. Replication, Replication. PS 28: 444-452.
LaCour, M.J. and D.P. Green. 2014. When contact changes minds: An experiment on transmission
of support for gay equality. Science 346: 1366-1369.
Laney, D. 2001. 3D data management: Controlling data volume, velocity, variety. Application
Delivery Strategies Meta Group File 949. Available from: http://blogs.gartner.com/doug-
laney/files/2012/01/ad949-3D-Data-Management-Controlling-Data-Volume-Velocity-and-
Variety.pdf (accessed 14 May 2015).
Laitin, D.D. and R. Reich. 2017. Trust, transparency, and replication in political science. PS 50:
172-175.
Lax, J.R., J.H. Phillips and A.F. Stollwerk. 2016. Are survey respondents lying about their support
for same-sex marriage? Lessons from a list experiment. Public Opinion Quarterly 80: 510-
533.
Longley, P. 2012. Geodemographics and the practices of geographic information science.
International Journal of Geographical Information Science 26: 2227–2237.
Lupia, A. and C. Elman. 2014. Openness in political science: data access and research transparency
—introduction. PS 47: 19-42.
May, C. 2005. The academy's new electronic order? Open source journals and publishing political
science. EPS 4: 14-24.
Miguel, E., C. Camerer, K. Casey, J. Cohen, K.M. Esterling, A. Gerber and D. Laitin. 2014.
Promoting transparency in social science research. Science 343: 30-31.
Milliband, D. 2016. How the pollsters got the US election wrong—just like Brexit. The Telegraph,
9 November.
Moher, D., A. Liberati, J. Tetzlaff, & D.G. Altman. 2009. Preferred Reporting Items for Systematic
Reviews and Meta-Analyses: The PRISMA Statement. PLOS Medicine 6: e1000097.
Powell, B. et al. 2016. Letter from distinguished political scientists urging nuanced journal
interpretation of JETS policy guidelines. 13 January. Available from:
https://politicalsciencenow.com/letter-from-distinguished-political-scientists-urging-
nuanced-journal-interpretation-of-jets-policy-guidelines/ (accessed 9 December 2020).
Reinhart, C. and K.K. Rogoff. 2010. Growth in a Time of Debt. American Economic Review:
Papers & Proceedings 100: 573–578.
Singal, J. 2015. The Case of the Amazing Gay-Marriage Data: How a Graduate Student Reluctantly
Uncovered a Huge Scientific Fraud. Science of Us blog. 29 May. Available from:
http://nymag.com/scienceofus/2015/05/how-a-grad-student-uncovered-a-huge-fraud.html
(accessed on 14 March 2018).
Tromble, R. 2021. Where Have All the Data Gone? A Critical Reflection on Academic Digital
Research in the Post-API Age. Social Media and Society 1-8.
Tufekci, Z. 2020. Can we finally agree to ignore election forecasts? The New York Times, 1
November.
Wylie, C. 2019. Mindf*ck: Cambridge Analytica and the plot to break America. New York:
Random House.
Striving better to uncover causal effects, political science is amid a revolution in micro-empirical research designs and experimental methods. This methodological development—although quite promising in delivering new findings and discovering the mechanisms that underlie previously known associations—raises new and unnerving ethical issues that have yet to be confronted by our profession. We believe that addressing these issues proactively by generating strong, internal norms of disciplinary regulation is preferable to reactive measures, which often come in the wake of public exposés and can lead to externally imposed regulations or centrally imposed internal policing.