An Agenda for Open Science in Communication
Tobias Dienlina, Niklas Johannesa, Nicholas David Bowmana, Philipp K. Masura, Sven Engessera,
Anna Sophie Kümpela, Josephine Lukitoa, Lindsey M. Biera, Renwen Zhanga, Benjamin K. Johnsona,
Richard Huskeyb, Frank M. Schneiderb, Johannes Breuerb, Douglas A. Parryb, Ivar Vermeulenb, Jacob T.
Fisherb, Jaime Banksb, Rene Weberb, David A. Ellisb, Tim Smitsb, James D. Ivoryb, Sabine Trepteb, Bree
McEwanb, Eike Mark Rinkeb, German Neubaumb, Stephan Winterb, Christopher J. Carpenterb, Nicole
Krämerb, Sonja Utzb, Julian Unkelb, Xiaohui Wangb, Brittany I. Davidsonb, Nuri Kimb, Andrea Stevenson
Wonb, Emese Domahidib, Neil A. Lewisb, Claes de Vreeseb
Author Note
This paper is a large-scale multi-author paper that was drafted in a collaborative online
writing process. All authors contributed actively to the manuscript. Authorship order is
determined by the magnitude of contribution. aParticipated in drafting the manuscript.
bParticipated in brainstorming and/or commenting on the manuscript.
Correspondence to this article should be addressed to Tobias Dienlin, School of
Communication, Department for Media Psychology, University of Hohenheim, 70599 Stuttgart.
This manuscript contains online supplementary material (OSM), which can be found online.
It is possible to sign and thereby endorse the agenda. To sign the agenda and for the
current list of signatories, see OSM Appendix A.
In the last ten years, many canonical findings in the social sciences appear unreliable. This so-
called “replication crisis” has spurred calls for open science practices, which aim to increase the
reproducibility, replicability, and generalizability of findings. Communication research is subject
to many of the same challenges that have caused low replicability in other fields. As a result, we
propose an agenda for adopting open science practices in Communication, which includes the
following seven suggestions: (1) publish materials, data, and code; (2) preregister studies and
submit registered reports; (3) conduct replications; (4) collaborate; (5) foster open science skills;
(6) implement Transparency and Openness Promotion (TOP) Guidelines; and (7) incentivize
open science practices. While in our agenda we focus mostly on quantitative research, we also
reflect on open science practices relevant to qualitative research. We conclude by discussing
potential objections and concerns associated with open science practices.
Keywords: open science, reproducibility, replicability, communication, preregistration,
registered reports
An Agenda for Open Science in Communication
As Communication scholars, we aim to establish reliable and robust claims about
communication processes. It is a bedrock of science that such claims are only reliable and robust
if we can confirm them repeatedly. However, since 2010 several large-scale projects in various
empirical sciences have shown that many canonical findings do not replicate (Camerer et al.,
2016, 2018; R. A. Klein et al., 2014, 2018; Open Science Collaboration, 2015). The field of
Communication has not yet conducted a large-scale replication project. However, because we
employ similar methods to the fields that have already acknowledged these problems, there is
reason to believe we face similar issues. The inability to replicate findings is troublesome for
empirical disciplines, and it saps public trust in science (National Academy of Sciences, 2018).
As a potential solution to this so-called replication crisis, a growing number of scholars have
called for more transparent and open science practices (Nosek et al., 2015). Such practices tackle
causes of low replicability and contribute to an ongoing credibility revolution (Vazire,
2019). Indeed, open science practices can increase replicability (Munafò et al., 2017) and foster
public trust (Pew Research Center, 2019). Therefore, in order to combat threats to the reliability
and robustness of Communication scholarship, and to ensure that Communication research
remains relevant in the public sphere, we believe that it is crucial for our field to act now and to
implement open science practices.
To this end, we (a) discuss common causes of low replicability in science broadly, before
we (b) outline growing concerns about a replication crisis in Communication particularly. We (c)
explain how open science practices can address these concerns by offering an agenda of seven
specific solutions for implementing open science practices in Communication. Although we
focus mostly on confirmatory and quantitative research, we also (d) reflect on open science
practices relevant to other approaches such as qualitative research. Finally, we (e) discuss
potential concerns and objections against the proposed open science practices.
While we are not the first to identify these problems and solutions, our main contribution
is that we build on the insights generated by other adjacent disciplines and apply these to
Communication. We define the most important problems in our field, identify potential
solutions, and provide a concrete plan of action. Ultimately, we believe that by following these
solutions, we as Communication scholars can collectively improve and update the empirical
basis for our understanding of how communication processes unfold in the many contexts we study.
Causes of Low Replicability
Before we outline the causes of low replicability, we briefly define the underlying
concepts. Replicability means that a finding “can be obtained with other random samples drawn
from a multidimensional space that captures the most important facets of the research design.”
(Asendorpf et al., 2013, p. 109). Reproducibility means that “Researcher B [...] obtains exactly
the same results (e.g., statistics and parameter estimates) that were originally reported by
Researcher A [...] from A’s data when following the same [data analysis]” (Asendorpf et al.,
2013, p. 109). Importantly, for most quantitative researchers the end goal is generalizability,
which means that a finding “[..] does not depend on an originally unmeasured variable that has a
systematic effect. [...] Generalizability requires replicability but extends the conditions to which
the effect applies” (Asendorpf et al., 2013, p. 110).
Several large-scale projects have examined the replicability of scientific findings in
various fields such as psychology and economics (for an overview, see OSM Appendix B).
Replication rates in these projects vary considerably, ranging from 36% in cognitive and social
psychology (Open Science Collaboration, 2015) to 78% in experimental philosophy (Cova et al.,
2018). Importantly, there is no consensus on what constitutes an appropriate replicability rate or
even an appropriate measure of replicability. Nevertheless, these projects show that a substantial proportion of findings in
neighboring disciplines do not replicate. Whereas a systematic investigation of replicability in
Communication is lacking, it is plausible that we face similar issues given the overlap in research
methods and publishing practices.
Building on Bishop (2019), we identify four major causes for low replicability. On the
side of the researcher, a substantial challenge to robust science is posed by so-called questionable
research practices. On the side of journals (which include editors, board members, and
reviewers), a preference for novel and statistically significant results creates a publication bias.
In the social sciences, many fields investigate small effects whilst relying on small samples,
which leads to low statistical power. Last, replicability is reduced by problems resulting from
human errors, which include the false reporting of statistical results.
Questionable Research Practices
Quantitative Communication scholars usually rely on empirical data collected, for
example, via questionnaires, observation, or content analyses. In order to determine the
generalizability of the results, a common approach to analyzing these empirical data is the use of
frequentist null hypothesis significance testing (NHST; Levine, Weber, Hullett, Park, & Lindsey,
2008). In NHST, we calculate the probability of the empirical data (or more extreme data) under
the null hypothesis. If this probability falls below a
specific threshold, we consider our results statistically significant, which leads us to reject the
null hypothesis. In the social sciences, including Communication, we have settled for the
(arbitrary) threshold of α = 5%.
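The logic of NHST can be made concrete with a small simulation. The sketch below uses a permutation test, one way of estimating the probability of data at least as extreme as those observed under the null hypothesis; the two conditions and their scores are invented for illustration.

```python
import random
import statistics

random.seed(1)

# Invented example data: outcome scores in two experimental conditions.
group_a = [5.1, 4.8, 5.6, 5.0, 4.9, 5.3, 5.2, 4.7]
group_b = [5.9, 5.4, 6.1, 5.8, 5.5, 6.0, 5.7, 5.6]
observed = statistics.mean(group_b) - statistics.mean(group_a)

# Under the null hypothesis the group labels are exchangeable: shuffle
# them many times and count how often the mean difference is at least
# as extreme as the one observed (two-sided test).
pooled = group_a + group_b
n_a = len(group_a)
n_iter = 10_000
extreme = 0
for _ in range(n_iter):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[n_a:]) - statistics.mean(pooled[:n_a])
    if abs(diff) >= abs(observed):
        extreme += 1

p_value = extreme / n_iter
print(f"observed difference = {observed:.2f}, p = {p_value:.4f}")
```

With α = 5%, a p-value below .05 would lead us to reject the null hypothesis of no difference between conditions.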
The adoption of this threshold has led to the false belief that statistical significance
represents a benchmark of “real” effects and/or high quality research. Unfortunately, this
encourages researchers, often unknowingly, to engage in so-called “questionable research
practices” (QRPs; John, Loewenstein, & Prelec, 2012, p. 524). QRPs are aimed solely at
achieving statistical significance (i.e., p-values less than 5%). QRPs are both widespread and
difficult to recognize (John et al., 2012). Critically, they do not even require ill intent on the side
of the researcher (Gelman & Loken, 2013). In fact, many QRPs have been considered standard
procedure and are part of many training programs (Wagenmakers, Wetzels, Borsboom, & van
der Maas, 2011). Below, we explain two prominent QRPs, HARKing and p-hacking, in more detail.
HARKing. Knowledge generation typically relies on two distinct modes of research. In
exploratory research, new hypotheses are generated; in confirmatory research, a priori
formulated hypotheses are tested. Both modes of research serve different functions and are
crucial for scientific progress. A confirmatory approach is most relevant to the self-corrective
nature of science. From a falsification paradigm, we are compelled to abandon predictions that
do not reliably obtain empirical support (Popper, 1935), which helps discard unfruitful research
avenues. Conversely, an exploratory approach can be used to articulate postdictions, which can
help develop or update theories.
A substantial problem arises when researchers present exploratory research as if it were
confirmatory research; that is, when they label postdictions as predictions. This QRP is known as
HARKing, an acronym for Hypothesizing After Results are Known (Kerr, 1998). When
HARKing, data are used to generate hypotheses which are tested on the same data (Nosek,
Ebersole, DeHaven, & Mellor, 2018). To illustrate, imagine a researcher who expects Condition
A to be more effective than Condition B. However, when results reveal that Condition B is more
effective, the researcher writes the manuscript as if they had expected Condition B to be more
effective all along. Therefore, HARKing constitutes circular reasoning. It fails the very purpose
of hypothesis testing, and it violates the basic scientific method. Crucially, HARKing capitalizes
on chance: Unexpected results might not represent stable effects, which dilutes the literature with
false positives and contributes to low replicability (Nosek et al., 2018).
p-Hacking. When analyzing data, there are multiple legitimate analytic options, so-called
“researcher degrees of freedom” (Simmons, Nelson, & Simonsohn, 2011, p. 1359). As a result,
researchers find themselves in the proverbial “garden of forking paths” (Gelman & Loken, 2013,
p. 1). Some paths will lead to statistically significant results, others will not. The situation
becomes particularly problematic when researchers deliberately search and choose those paths
that lead to significance, a practice known as p-hacking (Simmons et al., 2011).
For example, when conducting a multiple regression analysis that does not reveal a
significant result, researchers might include (or remove) statistical control variables, which
increases their chances of obtaining a statistically significant result. Other examples of p-hacking
include (a) continuing data collection until researchers find significant results, (b) using multiple
measures of a construct and reporting only those with statistically significant results, (c)
including or excluding scale items depending on whether or not they produce significance, (d)
including or excluding outliers from data analysis to achieve significance, and (e) selecting and
analyzing only specific subgroups that show a significant effect.
Using these QRPs, Simmons et al. (2011) demonstrated “how unacceptably easy it is to
accumulate (and report) statistically significant evidence for a false hypothesis” (p. 1359). They
showed that p-hacking can raise the rate of statistically significant results for non-existent
effects to as much as 61%.
As with HARKing, p-hacking results in effects that are
neither reliable nor robust, which clutters the literature with non-replicable findings.
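The impact of such researcher degrees of freedom can be shown with a short simulation. The sketch below is our own illustration (not taken from Simmons et al., 2011) of one QRP, reporting whichever of several interchangeable outcome measures happens to reach significance, for data with no true effect.

```python
import math
import random

random.seed(42)

def z_test_p(sample_a, sample_b):
    """Two-sided z-test for a difference in means, assuming unit variance."""
    n = len(sample_a)
    diff = sum(sample_b) / n - sum(sample_a) / n
    z = diff / math.sqrt(2 / n)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def null_experiment(n=30, n_outcomes=3):
    """Simulate a study with no true effect and several outcome measures."""
    p_values = []
    for _ in range(n_outcomes):
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(0, 1) for _ in range(n)]
        p_values.append(z_test_p(a, b))
    return p_values

n_sims = 2000
# Honest researcher: one planned outcome, tested once.
honest = sum(null_experiment(n_outcomes=1)[0] < .05 for _ in range(n_sims))
# p-hacking researcher: three outcomes, reports any significant one.
hacked = sum(min(null_experiment(n_outcomes=3)) < .05 for _ in range(n_sims))

print(f"honest false-positive rate:   {honest / n_sims:.3f}")
print(f"p-hacked false-positive rate: {hacked / n_sims:.3f}")
```

The honest rate hovers around the nominal 5%, whereas picking the best of three outcomes roughly triples it, even though no true effect exists.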
Publication Bias
Statistically significant findings are more likely to get published than non-significant
ones, which creates a so-called publication bias (Ioannidis, Munafò, Fusar-Poli, Nosek, & David,
2014). Scholars can contribute to this bias as authors, reviewers, and editors.
First, many authors feel that nonsignificant findings do not contribute anything
substantial to the literature. As a result, nonsignificant results often remain unpublished, creating
the so-called “file-drawer problem” (Rosenthal, 1979, p. 638). To illustrate, Cooper, DeNeve,
and Charlton (1997) surveyed a small sample of social scientists to ask about the fate of studies
approved by their institutional research board. They found that statistically significant findings
were far more likely to be submitted to peer-review than non-significant findings. Second,
reviewers and editors often reject manuscripts because they consider the results not sufficiently
novel, conclusive, or exciting (Giner-Sorolla, 2012), a tendency especially apparent in the case
of failed replications (Arceneaux, Bakker, Gothreau, & Schumacher, 2019). As a consequence,
authors are encouraged either to discard studies in which some predictions are supported but
others are not or, even worse, to actively engage in p-hacking to achieve a coherent story and a
definitive conclusion (O’Boyle, Banks, & Gonzalez-Mulé, 2017).
In conclusion, although the extent to which manuscripts with null findings are rejected is
not known, publication bias leads to an overrepresentation of both significant findings and
inflated effect sizes (Fanelli, 2012). These practices create a bizarre situation in which effects
that appear well-supported in the literature actually do not exist, resulting in the canonization of
false facts (Nissen, Magidson, Gross, & Bergstrom, 2016) and, ultimately, low replicability. (The
online app “p-hacker” illustrates how easily one can attain statistical significance using various
p-hacking techniques, including those discussed here.)
Low Statistical Power
Power refers to the probability of detecting an effect that truly exists. For typical between-person
designs, power is determined by the alpha level, the true effect size and variance in the
population, the sample size, study design, and the type of hypothesis or statistical test (e.g., one-
versus two-sided; Cohen, 1992). Broadly, for large effects, small samples can reliably detect
effects; for small effects, large samples are needed (Cohen, 1992). In practice, researchers can
determine an adequate number of cases for a specific effect by conducting a priori power
analyses (e.g., using tools such as G*Power or the R package pwr). When researchers analyze a
small effect with a small sample, analyses are underpowered. Underpowered analyses are highly
problematic: First, they reduce our ability to find effects that actually exist. Second, they
overestimate the size of those effects that are found (Funder & Ozer, 2019). Thus, low power
leads to erroneous results that are unlikely to replicate.
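As an illustration, an a priori sample-size calculation can be sketched as follows. The function below uses the normal approximation for a two-sided, two-sample comparison of means, so its results fall slightly below what exact tools such as G*Power or the R package pwr report for a t-test.

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """A priori sample size per group for a two-sided, two-sample
    comparison of means with standardized effect size d
    (normal approximation)."""
    z = NormalDist().inv_cdf
    return math.ceil(2 * ((z(1 - alpha / 2) + z(power)) / d) ** 2)

# Large, medium, and small effects in Cohen's (1992) terms:
for d in (0.8, 0.5, 0.2):
    print(f"d = {d}: about {n_per_group(d)} participants per group")
```

For a small effect (d = 0.2), this yields roughly 400 participants per group, which illustrates why small samples cannot reliably detect small effects.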
Human Errors
All humans make errors; all researchers are humans; hence, all researchers make errors.
An analysis of more than 250,000 psychology papers published between 1985 and 2013 found
that half of those reporting significance tests contained at least one p-value inconsistent with its
test statistic or degrees of freedom (Nuijten, Hartgerink, van Assen, Epskamp, & Wicherts,
2016). Although many of these errors are unintentional, researchers seem reluctant to share their
data in order to help detect and correct errors. For example, Vanpaemel, Vermorgen,
Deriemaecker, and Storms (2015) found that less than 40% of the authors who published a
manuscript in one of four American Psychological Association (APA) journals in 2012 shared
their data upon request, even though a refusal to share is a violation of APA research ethics
(American Psychological Association, 2009, p. 12). Even when statistical reporting errors are
detected in published research, issuing corrections is arduous (e.g., Retraction Watch, 2018).
Human errors are a natural by-product of science as a human enterprise and need to be expected,
but the current system is not designed to detect, embrace, or correct mistakes. As a result, the
literature contains too many erroneous findings, which is another reason for low replicability.
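The kind of consistency check performed in such analyses (cf. the statcheck approach of Nuijten et al., 2016) can be sketched for z statistics, where the p-value follows directly from the test statistic; the reported results below are invented for illustration.

```python
import math

def two_sided_p_from_z(z):
    """Two-sided p-value implied by a standard normal test statistic."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def consistent(z, reported_p, tol=0.005):
    """True if the reported p-value matches the one implied by z."""
    return abs(two_sided_p_from_z(z) - reported_p) <= tol

# Invented reported results: pairs of (z statistic, reported p-value).
reports = [(1.96, .05), (2.58, .01), (1.44, .04)]
for z, p in reports:
    status = "consistent" if consistent(z, p) else "INCONSISTENT"
    print(f"z = {z:.2f}, reported p = {p:.3f}: {status}")
```

For t, F, or chi-square statistics the same check requires the corresponding distribution functions (e.g., from scipy.stats), which is what tools like statcheck automate.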
Replicability in Communication
Given the obvious overlap of methods, theories, and publication practices in the
quantitative social sciences (Zhu & Fu, 2019), we have reason to believe that Communication
also suffers from low replicability. Indeed, there are early warning signs that our discipline has a
replication problem. A special issue of Communication Studies reported nine replication attempts
of published Communication studies (McEwan, Carpenter, & Westerman, 2018). The results
resemble those of prior replications projects: Two studies replicated all of the prior findings,
three studies replicated some of the prior findings, whereas four studies replicated none of the
prior findings (McEwan et al., 2018). Together, these results suggest low replicability.
This is not surprising given that the same four causes of low replicability and
reproducibility can also be found in Communication. For example, studies in leading journals in
Communication show evidence of the QRPs discussed above (Matthes et al., 2015),
demonstrating that we engage in the same practices as our colleagues in other fields. Likewise,
there is a growing body of studies in Communication illustrating the “garden of forking paths,”
showing that analytical results can differ starkly depending on the analytical choices made by the
researchers (e.g., Orben, Dienlin, & Przybylski, 2019). Similarly, there exist several accounts of
publication bias in Communication (Franco, Malhotra, & Simonovits, 2014; Keating & Totzkay,
2019), which indicates a preference for novel and statistically significant results. Next, just like
all other scientists, Communication scholars commit errors: Of all p-values reported in 693
empirical Communication papers published between 2010 and 2012, 8.8% were incorrect; of
those, 75% were too low (Vermeulen & Hartmann, 2015). Similarly, when it comes to correctly
interpreting p-values, Communication scholars commit errors (Rinke & Schneider, 2018).
A strong indicator of low replicability is low statistical power. In their meta-review of 60
years of quantitative research, Rains, Levine, and Weber (2018) reported that observed effects in
Communication have a median size of r = .18. In other words, using traditional benchmarks
(Cohen, 1992), effects are typically small to moderate (but see Funder & Ozer, 2019). However,
note that this is an overly optimistic assumption in light of QRPs and publication bias; not all
effects that are reported actually exist, and those that do are likely smaller (Camerer et al., 2018).
Next, Rains et al. (2018) report that meta-analyses in Communication typically feature a median
of 28 effects and a median of 4,663 participants, from which we can extrapolate that effects are
typically tested with 167 participants. However, for most study designs such a small sample size
would be insufficient to reliably detect small or moderate effects (for examples of specific
designs, see OSM Appendix C), which suggests that a large number of studies in
Communication are likely to be underpowered.
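These figures can be checked with a rough power calculation based on the Fisher z transformation; this is our own back-of-the-envelope sketch, not a computation from Rains et al. (2018).

```python
import math
from statistics import NormalDist

def power_correlation(r, n, alpha=0.05):
    """Approximate power of a two-sided test of a correlation r with
    sample size n, via the Fisher z transformation."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    return nd.cdf(math.atanh(r) * math.sqrt(n - 3) - z_crit)

n = round(4663 / 28)  # about 167 participants per tested effect
print(f"power for r = .18, n = {n}: {power_correlation(.18, n):.2f}")
# If true effects are only half as large (cf. Camerer et al., 2018):
print(f"power for r = .09, n = {n}: {power_correlation(.09, n):.2f}")
```

Under these assumptions, power is around .64 for the median reported effect and drops to roughly .21 if true effects are half as large, far below the conventional .80 target.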
The presence of these four causes of low replicability implies that we have a replication
problem in Communication as well. Similar to other fields, in Communication “there is an
increased internal understanding of our own faults and foibles that is leading to requests for more
information about what underlies the evidence that serves as a basis for our knowledge claims”
(Holbert, 2019, p. 237). Hence, we must not wait for a large-scale replication project to further
demonstrate that substantial portions of our research cannot be replicated. Instead, we believe
that we must act now. Open science practices can be an important part of ensuring our research is
reproducible, replicable, and thus credible (Munafò et al., 2017).
Table 1
Summary of the Open Science Agenda
The 7-Point Agenda (with examples of addressed problems and benefits):
1. Publish materials, data, and code
   - Facilitates reproduction of analyses and replication of studies
   - Provides a vast resource for knowledge creation and incremental progress in science
   - Reduces p-hacking through analytical transparency
2. Preregister studies and submit registered reports
   - Provides a clear distinction between exploratory and confirmatory research
   - Reduces HARKing and p-hacking
   - Reduces underpowered studies
   - Reduces publication bias and the file-drawer effect
3. Conduct replication studies
   - Provides the basis for cumulative knowledge creation
   - Reduces publication bias and the file-drawer effect
4. Collaborate
   - Facilitates recruiting appropriately powered samples
   - Provides immediate replication opportunity
   - Increases chance of early detection of errors
5. Foster open science skills
   - Improves skills and knowledge about open science practices
   - Establishes open science practices as a de facto approach to the scientific method, e.g., in graduate theses or as norms in research
6. Implement Transparency and Openness Promotion (TOP) Guidelines
   - Provides authors a reputable outlet for engaging in open science
   - Demonstrates open science practices to the greater community
   - Allows editors to motivate, encourage, and guide authors through implementing open science practices
7. Incentivize open science practices
   - Implements long-term changes toward an open science culture
   - Introduces experience with open science practices (including replications and/or collaborations) as a criterion for jobs, tenure, promotion, or funding
An Agenda for Open Science Practices in Communication
Open science is an umbrella term and describes “the process of making the content and
process of producing evidence and claims transparent and accessible to others” (Munafò et al.,
2017, p. 5). In other words, open science practices shift the research focus from a closed “trust
me, I’m a scientist” to a more transparent “here, let me show you” position (Bowman & Keene,
2018, p. 364). In what follows, we present the seven solutions that we consider most relevant for
addressing low replicability (see Table 1). Whereas our first points focus on direct solutions, the
actual research, and the individual researcher (see also Lewis, 2019), our final points emphasize
more indirect approaches and also include other stakeholders.
1. Publish Materials, Data, and Code
We recommend that researchers share study materials (e.g., items, stimuli, protocols,
instructions, or codebooks for content- and meta-analyses), data (e.g., raw, aggregated,
processed, or synthetic), code (e.g., data-gathering, -preparation, or -analysis), and software (e.g.,
operationalizations of experiments, simulations, content coding tools, scraping tools, or
applications) when appropriate and ethical. These recommendations are aligned with the
International Communication Association's (ICA) Code of Ethics, which states that “ICA fully
supports the openness of scholarly research” (p. 2). When sharing, we recommend following the
FAIR data principles (an acronym for findability, accessibility, interoperability, and reusability;
Wilkinson et al., 2016) or the suggestions offered by O. Klein et al. (2018).
Sharing research materials has several important benefits. First, sharing of materials,
data, code, and software increases reproducibility, because others can independently verify and
better understand the results (LeBel, McCarthy, Earp, Elson, & Vanpaemel, 2018). As an
immediate result, sharing improves the quality of peer review. Whereas some peer-review
criteria can be assessed on the basis of the final manuscript (e.g., the strength of the theoretical
rationale), other criteria require access to (a) study materials (to comprehensively assess a
proposed methodology), (b) analysis code (to better evaluate a data analysis), or (c) study data
(to check results for potential errors). Without access to these materials, peer reviewers must
accept scientific claims solely based on authors’ claims (Munafò et al., 2017).
Second, sharing improves the quality of the research itself, because it provides the basis
for cumulative knowledge generation. Science is ultimately a social enterprise in which
individual researchers and groups work together to create and to accumulate knowledge as a
public good (Merton, 1974). In each research project, scholars rely on prior work to develop
theories, craft research designs, or develop analytical procedures. Importantly, the quality of each
step depends on how much information about prior work is available. Sharing allows researchers
to work more efficiently and conceive new studies without having to “reinvent the wheel.”
2. Preregister Studies and Submit Registered Reports
Many QRPs are not a result of bad intentions. Like all humans, researchers are inclined to
see patterns in random data that are aligned with preexisting beliefs (Munafò et al., 2017).
Therefore, introducing structures that limit biases as part of the scientific process is beneficial.
To this end, we recommend that all confirmatory research should be preregistered. (Much
exploratory research can be preregistered, too.) Preregistration means that hypotheses, study
design, and analysis plan are explicated in an official registry prior to data collection or data
analysis (Nosek et al., 2018). Preregistration platforms such as AsPredicted and OSF Registries
also ask for a justification of the planned sample size. To justify their sample and to prevent low
power, we recommend that researchers conduct a priori power analyses when planning a study.
Preregistrations can also include additional details, such as a short summary of the project,
measures or coding schemes, variable transformations, recruitment or sampling strategies, and
power calculations. This initial research plan is preserved in the registry, receives a time-stamp,
is made discoverable (if desired, only after an embargo period), and is linked to in the research
article (so that planned and conducted analyses can be compared). For further instructions and
concrete templates, see Lewis (2019).
Because preregistration means all steps are determined before data collection, researchers
protect themselves from HARKing and p-hacking, which reduces the likelihood of false or
inflated effects. Some evidence of this effect already exists; for example, a comparison of the
effect sizes from 993 studies in psychology found that preregistered studies reported effects (r =
.16) that were less than half as large compared to those reported by studies that were not
preregistered (r = .36; Schäfer & Schwarz, 2019). Although this difference could partly reflect a
selection effect, similar patterns have been observed elsewhere (e.g., Camerer et al., 2018). Critically, preregistration does
not prevent authors from post-hoc exploratory data analyses, but only requires authors to clearly
distinguish confirmatory and exploratory analyses. Exploratory analyses are crucial for scientific
advancement; they should not be disguised as confirmatory analyses but receive a designated
section and thereby a more prominent spot.
A logical extension of preregistrations is registered reports (Chambers, 2013). Registered
reports follow a two-stage review process. In Stage 1, authors submit a manuscript that includes
the introduction, method, planned sample size, and any pilot study results. This proposal is sent
through the usual peer-review process before the study is conducted. Reviewers assess the merits
of the study design. If evaluated positively, authors receive an in-principle acceptance, which
(as long as the research is conducted as specified in the accepted proposal) guarantees
publication of the manuscript, regardless of results. In Stage 2, the authors submit the full
manuscript, which includes the results of their preregistered analysis plan, exploratory results,
and a discussion of their findings. Deviations from the preregistration have to be highlighted and
explained. In the second round of peer review, reviewers evaluate if the confirmatory analyses
correspond to the planned procedure, assess any new exploratory analyses, and give feedback on
the discussion.
Registered reports provide peer review when it has the most impact: before the study is
conducted. Hence, in contrast to traditional submission types, registered reports can improve the
design of the research. Moreover, given that publication is independent from a study’s outcome,
registered reports eliminate publication bias (Munafò et al., 2017). The list of journals offering
registered reports is growing continually and already includes more than 200 journals (Chambers,
2019); several journals have also introduced exploratory reports as a format dedicated to
hypothesis generation and discovery (McIntosh, 2017). In Communication, at the time of writing, registered reports
are accepted by Communication Research Reports, Computational Communication Research,
and Journal of Media Psychology. We urge other Communication journals to follow suit and
encourage scholars to submit their work as registered reports.
3. Conduct Replications
We encourage (a) Communication researchers to conduct more replication studies and (b)
editors and reviewers to publish more replication studies. Although conducting and publishing
replications is important for scientific progress in general (Merton, 1974), it is central to open
science in particular, because it makes transparent the robustness of previously published results.
At least three types of replications exist (for a more granular conceptualization, see LeBel
et al., 2018): direct replications, when a researcher reruns a study using the same
operationalizations and data analysis methods; close replications, when a researcher, for example,
updates the stimulus material (Brandt et al., 2014); and conceptual replications, when a
researcher reruns parts of a study or uses different operationalizations/methods. Notably, internal
replications, in which researchers replicate their own work, are not predictive of independent
replication efforts (Kunert, 2016). Trying to replicate the results of other researchers is therefore
a requirement of the scientific method and a necessity for science to be self-correcting.
Although conceptual replications of research are not uncommon, there is a shortage of
direct replications. In Communication, they represent approximately 1.8% of all published
research (Keating & Totzkay, 2019). Thus, we call upon both editors and reviewers to be open to
the publication of direct replications. Specifically, we believe that journals have a responsibility
to publish high quality replications of studies that the journal originally published; a procedure
known as the “Pottery Barn rule” (Srivastava, 2012). This challenges journals to find ways to
support the submission of high quality replications. Authors struggling to find venues for the
publication of replications can self-publish their research as preprints (J. M. Berg et al., 2016).
There are several guidelines for conducting replications (e.g., Brandt et al., 2014). Here,
we emphasize three aspects. First, replications should not be used as a political tool to disparage
individual researchers. Instead, they inform our research and update our knowledge about
important effects. Second, although not necessarily a condition for a good replication, we
encourage researchers to contact the authors of the original study to get feedback on their
preregistered replication attempt. Third, it is a common misunderstanding that direct replications
should rerun the original studies using the same sample size. Given publication bias, small samples,
and inflated effect sizes, replications need new power analyses, preferably assuming that the
actual effects are half as strong as initially reported (Camerer et al., 2018). It is crucial that
replication attempts have sufficient statistical power, especially in cases when the original study
was underpowered. For well-executed replication attempts, see Camerer et al. (2018).
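To make the sample-size implications of this recommendation concrete, the power reasoning above can be sketched in Python with the `statsmodels` library. The two-group design and the effect size of d = .50 are illustrative assumptions, not values from any particular study:

```python
# Sketch: sample-size planning for a direct replication, assuming the
# original study reported a between-groups effect of Cohen's d = 0.50.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Naive approach: power the replication for the published effect size.
n_published = analysis.solve_power(effect_size=0.50, power=0.80, alpha=0.05)

# Conservative approach (Camerer et al., 2018): assume the true effect
# is only half as strong as initially reported.
n_conservative = analysis.solve_power(effect_size=0.25, power=0.80, alpha=0.05)

print(f"n per group, published effect (d = .50): {n_published:.0f}")
print(f"n per group, halved effect (d = .25):    {n_conservative:.0f}")
```

Because statistical power scales with the square of the effect size, halving the assumed effect roughly quadruples the required sample per group, which illustrates why replications of underpowered originals need substantially larger samples.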
4. Collaborate
Because effects in Communication are typically small to moderate (Rains et al., 2018), a
priori power analyses often reveal that we need large samples to reliably detect such effects.
However, large samples necessitate vast resources. As a result, we encourage Communication
scholars to collaborate across labs or research sites, something routinely done in other fields
(e.g., the Human Connectome Project). Such collaborations can take place on a small scale (with
a few individuals or labs joining forces), on a large scale (with dozens of labs participating
worldwide), or indirectly by analyzing existing large-scale datasets or by cooperating
with companies. More collaboration reflects the basic idea of open science in the sense of
strengthening interactions among scholars, enabling a more proactive exchange of data and study
materials, and establishing the mindset that we need large-scale empirical data to produce
reliable results, which can be collected and maintained only by a collective effort.
In order to find collaborators for small-scale collaborations, researchers can use online
resources such as StudySwap, a platform for interlab replication, collaboration, and research
resource exchange. Regarding large-scale collaborations, programs such as the Many Labs
projects involve several labs working together in order to replicate contemporary findings (e.g.,
R. A. Klein et al., 2018); for an example of a large-scale cooperation in Communication, see
Hameleers et al. (2018). Next, Communication scholars might consider using existing large-scale
datasets. For a list of freely available datasets, see Brick (2019); for search engines for datasets,
see Google Dataset Search or GESIS Data Search. Collaborations with companies such as
Google or Facebook are another option (Taneja, 2016).
5. Foster Open Science Skills
In order for open science practices to become widely adopted, it is essential that they
become an integral component of a researcher’s training and education (R. van den Berg et al.,
2017). Good education and training are the cornerstone of high quality research; hence, they can
treat the causes of low replicability at the roots.
Broadly, researchers can make use of open access learning resources focused on open
science practices. These include webinars, teaching materials, and consultations offered by the
Center for Open Science and many other organizations. Furthermore, there are several massive
open online courses (MOOCs), video materials, an Open Science Knowledge Base, and tutorials
(e.g., O. Klein et al., 2018), all of which help develop familiarity with or expertise in open
science practices.
In addition, researchers should encourage students to implement open science practices as
part of the advising and mentoring process. For example, students could (a) preregister theses
and studies conducted for their coursework; (b) conduct replication studies, which is ideal for
understanding important methods and theories in Communication while simultaneously
contributing valuable epistemic insights; or (c) analyze publicly available datasets, which
significantly expedites the research process while still allowing students to produce reliable
findings (Schönbrodt, Maier, Heene, & Zehetleitner, 2015). Fortunately, it is often possible to build on
established practices and routines: Similar to preregistrations, thesis projects often require
students to first propose their theoretical foundations, study design and methodology, and
planned data analysis (Nosek et al., 2018). Likewise, students are often required to share data
and analysis scripts with their advisor and other advisory committee members, which could
easily be extended to open data repositories.
6. Implement Transparency and Openness Promotion (TOP) Guidelines
Replicability can be increased indirectly by promoting an open research publication
culture. This can be achieved if academic journals adopt, promote, and require open science
practices. To structure this process, Nosek et al. (2015) proposed the so-called Transparency and
Openness Promotion (TOP) Guidelines, which consist of eight standards that largely encompass
the suggestions outlined in the current manuscript. Broadly, the TOP Guidelines encourage
journals to ensure that as much as possible of the work published in their outlets is made
available to the public, while clearly communicating how authors and readers should engage with those materials.
TOP Guidelines acknowledge that not all open science practices are possible or plausible
for all areas of research. They propose three incremental levels of transparency and openness.
Level 1 necessitates only an update of submission guidelines, in which submitting authors are
required to state whether they shared their data, code, or materials; actual sharing is encouraged
but not required. On Level 2, journals require authors to share their data, code, and materials on
trusted repositories. Finally, Level 3 represents a move toward complete transparency, in which
journals adopt all open science practices suggested by TOP. This includes the preregistration of
all confirmatory studies and the enforcement of design and analysis transparency standards. As
of this writing, the TOP Guidelines have been adopted by over 1,000 journals.
Within Communication, there are explicit calls for more transparency and open sharing
(Bowman & Keene, 2018; Lewis, 2019; Spence, 2019). We therefore encourage Communication
journals to adopt the TOP Guidelines. This change needs joint efforts from various stakeholders,
including publishers, publication committees, editors, board members, reviewers, and authors.
7. Incentivize Open Science Practices
Only by changing academia’s incentive system will it be possible to guarantee sustained
change. An implicit incentive that already exists may be the reputational gain that can be
achieved through the early adoption of open science standards (Allen & Mehler, 2019;
McKiernan et al., 2016). Furthermore, the opportunity to publish null findings via registered
reports may also contribute to traditional markers of productivity such as number of publications.
However, to combat low replicability and its causes effectively, Communication needs to
incentivize open science practices explicitly.
Above all, it is crucial that we introduce the successful implementation of the
abovementioned practices of open science as a quality indicator to selection and evaluation
processes, which includes hiring, tenure, promotion, and awards. To this end, several universities
have already begun to require that applicants list their open science practices (Allen & Mehler,
2019). Funders, including the European Commission and the German and Dutch research
foundations, increasingly require explicit open science practices. On the side of journals,
introducing badges that signal a manuscript’s adherence to open science practices (e.g., open
material, open code, open data) may incentivize open science practices (Kidwell et al., 2016).
Changing our incentive structure necessitates changing our general culture. This can only
be achieved if we come together as a community, and events such as the 2020 ICA conference
“Open Communication” signal an urge for change. In that vein, preconferences, theme
slots, or symposia are great ways to further engage the community. Local and decentralized
grassroots initiatives (e.g., the open science journal club “ReproducibiliTea” [Orben, 2019] or
the “UK Reproducibility Network”) might similarly effect sustained cultural change.
Open Science and Qualitative Research
To this point, we have implicitly focused on quantitative research. Open science
practices, however, are not exclusive to any particular form of data or type of analysis. The basic
notion of making scholarship transparent is one shared by all scholars (Haven & Van Grootel,
2019). That said, reasons and motivations for engaging in open science differ across approaches,
as do implementations and solutions. It is beyond the scope of this paper to address all aspects
that need to be considered for the manifold approaches that are used within Communication.
Instead, we want to emphasize that other approaches can similarly benefit from engaging in open
science practices. In what follows, we briefly outline some first suggestions as to how open
science practices can be used to strengthen qualitative research.
Because there are different plausible understandings of what constitutes qualitative
research, we refer here to approaches that (a) aim at understanding how and why certain
phenomena may occur, instead of making inferences about the larger population from which the
sample was drawn; and (b) involve complex data (e.g., texts, videos, images) that are analyzed
using semantic approaches (e.g., verbal interpretation, categorization, encoding). Data often
come from smaller samples that do not maximize representativeness but rather heterogeneity, the
interpretation is carried out by the researchers themselves, and if statistics are reported at all
(e.g., frequencies within the small sample), they are primarily descriptive. Due to their
comprehensive and more granular nature, findings from such methods contribute to a better
understanding of specific processes and offer new insights that are less likely to be produced or
even impossible to produce with quantitative methods.
Because qualitative research is primarily based on subjective evaluations and does not
include inferential statistics, many of the abovementioned QRPs such as p-hacking or low
statistical power do not apply. For the same reason, qualitative research cannot really have a
replication crisis, because it is not the explicit aim to generalize over an underlying population
(although mixed method approaches do exist, which are at least partly confirmatory). Similarly,
because an individual human researcher is centrally involved in the interpretation of the data,
general reproducibility is by definition limited (Childs, McLeod, Lomas, & Cook, 2014). That
said, qualitative research is not entirely subjective. Its methods are rooted in general principles
shared by many researchers, which allows us to compare and evaluate results (e.g., when
determining inter-rater reliability). Because qualitative research informs us about underlying
processes it can establish a profound understanding of specific mechanisms, particularly about
those that a researcher has not thought of a priori. Together, this still implies an aim of
generalizing results, albeit a much more modest one. Again, acknowledging that different
understandings, practices, and aims exist, it is our understanding that several of the above-
mentioned open science practices will also benefit qualitative research. In short, we argue that
several specific open science practices can increase the quality of qualitative research, primarily
by improving transparency and traceability.
First, researchers can share research designs, interview and interrogation protocols,
anonymized data and coded data files, and coding strategies used to analyze these data (Haven &
Van Grootel, 2019). Haven and Van Grootel (2019) assert that working with qualitative data does not
“exempt the researcher from the duty to maximize transparency” (p. 236). This approach allows
other scholars to better understand their interpretive lenses, to better assess the quality of the
findings, and to use or adapt these materials in their own research (DeWalt & DeWalt, 2011).
Together, this increases consistency and comparability.
Scholars working with qualitative analyses have many researcher degrees of freedom as
well, and are incentivized to find compelling narratives that increase chances of publication.
Together, this introduces biases that can reduce precision and, in turn, generalizability. Hence,
preregistration of qualitative research can be fruitful, too: It increases transparency, tracks
flexibility and modification during the research process, and, when submitted as a registered
report, prevents publication bias (Haven & Van Grootel, 2019). For the most part, preregistering
a qualitative study is similar to a quantitative study. It includes specifying the a priori choices for
sampling frames, data collection tools, or planned data analysis choices. In contrast to
quantitative research, preregistrations need not be about registering predictions, but “putting the
study design and plan on an open platform for the (scientific) community to scrutinize” (Haven
& Van Grootel, 2019, p. 236). Such a process would motivate researchers to “make explicit
which tradition and theoretical lens they work from” and to “carefully reflect upon their own
values prior to going into the field and prior to interpreting and reporting the findings within the
context of these a priori values” (Haven & Van Grootel, 2019, p. 237). Preregistration can
thereby provide an additional layer of accountability and credibility for the work as a whole. For
a preregistration template for qualitative research, see Hartman, Kern, and Mellor (2018).
We encourage scholars conducting qualitative research to collaborate as well, for
example through triangulation (Creswell & Poth, 2018), secondary data analysis, or by using
multiple datasets of qualitative phenomena (Ruggiano & Perry, 2019). Each of these
possibilities, or any combination of them, allows for deeper and wider insights into the research
questions at hand and would help examine the relative stability, variability, and generalisability
of a given result. If a number of researchers with different interpretive lenses find similar or
complementary results, we can be more confident in the claims of this collective research.
Objections and Concerns
Some might question whether or not Communication really has a replication problem.
Throughout this manuscript, we have presented several arguments as to why there is cause for
concern (and for a list of further reasons, see OSM Appendix D). However, even if our analyses
should be incorrect, consider that we as Communication scholars have always aimed to improve
our scholarly practices over time. Because our agenda is built on shared principles of science
(Merton, 1974), we believe that adopting open science practices represents a natural continuation
of this collective development that will make our research more informed, robust, and credible.
Some might express concerns over specific open science practices. One of the most
prominent concerns regards data sharing. Sometimes the sharing of materials or data is
problematic or unethical, because they could be used for unintended, harmful, or unethical
purposes (Lakomý, Hlavová, & Machackova, 2019). One key issue is the privacy of participants.
There are cases in which the full data cannot be shared because participants can or could be
identified. Specific raw data such as news articles, video material, or data scraped from online
platforms sometimes cannot be made public, for example because of copyright reasons
(Atteveldt, Strycharz, Trilling, & Welbers, 2019). Fortunately, there are several means to address
these challenges. First, it is necessary to implement an appropriate informed consent process, in
which participants are informed about who has access to the data and how they can delete their
data. Second, researchers can prevent others from identifying participants by (a) removing direct
or indirect identifiers (e.g., date of birth), (b) binning (i.e., turning continuous values into
categories), or (c) aggregating. Similarly, Cheshire (2009) reviews several recommendations
for properly and safely anonymizing qualitative data, such as using distal pseudonyms in place of
real names, deleting identifying information such as interview locations, restricting access to
anonymous transcripts only (e.g., no access to audio or other data formats), and digitally
manipulating images or videos (e.g., disguising voices or blurring faces). O. Klein et al. (2018)
provide additional guidance on how to deal with concerns related to participant privacy, whereas
Rocher, Hendrickx, & de Montjoye (2019) address several limitations. A third solution to both
privacy and copyright issues is non-consumptive data use (Atteveldt et al., 2019), which means
providing access to the data without physically copying it (e.g., by means of onsite visits).
Fourth, researchers can share synthetic data sets. These are simulated data that “mimic real
datasets by preserving their statistical properties and the relationships between variables”
(Quintana, 2019, p. 2). In synthetic data sets, all individual cases are fictitious and novel, while
the general properties of the variables remain the same (e.g., means, variance, and covariance).
Synthetic data sets thereby enable others to reproduce the results of a study while guaranteeing
the anonymity of participants. Synthetic data can be computed using open-source software such
as the R package synthpop. Finally, data can be shared using licenses that legally restrict use, for
example for scientific purposes only. Likewise, researchers can use services that limit access to
specific users. In general, when sharing their data researchers should be as restrictive as
necessary and as open as possible. Whether or not the sharing of data is possible always needs to
be evaluated in the context of each individual research project. When data cannot be shared at
all, researchers should provide explicit reasons in their manuscripts.
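The synthetic-data option can be illustrated with a minimal sketch. Note that this is a conceptual illustration using a simple multivariate normal approximation, not the variable-by-variable synthesis models that the R package synthpop actually implements, and the variable names are invented for the example:

```python
# Conceptual sketch of a synthetic data set: simulated cases that preserve
# the means and covariance structure of the real data, while every row is
# fictitious. (Illustration only; synthpop and similar tools use more
# sophisticated synthesis models.)
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for real (privacy-sensitive) survey responses:
# columns = [age, daily_media_use_hours, well_being_score]
real = rng.normal(loc=[35, 3, 5], scale=[10, 1, 1.5], size=(500, 3))

# Estimate the statistical properties of the real data ...
means = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# ... and draw entirely new, fictitious cases from them.
synthetic = rng.multivariate_normal(means, cov, size=500)

# The general properties match closely, but no row is a real participant.
print(np.round(means, 2), np.round(synthetic.mean(axis=0), 2))
```

Because only aggregate properties (means, variances, covariances) carry over, analyses can be rehearsed and results approximately reproduced on the synthetic file without exposing any individual participant.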
There also are concerns surrounding preregistration. What if researchers have learned a
better way to analyze the data since they preregistered? What if they find something interesting
that was not predicted? Again, it is a common misconception that preregistration restricts the
ways in which researchers can analyze their data. Preregistration permits researchers to explore
their data or to adapt their plans. The only condition is that all deviations from the preregistration
need to be explained and that all additional analyses need to be labelled as exploratory.
Another concern is that reviewers, editors, or readers might use open science as a
heuristic for high quality. Although sharing materials can be considered a necessary condition for
high quality, it is not a sufficient one: “Transparency doesn’t guarantee credibility. Transparency
allows others to evaluate the credibility of your scientific claims” (Vazire, 2019). Deviations
from the preregistration in the final publication are common (Claesen, Gomes, Tuerlinckx, &
Vanpaemel, 2019). As a result, studies employing open science practices need to be evaluated
just as carefully as traditional studies. Open science practices are no panacea, and they cannot
prevent intentional fraud.
With regard to replicability, there are communication phenomena that we might not
expect to replicate because of changes in external factors. For example, a study on the
relationship between one’s number of Facebook followers and others’ perceptions of one’s social
attraction (Tong, Van Der Heide, Langwell, & Walther, 2008) failed to replicate ten years later
(Lane, 2018). This failed replication, however, can be attributed to shifts in Facebook users’
orientations towards the platform. This example illustrates that replications can fail for different
reasons, including poor design or actual changes in the phenomenon of interest. Failed
replications thereby help us design better studies for future investigations. That said, by
using appropriate methods and sound theory we should aim to produce findings that are robust,
sustainable, and likely to generalize across time, samples, and contexts.
Open science practices generally, and preregistration specifically, might lead to an
unfamiliar publication process. Published studies will likely feature more mixed findings and
null results and, thus, present less coherent “stories” (Giner-Sorolla, 2012). However, when
studying human thoughts, behavior, or media content, there is always noise; hence, it is unlikely
that we should repeatedly find coherent narratives (Giner-Sorolla, 2012) or large effects (Funder
& Ozer, 2019). We believe that embracing a culture that values this challenge will advance
Communication far more than a culture favoring simple narratives that do not replicate.
Other concerns revolve around increased costs and additional labor. Adequately powered
studies require larger samples, which in turn require more resources. The additional
documentation that accompanies open science practices is laborious and demands more careful
planning and administration. Individuals adopting open science practices may be unable to
publish their results as quickly as they are accustomed to. As a result, early career researchers
(ECRs) especially might be concerned that this will lead to a “thin” publication record.
Regarding the publication system, implementing open science practices such as the TOP
Guidelines means that reviewers are expected to review supplementary material and to attempt to
reproduce the results themselves, which entails additional labor. Whereas some of these
concerns are justified, others are not. For example, preregistration does not lead to more work
but instead front-loads processes that are otherwise addressed in later stages of a research
project. When a study is submitted as a registered report, ECRs can list the accepted-in-principle
manuscript on their CVs, even before data collection. Furthermore, when studies are conducted as registered reports, mixed
and null findings are more likely to get published (Allen & Mehler, 2019), making it easier to
plan projects because publication does not hinge on results. Registered reports currently also
have a higher chance of acceptance (Chambers, 2019). To reduce the additional burden for
reviewers, some journals have already implemented verification processes to ensure the
reproducibility of analytical results. Similar to current practices with regard to plagiarism
checks, this process is carried out by an independent institute instead of the reviewers. Overall,
however, there is no denying that open science practices require us to increase our efforts.
Perhaps the best answer to concerns about additional labor is normative. We as a field need to
focus on research quality and not on publication quantity (Nelson, Simmons, & Simonsohn,
2012). Indeed, several solutions that we propose might lead to fewer submissions, but these
submissions would be of higher quality. If we submit fewer papers, we have more time to review
those of others, including data and materials. Open science practices should be acknowledged
and incentivized by the entire field: It is the current and future hiring committees, grant agencies,
and student research supervisors that will ultimately determine our norms, and whether or not the
increased efforts help or harm individual careers.
Finally, the solutions we have presented are only a subset of several useful practices (for
a list of additional solutions, see OSM Appendix E). Our list is not exhaustive: More work is
needed to address challenges and opportunities for different research domains, such as
qualitative research, computational methods, or hermeneutic approaches. In the spirit of a self-
correcting science, as we collectively move towards more open science practices in
Communication, the agenda will be revisited, challenged, and expanded. Not everyone might be
able to immediately adopt all points of the agenda. Our agenda is not all or nothing: Even an
incremental adoption will bring important benefits to our field, and some progress is better than
no progress. To effectively address all concerns and to tailor initiatives to all members of the
Communication community, we encourage surveys of Communication researchers regarding
their opinions on open science, their hopes, and their concerns. Until then, a survey of German
social scientists found that hopes concerning the benefits of open science significantly exceeded
concerns (Abele-Brehm, Gollwitzer, Steinberg, & Schönbrodt, 2019), a situation that likely also
applies to Communication.
Several of our suggestions for open science practices imply substantial changes to the
way we conduct research. They require learning new skills and jettisoning some of our prior
routines. So why should we care? Because from an ethical perspective, the values of open
science practices are aligned with our societal function as scientists (Bowman & Keene, 2018;
Merton, 1974). Open science practices provide the basis for collaboration, make results available
to the community, and facilitate a culture which does not judge a study by its outcome, but by the
quality of its theory and methods. They may even boost public trust in our profession. The most
important reason to adopt open science practices, however, is epistemic. We as Communication
scholars aim to establish robust and reliable findings. Open science practices will produce more
credible results, foster the integrity of our discipline and, ultimately, enhance our knowledge
about Communication processes.
Abele-Brehm, A. E., Gollwitzer, M., Steinberg, U., & Schönbrodt, F. D. (2019). Attitudes
toward open science and public data sharing: A survey among members of the German
Psychological Society. Social Psychology, 19.
Allen, C., & Mehler, D. M. A. (2019). Open science challenges, benefits and tips in early career
and beyond. PLOS Biology, 17(5), e3000246.
American Psychological Association. (2009). Publication manual of the American Psychological
Association (6th ed.). Washington, D.C.: American Psychological Association.
Arceneaux, K., Bakker, B. N., Gothreau, C., & Schumacher, G. (2019, June 20). We tried to
publish a replication of a Science paper in Science. The journal refused. Slate. Retrieved
Asendorpf, J. B., Conner, M., De Fruyt, F., De Houwer, J., Denissen, J. J. A., Fiedler, K., …
Wicherts, J. M. (2013). Recommendations for increasing replicability in psychology.
European Journal of Personality, 27(2), 108–119.
Atteveldt, W. van, Strycharz, J., Trilling, D., & Welbers, K. (2019). Toward open computational
communication science: A practical road map for reusable data and code. International
Journal of Communication, 13(0), 20.
Berg, J. M., Bhalla, N., Bourne, P. E., Chalfie, M., Drubin, D. G., Fraser, J. S., … Wolberger, C.
(2016). Preprints for the life sciences. Science (New York, N.Y.), 352(6288), 899–901.
Berg, R. van den, Brennan, N., Hyllseth, B., Kamerlin, C. L., Kohl, U., O’Carroll, C., …
Directorate-General for Research and Innovation. (2017). Providing researchers with the
skills and competencies they need to practise Open Science. Retrieved from
Bishop, D. V. M. (2019). Rein in the four horsemen of irreproducibility. Nature, 568(7753),
Bowman, N. D., & Keene, J. R. (2018). A layered framework for considering open science
practices. Communication Research Reports, 35(4), 363–372.
Brandt, M. J., IJzerman, H., Dijksterhuis, A., Farach, F. J., Geller, J., Giner-Sorolla, R., … van ’t
Veer, A. (2014). The Replication Recipe: What makes for a convincing replication?
Journal of Experimental Social Psychology, 50, 217–224.
Brick, C. (2019). Directory of free, open psychological datasets. Retrieved from
Camerer, C. F., Dreber, A., Forsell, E., Ho, T.-H., Huber, J., Johannesson, M., … Wu, H. (2016).
Evaluating replicability of laboratory experiments in economics. Science, 351(6280),
Camerer, C. F., Dreber, A., Holzmeister, F., Ho, T.-H., Huber, J., Johannesson, M., … Wu, H.
(2018). Evaluating the replicability of social science experiments in Nature and Science
between 2010 and 2015. Nature Human Behaviour, 2(9), 637–644.
Chambers, C. D. (2013). Registered Reports: A new publishing initiative at Cortex. Cortex,
49(3), 609–610.
Chambers, C. D. (2019). What’s next for Registered Reports? Nature, 573(7773), 187–189.
Cheshire, L. (2009). Archiving qualitative data: Prospects and challenges of data preservation
and sharing among Australian qualitative researchers. Queensland, Australia: Institute
for Social Science Research.
Childs, S., McLeod, J., Lomas, E., & Cook, G. (2014). Opening research data: Issues and
opportunities. Records Management Journal.
Claesen, A., Gomes, S. L. B. T., Tuerlinckx, F., & Vanpaemel, W. (2019). Preregistration:
Comparing dream to reality [Preprint].
Cohen, J. (1992). A power primer. Psychological Bulletin, 112(1), 155–159.
Cooper, H., DeNeve, K., & Charlton, K. (1997). Finding the missing science: The fate of studies
submitted for review by a human subjects committee. Psychological Methods, 2(4), 447
Cova, F., Strickland, B., Abatista, A., Allard, A., Andow, J., Attie, M., … Zhou, X. (2018).
Estimating the reproducibility of experimental philosophy. Review of Philosophy and Psychology.
Creswell, J. W., & Poth, C. N. (2018). Qualitative inquiry & research design: Choosing among
five approaches (4th ed.). Los Angeles, CA: SAGE.
DeWalt, K. M., & DeWalt, B. R. (2011). Participant observation: A guide for fieldworkers (2nd
ed.). Lanham, MD: Rowman & Littlefield.
Fanelli, D. (2012). Negative results are disappearing from most disciplines and countries.
Scientometrics, 90(3), 891–904.
Franco, A., Malhotra, N., & Simonovits, G. (2014). Publication bias in the social sciences:
Unlocking the file drawer. Science, 345(6203), 1502–1505.
Funder, D. C., & Ozer, D. J. (2019). Evaluating effect size in psychological research: Sense and
nonsense. Advances in Methods and Practices in Psychological Science, 2(2), 156–168.
Gelman, A., & Loken, E. (2013). The garden of forking paths: Why multiple comparisons can be
a problem, even when there is no “fishing expedition” or “p-hacking” and the research
hypothesis was posited ahead of time. Retrieved from
Giner-Sorolla, R. (2012). Science or art? How aesthetic standards grease the way through the
publication bottleneck but undermine science. Perspectives on Psychological Science,
7(6), 562–571.
Hameleers, M., Bos, L., Fawzi, N., Reinemann, C., Andreadis, I., Corbu, N., … Weiss-Yaniv, N.
(2018). Start spreading the news: A comparative experiment on the effects of populist
communication on political engagement in sixteen European countries. The International
Journal of Press/Politics, 23(4), 517–538.
Hartman, A., Kern, F., & Mellor, D. (2018). Preregistration for qualitative research template.
Haven, T. L., & Van Grootel, D. L. (2019). Preregistering qualitative research. Accountability in
Research, 26(3), 229–244.
Holbert, R. L. (2019). Editorial vision, goals, processes, and procedures. Journal of
Communication, 69(3), 237–248.
Ioannidis, J. P. A., Munafò, M. R., Fusar-Poli, P., Nosek, B. A., & David, S. P. (2014).
Publication and other reporting biases in cognitive sciences: Detection, prevalence, and
prevention. Trends in Cognitive Sciences, 18(5), 235–241.
John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable
research practices with incentives for truth telling. Psychological Science, 23(5), 524–532.
Keating, D. M., & Totzkay, D. (2019). We do publish (conceptual) replications (sometimes):
Publication trends in communication science, 2007–2016. Annals of the International
Communication Association, 1–15.
Kerr, N. L. (1998). HARKing: Hypothesizing after the results are known. Personality and
Social Psychology Review, 2(3), 196–217.
Kidwell, M. C., Lazarević, L. B., Baranski, E., Hardwicke, T. E., Piechowski, S., Falkenberg, L.-
S., … Nosek, B. A. (2016). Badges to acknowledge open practices: A simple, low-cost,
effective method for increasing transparency. PLOS Biology, 14(5), e1002456.
Klein, O., Hardwicke, T. E., Aust, F., Breuer, J., Danielsson, H., Hofelich Mohr, A., … Frank,
M. C. (2018). A practical guide for transparency in psychological science. Collabra:
Psychology, 4(1), 20.
Klein, R. A., Ratliff, K. A., Vianello, M., Adams, R. B., Bahník, Š., Bernstein, M. J., … Nosek,
B. A. (2014). Investigating variation in replicability: A “Many Labs” replication project.
Social Psychology, 45(3), 142–152.
Klein, R. A., Vianello, M., Hasselman, F., Adams, B. G., Adams, R. B., Alper, S., … Nosek, B.
A. (2018). Many Labs 2: Investigating variation in replicability across samples and
settings. Advances in Methods and Practices in Psychological Science, 1(4), 443–490.
Kunert, R. (2016). Internal conceptual replications do not increase independent replication
success. Psychonomic Bulletin & Review, 23(5), 1631–1638.
Lakomý, M., Hlavová, R., & Machackova, H. (2019). Open science and the science-society
relationship. Society, 56(3), 246–255.
Lane, B. L. (2018). Still too much of a good thing? The replication of Tong, Van Der Heide,
Langwell, and Walther (2008). Communication Studies, 69(3), 294–303.
LeBel, E. P., McCarthy, R. J., Earp, B. D., Elson, M., & Vanpaemel, W. (2018). A unified
framework to quantify the credibility of scientific findings. Advances in Methods and
Practices in Psychological Science, 1(3), 389–402.
Levine, T. R., Weber, R., Hullett, C., Park, H. S., & Lindsey, L. L. M. (2008). A critical
assessment of null hypothesis significance testing in quantitative communication
research. Human Communication Research, 34(2), 171–187.
Lewis, N. A. (2019). Open communication science: A primer on why and some
recommendations for how. Communication Methods and Measures, 1–12.
Matthes, J., Marquart, F., Naderer, B., Arendt, F., Schmuck, D., & Adam, K. (2015).
Questionable research practices in experimental communication research: A systematic
analysis from 1980 to 2013. Communication Methods and Measures, 9(4), 193–207.
McEwan, B., Carpenter, C. J., & Westerman, D. (2018). On replication in Communication
Science. Communication Studies, 69(3), 235–241.
McIntosh, R. D. (2017). Exploratory reports: A new article type for Cortex. Cortex, 96, A1–A4.
McKiernan, E. C., Bourne, P. E., Brown, C. T., Buck, S., Kenall, A., Lin, J., … Yarkoni, T.
(2016). How open science helps researchers succeed. ELife, 5, e16800.
Merton, R. K. (1974). The sociology of science: Theoretical and empirical investigations (4th
ed.). Chicago, IL: University of Chicago Press.
Munafò, M. R., Nosek, B. A., Bishop, D. V. M., Button, K. S., Chambers, C. D., Percie du Sert,
N., … Ioannidis, J. P. A. (2017). A manifesto for reproducible science. Nature Human
Behaviour, 1(1), 0021.
National Academy of Sciences. (2018). The science of science communication III: Inspiring
novel collaborations and building capacity: Proceedings of a colloquium (S. Olson, Ed.).
Nelson, L. D., Simmons, J. P., & Simonsohn, U. (2012). Let’s publish fewer papers.
Psychological Inquiry, 23(3), 291–293.
Nissen, S. B., Magidson, T., Gross, K., & Bergstrom, C. T. (2016). Publication bias and the
canonization of false facts. ELife, 5, e21451.
Nosek, B. A., Alter, G., Banks, G. C., Borsboom, D., Bowman, S. D., Breckler, S. J., …
Yarkoni, T. (2015). Promoting an open research culture. Science, 348(6242), 1422–1425.
Nosek, B. A., Ebersole, C. R., DeHaven, A. C., & Mellor, D. T. (2018). The preregistration
revolution. Proceedings of the National Academy of Sciences, 115(11), 2600–2606.
Nuijten, M. B., Hartgerink, C. H. J., van Assen, M. A. L. M., Epskamp, S., & Wicherts, J. M.
(2016). The prevalence of statistical reporting errors in psychology (1985–2013).
Behavior Research Methods, 48(4), 1205–1226.
O’Boyle, E. H., Banks, G. C., & Gonzalez-Mulé, E. (2017). The Chrysalis effect: How ugly
initial results metamorphosize into beautiful articles. Journal of Management, 43(2),
376–399.
Open Science Collaboration. (2015). Estimating the reproducibility of psychological science.
Science, 349(6251), aac4716.
Orben, A. (2019). A journal club to fix science. Nature, 573(7775), 465.
Orben, A., Dienlin, T., & Przybylski, A. K. (2019). Social media’s enduring effect on adolescent
life satisfaction. Proceedings of the National Academy of Sciences of the United States of
America, 116(21), 10226–10228.
Pew Research Center. (2019, August 2). Trust and mistrust in Americans’ views of scientific
experts. Retrieved from
Popper, K. (1935). Logik der Forschung / The Logic of Scientific Discovery. Springer /
Hutchinson & Co.
Quintana, D. (2019). Synthetic datasets: A non-technical primer for the biobehavioral sciences.
Rains, S. A., Levine, T. R., & Weber, R. (2018). Sixty years of quantitative communication
research summarized: Lessons from 149 meta-analyses. Annals of the International
Communication Association, 42(2), 105–124.
Retraction Watch. (2018, March 19). Caught Our Notice: Retraction eight as errors in Wansink
paper are “too voluminous” for a correction. Retrieved from
Rinke, E. M., & Schneider, F. M. (2018). Probabilistic misconceptions are pervasive among
Communication Researchers [Preprint].
Rocher, L., Hendrickx, J. M., & de Montjoye, Y.-A. (2019). Estimating the success of re-
identifications in incomplete datasets using generative models. Nature Communications,
10(1), 3069.
Rosenthal, R. (1979). The file drawer problem and tolerance for null results. Psychological
Bulletin, 86(3), 638–641.
Ruggiano, N., & Perry, T. E. (2019). Conducting secondary analysis of qualitative data: Should
we, can we, and how? Qualitative Social Work, 18(1), 81–97.
Schäfer, T., & Schwarz, M. A. (2019). The meaningfulness of effect sizes in psychological
research: Differences between sub-disciplines and the impact of potential biases.
Frontiers in Psychology, 10, 813.
Schönbrodt, F. D., Maier, M., Heene, M., & Zehetleitner, M. (2015). Commitment to research
transparency. Retrieved from
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed
flexibility in data collection and analysis allows presenting anything as significant.
Psychological Science, 22(11), 1359–1366.
Spence, P. R. (2019). Policy, practices and Communication Studies. The more things change….
Communication Studies, 70(1), 129–131.
Srivastava, S. (2012, September 27). A Pottery Barn rule for scientific journals. Retrieved from
The hardest science website:
Taneja, H. (2016). Using commercial audience measurement data in academic research.
Communication Methods and Measures, 10(2–3), 176–178.
Tong, S. T., Van Der Heide, B., Langwell, L., & Walther, J. B. (2008). Too much of a good
thing? The relationship between number of friends and interpersonal impressions on
Facebook. Journal of Computer-Mediated Communication, 13(3), 531–549.
Vanpaemel, W., Vermorgen, M., Deriemaecker, L., & Storms, G. (2015). Are we wasting a good
crisis? The availability of psychological research data after the storm. Collabra, 1(1).
Vazire, S. (2019, March). The credibility revolution in psychological science. Presented in
Trier, Germany.
Vermeulen, I., & Hartmann, T. (2015). Questionable research and publication practices in
communication science. Communication Methods and Measures, 9(4), 189–192.
Wagenmakers, E.-J., Wetzels, R., Borsboom, D., & van der Maas, H. L. J. (2011). Why
psychologists must change the way they analyze their data: The case of psi: Comment on
Bem (2011). Journal of Personality and Social Psychology, 100(3), 426–432.
Wilkinson, M. D., Dumontier, M., Aalbersberg, Ij. J., Appleton, G., Axton, M., Baak, A., …
Mons, B. (2016). The FAIR Guiding Principles for scientific data management and
stewardship. Scientific Data, 3(1), 160018.
Zhu, Y., & Fu, K. (2019). The relationship between interdisciplinarity and journal impact factor
in the field of Communication during 1997–2016. Journal of Communication, 69(3),