Replication in Criminology: A Necessary Practice
Accepted for publication in European Journal of Criminology, January 26, 2015
http://euc.sagepub.com/content/12/5/581
Susan McNeeley*
Justice Center for Research, Pennsylvania State University, State College, PA
Jessica J. Warner
School of Criminal Justice, University of Cincinnati, Cincinnati, OH
*Please address any correspondence to: Susan McNeeley, Postdoctoral Scholar, Justice Center
for Research, Pennsylvania State University, 327 Pond Building, University Park, PA 16802,
Phone: (814) 867-3292, Email: smm70@psu.edu. An earlier version of this paper was presented
at the 2013 Academy of Criminal Justice Sciences Annual Meeting in Dallas, TX. The authors
would like to thank Francis T. Cullen and John H. Kramer for their helpful comments on earlier
drafts of the paper.
Replication in Criminology: A Necessary Practice
Although researchers acknowledge the importance of replication in building scientific
knowledge, replication studies seem to be published infrequently. The present study examines
the extent to which replications are conducted in criminology. We conduct a content analysis of
the five most influential journals in criminology. We also compare the replication rate in
criminology with that in the social sciences and natural sciences. The results show that
replication research is rarely published in these disciplines. In criminology journals in particular,
replication studies constitute just over 2 percent of the articles published between 2006 and 2010.
Further, those replication studies that were published in criminology journals in that period
tended to conflict with the original studies. These findings call into question the utility of
empirical results published in criminology journals for developing theory and policy. Strategies
for promoting replication research in criminology are suggested.
Introduction
Replication is an essential component of the scientific process. Scholars agree that the
results of any empirical study cannot be fully trusted until they have been replicated in additional
studies. The concept of replication is so familiar to the scientific community that its meaning
seems obvious, making formal, agreed-upon definitions rare. Schweizer (1989) compiled a list of
definitions for replication in the social sciences, most of which include a component regarding the
reproduction of an experimental procedure; additionally, many definitions stress the use of
different, independently collected data.
It is important to note that not all studies should necessarily be expected to replicate. For
example, ethnographies and other qualitative work or work with a historical focus often examine
specific groups and contexts without making claims about the generalizability of any observations
made. Certainly, some criminological research meets these criteria and need not be subjected to
replication. However, a great deal of criminology research is meant to test general
theories or evaluate interventions and initiatives, and these types of studies are in need of
replication. Additionally, because criminal justice is an applied field, findings from research
studies may be applied by criminal justice agencies. Policy and practice should not be guided by
information that has not been properly validated; the greater degree of confidence provided by
replication is therefore needed. For this reason, replication of highly publicized studies is
even more important, as such research is more likely to come to the attention of policymakers and
practitioners. Further, replication would allow practitioners to have more confidence in the
original findings, lending credibility to the research process and making the integration of research
results into policies and practice more attractive.
There have been important replications conducted in criminology research that shed light
on the importance of replicating findings before drawing policy implications. First, early studies
showed that arrests were more likely to occur when police arrived at a crime scene shortly after a
call for service was made (Isaacs, 1967; Clawson and Chang, 1977). However, subsequent
replications (e.g., Pate et al., 1976; Spelman and Brown, 1984) failed to find a significant
relationship between response times and arrest, or found that the relationship only held for certain
types of crime, such as those involving direct contact between the victim and offender. Second,
in the Minneapolis Domestic Violence study, Sherman and Berk (1984) examined the effectiveness
of multiple police responses to domestic violence calls. Their findings showed that offenders who
were arrested were less likely to re-offend than were those who were given counseling or
temporarily sent away. Several replications of the study, some of them NIJ-funded, have since
been conducted. Many of these failed to support the findings of the original experiment (e.g.,
Sherman et al., 1991; Hirschel et al., 1992; Dunford, 1992); those that did find results in the same
direction reported weaker relationships between arrest and repeat offending than did the original
study (e.g., Pate, Hamilton, and Annan, 1991; Berk et al., 1992). These results demonstrate the
potential unreliability of single studies; the findings of some studies may be sensitive to the context
in which they are conducted or the specific research methods used. This highlights the importance
of replicating criminological studies in an effort to provide more reliable results on which to base
policy and practice. While these examples are well known and the importance of replication is
largely agreed upon, it is currently unknown whether the results reported in criminology journals
suffer from a lack of replication.
Due to the importance of replication and the concern over the frequency of replication
research, we conduct an exploratory study that examines the number of replication studies that
have recently been published in top-tier criminology and criminal justice journals. By doing so,
we hope to provide initial insight regarding the extent to which the findings published in
criminology journals can be considered reliable. Additionally, we extend previous research by
comparing across broad disciplines (natural science, social science, and criminology) to gain
additional understanding of replication in criminology journals relative to other journals. Finally,
because previous research suggests that replication studies are more likely to be published when
they contradict the findings of the study they replicate (Hubbard and Armstrong, 1994; Hubbard
and Vetter, 1996; Madden, Easley, and Dunn, 1995; Neuliep and Crandall, 1990; Reid, Soley, and
Wimmer, 1981; Wilson, Smoke, and Martin, 1973), we compare the results of the criminology
replication studies we identify to the findings of the studies they replicate.
Performance and publication of replication research
Replication is an important part of the scientific process because it establishes stability in
our knowledge of nature by using repeat testing to rule out the possibility that results are merely
due to coincidence (Popper, 1959; Radder, 1996). It thereby increases the level of confidence one
can have in a set of empirical generalizations (Rosenthal, 1991). Replication also functions as a
norm to differentiate between scientific and unscientific claims; statements that cannot be
supported by replication can be determined unscientific (Radder, 1996). An additional benefit of
replication, according to Schmidt (2009), is its ability to address possible issues with internal and
external validity. The replication of studies serves as a control for sampling error, artifacts, and
academic fraud. It also allows findings to be generalized to a larger or different population than
was used in the original study.
Scholars in several disciplines, notably social sciences such as business, finance,
marketing, and organization, have addressed the issue of publication of replication studies. In
general, the conclusion from this line of inquiry is that there may be a bias against replication
research. The authors attribute this bias to academia’s focus on uniqueness: studies that are not
seen to contribute unique information are not as highly valued as those that provide new
information (Mone and McKinley, 1993); in addition, studies that examine a new and different
topic are considered more “interesting,” and therefore more worthy (Hagan, 1973).
This tendency has also been recognized in criminology research. In his 2010 Sutherland
Address, Francis T. Cullen (2011) described the state of the field as “adolescence-limited criminology,”
which is, in part, characterized by an emphasis on the creation of new theories rather than the
development of knowledge that has already been generated. According to Cullen, criminology
journals reward single studies with publication, especially if those studies develop and test a new
theory or examine an understudied type of crime or population. He argued that the results of
existing studies are considered “old news;” therefore, the relationships examined therein are taken
for granted and are not further examined. The emphasis on originality, the devaluation of
replication research it creates, and the subsequent potential inaccuracy of some published research
may be detrimental to knowledge development and cumulation, as well as to the development of
policy. This potential harm is exacerbated by the need for scholars to secure as many publications
as possible. The need to publish many studies leads scholars to avoid research that is unlikely to
be accepted for publication, regardless of the potential benefit to theory or practice that would be
provided by such work.
This focus on new information and the subsequent potential bias against replication
research among academic journals is especially problematic given the importance of publication
in academic journals for career advancement. When making decisions on hiring, tenure, or
promotion, university committees place a great deal of emphasis on how frequently candidates are
able to publish their work in scholarly journals, and also consider the level of the journals in which
their research is presented. This causes scholars to gauge the possibility of publishing their work
in a prestigious journal before embarking on research projects. Because of the necessity of
publishing multiple articles in top-tier academic journals, they are hesitant to spend valuable time
engaging in research that is unlikely to be published, such as replications of earlier work.
Therefore, the extent to which replication research is rewarded with publication in prestigious
academic journals is important for understanding whether research published in academic journals
is being replicated.
Submission of replication studies to academic journals
Existing evidence suggests that scholars in many fields may be hesitant to engage in
replication research. Conducting replication research may not be considered an effort that is likely
to enhance one’s career. As will be discussed in the following sections, the most tangible and
career-enhancing reward of conducting research, publication, may be unlikely to result from
replication research. In fact, journal editors interviewed by Madden and colleagues (1995)
believed that faculty members whose research consisted of many replications were unlikely to
receive tenure, as it signals a lack of scholarly creativity or originality. It is therefore unsurprising
that replication studies are rarely submitted to journals for publication. For example, Neuliep and
Crandall (1990) surveyed 288 current and former editors of journals in the social and behavioral
sciences. Almost half of these editors (42%) reported that they had never received a direct
replication for review. Additionally, Madden and colleagues’ (1995) interviews with editors in
the social and natural sciences found similar results; editors in both disciplines reported receiving
very few replications.
Acceptance of replication studies
Even when replications are submitted, interviews with editors and reviewers suggest that
these studies may be unlikely to be accepted for publication. According to Kerr, Tolliver, and
Petree’s survey of reviewers of management journals (1977), reviewers are unlikely to recommend
publication for papers that are merely replications of earlier studies. A survey of reviewers of
psychology journals showed a similar reluctance to publish direct replications if they added no
new dimension to theory (Rowney and Zenisek, 1980). Editors report that replication studies are
unlikely to be accepted unless they represent an extension through the addition of new variables
or presentation of novel findings (Madden et al., 1995; Neuliep and Crandall, 1990). A notable
exception, according to the editors, is when articles report replications conducted in controversial
areas; in that case, replication studies are considered important contributions to the field.
While the findings discussed above were generally consistent across both natural and social
science journals, the willingness to publish replication research may vary by academic discipline.
According to Madden et al. (1995), editors in the natural sciences reported publishing replications
less often than did social science editors. This may be due to the nature of research in that
discipline; there is less need for literal replication in the natural sciences because all work follows
closely from earlier related work, making replication a built-in part of the research process. In
contrast, the social science editors reported that they would be willing to consider publishing more
replication studies, if such studies were submitted to their journals more often. They were also
more likely than editors of natural science journals to be concerned about the absence of
replication, and they reported a desire to encourage replication research for that reason. However,
this concern does not seem to have translated into practice; in another study, only 5% of editors
said that they explicitly encouraged the submission of replication studies (Neuliep and Crandall,
1990).
Evidence from content analyses
The conclusions drawn from comments by journal editors have been supported by results
of previous content analyses. Several studies, mostly focusing on the social sciences, have shown
that replications are published infrequently. In a review of psychology journals, Sterling (1959)
found that, out of 362 research articles published in a one-year period, none were replications of
earlier work. A later study analyzing psychology journals found that only 1.6% of studies
contained the word “replication,” while 1.07% actually replicated earlier studies (Makel, Plucker,
and Hegarty, 2012). Wilson and colleagues’ (1973) analysis of sociology journals found only 4
replications out of 76 articles (5.2%); when examining variable relationships within journal
articles, 6.6% were replications of earlier work. In a review of journals in advertising, marketing,
and communication, Reid et al. (1981) reported that only 30 of 501 articles (approximately 6%)
could be classified as replications. Another study examining journals in various business
disciplines found no true replications; replications with extensions represented 5-10% of the
articles, depending on the specific discipline (Hubbard and Vetter, 1996). Similarly, Hubbard and
Armstrong (1994) found no true replications in three major marketing journals, and reported that
extensions constituted only 1.1% of the journal space. In addition, the publication rate of
replications in marketing journals had declined since the 1970s. According to Horton and
colleagues (1993), of the 130 studies published in a five-year period in the Journal of Research in
Science Teaching, only 3% were intended by the author to be a direct replication of an earlier
study. Finally, Kelly’s examination of the three top journals in behavioral ecology (2006) found
no true replications, with most studies categorized as quasi-replications.
Failure to replicate
There is growing recognition across disciplines that the findings of published studies often
do not replicate, calling into question the accuracy of research reported in academic journals. The
results of some studies may not be replicable for many reasons: contextual differences in the
locations or environments in which the studies were conducted, an inability to exactly imitate the
original study’s methods, or even scientific fraud – a meta-analysis of surveys of scientists shows
that almost 2% of scholars have fabricated, falsified, or modified their data (Fanelli, 2009).
According to Lehrer (2010), research suffers from the “decline effect,” in which results of single
studies are significant by chance alone and replications of the study fail to find significant results
due to regression to the mean. In addition, single studies may be unreliable due to “significance
chasing” (Ioannidis, 2005), or the practice of re-analyzing data to find statistical significance due
to the difficulty of publishing null results. Similar to Rosenthal’s (1979) discussion of the “file
drawer problem,” in which studies with null findings are never published, results of studies that
fail to replicate earlier results may not be known, masking the existence of Type I errors published
in academic journals.
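To make the logic of the file drawer problem concrete, the following is a minimal simulation sketch (our illustration, not an analysis from any of the studies cited above). It assumes a pool of 10,000 hypothetical two-group studies in which the true effect is zero; only studies reaching p < .05 are treated as "published," and each published result is then subjected to one independent replication.

```python
# Minimal sketch of the file drawer problem under an assumed true null effect:
# only "significant" results are published, and replications of those chance
# findings rarely reach significance again.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_studies, n_per_group = 10_000, 30   # hypothetical study pool and group size

published, replicated = 0, 0
for _ in range(n_studies):
    # Original study: two groups drawn from the same distribution (true null).
    a, b = rng.normal(size=(2, n_per_group))
    _, p_orig = stats.ttest_ind(a, b)
    if p_orig < 0.05:                  # only significant results are "published"
        published += 1
        # One independent replication of the published finding.
        a2, b2 = rng.normal(size=(2, n_per_group))
        _, p_rep = stats.ttest_ind(a2, b2)
        if p_rep < 0.05:
            replicated += 1

print(f"'Published' findings (all Type I errors): {published}")   # roughly 5% of 10,000
print(f"Replications that are also significant:  {replicated}")   # roughly 5% of those
```

Because the published record contains only the chance findings, a reader who never sees the failed replications (or the unpublished null results) has no way to detect that this portion of the literature is built on Type I errors.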
Consistent with the perceptions of editors and reviewers discussed in the sections above,
content analyses support the idea that replications are more likely to be considered for publication
if their results contradict the original study. Of the 30 replication studies in Reid et al.’s (1981)
study, 40% supported the original, 20% provided partial support, and 40% conflicted with earlier
results. Hubbard and Armstrong (1994) found that only 3 of 20 extensions published in marketing
journals provided full confirmation for the original studies. In Hubbard and Vetter’s (1996)
analysis, approximately 27% of the replication studies supported the original findings; 45.5% of
the replication studies provided conflicting results and 27.4% were only able to partially confirm
the earlier study. Similarly, Wilson et al.’s (1973) analysis of replications in sociology found that
over a quarter of the replicated variable relationships that were examined reversed the original
results (either from significant to non-significant or vice versa).
Whatever the reason for the inconsistency in replicated studies, it is apparent that many
disciplines suffer from a potential for errors that can be alleviated by replicating important studies.
However, it is currently unknown whether and to what degree this problem applies to findings
reported in criminology journals.
The current study
The research reviewed above demonstrates the infrequency with which replication studies
have been published in several different disciplines. Scholars have noted the problems that arise
from a lack of replication; most notably, it is unknown whether findings published in reputable
journals are reliable. In the current study, we attempt to extend this literature to the field of
criminology by exploring how criminology journals compare to journals in other disciplines in
regards to the publication of replication studies.
Following the literature described above, the following four hypotheses were developed.
First, content analyses of journals in other fields suggest that replication is rare; therefore, we
expect to find a small percentage of articles in criminology journals that qualify as replications.
Second, we expect to find a larger percentage of replication studies in social science journals than
in natural science journals. As mentioned previously, editors of journals in the social sciences are
more concerned about replication research, leading to the conclusion that such journals may
contain more published replications than do journals in the natural sciences. Third, we compare
criminology to the broader social sciences. It is unclear whether to expect criminology journals to
contain more or less replication research. On the one hand, due to the practicality of research
published in criminology and criminal justice journals, stemming from its use in criminal justice
policy, one could expect a greater proportion of replication studies in such journals than in the
social sciences in general. On the other hand, because of the relative youth of the field of
criminology and of criminology journals, the development of new theories may be emphasized
more greatly in criminology journals, leading to less replication research than is conducted in the
social sciences. Fourth, based on the results of previous content analyses and the preferences of
editors and reviewers, it is expected that a majority of replication studies in criminology will not
provide support for the studies they replicate. As discussed above, previous research indicates that
replication studies are more likely to be published when they contradict earlier findings.
Sample
This study begins to address the research questions by examining highly influential
journals in each field. Journals were selected using the Journal Citation Reports® (JCR), an online
resource that provides citation data for various journals. There are two editions offered: JCR
Science Edition and JCR Social Science Edition. Both editions provide multiple measures of a
journal’s impact on science, such as the journal impact factor, five-year journal impact factor, and
aggregate impact factor. We used the rankings obtained from the Article Influence Score (AIS).
Although the Impact Factor is the more common metric when considering journal influence, the
AIS ranking was chosen because it adjusts for citations of articles published in the same journal
and controls for the number of articles in each journal issue (Thomson Reuters, 2013). We felt the
latter adjustment was very important for our purposes, as the number of articles published per issue
varies substantially across journals and disciplines.
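As a rough sketch of why the AIS makes this adjustment (based on the published JCR documentation rather than anything reported in the present study), the score divides a journal's Eigenfactor Score by that journal's share of all indexed articles over the five-year citation window, scaled so that the average article in the database has a score of approximately 1.0:

$$\text{AIS} \approx \frac{0.01 \times \text{Eigenfactor Score}}{\text{journal's five-year share of all indexed articles}}$$

A journal that publishes many articles therefore cannot obtain a high AIS through volume alone, because its total citation influence is spread over its article count.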
A list of the journals included and their corresponding AIS can be found in Table 1.¹
Initially, the top five journals were selected from JCR Science, JCR Social Science, and the JCR
Social Science criminal justice category. However, some of these journals were removed from the
sample because they did not contain original work; instead, they consisted of reviews of research.
This decision was not meant to discount the contributions of reviews, but rather to align with our
definition of replication presented below; journals that solely include reviews would necessarily
not contain any studies that replicate earlier work. Journals that included a combination of original
research and reviews were included in the study, but reviews were left out of calculations of the
number of articles in the journal and rates of replication. Replacement journals were selected from
additional rankings.

¹ Because of the use of citation indices, which privilege journals located in areas with larger numbers of
researchers, the sample, especially in regards to criminology, is primarily made up of American journals in
which a majority of articles are authored by American scholars. Therefore, our results may not be
representative of research conducted by scholars of different nationalities or the publication practices of
prestigious journals housed in other countries.

[Insert Table 1 here]
Defining replication
Before exploring replication publication rates in academic journals, it was necessary to
adopt a formal definition of replication. To do so, a substantial review of the social science
literature was conducted to determine how replication has previously been defined. While there
are many definitions and interpretations of replication, for this study, the approach to identifying
replication studies was based on the work of Fuess (1996): rather than classifying studies ex post,
meaning that the articles were reviewed and classified according to established criteria, studies
were categorized a priori (as identified by the original authors) in order to better address the
authors’ research questions and intent. Thus, the abstract, introduction, methods section, and
conclusion sections of each article were reviewed to determine whether the author(s) described the
study as a re-analysis of earlier work that did not make major changes to the measurement or
analysis used in the original study.
Using this definition, studies were placed into one of two categories: original work or
replication. For a study to be classified in the “replication” category, the author(s) must have
referred to at least one previous research study and presented their research as repeating the
methods of the earlier study, although we allowed for very minor changes to the measurement or
analysis. We also counted as replications studies in which extensions to theory or measurement
were also made, so long as the author(s) first conducted a replication of earlier work and presented
this as a component of the study. Within the category of replication, there are two divisions: direct
replication and empirical generalization. Direct replications use the same methods and
measurements on the same population to determine the internal validity of the study being
replicated. Empirical generalizations use the same methods and measurements on a different
population to determine the generalizability of original findings.
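The decision rule described above can be summarized schematically as follows. This is our illustration of the coding logic, not the authors' actual coding instrument, and the two boolean inputs stand in for judgments a coder would make from each article's text.

```python
# Schematic sketch of the a priori classification rule (hypothetical helper,
# not the study's coding instrument).
from enum import Enum

class Category(Enum):
    ORIGINAL = "original work"
    DIRECT_REPLICATION = "direct replication"
    EMPIRICAL_GENERALIZATION = "empirical generalization"

def classify(presented_as_replication: bool, same_population: bool) -> Category:
    """An article counts as a replication only if its authors present it as
    repeating an earlier study's methods with at most minor changes; the
    population studied then distinguishes the two replication subtypes."""
    if not presented_as_replication:
        return Category.ORIGINAL
    if same_population:
        return Category.DIRECT_REPLICATION
    return Category.EMPIRICAL_GENERALIZATION

# Example: a study repeating an earlier design on a different population.
print(classify(presented_as_replication=True, same_population=False).value)
# -> empirical generalization
```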
Coding
Every volume published between 2006 and 2010 was reviewed and coded for the following
information: number of articles in the issue, number of original articles, number of replication
articles, and number of empirical generalization articles. Reviews of each article included a
thorough read of the abstract, main body, and any footnotes. After jointly coding one volume of
a criminal justice journal to ensure reliable coding procedures, the authors divided the set of
journals to review. Additionally, each author coded randomly-selected issues from each category
of the other author’s list of journals in order to verify inter-rater reliability. Agreement between
the coders was 100%. Once replications published in criminology journals had been identified,
these studies were examined to determine the extent to which the findings presented were
consistent with the findings of the studies that they sought to replicate. This was determined using
the discussion provided by the authors of the replication study, allowing us to assess how authors of
published replication studies presented their findings relative to the original studies.
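As a minimal sketch of the reliability check (using hypothetical labels rather than the study's actual coding data), cross-coder agreement can be computed as simple percent agreement over the jointly coded articles; more formal indices such as Cohen's kappa could be substituted without changing the logic.

```python
# Percent agreement between two coders over the same set of articles
# (hypothetical labels for illustration only).
coder_a = ["original", "original", "direct", "original", "generalization", "original"]
coder_b = ["original", "original", "direct", "original", "generalization", "original"]

agreement = sum(x == y for x, y in zip(coder_a, coder_b)) / len(coder_a)
print(f"Percent agreement: {agreement:.0%}")  # 100% when the codings are identical
```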
Results
First, we examined the frequency of replication in criminology journals (see Table 2). A total of
724 articles were published in these journals between 2006 and 2010. Six of these articles (.87%) were
categorized as direct replications, which replicate the methods of an earlier study using the same
population. Another ten (1.45%) were classified as empirical generalizations, which replicate the
methods of an earlier study but focus on a different population. Therefore, 2.32% of the articles
published in the top five criminology journals were some form of replication.
[INSERT TABLE 2 HERE]
Next, we analyzed journals in the social and natural sciences. Replication in the broader
social sciences was somewhat more common than that in criminology; 2.82% of the articles
published in the top social science journals were categorized as replications. Out of 1,559 social
science articles published between 2006 and 2010, 18 (1.15%) were direct replications of earlier
studies and 26 (1.67%) were empirical generalizations. As hypothesized, a smaller percentage of
replication studies were published in the natural sciences (see Madden et al., 1995). Of 6,637
natural science articles presenting original research, 53 (.80%) were direct replications and 41
(.62%) were empirical generalizations. The difference across disciplines was statistically
significant (χ2 = 21.09, p < .001).
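The cross-disciplinary comparison can be reproduced from the counts reported in Table 2. The sketch below is our reconstruction, not the authors' code; it assumes the chi-square test was run on the 3 x 3 table of direct replications, empirical generalizations, and original research (reviews excluded).

```python
# Reconstructing the Table 2 comparison with scipy.
from scipy.stats import chi2_contingency

counts = [
    [6, 10, 675],     # criminal justice: direct, generalization, original
    [18, 26, 1515],   # social science
    [53, 41, 6543],   # natural science
]
chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.4f}")   # chi2 ≈ 21.1, p < .001

# Criminology replication rate: (direct + generalizations) / all coded articles
rate = (counts[0][0] + counts[0][1]) / sum(counts[0])
print(f"Criminology replication rate: {rate:.2%}")     # ≈ 2.3%
```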
We then examined the findings of the sixteen studies published in criminal justice journals
that were categorized as either direct replications or empirical generalizations. Specifically, we
noted whether the findings of these studies supported or contradicted the findings of the original
studies (see Table 3). While the small number of replication studies makes these results difficult
to interpret, it appears that direct replications that were published in these journals tended to
contradict the findings of the original study. Only two of the six direct replications reported
findings consistent with the study they replicated, while the others contradicted or partially
contradicted the findings of the original studies. Conversely, six of the ten empirical
generalizations, which replicated an earlier study but were concerned with applying the findings
to a different population, were consistent with the original study, with the other four reporting
results that were inconsistent or partially inconsistent with the original study. This is not surprising,
as reviewers and editors are likely to consider these studies unique contributions to the field, since
such research provides a new perspective on an existing theory or observation.
[INSERT TABLE 3 HERE]
Discussion
Due to the importance of replication of quantitative criminological work for the creation
of criminal justice policy, the current study aims to provide initial insight regarding the frequency
with which replications are published in criminology journals. Previous content analyses in other
disciplines found replication rates between three and ten percent in prestigious journals, with most
studies finding replication rates around five percent. Similarly, our results show that replication
studies are not published frequently in the most influential criminology journals. Direct
replications made up less than one percent of the articles published in the top criminology journals
between 2006 and 2010. When including empirical generalizations, the replication rate in these
criminology journals during this time period was 2.3%. It is important to note that criminology
emerged as a separate discipline relatively recently; therefore, there may be a greater emphasis on
developing and testing new theories within criminology than there is in other disciplines.
It has been suggested that replication is rare in the social sciences due to an emphasis on
originality for journal publications and universities’ emphasis on publication as an indicator of
scholarly productivity. For example, we found that empirical generalizations were more likely
than direct replications to appear in journals in the social sciences or in criminology. A study
extending previous findings to a new population may be seen as more original than a direct
replication, increasing the likelihood of publication. Relatedly, while this was not directly tested
in our study, many of the social science or criminology studies categorized as direct replications
also presented an extension of the earlier work, perhaps providing necessary material for
publication.
Previous research shows that replication studies are more likely to be published if the
findings contradict the original study, as such articles may be considered a more meaningful
contribution to the literature (e.g., Hubbard and Vetter, 1996; Reid et al., 1981; Wilson et al.,
1973). The results of this study, while not conclusive, may provide some preliminary support for
this idea. We found that direct replications tended not to provide support for the findings of the
original study. However, we cannot say whether the lack of support provided by replication studies
reflects a publication bias against supportive replications, as suggested by previous research (Kerr et
al., 1977; Neuliep and Crandall, 1990; Madden et al., 1995), or the existence of Type I errors in published
research. If the latter were true, more replication research would be needed to prevent the
dissemination of erroneous findings in top-tier criminology journals. Importantly, if the results of
individual studies published in criminology journals were unreliable, theories based on these
studies would be limited, and further research building upon their results would be misguided.
Policy implications
Caution may be needed when attempting to create policy or programs based on the results
of academic research published in top criminal justice journals. Because of the applied nature of
criminology and criminal justice, the need for replication is greater than in some other social
science fields; policies based on unreplicated and erroneous studies could potentially waste a great
deal of money and even cost lives. Instead, policy and practice should be based on consistent and
replicated research rather than the results of individual studies.
Single studies often receive a great deal of attention, and the public often argues that the
results warrant changes to policy or practice. For example, after the Minneapolis Domestic Violence
study (Sherman and Berk, 1984), many police departments considered adopting policies that
mandate arrest for domestic violence offenders. However, the results of later replications (e.g.,
Pate et al., 1991; Sherman et al., 1991; Berk et al., 1992; Hirschel et al., 1992; Dunford, 1992)
suggest that such policy changes may not be advisable. A more recent example is Danziger, Levav,
and Avnaim-Pesso’s study of parole decisions (2011), in which parole was more likely to be
granted by parole boards at the beginning of the work day or after a meal break; the authors
concluded that decision-makers are susceptible to psychological biases. This finding initially
received a great deal of attention from various media outlets despite a lack of replication, with
calls for policy changes to minimize human error (see, for example, Melnick, 2011). However, it
has since been argued that the patterns observed by Danziger and colleagues were erroneous, as
cases seen by parole boards are not ordered randomly (Weinshall-Margell and Shapard, 2011);
these questions call for further replication of the study to examine the extent to which extraneous
factors influence the decisions made by judges.
If criminologists are not encouraged to produce replications of earlier work, the validity of
the results of available studies (i.e., those published in academic journals) is unknown.
Additionally, our results suggest that there are occasions when the findings and conclusions of
original studies could be challenged by conflicting findings from replication studies. To avoid the
implementation of ineffective and potentially erroneous policies or practices, criminal justice
professionals should conduct thorough literature reviews to find multiple evaluations of a
particular program or strategy prior to facilitating organizational change. Additionally,
practitioners should give particular consideration to meta-analytic findings, as these studies
incorporate results of multiple previous studies.
Limitations and suggestions for future research
As with all research, there are limitations of this study that must be addressed. First, the
definition of replication used in this study required that the author(s) of an article describe the
study as a replication of earlier work. We recognize that this definition of replication may be
conservative in nature. Based on the a priori definition, it is likely that fewer studies were
classified as replications than if they had been classified ex post. Another possible avenue would
have been to examine replication of findings rather than entire studies (e.g., Wilson et al., 1973).
However, we contend that there are several advantages to our definition of replication. First, the a
priori classification speaks to the authors’ intentions to engage in replication research. Second, because
replication was directly and explicitly tied to the purpose or goal of each study, our results can
speak to the preferences of editors and reviewers in accepting replication studies for publication.
Third, the definition of replication employed in this paper does not require that the coders be
experts in the topic of the study, allowing for a broad range of research to be included in the
analysis.
Second, due to the exploratory nature of our inquiry, our sample is limited in a few ways
that reduce the generalizability of our findings. For instance, the sample only includes articles
published in a five-year period. More notably, like previous research on replication in other
disciplines, our sample only consists of the top journals in each discipline. It is possible that top-
tier journals place more emphasis on uniqueness than lower journals, making lower journals,
especially those with a stronger policy focus, a better outlet for replication studies. In addition,
replication studies may also be available in government or technical reports. Therefore, replication
research may be conducted and published more frequently than our findings suggest. Scholars
interested in extending this topic should examine a wider range of criminology journals and other
sources of scientific work. However, because it is more important for promotion and tenure for
scholars to publish in prestigious journals, we contend that the absence of replications in
prestigious journals decreases the willingness of scholars to engage in replication research. Also,
the increased visibility of academic journals, especially top-tier journals, makes it less likely that
replication research in lower journals or in government or technical reports will be widely seen.
Therefore, scholars and practitioners may be aware of findings that were published in prestigious
journals but unaware of failure to replicate these findings, which may hinder the development of
theory or policy.
Third, because the selection of journals for the social science category may not be
representative of the social sciences field, our cross-disciplinary comparison is limited.
Specifically, four of the journals were economics or finance journals, in which replicability of
quantitative analysis would be seen as more important than in other social science fields that are
more dominated by historical or qualitative analysis. The fifth was a psychiatry journal, which
also may not be representative of other social science disciplines such as sociology or political
science. However, psychiatry may be more similar in content to the criminal justice journals of
interest than finance or economics. In other words, a review of human service journals might lead
to different findings in describing rates of replication publication for social sciences as well as in
comparison to criminal justice journals.
Fourth, because we analyzed published articles rather than submitted articles, the
conclusions that can be drawn from the findings of our study are limited. While the results show
that replications are unlikely to be published in criminology journals, we are unable to determine
the reason for their absence. It is unclear from our analysis whether few replications are published
because scholars choose not to perform replication research or whether conducted replications are
stuck in researchers’ file drawers (see Rosenthal, 1979) because editors choose not to publish
studies that replicate earlier work without providing new contributions. Future research should
attempt to determine the extent to which publication biases exist in criminology, and how any such
biases relate to the publication of replication studies. One such approach would be to seek out the
assistance of top-tier journal editors in obtaining all manuscripts submitted for review and analyze
the proportion of submitted manuscripts that contain replications of earlier work.
Finally, another direction for future research is to examine the ways that solicitations for
funding encourage or discourage replication studies. We suspect that existing grant funding is
generally awarded to scholars proposing new research, unless they are conducting replications in
controversial areas. Because external funding for research is as important for hiring, promotion,
and tenure as publication, a bias against replication in this arena could further discourage scholars
from replicating existing work.
Developing a replication tradition in criminology
Our results, while limited, suggest that it is possible that scholars in the field of criminology
may need encouragement to engage in replication research. There are several practical steps that
could be taken to promote the use of replication of applicable studies in criminology. First, grant
solicitations and calls for research could specifically target replication research; these could be
supported by special foundations or specific solicitations by grant-awarding bodies (see Mayer,
1980). This type of monetary support would encourage replication research not only by providing
opportunity and funds for the study, but also by rewarding scholars who conduct replications
with a stronger potential for tenure and promotion based on the funding they secure for their work.
As an extension of this recommendation, funding sources could also require either internal
replication as a condition of the award or require that authors submit and release the data to the
funding sources for others to reanalyze.
Second, the editorial policies of top journals could be modified to facilitate more
replications. Criminology journals could require or encourage authors to maintain their raw data
and descriptions of their procedures to provide scholars interested in replicating their studies with
the necessary information to do so (see Hubbard and Armstrong, 1994). In addition, criminology
journals could feature blogs for each article where authors could post the results of replications
of each study, increasing the visibility of replication research. Editors of top-tier criminology
journals could also organize special issues for replication studies or issue calls for papers that
replicate existing work (e.g., Madden et al., 1995). Alternatively, as scholars in other disciplines
have recommended (see Hubbard and Armstrong, 1994; Hubbard and Vetter, 1996), replication
research could be encouraged through journals dedicated to the publication of replication studies,
and/or a special section within existing journals in which a small number of replication studies
could be featured. Hubbard and Armstrong (1994) suggest editorial boards appoint a separate
“replication editor” to oversee this section, ensuring that replications of important articles are
published. They also suggest methods for determining which articles are “important” and
therefore should be replicated, such as identifying highly-cited articles or surveying scholars and
practitioners.
A third approach that could increase replication of criminology research is to incorporate
replication into education. While replication of existing studies has been implemented by
professors of undergraduate research methods courses in some psychology programs (Frank and
Saxe, 2012), it may be more feasible for replication research to be conducted by graduate students.
An incorporation of replication research into graduate education would not only increase the
number of replication studies conducted but would also provide beneficial research experience to
students in criminology programs and increase their scholarly ability and productivity. As
suggested by Reid et al. (1981), graduate students could be encouraged to conduct replications of
published work for their dissertations or other research. Additionally, replication could be a
mandatory part of graduate education; for example, graduate students in some econometrics and
marketing programs are required to replicate and extend a published empirical study as a class or
degree requirement (Dewald et al., 1986; Hubbard, Brodie, and Armstrong, 1992; Feigenbaum and
Levy, 1993). As discussed above, there is a perception among scholars that replication research
is not rewarding. A requirement in graduate school to conduct replications of earlier work would
increase the replication of criminology studies in spite of the lack of popularity of such research.
Whether required or optional, replication research conducted by graduate students should be made
public; for example, unpublished replications could be available on the program’s website and data
could be submitted to data housing organizations such as ICPSR.
If further inquiry into the replication of quantitative criminology research demonstrates a
need to increase the use of replication in criminology, we recommend a combination of explicit
encouragement of replication studies in criminology journals and incorporation of replication into
criminology education. We feel that an emphasis on replication during one’s scholarly training,
along with incentives associated with an increased ability to publish replication research, may
improve the perception of replication research among criminologists, thereby further increasing
replication of applicable findings in criminology.
References
Berk RA, Campbell A, Klap R and Western B (1992) A Bayesian analysis of the Colorado
Spouse Abuse Experiment. Journal of Criminal Law and Criminology 83(1): 170-200.
Clawson C and Chang SK (1977) The relationship of response delays and arrest rates. Journal
of Police Science and Administration 5(1): 53-68.
Cullen FT (2011) Beyond adolescence-limited criminology: Choosing our future – the American
Society of Criminology 2010 Sutherland Address. Criminology 49(2): 287-330.
Danziger S, Levav J and Avnaim-Pesso L (2011) Extraneous factors in judicial decisions. PNAS
108(17): 6889-6892.
Dewald WG, Thursby JG and Anderson RG (1986) Replication in empirical economics: The
Journal of Money, Credit, and Banking Project. American Economic Review 76(4): 587-
603.
Dunford FW (1992) The measurement of recidivism in cases of spouse assault. Journal of
Criminal Law and Criminology 83(1): 120-136.
Fanelli D (2009) How many scientists fabricate and falsify research? A systematic review and
meta-analysis of survey data. PLoS ONE 4(5): e5738.
Feigenbaum S and Levy DM (1993) The market for (ir)reproducible econometrics. Social
Epistemology 7(3): 215-232.
Frank MC and Saxe R (2012) Teaching replication. Perspectives on Psychological Science 7(6):
600-604.
Fuess SM (1996) On replication in business and economics research. Quarterly Journal of
Business and Economics 35(2): 3-13.
Hagan J (1973) Labeling and deviance: A case study in the “sociology of the interesting.”
Social Problems 20(4): 447-458.
Hirschel JD, Hutchison IW, Dean CW, Kelley JJ and Pesackis CE (1992). Charlotte spouse
assault replication project: Final report. Charlotte, NC: University of North Carolina at
Charlotte.
Horton PB, McConney AA, Woods AL, Barry K, Krout HL and Doyle BK (1993) A content
analysis of research published in the Journal of Research in Science Teaching from 1985
through 1989. Journal of Research in Science Teaching 30(8): 857-869.
Hubbard R and Armstrong JS (1994) Replications and extensions in marketing – Rarely
published but quite contrary. International Journal of Research in Marketing 11(3): 233-248.
Hubbard R, Brodie RJ and Armstrong JS (1992) Knowledge development in marketing: The role
of replication research. New Zealand Journal of Business 14(1): 1-12.
Hubbard R and Vetter DE (1996) An empirical comparison of published replication research in
accounting, economics, finance, management, and marketing. Journal of Business
Research 35(2): 153-164.
Ioannidis JPA (2005) Why most published research findings are false. PLoS Medicine 2(8): e124.
Isaacs HH (1967) A study of communications, crimes and arrests in a metropolitan police
department. In Task Force Report: Science and Technology, a report to the President's
Commission on Law Enforcement and Administration of Justice. Washington, DC: USGPO.
Kelly CD (2006) Replicating empirical research in behavioral ecology: How and why it should
be done but rarely ever is. The Quarterly Review of Biology 81(3): 221-236.
Kerr S, Tolliver J and Petree D (1977) Manuscript characteristics which influence acceptance for
management and social science journals. The Academy of Management Journal 20(1):
132-141.
Lehrer J (2010) The truth wears off. The New Yorker 86(40): 52. Retrieved from
http://www.newyorker.com/reporting/2010/12/13/101213fa_fact_lehrer.
Madden CS, Easley RW and Dunn MG (1995) How journal editors view replication research.
Journal of Advertising 24: 77-87.
Makel MC, Plucker JA and Hegarty B (2012) Replications in psychology research: How often
do they really occur? Perspectives on Psychological Science 7(6): 537-542.
Mayer T (1980) Economics as a hard science: Realistic goal or wishful thinking? Economic
Inquiry 18(2): 165-178.
Melnick M (2011, April) When lunch is served, so is justice. Time. Retrieved from
http://healthland.time.com/2011/04/14/when-lunch-is-served-so-is-justice/
Mone MA and McKinley W (1993) The uniqueness value and its consequences for organization
studies. Journal of Management Inquiry 2(3): 284-296.
Neuliep JW and Crandall R (1990) Editorial bias against replication research. Journal of Social
Behavior and Personality 5(4): 85-90.
Pate A, Hamilton EE and Annan S (1991) Metro-Dade spouse abuse replication project: Draft
final report. Washington, D.C.: Police Foundation.
Popper KR (1959) The logic of scientific discovery. London: Hutchinson.
Radder H (1996) In and about the world. Philosophical studies of science and technology.
Albany, NY: SUNY Press.
Reid LN, Soley LC and Wimmer RD (1981) Replication in advertising research: 1977, 1978,
1979. Journal of Advertising 10(1): 3-13.
Rosenthal R (1979) The “file drawer problem” and tolerance for null results. Psychological
Bulletin 86(3): 638-641.
Rosenthal R (1991) Replication in behavioral research. In: Neuliep JW (ed) Replication
Research in the Social Sciences. Newbury Park: Sage, pp.1-39.
Rowney JA and Zenisek TJ (1980) Manuscript characteristics influencing reviewers’ decisions.
Canadian Psychology 21(1): 17-21.
Schmidt S (2009) Shall we really do it again? The powerful concept of replication is neglected in
the social sciences. Review of General Psychology 13(2): 90-100.
Schweizer K (1989) Eine Analyse der Konzepte, Bedingungen und Zielsetzungen von
Replikationen. [An analysis of concepts, conditions and aims of replications.] Archiv für
Psychologie 141(2): 85-97.
Sherman LW and Berk RA (1984) The specific deterrent effects of arrest for domestic assault.
American Sociological Review 49(2): 261-272.
Sherman LW, Schmidt JD, Rogan DP, Gartin PR, Cohn EG, Collins DJ and Bacich AR (1991)
From initial deterrence to long-term escalation: Short-custody arrest for poverty ghetto
domestic violence. Criminology 29(4): 821-850.
Spelman W and Brown DK (1981) Calling the police: A replication of the citizen reporting
component of the Kansas City response time analysis. Washington, DC: Police Executive
Research Forum.
Sterling TD (1959) Publication decisions and their possible effects on inferences drawn from tests
of significance – or vice versa. Journal of the American Statistical Association 54(285):
30-34.
Thomson Reuters (2013) Journal Citation Reports®. Retrieved from
http://admin-apps.webofknowledge.com/JCR/JCR?PointOfEntry=Home&SID=1B4Z3RPOxLXZhPcXBmv.
Weinshall-Margel K and Shapard J (2011) Overlooked factors in the analysis of parole decisions.
PNAS 108(42): E833.
Wilson FD, Smoke GL and Martin JD (1973) The replication problem in sociology: A report and
a suggestion. Sociological Inquiry 43(2): 141-149.
Table 1. Journals included in the analysis.

Journal Title                                      2010 Article Influence Score
Criminal Justice
   Criminology                                     1.85
   Journal of Quantitative Criminology             1.47
   Violence and Victims                            1.38
   Journal of Research in Crime and Delinquency    1.09
   Justice Quarterly                               0.81
Natural Science
   New England Journal of Medicine                 21.35
   Cell                                            20.59
   Nature                                          19.31
   Science                                         16.818
   Nature Materials                                16.166
Social Science
   Quarterly Journal of Economics                  11.69
   Journal of Political Economy                    10.74
   Econometrica                                    8.81
   Journal of Finance                              7.477
   Archives of General Psychiatry                  6.07
Table 2. Replications Published Across Disciplines.

                             Criminal Justice    Social Science    Natural Science
Direct Replication           6 (.87%)            18 (1.15%)        53 (.80%)
Empirical Generalization     10 (1.45%)          26 (1.67%)        41 (.62%)
Original Research            675 (97.68%)        1515 (97.18%)     6543 (98.58%)

Note. Number of articles reported with percentage in parentheses.
χ2 = 21.09, p < .001
Table 3. Support for Original Study Found in Criminal Justice Replication Research.

                                            Direct Replication    Empirical Generalization
Consistent with Original Study              2 (33.3%)             6 (60%)
Partially Consistent with Original Study    1 (16.7%)             3 (30%)
Not Consistent with Original Study          3 (50%)               1 (10%)

Note. Number of studies reported with percentages in parentheses.
... The pervasiveness of replication issues in scientific fields that have looked for them suggests these issues likely extend to other scientific fields that have yet to engage in the same undertaking. In criminology, recent work has provided preliminary evidence suggesting many of the factors responsible for false-positives rates in other fields are also present in criminology (Barnes et al., 2020;Chin, 2021;McNeeley & Warner, 2015;Pridemore et al., 2018;West et al., 2020;Wooditch, Fisher, et al., 2020;Wooditch, Sloas, et al., 2020). ...
Article
Full-text available
This study uses Bayesian simulations to estimate the probability that published criminological research findings are wrong. Toward this end, we employ two equations originally popularized in John P.A. Ioannidis’ (in)famous article, “Why Most Published Research Findings are False.” Values for relevant parameters were determined using recent estimates for the field’s average level of statistical power, level of research bias, level of factionalization, and quality of theory. According to our simulations, there is a very high probability that most published criminological research findings are false-positives, and therefore wrong. Further, we demonstrate that the primary factor contributing to this problem is the poor quality of theory. Stated differently, even when the overall level of research bias is extremely low and overall statistical power is extremely high, we find that poor theory still results in a high rate of false positives. We conclude with suggestions for improving the validity of criminological research claims.
... Despite the importance of replication (Nosek et al., 2012), it remains an infrequent and largely unappreciated exception in the published literature, especially in criminology (McNeeley & Warner, 2015). Of the 178 criminological replications identified by Pridemore and colleagues (2018), 74% were successful. ...
Article
Full-text available
In 2014, Pickett and Baker cast doubt on the scholarly consensus that Americans are pragmatic about criminal justice. Previous research suggested this pragmaticism was evidenced by either null or positive relationships between seemingly opposite items (i.e., between dispositional and situational crime attributions and between punitiveness and rehabilitative policy support). Pickett and Baker argued that because these studies worded survey items in the same positive direction, respondents’ susceptibility to acquiescence bias led to artificially inflated positive correlations. Using a simple split-ballot experiment, they manipulated the direction of survey items and demonstrated bidirectional survey items resulted in negative relationships between attributions and between support for punitive and rehabilitative policies. We replicated Pickett and Baker’s methodology with a nationally representative sample of American respondents supplemented by a diverse student sample. Our results were generally consistent, and, in many cases, effect sizes were stronger than those observed in the original study. Americans appear much less pragmatic when survey items are bidirectional. Yet, we suggest the use of bidirectional over unidirectional survey items trades one set of problems for another. Instead, to reduce acquiescence bias and improve overall data quality, we encourage researchers to adopt item-specific questioning.
... If coders assigned multiple different codes to the same text, this was counted as one disagreement. If disagreements involving multiple codes were instead counted as multiple disagreements, IRR dropped to 0.77. ...
Preprint
Full-text available
Increased execution of replication studies contributes to the effort to restore the credibility of empirical research. However, a second generation of problems arises: the number of potential replication targets far exceeds the available resources. Given limited resources, replication target selection should be well-justified, systematic, and transparently communicated. At present, the discussion of what to consider when selecting a replication target is limited to theoretical discussion, self-reported justifications, and a few formalized suggestions. In this Registered Report, we proposed a study involving the scientific community to create a list of considerations for consultation when selecting a replication target in psychology. We employed a modified Delphi approach. First, we constructed a preliminary list of considerations. Second, we surveyed psychologists who had previously selected a replication target regarding their considerations. Third, we incorporated the results into the preliminary list and sent the updated list to a group of individuals knowledgeable about concerns regarding replication target selection. Over the course of several rounds, we established consensus regarding what to consider when selecting a replication target.
... Despite the importance of replication (Nosek et al., 2012), it remains an infrequent and largely unappreciated exception in the published literature, especially in criminology (McNeeley & Warner, 2015). Of the 178 criminological replications identified by Pridemore and colleagues (2018), 74% were successful. ...
Article
Full-text available
In 2014, Pickett and Baker cast doubt on the scholarly consensus that Americans are pragmatic about criminal justice. Previous research suggested this pragmatism was evidenced by either null or positive relationships between seemingly opposite items (i.e., between dispositional and situational crime attributions and between punitiveness and rehabilitative policy support). Pickett and Baker (2014) argued that because these studies worded survey items in the same positive direction, respondents’ susceptibility to acquiescence bias led to artificially inflated positive correlations. Using a simple split-ballot experiment, they manipulated the direction of survey items and demonstrated bidirectional survey items resulted in negative relationships between attributions and between support for punitive and rehabilitative policies. We replicated Pickett and Baker’s (2014) methodology with a nationally representative sample of American respondents supplemented by a diverse student sample. Our results were generally consistent, and, in many cases, effect sizes were stronger than those observed in the original study. Americans appear much less pragmatic when survey items are bidirectional. Yet, we suggest the use of bidirectional over unidirectional survey items trades one set of problems for another. Instead, to reduce acquiescence bias and improve overall data quality, we encourage researchers to adopt item-specific questioning.
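The acquiescence-bias mechanism at issue can be illustrated with a small simulation that is not drawn from the replication itself: when respondents share a tendency to agree with any statement, two same-direction items correlate positively even if the underlying attitudes are opposed, and reverse-wording one item removes that artifact. All variable names, coefficients, and sample sizes below are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Latent attitudes: dispositional vs. situational crime attributions,
# assumed here (illustratively) to be negatively related.
disp = rng.normal(size=n)
sit = -0.5 * disp + rng.normal(scale=0.8, size=n)

# Acquiescence: a person-level tendency to agree with any statement.
acq = rng.normal(scale=1.2, size=n)

# Two positively worded items: observed agreement mixes attitude and acquiescence.
item_disp = disp + acq + rng.normal(scale=0.5, size=n)
item_sit = sit + acq + rng.normal(scale=0.5, size=n)
print(np.corrcoef(item_disp, item_sit)[0, 1])   # positive, despite opposed attitudes

# Bidirectional wording: the situational item is reverse-worded, so agreeing
# indicates LOW situational attribution; acquiescence now enters with opposite sign.
item_sit_reversed = -sit + acq + rng.normal(scale=0.5, size=n)
recoded_sit = -item_sit_reversed  # reverse-score back to the situational direction
print(np.corrcoef(item_disp, recoded_sit)[0, 1])  # negative
```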
... Replication provides a tool for evaluating the generalizability and robustness of individual studies and the extent to which researchers and policymakers can be confident in particular findings (McNeeley & Warner, 2015). The current study aimed to replicate the finding that police killings in the United States can influence attitudes towards local police in other contexts. ...
Preprint
Full-text available
High-profile incidents of police misconduct can have serious consequences for public trust in the police. A recent study in the British Journal of Political Science found that Eric Garner's death in NYC led to more negative attitudes towards the police in London among Black residents compared to White and Asian residents. The current study aimed to replicate this transnational effect by assessing the impact of George Floyd's death on Londoners' perceptions of police. Using the same data and methodological approach, we did not replicate the immediate effect on Black Londoners' attitudes. We did find that attitudes across ethnic groups became more negative when using a wider temporal bandwidth. However, we discovered violations of the excludability assumption, meaning we cannot be certain that the effect is solely due to the murder of George Floyd rather than at least partly due to other dynamics, such as the effects of the COVID-19 pandemic and the accompanying policies. This means that while police killings in other contexts are likely to play a role in shaping attitudes towards local police, these effects are difficult to disentangle from other global and local factors.
... Yet, reproductions and replications are rare. This has been demonstrated in domestic policing research (Huey & Bennell, 2017) and in criminology more broadly (McNeeley & Warner, 2015). A fully open investigation into the scale and composition of police demand is particularly important given the public nature of the debate around police reform and the considerable implications of redefining the role of the police without transparent empirical investigation. ...
Article
Full-text available
This paper describes the scale and composition of emergency demand for police services in Detroit, United States. The contribution replicates and extends analyses reported elsewhere in the United States. Findings indicate that police spend a considerable proportion of their time performing a social service function. Just 51% of the total deployed time responding to 911 calls is consumed by crime incidents. The remainder is spent on quality of life (16%), traffic (15%), health (7%), community (5%), and proactive (4%) duties. A small number of incidents consume a disproportionately large amount of police officer time. Emergency demand is concentrated in time and space and can differ by type of demand. The findings further highlight the potential implications of radically reforming police forces in the United States. The data and code used here are openly available for reproduction, reuse, and scrutiny.
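The kind of calculation behind these time-share figures can be sketched as follows. The records, column names, and numbers below are hypothetical stand-ins, not the openly available Detroit data the authors describe.

```python
import pandas as pd

# Hypothetical call-for-service records: one row per 911 incident, with the
# category assigned to the call and the total officer-minutes it consumed.
calls = pd.DataFrame({
    "category": ["crime", "quality_of_life", "traffic", "crime", "health"],
    "deployed_minutes": [95, 30, 22, 180, 41],
})

# Share of total deployed time consumed by each category of demand.
share = (calls.groupby("category")["deployed_minutes"].sum()
              .div(calls["deployed_minutes"].sum())
              .sort_values(ascending=False))
print(share.round(2))
```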
... While there has been less research conducted on prison victimization than on prison misconduct, there is a robust body of literature showing that an incarcerated individual's risk for violence varies according to their background characteristics and their experiences during incarceration, including their daily routines (e.g., Steiner et al., 2017; Wooldredge, 1998). As noted by McNeeley and Warner (2015), replicating this work, especially in other contexts, is vital for establishing best practices regarding the reduction of violence in prison. ...
Article
Full-text available
Prior research has found that routine activities in prison affect the risk of victimization among incarcerated people. However, most of this work is cross-sectional in nature and does not establish temporal order between the expected risk factors and victimization. To address this gap, the current study examines a snapshot population of individuals incarcerated in Minnesota state prisons on January 1, 2021, following them forward to measure violent victimization during a 6-month follow-up period. Results of Cox regression models and negative binomial models showed that several in-prison activities (e.g., treatment, work, visitation, misconduct) and individual characteristics (e.g., race, age, mental and physical health) were related to the risk of victimization and/or the number of violent incidents experienced. In addition, race-specific models showed that the specific predictors of victimization vary across racial groups. The results confirm the utility of lifestyle-routine activities theory as a framework for understanding victimization in prisons.
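As a rough illustration of the modeling strategy described above (not the authors' code or data), a Cox proportional hazards model can relate in-prison activities to time until a first violent victimization, and a negative binomial model to the number of incidents over the follow-up. The data frame, columns, and predictors below are invented for the sketch.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 1_000

# Hypothetical person-level data for a snapshot prison population.
df = pd.DataFrame({
    "age": rng.integers(18, 70, size=n),
    "in_treatment": rng.integers(0, 2, size=n),
    "work_assignment": rng.integers(0, 2, size=n),
    "prior_misconduct": rng.poisson(1.0, size=n),
    "days_to_victimization": rng.integers(1, 181, size=n),  # 6-month follow-up window
    "victimized": rng.integers(0, 2, size=n),                # event indicator
    "n_incidents": rng.poisson(0.4, size=n),                 # count outcome
})

# Cox regression: time to first violent victimization.
cph = CoxPHFitter()
cph.fit(df[["age", "in_treatment", "work_assignment", "prior_misconduct",
            "days_to_victimization", "victimized"]],
        duration_col="days_to_victimization", event_col="victimized")
print(cph.summary[["coef", "p"]])

# Negative binomial model: number of violent incidents over the follow-up.
X = sm.add_constant(df[["age", "in_treatment", "work_assignment", "prior_misconduct"]])
nb = sm.GLM(df["n_incidents"], X,
            family=sm.families.NegativeBinomial(alpha=1.0)).fit()
print(nb.summary())
```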
Article
Recent research has shown a link between the perceived fairness of school rules and discipline and negative student outcomes, including delinquency, violent behavior, and victimization. The current study examines how fairness at school affects bullying victimization, operationalized as repeated physical, psychological, or verbal victimization, among high school sophomores. We tested this hypothesis using one wave of the Educational Longitudinal Study (2002). The findings of negative binomial regression analyses indicate that bullying victimization is more prevalent among students who believe that their schools are unfair. A number of student characteristics were also significantly related to experiencing bullying. Implications for policy and practice are outlined.
Article
Full-text available
Amid intense and accelerating transformations of social reality, science serves as one of the adaptive mechanisms for understanding what is happening and anticipating future change. In the face of rapid and substantial social change, the empirical social sciences, including criminology, must themselves be transformed to take account of the digitalisation of social relations, their exponentially increasing complexity, and the growth of universal connectivity. These properties of a transforming reality require not only a move away from traditional sources of information on criminologically relevant phenomena and an expanded interdisciplinary approach, but also the establishment of a 'quantitative criminology paradigm', a 'computational criminology' capable of tracking criminal, potentially criminogenic, and background crime phenomena in real time through monitoring and through both preset and self-generated algorithms.
Article
Scholars and practitioners who develop evidence-based crime policy debate how best to translate criminological knowledge into better criminal justice practices. These debates highlight the counterpoised problems of over-selling the contribution of scientific evidence or, alternatively, overemphasizing the limitations of science. This challenge attends any attempt to translate research findings into practice; problematically, however, in criminology it is rarely approached in a theoretically coherent fashion. This article therefore seeks to theorize uncertainty in criminology by examining insights on communicating scientific uncertainty in other fields and applying these insights specifically to the field of Evidence-Based Policing (EBP). Taking the position that all science is inherently uncertain, we examine four aspects of the field: the particular uncertainties of criminology, variance in receptivity to research, the lack of evidence regarding effective communication, and the boundaries of evidence. Building on this analysis, we set out the normative challenge of how researchers should characterize and balance the implications and limits of scientific findings in the decision-making process. Looking ahead, we argue for the need to invest in an empirical project for determining meaningful strategies to express research evidence to decision-makers.
Article
Full-text available
Recent controversies in psychology have spurred conversations about the nature and quality of psychological research. One topic receiving substantial attention is the role of replication in psychological science. Using the complete publication history of the 100 psychology journals with the highest 5-year impact factors, the current article provides an overview of replications in psychological research since 1900. This investigation revealed that roughly 1.6% of all psychology publications used the term replication in text. A more thorough analysis of 500 randomly selected articles revealed that only 68% of articles using the term replication were actual replications, resulting in an overall replication rate of 1.07%. Contrary to previous findings in other fields, this study found that the majority of replications in psychology journals reported similar findings to their original studies (i.e., they were successful replications). However, replications were significantly less likely to be successful when there was no overlap in authorship between the original and replicating articles. Moreover, despite numerous systemic biases, the rate at which replications are being published has increased in recent decades. © The Author(s) 2012.
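As a quick arithmetic check on how the two percentages in that abstract combine (an illustration using the rounded figures reported there, which is why the product differs slightly from the stated 1.07%):

```python
# Rounded figures from the abstract above; the small gap from the reported
# 1.07% presumably reflects rounding of the 1.6% input.
used_term_replication = 0.016   # share of publications using the term "replication"
actual_replications = 0.68      # share of those that were true replications
print(round(used_term_replication * actual_replications * 100, 2))  # ~1.09%
```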
Article
This Article analyzes data from the Colorado Springs Spouse Abuse Experiment. In that experiment, suspects apprehended for misdemeanor spouse abuse were assigned at random to one of four treatments: (1) an emergency order of protection for the victim coupled with arrest of the suspect; (2) an emergency order of protection for the victim coupled with immediate crisis counseling for the suspect; (3) an emergency order of protection only; or (4) restoring order at the scene with no emergency order of protection. Outcome measures are taken from official police data and from follow-up interviews with victims. Using Bayesian procedures to take previous experiments into account, the balance of evidence supports a deterrent effect for arrest among "good risk" offenders, who presumably have a lot to lose by being arrested. The balance of evidence is far more equivocal for a "labeling effect" in which an arrest increases the likelihood of new violence.
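The phrase "Bayesian procedures to take previous experiments into account" can be read, in simplified form, as combining a prior effect estimate pooled from the earlier arrest experiments with the new experiment's estimate. The sketch below uses a normal-normal conjugate updating rule with made-up numbers; it is not the article's actual model, scale, or data.

```python
import numpy as np

# Illustrative (invented) numbers: effect of arrest on re-offending, on some
# log-odds-like scale, where negative values indicate deterrence.
prior_mean, prior_sd = -0.10, 0.08   # pooled estimate from earlier experiments
lik_mean, lik_sd = -0.05, 0.12       # estimate from the new experiment

# Normal-normal conjugate update: precision-weighted average of prior and data.
prior_prec = 1 / prior_sd**2
lik_prec = 1 / lik_sd**2
post_prec = prior_prec + lik_prec
post_mean = (prior_prec * prior_mean + lik_prec * lik_mean) / post_prec
post_sd = np.sqrt(1 / post_prec)

print(f"posterior effect: {post_mean:.3f} +/- {post_sd:.3f}")
```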
Article
The usefulness of replication research is a widely debated topic in the social sciences. Although most scholars recognize the need for a replication tradition in their respective disciplines, studies have documented a paucity of replication research in the advertising/consumer behavior/marketing literature. The authors investigate the prevalence of replication research by soliciting journal editors' perceptions of their disciplines' attitudes toward such work. Two studies were conducted: the first questioned editors in the natural and social sciences, and the second questioned editors of advertising, communications, and marketing journals. The findings indicate that natural science editors have generally endorsed replication as a necessary part of research, while social science editors have been less than enthusiastic about its adoption. Marketing, communications, and advertising editors responded in a manner consistent with most other social science editors.
Article
A content analysis of all 1977, 1978, and 1979 issues of the leading advertising, marketing, and communication publications was conducted to determine the frequency of replication in advertising research. The results revealed that replications are seldom published in advertising research and, as a consequence, the possibility exists that empirical results are uncritically absorbed into the advertising literature as verified knowledge. Recommendations are offered to ensure that replication becomes a recognized and practiced component of advertising research.
Article
Replication is held as the gold standard for ensuring the reliability of published scientific literature. But conducting direct replications is expensive, time-consuming, and unrewarded under current publication practices. So who will do them? The authors argue that students in laboratory classes should replicate recent findings as part of their training in experimental methods. In their own courses, the authors have found that replicating cutting-edge results is exciting and fun; it gives students the opportunity to make real scientific contributions (provided supervision is appropriate); and it provides object lessons about the scientific process, the importance of reporting standards, and the value of openness. © The Author(s) 2012.
Article
To determine if actual practice was consistent with commonly recommended research methods and procedures, this study examined 130 studies reported over a 5-year period in three volumes of the Journal of Research in Science Teaching (JRST). The results were consistent with similar previous analyses (Shaver & Norton, 1980a, 1980b; Wallen & Fraenkel, 1988a) and indicate that appropriate generalizations beyond the confines of the reported studies may be impossible for most (64%) of the JRST studies surveyed. The findings also show that replication studies, which could be employed to offset deficiencies in generalizability, were not commonly encountered (3%) in these 130 reports. In addition, the study results indicate that many researchers (48%) do not properly restrict their conclusions based on the limits imposed by the accessible populations and samples used; nor do they typically provide possible alternative explanations for the outcomes obtained (76%). These findings prompt the following recommendations: (1) a greater awareness and use of replication as a check on generalizability should be encouraged by the science education community; (2) clearly defined populations (target and accessible) and fully described samples warrant increased attention as report components from authors, reviewers, and editorial board members of JRST; and (3) in light of the difficulties inherent in effecting random selection in educational settings, a greater emphasis should be placed on recognizing the limits that the underlying assumptions of inferential statistics place on research conclusions. The results of this study indicate that the methodological quality of published science education research should remain a concern for both practitioners and readers.
Article
Discussion begins with a review of three major concerns of the labelling perspective in deviance: (1) locating the social origins of stigmatic labels; (2) documenting the application of these labels to selected populations; and (3) assessing the consequences of the labelling process for the recipients' future conduct. It is suggested that the latter concern is the most dramatic aspect of the labelling perspective. Two assumptions accompanying this version of the labelling argument are reviewed: (1) others' reactions to the subject intensify the subject's behavior; and (2) psychological differences do not exist in a manner relevant to the production and explanation of deviant behavior. After accumulating evidence suggestive of weaknesses in the preceding assumptions, it is argued that the popularity of the labelling perspective in deviance may be best understood as an instance of the "Sociology of the Interesting." Implications are suggested.