Reflections on the replication corner: In praise of conceptual replications☆,☆☆

John G. Lynch Jr. a,⁎, Eric T. Bradlow b, Joel C. Huber c, Donald R. Lehmann d

a Leeds School of Business, University of Colorado-Boulder, Boulder, CO 80309, USA
b Wharton School, University of Pennsylvania, Philadelphia, PA 19104, USA
c Fuqua School of Business, Duke University, Durham, NC 27708, USA
d Columbia Business School, Columbia University, New York, NY 10027, USA

☆ The authors thank Joe Harvey and Andrew Long for their assistance with the coding of published and in-press Replication Corner articles that appears in Tables 1 and 2, and they thank the authors of the 30 replications for providing feedback on both tables. They thank Carl Mela and Uri Simonsohn for comments on earlier drafts. They thank editors Jacob Goldenberg and Eitan Muller for comments and for creating the Replication Corner.
☆☆ Their support and vision has been invaluable. Finally, they thank IJRM Managing Editor, Cecilia Nalgon, for support of the Replication Corner and for preparing materials for our coding of articles.
⁎ Corresponding author at: Ted Anderson Professor of Free Enterprise, Leeds School of Business, University of Colorado-Boulder, Boulder, CO 80309. E-mail addresses: john.g.lynch@colorado.edu (J.G. Lynch), ebradlow@wharton.upenn.edu (E.T. Bradlow), joel.huber@duke.edu (J.C. Huber), drl2@columbia.edu (D.R. Lehmann).
Article history: First received on July 2, 2015; Available online 3 October 2015.
Guest Editors: Jacob Goldenberg and Eitan Muller
Keywords: Conceptual replication; Replication and extension; Direct replication; External validity

Abstract
We contrast the philosophy guiding the Replication Corner at IJRM with replication efforts in psychology. Psychology has promoted "exact" or "direct" replications, reflecting an interest in statistical conclusion validity of the original findings. Implicitly, this philosophy treats non-replication as evidence that the original finding is not "real" - a conclusion that we believe is unwarranted. In contrast, we have encouraged "conceptual replications" (replicating at the construct level but with different operationalization) and "replications with extensions", reflecting our interest in providing evidence on the external validity and generalizability of published findings. In particular, our belief is that this replication philosophy allows for both replication and the creation of new knowledge. We express our views about why we believe our approach is more constructive, and describe lessons learned in the three years we have been involved in editing the IJRM Replication Corner. Of our thirty published conceptual replications, most found results replicating the original findings, sometimes identifying moderators.
© 2015 Elsevier B.V. All rights reserved.
1. Introduction
A number of researchers have concluded that there is a replication crisis in the social sciences, raising new questions about the trustworthiness of our evidence base (e.g. Pashler & Wagenmakers, 2012; Simons, 2014). This issue is playing out in the academic psychology literature via a large-scale effort to conduct "direct" or "exact" replications of important papers. The findings are mixed, which has led to considerable acrimony and suspicion about the "replication police" (Gilbert, 2014) and "negative psychology" (Coan, 2014) with public shaming of authors whose work is found not to replicate. This is not true of all direct replication efforts: the just-released report by the Open Science Collaboration (2015) is a model of circumspection. After summarizing attempts to replicate 100 papers sampled from the 2008 issues of three top psychology journals, the authors note, "It is also too easy to conclude that a failure to replicate a result means that the original evidence was a false positive" (p. aac4716-6).
We believe that the Replication Corner in IJRM provides an alternative and perhaps more constructive model to direct replication efforts. It has encouraged papers that are either "replications and extensions" or "conceptual replications." Conceptual replications attempt to replicate an important original finding while acknowledging differences in background factors (e.g., participants, details of procedures) compared with the original article. Put differently, they attempt to study the same construct-to-construct relations as the original article despite operationalizing those constructs differently. Replications and extensions test both the original finding and a moderator of it. They may somewhat closely replicate the original in a part of the design but then test whether varying some specific background factor moderates the results.

We believe that the very concept of an "exact replication" in social science is flawed. Even if one used the exact same procedures, respondents may have changed over time. Exact replication is impossible. Therefore, the only issue is how close the replication is to the original, and whether it is desirable to be as close as possible. When the goal is generalization, we argue that "imperfect" conceptual replications that stretch the domain of the research may be more useful.
The other problem with some current replication efforts is the focus by many on consistency of the original and the replicate in statistical significance of the effect as opposed to its effect size. We see scholars attempting "direct" replications who declare a replication failure if the original study found a significant effect and the replicate did not, when they have neither shown that the effect sizes for the replicate and original
differ significantly nor that the pooled effect size is indistinguishable from zero. Maniadis, Tufano, and List (2014) declare a failure to replicate Ariely, Loewenstein, and Prelec's (2003) findings of anchoring effects on valuations of goods and experiences. But Simonsohn, Simmons, and Nelson (2014) show that the effect sizes in the replicate and original were entirely consistent. In particular, the confidence interval around the estimate of the replication included the entire confidence interval for the original study: it was simply the case that the replication had lower power due to lower sample sizes and floor effects. Because the point estimate effect size from Maniadis et al. was lower than that from Ariely et al., from a meta-analytic perspective, the results of Maniadis et al. properly should decrease one's estimate of the population effect size from Ariely et al. But the new results should strengthen belief that the effect in Ariely et al. is larger than 0 in the population, not decrease the belief, as Maniadis et al. conclude.[1] Even if the meta-analytic effect sizes shrink towards zero with the addition of a new study, the standard error may decline even more, making one even more convinced about the existence of the effect.

[Footnote 1] Simonsohn et al. point out that not only did Maniadis et al. successfully replicate Ariely et al. (2003): ironically, much of their paper presented a theoretical model of replications that was an "exact replication" of the model presented by Ioannidis (2005) in his widely cited "Why Most Published Research Findings are False."
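To make the meta-analytic point above concrete, consider an illustration with numbers of our own choosing (hypothetical, not taken from either paper): an original study with effect size d1 = 0.60 (SE = 0.30, z = 2.0) and a lower-powered replication with d2 = 0.25 (SE = 0.16, z = 1.56, not significant on its own). Under standard inverse-variance (fixed-effect) pooling,

$$w_i = \frac{1}{SE_i^2}, \qquad \bar{d} = \frac{\sum_i w_i d_i}{\sum_i w_i} = \frac{(11.1)(0.60) + (39.1)(0.25)}{11.1 + 39.1} \approx 0.33, \qquad SE(\bar{d}) = \frac{1}{\sqrt{\sum_i w_i}} \approx 0.14.$$

The pooled point estimate (0.33) is smaller than the original (0.60), yet the pooled z of roughly 2.3 exceeds the original z of 2.0: the estimate shrinks toward zero while the evidence that the effect exceeds zero becomes stronger.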
We have therefore created policies in the Replication Corner to provide an outlet for conceptual replications and replications with extension rather than "direct" replications. Moreover, we have encouraged authors to take a meta-analytic view of how their results increase or decrease the strength of evidence of the original findings.

In this article, we give our views of what we have learned so far, and we articulate the philosophy that has guided our efforts at IJRM, contrasting our philosophy of conceptual replication with what seems to prevail in psychology's efforts to encourage exact replications.
2. The history of the replication corner
Trust in behavioral research was badly shaken when internationally famous professors Stapel, Sanna, and Smeesters all resigned their faculty positions after investigations uncovered unethical data reporting and outright fabrication of data. Similar misconduct was found in many other branches of science (e.g. Fang, Steen, & Casadevall, 2012).

Around the same time, there were highly publicized failures to replicate important findings even though no fraud was suspected, with contentious exchanges about the adequacy of the replication efforts. For example, Bargh, Chen, and Burrows (1996) found that priming elderly stereotypes caused subjects to walk more slowly and replicated the finding in their own paper. But Doyen, Klein, Pichon, and Cleeremans (2012) could not replicate the finding unless the experimenters were aware of the hypotheses being tested. Given growing questions about the reliability of published research, the journal Perspectives on Psychological Science then published a special section on replicability (Pashler & Wagenmakers, 2012).
An influential article by Simmons, Nelson, and Simonsohn (2011) pointed out that even researchers who were not trying to deceive their audiences might be deceiving themselves by (field-allowed) flexibility in data analysis and reporting. They outlined common practices in data analysis that could inflate the effective type I error rate far in excess of the nominal 0.05 level, leading authors to find evidence of an "effect" that doesn't exist in the population. More generally, Gelman and Loken's (2014) analysis identified a broader "statistical crisis in science" resulting from overly flexible data analysis decisions.

In response to these disturbing revelations, the top journals in marketing and many in psychology have required more complete reporting of instruments and methods, greater specifics of data analysis, and more complete publication of materials. The goal was to allow scholars to better assess what is in the paper and to go beyond it to uncover further insights. For IJRM, Jacob Goldenberg and Eitan Muller took the view that replication must be an important complement to these disclosure requirements if we are to understand the scope and limits of our published findings. They noted that no high-status marketing journal was publishing successful or unsuccessful replication attempts and decided that IJRM would provide an appropriate outlet.

The approach of Jacob, Eitan and the Replication Corner co-editor team in 2012 differed from what was emerging in psychology. Below, we lay out the difference in philosophy and take stock of what we have learned over the past three years.
First, we decided to focus the Replication Corner on "important" and highly cited papers.

Second, we expressed a preference for "conceptual replications" and "replications with extensions" rather than "direct" or "exact" replications. Put differently, our focus has been on matters of external validity rather than of the statistical conclusion validity of the original finding (cf. Cook & Campbell, 1979).

Third, we have tried to be even-handed in publishing both successful replications of original findings and failures to replicate.
3. Focus on "important" papers
We have chosen to consider only replication attempts of important papers. With limited pages, we wished to focus limited scholarly resources where it would matter most. Since we started the Replication Corner, psychology journals have followed a similar path.

A (much) secondary motive for our focus on "important" papers was to provide a further incentive for good professional behavior. We assume that most scholars in the field are earnestly trying to further science, while only a handful are more motivated by career-maximization. If scholars get promoted based on their most famous papers, the possibility that their best papers would be the focus of a replication attempt should heighten authors' desire to be sure that what they publish is air-tight.

Our primary motive, however, was that we expected that readers would be more interested in learning about the scope and limits of important findings than of unimportant ones. Since we started the Replication Corner, the single biggest reason for rejecting submissions has been that the original paper did not meet our threshold for importance and influence, as reflected in awards and citations relative to other findings published in top marketing journals in the same year, augmented by our own judgments of importance.
4. Conceptual rather than "direct" replications
Psychologists have debated whether to promote "direct" or "conceptual" replications. If replications are to fulfill an auditing function, one should maximize the similarity to the procedures used in the original study, relying on so-called "direct" replications (Pashler & Harris, 2012; Simons, 2014), sometimes heroically called "exact" replications. Brandt et al. (2014) have gone so far as to put forth a "replication recipe" for "close" replications.

As we explained earlier, we don't believe that "exact" replications are possible. In any given study, researchers make dozens of decisions about small details of procedure, participants, stimuli, setting, and time. When researchers hold those presumed-irrelevant factors constant, it is not possible to say whether any observed treatment effect is a "main effect" of the treatment or a "simple effect" of the treatment in some interaction with one or more of those background factors (Lynch, 1982). Obviously, many things will always differ between the original study and the attempted replication (Stroebe & Strack, 2014).
If some replication attempt fails to detect an effect in the original paper, one cannot say whether the issue is one of statistical conclusion validity of the original authors (the original result was a type I error) or an issue of external validity (Open Science Collaboration, 2015; Shadish, Cook, & Campbell, 2002).
Replication efforts in psychology are often motivated by questions of statistical conclusion validity. They seek to investigate whether the original finding was a type I error or, more demandingly, whether the effect size in the replication matches that in the original. As Meehl (1967) argued persuasively, the null hypothesis is never true. Any treatment has some effect, however small, so if one has a directional hypothesis and a very large sample size, the probability of getting a significant result in the predicted direction approaches 0.5.
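A small simulation sketch (our own, with hypothetical numbers; not taken from Meehl or from any paper discussed here) makes the point. With a tiny but nonzero true effect and a very large sample, the study almost always reaches significance, so a directional prediction that is right only by chance gets "confirmed" about half the time:

```python
import math
import random

# Hypothetical illustration of Meehl's argument: the null is never exactly true,
# so with a huge sample even a trivial effect (d = 0.02) is detected almost surely.
# If the theory's directional guess is no better than a coin flip, about half of
# all one-sided tests come out "significant in the predicted direction."
random.seed(1)
n_per_group, true_d, z_crit, sims = 200_000, 0.02, 1.645, 10_000
se = math.sqrt(2.0 / n_per_group)            # SE of a standardized mean difference
confirmed = 0
for _ in range(sims):
    predicted_sign = random.choice([1, -1])  # the directional hypothesis is a coin flip
    observed_d = random.gauss(true_d, se)    # draw from the sampling distribution
    if predicted_sign * observed_d / se > z_crit:   # one-sided test, predicted direction
        confirmed += 1
print(f"Share of 'confirmed' directional predictions: {confirmed / sims:.2f}")  # close to 0.5
```

With a smaller sample the tiny effect is rarely detected and the share falls toward the nominal one-sided 0.05; with an enormous sample it approaches 0.5, which is Meehl's point.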
Suppose the effect in the population is nonzero but tiny. If the replication doesn't find a statistically significant result, a contributing factor may arise from a publication bias for significant findings. Any such publication bias makes it likely that the effect size in the original (published) study is greater than the effect size in the population from which the original study was drawn. That is exactly what was found in the Open Science Collaboration (2015) report of attempts to replicate 100 psychology articles. We don't find such shrinkage surprising: it is a logical consequence of regression to the mean in a system with publication bias that censors the reporting of small effect sizes. Such shrinkage reflects poorly on our publication system, but not on the individual authors.
Moreover, if replicators base power analyses on inflated effect size estimates from published studies, they may have far less power than their calculations imply. So failure to find a significant result in the replication is less probative of whether an original result was a type I error than some replicators imagine. All of this makes us skeptical of using replications to sort out issues of statistical conclusion validity in the original study.[2]

[Footnote 2] Here, our skepticism does not extend to efforts to sort out issues of statistical conclusion validity by placing p values or effect sizes in some distribution of test statistics. For example, Simonsohn, Simmons and Nelson (2015) have developed a "specification curve" methodology for determining what percent of all plausible specifications of an empirical/econometric model would reproduce the sign and the significance of the effect reported in the original paper. If it can be shown that only a very small fraction of plausible specifications produce an effect of the same sign or significance, this would raise the plausibility of questions about statistical conclusion validity.
We, the co-editors of the Replication Corner, believe that it is more interesting to investigate the external validity (robustness) of an influential study. To what extent are the sign, significance, and effect size of the original result robust with respect to changes in the stimuli, settings, participant characteristics, contexts, and time of the study? We believe that conceptual replications are more informative than "direct" replications if the objective is to better understand the external validity or generalizability of the finding. Information on background factor by treatment interactions helps us better understand the nomological validity of the theory that was used to interpret the original finding (Lynch, 1982, 1983).
4.1. Consequences of "successful" direct vs. conceptual replication
Consider the consequences of "successful replication" and "failures to replicate" under two possible replication strategies:

(a) when the authors attempt "direct" replication by matching the replicate to the original study on as many background factors as possible;

(b) when authors attempt "conceptual" replication, replicating the conceptual independent and dependent variables with operationalizations that vary in multiple ways from the original and with multiple background factors held constant at levels different from the original.

The latter strategy corresponds to what Cook and Campbell (1979) have called "deliberate sampling for heterogeneity."

We believe that one learns more from a "successful" conceptual replication than from a successful direct replication. In the case of a direct replication, it may be the case that the results derive from shared interactions with the various background factors that have been held constant in the original and the replication. In the case of a successful "conceptual" replication, it becomes much less plausible that the "main" effect of the treatment in question is confounded with some background factor by treatment interaction (Lynch, 1982, 1983). From a Bayesian perspective, this increases the amount of belief shift warranted by the combined original study and "successful" replication (Brinberg, Lynch, & Sawyer, 1992).

As we will report later in this paper, the replications we published largely reproduced the original authors' effects. We believe that less would have been learned about the broad phenomena studied if the same papers had faithfully attempted to follow exactly the original authors' methods.
4.2. Consequences of "unsuccessful" direct vs. conceptual replication
Now consider the same comparison when the replication attempt fails to reproduce the original effect. The first thing the replicator should do is to see if the inconsistency in results exceeds what might be expected by chance. As noted on the IJRM Website (http://portal.idc.ac.il/en/main/research/ijrm/pages/submission-guidelines.aspx):

"If the original author reports a significant effect and a second author finds no significant effect, it is always unclear whether the difference in results is a "failure to replicate" or just what one would expect from random draws from a common effect size distribution. We would like to ask authors to include in their papers or at least an online appendix a meta-analysis including the original study and attempted replication. An example of such a meta-analysis in the online appendix comes from Chark and Muthukrishnan (IJRM Dec 2013). It is easy to have a situation where one effect size is significant and another is not, but no significant heterogeneity exists across the studies. If the heterogeneity is not significant, then one can calculate the weighted average effect size and test whether the effect is significant after pooling across all the available studies. If there is no significant heterogeneity but the weighted average effect size remains significant, the original conclusion would stand. If there is no significant heterogeneity and the weighted average effect size is NOT significant, then this calls into question the original finding. If there is significant heterogeneity, then this raises the question of what is the moderator or boundary condition that explains the difference in results."
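As a concrete sketch of the procedure recommended above (our own illustration; the function name and the effect sizes are hypothetical, and a published replication would typically rely on a standard meta-analysis package), a fixed-effect pooled estimate and Cochran's Q heterogeneity test need only a few lines:

```python
import math

def pool_and_test(effects, ses):
    """Inverse-variance (fixed-effect) pooling of study effect sizes,
    plus Cochran's Q statistic for heterogeneity across studies."""
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    pooled_se = 1.0 / math.sqrt(sum(weights))
    z = pooled / pooled_se                                   # test of the pooled effect
    q = sum(w * (d - pooled) ** 2 for w, d in zip(weights, effects))
    df = len(effects) - 1                                    # Q ~ chi-square(df) if homogeneous
    i_squared = max(0.0, (q - df) / q) if q > 0 else 0.0     # share of variance beyond sampling error
    return pooled, pooled_se, z, q, df, i_squared

# Hypothetical original study and attempted replication (same illustrative numbers as above)
pooled, se, z, q, df, i2 = pool_and_test(effects=[0.60, 0.25], ses=[0.30, 0.16])
print(f"pooled d = {pooled:.2f} (SE = {se:.2f}), z = {z:.2f}")
print(f"Cochran's Q = {q:.2f} on {df} df, I-squared = {100 * i2:.0f}%")
```

Reading the output against the guidelines quoted above: a non-significant Q (judged against a chi-square distribution with k - 1 degrees of freedom) together with a significant pooled z leaves the original conclusion standing; a non-significant Q with a non-significant pooled z calls the original finding into question; and a significant Q points to a moderator or boundary condition to be explained.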
We are surprised that in psychology and economics, it does not seem to be common practice for the authors of an unsuccessful replication to conduct such a meta-analysis, and we are pleased to see that in the final report of the Open Science Collaboration (2015), meta-analytic tests of aggregate effect sizes were reported. In 2014, we began to encourage such analysis in the IJRM Replication Corner, where feasible.

In the case where there is no significant heterogeneity of effect sizes and the combined effect is not significant, direct and conceptual replications are similar. In both cases, a Bayesian process of belief revision will often lead to lower belief in the construct-to-construct links asserted in the original paper. This is a form of learning because before the failed replication, one believed the original effect and its interpretation and now the contrary results put that into question. This is contrary to the narrow definition of learning in which posterior uncertainty is reduced by new data; that is, in this case we learn we "know less" (Bradlow, 1996).
In the case in which the replicator finds no effect and that effect is significantly different from the original in meta-analytic tests, readers of the report wouldn't understand why the results of the original and replicate differed without further data, no matter whether it is a conceptual or a direct replication. It is to be expected that inferences from unsuccessful replications are not definitive but require more research. One might take the position that a "failed" replication is more informative if it is an "exact" replication versus one that differs from the original on multiple dimensions. This is not true if the goal is to establish broad empirical generalizations that may require multiple studies and meta-analysis. This suggests that any given study is (just) one data point for a larger meta-analysis involving future studies. If one is anticipating meta-analysis after more findings accumulate, one should be thinking in terms of what next study might successfully discriminate between plausible alternative causes of variations in effect size and most increase our understanding of key moderators and boundary conditions (Farley, Lehmann, & Mann, 1998).
4.3. Failures to replicate are not shameful for the original authors
In psychology, failures to replicate are often taken to shame the original authors, inspiring acrimony about the degree to which the replicators have faithfully followed the original. We think this is unfortunate. Cronbach (1975) has argued persuasively that most real world behavior is driven by higher-order interactions that are virtually impossible to anticipate. He gives the following example:

"Investigators checking on how animals metabolize drugs found that results differed mysteriously from laboratory to laboratory. The most startling inconsistency of all occurred after a refurbishing of a National Institutes of Health (NIH) animal room brought in new cages and new supplies. Previously, a mouse would sleep for about 35 min after a standard injection of hexobarbital. In their new homes, the NIH mice came miraculously back to their feet just 16 min after receiving a shot of the drug. Detective work proved that red-cedar bedding made the difference, stepping up the activity of several enzymes that metabolize hexobarbital. Pine shavings had the same effect. When the softwood was replaced with birch or maple bedding like that originally used, drug response came back in line with previous experience" (p. 121).

Who would ever be smart enough to anticipate that one? Our view is that if a colleague's finding is not replicated in an attempted direct replication, it is not (necessarily or even usually) a sign of something underhanded or sloppy. It simply means that in the current state of knowledge, we may not fully understand the effect or what moderates it (Cesario, 2014; Lynch, 1982).
4.4. Hidden background factors influence effect sizes
The journal Social Psychology published a report by the Many Labs Project (Klein et al., 2014), wherein teams of researchers at 36 different universities attempted direct replications of 16 studies from 13 important papers. In aggregate they successfully replicated 10 of the 13 papers. The teams of researchers at the 36 different universities all followed the same pre-registered protocol. Nonetheless, meta-analytic Q and I² statistics showed substantial and statistically significant unexplained heterogeneity across labs for 8 of the 16 effects studied.
In theoretical research, the objective is often to make claims about construct-to-construct links. When one researcher fails to replicate some original finding, it is possible that the replicate and original don't differ in construct-to-construct links; rather, the original and the replicate may differ in the mapping from operational variables to latent constructs. One route to this outcome is when the replicate and original differ in characteristics of participants (e.g. Aaker & Maheswaran, 1997).

The mapping from operational to latent variables can also change with time. A "direct" replication of a 30-year-old study might be technically equivalent in terms of operational independent and dependent variables but with differences in the conceptual meaning of the same operational variables (e.g. Schwarz & Strack, 2014, p. 305). Just as construct-to-construct relations can change over time (Gergen, 1973; Schooler, 2011), so too can operationalization-to-construct links.
In psychology, reports of failures to replicate important papers have been prosecutorial, as if the original effect was not "real." For example, Johnson, Cheung, and Donnellan (2014) failed to replicate a finding by Schnall, Benton, and Harvey (2008) that washing hands led to more lenient moral judgments. Donnellan's (2014) related blog post implied that the original effect does not exist.

"So more research is probably needed to better understand this effect [Don't you just love Mom and Apple Pie statements!]. However, others can dedicate their time and resources to this effect. We gave it our best shot and pretty much encountered an epic fail as my 10-year-old would say. My free piece of advice is that others should use very large samples and plan for small effect sizes."

Donnellan later apologized for the "epic fail" line, but it reflects an underlying attitude shared by other replicators, that "our findings are correct and the original authors' are wrong." That's hubris: both findings are relevant data points in an as-yet-unsolved puzzle. The same issue of Social Psychology that published Johnson, Cheung, and Donnellan (2014) reported two separate direct replications of Shih, Pittinsky, and Ambady's (1999) famous finding that Asian-American women performed better on a mathematics test when their ethnic identity was activated, but worse when their gender identity was activated. Gibson, Losee, and Vitiello (2014) replicated the original finding, but Moon and Roeder (2014) did not despite following the same pre-registered protocol. If one should be embarrassed to have another lab fail to replicate one's own original result, how should we feel when two different "direct" replications produce different results? And how should the researchers in the Many Labs Replication Project (Klein et al., 2014) feel about the fact that their colleagues at other universities found different effect sizes when following the same pre-registered replication research protocols?
In summary, successful direct replications may strengthen our prior beliefs about a well-known effect, but not as much as successful conceptual replications. When replicators find results different from the original, direct replications are like conceptual replications in requiring future research to understand the reasons for differences. Worse, a focus on direct replications can create an unhealthy atmosphere in our field where the competence or honesty of researchers is subtly questioned. Variation in results is to be expected. The next section reveals that it has been a challenge for us to deal with defensiveness on the part of authors whose findings did not replicate.
5. Even-handed publication of successful conceptual replications and of failures to replicate
We have proposed that the original authors should not feel embarrassed when another author team fails to replicate their results. But in our experience, that is not how original authors often see it. As editors, when we have received a failure-to-replicate report, we have commonly included one of the original authors on the review team for that paper. It is not uncommon for the reviewing author to be a bit defensive, pointing out differences of the replication and the original as flaws.

We have tried hard as editors to push back against such defensiveness, though we are not sure we have been completely successful. As we will lay out in the next section, the percentage of successful replications that we have published seems comparable to what was reported in the Many Labs test of 16 effects from 13 famous papers (Klein et al., 2014). However, the percent of successful conceptual replications in the Replication Corner significantly exceeds the percentage of successful direct replications in the Reproducibility Project (Open Science Collaboration, 2015). The Reproducibility Project is the largest and most ambitious open source replication effort to date, involving 250 scientists around the world. Collectively, they attempted to replicate 100 studies published in the 2008 volumes of Psychological Science, Journal of Personality and Social Psychology, and Journal of Experimental Psychology: Learning, Memory and Cognition. A summary of that effort classified 61 findings as not replicated to varying degrees and only 39 as replicated to varying degrees (Baker, 2015). That's a far lower rate for successful direct replications than we are observing for conceptual replications.

The lesson we derive as editors of the Replication Corner is that we need to be even more aggressive in pushing back when original authors recommend rejection of unsuccessful replications of their work. Pashler and Harris (2012) have correctly noted that absent any systematic replication effort, the normal journal review process makes it more likely that successful conceptual replications will be published than unsuccessful ones because successful conceptual replications seem more interesting. A journal section dedicated to replication should avoid that bias.
6. What we have found so far
Thus far we have published or accepted 30 replications from 91 submissions. Table 1 summarizes the nature of the differences between the replicated articles and the conceptual replications and extensions in those 30 papers. We evaluated these articles on four dimensions shown in Table 2:

1. Direct replication included? Did the authors include at least a subset of experiment design cells/subjects intended to have a very similar operationalization to the original? 1 = yes; 0 = no, conceptual replication only.

2. Moderator? Did the paper show an interaction in which the original result replicates in some conditions but contains different effects under other conditions? (1 = yes; 0 = no)

3. Resolve conflict? Does the paper address and resolve apparent conflicts in the literature? (1 = yes; 0 = no)

4. How closely did the findings agree with the original study or studies? (From Baker, 2015): 1 = not at all similar; 2 = slightly similar; 3 = somewhat similar; 4 = moderately similar; 5 = very similar; 6 = extremely similar; 7 = virtually identical.
Table 2 shows that most papers reported replicating the original findings to a large degree. The mean of two coders on our 7-point scale was 5.8, on a scale where 6 means "extremely similar" (Cronbach's α = .76). Only three papers were coded as less than 4 ("moderately similar") to the original: Aspara & van den Berg (2014), replicating Alexander (2006); Baxter, Kulczynski, & Ilicic (2014), replicating Yorkston & Menon (2004); and Gill & El Gamal (2014), replicating Berger & Fitzsimons (2008).

Of the 30 articles, 22 were exclusively conceptual replications; only 8 included at least some conditions intended to somewhat closely match operationalizations of the original authors. Twelve of the studies showed some moderation of the original findings. Seven were coded as showing a resolution of some inconsistency in the literature. As an example, Müller, Schliwa, and Lehmann (2014) replicated Simonson and Tversky's (1992) prize decoy experiment that Frederick, Lee, and Baskin (2014) had been unable to replicate. They followed Simonson's (2014) reply to Frederick et al., using real and not hypothetical gambles with asymmetrically dominated (rather than truly inferior) decoys. This echoes our earlier point that failures to replicate can themselves fail to replicate.

Because this coding was not done with the rigor of a formal content analysis, we emailed a draft of the paper with our tentative codes to the authors of the 30 replications; all 30 replied. We received very few minor corrections, and in Table 2, we deferred to the authors in those cases.
Table 1. Summary of the nature of replications and extensions of 30 Replication Corner papers.

Authors | Title | Paper replicated | Nature of extension
Aspara and van den Berg (2014) | Naturally designed for masculinity vs. femininity? Prenatal testosterone predicts male consumers' choices of gender-imaged products | Alexander (2006), Archives of Sexual Behavior | Tested consequences of digit ratios for new DV: preference for gender-linked products.
Barakat et al. (2015) | Severe service failure recovery revisited: Evidence of its determinants in an emerging market context | Weun et al. (2004), J. Services Marketing | Extended prior findings on link of service failure severity to satisfaction to new emerging market and new industry. Extended replicated paper by testing how three perceived justice dimensions moderate that relationship in real (rather than experimental) service encounters.
Baxter and Lowrey (2014) | Examining children's preference for phonetically manipulated brand names across two English accent groups | Shrum et al. (2012), International J. of Research in Marketing | Tested moderation of brand sound preferences by accent and age.
Baxter et al. (2014) | Revisiting the automaticity of phonetic symbolism effects | Yorkston & Menon (2004), J. Consumer Research | Tested moderation of automatic phonetic symbolism effects across adults and children.
Blut et al. (2015) | How procedural, financial and relational switching costs affect customer satisfaction, repurchase intentions, and repurchase behavior: A meta-analysis | Burnham et al. (2003), J. of the Academy of Marketing Science | Meta-analysis of conflicting studies on effects of satisfaction and switching costs on repurchase behavior. Examined moderation by DV of intentions vs. actual behavior.
Brock et al. (2013) | Satisfaction with complaint handling: A replication study on its determinants in a business-to-business context | Orsingher et al. (2010), J. of the Academy of Marketing Science | Extended original findings into a B-to-B context.
Butori and Parguel (2014) | The impact of visual exposure to a physically attractive other on self-presentation | Roney (2003), Personality and Social Psychology Bulletin | Extended by use of different stimuli (pictures in a non-mating context, whereas previous studies used pictures in a mating context), and analyzed a different population: women (and not just men).
Chan (2015-a) | Attractiveness of options moderates the effect of choice overload | Gourville and Soman (2005), Marketing Science; Iyengar & Lepper (2000), J. Personality & Social Psychology | Examined a moderating variable for the choice overload effect, namely how attractive the options in a choice set are.
Chan (2015-b) | Endowment effect for hedonic but not utilitarian goods | Ariely et al. (2005), J. Marketing Research; Kahneman et al. (1990), J. Political Economy | Examined a moderating variable for the endowment effect, namely by comparing hedonic against utilitarian goods.
Chark and Muthukrishnan (2013) | The effect of physical possession on preference for product warranty | Peck and Shu (2009), J. Consumer Research | Generalized effect of physical contact from perceived ownership to intention to buy extended warranties.
Chowdhry et al. (2015) | Not all negative emotions lead to concrete construal | Labroo and Patrick (2009), J. Consumer Research | Extended the original research by showing that appraisals of specific emotions (rather than valence alone) impact construal level.
Davvetas, Sichtmann & Diamantopoulos (2015) | The impact of perceived brand globalness on consumers' willingness to pay | Steenkamp et al. (2003), J. International Business Studies | Manipulated rather than measured brand globalness, tested multiple moderators and found few significant, replicating in new product categories.
Evanschitzky et al. (2014) | Hedonic shopping motivations in collectivistic and individualistic consumer cultures | Arnold and Reynolds (2003), J. Retailing | Replicated original results in individualistic cultures, but showed different effects of shopping motivations in collectivist cultures.
Fernandes (2013) | The 1/N rule revisited: Heterogeneity in the naïve diversification bias | Benartzi and Thaler (2001), American Econ. Review | Tested and refuted two previous explanations for diversification bias, desire for variety and financial knowledge, and showed role of reliance on intuition in diversification bias.
Gill and El Gamal (2014) | Does exposure to dogs (cows) increase the preference for puma (the color white)? Not always | Berger and Fitzsimons (2008), J. Marketing Research | Extended test for priming effects to different stimuli, different population sample (general population, plus students in the lab), and different frequencies of exposure.
Hasford, Farmer, & Waites (2015) | Thinking, feeling, and giving: The effects of scope and valuation on consumer donations | Hsee and Rottenstreich (2004), J. Experimental Psychology: General | Used new charity for tests and showed moderation by understanding of emotional intelligence, shedding light on mechanism for scope insensitivity.
Holden and Zlatevska (2015) | The partitioning paradox: The big bite around small packages | Do Vale et al. (2008), J. Consumer Research; Scott et al. (2008), J. Consumer Research | Blended methods from three prior studies, testing in different country and shorter time period. Extended by showing that respondent awareness of participation in a food study eliminated the effect.
Holmqvist and Lunardo (2015) | The impact of an exciting store environment on consumer pleasure and behavioral intentions | Kaltcheva and Weitz (2006), J. Marketing | Extended to new culture and tested mediators of original effect.
Huyghe and van Kerckhove (2013) | Can fat taxes and package size restrictions stimulate healthy food choices? | Mishra and Mishra (2011), J. Marketing Research | Extended to choices between healthy and unhealthy foods. Showed that price changes for the indulgent option but not the healthy option affected intentions to buy the indulgent option. Changes in package size of the healthy but not the indulgent option affected intentions to purchase the indulgent option.
Kuehnl and Mantau (2013) | Same sound, same preference? Investigating sound symbolism effects in international brand names | Lowrey & Shrum (2007), J. Consumer Research; Shrum et al. (2012), IJRM | Compared sound perceptions and preferences in French, German, and Spanish stimuli. Created new fictitious brand names and added consideration of the effect of consonants in brand names.
Lenoir et al. (2013) | The impact of cultural symbols and spokesperson identity on attitudes and intentions | Deshpandé and Stayman (1994), J. Marketing Research; Forehand and Deshpandé (2001), J. Marketing Research | Showed targeted marketing strategies that work for first generation minority consumers do not work for second generation minority consumers and vice versa.
Lin (2013) | Does container weight influence judgments of volume? | Krishna (2006), J. Consumer Research | Replicated finding that longer cylinders are perceived to have more volume than shorter ones of equal volume, then showed this bias goes away when weight cues are incorporated into volume judgments.
Maecker et al. (2013) | Charts and demand: Empirical generalizations on social influence | Salganik et al. (2006), Science | Different subject times, stimuli, and product classes.
Mukherjee (2014) | How chilling are network externalities? The role of network structure | Goldenberg et al. (2010), IJRM | Across 7 real world data sets, author demonstrated that the conclusion that externalities slow adoption is not a tautological consequence of original model formulation; that higher size and higher average degree can offset the effect of network externalities, and that more clustering in the network strengthens the chilling effect of externalities.
Müller (2013) | The real-exposure effect revisited - How purchase rates vary under pictorial vs. real item presentations when consumers are allowed to use their tactile sense | Bushong et al. (2010), Am. Econ. Rev. | Extended to different modes of real exposure; purchase DV vs. Becker-DeGroot-Marschak; appetitive vs. nonappetitive goods; high vs. low familiarity.
Müller et al. (2014) | Prize decoys at work - New experimental evidence for asymmetric dominance effects in choices on prizes in competitions | Simonson and Tversky (1992), J. Marketing Research; Frederick et al. (2014), J. Marketing Research | Extended to real consequential choices among options tested to produce tradeoffs claimed to be necessary for asymmetric dominance effect.
Müller et al. (2013) | The time vs. money effect. A conceptual replication | Mogilner and Aaker (2009), J. Consumer Research | Extended from field to lab, tested treatment interactions with demographic background variables.
Orazi and Pizzetti (2015) | Revisiting fear appeals: A structural re-inquiry of the protection motivation model | Johnston and Warkentin (2010), MIS Quarterly | Conceptually replicated original with different subject types (adults), product types (online banking security), and model specification and estimation.
Van Doorne et al. (2013) | Satisfaction as a predictor of future performance: A replication | Keiningham et al. (2007), JM; Morgan and Rego (2006), Marketing Science; Reichheld (2003), HBR | Prior work disputed which customer metric best predicts future company performance. Authors assessed the impact of different satisfaction and loyalty metrics as well as the Net Promoter Score on sales revenue growth, gross margins and net operating cash flows using a Dutch sample.
Wright et al. (2013) | If it tastes bad it must be good: Consumer naïve theories and the marketing placebo effect | Shiv et al. (2005), J. Marketing Research | Replicated original price placebo effect using unique stimuli, subject types, and dependent variables. Showed that effect extends to other cues: set size, product typicality, product taste, and shelf availability.
Table 2. Coding of papers appearing in the Replication Corner.

Authors | Title | Paper replicated | Direct replication included? (1 = yes; 0 = no) | Moderation of effect shown? (1 = yes; 0 = no) | Resolve conflict among papers? (1 = yes; 0 = no) | Replication score (1 = not at all similar; 7 = virtually identical)
Aspara and van den Berg (2014) | Naturally designed for masculinity vs. femininity? Prenatal testosterone predicts male consumers' choices of gender-imaged products | Alexander (2006), Archives of Sexual Behavior | 0 | 0 | 0 | 3
Barakat, Ramsey, Lorenz, and Gosling (2015) | Severe service failure recovery revisited: Evidence of its determinants in an emerging market context | Weun, Beatty, and Jones (2004), J. Services Marketing | 0 | 0 | 1 | 5.5
Baxter and Lowrey (2014) | Examining children's preference for phonetically manipulated brand names across two English accent groups | Shrum, Lowrey, Luna, Lerman, & Liu (2012), International J. of Research in Marketing | 0 | 1 | 0 | 5
Baxter et al. (2014) | Revisiting the automaticity of phonetic symbolism effects | Yorkston & Menon (2004), J. Consumer Research | 1 | 0 | 0 | 2.5
Blut, Frennea, Mittal, and Mothersbaugh (2015) | How procedural, financial and relational switching costs affect customer satisfaction, repurchase intentions, and repurchase behavior: A meta-analysis | Burnham, Frels, and Mahajan (2003), J. of the Academy of Marketing Science | 0 | 1 | 1 | 5.5
Brock, Blut, Evanschitzky, and Kenning (2013) | Satisfaction with complaint handling: A replication study on its determinants in a business-to-business context | Orsingher, Valentini, and de Angelis (2010), J. of the Academy of Marketing Science | 0 | 0 | 0 | 5
Butori and Parguel (2014) | The impact of visual exposure to a physically attractive other on self-presentation | Roney (2003), Personality and Social Psychology Bulletin | 0 | 0 | 0 | 7
Chan (2015-a) | Attractiveness of options moderates the effect of choice overload | Gourville and Soman (2005), Marketing Science; Iyengar & Lepper (2000), J. Personality & Social Psychology | 0 | 1 | 1 | 7
Chan (2015-b) | Endowment effect for hedonic but not utilitarian goods | Ariely, Huber, and Wertenbroch (2005), J. Marketing Research; Kahneman, Knetsch, and Thaler (1990), J. Political Economy | 0 | 1 | 0 | 7
Chark and Muthukrishnan (2013) | The effect of physical possession on preference for product warranty | Peck and Shu (2009), J. Consumer Research | 0 | 0 | 0 | 7
Chowdhry, Winterich, Mittal, and Morales (2015) | Not all negative emotions lead to concrete construal | Labroo and Patrick (2009), J. Consumer Research | 1 | 0 | 0 | 6.5
Davvetas, Sichtmann, & Diamantopoulos (2015) | The impact of perceived brand globalness on consumers' willingness to pay | Steenkamp, Batra, and Alden (2003), J. International Business Studies | 0 | 0 | 0 | 5
Evanschitzky et al. (2014) | Hedonic shopping motivations in collectivistic and individualistic consumer cultures | Arnold and Reynolds (2003), J. Retailing | 1 | 1 | 0 | 7
Fernandes (2013) | The 1/N rule revisited: Heterogeneity in the naïve diversification bias | Benartzi and Thaler (2001), American Econ. Review | 1 | 1 | 1 | 6
Gill and El Gamal (2014) | Does exposure to dogs (cows) increase the preference for puma (the color white)? Not always | Berger and Fitzsimons (2008), J. Marketing Research | 0 | 0 | 0 | 3.5
Hasford, Farmer, & Waites (2015) | Thinking, feeling, and giving: The effects of scope and valuation on consumer donations | Hsee and Rottenstreich (2004), J. Experimental Psychology: General | 0 | 1 | 0 | 7
Holden and Zlatevska (2015) | The partitioning paradox: The big bite around small packages | Do Vale, Pieters, and Zeelenberg (2008), J. Consumer Research; Scott, Nowlis, Mandel, and Morales (2008), J. Consumer Research | 0 | 1 | 0 | 6.5
Holmqvist and Lunardo (2015) | The impact of an exciting store environment on consumer pleasure and behavioral intentions | Kaltcheva and Weitz (2006), J. Marketing | 1 | 0 | 0 | 5
Huyghe and van Kerckhove (2013) | Can fat taxes and package size restrictions stimulate healthy food choices? | Mishra and Mishra (2011), J. Marketing Research | 0 | 0 | 0 | 6.5
Kuehnl and Mantau (2013) | Same sound, same preference? Investigating sound symbolism effects in international brand names | Lowrey & Shrum (2007), J. Consumer Research; Shrum et al. (2012), IJRM | 0 | 0 | 0 | 5.5
Lenoir, Puntoni, Reed, and Verlegh (2013) | The impact of cultural symbols and spokesperson identity on attitudes and intentions | Deshpandé and Stayman (1994), J. Marketing Research; Forehand and Deshpandé (2001), J. Marketing Research | 0 | 1 | 0 | 5
Lin (2013) | Does container weight influence judgments of volume? | Krishna (2006), J. Consumer Research | 0 | 1 | 0 | 6
Maecker, Grabenströer, Clement, and Heitmann (2013) | Charts and demand: Empirical generalizations on social influence | Salganik, Dodds, and Watts (2006), Science | 1 | 0 | 0 | 6.5
Mukherjee (2014) | How chilling are network externalities? The role of network structure | Goldenberg, Libai, and Muller (2010), IJRM | 1 | 1 | 1 | 6
Müller (2013) | The real-exposure effect revisited - How purchase rates vary under pictorial vs. real item presentations when consumers are allowed to use their tactile sense | Bushong, King, Camerer, and Rangel (2010), Am. Econ. Rev. | 0 | 1 | 0 | 7
Müller et al. (2014) | Prize decoys at work - New experimental evidence for asymmetric dominance effects in choices on prizes in competitions | Simonson and Tversky (1992), J. Marketing Research; Frederick et al. (2014), J. Marketing Research | 0 | 0 | 1 | 6.5
Müller, Lehmann, and Sarstedt (2013) | The time vs. money effect. A conceptual replication | Mogilner and Aaker (2009), J. Consumer Research | 1 | 0 | 0 | 7
Orazi and Pizzetti (2015) | Revisiting fear appeals: A structural re-inquiry of the protection motivation model | Johnston and Warkentin (2010), MIS Quarterly | 0 | 0 | 0 | 5.5
Van Doorne, Leeflang, and Tijs (2013) | Satisfaction as a predictor of future performance: A replication | Keiningham, Cooil, Andreasson, and Aksoy (2007), JM; Morgan and Rego (2006), Marketing Science; Reichheld (2003), HBR | 0 | 0 | 1 | 5.5
Wright, Hernandez, Sundar, Dinsmore, and Kardes (2013) | If it tastes bad it must be good: Consumer naïve theories and the marketing placebo effect | Shiv, Carmon, and Ariely (2005), J. Marketing Research | 0 | 0 | 0 | 6.5
7. Conclusion
We are proud to have served as co-editors of the Replication Corner under IJRM editors Jacob Goldenberg and Eitan Muller. We believe that the findings published in the Replication Corner have had a distinctly positive influence on the field of marketing, serving to enhance the sense that our most important findings are not perilously fragile.

The incoming editor of IJRM will discontinue the Replication Corner in the face of the reality of journal rankings based on citation impact factors. Replication papers are crucial for the field, but on average they may be cited less than the regular-length articles in the same journals. We were heartened to learn that the new EMAC Journal of Marketing Behavior has eagerly agreed to continue the Replication Corner. JMB editor Klaus Wertenbroch took this decision although the Reproducibility Project did not replicate findings from Dai, Wertenbroch, and Brendl (2008). We infer that Wertenbroch shares our view that unsuccessful and successful replications are a valuable contribution to science and that there is no personal affront when another scholar reports an unsuccessful replication of one's earlier findings.

Like Klaus Wertenbroch, we are committed to the cause and will follow the Replication Corner to its new home at Journal of Marketing Behavior. We hope that readers of this editorial will similarly continue to support the Replication Corner.
References
Aaker, J. L., & Maheswaran, D. (1997). The effect of cultural orientation on persuasion. Journal of Consumer Research, 24(3), 315-328.
Alexander, G. M. (2006). Associations among gender-linked toy preferences, spatial ability, and digit ratio: Evidence from eye-tracking analysis. Archives of Sexual Behavior, 35(6), 699-709.
Ariely, D., Huber, J., & Wertenbroch, K. (2005). When do losses loom larger than gains? Journal of Marketing Research, 42, 134-138.
Ariely, D., Loewenstein, G., & Prelec, D. (2003). Coherent arbitrariness: Stable demand curves without stable preferences. The Quarterly Journal of Economics, 118(1), 73-105.
Arnold, M. J., & Reynolds, K. E. (2003). Hedonic shopping motivations. Journal of Retailing, 79(2), 77-95. http://dx.doi.org/10.1016/S0022-4359(03)00007-1.
Aspara, J., & van den Berg, B. (2014). Naturally designed for masculinity vs. femininity? Prenatal testosterone predicts male consumers' choices of gender-imaged products. International Journal of Research in Marketing, 31(1), 117-121.
Baker, M. (2015, April 30). First results from psychology's largest reproducibility test: Crowd-sourced effort raises nuanced questions about what counts as a replication. Nature, 2015. http://dx.doi.org/10.1038/nature.2015.17433.
Barakat, L. L., Ramsey, J. R., Lorenz, M. P., & Gosling, M. (2015). Severe service failure recovery revisited: Evidence of its determinants in an emerging market context. International Journal of Research in Marketing, 32(1), 113-116.
Bargh, J., Chen, M., & Burrows, L. (1996). Automaticity of social behavior: Direct effects of trait construct and stereotype activation on action. Journal of Personality and Social Psychology, 71(2), 230-244.
Baxter, S., & Lowrey, T. M. (2014). Examining children's preference for phonetically manipulated brand names across two English accent groups. International Journal of Research in Marketing, 31(1), 122-124.
Baxter, S., Kulczynski, A., & Ilicic, J. (2014). Revisiting the automaticity of phonetic symbolism effects. International Journal of Research in Marketing, 31(4), 448-451.
Benartzi, S., & Thaler, R. H. (2001). Naive diversification strategies in retirement saving plans. American Economic Review, 91(1), 79-98.
Berger, J., & Fitzsimons, G. (2008, February). Dogs on the street, puma on the feet: How cues in the environment influence product evaluation and choice. Journal of Marketing Research, 45, 1-14.
Blut, M., Frennea, C. M., Mittal, V., & Mothersbaugh, D. L. (2015). How procedural, financial and relational switching costs affect customer satisfaction, repurchase intentions, and repurchase behavior: A meta-analysis. International Journal of Research in Marketing, 32(2), 226-229.
Bradlow, E. T. (1996). Negative information and the three-parameter logistic model. Educational and Behavioral Statistics, 21(2), 179-185.
Brandt, M. J., IJzerman, H., Dijksterhuis, A., Farach, F. J., Geller, J., Giner-Sorolla, R., ... Van't Veer, A. (2014). The replication recipe: What makes for a convincing replication? Journal of Experimental Social Psychology, 50, 217-224.
Brinberg, D. L., Lynch, J. G., Jr., & Sawyer, A. G. (1992, September). Hypothesized and confounded explanations in theory tests: A Bayesian analysis. Journal of Consumer Research, 19, 139-154.
Brock, C., Blut, M., Evanschitzky, H., & Kenning, P. (2013). Satisfaction with complaint handling: A replication study on its determinants in a business-to-business context. International Journal of Research in Marketing, 30(3), 319-322.
Burnham, T. A., Frels, J. K., & Mahajan, V. (2003). Consumer switching costs: A typology, antecedents, and consequences. Journal of the Academy of Marketing Science, 31(2), 109-127.
Bushong, B., King, L. M., Camerer, C. F., & Rangel, A. (2010). Pavlovian processes in consumer choice: The physical presence of a good increases willingness-to-pay. American Economic Review, 100(4), 1556-1571.
Butori, R., & Parguel, B. (2014). The impact of visual exposure to a physically attractive other on self-presentation. International Journal of Research in Marketing, 31(3), 445-447.
Cesario, J. (2014). Priming, replication, and the hardest science. Perspectives on Psychological Science, 9(1), 40-48.
Chan, E. Y. (2015a). Attractiveness of options moderates the effect of choice overload. International Journal of Research in Marketing, 32(4), 425-427.
Chan, E. Y. (2015b). Endowment effect for hedonic but not utilitarian goods. International Journal of Research in Marketing, 32(4), 439-441.
Chark, R., & Muthukrishnan, A. V. (2013). The effect of physical possession on preference for product warranty. International Journal of Research in Marketing, 30(4), 424-425.
Table 2 (continued)

Authors | Title | Paper replicated | Direct replication included? (1 = Yes; 0 = No) | Moderation of effect shown? (0 = No; 1 = Yes) | Resolve conflict among papers? (1 = Yes; 0 = No) | Replication score (1 = Not at all similar; 7 = Virtually identical)
Mukherjee (2014) | How chilling are network externalities? The role of network structure | Goldenberg, Libai, and Muller (2010), IJRM | 1 | 1 | 1 | 6
Müller (2013) | The real-exposure effect revisited – How purchase rates vary under pictorial vs. real item presentations when consumers are allowed to use their tactile sense | Bushong, King, Camerer, and Rangel (2010), Am. Econ. Rev. | 0 | 1 | 0 | 7
Müller et al. (2014) | Prize decoys at work – New experimental evidence for asymmetric dominance effects in choices on prizes in competitions | Simonson and Tversky (1992), J. Marketing Research; Frederick et al. (2014), J. Marketing Research | 0 | 0 | 1 | 6.5
Müller, Lehmann, and Sarstedt (2013) | The time vs. money effect. A conceptual replication | Mogilner and Aaker (2009), J. Consumer Research | 1 | 0 | 0 | 7
Orazi and Pizzetti (2015) | Revisiting fear appeals: A structural re-inquiry of the protection motivation model | Johnston and Warkentin (2010), MIS Quarterly | 0 | 0 | 0 | 5.5
Van Doorne, Leeflang, and Tijs (2013) | Satisfaction as a predictor of future performance: A replication | Keiningham, Cooil, Andreasson, and Aksoy (2007), JM; Morgan and Rego (2006), Marketing Science; Reichheld (2003), HBR | 0 | 0 | 1 | 5.5
Wright, Hernandez, Sundar, Dinsmore, and Kardes (2013) | If it tastes bad it must be good: Consumer naïve theories and the marketing placebo effect | Shiv, Carmon, and Ariely (2005), J. Marketing Research | 0 | 0 | 0 | 6.5
Chowdhry, N., Winterich, K. P., Mittal, V., & Morales, A. C. (2015). Not all negative emotions lead to concrete construal. International Journal of Research in Marketing, 32(4), 428–430.
Coan, J. (2014). Negative psychology: The atmosphere of wary and suspicious disbelief. Blog post in Medium.com https://medium.com/@jimcoan/negative-psychology-f66795952859
Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Chicago: Rand McNally.
Cronbach, L. J. (1975). Beyond the two disciplines of scientific psychology. American Psychologist, 30, 116–127.
Dai, X., Wertenbroch, K., & Brendl, C. M. (2008). The value heuristic in judgments of relative frequency. Psychological Science, 19(1), 18–19.
Davvetas, V., Sichtman, C., & Diamantopoulos, A. (2015). The impact of perceived brand globalness on consumers' willingness to pay. International Journal of Research in Marketing, 32(4), 431–434.
Deshpandé, R., & Stayman, D. M. (1994). A tale of two cities: Distinctiveness theory and advertising effectiveness. Journal of Marketing Research, 31(1), 57–64.
Do Vale, R. C., Pieters, R., & Zeelenberg, M. (2008, October). Flying under the radar: Perverse package size effects on consumption self-regulation. Journal of Consumer Research, 35, 380–390.
Donnellan, D. (2014). Go big or go home: A recent replication attempt. Blog post https://traitstate.wordpress.com/2013/12/11/go-big-or-go-home-a-recent-replication-attempt/
Doyen, S., Klein, O., Pichon, C., & Cleeremans, A. (2012). Behavioral priming: It's all in the mind, but whose mind? PloS One, 7(1), e29081.
Evanschitzky, H., Emrich, O., Sangtani, V., Ackfeldt, L., Reynolds, K. E., & Arnold, M. J. (2014). Hedonic shopping motivations in collectivistic and individualistic consumer cultures. International Journal of Research in Marketing, 31(3), 335–338.
Fang, F. C., Steen, R. G., & Casadevall, A. (2012). Misconduct accounts for the majority of retracted scientific publications. Proceedings of the National Academy of Sciences, 109(42), 17028–17033.
Farley, J. U., Lehmann, D. R., & Mann, L. H. (1998, November). Designing the next study for maximum impact. Journal of Marketing Research, 35, 496–501.
Fernandes, D. (2013). The 1/N rule revisited: Heterogeneity in the naïve diversification bias. International Journal of Research in Marketing, 30(3), 310–313.
Forehand, M. R., & Deshpandé, R. (2001). What we see makes us who we are: Priming ethnic self-awareness and advertising response. Journal of Marketing Research, 38(3), 338–346.
Frederick, S., Lee, L., & Baskin, E. (2014). The limits of attraction. Journal of Marketing Research, 51(4), 487–507.
Gelman, A., & Loken, E. (2014). The statistical crisis in science. American Scientist, 102(6), 460–465.
Gergen, K. J. (1973). Social psychology as history. Journal of Personality and Social Psychology, 26(2), 309.
Gibson, C. E., Losee, J., & Vitiello, C. (2014). A replication attempt of stereotype susceptibility (Shih, Pittinsky, & Ambady, 1999): Identity salience and shifts in quantitative performance. Social Psychology, 45(3), 194–198.
Gilbert, D. (2014). Psychology's replication police prove to be shameless little bullies. Twitter post https://twitter.com/dantgilbert/status/470199929626193921
Gill, T., & El Gamal, M. (2014). Does exposure to dogs (cows) increase the preference for puma (the color white)? Not always. International Journal of Research in Marketing, 31(1), 125–126.
Goldenberg, J., Libai, B., & Muller, E. (2010). The chilling effects of network externalities. International Journal of Research in Marketing, 27(1), 4–15.
Gourville, J., & Soman, D. (2005). Overchoice and assortment type: When variety backfires. Marketing Science, 24, 382–395.
Hasford, J., Farmer, A., & Waites, S. F. (2015). Thinking, feeling, and giving: The effects of scope and valuation on consumer donations. International Journal of Research in Marketing, 32(4), 435–438.
Holden, S. S., & Zlatevska, N. (2015). The partitioning paradox: The big bite around small packages. International Journal of Research in Marketing, 32(2), 230–233.
Holmqvist, J., & Lunardo, R. (2015). The impact of an exciting store environment on consumer pleasure and shopping intentions. International Journal of Research in Marketing, 32(1), 117–119.
Hsee, C. K., & Rottenstreich, Y. (2004). Music, pandas, and muggers: On the affective psychology of value. Journal of Experimental Psychology: General, 133(1), 23. http://dx.doi.org/10.1037/0096-3445.133.1.23
Huyghe, E., & van Kerckhove, A. (2013). Can fat taxes and package size restrictions stimulate healthy food choices? International Journal of Research in Marketing, 30(4), 421–423.
Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124.
Iyengar, S. S., & Lepper, M. R. (2000). When choice is demotivating: Can one desire too much of a good thing? Journal of Personality and Social Psychology, 79, 995–1006.
Johnson, D. J., Cheung, F., & Donnellan, M. B. (2014). Does cleanliness influence moral judgments? A direct replication of Schnall, Benton, and Harvey (2008). Social Psychology, 45(3), 209–215.
Johnston, A. C., & Warkentin, M. (2010). Fear appeals and information security behaviors: An empirical study. MIS Quarterly, 34(3), 549–566.
Kahneman, D., Knetsch, J. L., & Thaler, R. H. (1990). Experimental tests of the endowment effect and the Coase theorem. Journal of Political Economy, 98, 1325–1348.
Kaltcheva, V. D., & Weitz, B. A. (2006). When should a retailer create an exciting store environment? Journal of Marketing, 70(1), 107–118.
Keiningham, T. L., Cooil, B., Andreasson, T. W., & Aksoy, L. (2007). A longitudinal examination of net promoter and firm revenue growth. Journal of Marketing, 71(3), 39–51.
Klein, R. A., Ratliff, K. A., Vianello, M., Adams, R. B., Bahník, S., Bernstein, M. J., Bocian, K., et al. (2014). Investigating variation in replicability. Social Psychology, 45(3), 142–152.
Krishna, A. (2006). Interaction of senses: The effect of vision versus touch on the elongation bias. Journal of Consumer Research, 32, 557–566.
Kuehnl, C., & Mantau, A. (2013). Same sound, same preference? Investigating sound symbolism effects in international brand names. International Journal of Research in Marketing, 30(4), 417–420.
Labroo, A. A., & Patrick, V. M. (2009). Psychological distancing: Why happiness helps you see the big picture. Journal of Consumer Research, 35(5), 800–809.
Lenoir, A. I., Puntoni, S., Reed, A., & Verlegh, P. W. J. (2013). The impact of cultural symbols and spokesperson identity on attitudes and intentions. International Journal of Research in Marketing, 30(4), 426–428.
Lin, H.-M. (2013). Does container weight influence judgments of volume? International Journal of Research in Marketing, 30(3), 308–309.
Lowrey, T. M., & Shrum, L. J. (2007, October). Phonetic symbolism and brand name preference. Journal of Consumer Research, 34, 406–414.
Lynch, J. G., Jr. (1982, December). On the external validity of experiments in consumer research. Journal of Consumer Research, 9, 225–239.
Lynch, J. G., Jr. (1983, June). The role of external validity in theoretical research. Journal of Consumer Research, 10, 109–111.
Maecker, O., Grabenströer, N. S., Clement, M., & Heitmann, M. (2013). Charts and demand: Empirical generalizations on social influence. International Journal of Research in Marketing, 30(4), 429–431.
Maniadis, Z., Tufano, F., & List, J. A. (2014). One swallow doesn't make a summer: New evidence on anchoring effects. The American Economic Review, 104(1), 277–290.
Meehl, P. E. (1967). Theory-testing in psychology and physics: A methodological paradox. Philosophy of Science, 34(2), 103–115.
Mishra, A., & Mishra, H. (2011). The influence of price discount versus bonus pack on the preference for virtue and vice foods. Journal of Marketing Research, 48(1), 196–206.
Mogilner, C., & Aaker, J. (2009). The time vs. money effect: Shifting product attitudes and decisions through personal connection. Journal of Consumer Research, 36(2), 277–291.
Moon, A., & Roeder, S. S. (2014). A secondary replication attempt of stereotype susceptibility (Shih, Pittinsky, & Ambady, 1999). Social Psychology, 45(3), 199–201.
Morgan, N. A., & Rego, L. L. (2006). The value of different customer satisfaction and loyalty metrics in predicting business performance. Marketing Science, 25(5), 426–439.
Mukherjee, P. (2014). How chilling are network externalities? The role of network structure. International Journal of Research in Marketing, 31(4), 452–456.
Müller, H. (2013). The real-exposure effect revisited – How purchase rates vary under pictorial vs. real item presentations when consumers are allowed to use their tactile sense. International Journal of Research in Marketing, 30(3), 304–307.
Müller, H., Lehmann, S., & Sarstedt, M. (2013). The time vs. money effect. A conceptual replication. International Journal of Research in Marketing, 30(2), 199–200.
Müller, H., Schliwa, V., & Lehmann, S. (2014). Prize decoys at work – New experimental evidence for asymmetric dominance effects in choices on prizes in competitions. International Journal of Research in Marketing, 31(4), 457–460.
Open Science Collaboration (2015). Estimating the reproducibility of psychological science. Science, 349, aac4716. http://dx.doi.org/10.1126/science.aac4716
Orazi, D. C., & Pizzetti, M. (2015). Revisiting fear appeals: A structural re-inquiry of the protection motivation model. International Journal of Research in Marketing, 32(2), 223–225.
Orsingher, C., Valentini, S., & de Angelis, M. (2010). A meta-analysis of satisfaction with complaint handling in services. Journal of the Academy of Marketing Science, 38(2), 169–186.
Pashler, H., & Harris, C. R. (2012). Is the replicability crisis overblown? Three arguments examined. Perspectives on Psychological Science, 7(6), 531–536.
Pashler, H., & Wagenmakers, J. (2012). Editors' introduction to the special section on replicability in psychological science: A crisis of confidence? Perspectives on Psychological Science, 7(6), 528–530.
Peck, J., & Shu, S. B. (2009). The effect of mere touch on perceived ownership. Journal of Consumer Research, 36(3), 434–447.
Reichheld, F. F. (2003). The one number you need to grow. Harvard Business Review, 81, 46–54.
Roney, J. R. (2003). Effects of visual exposure to the opposite sex: Cognitive aspects of mate attraction in human males. Personality and Social Psychology Bulletin, 29(3), 393–404.
Salganik, M. J., Dodds, P. S., & Watts, D. J. (2006). Experimental study of inequality and unpredictability in an artificial cultural market. Science, 311, 854–856.
Schnall, S., Benton, J., & Harvey, S. (2008). With a clean conscience: Cleanliness reduces the severity of moral judgments. Psychological Science, 19(12), 1219–1222.
Schooler, J. (2011). Unpublished results hide the decline effect. Nature, 470(7335), 437–439.
Schwarz, N., & Strack, F. (2014). Does merely going through the same moves make for a "direct" replication? Concepts, contexts, and operationalizations. Social Psychology, 45(4), 299–311.
Scott, M. L., Nowlis, S. M., Mandel, N., & Morales, A. C. (2008, October). The effects of reduced food size and package size on the consumption behavior of restrained and unrestrained eaters. Journal of Consumer Research, 35, 391–405.
Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin.
Shih, M., Pittinsky, T. L., & Ambady, N. (1999). Stereotype susceptibility: Identity salience and shifts in quantitative performance. Psychological Science, 10(1), 80–83.
Shiv, B., Carmon, Z., & Ariely, D. (2005). Placebo effects of marketing actions: Consumers may get what they pay for. Journal of Marketing Research, 42(4), 383–393.
Shrum, L. J., Lowrey, T. M., Luna, D., Lerman, D. B., & Liu, M. (2012). Sound symbolism effects across languages: Implications for global brand names. International Journal of Research in Marketing, 29(3), 275–279.
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359–1366.
Simons, D. J. (2014). The value of direct replication. Perspectives on Psychological Science, 9(1), 76–80.
Simonsohn, U., Simmons, J. P., & Nelson, L. D. (2014). Anchoring is not a false-positive: Maniadis, Tufano, and List's (2014) failure to replicate is actually entirely consistent with the original (April 27, 2014). Available at SSRN: http://ssrn.com/abstract=2351926
Simonsohn, U., Simmons, J. P., & Nelson, L. D. (2015). Specification curve: Descriptive and inferential statistics for all plausible specifications. Presentation at University of Maryland decision processes symposium.
Simonson, I. (2014, August). Vices and virtues of misguided replications: The case of asymmetric dominance. Journal of Marketing Research, 51, 514–519.
Simonson, I., & Tversky, A. (1992). Choice in context: Tradeoff contrast and extremeness aversion. Journal of Marketing Research, 29(3), 281.
Steenkamp, J. B. E. M., Batra, R., & Alden, D. L. (2003). How perceived brand globalness creates brand value. Journal of International Business Studies, 34, 53–65.
Stroebe, W., & Strack, F. (2014). The alleged crisis and the illusion of exact replication. Perspectives on Psychological Science, 9(1), 59–71.
Van Doorne, J., Leeflang, P. S. H., & Tijs, M. (2013). Satisfaction as a predictor of future performance: A replication. International Journal of Research in Marketing, 30(3), 314–318.
Weun, S., Beatty, S. E., & Jones, M. A. (2004). The impact of service failure severity on service recovery evaluations and post-recovery relationships. Journal of Services Marketing, 18(2), 133–146.
Wright, S. A., Hernandez, J. M. C., Sundar, A., Dinsmore, J., & Kardes, F. R. (2013). If it tastes bad it must be good: Consumer naïve theories and the marketing placebo effect. International Journal of Research in Marketing, 30(2), 197–198.
Yorkston, E., & Menon, G. (2004). A sound idea: Phonetic effects of brand names on consumer judgment. Journal of Consumer Research, 31, 43–51.