Journal of Retailing 90 (2, 2014) 217–232

How Online Product Reviews Affect Retail Sales: A Meta-analysis

Kristopher Floyd, Ryan Freling, Saad Alhoqail, Hyun Young Cho, Traci Freling∗

College of Business Administration, University of Texas at Arlington, Arlington, TX 76019, United States
Abstract

A growing body of research has emerged on online product reviews and their ability to elicit performance outcomes desired by retailers; yet, a common understanding of the performance implications of online product reviews has eluded us. Scholars continue to navigate an array of studies assessing different design elements of online product reviews, and various research settings and data sources. We undertake a meta-analysis of 26 empirical studies yielding 443 sales elasticities to examine how these variables relate to retail sales. Building on well-established meta-analytical methods, we address the following questions: How does review valence influence the elasticity of retailer sales? What about review volume? For which product types and usage situations do online product reviews have a greater impact on retailer sales elasticity? Which types of online reviewers and websites exert the greatest influence on retailer sales elasticity? Our study answers these important questions and provides a much-needed quantitative synthesis of this burgeoning stream of research.

© 2014 New York University. Published by Elsevier Inc. All rights reserved.
http://dx.doi.org/10.1016/j.jretai.2014.04.004

Keywords: Online word-of-mouth; Electronic word-of-mouth; Online consumer reviews; Online feedback mechanisms
Introduction
In an online survey of 2,005 American shoppers conducted in conjunction with KRC Research, Weber Shandwick (2012) examined how consumers use reviews to make buying decisions and how online product reviews affect sales. The study shows that, as a result of consumer reviews, 65% of potential consumers selected a brand that had not been in their original consideration set. As consumers search online, learn about products, and evaluate different alternatives, they are likely to encounter and consider numerous online product reviews from other consumers (Mudambi and Schuff 2010). And, according to a recent report by market research firm Nielsen (2012), 70% of consumers indicate they trust online product reviews. Opinions posted online are thought to influence consumers' choices in a surprising variety of contexts, including airlines, telephone companies, resorts, movies, restaurants, and stocks (Guernsey 2000)—and utilization of online recommendations in decision-making appears to be on the rise. Web traffic analysis site Compete.com reports that, in December 2012, visits increased 15% at Yelp.com (a review website for local businesses), 8% at TripAdvisor.com (a travel review website), and 80% at Angie's List (another review website for local businesses) (Grant 2013).

∗ Corresponding author. Tel.: +1 817 272 0152; fax: +1 817 272 2854. E-mail addresses: kfloyd@uta.edu (K. Floyd), rfreling@uta.edu (R. Freling), saad.alhoqail@uta.edu, alhoqail@uta.edu (S. Alhoqail), hyunyoung.cho@uta.edu, hyunycho@uta.edu (H.Y. Cho), freling@uta.edu (T. Freling).
Moreover, in a recent study, 5,000 shoppers across five countries were asked to indicate the three most important sources of information they use for making buying decisions. Online ratings and reviews on retailer websites (52%) were the source respondents most frequently included among their top three—ahead of advice from friends and family members (49%) and advice from store employees (12%) (Cisco Internet Business Solutions Group 2013). These findings are consistent with other survey results in which online product reviews are rated as "important" or "extremely important" in the buying decisions of half of consumers who visited retailer sites with consumer postings (Forrester Research 2000).
Consumers' incorporation of online product reviews into their decision-making has not escaped the notice of retailers, who actively try to harness electronic word-of-mouth (eWOM) as a new marketing tool by inviting their consumers to post personal product evaluations on seller websites or by directing consumers to information provided about their products by other third-party sources (e.g., Epinions.com or Moviefone.com) (Dellarocas 2003). To illustrate, Amazon.com has encouraged consumers to post their product reviews since 1995, and now boasts over 10 million consumer reviews across product categories on its website. Amazon's online product reviews are very popular and are considered to be one of the site's more effective features (New York Times 2004).
Electronic word-of-mouth communication (eWOM)—defined by Goldsmith (2006) as "word-of-mouth communication on the Internet, which can be diffused by many Internet applications such as online forums, electronic bulletin board systems, blogs, review sites, and social networking sites"—is regarded by marketers as an important source of product information that influences human behavior (Brown and Reingen 1987; McFadden and Train 1996).¹ In comparison to traditional WOM (Katz and Lazarsfeld 1955), eWOM may be perceived by consumers as: (1) a more powerful, effective communication device, because it can be accessed by consumers anywhere via the Internet (Bakos and Dellarocas 2011; Duan, Gu, and Whinston 2008); (2) more balanced and unbiased, because it allows divergent opinions to be presented simultaneously on the same website and from different consumers (Lee, Park, and Han 2008; Senecal and Nantel 2004); (3) easier to decipher, given that the quantity and quality of online feedback mechanisms is published in written form; and (4) more controllable by retailers, who can design information systems that mediate online feedback exchanges by regulating who participates, what type of information is solicited, how information is aggregated, and what type of information is made available about sources (Dellarocas 2003). Given these interesting features of eWOM, simply relying on existing knowledge of traditional WOM (c.f., de Matos and Rossi 2008) would likely be insufficient for fully understanding a particular eWOM mechanism like online product reviews.

¹ As this definition implies, eWOM takes many forms, including online rate-and-review websites, discussion boards, chat rooms, blogs, wikis, etc. We limit our exploration to online product reviews because research suggests they constitute the most prevalent form of eWOM (Duan, Gu, and Whinston 2008).
These differences between traditional WOM and eWOM seem to be largely ignored by retailers, who eagerly integrate online product reviews into their marketing strategies, assuming that they will significantly influence consumers' purchasing decisions and ultimately improve profits. Interestingly, a spate of recent empirical studies exploring the impact of online product reviews has produced mixed results. While some research suggests online product reviews strongly affect retailer performance (Chen, Dhanasobhon, and Smith 2008; Chevalier and Mayzlin 2006; Clemons, Gao, and Hitt 2006; Ghose and Ipeirotis 2006), other work in this area suggests their influence is negligible (Chen, Wu, and Yoon 2004; Duan, Gu, and Whinston 2008), equivocal (Chen, Wu, and Yoon 2004; Eliashberg and Shugan 1997), or context-dependent (Chatterjee 2001; Li and Hitt 2008). Thus, despite the significant insights provided by prior research, a consensus regarding the impact of online product reviews has yet to emerge, intimating the need for a systematic integration of this body of work.
It is possible that a comprehensive grasp of online product reviews has evaded scholars because of some interrelated characteristics of research in this area. Specifically, the diverse array of research approaches, settings, and data sources explored in researching online product reviews may have hindered the quest for generalizable insights. Empirical assessments of online product reviews have considered their impact on movie releases (Duan, Gu, and Whinston 2008), television viewership (Godes and Mayzlin 2004), and the sales of books (Chevalier and Mayzlin 2006), beer (Clemons, Gao, and Hitt 2006), and automobiles (Chen, Fay, and Wang 2003), to name a few. Additionally, miscellaneous sources—which vary considerably in terms of bias and expertise—provide the online feedback investigated in this stream of research, including online consumer reviews on retailer websites (Chen, Wu, and Yoon 2004), reviews mediated on third-party websites (Dellarocas, Awad, and Zhang 2004), and expert or professional reviews (Basuroy, Chatterjee, and Ravid 2003). Although collectively these efforts provide valuable insights within the bounds of the contexts considered, gaining a better understanding of the effects of online product reviews hinges on investigating the systematic variation induced by such differences.
In the current research, we employ meta-analysis to quantitatively synthesize this developing literature stream and to explore the consequences of online product reviews. Specifically, we examine the effect of online product reviews on retailer sales and delineate important moderators relating to characteristics of the reviews and the products being evaluated that enhance or mitigate these effects. For academics, understanding what makes online product reviews effective will help set the agenda for future research efforts. Retailers also benefit from practical guidance based on rigorous analysis of specific design elements of online feedback mechanisms and contextual variables that could improve their marketing initiatives.

In the sections that follow, we delineate the scope of our study and discuss the development of our database. We then present the meta-analytic methodology employed and describe in detail the variables coded and included in our analyses, including characteristics of the reviews, products/usage situations, study and sample, and data. Following this, we report the results of our meta-analysis, concluding with a discussion of the theoretical and managerial implications of this research.
Methodology

Database Development

We constructed a database using several approaches to identify the population of studies for inclusion in our meta-analysis. First, we collected relevant articles appearing in a recent meta-analysis of overall WOM effects (de Matos and Rossi 2008), including only those papers that explored eWOM (not traditional WOM). We next conducted a manual search of leading journals (including the Journal of Marketing Research, Journal of Consumer Research, Journal of Marketing, Journal of Consumer Psychology, Advances in Consumer Research, Marketing Science, Journal of Retailing, and the Journal of the Academy of Marketing Science) in which articles investigating online product reviews were most likely to appear. Keyword searches of electronic databases using such terms as "WOM," "word of mouth," "online reviews," "eWOM," "online word of mouth," "online recommendation," "online rater," "word of mouth performance," "online consumer review," and "online rating" were then conducted.
Importantly, in an attempt to avoid publication bias that could reduce measurement variability in the meta-analysis (Andrews and Franke 1991; Ferguson and Brannick 2012; Rust, Lehmann, and Farley 1990), we searched for unpublished studies, working papers, conference papers, and dissertations examining eWOM by contacting published authors of research in this area, searching SSRN and ProQuest, and posting a request for such work on ELMAR. Finally, employing an ancestry approach, we examined the references of studies identified in the preceding searches and key conceptual articles. Together, these efforts initially yielded approximately 400 articles, which we further scrutinized for inclusion in our meta-analysis.

Table 1
Description of articles comprising meta-analytic dataset.

| Article | Source(s) of data | Product(s) reviewed | Review characteristics examined | Measure of sales | Number of elasticities |
| Ogut and Tas (2012) | Booking.com (hotel booking website) | Hotels | Valence | ln(Reviews per room) | 8 |
| Ye, Gu, and Chen (2010) | Ctrip.com (Chinese travel website) | Hotels | Valence | ln(Monthly hotel bookings)^a | 1 |
| Ye et al. (2011) | Ctrip.com (Chinese travel website) | Hotels | Valence | ln(Number of reviews) | 1 |
| Li and Hitt (2008) | Amazon.com | Books | Volume | ln(Sales estimated from sales rank)^a | 4 |
| Liu (2006) | Yahoo! Movies, TheNumbers.com, Variety.com, IMDB.com | Movies | Volume | ln(Weekly box office revenues) | 18 |
| Amblee and Bui (2011) | Amazon.com | Amazon short stories | Valence and Volume | Sales rank | 3 |
| Archak, Ghose, and Ipeirotis (2010) | Amazon.com | Digital cameras and camcorders | Valence and Volume | ln(Sales rank) | 28 |
| Brandes, Nolte, and Nolte (2011) | Online travel and holiday portal | Hotels | Valence and Volume | ln(Hotel days booked/week) | 24 |
| Chen, Dhanasobhon, and Smith (2008) | Amazon.com | Books | Valence and Volume | ln(Sales rank)^a | 10 |
| Chen, Fay, and Wang (2011) and Chen, Wang, and Xie (2011) | Amazon.com | Digital cameras | Valence and Volume | ln(Sales rank)^a | 16 |
| Chevalier and Mayzlin (2006) | Amazon.com, BN.com | Books | Valence and Volume | ln(Sales rank)^a | 50 |
| Chintagunta, Gopinath, and Venkataraman (2010) | Yahoo! Movies, ACNielsen | Movies | Valence and Volume | ln(Gross box office sales) | 26 |
| Clemons, Gao, and Hitt (2006) | Association of Brewers, Ratebeer.com | Craft beer | Valence and Volume | Sales growth rate | 5 |
| Cui, Lui, and Guo (2012) | Amazon.com | Video games, consumer electronics | Valence and Volume | ln(Sales rank) | 6 |
| Dewan and Ramprasad (2009) | Nielsen SoundScan, Amazon.com | Music albums | Valence and Volume | ln(Album sales) | 14 |
| Duan, Gu, and Whinston (2008) | Yahoo! Movies, Box Office Mojo, Variety.com | Movies | Valence and Volume | Daily gross movie revenues | 16 |
| Forman, Ghose, and Wiesenfeld (2008) | Amazon.com | Books | Valence and Volume | ln(Sales rank) | 6 |
| Ghose and Ipeirotis (2011) | Amazon.com | Audio and video players, digital cameras, DVDs | Valence and Volume | ln(Sales rank) | 6 |
| Godes and Mayzlin (2004) | Nielsen ratings, Usenet newsgroups | TV shows | Valence and Volume | Television viewership rating points | 27 |
| Gu, Park, and Konana (2012) | Amazon.com, Cnet, DPreview, Epinions | Digital cameras | Valence and Volume | ln(Sales rank) | 52 |
| Pathak et al. (2010) | Amazon.com | Books | Valence and Volume | ln(Sales rank)^a | 16 |
| Sun (2012) | Amazon.com, BN.com | Books | Valence and Volume | ln(Sales rank)^a | 12 |
| Yang et al. (2012) | Korean Film Council, NAVER Movie (Korean web portal) | Movies | Valence and Volume | ln(Weekly box office revenues) | 24 |
| Zhang, Li, and Chen (2012) | Amazon.com, BN.com | Books | Valence and Volume | ln(Sales rank)^a | 48 |
| Zhu and Lai (2009) | Tongcheng, Xiecheng (Chinese travel websites) | Hotels | Valence and Volume | ln(Number of visitors) | 2 |
| Zhu and Zhang (2010) | NPD (marketing research firm), GameSpot.com | Video game consoles | Valence and Volume | ln(Market share)^a | 19 |

^a At least some of the elasticities calculated from this article are based on differenced sales measures.
Domain Specification

From this initial group of articles we eliminated any papers that were not electronically based, or that did not examine the valence and/or volume of product evaluations. Among the remaining eWOM articles, we elected not to include studies exploring the effects of discussion boards, chat rooms, blogs, wikis, and forms of eWOM other than online product reviews. Further, because we are interested in investigating the impact of online product reviews on sales,² we excluded (1) effects based on experimental or other noneconometric designs, and (2) articles that did not include actual sales data or that measured different dependent variables. So, for example, field work or empirical papers that measured review helpfulness, likelihood to post an online product review, or price of the product sold (c.f., Chen, Fay, and Wang 2011; Chen, Wang, and Xie 2011; Jiang and Wang 2007) were not included, nor were experiments that measured attitudes and purchase intentions (c.f., Khare, Labrecque, and Asare 2011; Kim and Gupta 2012). A study was deemed eligible for inclusion in our analyses if it provided econometric estimates of sales elasticity or reported descriptive statistics and beta coefficients that allowed us to compute sales elasticity.³ Overall, 26 papers covering the period from 2004 to 2013 met these requirements and were coded for analysis.

² Papers that measured sales, the log of sales, sales rank, revenue, and other proxies for demand were deemed includable. Importantly, we coded for the "functional form" of the model so that we could control for differences in the reporting of the dependent variable.

³ Some models were run on subsamples of the full dataset. We used descriptive statistics from the full dataset when authors did not provide this information for subsamples (c.f., Godes and Mayzlin 2004, Table 8).

In our database, a "paper" is any distinct document (a journal article, an unpublished dissertation, a working paper, etc.) that offers some original analysis and findings—so there are no duplications or redundant papers (Wood 2008). The papers comprising our database provide analyses of many distinct datasets that contain information about sales related to online product reviews in some specific market setting. Following Albers, Mantrala, and Sridhar (2010), when researchers apply a different estimation technique/model using the same data in the same paper or different papers, we treat the resulting elasticities as multiple distinct measurements from one dataset. One paper may also provide analyses of multiple distinct datasets and contribute one or more distinct sales elasticity estimates from each dataset. Applying these domain specifications, the 26 research papers comprising our database provide 443 sales elasticity measurements. (The papers comprising our database are denoted by an asterisk (*) in our References section.) Of these 443 elasticities, 147 (33.18%) involve book purchases, 104 (23.48%) involve sales of electronic products (e.g., cameras or videogame players), 100 (22.57%) involve box office movie sales or album sales, 34 (7.67%) involve hotel nights purchased, and 58 (13.09%) are from other settings (e.g., TV show ratings and DVDs). (These papers are described in Table 1.)
Coding Procedures

Treatment of sales measures

Our database contains only studies that provide econometric estimates of sales elasticities associated with online product reviews. Even so, we were forced to consider a diverse array of sales measures employed by researchers in this domain. Ultimately we include: (1) measures directly related to sales; (2) proxy measures of sales; and (3) measures of relative sales. Measures that directly reflect sales—such as gross movie receipts (Chintagunta, Gopinath, and Venkataraman 2010) or Nielsen television rating points (Godes and Mayzlin 2004)—permit a relatively straightforward interpretation of sales elasticities. However, a large portion of our database utilizes proxy measures of sales, such as product sales rankings provided by Amazon.com (c.f. Chevalier and Mayzlin 2006) or reviews per room (Ogut and Tas 2012). Because Amazon sales ranks are accepted in the literature as a suitable proxy for sales,⁴ and because this proxy dominates our database—42% of the papers in our meta-analysis use sales ranks in some form as their dependent measure—we include all studies using a proxy measure of sales in their analysis.

⁴ Using proprietary data from Amazon.com, Schnapp and Allwine (2001) show that the relationship between Log(sales) and Log(sales rank) is approximately linear.
Using sales rank as a dependent measure of sales introduces an additional challenge relating to the inverse relationship between sales rank and actual sales. That is, products with higher sales ranks are by definition those that have fewer sales (e.g., a sales rank of "1" denotes a best seller, while a higher rank of 323 means the product has relatively lower sales). Hence, care must be taken to account for the sign of the elasticity estimate at the measurement level, especially when comparing elasticities across studies—since a negative value actually represents an increase in sales for sales elasticities based on sales rank.
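To make the sign logic concrete, the following illustration (ours, combining the approximately linear log-sales/log-rank relationship in footnote 4 with a rank-based review regression) shows why a negative coefficient on a rank-based dependent measure corresponds to a positive effect on sales:

```latex
% Illustrative sign flip for rank-based elasticities. Here b > 0 is the
% slope of the (approximately linear) log-sales/log-rank relationship,
% and x is the review variable (valence or volume).
\[
\ln(\text{sales}) \approx a - b \ln(\text{rank}), \qquad
\ln(\text{rank}) = \alpha + \beta x
\quad\Rightarrow\quad
\frac{\partial \ln(\text{sales})}{\partial x} \approx -b\,\beta .
\]
% A negative \beta on ln(rank) therefore implies that x increases sales.
```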
Finally, we also include papers that use relative measures of sales, such as those that result from the econometric technique of differencing. Nearly 30% of the studies in our database employ differencing techniques to help account for endogeneity in their models by differencing across platforms (e.g., Amazon vs. Barnes & Noble; Sun 2012), across time (Chen, Fay, and Wang 2011; Chen, Wang, and Xie 2011), or both (Chevalier and Mayzlin 2006).⁵

⁵ A further complication associated with including studies that employ differencing techniques involves our ability to calculate elasticities when the model is in Log–Level form. Not all papers that otherwise met our inclusion criteria reported the summary statistics necessary to determine the mean of the differenced variable for use in elasticity calculations. When this crucial information could not be obtained from authors, we were not able to include those observations in our analyses. This reduced the number of observations in our model from 443 to 411 sales elasticities.
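As a stylized sketch of the differencing idea (our illustration under simplified assumptions, not the exact specification of any included study), differencing the log-sales of product i at time t across two platforms A and B cancels unobserved product effects common to both platforms:

```latex
% Cross-platform differencing (illustrative). \mu_i is an unobserved
% product effect (e.g., quality) that confounds sales levels on both
% platforms but drops out of the difference.
\begin{gather*}
\ln S^{A}_{it} = \mu_i + \beta x^{A}_{it} + \varepsilon^{A}_{it},
\qquad
\ln S^{B}_{it} = \mu_i + \beta x^{B}_{it} + \varepsilon^{B}_{it},\\
\ln S^{A}_{it} - \ln S^{B}_{it}
  = \beta\,(x^{A}_{it} - x^{B}_{it})
  + (\varepsilon^{A}_{it} - \varepsilon^{B}_{it}).
\end{gather*}
```

The review effect β is then identified from cross-platform contrasts rather than from levels confounded with unobserved product quality.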
Coding sales elasticities

Following coding techniques suggested by Hunter and Schmidt (2004), we collected data for our meta-analysis that allowed us to code or calculate sales elasticities. In many cases, the articles comprising our database did not report elasticities directly, requiring us to calculate the elasticities from information provided in those studies. In studies employing a log–log specification (where the dependent measure of sales and the independent variable of review rating or review volume are included in the model as natural log transformations of the original variable), the beta coefficient reported in the model is the elasticity. In studies featuring a log–level model specification, the elasticity was calculated by multiplying the beta coefficient on the independent review variable by its mean value. Finally, in studies employing a level–level specification (where neither the independent variable nor the dependent variable of interest is transformed), the elasticity is the beta coefficient multiplied by the mean value of the independent variable and divided by the mean value of the dependent variable.
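In compact form (a restatement of the three rules above, where S is the sales measure, x the review variable, and bars denote sample means):

```latex
% Elasticity recovered under each functional form:
\begin{align*}
\text{log--log:}\quad    & \ln S = \alpha + \beta \ln x &
  \hat{\eta} &= \hat{\beta} \\
\text{log--level:}\quad  & \ln S = \alpha + \beta x     &
  \hat{\eta} &= \hat{\beta}\,\bar{x} \\
\text{level--level:}\quad & S = \alpha + \beta x        &
  \hat{\eta} &= \hat{\beta}\,\bar{x}/\bar{S}
\end{align*}
```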
Coding independent variables

In addition to coding/calculating sales elasticities, we also coded 15 independent variables that could potentially influence sales elasticities (see Fig. 1). Table 2 presents our coding scheme and provides an overview of the independent variables in our meta-analytic model, which fall into one of two categories: (1) theoretical variables (i.e., review characteristics and product characteristics); or (2) methodological variables (i.e., study/sample characteristics and data/model characteristics).

[Fig. 1. Study framework for variables influencing elasticity of retail sales. The framework relates theoretical variables (review valence, critics' reviews, third-party reviews, product benefits, frequency of purchase) and methodological variables (manuscript status, journal quality, geographic setting, sample domain, span of data collection, estimation method, functional form of model, endogeneity, heterogeneity) to the elasticity of retail sales for the volume and valence of reviews.]

Table 2
Variables used in analysis.

| Number | Variable description | Coding scheme |
| Theoretical variables | | |
| 1 | Review valence: captures whether the calculated elasticity relates to volume or valence of reviews | 1 = Sales elasticity calculated for valence of reviews; 0 = Sales elasticity calculated for volume of reviews |
| 2 | Critics' reviews: captures whether the reported reviews included reviews by experts or only consumer reviews | 1 = Critical reviews included; 0 = Critical reviews not included |
| 3 | Third-party sources: captures whether the reported reviews included reviews from third-party websites or only seller websites | 1 = Third-party review included; 0 = Third-party review not included |
| 4 | Product involvement: captures the degree to which consumers perceive a product category to be innately important | Mean rating of product involvement, where 1 = Not involving and 7 = Involving |
| 5 | Product benefits: captures whether a product is a necessity purchased for utilitarian reasons to be privately consumed | Mean rating, where 1 = Privately consumed necessity and 7 = Publicly consumed luxury |
| 6 | Frequency of purchase: captures whether the product is a durable product that represents a nonroutine purchase (vs. a nondurable product with a life span of less than three years) | Mean rating of product durability, where 1 = Nonroutine purchase of a durable product and 7 = Routine purchase of a nondurable product |
| Methodological variables | | |
| 7 | Manuscript status: whether the paper was published in a peer-reviewed journal | 1 = Published manuscript; 0 = Unpublished manuscript |
| 8 | Journal quality: captures whether the paper appeared in an elite journal | 1 = Published in an elite journal; 0 = Not published in an elite journal |
| 9 | Geographic setting: captures the country in which data were collected | 1 = Data collected in the U.S.; 0 = Data collected elsewhere |
| 10 | Sample domain: captures whether unreviewed products were included in the dataset | 1 = Unreviewed products included; 0 = Unreviewed products not included |
| 11 | Span of data collection: captures whether data was collected at one point in time or across several time periods | 1 = Cross-sectional data; 0 = Time series data |
| 12 | Estimation method: captures whether the estimation method used was ordinary least squares (OLS), multistage and generalized least squares, or maximum likelihood | 1 = Ordinary least squares; 0 = Other estimation method |
| 13 | Functional form of model (Log–Log): captures if both the dependent variable and independent variable are modeled as log transformations of the original variables | 1 = Log–Log functional form; 0 = Other functional form |
| 14 | Endogeneity: captures whether endogeneity is modeled | 1 = Endogeneity accounted for; 0 = Endogeneity not accounted for |
| 15 | Heterogeneity: summarizes the impact of unobserved variables relating to the reviewed product | 1 = Fixed effects or random effects specification used to account for unobserved product-related variables; 0 = Unobserved product-related variables not accounted for |

Two members of the research team recorded the descriptive statistics and beta coefficients required to calculate elasticities, and coded the methodological variables. Inter-rater reliability between these coders averaged 96%, with disagreements resolved by discussion (Table 3).
To avoid introducing bias in the treatment of the more subjective theoretical variables relating to products under review and their typical purchase and usage/consumption context(s), three expert judges coded these variables (Bearden, Hardesty, and Rose 2001; Carlson et al. 2009). These judges were assistant professors in marketing specializing in the area of consumer research who had no other involvement with this project and were blind to our expectations. Inter-rater reliability between judges averaged 93%, and discrepancies were resolved through discussion.
Because these coded variables exhibited a high degree of correlation, we conducted a Common Factor Analysis with principal factor extraction to identify factors that best explain the common variance between them (Ford, MacCallum, and Tait 1986). Because our objective was to use the factors as variables in our model, we employed varimax rotation to yield a solution that produces orthogonal factors.⁶ One item assessing the nature of the product and another item assessing perceived performance risk were eliminated because they exhibited significant cross-loadings on two or more factors (Briggs and Cheek 1986; Hair et al. 2010).

⁶ We also explored oblique factor solutions resulting in correlated factors, but these solutions did not appreciably improve interpretation beyond the orthogonal solution. To minimize multicollinearity in the model, we retained the orthogonal solution.
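For readers who want to reproduce this step, below is a minimal sketch, not the authors' code: the file and column names are hypothetical, and scikit-learn's maximum-likelihood factor extraction with varimax rotation stands in for the principal-factor extraction reported in the paper (the factor_analyzer package offers principal-axis extraction if an exact match is needed).

```python
import pandas as pd
from sklearn.decomposition import FactorAnalysis

# Hypothetical input: mean expert-judge ratings, one row per product,
# one column per coded item (involvement, price level, luxury/necessity,
# hedonic/utilitarian, public/private, durability, routineness).
items = pd.read_csv("judge_ratings.csv")

# Extract three factors and apply a varimax rotation so the resulting
# factors are orthogonal, limiting multicollinearity in the later model.
fa = FactorAnalysis(n_components=3, rotation="varimax", random_state=0)
scores = fa.fit_transform(items)  # factor scores, one row per product

# Inspect loadings; items that cross-load on two or more factors (as two
# items did in the paper) are candidates for elimination.
loadings = pd.DataFrame(fa.components_.T, index=items.columns)
print(loadings.round(2))
```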
Our resulting three factors are: product involvement, product benefits, and frequency of purchase. The first factor (product involvement) is comprised of the following two items: (1) This product is "1" definitely not involving . . . "7" definitely involving; and (2) This product is "1" definitely a low-priced product . . . "7" definitely a high-priced product. Factor two (product benefits) is comprised of the following three items: (1) This product is "1" definitely a luxury . . . "7" definitely a necessity; (2) This product is "1" definitely a hedonic product . . . "7" definitely a utilitarian product; and (3) This product is "1" definitely a publicly consumed product . . . "7" definitely a privately consumed product.⁷ The third factor (frequency of purchase) is comprised of the following two items: (1) This product is "1" definitely a nondurable product . . . "7" definitely a durable product; and (2) This product is "1" definitely routine . . . "7" definitely nonroutine.

⁷ While product utilitarianism and necessity seem conceptually similar, we acknowledge that the relation of these items to private consumption is less straightforward—even if these three items loaded strongly together in our factor analysis. It is possible that the resulting product benefits factor is an idiosyncrasy of (1) our database—one-third of which was comprised of book sales—and (2) our dataset—which included theoretical variables derived from the ratings of scholarly experts, who evaluated books as privately-consumed utilitarian necessities.
We had our expert judges evaluate the items used to create the above factors because they could be coded from the studies comprising our database, and are theoretically justifiable based on the extant eWOM literature exploring possible explanations for the differences in the effects of online product reviews on retailer performance.
Our selection of methodological variables was guided by recent meta-analyses in the marketing literature investigating elasticities of personal selling (Albers, Mantrala, and Sridhar 2010), advertising (Sethuraman, Tellis, and Briesch 2011), and shelf-space (Eisend 2013) and their determinants. We offer informal expectations regarding how theoretical variables will affect sales elasticities but make no predictions for methodological variables, which are included based on precedence (Eisend 2013).

Table 3
Correlation matrix for all variables in HLM model. Pearson correlation coefficients, N = 412.

| Variable | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 |
| 1. Prod. involvement | 1 | | | | | | | | | | | | | | | |
| 2. Product benefits | −0.01 | 1 | | | | | | | | | | | | | | |
| 3. Frequency of purchase | 0.00 | 0.01 | 1 | | | | | | | | | | | | | |
| 4. Third-party sources | −0.62 | −0.34 | −0.22 | 1 | | | | | | | | | | | | |
| 5. Critics' reviews | −0.39 | −0.03 | 0.11** | 0.40 | 1 | | | | | | | | | | | |
| 6. Manuscript status (published) | −0.09* | −0.37 | 0.38 | 0.27 | 0.12** | 1 | | | | | | | | | | |
| 7. Journal quality (elite) | −0.06 | 0.08* | 0.24 | 0.16 | 0.02 | 0.37 | 1 | | | | | | | | | |
| 8. Geographic setting (U.S.) | 0.20 | −0.24 | 0.47 | 0.03 | 0.06 | 0.53 | 0.54 | 1 | | | | | | | | |
| 9. Sample domain (unreviewed products included) | 0.21 | 0.24 | −0.13 | −0.16 | −0.22 | 0.14 | 0.35 | 0.21 | 1 | | | | | | | |
| 10. Span of data collection (cross-sectional) | −0.04 | −0.06 | 0.06 | 0.13 | 0.04 | 0.15 | 0.09* | 0.03 | −0.18 | 1 | | | | | | |
| 11. Heterogeneity | 0.18 | 0.03 | 0.02 | −0.20 | −0.18 | −0.13 | −0.02 | 0.16 | 0.19 | −0.67 | 1 | | | | | |
| 12. FF (loglog) | 0.03 | −0.10** | 0.33 | −0.03 | 0.19 | 0.14 | 0.08* | 0.13 | −0.19 | 0.15 | −0.11** | 1 | | | | |
| 13. Endogeneity | 0.29 | 0.12** | 0.39 | −0.08 | −0.06 | 0.39 | 0.35 | 0.58 | 0.24 | −0.01 | 0.24 | 0.09* | 1 | | | |
| 14. Estimation method (OLS) | 0.30 | 0.19 | −0.13 | −0.24 | 0.03 | −0.16 | 0.09* | 0.14** | −0.09* | −0.05 | 0.08 | 0.16 | −0.06 | 1 | | |
| 15. Review valence | −0.07 | 0.09* | 0.11** | 0.00 | 0.01 | −0.04 | −0.12* | −0.10** | −0.17 | −0.01 | 0.00 | 0.52 | −0.02 | −0.04 | 1 | |
| 16. Sales elasticity | −0.25 | 0.18 | 0.17 | 0.22 | 0.39 | 0.10** | 0.08 | 0.01 | −0.07 | 0.00 | 0.01 | 0.11** | 0.06 | −0.05 | 0.19 | 1 |

Bold: p < 0.01. * p < 0.10. ** p < 0.05.
Description of Theoretical Variables Influencing Sales Elasticity

Valence and volume

Online product reviews vary considerably in terms of volume (i.e., the number of online comments or ratings) and valence (i.e., the preference carried in the WOM information). The volume of online product reviews is thought to increase consumer awareness about products and generate greater sales (Anderson and Salisbury 2003; Bowman and Narayandas 2001; Chen, Wu, and Yoon 2004; Godes and Mayzlin 2004; Liu 2006; Van den Bulte and Lilien 2001). Duan, Gu, and Whinston (2008) posit that a higher volume of online product reviews affects consumers' purchase decisions by influencing both their awareness of the product and perceptions of its quality. Moreover, Khare, Labrecque, and Asare (2011) assert that volume is an extrinsic, high-scope cue that increases the diagnosticity and persuasiveness of online WOM, because an opinion expressed by more people conveys the correctness of the position (Salganik, Dodds, and Watts 2006; Salganik and Watts 2008).
Recent empirical work supports these arguments, and suggests that the volume of online product reviews is positively associated with movie sales (Duan, Gu, and Whinston 2008), box office revenues (Liu 2006), and automobile sales (Chen, Wu, and Yoon 2004).
With respect to valence, positive reviews typically enhance consumers' expected quality of and attitudes toward a product, while negative reviews may involve product denigration, rumor, or complaints, and usually have an unfavorable impact on product attitudes (Liu 2006). Interestingly, a substantial amount of research suggests that negative information is stronger, more influential, more predictive, and more difficult to resist than positive information (Baumeister et al. 2001; Fiske 1980; Maheswaran and Meyers-Levy 1990; Skowronski and Carlston 1989; Taylor 1991). This may derive from the loss aversion principle in prospect theory (Kahneman and Tversky 1979), which posits that a potential loss has a greater impact on consumer perceptions and decision-making than an otherwise equivalent gain because the value function is steeper for losses than gains.⁸

Recent research demonstrates this negativity bias may also apply to online product reviews (c.f. Lee, Park, and Han 2008); however, it is important to note that the papers in our meta-analytic database do not allow us to explore this relationship. Since the papers we meta-analyze use sales as their dependent variable—and do not capture missed sales opportunities—we are not able to assess the degree to which negative (or relatively lower) online product reviews deter purchase. We are able to examine the relative impact that review valence versus volume has on sales elasticities for items evaluated in online product reviews by coding and analyzing whether each elasticity is based on review volume or valence.

⁸ Recent research conducted by TARP Worldwide Inc. with 8,000 customers in multiple industries indicates that only a portion of dissatisfied consumers complain to employees (45%) or to management or company headquarters (1–5%) (Zeithaml, Bitner, and Gremler 2013). While we acknowledge that this reluctance to post negative reviews could affect our results, we could only code what is actually reported by dissatisfied customers in the original articles comprising our database.
Critics' reviews

We also recognize that the source providing the information—both the type of person and the website—could impact sales elasticities. In fact, the notion that source characteristics can enhance or impair the impact of a message is an enduring theme of persuasion research in social psychology (Hovland, Janis, and Kelley 1953; Hovland and Weiss 1951; McGuire 1969, 1985). The source credibility model (Hovland, Janis, and Kelley 1953) suggests that sources exhibiting greater expertise and trustworthiness should be perceived as more credible and confer relatively greater persuasion. Applying this theory in the current context, it is likely that online product reviews provided by sources that possess greater expertise will be associated with greater product sales. Thus, we code whether online product reviews are provided by typical consumers or present feedback from experts or critics.
Third-party sources

Just as expert critics could confer greater persuasion than ordinary consumers, so might websites that are perceived as more trustworthy. Thus, we code whether online product reviews were posted on an independent website (e.g., ePinions.com or MySimon.com) or a seller website (e.g., Amazon.com), and include this as an independent variable in our model. Returning to the source credibility model, it is possible that third-party sources will possess greater trustworthiness because consumers perceive these websites as communicating information without bias (Kelman 1961). In fact, source expertise and trustworthiness have both been found to positively impact consumers' attitudes toward the brand, behavioral intentions, and behaviors (Gilly et al. 1998; Harmon and Coney 1982).
Product involvement

Product involvement is commonly defined as a consumer's enduring perceptions of a product category's importance based on that consumer's inherent needs, values, and interests (c.f. Bloch and Richins 1983; Zaichkowsky 1985). Product involvement is thought to be higher when there is a great deal of money involved in a purchase, because higher price levels involve a greater "pain of paying" and concern about making the best choice (Prelec and Loewenstein 1998). Consumers with higher product involvement are more likely to perceive attribute differences, to place higher importance on the product, and to possess greater commitment in their brand choices (Howard and Sheth 1969). Further, higher product involvement motivates consumers to search for more information and spend a greater amount of time making an optimal decision (Clarke and Belk 1978). Hence, we expect that online product reviews for more highly involving products will induce higher sales elasticities in comparison to those for less involving products.
Product benefits

Products reviewed online also vary considerably in terms of the benefits they provide and how/when they will be used or consumed. One way to describe a product's benefits is by whether the product is possessed by virtually everyone (i.e., a necessity) or is associated with exclusivity—meaning that not all consumers are able to purchase the product (i.e., a luxury; Bearden and Etzel 1982). Product benefits can also be characterized in terms of whether they provide enjoyable experiences, such as fun, pleasure, and excitement for the consumer (i.e., a hedonic product; Dhar and Wertenbroch 2000), or are primarily instrumental and functional in nature (i.e., a utilitarian product; Hirschman and Holbrook 1982). Finally, the benefits of a publicly consumed product are visible to others, while those of a privately consumed product are not (Bearden and Etzel 1982).

Prior research suggests that consumers are more purposive in decision-making about public consumption due to social influence. More specifically, a publicly consumed product allows consumers to communicate their self-image (Belk 1988) and to impress others by maintaining the illusion that they are "keeping up with the Joneses" (Lee and Shrum 2012; Ordabayeva and Chandon 2011). Given this, one might expect online product reviews—which provide a consumer with information about others' product perceptions—to induce greater sales elasticities for publicly consumed, as compared to privately consumed, products. However, this is not likely to be an unqualified effect. In fact, luxury items are often chosen for their high quality, uniqueness, and emotional appeal (Nueno and Quelch 1998). Similarly subjective are hedonic products—objects of desire that are reflective of a particular consumer's personal preferences. Interestingly, the different aspects of product benefits introduce potentially opposing effects that make predictions about the impact of online product reviews difficult. For this reason, we offer no formal expectations about the influence of online product reviews on sales elasticities for privately consumed utilitarian necessities versus publicly consumed hedonic luxuries.
Frequency of purchase

Product durability could also affect sales elasticity. A durable product is one that does not quickly wear out, and is consumed over time rather than in one use (Grewal and Marmorstein 1994). Thus, with durable products there are typically long periods between successive purchases. With such nonroutine purchases, the purchase cycle is lengthy and infrequent. In contrast, nondurable products are purchased more frequently, are immediately used by the consumer, and typically have a life span of less than three years. That is, nondurables are likely to involve routine purchases of products with short purchase cycles, occur on a more regular basis, and involve repeated decisions, in which the consumer simplifies the task by storing relevant information and establishing a routine in the decision process (Howard and Sheth 1969). Consumers search more for information about the objective quality of (nonroutine) durables than for (routine) nondurables, because durable products generally involve a greater expense than nondurable goods and require more of a commitment from the consumer (Moorthy, Ratchford, and Talukdar 1997; Moorthy and Zhao 2000). Given this, we expect online product reviews—which provide information about product quality—for durable products to induce higher sales elasticities than for nondurable products.
Methodological Variables Influencing Sales Elasticity

Manuscript status

We coded whether each paper in our database is published or unpublished. Manuscript status was included in our coding scheme because significant findings are more likely to be submitted to, and published in, peer-reviewed journals than nonsignificant findings, leading to publication bias (i.e., reduced interstudy variability and upwardly biased mean estimates) and potentially influencing sales elasticities (Dickersin 2005; Ferguson and Brannick 2012).
Journal quality

We coded whether each study appeared in an A-level publication (i.e., Journal of Consumer Research, Journal of Marketing, and Journal of Marketing Research) or some other outlet in an attempt to capture variance in the rigor of each paper's research as well as the review process it withstood (c.f. Johnson et al. 1981).
Geographic setting

We also coded whether the data in each study were collected in the United States or elsewhere. We included this variable because other meta-analyses examining elasticities have done so, and because the impact of online product reviews on sales could vary considerably according to consumers' cultural orientations and self-construals. That is, whether consumers are Americans—who are relatively more individualistic and independent—or from a more collectivistic, interdependent culture could affect the influence that online product reviews exert on sales in these countries (Markus and Kitayama 1991; Oyserman, Coon, and Kemmelmeier 2002).
Sample domain

In our coding scheme, we specified whether the dataset in each paper included unreviewed products or not.
Span of data collection

Because longitudinal data allow researchers to examine causality and control for confounding effects better than contemporaneous data do, they are generally regarded as more informative in inferring the true association between constructs (Rindfleisch et al. 2008). Given that this could affect sales elasticity estimates, we coded whether data were collected using cross-sectional comparisons (i.e., measurements for multiple products' sales at one point in time) or longitudinally (i.e., with repeated measurements for the same product's sales over time).
Estimation method

The studies comprising our database differ in the models applied to estimate sales. Thus, we capture differences in estimation method by differentiating between simpler methods like ordinary least squares (OLS) regression and more complex methods such as multistage least squares regression (Eisend 2013).
Functional form of model

Researchers also employed different functional forms, which can affect elasticities (Assmus, Farley, and Lehmann 1984). Given this, we coded whether each study employed a Log–Log, Log–Level, Level–Level, or other functional form.
Endogeneity

Some studies accounted for endogeneity, given that model misspecification can lead to biased coefficient estimates and inferences (Louviere et al. 2005). While some meta-analysts have found that the omission of endogeneity induces a positive bias on elasticity estimates (Manchanda, Rossi, and Chintagunta 2004), others suggest this leads to an underestimation of elasticities (Sethuraman, Tellis, and Briesch 2011). We capture whether each study accounted for endogeneity in its model.
Heterogeneity

Because heterogeneity can increase or decrease elasticities (Hutchinson, Kamakura, and Lynch 2000), we also coded for this variable. Among the studies comprising our dataset, heterogeneity was typically modeled by using a fixed effects or random effects specification to account for unobserved product-related variables.
Analysis

Meta-analytic Model

We model the sales elasticities associated with online product reviews as a linear function of the influencing variables discussed in the previous section. Because we are recording multiple measures of sales elasticity from each paper in our meta-analytic database, our estimation approach must be able to account for the within-study error correlations resulting from the independent variables not capturing study-specific characteristics completely and thus violating assumptions of OLS (Bijmolt and Pieters 2001). Following Bijmolt and Pieters (2001), and consistent with other meta-analyses that measure elasticities as the effect size metric (Bijmolt, Van Heerde, and Pieters 2005; Eisend 2013; Sethuraman, Tellis, and Briesch 2011), we employ a model of the following form:

η_sj = X_sj β + u_s + e_sj    (1)

where u_s is an unobservable study-specific component of the error and e_sj is the measurement-level error. Both are assumed to be normally distributed with mean zero and variances σ²_s and σ²_e, respectively. Hierarchical Linear Modeling procedures in statistical packages such as SAS are used to create the nested error structure in a J × J block diagonal matrix containing both study-specific and measurement error variances. We estimate Model (1) using the Proc Mixed procedure in SAS (Albers, Mantrala, and Sridhar 2010), as suggested by Bijmolt and Pieters (2001).
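As an illustration of Eq. (1), the same nested-error structure can be fit outside SAS. Below is a minimal sketch in Python's statsmodels, not the authors' code: the file and variable names are hypothetical, assuming a long-format dataset with one row per elasticity and a paper identifier.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical input: one row per sales elasticity, with the coded
# theoretical/methodological variables and the paper it came from.
df = pd.read_csv("elasticities.csv")

# A random intercept per paper plays the role of the study-specific
# error u_s in Eq. (1); the residual is the measurement-level error e_sj.
model = smf.mixedlm(
    "elasticity ~ valence + critics + third_party + involvement"
    " + benefits + frequency + published + elite + us_sample"
    " + unreviewed + cross_sectional + ols + loglog"
    " + endogeneity + heterogeneity",
    data=df,
    groups=df["paper_id"],
)
result = model.fit(reml=True)
print(result.summary())
```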
Multicollinearity Robustness Checks

As other meta-analysts using this approach note, one of the major issues affecting the estimation of hierarchical linear models is that of collinearity among proposed influencing variables in the model. Previously, we discussed the creation of three factors related to the original theoretical variables. For the remaining variables we first examined their pair-wise correlations. Subsequent examination of the condition index and VIF indicated that including both log–log and log–level indicators of functional form in the model could be problematic (VIF > 15; correlation = 0.90). To further investigate the dummy variables in the model, we ran chi-square tests of independence on each pairing of the discrete variables. Any pairing that exhibited a significant association was examined further by systematically removing one variable from each of the identified pairs and re-analyzing our model, noting how the omission of one methodological variable affected the value and significance of the other. Through these diagnostic efforts we chose to remove the discrete dummy variables capturing the log–level functional form of the model and the competition dummy variable.
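Below is a minimal sketch of these collinearity diagnostics, with hypothetical column names matching the earlier mixed-model sketch; it illustrates the procedure rather than reproducing the authors' code.

```python
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("elasticities.csv")  # hypothetical, as in the prior sketch

# Variance inflation factors for the indicator variables; a VIF above 15
# for the log-log/log-level pair flags the redundancy discussed above.
X = sm.add_constant(df[["loglog", "loglev", "published", "elite", "ols"]])
vifs = {col: variance_inflation_factor(X.values, i)
        for i, col in enumerate(X.columns) if col != "const"}
print(vifs)

# Chi-square test of independence for one pairing of discrete variables.
chi2, p, dof, expected = chi2_contingency(pd.crosstab(df["loglog"], df["loglev"]))
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```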
The result of dropping the log–level functional form variable is that both the log–level and level–level model specifications were subsumed into the "0" condition of the log–log functional form indicator variable.
We initially considered including a dummy variable to identify each observation that captured sales data and review information from Amazon.com (Amazon sales and reviews). Since Amazon is the largest online retailer in the world, and because Amazon.com sales rank data is used in more than half of the observations in our data, it was worth investigating whether or not there is an Amazon-specific effect on sales elasticities. Ultimately we were unable to include Amazon sales and reviews in the final model due to correlation issues with the Inclusion of third-party reviews variable. Upon further investigation, it was apparent that the Amazon dummy was just a subset of the "0" condition of Inclusion of third-party reviews (i.e., observations for sellers with local reviews). When both Amazon sales and reviews and Inclusion of third-party reviews are included, the Amazon indicator is not significant. The third-party variable was relatively stable and significant regardless of the presence of Amazon sales and reviews. If Inclusion of third-party reviews was excluded, Amazon sales and reviews barely achieved significance; estimates of all other variables remained relatively constant in value and there were no changes in significance. The variables included in the final model specification are shown in Table 4.
Results

Overall Magnitude and Frequency Distribution of Sales Elasticities

The frequency distribution of observed sales elasticities appears in Fig. 2. The observed mean sales elasticity calculated using review valence (Es = .69) is higher than the observed mean sales elasticity using review volume (Es = .35).

[Fig. 2. Frequency distribution of sales elasticities (frequency counts of elasticities by value, shown separately for valence and volume).]
Compared to other elasticities reported in recent marketing-related meta-analyses, these sales elasticities are higher in magnitude than shelf space elasticity (.169; Eisend 2013), personal selling elasticity (.34; Albers, Mantrala, and Sridhar 2010), and both long-term (.24) and short-term sales elasticities (.12; Sethuraman, Tellis, and Briesch 2011), but lower in magnitude than price elasticity (−2.62; Bijmolt, Van Heerde, and Pieters 2005).
Effects of Influencing Variables

Table 4 presents the results of our meta-analytic model, including parameter estimates for the influencing variables based on the final, useable sample of 412 sales elasticity estimates. Coefficients for the following variables are statistically significant (at the p < .05 level): review valence, critics' reviews, third-party sources, and product involvement.

Sales elasticities calculated based on review valence are significantly higher than those calculated based on review volume (β = 0.81, S.E. = 0.16, t = 4.94, p < 0.0001). Online product reviews appearing on a third-party website have significantly higher sales elasticities than those appearing on seller websites (β = 0.95, S.E. = .25, t = 3.77, p = 0.00). Similarly, sales elasticities for products evaluated by experts in online product reviews are significantly higher than for those reviewed by other consumers (β = 1.00, S.E. = .22, t = 4.56, p < 0.0001). When online product reviews pertain to items that are characterized by greater product involvement, the impact on sales elasticity is significantly greater than when lower-involvement products are evaluated (β = 0.53, S.E. = .13, t = 3.98, p < 0.0001). Neither product benefits sought (β = .17, S.E. = .12, t = 1.40, p = .16) nor frequency of purchase (β = .04, S.E. = .13, t = 0.27, p = .79) are significant variables in our model.
Among the methodological variables, both heterogeneity and functional form (log–log) are significant and positively related to sales elasticities. Sales elasticities for studies that specify a log–log functional form are significantly greater than for those observations that specify other functional forms such as log–level or level–level (β = 0.69, S.E. = .21, t = 3.34, p = 0.00).

Table 4
HLM estimates of variance in sales elasticities associated with online product reviews.

| Variable | β | S.E. | t Value | Pr > \|t\| |
| Intercept | −1.52 | 0.51 | −2.97 | 0.01 |
| Theoretical variables | | | | |
| Review valence | 0.81 | 0.16 | 4.94 | <.0001 |
| Critics' reviews | 1.00 | 0.22 | 4.56 | <.0001 |
| Third-party reviews | 0.95 | 0.25 | 3.77 | 0.00 |
| Product involvement | 0.53 | 0.13 | 3.98 | <.0001 |
| Product benefits | 0.17 | 0.12 | 1.40 | 0.16 |
| Frequency of purchase | 0.04 | 0.13 | 0.27 | 0.79 |
| Methodological variables | | | | |
| Manuscript status (published) | 0.76 | 0.43 | 1.77 | 0.08 |
| Journal quality (elite) | −0.15 | 0.25 | −0.61 | 0.54 |
| Geographic setting (U.S.) | 0.14 | 0.45 | 0.32 | 0.75 |
| Sample domain (includes unreviewed products) | 0.00 | 0.24 | 0.02 | 0.99 |
| Span of data collection (cross-sectional) | 0.15 | 0.24 | 0.63 | 0.53 |
| Estimation method (OLS) | −0.34 | 0.24 | −1.42 | 0.16 |
| Functional form (LogLog) | 0.69 | 0.21 | 3.34 | 0.00 |
| Endogeneity | −0.37 | 0.25 | −1.52 | 0.13 |
| Heterogeneity | 0.60 | 0.26 | 2.31 | 0.02 |
| Model fit and N: −2 Log Likelihood = 1327.5; N = 412 | | | | |
It is possible that this result is related to the fact that elasticities for the log–log functional form can be taken directly from the reported coefficients in the models, and do not have to be calculated using the means—as those elasticities derived from log–level and level–level functional forms do. Studies with models that account for heterogeneity have significantly higher sales elasticities than studies that fail to account for heterogeneity in their models (β = 0.60, S.E. = .26, t = 2.31, p = 0.02).
It is also important to note some of the variables in our model that did not have a significant impact on sales elasticities. Specifically, the influence that online product reviews exert on sales elasticities is not significantly different for (1) U.S. versus other samples, (2) cross-sectional versus longitudinal sales data, (3) articles published in elite versus other journals, and (4) research applying relatively simple (e.g., OLS) versus sophisticated estimation methods. This suggests that the conclusions we draw about online product reviews are relatively generalizable across a variety of contexts.
Discussion
Like it or not, retailers now compete in an Internet-based environment, where consumers "harangue, lecture, pontificate, and otherwise broadcast personal opinions, experiences, problems, solutions, and other adventures" (Notess 2000). This means the received wisdom about word-of-mouth communications (see review in de Matos and Rossi 2008) is evolving, especially given the preponderance of online product reviews that pervade today's marketplace (Duan, Gu, and Whinston 2008). We offer evidence that online product reviews have a significant impact on sales elasticity. Our research also provides interesting insights about important variables—relating to specific features of the reviews, the websites on which reviews appear, and the nature of the products being reviewed—which augment or diminish the influence that online product reviews exert on retailer performance.
Theoretical and Practical Implications

The most impactful influencing variable in our meta-analytic model is critics' reviews, followed by third-party sources and review valence. Online product reviews have a significantly greater influence on sales elasticities when they are delivered by a critic, appear on a non-seller website, and include valence information in the evaluation.

Interestingly, observations based on review valence have significantly higher sales elasticities than those based on review volume. Our confirmation that both review valence and review volume exert an impact on sales elasticities is not surprising. Interestingly, though, review volume has frequently been found to have a greater influence on performance—especially in certain product domains (c.f. Duan, Gu, and Whinston 2008; Liu 2006). Our finding that review valence is relatively more impactful than review volume represents a departure from this assumption, but finds limited support in more recent research by Chintagunta, Gopinath, and Venkataraman (2010), whose work features a rigorous treatment of sequential rollout and aggregation in box office movie ticket sales.
It is likely that the relationship between online product reviews and retail sales is more complex than the direct associations we analyzed here. In particular, experimental researchers have demonstrated that review valence and volume interact to impact consumer perceptions (cf. Khare, Labrecque, and Asare 2011). We attempted to apply Bushman's (1994) vote-counting procedure to assess this interaction meta-analytically; however,
we were only able to locate six studies in our database that examine the review valence by volume interaction. Unfortunately, instead of providing parameter estimates across the full range of each level for both variables, five of these studies provided parameter estimates for subsets of different levels of valence and volume. A notable exception is provided by Chintagunta, Gopinath, and Venkataraman (2010), who demonstrate that the review valence × volume interaction exerts a significant impact on box office movie sales. Even if all six studies had treated both valence and volume as continuous variables in their analysis of interactions and reported results in a statistically comparable manner, it is not meaningful to perform formal statistical tests on such a small number of measures. Understanding how this and other interactions help or hinder the sales elasticities associated with online product reviews represents a particularly important avenue to pursue in future research.
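For readers unfamiliar with vote-counting, the following minimal Python sketch illustrates the logic on an invented vote pattern (it does not encode the six studies in our database): classify each study's interaction estimate as significantly positive, significantly negative, or nonsignificant, then ask how surprising the count of positive votes would be under the null hypothesis of no effect.

from math import comb

def binomial_tail(k, n, p):
    # P(X >= k) when X ~ Binomial(n, p)
    return sum(comb(n, i) * p**i * (1.0 - p)**(n - i) for i in range(k, n + 1))

votes = [+1, +1, 0, +1, 0, +1]      # toy pattern: +1 significant positive,
n_studies = len(votes)              # -1 significant negative, 0 nonsignificant
k_positive = sum(v == +1 for v in votes)

alpha = 0.05
p_null = alpha / 2.0                # chance of a significant positive vote under H0
p_value = binomial_tail(k_positive, n_studies, p_null)
print(f"{k_positive}/{n_studies} positive votes, vote-count p = {p_value:.5f}")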
While our meta-analysis does not clearly elucidate the impact of the review valence × volume interaction on sales elasticity, we are able to provide more conclusive take-aways with respect to other important aspects of online product reviews and the products they evaluate.
Consonant with our expectations and other research exploring the impact of online product reviews (Chen and Xie 2008; Senecal and Nantel 2004), sales elasticities are higher for online product reviews that are posted on third-party websites as compared to those appearing on seller websites. We also find higher sales elasticities for products evaluated by experts (vs. other consumers) in online product reviews. For example, in our meta-analytic dataset the overall sales elasticities from the Duan, Gu, and Whinston (2008) data—which include the opinions of movie critics—are higher than average (Es = 1.40), as are the sales elasticities we calculated for review volume from the Liu (2006) data (Es = .461). Our results also support more general research on source effects, which demonstrates greater persuasive impact for information delivered by sources that are perceived as possessing higher credibility and expertise (Hovland, Janis, and Kelley 1953; Hovland and Weiss 1951; Kelman 1961; McGuire 1969, 1985).
Our results also suggest that online product reviews have a significantly greater impact on the sales elasticities of high-involvement products. To illustrate, we calculated higher sales elasticities for data from Gu, Park, and Konana (2012)—which examines the impact of online product reviews for digital cameras—for both review valence (Es = 2.88) and review volume (Es = 1.49). This finding is consistent with involvement theory, which indicates that consumers engage in extensive (limited) online search for products that are more (less) involving, which they associate with higher (lower) perceived risk (Mathwick and Rigdon 2004).
A particularly noteworthy research-related implication pertains to the general robustness of the relationship between online product reviews and sales elasticity. While the focus of our meta-analysis was on the characteristics of the reviews and the products being reviewed, the results we obtain with respect to methodological variables characterizing the study, sample, data, and model of each paper provide interesting insights for future research exploring the impact of online product reviews.
We test for—but find no significant differences in—sales elasticities for a wide variety of commonly raised method-related concerns, including whether: (1) the manuscript appears in an elite journal or another outlet; (2) the data was collected in the U.S. or elsewhere; (3) the data was cross-sectional or longitudinal; and (4) the sample included unreviewed products. The only significant methodological variables in our model were whether: (1) the model employed a log–log functional form; (2) the manuscript was published; and (3) the model accounted for heterogeneity. Thus, our examination of 412 sales elasticities from 26 studies suggests that sales elasticities are statistically invariant to a host of design-related variables. (This finding mirrors results reported by Krasnikov and Jayachandran (2008) in a meta-analysis examining the impact of the marketing function on firm performance.)
While we support the employment of research designs that allow researchers to arrive at valid and reliable inferences, our meta-analysis suggests that method- and measurement-induced biases should not be used to automatically reject otherwise rigorously conducted research. Thus far, research in this area has relied on a rich mix of methods to extend our understanding of online product reviews, including experiments, simulations, and econometric analyses of sales data. It is our hope that exploration employing multiple methods will continue, and that this meta-analysis will encourage researchers to examine the impact of online product reviews in diverse settings and industries, with a variety of product categories and samples.
Limitations and Future Research
While this manuscript builds and expands upon the eWOM knowledge base, some limitations should be noted. Any quantitative synthesis is constrained by the nature and scope of the original studies on which it is based, and this shortcoming should be borne in mind when interpreting the findings presented here.
First, not all published studies on online product reviews reported enough data to calculate a usable sales elasticity; therefore, some empirical work exploring the relationship between online product reviews and sales elasticity could not be incorporated into this analysis.
Second, the theoretical variables in our model were developed through factor analysis of ratings provided by expert judges who are marketing professors—a well-established approach in both meta-analysis and scale-development research. However, the composition of our product benefits variable might be attributed to the nature of our expert judges and their ratings of one particular product (i.e., books) that accounts for one-third of the observations in our dataset. Their evaluation of books as privately consumed utilitarian necessities might account for the statistical grouping of two related items (product utilitarianism and necessity) with a less related item (private consumption).
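As an illustration of this grouping step, the sketch below runs an exploratory factor analysis in Python (scikit-learn) on a random placeholder ratings matrix; the judge items named in the comments are hypothetical, and the resulting loadings say nothing about our actual data or measures.

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
# Rows are rated products; columns are judge-rated items, e.g., utilitarianism,
# necessity, private consumption, hedonism, luxury (hypothetical labels).
ratings = rng.normal(size=(60, 5))

fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(ratings)
print(np.round(fa.components_, 2))  # loadings of each item on the two factors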
Finally, our analyses were constrained to examining variables that could be coded from the extant literature. While the theoretical variables studied here provide scholars and practitioners with useful information, the inability of these codeable variables to fully account for the variance in sales elasticity indicates that additional measurement and/or contextual factors need to be modeled and reported in future studies of online product reviews. For instance, it would be interesting to explore the effectiveness of online product
reviews for products that vary in terms of credence (Bloom and Pailin 1995; Nelson 1980; Wilde 1980) and performance risk (Schiffman and Kanuk 2007). (We attempted to include these variables in our meta-analysis; however, items assessing credence and performance risk were found to be multivocal in the factor analysis performed to group our theoretical variables and were thus eliminated.)
Conclusions
The results of our meta-analysis reinforce the old retailing adage that "the customer is always right," and yield practical implications for retailers. First, our findings highlight the importance of providing a quality product that delivers on its brand promise and meets or exceeds consumers' expectations (de Chernatony 2001; Priluck 2003). Additionally, retailers must establish mechanisms for detecting service and product failures, and have established procedures and well-trained employees in place to redress such situations (Bougie, Pieters, and Zeelenberg 2003; Richins 1983; Spreng, Harrell, and Mackoy 1995).
This is more crucial now than ever before, because unresolved complaints are likely to motivate dissatisfied consumers to vent by posting negative (i.e., relatively lower) online product reviews that may deter legions of potential consumers from purchasing the offending brand in the future (Goodwin and Ross 1992; Grant 2013; Jones and Sasser 1995).
Conversely, retailers should encourage consumers who have a favorable product experience to recommend the product to others on the seller's website and on other third-party websites such as Epinions.com or Yelp.com (Dellarocas 2003). That is, since potential consumers are apt to interpret a greater number of positive online product reviews as supporting an accurate assessment (Salganik, Dodds, and Watts 2006; Salganik and Watts 2008), retailers should facilitate the writing of reviews by satisfied consumers. However, it is important that retailers encourage positive eWOM without appearing to engage in unethical or deceptive practices.
While extant research on the "gaming" of review sites is primarily anecdotal, there is some evidence of retailers posting fake positive reviews to boost their ratings and unfavorable reviews denigrating competitors (Fisman 2012). Such transgressions breed mistrust among consumers and are likely to prompt a negative backlash in the marketplace (Moyer 2010).
References

Note: Articles comprising our meta-analytic dataset are marked with an asterisk (*).

Albers, Sönke, Murali K. Mantrala and Shrihari Sridhar (2010), "Personal Selling Elasticities: A Meta-analysis," Journal of Marketing Research, 47 (October), 840–53.
*Amblee, Naveen and Tung Bui (2011), "Harnessing the Influence of Social Proof in Online Shopping: The Effect of Electronic Word of Mouth on Sales of Digital Microproducts," International Journal of Electronic Commerce, 16 (Winter), 91–113.
Anderson, Eugene and Linda Salisbury (2003), "The Formation of Market-level Expectations and its Covariates," Journal of Consumer Research, 30 (June), 115–24.
Andrews, Rick L. and George R. Franke (1991), "The Determinants of Cigarette Consumption: A Meta-analysis," Journal of Public Policy & Marketing, 10 (Spring), 81–100.
*Archak, Nikolay, Anindya Ghose and Panagiotis G. Ipeirotis (2011), "Deriving the Pricing Power of Product Features by Mining Consumer Reviews," Management Science, 57 (August), 1485–509.
Assmus, Gert, John U. Farley and Donald R. Lehmann (1984), "How Advertising Affects Sales: Meta-analysis of Econometric Results," Journal of Marketing Research, 21 (February), 65–74.
Bakos, Yannis and Chrysanthos Dellarocas (2011), "Cooperation without Enforcement? A Comparative Analysis of Litigation and Online Reputation as Quality Assurance Mechanisms," Management Science, 57 (November), 1944–62.
Basuroy, Suman, Subimal Chatterjee and S. Abraham Ravid (2003), "How Critical are Critical Reviews? The Box Office Effects of Film Critics, Star Power, and Budgets," Journal of Marketing, 67 (October), 103–17.
Baumeister, Roy F., Ellen Bratslavsky, Catrin Finkenauer and Kathleen D. Vohs (2001), "Bad is Stronger than Good," Review of General Psychology, 5 (December), 323–70.
Bearden, William O. and Michael J. Etzel (1982), "Reference Group Influence on Product and Brand Purchase Decisions," Journal of Consumer Research, 9 (September), 183–94.
Bearden, William O., David M. Hardesty and Randall L. Rose (2001), "Consumer Self-confidence: Refinements in Conceptualization and Measurement," Journal of Consumer Research, 28 (June), 121–34.
Belk, Russell W. (1988), "Possessions and the Extended Self," Journal of Consumer Research, 15 (September), 139–68.
Bijmolt, Tammo H.A. and Rik G.M. Pieters (2001), "Meta-analysis in Marketing When Studies Contain Multiple Measurements," Marketing Letters, 12 (May), 157–69.
Bijmolt, Tammo H.A., Harald J. Van Heerde and Rik G.M. Pieters (2005), "New Empirical Generalizations on the Determinants of Price Elasticity," Journal of Marketing Research, 42 (May), 141–56.
Bloch, Peter H. and Marsha L. Richins (1983), "A Theoretical Model for the Study of Product Importance Perceptions," Journal of Marketing, 47 (Summer), 69–81.
Bloom, Paul N. and James E. Pailin Jr. (1995), "Using Information Situations to Guide Marketing Strategy," Journal of Consumer Marketing, 12 (2), 19–27.
Bougie, Roger, Rik Pieters and Marcel Zeelenberg (2003), "Angry Customers Don't Come Back, They Get Back: The Experience and Behavioral Implications of Anger and Dissatisfaction in Services," Journal of the Academy of Marketing Science, 31 (Fall), 377–93.
Bowman, Douglas and Das Narayandas (2001), "Managing Customer-initiated Contacts with Manufacturers: The Impact of Share of Category Requirements on Word-of-Mouth Behavior," Journal of Marketing Research, 38 (August), 281–97.
*Brandes, Leif, Ingmar Nolte and Sandra Nolte (2011), Where Do the Joneses Go on Vacation? Social Distance and the Influence of Online Reviews on Product Sales, working paper, University of Zurich.
Briggs, Stephen R. and Jonathan M. Cheek (1986), "The Role of Factor Analysis in the Development and Evaluation of Personality Scales," Journal of Personality, 54 (March), 106–48.
Brown, Jacqueline J. and Peter H. Reingen (1987), "Social Ties and Word-of-Mouth Referral Behavior," Journal of Consumer Research, 14 (December), 350–62.
Bushman, Brad J. (1994), "Vote-counting Procedures in Meta-analysis," in The Handbook of Research Synthesis, Cooper Harris and Hedges Larry V., eds. New York: Russell Sage Foundation, 193–213.
Carlson, Jay P., Leslie H. Vincent, David M. Hardesty and William O. Bearden (2009), "Objective and Subjective Knowledge Relationships: A Quantitative Analysis of Consumer Research Findings," Journal of Consumer Research, 35 (February), 864–76.
Chatterjee, Patrali (2001), "Online Reviews: Do Consumers Use Them?," Advances in Consumer Research, 28 (1), 29–33.
*Chen, Pei-yu, Samita Dhanasobhon and Michael D. Smith (2008), All Reviews are Not Created Equal: The Disaggregate Impact of Reviews and Reviewers at Amazon.com, working paper, Carnegie Mellon University.
Chen, Pei-yu, Shin-yi Wu and Jungsun Yoon (2004), "The Impact of Online Recommendations and Consumer Feedback on Sales," in 24th International Conference on Information Systems (ICIS).
Chen, Yubo, Scott Fay and Qi Wang (2003), "Marketing Implications of Online Consumer Product Reviews," Business Week, 7150, 1–36.
Chen, Yubo, Scott Fay and Qi Wang (2011a), "The Role of Marketing in Social Media: How Online Consumer Reviews Evolve," Journal of Interactive Marketing, 25 (May), 85–94.
*Chen, Yubo, Qi Wang and Jinhong Xie (2011b), "Online Social Interactions: A Natural Experiment on Word of Mouth versus Observational Learning," Journal of Marketing Research, 48 (April), 238–54.
Chen, Yubo and Jinhong Xie (2008), "Online Consumer Review: Word-of-Mouth as a New Element of Marketing Communication Mix," Management Science, 54 (March), 477–91.
*Chevalier, Judith and Dina Mayzlin (2006), "The Effect of Word of Mouth on Sales: Online Book Reviews," Journal of Marketing Research, 43 (August), 345–54.
*Chintagunta, Pradeep K., Shyam Gopinath and Sriram Venkataraman (2010), "The Effects of Online User Reviews on Movie Box-office Performance: Accounting for Sequential Rollout and Aggregation across Local Markets," Marketing Science, 29 (September–October), 944–57.
Cisco Internet Business Solutions Group (2013), Catch and Keep Digital Shoppers, http://www.cisco.com/web/about/ac79/docs/retail/Catch-and-Keep-the-Digital-Shopper_PoV.pdf
Clarke, Keith and Russell W. Belk (1978), "The Effects of Product Involvement and Task Definition on Anticipated Consumer Effort," in Advances in Consumer Research, Vol. 5, Hunt H. Keith, ed. Ann Arbor, MI: Association for Consumer Research.
*Clemons, Eric K., Guodong G. Gao and Lorin M. Hitt (2006), "When Online Reviews Meet Hyperdifferentiation: A Study of the Craft Beer Industry," Journal of Management Information Systems, 23 (Fall), 149–71.
*Cui, Geng, Hon-Kwong Lui and Xiaoning Guo (2012), "The Effect of Online Consumer Reviews on New Product Sales," International Journal of Electronic Commerce, 17 (Fall), 39–57.
de Chernatony, Leslie (2001), "A Model for Strategically Building Brands," The Journal of Brand Management, 9 (September), 32–44.
de Matos, Celso Augusto and Carlos Alberto Vargas Rossi (2008), "Word-of-Mouth Communications in Marketing: A Meta-analytic Review of the Antecedents and Moderators," Journal of the Academy of Marketing Science, 36 (December), 578–96.
Dellarocas, Chrysanthos (2003), "The Digitization of Word of Mouth: Promise and Challenges of Online Feedback Mechanisms," Management Science, 49 (October), 1407–24.
Dellarocas, Chrysanthos, Neveen Awad and Xiaoquan Zhang (2004), "Exploring the Value of Online Reviews to Organizations: Implications for Revenue Forecasting and Planning," in 24th International Conference on Information Systems (ICIS).
*Dewan, Sanjeev and Jui Ramaprasad (2009), "Chicken and Egg? Interplay between Music Blog Buzz and Album Sales," in Pacific Asia Conference on Information Systems.
Dhar, Ravi and Klaus Wertenbroch (2000), "Consumer Choice between Hedonic and Utilitarian Goods," Journal of Marketing Research, 37 (February), 60–71.
Dickersin, Kay (2005), "Publication Bias: Recognizing the Problem, Understanding its Origins and Scope and Preventing Harm," in Publication Bias in Meta-analysis: Prevention, Assessment, and Adjustments, Rothstein Hannah H., Sutton Alexander J. and Borenstein Michael, eds. Hoboken, NJ: John Wiley & Sons.
*Duan, Wenjing, Bin Gu and Andrew B. Whinston (2008), "Do Online Reviews Matter?—An Empirical Investigation of Panel Data," Decision Support Systems, 45 (November), 1007–16.
Eisend, Martin (2013), "Shelf Space Elasticity: A Meta-analysis," Journal of Retailing, http://dx.doi.org/10.1016/j.jretai.2013.03.003
Eliashberg, Jehoshua and Steven M. Shugan (1997), "Film Critics: Influencers or Predictors?," Journal of Marketing, 61 (April), 68–78.
Ferguson, Christopher J. and Michael T. Brannick (2012), "Publication Bias in Psychological Science: Prevalence, Methods for Identifying and Controlling and Implications for the Use of Meta-analysis," Psychological Methods, 17 (March), 120–8.
Fiske, S.T. (1980), "Attention and Weight in Person Perception: The Impact of Negative and Extreme Behavior," Journal of Personality and Social Psychology, 38 (June), 889–906.
Fisman, Ray (2012), Should You Trust Online Reviews?, http://www.slate.com
Ford, J. Kevin, Robert C. MacCallum and Marianne Tait (1986), "The Application of Exploratory Factor Analysis in Applied Psychology: A Critical Review and Analysis," Personnel Psychology, 39 (June), 291–314.
*Forman, Chris, Anindya Ghose and Batia Wiesenfeld (2008), "Examining the Relationship between Reviews and Sales: The Role of Reviewer Identity Disclosure in Electronic Markets," Information Systems Research, 19 (September), 291–313.
Forrester Research Inc. (2000), http://www.forrester.com
Ghose, Anindya and Panagiotis G. Ipeirotis (2006), "Towards an Understanding of the Impact of Customer Sentiment on Product Sales and Review Quality," in Workshop on Information Technology and Systems.
*Ghose, Anindya and Panagiotis G. Ipeirotis (2011), "Estimating the Helpfulness and Economic Impact of Product Reviews: Mining Text and Reviewer Characteristics," IEEE Transactions on Knowledge & Data Engineering, 23 (October), 1498–512.
Gilly, Mary C., John L. Graham, Mary Finley Wolfinbarger and Laura J. Yale (1998), "A Dyadic Study of Interpersonal Information Search," Journal of the Academy of Marketing Science, 26 (Spring), 83–100.
*Godes, David and Dina Mayzlin (2004), "Using Online Conversations to Study Word of Mouth Communication," Marketing Science, 23 (Fall), 545–60.
Goldsmith, Ronald E. (2006), Encyclopedia of E-Commerce, E-Government and Mobile Commerce, Hershey, PA: Idea Group Publishing.
Goodwin, Cathy and Ivan Ross (1992), "Consumer Responses to Service Failures: Influence of Procedural and Interactional Fairness Perceptions," Journal of Business Research, 25 (September), 149–63.
Grant, Kelli B. (2013), 10 Things Online Reviewers Won't Say, MarketWatch, http://articles.marketwatch.com/2013-03-04/finance/37368031_1_online-reviews-review-site-amazon-mechanical-turk
Grewal, Dhruv and Howard Marmorstein (1994), "Market Price Variation, Perceived Price Variation and Consumers' Price Search Decisions for Durable Goods," Journal of Consumer Research, 21 (December), 453–60.
*Gu, Bin, Jaehong Park and Prabhudev Konana (2012), "The Impact of External Word-of-Mouth Sources on Retailer Sales of High-involvement Products," Information Systems Research, 23 (March), 182–96.
Guernsey, L. (2000), Bookbag of the Future: Dental Schools Stuff 4 Years Worth of Manuals and Books into One DVD, The New York Times, 1–7.
Hair, Joseph F., William C. Black, Barry J. Babin and Rolph E. Anderson (2010), Multivariate Data Analysis, Englewood Cliffs, NJ: Prentice Hall.
Harmon, Amy (2004), "Amazon Glitch Unmasks War of Reviewers," The New York Times.
Harmon, Robert R. and Kenneth A. Coney (1982), "The Persuasive Effects of Source Credibility in Buy and Lease Situations," Journal of Marketing Research, 19 (May), 255–60.
Hirschman, Elizabeth C. and Morris B. Holbrook (1982), "Hedonic Consumption: Emerging Concepts, Methods and Propositions," Journal of Marketing, 46 (Summer), 92–101.
Hovland, Carl I., Irving L. Janis and Harold H. Kelley (1953), Communication and Persuasion: Psychological Studies in Opinion Change, New Haven, CT: Yale University Press.
Hovland, Carl I. and Walter Weiss (1951), "The Influence of Source Credibility on Communication Effectiveness," Public Opinion Quarterly, 15 (Winter), 635–50.
Howard, John A. and Jagdish N. Sheth (1969), The Theory of Buyer Behavior, New York: John Wiley.
Hunter, John E. and Frank L. Schmidt (2004), Methods of Meta-analysis: Correcting Error and Bias in Research Findings, 2nd ed. Newbury Park, CA: Sage Publications.
Hutchinson, J. Wesley, Wagner A. Kamakura and John G. Lynch (2000), "Unobserved Heterogeneity as an Alternative Explanation for 'Reversal' Effects in Behavioral Research," Journal of Consumer Research, 27 (December), 324–44.
Jiang, Bao-Jun and Bin Wang (2007), Impact of Consumer Reviews and Ratings on Sales, Prices, and Profits: Theory and Evidence, working paper, Pittsburgh, PA: Tepper School of Business, Carnegie Mellon University.
Johnson, David W., Geoffrey Maruyama, Roger Johnson, Deborah Nelson and Linda Skon (1981), "Effects of Cooperative, Competitive and Individualistic Goal Structures on Achievement: A Meta-analysis," Psychological Bulletin, 89 (January), 47–62.
Jones, Thomas O. and W. Earl Sasser Jr. (1995), "Why Dissatisfied Customers Defect," Harvard Business Review, 73 (November–December), 88–91.
Kahneman, Daniel and Amos Tversky (1979), "Prospect Theory: An Analysis of Decisions under Risk," Econometrica, 47 (March), 263–91.
Katz, Elihu and Paul F. Lazarsfeld (1955), Personal Influence: The Part Played by People in the Flow of Mass Communications, Glencoe, IL: The Free Press.
Kelman, H.C. (1961), "Process of Opinion Change," Public Opinion Quarterly, 25 (Spring), 57–78.
Khare, Adwait, Lauren I. Labrecque and Anthony K. Asare (2011), "The Assimilative and Contrastive Effects of Word-of-Mouth Volume: An Experimental Examination of Online Consumer Ratings," Journal of Retailing, 87 (March), 111–26.
Kim, Junyong and Pranjal Gupta (2012), "Emotional Expressions in Online User Reviews: How They Influence Consumers' Product Evaluations," Journal of Business Research, 65, 985–92.
Krasnikov, Alexander and Satish Jayachandran (2008), "The Relative Impact of Marketing, Research-and-Development and Operations Capabilities on Firm Performance," Journal of Marketing, 72 (July), 1–11.
Lee, Jaehoon and L.J. Shrum (2012), "Conspicuous Consumption versus Charitable Behavior in Response to Social Exclusion: A Differential Needs Explanation," Journal of Consumer Research, 39 (October), 530–44.
Lee, Jumin, Do-Hyung Park and Ingoo Han (2008), "The Effect of Negative Online Consumer Reviews on Product Attitude: An Information Processing View," Electronic Commerce Research and Applications, 7 (Autumn), 341–52.
*Li, Xinxin and Lorin M. Hitt (2008), "Self Selection and Information Role of Online Product Reviews," Information Systems Research, 19 (December), 456–74.
*Liu, Yong (2006), "Word of Mouth for Movies: Its Dynamics and Impact on Box Office Revenue," Journal of Marketing, 70 (July), 74–89.
Louviere, Jordan, Kenneth Train, Moshe Ben-Akiva, Chandra Bhat, David Brownstone, Trudy Ann Cameron, Richard T. Carson, J.R. Deshazo, Denzil Fiebig, William Greene, David Hensher and Donald Waldman (2005), "Recent Progress on Endogeneity in Choice Modeling," Marketing Letters, 16 (December), 255–65.
Maheswaran, Durairaj and Joan Meyers-Levy (1990), "The Influence of Message Framing and Issue Involvement," Journal of Marketing Research, 27 (August), 361–7.
Manchanda, Puneet, Peter E. Rossi and Pradeep K. Chintagunta (2004), "Response Modeling with Nonrandom Marketing-mix Variables," Journal of Marketing Research, 41 (November), 467–78.
Markus, Hazel Rose and Shinobu Kitayama (1991), "Culture and the Self: Implications for Cognition, Emotion, and Motivation," Psychological Review, 98 (April), 224–53.
Mathwick, Charla and Edward Rigdon (2004), "Play, Flow and the Online Search Experience," Journal of Consumer Research, 31 (September), 324–32.
McFadden, Daniel and Kenneth E. Train (1996), "Consumers' Evaluation of New Products: Learning from Self and Others," Journal of Political Economy, 104 (August), 683–703.
McGuire, William J. (1969), "The Nature of Attitude and Attitude Change," in Handbook of Social Psychology, Vol. 3, Lindzey Gardner and Aronson Elliot, eds. Reading, MA: Addison-Wesley, 136–314.
McGuire, William J. (1985), "Attitudes and Attitude Change," in Handbook of Social Psychology, Vol. 2, Lindzey Gardner and Aronson Elliot, eds. San Diego, CA: Random House, 233–346.
Moorthy, Sridhar, Brian T. Ratchford and Debabrata Talukdar (1997), "Consumer Information Search Revisited: Theory and Empirical Analysis," Journal of Consumer Research, 23 (March), 263–77.
Moorthy, Sridhar and Hao Zhao (2000), "Advertising Spending and Perceived Quality," Marketing Letters, 11 (August), 221–33.
Moyer, Michael (2010), "Manipulation of the Crowd: How Trustworthy are Online Ratings?," Scientific American, 303 (July), 26–8.
Mudambi, Susan M. and David Schuff (2010), "What Makes a Helpful Online Review? A Study of Customer Reviews on Amazon.com," MIS Quarterly, 34 (March), 185–200.
Nelson, Phillip (1980), "Comments on 'The Economics of Consumer Information Acquisition'," The Journal of Business, 53 (July), 163–5.
Nielsen (2012), Global Online Consumers and Multi-screen Media: Today and Tomorrow, http://www.scientificamerican.com/article.cfm?id=manipulation-of-the-crowd
Notess, Greg R. (2000), Consumers' Revenge: Online Product Reviews and Ratings, Web Wanderings, http://notess.com/write/archive/200004ww.html
Nueno, Jose Luis and John A. Quelch (1998), "The Mass Marketing of Luxury," Business Horizons, 41 (November–December), 61–8.
*Ogut, Hulisi and Bedri Kamil Onur Tas (2012), "The Influence of Internet Customer Reviews on the Online Sales and Prices in Hotel Industry," The Service Industries Journal, 32 (February), 197–214.
Ordabayeva, Nailya and Pierre Chandon (2011), "Getting Ahead of the Joneses: When Equality Increases Conspicuous Consumption among Bottom-tier Consumers," Journal of Consumer Research, 38 (June), 27–41.
Oyserman, Daphna, Heather M. Coon and Markus Kemmelmeier (2002), "Rethinking Individualism and Collectivism: Evaluation of Theoretical Assumptions and Meta-analyses," Psychological Bulletin, 128 (January), 3–72.
*Pathak, Bhavik, Robert Garfinkel, Ram D. Gopal, Rajkumar Venkatesan and Fang Yin (2010), "Empirical Analysis of the Impact of Recommender Systems on Sales," Journal of Management Information Systems, 27 (Fall), 159–88.
Prelec, Drazen and George Loewenstein (1998), "The Red and the Black: Mental Accounting of Savings and Debt," Marketing Science, 17 (1), 4–28.
Priluck, Randi (2003), "Relationship Marketing Can Mitigate Product and Service Failures," Journal of Services Marketing, 17 (1), 37–52.
Rindfleisch, Aric, Alan J. Malter, Shankar Ganesan and Christine Moorman (2008), "Cross-sectional Versus Longitudinal Survey Research: Concepts, Findings and Guidelines," Journal of Marketing Research, 45 (June), 261–79.
Richins, Marsha L. (1983), "Negative Word-of-Mouth by Dissatisfied Consumers: A Pilot Study," Journal of Marketing, 47 (Winter), 68–78.
Rust, Roland T., Donald R. Lehmann and John U. Farley (1990), "Estimating Publication Bias in Meta-analysis," Journal of Marketing Research, 27 (May), 220–6.
Salganik, Matthew J., Peter Sheridan Dodds and Duncan J. Watts (2006), "Experimental Study of Inequality and Unpredictability in an Artificial Cultural Market," Science, 311 (February), 854–6.
Salganik, Matthew J. and Duncan J. Watts (2008), "Leading the Herd Astray: An Experimental Study of Self-fulfilling Prophecies in an Artificial Cultural Market," Social Psychology Quarterly, 71 (December), 228–55.
Schiffman, Leon G. and Leslie Lazar Kanuk (2007), Consumer Behavior, New Jersey: Pearson Prentice Hall.
Schnapp, Madeline and Tim Allwine (2001), "Mining of Book Data from Amazon.com," paper presented at the UCB/SIMS Web Mining Conference, available at http://www.sims.berkely.edu:8000/resources/affiliates/workshops/webmining/Slides/ORA.ppt
Senecal, Sylvain and Jacques Nantel (2004), "The Influence of Online Product Recommendations on Consumers' Online Choices," Journal of Retailing, 80 (2), 159–69.
Sethuraman, Raj, Gerard J. Tellis and Richard A. Briesch (2011), "How Well Does Advertising Work? Generalizations from Meta-analysis of Brand Advertising Elasticities," Journal of Marketing Research, 48 (June), 457–71.
Skowronski, John J. and Donal E. Carlston (1989), "Negativity and Extremity Biases in Impression Formation: A Review of Explanations," Psychological Bulletin, 105 (January), 131–42.
Spreng, Richard A., Gilbert D. Harrell and Robert D. Mackoy (1995), "Service Recovery: Impact on Satisfaction and Intentions," Journal of Services Marketing, 9 (1), 15–23.
*Sun, Monic (2012), "How Does the Variance of Product Ratings Matter?," Management Science, 58 (April), 696–707.
Taylor, Shelley E. (1991), "Asymmetrical Effects of Positive and Negative Events: The Mobilization–Minimization Hypothesis," Psychological Bulletin, 110 (July), 67–85.
Van den Bulte, Christophe and Gary Lilien (2001), Two-Stage Partial Observability Models for Innovation Adoption, working paper, Wharton School, University of Pennsylvania.
Weber Shandwick/KRC Research (2012), Buy It, Try It, Rate It, http://www.webershandwick.com/resources/ws/flash/reviewssurveyreportfinal.pdf
Wilde, Louis L. (1980), "The Economics of Consumer Information Acquisition," The Journal of Business, 53 (July), 143–58.
Wood, J.A. (2008), "Methodology for Dealing with Duplicate Study Effects in a Meta-analysis," Organizational Research Methods, 11 (January), 79–95.
*Yang, Joonhyuk, Wonjoon Kim, Naveen Amblee and Jaeseung Jeong (2012), "The Heterogeneous Effect of WOM on Product Sales: Why the Effect of WOM Valence is Mixed?," European Journal of Marketing, 46 (November–December), 1523–38.
*Ye, Qiang, Bin Gu and Wei Chen (2010), Measuring the Influence of Managerial Responses on Subsequent Online Customer Reviews—A Natural Experiment of Two Online Travel Agencies, working paper, Harbin Institute of Technology.
*Ye, Qiang, Rob Law, Bin Gu and Wei Chen (2011), "The Influence of User-generated Content on Traveler Behavior: An Empirical Investigation on the Effects of e-Word-of-Mouth to Hotel Online Bookings," Computers in Human Behavior, 27 (March), 634–9.
Zaichkowsky, Judith Lynne (1985), "Measuring the Involvement Construct," Journal of Consumer Research, 12 (December), 341–52.
Zeithaml, Valarie A., Mary Jo Bitner and Dwayne D. Gremler (2013), Services Marketing: Integrating Customer Focus Across the Firm, 6th ed. New York: McGraw-Hill Irwin.
*Zhang, Zhu, Xin Li and Yubo Chen (2012), "Deciphering Word-of-Mouth in Social Media: Text-based Metrics of Consumer Reviews," ACM Transactions on Management Information Systems, 3 (April), 5:1–5:23.
*Zhu, Feng and Xiaoquan (Michael) Zhang (2010), "Impact of Online Consumer Reviews on Sales: The Moderating Role of Product and Consumer Characteristics," Journal of Marketing, 74 (March), 133–48.
*Zhu, Min and Shengqiang Lai (2009), "A Study about the WOM Influence on Tourism Destination Choice," in International Conference on Electronic Commerce and Business Intelligence, 120–4.