Measuring the Impact of a Single Negative Customer Review on
Online Search and Purchase Decisions
Marton Varga and Paulo Albuquerque
December 10, 2019
Abstract
How much does a single negative customer review impact online search and purchase
behavior? To answer this question, we compare choices made by consumers searching for
a product that had a negative review posted on its product-page with those made when the
same review had “moved” to a second page as additional reviews were posted. By focusing on
where the review appears rather than whether it was submitted, our approach tackles concerns
of spurious correlation between customer reviews and unobserved demand shocks. Using data
from a large online retailer, we find that the discovery of a negative review – one that has 3-stars
or less out of 5 – leads to an average drop in purchase probability of 51.4%, an increase
in continuing search for a competing item of 11.4%, and a 15.8% increase in the price paid if a
purchase occurs. Our findings are presented on a two-dimensional product map – an innovative
way to evaluate the vulnerability of products to a single negative review.
Keywords: Customer reviews, Online shopping, Consumer search, Elasticity of demand,
Electronic word-of-mouth
We are grateful to the Wharton Customer Analytics Initiative and its partners for providing us the data. We thank
the participants at the 8th Workshop on Consumer Search and Switching Cost, the 39th Annual ISMS Marketing
Science Conference, and the 2018 INSEAD PhD Research Day for their valuable comments and suggestions.
Ph.D. Candidate, INSEAD. Corresponding author at marton.varga@insead.edu.
Associate Professor of Marketing, INSEAD.
“Please do not leave a negative review. Contact us and we will do everything for
your satisfaction.” – eBay seller
INTRODUCTION
It is common practice among online retailers to post customer ratings and reviews on their websites.
Online shoppers frequently refer to reviews for input at multiple stages of the purchase decision
process to obtain information, evaluate and compare alternatives, and decide which product, if
any, to buy (Mudambi and Schuff 2010). Several studies show individual reviews and ratings are
regarded by consumers as highly credible input in purchase decisions (Floyd et al. 2014,
BrightLocal 2017, Chen and Xie 2008, Brown et al. 2005, Kozinets et al. 2010, Y. Liu 2006). Indeed,
consumers often trust reviews more than advertising (Cheong and Morrison 2008, Hung and Li
2007). Studies of brand signaling suggest that, compared to company communications, customer
reviews give brands additional credibility (Erdem and Swait 1998, Montgomery and Wernerfelt
1992).
Given the importance of this consumer-generated content, manufacturers monitor the arrival of
new reviews. In online marketplaces, the drive to secure good ratings is such that some companies
resort to “brushing” – engaging in fabricated transactions where fictitious buyers leave positive
feedback (NPR, 2018). Negative reviews have become so potentially damaging that sellers have
sued buyers for causing irreparable harm with a critical opinion (Washington Post, 2012).
Likewise, online retailers who provide the space for user-generated information pay close attention
to its valence and content in an effort to ensure a high level of trust, a positive reputation, and
a meaningful experience on their site. For example, Amazon filed lawsuits against more than a
thousand defendants who allegedly created fake reviews for about $5 per unit (Forbes, 2017, New
York Post, 2017).
Such cases illustrate the importance that a single review can have for practitioners.
While it is clear that online shoppers consult more information than the average review rating, its
impact on sales has been the focus of the majority of research (e.g., see Babić Rosario et al. 2016,
You, Vadakkepatt, and Joshi 2015, and Floyd et al. 2014 for meta-analyses) rather than the effect
of a single review. Hence, our study contributes to the literature by quantifying the impact of a
single negative review – of three stars or fewer out of five – on product demand and search. We
focus on negative reviews because they are of greater information value to consumers than positive
ones (Ahluwalia, Burnkrant, and Unnava 2000, Monga and John 2008, J. Lee, Park, and Han 2008,
Ito et al. 1998, Ivanova, Scholz, and Dorner 2013, Soroka 2006).
More specifically, our study answers the following research questions:
What is the impact of a single negative review on the consumer’s purchase decisions
regarding the reviewed product and its competitors?
How much does a single negative review impact search behavior, i.e. how does it affect the
consideration set of the consumer?
When they see a low rating, are customers willing to pay a premium for a rival product, and
if so, how much?
How much does sales elasticity to a single negative review vary across products and
categories?
To answer these questions we use a quasi-natural experiment created by the way retailers update
consumer-generated reviews on product-pages, and leverage a rich data set that tracks multiple
steps of the buying process. Our identification strategy involves comparing consumers who
scrolled down to the first five reviews available at the bottom of the product-page and found a low
rating there, with consumers who scrolled down to the first five reviews of the same product but
did not see the low rating at the bottom of the product-page. For the latter group, the same low
rating was accessible upon clicking on a “read more reviews” button. This exogenous treatment
of consumers happens because the number of reviews on the product-page is fixed: newly submitted
reviews automatically take the first positions, thereby relegating older ratings to higher-order
review-pages. In most cases this means that the review will not be seen.
Our approach deliberately focuses on the relegation of a negative review to the second
review-page rather than on its creation. Unobserved events like delivery delays or stock-outs can
simultaneously lead to the creation of a negative review and undermine demand for the product (Reinstein
and Snyder 2005). Consequently, if we instead compared consumer decisions before the low rating
was posted with those made after the arrival of that rating, we would be measuring the joint impact
of the low rating and an unobserved demand or supply shock(s). By focusing on the review’s
relegation, which occurs after a certain time lapse since its submission, we avoid this problem because
the negative review was available on the website both before and after relegation.
We test our method on a large data set on search and purchase decisions from two broad product
categories sold at a British online retailer: technology and home-&-garden, with 410,628
product-page views and 7,774 purchases. We find that viewing a single negative review decreases the
probability of purchase by 51.4% while it is present on the product-page, and increases the probability
that the consumer adds at least one more item to the consideration set by 11.4%. Consumers
are also more likely to switch to a more expensive alternative when they see a negative review
while searching, causing a 15.8% increase in the price paid if they choose to buy a product at the
platform. We derive the negative review elasticity of sales for a variety of products, with most
values between -5% and -50% depending on the category, and transpose them to a “vulnerability
map”. This offers managers a useful tool to gauge the impact of a single negative review on their
product offering.
THEORETICAL BACKGROUND
Our research builds on economic-based constructs – preferences, risk aversion, search costs – and
applies them to consumer search and purchase decisions. Both theoretical (Kahneman and Tversky
1979) and empirical (Siegrist and Cvetkovich 2001, Bizer, Larsen, and Petty 2010, Mittal, William,
and Patrick 1998) literature suggest that negative information about a product has a two-fold effect
on choices: it decreases an item’s perceived utility and increases the uncertainty about its quality.
In addition, although consumers have easy online access to a large number of product pages, they
often decide to browse, in a sequential way, a very small percentage of available product pages
(e.g., Kim, Albuquerque, and Bronnenberg 2010, Honka 2014, Seiler 2013).
Since a consumer searches for only a small subset of all information accessible on the retailer’s
website, we argue, first, that each observed piece of information is relevant and, hence, even a
single negative review can trigger the perception of worse quality. Because, in the online setting,
negative information is much less frequent than positive information (Hu, J. Zhang, and Pavlou 2009,
Schoenmueller, Netzer, and Stahl 2018), a negative review provides a strong signal when compared
to more commonly observed positive reviews (Ahluwalia, Burnkrant, and Unnava 2000).
Second, we assume that consumers search for products in a sequential way (Kim, Albuquerque,
and Bronnenberg 2016; 2010). Indeed, this assumption is strongly supported in our empirical
application. In a sequential search framework, consumers continue browsing for products as long
as the expected marginal benefit of sampling an additional item exceeds its marginal cost. In this
setting, a negative rating should make search longer, because the additional negative information
in a product page decreases the utility of the product being evaluated. In contrast, in a
fixed-sample (simultaneous) search framework (De los Santos, Hortacsu, and Wildenbeest 2012) where
consumers decide ex-ante how many products to sample, a discovered negative review would not
change the size of the consideration set.
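As a concrete illustration of this stopping rule, the sketch below simulates a sequential searcher. The utility distribution, the search cost, and the size of the negative-review penalty are illustrative assumptions, not quantities estimated in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_gain(best_so_far, mu=0.0, sigma=1.0, draws=100_000):
    """Expected improvement over the current best option from sampling one
    more product whose (unknown) utility is Normal(mu, sigma)."""
    u = rng.normal(mu, sigma, draws)
    return np.maximum(u - best_so_far, 0.0).mean()

search_cost = 0.10                 # assumed marginal cost of one more page
best, pages = rng.normal(), 1

# Keep browsing while the expected marginal benefit exceeds the cost.
while expected_gain(best) > search_cost:
    best = max(best, rng.normal())
    pages += 1
print(f"stopped after {pages} product-pages")

# A negative review lowers the utility of the option under evaluation, which
# raises the expected gain from sampling further items, so search lengthens.
print(expected_gain(best - 0.5) > expected_gain(best))   # True
```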
Building on these concepts, we posit that after a single negative review there will be (i) a
reduction of purchase probabilities of products receiving the negative review; (ii) an increase in
search activity for competing products due to the sequential nature of online search, the negative
shock, and risk aversion; (iii) an increase in the probability of buying a substitute product; and
(iv) a higher price paid for the substitute product, as it can signal higher quality (McConnell 1968;
Bagwell and Riordan 1991).
Contribution to the Literature on Online Word-of-Mouth
The literature on electronic word-of-mouth (eWOM) is extensive. To summarize it, various meta-
analyses of the relationship between customer reviews and product sales have been published.
Floyd et al. (2014) collect 26 empirical studies that estimate the relationship between the study-
specific measure of review valence (e.g. number of positive ratings, number of negative ratings
or average rating) and the study-specific measure of sales (e.g. sales-rank, daily revenue, sales
growth rate). In many cases, the articles comprising the authors’ database did not report
elasticities directly, requiring them to calculate the elasticities from information provided in those
studies. They plot the frequency distribution of the elasticities across the multiple studies using
multiple definitions for the elasticity of sales to valence, finding an average elasticity of 0.69.
You et al. (2015) conduct a similar meta-analysis and find that the mean of the valence
elasticities in their database (comprising 51 studies that used a variety of different definitions for the
elasticity of sales to review valence) is 0.236. Instead of elasticities, Babić Rosario et al. (2016)
use bivariate and partial correlations to measure the effect size of eWOM on sales. They obtain
their main estimate using a random-effects method by Hedges and Olkin (2014) for combining
the 96 studies in their database. The authors conclude that overall, there is a positive correlation
of .091 between eWOM and sales. Although these three meta-analyses derive their results from
a number of papers which use very different metrics for sales and valence, when taken together,
there seems to be a significant positive relationship between the content of online word-of-mouth
information and the sales measure available.
Despite the absence of studies on customer reviews that specifically address the question of
how much a single review impacts sales or consumer search, we can approximate this effect from
a limited set of papers, using an approach similar to that of You et al. (2015), who calculate elasticities
from the reported tables of previous articles. For example, Chevalier and Mayzlin (2006) use data
from Amazon.com and BN.com, finding that an additional 1-star review increases the sales-rank
difference of a listed book. Although the authors do not interpret the magnitude of the effect, based
on their results, we can derive that a new bad review would decrease the sales rank of a book by
35% – albeit this cannot be translated to an equivalent sales decrease, as sales data are not available.
Zhu and X. Zhang (2006; 2010) study the role of customer
reviews on sales of video games and find that a one-point increase in a game’s average rating leads
to a 4% increase in its sales in the following month. From their reported tables, we can infer that
one negative review would reduce the average rating of a representative game by 0.5, implying that a
single negative review would reduce sales of video games by 2%.
Wu et al. (2015) analyze how consumers update their restaurant preferences from online
reviews. From their model and estimates, we can deduce that one additional negative review would
cause a 2% drop in choice probability of an average restaurant. In their study on Twitter posts’
influence on box-office income, Hennig-Thurau et al. (2015) provide a numerical example, finding
that around 800 fewer negative reviews would have increased the box-office revenue of a popular
movie by 3.5%. Assuming linearity, this means that a single negative tweet leads to a 0.004% drop
in sales. Finally, X. Liu, D. Lee, and Srinivasan (2017) investigate how the text of the reviews
influences sales by applying a deep-learning algorithm. From their reported coefficients, we
calculate that one more negative review read by the consumer reduces the odds ratio of conversion by
between 5% and 7%. However, as their focus is the relationship between text aesthetics and
conversion, they do not translate this finding into a percentage change in purchase probability. Their
identification strategy is also different from ours as it relies on variation due to the creation of
reviews that induce within-product changes of review content.
Although it is possible to obtain some guidance about the magnitude of the effect of one review
on sales from the aforementioned literature, the role of a single review – particularly its causal
impact on choice beyond the effect of average rating – has hitherto been of secondary concern.
By quantifying the causal relation between a single negative review and sales, we contribute to
the existing literature on the impact of customer reviews on choice decisions. We also advance a
growing body of work on how consideration sets are formed and how traditional marketing efforts
of firms influence the palette of products consumers are willing to choose from (Astorne-Figari,
López, and Yankelevich 2018, Demuynck and Seel 2018, Manzini and Mariotti 2018, 2014, Eliaz
and Spiegler 2011, Van Nierop et al. 2010, Allenby and Ginter 1995) by quantifying the effect of
a negative rating on the size of the consideration set.
Concerning Spurious Correlation
One of the concerns when identifying the effect of online ratings on demand is the possibility
that parameter estimates may be biased by spurious correlations, that is, when critical reviews
are simply evidence of a negative demand or supply shock. In such cases, the decline in sales
would have happened even without the existence of the review, as consumers could have found out
about the negative shock from other information sources. This raises concerns about methods that
compare sales or search decisions before and after the arrival of a new rating on an online platform.
In an online shopping setting, spurious correlation between review creation and sales can arise
from various factors related to either permanent or temporary shocks. We illustrate a sample of
these with actual quotes from customers submitted to retailers and social media platforms.
Spurious correlation caused by permanent demand shocks: “We got this TV. The video
feature doesn’t work as advertised. It simply freezes. We found on the manufacturers website where
they said they simply stopped supporting this part of their TV.”
The manufacturer may have stopped offering maintenance and/or customer support, as
exemplified by the above quote from a customer review. Consumers can obtain this information from
either the online review or other sources such as the firm’s website or newsletter.
Spurious correlation caused by temporary demand shocks: “This game wouldn’t play
on our video console, looked on the Internet and there is a fault with them.”
Products may have a temporary malfunction, which could be discovered from other sources
than the retailer. The glitch may be resolved after some time, for example, with a software update.
Hence, a temporary drop in demand is likely to be caused by the demand shock instead of the
negative review. If we compared consumers browsing before the arrival of the negative review with
consumers browsing after its arrival we might, wrongly, contrast groups of consumers who are all
affected by this provisional malfunction.
Spurious correlation caused by temporary supply shocks: “To our valued customers
- we’re experiencing delays in the NY & NJ delivery areas and are working diligently to resolve
these issues. Unfortunately, we’re unable to provide any updated delivery details at the present
time. Thank you for your patience!”
Manufacturers may have logistical or capacity issues that impede the satisfaction of demand. With
the above Twitter post by IKEA, consumers learned about the supply shock. Those who read the
post were less likely to order the product at least until the delivery issues were resolved.
This reduction of demand could coincide with the creation of negative customer reviews due to the
experienced delivery delay, thus creating a spurious relationship.
Previous research has encountered similar challenges when measuring the effects of reviews on
sales. Reinstein and Snyder (2005) warn of spurious correlation between expert reviews and movie
ticket sales induced by an underlying correlation in unobservable quality signals. The authors
were able to isolate the influence of expert reviews from other unobservables using a
difference-in-differences (DiD) approach that was built on the difference between positive and negative
evaluations for movies reviewed during their opening weekend and movies reviewed later on. They
found that the traditional methodology – which does not control for the spurious correlation
between review generation and demand – overestimates the effect by about 50% to 100%.
Chintagunta et al. (2010) also exploit the sequential nature of movie release strategies across
markets. As reviews can only come from consumers in places where the movie was previously
released, the researchers use information from these markets as instruments to measure the impact
of reviews at release times in other locations. Anderson and Magruder (2012) take advantage of
a managerial policy by Yelp – a local-search service powered by a crowd-sourced review forum –
to obtain the causal effect of the average rating on restaurant reservations. When Yelp computes
the average rating of a product or service, the company rounds it off to the nearest half-star. This
enables the authors to propose a regression discontinuity design that takes advantage of this policy.
In a similar spirit, we propose an identification strategy based on the presence of a negative
review at different locations in the platform, either in a product-page or in a second page that
requires consumers to paginate to access content.
RESEARCH DESIGN
Data
We use data on consumer search and purchase decisions at a large online retailer based in the
United Kingdom, between February 1st and March 31st of 2015. The data set includes individual
visits to web-pages of home-&-garden and technology products as click-stream information. These
products are further classified by the online retailer into 629 categories. We observe 31,284
products that received at least one consumer visit during the analysis period, of which 14,876 products
have at least one review.
For our analysis, we define a “search session” as a visit that starts when a consumer first opens a
product-page and ends either with a purchase in that category or with the consumer’s last observed
product-related click in that category. Altogether, in our dataset, 123,994 unique consumers carried
out 222,682 search sessions lasting less than 24 hours in 94.7% of the cases, during which they
viewed a total of 410,628 product-pages and purchased 7,774 items.
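A minimal sketch of how such sessions could be built from raw click-stream data follows; the frame layout and column names are hypothetical, and a purchase is treated as closing the current session in that category.

```python
import pandas as pd

# Hypothetical click-stream layout: one row per product-page view.
clicks = pd.DataFrame({
    "consumer_id": [1, 1, 1, 2],
    "category":    ["tv", "tv", "tv", "sofa"],
    "timestamp":   pd.to_datetime(["2015-02-01 10:00", "2015-02-01 10:03",
                                   "2015-02-10 09:00", "2015-02-05 12:00"]),
    "purchase":    [0, 1, 0, 0],
})

clicks = clicks.sort_values(["consumer_id", "category", "timestamp"])

# A session starts at the first product-page a consumer opens in a category
# and ends with a purchase in that category (or with the last observed click).
prev_purchase = (clicks.groupby(["consumer_id", "category"])["purchase"]
                 .shift(fill_value=0).astype(bool))
first_click = ~clicks.duplicated(["consumer_id", "category"])
clicks["session_id"] = (first_click | prev_purchase).cumsum()
print(clicks)
```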
Visitors had access to 575,084 unique product reviews, each with a rating. Products were
rated on a 1-to-5 discrete scale represented by stars: 57% of the reviews had a 5-star rating, 26%
had a 4-star rating – hence the overwhelming majority of reviews were positive – 6% had a
3-star rating, 3% had a 2-star rating, and 8% had a 1-star rating. Given the empirical distribution of the
ratings, we classify a review as negative if it has 3-stars or less, although we also test the robustness
of our findings with different definitions.
The product’s price, number of reviews, and average rating are visible while browsing a
particular category-page without opening the page of the particular product. If consumers decide to
click on the product-page, they see it organized into two parts: the product description and the
review list (Figure 1). The product description section at the top of the product-page is
immediately visible when the page is opened. It repeats the information from the category-page and
provides further details about the item such as technical specifications, size, color, etc. In contrast,
the review-section is not immediately visible on arrival at the product-page. The user must scroll
down to view the five most recent reviews.¹ We denote this area of the product-page the “first
review-page”. After scrolling down to the bottom of the page, the consumer has the option to click
to additional review-pages, each of which can contain 20 reviews.

¹ Reviews can be sorted by consumers, for example by helpfulness or rating. However, only 3.3% of
consumers sort the reviews by any criteria.
– FIGURE 1 ABOUT HERE –
Customer surveys report that people form an opinion by typically reading one to six reviews
(Search Engine Land, 2015). From the data, we also see that the large majority of consumers
do not read the reviews accessible in pages other than the initial product-page. Important to our
analysis, our data contain an indicator variable that captures the decision to scroll down to the
review-section and to click to additional review-pages. About 24.1% of the consumers scroll to
the review section in the product page, while 16.7% of consumers who see the first review-page
paginate by clicking on the “next review-page” button. As a result, a negative review is hidden
from the eyes of the majority of consumers, including about 5 out of 6 consumers who scroll down
to the first set of reviews on the product-page. Altogether, consumers viewed the first review-page
76,726 times and paginated to the remaining review-pages 12,773 times.
In terms of browsing behavior, about 63% of the consumers click on one product-page only,
27% click on two or three pages, and the remaining 10% of consumers click on four or more
product-pages. Assuming that viewing a product implies consideration, the average consideration
set size in our data is 1.84. In terms of time spent on the website, a search session lasts about 2
minutes on average.
For each product, we observe reviews, prices, product category and characteristics, within-
category market shares at the retailer, and page-view shares during the two months. Regarding
reviews, the mean number of reviews is 55.9 and the median is 7.3, among products with at least
one review. The average rating is 4.18 and the median rating is 4.34. Regarding market shares
and page-view shares, we find strong market concentration: about 1% of items attract more than
two-thirds of visits and purchases. Finally, the price average is £81.5, with £68.1 in the technology
category and £85.2 in home-&-garden.²

² For most products, we obtained average price directly from the data provider. We also scraped
archived data from to complete the data set. We were left with 1,693 products for which we could
not obtain price data. In these cases, we use the category-average to impute the missing values.
To relate search patterns to our hypothesis on price paid, Figure 2 shows the price (in £) of
products as consumers move forward in their search process, from the first item seen in a browsing
session to the tenth. There is a clear pattern of price increase as consumers move through product-
pages. This suggests that if the consideration set increases due to the discovery of a negative review,
the extended set will likely include more expensive products.
– FIGURE 2 ABOUT HERE –
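The pattern behind Figure 2 is a simple aggregation; the sketch below assumes hypothetical column names for the page-view data.

```python
import pandas as pd

# Hypothetical columns: session_id, the position of the page view within the
# session (1 = first product seen), and the product's price in GBP.
views = pd.read_csv("page_views.csv")

price_by_position = (views[views["position"] <= 10]
                     .groupby("position")["price"].mean())
print(price_by_position)   # rising means reproduce the Figure 2 pattern
```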
Identification of Review Impact
Our data includes a unique quasi-natural experiment that helps quantify the effect of a low rating
on both search and choice. During the two months, the online retailer updated the product-pages
daily with the publication of reviews by consumers who purchased the respective product. New
reviews took the top spots on the product-page, while older reviews moved further down or were
relegated to later review-specific pages. This sequential arrival of reviews, without managerial
influence, creates variation in the location of reviews on the retailer’s website at the time of arrival
of a consumer.³

³ From conversations with company representatives, we believe the firm does not manipulate the
publication of submitted reviews beyond censoring the ones with inappropriate wording or offensive
language.
We focus on the clear discontinuity of comparing the status of a negative review in the last
(5th) position of the first review-page versus on the top of the second review-page. By doing so
we ensure that our approach is unrelated to the effect of the creation of the negative review, thus,
free of potential spurious correlations. Hence, the relegation of the negative review to the second
review-page and the decision to scroll down to the review area in the product-page are used as
distinctions between five consumer groups:
Group 1: Consumers who visited the product-page, scrolled down to the review-section at
the bottom of the product-page, and found a low rating there in the last position;
Group 2: Consumers who visited the product-page, scrolled down to the review-section, but
did not find the same low rating there, because of its relegation to a second review-page;
Group 3: Consumers who visited the product-page while the same low rating was accessible
at the bottom of the product page (in the last position), but who did not scroll to the review-
section;
Group 4: Consumers who visited the product-page while the same low rating was relegated
to the second review-page, but who did not scroll to the review-section;
Group 5: Consumers who visited the product-page at any other time (scrolling down or not)
and consumers who browsed other products.
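The assignment logic can be summarized in a few lines. The sketch below is ours, with hypothetical flags for where the focal negative review sat at the time of the visit (in the last position of the first review-page, or on the second review-page).

```python
def assign_group(neg_in_last_slot_page1: bool, neg_on_page2: bool, scrolled: bool) -> int:
    """Classify a product-page visit into Groups 1-5, given where the focal
    negative review sat at the time of the visit (hypothetical flags)."""
    if neg_in_last_slot_page1:          # treatment period
        return 1 if scrolled else 3
    if neg_on_page2:                    # control period: review relegated
        return 2 if scrolled else 4
    return 5                            # any other visit, or other products

# A scroller visiting while the review is in position 5 belongs to Group 1.
print(assign_group(True, False, scrolled=True))   # -> 1
```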
Figure 3 provides an illustrative example. In this case, at the beginning of the observation window,
the product-page contains a negative review with a 1-star rating, at position 4. Consumers arriving at this
page will be considered in Group 5. Later, a review with a 5-star rating is posted and older reviews
move down: the negative review is now in position 5 of the product-page and any consumers
arriving and scrolling down to the review area will be classified as Group 1, while those consumers
that do not scroll down will be part of Group 3. Even later, a 4-star review is posted and the 1-star
review is relegated to the review-specific page. To observe this rating, consumers now need to
paginate to the second review-page. At this point, any consumer who arrives at the page, scrolls down but
does not paginate will not see the negative review and hence will be classified as a member of
Group 2, while consumers that do not scroll down will belong to Group 4.
– FIGURE 3 ABOUT HERE –
We observe 1,884 products that have at least one treatment and a corresponding control period
during the two months of data availability. We define these as focal products, as these are the
key products in our analysis and identification of the effect of the negative review. During the
observation period, these 1,884 products accounted for 24.1% of the sales and for 24.9% of the
traffic, measured as number of page-views, in categories that include at least one focal item.⁴ Such
categories received 90.6% of the overall traffic at the retailer. The remaining products sold by the
retailer are still in the analysis to identify other effects.

⁴ The remaining 75.9% of the sales (75.1% of the traffic) come from products that do not have any
control-treatment pair in our two-month analysis period.
There were 16,795 visits for focal products in treatment periods, 43,263 visits for focal prod-
ucts in control periods, and 36,916 visits for focal products outside treatment or control periods.
313,654 visits were made for products other than focal. Among visits made during treatment pe-
riods, reviews were read 4,151 times (Group 1) while reviews were not read 12,644 times (Group
3). Among visits made during control periods, reviews were read 10,880 times (Group 2) while
reviews were not read 32,383 times (Group 4). On average, before visiting the product-page of a
focal item, Group 1 consumers scroll down to the review section of 0.41 products, and Group 2
consumers to 0.42 products. Group 3 and Group 4 consumers, however, read reviews about fewer
products, 0.12 and 0.14 products, respectively. The mean duration of the treatment periods is 11.4
days, while that of the control periods is 20.4 days.
As more reviews need to arrive to push the negative review to the second review-page, we
inherently have a temporal lag between the arrival of Groups 1 and 2 above. Hence, we cannot
use first differencing methods (Wooldridge 2010) as Group 2 consumers always visit the product-
page at a later stage than Group 1 consumers. If we simply compared the behavior of consumers
in these two groups, there could be misattribution of the review effect to time-related factors such
as seasonality or an increasing or declining popularity of a product. Instead, we apply a DiD approach
(Blundell and Dias 2009, Athey and Imbens 2017), in which the difference between consumers in
Group 3 and Group 4 is used to control for any demand changes that may exist between treatment
and control periods due to the temporal lag. These latter two groups do not see the reviews but
search at a similar time as Groups 1 and 2, respectively. In other words, we first calculate a first
difference between decisions of Group 1 and Group 2, then calculate a second difference between
decisions of Group 3 and Group 4. Our estimate for the effect of the relegation of the negative review will be
the difference between these two differences.
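A small worked example of this double difference, with made-up purchase rates rather than the paper's estimates:

```python
# Illustrative purchase rates per group (made-up numbers, not estimates).
rate_g1 = 0.010   # scrolled, negative review on the product-page (treatment)
rate_g2 = 0.020   # scrolled, review relegated to page two (control)
rate_g3 = 0.015   # did not scroll, treatment period
rate_g4 = 0.016   # did not scroll, control period

first_difference  = rate_g1 - rate_g2    # review effect + time trend
second_difference = rate_g3 - rate_g4    # time trend only
print(first_difference - second_difference)   # -0.009, the DiD estimate
```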
In order to obtain unbiased estimates using the DiD approach, our assumption is that
unobserved shocks concurrent with the creation of the review should not affect consumers who read
reviews differently than consumers who do not read reviews, and that the difference between groups
3 and 4 is a good control, when compared to the difference between groups 1 and 2. Intuitively, we
see no reason why this would not be the case in general. Nonetheless, to control for any possible
differences between groups of consumers, we additionally include a variety of variables to ensure
this. These further control variables describe the behavior of the consumer before the discovery
of the focal item including the number of browsed pages and reviews read previously (both in the
category of the focal product and in other product categories). By doing so, consumers who scroll
down to read the reviews and those who do not are as similar as possible, conditional on observed
behavior at the platform. This ensures that they are comparable in terms of preferences, informa-
tion, and motivation to read reviews, hence should react similarly to shocks. We present detailed
econometric specifications in the next section for different consumer decisions.
Model-Free Evidence
This section provides descriptive evidence of our hypotheses by looking at the average behavior
of consumers who browsed for the focal products in the treatment periods, when a negative review
was accessible upon scrolling down, and in their control periods, when it was not. Table 1 shows
how frequently on average consumers (i) purchased the focal item, (ii) purchased a competing
item, (iii) searched for more alternatives after seeing the page of the focal product, (iv) the
number of additional alternatives searched, and (v) the price paid for the purchased product – focal
or competitor. These statistics are broken down by condition and by scrolling decision. In Table 1,
we indicate the change in the behavior of consumers visiting in treatment and control periods (i.e.
when the negative review was in position 5 on the product-page versus when it was moved to the
second page), and the respective significance.
– TABLE 1 ABOUT HERE –
The descriptive statistics tell us that consumers who searched in treatment periods and scrolled
to the reviews (where they saw the low rating) have lower purchase rates than those who did not
scroll to the reviews (hence did not see the low rating). The purchase rate of consumers who
searched in control periods (when the review was on the second page) and scrolled to the reviews
is higher than of those who did not scroll to the reviews. As most control period consumers did not
see the negative review because it was only accessible on the second review-page, this suggests
that encountering a negative review decreases purchase intention.
To control for unobserved trends and shocks, our strategy is to separate out the behavior of
those consumers who did not see the reviews. The last column of the table simply shows the double
difference values, i.e. the differences between the respective values in the “Difference” column.
These double differences suggest that those who encounter a negative rating on the first review-
page are less likely to purchase the focal product and, to a lesser extent, more likely to purchase
a competing product. Furthermore, viewing a negative review is associated with an increased
average propensity to search for alternatives, with a larger consideration set, and with a higher
price paid for the finally chosen product – focal or substitute.
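The double-difference column can be reproduced mechanically from Table-1-style cell means; the numbers below are invented purely for illustration.

```python
import pandas as pd

# Hypothetical Table-1-style cell means (made-up numbers): one row per
# outcome, with treatment/control means for scrollers and non-scrollers.
cells = pd.DataFrame({
    "outcome":        ["buy_focal", "buy_competitor", "continue_search"],
    "scroll_treat":   [0.010, 0.012, 0.300],
    "scroll_ctrl":    [0.020, 0.011, 0.270],
    "noscroll_treat": [0.015, 0.010, 0.200],
    "noscroll_ctrl":  [0.016, 0.010, 0.205],
})

diff_scroll   = cells["scroll_treat"] - cells["scroll_ctrl"]     # "Difference"
diff_noscroll = cells["noscroll_treat"] - cells["noscroll_ctrl"]
cells["double_diff"] = diff_scroll - diff_noscroll
print(cells[["outcome", "double_diff"]])
```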
Modeling the Effect of a Review Relegation to a Second Page
Our approach has two stages to estimate the impact of a negative review. First, we quantify the
impact of the negative review’s relegation to the second review-page, which most consumers do not
visit. Next, we correct that estimate to account for the fact that even when the review is in the
second page of reviews (our control period), there is a small percentage of consumers that visit
the product page, but because they paginate, they will be affected by the negative review. We
start by describing how to obtain estimates for the effect of the negative review on different
decisions, using the relegation of the review to the second review-page. We formulate a model for
each consumer decision of interest and show how we obtain our DiD coefficients.
Purchase decision of product j. The first analysis evaluates the impact of the review relegation
on the purchase decision of the consumer. To do so, we define consumer i’s utility from purchasing
product j on day t as

U_{ijt} = \beta_{0j} + \beta_1 T_{jt} R_{ijt} + \beta_2 C_{jt} R_{ijt} + \beta_3 T_{jt} (1 - R_{ijt}) + \beta_4 C_{jt} (1 - R_{ijt}) + \beta_5 R_{ijt} + \boldsymbol{\beta}_6' X_{ijt} + \varepsilon_{ijt}.   (1)
In Equation 1, the term T_{jt} (T for treated) takes the value of 1 if the product is one of the
focal products visited when there is a negative review on the product-page, and 0 otherwise. The
term C_{jt} (C for control) takes the value of 1 if, at time t, consumer i visits a product-page
that does not contain a negative review because that review was relegated to the second review-page,
and zero otherwise. The variable R_{ijt} measures scrolling behavior, taking the value of 1 if the
consumer scrolled down to the review area in the product-page, and 0 otherwise. We include
product-specific fixed effects, \beta_{0j}, and the vector X_{ijt} of time-varying control variables,
namely: an indicator for whether j has reviews at the time of the visit, the average rating of
product j (zero if no reviews), product j’s number of reviews, the number of products browsed in the
category of product j and in other categories until viewing product j’s page, the number of products
whose reviews were read in the category of product j and in other categories until reading the
reviews of product j, the number of search sessions initiated in the category of product j until
viewing that product’s page, the mean duration of previous search sessions in other categories (in
minutes), the mean probability of purchase in other categories (before arriving at product j’s
page), and a linear trend in days since the start of the observation period.⁵ Finally,
\varepsilon_{ijt} is an i.i.d. unobserved component assumed to have an extreme value distribution.
A purchase is made if U_{ijt} > 0.

⁵ For variables that can be zero, we take the logarithm of the variable’s value plus one.
In this formulation, \beta_2 - \beta_1 measures the utility difference between consumers in
Groups 2 and 1 as defined earlier (who scrolled down to the focal product’s reviews), while
\beta_4 - \beta_3 measures the utility difference between consumers in Groups 4 and 3 (who did not
scroll down). Hence, the DiD estimator for the impact of relegating the negative review on purchase
by a consumer who has scrolled to the reviews is

\delta_1 = (\beta_2 - \beta_1) - (\beta_4 - \beta_3).

Due to the properties of logit regression (see e.g. Train 2009), \delta_1 is a difference in
log-odds ratios: the log of the probability of buying product j over the probability of not buying
it, compared between consumers who scrolled to the reviews in control and treatment periods,
controlling for all else.
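The paper runs the estimation in H2O (see the Estimation section below); purely as an illustration of the specification, the sketch here fits Equation 1 as a logit with product fixed effects and recovers \delta_1 from the interaction coefficients. The file name and column names are assumptions, and a subsample would be needed to keep the fixed effects tractable at this scale.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file and columns: one row per product-page view, with
# purchase (0/1), treat/ctrl period flags, a scrolled flag, product_id,
# and a few of the controls in X.
visits = pd.read_csv("visits.csv")

model = smf.logit(
    "purchase ~ treat:scrolled + ctrl:scrolled + treat:no_scroll"
    " + ctrl:no_scroll + scrolled + C(product_id)"
    " + avg_rating + n_reviews + trend",
    data=visits.assign(no_scroll=lambda d: 1 - d["scrolled"]),
).fit(disp=False)

b = model.params
delta_1 = ((b["ctrl:scrolled"] - b["treat:scrolled"])
           - (b["ctrl:no_scroll"] - b["treat:no_scroll"]))
print(delta_1)   # DiD estimate on the log-odds scale
```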
Purchase decision of any competitor of product j. The negative review of product j may
increase the purchase probability of competitor products offered by the retailer in the same
category. Therefore, we model the decision to buy any competitor j' within the same category as
follows. Define the utility from purchasing an alternative item as

V_{ij't} = \alpha_{0j} + \alpha_1 T_{jt} R_{ijt} + \alpha_2 C_{jt} R_{ijt} + \alpha_3 T_{jt} (1 - R_{ijt}) + \alpha_4 C_{jt} (1 - R_{ijt}) + \alpha_5 R_{ijt} + \boldsymbol{\alpha}_6' X_{ijt} + \varepsilon_{ij't},   (2)

where subscript j' represents any product in the same category as product j. As before, purchase of
a competitor happens if V_{ij't} > 0. The other terms have a similar interpretation as in Equation 1.

In this case, the DiD estimator for the impact of a discovered negative review received by
product j on the purchase of any of its competitors, by a consumer who has scrolled to the reviews
of j, is

\delta_2 = (\alpha_2 - \alpha_1) - (\alpha_4 - \alpha_3).
Search decisions after viewing product j. In the case of search, we consider two dependent
variables. First, we set as dependent variable the decision to continue browsing for products after
visiting the page of the focal product j. The utility from continuing to browse after viewing the
page of product j is defined as

W_{ijt} = \lambda_{0j} + \lambda_1 T_{jt} R_{ijt} + \lambda_2 C_{jt} R_{ijt} + \lambda_3 T_{jt} (1 - R_{ijt}) + \lambda_4 C_{jt} (1 - R_{ijt}) + \lambda_5 R_{ijt} + \boldsymbol{\lambda}_6' X_{ijt} + v_{ijt},   (3)

where v_{ijt} is an i.i.d. type 1 extreme value random variable. Search continues if, and only if,
W_{ijt} > 0, and the remaining terms in the specification have the same interpretation as previously.
The DiD estimator for the impact of the negative review’s relegation on further search for
competitors by a scrolling consumer is given by

\delta_3 = (\lambda_2 - \lambda_1) - (\lambda_4 - \lambda_3).
Consideration set size after viewing product j. Alternatively, we can measure the impact of a
single review on browsing by its effect on the size of the consideration set. In this case, we
choose as dependent variable the logarithm of the number of product-pages visited after product j
was viewed, denoted as Y_{ijt}, and estimate the specification

\log(Y_{ijt} + 1) = \gamma_0 + \gamma_1 T_{jt} R_{ijt} + \gamma_2 C_{jt} R_{ijt} + \gamma_3 T_{jt} (1 - R_{ijt}) + \gamma_4 C_{jt} (1 - R_{ijt}) + \gamma_5 R_{ijt} + \gamma_{6j} + \boldsymbol{\gamma}_7' X_{ijt} + \upsilon_{ijt},   (4)

where \gamma_{6j} is a product fixed effect. Given that the dependent variable is continuous, we use
ordinary least squares (OLS) to estimate the respective parameters. In this case, \upsilon_{ijt} is
a random variable following the standard normal distribution. The DiD estimator for the impact of
the negative review’s relegation on the consideration set size of a scrolling consumer is defined as

\delta_4 = (\gamma_2 - \gamma_1) - (\gamma_4 - \gamma_3).
Price of the purchased product. Consumers might be willing to pay a premium for a competing
product following the discovery of the focal product’s negative review. To investigate whether this
hypothesis holds in our data, we restrict our attention to the sample of consumers who have made a
purchase at the retailer’s website. Denote the price of the purchased item as Z_{ikt}. We are
interested in how the characteristics of product j, which received the negative review on some
occasions, influence the price paid for product k, whose page was seen after product j’s page. We
estimate the following OLS regression on the log price paid:

\log(Z_{ikt}) = \xi_{0j} + \xi_1 T_{jt} R_{ijt} + \xi_2 C_{jt} R_{ijt} + \xi_3 T_{jt} (1 - R_{ijt}) + \xi_4 C_{jt} (1 - R_{ijt}) + \xi_5 R_{ijt} + \boldsymbol{\xi}_6' X_{ijt} + \omega_{ikt},   (5)

where \omega_{ikt} is a standard normal random variable. The DiD estimator for the impact of the
negative review’s relegation on the logarithm of the final price paid by a consumer who read the
first review-page of j is then equal to

\delta_5 = (\xi_2 - \xi_1) - (\xi_4 - \xi_3).
Estimation
To estimate the parameters for product purchase, we maximize the following log-likelihood
function for \boldsymbol{\beta}:

LL(\boldsymbol{\beta}, \text{Data}) = \frac{1}{ITJ} \sum_{i=1}^{I} \sum_{j=1}^{J} \sum_{t=1}^{T} \left[ a_{ijt} \log \Pr(a_{ijt} = 1 \mid \boldsymbol{\beta}, \text{Data}) + (1 - a_{ijt}) \log \left( 1 - \Pr(a_{ijt} = 1 \mid \boldsymbol{\beta}, \text{Data}) \right) \right],   (6)

where a_{ijt} = 1 if consumer i buys the product, and 0 otherwise. Given the proposed specification,
the probability of product purchase is given by

\Pr(a_{ijt} = 1 \mid \boldsymbol{\beta}, \text{Data}) = \frac{e^{\bar{U}_{ijt}}}{1 + e^{\bar{U}_{ijt}}},   (7)

where \bar{U}_{ijt} is the deterministic part of the utility in Equation 1. We use a similar
approach to obtain the coefficients for the decision to buy any competitor and the decision to
continue search.

For the continuous outcome variables, how many products to search and what price to pay, we
estimate the respective parameters by minimizing squared errors. For example, for the number of
products searched, we minimize the expression for \boldsymbol{\gamma}:

SSE(\boldsymbol{\gamma}, \text{Data}) = \frac{1}{ITJ} \sum_{i=1}^{I} \sum_{j=1}^{J} \sum_{t=1}^{T} \left[ \log(\bar{Y}_{ijt} + 1) - \log(Y_{ijt} + 1) \right]^2,   (8)

where Y_{ijt} is the actual size of the search set seen in the data and \bar{Y}_{ijt} is the
deterministic part of Equation 4.
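A minimal sketch of the two objective functions in Equations 6-8, assuming a design matrix X that already stacks the interaction terms, fixed-effect dummies, and controls; variable names are ours, and the fitted part of Equation 4 is taken directly on the log scale.

```python
import numpy as np

def log_likelihood(beta, X, a):
    """Average Bernoulli log-likelihood of Equations 6-7. X is the stacked
    design matrix of Equation 1; a is the 0/1 purchase indicator."""
    u_bar = X @ beta                         # deterministic utility, Eq. 1
    p = 1.0 / (1.0 + np.exp(-u_bar))         # logit probability, Eq. 7
    return np.mean(a * np.log(p) + (1 - a) * np.log(1 - p))

def sse(gamma, X, y):
    """Average squared error of Equation 8; X @ gamma plays the role of the
    deterministic part of Equation 4, already on the log scale."""
    return np.mean((X @ gamma - np.log(y + 1)) ** 2)
```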
For the estimation of the models above, we take advantage of a publicly available software
package called H2O, a scalable open-source machine learning platform that offers parallel
implementations of various machine learning algorithms, including Generalized Linear Models
(Nykodym et al. 2016).⁶ Given the size of the data and especially the number of covariates (more
than 30,000 product fixed-effects), we found that the approach is efficient because, unlike
traditional methods, it runs maximum likelihood estimation via iteratively re-weighted least squares
(Burrus 2012, Green 1984). There have been multiple approaches to derive standard errors in
iteratively re-weighted least squares (Street, Carroll, and Ruppert 1988). In order to ensure that
our standard errors are cluster-robust at the product-level, we obtain those by bootstrapping
samples of consumers who have searched for the same item.⁷

⁶ For more information visit .
⁷ Although the H2O cluster allows the researcher to include a parameter that penalizes the sum of
the absolute values of the coefficients or their 2-norm, we do not use any penalty in the likelihood
function. That is, we do not shrink any parameter estimates towards zero and/or limit the number of
covariates. Hence, the parameters obtained in Equations 6 and 8 do not change, the standard
econometric specifications are maintained (Friedman, Hastie, and Tibshirani 2001), and all
coefficients are estimated, including each product fixed effect.
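The calls below follow H2O's public Python API, but the data layout, the pre-built interaction columns, and the number of bootstrap draws are our assumptions; the loop mirrors the product-level cluster bootstrap described above and is written for transparency rather than speed.

```python
import h2o
import numpy as np
import pandas as pd
from h2o.estimators.glm import H2OGeneralizedLinearEstimator

h2o.init()
visits = pd.read_csv("visits.csv")          # hypothetical file and columns
predictors = ["treat_scrolled", "ctrl_scrolled", "treat_noscroll",
              "ctrl_noscroll", "scrolled", "product_id"]

def fit_delta1(df):
    frame = h2o.H2OFrame(df)
    frame["product_id"] = frame["product_id"].asfactor()   # fixed effects
    frame["purchase"] = frame["purchase"].asfactor()       # binomial target
    glm = H2OGeneralizedLinearEstimator(family="binomial", lambda_=0)  # no penalty
    glm.train(x=predictors, y="purchase", training_frame=frame)
    b = glm.coef()
    return ((b["ctrl_scrolled"] - b["treat_scrolled"])
            - (b["ctrl_noscroll"] - b["treat_noscroll"]))

delta_hat = fit_delta1(visits)

# Product-level cluster bootstrap: resample whole products with replacement
# and refit on each pseudo-sample.
rng = np.random.default_rng(0)
products = visits["product_id"].unique()
draws = [fit_delta1(pd.concat([visits[visits["product_id"] == p]
                               for p in rng.choice(products, len(products))]))
         for _ in range(200)]
print(delta_hat, np.std(draws))
```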
Impact of a Review Relegation on Purchase and Search Probabilities
The coefficients \delta_1, \delta_2, and \delta_3 reflect changes in log-odds ratios, hence their
exponents indicate respective changes in odds ratios. In order to provide a straightforward
interpretation of the estimated DiD coefficients, we translate those to percentage changes. We
illustrate the procedure for the impact on purchase, reflected in the estimate \hat{\delta}_1, as
the steps to obtain the values for the other coefficients are similar.
We start by computing the probability of focal purchase among members of Group 1. These
are the consumers who scrolled down to the review area and found a negative review at the bottom
of the product-page (i.e. on the first review-page). By denoting a member of Group 1 as consumer
h and substituting estimated values into Equation 1, we obtain

E[U_{hjt} \mid T_{jt} = 1, C_{jt} = 0, R_{hjt} = 1, X_{hjt}, \hat{\boldsymbol{\beta}}] = \hat{\beta}_{0j} + \hat{\beta}_1 + \hat{\beta}_5 + \hat{\boldsymbol{\beta}}_6' X_{hjt}.   (9)

We can express the probability that Group 1 consumer h buys focal product j at time t as

\Pr(a_{hjt} = 1 \mid T_{jt} = 1, C_{jt} = 0, R_{hjt} = 1, X_{hjt}, \hat{\boldsymbol{\beta}}) = \frac{\exp \left( E[U_{hjt} \mid T_{jt} = 1, C_{jt} = 0, R_{hjt} = 1, X_{hjt}, \hat{\boldsymbol{\beta}}] \right)}{1 + \exp \left( E[U_{hjt} \mid T_{jt} = 1, C_{jt} = 0, R_{hjt} = 1, X_{hjt}, \hat{\boldsymbol{\beta}}] \right)},   (10)

following the logit specification.⁸

⁸ DeMaris (1995) offers a useful guide on how to convert logit coefficients to probabilities and
interpret the findings.
Next, we express the probability of purchase among Group 1 consumers under the hypothetical
scenario where the negative review is relegated to the second review-page but everything else is
unchanged. To do so, we derive the latent utility of purchase if consumers had no chance to see the
negative review on the first review-page but only on the second. Under this (unobserved)
counterfactual, where Group 1 consumers are browsing the focal product in its control period, the
purchase utility is

E[U_{hjt} \mid T_{jt} = 0, C_{jt} = 1, R_{hjt} = 1, X_{hjt}, \hat{\boldsymbol{\beta}}] = \hat{\beta}_{0j} + \hat{\beta}_2 + \hat{\beta}_5 + \hat{\boldsymbol{\beta}}_6' X_{hjt} + \Delta,   (11)

where \Delta stands for any unobserved trend differences between treatment and control periods.

We can rewrite Equation 11 as

E[U_{hjt} \mid T_{jt} = 0, C_{jt} = 1, R_{hjt} = 1, X_{hjt}, \hat{\boldsymbol{\beta}}] = \hat{\beta}_{0j} + \hat{\beta}_2 + \hat{\beta}_5 + \hat{\boldsymbol{\beta}}_6' X_{hjt} + \Delta + \hat{\beta}_1 - \hat{\beta}_1 = \hat{\beta}_{0j} + \hat{\beta}_1 + \hat{\beta}_5 + \hat{\boldsymbol{\beta}}_6' X_{hjt} + (\hat{\beta}_2 - \hat{\beta}_1) + \Delta.

By substituting

\Delta = \hat{\beta}_3 - \hat{\beta}_4,

i.e., the difference between the shopping behavior of not-scrolling consumers in treatment and
control periods, as a proxy for unobserved trends, we obtain

E[U_{hjt} \mid T_{jt} = 0, C_{jt} = 1, R_{hjt} = 1, X_{hjt}, \hat{\boldsymbol{\beta}}] = E[U_{hjt} \mid T_{jt} = 1, C_{jt} = 0, R_{hjt} = 1, X_{hjt}, \hat{\boldsymbol{\beta}}] + \hat{\delta}_1.

The probability that consumer h in Group 1 buys focal product j at time t, provided she cannot see
the negative review on the first review-page because of its relegation to the second page, is

\Pr(a_{hjt} = 1 \mid T_{jt} = 0, C_{jt} = 1, R_{hjt} = 1, X_{hjt}, \hat{\boldsymbol{\beta}}) = \frac{\exp \left( E[U_{hjt} \mid T_{jt} = 0, C_{jt} = 1, R_{hjt} = 1, X_{hjt}, \hat{\boldsymbol{\beta}}] \right)}{1 + \exp \left( E[U_{hjt} \mid T_{jt} = 0, C_{jt} = 1, R_{hjt} = 1, X_{hjt}, \hat{\boldsymbol{\beta}}] \right)}.   (12)

We then sort Group 1 consumers by their predicted purchase probabilities according to Equation 10
and denote h_M as the median consumer in this ordering.⁹ Relegating the negative review from the
first page to the second then implies an average change in the probability of purchase equal to

E_1 = \frac{\Pr(a_{h_M jt} = 1 \mid T_{jt} = 0, C_{jt} = 1, R_{h_M jt} = 1, X_{h_M jt}, \hat{\boldsymbol{\beta}})}{\Pr(a_{h_M jt} = 1 \mid T_{jt} = 1, C_{jt} = 0, R_{h_M jt} = 1, X_{h_M jt}, \hat{\boldsymbol{\beta}})} - 1.   (13)

⁹ We estimate the effects using the mean probability instead of the median and obtain similar results.
We take a similar approach to obtain the percentage change in the probability of purchasing a
competitor, which we denote by E_2, and in the probability of continuing search, denoted by E_3.
The percentage changes E_4 and E_5 implied by \hat{\delta}_4 and \hat{\delta}_5 are given simply by
the exponential of the respective coefficient minus one, due to the log specification in the linear
regressions.
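The translation from DiD coefficients to percentage changes can be written compactly. The sketch below applies Equations 9-13 for the logit case and the exp(\delta) - 1 rule for the log-linear cases; the inputs are illustrative, not the paper's estimates.

```python
import numpy as np

def logit_effect_pct(p_treat, delta):
    """Equations 9-13 in miniature: percentage change in the purchase
    probability of the median Group 1 consumer when the review is relegated.
    p_treat is that consumer's fitted probability with the review visible."""
    odds_treat = p_treat / (1 - p_treat)
    odds_ctrl = odds_treat * np.exp(delta)   # shift the log-odds by delta
    p_ctrl = odds_ctrl / (1 + odds_ctrl)     # Equation 12
    return p_ctrl / p_treat - 1              # Equation 13

def loglinear_effect_pct(delta):
    """E_4 and E_5: exp(delta) - 1 for the log-linear models."""
    return np.exp(delta) - 1

# Illustrative inputs:
print(logit_effect_pct(p_treat=0.01, delta=0.35))    # ~ +0.41
print(loglinear_effect_pct(0.034))                   # ~ +0.035
```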
Impact of Posting a Negative Review
With estimates for the impact of review relegation in hand, we approximate the impact of posting
a new negative review. After correcting for consumers who would see the low rating regardless of
its position, we can obtain from the relegation effect an estimate of the impact of deleting the bad
review from the website.
We quantify the impact of deleting the critical review instead of relegating it as follows. Note
that when we calculate the relegation effect E, a non-treated consumer might still see the negative
review on the second review-page (Equation 12), if she decides to paginate. Therefore, to obtain
an estimate for the existence of the negative review, we correct for the paginating consumers who
are still affected by the relegated negative review.
To do so, denote D_{nopaginate} as the effect that the relegation of the negative review has on a
not-paginating consumer and D_{paginate} as the effect that it has on a paginating consumer. We can
write the effect of the relegation, for each consumer decision of interest, as a weighted average of
these two effects:

E_q = \frac{N_{nopaginate} \times D_{nopaginate} + N_{paginate} \times D_{paginate}}{N_{paginate} + N_{nopaginate}}, \quad \text{for } q = 1, 2, 3, 4, 5,   (14)

where N_{paginate} denotes the number of paginating Group 1 consumers and N_{nopaginate} denotes the
number of Group 1 consumers who do not paginate. Notice that for the not-paginating Group 1
consumers, D_{nopaginate} is equivalent to the effect of deleting the negative review. This is
because not-paginating consumers do not see the low rating regardless of whether it is deleted or
moved to the second page. Notice as well that D_{paginate} = 0, as paginating consumers in Group 1
would see the low rating even if that was accessible only on the second review-page. These imply
that we can approximate the effect of deleting the negative review entirely from the website as

D_q = E_q \times \left( \frac{N_{nopaginate}}{N_{paginate} + N_{nopaginate}} \right)^{-1}, \quad \text{for } q = 1, 2, 3, 4, 5.   (15)
By assuming that deleting a rating from the product-page has the opposite impact to posting it,
our estimate for the effect of posting a negative review is simply -D_q.
We note that the fewer the consumers who, having seen the first set of reviews, wish to see
further ones, the better our approximation. In the limit when no consumer paginates, the impact of
the arrival of a new negative review equals the impact of relegating one from the product-page. In
our data only 1 out of 6 scrolling consumers paginates on average. Consequently, we believe that
the proposed estimation approach provides close to unbiased estimates regarding the impact of a
posted negative review.
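A minimal sketch of this correction follows; the pagination share is set to the roughly 20% reported in the Results section, and the 41.1% relegation effect is the paper's purchase estimate.

```python
def posting_effect(E_q, n_paginate, n_nopaginate):
    """Equations 14-15: scale the relegation effect E_q up to the deletion
    effect D_q, then flip the sign to approximate the effect of posting."""
    share_nopaginate = n_nopaginate / (n_paginate + n_nopaginate)
    D_q = E_q / share_nopaginate        # Equation 15
    return -D_q                         # posting = opposite of deleting

# With roughly 20% of scrolling consumers paginating, a +41.1% relegation
# effect on purchase implies about a -51% effect of posting the review.
print(posting_effect(0.411, n_paginate=1, n_nopaginate=4))   # ~ -0.514
```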
RESULTS
Regression Coefficients
In this section we discuss the coefficients of the regression models previously described. The next
section goes into more detail in terms of interpretation, focused on the DiD results. Table 2 shows
the estimated coefficients from Equations 1-5 (we note that by construction the baseline is the
shopping behavior of consumers in Group 5). Looking first at the decision to purchase a product
whose page is viewed at time t, we find that consumers are more likely to buy the item when
they scroll down to the review area. However, this effect is reduced when they observe a negative
review on the product-page – i.e. in treatment periods. The other coefficients have face-validity:
having made more purchases in other categories leads to a higher probability of purchasing the
product in view, while previous search effort spent viewing other products and reading their
reviews leads to a lower purchase probability. Average rating and number of reviews have the expected positive sign
but are insignificant, suggesting that these effects are absorbed by the product fixed effects.
– TABLE 2 ABOUT HERE –
Looking now at the decision to purchase a competitor product, we observe that scrolling down
to the reviews of the focal product does not significantly affect the purchase of a competing product,
regardless of observing a negative review or not. Competitors of products discovered early are less
likely to be bought, as are competitors of products that were reviewed early. Customers who have
previously purchased products in other categories are more likely to buy a rival product on the
website. Average rating and number of reviews of the focal product do not significantly impact the
purchase of competing products. We also observe a robust negative time trend in both purchase
regressions, suggesting that the retailer was becoming generally less popular over the two-month
period.
In terms of consumer search behavior, measured by two dependent variables – the continuing
search decision and the number of products searched after the focal product – we observe that
scrolling down to the review area leads to more search, especially so in the group that sees the
negative review (the coefficient of the interaction of scrolling with treatment is higher than for
the other three groups). Longer previous search in the category, measured either by number of
product views or time spent online, is positively correlated with the decision to continue browsing
for alternatives. Moreover, customers who have previously purchased items in other categories are
more likely to have a smaller consideration set. The negative time trend coefficients imply that
consumers who browse at later times search for fewer products.
Finally, we estimate a model to measure the effect of relegation on the (log) price paid
for a chosen product. For this, we keep only those customers who have made a purchase from the
retailer. This leaves us with 7,774 unique customers who viewed 16,290 items in total, among
which 2,774 views happened either in a treatment or in a control period. The last column of
Table 2 contains the estimates. The results suggest that customers who view a bad review on the
focal product’s page end up paying more for the purchased item, on average. Those who have
reviewed more items in the category also pay more, but those who in the past frequently purchased
in other categories choose cheaper products. Overall, the average purchase price decreased over
our observation period. The coefficients of other variables are not significant.
Difference-in-Differences Estimates
Table 3 lists the estimated coefficients (δ) alongside their standard errors and confidence interval.
In the third column of the table we show the percentage changes due to the relegation of a negative
review (E), which we calculated in a first stage according to Equation 13. The last column
displays our second-stage estimates, where the effect of posting a negative review (D) is calculated
by applying Equation 15. First- and second-stage estimates differ because around 20% of consumers,
those who paginate, would still see the low rating on the second review-page after its relegation.
Relegating the negative review from the first review-page to the second significantly (at the five
percent level, using a one-sided test) increases the probability that a consumer who sees the first set
of reviews (but not necessarily the second) purchases the product by 41.1%. This means that if
a negative review was posted for a product of which the latest five reviews were good (hence its
product-page does not include any bad reviews), it would imply a decline in purchase probability
of 51.4% among consumers who see the first five reviews (but not necessarily more). Regarding
the impact on competitor purchase likelihood, relegating or posting a negative rating has no impact
in our data. The latter result suggests that although some consumers might be willing to substitute
the product that has a negative review with other items, a large fraction of them decide not to buy
anything from the retailer once a bad review is encountered.
– TABLE 3 ABOUT HERE –
Regarding search decisions, consumers are, on average, 9.1% less likely to continue their
search for other products if they see no negative review on the product-page due to its relega-
tion. This implies that posting a negative review increases the number of consumers who search
for alternative items after reading the first set of reviews of the focal product by 11.4%. We also
find that a low rating increases the number of additional products sampled by 3.5%. The latter
translates into a 1.2% expected increase in consideration set size in our data, since the majority of
consumers browse for one product only. Overall, both search-related dependent variables indicate
a significant increase in browsing effort when consumers view a low rating while shopping online.
Finally, we find a significant willingness to pay a price premium above the price of
the product with the negative review. We estimate that, among those consumers who purchased a
product, viewing a single negative review increases the final price paid for the chosen product by
15.8% on average. Note that the average price of purchased products can change even if a negative
review has no effect on the purchase rate of substitutes. This is because the negative review affects
the pool of customers subject to this last analysis. Consumers who would have purchased the focal
product without the negative review, but do not purchase it once the review is visible, are excluded
from this last model provided they make no purchase at the website. This type of consumer is deterred
from purchasing by the discovery of the low rating and is, presumably, price-sensitive enough
that more expensive alternatives are not considered viable substitutes for the focal product. With
fewer customers of this type in the analysis pool, the average price paid may increase, as confirmed
by the data.10
Robustness Checks
To investigate the robustness of our findings, we rerun the analysis using alternative treatment
definitions. Recall that in the main specification the negative review had a rating of no more
than 3-stars and was relegated from position 5 on the first page to the second page. In the first
alternative specification, periods in which the negative review is in the fourth position are also
considered treatment periods, while control periods are unchanged. In the second alternative
specification, we restrict the control period to last only as long as the negative review is in the first
position on the second review-page. In the third alternative specification, we define a negative
review as one that has a rating no higher than 2-stars. In the fourth alternative, the rating is no
higher than 1-star. In
10 We also estimate the price effect among consumers who purchased a competing product – i.e., we exclude those
who purchased the focal one. We find that, in this second sample of customers, viewing a negative review about the
focal product significantly increased the final price paid for a rival product by 23.2%.
the fifth alternative, we let the negative review occupy any position on the first review-page, but
use only observations recorded after the negative review has been on the website for three days. If
demand or supply shocks are temporary and dissipate fast enough, this last specification is
valid despite the fact that the start of the treatment period is closer to the arrival of the negative
review than in the base model. This specification may also be useful as a contrast with approaches
that compare consumers’ decisions before and after the arrival of the review.
For parsimony, in Table 4 we present only the findings as percentage changes due to posting a
negative review. We relegate the estimated δ coefficients, standard errors, and confidence intervals
to the Web Appendix. The estimated effect for the focal purchase decision is negative in all
five cases, and significant (at five percent, using one-sided tests) in two alternative specifications.
Regarding the estimates of the competitor purchase decision in the five alternatives, we see that
the signs of the effects are mixed and none of them is significant. We conclude that bad reviews
affect the purchase likelihood of items that receive them but do not affect the purchase probability
of competing products.
In terms of search behavior, we see that the estimate for the continued-search decision is significantly
positive in all alternatives. In addition, the estimated changes are substantively similar across
specifications, suggesting that the magnitude of the effect is stable across treatment definitions.
Estimates of the impact of a negative review on the number of further products searched are all
significantly positive as well. The estimation of search decisions benefits from the fact that the
number of effective observations is much larger here than when we consider purchases,
hence the consistency in the coefficients. Overall, the robustness tests confirm that a single negative
review increases both the propensity to continue searching and the size of the consumer’s
consideration set.
Finally, the robustness checks support the hypothesis of an increased price paid: in each of the
alternative specifications the estimate is positive, and in the first four alternatives it is also sig-
nificant. We conclude that consumers are willing to pay a considerable premium for a competing
product after discovering a negative review, provided they find a suitable alternative on the retailer’s
website.
– TABLE 4 ABOUT HERE –
MANAGERIAL IMPLICATIONS
This section describes the variability of responsiveness to negative reviews across products. To do
so, we use our estimates to approximate the causal drop in sales due to the posting of a low rating,
among all consumers, including those who do not read reviews. In other words, we compute the
negative reviews elasticity of sales for each product - an index analogous to price elasticity. We
also calculate the causal impact of a posted negative review on the number of visitors searching for
more alternatives. We then show how one might visualize the vulnerability of different products
based on these elasticity measures. Finally, we discuss the managerial implications, including the
value of fake reviews.
Negative Review Elasticities
We estimate the impact of a posted negative review on sales as follows. We take the set of prod-
uct visits where the consumer, at any time in our observation window, browsed for (but did not
necessarily read the reviews of) a product that had no negative review at the bottom of its page.
We predict the purchase probability for all these visits using Equation 1 to obtain estimates for the
scenario in which there is no negative review on the product-page. Next, to obtain estimates for the
scenario in which there is a recently posted negative review on the product-page, we multiply this
purchase probability by 1 + D1 for those consumers who also scrolled down to the reviews. For those
who do not scroll to the reviews, the purchase probability is the same in the two scenarios.11 For each
product, we sum the estimated probabilities across all visitors (i.e., those who read the reviews and
11 We estimate the elasticities through a second method as well, in which we also recalculate the average rating and
the number of reviews after a new negative review is submitted. With this second method, these latter probabilities are
not exactly the same, because average rating and number of reviews appear in Equation 1. However, the elasticities we
derive are almost identical to the ones we obtain with the main approach.
also those who do not) under the two scenarios. The impact of a single, newly posted negative
review on the sales of a given product is then simply the ratio of predicted sales with and without
the negative review, minus one. We call this effect the negative review elasticity of sales.
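To make the computation concrete, the procedure can be sketched in a few lines of code. This is an illustrative sketch only: the visits table and its column names are hypothetical, p_hat stands in for the purchase probability predicted by the choice model of Equation 1, and D1 = -0.514 is the estimated effect of a seen negative review on purchase probability (Table 3).

import pandas as pd

D1 = -0.514  # estimated effect of a seen negative review on purchase probability

def negative_review_elasticity(visits: pd.DataFrame) -> float:
    """Sketch: negative review elasticity of sales for one product.

    `visits` holds one row per product visit, with a 0/1 column
    'scrolled' and a column 'p_hat' containing the purchase probability
    predicted by the choice model (Equation 1) under the scenario
    with no negative review on the product-page.
    """
    # Scenario 1: no negative review on the product-page.
    sales_without = visits["p_hat"].sum()

    # Scenario 2: a newly posted negative review. Only consumers who
    # scroll down to the review area are affected; for everyone else
    # the predicted purchase probability is unchanged.
    sales_with = (visits["p_hat"] * (1 + D1 * visits["scrolled"])).sum()

    # Ratio of predicted sales with and without the review, minus one.
    return sales_with / sales_without - 1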
Figure 4 displays these negative review elasticities for the most visited home-&-garden cat-
egories. The box-plots reveal that sales are expected to drop by 5%–50%, with substantial dis-
persion across products. We observe, for example, that in the mattresses category product sales
are very sensitive to negative reviews (median elasticity of -31%), while curtains are affected
considerably less (median elasticity of -5%). If we look at the most popular technology categories
(Figure 5), we can see, for instance, that demand for printer ink tends to be less elastic to negative
reviews than demand for the printers themselves.
– FIGURE 4 ABOUT HERE –
– FIGURE 5 ABOUT HERE –
We also estimate the impact of a posted negative review on the probability that the consumer
will browse for further alternatives after viewing a given product, which we call the negative review
elasticity of search. The estimation process in this case is analogous to the one used to derive the
negative review elasticity of sales, as described above. The only differences are that we use Equation
3 to obtain estimates for the scenario in which there is no negative review on the product-page and
that we multiply the search continuation probability by 1 + D3, the corresponding DiD estimate.
Vulnerability Map
Managers can gain insight into how the market responds to negative reviews by visualizing these
reactions on a two-dimensional scatter plot that we call a vulnerability map. It displays the nega-
tive review elasticity of sales on the horizontal axis and the negative review elasticity of search on the
vertical axis. Figure 6 shows the vulnerability map for all products that have no low rating among
their first five reviews. We see that if a product scores high, in magnitude, on the sales elasticity
index, it also tends to score high on the search elasticity index. However, there are also products
with high sales elasticity and low search elasticity, and vice versa, indicating that not all products
are affected equally by a low rating. The average sales elasticity to a negative review is -18.0%, while
the average search elasticity to a negative review is 4.0%.
– FIGURE 6 ABOUT HERE –
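Producing such a map is straightforward once both elasticities are available per product. The sketch below assumes a hypothetical products table with columns sales_elasticity and search_elasticity (in percent); it is an illustration, not the code behind Figure 6.

import matplotlib.pyplot as plt
import pandas as pd

def plot_vulnerability_map(products: pd.DataFrame) -> None:
    """Scatter each product by its two negative review elasticities."""
    fig, ax = plt.subplots()
    ax.scatter(products["sales_elasticity"], products["search_elasticity"])
    ax.set_xlabel("Sales elasticity to negative review (%)")
    ax.set_ylabel("Search elasticity to negative review (%)")
    ax.set_title("Vulnerability map")
    fig.savefig("vulnerability_map.png", dpi=200)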
As an example, Figure 7 shows a vulnerability map for selected vacuum cleaner brands. Each
symbol on the chart represents a different product in the category. Capital letters denote the brand
(made anonymous). Brands A and B offer expensive, premium products, Brand C is the private
label of the retailer, and Brand D is a moderately priced national brand. The size of each icon on
the map corresponds to the fraction of consumers who decided to read reviews about the product
during the two months in our data period. We can see that products with higher scroll rates respond
more strongly to negative reviews than products with lower scroll rates. The reason is
that consumers are not affected by a posted low rating if they do not see it. Because more consumers
read the reviews of Brand A than of any other brand, on average, its products
are located in the top-left corner of the map, indicating high elasticity to negative reviews.
To see whether product price influences elasticity, we regress sales elasticity on scroll rate and
log price in the vacuum cleaner category. We obtain a significantly negative scroll rate coefficient
(-0.519, t=17.07) but an insignificant price coefficient (0.456, t=0.65). This suggests that there
is no systematic difference between the elasticities of premium and budget products; what matters
is how much consumers are willing to scroll to the reviews of an item they consider. We arrive at
a similar conclusion when we regress search elasticity on scroll rate and log price for the vacuum
cleaners: the coefficient of scroll rate is significantly positive (0.110, t=19.37) but the price
coefficient is insignificant (-0.196, t=1.48). We speculate that the reviews of Brand A may be
viewed more frequently because, while Brands B and D specialize in the production of domestic
appliances, Brand A is a larger multinational company with a wide product range across multiple
industries.
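The regression itself is a simple product-level OLS. A minimal sketch, assuming a hypothetical per-product table vacuums with columns sales_elasticity, scroll_rate, and price (this is not the authors’ estimation code):

import numpy as np
import pandas as pd
import statsmodels.api as sm

def elasticity_regression(vacuums: pd.DataFrame):
    """Regress sales elasticity on scroll rate and log price for one category."""
    X = sm.add_constant(pd.DataFrame({
        "scroll_rate": vacuums["scroll_rate"],
        "log_price": np.log(vacuums["price"]),
    }))
    return sm.OLS(vacuums["sales_elasticity"], X).fit()

The fitted object’s params and tvalues attributes hold the coefficients and t-statistics of the kind reported above (and in Table WA.7).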
– FIGURE 7 ABOUT HERE –
Finally, we note that we have derived the impact of a negative review on sales and on search for
substitutes, but have not yet stated how long these effects would hold after the publication of a
new bad review. Although the retailer does not delete the negative review and it remains
available on the website, the effects we calculate hold only during the days when the review is
accessible on the product-page. In Table 5, we show the median number of days a negative review
(3-star or less) stayed on the first review-page over the years 2008–2015 in the most popular product
categories – information obtained from additional data provided by our retailer. These numbers
reveal that in some categories the effects would likely last for months (e.g., 166 days for curtains),
while in other categories for a few weeks only (e.g., 14 days for printers). As a rule of thumb,
managers of categories in which a negative review stays longer on the product-page benefit the
most from devoting increased effort to monitoring online reviews about their brand.
– TABLE 5 ABOUT HERE –
Fake Reviews
Lastly, we offer a few words about fake reviews. According to the New York Post (2017), the price of one
positive fake review on Amazon is about $5 (~£4). At that price, a product manager would need to
buy at least five fake reviews to relegate a recent (genuine) negative review to the second page, where it
is hidden from the eyes of most visitors. That would cost £20, disregarding any indirect costs such
as legal or reputational consequences. We perform a back-of-the-envelope calculation to gauge
whether this could ever be considered a good investment.
A negative review stays on the product-page for about 80 days. Once relegated, its effect fades,
as most people do not look at reviews accessible anywhere other than the product-page. Consequently,
unless the expected savings from customers who, as a result, do not see the low rating
exceed £20 over those 80 days, the purchase of fake reviews is not financially justified.
To exceed the £20 saving threshold, the product needs to generate at least £20 × 1/0.2 = £100 in
profit over the 80 days if we calculate with a sales elasticity of -20%. We can predict the 80-day
sales of each product in our data using Equation 1, and we see that 7.7% of the products (18.7%
of products with at least 5 reviews) generate more than £100 in revenue. We define gross profit as
the value of product sales minus manufacturing cost (purchasing fake reviews is not a manufacturing
cost). If we assume a marginal cost of manufacturing of zero, the fraction of products that can success-
fully recover the £20 investment in five fake reviews is 7.7%. With a more realistic marginal cost of
manufacturing to price ratio of 50%, we calculate that only 4.6% of the products (10.9% of prod-
ucts with at least 5 reviews) yield a larger increase in gross profit than the cost of the investment
in fake reviews. This suggests that only managers of the best-selling items would find the purchase
of fake reviews a good investment – provided they ignore the risks involved. However, the
indirect costs of fake reviews, such as damage to brand equity, are likely also highest for
these brands. Therefore, acquiring fake reviews is not only unethical but also, in general, a poor
investment.
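The break-even logic can be sketched as follows. The constants simply restate the numbers above, and predicted_revenue_80d is a hypothetical per-product figure of the kind derived from Equation 1; the sketch assumes, as in the text, that the review stays visible for about 80 days.

FAKE_REVIEW_COST = 20.0   # GBP: five fake reviews at roughly GBP 4 each
SALES_ELASTICITY = -0.20  # assumed sales drop from a visible negative review
GROSS_MARGIN = 0.50       # 1 - (marginal manufacturing cost / price)

def fake_reviews_pay_off(predicted_revenue_80d: float) -> bool:
    """Do five fake reviews pay for themselves over the ~80 days?

    Burying the negative review avoids losing |SALES_ELASTICITY| of the
    product's 80-day gross profit, so the avoided loss must exceed the
    GBP 20 cost of the fake reviews.
    """
    avoided_loss = abs(SALES_ELASTICITY) * GROSS_MARGIN * predicted_revenue_80d
    return avoided_loss > FAKE_REVIEW_COST

With GROSS_MARGIN = 1 (zero marginal cost), this reproduces the £100 revenue threshold; with a 50% margin, the threshold doubles to £200.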
CONCLUSION
In this paper we quantify the impact of a single negative review on purchase and search deci-
sions, and we empirically illustrate our approach in technology and home-&-garden categories using a
quasi-natural experiment that arises from the “newest first” display policy – a common practice among
retailers. We compare consumers who searched for a product when the
negative review was among the first reviews shown with consumers who searched for the same
product when the same review was accessible only through pagination. By looking at where the
review is displayed instead of when it was initially posted, our estimates do not suffer from
spurious correlations between review valence and unobserved factors that may account for low
ratings.
In our modeling approach, we first estimate the effect of relegating a low rating from the
product-page to a page entirely dedicated to customer reviews. This relegation is the result of
the arrival of additional customer reviews that are shown above the earlier ones. We use this rele-
gation event to approximate the impact of a newly posted negative review on search and purchase
outcomes. Our results indicate that a single negative review, if seen, decreases purchase probability
by 51.4% and increases the probability of further search for substitutes by 11.4%. Furthermore,
we find that those who still purchase an item after viewing the negative review pay 15.8% more
than the price of the original item.
Building on these estimates, we derive negative review elasticity values – a measure analogous
to the well-known price elasticity, but one that indicates a product’s vulnerability to consumer opin-
ion rather than to price changes. The majority of the obtained sales elasticity indices fall between -5%
and -50%, with an average of -18.0%, suggesting that even a single negative review can have
a detrimental effect on product demand. This can explain why online sellers reportedly make in-
creasing efforts to avoid them. We use infographics to visualize the sensitivity of different brands
to customer reviews in terms of search and choice. Our methodology may be of particular interest
to product managers who seek insight into the vulnerability of their products to negative online
reviews.
The approach for approximating the effect of a newly posted low rating has its limitations. We
implicitly assume that a low rating at position five on the first review-page has the same effect
as if it were displayed in the first position. Unfortunately, we cannot test this without running into the
simultaneity problem between review creation and unobserved demand and/or supply shocks. If
there is indeed a position effect, in the sense that reviews displayed closer to the bottom of the
product-page have less influence on consumer behavior than those visible just above them, our
results can be considered a lower bound for the effect of a recently created negative review.
Regarding future research, it would be relevant to model how a different platform design –
for example, one in which there is no need to scroll down to see the first reviews – would change what
consumers decide to browse for and eventually purchase. Another potentially fruitful direction
would be to study whether and how low ratings influence the arrival frequency of new reviews
through decreased purchase rates. Overall, the literature would benefit from research that provides
a better understanding of why and under what circumstances consumers decide to read reviews.
References
Ahluwalia, Rohini, Robert E. Burnkrant, and H. Rao Unnava (2000). “Consumer response to nega-
tive publicity: The moderating role of commitment”. In: Journal of Marketing Research 37(2),
pp. 203–214.
Allenby, Greg M and James L Ginter (1995). “The effects of in-store displays and feature adver-
tising on consideration sets”. In: International Journal of Research in Marketing 12.1, pp. 67–
80.
Anderson, Michael and Jeremy Magruder (2012). “Learning from the crowd: Regression disconti-
nuity estimates of the effects of an online review database”. In: The Economic Journal 122.563,
pp. 957–989.
Astorne-Figari, Carmen, José Joaquín López, and Aleksandr Yankelevich (2018). “Advertising for
consideration”. In: Journal of Economic Behavior & Organization.
Athey, Susan and Guido W Imbens (2017). “The state of applied econometrics: Causality and
policy evaluation”. In: Journal of Economic Perspectives 31.2, pp. 3–32.
Babić Rosario, Ana et al. (2016). “The effect of electronic word of mouth on sales: A meta-
analytic review of platform, product, and metric factors”. In: Journal of Marketing Research
53.3, pp. 297–318.
Bagwell, Kyle and Michael H. Riordan (1991). “High and Declining Prices Signal Product Qual-
ity”. In: The American Economic Review 81.1, pp. 224–239. ISSN: 00028282.
Bizer, George, Jeff T. Larsen, and Richard E. Petty (2010). “Exploring the valence-framing effect:
Negative framing enhances attitude strength”. In: Political Psychology 32(1), pp. 59–80.
Blundell, Richard and Monica Costa Dias (2009). “Alternative approaches to evaluation in empir-
ical microeconomics”. In: Journal of Human Resources 44.3, pp. 565–640.
BrightLocal (2017). Local Consumer Review Survey 2017 (visited on 06/26/2018).
Brown, Tom J et al. (2005). “Spreading the word: Investigating antecedents of consumers’ positive
word-of-mouth intentions and behaviors in a retailing context”. In: Journal of the Academy of
Marketing Science 33.2, pp. 123–138.
Burrus, C Sidney (2012). “Iterative reweighted least squares”. In: OpenStax CNX. Available online:
http://cnx.org/contents/92b90377-2b34-49e4-b26f-7fe572db78a1 12.
Chen, Yubo and Jinhong Xie (2008). “Online consumer review: Word-of-mouth as a new element
of marketing communication mix”. In: Management Science 54(3), pp. 477–491.
Cheong, Hyuk Jun and Margaret A Morrison (2008). “Consumers’ reliance on product information
and recommendations found in UGC”. In: Journal of Interactive Advertising 8.2, pp. 38–49.
Chevalier, Judith A. and Dina Mayzlin (2006). “The effects of word of mouth on sales: Online
book reviews”. In: Journal of Marketing Research 43(3), pp. 345–354.
Chintagunta, Pradeep K., Shyam Gopinath, and Sriram Venkataraman (2010). “The effects of on-
line user reviews on movie box office performance: Accounting for sequential rollout and ag-
gregation across local markets”. In: Marketing Science 29(5), pp. 944–957.
De los Santos, Babur, Ali Hortaçsu, and Matthijs R. Wildenbeest (2012). “Testing models of con-
sumer search using data on web browsing and purchasing behavior”. In: The American Eco-
nomic Review 102(6), pp. 2955–2980.
DeMaris, Alfred (1995). “A Tutorial in Logistic Regression”. In: Journal of Marriage and Family
57.4, pp. 956–968. ISSN: 00222445, 17413737.
Demuynck, Thomas and Christian Seel (2018). “Revealed preference with limited consideration”.
In: American Economic Journal: Microeconomics 10.1, pp. 102–31.
Eliaz, Kfir and Ran Spiegler (2011). “Consideration sets and competitive marketing”. In: The Re-
view of Economic Studies 78.1, pp. 235–262.
Erdem, Tülin and Joffre Swait (1998). “Brand equity as a signaling phenomenon”. In: Journal of
consumer Psychology 7.2, pp. 131–157.
Floyd, Kristopher et al. (2014). “How online product reviews affect retail sales: A meta-analysis”.
In: Journal of Retailing 90.2, pp. 217–232.
Forbes (2017). Amazon’s Fake Review Problem Is Now Worse Than Ever, Study Suggests (visited
on 02/28/2018).
Friedman, Jerome, Trevor Hastie, and Robert Tibshirani (2001). The elements of statistical learn-
ing. Vol. 1. Springer series in statistics New York.
Green, Peter J (1984). “Iteratively reweighted least squares for maximum likelihood estimation,
and some robust and resistant alternatives”. In: Journal of the Royal Statistical Society: Series
B (Methodological) 46.2, pp. 149–170.
Hedges, Larry V and Ingram Olkin (2014). Statistical methods for meta-analysis. Academic press.
Hennig-Thurau, Thorsten, Caroline Wiertz, and Fabian Feldhaus (2015). “Does Twitter matter?
The impact of microblogging word of mouth on consumers’ adoption of new movies”. In:
Journal of the Academy of Marketing Science 43.3, pp. 375–394.
Honka, Elisabeth (2014). “Quantifying search and switching costs in the US auto insurance indus-
try”. In: The RAND Journal of Economics 45.4, pp. 847–884.
Hu, Nan, Jie Zhang, and Paul A Pavlou (2009). “Overcoming the J-shaped distribution of product
reviews”. In: Communications of the ACM 52.10, pp. 144–147.
Hung, Kineta H and Stella Yiyan Li (2007). “The influence of eWOM on virtual consumer commu-
nities: Social capital, consumer learning, and behavioral outcomes”. In: Journal of advertising
research 47.4, pp. 485–495.
Ito, Tiffany A. et al. (1998). “Negative information weighs more heavily on the brain: The negativ-
ity bias in evaluative categorizations”. In: Journal of Personality and Social Psychology 75(4),
pp. 887–900.
Ivanova, Olga, Michael Scholz, and Verena Dorner (2013). “Does Amazon scare off customers?
The effect of negative spotlight reviews on purchase intention”. In: Wirtschaftsinformatik Pro-
ceedings.
Kahneman, Daniel and Amos Tversky (1979). “Prospect theory: An analysis of decision under
risk”. In: Econometrica 47(2), pp. 263–292.
Kim, Jun B., Paulo Albuquerque, and Bart J. Bronnenberg (2010). “Online demand under limited
consumer search”. In: Marketing Science 29(6), pp. 1001–1023.
(2016). “The probit choice model under sequential search with an application to online retail-
ing”. In: Management Science.
Kozinets, Robert V. et al. (2010). “Networked narratives: Understanding word-of-mouth marketing
in online communities”. In: Journal of Marketing 74(2), pp. 71–89.
Land, Search Engine (n.d.). Findings from BrightLocal 2015 Local Consumer Review Survey.
Lee, Jumin, Do-Hyung Park, and Ingoo Han (2008). “The effect of negative online consumer re-
views in product attitude: An information processing view”. In: Electronic Commerce Research
and Applications 7(3), pp. 341–352.
Liu, Xiao, Dokyun Lee, and Kannan Srinivasan (2017). “Large Scale Cross Category Analysis of
Consumer Review Content on Sales Conversion Leveraging Deep Learning”. In: NET Institute
Working Paper No. 16-09.
Liu, Yong (2006). “Word of mouth for movies: Its dynamics and impact on box office revenue”.
In: Journal of marketing 70.3, pp. 74–89.
Manzini, Paola and Marco Mariotti (2014). “Stochastic choice and consideration sets”. In: Econo-
metrica 82.3, pp. 1153–1176.
(2018). “Competing for attention: is the showiest also the best?” In: The Economic Journal
128.609, pp. 827–844.
McConnell, J. Douglas (1968). “The Price-Quality Relationship in an Experimental Setting”. In:
Journal of Marketing Research 5.3, pp. 300–303.
Mittal, Vikas, T. Ross William, and M. Baldasare Patrick (1998). “The Asymmetric Impact of
Negative and Positive Attribute-level Performance on Overall Satisfaction and Repurchase In-
tentions”. In: Journal of Marketing 62(1), pp. 33–47.
Monga, Alokparna Basu and Deborah Roedder John (2008). “When does negative brand publicity
hurt? The moderating influence of analytic versus holistic thinking”. In: Journal of Consumer
Psychology 18(4), pp. 320–332.
Montgomery, Cynthia A and Birger Wernerfelt (1992). “Risk reduction and umbrella branding”.
In: Journal of Business, pp. 31–50.
Mudambi, Susan M. and David Schuff (2010). “What makes a helpful online review? A study of
customer reviews on Amazon.com”. In: MIS Quarterly 34(1), pp. 185–200.
NPR (2018). A Series of Mysterious Packages (visited on 08/09/2018).
Nykodym, Tomas et al. (2016). “Generalized Linear Modeling with H2O”. In: Published by H2O.
ai, Inc.
Post, New York (2017). Scammers elude Amazon crackdown on fake reviews with new tricks
(visited on 02/28/2018).
Post, Washington (2012). Judge says homeowner must delete some accusations on Yelp, Angie’s
List (visited on 02/26/2018).
Reinstein, David A and Christopher M Snyder (2005). “The influence of expert reviews on con-
sumer demand for experience goods: A case study of movie critics”. In: The journal of indus-
trial economics 53.1, pp. 27–51.
Schoenmueller, Verena, Oded Netzer, and Florian Stahl (2018). “The Extreme Distribution of On-
line Reviews: Prevalence, Drivers and Implications”.
Seiler, Stephan (2013). “The impact of search costs on consumer behavior: A dynamic approach”.
In: Quantitative Marketing and Economics 11.2, pp. 155–203.
Siegrist, Michael and George Cvetkovich (2001). “Better negative than positive? Evidence of a
bias for negative information about possible health dangers”. In: Risk Analysis 21(1), pp. 199–
206.
Soroka, Stuart N. (2006). “Good news and bad news: Asymmetric response to economic informa-
tion”. In: The Journal of Politics 68(2), pp. 372–385.
Street, James O, Raymond J Carroll, and David Ruppert (1988). “A note on computing robust
regression estimates via iteratively reweighted least squares”. In: The American Statistician
42.2, pp. 152–154.
Train, Kenneth E (2009). Discrete choice methods with simulation. Cambridge university press.
Van Nierop, Erjen et al. (2010). “Retrieving unobserved consideration sets from household panel
data”. In: Journal of Marketing Research 47.1, pp. 63–74.
Wooldridge, Jeffrey M (2010). Econometric analysis of cross section and panel data. MIT press.
Wu, Chunhua et al. (2015). “The Economic Value of Online Reviews”. In: Marketing Science 34.5,
pp. 739–754.
You, Ya, Gautham G Vadakkepatt, and Amit M Joshi (2015). “A meta-analysis of electronic word-
of-mouth elasticity”. In: Journal of Marketing 79.2, pp. 19–39.
Zhu, Feng and Xiaoquan Zhang (2006). “The influence of online consumer reviews on the demand
for experience goods: The case of video games”. In: ICIS 2006 Proceedings, p. 25.
(2010). “Impact of online consumer reviews on sales: The moderating role of product and
consumer characteristics”. In: Journal of marketing 74.2, pp. 133–148.
TABLES
Table 1: DESCRIPTIVE STATISTICS OF SHOPPING BEHAVIOR

                                Scroll   Negative Review    No Negative Review   Difference   T-Statistic   Double Difference
                                         on Product-Page    on Product-Page
Buys Focal Product              Yes      .027               .027                 .000         .067          -.008
                                No       .032               .024                 .008         4.482
Buys Substitute                 Yes      .026               .020                 .006         1.890         .002
                                No       .021               .017                 .004         2.308
Search Continuation             Yes      .473               .442                 .031         3.321         .042
                                No       .385               .396                 -.011        2.039
# Further Substitutes Searched  Yes      1.086              1.080                .006         .187          .042
                                No       .908               .944                 -.036        1.778
Price Paid                      Yes      49.784             63.220               -13.435      2.582         2.772
                                No       50.896             61.559               -10.663      3.162
Table 2: REGRESSION ESTIMATES
Buys Focal Product   Buys Substitute   Search Continuation   Log Further Substitutes Searched   Log Price Paid
Intercept -3.082*** (.828) -5.061*** (.322) -.438* (.256) .413*** (.072) 2.120*** (.139)
Scroll .194*** (.033) -.026 (.024) .255*** (.012) .069*** (.003) -.005 (.010)
Treatment x Scroll -.225** (.096) .007 (.087) -.109*** (.037) -.055*** (.010) .064** (.030)
Control x Scroll -.183*** (.063) .061 (.060) -.166*** (.029) -.048*** (.008) -.016 (.023)
Treatment x No Scroll .192*** (.062) -.036 (.053) -.264*** (.027) -.077*** (.007) -.063*** (.023)
Control x No Scroll -.117** (.056) .021 (.048) -.144*** (.021) -.043*** (.006) -.009 (.018)
Indicator “Has Reviews” -.079 (.084) .049 (.067) -.021 (.039) -.003 (.011) -.040 (.042)
Average Rating .038 (.062) -.025 (.049) -.017 (.024) -.005 (.007) -.045 (.028)
Log Reviews .021 (.044) -.009 (.036) .008 (.017) .001 (.006) .021 (.018)
Log Products Searched^a   -.226*** (.026)   .429*** (.042)   .473*** (.006)   .186*** (.002)   .002 (.008)
Log Products Searched^b   -.115*** (.012)   -.069*** (.010)   .062*** (.004)   .026*** (.001)   -.006 (.006)
Log Reviewed Products^a   -.165*** (.031)   .413*** (.041)   -.003 (.012)   .001 (.004)   .026** (.013)
Log Reviewed Products^b   -.328*** (.051)   -.045 (.036)   .032* (.018)   -.002 (.005)   .030 (.023)
Log Search Sessions 1.386*** (.106) 1.400*** (.110) -.384*** (.058) -.085*** (.015) .034 (.022)
Log Duration^b   .000 (.002)   .005** (.002)   .008*** (.001)   .002*** (.000)   -.001 (.001)
Purchase Rate^b   3.819*** (.089)   3.130*** (.096)   -.102*** (.036)   -.061*** (.009)   -.051*** (.014)
Linear trend -.005*** (.000) -.010*** (.001) -.003*** (.000) -.002*** (.000) -.001*** (.000)
Product Fixed Effects Yes Yes Yes Yes Yes
R2.152 .131 .181 .224 .943
AUC^c   .908   .904   .741   -   -
# Observations 410,628 410,628 410,628 410,628 16,290
***p<.01 **p<.05 *p<.1
a: focal category
b: other categories
c: Area Under the ROC Curve. 1 indicates a perfect classifier, .5 indicates a classifier with performance no better than random guessing.
Note: Bootstrapped standard error, clustered at product level in parentheses.
Table 3: ESTIMATED DID COEFFICIENTS

                                Coefficient (δ)                  Effect of relegating     Effect of posting
                                                                 a negative review (E)    a negative review (D)
Buys Focal Product              .351*** (.137) [.032, .541]      41.1%                    -51.4%
Buys Substitute                 -.004 (.098) [-.200, .183]       -.4%                     .5%
Search Continuation             -.176*** (.044) [-.257, -.095]   -9.1%                    11.4%
# Further Substitutes Searched  -.028*** (.011) [-.047, -.006]   -2.8%                    3.5%
Price Paid                      -.135*** (.036) [-.215, -.072]   -12.6%                   15.8%
***p<.01 **p<.05 *p<.1
Note: p is the fraction of bootstrapped coefficients that do not have the hypothesized sign. Standard errors, clustered
at the product level, in parentheses; 95% confidence intervals in brackets. Effects significant at 5% (one-sided) are in bold.
Table 4: EFFECT OF POSTING A NEGATIVE REVIEW USING ALTERNATIVE DEFINITIONS
Definition
I II III IV V
Buys Focal Product -66.7% -37.5% -12.5% -49.1% -23.4%
Buys Substitute 9.6% 11.8% 22.9% -33.3% 14.4%
Search Continuation 9.4% 11.2% 10.3% 11.0% 10.2%
# Further Substitutes Searched 3.8% 5.3% 3.3% 3.5% 5.0%
Price Paid 8.8% 24.0% 12.4% 11.5% 5.8%
# Focal Products 1,988 1,862 1,428 1,136 1,459
Note: Effects for which the fraction of bootstrapped coefficients not having the hypothesized sign is less than 5% are
in bold.
Table 5: MEDIAN DURATION OF A NEGATIVE REVIEW ON PRODUCT-PAGE
Home-&-Garden Technology
Category Days Category Days
Curtains 166 Printer ink 56
Duvet cover sets 117 Headphones and earphones 42
Fridge freezers 111 Laptops and PCs 35
Bed frames 104 Telephones 32
Wardrobes 55 Pay as you go phones 25
Mattresses 49 SIM free phones 20
Microwaves 15 Laptops and netbooks 14
Washing machines 15 Televisions 14
Kettles 13 Printers 14
Vacuum cleaners and accessories 9 Tablets 10
FIGURES
Figure 1: THE PRODUCT-PAGE, THE FIRST REVIEW AREA, AND ADDITIONAL REVIEW-PAGES ACCESSIBLE THROUGH PAGINATION
[Figure: schematic of a product-page. Scrolling down the product description reaches the first review area; clicking “Next review-page” (pagination) reaches further review-pages.]
Figure 2: PRICE OF CONSIDERED PRODUCTS BY SEARCH ORDER
[Figure: chart of price (GBP, 80–120) against search order (1–10).]
Figure 3: EXAMPLE OF CONDITIONS THAT DEFINE TREATMENT AND CONTROL PERIODS
[Figure: timeline over the observation window (01/02/2015–31/03/2015) showing how the rating of the new review and the content of the first review-page jointly define treatment, control, and other periods.]
Figure 4: NEGATIVE REVIEW ELASTICITIES OF SALES (HOME-&-GARDEN)
[Figure: box-plots of sales elasticity to negative review (%, roughly -60 to 0) by category: mattresses, bed frames, vacuum cleaners and accessories, microwaves, wardrobes, washing machines, fridge freezers, kettles, duvet cover sets, curtains.]
Figure 5: NEGATIVE REVIEW ELASTICITIES OF SALES (TECHNOLOGY)
[Figure: box-plots of sales elasticity to negative review (%, roughly -60 to 0) by category: telephones, printers, televisions, tablets, laptops and netbooks, iPad, headphones and earphones, SIM free phones, pay as you go phones, printer ink.]
Figure 6: SALES AND SEARCH ELASTICITIES TO NEGATIVE REVIEW
[Figure: scatter plot of search elasticity to negative review (%, 0 to 10) against sales elasticity to negative review (%, -50 to 0).]
Figure 7: VULNERABILITY MAP OF VACUUM CLEANERS
[Figure: scatter plot of search elasticity to negative review (%, 0 to 12.5) against sales elasticity to negative review (%, -50 to 0); symbols denote brands A–D, symbol size denotes scroll rate (0–100%).]
WEB APPENDIX
Table WA.1: REGRESSION ESTIMATES (ALTERNATIVE 1)
Buys Focal Product   Buys Substitute   Search Continuation   Log Further Substitutes Searched   Log Price Paid
Intercept -3.098*** (.835) -5.045*** (.318) -.439* (.259) .414*** (.071) 2.105*** (.139)
Scroll .212*** (.034) -.033 (.023) .255*** (.012) .069*** (.002) -.010 (.010)
Treatment x Scroll -.298*** (.084) .086 (.070) -.154*** (.030) -.067*** (.008) .053** (.027)
Control x Scroll -.119* (.066) .013 (.061) -.200*** (.029) -.065*** (.007) .003 (.026)
Treatment x No Scroll .184*** (.051) -.029 (.049) -.266*** (.022) -.088*** (.006) -.034* (.020)
Control x No Scroll -.072 (.053) .021 (.042) -.167*** (.019) -.055*** (.005) -.010 (.018)
Indicator “Has Reviews” -.071 (.084) .042 (.067) -.019 (.039) -.003 (.011) -.037 (.041)
Average Rating .037 (.063) -.023 (.049) -.014 (.024) -.003 (.007) -.045 (.028)
Log Reviews .014 (.044) -.005 (.037) .011 (.017) .002 (.005) .020 (.018)
Log Products Searched^a   -.226*** (.026)   .428*** (.042)   .472*** (.006)   .185*** (.002)   .002 (.008)
Log Products Searched^b   -.115*** (.012)   -.069*** (.010)   .032* (.018)   .026*** (.001)   -.006 (.006)
Log Reviewed Products^a   -.164*** (.031)   .413*** (.042)   -.004 (.012)   .000 (.004)   .026** (.013)
Log Reviewed Products^b   -.328*** (.050)   -.045 (.036)   .032* (.018)   -.001 (.005)   .029 (.023)
Log Search Sessions 1.387*** (.106) 1.400*** (.110) -.384*** (.058) -.085*** (.015) .034 (.022)
Log Duration^b   .000 (.002)   .005** (.002)   .008*** (.001)   .002*** (.000)   -.001 (.001)
Purchase Rate^b   3.820*** (.088)   3.129*** (.096)   -.102*** (.036)   -.060*** (.009)   -.052*** (.014)
Linear trend -.005*** (.000) -.009*** (.001) -.003*** (.000) -.001*** (.000) -.001*** (.000)
Product Fixed Effects Yes Yes Yes Yes Yes
R2.152 .131 .181 .225 .943
AUC^c   .908   .904   .741   -   -
# Observations 410,628 410,628 410,628 410,628 16,290
***p<.01 **p<.05 *p<.1
a: focal category
b: other categories
c: Area Under the ROC Curve. 1 indicates a perfect classifier, .5 indicates a classifier with performance no better than random guessing.
Note: Bootstrapped standard error, clustered at product level in parentheses.
Table WA.2: REGRESSION ESTIMATES (ALTERNATIVE 2)
Buys Focal Product   Buys Substitute   Search Continuation   Log Further Substitutes Searched   Log Price Paid
Intercept -3.116*** (.844) -5.041*** (.307) -.463* (.257) .406*** (.071) 2.160*** (.137)
Scroll .177*** (.030) -.020 (.023) .254*** (.010) .069*** (.002) -.000 (.009)
Treatment x Scroll -.052 (.107) .020 (.091) -.059 (.038) -.041*** (.010) .047 (.031)
Control x Scroll -.113 (.114) -.057 (.097) -.138*** (.050) -.057*** (.013) -.096** (.043)
Treatment x No Scroll .280*** (.067) -.046 (.052) -.221*** (.025) -.066*** (.006) -.061*** (.023)
Control x No Scroll -.050 (.074) -.023 (.062) -.124*** (.028) -.038*** (.007) .010 (.027)
Indicator “Has Reviews” -.049 (.083) .041 (.069) .009 (.039) .005 (.010) -.039 (.041)
Average Rating .033 (.061) -.023 (.050) -.022 (.024) -.005 (.007) -.047* (.028)
Log Reviews .006 (.044) -.005 (.036) -.009 (.018) -.004 (.005) .019 (.018)
Log Products Searched^a   -.225*** (.026)   .428*** (.041)   .474*** (.006)   .186*** (.002)   .002 (.008)
Log Products Searched^b   -.115*** (.013)   -.069*** (.009)   .062*** (.004)   .026*** (.001)   -.006 (.006)
Log Reviewed Products^a   -.163*** (.033)   .412*** (.041)   -.002 (.012)   .001 (.004)   .026* (.013)
Log Reviewed Products^b   -.328*** (.050)   -.045 (.034)   .032* (.018)   -.001 (.005)   .029 (.023)
Log Search Sessions 1.388*** (.109) 1.399*** (.104) -.384*** (.057) -.085*** (.015) .034 (.022)
Log Duration^b   .000 (.002)   .005** (.002)   .008*** (.001)   .002*** (.000)   -.001 (.001)
Purchase Rate^b   3.818*** (.089)   3.130*** (.097)   -.102*** (.036)   -.060*** (.009)   -.051*** (.014)
Linear trend -.006*** (.000) -.009*** (.001) -.004*** (.000) -.001*** (.000) -.001*** (.000)
Product Fixed Effects Yes Yes Yes Yes Yes
R2.152 .131 .180 .224 .943
AUC^c   .908   .904   .740   -   -
# Observations 410,628 410,628 410,628 410,628 16,290
***p<.01 **p<.05 *p<.1
a: focal category
b: other categories
c: Area Under the ROC Curve. 1 indicates a perfect classifier, .5 indicates a classifier with performance no better than random guessing.
Note: Bootstrapped standard error, clustered at product level in parentheses.
Table WA.3: REGRESSION ESTIMATES (ALTERNATIVE 3)
Buys Focal Product   Buys Substitute   Search Continuation   Log Further Substitutes Searched   Log Price Paid
Intercept -3.070*** (.841) -5.032*** (.335) -.428 (.261) .416*** (.072) 2.104*** (.130)
Scroll .189*** (.034) -.035 (.022) .253*** (.011) .068*** (.003) -.010 (.011)
Treatment x Scroll -.000 (.123) .193* (.103) -.045 (.043) -.038*** (.012) .061** (.029)
Control x Scroll -.200** (.082) -.006 (.068) -.114*** (.031) -.033*** (.008) -.005 (.025)
Treatment x No Scroll .259*** (.077) -.067 (.065) -.219*** (.031) -.069*** (.008) -.077*** (.025)
Control x No Scroll -.037 (.061) -.061 (.054) -.124*** (.023) -.038*** (.006) -.039* (.021)
Indicator “Has Reviews” -.062 (.077) .037 (.068) -.012 (.039) -.000 (.010) -.041 (.040)
Average Rating .036 (.063) -.022 (.052) -.019 (.024) -.005 (.007) -.044 (.028)
Log Reviews .012 (.045) -.002 (.038) .000 (.018) -.001 (.005) .022 (.016)
Log Products Searched^a   -.225*** (.024)   .428*** (.042)   .474*** (.006)   .186*** (.002)   .002 (.007)
Log Products Searched^b   -.115*** (.014)   -.069*** (.009)   .062*** (.003)   .026*** (.001)   -.006 (.005)
Log Reviewed Products^a   -.164*** (.037)   .412*** (.042)   -.002 (.012)   .001 (.004)   .027* (.014)
Log Reviewed Products^b   -.328*** (.052)   -.045 (.037)   .032* (.018)   -.001 (.005)   .030 (.021)
Log Search Sessions 1.387*** (.110) 1.400*** (.112) -.384*** (.057) -.085*** (.015) .034 (.024)
Log Duration^b   .000 (.002)   .005** (.002)   .008*** (.001)   .002*** (.000)   -.001 (.001)
Purchase Rate^b   3.818*** (.090)   3.129*** (.097)   -.102*** (.037)   -.060*** (.010)   -.052*** (.015)
Linear trend -.005*** (.000) -.009*** (.001) -.003*** (.000) -.001*** (.000) -.001*** (.000)
Product Fixed Effects Yes Yes Yes Yes Yes
R2.152 .131 .180 .224 .943
AUC^c   .908   .904   .740   -   -
# Observations 410,628 410,628 410,628 410,628 16,290
***p<.01 **p<.05 *p<.1
a: focal category
b: other categories
c: Area Under the ROC Curve. 1 indicates a perfect classifier, .5 indicates a classifier with performance no better than random guessing.
Note: Bootstrapped standard error, clustered at product level in parentheses.
Table WA.4: REGRESSION ESTIMATES (ALTERNATIVE 4)
Buys Focal Product   Buys Substitute   Search Continuation   Log Further Substitutes Searched   Log Price Paid
Intercept -3.104*** (.831) -5.038*** (.317) -.437 (.258) .413*** (.071) 2.095*** (.138)
Scroll .186*** (.032) -.034 (.022) .252*** (.011) .068*** (.003) -.007 (.010)
Treatment x Scroll -.083 (.148) -.033 (.114) .004 (.052) -.030** (.014) .066 (.049)
Control x Scroll -.106 (.086) .058 (.068) -.060* (.034) -.018* (.009) -.002 (.026)
Treatment x No Scroll .388*** (.098) .007 (.078) -.208*** (.037) -.068*** (.009) -.071** (.032)
Control x No Scroll .029 (.066) -.139** (.059) -.096*** (.027) -.027*** (.007) -.043* (.022)
Indicator “Has Reviews” -.048 (.083) .033 (.067) -.004 (.039) -.001 (.010) -.043 (.042)
Average Rating .033 (.063) -.021 (.050) -.021 (.024) -.005 (.007) -.044 (.028)
Log Reviews .006 (.045) .000 (.037) -.004 (.018) -.000 (.005) .022 (.019)
Log Products Searched^a   -.225*** (.026)   .428*** (.042)   .474*** (.006)   .186*** (.002)   .002 (.008)
Log Products Searched^b   -.115*** (.012)   -.069*** (.010)   .062*** (.004)   .026*** (.001)   -.006 (.006)
Log Reviewed Products^a   -.164*** (.031)   .412*** (.042)   -.002 (.012)   .001 (.003)   .027** (.013)
Log Reviewed Products^b   -.328*** (.050)   -.045 (.035)   .032* (.018)   -.001 (.005)   .029 (.023)
Log Search Sessions 1.386*** (.106) 1.398*** (.108) -.385*** (.058) -.085*** (.014) .034 (.022)
Log Duration^b   .000 (.002)   .005** (.002)   .008*** (.001)   .002*** (.000)   -.001 (.001)
Purchase Rate^b   3.819*** (.088)   3.130*** (.095)   -.102*** (.036)   -.060*** (.009)   -.052*** (.014)
Linear trend -.006*** (.000) -.009*** (.000) -.003*** (.000) -.001*** (.000) -.001*** (.000)
Product Fixed Effects Yes Yes Yes Yes Yes
R2.152 .131 .180 .224 .943
AUC^c   .908   .904   .740   -   -
# Observations 410,628 410,628 410,628 410,628 16,290
***p<.01 **p<.05 *p<.1
a: focal category
b: other categories
c: Area Under the ROC Curve. 1 indicates a perfect classifier, .5 indicates a classifier with performance no better than random guessing.
Note: Bootstrapped standard error, clustered at product level in parentheses.
Table WA.5: REGRESSION ESTIMATES (ALTERNATIVE 5)
Buys Focal Product   Buys Substitute   Search Continuation   Log Further Substitutes Searched   Log Price Paid
Intercept -3.194*** (.848) -5.015*** (.332) -.278 (.259) .466*** (.073) 2.122*** (.133)
Scroll .167*** (.032) -.020 (.023) .258*** (.012) .069*** (.002) -.006 (.011)
Treatment x Scroll .063 (.091) .028 (.070) -.188*** (.030) -.074*** (.008) .006 (.027)
Control x Scroll .020 (.077) -.035 (.052) -.279*** (.031) -.088*** (.008) .006 (.030)
Treatment x No Scroll .189*** (.061) -.016 (.055) -.273*** (.022) -.092*** (.005) -.048** (.020)
Control x No Scroll -.028 (.061) -.035 (.052) -.208*** (.022) -.066*** (.006) -.002 (.023)
Indicator “Has Reviews” -.056 (.081) .039 (.065) .004 (.039) .004 (.011) -.035 (.039)
Average Rating .049 (.061) -.022 (.052) -.027 (.024) -.007 (.007) -.050* (.029)
Log Reviews .008 (.044) -.004 (.037) -.001 (.018) -.001 (.005) .020 (.016)
Log Products Searched^a   -.225*** (.025)   .428*** (.042)   .473*** (.006)   .185*** (.002)   .002 (.007)
Log Products Searched^b   -.115*** (.013)   -.069*** (.009)   .062*** (.003)   .026*** (.001)   -.006 (.005)
Log Reviewed Products^a   -.162*** (.034)   .412*** (.042)   -.004 (.012)   .001 (.004)   .026* (.014)
Log Reviewed Products^b   -.330*** (.049)   -.044 (.037)   .033* (.018)   -.001 (.005)   .031 (.021)
Log Search Sessions 1.389*** (.106) 1.400*** (.112) -.385*** (.058) -.085*** (.015) .033 (.024)
Log Duration^b   .000 (.002)   .005** (.002)   .008*** (.001)   .002*** (.000)   -.001 (.001)
Purchase Rate^b   3.819*** (.088)   3.130*** (.097)   -.102*** (.037)   -.060*** (.009)   -.051*** (.015)
Linear trend -.005*** (.000) -.009*** (.000) -.004*** (.000) -.001*** (.000) -.001*** (.000)
Product Fixed Effects Yes Yes Yes Yes Yes
R2.152 .131 .181 .225 .943
AUC^c   .908   .904   .741   -   -
# Observations 410,628 410,628 410,628 410,628 16,290
***p<.01 **p<.05 *p<.1
a: focal category
b: other categories
c: Area Under the ROC Curve. 1 indicates a perfect classifier, .5 indicates a classifier with performance no better than random guessing.
Note: Bootstrapped standard error, clustered at product level in parentheses.
54
Table WA.6: DID COEFFICIENT OF RELEGATION USING ALTERNATIVE DEFINITIONS
Buys Focal Product   Buys Substitute   Search Continuation   # Further Substitutes Searched   Price Paid
Alternative 1: Negative review is at position 4 or 5
Coefficient .435*** (.125) -.080 (.087) -.144*** (.039) -.030*** (.009) -.073** (.036)
95% interval [-.088, .588] [-.231, .090] [-.222, -.083] [-.048, -.009] [-.147, -.009]
Alternative 2: Control period’s review is at the first position only
Coefficient .269 (.186) -.101 (.134) -.175*** (.064) -.043*** (.017) -.216*** (.059)
95% interval [-.169, .591] [-.338, .116] [-.300, -.057] [-.078, -.010] [-.351, -.100]
Alternative 3: Negative review has a 1-star or 2-star rating
Coefficient .096 (.152) -.205* (.117) -.163*** (.053) -.026* (.014) -.104*** (.040)
95% interval [-.184, .366] [-.412, .070] [-.266, -.057] [-.053, .002] [-.183, -.025]
Alternative 4: Negative review has a 1-star rating
Coefficient .336* (.182) .238* (.126) -.176*** (.064) -.028* (.017) -.096* (.055)
95% interval [-.015, .660] [-.043, .417] [-.295, -.067] [-.058, .005] [-.206, .012]
Alternative 5: Negative review is at any position with a 3-day delay
Coefficient .174 (.121) -.123 (.095) -.155*** (.045) -.040*** (.011) -.047 (.036)
95% interval [-.094, .370] [-.272, .089] [-.260, -.078] [-.064, -.018] [-.132, .022]
***p<.01 **p<.05 *p<.1
Note: Bootstrapped standard errors clustered at product level in parentheses.
Table WA.7: REGRESSION OF VACUUM CLEANER ELASTICITIES

              Elasticity of Sales   Elasticity of Search
Intercept     -2.737 (3.460)        1.175* (.650)
Scroll Rate   -.518*** (.030)       .110*** (.005)
Log Price     .456 (.706)           -.196 (.132)
R2            .791                  .829
# Products    80
***p<.01 **p<.05 *p<.1
Note: Standard errors in parentheses.
... experiential) purchases are considered more helpful and more strongly influence receivers' choices, because they are perceived as more reflective of the objective quality of the purchase (Dai et al., 2019). Further, motivation to seek WOM is higher (e.g., information search is prolonged) when receivers are exposed to negative information (Varga & Albuquerque, 2019), and motivation to process increases when receivers' prior knowledge or beliefs are disconfirmed (Karmarkar & Tormala, 2010;Watts & Zhang, 2008). Increased processing can increase persuasion, which may increase or decrease product evaluations and purchase likelihood, depending on the valence of the WOM (Karmarkar & Tormala, 2010;Varga & Albuquerque, 2019). ...
... Further, motivation to seek WOM is higher (e.g., information search is prolonged) when receivers are exposed to negative information (Varga & Albuquerque, 2019), and motivation to process increases when receivers' prior knowledge or beliefs are disconfirmed (Karmarkar & Tormala, 2010;Watts & Zhang, 2008). Increased processing can increase persuasion, which may increase or decrease product evaluations and purchase likelihood, depending on the valence of the WOM (Karmarkar & Tormala, 2010;Varga & Albuquerque, 2019). ...
... Most receivers read the text of multiple reviews (Bambauer-Sachse & Mangold, 2011;Murphy, 2018;Varga & Albuquerque, 2019), and consider aspects of reviews such as length, valence, and content to be important (Murphy, 2018). Indeed, accounting for specific aspects of message content can improve predictions of the impact of WOM over and above star ratings and summary statistics (Archak et al., 2011;Cao, Duan, & Gan, 2011;Ghose, Ipeirotis, & Li, 2012;Ziegele & Weber, 2015). ...
Article
Full-text available
Online word‐of‐mouth (WOM) can impact consumers’ product evaluations, purchase intentions, and choices—but when does it do so? How do those receiving WOM know whether to rely on a particular message? This article suggests that the multiple players involved in online WOM (receivers, senders, sellers, platforms, and other consumers) each have their own interests, which are often in conflict. Thus, receivers of WOM are faced with a judgment task in deciding what information to rely on: They must make inferences about the product in question and about the players who provide or present WOM. To do so, they use signals embedded in various components of WOM, such as average star ratings, message content, or sender characteristics. The product and player information provided by these signals shapes the impact of WOM by allowing receivers to make inferences about (a) their likelihood of product satisfaction, and (b) the trustworthiness of WOM players, and therefore the trustworthiness of their content. This article summarizes how each player changes the impact of online WOM, providing a lens for understanding the current literature in online WOM, offering insights for theory in this context, and opening up pathways for future research.
... With the advent of technology and its booming period in recent years, online customer reviews are getting a subject of research where negativity (negative promotion) by twist is a high point of interest that most marketers do follow and apply. Online promotions are making promotion advantages to a premium kind of feeling which is as prestigious as sophisticated one, making the entire sales-promotion regression as a "digital" marketing rather than physical complexity (in branding and etc.) [40] . ...
Article
Full-text available
An interaction between promotion and sales, in a given business, should unfold many of the insights for the business. There lies a problem on higher or lower relative measures in respect of effort of promotion or sales, in an interaction in business. The study has found the relative measure and obtained results which theoretically lead to generate a model, may be called as sales model. With many of its prosperous virtues, this model would be able to control negativity in the measures of either promotion or sales. Such study can easily maneuver to computer software to anticipate or estimate the interaction outcome. In the study, there would be found the correlative relationship by which interaction phenomenon can be visualized, over several fluctuations of sales and promotion. The study should make the interaction modeling to form out the similar modeling for any management discipline like human resources management especially, alongwith operation, strategy, financial management and many others. With lots of future scopes, the study can be treated as an examining explanation to several real-life problems to solve within a business.
... The influence of positive ratings on purchase intention is greater than that of negative ratings [29]. Alternatively, when exposed to negative information, consumers become motivated to seek additional information through OCRs, and their purchase likelihood is reduced [79]. Additionally, studies have shown that search and experiential products have a moderating effect on OCR valence [80,81]. ...
Article
Full-text available
We examine the interaction effects of linguistic style and verification of online reviews in terms of their valence on purchase intention for search and experiential products. We adopt the cue utilization framework to examine the interplay between the extrinsic cues of online reviews—content style (general versus specific), verified purchase (VP) badge (present versus absent), and valence (positive versus negative)—in two product categories—search product (tablet) and experiential product (trip package)—using an experimental design. The findings of the frequentist and Bayesian analyses show that valence supersedes other attributes’ impacts on purchase intention in both product categories. Variations in the content style of the reviews have minor influences on purchase intention. The presence of a VP badge on a review has a negligible influence on purchase intention across both product categories. Valence-content style and valence-VP badge interactions significantly affect purchase intention. Based on these findings, implications are discussed.
Chapter
In the last two years, consumers have experienced massive changes in consumption – whether due to shifts in habits; the changing information landscape; challenges to their identity, or new economic experiences of scarcity or abundance. What can we expect from these experiences? How are the world's leading thinkers applying both foundational knowledge and novel insights as we seek to understand consumer psychology in a constantly changing landscape? And how can informed readers both contribute to and evaluate our knowledge? This handbook offers a critical overview of both fundamental topics in consumer psychology and those that are of prominence in the contemporary marketplace, beginning with an examination of individual psychology and broadening to topics related to wider cultural and marketplace systems. The Cambridge Handbook of Consumer Psychology, 2nd edition, will act as a valuable guide for teachers and graduate and undergraduate students in psychology, marketing, management, economics, sociology, and anthropology.
Article
Full-text available
In this paper we discuss recent developments in econometrics that we view as important for empirical researchers working on policy evaluation questions. We focus on three main areas, where in each case we highlight recommendations for applied work. First, we discuss new research on identification strategies in program evaluation, with particular focus on synthetic control methods, regression discontinuity, external validity, and the causal interpretation of regression methods. Second, we discuss various forms of supplementary analyses to make the identification strategies more credible. These include placebo analyses as well as sensitivity and robustness analyses. Third, we discuss recent advances in machine learning methods for causal effects. These advances include methods to adjust for differences between treated and control units in high-dimensional settings, and methods for identifying and estimating heterogeneous treatment effects.
Article
Full-text available
The increasing amount of electronic word of mouth (eWOM) has significantly affected the way consumers make purchase decisions. Empirical studies have established an effect of eWOM on sales but disagree on which online platforms, products, and eWOM metrics moderate this effect. The authors conduct a meta-analysis of 1,532 effect sizes across 96 studies covering 40 platforms and 26 product categories. On average, eWOM is positively correlated with sales (.091), but its effectiveness differs across platform, product, and metric factors. For example, the effectiveness of eWOM on social media platforms is stronger when eWOM receivers can assess their own similarity to eWOM senders, whereas these homophily details do not influence the effectiveness of eWOM for e-commerce platforms. In addition, whereas eWOM has a stronger effect on sales for tangible goods new to the market, the product life cycle does not moderate the eWOM effectiveness for services. With respect to the eWOM metrics, eWOM volume has a stronger impact on sales than eWOM valence. In addition, negative eWOM does not always jeopardize sales, but high variability does.
Article
Internet review forums increasingly supplement expert opinion and social networks in informing consumers about product quality. However, limited empirical evidence links digital word-of-mouth to purchasing decisions. We implement a regression discontinuity design to estimate the effect of positive Yelp.com ratings on restaurant reservation availability. An extra half-star rating causes restaurants to sell out 19 percentage points (49%) more frequently, with larger impacts when alternative information is scarcer. These returns suggest that restaurateurs face incentives to leave fake reviews, but a rich set of robustness checks confirms that restaurants do not manipulate ratings in a confounding, discontinuous manner.
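The identification here comes from rounding: Yelp displays the running average rounded to the nearest half star, so restaurants just above and just below a rounding threshold receive different displayed ratings despite nearly identical underlying quality. The sketch below constructs that rounding-based running variable and compares a simulated outcome in a narrow window around the thresholds; all numbers are invented, and a local linear fit like the one sketched earlier would produce the actual estimate.

```python
import numpy as np

rng = np.random.default_rng(2)
raw = rng.uniform(3.0, 4.5, 10000)            # latent average ratings
displayed = np.floor(raw * 2 + 0.5) / 2       # rounded to the nearest half star

# Center each rating on its nearest rounding threshold (x.25 / x.75):
run = (raw % 0.5) - 0.25                      # run >= 0 means the rating rounded up

# Simulated sellout probability depends on the *displayed* rating only.
sellout = rng.uniform(0, 1, 10000) < 0.10 + 0.10 * (displayed - 3.0)

h = 0.05                                      # narrow window around the thresholds
above = (run >= 0) & (run < h)
below = (run < 0) & (run > -h)
print("P(sell out) just above vs. just below a threshold:",
      round(sellout[above].mean(), 3), round(sellout[below].mean(), 3))
```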
Article
Even though negative information about brands and companies is widely prevalent in the marketplace, there has been no systematic investigation, beyond case studies, of how consumers process negative information about the brands they like and use. In the three studies in this research, the authors attempt to bridge this gap. The findings of the first and second studies provide a theoretical framework for understanding how consumers process negative information in the marketplace; consumer commitment toward the brand is identified as a moderator of negative information effects. In the third study, the authors use this framework to derive and test response strategies that companies can use to counter negative publicity among consumers who are high or low in commitment toward the brand.
Article
Although management assumes a relationship between price and quality when making pricing decisions and when acting against price cutting within distribution channels, little research on this relationship has been done. Earlier price-quality studies have not involved consumers actually using products over time. This article reports a study in which price was the only manipulated variable: over 24 trials, subjects perceived quality differences among three brands even though no quality difference existed. The relationship between price and perceived quality was positive but not linear.
Article
We analyze markets where firms competing on price advertise to increase the probability of entering consumers' consideration sets. We find that moderately costly advertising allows firms to raise prices, and possibly profits, by reducing the fraction of price-conscious consumers and by segmenting the market according to whether or not consumers consider the lower-priced firm. However, when the cost of advertising is sufficiently low, advertising leads to a prisoners' dilemma that adversely impacts profits without affecting expected prices.
Article
The relationship between attribute-level performance, overall satisfaction, and repurchase intentions is of critical importance to managers and has generally been conceptualized as linear and symmetric. The authors investigate the asymmetric and nonlinear nature of the relationship among these constructs. Predictions are developed and tested empirically using survey data from two different contexts: a service (health care, n = 4,517) and a product (automobiles, n = 9,359 and n = 13,759). Results show that (1) negative performance on an attribute has a greater impact on overall satisfaction and repurchase intentions than positive performance on that same attribute, and (2) overall satisfaction displays diminishing sensitivity to attribute-level performance. Surprisingly, results also show that attribute performance has a direct impact on repurchase intentions in addition to its effect through satisfaction.
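A common way to capture this asymmetry is to split attribute performance into shortfalls and gains around a reference point and let each carry its own slope. The sketch below does this on simulated data; the reference level, variable names, and coefficients are illustrative assumptions, not the authors' specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
perf = rng.uniform(1, 7, 500)                  # attribute-level performance
ref = 4.0                                      # assumed reference level
df = pd.DataFrame({
    "loss": np.minimum(perf - ref, 0),         # shortfalls below the reference
    "gain": np.maximum(perf - ref, 0),         # performance above the reference
})
# Build in loss aversion: shortfalls weighted more heavily than gains.
df["satisfaction"] = 5 + 1.2 * df["loss"] + 0.5 * df["gain"] + rng.normal(0, 0.5, 500)

fit = smf.ols("satisfaction ~ loss + gain", data=df).fit()
print(fit.params)   # a larger slope on loss than on gain reproduces the asymmetry
```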
Article
We derive revealed preference tests for models where individuals use consideration sets to simplify their consumption problem. Our basic test provides necessary and sufficient conditions for the consistency of observed choices with the existence of consideration set restrictions. The same conditions can also be derived from a model in which consideration set formation is endogenous and based on subjective prices. By imposing restrictions on these subjective prices, we obtain additional, refined revealed preference tests. We illustrate and compare the performance of our tests using a dataset on household consumption choices.
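For intuition, the sketch below runs the textbook GARP consistency check (a transitive closure of the directly-revealed-preferred relation) on a toy price/quantity dataset. This is the unrestricted baseline that consideration-set tests of the kind derived in this paper relax; the paper's actual conditions differ, and the data here are invented.

```python
import numpy as np

# Toy data: one row per observation (prices faced, bundle chosen).
P = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [1.5, 1.5]])                    # prices
X = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [2.0, 2.0]])                    # chosen quantity bundles

n = len(P)
cost = P @ X.T                                # cost[i, j] = p_i . x_j
spent = np.diag(cost)                         # p_i . x_i, actual expenditure

R = spent[:, None] >= cost - 1e-12            # i directly revealed preferred to j
for k in range(n):                            # Warshall transitive closure
    R = R | (R[:, [k]] & R[[k], :])

# GARP: if i is (indirectly) revealed preferred to j, bundle i must not be
# strictly cheaper than bundle j at observation j's prices.
violated = any(R[i, j] and spent[j] > cost[j, i] + 1e-12
               for i in range(n) for j in range(n) if i != j)
print("GARP violated" if violated else "GARP satisfied")
```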
Article
This article discusses some major uses of the logistic regression model in social data analysis. Using the example of personal happiness, a trichotomous variable from the 1993 General Social Survey (n = 1,601), it illustrates properties of the technique by predicting the odds that individuals are less, rather than more, happy with their lives. The exercise begins by treating happiness as dichotomous, distinguishing those who are not too happy from everyone else. Later in the article, all three categories of happiness are modeled via both polytomous and ordered logit models.
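The progression the article describes, from a binary logit to a model of all three ordered categories, can be mimicked in a few lines. The sketch below uses simulated data standing in for the GSS sample; the predictor (income), the thresholds, and the effect sizes are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(4)
n = 1601
income = rng.normal(0, 1, n)
latent = 0.8 * income + rng.logistic(0, 1, n)
happy3 = np.digitize(latent, [-1.0, 1.0])      # 0 = not too happy, 1, 2
df = pd.DataFrame({
    "income": income,
    "happy3": pd.Categorical(happy3, categories=[0, 1, 2], ordered=True),
    "not_too_happy": (happy3 == 0).astype(int),
})

# Step 1: dichotomous outcome, "not too happy" vs. everyone else.
binary = smf.logit("not_too_happy ~ income", data=df).fit(disp=0)
print(np.exp(binary.params["income"]))         # odds ratio (< 1: higher income,
                                               # lower odds of being not too happy)

# Step 2: all three ordered categories via an ordered logit.
ordered = OrderedModel(df["happy3"], df[["income"]], distr="logit")
print(ordered.fit(method="bfgs", disp=0).params)
```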