Article

Should We Reach for the Stars? Examining the Convergence Between Online Product Ratings and Objective Product Quality and Their Impacts on Sales Performance

Abstract and Figures

By documenting that online ratings poorly correlate with quality scores provided by Consumer Reports—presumably a measure of ‘objective’ product quality—de Langhe et al. (2016a) found that consumers rely more heavily on such ratings when making quality inferences than they should. Aside from replicating this finding, we examine the moderating effect of product age on the convergence between objective and rated quality and investigate which quality indicator is a better predictor of sales performance.
Conference Paper
Full-text available
The value that consumers derive from behavioral advertising has been more often posited than empirically demonstrated. The majority of empirical work on behavioral advertising has focused on estimating the effectiveness of behaviorally targeted ads, measured in terms of click or conversion rates. We present the results of two online within-subject experiments (Study 1 and Study 2) that, instead, employ a counterfactual approach, designed to assess comparatively some of the consumer welfare implications of behaviorally targeted advertising. Participants are presented with alternative product offers: products advertised in ads displayed to them on websites that commonly show behaviorally targeted ads (ad condition); competing products from the organic results of online searches (search condition); and random products (random condition). The alternatives are compared along a variety of metrics, including objective measures (such as product price and vendor quality) and participants’ self-reports (such as purchase intention and perceived product relevance). In Study 1 (n = 489) we find, first, that both ads and organic search results within our sample of participants are dominated by a minority of vendors; however, ads are more likely to present participants with less popular (and therefore lesser known) vendors. Second, we find that purchase intentions are higher in the ad and the search conditions than in the random condition; the effect is driven by higher product relevance in the ad and search conditions; however, in absolute terms, product relevance is low, even in the ad condition. Third, we find that ads are more likely to be associated with lower quality vendors and higher prices (for identical products), compared to competing alternatives found in search results. Study 2 (n = 493) replicates Study 1 results. In addition, Study 2 finds that higher purchase intentions and higher relevance in the ad condition are driven by participants having previously searched for the advertised product. Furthermore, we use a latent utility model to estimate differences in consumer surplus (a commonly used measure of consumer welfare) across conditions. In our sample of participants, the random condition is associated with the lowest surplus. After accounting for differences in vendor quality, the search condition is associated with slightly higher surplus relative to the ad condition.
Chapter
Continuously reviewing whether client and coach are on a good path toward achieving their goals is essential. The final retrospective serves to appreciate and evaluate the joint work and should be designed carefully: at eye level, because coaching, in the systemic understanding, is a co-production. Regrettably, a naive notion of how coaching and coaches should be evaluated prevails among the public: emotional and superficial, thumbs up or thumbs down. It is therefore important to uphold the understanding of coaching as a professional service.
Article
Full-text available
Consumer-generated ratings and reviews play an important role in people’s experiences of online search and shopping. This article applauds and extends the thought-provoking response of de Langhe, Fernbach, and Lichtenstein (2016, in this issue) to Simonson’s (2015) assertions about the topic and suggests an agenda for future research. Follow-up research into the topic should emphasize the diversity of consumers and the multiplicity of their needs. It should recognize that reviews and ratings are complex social conversations embedded in consumers’ multifaceted communicational repertoires. It should be cautious when using terms such as objective and rational when describing consumers and consumption. Being aware of the risks to external validity of studying average ratings may lead to frameworks with greater contextual integrity, and encourage collaborative communication between scholars from different perspectives working in this field.
Article
Full-text available
A growing body of research has emerged on online product reviews and their ability to elicit performance outcomes desired by retailers; yet, a common understanding of the performance implications of online product reviews has eluded us. Scholars continue to navigate an array of studies assessing different design elements of online product reviews, and various research settings and data sources. We undertake a meta-analysis of 26 empirical studies yielding 443 sales elasticities to examine how these variables relate to retail sales. Building on well-established meta-analytical methods, we address the following questions: How does review valence influence the elasticity of retailer sales? What about review volume? For which product types and usage situations do online product reviews have a greater impact on retailer sales elasticity? Which types of online reviewers and websites exert the greatest influence on retailer sales elasticity? Our study answers these important questions and provides a much needed quantitative synthesis of this burgeoning stream of research.
Article
Full-text available
This paper examines the informational role of product ratings. We build a theoretical model in which ratings can help consumers figure out how much they would enjoy the product. In our model, a high average rating indicates a high product quality, whereas a high variance of ratings is associated with a niche product, one that some consumers love and others hate. Based on its informational role, a higher variance would correspond to a higher subsequent demand if and only if the average rating is low. We find empirical evidence that is consistent with the theoretical predictions with book data from Amazon.com and BN.com. A higher standard deviation of ratings on Amazon improves a book's relative sales rank when the average rating is lower than 4.1 stars, which is true for 35% of all the books in our sample. This paper was accepted by Pradeep Chintagunta, marketing.
Article
The major point of the article by de Langhe, Fernbach, and Lichtenstein (2016, in this issue) is that objective ratings produced by Consumer Reports and online consumer ratings have a low correlation. We argue in this comment that this result is unsurprising due to some unresolved statistical issues, heterogeneity in terms of consumers’ use of ratings and of the underlying consumer population and contexts, dynamics in the ratings system, and the complexity of modeling the generation of the consumer ratings. We also question why this low correlation matters given the fact that consumers use multiple sources of information, and more uncorrelated sources lead to more efficient decision making.
Article
User reviews aggregate word of mouth and often greatly enhance consumers’ ability to estimate product quality. Consumers decide for themselves whether and how to incorporate user reviews with other sources when evaluating options (e.g., a small minority uses Consumer Reports). Despite the extraordinary diligence of Bart de Langhe, Philip Fernbach, and Donald Lichtenstein and their use of a variety of data sources and methods, I have concerns about the purpose of the research, the evidence and distinctions they rely on, and the overstated conclusions. However, studying how user reviews and other currently available quality sources of information affect consumers is important and offers new directions for judgment and decision-making researchers.
Article
In de Langhe, Fernbach, and Lichtenstein (2016) we argue that consumers trust average user ratings as indicators of objective product performance much more than they should. This simple idea has provoked passionate commentaries from eminent researchers across three sub-disciplines of marketing: experimental consumer research, modeling, and qualitative consumer research. Simonson challenges the premise of our research, asking whether objective performance even matters. We think it does, and explain why in our response. Winer and Fader argue that our results are neither insightful nor important. We believe that their reaction is due to a fundamental misunderstanding of our goals, and show that their criticisms do not hold up to scrutiny. Finally, Kozinets points out how narrow a slice of consumer experience our paper covers. We agree, and build on his observations to reflect on some big-picture issues about the nature of research and the interaction between the sub-disciplines.
Article
This research documents a substantial disconnect between the objective quality information that online user ratings actually convey and the extent to which consumers trust them as indicators of objective quality. Analyses of a dataset covering 1,272 products across 120 vertically differentiated product categories reveal that average user ratings (1) lack convergence with Consumer Reports scores, the most commonly used measure of objective quality in the consumer behavior literature, (2) are often based on insufficient sample sizes, which limits their informativeness, (3) do not predict resale prices in the used-product marketplace, and (4) are higher for more expensive products and premium brands, controlling for Consumer Reports scores. However, when forming quality inferences and purchase intentions, consumers heavily weight the average rating compared to other cues for quality, such as price and the number of ratings. They also fail to moderate their reliance on the average user rating as a function of sample size sufficiency. Consumers’ trust in the average user rating as a cue for objective quality appears to be based on an “illusion of validity.”
Article
The creation of online consumer communities to provide product reviews and advice has been touted as an important, albeit somewhat expensive component of Internet retail strategies. In this paper, we characterize reviewer behavior at two popular Internet sites and examine the effect of consumer reviews on firms' sales. We use publicly available data from the two leading online booksellers, Amazon.com and BarnesandNoble.com, to construct measures of each firm's sales of individual books. We also gather extensive consumer review data at the two sites. First, we characterize the reviewer behavior on the two sites such as the distribution of the number of ratings and the valence and length of ratings, as well as ratings across different subject categories. Second, we measure the effect of individual reviews on the relative shares of books across the two sites. We argue that our methodology of comparing the sales and reviews of a given book across Internet retailers allows us to improve on the existing literature by better capturing a causal relationship between word of mouth (reviews) and sales since we are able to difference out factors that affect the sales and word of mouth of both retailers, such as the book's quality. We examine the incremental sales effects of having reviews for a particular book versus not having reviews and also the differential sales effects of positive and negative reviews. Our large database of books also allows us to control for other important confounding factors such as differences across the sites in prices and shipping times.