Recommended Citation:
Trenz, Manuel and Berger, Benedikt, "Analyzing Online Customer Reviews - An Interdisciplinary Literature Review And Research Agenda" (2013). ECIS 2013 Completed Research. Paper 83.
http://aisel.aisnet.org/ecis2013_cr/83
ANALYZING ONLINE CUSTOMER REVIEWS - AN
INTERDISCIPLINARY LITERATURE REVIEW AND
RESEARCH AGENDA
Trenz, Manuel, University of Augsburg, Universitätsstr. 2, 86159 Augsburg, Germany,
manuel.trenz@wiwi.uni-augsburg.de
Berger, Benedikt, University of Mannheim, Schloss, 68131 Mannheim, Germany,
bberger@bwl.uni-mannheim.de
Abstract
Online customer reviews increasingly exert influence on customers’ purchase decisions when
shopping online and give new importance to the concept of word-of-mouth. This is reflected in a
growing body of academic literature across varying disciplines that draws on online customer reviews
as a source of information. However, these studies apply varying methods and obtain contradictory
results. We conduct a systematic and interdisciplinary literature review to examine how online
customer review data is used in academic research and which insights these studies provide.
Analyzing 49 journal articles, we find that most studies investigate online customer reviews' effect on sales, review helpfulness, or review manipulation. Furthermore, the variety of product categories and review websites from which these studies obtain their information is rather limited. The results reveal that previous research can only provide an imperfect understanding of the impact of electronic word-of-mouth on business. We therefore develop a twofold research agenda with regard to (a) studies investigating customer reviews and their effects and (b) issues with respect to the use of customer reviews as a data source in academic research.
Keywords: e-commerce, social media, eWOM, online customer reviews, literature review, research
agenda.
Proceedings of the 21st European Conference on Information Systems
1 Introduction
Customer reviews have become an essential part of the day-to-day business of most e-commerce
platforms on the internet. According to a survey by the Pew Research Center’s Internet & American
Life Project, 58% of American adults research products and services on the internet prior to the
purchase decision and 24% post comments or reviews online afterwards (Jansen, 2010). The growing
importance of electronic word-of-mouth (eWOM) for e-commerce success is reflected by a growing
body of research that draws upon online customer review data.
The concept of word-of-mouth (WOM) has been part of the marketing literature since the middle of
the 20th century (Katz and Lazarsfeld, 1955) but gained additional importance with the rise of the
internet. Whereas conventional WOM is limited to a local social network and fleeting in character, eWOM offers broad reach and long-term accessibility (Breazeale, 2009; Chen and Xie, 2008; Davis and Khazanchi, 2008; Duan et al., 2008a) and thus became a reliable source of customer information
for various research purposes (Li and Hitt, 2010). Early scientific contributions towards the changes
that WOM undergoes in an online context have been made by Dellarocas (2003) as well as Godes and
Mayzlin (2004). The marketing and information systems (IS) literature on online customer reviews
subsequently focused on the relationship between these reviews and sales (Chevalier and Mayzlin,
2006; Clemons et al., 2006; Dellarocas et al., 2007). Later on, further topics like bias or manipulation
in online customer reviews (e.g. Hu, Liu, et al., 2011; Li and Hitt, 2008), or review helpfulness (e.g.
Mudambi and Schuff, 2010) have been investigated. Apart from this, online customer review data is
also examined in tourism and medical journals.
Despite the large number of publications and the variety of issues addressed using customer
review data, a structured analysis of this research stream is lacking. Such an analysis of previous
research is especially valuable because the findings are distributed across multiple disciplines and are
therefore difficult to survey. Two attempts to assess the literature on eWOM have been conducted so
far (Breazeale, 2009; Cheung and Thadani, 2012). The review by Breazeale (2009) focuses solely on the marketing literature and hence misses important insights from the other research areas named above. In turn, the literature review of Cheung and Thadani (2012) differs from ours in two ways.
First, their study covers various forms of eWOM while we focus on online customer reviews in
particular. Second, Cheung and Thadani (2012) restrict their literature review to studies investigating
eWOM at an individual level, i.e., as a communication process between the reviewer and the reader.
We fill these gaps by conducting a broad, systematic, and interdisciplinary review of studies that use genuine online customer product reviews as their data source. We determine the scope and sources of the online customer review data that has been observed, which research questions have been addressed with such data, which methods have been applied, and which insights have been generated. These findings can guide further research to more successfully exploit this unique data
to generate scientific and practical insights.
The remainder of this paper is organized as follows. In the theoretical foundations we provide basic
definitions. After explaining the methodology of the systematic literature review we present and
critically discuss our results in a concept-centric manner as suggested by Webster and Watson (2002).
Finally, we summarize our findings and derive an extensive research agenda for the use of customer
review data in academic research.
2 Theoretical Foundations
The most widespread definition of eWOM communication stems from Hennig-Thurau et al. (2004)
who describe it “as any positive or negative statement made by potential, actual, or former customers
about a product or company, which is made available to a multitude of people and institutions via the
Internet.” Following this definition, online and offline word-of-mouth both report personal product
evaluations from the first-person perspective and address other customers (Dellarocas et al., 2010).
However, eWOM uniquely enables customers to exchange information about a product beyond
geographic or temporal limitations (Chen et al., 2011; Dellarocas et al., 2010). Further differences
between electronic and traditional WOM are the anonymity of the reviewer and the increased volume
and the diversity of judgments (Pan and Zhang, 2011). While the definition by Hennig-Thurau et al.
(2004) covers various forms of eWOM such as forums or blogs (Duan et al., 2008a), we focus specifically on online customer reviews as an important part of the concept of eWOM.
In particular, online customer reviews can be regarded as “peer-generated product evaluations posted
on company or third party websites” (Mudambi and Schuff, 2010). Usually, customers are invited to provide a rating on a specific scale as a measure of their overall evaluation and to write a text of arbitrary length that may justify the rating (Lee et al., 2011; Mudambi and Schuff,
2010). In contrast to other forms of eWOM, customer reviews have become an important tool for
online marketplaces and sellers. Online customer reviews are perceived as more relevant than other forms of
marketing communication and can be used to increase trust in an online shop (Chen and Xie, 2008;
Dellarocas, 2003). Two important concepts closely related to online customer reviews are review
manipulation or fraud and review helpfulness. Following Hu et al. (Hu, Bose, et al., 2011), we speak of review fraud or manipulation if vendors, producers, or any other party that holds a stake in the product or the company behind it writes online customer reviews with the aim of inflating the rating or improving the awareness of the product while hiding behind the identity of an anonymous customer. The helpfulness of an online customer review is usually represented by the ratio of positive helpfulness votes to total helpfulness votes for a particular review. These votes are obtained from readers by simply asking whether the review was helpful or not (Mudambi and Schuff, 2010;
Pan and Zhang, 2011). Because many customer review systems rank reviews according to their
helpfulness, this concept gains additional relevance (Cao et al., 2011).
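The helpfulness measure described above is simple arithmetic; a minimal sketch follows (the function name and the handling of the zero-vote case are our own assumptions, not taken from the reviewed studies):

```python
def helpfulness_ratio(positive_votes: int, total_votes: int):
    """Share of readers who voted a review helpful ('x of y people found this helpful')."""
    if total_votes == 0:
        return None  # assumption: undefined rather than zero when nobody has voted yet
    return positive_votes / total_votes

helpfulness_ratio(8, 10)  # -> 0.8
```

Ranking reviews by this ratio is what makes the helpfulness vote consequential: a review with 8 of 10 positive votes outranks one with 2 of 10.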
3 Methodology
The review is conducted following the eight-step guide for systematic literature reviews provided by Okoli
and Schabram (2010) as well as the guide for literature reviews in the IS field by Webster and Watson
(2002). We choose a broad scope for this literature review and do not restrict our search to a single discipline or a certain set of journals. Instead, we conduct a keyword search in the abstracts of a broad range of publications via three main literature search services to ensure a multidisciplinary approach. For this purpose, we develop a comprehensive set of keywords including synonyms as well as singular and plural terms for online customer reviews and eWOM. To guarantee the appropriateness of the reviewed material, we only search for peer-refereed publications. We set January 1st, 2012 as the publication deadline for articles to be included in our sample to avoid repeated searches until the completion of the review. Nevertheless, we allow for articles that were in press and published online only at that time. We obtain 380 results from our search on EBSCOhost, 85 results on ScienceDirect, and 187 results on ProQuest, in that order. Because 67.9 % of the results from ScienceDirect and 68.1 % of the results from ProQuest are redundant with the prior searches, we decide to stop our search at this stage.
Using a set of exclusion rules, which we refine continually following Okoli and Schabram (2010), we
screen the abstract and, if necessary, the full text of each article for inclusion. These rules assure that
only studies drawing on original online customer product review data are included in our sample. We
explicitly exclude lab experiments and tests of text mining algorithms because in these cases the object
being studied is rather a customer or an algorithm than the review data itself. After the practical screening, we are left with a total of 77 articles. As expected, we obtain a sample including studies from various
disciplines such as IS, Marketing, Operations Research and Management Science, Accountancy,
Tourism and Hospitality Management or Health Care Sciences. Instead of relying on a self-developed
in-depth quality appraisal of each included article as suggested by Okoli and Schabram (2010), we rely
on external quality indicators. Owing to the interdisciplinarity of our sample we opt for the 2010
Citation Impact Factors from the Journal Citation Reports as presented in the ISI Web of Knowledge
by Thomson Reuters, which are available for many important journals across disciplines. Journals that are listed in neither the Social Science Edition nor the Science Edition of the Journal Citation Reports are excluded from the sample. We set an impact factor of 1.0 as the minimum for journals to be included in our final set of literature and are left with 49 articles. To back up our sampling procedure, we obtain
the journal ratings from the Association of Business Schools (ABS) Journal Quality Guide (Harvey et
al., 2010) and the VHB-JOURQUAL2.1 of the German Academic Association for Business Research
(2011). These additional quality indicators underpin our selection.
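The sampling procedure amounts to a sequence of filters applied to the candidate articles. The sketch below is illustrative only: the record fields and the example entries are assumptions, while the impact-factor minimum of 1.0 and the peer-review and original-data requirements mirror the criteria described above.

```python
# Hypothetical candidate records; the field names are assumptions, not from the paper.
articles = [
    {"title": "A", "peer_reviewed": True, "impact_factor": 2.3, "uses_review_data": True},
    {"title": "B", "peer_reviewed": True, "impact_factor": 0.8, "uses_review_data": True},
    {"title": "C", "peer_reviewed": True, "impact_factor": 1.5, "uses_review_data": False},
]

def passes_screening(article: dict, min_impact_factor: float = 1.0) -> bool:
    # Mirror the described filters: peer-refereed publication, original online
    # customer review data, and a journal impact factor of at least 1.0.
    return (article["peer_reviewed"]
            and article["uses_review_data"]
            and article["impact_factor"] >= min_impact_factor)

final_set = [a for a in articles if passes_screening(a)]
```

In this toy example only article "A" survives: "B" fails the impact-factor threshold and "C" does not draw on original review data.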
We organize the analysis and synthesis of the literature using a concept matrix which we steadily
adjust during our work (Webster and Watson, 2002). The concepts are derived iteratively based on the
reviewed papers. The matrix categorizes the research topics, product categories the analyzed reviews
stem from, data sources, product and review data extracted, as well as several properties of the dataset
and methodology.
4 Results
We present our results along the concept matrix that we used for the assessment of our final set of
literature. Because space is limited and tabulating the whole concept matrix would not add adequate value, we depict it in the form of a summarizing table (cf. Table 1). Although our review
covers 49 top journal articles we analyze 55 entities because six of the articles either consist of two
distinct studies (Pan and Zhang, 2011), conduct the same analysis with two different datasets (Chen et
al., 2011; Dellarocas et al., 2010; Hu et al., 2012) or apply two separate models on the same data
(Forman et al., 2008; Hu, Liu, et al., 2011). In addition, we note that two articles by Duan et al. (2008a; 2008b) use the same data and examine similar research questions while using slightly
different empirical models.
4.1 Research Topics and Findings
We identify three major topics in our set of articles: effect on sales, bias and fraud, and review
helpfulness. We adopt this structure in the following to present our findings.
4.1.1 Effect on Sales
The most popular question examined is whether and to what extent the presence of online customer
reviews has an effect on the sales of the reviewed products. Concerning the influence of average
rating, or review valence, on revenue, results are mixed: whereas Archak et al. (2011), Chevalier and
Mayzlin (2006), Chintagunta et al. (2010), Clemons et al. (2006), Li and Hitt (2008), Yang and Mai
(2010), and Ye et al. (2009; 2011) affirm a positive relationship between the average rating of a product and its sales, others find no significant correlation between these variables (Duan et al., 2008b; Forman et al., 2008; Park et al., 2012). In contrast, the positive influence of online review volume on revenue is widely supported (Archak et al., 2011; Chevalier and Mayzlin, 2006; Dellarocas et al., 2007; Duan et al., 2008b; Forman et al., 2008; Li and Hitt, 2008), with only Clemons et al. (2006) and Chintagunta et al. (2010) finding no significant effect. The findings by Archak et al. (2011), Clemons et al. (2006), and Hu et al. (2009) indicate that the variance or standard deviation of online customer ratings is a more accurate predictor of sales growth than the simple average, which in turn is contradicted by Chintagunta et al. (2010) and Ye et al. (2011). Clemons et
al. (2006) observe that the average of the top quartile ratings seems to have stronger positive influence
on sales growth than the average of the bottom quartile has a negative one. In contrast, a follow-up study by Clemons and Gao (2008) identifies the absence of strongly negative reviews as an important predictor of online sales. This is in line with the findings of Chevalier and Mayzlin (2006)
who state that an additional negative review has more power to decrease sales than an additional
positive review has power to increase them.
Concept                          Outcome              Abs.   Rel.
Research Topics                  Effect on Sales       22    40.0 %
                                 Bias and Fraud        10    18.2 %
                                 Review Helpfulness    10    18.2 %
                                 Other                 17    30.9 %
Product Categories               Books                 14    25.5 %
                                 Movies                 8    14.5 %
                                 Software               5     9.1 %
                                 Video Games            5     9.1 %
                                 DVDs                   6    10.9 %
                                 CDs                    3     5.5 %
                                 Digital Cameras       10    18.2 %
                                 Mobile Phones          4     7.3 %
                                 Hotels                 5     9.1 %
                                 Other                 21    38.2 %
Data Source                      Amazon                25    45.5 %
                                 Barnes & Noble         2     3.6 %
                                 TripAdvisor            3     5.5 %
                                 Yahoo! Movies          7    12.7 %
                                 CNET                   5     9.1 %
                                 Epinions               5     9.1 %
                                 Other                 15    27.3 %
Unit of Analysis                 Product               30    54.5 %
                                 Review                22    40.0 %
Product-related Data extracted   Number of Reviews     31    56.4 %
                                 Average Rating        29    52.7 %
                                 Standard Deviation     8    14.5 %
                                 Price                 21    38.2 %
                                 Sales                 27    49.1 %
                                 Release Date          16    29.1 %
                                 Other                 28    50.9 %
Review-related Data extracted    Rating                30    54.5 %
                                 Length                11    20.0 %
                                 Helpfulness Votes     17    30.9 %
                                 Reviewer Details      11    20.0 %
                                 Date and Time         18    32.7 %
                                 Text                  23    41.8 %
                                 Others                15    27.3 %
Table 1. Results of the literature analysis (n=55)
Several possible explanations for these contradictory findings exist. Duan et al. (2008a) argue that
eWOM is not only a determinant but also a result of retail sales and thus has to be accounted for as an
endogenous factor. They suggest that average review valence has an indirect impact on sales because
it is positively correlated to review volume. Thus, ignoring the bilateral relationship between review
volume and revenue is likely to cause a bias and lead to an overestimation of the effect of review
valence on sales. Chintagunta et al. (2010) explain different results of previous studies with the
aggregation of data across different markets. Another reason for the inconsistent findings could be the neglect of an interaction effect with the product variety offered (Zhou and Duan, 2012).
Finally, Dellarocas et al. (2007) claim that both volume and valence are significant predictors of sales
if properly modeled.
Further insights are provided by Zhu and Zhang (2010) and Duan et al. (2009) who confirm positive
effects of eWOM on sales only for less popular products. Dellarocas et al. (2010) show that users
prefer to rate products which are less available on the market or which already have been reviewed
extensively. The distinction between more and less popular products is closely related to the concept
of the long tail. This concept is analyzed with regards to online customer reviews by Lee et al. (2011)
and Zhou and Duan (2012). Drawing on online review volume, Lee et al. (2011) propose that the long
tail theory only partly holds for products that are assessed objectively because people tend to follow
other customers’ advice in this case.
4.1.2 Bias and Fraud
Systematic differences in rating or purchase behavior may introduce a bias which is one of the major
concerns named by authors with regard to the limitations of their studies (Archak et al., 2011;
Clemons et al., 2006; Decker and Trusov, 2010; Dellarocas et al., 2010; Duan et al., 2008a; Hu and Li,
2011; Lee et al., 2011; Money et al., 2011). It is generally acknowledged that online product ratings
are overly positive on average (Chevalier and Mayzlin, 2006; Dellarocas et al., 2010; Duan et al.,
2008b; Duan et al., 2009; Forman et al., 2008; Hu et al., 2009; Hu, Liu, et al., 2011; Li and Hitt, 2008;
Li and Hitt, 2010; Mudambi and Schuff, 2010; Pan and Zhang, 2011; Zhu and Zhang, 2010). Hu et al.
(2009) identify a J-shaped distribution of product ratings, with extreme ratings dominating and moderate ratings being underrepresented. They explain this observation with the notion of two distinct self-selection biases: On
the one hand, people who think a product is of very low or extraordinary quality are more likely to write a review than those with an average opinion, which is referred to as the “underreporting bias” (Hu et al., 2009; Koh et al., 2010). On the other hand, extremely positive reviews outnumber extremely negative reviews because people with a low product evaluation prior to purchase will neither buy the product nor engage in writing a negative review, the “purchase bias” (Hu et al., 2009). Another important characteristic of online customer ratings is that they usually tend to decrease over time (Duan et al., 2008b; Hu, Liu, et al., 2011; Li and Hitt, 2008; Zhu and Zhang, 2010). Li and
Hitt (2008) attribute this phenomenon to idiosyncratic customer preferences. They reason that early
adopters, and therefore early reviewers, might have preferences towards a product which favour
positive reviews. This introduces a positive review bias of early product reviews that tends to decline
with more customers rating the product. Yet another form of bias might be caused by reviews’
sensitiveness to price and changes in price over time (Li and Hitt, 2010).
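The interplay of the two self-selection biases can be illustrated with a toy simulation. All distributions and probabilities below are illustrative assumptions, not estimates from any of the reviewed studies: a purchase cutoff removes most would-be negative reviewers, while an extremity-dependent review probability suppresses moderate ratings, together producing a J-shaped, positively skewed rating distribution of the kind Hu et al. (2009) describe.

```python
import random

random.seed(1)
ratings = []
for _ in range(20000):
    prior = random.gauss(3.5, 1.0)             # expected quality before buying (assumed)
    if prior < 2.5:
        continue                               # "purchase bias": sceptics never buy, so never rate
    experience = prior + random.gauss(0, 1.5)  # realized quality can deviate from expectation
    rating = min(5, max(1, round(experience)))
    # "underreporting bias": extreme experiences are more likely to be written up
    if random.random() < 0.1 + 0.2 * abs(rating - 3):
        ratings.append(rating)

counts = {r: ratings.count(r) for r in range(1, 6)}
mean_rating = sum(ratings) / len(ratings)
```

Under these assumptions, five-star ratings dominate, one-star ratings form a smaller second mode, three-star ratings are rarest, and the mean rating exceeds the scale midpoint, matching the overly positive averages reported in the literature.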
A second concern regarding the reliability of reviews is the possibility of review manipulation
(Chevalier and Mayzlin, 2006; Decker and Trusov, 2010; Pan and Zhang, 2011). Studying book
reviews on Amazon, Hu et al. (2012) estimate that about 10 % of the books are subject to manipulated
reviews. Hu et al. (2011) claim review manipulation to be an alternative explanation for decreasing
average reviews over time as it becomes more costly to influence the average product ratings the more
reviews have already been published. Based on their empirical analysis, neither self-selection bias nor review manipulation can be ruled out as a cause of this effect, although neither explains it fully. According to the observations by Hu et al. (2011), review manipulation is more likely for popular and high-priced products as well as for products with less helpful reviews. Nevertheless, it is still uncertain to what extent customers are able to correct for review biases or manipulation when assessing eWOM (Hu, Liu, et al., 2011; Li and Hitt, 2008).
4.1.3 Review Helpfulness
The last topic we used to categorize our literature set is review helpfulness. Mudambi and Schuff
(2010), Forman et al. (2008), and Pan and Zhang (2011) conclude that review length has a positive
effect on helpfulness and that this effect is stronger for search goods than for experience goods. In
contrast, Korfiatis et al. (2012) find stylistic elements to be more important than the extensiveness of
the text. Pan and Zhang (2011) and Cao et al. (2011) find a positive relationship between review
extremity and helpfulness especially for experience goods while the results by Willemsen et al. (2011)
suggest exactly the opposite. Mudambi and Schuff (2010) and Forman et al. (2008) observe a positive correlation between review helpfulness and moderate ratings for experience goods. Characteristics of the
reviewer influence review helpfulness as well (Forman et al., 2008; Pan and Zhang, 2011). The more
the reviewers reveal about their personality the more helpful are their reviews to other customers who
implement this information in their purchase decisions (Forman et al., 2008). In the second study of
their paper Pan and Zhang (2011) arrive at the result that reviews by the least and most innovative
reviewers are less helpful than those of reviewers with average innovativeness. Regarding the effect of
message sidedness on review helpfulness, empirical evidence is contradictory. While Forman et al. (2008) approximate equivocality using the extremity of the review rating, Schlosser (2011) and Willemsen et al. (2011) perform a content analysis to measure whether a review is one- or two-sided. Although two-sided reviews appear to be more helpful in general, this effect is enhanced if the review is accompanied by a moderate rating instead of an extreme one. In summary, the effects of review length, review extremity, and message sidedness on review helpfulness have not been agreed upon so far.
4.1.4 Other
Fourteen articles could not be allocated to any of the three main topics. Chaves et al. (2011), Hughes and
Cohen (2011), Yang and Fang (2004), and Yang et al. (2004) conduct content analyses to extract
quality dimensions important to customers, while Chen and Xie (2008) and Zhang et al. (2010) study
the differences between customer and professional reviews. Among the others, Feng and Papatla (2011) and Chen et al. (2011) examine the relationship between eWOM and traditional marketing, and Nelson (2008) investigates parents' anxiety about their children as reflected in online customer reviews.
4.2 Product Categories
The most frequently mentioned limitation in the studies we review is that the results are restricted to the
particular product category examined (Cao et al., 2011; Duan et al., 2008b; Feng and Papatla, 2011;
Forman et al., 2008; Hu et al., 2012; Korfiatis et al., 2012; Lee et al., 2011; Li and Hitt, 2010; Li et al.,
2010; Mudambi and Schuff, 2010; Pan and Zhang, 2011; Park et al., 2012; Schlosser, 2011; Yang and
Mai, 2010; Zhou and Duan, 2012). The decision for a certain kind of product is in many cases related
to inherent characteristics of these goods. Generally, a minimum level of popularity and therefore a minimum number of reviews is required (Chen and Xie, 2008; Clemons et al., 2006; Hu et al., 2012; Hughes and
Cohen, 2011; Lee et al., 2011; Mudambi and Schuff, 2010; Pan and Zhang, 2011). Since the
importance of eWOM for sales is supposed to be higher in industries like the movie or video game
industry, a review analysis in these markets is more promising than in others (Dellarocas et al., 2010;
Duan et al., 2008a; Zhu and Zhang, 2010). Moreover, products for which the Internet is the
predominant distribution channel are appropriate for online customer review analysis (Chen and Xie,
2008). Using a product category that has already been studied offers the chance to draw
comparisons with prior findings (Duan et al., 2008a). Further reasons are data availability (Dellarocas
et al., 2010; Duan et al., 2008b) or the absence of monetary incentives for review manipulation (Cao et
al., 2011).
In the end, the choice for a product category depends on the purpose of the study. Books, DVDs, CDs,
video games or software are usually supposed to represent experience products while digital cameras
and mobile phones are common examples for search goods. This enables researchers to measure
differences in effects for search and experience goods (Duan et al., 2009; Pan and Zhang, 2011; Sen
and Lerman, 2007; Willemsen et al., 2011; Yang and Mai, 2010). Clemons et al. (2006) choose the craft beer industry for their investigation because craft beer is a repeat-purchase product, which allows for an analysis of the effect of online customer reviews on repeated sales. Dellarocas et al. (2010) study the propensity
to write a review after consumption and work with movie data because movies tend to be rated soon
after watching them.
4.3 Data Sources
The variety of data sources is rather limited, with 39 out of 55 studies relying on only six different review websites from various industries, Amazon being the most popular one. Only a minority of the
authors justifies the selection of the particular review website they choose (Cao et al., 2011; Duan et
al., 2009; Forman et al., 2008; Hu et al., 2012; Hughes and Cohen, 2011; Jeacle and Carter, 2011; Li et
al., 2010; Pan and Zhang, 2011; Park et al., 2012; Zhang, Ye, et al., 2010; Zhu and Zhang, 2010).
Unrestricted accessibility of the entire review history serves as an essential requirement (Cao et al.,
2011; Hughes and Cohen, 2011). The major criterion is the size or popularity of the review platform
because the more products and reviews are available, the more elaborate the sampling can be (Cao et al.,
2011; Duan et al., 2009; Forman et al., 2008; Jeacle and Carter, 2011; Koh et al., 2010; Pan and
Zhang, 2011; Park et al., 2012; Zhu and Zhang, 2010). The type and currency of the customer review and sales figures provided by the website are important determinants for the selection as well (Cao et al., 2011; Jeacle and Carter, 2011). Further appreciated features are a
sophisticated product categorization and multiple sort options within these (Cao et al., 2011; Duan et
al., 2009; Jeacle and Carter, 2011). Another important factor is the censorship of reviews, which Pan and Zhang (2011) and Zhang et al. (2010) assume to be minimal on Amazon in comparison with other online retailers. Hughes and Cohen (2011) and Yang et al. (2004) use search engines to
determine an appropriate review website.
With regard to the comparability of results from different review platforms, it has to be noted that the rating systems differ. While Amazon, Barnes & Noble, CNET, and TripAdvisor operate a five-star rating system (Cao et al., 2011; Chaves et al., 2011; Chevalier and Mayzlin, 2006; Pan and Zhang, 2011), Booking.com (Chaves et al., 2011) and GameSpot.com (Zhu and Zhang, 2010) use scales from 1 to 10, whereas on Yahoo! Movies users rate with letters from A to F (Schlosser, 2011).
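Pooling ratings across such platforms therefore requires mapping them onto a common scale first. A simple linear rescaling sketch follows; the letter-grade coding is a hypothetical simplification (actual letter scales may include plus/minus steps), and the function name is our own.

```python
def rescale(value: float, lo: float, hi: float,
            new_lo: float = 1.0, new_hi: float = 5.0) -> float:
    """Linearly map a rating from [lo, hi] onto [new_lo, new_hi] (five stars by default)."""
    return new_lo + (value - lo) * (new_hi - new_lo) / (hi - lo)

# A 1-10 score (e.g. a Booking.com-style scale) mapped onto five stars.
five_star = rescale(8.2, 1, 10)

# Hypothetical ordinal coding for an A-F letter scale without plus/minus steps.
LETTER_SCORES = {"A": 5, "B": 4, "C": 3, "D": 2, "F": 1}
```

Whether such a linear mapping preserves the meaning of the original scale is itself a comparability assumption that cross-platform studies would need to justify.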
4.4 Data Extracted
A further important aspect that differentiates the articles from each other is which figures have actually been derived from the product review data. This especially depends on whether the product or the
review is the unit of analysis. We note that product-related data such as the number of reviews or the
average rating is sometimes observed directly on the product level (Chen et al., 2011; Chen and Xie,
2008; Chevalier and Mayzlin, 2006; Duan et al., 2009; Forman et al., 2008; Lee et al., 2011; Li and
Hitt, 2008; Li and Hitt, 2010; Park et al., 2012; Ye et al., 2009; Zhu and Zhang, 2010) and sometimes
aggregated from review-related data (Archak et al., 2011; Dellarocas et al., 2010; Duan et al., 2008b;
Ghose and Ipeirotis, 2011; Hu, Bose, et al., 2011; Hu, Liu, et al., 2011; Zhang, Craciun, et al., 2010).
The latter studies therefore use the product as their unit of analysis but still extract review-related data.
Review volume, review valence and sales are the most frequently used product-related figures. This is
not surprising, as this information is needed to study the effect of online customer reviews on sales or
review manipulation. The price of the product is usually needed to control for the effect of prices on
sales (Archak et al., 2011; Chevalier and Mayzlin, 2006; Feng and Papatla, 2011; Forman et al., 2008;
Ghose and Ipeirotis, 2011; Hu et al., 2009; Park et al., 2012; Willemsen et al., 2011; Yang and Mai,
2010; Ye et al., 2009; Ye et al., 2011; Zhou and Duan, 2012; Zhu and Zhang, 2010). The product
release date, often in combination with review date and time, is necessary to differentiate the age of
products (Forman et al., 2008) and thus their stage in the product lifecycle (Chen and Xie, 2008; Li
and Hitt, 2010) or diffusion (Duan et al., 2009), to rule out pre-release reviews (Dellarocas et al.,
2010) or to construct the sample (Li and Hitt, 2008). Additional information such as product features
(Chen and Xie, 2008; Li and Hitt, 2010) or certain categorizations, like the genre of a movie
(Dellarocas et al., 2010; Schlosser, 2011), is added depending on the purpose of the study. As no sales
data is available for Amazon, most studies using customer reviews from Amazon rely on the sales
rank as a proxy for sales (e.g. Archak et al., 2011; Chevalier and Mayzlin, 2006).
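The sales-rank proxy typically rests on an assumed log-linear (Pareto-type) relationship between sales and rank, so that the logarithm of the rank can stand in for the logarithm of sales in a regression. The sketch below illustrates this assumption; the parameter values are purely hypothetical, not estimates from any cited study.

```python
# Hedged sketch of the sales-rank proxy: an assumed log-linear relation
# ln(sales) = A - B * ln(rank). A and B are hypothetical parameters.

import math

A, B = 10.0, 0.9  # illustrative values only

def estimated_sales(sales_rank):
    """Back out an estimated sales figure from a sales rank."""
    return math.exp(A - B * math.log(sales_rank))

# A lower (better) sales rank implies higher estimated sales:
print(estimated_sales(10) > estimated_sales(1000))  # True
```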
For the review-related data, we observe that early studies focus on the review rating, which is criticized
by Archak et al. (2011) and Schlosser (2011). Archak et al. (2011) argue that one-dimensional ratings
cannot reflect the multiple aspects of a customer's product experience and evaluation.
Thus, over time the review text became a matter of deeper interest (Cao et al., 2011) and is usually
approached by content analysis (Archak et al., 2011; Chaves et al., 2011; Hughes and Cohen, 2011;
Money et al., 2011; Pan and Zhang, 2011; Schlosser, 2011; Willemsen et al., 2011) or text mining
(Cao et al., 2011; Decker and Trusov, 2010; Ghose and Ipeirotis, 2011; Korfiatis et al., 2012; Li et al.,
2010; Xu et al., 2011). Review length is primarily examined in connection with review helpfulness
(Ghose and Ipeirotis, 2011; Mudambi and Schuff, 2010; Pan and Zhang, 2011; Willemsen et al., 2011;
Zhang, Craciun, et al., 2010). Further review attributes obtained are, for example, information about the
reviewer's preferences (Clemons and Gao, 2008) or multidimensional ratings (Chintagunta et al.,
2010; Duan et al., 2008b; Schlosser, 2011; Yang and Mai, 2010).
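To make the kinds of review-level features mentioned above concrete, the toy sketch below extracts review length (word count) and a naive lexicon-based sentiment score from a review text. The word lists are our own minimal assumptions; the cited studies use far richer content-analysis and text-mining schemes.

```python
# Illustrative review-level feature extraction: length and a naive
# lexicon-based sentiment score. Word lists are toy assumptions.

POSITIVE = {"great", "excellent", "helpful", "good"}
NEGATIVE = {"bad", "poor", "broken", "disappointing"}

def review_features(text):
    """Return word count and (positive - negative) word balance."""
    words = text.lower().split()
    sentiment = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return {"length": len(words), "sentiment": sentiment}

print(review_features("Great book but poor binding"))
# {'length': 5, 'sentiment': 0}
```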
5 Discussion
Our systematic literature review uncovered 49 high-quality journal articles from a variety of
disciplines that use customer review data. The analysis reveals that online customer reviews are a topic
of growing interest in many research areas. While only 16 articles were published between 2004 and
2009, this number increased tremendously in recent years, with 33 studies published in
2010 and 2011. Online customer review data is used for various purposes. The majority of the studies
we review draw upon online customer review data to examine either the effect on sales, review
helpfulness, or biases and fraud in eWOM. Other approaches concentrate on a particular business case
and study customers' opinions on certain products. The results of these studies are not consistent
throughout, and only a few phenomena, such as the overly positive average customer rating, are
generally agreed upon. The causal relations between a product's sales and reviews, review helpfulness
and possible manipulation appear to be intertwined and dependent on the business context. We observe
that research has so far been restricted to a relatively limited set of products and review data sources.
Our understanding of the characteristics of online customer reviews, and especially of their role in
purchase decision making, remains limited.
Proceedings of the 21st European Conference on Information Systems
Based on our in-depth insights, we derive a twofold research agenda. First, we focus
on the research topics that have been addressed using customer review data. We use our literature
review to identify gaps in this area and outline important research questions that should be addressed
(cf. Table 2). Second, we focus on the data that has been used to derive these insights. We identify
potential biases in previous studies and formulate research questions to address these concerns (cf.
Table 3).
Research Topic: Effect on Sales
Findings in literature review: Inconsistencies between findings of different studies with regard to the
impact of customer reviews on sales.
Research questions to be addressed: How can the inconsistencies between previous studies be
resolved? What are the major drivers?

Findings in literature review: The direction of the causal relationship between sales, prices and review
valence is disputable.
Research questions to be addressed: Is there a reciprocal effect between sales, prices and review
valence? How are firms' pricing strategies influenced by sales and review valence?

Findings in literature review: Most papers focus on one vendor that is influenced by the customer
reviews. However, the impact on the whole market remains unclear.
Research questions to be addressed: Does the influence of customer reviews differ between different
vendor types and qualities?

Research Topic: Bias and Fraud
Findings in literature review: A variety of biases, such as the underreporting, purchase or price bias,
have been identified. Further research on the impact of these biases is crucial to ensure the validity of
results derived from customer reviews. Deliberate manipulations are largely ignored.
Research questions to be addressed: How does the impact of the identified biases differ between
different systems? How do incentives for customer reviews influence the occurrence of the biases?
How can context-specific information be used to extend the usefulness of the aggregated product
rating? How can review manipulations be detected?

Research Topic: Review Helpfulness
Findings in literature review: Studies have mainly focused on identifying drivers of helpfulness
ratings based on existing reviews.
Research questions to be addressed: How can customer review systems be designed to incentivize
more helpful reviews? How does indicated helpfulness influence customer decision making?

Table 2. The effect of customer reviews: Findings and Research Opportunities
The results of this paper have important implications for eWOM researchers. We analyze the state
of the art in the literature based on online customer reviews. To the best of our knowledge, no study with
similar contributions on such a broad literature base has been conducted so far. We reveal that
obtaining reliable and generalizable results in this research area is, despite the mass of publicly
available data, very complex, and the common knowledge generated so far is rather limited. In order to
fully exploit these promising datasets, we provide an agenda of starting points for new
research projects. The detailed insights into study designs and methods enable researchers to make
informed decisions about their methodology in future studies, but also call researchers' attention to
potential pitfalls when relying on previous findings in this area.
Findings in literature review: Most studies focus on a similar set of products, mostly entertainment,
electronics and books.
Research questions to be addressed: What is the impact of customer reviews in the context of
different product groups (e.g. clothes, services)?

Findings in literature review: Most studies are limited to a small set of review websites. Therefore, we
doubt that the current research results are applicable to customer reviews in general. Furthermore,
studies have mainly focused on products or reviews as the unit of analysis and neglected the review
system artifact.
Research questions to be addressed: What is the impact of the customer review system itself? How do
differences in the design or the functionality influence the creation and the use of customer reviews?

Findings in literature review: Most of the studies are conducted against an Anglo-American or Asian
background. Little is known about the influence of online customer reviews in different markets
(except Chaves et al. (2011) and Koh et al. (2010)).
Research questions to be addressed: What is the impact of intercultural differences on (a) the content
of reviews, (b) their length, (c) perceived helpfulness, (d) participation in reviewing activities, and
(e) the influence of customer reviews on sales?

Table 3. Customer reviews as a data source: Findings and Research Opportunities
References
Archak, N., Ghose, A., Ipeirotis, P. G. (2011). Deriving the Pricing Power of Product Features by
Mining Consumer Reviews. Management Science, 57 (8), 1485–1509.
Breazeale, M. (2009). Word of mouse. International Journal of Market Research, 51 (3), 297–318.
Cao, Q., Duan, W., Gan, Q. (2011). Exploring determinants of voting for the "helpfulness" of online
user reviews: A text mining approach. Decision Support Systems, 50 (2), 511–521.
Chaves, M. S., Gomes, R., Pedron, C. (2011). Analysing reviews in the Web 2.0: Small and medium
hotels in Portugal. Tourism Management, (in press).
Chen, Y., Fay, S., Wang, Q. (2011). The Role of Marketing in Social Media: How Online Consumer
Reviews Evolve. Journal of Interactive Marketing, 25 (2), 85–94.
Chen, Y., Xie, J. (2008). Online Consumer Review: Word-of-Mouth as a New Element of Marketing
Communication Mix. Management Science, 54 (3), 477–491.
Cheung, C. M. K., Thadani, D. R. (2012). The impact of electronic word-of-mouth communication: A
literature analysis and integrative model. Decision Support Systems, 54 (1), 461–470.
Chevalier, J. A., Mayzlin, D. (2006). The Effect of Word of Mouth on Sales: Online Book Reviews.
Journal of Marketing Research, 43 (3), 345–354.
Chintagunta, P. K., Gopinath, S., Venkataraman, S. (2010). The Effects of Online User Reviews on
Movie Box Office Performance: Accounting for Sequential Rollout and Aggregation Across Local
Markets. Marketing Science, 29 (5), 944–957.
Clemons, E. K., Gao, G. "Gordon" (2008). Consumer informedness and diverse consumer purchasing
behaviors: Traditional mass-market, trading down, and trading out into the long tail. Electronic
Commerce Research and Applications, 7 (1), 3–17.
Clemons, E. K., Gao, G. "Gordon", Hitt, L. M. (2006). When Online Reviews Meet
Hyperdifferentiation: A Study of the Craft Beer Industry. Journal of Management Information
Systems, 23 (2), 149–171.
Davis, A., Khazanchi, D. (2008). An Empirical Study of Online Word of Mouth as a Predictor for
Multi-product Category e-Commerce Sales. Electronic Markets, 18 (2), 130–141.
Decker, R., Trusov, M. (2010). Estimating aggregate consumer preferences from online product
reviews. International Journal of Research in Marketing, 27 (4), 293–307.
Dellarocas, C. (2003). The Digitization of Word of Mouth: Promise and Challenges of Online
Feedback Mechanisms. Management Science, 49 (10), 1407–1424.
Dellarocas, C., Gao, G. "Gordon", Narayan, R. (2010). Are Consumers More Likely to Contribute
Online Reviews for Hit or Niche Products? Journal of Management Information Systems, 27 (2),
127–157.
Dellarocas, C., Zhang, X., Awad, N. F. (2007). Exploring the value of online product reviews in
forecasting sales: The case of motion pictures. Journal of Interactive Marketing, 21 (4), 23–45.
Duan, W., Gu, B., Whinston, A. B. (2008a). The dynamics of online word-of-mouth and product
sales – An empirical investigation of the movie industry. Journal of Retailing, 84 (2), 233–242.
Duan, W., Gu, B., Whinston, A. B. (2008b). Do online reviews matter? – An empirical investigation
of panel data. Decision Support Systems, 45 (4), 1007–1016.
Duan, W., Gu, B., Whinston, A. B. (2009). Informational Cascades and Software Adoption on the
Internet: An Empirical Investigation. MIS Quarterly, 33 (1), 23–48.
Feng, J., Papatla, P. (2011). Advertising: Stimulant or Suppressant of Online Word of Mouth? Journal
of Interactive Marketing, 25 (2), 75–84.
Forman, C., Ghose, A., Wiesenfeld, B. (2008). Examining the Relationship Between Reviews and
Sales: The Role of Reviewer Identity Disclosure in Electronic Markets. Information Systems
Research, 19 (3), 291–313.
German Academic Association for Business Research (2011). VHB-JOURQUAL 2.1. Retrieved 31
March 2012, from http://vhbonline.org/en/service/jourqual/vhb-jourqual-21-2011/.
Ghose, A., Ipeirotis, P. G. (2011). Estimating the Helpfulness and Economic Impact of Product
Reviews: Mining Text and Reviewer Characteristics. IEEE Transactions on Knowledge & Data
Engineering, 23 (10), 1498–1512.
Godes, D., Mayzlin, D. (2004). Using Online Conversations to Study Word-of-Mouth
Communication. Marketing Science, 23 (4), 545–560.
Harvey, C., Kelly, A., Morris, H., Rowlinson, M. (2010). Academic Journal Quality Guide. The
Association of Business Schools, London, United Kingdom.
Hennig-Thurau, T., Gwinner, K. P., Walsh, G., Gremler, D. D. (2004). Electronic word-of-mouth via
consumer-opinion platforms: What motivates consumers to articulate themselves on the Internet?
Journal of Interactive Marketing, 18 (1), 38–52.
Hu, N., Bose, I., Gao, Y., Liu, L. (2011). Manipulation in digital word-of-mouth: A reality check for
book reviews. Decision Support Systems, 50 (3), 627–635.
Hu, N., Bose, I., Koh, N. S., Liu, L. (2012). Manipulation of online reviews: An analysis of ratings,
readability, and sentiments. Decision Support Systems, 52 (3), 674–684.
Hu, N., Liu, L., Sambamurthy, V. (2011). Fraud detection in online consumer reviews. Decision
Support Systems, 50 (3), 614–626.
Hu, N., Pavlou, P. A., Zhang, J. (2009). Overcoming the J-shaped Distribution of Product Reviews.
Communications of the ACM, 52 (10), 144–147.
Hu, Y., Li, X. (2011). Context-Dependent Product Evaluations: An Empirical Analysis of Internet
Book Reviews. Journal of Interactive Marketing, 25 (3), 123–133.
Hughes, S., Cohen, D. (2011). Can Online Consumers Contribute to Drug Knowledge? A Mixed-
Methods Comparison of Consumer-Generated and Professionally Controlled Psychotropic
Medication Information on the Internet. Journal of Medical Internet Research, 13 (3), e53.
Jansen, J. (2010). Online Product Research. Pew Research Center's Internet & American Life Project,
Washington, D.C.
Jeacle, I., Carter, C. (2011). In TripAdvisor we trust: Rankings, calculative regimes and abstract
systems. Accounting, Organizations & Society, 36 (4/5), 293–309.
Katz, E., Lazarsfeld, P. F. (1955). Personal Influence: The part played by people in the flow of mass
communications. Free Press, Glencoe, Illinois.
Koh, N. S., Hu, N., Clemons, E. K. (2010). Do online reviews reflect a product's true perceived
quality? An investigation of online movie reviews across cultures. Electronic Commerce Research
and Applications, 9 (5), 374–385.
Korfiatis, N., García-Bariocanal, E., Sánchez-Alonso, S. (2012). Evaluating content quality and
helpfulness of online product reviews: The interplay of review helpfulness vs. review content.
Electronic Commerce Research and Applications, 11 (3), 205–217.
Lee, J., Lee, J.-N., Shin, H. (2011). The long tail or the short tail: The category-specific impact of
eWOM on sales distribution. Decision Support Systems, 51 (3), 466–479.
Li, X., Hitt, L. M. (2008). Self-Selection and Information Role of Online Product Reviews.
Information Systems Research, 19 (4), 456–474.
Li, X., Hitt, L. M. (2010). Price Effects in Online Product Reviews: An Analytical Model and
Empirical Analysis. MIS Quarterly, 34 (4), 809–831.
Li, Y.-M., Lin, C.-H., Lai, C.-Y. (2010). Identifying influential reviewers for word-of-mouth
marketing. Electronic Commerce Research & Applications, 9 (4), 294–304.
Money, A. G., Barnett, J., Kuljis, J. (2011). Public Claims about Automatic External Defibrillators: An
Online Consumer Opinions Study. BMC Public Health, 11 (Supplement 4), 332–345.
Mudambi, S. M., Schuff, D. (2010). What Makes a Helpful Online Review? A Study of Customer
Reviews on Amazon.com. MIS Quarterly, 34 (1), 185–200.
Nelson, M. K. (2008). Watching Children. Journal of Family Issues, 29 (4), 516–538.
Okoli, C., Schabram, K. (2010). A Guide to Conducting a Systematic Literature Review of
Information Systems Research. Sprouts: Working Papers on Information Systems, 10 (26), 1–49.
Pan, Y., Zhang, J. Q. (2011). Born Unequal: A Study of the Helpfulness of User-Generated Product
Reviews. Journal of Retailing, 87 (4), 598–612.
Park, J., Gu, B., Lee, H. (2012). The relationship between retailer-hosted and third-party hosted WOM
sources and their influence on retailer sales. Electronic Commerce Research and Applications, 11
(3), 253–261.
Schlosser, A. E. (2011). Can including pros and cons increase the helpfulness and persuasiveness of
online reviews? The interactive effects of ratings and arguments. Journal of Consumer Psychology,
21 (3), 226–239.
Sen, S., Lerman, D. (2007). Why are you telling me this? An examination into negative consumer
reviews on the Web. Journal of Interactive Marketing, 21 (4), 76–94.
Webster, J., Watson, R. T. (2002). Analyzing the Past to Prepare for the Future: Writing a Literature
Review. MIS Quarterly, 26 (2), 13–23.
Willemsen, L. M., Neijens, P. C., Bronner, F., De Ridder, J. A. (2011). 'Highly Recommended!' The
Content Characteristics and Perceived Usefulness of Online Consumer Reviews. Journal of
Computer-Mediated Communication, 17 (1), 19–38.
Xu, K., Liao, S. S., Li, J., Song, Y. (2011). Mining comparative opinions from customer reviews for
competitive intelligence. Decision Support Systems, 50 (4), 743–754.
Yang, J., Mai, E. (Shirley) (2010). Experiential goods with network externalities effects: An empirical
study of online rating system. Journal of Business Research, 63 (9/10), 1050–1057.
Yang, Z., Fang, X. (2004). Online service quality dimensions and their relationships with satisfaction:
A content analysis of customer reviews of securities brokerage services. International Journal of
Service Industry Management, 15 (3), 302–327.
Yang, Z., Jun, M., Peterson, R. T. (2004). Measuring customer perceived online service quality.
International Journal of Operations & Production Management, 24 (11), 1149–1174.
Ye, Q., Law, R., Gu, B. (2009). The impact of online user reviews on hotel room sales. International
Journal of Hospitality Management, 28 (1), 180–182.
Ye, Q., Law, R., Gu, B., Chen, W. (2011). The influence of user-generated content on traveler
behavior: An empirical investigation on the effects of e-word-of-mouth to hotel online bookings.
Computers in Human Behavior, 27 (2), 634–639.
Zhang, J. Q., Craciun, G., Shin, D. (2010). When does electronic word-of-mouth matter? A study of
consumer product reviews. Journal of Business Research, 63 (12), 1336–1341.
Zhang, Z., Ye, Q., Law, R., Li, Y. (2010). The impact of e-word-of-mouth on the online popularity of
restaurants: A comparison of consumer reviews and editor reviews. International Journal of
Hospitality Management, 29 (4), 694–700.
Zhou, W., Duan, W. (2012). Online user reviews, product variety, and the long tail: An empirical
investigation on online software downloads. Electronic Commerce Research and Applications, 11
(3), 275–289.
Zhu, F., Zhang, X. (Michael) (2010). Impact of Online Consumer Reviews on Sales: The Moderating
Role of Product and Consumer Characteristics. Journal of Marketing, 74 (2), 133–148.
Proceedings of the 21st European Conference on Information Systems
12
... They are sometimes referred to as electronic word-of-mouth (eWOM), as they capture the characteristics of traditional word-of-mouth communication with the exception that they're communicated on the internet De Maeyer, (2012). Hening-Thurau et al (2004) are credited with providing a widely-used definition for electronic word of mouth (eWOM), as "any positive or negative statement made by potential, actual, or former customers about a product or company, which is made available to a multitude of people and institutions via the Internet" (in Manuel and Berger, 2013). ...
... This portends a risk to the validity and reliability of research based on online customer reviews data. With references to researches by a number of different researchers, Manuel and Berger (2013) reported that on average, online customer reviews can be overly positive, and the possibility of customer reviews being manipulated towards highly popular products. ...
Thesis
Full-text available
At the moment, the largest amount of tissue paper products are consumed in East Asia (which includes China and Japan), Europe and North America, and between 2010 and 2023, consumption of tissue paper products is expected to grow by 3% with the largest growth rate in China. Meanwhile, toilet paper, kitchen towels and facial tissues, combined make up 85.2% of all tissue paper products consumed in 2018. While different factors can be named as responsible for this growth in consumption, it is quite difficult to isolate the product-related factors that contribute to the growth of consumption of tissue paper in different countries due to the fact that tissue products have limited uniform specifications between different suppliers. This means, tissue suppliers and tissue machine manufacturers, like Valmet, are left with the challenge of understanding customers’ motivations behind demand of different tissue paper products. This is the motivation behind this study, which is to investigate what qualities of toilet paper, facial tissues and kitchen towels, are customers in the US, China and Japan satisfied or dissatisfied with. This is compared with measurements of the qualities of these products, like softness, strength, thickness, to determine the ranges of values of the products’ qualities that customers are satisfied with. Online reviews of 34 different brands of tissue paper products are collected ethically from online retailer websites (Amazon US, Amazon Japan, Walmart, JD.com), and analyzed using the thematic analysis methodology to identify the list of qualities of tissue products that customers are satisfied or dissatisfied with. Then, the Pareto rule was used to identify the top 20% most important qualities to customers. These 34 brands were also purchased from these online retailers and their different qualities were measured. 
The results and conclusion from the study revealed different insights, among which is that softness is a must-be quality for toilet papers in all countries studied, while water absorption is a must-be quality for kitchen towels. The study also revealed ranges of the numerical values of these qualities that are acceptable to end consumers. Valmet intends to use the conclusions from the study in engagements with tissue paper suppliers in the three countries regarding product strategies and choice of technology.
... The reviews' reliability is of particular importance to our current investigation. A variety of review biases have been reported [36]. People who think a product is of very low or high quality are more likely to write a review than those who believe the product is of average quality (the 'underreporting' bias; [37,38]. ...
... Using a python script, we harvested reviews of four domestic robot types: vacuum cleaners, pool cleaners, lawnmowers, and grill cleaners. Amazon was selected because of its size and popularity and because it is a commonly used source for analyzing customer experiences [36,45]. We chose the robot types and specific models based on the number of reviews on Amazon at the time of collection. ...
Preprint
Full-text available
There is a knowledge gap regarding which types of failures robots undergo in domestic settings and how these failures influence customer experience. We classified 10,072 customer reviews of small utilitarian domestic robots on Amazon by the robotic failures described in them, grouping failures into twelve types and three categories (Technical, Interaction, and Service). We identified sources and types of failures previously overlooked in the literature, combining them into an updated failure taxonomy. We analyzed their frequencies and relations to customer star ratings. Results indicate that for utilitarian domestic robots, Technical failures were more detrimental to customer experience than Interaction or Service failures. Issues with Task Completion and Robustness & Resilience were commonly reported and had the most significant negative impact. Future failure-prevention and response strategies should address the technical ability of the robot to meet functional goals, operate and maintain structural integrity over time. Usability and interaction design were less detrimental to customer experience, indicating that customers may be more forgiving of failures that impact these aspects for the robots and practical uses examined. Further, we developed a Natural Language Processing model capable of predicting whether a customer review contains content that describes a failure and the type of failure it describes. With this knowledge, designers and researchers of robotic systems can prioritize design and development efforts towards essential issues.
... İnternet ortamında ürün ve hizmetlerle alakalı tüketici yorumlarının metin madenciliği ile işlenmesi neticesinde ürün ve hizmetler hakkında önemli bilgiler elde edilmektedir. Bu aşamada literatür tarandığında tüketici yorumları üzerine pek çok araştırma bulunmaktadır (Trenz ve Berger, 2013;Melián-González, Bulchand-Gidumal, ve López-Valcárcel, 2013;Wei ve Lu, 2013;Öğüt ve Taş, 2012;Somprasertsri ve Lalitrojwong, 2010;Zhan, Loh, ve Liu, 2009). Bu araştırmalarda genellikle tüketici yorumları üzerinde gerçekleştirilen çeşitli analiz teknikleri ile bilgi, örüntü çıkarma amaçlanmıştır. ...
Article
Full-text available
Araştırmanın amacı, TripAdvisor kullanıcılarının Türkçe ve İngilizce yorumlarındaki duygusal eğilimlerin ortaya çıkarılması ve sınıflandırılmasında kullanılan duygu analizi yöntemlerini karşılaştırmaktır. Amaç kapsamında makine öğrenme yöntemlerinden Decision Tree, Random Forest gibi sınıflandırma algoritmaları kullanılmıştır. Nicel araştırma özelliği gösteren bu çalışma kapsamında veriler, TripAdvisor turizm portalından web kazıma tekniği ile elde edilmiştir. Amaçsal örnekleme yönteminin benimsendiği bu çalışmada verilerin analiz edilmesi sürecinde duygu analizi yöntemi kullanılmıştır. Veri analiz sürecinde açık kaynak kodlu KNİME veri madenciliği programından yararlanılmıştır. Araştırma neticesinde makine öğrenme algortimalarının sözlük tabanlı analize göre daha etkin sınıflandırma gerçekleştirdiği görülmüştür. Ayrıca makine öğrenme algortimaları sınıflandırma aşamasında Türkçe dilindeki yorumlarda daha başarılı sonuçlar üretmiştir. The aim of the research is to compare the sentiment analysis methods used to reveal and classify the emotional tendencies in Turkish and English comments of hotel users. Within purpose, classification algorithms such as Decision Tree and Random Forest from machine learning methods were used. The data was obtained from Tripadvisor tourism portal with web scraping/mining technique within the scope of this study, which shows quantitative research feature. A purposeful sampling method was used in this study. Emotion analysis, which is one of the text mining applications, was used to analyze the data. KNIME Analytics Platform was used in the data analysis process. As a result of the research, it was seen that the machine learning algorithms performed more effective classification than dictionary-based analysis. In addition, the machine learning algorithms produced more successful results in the Turkish language comments at the classification stage.
... Online customer reviews can be considered as a part of marketing which can serve as an effective marketing channel for firms to boost their sales and awareness at a very low cost (Zhao et al., 2019). Online customer reviews help to shape the purchase decision of a product and product awareness (Trenz and Berger, 2013). Based on the research done by Nielsen (2015) more than two-third of the consumers believe online reviews when they make their purchasing decision. ...
Article
Full-text available
Customer opinions have a profound impact on both business organizations and customers. This research is focused on validating the impact of online client reviews on sales performance of the online stores based on an evidence-based approach. The research has been carried out as a systematic review which is a review of recently published academic research articles with the purpose ofsynthesizing current knowledge of the field of online client reviews. 15 quality journal articles published in Emerald and Elsevier have been analysed to reach the proposed objectives. This research fills the identified research gap by reporting a rich conclusion with the support of identified evidence followed by implications and logical recommendations. The findings of this review show that online client reviews play a critical role in today’s online businesses and it has two main categories namely numerical reviews and textual reviews. Further, it has been found that both numerical and textual reviews are positively correlated with sales performance. Hence, the overall impact of online client reviews on sales performance is positive. It has been concluded that textual reviews have a greater influence on the sales performance of online stores.
... Reviews are the sample used most in all four of the fields considered here, followed by general participants, students, and customers. In line with the findings of Trenz and Berger (2013), our results indicate that review data sources are limited to some review platforms from various industries, Amazon, TripAdvisor, Yelp.com, eBay, and some hotel booking sites being the most popular. The size and popularity of a review platform are the primary selection criteria (Cao, Duan, & Gan, 2011;Forman, Ghose, & Wiesenfeld, 2008). ...
Article
The objective of this paper is to conduct a systematic review of the literature on online consumer reviews (OCRs) in order to provide understanding of the multi-featured nature and complexity of such reviews and assist researchers and practitioners. A total of 234 papers covering a publication period from 2000 to July 2019 were included in our systematic analysis and a fivefactor communication process framework served as a classification scheme. In addition to the insights into OCRs in terms of publication outlets, methodology, and data sources, the most commonly used features, most frequently examined response-based features, and consistent findings/discrepancies between previous studies are discussed in the synthesized results. Research trends identified during the observation period and future research directions are also highlighted. This study also made an attempt to develop an integrative framework of the five feature categories with the intention of providing a comprehensive picture of OCR scope.
... In our current day and age, reviews are part of almost every product/service provided on the internet [14], as seen in [8] it is the primary way for a company to get an understanding concerning the amount of success their product has and as examined in [7] for the customer to build trust in purchasing or using a service of which only a description or a picture exits. Therefore, a need for a deeper understanding and analysis of those reviews are needed [9] for any individual who wishes to derive various consequences regarding a product. Standard methods for such insight derivation include sentiment analysis, around which we will formulate a new approach for review rating classification. ...
Preprint
Full-text available
Typical use cases of sentiment analysis usually revolve around assessing the probability of a text belonging to a certain sentiment and deriving insight concerning it; little work has been done to explore further use cases derived using those probabilities in the context of rating. In this paper, we redefine the sentiment proportion values as building blocks for a triangle structure, allowing us to derive variables for a new formula for classifying text given in the form of product reviews into a group of higher and a group of lower ratings and prove a dependence exists between the sentiments and the ratings.
... E-wom, "belirli mal ve hizmetlerin veya satıcılarının kullanımı veya özellikleri ile ilgili internet tabanlı teknoloji ile tüketicilere yönelik iletişimdir" (Litvin, Goldsmith ve Pan, 2008). Başka bir ifadeyle e-wom, potansiyel veya eski müşteriler tarafından bir ürün veya işletme hakkında yapılan olumlu veya olumsuz yorumlar olarak tanımlanmaktadır (Trenz ve Berger, 2013). Ağızdan ağıza iletişime dayalı pazarlama yaklaşımı, ağızdan ağıza pazarlama olarak bilinir. ...
Article
Full-text available
The aim of this study is to determine the mediating role of e-WOM in the effect of attitudes toward content marketing on online platforms on green product purchasing behavior. The questionnaire developed for this purpose was administered online via Google Drive, yielding 438 usable responses. Analysis methods appropriate to the research aims were applied to the data: correlation between variables, regression analysis in SPSS PROCESS, and the Sobel test. The correlation analysis indicates a moderate relationship between the variables. According to the regression results, attitude toward content marketing affects both e-WOM and green product purchase intention, and attitude toward content marketing and e-WOM together explain green product purchasing behavior to a greater extent. Finally, e-WOM mediates the effect of attitude toward content marketing on green product purchasing behavior; this result was confirmed by the Sobel test, which indicated a partial mediation effect. In short, content created about green products and the environment influences green product purchasing behavior when associated with a brand or product online, and this effect is stronger when content marketing is supported by e-WOM.
... E-commerce platforms have traditionally adopted numeric metrics, such as review ratings and review helpfulness, to assist consumers in evaluating products. Although these metrics are useful, studies have shown that numeric ratings suffer from under-reporting bias driven by various factors (Trenz et al., 2013). In contrast, the review text embeds rich descriptions of product features and quality. ...
Conference Paper
Full-text available
The increased Internet usage has driven a rapid growth of e-commerce transactions. One of the key determinants of the increased online transactions is the influence of electronic word-of-mouth (eWOM) in the form of online reviews. In particular, comparative reviews that compare similar products provide valuable information for consumers to evaluate multiple products and play a pivotal role in driving consumer purchase decisions. By constructing a product network based on products connected by comparative reviews, we develop several new network centrality measures and empirically examine the impact of eWOM through these new centrality measures and the semantic similarity of the comparative reviews. We find that the comparative reviews are key eWOM measures that influence the product's sales within a product network. Our findings also demonstrate that the text semantic similarity is a better measure of the strength of tie in a comparative product network than the review sentiment. Our study contributes to the eWOM literature by utilizing text review semantic similarity to capture review strength based on the latent product features, and to the network graph theory through the new centrality measures we have developed. Overall, our findings provide important insights for e-commerce platform operators and vendors to leverage the impact of eWOM and help consumers compare products in a more effective manner.
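The network construction described above can be sketched in pure Python. The product names and the plain degree-centrality measure are illustrative assumptions; the paper develops several new, more elaborate centrality measures.

```python
from collections import defaultdict

def build_product_network(comparative_reviews):
    """Each comparative review links the two products it compares;
    products become nodes, comparisons become undirected edges."""
    graph = defaultdict(set)
    for product_a, product_b in comparative_reviews:
        graph[product_a].add(product_b)
        graph[product_b].add(product_a)
    return graph

def degree_centrality(graph):
    """Fraction of the other products each product is compared against."""
    n = len(graph)
    return {node: len(neighbors) / (n - 1) for node, neighbors in graph.items()}

# Hypothetical comparative reviews among four cameras.
reviews = [("camA", "camB"), ("camA", "camC"), ("camB", "camC"), ("camA", "camD")]
g = build_product_network(reviews)
centrality = degree_centrality(g)
# camA is compared against all three other products, so its centrality is 1.0.
```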
Article
Customer loyalty is important in marketing because it represents the desire of customers, employees, or friends to make personal investments or sacrifices to strengthen the relationship between them. The purpose of this study is to determine the role of trust in mediating the effect of eWOM and shopping experience on customer loyalty among Tokopedia application users in Denpasar. The subjects of this research are Tokopedia users; questionnaires were distributed to 105 respondents through online media, having first been compiled as a Google Forms link. The sample was determined using a non-probability sampling method, namely purposive sampling. The data were analyzed using the SEM-PLS technique. The results of this study show that eWOM has a positive and significant effect on customer loyalty, eWOM has a positive and significant effect on trust, shopping experience has a positive and significant effect on customer loyalty, shopping experience has a positive and significant effect on trust, trust has a positive and significant effect on customer loyalty, trust is able to mediate the effect of eWOM on customer loyalty, and trust is able to mediate the effect of shopping experience on customer loyalty. The practical implication of these findings is that this study can be used as a basic model to evaluate Tokopedia's strategy for eWOM, shopping experience, trust, and customer loyalty. The statistical results show that, among the three variables that affect customer loyalty, the eWOM variable has the highest path coefficient, which can serve as a reference for Tokopedia management in devising strategies to increase customer loyalty.
Article
Full-text available
With the rapid growth of the Internet, the ability of users to create and publish content has created active electronic communities that provide a wealth of product information. However, the high volume of reviews that are typically published for a single product makes harder for individuals as well as manufacturers to locate the best reviews and understand the true underlying quality of a product. In this paper, we reexamine the impact of reviews on economic outcomes like product sales and see how different factors affect social outcomes such as their perceived usefulness. Our approach explores multiple aspects of review text, such as subjectivity levels, various measures of readability and extent of spelling errors to identify important text-based features. In addition, we also examine multiple reviewer-level features such as average usefulness of past reviews and the self-disclosed identity measures of reviewers that are displayed next to a review. Our econometric analysis reveals that the extent of subjectivity, informativeness, readability, and linguistic correctness in reviews matters in influencing sales and perceived usefulness. Reviews that have a mixture of objective, and highly subjective sentences are negatively associated with product sales, compared to reviews that tend to include only subjective or only objective information. However, such reviews are rated more informative (or helpful) by other users. By using Random Forest-based classifiers, we show that we can accurately predict the impact of reviews on sales and their perceived usefulness. We examine the relative importance of the three broad feature categories: “reviewer-related” features, “review subjectivity” features, and “review readability” features, and find that using any of the three feature sets results in a statistically equivalent performance as in the case of using all available features. 
This paper is the first study that integrates econometric, text mining, and predictive modeling techniques toward a more complete analysis of the information captured by user-generated online reviews in order to estimate their helpfulness and economic impact.
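Text features of the kind this study examines can be sketched minimally. The cue-word list and the two metrics below are simplified assumptions for illustration, not the paper's actual feature set.

```python
import re

def review_text_features(review_text):
    """Crude review features: average sentence length (a simple readability
    proxy) and the share of 'subjective' cue words (a subjectivity proxy)."""
    sentences = [s for s in re.split(r"[.!?]+", review_text) if s.strip()]
    words = review_text.split()
    subjective_cues = {"great", "terrible", "love", "hate", "amazing", "awful"}
    n_subjective = sum(1 for w in words if w.lower().strip(".,!?") in subjective_cues)
    return {
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "subjectivity_ratio": n_subjective / max(len(words), 1),
    }

feats = review_text_features("I love this camera. The battery life is great!")
# Two sentences of 4 and 5 words, with two subjective cue words.
```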
Article
Full-text available
Online users often need to make adoption decisions without accurate information about the product values. An informational cascade occurs when it is optimal for an online user, having observed others' actions, to follow the adoption decision of the preceding individual without regard to his own information. Informational cascades are often rational for individual decision making; however, they may lead to adoption of inferior products. With easy availability of information about other users' choices, the Internet offers an ideal environment for informational cascades. In this paper, we empirically examine informational cascades in the context of online software adoption. We find user behavior in adopting software products is consistent with the predictions of the informational cascades literature. Our results demonstrate that online users' choices of software products exhibit distinct jumps and drops with changes in download ranking, as predicted by informational cascades theory. Furthermore, we find that user reviews have no impact on user adoption of the most popular product, while having an increasingly positive impact on the adoption of lower ranking products. The phenomenon persists after controlling for alternative explanations such as network effects, word-of-mouth effects, and product diffusion. Our results validate informational cascades as an important driver for decision making on the Internet. The finding also offers an explanation for the mixed results reported in prior studies with regard to the influence of online user reviews on product sales. We show that the mixed results could be due to the moderating effect of informational cascades.
Article
We analyze how online reviews are used to evaluate the effectiveness of product differentiation strategies based on the theories of hyperdifferentiation and resonance marketing. Hyperdifferentiation says that firms can now produce almost anything that appeals to consumers and they can manage the complexity of the increasingly diverse product portfolios that result. Resonance marketing says that informed consumers will purchase products that they actually truly want. When consumers become more informed, firms that provide highly differentiated products should experience higher growth rates than firms with less differentiated offerings. We construct measures of product positioning based on online ratings and find supportive evidence using sales data from the craft beer industry. In particular, we find that the variance of ratings and the strength of the most positive quartile of reviews play a significant role in determining which new products grow fastest in the marketplace. This supports our expectations for resonance marketing.
Article
This study analyzes the value of retailer- and third-party-hosted WOM by investigating how WOM valences and volumes from multiple sources interact with one another to influence retailer sales. Consumer opinions, experiences, and product recommendations posted on online WOM sites have become a major information source for consumer purchase decisions. Previous literature shows that WOM information can influence retailer sales in two ways – volume and valence – but most researchers investigate these two WOM effects separately. In reality, consumers evaluate volumes and valences jointly from multiple WOM sources for their purchase decisions; that is, there would be an interaction effect between them. Therefore, this study investigates how WOM valences and volumes at both retailer and third-party review web sites interact with one another to influence retailer sales. We collect sales rank data for 145 camera products from Amazon for a period of four months, and the corresponding online review data from Amazon and CNet for the same period. Our analysis shows that WOM valence interacts positively with its own volume at both sources. We also find that retailer-hosted WOM valence interacts negatively with third-party-hosted WOM volume. Our findings indicate the importance of considering interaction effects between WOM sources.
Article
The notion of electronic word-of-mouth (eWOM) communication has received considerable attention in both business and academic communities. Numerous studies have been conducted to examine the effectiveness of eWOM communication. The scope of published studies on the impact of eWOM communication is large and fragmented and little effort has been made to integrate the findings of prior studies and evaluate the status of the research in this area. In this study, we conducted a systematic review of eWOM research. Building upon our literature analysis, we used the social communication framework to summarize and classify prior eWOM studies. We further identified key factors related to the major elements of the social communication literature and built an integrative framework explaining the impact of eWOM communication on consumer behavior. We believe that the framework will provide an important foundation for future eWOM research work.
Article
Introduction While product review systems that collect and disseminate opinions about products from recent buyers (Table 1) are valuable forms of word-of-mouth communication, evidence suggests that they are overwhelmingly positive. Kadet notes that most products receive almost five stars. Chevalier and Mayzlin also show that book reviews on Amazon and Barnes & Noble are overwhelmingly positive. Is this because all products are simply outstanding? However, a graphical representation of product reviews reveals a J-shaped distribution (Figure 1) with mostly 5-star ratings, some 1-star ratings, and hardly any ratings in between. What explains this J-shaped distribution? If products are indeed outstanding, why do we also see many 1-star ratings? Why aren't there any product ratings in between? Is it because there are no "average" products? Or, is it because there are biases in product review systems? If so, how can we overcome them? The J-shaped distribution also creates some fundamental statistical problems. Conventional wisdom assumes that the average of the product ratings is a sufficient proxy of product quality and product sales. Many studies used the average of product ratings to predict sales. However, these studies showed inconsistent results: some found product reviews to influence product sales, while others did not. The average is statistically meaningful only when it is based on a unimodal distribution, or when it is based on a symmetric bimodal distribution. However, since product review systems have an asymmetric bimodal (J-shaped) distribution, the average is a poor proxy of product quality. This report aims to first demonstrate the existence of a J-shaped distribution, second to identify the sources of bias that cause the J-shaped distribution, third to propose ways to overcome these biases, and finally to show that overcoming these biases helps product review systems better predict future product sales. 
We tested the distribution of product ratings for three product categories (books, DVDs, videos) with data from Amazon collected between February--July 2005: 78%, 73%, and 72% of the product ratings for books, DVDs, and videos are greater or equal to four stars (Figure 1), confirming our proposition that product reviews are overwhelmingly positive. Figure 1 (left graph) shows a J-shaped distribution of all products. This contradicts the law of "large numbers" that would imply a normal distribution. Figure 1 (middle graph) shows the distribution of three randomly-selected products in each category with over 2,000 reviews. The results show that these reviews still have a J-shaped distribution, implying that the J-shaped distribution is not due to a "small number" problem. Figure 1 (right graph) shows that even products with a median average review (around 3-stars) follow the same pattern.
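The statistical problem the report describes, that an average is misleading for an asymmetric bimodal (J-shaped) distribution, can be illustrated with hypothetical rating counts (not the Amazon data):

```python
from statistics import mean, median

# Hypothetical J-shaped distribution: many 5-star ratings, some 1-star,
# and hardly any ratings in between.
ratings = [5] * 70 + [4] * 8 + [3] * 3 + [2] * 4 + [1] * 15

print(mean(ratings))    # 4.14 -- suggests a uniformly "good" product...
print(median(ratings))  # 5

# ...yet only 3% of buyers rated the product near that mean (3 stars),
# so the average describes almost nobody's actual experience.
share_mid = sum(1 for r in ratings if r == 3) / len(ratings)
```

This is why, as the report argues, the average rating is a poor proxy for product quality under a J-shaped distribution: it falls in the sparsely populated middle of the scale.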
Article
Word of mouth by consumers is attracting increased attention from marketing scholars because of findings that it can affect brand perceptions and sales. There is limited empirical research, however, on the stimulants of consumer word of mouth. An assumption in the literature has been that increased advertising can also stimulate consumer word of mouth and, hence, complement the effects of advertising. We present arguments for why increased advertising may be associated with reductions in online word of mouth. We empirically test this possibility on online word of mouth in the auto industry. Our results suggest that increased advertising can, indeed, be associated with reductions in online consumer word of mouth.
Article
One guideline given to online reviewers is to acknowledge a product's pros and cons. Yet, I argue that presenting two sides is not always more helpful and can even be less persuasive than presenting one side. Specifically, the effects of two- versus one-sided arguments depend on the perceived consistency between a reviewer's arguments and rating. Across a content analysis and three experiments that vary the information provided in the online review and whether the ratings are positive or negative, the results support these predictions. Furthermore, beliefs that the reviewer is able (vs. willing) to tell the truth mediated the effects.