Manipulation of online reviews: An analysis of ratings, readability, and sentiments
Nan Hu a, Indranil Bose b,⁎, Noi Sian Koh c, Ling Liu a
a University of Wisconsin–Eau Claire, United States
b The University of Hong Kong, Hong Kong
c Singapore Management University, Singapore
Article info
Article history:
Received 11 May 2010
Received in revised form 29 September 2011
Accepted 3 November 2011
Available online 12 November 2011
Keywords:
Manipulation
Online reviews
Ratings
Readability
Runs test
Sentiments
Text mining
Abstract

As consumers become increasingly reliant on online reviews to make purchase decisions, the sales of a product become dependent on the word of mouth (WOM) that it generates. As a result, there can be attempts by firms to manipulate the online reviews of products in order to increase their sales. Despite suspicion about the existence of such manipulation, the amount of manipulation is unknown, and deciding which reviews to believe is largely left to the reader's discretion and intuition. Therefore, the success of firms' manipulation of reviews in generating sales of products is also unknown. In this paper, we propose a simple statistical method to detect online reviews manipulation, and assess how consumers respond to products with manipulated reviews. In particular, the writing style of reviewers is examined, and the effectiveness of manipulation through ratings, sentiments, and readability is investigated. Our analysis examines the textual information available in online reviews by combining sentiment mining techniques with readability assessments. We discover that around 10.3% of the products are subject to online reviews manipulation. In spite of the deliberate use of sentiments and ratings in manipulated products, consumers are only able to detect manipulation taking place through ratings, but not through sentiments. The findings from this research sound a note of caution for all consumers who rely on online reviews of books for making purchases, and encourage them to delve deeper into book reviews without being deceived by fraudulent manipulation.
© 2011 Elsevier B.V. All rights reserved.
1. Introduction
Consumers are increasingly relying on opinions posted on e-commerce websites to make a variety of decisions, ranging from what movies to watch to what stocks to invest in [17]. Previously, these decisions were based on advertisements or product information provided by vendors. However, with the proliferation of e-commerce and the increasing number of product reviews provided by users, it has been found that consumers increasingly rely on online reviews in their search for information related to a variety of products. Prior research has also found that consumers find such user-generated reviews more credible and trustworthy than traditional sources [3]. However, it is generally not known to what extent these online reviews are truthful user-generated reviews or merely reviews provided by vendors interested in pushing the sales of products. In addition, it is not clear how effectively vendors can use various mechanisms to manipulate online reviews and influence consumers' purchase decisions.
Following previous literature [22,23], we define reviews manipulation as vendors, publishers, writers, or any third party consistently monitoring the online reviews and posting non-authentic online reviews on behalf of customers when needed, with the goal of boosting the sales of their products. Based on the assumption that the writing style of authentic online reviews (e.g., readability, which will be defined later) should be random, we propose a non-parametric method to evaluate whether the reviews of a product as a whole, rather than individual reviews of each product, are manipulated, and whether consumers recognize such manipulation.
Reviews manipulation is not a hypothetical phenomenon. It is known to exist widely on popular websites related to e-commerce, travel, and music. For example, when Amazon.com's Canadian website accidentally revealed the true identities of some of its book reviewers due to software errors, it was found that a sizable proportion of these reviews were written by the books' own publishers, authors, and their friends or relatives [19]. This is also confirmed by our data on products with manipulated reviews (Fig. 1), in which we noticed the suspicious behavior of a customer who frequently posted positive reviews. He/she visited the website every few days to post reviews with different textual comments and very high ratings for a single item. Fig. 2 shows another case in which one reviewer plagiarized the content of another review.¹
⁎ Corresponding author at: School of Business, The University of Hong Kong, Pokfulam Road, Hong Kong. Tel.: +852 2241 5845; fax: +852 2858 561.
E-mail address: indranil_bose@yahoo.com (I. Bose).
¹ Our method focuses on detecting manipulation activity through observing non-random behavior, as shown in Fig. 1. Detecting the type of manipulation shown in Fig. 2 involves another technique, i.e., duplication detection, which is not covered in this paper.
doi:10.1016/j.dss.2011.11.002
Reviews manipulation is not just prevalent amongst book sellers. The music industry is known to hire professional marketers who surf various online chat rooms and fan sites to post positive comments about new albums [30,39]. It also exists in the hospitality industry centered around hotels and restaurants. Insiders of the travel industry have claimed that reviews in their industry have been manipulated, either by the owners or by the competitors.² The comments made by the manipulator of restaurant reviews are an eye opener: "I began tracking feedback about my restaurant on TripAdvisor's rants and raves page. It very quickly occurred to me that I could [write] in glowing reviews about my own restaurant and up my ratings numbers. After a period of time, I began to see my rating slide a bit after some not so positive postings by supposedly real customers. Were they posted by my competition? Perhaps, but I didn't let it concern me too much. I simply got on TripAdvisor and bombarded them with glowing reviews about my own restaurant! Within days, I was rated a perfect 5!" The well-known publisher of travel guides Frommer's remarked: "Why wouldn't a hotel submit a flurry of positive comments penned by employees or friends? If you were a hotel owner, wouldn't you take steps to make sure that TripAdvisor contained numerous favorable write-ups of your property? Who would fail to do this?"³
Although the various pieces of evidence in the above paragraph show that online reviews manipulation is a well-established industrial malpractice, and a serious problem in itself because consumers may make wrong purchase decisions based on such manipulated information, to date there have been few studies that have investigated and reported the presence of manipulated reviews in online review forums. To the best of our knowledge, there are only two recent research papers that have focused on proving the existence of online reviews manipulation [22,23]. However, this work does not offer ways to identify products whose reviews are manipulated. Also, [22,23] focus on using numeric ratings to detect the existence of online reviews manipulation, ignoring the rich textual contents of online reviews. In this paper, we go beyond the analysis of ratings to examine the textual content of reviews and propose a statistical Runs test method to identify products with reviews that are manipulated.⁴
Since participants of online review communities can assume any identity or choose to remain anonymous, marketers are able to disguise their promotion of products as consumer recommendations. In an online context, if potential customers knew which reviews were posted by real customers who consumed the product, and which reviews were written by authors, publishers, or any third parties with selfish interests, then those potential customers could undo the damage caused by these slanted reviews. Unfortunately, since all slanted reviews were written by anonymous entities or by manipulators who assumed a customer's identity, it was not easy for consumers to distinguish a slanted review from a truthful review written by a zealous customer by simply looking at the rating of a review. A manual inspection of the textual content of a single review could not totally solve that problem either, because it was still difficult to differentiate between truthful and manipulated reviews unless some parts of the manipulated review were identical to another review [7]. For unsuspecting customers it was almost impossible to detect the manipulation of product ratings as well as of the product-related emotional sentiments included in a review.
In this paper, we set out to discover the presence of manipulation
in online reviews of products and identify the effectiveness of the
promotional content within manipulated reviews on the sales of
products.
We specically address the following research questions:
1. To what extent is manipulation present in online reviews?
2. How can such manipulation be detected from the ratings and textual
content of reviews? What are some of the textual characteristics that
can be used to identify products with manipulated reviews?
Fig. 1. Examples of manipulated reviews.
² http://www.tripso.com/today/new-tripadvisor-whistleblower-claims-some-reviews-are-totally-fraudulent/
³ http://www.elliott.org/blog/does-tripadvisor-hotel-manipulation-scandal-render-the-site-completely-useless/
⁴ Note that our approach only identifies products with manipulated reviews but is unable to specifically pinpoint which reviews are manipulated.

Fig. 2. Duplication of online reviews.
3. What is the impact of reviews manipulation, in terms of rating and writing style, on the sales of products?

To answer the above questions, we need to find a way to identify products with manipulated reviews. We first describe the intuition behind the method for the detection of manipulated products. As writing style varies with the background of an individual, intuitively, reviews written by different consumers will be random in the case where there is no manipulation [21,24]. In other words, the writing style of the reviews and the review scores should be mutually independent and identically distributed with respect to time. Building on this intuition, we propose a method to detect manipulated products by examining the sequence of review ratings and the writing style of the textual reviews. Subsequently, we extract products with manipulated reviews and then analyze the impact of the manipulation of reviews on the sales of the products.
In the context of this research, writing style refers to how consumers construct sentences when they write online reviews. Reviews written by individual consumers often express a personal view of their experience with the products, and thus their writing styles should differ from one another. Such differences reflect the heterogeneity in their culture, education, occupation, and so on. However, for manipulators the situation is different. If reviews are consistently monitored and posted by manipulators, then the observed reviews will be a blend of true customer reviews and manipulators' reviews; hence the writing styles of the observed reviews will not be random in the presence of manipulators.

By observing the change in the writing style over time, we can infer whether the online reviews for a product are manipulated or not, because writing style is unique among individuals. Building on this intuition, we develop a model for the detection of manipulation.
The rest of the paper is organized as follows. Section 2 discusses related work in the fields of accounting and computer science that deals with the detection of fraud, and reviews extant research on sentiment and writing style analysis. Section 3 presents our research method for the detection of manipulation in reviews. Section 4 presents the research setting and the numerical results related to the existence of reviews manipulation and its impact on sales. Finally, Section 5 summarizes the main contributions of this paper, identifies the limitations of this research approach, and discusses some directions for future research in the area of online reviews manipulation.
2. Related work
Several researchers have actively examined the various effects of WOM, e.g., [4,5,8–10,15,26,27]. Using user reviews on Yahoo! Movies, Liu [27] and Duan et al. [10] found that the valence of previous movie reviews did not have any significant impact on later weekly box office revenues. Gruhl et al. [16] showed that the volume of blog postings could be used to predict spikes in actual consumer purchase decisions at the online retailer Amazon. Other researchers started to investigate various factors that could influence online reviews, such as the impact of online reviewers' characteristics [11,14]. Forman et al. [11] considered the effect of reviewers' online identities on the impact of reviews. They found that reviews posted by real-name reviewers had a larger impact on product sales than those posted by anonymous reviewers. Hence, with the proliferation of online reviews, many people believed that online consumer reviews were a good proxy for overall WOM and could also influence consumers' decisions. However, the efficacy of online reviews could nonetheless be limited.
Given the power of electronic WOM, many firms are taking advantage of online consumer reviews as a new marketing tool [8]. Studies showed that firms not only regularly posted their product information and sponsored promotional chats on online forums such as USENET [30], they also proactively encouraged their consumers to spread the word about their products online [15]. Some firms even strategically manipulated online reviews in an effort to influence consumers' purchase decisions [8,20]. An underlying belief behind such strategies is that online consumer reviews could significantly influence consumers' purchase related decisions. Some recent studies have looked into how marketers can strategically manipulate consumers' online communications [8,30].
2.1. Manipulation
Manipulation of reviews occurs when online vendors, publishers, or authors write "consumer" reviews by posing as real customers. Thus, manipulation here means that the posted review is not a truthful account of a real customer's experience. Manipulation or fraud is not a new area of research in the traditional business fields [29,31]. For example, in the area of accounting there is extant research on profiling of earnings manipulators through the identification of their distinguishing characteristics, as well as the development of models for the detection of earnings management [2,34]. The variables used in such models represented the effects of manipulation or the preconditions that prompted firms to engage in such activities. Research in this area identified the existence of a systematic relationship between the probability of manipulation and some key financial statement variables. As a result, the analysis of the accounting data of companies could identify the firms that engaged in earnings manipulation. In fact, by comparing the accrual levels for one company over different years and under different types of financial situations, a researcher was able to identify the abnormal accruals that were closely related to earnings management. Although the models used in the earnings manipulation literature were easy to implement, the financial reports of the same company had to be available for several years in order for the analysis to be effective.
Even though the existence of online reviews fraud is acknowledged by online vendors, these vendors rarely discuss publicly how they fight it. There is no commonly agreed conceptual definition of online reviews fraud based on which vendors could mandate appropriate legal action. Similar to the case of digital rights management, vendors believe that one way to filter online reviews fraud is to never disclose exactly how they identify such fraudulent reviews, out of apprehension that unethical users would take advantage of such disclosures. Due to the above challenges, a method for determining the existence of manipulation in online reviews is crucial.
A consumer review consists of two parts: a numerical rating of the
product or service being reviewed, as well as textual statements
about the product or service. We believe when unethical users ma-
nipulate online reviews, they can either post reviews with a high nu-
meric rating or manipulate the textual statements posted in the
review. Hence, by investigating how the rating or writing styles
change over time, we are able to detect manipulation in online
reviews.
2.2. Writing style: sentiments and readability
In our context, writing style refers to how consumers construct sentences when they write online reviews and indicates their passion about their own reviews. We believe that by observing the distribution of the writing style over time, we can infer whether the online reviews for a product are manipulated or not, because writing style is unique to every individual. As stated before, in order to really influence consumers' purchase decisions, vendors, publishers, or writers need to hire professional manipulators to write reviews while posing as consumers. Even if they do not hire professionals, they need to write the reviews in a consistent and believable manner so that they are able to catch the attention of consumers and influence their purchase decisions. Hence, we expect that the writing styles of manipulators will be different from those of genuine consumers, and that manipulators are more likely to post reviews at certain time periods, such as when ratings decrease. These traits in the writing style of manipulators can help us identify whether a review is genuine or manipulated.
Reviews by individual consumers often express a personal view of their experience with the products, so their writing styles may be very different from each other. Such differences reflect the heterogeneity in their culture, education, occupation, and so on. For manipulators, however, the situation is different. Across time, the writing style and readability of individual reviews should vary randomly when reviews are posted by real customers. However, if reviews are consistently monitored and posted by manipulators in certain circumstances, such as when they observe a decrease in online review ratings, then the observed reviews will be a blend of true customer reviews and manipulators' reviews; hence the writing styles of the observed reviews will not be random in the presence of manipulators.
We focus on two different ways of evaluating writing styles: sentiments and readability. In the attempt to write reviews that customers will believe and act upon, manipulators are likely to use certain persuasion strategies. Persuasion is the use of appeals to convince a listener or reader to think or act in a particular way. In ancient Greece, the art of using language as a means to persuade was called rhetoric. The Greek philosopher Aristotle (384–322 BC) set forth an extended treatise on rhetoric that still attracts great interest and careful study. His treatise on rhetoric discussed not only the elements of style and delivery, but also emotional appeals (pathos) and character appeals (ethos) [12]. He identified three main forms of rhetoric:

ethos: how the character and credibility of a speaker/writer could influence an audience to consider him/her to be believable.
pathos: the use of emotional appeals to alter the audience's judgment. This could be done through the use of metaphors, emotive language, and sentiments that evoked strong emotions in the audience.
logos: the use of reasoning to construct and support an argument (e.g., the use of statistics, mathematics, and logic).
Manipulators are likely to use sentiments to slant reviews (i.e., write or present them in a biased manner) so as to influence a potential reader's purchase behavior. Such slanting behavior is common in public relations, lobbying, law, marketing, professional writing, and advertising, where the goal of the writer is to influence a third party's opinion or belief. For example, Kahn and Kenney [24] conducted a content analysis of campaign coverage in major newspapers for 67 incumbent Senate campaigns between 1988 and 1992, and found that the papers' editorial endorsements significantly affected the tone (i.e., positive, neutral, or negative) of the incumbent coverage and the number of criticisms published about incumbents. Such editorial slants in turn influenced voters' decisions in the elections. Likewise, Gurun and Butler [19] found that when local media reported news about local companies, they used fewer negative words than when they reported about non-local companies. As the local companies spent more on advertising, the local media had a more positive slant towards them. The researchers reported that, on average, an increase in local media slant by one standard deviation was associated with a 3.59% increase in the market value of the firm. From these examples it might be reasonable to assume that, in the context of online reviews, manipulators would tend to use positive slant in the form of emotive language, such as sentiments, to persuade and influence customers' choices.
In addition to the sentiments of the writing style, another important metric that will be used to discover manipulation is readability. Readability is defined as the reading ease that improves the comprehension as well as the retention of textual material. The readability of textual data indicates the amount of effort that is needed by a person of a certain age and education level to understand a piece of text [40]. Readability is a score generated by a readability formula, and is derived from a mathematical model that assessed the reading ease of different pieces of text by a number of subjects. Based on the syntactical elements and the underlying style, a readability test provides an indication of the understandability of a piece of text. The score obtained from most readability tests used in the extant literature represents the school grade level that is required to comprehend the piece of text and to understand the logic of the statement.
3. Research method
In this section, we first describe the method used for determining the writing style of reviews in this study, and follow that up with the method for detection of manipulation of reviews.
Fig. 3. Manipulated reviews posted by the same customer for one book item.
Fig. 4. Manipulated reviews posted by the same customer for one book item.
Fig. 5. An example of a review posted by the same anonymous customer for a book.
3.1. Writing style measurements
3.1.1. Readability
In this research, the readability of the reviews, or the reader's ability to comprehend a text, is ascertained using the Automated Readability Index (ARI) [36]. Past research in the field of information science made use of readability tests for studying the qualitative characteristics of several types of texts [14,25,32]. The ARI is one of the major readability tests used to evaluate the readability of a text by decomposing the text into its basic structural elements. We chose this measure because, unlike other indices, the determination of ARI relies on the number of characters per word rather than the number of syllables per word. Since the number of characters in a word can be more easily and accurately determined than the number of syllables per word, this measure is subject to a lower error rate compared to other readability measures. The ARI is calculated using the following formula [36]:

ARI = 4.71 × (Total number of characters / Total number of words) + 0.5 × (Total number of words / Total number of sentences) − 21.43
The value of the index approximated the minimum grade level of
education that was needed to comprehend a piece of text. For in-
stance, a score of 8.3 for the ARI for a piece of text indicated that
the text could be understood by an average 8th grade student in the
United States.
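To make the calculation concrete, the following is a minimal Python sketch of the ARI formula above. It is not the authors' code: the rules used here for splitting words and sentences are simplifying assumptions of this illustration, and real implementations may count characters and sentence boundaries differently.

import re

def automated_readability_index(text):
    # Split into rough word and sentence units; these splitting rules are
    # assumptions of this sketch, not part of the original study.
    words = re.findall(r"[A-Za-z0-9']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not words or not sentences:
        return 0.0
    characters = sum(len(w) for w in words)
    # ARI = 4.71 * (characters/words) + 0.5 * (words/sentences) - 21.43
    return (4.71 * characters / len(words)
            + 0.5 * len(words) / len(sentences)
            - 21.43)

# Example: a short, simply worded review yields a low (here negative) grade-level score.
print(automated_readability_index("I loved this book. It was easy to read."))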
The readability of the review could also influence the size of a wri-
ter's audience. For genuine consumers that posted reviews in order to
share their evaluation of the product, the readability of the reviews
might not be of great concern. In fact, the readability of a review writ-
ten by a genuine customer should be random due to the variations in
customers' educational background, clarity of expression, ability to
communicate their thoughts appropriately, and so on. But for manip-
ulators, whose intention would be to try to reach a large and unse-
lected audience successfully, readability would be of great concern.
Intuitively, manipulated reviews should be consistent in terms of
readability.
3.1.2. Measurement of sentiment in a review
Sentiment (or polarity) analysis is used to identify positive and
negative language in the text. Extraction of sentiment from text has
been widely studied by researchers belonging to the text mining
community. Typically, the techniques employed include a combina-
tion of machine learning, natural language processing, and bags-of-
words approach [6,28,33,38]. Past research on sentiment analysis
has used automatically generated sentiment lexicons, in which a list
of seed words was used to determine whether a sentence contained
positive or negative sentiments. Then, the polarity (i.e., positive or
negative direction) of an opinion was determined on the basis of
the words that were present in the review. In terms of sentiment
mining of reviews, a simple machine learning approach for classifying
products and services as recommended (thumbs up) or not recom-
mended (thumbs down) was proposed by Turney [38]. Another ap-
proach for the semantic classification of product reviews was
presented by Dave et al. [6].
Fig. 6. Negative review posted by a manipulator.
Fig. 7. The same reviewer posted reviews for a single book.

The text mining approach that we adopted in this research made use of a simple yet efficient standard term frequency measure that is commonly used by the Information Retrieval community [35].
Using this technique, we extracted strong (or weak) positive (or neg-
ative) sentiment terms from each review. We employed a standard
term frequency measure to determine the polarity of the review,
and also estimated the strength of sentiments in each review. The re-
view texts were evaluated using a dictionary of 1635 positive words
and 2005 negative words taken from the General Inquirer lexicon
[37]. In addition, we drew upon the research conducted by Archak
et al. [1], and extracted a list of 40 strong positive and 30 strong neg-
ative terms (including some phrases) from the reviews available on
Amazon.com.⁵ The list of words from the General Inquirer lexicon
formed the list of ordinary (or weak) sentiment terms whereas
those extracted from Archak et al. [1] formed the list of strong senti-
ment terms. Based on these two lists of seed words, we calculated the
number of occurrences of sentiment terms/phrases in the review.
Various types of sentiment scores for the ith review are calculated using the following general formula given by Eq. (1):

senti_score_i = senti_type_i / senti_tot_i   (1)

where senti_type_i belongs to {str_pos_i, str_neg_i, ord_pos_i, ord_neg_i}, str_pos is the number of strong positive terms, str_neg is the number of strong negative terms, ord_pos is the number of ordinary positive terms, and ord_neg is the number of ordinary negative terms present in the review. The total number of sentiment terms (senti_tot_i) is determined by the sum of str_pos_i, str_neg_i, ord_pos_i, and ord_neg_i. In particular, we calculate the following types of sentiment scores for any review i:

Strong positive sentiment score = str_pos_i / senti_tot_i
Strong negative sentiment score = str_neg_i / senti_tot_i
Ordinary positive sentiment score = ord_pos_i / senti_tot_i
Ordinary negative sentiment score = ord_neg_i / senti_tot_i
Ordinary sentiment score = (ord_pos_i + ord_neg_i) / senti_tot_i
Strong sentiment score = (str_pos_i + str_neg_i) / senti_tot_i
These scores are used to detect the existence of reviews
manipulation.
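As an illustration of Eq. (1), the Python sketch below computes the six scores for a single tokenized review. It is a sketch only: the four word lists are assumed to be sets of lowercase single-word terms (the strong terms from Archak et al. [1] and the ordinary terms from the General Inquirer [37]), and the multi-word phrases mentioned above are ignored here.

def sentiment_scores(tokens, str_pos_terms, str_neg_terms, ord_pos_terms, ord_neg_terms):
    # Count occurrences of each sentiment type in the review (the numerators of Eq. (1)).
    counts = {
        "str_pos": sum(t in str_pos_terms for t in tokens),
        "str_neg": sum(t in str_neg_terms for t in tokens),
        "ord_pos": sum(t in ord_pos_terms for t in tokens),
        "ord_neg": sum(t in ord_neg_terms for t in tokens),
    }
    senti_tot = sum(counts.values())   # total number of sentiment terms in the review
    if senti_tot == 0:                 # review containing no sentiment terms at all
        return {k: 0.0 for k in ["str_pos", "str_neg", "ord_pos", "ord_neg", "ordinary", "strong"]}
    scores = {k: v / senti_tot for k, v in counts.items()}
    scores["ordinary"] = (counts["ord_pos"] + counts["ord_neg"]) / senti_tot
    scores["strong"] = (counts["str_pos"] + counts["str_neg"]) / senti_tot
    return scores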
3.2. Measurement of manipulation
If reviews were indeed written by customers, then the writing
style of the reviews would be random due to the diverse background
of the customers. Therefore, a simple and intuitive way to detect the
randomness of the review was to conduct a statistical test of random-
ness of writing styles and ratings of the reviews over time for each
product that was reviewed. A non-random result in such a test
would indicate the existence of manipulation. For this purpose, we
adopted the Wald–Wolfowitz (Runs) test to check the randomness
of ratings, sentiments, and readability of the reviews over time.
3.2.1. Wald–Wolfowitz (Runs) test
The Wald–Wolfowitz test, also known as the Runs test for randomness, is used to test the hypothesis that a series of numbers is random [18]. The Runs test is a non-parametric statistical test; therefore the interpretation of the results does not depend on any parameterized distributions. A "run" of a sequence simply refers to a segment consisting of adjacent equal elements. For example, the sequence

+ + + + − − − − + + + − − − − + + + + + + + + − − − − − −

consists of 6 runs, three of which consist of + and the other three consist of −. To carry out the test, the total number of runs (R) is computed along with the number of positive and negative runs. To simplify the computations, the data are first centered around their mean.⁶ A
positive run is determined as a sequence of values that are greater
than zero, and a negative run is identied as a sequence of values
that are less than zero. The number of positive runs (n) and negative
runs (m) are checked to see if they are distributed equally in time. The
test statistic is asymptotically normally distributed. The large sample
test statistic Z is given by Z = (R − E(R)) / √V(R), where E(R) = 2nm / (n + m) + 1 and V(R) = 2nm(2nm − n − m) / [(n + m)²(n + m − 1)]. If the Runs test result is statistically significant,
this means that the series of reviews posted is non-random. The
Runs test result is used as a manipulation index for each product
and is represented by a binary scale of 1 and 0, where 1 represents
non-random (with manipulation) and 0 represents random (without
manipulation). For each product, there will be a manipulation index
for each of the three variables: ratings, sentiments, and readability.
For the sentiment manipulation index, avg_senti_runs_j for each product j is computed as shown in Eq. (2):

avg_senti_runs_j = (str_pos_runs_j + str_neg_runs_j + ord_pos_runs_j + ord_neg_runs_j) / 4   (2)

where str_pos_runs_j is the Runs test score for strong positive sentiments in product j, str_neg_runs_j is the Runs test score for strong negative sentiments in product j, ord_pos_runs_j is the Runs test score for ordinary positive sentiments in product j, and ord_neg_runs_j is the Runs test score for ordinary negative sentiments in product j.
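The Python sketch below shows one way to implement the test and the resulting manipulation indices. It is not the authors' code: following the standard large-sample formulation, n and m are taken as the counts of observations above and at-or-below the mean after centering, the two-sided 5% critical value of 1.96 is an assumed cut-off for flagging non-randomness, and the averaging in Eq. (2) is applied here to the four binary sentiment indices.

import math

def runs_test_z(values):
    """Large-sample Wald-Wolfowitz Z statistic for a time-ordered numeric sequence."""
    mean = sum(values) / len(values)
    signs = [v > mean for v in values]              # True = above the mean, False = at or below
    runs = 1 + sum(a != b for a, b in zip(signs, signs[1:]))
    n = sum(signs)                                  # observations above the mean
    m = len(signs) - n                              # observations at or below the mean
    if n == 0 or m == 0:
        return 0.0                                  # degenerate sequence: no evidence of non-randomness
    e_r = 2.0 * n * m / (n + m) + 1.0
    v_r = 2.0 * n * m * (2.0 * n * m - n - m) / ((n + m) ** 2 * (n + m - 1))
    if v_r == 0:
        return 0.0
    return (runs - e_r) / math.sqrt(v_r)

def manipulation_index(values, z_crit=1.96):
    """1 if the sequence is judged non-random (with manipulation), 0 otherwise."""
    return int(abs(runs_test_z(values)) > z_crit)

def avg_senti_runs(str_pos, str_neg, ord_pos, ord_neg):
    """Eq. (2): the four arguments are time-ordered sentiment-score sequences for one product."""
    return sum(manipulation_index(s) for s in (str_pos, str_neg, ord_pos, ord_neg)) / 4.0

# Example: rating manipulation index for one product, with reviews in time order.
ratings = [5, 4, 5, 1, 5, 5, 5, 2, 5, 5, 5, 5]
print(runs_test_z(ratings), manipulation_index(ratings))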
3.2.2. Evidence of manipulation discovered by Runs test
To verify if our Runs test method is able to detect manipulative ac-
tivity, a manual inspection is conducted. Amongst all the items that
were detected to have non-random reviews, we conduct a manual
check to see if the products we identified are indeed products with
manipulated reviews, e.g., multiple reviews posted by the same per-
son for the same book item. From the items that were found to have
non-random reviews, we found abundant evidence of such activities.
Table 1
Descriptive statistics of books included in the sample.

Variable            Median    Mean (SD)
ln(Price)           2.41      2.54 (0.58)
ln(SalesRank)       10.11     9.92 (2.00)
AvgRating           4.50      4.18 (0.55)
ln(TotalReviews)    4.01      4.21 (0.75)
TotalReviews        51.00     290.08 (715.30)
Helpful votes       2.00      6.08 (18.75)

⁵ The terms/phrases were obtained by Archak et al. [1] from the reviews available from Amazon.com. Each term/phrase was assigned a score on a scale from 0 to 100. Among the 2697 terms/phrases listed in that research, we extracted 40 strong positive terms (with scores higher than 95) and 30 strong negative terms (with scores less than 30).

⁶ We have also conducted the Runs test for a non-normal distribution using the median instead of the mean as the reference point. Qualitatively, our results do not change.

Figs. 3 and 4 present examples of the evidence found for different book items. "ASIN" refers to the unique identification of a book, while "CustomerID" is the unique identity of the customer. The figures
showed that there have been cases where an individual has posted
several reviews for the same book item. These figures gave us confidence in the effectiveness of the Runs test to detect manipulation in
reviews.
Fig. 5 presents an example of a review posted by a manipulator, and as we see, it is difficult to tell whether this review was posted by a manipulator simply by reading the textual content, unless we place it in sequence and conduct our test. Fig. 6 shows a negative review posted by a manipulator who has posted negative reviews for a book. Finally, Fig. 7 shows three reviews posted by a manipulator who uses a similar style in the review title and sentiments for all three reviews.
4. Numerical experimentation
4.1. Data description
The data used in this research were gathered from Amazon.com using its Amazon Web Services (AWS) in July 2005. The reason for picking Amazon.com was that past research had investigated manipulation of online reviews for this site [7]. The data analysis was based on data collected prior to July 15, 2005. For each book, we collected data related to the title, price, sales, and reviews. Specifically, for each customer review of the book, we gathered the review date, the numeric rating for the book, the number of helpful votes, the total number of votes, and the original text of the review. To have a meaningful Runs test, we retained books that had 30 or more reviews (among 32,878 books with 967,075 reviews). The final dataset consisted of information related to 4490 books, with 610,713 online reviews.
The numeric ratings for each review were on a 1-star to 5-star scale, where 1 star corresponded to least satisfied and 5 stars corresponded to most satisfied with the product. Product sales rank was shown in descending order, where a rank of 1 represented the best selling product. Consequently, there was a negative correlation between product sales and sales rank. We used sales rank as a proxy for product sales (with the opposite sign). Some descriptive statistics are provided in Table 1.
Fig. 8. Distribution of readability scores for manipulated reviews.
Fig. 9. Distribution of readability scores for non-manipulated reviews.
Fig. 10. Conditional probability of review characteristics: of 91,891 negative reviews (average rating < 3), 66,322 were followed by a positive review and 25,569 by another negative review.

⁷ The other possible explanation for the bimodal distribution is that the reviews are written by two different target groups of the product, such as a student group and an academic group in the case of academic textbooks.

Fig. 8 shows the histogram of the review readability scores for manipulated reviews, and it follows a bimodal distribution. On the contrary, Fig. 9 shows the same for non-manipulated reviews, and it approximately follows a normal distribution. The bimodal distribution of the readability scores of manipulated reviews may be due to the existence of two distinct classes of review writers, namely real customers and manipulators.⁷ In addition, as we explained before, manipulators are more likely to enter the scene when they observe a negative review. Fig. 10 shows that this is indeed the case. The conditional probability of observing a positive review after an item received a negative review is 72%, which is almost 2.6 times the conditional probability of observing a negative review after an item received a negative review.
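As a quick arithmetic check, the 72% figure follows directly from the counts preserved from Fig. 10:

P(positive | previous review negative) = 66,322 / (66,322 + 25,569) = 66,322 / 91,891 ≈ 0.72
P(negative | previous review negative) = 25,569 / 91,891 ≈ 0.28, and 0.72 / 0.28 ≈ 2.6.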
4.2. Determination of manipulation in reviews
Table 2 summarizes the results on sentiment manipulation obtained when the Runs test was applied to books with different sales ranks. Out of 4490 books, the sentiment expressed in the reviews of 463 books was found to be non-random. The non-randomness of these reviews could be due to the manipulation of these reviews by interested parties. Manipulation appeared to be somewhat less prevalent for the most popular (i.e., sales rank between 1 and 100) and the most unpopular books (i.e., sales rank more than 10,000), but the differences across the groups were small, indicating that manipulation of reviews was largely unaffected by the popularity of the book.
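The overall proportion in Table 2 is simply the share of flagged books in the sample:

463 / 4490 ≈ 0.1031, i.e., about 10.3% of the books, which is the figure quoted in the abstract.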
4.3. Impact of manipulation in reviews on sales
We used a linear regression model to determine if consumers
were aware of the manipulations present in the reviews, and if they
were able to distinguish between manipulated reviews from non-
manipulated reviews. In fact, if consumers were able to differentiate
a book review with manipulation from one without manipulation,
then with all other information remaining the same, a book whose review
was being manipulated would either be punished (i.e., resulting in a
decrease in sales or an increase in sales rank) or would not be
rewarded (i.e., resulting in no change in sales or sales rank). However,
if consumers were deceived by manipulation, then with all the other information remaining the same, a book whose review was being manip-
ulated would be rewarded with an increase in sales or a decrease in
sales rank. In the regression model, we examined the impact of ma-
nipulation in ratings, sentiments, and readability on the sales rank
of the book. Average rating was included as a control variable because
previous studies had shown that products with a high average rating
enjoyed a high demand. Price was included as a control variable in all
regression models because it reduced the demand for a book. The
total number of reviews for a book was included as well to control
for the demand of the book. Amazon.com did not disclose the actual
sales for the books available on their website. Instead, they reported
a sales rank for each book, which ranked the demand for a book rela-
tive to other books in its category. Prior research in economics and marketing [5,13] had studied the association between sales rank and demand for products, and found that the variation of demand with respect to sales rank followed a Pareto distribution [5]. Based on this
observation, it was possible to use the log of product sales rank as a
proxy for the log of product demand. Given the linear relationship be-
tween ln(Sales) and ln(SalesRank), we used ln(SalesRank) as a proxy
for sales of books in the log-linear regression models. To control for the potential heterogeneity in the existence of manipulation across books with different popularities (as indicated in Table 2), sales rank dummies were included in the model as well. Before checking the impact of manipulation on online reviews, we first examined the basic model, in which the indices representing manipulation were not included (Eq. (3)). The final regression model that included the manipulation indices is shown in Eq. (4). Model 3 is the basic model, in which we study the impact of online reviews on sales. Model 4 studies the impact of manipulation of reviews on sales.
ln(SalesRank) = γ1 ln(Price) + γ2 ln(TotalReviews) + γ3 AvgRating + γ4 sr2_dummy + γ5 sr3_dummy + γ6 sr4_dummy + ε   (3)

ln(SalesRank) = β1 ln(Price) + β2 ln(TotalReviews) + β3 AvgRating + β4 rating_runs + β5 avg_senti_runs + β6 readability_runs + β7 sr2_dummy + β8 sr3_dummy + β9 sr4_dummy + ε   (4)
where Price denotes the price of each book, TotalReviews denotes the
total number of reviews for each book, AvgRating denotes the average
consumer rating for each book, rating_runs denotes the Runs test re-
sult of the rating for each book and is equal to 1 if the test result is
non-random, avg_senti_runs denotes the Runs test result of the aver-
age sentiment for each book and is equal to 1 if the test result is non-
random, readability_runs denotes the Runs test result of the readabil-
ity for each book and is equal to 1 if the test result is non-random,
sr2_dummy denotes the dummy variable that is equal to 1 for books
with sales rank greater than 101 and less than 1000, sr3_dummy de-
notes the dummy variable that is equal to 1 for books with sales
rank greater than 1001 and less than 10,000, sr4_dummy denotes
the dummy variable that is equal to 1 for books with sales rank great-
er than 10,000. Recall that the product sales rank is shown in des-
cending order where 1 represented the best selling product.
Therefore, the negative correlation between any variable and sales
rank indicated that a high value of that variable was associated with
higher sales.
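A minimal sketch of how Eq. (4) can be estimated with ordinary least squares is shown below. This is not the authors' code: the column names are hypothetical stand-ins for the variables defined above, and the synthetic data frame exists only to make the sketch runnable; in practice it would be replaced by the book-level dataset described in Section 4.1.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic placeholder data, purely so the sketch runs end to end.
rng = np.random.default_rng(0)
n = 200
books = pd.DataFrame({
    "ln_sales_rank": rng.normal(10, 2, n),
    "ln_price": rng.normal(2.5, 0.6, n),
    "ln_total_reviews": rng.normal(4.2, 0.8, n),
    "avg_rating": rng.uniform(1, 5, n),
    "rating_runs": rng.integers(0, 2, n),
    "avg_senti_runs": rng.integers(0, 2, n),
    "readability_runs": rng.integers(0, 2, n),
    "sr2_dummy": rng.integers(0, 2, n),
    "sr3_dummy": rng.integers(0, 2, n),
    "sr4_dummy": rng.integers(0, 2, n),
})

# Eq. (4); dropping the three *_runs terms gives the basic model of Eq. (3).
eq4 = ("ln_sales_rank ~ ln_price + ln_total_reviews + avg_rating"
       " + rating_runs + avg_senti_runs + readability_runs"
       " + sr2_dummy + sr3_dummy + sr4_dummy")
print(smf.ols(eq4, data=books).fit().summary())   # coefficients correspond to beta_1 ... beta_9 plus the intercept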
Table 3 presents the results obtained using the basic model. We observe that all variables associated with reviews are significantly associated with sales. For example, the coefficient of AvgRating is −0.1403, which indicated that the higher the average rating an item had, the better was its sales (since there was a negative correlation between sales rank and sales). Furthermore, the adjusted R-square of the regression model was equal to 0.6619, indicating that online reviews could reasonably explain most of the variability in the sales of the books.
Next we studied the impact of reviews manipulation on sales. The coefficients for rating_runs, avg_senti_runs, and readability_runs captured the impact of manipulation through ratings, sentiments, and readability on sales, respectively. We see that the effect of the manipulation of ratings (coefficient = 0.0356) and readability (coefficient = 0.0439) on sales rank is not significant. However, on average, the manipulation of sentiments of reviews had a relatively significant impact on sales rank (coefficient = −0.2002, p-value < 0.1). This implied that promotional chat using sentiments in online reviews was effective in generating extra sales for the book. Our interpretation of the non-significant results for rating_runs and readability_runs is that it was relatively easier for consumers to detect reviews manipulation through ratings or readability, and hence consumers could undo the impact of such manipulation. The fact that these variables did not generate any significant negative impact on sales might indicate that the consumers were unsure of whether to trust these reviews: it seemed that consumers found it challenging to differentiate a manipulated review from a review written by a real customer, and it was likely that they ignored such reviews when making their purchase decisions.
So far, what we have documented is the correlation between the variables indicating manipulation of reviews and the sales of books. Next, a time lag is introduced between the dependent variable (measured at time t+1) and the variables representing manipulation (measured at time t).
Table 2
Results of Runs test on randomness of sentiments expressed in book reviews.

Sales rank range              Number of books   Percentage of books with non-random sentiments in reviews
1 ≤ Sales rank < 100          53                9.4%
101 ≤ Sales rank < 1000       292               12.3%
1001 ≤ Sales rank < 10,000    3076              10.3%
Sales rank > 10,001           1069              9.9%
Total                         4490              10.31%
Table 3
Impact of manipulation of reviews on sales.

Variable            Coefficient (Model 3)   Coefficient (Model 4)
ln(Price)           −0.0254                 −0.0254
AvgRating           −0.1403***              −0.1348***
ln(TotalReviews)    −0.2873***              −0.2905***
rating_runs                                 0.0356
avg_senti_runs                              −0.2002+
readability_runs                            0.0439
sr2_dummy           1.2923***               1.2800**
sr3_dummy           4.3210***               4.3057***
sr4_dummy           6.9803***               6.9629***
Intercept           7.0961***               7.1175***
Adjusted R-square   66.19%                  66.19%
N                   4490                    4490

***p < .001; **p < .01; *p < .05; +p < .10.
Table 4
Descriptive statistics of books included in the pooled sample.

Variable            Median   Mean (SD)
ln(Price)           2.77     1.26 (1.43)
ln(SalesRank)       7.94     10.48 (11.47)
AvgRating           4.01     3.84 (0.86)
ln(TotalReviews)    4.85     6.41 (6.87)
TotalReviews        128      608 (961.50)
Helpful votes       0.56     0.69 (0.21)
This allows us to determine whether manipulation at the current time influenced the sales of the books at a future time. Thus, the baseline model is transformed to Eq. (5):

ln(SalesRank)_{t+1} = β1 ln(Price)_{t+1} + β2 ln(TotalReviews)_{t+1} + β3 AvgRating_{t+1} + β4 rating_runs_t + β5 avg_senti_runs_t + β6 readability_runs_t + β7 sr2_dummy_{t+1} + β8 sr3_dummy_{t+1} + β9 sr4_dummy_{t+1} + ε   (5)
To test this model, we collected a panel dataset (pooled data) over 5 months from 8/9/05 to 10/1/06. For each book item, we collected the price, sales, and review information at approximately three-day intervals. We identified every interval by a unique sequence number. In total, we obtained 26 batches of review and item-level data. When we selected book items with at least 30 reviews, the final panel dataset consisted of information related to 1693 books and 37,161 online reviews. The descriptive statistics of the panel data are shown in Table 4.
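For the time-lagged specification in Eq. (5), the manipulation indices measured at interval t have to be matched with the outcome measured at interval t+1. A minimal pandas sketch is given below; it is not the authors' code, and the column names, including asin for the book identifier and seq for the interval sequence number, are assumptions of this illustration.

import pandas as pd

def add_lagged_runs(panel: pd.DataFrame) -> pd.DataFrame:
    """Attach the interval-t manipulation indices to the interval-(t+1) rows.

    `panel` is assumed to hold one row per (asin, seq) observation with the
    columns rating_runs, avg_senti_runs, and readability_runs already computed.
    """
    panel = panel.sort_values(["asin", "seq"]).copy()
    for col in ["rating_runs", "avg_senti_runs", "readability_runs"]:
        # shift(1) within each book: the value observed one interval earlier
        panel[col + "_lag"] = panel.groupby("asin")[col].shift(1)
    # rows for the first interval of each book have no lag and are dropped
    return panel.dropna(subset=["rating_runs_lag", "avg_senti_runs_lag", "readability_runs_lag"])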
Table 5 shows the results using the panel data as a pooled sample. The results shown in Table 5 are qualitatively similar to those in Table 3. The effects of manipulation through ratings and readability are still found to be insignificant in the time-lagged model. On the other hand, manipulation using sentiments was found to have a significant positive impact on sales (coefficient = −0.0628 on ln(SalesRank), p-value < 0.10), which indicated that vendors were able to influence future book sales by manipulating online reviews.

Table 5
Impact of manipulation of reviews on pooled sample.

Variable            Coefficient
ln(Price)           0.0607
AvgRating           −0.3089***
ln(TotalReviews)    −0.5674***
rating_runs         0.0035
avg_senti_runs      −0.0628+
readability_runs    0.0573
sr2_dummy           3.1450***
sr3_dummy           4.6050***
sr4_dummy           7.1801***
Intercept           8.6596***
Adjusted R-square   69.98%
N                   1693

***p < .001; **p < .01; *p < .05; +p < .10.
5. Discussion of results
Online reviews can be a powerful promotional tool for marketing
communication. Marketers and vendors have used this medium be-
cause it provides a cheap and impactful channel to reach their cus-
tomers. Marketers are known to take advantage of networks of
influence among customers to influence the purchase behavior of potential buyers. Reports have shown that promotional chat has infiltrated the online review forums.⁸ However, it is not clear whether such knowledge-sharing sites, where customers review products and provide advice to each other, are fertile grounds for running the promotional campaigns of manipulators. This paper examines the extent and the impact of such manipulative actions in the online reviews environment.

⁸ http://www.engadget.com/2009/01/17/belkin-rep-hiring-folks-to-write-fake-reviews-on-amazon/
In this paper, we present a simple but effective way to detect the manipulation of reviews. Our research shows that manipulators use both numeric ratings and textual comments to manipulate online reviews. However, the manipulation of ratings alone is not effective in influencing the sales of books, as consumers are able to discover such promotional acts. In contrast, manipulation through a component of writing style that reflects the background of an individual, such as sentiments, is able to significantly influence a consumer's purchase decision. An important benefit of this approach is that one can detect the existence of manipulation in the reviews, and assess the effectiveness of manipulation of reviews in generating sales, without having access to the backend data about customers' identities that is recorded by e-commerce websites.
The method proposed in this paper assumes that if the reviews were written by real customers, the writing styles would be random because of the diverse background of customers. However, this assumption may be valid for certain product categories like electronics, but not necessarily so for other categories of products that are unlike books. Also, we realize that review ratings might not follow a random distribution due to the self-selection processes suggested by Li and Hitt [26]. For popular products, consumers might overlook review ratings due to the presence of information cascades. However, we believe that such biases in behavior will have a limited impact on the sentiments and readability of reviews. Overall, we believe that using the Runs test to detect manipulated products through an assessment of the randomness of ratings, readability, and sentiments is an important step in discovering the impact of manipulation of reviews.
This paper provides a new direction in the detection of online reviews manipulation. As we have elaborated before, even though online reviews manipulation has become a serious problem in the industry, there is no commonly agreed conceptual model for detecting it. At the same time, various online vendors hesitate to openly discuss how they fight such fraudulent reviews. The reason could be that they believe an open discussion of how they fight online reviews manipulation will help manipulators learn how to trick their systems. This may encourage manipulators to game the system, since the penalties are few (if any) and the amount of profit that can be generated by succeeding in this gaming outweighs the costs. The responsibility of uncovering online reviews manipulation therefore falls upon the shoulders of researchers. Our research sheds light on how serious reviews manipulation is and how to detect it using publicly available data on online reviews of books.
However, one challenge for this research is still the lack of available data. For example, for a given review, some researchers may believe it is a manipulated review, whereas others may think that it is a review written by a real customer. Deciding between a manipulated and a non-manipulated review is a subjective matter, so future researchers should collaborate with industry partners to come up with a clearly labeled dataset indicating manipulated and non-manipulated reviews, so that researchers can use this benchmark data to build various models to identify fraudulent reviews. Also, future research should focus on uncovering the differences between perceived fraudulent reviews and actual fraudulent reviews, and also study the impact of consumers' backgrounds in influencing consumers' perceptions about fraudulent reviews.
References
[1] N. Archak, A. Ghose, P.G. Ipeirotis, Show me the money! deriving the pricing
power of product features by mining consumer reviews, Proceedings of the
13th International Conference on Knowledge Discovery and Data Mining, 2007,
pp. 5665.
[2] M.D. Beneish, The detection of earnings manipulation, Financial Analysts Journal
55 (5) (1999) 2436.
[3] B. Bickart, R.M. Schindler, Internet forums as inuential sources of consumer in-
formation, Journal of Interactive Marketing 15 (3) (2001) 3140.
[4] J.A. Chevalier, A. Goolsbee, Measuring prices and price competition online: ama-
zon.com and BarnesandNoble.com, Quantitative Marketing and Economics 1 (2)
(2003) 203222.
[5] J.A. Chevalier, D. Mayzlin, The effect of word of mouth online: online book re-
views, Journal of Marketing Research 43 (3) (2006) 345354.
[6] K. Dave, S. Lawrence, D.M. Pennock, Mining the peanut gallery: opinion extrac-
tion and semantic classication of product reviews, Proceedings of the 13th Inter-
national World Wide Web Conference, 2003, pp. 519528.
8
http://www.engadget.com/2009/01/17/belkin-rep-hiring-folks-to-write-fake-
reviews-on-amazon/
Table 5
Impact of manipulation of reviews on pooled sample.
Variable Coefcient
ln(Price) 0.0607
AvgRating 0.3089***
ln(TotalReviews)0.5674***
rating_runs 0.0035
avg_senti_runs 0.0628
+
readability_runs 0.0573
sr2_dummy 3.1450***
sr3_dummy 4.6050***
sr4_dummy 7.1801***
Intercept 8.6596***
Adjusted R-Square 69.98%
N 1693
*** pb.001; ** pb.01; * pb.05;
+
pb.10.
683N. Hu et al. / Decision Support Systems 52 (2012) 674684
[7] S. David, T.J. Pinch, Six degrees of reputation: the use and abuse of online review
and recommendation systemsretrieved from, http://papers.ssrn.com/sol3/
papers.cfm?abstract_id=857505.
[8] C. Dellarocas, The digitization of word-of-mouth: promise and challenges of on-
line feedback mechanisms, Management Science 49 (10) (2003) 4071424.
[9] C. Dellarocas, N. Awad, X. Zhang, Exploring the value of online reviews to organi-
zations: implications for revenue forecasting and planning, Proceedings of the
25th International Conference on Information Systems, ACM press, New York,
2004, pp. 379386.
[10] W. Duan, B. Gu, A. Whinston, Do online reviews matter? An empirical investiga-
tion of panel data, Decision Support Systems 45 (4) (2008) 10071016.
[11] C. Forman, A. Ghose, B. Wiesenfeld, Examining the relationship between reviews
and sales: the role of reviewer identity disclosure in electronic markets, Informa-
tion Systems Research 19 (3) (2008) 291313.
[12] B. Garsten, Saving Persuasion: A Defense of Rhetoric and Judgment, Harvard Uni-
versity Press, Boston, 2005.
[13] A. Ghose, A. Sundararajan, Evaluating pricing strategy using ecommerce data: ev-
idence and estimation challenges, Statistical Science 21 (2) (2006) 131142.
[14] A. Ghos e, P.G. Ipeirotis, Estimating the helpfulness and economic impact of prod-
uct reviews: Mining text and reviewer characteristics. (2010) IEEE Transactions
on Knowledge and Data Engineering, IEEE Computer Society, Washington, DC.
[15] D. Godes, D. Mayzlin, Using online conversation to study word of mouth commu-
nication, Marketing Science 23 (4) (2004) 545560.
[16] D. Gruhl, R. Guha, R. Kumar, J. Novak, A. Tomkins, The predictive power of online
chatter, Proceedings of the 11th International. Conference on Knowledge Discov-
ery in Data Mining, New York, NY, USA, 2005, pp. 7887.
[17] L. Guernsey, Suddenly, everybody's an expert on everything, The New York
Times, February 3 2000.
[18] D.N. Gujarati, Basic Econometrics, 4th edition McGrawHill, Inc., New York, 2003.
[19] U.W. Gurun, A.W. Butler, Don't believe the hype: local media slant, local advertis-
ing and rm value, (2010) retrieved from: http://papers.ssrn.com/sol3/papers.
cfm?abstract_id=1333765.
[20] A. Harmon, Amazon glitch unmasks war of reviewers, The New York Times, Feb-
ruary 14 2004.
[21] D.I. Holmes, Authorship attribution, Computers and the Humanities 28 (2)
(1994) 87106.
[22] N. Hu, L. Liu, V. Sambamurthy, Fraud detection in online consumer reviews, Deci-
sion Support Systems 50 (3) (2011) 614626.
[23] N. Hu, I. Bose, Y. Gao, L. Liu, Manipulation in digital word-of-mouth: a reality
check for book reviews, Decision Support Systems 50 (3) (2011) 627635.
[24] K.F. Kahn, P.J. Kenney, The slant of the news, American Political Science Review 96
(2) (2002) 381394.
[25] G.R. Klare, The measurement of readability: useful information for communica-
tors, ACM Journal of Computer Documentation 24 (3) (2000) 107121.
[26] X. Li, L. Hitt, Self-selection and information role of online product reviews, Infor-
mation Systems and Economics 19 (4) (2008) 456474.
[27] Y. Liu, Word of mouth for movies: its dynamics and impact on box office revenue, Journal of Marketing 70 (3) (2006) 74–89.
[28] B. Liu, M. Hu, J. Cheng, Opinion observer: analyzing and comparing opinions on the web, Proceedings of the International Conference on the World Wide Web, 2005, pp. 342–351.
[29] S. Majumdar, D. Kulkarni, C. Ravishankar, Addressing click fraud in content delivery system, Proceedings of INFOCOM, 2007, retrieved from: http://www.cs.ucr.edu/~smajumdar/infocom07.pdf.
[30] D. Mayzlin, Promotional chat on the Internet, Marketing Science 25 (2) (2006) 155–163.
[31] A. Metwally, D. Agrawal, A.E. Abbadi, Using association rules for fraud detection in web advertising networks, Proceedings of the 31st International Conference on Very Large Data Bases, 2005, pp. 169–180.
[32] M.K. Paasche-Orlow, H.A. Taylor, F.L. Brancati, Readability standards for informed-consent forms as compared with actual readability, The New England Journal of Medicine 348 (8) (2003) 721–726.
[33] B. Pang, L. Lee, S. Vaithyanathan, Thumbs up? Sentiment classification using machine learning techniques, Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing, 2002, pp. 79–86.
[34] S. Roychowdhury, Manipulation of earnings through the management of real activities that affect cash flow from operations, Unpublished dissertation, University of Rochester, 2004.
[35] G. Salton, M.J. McGill, Introduction to Modern Information Retrieval, McGraw-Hill, New York, 1983.
[36] R.J. Senter, E.A. Smith, Automated readability index, 1967, retrieved from: http://oai.dtic.mil/oai/oai?verb=getRecord&metadataPrefix=html&identifier=AD0667273.
[37] P.J. Stone, D.C. Dunphy, M.S. Smith, D.M. Ogilvie, The General Inquirer: A Computer Approach to Content Analysis, MIT Press, Cambridge, MA, 1966.
[38] P.D. Turney, Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews, Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, 2002, pp. 417–424.
[39] E. White, Chatting a singer up the pop charts, The Wall Street Journal, October 5, 1999, p. B1.
[40] B.L. Zakaluk, S.J. Samuels, Readability: Its Past, Present, and Future, International
Reading Association, Newark, 1988.
Nan Hu is an Assistant Professor of Accounting and Finance at the University of Wisconsin at Eau Claire. He is also an Assistant Professor of Information Systems at Singapore Management University. He received his Ph.D. from the University of Texas at Dallas. Nan's research focuses on investigating the value implications and market efficiency of both traditional information (e.g. company financial report, analyst forecast, corporate governance, etc.) and non-traditional information (e.g. blog opinion, online consumer reviews, etc.), using a combination of theories from accounting, finance, marketing, information economics, sociology, psychology, and computer science. Nan's research has appeared in JMIS (Journal of Management Information Systems), CACM (Communications of the ACM), JCS (Journal of Computer Security), MISQ (MIS Quarterly), JAAF (Journal of Accounting, Auditing, and Finance), TEM (IEEE Transactions on Engineering Management), JBR (Journal of Business Research), and IT&M (Information Technology and Management).
Indranil Bose is an Associate Professor at the School of Business, The University of Hong Kong. He holds a B. Tech. from the Indian Institute of Technology, an MS from the University of Iowa, and an MS and Ph.D. from Purdue University. His research interests are in telecommunications, data mining, information security, and supply chain management. His publications have appeared in Communications of the ACM, Communications of AIS, Computers and Operations Research, Decision Support Systems, Ergonomics, European Journal of Operational Research, Information & Management, Journal of Organizational Computing and Electronic Commerce, Journal of the American Society for Information Science and Technology, Operations Research Letters, etc. He is listed in the International Who's Who of Professionals 2005–2006, Marquis Who's Who in the World 2006, Marquis Who's Who in Asia 2007, Marquis Who's Who in Science and Engineering 2007, and Marquis Who's Who of Emerging Leaders 2007. He serves on the editorial board of Information & Management, Communications of AIS, and several other IS journals.
Noi Sian Koh is a Lecturer at the School of Information Technology, Nanyang Polytechnic. She received her Ph.D. in Information Systems from Singapore Management University. Her research interests are in the area of social media content and text mining.
Ling Liu is an Assistant Professor of Accounting and Finance at the University of Wisconsin at Eau Claire. She received her Ph.D. in Accounting from the University of Texas at Dallas. Her research focuses on market efficiency, corporate governance, and relative performance evaluation. Ling's research has appeared in DSS (Decision Support Systems), JAAF (Journal of Accounting, Auditing, and Finance), TEM (IEEE Transactions on Engineering Management), JBR (Journal of Business Research), etc.