The influence of online reviews on restaurants: The roles
of review valence, platform, and credibility
Last working paper before publication. Please cite as:
van Lohuizen, A.W. and Trujillo-Barrera, A. (2020). The Influence of Online Reviews on
Restaurants: The Roles of Review Valence, Platform, and Credibility. Journal of Agricultural
and Food Industrial Organization. https://doi.org/10.1515/jafio-2018-0020
Abstract
Online reviews influence consumer decision making and provide companies with valuable information about consumers. We investigate how review valence, platform type, and review credibility affect purchase intention (the intention to visit a restaurant). We use an experimental 2x4 between-subjects factorial design with two platforms (company website and independent website) and four review valences (neutral, negative, positive, and balanced), with data from 256 respondents. Results show that purchase intentions are influenced by review valence and that this effect is moderated by perceived review credibility. The review platform has no moderating effect on the influence of review valence. The results provide practical information for marketers in the service industry.
Keywords: online reviews, electronic word of mouth (eWOM), review valence
Introduction
Online reviews are a type of electronic word of mouth (eWOM), defined as any positive or negative statement about an offer made by customers via the Internet (Hennig-Thurau, Gwinner, Walsh, and Gremler, 2004). Today, online reviews are considered to be more influential on purchasing decisions than traditional marketing communication tools such as advertisements or promotions (Breazeale, 2009). Take, for instance, TripAdvisor.com, an online review website that collects and posts consumers' recommendations and opinions worldwide about hotels, restaurants, attractions, and flights. With traffic of millions of unique monthly visitors, TripAdvisor.com has become an influential source of information for many consumers. Online reviews do not only influence customers. For firms, online reviews are a prime channel to obtain information and speedy feedback about service quality and customer demands (Schuckert, Liu, and Law, 2015). Online reviews also influence branding, company reputation, and acquisition and retention programs (Babić Rosario, Sotgiu, De Valck, and Bijmolt, 2016).
In sum, developing efficient and effective strategies for online reviews creates value for consumers and companies. However, developing such strategies is challenging. For instance, some companies offer cash or coupons to consumers for writing positive online reviews. Paid reviews, however, reduce the availability of objective information and damage the credibility of the reviews (Pentina, Bailey, and Zhang, 2018).
Despite considerable research, several knowledge gaps still exist. First, most studies indicate that eWOM valence (whether eWOM is positive or negative) influences purchase intentions. However, findings are inconclusive (Lee and Koo, 2012). Some studies suggest that negative eWOM has a stronger impact on consumers' attitudes and purchase intentions than positive eWOM (e.g. Chiou and Cheng (2003)), while other studies indicate the opposite (Pentina et al., 2018). Ismagilova, Slade, Rana, and Dwivedi (2019) argued that the contradictory results can be explained by the different contexts used in the studies found in the literature. Thus, it is important to further explore the role of valence asymmetry on purchase intentions in the context of restaurants.
Second, until recently, most studies focused on reviews displayed on one well-known platform such as Amazon or eBay (Pentina et al., 2018). In practice, besides online retail websites, consumers can also share their reviews through personal channels, such as Facebook, Twitter, or blogs, or via independent platforms such as online forums or review websites like TripAdvisor. Each platform operates with different mechanisms in terms of use, administrator privileges, and/or reviewer restrictions, which affects the perceived credibility of the reviews (Tsao and Hsieh, 2015). Hence, the influence of online reviews on purchase intention can vary depending on the platform (Lee and Koo, 2012).
Third, while tangible products are dominated by search attributes determined before purchasing a product, services consist of experience attributes perceived during and after consumption. For information prior to the experience, consumers often depend on online reviews (Park and Lee, 2009). eWOM is ranked as the most important source of information in the hospitality industry (Shaw, Bailey, and Williams, 2011). Despite its significant impact, little research has been done on eWOM in the context of restaurants. This is increasingly relevant given the amount of money consumers spend on food away from home. According to the Economic Research Service (ERS/USDA) (2019), for every dollar spent by U.S. consumers on food in 2017, about 36.7 cents went to food services. Thus, for the food industry there is added value to be gained by better understanding how online reviews influence consumer choices. The goal of this paper is therefore to provide insights into the question: How do online review valence, review credibility, and the online review platform affect consumers' purchase intentions toward restaurants?
Conceptual background and hypotheses
Purchase Intention
Intentions indicate how hard people are willing to try, or how much effort they plan to exert, to perform particular actions (Sheeran, 2002). Purchase intention refers to the predetermination to buy a certain offer (in the case of restaurants, the intention to visit). Thus, it indicates how likely a consumer is to buy the product. The literature appears to support a significant association between online reviews and intentions (Ismagilova et al., 2019). Chen (2008) argued that online bookstore reviews have a greater influence on consumers' purchases than reviews from experts. Meanwhile, Sparks and Browning (2011) found a direct relationship between eWOM and purchases (i.e. positive/negative eWOM increases/reduces purchase intentions).
Asymmetric effect of valence
Online reviews vary in valence, that is, whether the information provided is positive or negative. Several studies indicate that, compared to reading neutral reviews or no reviews, negative reviews have a bigger impact on attitude toward the product than positive reviews (e.g. Floh, Koller, and Zauner (2013)). The larger impact of negative online reviews is consistent with the negativity effect, an assumption in psychology stating that negative information carries greater weight than equally strong positive information in forming judgements (Wu, 2013). Two underlying mechanisms explain the negativity effect: uncertainty reduction and prospect theory. Uncertainty reduction theory (Hu, Liu, and Zhang, 2008) suggests that when consumers have little knowledge about a product or its outcomes, they will try to reduce uncertainty and maximize the outcome value. Meanwhile, prospect theory suggests that people tend to be risk averse, valuing gains and losses differently. Negative reviews warn consumers about possible risks more than positive reviews do, thereby helping them reduce perceived risk. We propose:
H1a: A set of negative reviews has a stronger influence on purchase intention toward a
restaurant than a set of positive reviews.
Balanced review sets
Balanced review sets combine both positive and negative information. The literature states that balanced review sets have a low impact on consumers' product evaluations, since consensus information (when reviewers agree) about an offer is more persuasive than conflicting information (Lee and Cranage, 2014). According to Purnawirawan, De Pelsmacker, and Dens (2012), balanced review sets are perceived as less informative; the contradictory information leaves the reader at a loss. Lee and Cranage (2014) suggest that when all reviews in a review set are negative, consumers tend to blame the company and form a negative attitude, compared to when the review set is balanced. However, when a review set is balanced, consumers attribute the satisfaction or dissatisfaction more to the reviewer, or to circumstances beyond the company's control. As a result, balanced reviews are not likely to influence purchase intention. Thus, we propose:
H1b: A balanced set of reviews has a weaker influence on purchase intention toward a restaurant than either positive or negative review sets.
Perceived credibility
That negative reviews have a stronger influence on consumers' decisions can be explained by more than just the negativity effect. Credibility, defined as the extent to which one perceives the information provided as unbiased, believable, true, or factual (Qiu, Pang, and Lim, 2012), also plays a role.
Perceived credibility and valence
Previous studies show a difference in perceived review credibility between positive, negative, and balanced review sets. Though Lee and Cranage (2014) stated that eWOM consensus increases perceived credibility, Doh and Hwang (2009) indicated that the credibility of websites and online reviews can be damaged if all reviews are positively framed. The lack of credibility of positive reviews impairs the effect of positive online reviews on attitude and purchase intention. Thus:
H2a: A set of negative reviews is perceived as more credible than a set of positive reviews.
Though an individual balanced review has a positive impact on perceived credibility (Doh and Hwang, 2009), a set of balanced reviews is expected to have a negative effect on perceived credibility. Review consistency is an important cue for assessing a review's credibility (Cheung, Sia, and Kuan, 2012). Agreement increases the believability of information, while opposing information decreases it (Gershoff, Mukherjee, and Mukhopadhyay, 2007). The negative impact of balanced review sets on perceived credibility explains the low impact of balanced review sets on consumers' product evaluations and behavior. Hence:
H2b: A balanced set of reviews has a negative impact on the perceived review credibility.
Moderating effect of perceived credibility
The influence of review valence on purchase intention appears to be moderated by perceived review credibility. Studies indicate that highly credible reviews are more likely to persuade consumers and change their attitude and behavior in the direction of the valence (Pentina et al., 2018). However, when the credibility of a message is low, consumers resist its intent (Lee and Koo, 2012). Thus, the strength of the relationship between review valence and consumers' attitude and purchase intention depends on the perceived credibility of the review set. Hence:
H3: The impact of online reviews on purchase intention is stronger when online reviews
are perceived as credible.
Online platform
Platforms used to share word of mouth are increasingly diverse. We focus on platforms related to quasi-spontaneous communication and independent communication, referred to here as company websites and independent websites, which according to Meuter, McCabe, and Curran (2013) are the most preferred online review sources. Many companies provide a review platform on their website to stimulate consumer interaction. Findings from the literature suggest that companies can enhance their reputation and trust by facilitating such a platform. However, reviews on third-party websites tend to be more influential, because they are seen as more independent and unbiased (Meuter et al., 2013). Also, the credibility of reviews posted on independent websites is higher because of the companies' lack of control over the reviews (Tsao and Hsieh, 2015).
Moderating role of platform type
Platform type can strengthen or weaken the relationship between review valence and perceived credibility, and between review valence and purchase intention. Websites established by independent parties with a specific interest are more likely to be perceived as providing credible evaluations of the offer's attributes and functions, which enhances the likelihood that consumers adopt the information (Truong and Simmons, 2010). In the case of positive reviews on a company website, consumers may attribute the review to personal (or corporate), non-product-related motivations (i.e. increasing sales) and are thus less likely to follow the recommendations (Senecal and Nantel, 2004). Moreover, according to Mayzlin, Dover, and Chevalier (2014), review fraud is more likely to happen on company websites than on independent websites because independent websites impose hurdles. For example, to post a review on TripAdvisor, the reviewer needs to be a registered member, and reviews are screened against guidelines that include zero tolerance toward fake reviews. Such hurdles increase the value and perceived credibility of independent websites. As such, we propose:
H4: The effect of review valence on purchase intention is stronger when posted on an
independent website.
H5: The effect of review valence on credibility is stronger when posted on an independent
website.
The variables and the hypotheses derived above are summarized in the conceptual framework in Figure 1.
Method
Experimental design and procedures
To test the framework, we conducted an experiment in which review valence and review platform were the manipulated variables, and review credibility and purchase intention were the measured variables. More specifically, we asked participants to imagine planning an important dinner with colleagues and having to pick a restaurant.
Since it is important that the dinner be a pleasant experience, participants use online reviews of the restaurant to help them decide. We developed an online survey and randomly assigned participants to one of eight conditions, where each group had about the same number of subjects. The scenarios consisted of a set of reviews that was either positive, neutral, negative, or balanced, displayed on one of two platforms, the company website or an independent website (a 2x4 between-subjects factorial design).
Participants in groups 1 to 4 read a set of online reviews about a restaurant posted on a company website, while participants in groups 5 to 8 read a scenario with a set of reviews posted on an independent website. More specifically, group 1 read a positive review set on a company website. Similarly, on a company website, group 2 read a neutral review set (control group), group 3 read a negative review set, and group 4 a balanced review set. Following the same order of review sets, we assigned participants in groups 5 to 8 to the independent website scenarios. The survey continued with questions about the intention to visit the restaurant and perceived review credibility. Afterwards, the experiment contained manipulation checks to test familiarity with the restaurant and the website, the review valence, and the platform independence. The survey ended with demographic questions.
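To make the assignment procedure concrete, the following is a minimal sketch (not the authors' implementation) of how participants could be randomly allocated to the eight conditions of the 2x4 design with approximately equal group sizes; the condition labels come from the text, everything else is illustrative.

```python
import numpy as np
import pandas as pd

# The eight experimental conditions: 2 platforms x 4 review valences (groups 1-8).
platforms = ["company", "independent"]
valences = ["positive", "neutral", "negative", "balanced"]
conditions = [(p, v) for p in platforms for v in valences]

def assign_conditions(n_participants, seed=42):
    """Assign participants to the eight conditions so group sizes differ by at most one."""
    rng = np.random.default_rng(seed)
    reps = -(-n_participants // len(conditions))   # ceiling division
    pool = (conditions * reps)[:n_participants]    # near-equal cell counts
    order = rng.permutation(n_participants)        # random shuffle of the pool
    rows = [pool[i] for i in order]
    return pd.DataFrame(rows, columns=["platform", "valence"])

assignment = assign_conditions(256)
print(assignment.value_counts(["platform", "valence"]))  # roughly 32 per cell
```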
The sample consisted of 256 students from a university in the Netherlands. Students are desirable participants because of their familiarity with eWOM (Meuter et al., 2013). The distribution of men and women across conditions was balanced, although more women than men participated. Most respondents were between 17 and 25 years old (93.4%). The students were invited to the experiment via email lists and social media. To encourage participation, a gift card was randomly awarded to one of the respondents who entered the prize draw. To prevent unintended cultural influences, the survey was in Dutch and only Dutch students participated.
Pretesting and manipulation check
We conducted pre-tests to obtain feedback on the clarity of the questions, wording, and manipulations. In a pre-test with 15 participants, we requested feedback about the experiment. As a result, we made minor changes to the websites and deleted redundant questions. For the manipulation checks, participants rated the extent to which they thought the set of reviews was positive, negative, neutral, or balanced, using a five-point scale where 1 is extremely negative and 5 is extremely positive (Wu, 2013). To measure the level of (in)dependence of the two platforms, we used a five-point Likert scale containing the item 'the website is independent from the restaurant' (1 = strongly disagree and 5 = strongly agree).
To decrease the influence of previously formed attitudes toward the restaurant, we created a fictitious restaurant called 'Restaurant Max'. To check participants' familiarity with the restaurant, we asked whether they knew the restaurant. All respondents answered no to this question. Finally, we asked whether participants knew the website. 36% of the respondents who saw the independent website (n=125) indicated that they were familiar with the (fictitious) website.
Manipulations
We manipulated review valence and platform type. We simulated two websites to show a review set about the restaurant: an independent review website and a company website. For the independent website, the layout was based on www.iens.nl, a popular restaurant review website in the Netherlands. Similarly, the layout of the company website was based on an existing restaurant website. The company website included the fictitious name and logo of the restaurant, as well as an inactive navigational menu bar. On both websites, the set of reviews was placed in a central position.
To ensure consistency across the two platforms, the same positive, negative, neutral, and balanced review sets were used. Thus, we designed four review sets on both platforms. Each review set contained six reviews, making the webpage look full (as in most real situations) and discussing relevant information about the restaurant. Researchers in the restaurant domain agree that food quality, physical environment, employee service, and price are the most important aspects of the dining experience (Namkung and Jang, 2007). Therefore, these aspects were included in the review sets, except for the neutral set. The neutral set contained only one review, asking whether anyone had an opinion about the restaurant. Since neutral reviews do not provide useful information that can influence the consumer's decision, we use those scenarios as control groups. From the pre-test, it was clear that showing one neutral review was the best option for the control condition. A variety of websites that post restaurant reviews were visited to see how reviews are written. Moreover, the reviews were based on the literature and on real reviews from the independent online review website (www.iens.nl).
For more detailed information, please see Appendix A, which contains the questionnaire in Dutch, its appearance, the reviews translated from Dutch to English, and sources.
Measures
The measures are adopted from the literature. Purchase intention is measured using a two-item, seven-point semantic differential scale (Cronbach's alpha = .964) based on Spears and Singh (2004). The two items of purchase intention are 'definitely not/definitely' and 'probably not/probably' (Pan and Siemens, 2011). Review credibility is measured with a seven-point Likert scale containing the statement 'I think these reviews are credible' (Cheung, Luo, Sia, and Chen, 2009). (Questions can be found in Appendix A.)
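For reference, the scale reliability reported above can be computed from the raw item scores with the standard Cronbach's alpha formula. The sketch below assumes a data frame whose two purchase-intention items are stored in hypothetical columns pi_definitely and pi_probably; it is an illustration, not the authors' code.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the item sum)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical column names for the two purchase-intention items:
# alpha = cronbach_alpha(df[["pi_definitely", "pi_probably"]])
```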
Analysis
Table 1 provides descriptive statistics of purchase intention and credibility for each of the eight groups in the experiment. Both purchase intention and credibility have been mean centered, as is commonly done in this type of analysis, to decrease potential collinearity, improve interpretability, and reduce the influence of different scales.
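As a small illustration of this preprocessing step, mean centering simply subtracts each variable's sample mean; the toy data frame and column names below are assumptions, not the study's data.

```python
import pandas as pd

# Toy example of mean centering (the actual data are the 256 survey responses).
df = pd.DataFrame({"intention": [2.0, 5.5, 6.0], "credibility": [3.0, 6.0, 4.5]})
df["intention_c"] = df["intention"] - df["intention"].mean()
df["credibility_c"] = df["credibility"] - df["credibility"].mean()
print(df[["intention_c", "credibility_c"]].mean())  # both means are now (numerically) zero
```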
We ran two linear regressions that reflect the conceptual framework depicted in Figure 1. Since the regressions are nested, there is no gain from estimating them in a multiple-equation setting (i.e. seemingly unrelated regression equations), since that would yield the same results as estimating each of them independently.
In the first regression, the dependent variable is purchase intention, regressed on valence, platform, credibility, and the interactions of valence with credibility and platform. Valence is a categorical variable that takes the values neutral, balanced, negative, and positive. Platform is a categorical variable that takes the values company website and independent website. This yields the following regression:
\begin{aligned}
\text{Purchase Intention} ={}& \beta_0 + \beta_1\,\text{Credibility} + \beta_2\,\text{Valence}_{\text{Balanced}} + \beta_3\,\text{Valence}_{\text{Negative}} + \beta_4\,\text{Valence}_{\text{Positive}} \\
&+ \beta_5\,\text{Platform}_{\text{Independent}} + \beta_6\,\text{Credibility}\times\text{Valence}_{\text{Balanced}} + \beta_7\,\text{Credibility}\times\text{Valence}_{\text{Negative}} \\
&+ \beta_8\,\text{Credibility}\times\text{Valence}_{\text{Positive}} + \beta_9\,\text{Platform}_{\text{Independent}}\times\text{Valence}_{\text{Balanced}} \\
&+ \beta_{10}\,\text{Platform}_{\text{Independent}}\times\text{Valence}_{\text{Negative}} + \beta_{11}\,\text{Platform}_{\text{Independent}}\times\text{Valence}_{\text{Positive}} + \varepsilon
\end{aligned}
\tag{1}
In the second regression, the dependent variable is credibility, regressed on valence, platform, and the interaction between platform and valence. This yields:
\begin{aligned}
\text{Credibility} ={}& \gamma_0 + \gamma_1\,\text{Valence}_{\text{Balanced}} + \gamma_2\,\text{Valence}_{\text{Negative}} + \gamma_3\,\text{Valence}_{\text{Positive}} + \gamma_4\,\text{Platform}_{\text{Independent}} \\
&+ \gamma_5\,\text{Platform}_{\text{Independent}}\times\text{Valence}_{\text{Balanced}} + \gamma_6\,\text{Platform}_{\text{Independent}}\times\text{Valence}_{\text{Negative}} \\
&+ \gamma_7\,\text{Platform}_{\text{Independent}}\times\text{Valence}_{\text{Positive}} + u
\end{aligned}
\tag{2}
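The two specifications could be estimated with ordinary least squares along the following lines. This is a sketch rather than the authors' code: it assumes a data frame df holding the survey responses, with mean-centered outcomes intention_c and credibility_c, a valence column with levels neutral/balanced/negative/positive, and a platform column with levels company/independent, and it uses treatment coding with neutral and company as the reference categories, matching the dummies reported in Tables 2 and 3.

```python
import statsmodels.formula.api as smf

valence = "C(valence, Treatment(reference='neutral'))"
platform = "C(platform, Treatment(reference='company'))"

# Equation (1): purchase intention on credibility, valence, platform, and the
# credibility x valence and platform x valence interactions.
m1 = smf.ols(
    f"intention_c ~ credibility_c*{valence} + {platform}*{valence}", data=df
).fit()

# Equation (2): credibility on valence, platform, and their interaction.
m2 = smf.ols(f"credibility_c ~ {platform}*{valence}", data=df).fit()

print(m1.summary())
print(m2.summary())
```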
Results
Manipulation checks
We used an ANOVA to test for differences in perceived review valence, finding significant differences between the four valence groups, F(3, 125.17) = 665.34, p < 0.001 (Welch's F). A Games-Howell post-hoc test shows significant differences in perceived review valence between the groups exposed to positive (Mean = 4.80), negative (Mean = 1.10), and neutral review sets (Mean = 3.12) (p < 0.001). Groups who read balanced review sets (Mean = 2.87) scored perceived valence significantly differently from groups who read positive and negative review sets (p < 0.001). However, no significant difference in perceived valence exists between the groups exposed to neutral and balanced review sets (p = 0.28). To conduct a manipulation check for platform independence, we compare the groups who visited the company website and the independent website. On average, participants rated the company website as less independent (Mean = 3.00, SE = 0.11) than the independent website (Mean = 3.65, SE = 0.11), t(254) = -4.11, p < 0.001. These results suggest that the manipulations were successful.
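These checks could be reproduced roughly as follows (again a sketch under the same data assumptions, not the authors' code), with hypothetical columns valence_check and independence_check holding the manipulation-check ratings; the pingouin package provides Welch's ANOVA and the Games-Howell post-hoc test, and SciPy the independent-samples t-test.

```python
import pingouin as pg
from scipy import stats

# Welch's ANOVA for perceived valence across the four valence conditions.
welch = pg.welch_anova(dv="valence_check", between="valence", data=df)

# Games-Howell post-hoc comparisons (robust to unequal variances).
posthoc = pg.pairwise_gameshowell(dv="valence_check", between="valence", data=df)

# Independent-samples t-test on perceived platform independence.
company = df.loc[df["platform"] == "company", "independence_check"]
independent = df.loc[df["platform"] == "independent", "independence_check"]
t_stat, p_val = stats.ttest_ind(company, independent)

print(welch)
print(posthoc)
print(t_stat, p_val)
```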
Hypotheses testing
Results from Table 2 suggest that both negative and positive reviews exert a significant influence on purchase intention. However, the estimate of negative valence (-2.428) is farther from zero than the estimate of positive valence (1.835). To test hypothesis 1a, we conduct an F-test assessing the asymmetry of the negative and positive valence effects. We restrict the coefficient of negative valence to the negative of the positive valence estimate (-1.835); the F-test rejects this restriction (p < 0.001), indicating that the unrestricted coefficient of -2.428 is significantly different from -1.835 and thus supporting hypothesis 1a.
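Using the fitted model m1 from the sketch above, this kind of linear restriction can be tested with an F-test on the coefficient vector. The code below locates the valence dummies by name rather than hard-coding statsmodels' generated labels; the restriction value is the positive-valence estimate reported in Table 2, and the whole block is an illustration of the test described in the text, not the authors' code.

```python
import numpy as np

names = list(m1.params.index)
i_neg = next(i for i, n in enumerate(names) if "T.negative" in n and ":" not in n)
i_pos = next(i for i, n in enumerate(names) if "T.positive" in n and ":" not in n)

# F-test of the restriction beta_negative = -1.835 (the sign-flipped positive estimate).
R = np.zeros((1, len(names)))
R[0, i_neg] = 1.0
print(m1.f_test((R, np.array([-1.835]))))

# A related symmetry test that treats both coefficients as estimated:
# H0: beta_negative + beta_positive = 0.
R2 = np.zeros((1, len(names)))
R2[0, i_neg] = 1.0
R2[0, i_pos] = 1.0
print(m1.f_test((R2, np.array([0.0]))))
```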
For hypothesis 1b, Table 2 shows that the balanced valence coefficient is negative but not significantly different from zero. Thus, the main effect of balanced valence on purchase intention is weaker than the highly significant effects of the positive and negative review sets, offering support for hypothesis 1b.
Hypothesis 2 deals with the effect of valence on credibility. For hypotheses 2a and 2b, we look at Table 3. Hypothesis 2a concerns the asymmetric effect of the reviews, where a negative review set is expected to be perceived as more credible than a set of positive reviews. The results support hypothesis 2a: the estimate of negative valence is large and highly significant, while positive valence is not significantly different from zero. Hypothesis 2b is also supported, as the balanced set of reviews shows the expected negative coefficient at the 5% significance level.
For hypothesis 3, we investigate the interaction effects between review valence and credibility. Table 2 shows that these interactions are relevant for balanced and negative reviews. To explore these associations beyond the mean value of credibility, we plot the marginal effects of valence on purchase intention over the whole range of credibility values, as shown in Figure 2. We observe that positive and negative valence are significantly different from the other conditions except at very low levels of credibility. At low levels of credibility, consumers may not trust the information, so the role of valence becomes irrelevant. As credibility increases, the effect of positive and negative valence on purchase intention appears to grow. The results in Table 2 suggest that the effect is more pronounced for negative valence, as the coefficient of its interaction with credibility is significant. All in all, these findings support hypothesis 3.
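A plot in the spirit of Figure 2 can be produced from the fitted model m1 by predicting purchase intention for each valence condition over a grid of credibility values, holding the platform at the company website. This is a sketch under the earlier assumptions about df, not the code behind the published figure.

```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

cred_grid = np.linspace(df["credibility_c"].min(), df["credibility_c"].max(), 50)

fig, ax = plt.subplots()
for v in ["neutral", "positive", "negative", "balanced"]:
    newdata = pd.DataFrame(
        {"credibility_c": cred_grid, "valence": v, "platform": "company"}
    )
    pred = m1.get_prediction(newdata).summary_frame()  # predicted mean and 95% CI
    ax.plot(cred_grid, pred["mean"], label=v)
    ax.fill_between(cred_grid, pred["mean_ci_lower"], pred["mean_ci_upper"], alpha=0.2)

ax.set_xlabel("Credibility (mean centered)")
ax.set_ylabel("Predicted purchase intention")
ax.legend(title="Valence")
plt.show()
```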
The balanced valence condition offers some interesting results. As credibility increases, its effect on purchase intention also increases. However, for positive values of credibility, the effect of balanced valence is not different from that of neutral reviews, meaning that balanced reviews do not convey useful information to consumers. As credibility decreases, the effect of balanced reviews on purchase intention also decreases. This result complements hypothesis 1b, for which we found that the main effect of balanced valence is negative but not significantly different from zero. Notice, however, that at low levels of credibility the effect of balanced reviews becomes significantly negative.
Hypotheses 4 and 5 deal with the role of the platform. Results from Tables 2 and 3 suggest that the participants in the experiment were not influenced by whether the platform is independent or provided by the company. The main effects and interaction effects between platform and valence are not significantly different from zero.
Discussion
We explore the effect of online reviews on consumers' purchase intention toward restaurants, and how this effect is influenced by review valence, perceived review credibility, and review platform. We use an experimental 2x4 between-subjects factorial design with two platforms and four types of review valence.
In line with the literature, we find that credibility and valence influence purchase intention. We explore the asymmetry of the effect, finding that negative reviews are more influential, which is consistent with the negativity effect found in the eWOM literature (e.g. Floh et al. (2013)). Also consistent with previous research in other industries (e.g. Lee and Cranage (2014)), balanced review sets have no impact on intention. However, our analysis reveals a new insight: at low levels of credibility, balanced review sets decrease purchase intention. We argue that the lack of trust in the reviews, combined with the conflicting information in the balanced set, increases consumers' risk perception and thus decreases purchase intention. This finding has relevant managerial implications. For instance, when incorporating reviews on a company website, it is important to leave control to the consumers. Desirable strategies for dealing with negative reviews include responding and apologizing. However, attempting to decrease the credibility and impact of a negative review via a balanced review set would have negative consequences on purchase intention if credibility becomes very low.
Review valence also influences perceived review credibility, and the asymmetric effect of reviews appears for credibility as well. As seen in other contexts, people tend to trust negative reviews more. However, people become confused by balanced sets, and may remain agnostic about credibility when reading positive reviews. In terms of credibility, this implies that the managerial response is mainly about damage control: positive reviews do not matter much, negative reviews increase credibility but decrease purchase intention, and balanced sets decrease credibility. As argued above, it is crucial to maintain a credibility level at or above average for a balanced set to counteract negative reviews.
We found no difference in the persuasiveness of the review sets across the two platforms, and hence no difference in purchase intention. Although independent websites take more measures to prevent the posting of false reviews, translating those measures into higher perceived credibility proves challenging. These results challenge the expectation that company websites are at a disadvantage as a review platform. Hence, developing a review platform on the company website is a recommendable option for marketers.
Although our findings are not entirely new to the literature, the effects of online reviews have been shown to be quite specific to their contexts (Ismagilova et al., 2019), and many results in the literature therefore lack an overall consensus. We offer a more specific setting, evaluated for the restaurant sector, and a richer analysis of the moderating effect of credibility.
The limitations of this study motivate future research. We use a sample of Dutch college students. It would be interesting to analyze a broader sample, controlling for demographics or cultural and personal attributes, for instance identifying how risk perception affects responses to online reviews. Another limitation is the focus on a single company website and a single independent website. Further research may compare the effect of online reviews across multiple platform types. We also did not include a rating system, which is common on many websites. For instance, Amazon and TripAdvisor rate products or experiences with stars, providing a more direct and quantifiable value for the review. Combining ratings with analysis of the review content (text analysis and sentiment analysis) is an avenue for further research.
References
Babić Rosario, A., F. Sotgiu, K. De Valck, and T. H. Bijmolt (2016): “The Effect of Electronic Word of Mouth on Sales: A Meta-Analytic Review of Platform, Product, and Metric Factors,” Journal of Marketing Research, 53, 297–318, URL http://journals.sagepub.com/doi/10.1509/jmr.14.0380.
Breazeale, M. (2009): “Forum - Word of Mouse - An Assessment of Electronic Word-of-
mouth Research,” International Journal of Market Research, 51, 297–318, URL http:
//journals.sagepub.com/doi/10.2501/S1470785309200566.
Chen, Y.-F. (2008): “Herd behavior in purchasing books online,” Computers in Hu-
man Behavior, 24, 1977–1992, URL https://linkinghub.elsevier.com/retrieve/
pii/S0747563207001458.
Cheung, C., C.-L. Sia, and K. Kuan (2012): “Is This Review Believable? A Study of
Factors Affecting the Credibility of Online Consumer Reviews from an ELM Perspective,”
Journal of the Association for Information Systems, 13, 618–635, URL http://aisel.
aisnet.org/jais/vol13/iss8/2/.
Cheung, M. Y., C. Luo, C. L. Sia, and H. Chen (2009): “Credibility of Electronic Word-
of-Mouth: Informational and Normative Determinants of On-line Consumer Recom-
mendations,” International Journal of Electronic Commerce, 13, 9–38, URL https:
//www.tandfonline.com/doi/full/10.2753/JEC1086-4415130402.
Chiou, J.-S. and C. Cheng (2003): “Should a company have message boards on its
Web sites?” Journal of Interactive Marketing, 17, 50–61, URL https://linkinghub.
elsevier.com/retrieve/pii/S109499680370139X.
Doh, S.-J. and J.-S. Hwang (2009): “How Consumers Evaluate eWOM (Electronic Word-
of-Mouth) Messages,” CyberPsychology & Behavior, 12, 193–197, URL http://www.
liebertpub.com/doi/10.1089/cpb.2008.0109.
Economic Research Service (ERS/USDA) (2019): “Food Dollar Series,” URL https://www.
ers.usda.gov/data-products/food-dollar-series/.
Floh, A., M. Koller, and A. Zauner (2013): “Taking a deeper look at online reviews:
The asymmetric effect of valence intensity on shopping behaviour,” Journal of Market-
ing Management, 29, 646–670, URL http://www.tandfonline.com/doi/abs/10.1080/
0267257X.2013.776620.
Gershoff, A. D., A. Mukherjee, and A. Mukhopadhyay (2007): “Few Ways to Love, but
Many Ways to Hate: Attribute Ambiguity and the Positivity Effect in Agent Evalua-
tion,” Journal of Consumer Research, 33, 499–505, URL https://academic.oup.com/
jcr/article-lookup/doi/10.1086/510223.
Hennig-Thurau, T., K. P. Gwinner, G. Walsh, and D. D. Gremler (2004): “Electronic word-of-mouth via consumer-opinion platforms: What motivates consumers to articulate themselves on the Internet?” Journal of Interactive Marketing, 18, 38–52, URL https://linkinghub.elsevier.com/retrieve/pii/S1094996804700961.
Hu, N., L. Liu, and J. J. Zhang (2008): “Do online reviews affect product sales? The role of
reviewer characteristics and temporal effects,” Information Technology and Management,
9, 201–214, URL http://link.springer.com/10.1007/s10799-008-0041-2.
Ismagilova, E., E. L. Slade, N. P. Rana, and Y. K. Dwivedi (2019): “The Effect of Electronic Word of Mouth Communications on Intention to Buy: A Meta-Analysis,” Information Systems Frontiers, URL https://doi.org/10.1007/s10796-019-09924-y.
Lee, C. H. and D. A. Cranage (2014): “Toward Understanding Consumer Processing of Nega-
tive Online Word-of-Mouth Communication,” Journal of Hospitality & Tourism Research,
38, 330–360, URL http://journals.sagepub.com/doi/10.1177/1096348012451455.
Lee, K.-T. and D.-M. Koo (2012): “Effects of attribute and valence of e-WOM on message adoption: Moderating roles of subjective knowledge and regulatory focus,” Computers in Human Behavior, 28, 1974–1984, URL https://linkinghub.elsevier.com/retrieve/pii/S074756321200146X.
Mayzlin, D., Y. Dover, and J. Chevalier (2014): “Promotional Reviews: An Empirical Inves-
tigation of Online Review Manipulation,” American Economic Review, 104, 2421–2455,
URL http://pubs.aeaweb.org/doi/10.1257/aer.104.8.2421.
Meuter, M. L., D. B. McCabe, and J. M. Curran (2013): “Electronic Word-of-Mouth Versus
Interpersonal Word-of-Mouth: Are All Forms of Word-of-Mouth Equally Influential?” Ser-
vices Marketing Quarterly, 34, 240–256, URL http://www.tandfonline.com/doi/abs/
10.1080/15332969.2013.798201.
Namkung, Y. and S. Jang (2007): “Does Food Quality Really Matter in Restaurants?
Its Impact On Customer Satisfaction and Behavioral Intentions,” Journal of Hospitality
& Tourism Research, 31, 387–409, URL http://journals.sagepub.com/doi/10.1177/
1096348007299924.
Pan, Y. and J. C. Siemens (2011): “The differential effects of retail density: An investigation
of goods versus service settings,” Journal of Business Research, 64, 105–112, URL https:
//linkinghub.elsevier.com/retrieve/pii/S0148296310000548.
Park, C. and T. M. Lee (2009): “Information direction, website reputation and eWOM
effect: A moderating role of product type,” Journal of Business Research, 62, 61–67, URL
https://linkinghub.elsevier.com/retrieve/pii/S0148296308000040.
Pentina, I., A. A. Bailey, and L. Zhang (2018): “Exploring effects of source similarity,
message valence, and receiver regulatory focus on yelp review persuasiveness and pur-
chase intentions,” Journal of Marketing Communications, 24, 125–145, URL https:
//www.tandfonline.com/doi/full/10.1080/13527266.2015.1005115.
Purnawirawan, N., P. De Pelsmacker, and N. Dens (2012): “Balance and Sequence in Online
Reviews: How Perceived Usefulness Affects Attitudes and Intentions,” Journal of Interac-
tive Marketing, 26, 244–255, URL https://linkinghub.elsevier.com/retrieve/pii/
S1094996812000229.
Qiu, L., J. Pang, and K. H. Lim (2012): “Effects of conflicting aggregated rating on eWOM
review credibility and diagnosticity: The moderating role of review valence,” Decision Sup-
port Systems, 54, 631–643, URL https://linkinghub.elsevier.com/retrieve/pii/
S0167923612002357.
Schuckert, M., X. Liu, and R. Law (2015): “Hospitality and Tourism Online Reviews: Recent
Trends and Future Directions,” Journal of Travel & Tourism Marketing, 32, 608–621, URL
http://www.tandfonline.com/doi/full/10.1080/10548408.2014.933154.
Senecal, S. and J. Nantel (2004): “The influence of online product recommendations on
consumers’ online choices,” Journal of Retailing, 80, 159–169, URL https://linkinghub.
elsevier.com/retrieve/pii/S0022435904000193.
Shaw, G., A. Bailey, and A. Williams (2011): “Aspects of service-dominant logic and
its implications for tourism management: Examples from the hotel industry,” Tourism
Management, 32, 207–214, URL https://linkinghub.elsevier.com/retrieve/pii/
S0261517710001044.
Sheeran, P. (2002): “Intention—Behavior Relations: A Conceptual and Empirical Review,”
European Review of Social Psychology, 12, 1–36, URL http://www.tandfonline.com/
doi/abs/10.1080/14792772143000003.
Sparks, B. A. and V. Browning (2011): “The impact of online reviews on hotel booking
intentions and perception of trust,” Tourism Management, 32, 1310–1323, URL https:
//linkinghub.elsevier.com/retrieve/pii/S0261517711000033.
Spears, N. and S. N. Singh (2004): “Measuring Attitude toward the Brand and Purchase
Intentions,” Journal of Current Issues & Research in Advertising, 26, 53–66, URL http:
//www.tandfonline.com/doi/abs/10.1080/10641734.2004.10505164.
Truong, Y. and G. Simmons (2010): “Perceived intrusiveness in digital advertising: strategic
marketing implications,” Journal of Strategic Marketing, 18, 239–256, URL http://www.
tandfonline.com/doi/abs/10.1080/09652540903511308.
Tsao, W.-C. and M.-T. Hsieh (2015): “eWOM persuasiveness: do eWOM platforms and
product type matter?” Electronic Commerce Research, 15, 509–541, URL http://link.
springer.com/10.1007/s10660-015-9198-z.
Figures
[Figure 1: Framework. Conceptual model in which review valence (neutral, positive, negative, balanced) affects purchase intention (H1) and perceived credibility (H2), credibility affects purchase intention and moderates the valence effect (H3), and platform (company, independent) moderates the effects of valence on purchase intention (H4) and on credibility (H5).]
[Figure 2: Average marginal effects on purchase intention for the interaction of valence and credibility. The x-axis shows credibility (mean centered, roughly -2 to 2), the y-axis purchase intention, with separate lines for neutral, positive, negative, and balanced valence.]
Tables
Table 1: Descriptive Statistics

                           Purchase Intention      Credibility
Valence    Platform        Mean       SD           Mean       SD
Neutral    Company          0.300     0.848        -0.235     1.619
Neutral    Independent      0.222     1.241        -0.343     1.118
Positive   Company          2.209     0.778         0.273     1.468
Positive   Independent      1.984     1.198         0.151     1.508
Negative   Company         -2.356     0.604         0.957     1.052
Negative   Independent     -2.224     0.733         0.575     1.387
Balanced   Company         -0.238     1.392        -0.967     1.521
Balanced   Independent     -0.172     1.350        -0.710     1.392

Notes: Values have been mean centered.
Table 2: Estimation results (dependent variable: Purchase Intention)
Variable Estimate Std.error t-stat. p-value Sign.
(Intercept) 0.312 0.195 1.599 0.111
Credibility 0.051 0.101 0.506 0.613
Valence Balanced -0.180 0.273 -0.660 0.510
Valence Negative -2.428 0.275 -8.819 0.000 ***
Valence Positive 1.835 0.254 7.236 0.000 ***
Platform Independent -0.073 0.277 -0.263 0.793
Credibility x Valence Balanced 0.331 0.133 2.489 0.013 *
Credibility x Valence Negative -0.302 0.141 -2.134 0.034 *
Credibility x Valence Positive 0.175 0.128 1.370 0.172
Platform Independent x Valence Balanced 0.041 0.373 0.110 0.913
Platform Independent x Valence Negative 0.110 0.368 0.297 0.766
Platform Independent x Valence Positive -0.125 0.360 -0.346 0.729
Notes: ***, ** and * are significant at the 0.1 percent, 1 percent and 5 percent levels, respectively.
Table 3: Estimation results (dependent variable: Credibility)
Variable Estimate Std.error t-stat. p-value Sign.
(Intercept) -0.235 0.274 -0.858 0.392
Valence Balanced -0.732 0.367 -1.996 0.047 *
Valence Negative 1.192 0.364 3.272 0.001 **
Valence Positive 0.508 0.356 1.427 0.155
Platform Independent -0.108 0.392 -0.275 0.784
Platform Independent x Valence Balanced 0.365 0.527 0.693 0.489
Platform Independent x Valence Negative -0.275 0.518 -0.530 0.597
Platform Independent x Valence Positive -0.014 0.509 -0.027 0.979
Notes: ***, ** and * are significant at the 0.1 percent, 1 percent and 5 percent levels, respectively.