Article
Artificial Intelligence in Utilitarian vs.
Hedonic Contexts: The “Word-of-
Machine” Effect
Chiara Longoni and Luca Cian

Chiara Longoni is Assistant Professor of Marketing, Questrom School of Business, Boston University, USA (email: clongoni@bu.edu). Luca Cian is Assistant Professor of Marketing, Darden School of Business, University of Virginia, USA (email: cianl@darden.virginia.edu).
Abstract
Rapid development and adoption of AI, machine learning, and natural language processing applications challenge managers and
policy makers to harness these transformative technologies. In this context, the authors provide evidence of a novel “word-of-
machine” effect, the phenomenon by which utilitarian/hedonic attribute trade-offs determine preference for, or resistance to, AI-
based recommendations compared with traditional word of mouth, or human-based recommendations. The word-of-machine
effect stems from a lay belief that AI recommenders are more competent than human recommenders in the utilitarian realm and
less competent than human recommenders in the hedonic realm. As a consequence, the importance or salience of utilitarian
attributes determines preference for AI recommenders over human ones, and the importance or salience of hedonic attributes
determines resistance to AI recommenders over human ones (Studies 1–4). The word-of-machine effect is robust to attribute
complexity, number of options considered, and transaction costs. The word-of-machine effect reverses for utilitarian goals if a
recommendation needs to be matched to a person's unique preferences (Study 5) and is eliminated in the case of human–AI hybrid decision
making (i.e., augmented rather than artificial intelligence; Study 6). An intervention based on the consider-the-opposite protocol
attenuates the word-of-machine effect (Studies 7a–b).
Keywords
algorithms, artificial intelligence, augmented intelligence, hedonic and utilitarian consumption, recommendations, technology
Online supplement: https://doi.org/10.1177/0022242920957347
Recommendations driven by artificial intelligence (AI) are
pervasive in today’s marketplace. Ten years ago, Amazon
introduced its innovative item-based collaborative filtering
algorithm, which generates recommendations by scanning
through a person’s past purchased or rated items and pairing
them to similar items. Since then, more and more companies
are leveraging advances in AI, machine learning, and natural
language processing capabilities to provide relevant and in-
the-moment recommendations. For example, Netflix and
Spotify use AI and deep learning to monitor a user's choices
and provide recommendations of movies or music. Beauty
brands such as Proven, Curology, and Function of Beauty use
AI to make recommendations about skincare, haircare, and
makeup. Real estate services such as OJO Labs, REX Real
Estate, and Roof.ai have replaced human real estate agents
with chatbots powered by AI. AI-driven recommendations are
also pervading the public sector. For example, the New York
City Department of Social Services uses AI to give citizens
recommendations about disability benefits, food assistance,
and health insurance.
In response to the proliferation of AI-enabled recommenda-
tions and building on long-standing research on actuarial judg-
ments (Dawes 1979; Grove and Meehl 1996; Meehl 1954),
recent marketing research has focused on whether consumers
will be receptive to algorithmic advice in various domains
(Castelo, Bos, and Lehmann 2019; Dietvorst, Simmons, and
Massey 2014; Leung, Paolacci, and Puntoni 2019; Logg, Min-
son, and Moore 2019; Longoni, Bonezzi, and Morewedge
2019). However, no prior empirical investigation has system-
atically explored if hedonic/utilitarian trade-offs in decision
making determine preference for, or resistance to, AI-based
(vs. human-based) recommendations.
We focus our investigation on hedonic/utilitarian attribute
trade-offs because of their influence on both consumer choice
and attitudes (Bhargave, Chakravarti, and Guha 2015;
Crowley, Spangenberg, and Hughes 1991). Specifically, we
examine when and why hedonic/utilitarian attribute trade-offs
in decision making influence whether people prefer or resist
AI recommenders. This question is of pivotal importance for
managers operating in both the private and public sectors who
are looking to harness the potential of AI-driven
recommendations.
Across nine studies and using a broad array of both attitu-
dinal and behavioral measures, we provide evidence of a
“word-of-machine” effect. We define “word of machine” as
the phenomenon by which hedonic/utilitarian attribute trade-
offs determine preference for, or resistance to, AI-based rec-
ommendations compared with traditional word of mouth, or
human-based recommendations. We suggest that the word-
of-machine effect stems from a lay belief about differential
competence perceptions regarding AI and human recommen-
ders. Specifically, we show that people believe AI recommen-
ders are more competent than human recommenders to assess
utilitarian attribute value and generate utilitarian-focused rec-
ommendations. By contrast, people believe that AI recommen-
ders are less competent than human recommenders to assess
hedonic attribute value and generate hedonic-focused recom-
mendations. As a consequence, and as compared with human
recommenders, individuals are more (less) likely to choose AI
recommenders when utilitarian (hedonic) attributes are impor-
tant or salient, such as when a utilitarian (hedonic) goal is
activated.
Our research is both theoretically novel and substantively
impactful. A first set of theoretical contributions relates to
research on the psychology of automation and on human–
technology interaction (Dawes 1979; Grove and Meehl
1996; Meehl 1954). The pervasiveness of AI-driven recom-
mendations has led to a burgeoning body of research exam-
ining whether consumers are receptive to the advice of
algorithms, statistical models, and artificial intelligence
(Dietvorst, Simmons, and Massey 2014; Leung, Paolacci,
and Puntoni 2019; Longoni, Bonezzi, and Morewedge
2019). With respect to this literature, we make three novel
contributions. First, we extend it by addressing the previ-
ously unexplored question of when and why hedonic/utili-
tarian trade-offs in decision making influence preference for
or resistance to AI recommenders. Second, we show under
what circumstances AI-driven recommendations are pre-
ferred to, and therefore more effective, than human ones:
when utilitarian attributes are relatively more important or
salient than hedonic ones. These results are especially note-
worthy, as most research in this area has documented a
robust and generalized resistance to algorithmic advice (for
exceptions, see Castelo, Bos, and Lehmann 2019; Dietvorst,
Simmons, and Massey 2016; Logg, Minson, and Moore
2019). Third, we explore under what circumstances consu-
mers will be amenable to AI recommenders in the context
of human–AI partnerships: when AI supports rather than
replaces a human. These results are also novel as research-
ers have just begun devising AI systems capable of deciding
when to defer (vs. not defer) to a human (Hao 2020), and
empirical investigations are yet to examine if consumers
will embrace such hybrid human–AI decision making.
Our research makes a second theoretical contribution to
the literature on hedonic and utilitarian consumption (Alba
and Williams 2013; Khan and Dhar 2010; Moreau and Herd
2009; Whitley, Trudel, and Kurt 2018). Prior research in
this area has examined how the evaluation of hedonic and
utilitarian products depends on characteristics of the task,
locus of choice, and justifiability of choice (e.g., Bazerman,
Tenbrunsel, and Wade-Benzoni 1998; Botti and McGill
2011; Okada 2005). However, research in this area has not
addressed the question of whether shifts in hedonic/utilitar-
ian trade-offs in decision making determine preference for
the source of a recommendation (e.g., an AI vs. a human
recommender). Recent developments of AI have brought
this question to the fore, making it of critical importance
for companies seeking to leverage the potential of AI-driven
recommendations.
From a managerial perspective, our results are useful for
companies in both the private and public sectors that are look-
ing to leverage AI recommenders to better reach their custom-
ers. As we investigate when consumers prefer AI over human
recommenders, our findings are useful for companies debating
if and how to effectively leverage AI-based recommendation
systems. Our findings have implications for a host of marketing
decisions. For instance, our results indicate that a shift away
from hedonic attributes and toward utilitarian attributes leads to
consumers preferring AI recommenders. Accordingly, AI
recommenders may be more aligned with functional position-
ing strategies than experiential ones. In addition, emphasizing
utilitarian benefits may be relatively more impactful with an
AI-based system than emphasizing hedonic benefits. Taken
together, our research and findings provide actionable insights
for managers looking for ways to leverage AI to orchestrate
consumer journeys so as to successfully move customers
through the funnel, increase the likelihood of successful trans-
actions, and, overall, optimize the customer experience at each
phase of the journey.
Theoretical Development
Hedonic and Utilitarian Consumption
Although consumption involves both hedonic and utilitarian
considerations, consumers tend to view products as either pre-
dominantly hedonic or utilitarian (for a review, see Khan, Dhar,
and Wertenbroch 2005). Hedonic consumption is primarily
affectively driven, based on sensory or experiential pleasure,
reflects affective benefits, and is assessed on the basis of the
degree to which a product is rewarding in itself (Botti and
McGill 2011; Crowley, Spangenberg, and Hughes 1991; Hol-
brook 1994). Utilitarian consumption is instead more
cognitively driven, based on functional and instrumental goals,
reflects functional benefits, and is assessed on the basis of the
degree to which a product is a means to an end (Botti and
McGill 2011; Crowley, Spangenberg, and Hughes 1991;
Holbrook 1994).
Prior research on hedonic/utilitarian consumption has
focused on the effect of characteristics of the task on product
judgments. For instance, choice tasks tend to favor utilitarian
options, whereas rating tasks tend to favor hedonic options
(Bazerman, Tenbrunsel, and Wade-Benzoni 1998; Okada
2005), and forfeiture increases the relative salience of hedonic
attributes compared to acquisition (Dhar and Wertenbroch
2000). Justifiability leads people to assign greater weight to
utilitarian (vs. hedonic) options (Okada 2005), and hedonic
(vs. utilitarian) choices are associated with greater perceived
personal causality (Botti and McGill 2011).
Although spanning over a decade, research on hedonic/uti-
litarian consumption has not yet addressed the question of
whether hedonic and utilitarian trade-offs influence preference
for the source of a recommendation (AI vs. human). This ques-
tion has come to the fore given its importance for managers
looking to leverage the potential of algorithmic recommenda-
tions. We discuss prior research on algorithmic recommenda-
tions in the next section.
(Resistance to) Algorithmic Recommendations
Ever since seminal work on statistical and actuarial predictive
models was published (Dawes 1979; Grove and Meehl 1996;
Meehl 1954), a large body of research has documented how
statistical/actuarial models outperform clinical/human judg-
ments in predicting a host of events, such as students’ and
employees’ performance (Dawes 1979) and market demand
(Sanders and Manrodt 2003). Despite the superior accuracy
of algorithmic models, people tend to eschew them. With only
a few exceptions (Castelo, Bos, and Lehmann 2019; Dietvorst,
Simmons, and Massey 2016; Logg, Minson, and Moore 2019),
most of the extant literature has shown that people resist the
advice of a statistical algorithm. For instance, recent research in
the medical domain has shown that consumers may be more
reluctant to utilize medical care delivered by AI providers than
by comparable human providers (Longoni, Bonezzi, and Mor-
ewedge 2019; 2020). Corporate settings show similar patterns,
with recruiters (Highhouse 2008) and auditors (Boatsman,
Moeckel, and Pei 1997) trusting their judgment and predictions
more than algorithms.
There are numerous reasons why people resist algorithmic
recommendations. People (erroneously) believe that algo-
rithms are unable to learn and improve (Dawes 1979; High-
house 2008) and therefore lose confidence in algorithms when
they see them err (Dietvorst, Simmons, and Massey 2014).
People also believe that algorithms assume the world to be
orderly, rigid, and stable and therefore cannot take into consid-
eration uncertainty (Grove and Meehl 1996) and a person’s
uniqueness (Longoni, Bonezzi, and Morewedge 2019). Resis-
tance to algorithmic advice may also be borne out of general-
ized concerns, such as people’s fear of being reduced to “mere
numbers” (Dawes 1979) and mistrust of algorithms’ lack of
empathy (Grove and Meehl 1996).
We extend this literature and show circumstances under
which people prefer (and not just resist) algorithmic recom-
mendations. Specifically, we examine how and why hedonic/
utilitarian trade-offs determine preference for, or resistance to,
AI recommenders, as articulated in the next section.
The Word-of-Machine Effect: Utilitarian/Hedonic
Trade-offs Determine Preference for (or Resistance to)
AI Recommenders
We hypothesize a word-of-machine effect, whereby hedonic
and utilitarian trade-offs determine preference for or resistance
to AI recommenders compared to human ones. We suggest that
the word-of-machine effect stems from consumers’ differing
competence perceptions of AI and human recommenders in
assessing attribute value and generating recommendations.
Specifically, we suggest that people believe AI recommenders
to be more (less) competent to assess utilitarian (hedonic) attri-
bute value and generate utilitarian-focused (hedonic-focused)
recommendations than human recommenders.
These predictions rest on the assumption that people believe
hedonic and utilitarian attribute value assessment to require
different evaluation competences. Hedonic value assessment
should map onto criteria on the basis of experiential, emotional,
and sensory evaluative dimensions. By contrast, utilitarian
value assessment should map onto criteria on the basis of fac-
tual, rational, and logical evaluative dimensions. This assump-
tion is rooted in the very definition of hedonic and utilitarian
value. Hedonic value is conceptualized as reflecting experien-
tial affect associated with a product, sensory enjoyment, and
emotions (Batra and Ahtola 1991; Hirschman and Holbrook
1982). Indeed, hedonic consumption tends to be affectively rich
and emotionally driven (Botti and McGill 2011). By contrast,
utilitarian value is conceptualized as reflecting instrumentality,
functionality, nonsensory attributes, and rationality (Batra and
Ahtola 1991; Hirschman and Holbrook 1982). Overall, utilitar-
ian consumption is cognitively driven (Botti and McGill 2011).
How do different types of recommenders (AI vs. human)
then fare with respect to assessing hedonic and utilitarian attri-
bute value? We suggest that people believe AI recommenders
are more competent to assess utilitarian attribute value than
human recommenders and less competent to assess hedonic
attribute value than human recommenders. We attribute this
lay belief to differing associations people have about how AI
(vs. human) recommenders process and evaluate information.
Lay beliefs are developed either directly through personal
experience (Ross and Nisbett 1991) or indirectly from the envi-
ronment (Morris, Menon, and Ames 2001). Throughout child-
hood we learn firsthand that, as humans, we are able to perceive
and connect with the outside world through our affective
experiences. By contrast, we learn that AI, computers, and
robots are rational and logical, and lack the ability to have
affective, experiential interactions with the world. These asso-
ciations are reflected in idioms such as “thinking like a robot,”
which refers to thinking logically without taking into
consideration more “human” aspects of a situation such as
sensations and emotions. Thus, whereas AI and computers are
associated with rationality and logic, humans are associated
with emotions and experiential abilities. These associations are
also echoed in books, songs, and movies. For example, in the
Star Trek universe, the artificially intelligent form of life
named Data has superior intellective abilities but is unable to
experience emotions. Popular movies like Her, Ex Machina,
and Terminator further reinforce these associations.
Accordingly, we suggest that people believe AI recommen-
ders are more competent than human recommenders when
assessing information because they use criteria that rely rela-
tively more on facts, rationality, logic, and, overall, cognitive
evaluative dimensions. By contrast, we propose that people
believe human recommenders are more competent than AI
recommenders when assessing information because they use
criteria that rely relatively more on sensory experience, emo-
tions, intuition, and, overall, affective evaluative dimensions.
Because people perceive AI and humans to have different
competency levels when assessing information, and because
assessment of utilitarian and hedonic attribute value underscores
different evaluative foci, it follows that people perceive AI and
humans to have different competency levels with respect to asses-
sing utilitarian and hedonic attributes. This lay belief about
competence perceptions forms the basis for the proposed word-
of-machine effect. In summary, we predict that if utilitarian
(hedonic) attributes are more important or salient, such as when
a utilitarian (hedonic) goal is activated, people will be more (less)
likely to choose AI recommenders than human recommenders.
A final note warrants mention. As competence perceptions
driving the word-of-machine effect are based on a lay belief,
they are embedded in the cultural context. That is, humans are
not necessarily less competent than AI at assessing and evalu-
ating utilitarian attributes. Vice versa, AI is not necessarily less
competent than humans at assessing and evaluating hedonic
attributes. Indeed, AI selects flower arrangements for 1-800-
Flowers and creates new flavors for food companies such as
McCormick, Starbucks, and Coca-Cola (Venkatesan and
Lecinski 2020).
Overview of Studies
Studies 1a–b focus on product choice in field settings and show
the main word-of-machine effect: that AI (human) recommen-
ders lead to greater choice likelihood when a utilitarian (hedo-
nic) goal is activated. Study 2 shows different perceptions that
result from the two recommendation sources: AI (human)
recommenders lead to higher evaluation of utilitarian (hedonic)
attributes upon consumption. Study 3 shows that when a utili-
tarian (hedonic) attribute is considered important, consumers
prefer AI (human) recommenders. Study 4 uses an analysis of
mediation to corroborate the role of competence perceptions in
explaining the word-of-machine effect while ruling out attri-
bute complexity as an alternative explanation. Studies 5–7 explore
the scope of the word-of-machine effect by identifying bound-
ary conditions. Study 5 shows that the effect is reversed for
utilitarian goals when the recommendation needs to be matched to a
person’s unique preferences, a type of task people view AI as
unfit to do. Study 6 shows that the effect is eliminated when AI
is framed as “augmented” intelligence rather than artificial
intelligence, that is, when AI enhances and supports a person
rather than replacing them. Finally, Studies 7a–b test an inter-
vention using the consider-the-opposite protocol to moderate
the word-of-machine effect.
Studies 1a–b: Preference for AI
Recommenders When Utilitarian Goals
Are Activated
Studies 1a–b focus on the word-of-machine effect on actual
product choice in field settings as a function of an activated
utilitarian or hedonic goal. We first activated either a utilitarian
or a hedonic goal and then, in an incentive-compatible setting,
measured choice as a function of recommender.
Study 1a: Hair Treatment Sample
Procedure. Two hundred passersby in a city in the northeastern United
States participated in Study 1a on a voluntary basis. We handed
willing passersby a leaflet explaining that we were conducting
a blind test for products in the haircare industry and, specifi-
cally, for hair masks—a leave-in treatment for hair and scalp.
Passersby read that for the purpose of the market test, we
wanted them to select one of two hair mask samples solely
on the basis of the instructions in the leaflet. These instructions
activated, in a two-cell between-subjects design, either a hedo-
nic or a utilitarian goal:
[Hedonic] For the purpose of this blind test, it is very important that
you set aside all thoughts you might already have about hair masks.
Instead, we would like you to focus only on the following. Imagine
that you have a “hedonic” goal. We would like you to imagine that
the only things that you care about in a hair mask are hedonic char-
acteristics, like how indulgent it is to use, its scent, and the spa-like
vibe it gives you. When you make the next choice, imagine that there
are no other things that are important for you in a hair mask.
[Utilitarian] For the purpose of this blind test, it is very important
that you set aside all thoughts you might already have about hair
masks. Instead, we would like you to focus only on the following.
Imagine that you have a “utilitarian” goal. We would like you to
imagine that the only things that you care about in a hair mask are
utilitarian characteristics, like how practical it is to use, its objec-
tive performance, and the chemical composition. When you make
the next choice, imagine that there are no other things that are
important for you in a hair mask.
The leaflet further explained that there were two hair mask
options from which they could choose. One option had been
recommended by a person, and the other option had been rec-
ommended by an algorithm. The leaflet specified that the per-
son and the algorithm had the same haircare expertise and that
the pots of hair masks, available for pickup on a desk, all
contained the same amount of fluid ounces. The pots were
identical except for a marking of “P” if selected by a person
or “A” if selected by an algorithm (stimuli in Web Appendix
A). The key dependent variable was whether passersby chose
the product selected by the person or by the algorithm.
Results and discussion. To assess product choice, we compared
the proportion of people who chose the product recommended
by the algorithm with the proportion of people who chose the
product recommended by the person depending on the activated
goal (utilitarian vs. hedonic). The two proportions differed significantly (χ²(1, N = 200) = 12.60, p = .001). As predicted, when a utilitarian goal was activated, more people chose the product recommended by the algorithm (67%) than by the person (33%; z = 4.81, p < .001). When a hedonic goal was activated, more people chose the product recommended by the person (58%) than by the algorithm (42%; z = 2.26, p = .024).
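As a methods note, the reported tests can be reproduced from the aggregate counts with a few lines of Python. This is a sketch, not the authors' code: the cell counts assume 100 passersby per goal condition (implied by N = 200 and the reported percentages), and the two-sample z-test mirrors how the reported z-statistics appear to have been computed. The same sketch applies to Study 1b with its own counts.

```python
import numpy as np
from scipy.stats import chi2_contingency
from statsmodels.stats.proportion import proportions_ztest

# Choice counts: rows = activated goal, columns = (algorithm, person);
# 100 passersby per goal condition is an assumption based on N = 200.
counts = np.array([[67, 33],   # utilitarian goal
                   [42, 58]])  # hedonic goal

# Chi-square test: does the choice split depend on the activated goal?
chi2, p, dof, _ = chi2_contingency(counts, correction=False)
print(f"chi2({dof}, N = {counts.sum()}) = {chi2:.2f}, p = {p:.4f}")  # chi2 = 12.60

# Within each goal, a two-sample z-test compares the two choice shares;
# this reproduces z = 4.81 (utilitarian) and |z| = 2.26 (hedonic).
for label, row in zip(["utilitarian", "hedonic"], counts):
    z, p = proportions_ztest(count=row, nobs=[100, 100])
    print(f"{label}: z = {z:.2f}, p = {p:.3f}")
```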
Study 1b: Selection of House Properties
Procedure. Study 1b was a field study conducted over four con-
secutive days in Cortina, a resort town in northeast Italy. We
selected this town because in 2026 it will host the Olympic
games and is likely to experience a boom in its real estate
market, which is the domain of the study. We secured the use
of a centrally located bar and set up the study as follows. We
placed an ad (translated to Italian) promoting a local real estate
agency at the bar entrance. The ad headline reminded people of
the opportunity to make fruitful real estate investments due to
the upcoming Olympic games. In a two-cell, between-subjects
design, we alternated the text in the ad to focus people on a
hedonic or utilitarian goal:
[Hedonic] With the Olympic games coming up, it is really impor-
tant that you look for a real estate investment that is fun, enjoyable,
and speaks to your emotions. You want a place that pleases your
senses considering all the changes that will affect [name of town]
in the next few years.
[Utilitarian] With the Olympic games coming up, it is really impor-
tant that you look for a real estate investment that is functional,
useful, and speaks to your rationality. You want a place that is
practical considering all the changes that will affect [name of town]
in the next few years.
At the bottom of the ad there were two envelopes described
as containing a curated selection of available properties in
Cortina that could fit with the opportunity in the ad (i.e., one
of the activated goals). One property selection had been (osten-
sibly) curated by a person (the respective envelope read: “one
of [name of agency]’s agents has selected these properties”)
and the other by an algorithm (the respective envelope read:
“[name of agency]’s proprietary algorithm has selected these
properties”). The ad invited people to pick up only one of the
two envelopes given the limited quantity of promotional mate-
rials (stimuli in Web Appendix B). The key dependent variable
was whether people chose the selection made by the agent or by
the algorithm. A waiter ensured that participants took only one
of the two envelopes, and we excluded two participants who
picked up two (final N = 229).
Results and discussion. We compared the proportion of people
who chose the selection made by the algorithm with the pro-
portion of people who chose the selection made by the agent
depending on the activated goal (utilitarian vs. hedonic). The two proportions differed significantly (χ²(1, N = 229) = 29.33, p < .001). When the goal was utilitarian, more people chose the selection made by the algorithm (59.8%) than by the agent (40.2%; z = 3.07, p = .002), whereas when the goal was hedonic, more people chose the selection made by the agent (75.7%) than by the algorithm (24.3%; z = 7.52, p < .001).
Together, Studies 1a–b show that when a utilitarian goal is
activated, people are more likely to choose an AI recommender
than a human recommender. When a hedonic goal is activated,
people are less likely to choose an AI recommender than a
human recommender.
Study 2: AI Recommenders Shift Hedonic/
Utilitarian Perceptions Upon Consumption
Study 2 examines the word-of-machine effect upon consump-
tion. As conceptual information such as expectations affects food
consumption experiences (e.g., Allison and Uhl 1964; Wardle
and Solomons 1994), we predicted that the type of recommender
would affect perceptions of hedonic and utilitarian attributes
upon actual consumption of a product (a chocolate cake).
Procedure
One hundred forty-four participants from a paid subject pool
(open to students and nonstudents) at the University of Virginia
completed this study (M_age = 27.5 years, SD = 9.5; 60.4% female). We told participants that we were testing chocolate cake
recipes on behalf of a local bakery (stimuli in Web Appendix C).
We told participants that the bakery had two options for chocolate
cake recipes: one created using the ingredient selection of an AI
chocolatier and one created using the ingredient selection of a
human chocolatier. We specified that both the human and AI
chocolatier had access to the same recipe database. We invited
participants to look at the two chocolate cakes on top of a podium
in a pop-up bakery/classroom desk. The two types of cake looked
(and were) identical. We told participants that the two chocolate
cakes, although based on different recipes, looked the same
because the bakery did not want them to be influenced by the
shape or the color. In a two-cell between-subjects design, we
asked participants to consume either the chocolate cake whose
recipe was selected by the human chocolatier or the one selected
by the AI chocolatier. After consuming the cake, we measured
hedonic/utilitarian attribute perceptions by asking participants to
rate the cake on two hedonic items (indulgent taste and aromas;
pleasantness to the senses [vision, touch, smell, etc.]) and two
utilitarian items (beneficial chemical properties [antioxidants];
healthiness [micro/macro nutrients, etc.]) on seven-point scales
anchored at 1 = "very low" and 7 = "very high." The order of
hedonic and utilitarian items was randomized.
Results and Discussion
Hedonic attribute perceptions. A one-way analysis of variance
(ANOVA) on the average of the two hedonic items (r = .87, p < .001) revealed that, upon consumption, participants rated the chocolate cake as having lower hedonic value when based on the recommendation of an AI chocolatier than a human one (M_AI = 4.57, SD = 1.38; M_H = 6.17, SD = 1.03; F(1, 142) = 61.33, p < .001).
Utilitarian attribute perceptions. A one-way ANOVA on the index
of the two utilitarian items (r = .84, p < .001) revealed that, upon consumption, participants rated the chocolate cake as having higher utilitarian value when based on the recommendation of an AI chocolatier than a human one (M_AI = 5.48, SD = 1.21; M_H = 5.02, SD = 1.35; F(1, 142) = 61.33, p = .034).
Thus, Study 2 shows that the word-of-machine effect
extends to actual consumption and that the type of recommen-
der influences people’s perceptions of hedonic/utilitarian
trade-offs. AI recommenders led participants to perceive
greater utilitarian attribute value and lower hedonic attribute
value compared to human recommenders.
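For readers who wish to mirror this analysis, a minimal sketch follows. It assumes a long-format data file with one row per participant; the file and column names are ours, not part of the study materials.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("study2_cake_ratings.csv")  # hypothetical file
# Assumed columns: recommender in {"AI", "human"}; hedonic1, hedonic2,
# utilitarian1, utilitarian2 rated on 1-7 scales.
df["hedonic"] = df[["hedonic1", "hedonic2"]].mean(axis=1)
df["utilitarian"] = df[["utilitarian1", "utilitarian2"]].mean(axis=1)

for dv in ["hedonic", "utilitarian"]:
    # One-way ANOVA: does the two-item index differ by recommender?
    model = smf.ols(f"{dv} ~ C(recommender)", data=df).fit()
    print(dv)
    print(sm.stats.anova_lm(model, typ=2))
```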
Study 3: Preference for AI Recommenders
When Utilitarian Attributes Are More
Important
Study 3 further tests the word-of-machine effect. Instead of
activating hedonic/utilitarian goals as in Studies 1a–b, we mea-
sured the importance given to hedonic/utilitarian attributes
with respect to a specific product category (winter coats). Then,
we assessed relative preference for a human or an AI recom-
mender. We expected people to prefer AI to human recommen-
ders when utilitarian attributes were more important to them,
and to prefer human over AI recommenders when hedonic
attributes were more important to them. We benchmarked these
hypotheses with a condition in which people chose between
two human recommenders, wherein we expected recommender
preference to be uncorrelated with importance assigned to
hedonic/utilitarian attributes.
Procedure
Three hundred three respondents (M_age = 38.0 years, SD = 11.1; 49.5% female) recruited on Amazon Mechanical Turk
participated in exchange for monetary compensation. Partici-
pants imagined that they were planning to purchase a new winter
coat (as it was the winter season) and were looking for recom-
mendations. Participants read that winter coats have functional/
utilitarian aspects (“Winter coats have functional or utilitarian
aspects, such as insulating power, breathability, and the degree
to which the coat is rain and wind proof”) and sensory/hedonic
aspects (“Winter coats have sensory or hedonic aspects, such as
the color and other aesthetics, the way the fabric feels to the
touch, and the degree to which the coat fits well”). Then, to
measure the importance of hedonic/utilitarian attributes, partici-
pants rated the extent to which, in general, they cared about
sensory/hedonic and functional/utilitarian aspects in winter coats
(1 = "mostly care about functional/utilitarian aspects," and 7 = "mostly care about sensory/hedonic aspects").
Participants then read that to get recommendations about
winter coats, they could rely on one of two shopping assistants,
X or Y. We specified that both assistants had access to the same
type and size of database, would charge the same fees, would
generate recommendations autonomously, and were trained to
serve users well and to the best of their capacity. To control for
the possibility that different recommenders would be associated
with different service quality perceptions, we also specified that
the two shopping assistants had the same rating of 4.9/5.0 stars
provided by 687 consumers who had used their services in the
past. To manipulate choice set, half of the participants chose
between two human shopping assistants (both X and Y were
people and were described as two different sales associates at
that particular retailer), and the other half chose between a
human assistant, X, and an AI assistant, Y. Thus, whereas X
was always human, Y was either human or AI depending on the
condition. Finally, participants indicated their preference for
one of the assistants (1 = "definitely shopping assistant X," 4 = "indifferent," and 7 = "definitely shopping assistant Y").
Results and Discussion
We regressed recommender preference on choice set (human–
human vs. human–AI), hedonic/utilitarian attribute importance,
and their interaction. This analysis revealed significant main effects of choice set (b = .85, t(299) = 5.49, p < .001) and hedonic/utilitarian attribute importance (b = .32, t(299) = 7.46, p < .001), as well as a significant two-way interaction (b = .29, t(299) = 6.91, p < .001). As hedonic/utilitarian attribute importance was continuous, we explored the interaction using the Johnson–Neyman floodlight technique (Spiller et al. 2013), which revealed that the effect of choice set (human–human vs. human–AI) on recommender preference was significant for levels of hedonic/utilitarian attribute importance lower than 2.35 (b_JN = .15, SE = .08, p = .050) and higher than 3.36 (b_JN = .14, SE = .07, p = .050). That is, the more participants cared about utilitarian attributes (values lower than 2.35 on the seven-point scale), the more they preferred an AI assistant over a human one. Conversely, the more participants cared about hedonic attributes (values higher than 3.36 on the seven-point scale), the more they preferred a human assistant over an AI one. As predicted, in the human–human choice set, which served as the control condition, participants were indifferent between the two assistants (M = 3.98, SD = .34) and recommender preference was uncorrelated with hedonic/utilitarian attribute importance (r = .116, p = .162; see Figure 1).
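To make the floodlight logic concrete, the sketch below probes the simple effect of choice set at each level of attribute importance and flags where the test crosses p = .05, following Spiller et al. (2013). The variable names and codings are assumptions; the boundary values (2.35 and 3.36) come from the analysis above, not from this sketch.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

df = pd.read_csv("study3_preferences.csv")  # hypothetical file
# choice_set: 0 = human-human, 1 = human-AI; importance: 1 (utilitarian)
# to 7 (hedonic); preference: 1-7, higher = prefer assistant Y.
m = smf.ols("preference ~ choice_set * importance", data=df).fit()
b, V = m.params, m.cov_params()

for x in np.linspace(1, 7, 601):
    # Simple effect of choice set at importance = x, and its standard error.
    eff = b["choice_set"] + b["choice_set:importance"] * x
    var = (V.loc["choice_set", "choice_set"]
           + 2 * x * V.loc["choice_set", "choice_set:importance"]
           + x ** 2 * V.loc["choice_set:importance", "choice_set:importance"])
    p = 2 * stats.t.sf(abs(eff) / np.sqrt(var), df=m.df_resid)
    if abs(p - .05) < .001:  # approximate Johnson-Neyman boundaries
        print(f"importance = {x:.2f}: simple effect = {eff:.2f}, p = {p:.3f}")
```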
These results provided correlational evidence that hedonic/
utilitarian attribute importance predicts preference between
human and AI recommenders. The next study utilizes an
analysis of mediation to test competence perceptions as drivers
of the word-of-machine effect.
Study 4: Mediation by Competence
Perceptions: Ruling Out Complexity
Study 4 uses an analysis of mediation to measure competence
perceptions as lay beliefs underlying the word-of-machine
effect. In addition, this study tests attribute complexity as an
alternative explanation: a belief that AI recommenders are better able to process more complex attribute information than
human recommenders. One could argue that utilitarian attri-
butes seem more complex to evaluate than hedonic attributes.
If this argument is accurate, preference for AI recommenders
when utilitarian attributes are more salient could be explained
by a lay belief about the recommender’s ability, higher for AI
recommenders, to deal with complexity.¹ We tested this alter-
native explanation by manipulating attribute complexity ortho-
gonally to recommender type (human, AI) and activated goal
(hedonic, utilitarian). We manipulated attribute complexity by
way of number of product attributes, which is consistent with
prior research (Littrell and Miller 2001; Timmermans 1993).
Procedure
Four hundred two participants (M_age = 38.5 years, SD = 12.6; 46% female) from Amazon Mechanical Turk participated in exchange for monetary compensation in a 2 (complexity: low, high) × 2 (goal: hedonic, utilitarian) × 2 (recommender: human, AI) between-subjects design.
Participants read about the beta testing of a new app created
to give recommendations of chocolate varieties by relying on
one of two sources: a human or an AI master chocolatier (i.e., a
computer algorithm). We told participants that the human and
AI recommenders relied on the same database of chocolate
varieties and operated autonomously. The app had the same
cost regardless of recommender. Participants saw screenshots
of the app (Figure 2).
We specified that the ratings of the chocolate varieties in the
data set were not based on personal experience but rather that
they had been rated by consumers and manufacturers in terms of
certain dimensions that varied by complexity condition. In the
high complexity condition, we described the chocolate varieties
as being rated on eight attributes, four of which were hedonic
(sensory pleasure, taste, fun factor, and pairing combinations)
and four of which were utilitarian (chemical profile, nutritional
index, digestibility profile, and health factor). In the low com-
plexity condition, we described the chocolate varieties as being
rated on two attributes, one of which was hedonic (sensory
pleasure) and one of which was utilitarian (chemical profile). We
then activated either a hedonic or a utilitarian goal by asking
participants to set aside all thoughts they might already have had
about chocolate and instead imagine that they wanted a recom-
mendation based only on (1) sensory pleasure, taste, fun factor,
and pairing combinations (hedonic/high complexity); (2) sen-
sory pleasure (hedonic/low complexity); (3) chemical profile,
nutritional index, digestibility profile, and health factor (utilitar-
ian/high complexity); or (4) chemical profile (utilitarian/low
complexity). Finally, we manipulated recommender in a two-
cell (recommender: human, AI) between-subjects design by
telling participants that in the version of the app they were
considering, it was either the human or the AI master chocolatier
that would give them a recommendation.
As a behavioral dependent variable, we asked participants if
they wanted to download the chocolate recommendation at the
end of the survey (yes, no), specifying that payment would not
be conditional on electing to download the recommendation
(which is consistent with previous research; see Cian, Longoni,
and Krishna 2020). We then measured the hypothesized med-
iator (competence perceptions) by asking participants to rate
the extent to which they thought the human (AI) recommender
(1) was competent to recommend the type of chocolate they
were looking for and (2) could do a good job recommending the
type of chocolate they were looking for (1 = "strongly disagree," and 7 = "strongly agree"; r = .89, p < .001).²
Figure 1. Results of Study 3: Preference for AI (human) recommen-
ders when utilitarian (hedonic) attributes are more important.
Note. The y-axis represents preference for recommender measured on a seven-point scale anchored at 1 = "definitely shopping assistant X," and 7 = "definitely shopping assistant Y." The x-axis represents importance of hedonic/utilitarian attributes, measured on a seven-point scale anchored at 1 = "mostly care about functional/utilitarian aspects," and 7 = "mostly care about sensory/hedonic aspects." The shaded region represents the area of significance.
¹ We thank the associate editor and two anonymous reviewers for this suggestion.
² We added manipulation checks at the end of the survey. These manipulation checks were of recommender and goal, and participants indicated the recommender (human, AI) of the app they considered and the goal they had (hedonic, utilitarian). The recommender manipulation check was answered correctly by 93.0% of the participants, and the goal manipulation check was answered correctly by 90.8% of the participants. Statistical conclusions did not differ when restricting the analysis to participants who passed either manipulation check.
At the
very end of the survey, participants who elected to download
the recommendation were automatically directed to a down-
loadable PDF document with information about the chocolate
(a relatively more indulgent hazelnut-based chocolate called
“gianduiotti” in the hedonic condition or a relatively healthier
chocolate toasted at low temperature called “crudista” in the
utilitarian condition).
Results and Discussion
Behavior. We assessed behavior (i.e., the proportion of partici-
pants who decided to download vs. not download the recom-
mendation) by using a logistic regression with complexity,
goal, recommender, and their two-way and three-way interac-
tions as independent variables (all contrast coded) and down-
load (1 = yes, 0 = no) as the dependent variable. We found no significant main effect of complexity (B = .04, Wald = .09, 1 d.f., p = .77) or goal (B = .03, Wald = .06, 1 d.f., p = .81), and we found a marginally significant main effect of recommender (B = .25, Wald = 3.75, 1 d.f., p = .053). The three-way goal × recommender × complexity interaction was not significant (B = .11, Wald = .80, 1 d.f., p = .37), ruling out the role of complexity. In terms of two-way interactions, complexity interacted neither with goal (B = .13, Wald = 1.04, 1 d.f., p = .31) nor with recommender (B = .18, Wald = 1.99, 1 d.f., p = .16). Replicating previous results, the two-way goal × recommender interaction was significant (B = .75, Wald = 34.60, 1 d.f., p < .001). The AI recommender led to more downloads than the human recommender when the goal was utilitarian (M_AI = 82%, M_H = 63%; z = 3.10, p = .002) and fewer downloads when the goal was hedonic (M_AI = 52%, M_H = 88%; z = 5.63, p < .001).
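In code, this is a standard logistic regression with contrast-coded predictors; a minimal sketch under assumed column names follows.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("study4_app.csv")  # hypothetical file
# Assumed coding: complexity, goal, recommender each contrast coded -1/+1;
# download coded 1 = yes, 0 = no.
m = smf.logit("download ~ complexity * goal * recommender", data=df).fit()
print(m.summary())
# statsmodels reports Wald z-statistics; the Wald chi-square values in the
# text correspond to z**2 with 1 d.f.
```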
Competence perceptions. A 2 × 2 × 2 ANOVA on competence perceptions revealed no significant main effect of complexity (F(1, 394) = 1.24, p = .27) and significant main effects of goal (F(1, 394) = 8.99, p = .003) and recommender (F(1, 394) = 19.81, p < .001). The three-way complexity × goal × recommender interaction was not significant (F(1, 394) = .64, p = .44), ruling out complexity. In terms of two-way interactions, complexity interacted neither with goal (F(1, 394) = .61, p = .44) nor with recommender (F(1, 394) = .36, p = .55). Importantly, the two-way goal × recommender interaction was significant (F(1, 394) = 57.63, p < .001). Planned contrasts revealed that participants perceived the AI recommender as more competent than the human recommender in the case of a utilitarian goal (M_AI = 5.92, SD_AI = 1.10; M_H = 5.50, SD_H = 1.38; F(1, 394) = 4.90, p = .027) and less competent in the case of a hedonic goal (M_AI = 4.51, SD_AI = 1.77; M_H = 6.13, SD_H = .96; F(1, 394) = 73.04, p < .001).
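A sketch of the factorial ANOVA follows. Sum-coded factors are used so that the Type III tests match a factorial ANOVA, and, as noted in the comments, the paper's planned contrasts pool the error term across all eight cells (hence F(1, 394)); the subset refits below are only a simpler approximation. Column names are assumptions.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("study4_app.csv")  # hypothetical file
df["competence"] = df[["comp1", "comp2"]].mean(axis=1)  # two-item index

# Sum-to-zero coding so the Type III table matches the factorial ANOVA.
m = smf.ols("competence ~ C(complexity, Sum) * C(goal, Sum) * C(recommender, Sum)",
            data=df).fit()
print(sm.stats.anova_lm(m, typ=3))

# Planned contrasts (AI vs. human within each goal). The paper pools the
# error term across all cells; these per-subset refits approximate that.
for g in ["utilitarian", "hedonic"]:
    c = smf.ols("competence ~ C(recommender)", data=df[df["goal"] == g]).fit()
    print(g, f"F = {c.fvalue:.2f}, p = {c.f_pvalue:.3f}")
```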
Figure 2. Stimuli of Study 4.

Moderated mediation. We ran a moderated mediation model using PROCESS Model 8 (5,000 resamples; Hayes 2018). In this model, the moderating effect of goal takes place before the mediator (competence perceptions). The interaction between recommender and goal was significant (95% CI = .38 to .64) in the path between the independent variable and the mediator but not in the path between the independent variable and the dependent variable (95% CI = −.08 to .63). As predicted, the indirect effect recommender → competence perceptions → download was significant but in the opposite direction conditionally on the moderator (hedonic: 95% CI = 1.19 to 2.40; utilitarian: 95% CI = −.88 to −.06).
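PROCESS itself is an SPSS/SAS/R macro (Hayes 2018). The bootstrap logic of Model 8 can be approximated in Python as in the rough sketch below, which assumes −1/+1 contrast codes and hypothetical variable names; it is not the macro itself.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("study4_app.csv")  # hypothetical file; recommender and
# goal contrast coded -1/+1; download in {0, 1}; competence = 2-item mean.
rng = np.random.default_rng(42)

def indirect_effect(d, goal_code):
    # a-path: goal moderates the recommender -> competence link (Model 8).
    a = smf.ols("competence ~ recommender * goal", data=d).fit().params
    a_path = a["recommender"] + a["recommender:goal"] * goal_code
    # b-path: competence -> download, holding the direct paths constant.
    b = smf.logit("download ~ competence + recommender * goal",
                  data=d).fit(disp=0).params["competence"]
    return a_path * b

for goal_code, label in [(1, "utilitarian"), (-1, "hedonic")]:
    # 5,000 resamples as in the paper; reduce for a quick run.
    boot = [indirect_effect(df.sample(frac=1, replace=True, random_state=rng),
                            goal_code) for _ in range(5000)]
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"{label}: indirect effect, 95% bootstrap CI = [{lo:.2f}, {hi:.2f}]")
```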
These results provide evidence for the hypothesized role of
competence perceptions as drivers of the word-of-machine
effect. Participants rated AI recommenders as more (less) com-
petent in the case of utilitarian (hedonic) goals. Differential
competence perceptions explained higher choice likelihood for
the AI’s recommendation than the human’s if a utilitarian goal
had been activated and lower choice likelihood for the AI’s
recommendation than the human’s if a hedonic goal had been
activated. Furthermore, we did not find evidence that the word-
of-machine effect was moderated by complexity. The next
three studies tested the scope of the word-of-machine effect
by identifying boundary conditions.
Study 5: Testing Unique Preference Matching
as a Boundary Condition
Study 5 explores a circumstance under which the word-of-
machine effect might reverse: when consumers want a recom-
mendation that matches their unique needs and preferences.³
Matching a recommendation to one’s preferences is valued and
might even be expected (Franke, Keinz, and Steger 2009). In
this study, we tested the hypothesis that consumers view the
task of matching a recommendation to one’s unique prefer-
ences as being better performed by a person than by AI.⁴ This
argument is in line with recent research in the medical domain
showing that consumers perceive AI as less able than a human
physician to tailor a medical recommendation to their unique
characteristics and circumstances (Longoni, Bonezzi, and Mor-
ewedge 2019). Thus, we expected people to choose AI recom-
menders at a lower rate and, conversely, choose human
recommenders at a higher rate if matching to unique prefer-
ences was salient, even in the case of an activated utilitarian
goal. In other words, if matching to unique preferences was
salient, we expected people to prefer a human recommender for
both hedonic and utilitarian goals. We tested this possibility by
manipulating whether participants’ desire to have a recommen-
dation matched to their unique needs and preferences was sali-
ent and then measuring their choice of recommender.
Procedure
Five hundred forty-five respondents (M_age = 39.0 years, SD = 12.9; 46.6% female) from Amazon Mechanical Turk participated in exchange for monetary compensation in a 2 (goal: hedonic, utilitarian) × 2 (matching: unique preferences, control) between-subjects design. Participants read information
about the beta testing of a new smartphone app offered by a
real estate service. The app would allow users to chat with a
Realtor to find properties to buy or rent. Participants further
read that there were two versions of this app. In one version of
the app, users would interact with a human Realtor, and in the
other version, users would interact with an AI Realtor (i.e., a
computer algorithm). Participants saw screenshots of the app
(Figure 3) and read about how the app would work: the users
would indicate what attributes they were looking for in a prop-
erty (square footage, number of rooms, budget) and the [Real-
tor/AI Realtor] would use [their/its] training and knowledge to
make apartment recommendations. We specified that both the
human and AI Realtors had access to the same number and type
of property listings. We then activated either a hedonic or a
utilitarian goal by asking participants to set aside all thoughts
they might already have had about apartments and instead
imagine that they wanted a recommendation based only on:
(1) how trendy the neighborhood is, the apartment views, aes-
thetics (hedonic goal condition) or (2) distance to their work-
place, proximity to public transport, functionality (utilitarian
goal condition; based on Bhargave, Chakravarti, and Guha
2015). Finally, to make unique preference matching salient,
we told half of the participants that it was very important for
them to get a recommendation that would be matched to their
unique needs and personal preferences. Participants in the con-
trol condition were not focused on unique preference matching.
As a dependent variable, we measured choice of recommender
by asking participants if, given the circumstances described,
they wanted to chat with the human or the AI Realtor.
Results and Discussion
We assessed choice on the basis of the proportion of participants
who decided to chat with the human versus AI Realtor by using a
logistic regression with goal, matching, and their two-way inter-
action as independent variables (all contrast coded) and choice
(0 = human, 1 = AI) as the dependent variable. We found significant effects of goal (B = 1.75, Wald = 95.70, 1 d.f., p < .001) and matching (B = .54, Wald = 24.30, 1 d.f., p < .001). More importantly, goal interacted with matching (B = .25, Wald = 5.33, 1 d.f., p = .021). Results in the control condition (when unique preference matching was not salient) replicated prior results: in the case of an activated utilitarian goal, a greater proportion of participants chose the AI Realtor (76.8%) over the human Realtor (23.2%; z = 8.91, p < .001), and when a hedonic goal was activated, a lower proportion of participants chose the AI (18.8%) over the human Realtor (81.2%; z = 10.35, p < .001). However, making unique preference matching salient reversed the word-of-machine effect in the case of an activated utilitarian goal: choice of the AI Realtor decreased to 40.3% (from 76.8% in the control; z = 6.17, p < .001). That is, making unique preference matching salient turned preference for the AI Realtor into resistance despite the activated utilitarian goal, with most participants choosing the human over the AI Realtor. In the
³ We thank an anonymous reviewer for this suggestion.
⁴ We validated this hypothesis by asking respondents from Amazon Mechanical Turk (N = 95) the extent to which they would expect a property selected by [a human/an AI] Realtor to match their unique preferences and needs (1 = "not at all," and 7 = "very much"). Indeed, participants expected the human Realtor to be more able than the AI Realtor to match a property recommendation to their unique preferences and needs (M_H = 5.85, SD = 0.82; M_AI = 4.70, SD = 1.30; F(1, 93) = 26.69, p < .001).
case of an activated hedonic goal, making unique preference
matching salient further strengthened participants’ choice of
the human Realtor, which increased to 88.5% from 81.2% in the control, although the effect was marginal, possibly due to a ceiling effect (z = 1.66, p = .097).
Overall, whereas the word-of-machine effect replicated in the control condition, when unique preference matching was salient, participants preferred the human Realtor over the AI recommender both in the hedonic goal conditions (human = 88.5%, AI = 11.5%; z = 12.40, p < .001) and in the utilitarian goal conditions (human = 59.7%, AI = 40.3%; z = 3.24, p = .001; Figure 3), corroborating the notion that people view
AI as unfit to perform the task of matching a recommendation
to one’s unique preferences.
These results show that preference matching is a boundary
condition of the word-of-machine effect, which reversed in the
case of a utilitarian goal when people had a salient goal to get
recommendations matched to their unique preferences and
needs. The next study tests another boundary condition.
Study 6: Testing Augmented Intelligence as a
Boundary Condition
Study 6 explores under what circumstances the word-of-
machine effect is eliminated, and it tests the role of AI as bound-
ary condition. Studies 1–5 tested cases in which the role of AI
was to replace human recommenders. Study 6 explores the case
in which AI is leveraged to assist and augment human intelli-
gence. “Augmented intelligence” involves AI’s assistive role in
enhancing and amplifying human intelligence instead of repla-
cing it (Araya 2019). So far, we have shown that consumers
resist AI recommenders when a hedonic goal is activated. In
Study 6, we tested the hypothesis that consumers will be more
receptive to AI recommenders, even in the case of a hedonic
goal, if the AI recommender assists and amplifies a human
recommender who retains the role of ultimate decision maker.
In this case, we expected people to believe that the human deci-
sion maker would compensate for the AI’s relative perceived
incompetence in the hedonic realm. We expected the reverse
effect in the case of a utilitarian goal. In other words, we
expected that augmented intelligence—a human–AI hybrid
decision-making model—would help bolster AI to the level
of humans for hedonic decision making and help bolster humans
to the level of AI for utilitarian decision making. In addition, we
added a control condition in Study 6 in which neither recom-
mender was mentioned to serve as a baseline measure of parti-
cipants’ perceptions of hedonic and utilitarian attributes.
Procedure
Four hundred four respondents (M_age = 40.2 years, SD = 12.5; 48.9% female) from Amazon Mechanical Turk participated in exchange for monetary compensation in a three-cell
(recommender: human, artificial intelligence, augmented intel-
ligence) between-subjects design. A fourth control condition contained no recommender manipulation and served as the baseline.

Figure 3. Stimuli (top) and results (bottom) of Study 5: The word-of-machine effect is reversed for utilitarian goals if the recommendation needs to be matched to participants' unique preferences.
Note. The y-axis represents the proportion of participants who chose to chat with the human versus AI Realtor.
The stimuli and procedure were identical to those of Study
4. Participants read about the beta testing of a new app created
to give recommendations of chocolate varieties by relying on
one of two sources: a human or an AI master chocolatier.
Participants read that human and AI recommenders relied on
the same database, which comprised a large number of choco-
late varieties that had been rated by consumers and manufac-
turers. Participants read that the app had the same cost
regardless of the type of recommender it relied on. Finally,
participants read that the app would suggest a curated selection
of five chocolate bars.
We then manipulated recommender by randomly assigning
participants to (1) a human condition, in which a human cho-
colatier would curate the chocolate section; (2) an artificial
intelligence condition, in which an AI chocolatier (i.e., a com-
puter algorithm) would curate the chocolate section; or (3) an
augmented intelligence condition, in which the AI chocolatier
would assist the human chocolatier in the curation of the cho-
colate selection. Specifically, participants read:
[Human condition] In the version of the app we are testing today, it
is the human chocolatier that curates a selection of chocolate bars.
This selection contains five chocolate bars selected by the human
chocolatier. That is, it is a person who selects chocolate bars. This
version of the app is technically called “human intelligence,”
because it uses what human intelligence can do.
[Artificial intelligence condition] In the version of the app we are
testing today, it is the A.I. chocolatier that curates a selection of
chocolate bars. This selection contains five chocolate bars selected
by the A.I. chocolatier. That is, it is a computer algorithm that
selects chocolate bars. This version of the app is technically called
“artificial intelligence,” because it uses a computer algorithm to
substitute and replace what human intelligence can do.
[Augmented intelligence condition] In the version of the app we
are testing today, it is the A.I. chocolatier that curates a selection of
chocolate bars. This selection contains five chocolate bars selected
by the A.I. chocolatier. That is, it is a computer algorithm that
selects chocolate bars. The computer algorithm makes the initial
selection and assists a human chocolatier, who will make the final
decision about which chocolate bars to recommend. This version
of the app is technically called “augmented intelligence,” because
it uses a computer algorithm to enhance and augment what human
intelligence can do.
The control condition entailed no recommender manipula-
tion; instead, it merely included a description of the app and
no information about the source of the chocolate bar recom-
mendation. As a dependent variable, we measured hedonic
attribute perceptions with two items (indulgent taste and aro-
mas; pleasantness to the senses [vision, touch, smell, etc.]) and
utilitarian attribute perceptions with two items (beneficial
chemical properties [antioxidants, etc.]; healthiness [micro/
macro nutrients, etc.]), all on seven-point scales anchored at
1 = "very low," and 7 = "very high." The order of items was
randomized.
Results and Discussion
Hedonic attribute perceptions. The one-way ANOVA on the average of the two items measuring hedonic attribute perceptions (r = .79, p < .001) was significant (F(3, 436) = 48.92, p < .001). In line with previous results, and replicating the word-of-machine effect, participants reported higher hedonic attribute perceptions when the recommender was human (M_H = 6.00, SD = 1.06) than when the recommender was AI (M_artificial_intelligence = 4.15, SD = 1.64; F(1, 436) = 125.55, p < .001). However, when the AI recommender was augmenting human intelligence, the word-of-machine effect was eliminated: participants reported the same hedonic perceptions (M_augmented_intelligence = 5.74, SD = 1.11) as they did when the recommender was human (F(1, 436) = 2.31, p = .129) and higher hedonic perceptions than when the recommender was AI alone (F(1, 436) = 84.73, p < .001). Participants in the control condition reported lower hedonic perceptions (M_control = 5.62, SD = 1.09) than participants in the human condition (F(1, 436) = 5.32, p = .022) and higher hedonic perceptions than participants in the AI condition (F(1, 436) = 77.92, p < .001). The control and augmented intelligence conditions did not differ (F(1, 436) < 1, p = .49).
Utilitarian attribute perceptions. The one-way ANOVA on the
average of the two items measuring utilitarian attribute per-
ceptions (r ¼.75, p<.001) was significant (F(1, 436) ¼6.60,
p<.001). In line with previous results, and replicating the
word-of-machine effect, participants reported higher
utilitarian attribute perceptions when the recommender was AI
(M
artificial_intelligence
¼5.24; SD ¼1.41) than when the recom-
mender was human (M
H
¼4.75, SD ¼1.57; F(1, 436) ¼6.40,
p¼.012). However, when the AI recommender was augmenting
human intelligence, the word-of-machine effect was
eliminated: participants reported the same utilitarian perceptions
(M
augmented_intelligence
¼5.44, SD ¼1.32) as they did when the
recommender was AI alone (F(1, 436) ¼.99, p¼.321) and higher
utilitarian perceptions than when the recommender was human
(F(1, 436) ¼11.87, p<.001). Participants in the control condition
reported the same utilitarian perceptions (M
control
¼4.70, SD ¼
1.56) as participants in the human condition (F(1, 436) ¼.05, p¼
.820) and lower utilitarian perceptions than participants in both
the AI (F(1, 436) ¼7.47, p¼.007) and augmented intelligence
conditions (F(1, 436) ¼13.22, p<.001; Figure 4).
These results delineate the scope of the word-of-machine
effect and show a circumstance under which the effect is elim-
inated. Even when a hedonic goal was activated, AI recom-
menders fared as well as human recommenders as long as they
were in a hybrid decision-making model in partnership with a
human.
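To make the preceding analysis concrete, the following is a minimal sketch in Python of the analytic logic (with simulated placeholder data; this is not the authors’ code or data): a one-way ANOVA across the four recommender conditions, followed by a pairwise contrast tested against the model’s pooled error term, which is why each contrast is reported with df = (1, 436).

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical data: four conditions, 110 simulated respondents each (~440 total)
rng = np.random.default_rng(0)
means = {"human": 6.00, "ai": 4.15, "augmented": 5.74, "control": 5.62}
df = pd.DataFrame(
    [{"condition": c, "hedonic": rng.normal(m, 1.2)}
     for c, m in means.items() for _ in range(110)]
)

# Omnibus one-way ANOVA on the hedonic-perception index
model = ols("hedonic ~ C(condition)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Pairwise contrast (e.g., human vs. AI) using the pooled MSE,
# yielding an F statistic with df = (1, model.df_resid)
mse, dfe = model.mse_resid, int(model.df_resid)

def contrast_f(a, b):
    ga = df.loc[df["condition"] == a, "hedonic"]
    gb = df.loc[df["condition"] == b, "hedonic"]
    diff = ga.mean() - gb.mean()
    return diff ** 2 / (mse * (1 / len(ga) + 1 / len(gb)))

print(f"Human vs. AI: F(1, {dfe}) = {contrast_f('human', 'ai'):.2f}")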
Studies 7a–7b: Attenuating the Lay Belief
Underlying the Word-of-Machine Effect
Studies 7a and 7b test an intervention to attenuate the lay belief underlying the word-of-machine effect—that AI recommenders are less (more) competent than human recommenders in assessing hedonic (utilitarian) value. We used a protocol called “consider-the-opposite,” in which people are prompted to consider the opposite of what they initially believe to be true and to take into account evidence that is inconsistent with their initial beliefs. This protocol has been effectively used to correct biased beliefs in judgment, such as the explanation bias (Lord, Lepper, and Preston 1984), confirmatory hypothesis testing (Wason and Golding 1974), anchoring (Mussweiler, Strack, and Pfeiffer 2000), and halo effects in marketing claims (Ordabayeva and Chandon 2016). Study 7a tests this intervention following the original protocol (i.e., Mussweiler, Strack, and Pfeiffer 2000), and Study 7b tests a protocol that is relatively easier to implement and scale by embedding the intervention in a real chatbot.
Study 7a: Testing the Original Consider-the-Opposite
Protocol
Procedure. Three hundred sixty-eight respondents (M_age = 39.8 years, SD = 12.5; 49.2% female) from Amazon Mechanical Turk participated in exchange for monetary compensation in a 2 (recommender: human, AI) × 2 (intervention: consider the opposite, control) between-subjects design.
The stimuli and procedure were identical to those of Studies
4 and 6: participants read about a new app created to give
chocolate recommendations by relying on either a human or
an AI master chocolatier. We manipulated recommender
between subjects by telling participants that, in the version of
the app they were considering, it was either the human or the AI
chocolatier that would suggest a curated selection of five cho-
colate bars. We also implemented the intervention between
subjects by prompting half of the participants to “consider the
opposite”: consider the ways in which they could be wrong
about what they expected the [human/AI] recommender to be
good at (based on Mussweiler, Strack, and Pfeiffer 2000):
Think for a moment about what you expect the [human/AI] choco-
latier to be good at when selecting chocolate bars. Before you rate
the chocolate selection, we would like you to consider the opposite.
Can your expectations about what the [human/AI] chocolatier is good at
when selecting chocolates be wrong? Imagine that you were trying
to be as unbiased as possible in evaluating this chocolate selection—
consider yourself to be in the same role as a judge or juror. Could the
[human/AI] chocolatier be good at the opposite of what you expect
them to be good at? Please write down some ways in which you
could be wrong in terms of your expectations about what the
[human/AI] chocolatier is good at when selecting chocolates.
This prompt was absent for participants in the control condition.
As a dependent variable, participants reported their perceptions
of hedonic/utilitarian attributes of the curated selection of chocolate bars, measured on a seven-point scale ranging from 1 = “sensory pleasure (taste, aromas, etc.)” to 7 = “healthy chemical properties (antioxidants, micro/macro nutrients, etc.).” Thus, lower numbers indicated higher hedonic value.
Figure 4. Results of Study 6: The word-of-machine effect is eliminated in the case of augmented intelligence (human–AI hybrid decision making).
Note. The y-axis represents hedonic attribute perceptions and utilitarian attribute perceptions measured on seven-point scales anchored at 1 = “very low” and 7 = “very high.” Error bars represent standard errors. The solid-line pairwise comparisons represent the word-of-machine effect. The dashed-line pairwise comparisons represent moderation by augmented intelligence: a human–AI hybrid decision-making model bolsters AI to the level of humans for hedonic decision making, and humans to the level of AI for utilitarian decision making. Details of all pairwise comparisons are reported below.

Hedonic Attribute Perceptions
Word-of-machine effect: Human versus AI: F(1, 436) = 125.55, p < .001
Moderation by augmented intelligence (H + AI hybrid decision making bolsters AI to the level of humans for hedonic decision making): Human versus H + AI: F(1, 436) = 2.31, p = .129
AI versus H + AI: F(1, 436) = 84.73, p < .001
Control versus H: F(1, 436) = 5.32, p = .022
Control versus AI: F(1, 436) = 77.92, p < .001
Control versus H + AI: F(1, 436) = .49, p = .486

Utilitarian Attribute Perceptions
Word-of-machine effect: Human versus AI: F(1, 436) = 6.40, p = .012
Moderation by augmented intelligence (H + AI hybrid decision making bolsters H to the level of AI for utilitarian decision making): AI versus H + AI: F(1, 436) = .99, p = .321
H versus H + AI: F(1, 436) = 11.87, p = .001
Control versus H: F(1, 436) = .05, p = .820
Control versus AI: F(1, 436) = 7.47, p = .007
Control versus H + AI: F(1, 436) = 13.22, p < .001

Results and discussion. A 2 × 2 ANOVA on hedonic/utilitarian attribute perceptions revealed no significant main effect of intervention (F(1, 364) = .25, p = .62), a significant main effect of recommender (F(1, 364) = 65.17, p < .001), and a significant two-way recommender × intervention interaction (F(1, 364) = 12.11, p = .001). Planned contrasts revealed that the word-of-machine effect replicated both in the control and intervention conditions, with lower hedonic perceptions (or, conversely, higher utilitarian perceptions) for AI recommenders than human recommenders (control conditions: M_AI_control = 4.51, SD = 1.84; M_H_control = 2.49, SD = 1.48; F(1, 364) = 81.48, p < .001; intervention conditions: M_AI_intervention = 3.99, SD = 1.78; M_H_intervention = 3.18, SD = 1.44; F(1, 364) = 8.93, p = .003; higher numbers indicate higher utilitarian/lower hedonic perceptions). More importantly, the intervention attenuated the word-of-machine effect and led to participants perceiving the AI’s recommendation as having higher hedonic value compared with the control condition (M_AI_intervention = 3.99, SD = 1.78; M_AI_control = 4.51, SD = 1.84; F(1, 364) = 7.66, p = .006) and the human recommendation as having higher utilitarian value compared with the control condition (M_H_intervention = 3.18, SD = 1.44; M_H_control = 2.49, SD = 1.48; F(1, 364) = 4.59, p = .033; higher numbers indicate higher utilitarian/lower hedonic perceptions; Figure 5).
Thus, these results provide evidence for a potential intervention that alleviates initial beliefs about, and therefore resistance to, AI recommenders: prompting people to consider the opposite.
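The factorial analysis above follows the same logic. As a companion to the earlier sketch (again in Python with simulated placeholder data; not the authors’ code), the 2 × 2 between-subjects ANOVA and its interaction term can be specified as follows:

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical data: cell means taken from the text, 92 simulated
# respondents per cell (~368 total, matching the reported sample size)
rng = np.random.default_rng(1)
cells = {("human", "control"): 2.49, ("ai", "control"): 4.51,
         ("human", "intervention"): 3.18, ("ai", "intervention"): 3.99}
df = pd.DataFrame(
    [{"recommender": r, "intervention": i, "dv": rng.normal(m, 1.6)}
     for (r, i), m in cells.items() for _ in range(92)]
)

# 2 x 2 between-subjects ANOVA: two main effects plus the
# recommender x intervention interaction
model = ols("dv ~ C(recommender) * C(intervention)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))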
Study 7b: Testing a Consider-the-Opposite Intervention
That Is Easier to Implement and Scale
Study 7b builds on the original consider-the-opposite protocol
and the results of Study 7a to test an intervention better suited
for implementation and scalability in a real-world setting. To
do so, we created a real chatbot that participants could interact
with and that delivered the intervention.
Procedure. Two hundred ninety-nine respondents (M_age = 40.4 years, SD = 12.6; 43.1% female) from Amazon Mechanical Turk participated in exchange for monetary compensation in a two-cell (intervention: consider the opposite, control) between-subjects design. Participants read about an app called “Cucina”
that would rely on AI to give recipe recommendations. The app
worked by giving users the chance to chat with the AI Chef and
ask for recipe suggestions and recommendations. Participants
further read that they could try out the AI Chef by chatting with
it in a web browser window. We created a chatbot ad hoc for this
experiment by embedding JavaScript code in the Qualtrics survey
(Figure 6). The chatbot was programmed to first introduce itself:
“Hello I am an A.I. Chef at Cucina! Thank you for trying out our
app! What is your name?” Participants could then reply to the
chatbot using a text box. We programmed the chatbot’s next
response to differ depending on the intervention condition (a sketch of this scripted branching follows the condition texts below):
[Intervention: consider the opposite] “Hi [participant’s name]! I
am here to suggest a recipe for you to try! Some people might
think that an Artificial Intelligence Chef is not competent to give
food suggestions ...but this is a misjudgment. For a moment, set
aside your expectations about me. When it comes to making food
suggestions, could you consider the idea that I could be good at
things you do not expect me to be good at? Okay, let’s chat about
food. How can I help you?”
[Intervention: control] “Hi [participant’s name]! I am here to sug-
gest a recipe for you to try! Okay, let’s chat about food. How can I
help you?”
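The sketch below illustrates this scripted branching in Python (the study itself used JavaScript embedded in Qualtrics; the function names are ours, and the response strings are abridged from the condition texts above):

# Scripted two-turn chatbot flow: the greeting collects a name, and the
# second turn differs across conditions only by the consider-the-opposite
# preamble. Names are illustrative, not taken from the actual study code.
PREAMBLE = (
    "Some people might think that an Artificial Intelligence Chef is not "
    "competent to give food suggestions... but this is a misjudgment. "
    "For a moment, set aside your expectations about me. When it comes to "
    "making food suggestions, could you consider the idea that I could be "
    "good at things you do not expect me to be good at? "
)

def greeting():
    return ("Hello I am an A.I. Chef at Cucina! Thank you for trying out "
            "our app! What is your name?")

def second_turn(name, condition):
    opener = f"Hi {name}! I am here to suggest a recipe for you to try! "
    closer = "Okay, let's chat about food. How can I help you?"
    return opener + (PREAMBLE if condition == "consider_opposite" else "") + closer

print(greeting())
print(second_turn("Alex", "consider_opposite"))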
As a dependent variable, we measured hedonic/utilitarian attribute perceptions of the recipes suggested by the AI chatbot on a seven-point scale ranging from 1 = “mostly based on sensory pleasure (taste, aromas, etc.)” to 7 = “mostly based on healthy chemical properties (antioxidants, micro/macro nutrients, etc.).”
Results and discussion. A one-way ANOVA on hedonic/utilitarian attribute perceptions revealed that the intervention attenuated the word-of-machine effect and led to higher hedonic perceptions compared with the control condition (M_intervention = 3.75, SD = 1.46; M_control = 4.25, SD = 1.37; F(1, 297) = 9.15, p = .003; lower numbers indicate higher hedonic perceptions). These
results corroborate those of Study 7a and provide evidence for
a practical and relatively easier-to-implement intervention for
managers looking to attenuate the lay belief underlying the
word-of-machine effect.
General Discussion
As companies in the private and public sectors assess how to
harness the potential of AI-driven recommendations, the ques-
tion of how trade-offs in decision making influence preference
for AI recommenders is of great importance. We address this
question across nine studies and show a word-of-machine
effect: the phenomenon by which hedonic and utilitarian
trade-offs determine preference for (or resistance to) AI-
driven recommendations. Studies 1a–1b show that a utilitarian
(hedonic) goal makes people more (less) likely to choose AI
recommenders than human ones. Study 2 shows that AI
(human) recommenders lead to higher perceptions of utilitarian
(hedonic) attributes upon consumption. Study 3 shows that
people prefer AI (human) recommenders when utilitarian
(hedonic) attributes are more important. Study 4 shows that
differing competence perceptions underlie the word-of-
machine effect and rule out complexity. Studies 5 and 6 identify
boundary conditions: Study 5 shows that the word-of-machine
effect is reversed for utilitarian goals if the recommendation
needs to match a person’s unique preferences, and Study 6 shows that the effect is eliminated when AI is framed as “augmented” rather than “artificial” intelligence, that is, in human–AI hybrid decision making. Finally, Studies 7a–7b tested an intervention to attenuate the word-of-machine effect.

Figure 5. Results of Study 7a: Prompting people to consider the opposite attenuated the word-of-machine effect (means: human—control 2.49, intervention 3.18; AI—control 4.51, intervention 3.99).
Notes. The y-axis represents perceived hedonic/utilitarian attribute value measured on a seven-point scale anchored at 1 = “sensory pleasure (taste, aromas, etc.)” and 7 = “healthy chemical properties (antioxidants, micro/macro nutrients, etc.)”; therefore, higher numbers indicate higher utilitarian value/lower hedonic value. Error bars represent standard errors.
Theoretical Contributions
Our research makes several important theoretical contributions.
A first set of contributions speaks to research on the
psychology of automation and on human–technology interac-
tions (Dawes 1979; Grove and Meehl 1996; Meehl 1954).
First, we extend this literature by addressing the question of
whether hedonic/utilitarian trade-offs in decision making drive
preference for or resistance to AI recommenders. This question
is novel, as prior research has not relied on differences inherent
to hedonic/utilitarian consumption to predict people’s reactions
to receiving advice from automated systems.
Second, we show under what circumstances AI-driven rec-
ommendations are preferred to, and therefore more effective,
than human ones: when utilitarian attributes are relatively more
important or salient than hedonic ones. Research in this area has
largely focused on consumers’ resistance to automated systems.
For example, in the domain of performance forecasts, people
are less likely to rely on the input of an algorithm than a person
to make predictions about student performance, an effect that is
due to the belief that algorithms, unlike people, cannot learn
from their mistakes (Dietvorst, Simmons, and Massey 2014). In
the domain of health care utilization, people are less likely to
rely on an automated medical provider if a human provider is
available, even when the two providers have the same accuracy
(Longoni, Bonezzi, and Morewedge 2019, 2020).
Limited research has identified under what circumstances
resistance to algorithmic advice is attenuated: if people have
the opportunity to modify algorithms and thus exert control
over them (Dietvorst, Simmons, and Massey 2016), if the
human likeness of algorithms is increased (Castelo, Bos, and
Lehmann 2019), if the task entails a numeric estimate of a target
(Logg, Minson, and Moore 2019), and if the algorithm is
described as tailoring a recommendation to a person’s unique
case (Longoni, Bonezzi, and Morewedge 2019, 2020). We
extend this literature by showing circumstances in which con-
sumers’ resistance to AI may be reversed and by showing cases
in which consumers even prefer automated systems: when they
assign greater importance to utilitarian attributes or when a
utilitarian goal is activated.
Third, we explore under what circumstances consumers will
be amenable to AI recommenders in the context of human–AI
partnerships. We show that augmented intelligence helps bol-
ster AI to the level of humans for hedonic decision making and
helps bolster humans to the level of AI for utilitarian decision
making. This contribution is important because it represents the
first empirical test of augmented intelligence as an alternative
conceptualization of artificial intelligence that focuses on AI’s
assistive role in advancing human capabilities. We hope that this contribution will spur new research focused on understanding the potential of AI in conjunction with humans rather than in opposition to them, a way forward that many practitioners advocate (Araya 2019; Hao 2020).
We also contribute to the literature on hedonic and utilitar-
ian consumption (Alba and Williams 2013; Khan and Dhar
2010; Moreau and Herd 2009; Whitley, Trudel, and Kurt
2018). Literature in this area has identified the factors that
influence evaluation of hedonic and utilitarian product dimen-
sions. We extend this literature by investigating how hedonic/
utilitarian attribute trade-offs influence the effectiveness of a
source of a product recommendation (i.e., a human vs. an AI recommender; Studies 1a, 1b, 3–5) and how the source of a product recommendation influences hedonic/utilitarian perceptions (Studies 2, 6–7b).

Figure 6. Stimuli of Study 7b.
Managerial Implications
The current speed of development and adoption of AI, machine
learning, and natural language processing algorithms challenges
managers to harness these transformative technologies to opti-
mize the customer experience. Our findings are insightful for
managers as they navigate the remarkable technology-enabled
opportunities that are growing in today’s marketplace. These
new technologies are also experiencing a renewed prominence
in public discourse. For instance, the U.S. government has
established the National Artificial Intelligence Research and
Development Strategy to address economic and social impli-
cations of AI.
Our findings provide useful insights for both companies and
public policy organizations debating whether and how to effectively automate their recommendation systems. A company like
Sephora relies on both human-based recommendations, from sales associates and its customer base, and AI-based recommendations, through its Virtual Artist app, a conversational bot that interacts with prospective shoppers. Our results suggest cases in which AI-based recommendations would be more effective (i.e., when utilitarian attributes are more salient or important, as for grooming products) and cases in which they would be less effective (i.e., when hedonic attributes are more salient or important, as for fragrances).
Our results are insightful for strategic and tactical marketing
decisions. Marketers could prioritize functional positioning
strategies over experiential ones in the case of AI-based rec-
ommendations for target segments for whom utilitarian attri-
butes are more important. For instance, a company in the
hospitality industry such as TripAdvisor should emphasize
AI-based recommendations for business travel services and
deemphasize AI-based recommendations for leisure travel ser-
vices. Our results also apply to a host of tactical decisions such
as marketing communications. Managers could communicate
to their customers in a way that is aligned with a target seg-
ment’s goal (i.e., hedonic vs. utilitarian) and emphasize the
most effective points of parity/difference with competing
brands or across different products in the portfolio. Companies
like Netflix and YouTube could emphasize AI-based recom-
mendations when utilitarian attributes are relatively more
important (e.g., documentaries) and human-based recommen-
dations (“similar users”) when hedonic attributes are relatively
more important (e.g., horror movies).
This research also highlights boundary conditions that may
prove useful for practitioners. Study 5 indicated that when
consumers want recommendations that are matched to their
unique preferences, they resist AI recommenders and instead
prefer human recommenders, regardless of hedonic or utilitar-
ian goals. These results suggest that companies whose custom-
ers are known to be satisfied with “one size fits all”
recommendations, or who are not in need of a high level of
customization, may rely on AI systems. However, companies
whose customers are known to desire personalized recommen-
dations should rely on humans. Some companies, such as Ama-
zon, seem to be implementing a similar strategy. Even though
most of Amazon’s recommendations are based on algorithms,
the company has recently started offering an additional service
for an added fee called “personal shopper.” This service relies
on human shopping assistants to give clothing recommenda-
tions rather than on algorithms. Our results indicate that more
companies, especially those in markets that are relatively more
hedonic, should follow Amazon’s example.
Study 6 provides another managerially relevant boundary
condition: augmented intelligence. The results of this study
indicate that consumers are more receptive to AI recommen-
ders, even in the case of hedonic goals, if the AI recommender
does not replace a human recommender but instead assists a
human recommender who retains the role of ultimate decision
maker. These results are important for practitioners managing
relatively more hedonic products or services. For instance, in a
personal conversation with the authors, a Walmart marketing
manager noted how the top two most frequently ignored rec-
ommendations on the company’s website are those for alco-
holic beverages and food items—arguably products for which
hedonic attributes tend to be more salient and important. In
these circumstances, practitioners could leverage our results
and utilize AI systems to generate an initial recommendation
on which a human then “signs off.”
Finally, in Studies 7a–7b we tested an intervention that
practitioners managing relatively more hedonic products and
relying on AI systems may execute. Building on the consider-
the-opposite protocol, we created a realistic chatbot that inter-
acted with participants and nudged them to consider that the AI
recommender could be good at things that participants did not
expect it to be good at. The intervention was successful in both
studies, suggesting that practitioners may utilize this technique
if hedonic attributes are important.
Limitations and Future Research
Despite the robustness of the word-of-machine effect, our
research has limitations that offer several opportunities for
future research. First, there is the possibility that drawing atten-
tion to the source of a recommendation primed study partici-
pants. AI recommenders might have primed utilitarian
attributes or made utilitarian goals more salient, and it was the
associated increased activation of these concepts, rather than
competence perceptions, that gave rise to the word-of-machine
effect. Although possible, this alternative explanation based on
priming is unlikely given the results of a study we report in
Web Appendix D. In this study (N = 230), we first primed
participants with either human or AI-related concepts by draw-
ing their attention to either a human or an AI recommender,
thus approximating the kind of priming that could have
occurred in our studies. To assess whether the AI recommender
primed utilitarian concepts, we then measured perceptions of
utilitarian and hedonic attributes of a stimulus in a domain
unrelated to one in which the priming manipulation occurred.
This stimulus was pretested to be neutral (i.e., perceived to be
equally utilitarian and hedonic). The results indicate that the
stimulus was perceived to be equally utilitarian and hedonic
regardless of the priming manipulation. Although these results
offer preliminary evidence that priming does not account for
the word-of-machine effect, the inferences one can draw from a
null effect are limited. More broadly, the question of whether
AI-based recommendations activate specific constructs that
might be influential on decision making is a worthy avenue for
future research.
Second, even though we tested the word-of-machine effect
across multiple domains, there remains the possibility that the
effect is stronger or weaker in certain categories. For instance,
the effect might be stronger in categories (e.g., a chocolate
cake) in which discerning hedonic attributes (e.g., how tasty
or how indulgent it is) is easier than discerning utilitarian attri-
butes (e.g., how many macronutrients it contains, or how
healthy it is). Future research could more systematically inves-
tigate what dimensions of different product categories
strengthen versus weaken the word-of-machine effect.
Third, the lay beliefs underlying the word-of-machine effect
may be transitional. As competence perceptions driving the
word-of-machine effect are based on a lay belief, they are
embedded in a cultural view that may change over time. The
lay belief about differential competence perceptions may
already be inaccurate, as AI is already utilized in domains that
are relatively more hedonic. For instance, AI curates flower
arrangements on the basis of customers’ past transactions and
inferred preferences (1-800-Flowers) and creates new flavors
for food companies such as McCormick, Starbucks, and Coca-
Cola (Venkatesan and Lecinski 2020).
Our research also suggests opportunities for future explora-
tion of this area. First, the word-of-machine effect may have
interesting downstream consequences on other responses. For
instance, relying on an AI recommender may lead consumers to
compensate by adjusting their own choices. Given the belief
that AI-based recommendations excel on utilitarian attributes
and are weaker on hedonic attributes, consumers may choose
from a set of options by paying closer attention to the hedonic
attributes of the options, assuming that the options are satisfac-
tory in terms of utilitarian attributes. This “second-step choice”
is an interesting question to consider in the future.
Second, in Studies 7a–b we show preliminary evidence of
how lay beliefs toward AI systems could be successfully alle-
viated through a protocol utilized in the decision making liter-
ature. Future research could identify other real-world variables
that might have similar attenuating effects, such as domain
expertise, involvement, time spent making decisions, or famil-
iarity/repeated use of AI systems. A third fruitful research
opportunity would be to explore whether consumers can be
persuaded to trust AI systems, even more than humans, in the
eventuality that AI systems are sufficiently sophisticated to
pass the Turing test. In this vein, future research could identify
conditions under which the word-of-machine effect reverses,
with AI recommenders being more persuasive than humans for
hedonic products.
As research on the psychology of automation expands to
include developments such as AI, we hope that our findings
(especially those of Study 6) will spur further research prior-
itizing the understanding of the vast potential of AI operating in
partnership with humans. More research is also necessary to
map out the impact of AI systems across consumption settings.
AI-powered technologies will be instrumental in optimizing
the customer experience at each phase of the consumer journey
by offering products of increasing personalization (Venkatesan
and Lecinski 2020). New technologies like image, text, and
voice recognition, together with large-scale A/B testing, will
provide managers with the data necessary for a complete, AI-
driven customization of the journey (Venkatesan and Lecinski
2020) and will allow researchers to gather the consumer signals
that are produced as a by-product of consumer activities
(Schweidel et al. 2020). We hope that future research will focus
on how to harness this great potential of AI for managers and
researchers alike.
Overall, understanding when consumers will be amenable to
and when they will resist AI-driven recommendations is a
pressing and complex endeavor for researchers and firms alike.
We hope that our research will spur further exploration of this
important topic.
Acknowledgments
The authors would like to acknowledge the Journal of Marketing
review team for their guidance throughout the review process, and
Remi Trudel, Bernd Schmitt, Raj Venkatesan, and the USC Dornsife
Mind & Society Center for comments on an earlier version of this
article. The authors also gratefully acknowledge Shi Hao Ruan, James
Weissman, the BRAD lab, and restaurant LP26 in Cortina (Italy) for
their help with data collection.
Author Contributions
All authors contributed equally.
Associate Editor
Connie Pechmann
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to
the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, author-
ship, and/or publication of this article.
ORCID iD
Luca Cian https://orcid.org/0000-0002-8051-1366
References
Alba, Joseph W. and Elanor F. Williams (2013), “Pleasure Principles:
A Review of Research on Hedonic Consumption,” Journal of Con-
sumer Psychology, 23 (1), 2–18.
Allison, Ralph I. and Kenneth P. Uhl (1964), “Influence of Beer Brand
Identification on Taste Perception,” Journal of Marketing
Research, 1 (3), 36–39.
Araya, Daniel (2019), “3 Things You Need to Know About Augmented Intelligence,” Forbes (January 22), https://www.forbes.com/sites/danielaraya/2019/01/22/3-things-you-need-to-know-about-augmented-intelligence/.
Batra, Rajeev and Olli T. Ahtola (1991), “Measuring the Hedonic and Utilitarian Sources of Consumer Attitudes,” Marketing Letters, 2 (4), 159–70.
Bazerman, Max H., Ann E. Tenbrunsel, and Kimberly Wade-Benzoni
(1998), “Negotiating with Yourself and Losing: Understanding and
Managing Conflicting Internal Preferences,” Academy of Manage-
ment Review, 23 (2), 225–41.
Bhargave, Rajesh, Amitav Chakravarti, and Abhijit Guha (2015), “Two-Stage Decisions Increase Preference for Hedonic Options,” Organizational Behavior and Human Decision Processes, 130, 123–35.
Boatsman, James, Cindy Moeckel, and Buck K.W. Pei (1997), “The
Effects of Decision Consequences on Auditors’ Reliance on Deci-
sion Aids in Audit Planning,” Organizational Behavior and
Human Decision Processes, 71 (2), 211–47.
Botti, Simona and Ann L. McGill (2011), “Locus of Choice: Personal
Causality and Satisfaction with Hedonic and Utilitarian Deci-
sions,” Journal of Consumer Research, 37 (4), 1065–78.
Castelo, Noah, Maarten W. Bos, and Donald R. Lehmann (2019), “Task-Dependent Algorithm Aversion,” Journal of Marketing Research, 56 (5), 809–25.
Cian, Luca, Chiara Longoni, and Aradhna Krishna (2020),
“Advertising a Desired Change: When Process Simulation Fosters
(vs. Hinders) Credibility and Persuasion,” Journal of Marketing
Research, 57 (3), 489–508.
Crowley, Ayn E., Eric R. Spangenberg, and Kevin R. Hughes (1991),
“Measuring the Hedonic and Utilitarian Dimensions of Attitudes
Toward Product Categories,” Marketing Letters, 3 (3), 239–49.
Dawes, Robyn M. (1979), “The Robust Beauty of Improper Linear Models in Decision Making,” American Psychologist, 34 (7), 571–82.
Dhar, Ravi and Klaus Wertenbroch (2000), “Consumer Choice
Between Hedonic and Utilitarian Goods,” Journal of Marketing
Research, 37 (1), 60–71.
Dietvorst, Berkeley J., Joseph P. Simmons, and Cade Massey (2014),
“Algorithm Aversion: People Erroneously Avoid Algorithms After
Seeing Them Err,” Journal of Experimental Psychology: General,
144 (1), 114–26.
Dietvorst, Berkeley J., Joseph P. Simmons, and Cade Massey (2016),
“Overcoming Algorithm Aversion: People Will Use Imperfect
Algorithms If They Can (Even Slightly) Modify Them,” Manage-
ment Science, 64 (3), 1155–70.
Franke, Nikolaus, Peter Keinz, and Christoph J. Steger (2009),
“Testing the Value of Customization: When Do Customers Really
Prefer Products Tailored to Their Preferences?” Journal of Mar-
keting, 73 (5), 103–21.
Grove, William M. and Paul E. Meehl (1996), “Comparative Efficiency of Informal (Subjective, Impressionistic) and Formal (Mechanical, Algorithmic) Prediction Procedures: The Clinical–Statistical Controversy,” Psychology, Public Policy, and Law, 2 (2), 293–323.
Hao, Karen (2020), “AI Is Learning When It Should and Shouldn’t Defer to a Human,” MIT Technology Review (August 5), https://www.technologyreview.com/2020/08/05/1006003/ai-machine-learning-defer-to-human-expert/.
Highhouse, Scott (2008), “Stubborn Reliance on Intuition and Subjectivity in Employee Selection,” Industrial and Organizational Psychology: Perspectives on Science and Practice, 1 (3), 333–42.
Hirschman, Elizabeth C. and Morris B. Holbrook (1982), “Hedonic
Consumption: Emerging Concepts, Methods, and Propositions,”
Journal of Marketing, 46 (3), 92–101.
Holbrook, Morris B. (1994), “The Nature of Customer Value,” in
Service Quality: New Directions in Theory and Practice, Roland
T. Rust and Richard L. Oliver, eds. Thousand Oaks, CA: SAGE
Publications, 21–71.
Khan, Uzma and Ravi Dhar (2010), “Price-Framing Effects on the
Purchase of Hedonic and Utilitarian Bundles,” Journal of Market-
ing Research, 47 (6), 1090–99.
Khan, Uzma, Ravi Dhar, and Klaus Wertenbroch (2005), “A Beha-
vioral Decision Theory Perspective on Hedonic and Utilitarian
Choice,” in Inside Consumption: Frontiers of Research on Con-
sumer Motives, Goals, & Desires, S. Ratneshwar and David Glen
Mick, eds. Abingdon-on-Thames, UK: Routledge, 144–65.
Leung, Eugina, Gabriele Paolacci, and Stefano Puntoni (2019), “Man
Versus Machine: Resisting Automation in Identity-Based Con-
sumer Behavior,” Journal of Marketing Research, 55 (6), 818–31.
Littrell, Mary A. and Nancy J. Miller (2001), “Marketing Across
Cultures: Consumers’ Perceptions of Product Complexity, Famil-
iarity, and Compatibility,” Journal of Global Marketing, 15 (1),
67–86.
Logg, Jennifer M., Julia Minson, and Don A. Moore (2019), “Algorithm Appreciation: People Prefer Algorithmic to Human Judgment,” Organizational Behavior and Human Decision Processes, 151, 90–103.
Longoni, Chiara, Andrea Bonezzi, and Carey K. Morewedge (2019),
“Resistance to Medical Artificial Intelligence,” Journal of Con-
sumer Research, 46 (4), 629–50.
Longoni, Chiara, Andrea Bonezzi, and Carey K. Morewedge (2020),
“Resistance to Medical Artificial Intelligence is an Attribute in a
Compensatory Decision Process: Response to Pezzo and Beck-
stead,” Judgment and Decision Making, 15 (3), 446–48.
Lord, Charles G., Mark R. Lepper, and Elizabeth Preston (1984), “Considering the Opposite: A Corrective Strategy for Social Judgment,” Journal of Personality and Social Psychology, 47 (6), 1231–43.
Meehl, Paul (1954), Clinical Versus Statistical Prediction: A Theoretical Analysis and a Review of the Evidence. Minneapolis: University of Minnesota Press.
Moreau, C. Page and Kelly B. Herd (2009), “To Each His Own? How
Comparisons with Others Influence Consumers’ Evaluations of
Their Self-Designed Products,” Journal of Consumer Research, 36 (5), 806–19.
Morris, Michael W., Tanya Menon, and Daniel R. Ames (2001), “Culturally Conferred Conceptions of Agency: A Key to Social Perception of Persons, Groups, and Other Actors,” Personality and Social Psychology Review, 5 (2), 169–82.
Mussweiler, Thomas, Fritz Strack, and Tim Pfeiffer (2000), “Overcoming the Inevitable Anchoring Effect: Considering the Opposite Compensates for Selective Accessibility,” Personality and Social Psychology Bulletin, 26 (9), 1142–50.
Okada, Erica M. (2005), “Justification Effects on Consumer Choice of
Hedonic and Utilitarian Goods,” Journal of Marketing Research,
42 (2), 43–53.
Ordabayeva, Nailya and Pierre Chandon (2016), “In the Eye of the
Beholder: Visual Biases in Package and Portion Size Perceptions,”
Appetite, 103, 450–57.
Ross, Lee and Richard E. Nisbett (1991), The Person and the Situa-
tion: Perspectives of Social Psychology. New York: McGraw-Hill.
Sanders, Nada R. and Karl B. Manrodt (2003), “The Efficacy of Using Judgmental Versus Quantitative Forecasting Methods in Practice,” The International Journal of Management Science, 31 (6), 511–22.
Schweidel, David, Yakov Bart, Jeff Inman, Andrew Stephen, Barak
Libai, Michelle Andrews, et al. (2020), “In the Zone: How Tech-
nology Is Reshaping the Customer Journey,” working paper.
Spiller, Stephen A., Gavan J. Fitzsimons, John G. Lynch Jr., and Gary H. McClelland (2013), “Spotlights, Floodlights, and the Magic Number Zero: Simple Effects Tests in Moderated Regression,” Journal of Marketing Research, 50 (2), 277–88.
Timmermans, Danielle (1993), “The Impact of Task Complexity on
Information Use in Multi-Attribute Decision Making,” Journal of
Behavioral Decision Making, 6 (2), 95–111.
Venkatesan, Rajkumar and Jim Lecinski (2020), The AI Marketing
Canvas: A Five Stage Roadmap to Implementing Artificial Intelli-
gence in Marketing. Redwood City, CA: Stanford University Press.
Wardle, Jane and Wendy Solomons (1994), “Naughty but Nice: A
Laboratory Study of Health Information and Food Preferences,”
Health Psychology, 13 (2), 180–83.
Wason, P.C. and Evelyn Golding (1974), “The Language of Incon-
sistency,” British Journal of Psychology, 65 (4), 537–46.
Whitley, Sara C., Remi Trudel, and Didem Kurt (2018), “The Influence of Purchase Motivation on Perceived Preference Uniqueness and Assortment Size Choice,” Journal of Consumer Research, 45 (4), 710–24.