Recommendations as personalized marketing:
insights from customer experiences
Anyuan Shen
School of Business, State University of New York at New Paltz, New Paltz, New York, USA
Abstract
Purpose – This paper reports an exploratory study of customers’ “lived” experiences of commercial recommendation services to better understand customer expectations for personalization with recommendation agents. Recommendation agents programmed to “learn” customer
preferences and make personalized recommendations of products and services are considered a useful tool for targeting customers individually. Some
leading service firms have developed proprietary recommender systems in the hope that personalized recommendations could engage customers,
increase satisfaction and sharpen their competitive edge. However, personalized recommendations do not always deliver customer satisfaction. More
often, they lead to dissatisfaction, annoyance or irritation.
Design/methodology/approach – The critical incident technique is used to analyze satisfactory and dissatisfactory customer incidents, collected from online group discussion participants and bloggers, in order to develop a classification scheme.
Findings – A classification scheme with 15 categories is developed. Each category is illustrated with satisfactory and dissatisfactory incidents and is defined in terms of an underlying customer expectation, typical instances of satisfaction and dissatisfaction and, when possible, the conditions under which customers are likely to hold that expectation. Three pairs of themes emerged from the classification scheme, and six tentative research propositions are introduced.
Research limitations/implications – Findings from this exploratory research should be regarded as preliminary. In addition, the content validity of the categories and the generalizability of the findings should be examined in future research.
Practical implications – Research findings have implications for identifying priorities in developing algorithms and for managing personalization
more strategically.
Originality/value – This research explores responses to personalization from the customer’s perspective.
Keywords Customer satisfaction, Personalization, Critical incidents, Consumer preference, Recommendation agent
Paper type Research paper
An executive summary for managers and executive
readers can be found at the end of this issue.
Introduction
Recommendation agents programmed to “learn” customer preferences and make personalized recommendations of products and services (Gershoff and West, 1998; West et al., 1999) are considered a very useful tool for personalized marketing, i.e. targeting each customer individually in marketing (Rust and Chung, 2006; Simonson, 2005). Some leading service firms have developed proprietary recommender systems in the hope that making personalized recommendations could engage customers and increase customer satisfaction, thus sharpening the competitive edge of their businesses (Netflix Annual Report, 2010; Amazon Annual Report, 2010).
For all its potential to enhance customer satisfaction, the practice of making personalized recommendations has met with both successes and challenges (Flynn, 2006; Shen and Ball, 2009). In some instances, personalized recommendations may actually lead to customer dissatisfaction, even annoyance or irritation (Iacobucci, 2006, p. 582):
I love Amazon.com; I really do. I think “amazon” should be pronounced: a-maz’-in(g). Yet, it’s quite difficult to opt out of their overriding, tailored
recommendation format. When I browse for books, I want to know
everything that’s out there, not someone’s heuristic of what they think I’ll
like. Algorithms based on similarity, within genre, or frighteningly to other
customers with whom I probably share no other qualities, assume that
customers want assistance drilling down to find more, similar products. A
different assumption underlies the segment that reads voraciously and
eclectically to be transported to different worlds, via travel essays of Bryson
or Theroux, popular science writings on fractals or space worms,
international collections of sublime prayers, or a barf check on the popular
business press books. I might be a difficult case for Amazon, in that, as a
final insight into my psyche, I find these questions to be of equal importance:
“Will this new multivariate book explain concepts more clearly to my
doctoral students than books I’ve assigned previously?” and “Will Stephanie
Plum choose Morelli or Ranger?” Nevertheless, I didn’t ask Amazon to track
my preferences. Indeed, I wish they’d stop. I am an efficient and effective
expert in this category, and I don’t need some computer hack salesperson?
sales entity? suggesting what I’d enjoy. Further, the recommendations need
to be based on a better mix of the similarity between profile pattern
(correlation) and profile height (distance measures), because currently we’re
also bombarded with simple volume-based offerings: “Dawn, we
recommend the latest Harry Potter”, or worse, an Oprah read.
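The quoted customer’s aside about “profile pattern (correlation)” and “profile height (distance measures)” contrasts two standard ways of comparing preference profiles. As a minimal illustrative sketch (not part of the original article; the rating vectors below are hypothetical), two raters can agree almost perfectly in pattern while remaining far apart in level:

```python
# Illustrative sketch only: profile pattern (Pearson correlation) versus
# profile height (Euclidean distance) for two hypothetical rating profiles.
import numpy as np

profile_a = np.array([5.0, 4.0, 5.0, 1.0])   # enthusiastic rater
profile_b = np.array([3.0, 2.0, 3.0, 1.0])   # reserved rater with the same ups and downs

pattern_similarity = np.corrcoef(profile_a, profile_b)[0, 1]   # ~0.97: very similar pattern
height_difference = np.linalg.norm(profile_a - profile_b)      # ~3.46: quite different levels

print(f"correlation = {pattern_similarity:.2f}, distance = {height_difference:.2f}")
# A recommender that weights only one of these measures will treat these two
# profiles as either near-identical or clearly different.
```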
Is this disclosure of dissatisfaction merely “a difficult case” or
fairly common among customers who have interacted with
personalized recommendations? What are the major sources of
(dis)satisfaction with a service firm’s recommendations? These
questions are fundamental to personalized marketing – a topic
highly relevant to managers and academics.
To online service firms embracing personalization as part of
their core competitive strategy, (dis)satisfaction with agent
recommendations is not without consequences. First, customer
(dis)satisfaction may have an immediate impact on sales
transactions in the current service episode. While satisfied
customers are more likely to be engaged and to proceed to
placing orders, dissatisfied customers may simply drift away from
the current episode, or even to a competitor’s Web site (which is
just a click away). Second, customer (dis)satisfaction may also affect their overall experience with the service firm, which may in turn influence customer retention and long-term profitability. Furthermore, as customers become increasingly technology savvy, marketing personalized products and services is becoming a business necessity. A solid understanding
of sources of customer (dis)satisfaction with recommendation
services may help the development of recommendation
algorithms and may have broader implications for service firms
hoping to be more adaptive in their individual marketing
strategies.
Service researchers have studied personalized marketing in
interpersonal service encounters under the rubrics of service
customization (Bettencourt and Gwinner, 1996; Gwinner et al., 2005), service personalization (Mittal and Lassar, 1996; Surprenant and Solomon, 1987) and service relationships (Ball et al., 2006; Gwinner et al., 1998). However, personalization
through recommendation agents often does not involve the
intervention of service employees; rather, it is driven by
information technologies such as algorithms, databases, data
mining and artificial intelligence. The potential for
recommendation agents to create customer benefits by learning
preferences and personalizing services is not fully understood
(Dabholkar and Sheng, 2012; Rust and Chung, 2006; Shen and
Ball, 2009). Some researchers have even questioned the core
assumption of personalized marketing that customers have
preferences that marketers can reveal by building a learning
relationship (Kramer, 2007; Simonson, 2005). Drawing on the rich tradition of constructive choice theories (Bettman et al., 1998; Slovic, 1995), this stream of research argues that preferences are constructed at the time decisions are being made – customers often do not have stable preferences to be retrieved and applied to decisions. The notion of preference construction casts serious doubt on whether preference learning and personalization can add any value to service research. Indeed, research on recommendation agents has shifted its theoretical focus from preference learning to the context in which preferences are constructed (Aksoy et al., 2006; Cooke et al., 2002; Häubl and Murray, 2003; Kramer, 2007). For instance,
Kramer (2007) argued that, as preferences are often ill-defined,
consumers will evaluate personalized recommendations based on
how easily they can identify their stated preferences. He
demonstrated experimentally that measurement tasks that allow
consumers to “see through” the construction of their preferences
increase the likelihood of agent recommendations being chosen,
a finding he refers to as the “task transparency effect”.
Although the notion of constructed preference is
well-accepted in research, little is known about how customers
experience preference construction with the aid of
recommendation agents. Yet, how customers “live” the
personalization experiences of recommendation agents may be
interesting to researchers who wish to further investigate
whether and when preference learning and personalization by
recommendation agents could add value to customers – a
question fundamental to personalized marketing and service
research. Analyzing customer experiences of and articulating
customer expectations for agent recommendations may offer a
new angle to review the theorization of personalized marketing
and may potentially enrich our understanding of preference
learning and preference construction.
To identify sources of customer (dis)satisfaction with agent
recommendations, I conducted an exploratory study of critical
(dis)satisfaction incidents as narrated by customers who had
interacted with agent recommendations in their relationship with
a service firm. An exploratory approach is warranted because the
topic of customer (dis)satisfaction with agent recommendations
has scarcely been examined in previous research, as discussed
above. Taking the customer’s perspective is appropriate for the
specific objective of this research, which is to understand sources
of customer satisfaction or dissatisfaction with recommendation
services. The critical incident technique is a research method that
has been successfully applied to study other research problems
(Bitner, 1990; Gremler, 2004; Keaveney, 1995; Meuter et al.,
2000). I collected and content-analyzed critical incidents and
experiences narrated by customers to answer these questions:
From the customer’s perspective, what are the major sources of
(dis)satisfaction with agent recommendation services? What are
the underlying customer expectations driving their
(dis)satisfaction?
In the rest of the paper, I first discuss the critical incident
technique and the procedures I followed in the collection and
analysis of critical (dis)satisfaction incidents. I then present
the classification scheme developed to account for all the
(dis)satisfaction incidents, discuss coding results, identify
themes emerging from these incidents and suggest tentative
research propositions. Finally, I discuss the managerial and
theoretical implications of this research.
Research methodology
Critical incident technique
The critical incident technique is a qualitative research
method that enables researchers to collect and analyze
incidents or occurrences of interest and develop classification schemes that help address practical problems (Bitner, 1990; Flanagan, 1954; Gremler, 2004). Marketing and consumer
researchers find it an especially valuable tool when the
research objective is to develop understanding of behavior of
interest in the marketplace. Gremler (2004) identified 106
articles in major journals of marketing research that had used
this research tool. For instance, service researchers have used
it to explore customer (dis)satisfaction with service encounters
(Bitner, 1990; Bitner et al., 1994; Meuter et al., 2000),
incidents that precede service switching (Keaveney, 1995) and
retail failures and recoveries (Kelley et al., 1993). Following
Gremler’s (2004) advice, and because the critical incident technique is already widely accepted in research, the rest of my discussion focuses on the operational procedures I used to conduct the
research. Interested researchers may refer to Gremler (2004)
for a comprehensive review of the critical incident technique.
Defining critical incidents
In this research, a critical incident is an episode of service
interaction between a customer and his/her service firm that
takes place via the firm’s proprietary recommender system to
which the customer attributes his/her (dis)satisfaction. That
is, an interaction episode must meet two requirements
simultaneously to qualify as a critical incident. First, the
interaction episode must take place via the company’s
proprietary recommender system (e.g. clicking to view
recommended items, rating recommended items). Thus,
failure to receive shipments of orders placed from
Amazon.com should not be considered a qualified incident
because it involves the company’s delivery system, not its
recommender system. Second, an interaction episode must
result in a customer’s satisfaction or dissatisfaction (i.e. the
valence must be unambiguously positive or negative). Thus,
merely explaining to other customers how to use Netflix’s
recommender system should not be considered a qualified
incident because no satisfaction or dissatisfaction can be
observed. As an example of a qualified incident, a customer
may express satisfaction when Amazon.com recommends a
book he/she enjoys reading and would not have found
otherwise. Alternatively, a customer may express
dissatisfaction when Netflix recommends a movie that turns
out to be a total disappointment.
Data collection
Research on critical incidents starts with collecting customer
narratives of their own service experiences. Although
narratives of critical incidents are often obtained through
personal interviews or surveys in previous research (Keaveney,
1995; Meuter et al., 2000), narratives of customer
(dis)satisfaction with personalized recommendation services
may be more often communicated among members of online
communities (e.g. in online discussion groups or via Web
blogs), and can be observed in a naturalistic way following the
netnography approach (Kozinets, 2002, 2010). Such critical
incidents, recorded as they occur or immediately after and
collected by researchers from online discussion groups and
personal blogs, may be more accurate than those obtained at
a later time of data collection by prodding customers with
interviews or survey questions (Kozinets, 2010). Although this
netnography approach makes it difficult to collect reliable
customer demographics data, previous research has suggested
that this should not be a major concern if the research
objective is to gain research insights (Bitner et al., 1990,
1994). With this consideration in mind, I decided to adopt the
naturalistic netnography approach in data collection and
analysis (Kozinets, 2002, 2010), by observing and selecting
communications that take place in online communities
without interrupting their flow.
Personalized recommendation services are available from a
number of service firms. I selected three service firms (Amazon,
Netflix and Apple in its iTunes) whose proprietary recommender
systems have been in operation for a relatively extended period.
To collect data, I searched through Google for online discussion
groups and blogs under key words such as “Amazon
recommendations”, “personalized recommendations”, “Netflix
recommendations”, “Netflix movie recommendation system”,
“iTunes Genius” or “Genius recommendations”. The narratives
collected for this research were mostly from online discussion
groups (e.g. rec.arts.sf.written) or personal blogs (e.g. http://
safle.org/wordpress/2011/11/23/amazon-recommendations-
and-useless-algorithms.html).
To ensure data quality, returned search results were first screened by the researcher against the qualification requirements discussed earlier: narratives were screened out if they did not pertain to using the company’s recommender system, if they took a neutral stance (e.g. how-to-use instructions) or if they reported ambiguous experiences (e.g. both positive and negative responses to the same service episode). I also checked whether narratives read as if written by ordinary customers conversing in discussion groups, or writing blog monologues, about their experiences of the firm’s recommendation service. Narratives that appeared on company-sponsored Web sites or that read as if written by business executives or algorithm developers were discarded. I
exercised caution regarding privacy and copyrights, limiting my
search to Web sites or blogs that existed in the public domain and
did not place restrictions on the academic use of content.
In preparation for data coding, I again read discussion
threads or blog posts individually as well as in their original
contexts to determine, using the criteria discussed above,
whether they were qualified critical incidents – disqualified
incidents were again discarded and qualified incidents were
highlighted as either satisfactory incidents or dissatisfactory
incidents.
Sample size was determined by following the benchmark
numbers of previous studies (Bitner et al., 1990; Gremler, 2004; Keaveney, 1995; Meuter et al., 2000). In this research,
426 qualified narratives were collected from 358 online group
discussion participants and bloggers, for a total of 650
qualified incidents. A discussion participant or a blogger may
post multiple narratives, so the number of discussion
participants or bloggers is smaller than the number of
narratives (358 vs 426). Similarly, a narrative may consist
of one or more qualified critical incidents, so the number of
narratives is smaller than the number of critical incidents (426
vs 650). For more detailed distribution across the three service
firms, see Table I.
Table I Participants/bloggers, narratives and critical incidents by service firms

Service firm       Discussion participants/bloggers, N (%)    Narratives, N (%)    Critical incidents, N (%)
Amazon.com         115 (32)       140 (33)      212 (33)
Netflix.com        121 (34)       154 (36)      263 (40)
Apple’s iTunes     122 (34)       132 (31)      175 (27)
Total              358 (100)      426 (100)     650 (100)
Recommendations as personalized marketing
Anyuan Shen
Journal of Services Marketing
Volume 28 · Number 5 · 2014 · 414 –427
416
Data analysis
Unit of analysis
The unit of analysis is a critical incident – an episode of
interaction with the recommendation service that results in
satisfaction or dissatisfaction (no distinction was made between high and moderate levels of satisfaction or dissatisfaction).
As customers record their user experiences
naturalistically, a narrative may record a customer’s
response to one discrete interaction episode (e.g. viewing a
recommended item and finding it to be offensive) or several
discrete interaction episodes (e.g. viewing a recommended
item and finding it to be poorly related to his/her true
preference; rating the recommended item as a poor
recommendation; and receiving recommendations better
aligned with his/her preference). This is not uncommon, as
previous research has documented that a narrative may
involve two or more critical behaviors (Keaveney, 1995).
Sometimes, a narrative may report a cumulative response to
multiple discrete interactions of the same type. For
example, rating a movie after watching it is a discrete
interaction with the recommendation service. However,
rating 1,000 movies after viewing each of them in a period
of five years and finding it really easy to do is a cumulative
response based on multiple episodes of similar interaction
with the recommendation service. In this research, a
cumulative response is treated as one critical incident.
Classification scheme development and inter-rater reliability
An iterative inductive delineation process (Bitner et al.,
1990; Gremler, 2004) was followed to develop a
classification scheme that could account for all the critical
incidents. I first read and reread each of the 650 qualified
incidents that resulted in satisfaction (38 per cent, or 248
incidents) or dissatisfaction (62 per cent, or 402 incidents)
alone and in combination with its original context to allow
commonalities to emerge. Then, a classification scheme was drafted, with a definition for each category. In the third stage, I sorted each identified incident into a category of the drafted classification scheme, making minor modifications to the category definitions when necessary, until I was confident that the categories in the classification scheme could account for all qualified critical incidents and that each critical incident had been sorted into the correct category.
To test the reliability of the classification scheme, two
business students were recruited as coders (coder A and
coder B). They first received training to familiarize
themselves with the classification categories previously
developed by the researcher. They then used the
classification scheme to independently code all the
satisfactory and dissatisfactory incidents. The two coders agreed on the category designation for 82.92 per cent of the critical incidents. Coder A agreed with the researcher on the designation of 81.69 per cent of the incidents, while coder B agreed with the researcher on the designation of 80.15 per cent of the incidents. This level of inter-judge reliability is considered acceptable.
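For readers who wish to reproduce agreement figures of this kind, percent agreement is simply the share of incidents to which two judges assign the same category. A minimal sketch of the computation (illustrative only; the category labels below are hypothetical and not taken from the study’s data):

```python
# Illustrative sketch: pairwise percent agreement between two coders.
def percent_agreement(labels_a, labels_b):
    """Share of incidents assigned to the same category by both coders."""
    assert len(labels_a) == len(labels_b), "both coders must code the same incidents"
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return 100.0 * matches / len(labels_a)

# Hypothetical category assignments for four incidents.
coder_a = ["accuracy", "discovery", "privacy", "algorithm"]
coder_b = ["accuracy", "discovery", "sales motive", "algorithm"]
print(f"{percent_agreement(coder_a, coder_b):.2f} per cent agreement")  # 75.00 per cent agreement
```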
Results
The classification scheme
Following the procedures described above, a classification
scheme with 15 categories has been developed to account for
all the 650 incidents. In Table II, all categories except the
“other” category in the classification scheme are each
illustrated with an exemplary incident.
Number and per cent of critical incidents by category
Table III is a summary report of the number and percentage of
critical incidents that fall into each of the categories. Customer
discussion of personalized recommendations is concentrated
(79.6 per cent of all incidents) in the eight topic areas of
algorithm (Is the recommendation algorithm superior or
inferior?), discovery (Do recommendations help me find
valuable items I would not find otherwise?), convincing
connection (Are recommendations connected to my preference
in a convincing manner?), privacy (Does the system violate my
privacy?), accuracy (Do recommendations match my
preference?), customer knowledge (Do recommendations
indicate that the company has good knowledge of the
customer?), sales motive (Does the company only make
recommendations out of self-interested motive?) and product
knowledge (Do recommendations indicate that the company
has good knowledge of the products it recommends?). The
higher proportion of dissatisfaction incidents (402 of 650, or 61.8 per cent) relative to satisfaction incidents (248 of 650, or 38.2 per cent) suggests that overall customer experiences with agent recommendations might be in the negative region.
Table IV is a ranking of the most discussed categories, the
top satisfying and dissatisfying categories and the top winning
and losing categories. The categories most discussed (79.6 per
cent of all critical incidents) are, in order of frequency,
algorithm (16.3 per cent), discovery (13.2 per cent),
convincing connection (10.3 per cent), privacy (10.2 per
cent), accuracy (9.1 per cent), customer knowledge (7.4 per
cent), sales motive (6.9 per cent) and product knowledge (6.3
per cent).
Interestingly, 77.3 per cent of all dissatisfaction incidents are
related to algorithm (the recommendation algorithm is perceived
to be inferior), convincing connection (recommendations are not
connected to the customer’s preference in a convincing manner),
discovery (recommendations are repetitive and do not help the
customer discover valuable items), sales motive (the company
only makes recommendations out of self-interested motive),
customer knowledge (recommendations indicate the company has
poor knowledge of the customer), product knowledge
(recommendations indicate the company has poor knowledge of
the products it recommends), privacy (the recommendation
system is a violation of privacy) and accuracy (recommendations do not match the customer’s preference). Dissatisfaction incidents are dominant when it comes to convincing connection, product knowledge, sales motive and customer knowledge. Incidents are
almost exclusively dissatisfactory in sub-accounts (the
recommendation system does not separate the preference
profiles of users who share the account), redundancy (making
redundant recommendations), propriety (recommendations are
embarrassing or inappropriate) and availability (the system
cannot generate any recommendations when needed).
Table II Illustrative incidents for categories in the classification scheme
Categories of (dis)satisfaction sources    Satisfaction incidents    Dissatisfaction incidents
Accuracy “The Amazon.com recommendation engine knows me fairly well
[. . .] I’ve been their customer for almost 15 years, and the
books they try to sell me genuinely reflect my passions: Spanish-
language literature, personal essays, travel, business and
marketing, children books and books related to World’s Fairs”
“[. . .] it seems that I’m not the only one irritated by
Netflix’s recommendations[. . .] Is there a way for me to
use Netflix’s rating system in a way that will cause the
system to recommend movies that truly match my
tastes, or should I forget about it?”
Available at: http://aguahispanicmarketing.com/lost-in-the-jungle/;
Posted by anonymous on 6/30/11 (accessed 27 March 2012)
Available at: http://ask.metafilter.com/185224/Do-you-
know-anything-about-Netflix-ratings; Posted by partner
to media & arts on 5/6/2011 (accessed 17 March 2012)
Discovery “[. . .] I utilize [. . .] (Netflix) electronic recommendations when
choosing films, and I find that this is one of the best ways to
discover films that I honestly would not have discovered on my
own”
Available at: http://themotionpictures.wordpress.com/2012/04/07/
5-ways-i-choose-films/; Posted by Lindsey on 4/7/12 (accessed
on 21 April 2012)
“[. . .] these recommendations are based on the idea of
‘more of the same’. So if you like one British comedy
show, you get recommended more British comedies [. . .]
I think that if you just follow the recommendations, you
might end up just reading the same thing over and
over”
Available at: http://christinarosendahl.wordpress.
com/2012/01/12/are-recommendations -bad-for-you/; Posted
by Christina R. on 1/12/12 (accessed on 12 February 2012)
Convincing connection “Netflix definitely got more linear with its recommendation this
week. The connection between St. Elmo’s Fire and About Last
Night are pretty clear: Brat-Pack actors, similar plot lines,
beautiful twenty-something’s having some sex”
Available at: www.secretly-important.com/category/52-weeks-of-
netflix/; Posted by jaimemnavarro on 10/6/11 (accessed on 24
February 2012)
“(iTunes Genius) suggests other music [. . .] nebulously,
tenuously or debatably related to that music. If you like
hearing the preternaturally visionary melancholia of
Elliott Smith, I will advise you to purchase ‘The A Team’
by Ed Sheeran, who honestly does share more with
Smith than initials, and is in no way a derivative and
cynical unit-shifting cipher disguised as a songwriter”
Available at: http://hairyapplefeed.blogspot.com/2011/08/
itunes-genius-where-your-heart-should.html Posted by
HairyAppleMen on 8/4/11 (accessed on 29 March 2012)
Algorithm “[. . .] how they make the magic happen [. . .] Netflix looks at
the movies you rate, (anonymously) finds other users who’ve
given the same movies the same ratings, and then recommends
other movies that they rated highly. They then classify these
recommendations into categories based on traits associated with
the movies”
“iTunes recommendations seem to be rather run-of-the-
mill collaborative filtering recommendations based upon
the wisdom of the crowds [. . .] recommendations seem
to be artist-based and not album or track-based[. . .] the
Genius just picks tracks from similar artists regardless of
how well the track is representative for the artist”
Available at: www.entertainedorganizer.com/2011/08/what-
netflix-has-taught-me-about-myself.html; Posted by Patrick on
8/11/11 (accessed on 24 February 2012)
Available at: http://synthese.wordpress.com/2008/09/10/
genius-itunes-recommender/ Posted by Andre Vellino on
9/10/08 (accessed on 29 March 2012)
Voluntary
participation
“I happily spend a bit of time entering my ratings of just about
every book I’ve ever read, just for fun”
Available at: rec.arts.sf.written; Posted by Brian Charles Kohn on
3/27/10 (accessed on 3 February 2012)
“I have spent a number of hours rating the stuff I own
(telling Amazon that I both already own and what star
ranking I give it) and checking off ‘not interested’ [. . .]
The downside is that it takes time”
Available at: http://boards.straightdope.com/sdmb/
showthread.php?t518189 Posted by phreesh on
5/21/09 (accessed on 9 February 2012)
User control “By rating a lot of books and putting books on my wishlist [. . .]
I’ve got this doing a pretty good job of recommending stuff to
me [. . .] It does an especially decent job keeping up with new
books by authors I like and new books about my professional
interests”
“I have such an eclectic browsing history and have
purchased so many gifts through Amazon that my
recommendations are utter crap and after tinkering
around with the ‘fix this recommendation’ feature, I
haven’t been able to fix a thing”
Available at: http://boards.straightdope.com/sdmb/showthread.
php?t518189; Posted by Harriet the Spry on 5/20/09 (accessed
on 24 February 2012)
Available at: www.thecompulsivereader.com/2012/
01/reading-rants-balancing-amazon-and.html; Posted by The
Compulsive Reader on 1/2/12 (accessed on 2 February 2012)
Sales motive “[. . .] Genius also has some recommendations from the iTunes
store but it certainly isn’t a pushy salesman. You can preview the
purchasable music from within a sidebar or simply slam the section
shut if you aren’t in the mood to kick start our economy”
“I don’t know why I am not as comfortable with the
similar Amazon system. I suspect it is because every time I
look there, they make recommendations that are obviously
based on what they have been paid to promote”
Available at: http://usedwigs.com/itunes-genius/; Posted by Todd
Marrone on 9/19/08 (accessed on 29 March 2012)
Available at: rec.arts.sf.composition/; Posted by Ian
Montgomerie in 11/08 (accessed on 9 February 2012)
Product knowledge “(brick and mortar bookstores) sales staffs not as
knowledgeable as the mythology suggests. Amazon’s
recommendation engine is worlds better than anybody I ever
met in a bookstore, especially if you understand how it works
(and how it’s biased) and tweak it”
“‘Genius’ is of no use to those whose taste in music
goes beyond a fairly narrow range. On my Macs it can’t
make a playlist based on Vladimir Horowitz playing
Chopin. This is not esoteric stuff[. . .]”
Available at: http://mhpbooks.com/48750/when-will-big-
publishers-speak-out-about-amazon/; Posted by TwoTooth on
2/2/12 (accessed on 22 February 2012)
Available at: comp.sys.mac.system/; Posted by Davoud
on 9/17/08 (accessed on 29 March 2011)
Customer knowledge “Since I do most of my reading on my Kindle, Amazon has a
good record of the books I like. To find new books to read, I
usually look at their recommendations for me. Chuck
Klosterman, Chelsea Handler, Tina Fey, Stephen Clarke, Amy
Sedaris, Kathy Griffin, Augusten Burroughs, Sarah Silverman,
Elizabeth Gilbert [. . .] the list varied quite a bit”
Available at: http://everythingisblooming.wordpress.com/2011/10/
21/on-amazon-recommendations/; Posted by Ashley on 10/21/11
(accessed on 2 February 2012)
“(Amazon) top recommendation is Dan Brown’s turgid
‘The Lost Symbol’. I cannot for the life of me think why
Amazon’s algorithms would recommend this [. . .] My
best guess is it is based on my browsing history because
I have looked at all the Dan Brown books on Amazon.
What Amazon failed to spot is that I did not just look at
the books–I reviewed them”
Available at: http://safle.org/wordpress/2011/11/23/amazon-
recommendations-and-useless-algorithms.html; Posted by
Stephen on 11/23/11 (accessed on 2 February 2012)
Propriety N.A. “Steve Wozniak just sent me this hilarious screenshot
[. . .] It’s from the Genius Recommendations in the
movie section of the iTunes Store: If you bought the PBS
documentary Steve Jobs: One Last Thing you will like
Hitler: A Career. His comment in the mail: ‘Someone at
Apple has a sense of humor”
Available at: http://gizmodo.com/5898202/if-you-bought-
steve-jobs-one-last-thing-youll-like-hitler-a-career-says-
itunes-genius; Posted by Jesus Diaz on 4/1/12 (accessed
on 3 April 2012)
Privacy “Did you not read the ‘Genius’ agreement? Apple collects a list
of music content but it does not collect the information that
would link it to an identity”
Available at: comp.sys.mac.system/; Posted by Davoud on 9/10/
08 (accessed on 29 March 2011)
“[. . .] i had a friend who download music from torrent
never been caught but when he turn it the Guinness bar
on about a week later his isp email him saying that
some company reported that he has download nusic of
torrent site and they would not apriciate it if does that
again”
Available at: www.ifans.com/forums/threads/genius-
piracy-detection.97237/; Posted by jaywurld on 5/6/10
(accessed on 7 April 2012)
Redundancy N.A. “I also wish that it would stop recommending other
editions of things that I already own”
Available at: http://arstechnica.com/civis/viewtopic.php?
f2&t20840; Posted by Chicago Burbs on 2/26/10
(accessed on 18 March 2012)
Sub-accounts N.A. “I just wish there was a way to segregate my wife’s
recommendations and likes from mine. She spends
considerably more time with Netflix than I do, so
whenever I go to look for something the
recommendations and estimated ratings aren’t exactly in
line with my tastes”
Available at: http://arstechnica.com/civis/viewtopic.php?f
23&t1146591; Posted by Chris FOM on 6/4/11 (accessed
on 15 February 2012)
Availability “IIRC, John Siracusa was complaining a lot at the beginning of Genius
that the feature would be useless for collections that weren’t mostly
iTMS purchases, and this has not been the case at all”
“Alas, Genius still has no recommendations for my new
copy of Taylor Swift’s ‘Speak Now,’ which I confess I did
buy from Amazon MP3 instead of iTunes”
Available at: http://musicmachinery.com/2011/05/14/how-good-is-
googles-instant-mix/; Posted by Zachary Pennington on 5/17/11
(accessed on 29 March 2012)
Available at: http://forums.macrumors.com/showthread.
php?t1041524 Posted by 6502a on 11/1/10 (accessed
on 29 March 2012)
In contrast, 73.4 per cent of satisfaction incidents are related
to discovery (recommendations help the customer discover
valuable items he/she would not find otherwise), algorithm (the
recommendation algorithm is perceived to be superior),
privacy (the system does not violate the customer’s privacy),
accuracy (recommendations closely match the customer’s
preference) and voluntary participation (the customer enjoys
participation). Satisfaction incidents are dominant when it
comes to voluntary participation, discovery and privacy.
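The frequencies and rankings reported in Tables III and IV follow from straightforward tallies of the coded incidents by category and valence. A minimal sketch of such a tally (illustrative only; the coded incidents below are hypothetical placeholders, not the study’s data):

```python
# Illustrative sketch: tallying coded incidents by category and valence,
# in the spirit of Tables III and IV.
from collections import Counter

incidents = [  # (category, valence) pairs; hypothetical examples
    ("discovery", "satisfaction"),
    ("algorithm", "dissatisfaction"),
    ("sub-accounts", "dissatisfaction"),
    ("discovery", "dissatisfaction"),
    ("privacy", "satisfaction"),
]

by_category = Counter(category for category, _ in incidents)
by_category_valence = Counter(incidents)
total = len(incidents)

for category, n in by_category.most_common():
    dissat = by_category_valence[(category, "dissatisfaction")]
    share_of_all = 100.0 * n / total        # "most discussed" share
    losing_rate = 100.0 * dissat / n        # within-category dissatisfaction rate
    print(f"{category}: {n} of {total} incidents ({share_of_all:.1f}%), "
          f"{losing_rate:.1f}% dissatisfactory")
```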
Customer expectations by category
Included in Table V is the customer expectation underlying each category of incidents.
Patterns or commonalities in customer expectations would
naturally emerge as the classification scheme was developed.
In the process of understanding and articulating customer
expectations, I also attempted to illustrate customer
expectations with typical instances (satisfaction vs
dissatisfaction) and, when possible, explore the conditions
under which these expectations are likely to occur. For
example, for the discovery category, customer expectation is
illustrated with the question: “Do recommendations help me
find valuable items I would not find otherwise?” Typical
instances exemplify moments of satisfaction or dissatisfaction:
The customer will be satisfied if recommendations are related to his/her
interest, offering variety in recommendations or even challenging him/her, in
order to find great things to enjoy. The customer would be dissatisfied if
recommendations are repetitive – “more of the same things”, “over and over
again” – thus failing to help him/her find great things to enjoy.
When it is possible, I also suggest the condition under which
the expectation is likely to arise: “When a customer has an
interest to be further defined through exploration”.
For all other categories, see Table V.
Emerging themes and research propositions
The research also attempts to identify more abstract,
higher-level themes across categories of incidents and to
develop tentative propositions for future research. When
Table V is closely examined, three pairs of themes seem to emerge from the 14 categorized sources of (dis)satisfaction, based on the overall directions of focal customer expectations in each category (Table VI).
Table III Classification of critical satisfaction/dissatisfaction incidents

Categories of (dis)satisfaction sources    Satisfaction incidents, N (%)    Dissatisfaction incidents, N (%)    Critical incidents, N (%)
Accuracy                   29 (11.7)     30 (7.5)      59 (9.1)
Discovery                  52 (21.0)     34 (8.5)      86 (13.2)
Convincing connection      14 (5.6)      53 (13.2)     67 (10.3)
Algorithm                  43 (17.3)     63 (15.7)     106 (16.3)
Voluntary participation    23 (9.3)      10 (2.5)      33 (5.1)
User control               12 (4.8)      16 (4.0)      28 (4.3)
Sales motive               11 (4.4)      34 (8.5)      45 (6.9)
Product knowledge          9 (3.6)       32 (8.0)      41 (6.3)
Customer knowledge         15 (6.0)      33 (8.2)      48 (7.4)
Propriety                  0 (0.0)       15 (3.7)      15 (2.3)
Privacy                    35 (14.1)     31 (7.7)      66 (10.2)
Redundancy                 0 (0.0)       20 (5.0)      20 (3.1)
Sub-accounts               0 (0.0)       22 (5.5)      22 (3.4)
Availability               1 (0.4)       7 (1.7)       8 (1.2)
Other                      4 (1.6)       2 (0.4)       6 (0.9)
Total                      248 (100)     402 (100)     650 (100)
Table IV Rankings of critical incident categories

Most discussed categories (by per cent of all critical incidents):
1. Algorithm: 16.3
2. Discovery: 13.2
3. Convincing connection: 10.3
4. Privacy: 10.2
5. Accuracy: 9.1
6. Customer knowledge: 7.4
7. Sales motive: 6.9
8. Product knowledge: 6.3
In total: 79.6

Top satisfying categories (by per cent in all satisfaction incidents):
1. Discovery: 21.0
2. Algorithm: 17.3
3. Privacy: 14.1
4. Accuracy: 11.7
5. Voluntary participation: 9.3
In total: 73.4

Top dissatisfying categories (by per cent in all dissatisfaction incidents):
1. Algorithm: 15.7
2. Convincing connection: 13.2
3. Discovery: 8.5
4. Sales motive: 8.5
5. Customer knowledge: 8.2
6. Product knowledge: 8.0
7. Privacy: 7.7
8. Accuracy: 7.5
In total: 77.3

Top winning categories (by per cent of satisfaction incidents within a category):
1. Voluntary participation: 23/33, 69.7
2. Discovery: 52/86, 60.5
3. Privacy: 35/66, 53.0

Top losing categories (by per cent of dissatisfaction incidents within a category):
1. Sub-accounts: 22/22, 100.0
2. Redundancy: 20/20, 100.0
3. Propriety: 15/15, 100.0
4. Availability: 7/8, 87.5
5. Convincing connection: 53/67, 79.1
6. Product knowledge: 32/41, 78.0
7. Sales motive: 34/45, 75.6
8. Customer knowledge: 33/48, 68.8
Table V Customer expectations by category
Customer expectation by category    Typical instances    Condition under which the expectation is likely to arise
Accuracy: “Do recommendations match
my preference?”
A recommendation being accurate (“right on ”) results
in satisfaction, whereas a recommendation being
inaccurate (“way off ”) results in dissatisfaction.
Dissatisfaction will be worse if a recommendation is
something the customer obviously dislikes (“the
opposite ”)
When a customer knows clearly what s/he
wants, or when, where, how and at what
price s/he wants it
Discovery: “Do recommendations help
me find valuable items I would not
find otherwise?”
The customer will be satisfied if recommendations are
related to his/her interest, offering variety in
recommendations or even challenging him/her, in
order to find great things to enjoy. The customer
would be dissatisfied if recommendations are
repetitive–“more of the same things”, “over and over
again”–thus failing to help him/her find great things
to enjoy
When a customer has an interest to be
further defined through exploration
Convincing Connection: “Are
recommendations connected to my
preference in a convincing manner?”
The customer will be satisfied if there is a convincing
connection between the recommendation and his/her
previous purchases or interests. For example,
recommending accessory items or items often
purchased together is considered helpful and good
service (“in case I forget”). The customer will be
dissatisfied if the connection is perceived as
unconvincing, hard to understand or even funny
When a customer lacks the technology
savvy about recommendation agents
Algorithm: “Is the recommendation
algorithm superior or inferior?”
Satisfaction results from perceived superiority (e.g. if
the company adopts a customer-centered approach in
learning customer preferences–inferring preferences
from the customer’s ratings, reviews or using data
integrated across multiple venues where customer
preferences are revealed — and is able to present a
true picture of the customer). Dissatisfaction results
from perceived inferiority (e.g. if the company adopts
a company-centered approach–inferring preferences
solely from the company’s internal data such as search
and purchase, whereas the customer also purchases in
other places–thus have a fragmented picture of the
customer)
When the customer has the technology
savvy about recommendation agents
Voluntary participation: “Can I
participate if I choose to?”
The customer will be satisfied if he/she has fun in
participating (e.g. “it was fun”), can claim ownership
(e.g. “I have rated 3000 movies”) or claim
achievement (e.g. “my time investment was worth it”)
through participation. The customer will be dissatisfied
if he/she finds participation requirement to be more
than he/she is willing to commit and is thus “time-
consuming”, “effortful”, “burdensome” or “not worth
it”
N.A.
User control: “Do I have some control
over which recommendations to
receive or not to receive?”
The customer will be satisfied if he/she has some
degree of control over what types of recommendations
to receive or not to receive, and to be able to take
actions to correct dissatisfactory recommendations by
providing feedback (e.g. “I own it”, “Don’t use this
for future recommendations”, “Not interested”)
N.A.
Sales motive: “Does the company only
make recommendations out of self-
interested motives?”
The customer will be satisfied if the company makes
recommendations based on his/her preferences, rather
than only out of self-interested motives (e.g. to get
sales, to manage inventories)
N.A.
Theme Pair 1: decision outcome versus decision process
Consumer decision-making may be understood in terms of
two aspects: decision outcome and decision process. Among
the categories of the classification scheme, accuracy (the
expectation that recommendations should match the
customer’s preference) and discovery (the expectation that
recommendations should help the customer find valuable
items he/she would not find otherwise) seem to be directed at
the outcome aspect (i.e. what recommendations to receive). In
contrast, convincing connection (the expectation that
recommendations should be connected to the customer’s
preference in a convincing manner) and algorithm (the
expectation that the recommendation algorithm should be
superior) seem to be directed at the process aspect (i.e. how to
relate recommendations to customer preference).
Theme Pair 2: customer’s role versus marketer’s role
Response to recommendation systems seems to be inherently
relational, as customers expect ongoing support in many
upcoming (rather than one isolated) decision tasks. For example,
customers participate now (e.g. by writing many reviews and
rating many items) in anticipation of receiving quality
recommendations in the future. If customers took a purely transactional approach to using recommender systems, they would provide preference information only when required for the current decision task and would have this information deleted once the task was completed. My research found, however, that this is not how customers use recommender systems. The relational nature is more clearly
evident in Theme Pair 2 (defining the roles of relationship
partners) and Theme Pair 3 (articulating norms in the
relationship) than in Theme Pair 1.
Table V Customer expectations by category (continued)

Customer expectation by category    Typical instances    Condition under which the expectation is likely to arise
Product knowledge: “Does the
company have specialized knowledge
in the products it recommends?”
The customer will be satisfied if they perceive a
company as knowledgeable–trustworthy because it
knows the stuff it recommends (“expert in these
products ”). The customer will be dissatisfied if the
company is perceived as not knowledgeable (“not
even knowing the stuff it recommends”)
N.A.
Customer knowledge: “Does the
company have knowledge of the
customer when making
recommendations?”
The customer will be satisfied if the company has
good knowledge of his/her preferences when making
recommendations. The customer will be dissatisfied if
the company makes recommendations based on
superficial understanding of the customer or makes
stupid assumptions of the customer (e.g. in the cases
of gift purchases, textbook purchases, random
purchases, using other people’s computers)
N.A.
Propriety: “Do recommendations
violate social norms?”
The customer will be dissatisfied if a recommendation
is found to be inappropriate, offensive, personally
embarrassing or insulting to the customer’s personal
identity
N.A.
Privacy: “Does the system violate my
privacy?”
The customer will be satisfied if his/her privacy (e.g.
personally identifiable information) is secure with the
company when using its recommender system.
Dissatisfaction occurs if he/she has privacy concerns
and feels insecure interacting with the company via its
recommender system
N.A.
Redundancy: “Does the system make
redundant recommendations?”
The customer will be dissatisfied if he/she receives
redundant recommendations. Examples include: a
customer who already owns an iPod gets
recommended another iPod; a customer who has
purchased a paperback copy of the book gets
recommended a hardcover copy; a customer who has
a particular song from Album A gets recommended
that same song from Album B
N.A.
Sub-accounts: “Does the system
separate the preference profiles of
users sharing the account?”
The customer will be dissatisfied if his/her personal
profile or preferences are mixed with those of others
who shared the same account, messing up with
otherwise personalized recommendations
N.A.
Availability: “Does the system have
recommendations available for me
when I need them?”
The customer will be dissatisfied if recommendations,
when sought, are not available
N.A.
Among the categories, voluntary participation (the
expectation that the customer should be able to participate
if he/she chooses to) and user control (the expectation that
the customer should have control over which
recommendations to receive or not to receive) seem to be
defining the customer’s role in the recommendation service.
On the other hand, sales motive (the expectation that the
company should not make recommendations only out of
self-interested motives), product knowledge (the expectation that
the company have specialized knowledge in the products it
recommends) and customer knowledge (the expectation that the
company have knowledge of the customer when making
recommendations) seem to pertain to the marketer’s role in the
recommendation service.
Theme Pair 3: social norm versus technology
As discussed above, Theme Pair 3 pertains to the social
norms in this relationship even though it is completely
technology-mediated. Among the categories, propriety (the
expectation that recommendations should respect social norms)
and privacy (the expectation that the system should respect the
customer’s privacy) seem to indicate that social norms should be
respected, whereas redundancy (the expectation that the system
should not make redundant recommendations), sub-accounts (the
expectation that the system separate the preference profiles of
users sharing the account) and availability (the expectation that
the system should have recommendations available when
needed) seem to capture violations of expectations regarding
technical functioning.
Research propositions
The distinction between outcome expectations and process
expectations as outlined in Theme Pair 1 suggests how
recommendation agents could add value through personalization.
Regarding the outcome aspect of decisions, customers will expect
accuracy and discovery as recommendation benefits depending on
their preference development:
P1a. Customers who have well-defined, stable preferences and good insight into their own preferences will expect
accuracy benefit in recommendation outcomes.
P1b. Customers who do not have well-defined preferences and
who clearly know their lack of well-defined preferences
will expect discovery benefit in recommendation
outcomes.
When it comes to the process aspect of decisions, customers
will expect algorithm and convincing connection as
recommendation benefits depending on their technology
savvy about recommendation agents:
P2a. Customers who have the technology savvy about
recommendation agents will expect algorithm benefit in
recommendation process.
P2b. Customers who do not have the technology savvy about
recommendation agents will expect convincing
connection benefit in recommendation process.
Theme Pairs 2 and 3 jointly indicate that customers use
recommendation agents in a relational manner, as they define
roles of the service firm and the customer as relationship
partners and expect the relationship to be governed not only by technical functionality but also by social norms:
P3a. Customers who use recommendation agents are
committed to the learning relationship with the service
firms.
What, then, is a primary motivation for customers to remain in
such a learning relationship with the service firms? Customer
expectations for accuracy and discovery (as outlined in P1a and
P1b above) seem to suggest that customers who use
recommendation agents may have an underlying motivation.
That is, they have a subjective belief in their true preferences that can be learned (if overt to themselves) or discovered (if hidden from themselves) with the aid of recommendation agents.
Table VI Emergent themes for personalized marketing
Emergent themes Categories of (dis)satisfaction sources
Pair 1
Decision outcome Accuracy: “Do recommendations match my preference?”
Discovery: “Do recommendations help me find valuable items I would not find otherwise?”
Decision process Convincing Connection: “Are recommendations connected to my preference in a convincing manner?”
Algorithm: “Is the recommendation algorithm superior or inferior?”
Pair 2
Customer’s role Voluntary participation: “Can I participate if I choose to?”
User control: “Do I have control over which recommendations to receive or not to receive?”
Marketer’s role Sales motive: “Does the company only make recommendations out of self-interested motives?”
Product knowledge: “Does the company have specialized knowledge in the products it recommends?”
Customer knowledge: “Does the company have knowledge of the customer when making recommendations?”
Pair 3
Social norm Propriety: “Do recommendations violate social or relational norms?”
Privacy: “Does the system violate my privacy?”
Technology Redundancy: “Does the system make redundant recommendations? ”
Sub-accounts: “Does the system separate the preference profiles of users sharing the account?”
Availability: “Does the system have recommendations available for me when I need them?”
Without recognizing such a motivation, their relational
behaviors toward recommendation systems would be difficult
to understand:
P3b. Customers who use recommendation agents are
motivated by a subjective belief in their true
preferences, regardless of whether preferences are overt
to, or hidden from, themselves.
Discussion
This paper is an exploratory study of customers’ “lived”
experiences of commercial recommendation services. The
critical incident technique was used to analyze 650 critical
incidents collected from 358 online group discussion
participants and bloggers. A classification scheme with 15
categories was developed, and all incidents were sorted into a
category in the scheme. Categories in the classification scheme
are illustrated with satisfactory incidents and dissatisfactory
incidents. Each category (except the “other” category) is
further defined in terms of an underlying customer
expectation, typical instances of satisfaction and
dissatisfaction and, when possible, conditions under which
customers are likely to have such an expectation. Three pairs
of themes emerged from the classification scheme. Tentative
research propositions were introduced regarding how personalization with recommendation agents can create value for customers and whether and when customers are motivated to maintain a learning relationship with the service firm. Next, I
will discuss the managerial and research implications of
findings in this research.
Managerial implications
The research findings reported above may have useful
implications for the development of recommendation
algorithms. To enhance recommendation performance, much
attention has been dedicated to finding the best prediction
models (Ansari et al., 2000; Gershoff and West, 1998; Iacobucci et al., 2000) or methods – content-based filtering, collaborative filtering or hybrid methods – for making recommendations (Adomavicius and Tuzhilin, 2005; Linden
et al., 2003). While algorithm developers often delve deeply
into data to spot patterns and have explored such topics as
filtering methods, data scalability and recommendation
stability (Adomavicius and Tuzhilin, 2005; Adomavicius and Zhang, 2012; Linden et al., 2003), they tend to devote much
less effort to finding out how customers think or feel when they
use recommendation systems. This research suggests that these might be knowledge gaps in the state of the art of algorithm development and that customer perspectives should be incorporated to supplement the data-focused approach.
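To make the contrast concrete, the following is a minimal sketch of item-based collaborative filtering, the general family of methods cited above. The ratings matrix, item names and similarity-weighted scoring are illustrative assumptions for exposition only; they are not drawn from this study or from any firm's proprietary system.

```python
# Minimal, illustrative item-based collaborative filtering sketch.
# All data below are hypothetical placeholders.
import numpy as np

# Rows = customers, columns = items; 0 means "not yet rated/purchased".
ratings = np.array([
    [5, 4, 0, 0],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 2, 4, 5],
], dtype=float)
items = ["Elliott Smith album", "acoustic folk playlist", "action movie", "thriller novel"]

def item_cosine_similarity(matrix: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between the columns (items) of the ratings matrix."""
    norms = np.linalg.norm(matrix, axis=0)
    norms[norms == 0] = 1e-9                    # guard against division by zero
    return (matrix.T @ matrix) / np.outer(norms, norms)

def recommend(user_index: int, top_n: int = 2) -> list[str]:
    """Score unrated items by similarity-weighted ratings of the user's rated items."""
    sim = item_cosine_similarity(ratings)
    user = ratings[user_index]
    scores = sim @ user                          # aggregate similarity to what the user rated
    scores[user > 0] = -np.inf                   # never re-recommend already-rated items
    ranked = [i for i in np.argsort(scores)[::-1] if np.isfinite(scores[i])]
    return [items[i] for i in ranked[:top_n]]

print(recommend(user_index=0))
```

The sketch deliberately stops at prediction; none of the customer-perspective concerns discussed in this paper (convincing connections, sales motive, privacy) appear anywhere in such data-focused logic, which is exactly the gap noted above.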
The higher proportion of dissatisfaction incidents (61.8 per
cent of all incidents) suggests that more effort should be made to
improve the overall customer experiences of agent
recommendations. The most discussed categories, the top
dissatisfying categories and the top losing categories point to
some of the priorities for improvement. For example, the eight
top dissatisfying categories are also the most discussed categories:
algorithm (recommendation algorithms perceived as inferior),
convincing connection (recommendations not connected to
preference in a convincing manner), discovery (recommendations
not helping the customer find valuable items), sales motive (the
company only makes recommendations out of self-interested motives), customer knowledge (recommendations indicate that the
company has poor knowledge of the customer), product knowledge
(recommendations indicate that the company has poor
knowledge of the products it recommends), privacy (the
recommendation system is a violation of privacy) and accuracy
(recommendations do not match the customer’s preference).
Among them, convincing connection, product knowledge, sales motive and customer knowledge are predominantly losing categories and
and customer knowledge are predominantly losing categories and
may be considered the weakest links in recommendation services.
These major gaps from the customers’ perspective deserve the
serious attention of algorithm development teams.
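As a purely illustrative sketch of how such priorities could be surfaced from coded incidents, the snippet below tallies hypothetical (category, outcome) records and flags categories where dissatisfaction outweighs satisfaction. The records are placeholders, not the incidents analyzed in this study.

```python
# Hypothetical illustration of identifying "losing" categories from coded incidents.
from collections import Counter

# Each record is (category, outcome); outcome is "sat" or "dis".
incidents = [
    ("convincing connection", "dis"),
    ("convincing connection", "dis"),
    ("accuracy", "sat"),
    ("sales motive", "dis"),
    ("product knowledge", "dis"),
    ("discovery", "sat"),
    ("customer knowledge", "dis"),
]

sat = Counter(category for category, outcome in incidents if outcome == "sat")
dis = Counter(category for category, outcome in incidents if outcome == "dis")

for category in sorted(set(sat) | set(dis)):
    balance = sat[category] - dis[category]
    label = "losing" if balance < 0 else "winning/neutral"
    print(f"{category}: {sat[category]} sat / {dis[category]} dis -> {label}")
```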
This research also adds new insights into how to incorporate customer perspectives when making recommendations. Providing explanations of why the system recommends the suggested items
(e.g. “We recommend this item because you like X”) is an
attempt to make the algorithm more “transparent” and to
increase customer satisfaction and acceptance (Kramer, 2007; Pu et al., 2012). However, although “explanations” may make
the recommendation logic more “transparent”, this research
finds that they are not enough, particularly when customers
lack technical savvy about recommendation agents.
What is needed is a convincing connection between the
recommended item and previously revealed preference (as
outlined in P2a and P2b). For example, recommending
accessory items or items often purchased together is
considered convincing. In contrast, unconvincing recommendations often seem odd or difficult to understand, even if the recommendation algorithm is fully transparent.
For example, in Table II, the customer was dissatisfied by the
recommendation of “The A Team” by Ed Sheeran based on
liking of Elliott Smith because, according to the customer,
their music styles should not go together. Similarly,
while there has been discussion on trust of recommendation
agents (Komiak and Benbasat, 2006; Lam et al., 2006), product knowledge, sales motive and customer knowledge have not
been identified as possible antecedents of customer trust.
Apparently, algorithm developers could benefit from a better
understanding of customer expectations to deliver customer
satisfaction and reduce customer dissatisfaction.
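The point about convincing connections can be illustrated with a hedged sketch: rather than merely attaching an explanation (“because you like X”), a system might withhold or flag recommendations whose association with the seed item is weak. The item names, association scores and threshold below are hypothetical assumptions, not a prescription from this research.

```python
# Illustrative sketch: explanations are generated only for candidates whose
# connection to the seed item exceeds a (hypothetical) strength threshold.
CONVINCING_THRESHOLD = 0.4  # assumed cut-off for association strength

# Hypothetical association strengths, e.g. from co-purchase or co-listening data.
candidates = {
    "guitar strings": 0.82,                    # accessory item: usually convincing
    "acoustic singer-songwriter album": 0.65,
    "'The A Team' by Ed Sheeran": 0.21,        # weak link: likely unconvincing
}

def explain_recommendations(seed_item: str, candidates: dict[str, float]) -> list[str]:
    """Return explanation strings only for candidates with a convincing connection."""
    messages = []
    for item, strength in sorted(candidates.items(), key=lambda kv: -kv[1]):
        if strength >= CONVINCING_THRESHOLD:
            messages.append(
                f"We recommend {item} because you like {seed_item} "
                f"(association strength {strength:.2f})."
            )
        else:
            # Weak connections are suppressed rather than "explained", since
            # transparency alone does not make them convincing to the customer.
            messages.append(f"[withheld] {item}: connection to {seed_item} too weak.")
    return messages

for line in explain_recommendations("an Elliott Smith album", candidates):
    print(line)
```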
The research findings may also have implications for service
firms hoping to be more adaptive in deploying
recommendation systems as a core part of their competitive
strategy. For example, service firms may use the categories in
the classification scheme as a basis to develop a more
comprehensive checklist to assess the performance of their
overall personalized marketing effort based on the perception
of customers. This research suggests that customer expectations are inherently relational in that roles are being defined for relationship partners (Theme Pair 2) and norms are being articulated for the relationship (Theme Pair 3). When a recommender system is deployed as a personalized marketing tool, its performance is not just the quality of recommendations as consumer decision support – something that can be left to the discretion of the algorithm
team. Rather, its performance should be a marketing
management concern and should be assessed more
comprehensively in terms of how effective the recommender
system is as a marketing tool.
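As one possible illustration of such a checklist, the sketch below maps the Table VI categories to hypothetical customer-perception scores and averages them by theme pair. The scores and the cut-off for flagging themes are invented for exposition and would in practice come from the firm's own customer research.

```python
# Illustrative checklist built from the Table VI themes and categories.
# Scores are hypothetical customer-perception ratings (1-5, higher = better).
CHECKLIST = {
    "Decision outcome": ["accuracy", "discovery"],
    "Decision process": ["convincing connection", "algorithm"],
    "Customer's role": ["voluntary participation", "user control"],
    "Marketer's role": ["sales motive", "product knowledge", "customer knowledge"],
    "Social norm": ["propriety", "privacy"],
    "Technology": ["redundancy", "sub-accounts", "availability"],
}

scores = {
    "accuracy": 3.8, "discovery": 3.1, "convincing connection": 2.4,
    "algorithm": 3.0, "voluntary participation": 4.2, "user control": 3.5,
    "sales motive": 2.6, "product knowledge": 2.8, "customer knowledge": 2.5,
    "propriety": 3.9, "privacy": 3.2, "redundancy": 3.4,
    "sub-accounts": 3.0, "availability": 4.0,
}

for theme, categories in CHECKLIST.items():
    average = sum(scores[c] for c in categories) / len(categories)
    flag = " <- needs attention" if average < 3.0 else ""
    print(f"{theme}: {average:.1f}{flag}")
```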
Research implications
By exploring customers’ “lived” experiences of personalized
recommendations, academic researchers may find a new
angle – the customer’s perspective – to examine theories of
personalized marketing and to gain more nuanced
understanding of preference learning and preference
construction with the aid of recommendation agents. Previous
research suggests that customers often do not have hidden or
overt preferences that marketers can reveal by building a
learning relationship. The extent to which customers can
benefit from personalized recommendations (i.e. their
satisfaction) should depend on:
their preference development, i.e. whether they have a well-defined and reasonably stable preference for
marketers to learn; and
their preference insight, i.e. whether they know their
preference so that they can recognize and appreciate that
recommendations are based on their revealed preference
(Kramer, 2007; Simonson, 2005).
By taking the customer’s perspective, this research contributes
original insights into these issues fundamental to personalized
marketing.
First, in terms of preference development and preference
insight, which type(s) of customers should be more satisfied
with personalized recommendations? Presumably, they should
be customers who have well-defined and reasonably stable
preferences and have good insight into their own preferences,
e.g. customers who love Merlot wines and know their love for
Merlot wines (Simonson, 2005). In this research, customer expectations for accuracy and discovery in recommendation outcomes seem to follow an underlying logic (Table V) that provides only partial support for this reasoning. Consistent with previous research, customers
who expect accuracy in recommendation outcomes tend to be
those with well-defined, stable preferences and good insight into their own (overt) preferences as well. In
contrast with previous research, however, customers who
expect discovery in recommendation outcomes tend to be
those who do not have well-defined (overt) preferences, who
clearly know that they do not have well-defined (overt)
preferences but who, nevertheless, believe that they have
(hidden) preferences that can be discovered through
self-exploration and with help from the recommendation
agent. Although customers who expect discovery may look
similar to Group 2 (customers who have poorly defined and
unstable preferences and who know that they have poorly
defined and unstable preferences) discussed in previous
research (Simonson, 2005), they are actually different in that
they believe that they have true (albeit hidden from
themselves) preferences. This distinction can be found in P1a
and P1b.
Second, previous research (Simonson, 2005) suggests that
customers, regardless of their preference development and
preference insight, may be unwilling to commit to a learning
relationship with the recommendation agent for the following
reasons. Customers with well-defined preferences and good knowledge of their preferences may be less dependent on the recommendation agent. On the other hand, customers with poorly defined preferences or poor knowledge of their preferences may get little benefit from using recommendation
agents. In this research, customers seem to be able to stay
fairly committed to a learning relationship with the
recommendation agent, as outlined in P3a and evidenced
particularly in emerging Theme Pairs 2 and 3 (Table VI). For
instance, in Theme Pair 2, customers define roles as
relationship partners, are willing to participate, hope to take some control and expect the relationship partner (the service firm) to be knowledgeable about the customer, to be more knowledgeable about the products it recommends and not to be driven only by sales motives. Similarly, in Theme Pair 3,
customers articulate that the relationship should be governed not only by technical functionality but also by social norms
involving propriety and respect for privacy. As indicated in the
outcome benefits in Theme Pair 1, customers expect accuracy
benefit or discovery benefit depending on their preference
development. Customers seem to have a subjective belief in
true preferences so that they should be able to benefit from
personalized recommendations regardless of whether their
preferences are overt to or hidden from themselves. This
intuitively held belief might provide a primary motivation for
customers to maintain a learning relationship with the service
firm. Without recognizing such a motivation, relational
behaviors in the use of recommendation systems would be
difficult to understand. This underlying logic on the part of
customers may have interesting implications for preference
learning and preference construction. This insight is outlined
in P3b.
Limitations and future research
Findings from this exploratory research should be regarded as
preliminary. Besides, a few other limitations should also be
recognized. First, the categorization scheme was developed
and verified within one sample only and its content validity
should be subject to more verification in future studies. One
method that can be used to perform a validity check of a
classification scheme is to randomly split the sample into two
subsets – one for the development of the classification scheme
and the other for its confirmation (Gremler, 2004). However,
this method is technically challenging to implement in this
research because narratives are often situated in discussion
contexts (e.g. discussion groups or follow-up comments on blog posts). Future studies based on different sets of data
may be needed to further verify the validity of the
categorization scheme. In addition, the reliability and validity
of the categories should be subject to more tests by other
researchers in future research. Second, the categorization
scheme and its categories were developed in the context of
personalized recommendation service. However, personalized
marketing is not limited to only making recommendations. It
is unclear whether these categories may be usefully extended
to other service contexts (e.g. customization of physical
products) or interfaces (e.g. mobile devices, GPS devices).
References
Adomavicius, G. and Tuzhilin, A. (2005), “Toward the next
generation of recommender systems: a survey of the
state-of-the-art and possible extensions”, IEEE Transactions on Knowledge and Data Engineering, Vol. 17 No. 6,
pp. 734-749.
Adomavicius, G. and Zhang, J. (2012), “Stability of
recommendation algorithms”, ACM Transactions on
Information Systems, Vol. 30 No. 4, pp. 23-31.
Aksoy, L., Bloom, P.N., Lurie, N.H. and Cooil, B. (2006),
“Should recommendation agents think like people?”,
Journal of Service Research, Vol. 8 No. 4, pp. 1-19.
Amazon’s Annual Report (2010), Amazon’s Annual Report,
available at: http://phx.corporate-ir.net/phoenix.
zhtml?c97664&pirol-reportsannual (accessed 25 July
2011).
Ansari, A., Essegaier, S. and Kohli, R. (2000), “Internet
recommendation systems”, Journal of Marketing Research,
Vol. 37 No. 3, pp. 363-375.
Ball, D., Coelho, P.S. and Vilares, M.J. (2006), “Service
personalization and loyalty”, Journal of Services Marketing,
Vol. 20 No. 6, pp. 391-403.
Bettencourt, L.A. and Gwinner, K. (1996), “Customization of
the service experience: the role of the frontline employee”,
International Journal of Service Industry Management, Vol. 7
No. 2, pp. 3-20.
Bettman, J.R., Luce, M.F. and Payne, J.W. (1998),
“Constructive consumer choice processes”, Journal of
Consumer Research, Vol. 25 No. 3, pp. 187-217.
Bitner, M.J. (1990), “Evaluating service encounters: the
effects of physical surroundings and employee responses”,
Journal of Marketing, Vol. 54 No. 2, pp. 69-82.
Bitner, M.J., Booms, B.H. and Mohr, L.A. (1994), “Critical
service encounters: the employee’s viewpoint”, Journal of
Marketing, Vol. 58 No. 4, pp. 95-106.
Bitner, M.J., Booms, B.H. and Tetreault, M.S. (1990), “The
service encounter: diagnosing favorable and unfavorable
incidents”, Journal of Marketing, Vol. 54 No. 1, pp. 71-84.
Cooke, A.D.J., Sujan, H., Sujan, M. and Weitz, B.A.
(2002), “Marketing the unfamiliar: the role of context and
item-specific information in electronic agent
recommendations”, Journal of Marketing Research, Vol. 39
No. 4, pp. 488-497.
Dabholkar, P.A. and Sheng, X. (2012), “Consumer
participation in using online recommendation agents:
effects on satisfaction, trust, and purchase intentions”,
Service Industries Journal, Vol. 32 No. 9, pp. 1433-1449.
Flanagan, J.C. (1954), “The critical incident technique”,
Psychological Bulletin, Vol. 51 No. 4, pp. 327-357.
Flynn, L.J. (2006), “Like this? You’ll hate that. (Not all web recommendations are welcome)”, New York Times, 23
January.
Gershoff, A.D. and West, P.M. (1998), “Using a community
of knowledge to build intelligent agents”, Marketing Letters,
Vol. 9 No. 1, pp. 79-91.
Gremler, D.D. (2004), “The critical incident technique in
service research”, Journal of Service Research, Vol. 7 No. 1,
pp. 65-89.
Gwinner, K.P., Gremler, D.D. and Bitner, M.J. (1998),
“Relational benefits in services industries: the customer’s
perspective”, Journal of the Academy of Marketing Science,
Vol. 26 No. 2, pp. 101-114.
Gwinner, K.P., Bitner, M.J., Brown, S.W. and Kumar, S.
(2005), “Service customization through employee
adaptiveness”, Journal of Service Research, Vol. 8 No. 2,
pp. 131-148.
Häubl, G. and Murray, K.B. (2003), “Preference
construction and persistence in digital marketplaces: the
role of electronic recommendation agents”, Journal of
Consumer Psychology, Vol. 13 Nos 1/2, pp. 75-91.
Iacobucci, D. (2006), “Three thoughts on services”,
Marketing Science, Vol. 25 No. 6, pp. 581-583.
Iacobucci, D., Arabie, P. and Bodapati, A. (2000),
“Recommendation agents on the internet”, Journal of
Interactive Marketing, Vol. 14 No. 3, pp. 2-11.
Keaveney, S.M. (1995), “Customer switching behavior in
service industries: an exploratory study”, Journal of
Marketing, Vol. 59 No. 2, pp. 71-82.
Kelley, S.W., Hoffman, D.K. and Davis, M.A. (1993), “A
typology of retail failures and recoveries”, Journal of
Retailing, Vol. 69 No. 4, pp. 429-452.
Komiak, S.Y.X. and Benbasat, I. (2006), “The effects of
personalization and familiarity on trust and adoption of
recommendation agents”, MIS Quarterly, Vol. 30 No. 4,
pp. 941-960.
Kozinets, R.V. (2002), “The field behind the screen: using
netnography for marketing research in online
communities”, Journal of Marketing Research, Vol. 39 No. 1,
pp. 61-72.
Kozinets, R.V. (2010), Netnography: Doing Ethnographic
Research Online, Sage, Thousand Oaks, CA.
Kramer, T. (2007), “The effect of measurement task
transparency on preference construction and evaluations of
personalized recommendations”, Journal of Marketing
Research, Vol. 44 No. 2, pp. 224-233.
Lam, S.K., Frankowski, D. and Riedl, J. (2006), “Do you trust
your recommendations? An exploration of security and privacy
issues in recommender systems”, International Conference on
Emerging Trends in Information and Communication Security
(ETRICS), Springer, Freiburg.
Linden, G., Smith, B. and York, J. (2003), “Amazon.com
recommendations”, IEEE Internet Computing, Vol. 7 No. 1,
pp. 76-80.
Meuter, M.L., Ostrom, A.L., Roundtree, R.I. and Bitner, M.J.
(2000), “Self-service technologies: understanding customer
satisfaction with technology-based service encounters”,
Journal of Marketing, Vol. 64 No. 3, pp. 50-64.
Mittal, B. and Lassar, W.M. (1996), “The role of personalization
in service encounters”, Journal of Retailing, Vol. 72 No. 1,
pp. 95-109.
Netflix’s Annual Report (2010), Netflix’s Annual Report, available at: http://ir.netflix.com/annuals.cfm (accessed 25 July 2011).
Pu, P., Chen, L. and Hu, R. (2012), “Evaluating
recommender systems from the user’s perspective: survey
of the state of the art”, User Modeling and User-Adapted
Interaction, Vol. 22 Nos 4/5, pp. 317-355.
Rust, R.T. and Chung, T.S. (2006), “Marketing models of
service and relationships”, Marketing Science, Vol. 25 No. 6,
pp. 560-580.
Shen, A. and Ball, A.D. (2009), “Is personalization of services
always a good thing? Exploring the role of
technology-mediated personalization (TMP) in service
relationships”, Journal of Services Marketing, Vol. 23 No. 2,
pp. 79-91.
Simonson, I. (2005), “Determinants of customers’ response
to customized offers: conceptual framework and research
propositions”, Journal of Marketing, Vol. 69 No. 1,
pp. 32-45.
Slovic, P. (1995), “The construction of preference”, American
Psychologist, Vol. 50 No. 5, pp. 364-371.
Surprenant, C.F. and Solomon, M.R. (1987), “Predictability
and personalization in the service encounter”, Journal of
Marketing, Vol. 51 No. 2, pp. 86-96.
West, P.M., Ariely, D., Bellman, S., Bradlow, E., Huber, J.,
Johnson, E., Kahn, K., Little, J. and Schkade, D. (1999),
“Agents to the rescue?”, Marketing Letters, Vol. 10 No. 3,
pp. 285-300.
Corresponding author
Anyuan Shen can be contacted at: shena@newpaltz.edu