Personalized Online Advertising Effectiveness: The Interplay of What, When,
and Where
Marketing Science, forthcoming
Abstract
Firms track consumers' shopping behaviors in their online stores to provide individually
personalized banners through a method called retargeting. We use data from two large-scale field
experiments and two lab experiments to show that personalization can substantially enhance
banner effectiveness, yet its impact hinges on its interplay with timing and placement factors.
First, personalization increases click-through especially at an early information state of the
purchase decision process. Here, banners with a high degree of content personalization (DCP)
are most effective when a consumer has just visited the advertiser’s online store, but quickly lose
effectiveness as time since that last visit passes—a phenomenon we term overpersonalization.
Medium DCP banners, on the other hand, are initially less effective, but more persistent, so that
they outperform high DCP banners over time. Second, personalization increases click-through
irrespective of whether banners appear on motive congruent or incongruent display websites. In
terms of view-through, however, personalization increases ad effectiveness only on motive
congruent websites, but decreases it on incongruent websites. We demonstrate in the lab how
perceptions of ad informativeness and intrusiveness drive these results depending on consumers’
experiential or goal-directed Web browsing modes.
Keywords: Retargeting, online advertising, personalization, advertising effectiveness
Acknowledgements: The authors thank Preyas Desai, the associate editor, and two anonymous
reviewers for their valuable and insightful comments. They also thank Catherine Tucker, Avi
Goldfarb, John Hauser, Katherine Lemon, Sam Ransbotham, Edward Norton, Matt Gregas, and
Werner Reinartz, for their helpful suggestions. This paper has benefitted from the comments of
attendees of the 2011 and 2012 Marketing Science Conference as well as the 2013 annual
meeting of the German Academic Marketing Commission and the 2013 AMA Sheth Foundation
Doctoral Consortium. The authors particularly thank xplosion interactive for their support with
the execution of two field experiments.
Alexander Bleier, Assistant Professor of Marketing, Carroll School of Management, Boston
College, 140 Commonwealth Ave., Chestnut Hill, MA 02467, Tel.: 617-552-1870, Email:
bleiera@bc.edu.
Maik Eisenbeiss, Professor of Marketing, University of Bremen, Hochschulring 4, Bremen,
28359, Germany, Tel.: +49 (0) 421-218-66740, Email: eisenbeiss@uni-bremen.de.
2
1. Introduction
As consumers spend more time and money than ever on the Web (eMarketer 2013; Morris 2013), firms are intensifying their online advertising efforts (IAB 2013). Display banners represent an especially prominent way to reach consumers, with more than 5 trillion banner ads served every year (Lipsman et al. 2013). The recent surge in online advertising noise, however, also causes many consumers to simply avoid banner ads (Cho and Cheon 2004; Drèze and Hussherr 2003), such that their click-through rates have fallen to as low as .1% (MediaMind 2012). In response, an increasing number of firms, including major businesses like Amazon, Google, and Facebook, personalize their ads based on individual consumers' recent online shopping behaviors using a method called retargeting (Helft and Vega 2010; Peterson 2013;
Sengupta 2013). Investments in retargeting are steadily growing (Hamman and Plomion 2013)
and expected to soon drive total display ad spending ahead of search advertising again (Hof
2011).
On the one hand, personalization should render a banner more relevant and thus increase
its effectiveness (Anand and Shachar 2009; Ansari and Mela 2003; Tucker 2014). On the other
hand, consumers might not uniformly favor certain personalized ad content, depending on the
timing and placement of its appearance. In particular, with respect to timing, consumers receive
personalized banners while being at different positions of the purchase decision process (Howard
and Sheth 1969). Relatedly, prior research suggests that consumers’ response to ads featuring the
exact products they previously browsed depends on how narrowly their preferences are
construed (Lambrecht and Tucker 2013). In addition, even within a given position consumers
may respond differently to personalized ad content depending on how much time has passed
between their last online store visit and an ad impression. Personalized banners typically reflect a
consumer’s previously revealed preferences at his or her last online store visit. However, such
preferences are subject to change over time (Yoon and Simonson 2008).
Moreover, with respect to placement, consumers are exposed to personalized banners on
different websites while pursuing different goals or motives. In non-personalized settings, prior
research suggests that congruence between the motives to which an ad and its display website
cater affects consumers' ad response (Goldfarb and Tucker 2011a; Moore et al. 2005; Rodgers
and Thorson 2000; Shamdasani et al. 2001; Yaveroglu and Donthu 2008). Yet, it is not clear how
motive congruence influences the effects of ad personalization on consumers.
In this research we investigate the effectiveness of personalization in banner advertising
by taking into account its interplay with these timing and placement factors in the course of two
large-scale field experiments with a major fashion and sporting goods retailer. In Field
Experiment 1, we examine the interplay between ad content personalization and the two nested
timing factors of a consumer’s current state, defined as the position of the purchase decision
process at which he or she left the retailer’s online store, and the elapsed time since that last visit
at the moment of an ad impression. Specifically, we study this interplay for banners with
different degrees of content personalization (DCP) that reflect consumers’ most recent shopping
behaviors in the retailer’s online store to a greater or lesser degree. The results show that banners
with high DCP, featuring products from a consumer’s most viewed category and brand
combination, generate especially high click-through rates in an early information state where
consumers have not progressed beyond the beginning of the buying process. In more advanced
consideration and post-purchase states, however, ad personalization strongly loses effectiveness.
Consistent with prior literature (Hoeffler and Ariely 1999; Simonson 2005) we explain these
empirical findings through consumers’ stabilizing preferences along the buying process that
make them less receptive to specific recommendations. Moreover, within the information state,
high DCP banners are most effective when a consumer just left the advertiser’s online store, but
quickly lose effectiveness thereafter over time. After 23 days, they attract even fewer clicks than
initially less effective medium DCP banners that feature products from a consumer’s most
viewed brand. To explain, we draw on previous research (Yoon and Simonson 2008) that
suggests that preferences in this state are especially likely to change over time. Accordingly, high DCP
banners that closely aim at preferences from a consumer’s last online store visit are increasingly
likely to miss their mark as time since this visit passes and preferences change—a phenomenon
we term “overpersonalization.”
In Field Experiment 2, we examine the interplay between ad content personalization and
motive congruence between a banner and its display website as an important placement factor in
online advertising (Edwards et al. 2002; Moore et al. 2005). Surprisingly, and contrary to studies
on ad targeting that suggest effectiveness increases from context matching (Shamdasani et al.
2001; Yaveroglu and Donthu 2008), we find that the click-through rate of personalized and non-personalized banners is unaffected by motive congruence. In particular, personalized ads are more effective than non-personalized ads to the same extent across motive incongruent and congruent websites. In contrast, for view-through, a lagged measure of ad effectiveness (Hamman and
Plomion 2013; Perlich et al. 2012; Perlich et al. 2014), we find an influence of motive
congruence on consumer response to ad personalization. Specifically, personalization
substantially increases view-through response if banners appear on motive-congruent display
websites, but actually slightly decreases it on incongruent websites.
Since these results partially run counter to previous findings on ad targeting, we
investigate them in greater detail through two lab experiments. Drawing on the existing ad processing literature (Edwards et al. 2002), we argue and confirm in the lab that personalization determines banners' perceived informativeness and intrusiveness, which in turn drive final ad response in terms of click-through and view-through intentions. Moreover, consistent with previous empirical indications (Chatterjee 2005; Cho and Cheon 2004), we show that click-through primarily captures the response of consumers in an experiential browsing mode while view-through primarily reflects the response of consumers in a goal-directed browsing mode. For
experientially browsing consumers, who do not have a specific goal top of mind when seeing a
banner on a website (Hoffman and Novak 1996), we find motive congruence to not influence the
effects of personalization on perceived ad informativeness and intrusiveness. In particular,
informativeness is always higher for personalized compared to non-personalized ads while
intrusiveness remains unchanged by personalization. These effects translate into click-through
intentions and explain the results we find in the field.
Consumers in a goal-directed browsing mode, however, do pursue a specific goal at a
website where they encounter a banner (Hoffman and Novak 1996). For them, we find that
motive congruence indeed influences the effects of ad personalization on perceived ad
informativeness and intrusiveness. Specifically, personalization leads banners to appear more
informative under congruence, but not under incongruence. Moreover, under incongruence
consumers see personalized ads as more intrusive than non-personalized ads. These findings
explain why in the field we find personalization to increase view-through for banners on motive
congruent websites, but to decrease it for ads on motive incongruent sites.
Overall, this research shows that ad personalization through retargeting can substantially
enhance banner effectiveness; yet, its impact hinges on its interplay with timing and placement
factors. In the next section, we demonstrate how our results contribute to existing research in
online advertising. In sections 3 and 4, we report the results of two field experiments and provide
evidence from the lab for the underlying mechanisms of Field Experiment 2. In section 5, we
present "back-of-the-envelope" calculations that demonstrate the economic relevance of ad
personalization. In section 6, we conclude with a discussion of our findings and their managerial
implications as well as limitations and avenues for future research.
2. Contribution to Related Literature
Our research primarily relates to two literature streams in online advertising: targeting
and personalization of online ad communications. The focus of ad targeting is to maximize the
effectiveness of given advertisements by managing their recipients (audience targeting), timing,
and placement (contextual targeting) (Raeder et al. 2012). First, the literature on audience
targeting is concerned with the segmentation and selection of recipients for given ads based on
their online shopping behavior (Perlich et al. 2014; Stitelman et al. 2011), social network
(Provost et al. 2009), search and ad response behavior (Bhatnagar and Papatla 2001), cognitive
styles (Urban et al. 2014), affiliations (Tucker 2014), and characteristics unknown to the
researchers (Goldfarb and Tucker 2011b). Second, a number of studies have looked at the effects
of ad timing, i.e., when firms should deliver given display banner ads (Braun and Moe 2013;
Urban et al. 2014), health promotional emails (Lenert et al. 2004), or mobile ads (Baker et al.
2014). Third, work on contextual targeting has dealt with matching display banners to websites
based on contextual characteristics of the display website (Goldfarb and Tucker 2011a; Moore et
al. 2005; Rodgers and Thorson 2000; Shamdasani et al. 2001; Yaveroglu and Donthu 2008).
Moreover, research on real-time bidding, i.e., the auctioning of online advertising space in real
time, studies these three aspects in combination, for example to derive algorithms that optimize
bids for ad impressions (Perlich et al. 2012).
In contrast to ad targeting, where the starting point is a given advertisement, ad
personalization begins with a given consumer and seeks to create individualized advertisements
that fit his or her preferences best (Lambrecht and Tucker 2013). Compared to research on ad
targeting, work on ad personalization has only recently begun to emerge. Studies on ad personalization
have for instance investigated display banner and email personalization based on consumers’
inherent characteristics like their names and contact information, educational affiliations, or
celebrity and media likings (Tucker 2014; White et al. 2008). Others have examined the usage of
consumers’ online behavior such as products viewed on travel or financial services websites
(Lambrecht and Tucker 2013; Van Doorn and Hoekstra 2013), response to content links in
advertising emails (Ansari and Mela 2003), or combinations of website browsing and ad
response (Kazienko and Adamski 2007) as the basis for personalized advertisements.
Our work contributes to the research streams of online ad targeting and ad
personalization in four ways: First, previous research on ad personalization has only investigated
the effectiveness of a single given personalization intensity. Specifically, Lambrecht and Tucker
(2013) investigate retargeting banners with the highest DCP possible, i.e., banners featuring the
exact products consumers previously browsed, and compare them with generic advertisements. Yet, in practice, firms apply a wide variety of algorithms to personalize their ads. One
way to differentiate approaches is by their personalization intensity, or DCP. To the best of our
knowledge, our Field Experiment 1 is the first empirical investigation that focuses on the
effectiveness of online ads with varying personalization intensities.
Second, as outlined, prior work in both literature streams has found certain timing factors
to influence the effectiveness of targeted and personalized ads. Specifically, a consumer’s
position in the purchase decision process at the moment of an ad impression is a common factor
of interest (Lambrecht and Tucker 2013; Urban et al. 2014). We contribute to this literature by
studying how the interplay of different DCPs and a consumer’s current state determines an ad’s
effectiveness. In contrast to previous research that distinguishes between different pre-purchase
states, we analyze the complete buying process, including a post-purchase state. Moreover,
nested within states, we introduce a dynamic perspective that enables us to investigate state-
specific changes in effectiveness per DCP over time.
Third, in our second field experiment, we advance the literature with an investigation of
placement effects in online ad personalization. While previous research on contextual targeting
suggests motive congruence to be highly relevant for non-personalized ads (Goldfarb and Tucker
2011a; Moore et al. 2005; Rodgers and Thorson 2000; Shamdasani et al. 2001; Yaveroglu and
Donthu 2008), no study so far has examined this aspect in the realm of personalized advertising.
In the course of these analyses, we also show how placement effects from motive congruence
depend on consumers’ current browsing modes, i.e., whether they browse the Web in an
experiential or goal-directed mode when receiving an ad.
Fourth, accurately measuring the impact of different advertisements on consumers is a
key concern in ad targeting and personalization as well as online advertising in general. Previous
work in targeting and personalization assesses ad effectiveness with various indicators. Some
studies rely exclusively on single measures like online sales (Lambrecht and Tucker 2013),
purchase intentions (Goldfarb and Tucker 2011a; Van Doorn and Hoekstra 2013), click-through
(Ansari and Mela 2003; Tucker 2014), click-through intentions (White et al. 2008), or view-
through (Stitelman et al. 2011). Others combine multiple success measures such as online and
offline sales (Lewis and Reily 2014), click-through, brand consideration, and purchase likelihood
(Urban et al. 2014), or click-through, view-through, and online sales (Dalessandro et al. 2012). In
our second field experiment, we use click-through as an immediate and the most popular
response measure (PricewaterhouseCoopers 2011) as well as view-through as a measure of
lagged ad response that marketers commonly employ in combination with click-through to
evaluate the effectiveness of their retargeting efforts (Hamman and Plomion 2013). We show
how these measures complement each other by capturing the response of consumers in
experiential or goal-directed browsing modes. In this sense, we also add to the growing literature
of attribution modeling (e.g., Abhishek et al. 2013; Li and Kannan 2014).
3. Field Experiment 1: The Interplay of DCP, State, and Time since Last Online Store Visit
Firms usually engage in retargeting through advertising networks that act on their behalf.
An ad network aggregates advertising space across multiple ad displaying websites, or
publishers, and sells this space to its client firms. When a consumer first visits the online store of
a specific firm, the ad network creates a profile and links it to a cookie on the consumer’s device.
His or her interactions with products in the firm's online store are then tracked and stored in this profile. Afterwards, the cookie allows the ad network to recognize the consumer at any ad-displaying website within the network. Upon his or her arrival at such a website, the ad network uses the consumer's profile to deliver banners featuring products from the firm's assortment, corresponding to the consumer's previous shopping behavior there (Footnote 1). The product selection for a banner is often governed by its DCP, which determines how closely the ad will relate to the consumer's previously viewed items.
Footnote 1: We assume here for simplicity that the consumer has only visited the online store of one client from the ad network.
3.1 Design and Implementation
In this first study, we analyze the click-through effectiveness of different DCPs,
contingent on two nested timing factors: the consumer’s current state, or position in the purchase
decision process at which he or she left the online store at the last visit, and the elapsed time
since that visit at the moment of an ad impression. In practice, DCPs are matched to consumers
depending on previous or expected response to them. Because such matching causes
endogeneity, we conducted a field experiment with a major fashion and sporting goods retailer (Footnote 2), which allowed us to randomly assign banners of different DCPs to consumers and analyze their
effectiveness. The retailer’s assortment contains almost 50,000 SKUs, representing more than
180 categories and 700 brands. Categories include general fashion products for men, women, and
children (shirts, jeans, shoes, etc.), as well as sporting apparel and gear for various sports (ball
and racket sports, fitness etc.).
To manipulate DCP, we defined three personalization rules that reflect a consumer's most recent shopping behavior in terms of product views at the retailer's online store to a greater or lesser extent (Footnote 3). According to consumer choice theory, each product view requires an implicit category and brand choice (Guadagni and Little 1983; 1998). At the end of each shopping session, a consumer's most viewed category and brand can be calculated according to the products he or she viewed during that session. Using these two choices jointly and in isolation results in three treatment conditions representing different and theoretically linked DCPs:
(1) High DCP: A banner features products sampled from a consumer's most viewed category and brand combination during the most recent shopping session.
(2) Medium DCP (category): A banner features products sampled from the consumer's most viewed category during the most recent shopping session.
(3) Medium DCP (brand): A banner features products sampled from the consumer's most viewed brand during the most recent shopping session.
Footnote 2: The retailer's name and location are protected because of confidentiality agreements.
Footnote 3: We focus exclusively on shopping behavior in terms of product views because almost all visitors to an online store perform these activities, whereas only a subset of them pursue subsequent activities in the buying process such as product purchases. Personalizing banners only based on product views thus facilitates a randomized experiment.
For example, if a consumer primarily viewed adidas t-shirts during the last shopping
session, a banner with high DCP would feature t-shirts (category choice) from adidas (brand
choice). A category-based medium DCP banner instead would feature t-shirts of random brands,
whereas a brand-based medium DCP banner would show random categories of adidas-branded
products. This conceptualization deliberately does not imply whether personalizing based on
category choices or brand choices represents a consumer’s preferences more closely than the
other. It only implies that combining both choices (high DCP) represents preferences relatively
more closely than either one of these choices separately. As a control condition we used banners
with no personalization. In the above example, such an ad would feature random products from
random categories and brands. All banners also contained the retailer’s logo.
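To make these personalization rules concrete, the following sketch (in Python, with hypothetical field names such as "category" and "brand") illustrates how a consumer's most viewed category and brand could be derived from a session log and how the product pool for each DCP condition could be assembled. It is an illustration of the logic described above, not the retailer's or ad network's actual implementation.

```python
from collections import Counter
import random

def most_viewed(product_views, key):
    """Return the most frequently occurring attribute (e.g., category or brand)
    among the products viewed in the most recent shopping session."""
    return Counter(view[key] for view in product_views).most_common(1)[0][0]

def banner_products(product_views, assortment, dcp, n=4):
    """Sample n products for a banner according to the degree of content
    personalization (DCP). `product_views` is the last session's view log,
    `assortment` the retailer's full product list (dicts with 'category'/'brand')."""
    top_category = most_viewed(product_views, "category")
    top_brand = most_viewed(product_views, "brand")

    if dcp == "high":                  # most viewed category AND brand combination
        pool = [p for p in assortment
                if p["category"] == top_category and p["brand"] == top_brand]
    elif dcp == "medium_category":     # most viewed category, any brand
        pool = [p for p in assortment if p["category"] == top_category]
    elif dcp == "medium_brand":        # most viewed brand, any category
        pool = [p for p in assortment if p["brand"] == top_brand]
    else:                              # no personalization: random products
        pool = list(assortment)

    return random.sample(pool, min(n, len(pool)))
```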
The experiment ran for six weeks from October–December 2011. At their first visit to the
retailer’s online store during the study period, a random 10% sample of individuals who viewed
at least one product was randomly assigned to one of the three treatment groups or the control
group. Note that the random assignment algorithm allocated proportionally more individuals to
the control group, which was explicitly requested by our partner for internal reasons. Yet,
chances to be included in a given group were equal for all consumers and independent of their
characteristics or browsing behaviors. After this initial allocation procedure, each time a
consumer visited an ad-displaying website within the retailer’s ad network he or she received
banners according to his or her assigned experimental group.
For each ad impression a consumer received, we recorded whether he or she clicked it, as
well as the consumer’s current state, or position in the purchase decision process at which he or
she left the online store. To proxy for the state, we used clickstream data of each consumer’s
shopping behavior in the retailer’s online store, in line with prior research (Li and Chatterjee
2005). At the moment of a given ad impression, a consumer is defined to be in an information
state, i.e., at the beginning of the purchase decision process, if he or she has merely browsed
products but conducted no further purchase-related actions during the most recent online store
visit. A consumer who additionally used the virtual shopping cart but still made no purchase is
defined to be in a consideration state, further advanced in the buying process (Li and Chatterjee
2005). A consumer is classified to be in a post-purchase state if he or she completed a purchase
before exiting the online store. Accordingly, in our data we updated a consumer’s state after each
shopping session. Finally, for each ad impression we recorded the time passed since the
consumer left the online store, which indicates how long he or she has resided in the current state
before receiving a particular banner. Ad impression levels were endogenously defined without
frequency caps by consumers’ browsing through the ad network. No targeting directed ads to
specific display websites, nor were impressions sold through auctioning systems. The firm also did not engage in any promotional activities during the experiment.
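As an illustration of this state definition, the following sketch (assuming a simple per-session event log with hypothetical event labels such as "product_view", "add_to_cart", and "purchase") shows how a consumer's state could be derived from clickstream data after each shopping session; it is a simplified stand-in for the classification described above.

```python
def classify_state(session_events):
    """Assign a purchase-decision-process state based on the actions observed
    in a consumer's most recent shopping session (hypothetical event labels)."""
    if "purchase" in session_events:
        return "post_purchase"       # completed a purchase before exiting
    if "add_to_cart" in session_events:
        return "consideration"       # used the virtual shopping cart, no purchase
    return "information"             # merely browsed products

# The state is updated after each session and attached to subsequent ad impressions.
print(classify_state(["product_view", "product_view"]))             # information
print(classify_state(["product_view", "add_to_cart"]))              # consideration
print(classify_state(["product_view", "add_to_cart", "purchase"]))  # post_purchase
```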
3.2 Descriptive Statistics
A total of 44,995 consumers were randomly selected and assigned to the four experimental groups
as shown in Table 1. We report relevant summary statistics in Panel (A).
[Insert Table 1 about here]
At the ad impression level, Panel (B) presents the distribution of the total 1,264,885 banners over
the experimental groups by states. A mean over all states is provided at the bottom. Column (1)
shows the average time between a consumer’s last visit to the retailer’s online store and an ad
impression. Column (2) reports average click-through rates. It appears that personalization,
especially with high DCP, strongly increases click-through in the information state, while
differences between DCPs and non-personalized ads become less distinct in later states.
However, these descriptive insights are generally limited in scope. First, they do not
account for possible changes in click-through probabilities within states as the time since a
consumer’s last online store visit increases. Second, they do not acknowledge consumer-specific
differences in innate tendencies to click on ads (Chatterjee et al. 2003). Third, they do not control
for consumer factors as shown in Panel (A) that might influence click-through rates. We
therefore proceed with a detailed modeling approach.
3.3 Results and Discussion
Our unit of analysis is the individual banner impression. Empirically, we observe whether
consumer i, when exposed to banner impression j, clicks on the banner (Clickij = 1) or not (Clickij
= 0). With the assumption that the click outcome follows a Bernoulli distribution with parameter
πij, we model the click-through probability using the following logistic parameterization:
(1) $\Pr(\text{Click}_{ij} = 1) = \dfrac{1}{1 + \exp\left[-\left(\alpha_i + \boldsymbol{\beta}'\mathbf{X}_{ij}\right)\right]}$
Our model includes a consumer-specific intercept $\alpha_i$ to account for differences in click-proneness among individuals due to unobserved characteristics. In line with prior research (Chatterjee et al. 2003; Jones and Landwehr 1988), we specify $\alpha_i$ as drawn from a random distribution $\alpha_i \sim N(\bar{\alpha}, \sigma_\alpha^2)$, where $\bar{\alpha}$ is the average click-proneness intercept over all consumers and its variance $\sigma_\alpha^2$ corresponds to heterogeneity in click-tendencies between consumers. We specify $\boldsymbol{\beta}'\mathbf{X}_{ij}$ as
(2) $\boldsymbol{\beta}'\mathbf{X}_{ij} = \beta_1\,\text{HighDCP}_i + \beta_2\,\text{MedDCPcategory}_i + \beta_3\,\text{MedDCPbrand}_i + \beta_4\,\text{Time}_{ij} + \beta_5\,\text{HighDCP}_i \times \text{Time}_{ij} + \beta_6\,\text{MedDCPcategory}_i \times \text{Time}_{ij} + \beta_7\,\text{MedDCPbrand}_i \times \text{Time}_{ij} + \beta_8\,\text{VisitsTotal}_{ij} + \beta_9\,\text{BannerRepetitions}_{ij} + \beta_{10}\,\text{BannersTotal}_{ij}$
where HighDCPi, MedDCPcategoryi, and MedDCPbrandi are treatment indicator variables for
whether consumer i belongs to an experimental group that receives ads with high (HighDCPi =
1, and 0 otherwise), category-based medium (MedDCPcategoryi = 1, and 0 otherwise), or brand-
based medium (MedDCPbrandi = 1, and 0 otherwise) DCP. The coefficients β1, β2, and β3
therefore capture the effects of the respective DCP, relative to the effect of a banner with no personalization. Timeij refers to the time in days that passed between consumer i's last online store
visit and ad impression j. In addition to a direct effect of Timeij, the model includes pairwise
interactions between this variable and the three treatment indicator variables. It thus captures
possible changes in the effect of a particular DCP over time.
Finally, we include three control variables that influence consumer response to banner
ads. First, VisitsTotalij, or the number of consumer i’s shopping sessions prior to ad impression j
(Chatterjee et al. 2003), proxies for a consumer’s familiarity with the retailer. Second,
BannerRepetitionsij, i.e., the number of ad impressions since consumer i’s last shopping session
prior to ad impression j, controls for banner wear-out due to repetition effects (Braun and Moe
2013; Chatterjee et al. 2003; Yaveroglu and Donthu 2008). Retargeting banners are assembled
with products in real time at the moment of an ad impression. A consumer’s behavior during the
most recent shopping session defines the pool of eligible products and this pool is updated after
each shopping session. Ad impressions may therefore be repetitive between two sessions but not
across them. Third, BannersTotalij corresponds to the total number of ad impressions consumer i
received before ad impression j (Manchanda et al. 2006) (Footnote 4).
For each state, we estimate a separate
model with a maximum likelihood estimator and report the results in Table 2.
[Insert Table 2 about here]
Footnote 4: Note that BannerRepetitionsij and BannersTotalij are likely not completely accurate, for instance if consumers use multiple devices or restrict or delete cookies.
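For readers interested in the estimation mechanics, a minimal sketch of this specification follows (Python with statsmodels). It uses a pooled logit with the DCP × Time interactions as a simplified stand-in, whereas the model above additionally includes the consumer-specific random intercept, and it assumes a hypothetical impression-level data frame whose columns are named after the variables defined above.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical impression-level data for one state: one row per ad impression.
# Columns: Click (0/1), HighDCP, MedDCPcategory, MedDCPbrand (0/1 treatment
# dummies), Time (days since last store visit), and the three controls.
impressions = pd.read_csv("impressions_information_state.csv")  # hypothetical file

formula = (
    "Click ~ HighDCP + MedDCPcategory + MedDCPbrand + Time"
    " + HighDCP:Time + MedDCPcategory:Time + MedDCPbrand:Time"
    " + VisitsTotal + BannerRepetitions + BannersTotal"
)

# Pooled logit as an approximation; a random-intercept (mixed) logit would add
# a consumer-specific intercept to capture heterogeneous click-proneness.
fit = smf.logit(formula, data=impressions).fit()
print(fit.summary())
```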
Information State. The main effects of all treatment indicator variables are positive and
significant (HighDCP 1.3749, p < .001; MedDCPcategory 1.0081, p < .001; MedDCPbrand
.8357, p < .001), indicating greater click-through effectiveness of personalized compared with
non-personalized ads. Based on this model, when a consumer has just left the retailer’s online
store, high DCP, with a click-through probability of .41%, is the most effective form of
personalization and outperforms non-personalized banners (.11%) by a factor of 3.73. It is also
more effective than category-based medium DCP (.29%) and brand-based medium DCP (.24%).
Yet, while all banners lose effectiveness as time since a consumer’s last online store visit
increases (Time –.0217, p < .05), this occurs especially for high DCP (HighDCP × Time –.0279,
p < .05) and category-based medium DCP (MedDCPcategory × Time –.0167, p < .1). These ads
lose effectiveness over time significantly more quickly than non-personalized banners. In fact,
this decline for high DCP banners is so strong that they become less effective than brand-based
medium DCP banners after 23 days. We illustrate these effects in Figure 1.
[Insert Figure 1 about here]
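As a quick illustration of how these logit coefficients translate into the reported probabilities (a back-of-the-envelope check using only the figures above, with all other covariates held equal across conditions), the high DCP coefficient implies an odds ratio of

$$\frac{\Pr(\text{Click} \mid \text{HighDCP}) \,/\, \left[1 - \Pr(\text{Click} \mid \text{HighDCP})\right]}{\Pr(\text{Click} \mid \text{none}) \,/\, \left[1 - \Pr(\text{Click} \mid \text{none})\right]} = \exp(\beta_1) = \exp(1.3749) \approx 3.95,$$

and because click-through probabilities are small, the ratio of probabilities is close to this odds ratio, consistent with the reported .41% versus .11% (a factor of 3.73).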
Consideration State. In this state, all main treatment effects are also positive and significant (HighDCP .8526, p < .05; MedDCPcategory .9237, p < .001; MedDCPbrand .7906, p
< .05). However, the estimated click-through probabilities of personalized banners are, especially
for high DCP, lower than in the information state. Just after a consumer has left the online store,
they are .24% (high DCP), .25% (category-based medium DCP), .22% (brand-based medium
DCP), and .1% (non-personalized). Also, in this state, click-through probabilities remain
unchanged over time (Time .0027, p > .1; HighDCP × Time –.0176, p > .1; MedDCPcategory ×
Time –.0254, p > .1; MedDCPbrand × Time –.0338, p > .1).
Post-Purchase State. Finally, only the main effect of high DCP is positive and significant
in the post-purchase state (HighDCP .6165, p < .1). The respective click-through probabilities
when a consumer has just left the online store are .18% (high DCP), .16% (category-based
medium DCP), .14% (brand-based medium DCP), and .1% (non-personalized). Again, no
changes in effectiveness occur over time (Time -.0262, p > .1; HighDCP × Time .0064, p > .1;
MedDCPcategory × Time .0214, p > .1; MedDCPbrand × Time .0119, p > .1).
In summary, personalization can substantially enhance click-through, but the incremental
effectiveness of personalized compared to non-personalized banners considerably declines across
the states of the purchase decision process. Moreover, click-through probabilities change over
time, depending on the specific DCP, only in the initial information state, but not in the later
consideration and post-purchase states. We tested the robustness of our empirical findings
against alternative model specifications. A pooled logit specification as well as a random-
intercept probit model and a generalized estimating equation logit model returned similar results,
in support of our modeling approach and the insights from Figure 1. Moreover, a marginal
effects analysis confirmed our findings for the specific interaction effects between each DCP and
Time (for details see Web appendix at <URL>).
3.4 Theoretical Interpretation: Preference Stability
To explain our empirical results, we draw on two key findings from the preference
construction literature. First, consumer preferences are constructive and stabilize with increased
effort and choices (Bettman et al. 1998; Hoeffler and Ariely 1999). We posit that our first main
finding, namely, that personalized banners lose effectiveness across the states of the purchase
decision process, reflects consumers’ increasing preference stability which makes them less
dependent on recommendations. Second, unstable preferences are more likely than stable
preferences to change over time (Yoon and Simonson 2008); we align this explanation with our
second main finding that decreases in effectiveness over time occur only in the information state,
where preferences are unstable, but not in later states where preferences increasingly stabilize.
When consumers first enter a firm’s online store, they often have only a broad idea of
what they like (Lambrecht and Tucker 2013; Lee and Ariely 2006), without conscious awareness
of their category needs or brand preferences (Hoeffler and Ariely 1999; Simonson 2005). By
browsing through the assortment they start to develop and construct preferences (Bettman et al.
1998; Payne et al. 1993) which are, however, still fuzzy and unstable. For these consumers, not
being aware of their true preferences makes them highly susceptible to influence, very receptive
to advice, and easily convinced that a customized offer fits their actual preferences well (Lee and
Ariely 2006; Simonson 2005). This explains the noticeable responsiveness of consumers in the
information state especially to high DCP banners. In contrast, consumers in the consideration
state have actively advanced in the purchase decision process by evaluating different product
alternatives (Li and Chatterjee 2005) and expending effort to build a consideration set and place items
in the virtual shopping cart (Close and Kukar-Kinney 2010). These consumers are therefore more
likely aware of their more accurately defined and stable preferences (Hoeffler and Ariely 1999).
Finally, consumers in a post-purchase state have completed the buying process and thus
developed the most precise and stable preferences (Hoeffler and Ariely 1999). With greater
awareness of their stable preferences, they become less dependent on a firm’s recommendations
(Simonson 2005). This explains why we observe consumers to respond less strongly to
personalized banners in later states of the purchase decision process.
Over time, unstable preferences also tend to change more than stable preferences (Yoon
and Simonson 2008) or may simply be forgotten (Petty et al. 1983). Especially in the information
state, personalized banners that precisely match a consumer’s preferences at the moment he or
she leaves the online store will grow increasingly divergent from his or her current
preferences and therefore lose effectiveness over time. We term this phenomenon
overpersonalization. Indeed, our results show overpersonalization to be especially acute for high
DCP banners that reflect previous preferences very closely. By contrast, the effectiveness of
brand-based medium DCP banners, reflecting preferences less closely, stays more persistent over
time so that these banners eventually become the most effective form of personalization. This
finding aligns with previous research that shows preferences for brand-based attributes to be
more stable and less likely to change over time than preferences for non-brand attributes, such as
specific categories (Simonson and Winer 1992). Stable preferences being less likely to change
over time finally explains why we do not observe significant declines in banner effectiveness for
any DCP in the consideration or post-purchase states.
A surprising empirical finding is that high DCP is the most effective form of
personalization in the post-purchase state. Consumers in this state are typically well aware of
their stable preferences and should thus be particularly resistant to these ads. Yet, comparing the
number of products viewed during online store visits that started with or without click-through
can help explain this phenomenon. During self-initiated shopping sessions, consumers browsed
on average 4.21 products with no major differences across experimental groups. During visits
following a click-through, however, consumers in the high DCP group viewed on average only
1.63 products (compared to 2.99 over all groups). In fact, 74% of them viewed only a single item
before leaving the online store again. Given that products featured in high DCP banners very
closely reflect, or even include, a previously purchased product, the higher click-through
effectiveness of these ads might be a result of consumers’ curiosity to see these items in an ad.
4. Field Experiment 2: The Interplay of Personalization and Placement
4.1 Design and Implementation
In a second field experiment, we analyze the interplay of banner personalization and
placement. Retargeting ads appear on various display websites within the reach of an ad network
so that it is critical to know where they are more or less effective. One placement factor that the
ad targeting literature suggests influences the impact of online ads on consumers is whether or
not the motive to which an ad caters is congruent to the motive of its display website (Edwards et
al. 2002; Goldfarb and Tucker 2011a; Moore et al. 2005; Rodgers and Thorson 2000;
Shamdasani et al. 2001; Yaveroglu and Donthu 2008). To investigate this aspect for personalized
advertisements we examine the effectiveness of personalized and non-personalized banners
appearing on non–shopping and shopping-related display websites. Retargeting banners refer to
consumers’ shopping motives so that motive congruence exists only when ads appear on
shopping-related websites, but not when shown on non–shopping-related websites. Also, we
complete our analyses by introducing view-through as a commonly applied effectiveness
measure that captures a form of lagged ad response, because banner ads exert effects on
consumers even when they are not clicked (Drèze and Hussherr 2003). In particular, view-
through captures whether a consumer independently returns to the retailer’s online store within a
specific time frame in response to a banner he or she did not click. This measure is commonly
applied in academia (Perlich et al. 2012; Perlich et al. 2014; Stitelman et al. 2011) and practice
where 78% of firms that employ retargeting assess their ad effectiveness with view-through in
addition to click-through (Hamman and Plomion 2013).
In cooperation with the same retailer as in Field Experiment 1 we established one general
personalization treatment condition: At every ad impression, a banner randomly has high DCP,
category-based medium DCP, or brand-based medium DCP, as defined in the first study. The
order of DCPs varied randomly to prevent repetition effects. We used the same non-personalized control condition as in the first experiment. Again, all banners included the retailer's logo (Footnote 5).
The retailer ran the experiment for six weeks, simultaneously with the first study, to
ensure comparability. A random 8.5% sample of individuals who viewed at least one product
was randomly selected and assigned to the experimental groups at their first visit to the
retailer’s online store during the study period. After this allocation, they again saw exclusively
banners that matched their experimental group for the entire experiment.
For each ad impression, we observed whether it occurred on a motive-incongruent (non–
shopping-related) or congruent (shopping-related) display website. Shopping-related websites
were those that allow searching for purchase-relevant information, like price comparison or product testing websites, or purchasing products, like online auctioning sites (Verhoef et al. 2007) (Footnote 6). All other websites were defined as non–shopping-related (Rodgers and Thorson 2000).
We recorded consumers’ response to each ad impression in terms of click-through and
view-through. Following current standards, we attributed a view-through exclusively to the last
ad a consumer saw but did not click before returning to the online store and limited the allowed
time frame between an ad impression and an independent return to 7 days (Dalessandro et al.
2012; Perlich et al. 2012; Perlich et al. 2014). This is a conservative measure compared to
current industry standards with time frames up to 90 days (e.g., Google 2014).
Footnote 5: In this study, we refrained from using multiple treatment conditions to ensure a reasonable number of ad impressions per experimental group, as shopping-related websites form a comparatively small portion of display websites in the reach of our partner's ad network.
Footnote 6: Note that this definition of shopping-related display websites excludes retailer or manufacturer websites since they typically do not display advertisements for other retailers.
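To make the attribution rule concrete, the following sketch (assuming hypothetical per-consumer lists of impression and click timestamps) illustrates last-touch view-through attribution with a 7-day window: a store return is credited to the last unclicked impression the consumer saw within the window. It mirrors the rule described above rather than any particular vendor's implementation.

```python
from datetime import datetime, timedelta

VIEW_THROUGH_WINDOW = timedelta(days=7)  # conservative 7-day window used here

def attribute_view_through(impressions, return_time):
    """Return the impression credited with a view-through, or None.

    `impressions`: list of dicts with 'time' (datetime) and 'clicked' (bool),
    describing one consumer's banner exposures before an independent return
    to the online store at `return_time`.
    """
    eligible = [
        imp for imp in impressions
        if not imp["clicked"]
        and imp["time"] <= return_time <= imp["time"] + VIEW_THROUGH_WINDOW
    ]
    # Credit only the last ad the consumer saw but did not click.
    return max(eligible, key=lambda imp: imp["time"]) if eligible else None

# Example usage with hypothetical timestamps:
imps = [
    {"time": datetime(2011, 11, 1, 10), "clicked": False},
    {"time": datetime(2011, 11, 3, 9), "clicked": True},
    {"time": datetime(2011, 11, 4, 18), "clicked": False},
]
print(attribute_view_through(imps, datetime(2011, 11, 6, 12)))  # credits the Nov 4 impression
```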
4.2 Descriptive Statistics
We present summary statistics by experimental group for our sample of 38,501
consumers in Panel (A) of Table 3.
[Insert Table 3 about here]
Panel (B) reports summary statistics for click-through and view-through at the ad
impression level. For click-through, these statistics confirm our previous findings that
personalized banners generate more click-through than non-personalized ads. In addition, motive
congruence appears to have no influence on ad effectiveness. The average click-through rate for
personalized banners is .35% under both incongruence and congruence. For non-personalized ads, the
difference is negligible, with a mean of .12% under incongruence and .11% under congruence.
A completely different picture emerges for view-through. Without accounting for motive
congruence, as reported at the bottom of the table, personalized banners (1.88%) do not generate
more view-through than non-personalized banners (1.90%). However, when we do account for
motive congruence, personalized banners (1.69%) are less effective than non-personalized ads
(1.78%) under incongruence, but more effective under congruence (4.86% for personalized,
3.86% for non-personalized ads). We illustrate these findings in Figure 2.
[Insert Figure 2 about here]
We again apply a more detailed modeling approach to account for consumers’ differential
click tendencies and other potential influences on their ad response.
4.3 Results and Discussion
We model the click-through (view-through) probability of consumer i in response to ad
impression j analogously to Field Experiment 1, except that we specify $\boldsymbol{\beta}'\mathbf{X}_{ij}$ as
(3) $\boldsymbol{\beta}'\mathbf{X}_{ij} = \beta_1\,\text{Personalization}_i + \beta_2\,\text{Congruence}_{ij} + \beta_3\,\text{Personalization}_i \times \text{Congruence}_{ij} + \beta_4\,\text{VisitsTotal}_{ij} + \beta_5\,\text{BannerRepetitions}_{ij} + \beta_6\,\text{BannersTotal}_{ij} + \beta_7\,\text{Time}_{ij}$
Personalization_i equals 1 if consumer i belongs to the treatment group receiving personalized banners and 0 otherwise. Congruence_ij equals 1 if banner j appears on a motive congruent website and 0 otherwise. We also include the interaction Personalization_i × Congruence_ij to capture possible differences in the effect of a personalized ad that appears on a motive congruent relative to an incongruent website. Finally, we use the same control variables as in the first study as well as Time_ij to again capture the time passed between consumer i's last online store visit and ad impression j. In an additional baseline model, we exclude any placement effects to show what advertisers would conclude if they ignored the characteristics of a banner's display website.
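A minimal estimation sketch follows (Python with statsmodels, assuming a hypothetical impression-level data frame with the columns named above); it fits the proposed specification and the baseline without placement effects for the view-through outcome and compares them by BIC, again using a pooled logit as a simplified stand-in for the random-intercept model.

```python
import pandas as pd
import statsmodels.formula.api as smf

ads = pd.read_csv("field_experiment_2_impressions.csv")  # hypothetical file

controls = "VisitsTotal + BannerRepetitions + BannersTotal + Time"
proposed = f"ViewThrough ~ Personalization * Congruence + {controls}"  # main effects + interaction
baseline = f"ViewThrough ~ Personalization + {controls}"               # ignores placement

fit_proposed = smf.logit(proposed, data=ads).fit()
fit_baseline = smf.logit(baseline, data=ads).fit()

# A lower BIC indicates that accounting for motive congruence improves model fit.
print("BIC proposed:", fit_proposed.bic)
print("BIC baseline:", fit_baseline.bic)
```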
We estimate both models separately with click-through and view-through as the
respective dependent variables. For click-through, the estimates of our proposed model in Table
4, Column (1) confirm our descriptive findings. Personalized banners are more effective than
non-personalized ads (Personalization 1.0688, p < .001) and the incremental benefits of
personalized over non-personalized ads are not influenced by motive congruence
(Personalization × Congruence .0755, p > .1).
The estimates for view-through in Column (3) also match our descriptive findings. In
particular, personalized banners generate less view-through than non-personalized ads on motive
incongruent websites (Personalization –.0522, p < .05), but more view-through on motive
congruent websites (Personalization × Congruence .2426, p < .001). The need to account for
these effects is supported by the increased fit of our proposed model (BIC = 112,059) compared
with the baseline model (BIC = 112,646) in Column (4). As the baseline model also shows, not
accounting for motive congruence would even disguise any differences between personalized
and non-personalized ads (Personalization –.0076, p > .1).
[Insert Table 4 about here]
The same alternative model specifications as in Field Experiment 1 confirm the
robustness of our empirical findings in this study. Moreover, for view-through, our results are
robust under varying time frames from 1 to 30 days for the allowed time between receiving a
banner and returning to the online store. We provide details again in the Web appendix.
4.4 Theoretical Interpretation: Motive Congruence and Browsing Mode
We aim to theoretically explain the results we find in the field and verify our reasoning in
a controlled lab setting. Personalization influences two opposing ad perceptions that determine
consumers’ response to online ads: perceived informativeness and intrusiveness. Consumers
generally perceive personalized messages as more relevant and informative than non-
personalized communications (Jensen et al. 2012; Skinner et al. 1999). Yet, they also often view
them as more intrusive and off-putting (Tucker 2014; Van Doorn and Hoekstra 2013). In
particular, intrusiveness refers to the interference of an ad with a consumer’s ongoing cognitive
processes (Li et al. 2002). Since personalized ads can more easily attract consumers’ selective
attention (Ha and McCann 2008; Schneider and Shiffrin 1977), they should be more distracting
than non-personalized ads. This reasoning aligns with literature on consumer response to
persuasive advertising arguing that ads that stimulate increased processing attention may lead
consumers to think about them more thoroughly (Campbell 1995).
We suggest that these effects of personalization on perceived ad informativeness and
intrusiveness are moderated by whether or not motive congruence exists between a banner and
its display website. Congruence between any banner’s motive and its display website enhances
the ad’s perceived informativeness because it matches the goal that a consumer currently pursues
at that website. Moreover, perceived intrusiveness decreases under motive congruence, also
because the ad is more related to a consumer’s current goal (Edwards et al. 2002).
To explain our results, we furthermore draw on the widely accepted distinction between
two modes of Web browsing: an experiential and a goal-directed browsing mode. While
experiential browsing is guided by the process itself and does not imply the pursuit of a specific
goal, goal-directed browsing is based on clearly defined objectives and directed search (Hoffman
and Novak 1996). Prior research suggests click-through to mainly capture the response of
experientially browsing consumers because those in a goal-directed mode tend not to click on
ads, which would interrupt their current goal achievement (Chatterjee 2005; Cho and Cheon
2004; Rodgers and Thorson 2000). View-through, however, captures the response of these
consumers because it measures a later return to the advertiser’s online store and not an
immediate reaction.
We find click-through of both personalized and non-personalized ads unaffected by
whether or not they appear on motive-congruent websites. Since experientially browsing
consumers, who, as argued, primarily account for this outcome, are not deeply engaged with a
specific goal on an ad’s display website, congruence likely has little or no effect on them so that
personalization should increase ad informativeness on all websites. For the same reason, it
should also not increase ad intrusiveness. These two key constructs can therefore explain why
the positive effects of personalization on click-through are unaffected by motive congruence.
In contrast, personalization decreases view-through when ads appear on motive-
incongruent websites and only increases it on congruent websites. Goal-directed consumers, who
primarily account for these outcomes, pursue a specific goal on a banner’s display website. Thus,
under incongruence, where ads do not match this goal, personalization should not render an ad
more informative. Rather, since personalized ads more easily attract attention, they should be
seen as more intrusive. On motive-congruent websites, however, an ad is more relevant to a
consumer’s current goal and personalization should increase its perceived informativeness
without eliciting intrusiveness.
For our field data we neither observe evaluations of ad informativeness and intrusiveness,
nor whether consumers browse in an experiential or goal-directed mode at the moment of an ad
impression. Also, consumers who visit shopping websites might be systematically different from
those who visit other websites. We therefore seek to replicate our second field experiment in the
lab to test our reasoning and obtain behavioral evidence of the suggested mechanisms.
4.5 Lab Experiments
Pre-Test. To verify that click-through primarily reflects the response of experientially
browsing consumers whereas view-through mainly captures that of consumers in a goal-directed
browsing mode, we conducted a pre-study with 200 participants on Amazon Mechanical Turk.
Participants were randomly assigned to two treatment groups. Group 1 was asked to rate under which condition they would be more likely to click on a banner ad, on a seven-point rating scale from 1 ("when I see the banner while browsing the Web to pursue a specific goal"), reflecting a goal-directed browsing mode, to 7 ("when I see the banner while browsing the Web without a clear goal in mind"), reflecting an experiential browsing mode. On the same scale, group 2 indicated when they would be more likely to return to the advertised online store at a later point in time. The results confirm that click-through is more likely to occur in an experiential browsing mode, because the average rating of group 1 significantly exceeded the scale midpoint (M = 5.310, t = 7.46, p < .001). By contrast, the average rating of group 2 was significantly lower than the scale midpoint (M = 2.910, t = -5.93, p < .001), implying that view-through is more likely to occur when consumers browse in a goal-directed mode.
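The midpoint comparisons reported above correspond to one-sample t-tests against the scale midpoint of 4; a minimal sketch (with hypothetical placeholder rating vectors for the two groups, not the study's raw data) is:

```python
import numpy as np
from scipy import stats

SCALE_MIDPOINT = 4  # midpoint of the seven-point scale

# Placeholder rating vectors; in the pre-test, group 1 rated click likelihood
# and group 2 rated the likelihood of a later return to the online store.
group1_ratings = np.array([5, 6, 5, 7, 4, 6, 5])
group2_ratings = np.array([3, 2, 3, 4, 2, 3, 3])

t1, p1 = stats.ttest_1samp(group1_ratings, SCALE_MIDPOINT)
t2, p2 = stats.ttest_1samp(group2_ratings, SCALE_MIDPOINT)
print(f"Group 1: M = {group1_ratings.mean():.3f}, t = {t1:.2f}, p = {p1:.3f}")
print(f"Group 2: M = {group2_ratings.mean():.3f}, t = {t2:.2f}, p = {p2:.3f}")
```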
Design and Procedure. Given these results, we designed two lab experiments. In both studies, participants were asked to imagine seeing a specific banner on a specific display website. In Lab Experiment 1, participants imagined browsing the Web in an experiential mode; in Lab Experiment 2, in a goal-directed mode. Both studies had 2 × 2 between-subjects designs in which we varied the motive
content of the banner itself (personalized vs. non-personalized). The final questionnaire included
evaluations of perceived ad informativeness and intrusiveness as well as click-through intentions
in Lab Experiment 1 and view-through intentions in Lab Experiment 2.
In the first experiment, focusing on consumers in an experiential browsing mode,
participants were told to imagine that they went online with no specific purpose, just to browse
the Web as a pastime. Roaming different websites, they came across either the website of a news
network (incongruent to the banner’s shopping motive), or a shopping search engine (congruent
to the banner’s shopping motive). On this website they encountered a banner ad from a fashion
retailer whose online store they had recently visited. The banner featured either products from
the category they had most often examined there (personalized banner) or random products from
the retailer’s assortment (non-personalized banner). Participants then evaluated the banner’s
informativeness and intrusiveness, with items and scales from Edwards et al. (2002) (Footnote 7). They also indicated their click-through intentions ("I would like to click on the banner") on a seven-point scale from 1 ("strongly agree") to 7 ("strongly disagree").
Footnote 7: Items for informativeness were "useful," "informative," "important," and "helpful." Items for intrusiveness were "distracting," "disturbing," "forced upon me," "interfering," "intrusive," "invasive," and "obtrusive."
In the second experiment, focusing on consumers in a goal-directed browsing mode, participants were told to imagine that they went online to pursue a specific goal. In the motive incongruent conditions, they visited the website of a news network specifically to read a certain
article. In the motive congruent conditions, they visited the website of a shopping search engine
with the clear goal to look for clothes to buy. At the respective websites, they also encountered a
banner from a fashion retailer whose online store they had recently visited. The banner was
either personalized or not as in the first experiment. The final questionnaire was also the same,
except that we asked for view-through intentions (“After seeing the banner, I am encouraged to
revisit the retailer's online store sometime in the near future” and “after seeing the banner, I am
likely to revisit the retailer's online store sometime in the near future”) according to Yoo and
Donthu (2001). We recruited 355 and 312 participants for the respective experiments again
through Amazon Mechanical Turk and assigned them randomly to one of the treatment groups.
Results. We illustrate the descriptive outcomes of both lab experiments in Figure 3 (Footnote 8). The
results of the first experiment replicate our empirical finding that personalized banners have
higher click-through than non-personalized ads under motive incongruence (2.4574 vs. 2.0225; p
< .1) and congruence (2.6923 vs. 2.2716; p < .1) (Panel A.1). Moreover, as suggested,
personalization also increases ad informativeness under both conditions (3.2686 vs. 2.7809; p <
.05 under incongruence; 3.6511 vs. 3.0123; p < .05 under congruence) (Panel A.2). Yet, in each
case it does not affect ad intrusiveness (4.1790 vs. 4.3563; p > .1 under incongruence; 4.6231 vs.
4.6693; p > .1 under congruence) (Panel A.3). By contrast and in line with our empirical findings
and reasoning, the second experiment shows that for consumers in a goal-directed browsing mode, personalized ads generate higher view-through than non-personalized ads only on congruent websites (4.1533 vs. 3.3647; p < .05), but not on incongruent websites (3.0886 vs. 2.9658; p > .1) (Panel B.1). Personalization also does not increase informativeness under incongruence (2.6709 vs. 2.3596; p > .1), but only under congruence (4.2800 vs. 3.1676; p < .001) (Panel B.2). Under incongruence, however, it leads to higher perceived intrusiveness (5.2803 vs. 4.8611; p < .05), which does not occur under congruence (4.0876 vs. 4.3445; p > .1) (Panel B.3).
Footnote 8: In both lab experiments, validity and reliability measures of all multi-item scales exceeded critical threshold levels (indicator reliabilities > .4, composite reliability > .6, average variance extracted > .5). We averaged all multi-item scales for the following analyses.
While these descriptive statistics provide a first look at the experimental outcomes, our
theoretical interpretation requires accounting for certain relationships among the involved
constructs. We thus proceed with a simultaneous equation model to analyze each experiment.
Following previous research and our argumentation, personalization potentially affects both
perceived ad informativeness and intrusiveness. These constructs then influence a consumer’s
final ad response, i.e., click-through in Lab Experiment 1 and view-through in Lab Experiment 2.
Moreover, following prior work we allow informativeness to influence intrusiveness (Edwards et
al. 2002). This way we account for the possibility that consumers perceive ads as more/less
intrusive when they perceive them as less/more informative. We also include a direct effect of
personalization on click-through (view-through) to control for remaining effects of
personalization that are not accounted for by our proposed mechanisms.
Let i indicate the participant and MC = 1 if the display website is congruent to a banner’s
motive and 2 if not. The formal system of equations for Lab Experiment 1 is then:
(4a) Informativeness_i^MC = β11^MC + β12^MC ∙ Personalization_i^MC + ε1i^MC,
(4b) Intrusiveness_i^MC = β21^MC + β22^MC ∙ Personalization_i^MC + β23^MC ∙ Informativeness_i^MC + ε2i^MC,
(4c) ClickIntent_i^MC = β31^MC + β32^MC ∙ Personalization_i^MC + β33^MC ∙ Informativeness_i^MC + β34^MC ∙ Intrusiveness_i^MC + ε3i^MC,
for MC = 1, 2, where Personalization_i^MC is a treatment indicator variable for whether participant
i belongs to an experimental group with a personalized banner stimulus (Personalization_i^MC = 1,
and 0 otherwise). Informativeness_i^MC, Intrusiveness_i^MC, and ClickIntent_i^MC are participant i's
respective averaged item responses for informativeness, intrusiveness, and click-through
intentions. The β coefficients are to be estimated, and the error terms (ε1i^MC, ε2i^MC, ε3i^MC)′ follow a
multivariate normal distribution, MVN(0, Σ^MC).
The setup is the same for Lab Experiment 2, except that ViewIntent_i^MC replaces ClickIntent_i^MC.
For both experiments, we estimate the model equations simultaneously with maximum
likelihood methods and report the results in Table 5.
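To make the estimation logic concrete, the following minimal Python sketch estimates the three structural equations equation by equation with OLS; because the system is recursive, this approximates the simultaneous maximum likelihood estimation we report. The DataFrame and its column names ('personalization', 'informativeness', 'intrusiveness', 'click_intent', 'congruence') are illustrative assumptions, not our study materials.

```python
# Sketch (not the authors' code): equation-by-equation estimation of the
# recursive system (4a)-(4c) within one motive-congruence group.
import pandas as pd
import statsmodels.formula.api as smf

def estimate_paths(df: pd.DataFrame) -> dict:
    """Estimate the three structural equations by OLS and return the path coefficients."""
    eq_a = smf.ols("informativeness ~ personalization", data=df).fit()
    eq_b = smf.ols("intrusiveness ~ personalization + informativeness", data=df).fit()
    eq_c = smf.ols("click_intent ~ personalization + informativeness + intrusiveness",
                   data=df).fit()
    return {
        "pers_to_info": eq_a.params["personalization"],
        "pers_to_intr": eq_b.params["personalization"],
        "info_to_intr": eq_b.params["informativeness"],
        "pers_to_click": eq_c.params["personalization"],
        "info_to_click": eq_c.params["informativeness"],
        "intr_to_click": eq_c.params["intrusiveness"],
    }

# Usage: estimate separately for the congruent (MC = 1) and incongruent (MC = 2) groups,
# e.g. paths_congruent = estimate_paths(df[df["congruence"] == 1])
```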
[Insert Table 5 about here]
The results for Lab Experiment 1 in Panel (A) confirm the descriptive results and our
reasoning that, for experientially browsing consumers, personalization increases perceived ad
informativeness equally under motive incongruence (.488, p < .05) and congruence (.639, p <
.05) with no significant difference between the estimated coefficients (TRd = .233, Δdf = 1, p >
.1); here and below, difference testing is based on a Satorra-Bentler scaled chi-square difference
test that examines whether model fit statistically decreases when the focal coefficients are fixed
across the motive congruent (MC = 1) and incongruent (MC = 2) groups. Moreover, personalization
does not increase perceived intrusiveness under motive incongruence (.270, p > .1) or
congruence (.300, p > .1). As expected, informativeness increases click-through intentions
while intrusiveness decreases them, with these effects uninfluenced by motive congruence. The
same applies to the negative effects of informativeness on intrusiveness. Finally, under neither
condition does personalization exert a significant direct effect on click-through intentions (.103,
p > .1 under incongruence; .025, p > .1 under congruence) which rules out alternative
explanations that are not part of the proposed mechanisms. These results support our reasoning
and explain why the empirically found incremental benefits of personalized over non-
personalized banners are equal on shopping and non-shopping websites.
The results for Lab Experiment 2 (Panel B) show that, for consumers in a goal-directed
browsing mode, personalization does not increase perceived ad informativeness under motive
incongruence (.311, p > .1), while it strongly does so under congruence (1.112, p < .001), with a
significant difference between the estimated coefficients (TRd = 5.909, Δdf = 1, p < .05). Also,
personalization directly increases perceived intrusiveness to similar extents under incongruence
(.551, p < .05) and congruence (.516, p < .05), with no significant difference between
coefficients (TRd = 0.016, Δdf = 1, p > .1). The negative effect of informativeness on
intrusiveness, however, is smaller under incongruence (-.423, p < .001) than under congruence
(-.695, p < .001), with a significant difference between coefficients (TRd = 5.177, Δdf = 1, p <
.05). This finding is especially important because it supports our reasoning that altogether
personalization leads to higher perceived intrusiveness under motive incongruence compared to
congruence. The relationships between informativeness, intrusiveness, and view-through
intentions are again as expected. Finally, personalization has no significant direct effect on view-
through intentions, neither under incongruence (.015, p > .1), nor congruence (.135, p > .1),
which again rules out alternative explanations of how personalization might affect ad
effectiveness other than through informativeness and intrusiveness.
In contrast to consumers in an experiential browsing mode, ad personalization thus exerts both
positive and negative effects on consumers in a goal-directed mode. A final assessment therefore
requires calculating the total effect of personalization on view-through intentions. Under motive
incongruence, personalization does not increase ad informativeness but only leads to higher
intrusiveness; the total incremental effect on view-through intentions is .0077
(= .311 ∙ .546 + (.311 ∙ (-.423) + .551) ∙ (-.221)). In contrast, under congruence, personalization
increases ad informativeness, which in turn reduces intrusiveness. Overall, this leads to a
considerably stronger total incremental effect of personalization on view-through intentions
(.624 = 1.112 ∙ .503 + (1.112 ∙ (-.695) + .516) ∙ (-.250)). In these two lab experiments, we thus
explain the results from the field. In particular, we confirm that motive congruence does not
influence the effectiveness of ad personalization for consumers in an experiential browsing mode,
but only for those in a goal-directed browsing mode. Moreover, we provide evidence for our
theorization that informativeness and intrusiveness mediate the effects of personalization on ad
response.
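As a check on the mediation arithmetic, a minimal sketch (our own illustration, assuming the rounded path coefficients from Table 5, Panel B) computes the total incremental effect as the sum of the indirect paths through informativeness and intrusiveness; the non-significant direct effect is omitted.

```python
# Sketch: total effect of personalization on view-through intentions from the
# estimated path coefficients (goal-directed browsing mode, Table 5, Panel B).
def total_effect(pers_to_info, info_to_view, info_to_intr, pers_to_intr, intr_to_view):
    # indirect path via informativeness
    # + indirect path via intrusiveness (reached directly and through informativeness)
    return (pers_to_info * info_to_view
            + (pers_to_info * info_to_intr + pers_to_intr) * intr_to_view)

# Congruent websites:
print(total_effect(1.112, 0.503, -0.695, 0.516, -0.250))  # approx. .624
# Incongruent websites (considerably smaller than under congruence):
print(total_effect(0.311, 0.546, -0.423, 0.551, -0.221))
```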
5. Economic Implications
While the model results of our field experiments are statistically robust across different
specifications, we have not yet demonstrated whether they are economically relevant (we thank
an anonymous reviewer for suggesting this analysis). This,
however, is important for two reasons: First, despite being statistically significant, the absolute
differences in both click-through and view-through between personalized and non-personalized
banners are relatively small (see Tables 1(B) and 3(B)). Second, both outcomes are only indirect
measures of economic success, because the ultimate sales impact of given click-through or view-
through occurrences depends on consumers purchasing in the resulting shopping sessions.
To estimate the expected absolute yearly sales revenue after ad costs resulting from
personalized versus non-personalized advertising, we combine our data with additional
information about our partnering retailer. Specifically, we derive (a) the total number of ad
impressions shown in one year and (b) the click-through probabilities for personalized and non-
personalized banners from Field Experiment 1. Our sample of roughly 1.2 million ad
impressions reflects 10% of the total banners the retailer delivered throughout the 6-week study
period. The retailer thus delivers roughly 1.2 million ∙ 10 ∙ 52/6 = 104 million banners per year. Estimates
for the click-through probabilities of personalized and non-personalized banners result from
Table 1 (B) for high DCP banners (.0036), overall the most effective form of personalization,
and non-personalized banners (.0012). In addition, we obtained estimates about (c) the average
conversion probability from click-through to sales (3%), (d) average sales per conversion
($330; we converted all monetary values from euros to US dollars), and (e) the average costs of
banner advertising for our partner retailer. With respect to
(e), ad networks apply different pricing schemes for banner ads. Most common are the cost per
mille (CPM) scheme, where advertisers pay for the number of banners delivered, and the cost per
click (CPC) scheme, where firms pay per click-through, irrespective of the number of ads
delivered. We provide respective calculations for both of these schemes. A reasonable CPM
estimate for personalized banners is $2.5 per 1,000 impressions or .25 cents per impression,
according to the ad agency of our partnering retailer. Under CPC, it is $.68 per click. We begin
our calculations under the assumption of equal prices for personalized and non-personalized
banners. We then relax this assumption in the course of a sensitivity analysis.
Under a CPM pricing scheme, the expected yearly sales revenue after ad costs resulting
from personalized banner ads is about (impressions = 104 million) ∙ (click-through rate = .0036) ∙
(conversion rate = .03) ∙ (sales per conversion = $330) – [(impressions = 104 million) ∙
(CPM/1000 = .25 cents)] = $3,446,560. This estimate exceeds yearly sales revenue after ad costs
from non-personalized banners ($975,520) by 253%, or $2,471,040. Similar calculations under
CPC yield increases of 200%, or $2,301,312, for personalized relative to non-personalized ads;
total ad costs on a CPC basis are derived as (impressions = 104 million) ∙ (click-through rate =
.0036) ∙ (CPC = $.68). These calculations demonstrate the substantial economic impact of the
effectiveness differences between personalized and non-personalized banners that we find in our
field studies. Note that the non-personalized banners in this study feature random products from
the retailer's assortment; such ads might be less effective than non-personalized banners that
feature, for instance, generic pictures evocative of sports, so our calculations might overstate the
difference between the yearly sales generated from personalized and from non-personalized
banners. On the other hand, our calculations are conservative in that they only account for revenue
increases after immediate response (click-through), because we lack the information needed to
also incorporate revenue increases after lagged response (view-through).
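For readers who want to reproduce these figures, the following minimal Python sketch implements the calculation above (our own illustration; all constants come from the text, and the function name is ours):

```python
# Sketch of the yearly-revenue-after-ad-cost calculation under CPM and CPC pricing.
IMPRESSIONS = 104_000_000      # banners delivered per year
CONVERSION_RATE = 0.03         # probability of purchase after a click-through
SALES_PER_CONVERSION = 330.0   # average sales per conversion in US$
CPM = 2.50                     # US$ per 1,000 impressions
CPC = 0.68                     # US$ per click

def revenue_after_cost(ctr: float, scheme: str) -> float:
    clicks = IMPRESSIONS * ctr
    revenue = clicks * CONVERSION_RATE * SALES_PER_CONVERSION
    cost = IMPRESSIONS * CPM / 1000 if scheme == "CPM" else clicks * CPC
    return revenue - cost

for scheme in ("CPM", "CPC"):
    personalized = revenue_after_cost(ctr=0.0036, scheme=scheme)      # high DCP banners
    non_personalized = revenue_after_cost(ctr=0.0012, scheme=scheme)  # no personalization
    print(scheme, round(personalized), round(non_personalized),
          round(personalized - non_personalized))
# CPM: 3,446,560 vs. 975,520 (difference 2,471,040); CPC difference: 2,301,312
```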
Next, we test the sensitivity of our calculations to go beyond the case of our specific
retailer. First, ad costs might generally be lower for non-personalized than for personalized
banners. In Figure 4, we therefore plot estimates of yearly sales revenue after ad costs for CPM
rates between $1 and $4 (Panel A) and CPC costs between $.2 and $1.2 (Panel B). Both analyses
show that personalized banners yield substantially higher yearly sales revenue after ad costs than
non-personalized banners, even if personalized ads are considerably more expensive than non-
personalized ads. Second, an average purchase amount of $330 and a conversion probability of
3% apply to our retailer, but may differ across firms. We therefore illustrate the sensitivity of our
calculations for different average purchase amounts (ranging from $50 to $500) and conversion
probabilities (ranging from 1% to 15%) under CPM and CPC in Panels (C) and (D), respectively.
Again, the economic benefits of personalized over non-personalized banners remain substantial
even under very conservative assumptions.
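The grid underlying these sensitivity plots can be sketched in a few lines (again our own illustration, not the exact code behind Figure 4; the CPC sweep is analogous and omitted for brevity):

```python
# Sketch of the sensitivity analysis: sweep CPM rates and, separately,
# purchase amounts and conversion rates, holding the non-personalized
# banner price fixed at the baseline CPM of $2.50.
import numpy as np

IMPRESSIONS = 104_000_000

def net_revenue_cpm(ctr, cpm, conversion_rate=0.03, sales_per_conversion=330.0):
    return (IMPRESSIONS * ctr * conversion_rate * sales_per_conversion
            - IMPRESSIONS * cpm / 1000)

# Panel A: vary the CPM paid for personalized banners between $1 and $4.
for cpm in np.arange(1.0, 4.01, 0.5):
    gap = net_revenue_cpm(0.0036, cpm) - net_revenue_cpm(0.0012, 2.50)
    print(f"CPM ${cpm:.2f}: advantage of personalization = ${gap:,.0f}")

# Panels C/D: vary purchase amount ($50-$500) and conversion rate (1%-15%).
for sales in (50, 150, 330, 500):
    for conv in (0.01, 0.03, 0.15):
        gap = (net_revenue_cpm(0.0036, 2.50, conv, sales)
               - net_revenue_cpm(0.0012, 2.50, conv, sales))
        print(sales, conv, round(gap))
```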
[Insert Figure 4 about here]
6. General Discussion and Further Research
As more firms use the Internet to increase their advertising reach, the effectiveness of
display banners steadily declines. In response, many firms personalize their ads based on
individual consumers’ recent online shopping behaviors with a method called retargeting. While
personalized banners should be more relevant and thus more effective than non-personalized ads,
consumers might not unanimously favor certain personalized ad content, depending on the
timing and placement of its appearance. In this research we investigate the effectiveness of ad
personalization through retargeting by taking into account its interplay with relevant timing and
placement factors. To examine these relationships, we conducted two large-scale field
experiments with a major fashion and sporting goods retailer as well as two lab experiments.
Regarding the interplay of content personalization and timing factors, we differentiate
three consecutive states. These states describe a consumer’s position in the purchase decision
process at which he or she left the advertiser’s online store at his or her most recent visit prior to
receiving an ad. Nested within states, we also account for the elapsed time between that last
online store visit and the ad impression. Our results show that, on average, personalization
strongly increases click-through and that banners of highest personalization intensity achieve the
highest click-through rates. However, we also find the click-through effectiveness of
personalized banners to generally decrease as consumers progress toward the completion of the
purchase decision process. We explain these findings through consumers' constructive
preferences, which stabilize during this progression and make consumers less dependent on firms'
pointed advice. In addition, early in the buying process, highly personalized banners that aim closely at
specific preferences are very effective when consumers have just left the online store, but also
lose effectiveness quickly over time. Because consumers' preferences at this point are still unstable
and subject to change, these ads increasingly miss their mark as more time passes since the last
online store visit—a phenomenon we term overpersonalization. Less close
personalization, catering to consumers' brand preferences, instead proves more persistent. For
banners that reach consumers more than 23 days after they last left the online store, this form of
personalization therefore becomes the most effective.
Our finding that retargeting can produce overpersonalization reinforces previous research
results. Lambrecht and Tucker (2013) show that personalized banners are less effective than
generic brand banners when consumers have abstract, higher-level preferences. Their study
focuses on the highest DCP possible and does not control for effectiveness changes over time.
Overpersonalization might thus have contributed to the low performance they find for these
banners compared with generic ads that appeal beyond mere product matching.
Regarding the interplay of content personalization and placement factors, we find that
motive congruence has no influence on experientially browsing consumers, but only on
consumers in a goal-directed browsing mode. That is, while personalization always increases
click-through, it increases view-through, i.e., a consumer’s probability to return independently to
the advertised online store in response to a banner, only if consumers encounter ads on motive
congruent websites. These findings also tie in nicely with Lambrecht and Tucker (2013) who
show that personalization only fuels sales if consumers are actively involved in the advertised
category. Moreover, we contribute to the ad targeting literature by showing that the context in
which an ad appears does not always influence its effectiveness, but that this influence depends
on a consumer’s current online browsing mode.
For managers, we highlight three key findings. First, personalizing ads with retargeting
methods requires matching personalization intensities to consumers’ last observed positions in
the purchase decision process and time since those last online store visits. Firms use myriad
algorithms to personalize banners to various extents; we recommend that they constantly monitor
their customers to determine the optimal time for a specific personalization approach. Second,
firms seek to increase the reach of their online ads by delivering them on a number of display
websites. As the heterogeneity of the websites within an ad network's reach increases, each
firm must carefully determine which websites offer the most effective outlets. Third, given that
placement effects are less likely to set in for consumers in an experiential compared to a goal-
directed Web browsing mode, firms should think rigorously about how to recognize and
distinguish between consumers’ current browsing modes before delivering ads to them.
Of course, there are limitations to our research. First, our analyses account only for
consumer response in terms of click-through and view-through. These measures offer insights
into a banner’s effectiveness, but could also be extended with other directly observable indicators
(e.g., duration of shop visit, spending per purchase) or more implicit indicators (e.g., attitudes
toward the ad or firm, ad recall). Moreover, a growing research stream investigates different
attribution assumptions of specific responses to given ads (e.g., Abhishek et al. 2013; Li and
Kannan 2014). These aspects should also be relevant for personalized online advertising.
Second, we examine the differential effects of specific personalization intensities which,
however, only reflect previous browsing behavior in terms of product views. Future research
could investigate ad personalization based on further shopping actions, such as products placed
in the shopping cart, put on a wish list, or purchased. Moreover, we only incorporate consumers’
online shopping behaviors, which represent the primary revenue source of our partner firm.
Future research might include dependencies between online and offline shopping and
personalized online advertising. Also, while we proxy for consumers’ states through direct
observables, future work might implement a richer modeling approach that explicitly treats states
as latent, in line with Abhishek et al. (2013). Third, for our dynamic analysis of Field Experiment
1, we investigate the effects of continuously showing banners with the same DCP. Prior work
also highlights the beneficial effects of certain pattern strategies for banner advertising (Braun
and Moe 2013); these strategies have not been investigated in retargeting settings. Fourth, while
we demonstrate the economic relevance of our results and retargeting in general, we do not
incorporate competitive pricing aspects into our analyses. Retargeting banners are increasingly
marketed through auctioning systems where firms bid for single ad impressions to specific
consumers at given occasions (Perlich et al. 2012). While our findings may help firms to
determine specific monetary values to bid on such impressions, future research could explicitly
focus on optimal bidding strategies for specific ads.
References
Abhishek V, Fader PS, Hosanagar K (2013) Media exposure through the funnel: A model of
multi-stage attribution. Working paper, Heinz College, Carnegie Mellon University,
Pittsburgh, PA.
Anand BN, Shachar R (2009) Targeted advertising as a signal. Quant. Marketing Econom.
7(3):237–266.
Ansari A, Mela CF (2003) E-customization. J. Marketing Res. 40(2):131–145.
Baker BJ, Fang Z, Luo X (2014) Hour-by-hour sales impact of mobile advertising. Working
paper, Fox School of Business and Management, Temple University, Philadelphia, PA.
Bettman JR, Luce MF, Payne JW (1998) Constructive consumer choice processes. J. Consumer
Res. 25(3):187–217.
Bhatnagar A, Papatla P (2001) Identifying locations for targeted advertising on the Internet.
Internat. J. Electronic Commerce 5(3):23–44.
Braun M, Moe W (2013) Online display advertising: Modeling the effects of multiple creatives
and individual impression histories. Marketing Sci. 32(5):753–767.
Campbell MC (1995) When attention-getting advertising tactics elicit consumer inferences of
manipulative intent: The importance of balancing benefits and investments. J. Consumer
Psych. 4(3):225–254.
Chatterjee P (2005) Changing banner ad executions on the Web: Impact on clickthroughs and
communications outcomes. Adv. Consumer Res. 32(1):51–57.
———, Hoffman DL, Novak TP (2003) Modeling the clickstream: Implications for web-based
advertising efforts. Marketing Sci. 22(4):520–541.
Cho C-H, Cheon HJ (2004) Why do people avoid advertising on the internet? J. Advertising
33(4):89–97.
Close AG, Kukar-Kinney M (2010) Beyond buying: Motivations behind consumers' online
shopping cart use. J. Bus. Res. 63(9):986–992.
Dalessandro B, Hook R, Perlich C, Provost F (2012) Evaluating and optimizing online
advertising: Forget the click, but there are good proxies. Working paper, Stern School of
Business, NYU.
Drèze X, Hussherr F-X (2003) Internet advertising: Is anybody watching? J. Interactive
Marketing 17(4):8–23.
Edwards SM, Li H, Lee J-H (2002) Forced exposure and psychological reactance: Antecedents
and consequences of the perceived intrusiveness of pop-up ads. J. Advertising 31(3):83–95.
eMarketer (2013) Social, digital video drive further growth in time spent online. (May 8)
http://www.emarketer.com/Article/Social-Digital-Video-Drive-Further-Growth-Time-Spent-
Online/1009872.
Goldfarb A, Tucker CE (2011a) Online display advertising: Targeting and obtrusiveness.
Marketing Sci. 30(3):389–404.
———, ——— (2011b) Privacy regulation and online advertising. Management Sci. 57(1):57–
71.
Google (2014) Search funnels reports and conversion data. Accessed December 14, 2014,
https://support.google.com/adwords/answer/1722023?hl=en.
Guadagni PM, Little JDC (1983) A logit model of brand choice calibrated on scanner data.
Marketing Sci. 2(3):203–238.
———, ——— (1998) When and what to buy: A nested logit model of coffee purchase. J.
Forecasting 17(3/4):303–326.
Ha L, McCann K (2008) An integrated model of advertising clutter in offline and online media.
Internat. J. Advertising 27(4):569–592.
Hamman D, Plomion B (2013) Chango retargeting barometer. Report, Chango, Toronto, Canada.
http://www.iab.net/media/file/Chango_Retargeting_Barometer_April_2013.pdf.
Helft M, Vega T (2010) Retargeting ads follow surfers to other sites. New York Times (August
29) http://www.nytimes.com/2010/08/30/technology/30adstalk.html?_r=0.
Hoeffler S, Ariely D (1999) Constructing stable preferences: A look into dimensions of
experience and their impact on preference stability. J. Consumer Psychology 8(2):113–139.
Hof R (2011) Can behavioral targeting survive privacy worries? Forbes (July 20)
http://www.forbes.com/sites/roberthof/2011/07/20/can-behavioral-targeting-survive-privacy-
worries.
Hoffman DL, Novak TP (1996) Marketing in hypermedia computer-mediated environments:
conceptual foundations. J. Marketing 60(3):50–68.
Howard J, Sheth JN (1969) A Theory of Buyer Behavior (John Wiley & Sons, New York, NY).
IAB (2013) First quarter 2013 Internet ad revenues set new high, at $9.6 billion. Press release
(June 3), http://www.iab.net/about_the_iab/ecent_press_releases/press_release_archive/
press_release/pr-060313.
Jensen JD, King A, Carcioppolo N, Davis L (2012) Why are tailored messages more effective?
A multiple mediation analysis of a breast cancer screening intervention. J. Communication
62(5):851–868.
Jones JM, Landwehr JT (1988) Removing heterogeneity bias from logit model estimation.
Marketing Sci. 7(1):41–59.
Kazienko P, Adamski M (2007) AdROSA—Adaptive personalization of web advertising.
Inform. Sci. 177(11):2269–2295.
Lambrecht A, Tucker C (2013) When does retargeting work? Information specificity in online
advertising. J. Marketing Res. 50(5):561–576.
Lee L, Ariely D (2006) Shopping goals, goal concreteness, and conditional promotions. J.
Consumer Res. 33(1):60–70.
Lenert L, Muñoz RF, Perez JE, Bansod A (2004) Automated e-mail messaging as a tool for
improving quit rates in an internet smoking cessation intervention. J. Amer. Medical
Informatics Assoc. 11(4):235–240.
Lewis RA, Reiley DH (2014) Online ads and offline sales: Measuring the effects of retail
advertising via a controlled experiment on Yahoo!. Quant. Marketing Econom. 12(3):235–266.
Li H, Edwards SM, Lee J-H (2002) Measuring the intrusiveness of advertisements: Scale
development and validation. J. Advertising 31(2):37–47.
Li H, Kannan PK (2014) Attributing conversions in a multichannel online marketing
environment: An empirical model and a field experiment. J. Marketing Res. 51(1):40–56.
Li S, Chatterjee P (2005) Shopping cart abandonment at retail websites—A multi-stage model of
online shopping behavior. Paper presented at the 2005 Marketing Science Conference, Emory
University, June 16–18.
Lipsman A, Aquino C, Flosi S (2013) 2013 U.S. digital future in focus. Report, ComScore,
Reston. http://www.comscore.com/Insights/Blog/2013_Digital_Future_in_Focus_Series.
Manchanda P, Dubé J-P, Goh KY, Chintagunta PK (2006) The effect of banner advertising on
internet purchasing. J. Marketing Res. 43(1):98–108.
MediaMind (2012) Global benchmarks report—H1 2012. Report, MediaMind, New York.
http://www2.mediamind.com/Data/Uploads/ResourceLibrary/MediaMind_Benchmark_H1_2
012.pdf.
Moore RS, Stammerjohan CA, Coulter RA (2005) Banner advertiser-web site context congruity
and color effects on attention and attitudes. J. Advertising 34(2):71–84.
Morris B (2013) More consumers prefer online shopping. Wall Street Journal (June 3)
http://online.wsj.com/article/SB10001424127887324063304578523112193480212.html.
Payne JW, Bettman JR, Johnson EJ (1993) The Adaptive Decision Maker (Cambridge University
Press, Cambridge, UK).
Perlich C, Dalessandro B, Hook R, Stitelman O, Raeder T, Provost F (2012) Bid optimizing and
inventory scoring in targeted online advertising. Proc. 18th Internat. Conf. Knowledge
Discovery Data Mining (SIGKDD) (ACM, New York), 804–812.
———, Dalessandro B, Raeder T, Stitelman O, Provost F (2014) Machine learning for targeted
display advertising: Transfer learning in action. Machine Learn. 95(1):103–127.
Petty RE, Cacioppo JT, Schumann D (1983) Central and peripheral routes to advertising
effectiveness: The moderating role of involvement. J. Consumer Res. 10(2):135–146.
Peterson T (2013) EBay opens up its data for ad targeting—Follows lead of Amazon, Google
and Facebook. Adweek (April 8) http://www.adweek.com/news/technology/ebay-opens-its-
data-ad-targeting-148469.
PricewaterhouseCoopers (2011) Measuring the effectiveness of online advertising. Report,
PricewaterhouseCoopers, London. http://www.pwc.com/en_GX/gx/entertainment-media/pdf/
IAB_SRI_Online_Advertising_Effectiveness_v3.pdf.
Provost F, Dalessandro B, Hook R, Zhang X, Murray A (2009) Audience selection for on-line
brand advertising: privacy-friendly social network targeting. Proc. 15th Internat. Conf.
Knowledge Discovery Data Mining (SIGKDD) (ACM, New York), 707–716.
Raeder T, Stitelman O, Dalessandro B, Perlich C, Provost F (2012) Design principles of massive,
robust prediction systems. Proc. 18th Internat. Conf. Knowledge Discovery Data Mining
(SIGKDD) (ACM, New York), 1357–1365.
Rodgers S, Thorson E (2000) The interactive advertising model: How users perceive and process
online ads. J. Interactive Advertising 1(1):42–61.
Schneider W, Shiffrin RM (1977) Controlled and automatic human information processing: I.
Detection, search, and attention. Psych. Rev. 84(1):1–66.
Sengupta S (2013) What you didn’t post, Facebook may still know. New York Times (March 25)
http://www.nytimes.com/2013/03/26/technology/facebook-expands-targeted-advertising-
through-outside-data-sources.html?pagewanted=all.
Shamdasani PN, Stanaland AJS, Tan J (2001) Location location location: Insights for advertising
placement on the web. J. Advertising Res. 41(4):7–32.
Simonson I (2005) Determinants of customers' responses to customized offers: Conceptual
framework and research propositions. J. Marketing 69(1):32–45.
———, Winer RS (1992) The influence of purchase quantity and display format on consumer
preference for variety. J. Consumer Res. 19(1):133–138.
Skinner CS, Campbell MK, Rimer BK, Curry S, Prochaska JO (1999) How effective is tailored
print communication? Ann. Behavioral Medicine 21(4):290–298.
Stitelman O, Dalessandro B, Perlich C, Provost F (2011) Estimating the effect of online display
advertising on browser conversion. Proc. 5th Internat. Workshop Data Mining Audience
Intelligence Advertising (ADKDD) (ACM, New York), 8–16.
Tucker CE (2014) Social networks, personalized advertising, and privacy controls. J. Marketing
Res. 51(5):546–562.
Urban GL, Liberali G, MacDonald E, Bordley R, Hauser JR (2014) Morphing banner
advertising. Marketing Sci. 33(1):27–46.
Van Doorn J, Hoekstra JC (2013) Customization of online advertising: The role of
intrusiveness. Marketing Lett. 24(4):339–351.
Verhoef PC, Neslin SA, Vroomen B (2007) Multichannel customer management: Understanding
the research-shopper phenomenon. Internat. J. Res. in Marketing 24(2):129–148.
White TB, Zahay DL, Thorbjørnsen H, Shavitt S (2008) Getting too personal: Reactance to
highly personalized email solicitations. Marketing Lett. 19(1):39–50.
Yaveroglu I, Donthu N (2008) Advertising repetition and placement issues in on-line
environments. J. Advertising 37(2):31–44.
Yoo B, Donthu N (2001) Developing a scale to measure the perceived quality of an internet
shopping site (SITEQUAL). Quart. J. Electronic Commerce 2(1):31–45.
Yoon SO, Simonson I (2008) Choice set configuration as a determinant of preference attribution
and strength. J. Consumer Res. 35(2):324–336.
Figure 1. Estimated Click-Through Probabilities over Time in Information State
Notes. Click-through probabilities are based on the parameter estimates in Table 2, Panel (1), with
BannerRepetitions and BannersTotal held constant at 1 and VisitsTotal held at its mean. Predictions start from the
moment the consumer leaves the online store and extend to 50 days without a return visit. These predictions
disentangle a time effect from banner wear-out effects, because we control for both the number of repetitions since a
consumer's last shopping session (BannerRepetitions) and the total number of banners viewed before the focal ad
impression (BannersTotal). By holding these variables constant at 1, we can predict click-through probabilities for
the first banner encountered following a shopping session, after a given number of days has passed. Moreover, we
set VisitsTotal at its mean to derive click-through probabilities for an average loyal consumer.
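For illustration, a minimal Python sketch (our own; not the code behind Figure 1) shows how such predicted probabilities can be traced out from the fixed-effect estimates in Table 2, Panel (1), with the random intercept set to zero and VisitsTotal fixed at an assumed mean value; only the high DCP and medium DCP (brand) lines are shown, and the exact figures in the paper may differ slightly.

```python
# Sketch: predicted click-through probabilities over time in the information state,
# using the logit coefficients from Table 2, Panel (1).
import numpy as np

COEF = {"const": -6.8911, "HighDCP": 1.3749, "MedDCPbrand": 0.8357,
        "Time": -0.0217, "HighDCP_x_Time": -0.0279, "MedDCPbrand_x_Time": -0.0048,
        "VisitsTotal": 0.0376, "BannerRepetitions": -0.0036, "BannersTotal": -0.0044}

def p_click(days, high_dcp=0, med_brand=0, visits_mean=1.9):
    # linear index of the logit; BannerRepetitions and BannersTotal held at 1,
    # random intercept set to zero for an "average" consumer
    xb = (COEF["const"]
          + COEF["HighDCP"] * high_dcp + COEF["MedDCPbrand"] * med_brand
          + COEF["Time"] * days
          + COEF["HighDCP_x_Time"] * high_dcp * days
          + COEF["MedDCPbrand_x_Time"] * med_brand * days
          + COEF["VisitsTotal"] * visits_mean
          + COEF["BannerRepetitions"] * 1 + COEF["BannersTotal"] * 1)
    return 1.0 / (1.0 + np.exp(-xb))

days = np.arange(0, 51)
print(p_click(days, high_dcp=1)[:3])   # high DCP banners, days 0-2
print(p_click(days, med_brand=1)[:3])  # medium DCP (brand) banners, days 0-2
```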
Figure 2. Observed Effectiveness of Personalized and Non-Personalized Banners on Motive
Incongruent and Congruent Display Websites
Notes. Error bars denote standard errors.
Figure 3. Effects of Personalization and Motive Congruence on Click-Through Intentions, Informativeness and Intrusiveness
(A) Lab Experiment 1: Experiential Browsing Mode
(B) Lab Experiment 2: Goal-Directed Browsing Mode
Notes. Error bars denote standard errors.
Figure 4. Sensitivity Analyses: Yearly Sales Revenue after Ad Costs Resulting from Personalized vs. Non-Personalized Advertising
Table 1 Summary Statistics for Field Experiment 1

(A) Summary Statistics at the Consumer Level

Treatment | Variable | Mean | Std dev | Min | Max | Obs
High DCP | VisitsTotal | 1.9586 | 1.7632 | 1 | 51 | 9,318
High DCP | BannerRepetitions | 18.0635 | 36.2231 | 1 | 777 | 9,318
High DCP | BannersTotal | 28.5324 | 55.1268 | 1 | 1062 | 9,318
Medium DCP (category) | VisitsTotal | 1.9659 | 1.8574 | 1 | 56 | 9,709
Medium DCP (category) | BannerRepetitions | 19.1408 | 39.3960 | 1 | 1173 | 9,709
Medium DCP (category) | BannersTotal | 29.5754 | 58.2005 | 1 | 1173 | 9,709
Medium DCP (brand) | VisitsTotal | 1.9337 | 1.7297 | 1 | 43 | 9,551
Medium DCP (brand) | BannerRepetitions | 18.9830 | 36.6563 | 1 | 734 | 9,551
Medium DCP (brand) | BannersTotal | 28.5571 | 53.0665 | 1 | 1220 | 9,551
No Personalization | VisitsTotal | 1.8416 | 1.5839 | 1 | 29 | 16,417
No Personalization | BannerRepetitions | 18.1064 | 36.6177 | 1 | 1228 | 16,417
No Personalization | BannersTotal | 26.7481 | 50.3662 | 1 | 1228 | 16,417

Notes. DCP = Degree of Content Personalization.
(B) Summary Statistics at the Ad Impression Level

State | Treatment | Time Since Last Visit: Mean | Std dev | Min | Max | Click-Through Rate: Mean | Std dev | Min | Max | Obs
Information | High DCP | 8.5897 | 7.5149 | 0 | 36 | 0.0040 | 0.0630 | 0 | 1 | 217,588
Information | Medium DCP (category) | 8.6435 | 7.5186 | 0 | 31 | 0.0029 | 0.0540 | 0 | 1 | 231,582
Information | Medium DCP (brand) | 8.8729 | 7.6320 | 0 | 38 | 0.0027 | 0.0519 | 0 | 1 | 222,253
Information | No Personalization | 9.2043 | 8.1260 | 0 | 43 | 0.0013 | 0.0358 | 0 | 1 | 356,330
Consideration | High DCP | 8.6102 | 7.5822 | 0 | 41 | 0.0025 | 0.0498 | 0 | 1 | 26,992
Consideration | Medium DCP (category) | 8.5627 | 7.4905 | 0 | 29 | 0.0026 | 0.0512 | 0 | 1 | 28,901
Consideration | Medium DCP (brand) | 8.9041 | 7.7109 | 0 | 29 | 0.0021 | 0.0457 | 0 | 1 | 27,773
Consideration | No Personalization | 9.6136 | 8.5139 | 0 | 41 | 0.0012 | 0.0353 | 0 | 1 | 45,675
Post-Purchase | High DCP | 9.5924 | 7.9069 | 0 | 29 | 0.0016 | 0.0393 | 0 | 1 | 21,285
Post-Purchase | Medium DCP (category) | 9.7340 | 7.8429 | 0 | 29 | 0.0014 | 0.0372 | 0 | 1 | 26,665
Post-Purchase | Medium DCP (brand) | 9.3330 | 7.9462 | 0 | 33 | 0.0013 | 0.0363 | 0 | 1 | 22,723
Post-Purchase | No Personalization | 9.6489 | 8.3502 | 0 | 43 | 0.0008 | 0.0289 | 0 | 1 | 37,118
All | High DCP | 8.6720 | 7.5587 | 0 | 41 | 0.0036 | 0.0602 | 0 | 1 | 265,865
All | Medium DCP (category) | 8.7366 | 7.5532 | 0 | 31 | 0.0028 | 0.0524 | 0 | 1 | 287,148
All | Medium DCP (brand) | 8.9144 | 7.6677 | 0 | 38 | 0.0025 | 0.0502 | 0 | 1 | 272,749
All | No Personalization | 9.2845 | 8.1880 | 0 | 43 | 0.0012 | 0.0352 | 0 | 1 | 439,123

Notes. DCP = Degree of Content Personalization. The first block of columns reports (1) time since last online store visit (in days); the second block reports (2) click-through rates. In total, 1,264,885 banners were delivered that resulted in 2,991 click-throughs.
Table 2 Parameter Estimates for Field Experiment 1

Variable | (1) Information State: Coefficient (std. error) | p-Value | (2) Consideration State: Coefficient (std. error) | p-Value | (3) Post-Purchase State: Coefficient (std. error) | p-Value
Constant | -6.8911*** (0.0865) | <0.0001 | -7.1445*** (0.3784) | <0.0001 | -7.4234*** (0.5954) | <0.0001
HighDCP | 1.3749*** (0.0893) | <0.0001 | 0.8526** (0.2687) | 0.0015 | 0.6165* (0.3631) | 0.0896
MedDCPcategory | 1.0081*** (0.0929) | <0.0001 | 0.9237*** (0.2601) | 0.0004 | 0.4812 (0.3572) | 0.1780
MedDCPbrand | 0.8357*** (0.0953) | <0.0001 | 0.7906** (0.2738) | 0.0039 | 0.4033 (0.3692) | 0.2748
Time | -0.0217** (0.0073) | 0.0029 | 0.0027 (0.0179) | 0.8805 | -0.0262 (0.0271) | 0.3343
HighDCP × Time | -0.0279** (0.0095) | 0.0035 | -0.0176 (0.0256) | 0.4924 | 0.0064 (0.0368) | 0.8622
MedDCPcategory × Time | -0.0167* (0.0098) | 0.0878 | -0.0254 (0.0254) | 0.3171 | 0.0214 (0.0351) | 0.5422
MedDCPbrand × Time | -0.0048 (0.0098) | 0.6262 | -0.0338 (0.0278) | 0.2242 | 0.0119 (0.0377) | 0.7525
VisitsTotal | 0.0376** (0.0116) | 0.0012 | 0.1271*** (0.0309) | <0.0001 | 0.2435*** (0.0546) | <0.0001
BannerRepetitions | -0.0036** (0.0012) | 0.0028 | -0.0076** (0.0038) | 0.0436 | -0.0031 (0.0068) | 0.6506
BannersTotal | -0.0044*** (0.0008) | <0.0001 | -0.0006 (0.0020) | 0.7489 | -0.0084* (0.0048) | 0.0813
Random Intercept | 1.1974*** (0.0939) | <0.0001 | 0.9574 (0.6381) | 0.1336 | 0.8998 (1.0423) | 0.3880
Observations | 1,027,753 | | 129,341 | | 107,791 |
-2 Log Likelihood | 34,591 | | 3,627 | | 1,957 |
AIC | 34,615 | | 3,651 | | 1,981 |
BIC | 34,718 | | 3,732 | | 2,059 |

Notes. DCP = Degree of Content Personalization. Time measured in days. Cells report coefficients with standard errors in parentheses.
*p < .1. **p < .05. ***p < .001.
Table 3 Summary Statistics for Field Experiment 2

(A) Summary Statistics at the Consumer Level

Treatment | Variable | Mean | Std dev | Min | Max | Obs
Personalization | VisitsTotal | 1.9992 | 1.8108 | 1 | 49 | 28,474
Personalization | BannerRepetitions | 19.2010 | 39.0823 | 1 | 1420 | 28,474
Personalization | BannersTotal | 29.3891 | 53.2950 | 1 | 1422 | 28,474
Personalization | Time | 10.4256 | 9.1547 | 0 | 40 | 28,474
No Personalization | VisitsTotal | 1.8323 | 1.5376 | 1 | 24 | 10,027
No Personalization | BannerRepetitions | 18.4923 | 37.3097 | 1 | 1596 | 10,027
No Personalization | BannersTotal | 26.8324 | 50.9876 | 1 | 1596 | 10,027
No Personalization | Time | 11.4396 | 9.9357 | 0 | 42 | 10,027

Notes. Time measured in days.
(B) Summary Statistics at the Ad Impression Level

Measure | Motive Congruence | Treatment | Mean | Std dev | Min | Max | Obs
Click-Through | Incongruent | Personalization | 0.0035 | 0.0593 | 0 | 1 | 417,270
Click-Through | Incongruent | No Personalization | 0.0012 | 0.0344 | 0 | 1 | 186,620
Click-Through | Congruent | Personalization | 0.0035 | 0.0587 | 0 | 1 | 26,281
Click-Through | Congruent | No Personalization | 0.0011 | 0.0331 | 0 | 1 | 10,965
View-Through | Incongruent | Personalization | 0.0161 | 0.1260 | 0 | 1 | 417,270
View-Through | Incongruent | No Personalization | 0.0170 | 0.1291 | 0 | 1 | 186,620
View-Through | Congruent | Personalization | 0.0455 | 0.2083 | 0 | 1 | 26,281
View-Through | Congruent | No Personalization | 0.0363 | 0.1870 | 0 | 1 | 10,965
Click-Through | Incongruent and Congruent | Personalization | 0.0035 | 0.0592 | 0 | 1 | 443,551
Click-Through | Incongruent and Congruent | No Personalization | 0.0012 | 0.0343 | 0 | 1 | 197,585
View-Through | Incongruent and Congruent | Personalization | 0.0190 | 0.1357 | 0 | 1 | 443,551
View-Through | Incongruent and Congruent | No Personalization | 0.0190 | 0.1365 | 0 | 1 | 197,585

Notes. In total, 641,136 banners were delivered that resulted in 1,795 click-throughs and 11,489 view-throughs.
Table 4 Parameter Estimates for Field Experiment 2

Variable | Click-Through (1): Coefficient (std. error) | p-Value | Click-Through (2): Coefficient (std. error) | p-Value | View-Through (3): Coefficient (std. error) | p-Value | View-Through (4): Coefficient (std. error) | p-Value
Constant | -6.9581*** (0.0970) | <0.0001 | -6.9757*** (0.0956) | <0.0001 | -3.6953*** (0.0275) | <0.0001 | -3.6425*** (0.0270) | <0.0001
Personalization | 1.0688*** (0.0777) | <0.0001 | 1.0721*** (0.0757) | <0.0001 | -0.0511** (0.0251) | 0.0415 | -0.0082 (0.0239) | 0.7303
Congruence | -0.2348 (0.3015) | 0.4362 | | | 0.5627*** (0.0589) | <0.0001 | |
Personalization × Congruence | 0.0755 (0.3214) | 0.8142 | | | 0.2384*** (0.0682) | 0.0005 | |
VisitsTotal | 0.0594*** (0.0128) | <0.0001 | 0.0588*** (0.0128) | <0.0001 | 0.1032*** (0.0058) | <0.0001 | 0.1045*** (0.0059) | <0.0001
BannerRepetitions | -0.0036** (0.0015) | 0.0185 | -0.0036** (0.0015) | 0.0187 | -0.0126*** (0.0009) | <0.0001 | -0.0129*** (0.0009) | <0.0001
BannersTotal | -0.0033*** (0.0010) | 0.0006 | -0.0032*** (0.0010) | 0.0008 | -0.0073*** (0.0005) | <0.0001 | -0.0078*** (0.0005) | <0.0001
Time | -0.0384*** (0.0049) | <0.0001 | -0.0382*** (0.0049) | <0.0001 | -0.0421*** (0.0022) | <0.0001 | -0.0428*** (0.0022) | <0.0001
Random Intercept | 1.2072*** (0.1170) | <0.0001 | 1.2112*** (0.1170) | <0.0001 | 0.3967*** (0.0314) | <0.0001 | 0.4240*** (0.0331) | <0.0001
Observations | 641,136 | | 641,136 | | 641,136 | | 641,136 |
-2 Log Likelihood | 23,662 | | 23,664 | | 107,481 | | 108,027 |
AIC | 23,680 | | 23,678 | | 107,499 | | 108,041 |
BIC | 23,757 | | 23,738 | | 107,576 | | 108,101 |

Notes. Time measured in days. Cells report coefficients with standard errors in parentheses. Columns (1) and (3) include Congruence and its interaction with Personalization; columns (2) and (4) do not.
*p < .1. **p < .05. ***p < .001.
Table 5 Parameter Estimates for Lab Experiments

Variable | A(1) Experiential, Incongruence: Coefficient (std. error) | p-Value | A(2) Experiential, Congruence: Coefficient (std. error) | p-Value | B(1) Goal-Directed, Incongruence: Coefficient (std. error) | p-Value | B(2) Goal-Directed, Congruence: Coefficient (std. error) | p-Value
Informativeness equation (R2) | 2.8% | | 4.3% | | 1.5% | | 11.8% |
Intercept | 2.781*** (0.145) | <0.0001 | 1.485*** (0.151) | <0.0001 | 2.360*** (0.138) | <0.0001 | 2.986*** (0.160) | <0.0001
Personalization | 0.488** (0.214) | 0.0220 | 0.639** (0.227) | 0.0050 | 0.311 (0.205) | 0.1300 | 1.112*** (0.242) | <0.0001
Intrusiveness equation (R2) | 37.6% | | 42.0% | | 21.8% | | 39.5% |
Intercept | 2.781*** (0.230) | <0.0001 | 3.012*** (0.238) | <0.0001 | 2.360*** (0.272) | <0.0001 | 3.168*** (0.278) | <0.0001
Personalization | 0.270 (0.181) | 0.1360 | 0.300 (0.210) | 0.1540 | 0.551** (0.172) | 0.0010 | 0.516** (0.211) | 0.0140
Informativeness | -0.648*** (0.066) | <0.0001 | -0.747*** (0.064) | <0.0001 | -0.423*** (0.101) | <0.0001 | -0.695*** (0.072) | <0.0001
Click-/ViewIntent equation (R2) | 48.8% | | 47.4% | | 35.6% | | 56.0% |
Intercept | 6.473*** (0.419) | <0.0001 | 6.607*** (0.506) | <0.0001 | 5.860*** (0.536) | <0.0001 | 6.546*** (0.707) | <0.0001
Personalization | 0.103 (0.180) | 0.5670 | 0.025 (0.178) | 0.8870 | 0.015 (0.193) | 0.9360 | 0.135 (0.182) | 0.4570
Informativeness | 0.667*** (0.076) | <0.0001 | 0.561*** (0.076) | <0.0001 | 0.546*** (0.080) | <0.0001 | 0.503*** (0.103) | <0.0001
Intrusiveness | -0.142** (0.059) | 0.0160 | -0.208** (0.075) | 0.0060 | -0.221*** (0.084) | 0.0090 | -0.250** (0.089) | 0.0050
Sample size | 183 | | 172 | | 152 | | 160 |

Notes. Panel (A) = Lab Experiment 1 (experiential browsing mode); Panel (B) = Lab Experiment 2 (goal-directed browsing mode); (1) = motive incongruence; (2) = motive congruence. Cells report coefficients with standard errors in parentheses; R2 values refer to each equation within the respective group. Results are robust under ML, robust ML, and GLS estimators.
*p < .1, **p < .05, ***p < .001.
- Web Appendix -
Robustness Checks for Field Experiment 1

Variable | Pooled Logit (Information) | Pooled Logit (Consideration) | Pooled Logit (Post-Purchase) | Probit (Information) | Probit (Consideration) | Probit (Post-Purchase) | GEE (Information) | GEE (Consideration) | GEE (Post-Purchase)
Constant | -6.2637*** (0.0678) | -6.6314*** (0.2009) | -7.0389*** (0.2881) | -3.1312*** (0.0295) | -3.2077*** (0.1280) | -3.2648*** (0.1728) | -6.2548*** (0.0874) | -6.6441*** (0.2030) | -6.6441*** (0.2030)
HighDCP | 1.3242*** (0.0807) | 0.7926** (0.2549) | 0.6833* (0.3560) | 0.4846*** (0.0308) | 0.2860** (0.0904) | 0.2042* (0.1170) | 1.3104*** (0.1023) | 0.8086** (0.2668) | 0.8086* (0.2668)
MedDCPcategory | 0.9351*** (0.0847) | 0.8915*** (0.2470) | 0.5350 (0.3501) | 0.3511*** (0.0318) | 0.3072*** (0.0873) | 0.1599 (0.1145) | 0.9209*** (0.1064) | 0.9036*** (0.2574) | 0.9036 (0.2574)
MedDCPbrand | 0.7959*** (0.0872) | 0.7386** (0.2618) | 0.4766 (0.3625) | 0.2870*** (0.0325) | 0.2548** (0.0914) | 0.1296 (0.1175) | 0.7814*** (0.1111) | 0.7602** (0.2728) | 0.7602 (0.2728)
Time | -0.0215** (0.0071) | 0.0022 (0.0175) | -0.0203 (0.0266) | -0.0068** (0.0023) | 0.0006 (0.0059) | -0.0072 (0.0080) | -0.0223** (0.0080) | 0.0030 (0.0190) | 0.0030 (0.0190)
HighDCP × Time | -0.0304** (0.0093) | -0.0171 (0.0251) | -0.0004 (0.0362) | -0.0112*** (0.0031) | -0.0063 (0.0083) | 0.0006 (0.0113) | -0.0292** (0.0104) | -0.0181 (0.0277) | -0.0181 (0.0277)
MedDCPcategory × Time | -0.0151 (0.0095) | -0.0253 (0.0247) | 0.0161 (0.0346) | -0.0070** (0.0032) | -0.0086 (0.0082) | 0.0053 (0.0107) | -0.0139 (0.0106) | -0.0260 (0.0265) | 0.0260 (0.0265)
MedDCPbrand × Time | -0.0060 (0.0095) | -0.0309 (0.0272) | 0.0061 (0.0370) | -0.0024 (0.0032) | -0.0105 (0.0089) | 0.0031 (0.0114) | -0.0048 (0.0109) | -0.0324 (0.0287) | -0.0324 (0.0287)
VisitsTotal | 0.0586*** (0.0067) | 0.1166*** (0.0254) | 0.2378*** (0.0497) | 0.0144*** (0.0041) | 0.0467*** (0.0117) | 0.0818*** (0.0196) | 0.0589*** (0.0121) | 0.1169*** (0.0229) | 0.1169*** (0.0229)
BannerRepetitions | -0.0079*** (0.0011) | -0.0097** (0.0033) | -0.0047 (0.0062) | -0.0008** (0.0004) | -0.0020* (0.0012) | -0.0005 (0.0021) | -0.0076*** (0.0013) | -0.0096** (0.0037) | -0.0096 (0.0037)
BannersTotal | -0.0031*** (0.0006) | -0.0005 (0.0017) | -0.0078* (0.0045) | -0.0015*** (0.0002) | -0.0004 (0.0007) | -0.0029* (0.0016) | -0.0033*** (0.0007) | -0.0005 (0.0017) | -0.0005** (0.0017)
Random Intercept | | | | 0.1533*** (0.0126) | 0.1233 (0.0776) | 0.0898 (0.0977) | | |
Observations | 1,027,753 | 129,341 | 107,791 | 1,027,753 | 129,341 | 107,791 | 1,027,753 | 129,341 | 107,791
-2 Log Likelihood | 34,960 | 3,632 | 1,958 | 34,615 | 3,628 | 1,958 | N/A | N/A | N/A
AIC | 34,982 | 3,654 | 1,980 | 34,639 | 3,652 | 1,982 | N/A | N/A | N/A
BIC | 35,112 | 3,761 | 2,085 | 34,741 | 3,733 | 2,060 | N/A | N/A | N/A
QIC | N/A | N/A | N/A | N/A | N/A | N/A | 34,995 | 3,657 | 1,981

Notes. DCP = Degree of content personalization. GEE = Generalized Estimating Equation. QIC = Quasi-likelihood under the independence model criterion. Time measured in days. Cells report coefficients with standard errors in parentheses. *p < .1, **p < .05, ***p < .001.
Robustness Checks for Field Experiment 2

Variable | Click-Through, Pooled Logit | Click-Through, Probit | Click-Through, GEE | View-Through, Pooled Logit | View-Through, Probit | View-Through, GEE
Constant | -6.3520*** (0.0737) | -3.1602*** (0.0342) | -6.3612*** (0.0799) | -3.4952*** (0.0220) | -2.0036*** (0.0114) | -3.4952*** (0.0354)
Personalization | 1.0748*** (0.0723) | 0.3621*** (0.0261) | 1.0765*** (0.0779) | -0.0693** (0.0219) | -0.0231** (0.0106) | -0.0693** (0.0241)
Congruence | -0.2432 (0.2970) | -0.0675 (0.0979) | -0.2364 (0.2961) | 0.5457*** (0.0545) | 0.2457*** (0.0267) | 0.5457*** (0.0588)
Personalization × Congruence | 0.0369 (0.3161) | 0.0147 (0.1058) | 0.0339 (0.3160) | 0.2296*** (0.0634) | 0.1173*** (0.0310) | 0.2296*** (0.0682)
VisitsTotal | 0.0730*** (0.0080) | 0.0229*** (0.0046) | 0.0739*** (0.0128) | 0.1209*** (0.0038) | 0.0463*** (0.0027) | 0.1209*** (0.0129)
BannerRepetitions | -0.0071*** (0.0014) | -0.0008* (0.0005) | -0.0065** (0.0021) | -0.0166*** (0.0009) | -0.0030*** (0.0003) | -0.0166*** (0.0014)
BannersTotal | -0.0021** (0.0008) | -0.0011** (0.0003) | -0.0024* (0.0014) | -0.00696*** (0.0005) | -0.0030*** (0.0002) | -0.007*** (0.0009)
Time | -0.0394*** (0.0048) | -0.0134*** (0.0016) | -0.0389*** (0.0053) | -0.0419*** (0.0021) | -0.0177*** (0.0008) | -0.0419*** (0.0023)
Random Intercept | | 0.1655*** (0.0166) | | | 0.0827*** (0.0063) |
Observations | 641,136 | 641,136 | 641,136 | 641,136 | 641,136 | 641,136
-2 Log Likelihood | 23,884 | 23,670 | N/A | 107,815 | 107,747 | N/A
AIC | 23,900 | 23,688 | N/A | 107,831 | 107,765 | N/A
BIC | 23,991 | 23,766 | N/A | 107,922 | 107,842 | N/A
QIC | N/A | N/A | 23,909 | N/A | N/A | 107,863

Notes. GEE = Generalized Estimating Equation. QIC = Quasi-likelihood under the independence model criterion. Time measured in days. Cells report coefficients with standard errors in parentheses. *p < .1. **p < .05. ***p < .001.
Marginal Effects Analysis for Interaction Terms in Field Experiments 1 and 2
Field Experiment 1. The interpretation of interaction terms in nonlinear probability
models is not as straightforward as in linear models and the sign of a marginal effect is not
necessarily the same as that of a coefficient of the corresponding interaction term (Ai and Norton
2003; Goldfarb and Tucker 2011). For the estimates of our initial model (see Table 2), we
therefore calculated the marginal effects of the interactions between the investigated DCPs and
Time to verify our findings. The marginal effects in the information state were –.0000978 (p <
.001) for HighDCP × Time; –.0000502 (p < .001) for MedDCPcategory × Time; and –.0000258
(p < .05) for MedDCPbrand × Time. In line with the estimated coefficients, they were all
negative and the HighDCP × Time interaction had the largest effect. In the consideration and
post-purchase states, the respective marginal effects were not significant, also in accordance with
our findings.
Field Experiment 2. Regarding click-through, the marginal effect of the Personalization
× Congruence interaction (see Table 4) is not significant, whereas it is positive and significant
for view-through (.0060735, p < .001), in line with the estimated coefficients for the
corresponding interaction terms.
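To illustrate the logic of such interaction-effect calculations, the following minimal Python sketch (our own, not the code used for the reported marginal effects) computes the discrete double difference in predicted view-through probability for the two dummies Personalization and Congruence, using the coefficients from Table 4, column (3); the covariate index and the zeroed random intercept are illustrative assumptions, so the resulting value will not match the sample-averaged marginal effect exactly.

```python
# Sketch: Ai and Norton (2003)-style interaction effect in a logit model
# for two dummy regressors (discrete double difference of predicted probabilities).
import numpy as np

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

# Coefficients from Table 4, column (3); 'base' stands in for the remaining
# covariates' contribution to the linear index (an assumption for this sketch).
B0, B_PERS, B_CONG, B_INT = -3.6953, -0.0511, 0.5627, 0.2384

def prob(pers, cong, base=0.0):
    return logistic(B0 + base + B_PERS * pers + B_CONG * cong + B_INT * pers * cong)

def double_difference(base=0.0):
    """Interaction effect of Personalization x Congruence on P(view-through)."""
    return (prob(1, 1, base) - prob(0, 1, base)) - (prob(1, 0, base) - prob(0, 0, base))

print(double_difference())          # at the illustrative base index of zero
print(double_difference(base=1.0))  # the effect varies with the linear index
```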
Parameter Estimates for Field Experiment 2 with Varying View-through Windows

Variable | 30 Days: Coefficient (std. error) | p-Value | 7 Days: Coefficient (std. error) | p-Value | 1 Day: Coefficient (std. error) | p-Value
Constant | -3.6544*** (0.0270) | <0.0001 | -3.6953*** (0.0275) | <0.0001 | -3.9933*** (0.0313) | <0.0001
Personalization | -0.0521** (0.0246) | 0.0338 | -0.0511** (0.0251) | 0.0415 | -0.0480* (0.0284) | 0.0909
Congruence | 0.5802*** (0.0574) | <0.0001 | 0.5627*** (0.0589) | <0.0001 | 0.4659*** (0.0699) | <0.0001
Personalization × Congruence | 0.2419*** (0.0665) | 0.0003 | 0.2384*** (0.0682) | 0.0005 | 0.2284** (0.0810) | 0.0048
VisitsTotal | 0.1016*** (0.0058) | <0.0001 | 0.1032*** (0.0058) | <0.0001 | 0.1032*** (0.0059) | <0.0001
BannerRepetitions | -0.0136*** (0.0009) | <0.0001 | -0.0126*** (0.0009) | <0.0001 | -0.0090*** (0.0010) | <0.0001
BannersTotal | -0.0077*** (0.0005) | <0.0001 | -0.0073*** (0.0005) | <0.0001 | -0.0058*** (0.0005) | <0.0001
Time | -0.0342*** (0.0020) | <0.0001 | -0.0421*** (0.0021) | <0.0001 | -0.0675*** (0.0027) | <0.0001
Random Intercept | 0.4055*** (0.0319) | <0.0001 | 0.3967*** (0.0314) | <0.0001 | 0.3336*** (0.0311) | <0.0001
Observations | 641,136 | | 641,136 | | 641,136 |
-2 Log Likelihood | 111,897 | | 107,481 | | 82,327 |
AIC | 111,915 | | 107,499 | | 82,345 |
BIC | 111,992 | | 107,576 | | 82,422 |

Notes. Time measured in days. Cells report coefficients with standard errors in parentheses.
*p < .1. **p < .05. ***p < .001.
References
Ai C, Norton EC (2003) Interaction terms in logit and probit models. Econom. Lett. 80(1):123–
129.
Goldfarb A, Tucker CE (2011) Online display advertising: Targeting and obtrusiveness.
Marketing Sci. 30(3):389–404.