CHALLENGES AND FUTURE DIRECTIONS OF COMPUTATIONAL ADVERTISING
MEASUREMENT SYSTEMS
*Joseph T. Yun (Ph.D., University of Illinois at Urbana-Champaign) Research Assistant
Professor of Accountancy and Director of Data Science Research Services, Gies College of
Business, Research Assistant Professor of Advertising, Charles H. Sandage Department of
Advertising, University of Illinois at Urbana-Champaign, jtyun@illinois.edu.
*Claire M. Segijn (Ph.D., University of Amsterdam) Assistant Professor of Advertising,
Hubbard School of Journalism and Mass Communication, University of Minnesota Twin Cities,
segijn@umn.edu.
Stewart Pearson (M.Sc., Birkbeck College, University of London) Chief Executive Officer,
Consilient Group LLC, stewart.pearson@consilient-group.com.
Edward C. Malthouse (Ph.D., Northwestern University) Erastus Otis Haven Professor of
Integrated Marketing Communication, Medill School of Journalism, Media, Integrated
Marketing Communications, Professor of Industrial Engineering and Management Science,
McCormick School of Engineering, Northwestern University, ecm@northwestern.edu.
Joseph A. Konstan (Ph.D., University of California, Berkeley) Distinguished McKnight
University Professor and Associate Dean for Research, Department of Computer Science and
Engineering, University of Minnesota, konstan@umn.edu.
Venkatesh Shankar (Ph.D., Northwestern University) Coleman Chair Professor of Marketing and
Director of Research, Center for Retailing Studies, Mays Business School, Texas A&M
University, vshankar@mays.tamu.edu.
* These authors contributed equally to this work.
This article has been accepted for publication in the Journal of Advertising, published by Taylor
& Francis. Please cite as:
Yun, J. T., Segijn, C. M., Pearson, S., Malthouse, E., Konstan, J., & Shankar, V. (Forthcoming). Challenges and future directions of computational advertising measurement systems. Journal of Advertising.
ABSTRACT
Computational advertising (CA) is a rapidly growing field, but there are numerous challenges
related to measuring its effectiveness. Some are classic challenges to which CA adds a new dimension (e.g., multi-touch attribution, bias), and some are brand-new challenges created by CA itself (e.g., fake data and ad fraud, creeping out customers). In this paper, we
present a measurement system framework for CA to provide a common starting point for
advertising researchers to begin addressing these challenges, and we also discuss future research
questions and directions for advertising researchers. We identify a larger role for measurement: it
is no longer something that happens at the end of the advertising process, but instead
measurements of consumer behaviors become integral throughout the process of creating,
executing, and evaluating advertising programs.
Keywords: computational advertising, metrics, stakeholder, computational engine, consumer
data
Computational advertising (CA) presents an unprecedented opportunity for measuring the
short- and long-term effectiveness of advertising. Simply defined, CA is personalized
communication that uses computational power to match the right ads and advertisers with the
right consumers at the right time in the right place with the right frequency to elicit the right
response. Computational advertising, and the myriad of digital media through which it is
delivered, offers an explosion in the volume, variety, and velocity of data available, and therefore
new fuel for today’s more powerful machine learning and analytical techniques. At the same
time, CA is being deployed in environments with greatly increased personal identification and tracking across touchpoints, formats, and media, creating an opportunity to measure
effectiveness at a personal level across disparate elements of a campaign and over time. The
nature of these touchpoints presents new types of data and presentation opportunities, from geo-
temporal data, search histories, and voice interaction to personalized placement opportunities
embedded in other media. Together, these changes allow us to incorporate the diverse metrics
from fields such as social media (e.g., Peters et al. 2013), recommender systems (e.g., Herlocker
et al. 2004), and mobile advertising (e.g., Narang and Shankar 2019) to augment more traditional
advertising and marketing metrics (as surveyed, e.g., by Farris et al. 2010), and further extend
these metrics to look at the broader context and scope of the full campaign, full brand, and full
consumer.
Indeed, the entire nature of metrics is changing. No longer simply used to evaluate
performance in the past, metrics today are an integral part of the algorithmic apparatus through
which advertisements are targeted and delivered and are the basis for optimizing the performance
of the advertisements generated and delivered by these algorithms. In 2018 eMarketer reported
over 80% of digital desktop and mobile ads were sold and delivered through programmatic
algorithms and auctions (eMarketer 2018). These platforms succeed or fail based on the data
they incorporate to make these placement decisions. Yet as enterprises spend increasing amounts
on advertising, marketing and their technologies (AdTech/Martech), the reality is an increasing
lack of trust in marketing effectiveness (Odden 2018). That lack of trust has led to challenges to
the dominant players in CA, the duopoly of Google and Facebook. Behind their walled gardens,
they deliver measurement and reporting, but their metrics have been strongly questioned; in one case a platform was fined for misrepresentation (Spangler 2019). The wisdom of enterprises
investing $273 billion annually on online advertising has also been questioned alongside
challenges to the rigor of measurement and whether online is effective at all (Frederik and
Martijn 2019). Thus, in this paper, we necessarily look at metrics from both perspectives:
measurement to facilitate better performance and measurement to evaluate that performance to
make business decisions (and to support research on advertising itself).
Our vision is not new. When Claude Hopkins first published Scientific Advertising in
1923, he outlined a vision where advertising investments could be more predictable and
accountable, insights more strategic and actionable, and experimentation more accessible and
affordable. Today’s CA systems bring us close to the vision he stated nearly a century ago: “The
time has come when advertising has, in some hands, reached the status of a science. It is based
on fixed principles and is reasonably exact. The causes and effects have been analyzed until they
are well understood.” (Hopkins 1968, p. 6). But our goal is more ambitious yet--to bring this
power not just to “some hands” but to advertisers and marketers in general.
To achieve these goals, we start with a framework of the CA process and its context (see
Figure 1). Through this framework, we see the relationship between the CA system as a whole
and its constituent parts, revealing the opportunities for measuring effectiveness at both
computational and human points in the system. The figure shows a CA engine at the center of the
system, and in turn highlights the dual optimization struggle of all such machine learning
systems--they seek to exploit the data they have to deliver better performance today (i.e.,
delivering the right advertisements to the right people at the right time and location) while also
seeking to explore the data they receive to be better able to improve performance tomorrow (i.e.,
delivering advertisements specifically to learn how recipients react rather than with confidence
about their effectiveness). After we discuss this framework, we address the challenges for CA
measurement.
PLACE FIGURE 1 ABOUT HERE
A COMPUTATIONAL ADVERTISING MEASUREMENT SYSTEM
Measurement of Strategic and Tactical Effectiveness in Computational Advertising
As with traditional advertising, marketers should begin with their business objectives
(Financial & Business Planning; see Figure 1), which will guide strategic and tactical choices.
Moreover, agreeing on objectives is necessary to evaluate the success of a campaign. After
establishing objectives, the marketer develops strategies (i.e., how to achieve the objective) and
the strategies give rise to tactics (i.e., how to execute the strategy) to identify customers with a
high return on investment (targeting) across media (touchpoints), and through communications.
The strategy itself is increasingly informed by the data, algorithms, and models, leveraging data
on consumer attitudes and brand perceptions. Today, as Figure 1 makes clear, digital data creates
rich sources of information on consumer attitudes, in particular from social media and review
sites (Maslowska et al. 2019). CA is, therefore, playing an increasing role in strategic planning
and tactical executions. In addition, data-intensive CA generates feedback loops, with continuous
feedback from marketplace signals that informs the enterprise about consumer response to its advertising and improves both strategy and tactics.
Tactical execution of advertising comprises several dimensions: targeting,
messaging, consumer journey planning, channel delivery, and sales distribution (see Figure 1).
Tracking the customer journey through the purchase funnel can further help to determine the
optimal combination and sequence of content and touchpoints (see paper on content and delivery
channels in this special issue). “We want to address the right browser with the right message at
the right moment and preferably at the right price” (Perlich et al. 2012, p. 805). Although
purchase paths are less linear nowadays, it can be argued that customers still go through different
stages (e.g., awareness, consideration, purchase, post-purchase) and they have different
communication needs at each stage (Batra and Keller 2016). The algorithms and models of CA
were initially focused on targeting the right messages to the right consumers. With the
development of new consumer interfaces, the role of computation has expanded to embrace the
planning of the ‘end-to-end’ consumer journey through the purchase funnel and the measurement of
‘omnichannel’ marketing campaigns across digital touchpoints. The tactical delivery of messages
and content is increasingly driven by computation. Advertising messages that were once
delivered with mass media are now personalized to the individual consumer, regional markets,
and social communities. While personalization has been possible for many years, it is
increasingly done in real time with better data that have been collected or purchased about the
target customer (Segijn and van Ooijen 2020). In addition, the decisions about whom to target,
when, and with what message are made faster in CA compared to traditional advertising
(Malthouse, Maslowska and Franks 2018).
Data for the Computational Advertising Measurement System
A main difference between this framework and one for traditional advertising is the multiple information sources and data types that feed into the computational engine at the heart of computational advertising (Figure 1). We classify the data into four types. First, advertising
investments and media activities, which include all investments and resources at the command of
advertising, i.e., ad spend (paid, owned and earned media), activity/volumes, campaigns, and
promotions. Second, brand and curated content in multiple formats, which include all
investments and resources at the command of marketing to improve the brand, i.e.,
communication themes and topics, messaging, and customer experience (CX) including service.
Third, brand health and consumer perceptions, which include measures of brand and corporate attitudes that have historically come from surveys and qualitative research but are now often
computed from social media. Finally, consumer profiles, media audiences, and context. This
consumer 1:1 data consist of first-, second-, and third-party data (for an overview see Malthouse,
Maslowska and Franks 2018) about customers. A newer category of data is “zero-party” data,
which is data an organization gathers directly from consumers, for example through
questionnaires or quizzes.
In addition to different types of data, advertisers and marketers need to take the context of
data into account. Context refers to the current circumstances of the consumer receiving the ad
message, such as time of day, location, device, etc. Malthouse et al. (2018) discuss context as
who is with the consumer at the time of exposure, what the person is currently doing, what time
of day it is (when), where the consumer is physically, how the consumer is experiencing the
touchpoint (e.g., device), and why the consumer is doing what he/she is currently doing. Note
that this definition of context is broader than contextual targeting, where advertisers match their
ads to the surrounding media context, e.g., placing car ads next to articles about cars.
Adomavicius and Tuzhilin (2011) provide a thorough literature review of contextual
personalization, discuss context-aware recommender systems and identify three different
algorithmic approaches to handling context. Herlocker and Konstan (2001) hypothesized that the
inclusion of knowledge about the user’s task into the recommendation algorithm in certain
applications can lead to better recommendations.
Computational Advertising Measurement and Stakeholders
Within CA, various pieces of the aforementioned data are used in the processes of
business and planning, advertising strategy, and tactical execution. Table 1 sets out the CA metrics that are output when these processes, combined with CA data, move through the CA measurement system.
PLACE TABLE 1 ABOUT HERE
As shown in Figure 1, we classify metrics into three levels, i.e., Business & Brand Value;
Revenue, Sales & Profitability; and Consumer Behaviors, Actions & Interactions. This
classification enables us to focus on the needs of multiple stakeholders internal or external to an
enterprise. Different stakeholders have different objectives for the enterprise’s investment in
advertising, and therefore require different metrics to be measured to evaluate CA’s
effectiveness. These metrics are typically related to the effects of advertising over time. As a
reference, a spokesperson for a leading CA platform has similarly proposed an impact matrix
(Kaushik 2019), in which different metrics are calculated and reported in different cadences:
real-time/weekly, monthly/quarterly, and biannually.
The roles of different stakeholders in an enterprise, the decisions they will take, and the
time scale in which success or otherwise is determined, are all factors that govern the metrics
important to them. Researchers have studied the shorter-term effects of advertising versus the
longer-term effects of offers and promotions (Blattberg, Briesch and Fox 1995). Practitioners
have also studied the balance between (longer-term) brand building and (shorter-term) demand
generation (Binet and Field 2013). The growth of CA means that advertising managers have
immediate access to short-term metrics of customer interactions, which may not always result
result in revenue, margin, and profitability for the enterprise. External stakeholders also have
specific goals and metrics for the measurement of an enterprise’s advertising. Figure 2 lists some example metrics for internal and external stakeholders.
PLACE FIGURE 2 ABOUT HERE
The Computational Advertising Engine
Part of the CA process is the use of various analytical methods in the “computational
engine” to optimize strategy and tactics (Figure 1). In strategy development, computational
methods are being used to analyze the topics of conversation in a marketplace and define future
opportunities of brands to compete (e.g., Fan and Gordon 2013). In tactical executions,
computational methods are being used to place a bid to expose a user to an ad, recommend certain
items to a user (Ekstrand, Riedl and Konstan 2011), or predict how likely the user is to churn or
convert (Burez and Van den Poel 2009, Li and Guan 2014). There are many different methods
depending on the stakeholder and their objectives. These methods frequently involve the use of
computational algorithms (for an overview see the editorial of this Special Issue).
The computational engine in Figure 1 leverages algorithms and applies models based on
these algorithms to solve problems in general, measure the contributions of data inputs, and
evaluate the effects of multiple variables in the CA system. The next step in the CA process is to
deliver messages to consumers through their connected devices. Devices have become channels,
around which proliferating advertising and marketing technology solutions have been created to
deliver the message, manage and analyze the data, and report the metrics (see the media group
paper in this special issue for more details). Every channel results in specific consumer responses
and generates different metrics. Web analytics is based on ‘clicks’, while social analytics is based on ‘likes’ and ‘shares’. However, a ‘like’ on one social media platform may not have exactly the same impact as on another, or as a retweet on Twitter.
The measurement challenges start here. Further, while the measures that come from
digital channels tend to be behavioral, not all objectives map directly onto behaviors that can be
easily observed, and behavioral measures may have to be supplemented by other research
designs, analytical methods, and models. For example, attitudinal measures will be necessary to
understand the user’s cognitions, emotions and beliefs about touchpoints and the brand. In the
next part, we will further elaborate on measurement challenges in CA.
COMPUTATIONAL ADVERTISING’S MEASUREMENT CHALLENGES
A key question for any organization is what it should measure. Before answering this question, one should consider why it should measure something. As mentioned in the
introduction, two main reasons for measuring variables are (1) to evaluate performance of
advertising and manage ad decisions, which is the more traditional reason for measurement, and
(2) to improve or optimize the performance for advertising systems. As an example of (2), a
retailer may choose to measure customers’ behavior on its website so that it can personalize future interactions with them, making those interactions more likely to be effective. The measures
that are tracked often become the things that employees will optimize through their decisions and
actions, especially if the measures are tied to their compensation. Thus, it is important to be
measuring the “right” things. This section discusses considerations in deciding what to measure.
The first subsection will focus on evaluating performance (1), with an emphasis on how CA affects these classic challenges with new complexity, data, and/or opportunities. Our list is not
exhaustive, and we recognize that there are many other difficult classic challenges in advertising
measurement, such as measuring the duration and cumulative effects of ads, but we focus the discussion here on what is new because of CA. The second subsection details challenges for
measuring variables to optimize performance (2).
Challenges in Evaluating Ad Performance Where CA Offers Something New
Short- versus long-term measures.
Strategic business objectives are longer-term, but in CA campaigns, the measurement
starts immediately, in real time. Yet many factors in advertising, in particular the role of creativity in shifting consumer perceptions, affect long-term metrics and outcomes. A
common remedy for these problems is to decompose the long-term goal into shorter-term
activities that lead up to the long-term goal, such as the purchase funnel or hierarchy of effects.
The decision on what to measure depends on how to decompose the long-term goal into short-term ones and on confirming their relationship with the long-term goal.
As an example, consider a B2B consulting firm that ultimately wants to sell services to
clients (a longer-term goal), but the sales cycles can last months or even years, and sales are
affected by multiple touchpoints and content over time. Thus, the longer-term objectives need to
be achieved by shorter-term activities. These may have short-term metrics but must be evaluated
on their contribution to the longer-term objectives. It may start with making clients aware of the
firm, which is likely to require mass-advertising approaches such as TV ads (Bronner and
Neijens 2006, Dijkstra, Buijtels and Van Raaij 2005). A later goal might be to have the client
sign up for a newsletter so that the consulting firm has permission to market directly to the client.
This phase likely involves more branded content in the form of whitepapers, podcasts, and in-
person events such as conferences or lunch speeches (Wang et al. 2019). A next step might be
having a sales call, then submitting a bid for some RFP, etc. There are many steps along the way
to a sale. There may be even longer-term objectives, such as repeat purchases and customer
lifetime value (CLV) (Figure 2).
Just as long-term goals must be decomposed into short-term ones, CLV, which is the
discounted sum of future cash flows due to the relationship, must also be decomposed because it
is a forecast rather than a metric. Enterprises can (and should) measure factors that indicate CLV
including retention rates, average order amounts, purchase rates, and costs to serve the customer,
but they cannot measure CLV itself. Furthermore, they should identify and measure leading
indicators of these CLV determinants. In the B2B consulting firm example, engaging with the
firm’s digital content marketing is such a leading indicator of purchase (Wang, Malthouse,
Calder and Uzunoglu 2019).
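Because CLV is a forecast assembled from measurable determinants (retention rates, order amounts, costs to serve), its basic structure can be sketched in a few lines. The following is a minimal sketch under a simple constant-retention model; all input values are hypothetical, not estimates for any real firm:

```python
def clv(margin, retention, discount, horizon=50):
    """Approximate CLV as the discounted sum of expected future margins.

    margin: expected margin per period; retention: per-period retention
    probability; discount: per-period discount rate. Illustrative only --
    real CLV models estimate these determinants from data.
    """
    return sum(margin * retention**t / (1 + discount)**t
               for t in range(horizon))

# Hypothetical inputs: $100 margin per period, 80% retention, 10% discount.
# Closed form: margin * (1 + d) / (1 + d - r) = 100 * 1.1 / 0.3 ≈ 366.67
print(round(clv(100, 0.8, 0.1), 2))  # 366.67
```

The closed form makes explicit how sensitive the forecast is to the retention rate, which is one reason leading indicators of retention matter so much.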
There are several challenges. The first is confirming that the short-term metrics have
predictive validity in that they are leading indicators of the long-term goal. For example,
ComScore and Pretarget ran an industry study that looked at metrics such as ad clicks to see if
they were truly correlated with conversions of online sales (Lipsman 2012). After analyzing 263
million impressions over nine months across eighteen advertisers in numerous verticals,
surprisingly they found that ad clicks had no significant correlation with sales conversions.
Maximizing the short-term goal of clicks may not lead to the long-term goal of conversions. CA
exacerbates this problem because so many things are now easy to measure. The fact that
something like clicks is easy to measure does not mean that it is a leading indicator of long-term
goals, and the firm should evaluate predictive validity when selecting measures.
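A first-pass predictive-validity check is simply to correlate the candidate short-term metric with the long-term outcome across campaigns. A minimal sketch with simulated (hypothetical) campaign data, in which clicks carry no information about conversions by construction:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical campaign-level data in the spirit of the study above:
# clicks and conversions are drawn independently, so clicks carry no
# signal about conversions by construction.
clicks = rng.poisson(200, size=50)
conversions = rng.poisson(10, size=50)

# A short-term metric has predictive validity only if it correlates with
# (ideally, predicts) the long-term outcome.
r = np.corrcoef(clicks, conversions)[0, 1]
print(f"click-conversion correlation: {r:.2f}")
```

With real data, one would also lag the metric and test out-of-sample prediction rather than rely on in-sample correlation alone.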
A second challenge is to avoid the trap in which optimizing against short-term objectives has a negative impact on the long-term goal. For example, a service provider (say, car repairs) may
have a short-term goal of cross-selling other services but realize that those buying the services
are less likely to return in the future. Optimizing against a short-term objective (cross-sales)
could harm longer-term objectives (e.g., repeat purchase, CLV). Binet and Field (2013) have
studied long- and short-term marketing strategies and question whether long-term objectives can
be achieved by a series of short-term activities measured by short-term metrics.
Multi-touch attribution.
CA offers the ability to track exposures over time in more detail than ever before, but
another challenge is multi-touch attribution: there can be many touchpoints (e.g., ad exposures)
over time and the problem is determining which ones contribute to the outcome. For example, a
common practice is to attribute a sale to the most recent touchpoint, where the last click “wins.”
The problem with this is that the last click may not have happened if the consumer had not been
exposed to many other brand messages prior to seeing the most recent message. Previous
messages could have made the consumer aware of the brand and changed brand associations,
which was necessary before the consumer would buy the product. How much “credit” should be
given to the last contact versus the previous ones?
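The difference between attribution rules is easy to see in code. A minimal sketch contrasting the last-touch rule with a simple linear (equal-credit) rule over a hypothetical exposure path; real multi-touch attribution models estimate these credits from data rather than assuming them:

```python
def last_touch(path):
    """All conversion credit goes to the final touchpoint ("last click wins")."""
    credit = {channel: 0.0 for channel in path}
    credit[path[-1]] = 1.0
    return credit

def linear(path):
    """Equal credit to every touchpoint -- a simple heuristic, not a model."""
    credit = {}
    for channel in path:
        credit[channel] = credit.get(channel, 0.0) + 1.0 / len(path)
    return credit

# Hypothetical exposure sequence for one converting consumer.
path = ["display", "social", "search"]
print(last_touch(path))  # search gets all the credit
print(linear(path))      # each channel gets 1/3
```

Both rules assign credits that sum to one; the open research question is which assignment reflects the causal contribution of each touchpoint.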
A similar problem is that there is often a high correlation between spending in different
advertising channels, e.g., if there is an increase in the overall ad budget then a firm might
allocate it proportionally across channels, in which case channel effects would be confounded.
Naik and Peters (2009) do not attempt to estimate individual channel effects, and instead use the
principal component of offline spend in different channels (TV, radio, magazines, etc.) to handle
the multicollinearity. Using such an aggregated measure may be the most defensible modeling
strategy for multiple channels unless it is possible to run experiments with orthogonal designs.
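The principal-component idea can be sketched as follows: simulated weekly spends in three offline channels driven by a common budget (all numbers hypothetical), reduced via SVD to a single ‘offline spend’ index that a response model can use in place of the collinear channel columns:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical weekly spends in three offline channels that all track a
# common budget, producing the multicollinearity described above.
budget = rng.uniform(50, 150, size=100)
spend = np.column_stack([
    0.5 * budget + rng.normal(0, 2, size=100),  # TV
    0.3 * budget + rng.normal(0, 2, size=100),  # radio
    0.2 * budget + rng.normal(0, 2, size=100),  # magazines
])

# First principal component via SVD: one "offline spend" index that can
# stand in for the collinear channel columns in a response model.
centered = spend - spend.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
offline_index = centered @ vt[0]
print(offline_index.shape)  # one index score per week
```

Because the channels share one budget driver, the first component captures nearly all of the spend variance, which is exactly why the individual channel effects cannot be separated without experiments.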
Causation versus correlation, and endogeneity in measuring the effectiveness of CA.
There are two broad categories of research designs for assessing the causal effects of
advertising: randomized, controlled trials (RCT) and observational studies. Observational studies
include those with quasi-experimental methods such as difference-in-differences and regression
discontinuity designs, and those with statistical/econometric models such as regression and
choice models using time-series, cross-sectional, or panel data (e.g., Assmus, Farley, and Lehmann 1984; Liu and Shankar 2015).
While RCTs are often held up as the gold standard for testing causality because of their
strong internal validity, they are often costly to execute. Firms often have systems that are
designed and optimized to do one thing well, while RCTs require systems to track and randomly
assign customers to treatment groups over time and across multiple touchpoints. In addition to
the costs of executing RCTs, management is often reluctant to risk losing additional revenue by
setting aside a control group. Furthermore, RCTs may be infeasible in many situations because it
may not be possible to randomly assign advertising treatments across a target audience. Given
these issues, it is easy to understand the appeal of observational studies.
The debate over the internal validity of observational approaches dates back to at least the
1950s (e.g., studies on whether smoking causes lung cancer). Endogeneity of advertising is a key concern that precludes conclusive causal claims about advertising in non-randomized studies. In observational studies, endogeneity is typically controlled through instrumental variable approaches, propensity score models, control function methods, and copula techniques
(e.g., Liu, Shankar and Yun 2017, Liu-Thompkins and Malthouse 2017). DeKimpe and Hanssens
(2000) show the potential for time series models to quantify short and long-term marketing
effects. The increased availability of customer behavior measures over time in digital advertising
environments enables more widespread use of such models. While these approaches can mitigate
endogeneity concerns to a considerable extent, they cannot substitute for RCTs. Indeed, Gordon
et al. (2019) compare RCTs with different methods in CA and find that observational studies
often fail to accurately measure the true effects of advertising. Many of the critiques of current
CA measurement systems arise from a confusion between correlation and causation.
However, the rate of adoption of such models by mainstream practitioners has been slow.
Kireyev, Pauwels and Gupta (2016) studied the influence of display ads on search behavior,
applying Granger causality tests to separate correlation from causality and measuring the
‘spillover’ effects over time across media to reach a true measure of the contribution of the two media. Their analysis reveals the challenges of CA measurement and indicates a path
forward for future researchers. It is often easier to execute RCTs with high internal and external
validity in digital environments than in the non-digital environments of the past. The challenge
for academic scholars is getting access to, and manipulating stimuli in, the digital environments,
which tend to be controlled by companies and other organizations.
Incrementality.
Another measurement issue is whether advertising increases a criterion like sales. There
are many situations where this is an issue. An old example comes from retailing (e.g., Hansotia
and Rukstales 2002). Suppose that a retailer sends coupons to its customers. It is easy to measure
how many coupons are redeemed and how much those redeeming them spend, but would the
customers have come into the store anyway? If so, the effect of the coupon is only to reduce
margins on a sale that would have happened without the coupon. As a trivial example, coupons
given to customers entering a restaurant will almost certainly be redeemed, but the customers
would have dined anyway and paid full price. More modern examples include paid search. For
example, a retailer might buy its brand name as a search term, but a customer who searches for
the brand name may find the brand without the retailer having to pay for the search term. See the
work of Lewis and Reiley (2014), Johnson, Lewis, and Nubbemeyer (2017), and Olaya, Coussement, and Verbeke (2020) for recent surveys and benchmarking studies.
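In a randomized test, measuring incrementality reduces to comparing response rates between exposed customers and a holdout group. A minimal sketch with hypothetical coupon numbers:

```python
def incremental_lift(treated_conversions, treated_n, control_conversions, control_n):
    """Incremental effect of a campaign: conversion rate among exposed
    customers minus the rate in a randomized holdout (control) group."""
    return treated_conversions / treated_n - control_conversions / control_n

# Hypothetical coupon test: 8% of couponed customers buy vs. 6% of the
# holdout, so only 2 points of conversion are actually incremental; the
# other 6 points of customers would have purchased anyway.
lift = incremental_lift(800, 10_000, 600, 10_000)
print(f"incremental lift: {lift:.2%}")  # incremental lift: 2.00%
```

Judging the coupon by its 8% redemption-linked conversion rate would overstate its effect fourfold; the holdout is what isolates the incremental sales.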
Challenges Around Optimizing Advertising Contact Points
One answer to the question of what to measure is to measure first-party variables that
improve the effectiveness of some marketing contact, often through improved relevance
(personalization) or targeting. The behaviors and contextual factors discussed in the previous
section are prime candidates. Another reason to measure something is if it has value to other
actors (second-party data): a firm may be able to charge other firms for the information it has on
its customers (Line et al. 2020). There are many research areas that arise because of such data.
Multiple objectives and metrics.
Advertising has often focused on a small set of objectives, such as changing an attitude or
increasing awareness. Designing digital CA environments that create user experiences usually
involves managing tradeoffs between many competing objectives, which often arise because of
multiple stakeholders. Firms need to think about the different stakeholders and design measures
to reflect their needs. For example, a retailer recommending items to consumers on its website
must consider the user’s utility for different items, but also whether the manufacturer will pay to
be a sponsored recommendation, the item’s profit margin and perhaps strategic considerations
such as expanding its presence in some category. Likewise, a media website recommending
news stories must consider the user’s utility for stories, but also the needs of its advertisers, e.g.,
if automotive stories are never recommended then there will be lower traffic to such stories, and
auto advertisers may not obtain the exposures they seek from the website. It may have other
objectives around not creating filter bubbles or expanding the interests of the user.
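One simple way to operationalize such tradeoffs is a weighted multi-objective score. The sketch below is a hypothetical illustration: the `Item` fields, weights, and catalog are invented for exposition, not a production ranking policy:

```python
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    user_utility: float   # predicted relevance to the user, in [0, 1]
    margin: float         # profit margin, in [0, 1]
    sponsored: float      # manufacturer sponsorship payment, normalized to [0, 1]

def score(item, w_user=0.6, w_margin=0.25, w_sponsor=0.15):
    # The weights encode the tradeoff among stakeholders: the user,
    # the retailer's margin, and the sponsoring manufacturer.
    return (w_user * item.user_utility
            + w_margin * item.margin
            + w_sponsor * item.sponsored)

catalog = [
    Item("blender", user_utility=0.9, margin=0.2, sponsored=0.0),
    Item("toaster", user_utility=0.5, margin=0.4, sponsored=1.0),
]
ranked = sorted(catalog, key=score, reverse=True)
```

Changing the weights shifts which stakeholder wins; choosing and justifying them is itself a measurement problem.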
Fake data and ad fraud.
Malthouse and Li (2017) discuss fraud in CA perpetrated via automated computational
“bots” or low-paid “click farm” workers that generate fake clicks on online ads, especially ads
priced on a pay-per-click basis; producing fake data to inflate pay-per-click ad revenue has thus
been a persistent challenge for CA measurement systems. Fulgoni (2016) likewise discusses
advertising fraud. More recently, a Pew study found that 66% of tweeted links to popular
websites are posted by “automated accounts” (Wojcik et al. 2018), and there are similar concerns
around fake reviews. Fake data can also be generated deliberately by consumers: AdNauseam, a
free, open-source Chrome browser extension, blocks ads and trackers, stores the ads users have
been served so they can review their ad history, and floods online ads with fake clicks (Howe
and Nissenbaum 2017). Its creators state that the project is driven not by financial gain but by a
desire for greater societal privacy; the fake-click function layered on top of an ad blocker is
intended both as a technological protest against the current state of CA and as a technical means
of upending it. Although technical countermeasures to fake data are likely to be built (e.g.,
machine-learned detection of fake click patterns), another stream of solutions would be opt-in
ads (e.g., permission marketing, zero-party data collection), as
AdNauseam shows how society is pushing back on privacy issues.
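As a toy illustration of what fake-click screening might look for, the sketch below flags visitors whose click timing is implausibly fast or metronome-regular. The `looks_automated` heuristic and its thresholds are invented assumptions, far simpler than the machine-learned detectors used in practice:

```python
from statistics import pstdev

def looks_automated(click_times, min_gap=0.5, max_jitter=0.05):
    """click_times: timestamps (seconds) of one visitor's ad clicks."""
    gaps = [b - a for a, b in zip(click_times, click_times[1:])]
    if not gaps:
        return False                          # a single click tells us nothing
    too_fast = min(gaps) < min_gap            # faster than a human plausibly clicks
    too_regular = pstdev(gaps) < max_jitter   # near-zero variance in spacing
    return too_fast or too_regular

human = [0.0, 3.1, 9.8, 17.2]  # irregular, human-like spacing
bot = [0.0, 1.0, 2.0, 3.0]     # perfectly regular spacing
```

Real detectors combine many more signals (IP reputation, mouse movement, conversion behavior), but the principle of contrasting observed patterns against plausible human behavior is the same.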
Expanded set of data measured.
Current CA approaches profile customers through numerous means such as website
cookies, data mined from social media profiles, or data purchased from third-party sources.
Technological advancement will continue to bring new avenues of customer data collection and
allow for measurement of CA effectiveness. For example, wearable technologies such as virtual
reality headsets, augmented reality glasses, and even Elon Musk’s investment into a brain
implant technology called Neuralink (Scaturro 2019) should enable new ways to collect
customer data, behavioral metrics, and even psychological metrics to measure CA effectiveness.
Voice-enabled smart speakers such as Google Home and Amazon Alexa-enabled Echo products
can also open up new avenues of CA measurement and metrics. Amazon started testing audio
ads via its Alexa products in 2019 (Sloane 2019), thus opening the opportunity to connect an
audio ad “impression” to a sales conversion on whatever e-commerce site is being advertised.
Additionally, bringing together previous research on vocal tone and emotions (Devillers,
Vidrascu and Layachi 2010) and response latency’s correlation with attitude strength (Bassili
1993), voice-enabled smart speakers can open up even more opportunities for CA behavioral
(e.g., audio ad impression to sales conversion) and psychological measurement (e.g., vocally
detected emotion and/or attitude strength).
At the same time, Google recently announced that it would phase out all third-party
cookie tracking for advertising in its Chrome browser (Graham 2019), social media platforms
are tightening privacy controls of people’s profile data, and recent research has shown that
customer profiles built from purchased third-party data may not be worth the cost due to the
black-box nature of how profiles are created (Neumann, Tucker and Whitfield 2019) and ethical
concerns (Strycharz et al. 2019). Future research questions about data access will need to
consider the ethics of each of these collection avenues.
Creeping out consumers.
Technological advancements will create more sophisticated techniques that can be used
to collect customers’ personal data and to more accurately distribute messages to customers
(Malthouse, Maslowska and Franks 2018, Segijn and van Ooijen 2020). As mentioned before,
CA is assumed to further improve the effectiveness of a message because it will be personalized.
However, at the same time, personalization can be perceived as invasive of a customer’s privacy,
create discomfort, and increase feelings of being watched (McDonald and Cranor 2010, Segijn
and van Ooijen 2020, Smit, Van Noort and Voorveld 2014). In turn, how customers feel about
personalized messages can influence the effectiveness of those messages (Aguirre et al. 2015).
Marketers therefore have to balance accurately matching the message to the consumer’s needs
against creeping consumers out by being accurate all the time, a tension known as the accuracy
trade-off (Segijn 2019). Eventually, technological advancements may become sophisticated
enough that machines can learn when to make a deliberate, seemingly random ‘mistake’ so as
not to creep out customers, which could be the most effective strategy. However, this can only
work when the input about the effectiveness of the campaign accurately reflects reality. Thus,
optimizing the measurement of metrics is important because those measurements serve as input
for optimizing the effectiveness of the CA-driven campaign.
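The deliberate-‘mistake’ idea can be sketched as a simple randomized serving policy. The `choose_ad` function and the epsilon value below are hypothetical illustrations; learning the right mistake rate from measured consumer reactions is exactly the open research question:

```python
import random

def choose_ad(targeted_ad, generic_ads, epsilon=0.1, rng=random):
    """Serve the targeted ad most of the time; occasionally make an
    intentional 'mistake' to reduce perceived surveillance."""
    if rng.random() < epsilon:
        return rng.choice(generic_ads)  # the deliberate 'mistake'
    return targeted_ad

random.seed(7)  # reproducible illustration
served = [choose_ad("targeted", ["generic_a", "generic_b"])
          for _ in range(10_000)]
mistake_rate = 1 - served.count("targeted") / len(served)
```

Over many impressions the realized mistake rate approaches epsilon; a learned system would instead tune this rate per consumer based on measured discomfort signals.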
Behavioral versus psychological data.
Another challenge relates to the distinction between behavioral and psychological
metrics (Hofacker, Malthouse and Sultan 2016, p. 93). By behavioral we mean records of a
customer’s actions and by psychological we mean measures of their thoughts, feelings or beliefs.
The goals of advertising, especially in the upper funnel, are often psychological in nature, such
as changing an attitude or making consumers aware of something. There are problems in
measuring such psychological phenomena with behavioral metrics. Psychological metrics are
typically gathered with self-reported surveys on a sample of customers and possibly
noncustomers, although neuro measures and machine-learned detection (e.g., Yun, Pamuksuz
and Duff 2019) are being explored. The digital environments in which CA takes place generate
an abundance of behavioral data for all customers who interact in the environment. Behavioral
measures could be called a convenience census of customers, in that measures are known for all
customers who visit but are usually not observed for noncustomers. There are additional
challenges in that observed actions may come from different time periods and have different
levels of completeness, in that less is known about customers who rarely visit. While previous
behaviors are often a good indicator of future ones, thoughts, emotions, and beliefs are more
malleable in that they can be altered with persuasive messages. Therefore, it is desirable to have
a complete understanding of both the cognitions and behaviors of customers.
Bias in data and algorithms.
Sampling and measurement biases have long been a challenge in advertising
measurement. The biases in CA differ somewhat from those in non-CA settings, but they
certainly exist. In the past, advertising scholars were concerned about the validity of self-reports and
the representativeness of different panels. In contrast, CA scholars worry about issues including
convenience censuses, algorithmic biases and the effects of non-human (bot) traffic. There has
been a recent movement in the computational creation of advertisements through which social
media profiles and behaviors (Dragoni 2018) or browsing behavior (Deng et al. 2019) are used to
create artificial intelligence (AI)-generated creative content. Although such a practice may
provide benefits (e.g., less stress for humans to create millions of different advertisements for
RTB marketplaces, Deng, Tan, Wang and Pan 2019), relying on AI for automatic content
creation and recommendation can have detrimental effects from underrepresentation and bias
standpoints. Bias is a major concern in the design of AI systems (e.g., Abdollahpouri et al. 2019,
Shankar et al. 2017). Previous research in underrepresentation within advertising shows that
ethnicities such as Latinos (Taylor and Bang 1997) and age groups such as 50+ (Carrigan and
Szmigin 1998) were highly underrepresented in advertising. This bias poses a problem with the
movement towards AI, computational content creation, and content recommendation. For
example, Google received negative reactions when the AI algorithms in its Google Photos
product mislabeled Black people as gorillas, a result of the underrepresentation of people of
color in its training image collections as well as a lack of racial diversity in Silicon Valley as a
whole (Guynn 2015). Most of the images of people used to train Google’s AI were of white
people, so the algorithm learned to recognize “people” largely by white skin tones. If CA
algorithms are being trained by data from previous advertisements that are largely non-Latino
and young, the millions of AI-generated creative content will be full of young non-Latino
models. Some potential research questions for the challenge of biased data include: What are
some ways that CA data representativeness can be measured? How does the concept of biased
data change or remain the same when CA is highly customized to an individual? How can CA
bias be measured and addressed?
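As one hypothetical answer to the first question, representativeness can be audited by comparing each group’s share of the training data with its share of a reference population. The group labels, counts, and population shares below are invented for illustration:

```python
def representation_ratios(data_counts, population_shares):
    """Each group's share of the data divided by its share of the population.
    Values well below 1.0 indicate underrepresentation in the training data."""
    total = sum(data_counts.values())
    return {group: (count / total) / population_shares[group]
            for group, count in data_counts.items()}

# Hypothetical audit of 1,000 training creatives against census-style shares.
ratios = representation_ratios(
    data_counts={"latino": 50, "age_50_plus": 100, "other": 850},
    population_shares={"latino": 0.19, "age_50_plus": 0.35, "other": 0.46},
)
# ratios["latino"] is roughly 0.26: present at about a quarter of
# that group's population share in this hypothetical training set.
```

A ratio far from 1.0 flags a group an AI content generator is likely to under- or over-produce; more refined fairness metrics build on the same comparison.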
FUTURE RESEARCH AND CONCLUSIONS
We now discuss some future research topics that ad scholars should consider. These
research topics will inevitably affect CA measurement. One question to consider is: how can
data-driven decisions be blended with human expertise? As a rough rule of thumb, machine
learning tends to excel when there is ample historical data and the business environment is
stable, while humans tend to have an advantage in handling ambiguous or quickly evolving
situations without much data. Designing hybrid ad systems that leverage the strengths of each
will be a new frontier in ad research. A related topic is designing ad programs to accomplish
long-term goals when there is little data for training models. For example, what actions should an
advertiser take each month/quarter over the next three years to achieve some strategic goal,
especially when historical data has limited relevance? This situation likely requires a
parsimonious model with a small set of ad decision variables, and theoretical understanding of
the ad effects. The advertising literature is replete with theories that would be useful in
developing such systems but developing such systems would also require skills from machine
learning and management/marketing science.
Another area for future research is improving advertising by using information about the
consumer’s context, since contextual information will only increase as more digital devices are
invented and consumers adopt them. Devices will increasingly come with microphones, cameras,
heartrate and body-temperature monitors, eye trackers and other new ways of monitoring sensory
variables. Likewise, there will be methodological improvements for inferring individual
consumer attitudes, emotions and beliefs. How will the availability of such measures change
advertising? How do organizations create trust and avoid a consumer backlash? As such data
proliferate, there will be a need to develop ethical guidelines, modes of consent, codes of
practice, data selling/ownership laws, and other standards. There will be a need for research on
how to earn the trust of consumers and not creep them out with personalized contact points. As a
consequence of COVID-19, this area is especially dynamic, and therefore in need of research.
Consumer attitudes toward data collection and monitoring will likely evolve when it is used to
create benefits for consumers such as contact tracing and predicting where the next outbreaks
will happen. We see a natural experiment occurring between Europe, the US and China with the
governments implementing different monitoring practices.
Advertising scholars are not the only people working on these issues. Other fields
studying similar issues include recommender systems (Harper et al. 2005), persuasive computing
(human-computer interaction and computer-mediated communication), marketing science, data
mining, and machine learning. This is an opportunity for advertising scholars to come together
with people working on the same problems but with different methods and approaches. Beyond
answering specific questions about the effects of advertising, CA scholars can learn from other
research communities that are steeped in data such as computational biology, genetics, and
epidemiology. We also note that other business functions, such as manufacturing, supply chain
management, financial and insurance management, and firm valuation, have been transformed by
data. Consumer behaviors in other areas such as traffic patterns and medicine have also been
subject to intense data analytics. Researchers can test whether CA meets the same rigor and
standards as these other disciplines, and explore the methods and practices developed in these
disciplines that CA can adapt.
There is an argument that advertising is “special” because it is a discipline that is both
behavioral and psychological. Yet, the fact that we are still discussing the same age-old questions
(e.g., is half my advertising wasted?) in an age of big data and computation suggests three
challenges for researchers that are of vital importance. What metrics exist today, are they of
value, and what metrics are missing? What methodologies to assess advertising effectiveness are
used today? Are they appropriate for the goals of firms and needs of a consumer-led
marketplace? Why is there not more experimentation? Why is there a divide between survey-
based metrics and behavioral metrics?
Finally, there is a risk that CA is “failing” because of manipulation by bad actors that
results in consumer disaffection and fraud. While this issue is not the focus of this paper, we
believe the adoption of the right metrics and methodologies can be mitigating factors. If digital
advertising is annoying, we can measure opt-outs and ad-blocking at the same time as we
measure clicks and sales. Machine learning optimizes a user-specified objective. If the wrong
objective (e.g., maximize clicks) is pursued, machine learning will be very good at optimizing it,
but it might do more harm than good. This situation sets up a well-defined set of challenges for
researchers.
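The point about objectives can be made concrete with a sketch of a composite objective that penalizes annoyance signals alongside clicks and sales. The weights and campaign numbers below are invented assumptions; choosing defensible weights is itself a research question:

```python
def campaign_objective(clicks, sales, opt_outs, ad_blocks,
                       w_click=0.1, w_sale=1.0, w_opt=5.0, w_block=2.0):
    # Maximizing clicks alone ignores the long-term cost of disaffected
    # consumers; the penalty terms make that cost explicit in the objective.
    return (w_click * clicks + w_sale * sales
            - w_opt * opt_outs - w_block * ad_blocks)

# A click-heavy campaign can score worse than a smaller, less annoying one.
aggressive = campaign_objective(clicks=10_000, sales=200,
                                opt_outs=300, ad_blocks=400)
restrained = campaign_objective(clicks=4_000, sales=180,
                                opt_outs=20, ad_blocks=30)
```

A learner pointed at the first objective would chase clicks and destroy value; pointed at the composite, it is rewarded for restraint.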
We conclude by reiterating the importance of measurement and methodology for CA. CA
is the basis for the success of several of the most valuable firms today. CA has also contributed
to Schumpeter’s creative destruction of enterprises at a scale and pace never seen before. As we
move forward, CA will likely assume greater importance. The measurement, metrics, methods,
and models of CA, including experimentation, will continue to evolve in ways not experienced
before. Measurement is not just the last box on the side of an advertising flow diagram. It is an
integral component of the systems themselves.
REFERENCES
Abdollahpouri, Himan, Masoud Mansoury, Robin Burke and Bamshad Mobasher (2019), "The
Impact of Popularity Bias on Fairness and Calibration in Recommendation," arXiv
preprint arXiv:1910.05755.
Adomavicius, Gediminas and Alexander Tuzhilin (2011), "Context-Aware Recommender
Systems," in Recommender Systems Handbook: Springer, 217-253.
Aguirre, Elizabeth, Dominik Mahr, Dhruv Grewal, Ko de Ruyter and Martin Wetzels (2015),
"Unraveling the Personalization Paradox: The Effect of Information Collection and
Trust-Building Strategies on Online Advertisement Effectiveness," Journal of Retailing,
91 (1), 34-49.
Assmus, Gert, John U Farley and Donald R Lehmann (1984), "How Advertising Affects Sales:
Meta-Analysis of Econometric Results," Journal of Marketing Research, 21 (1), 65-74.
Bassili, John N. (1993), "Response Latency Versus Certainty as Indexes of the Strength of
Voting Intentions in a CATI Survey," Public Opinion Quarterly, 57, 54.
Batra, Rajeev and Kevin Lane Keller (2016), "Integrating Marketing Communications: New
Findings, New Lessons, and New Ideas," Journal of Marketing, 80 (6), 122-145.
Binet, Les and Peter Field (2013), The Long and the Short of It: Balancing Short and Long-Term
Marketing Strategies: Institute of Practitioners in Advertising.
Blattberg, Robert C, Richard Briesch and Edward J Fox (1995), "How Promotions Work,"
Marketing Science, 14 (3_supplement), G122-G132.
Bronner, Fred and Peter Neijens (2006), "Audience Experiences of Media Context and
Embedded Advertising: A Comparison of Eight Media," International Journal of Market
Research, 48 (1), 81-100.
Burez, Jonathan and Dirk Van den Poel (2009), "Handling Class Imbalance in Customer Churn
Prediction," Expert Systems with Applications, 36 (3), 4626-4636.
Carrigan, Marylyn and Isabelle Szmigin (1998), "The Usage and Portrayal of Older Models in
Contemporary Consumer Advertising," Journal of Marketing Practice: Applied
Marketing Science, 4 (8), 231-248.
DeKimpe, Marnik G and Dominique M Hanssens (2000), "Time-Series Models in Marketing:
Past, Present and Future," International Journal of Research in Marketing, 17 (2-3), 183-
193.
Deng, Shasha, Chee-Wee Tan, Weijun Wang and Yu Pan (2019), "Smart Generation System of
Personalized Advertising Copy and Its Application to Advertising Practice and
Research," Journal of Advertising, 48 (4), 356-365.
Devillers, Laurence, Laurence Vidrascu and Omar Layachi (2010), "Automatic Detection of
Emotion from Vocal Expression," in A Blueprint for Affective Computing: A
Sourcebook and Manual, Klaus R. Scherer, Tanja Bänziger and Etienne Roesch eds.,
Oxford: Oxford University Press, 232-244.
Dijkstra, Majorie, Heidi EJJM Buijtels and W Fred Van Raaij (2005), "Separate and Joint Effects
of Medium Type on Consumer Responses: A Comparison of Television, Print, and the
Internet," Journal of Business Research, 58 (3), 377-386.
Dragoni, Mauro (2018), "Computational Advertising in Social Networks: An Opinion Mining-
Based Approach," in Proceedings of the 33rd Annual ACM Symposium on Applied
Computing: ACM, 1798-1804.
Ekstrand, Michael D, John T Riedl and Joseph A Konstan (2011), "Collaborative Filtering
Recommender Systems," Foundations and Trends® in Human–Computer Interaction, 4
(2), 81-173.
eMarketer (2018), "More Than 80% of Digital Display Ads Will Be Bought Programmatically in
2018," in eMarketer.
Fan, Weiguo and Michael D Gordon (2013), "The Power of Social Media Analytics,"
Communications of the ACM, 12, 1-26.
Farris, Paul W, Neil Bendle, Phillip Pfeifer and David Reibstein (2010), Marketing Metrics: The
Definitive Guide to Measuring Marketing Performance: Pearson Education.
Frederik, Jesse and Maurits Martijn (2019), "The New Dot Com Bubble Is Here: It’s Called
Online Advertising," in The Correspondent.
Fulgoni, Gian M (2016), "Fraud in Digital Advertising: A Multibillion-Dollar Black Hole: How
Marketers Can Minimize Losses Caused by Bogus Web Traffic," Journal of Advertising
Research.
Gordon, Brett R., Florian Zettelmeyer, Neha Bhargava and Dan Chapsky (2019), "A Comparison
of Approaches to Advertising Measurement: Evidence from Big Field Experiments at
Facebook," Marketing Science, 0 (0), null.
Graham, Megan (2019), "Google Cracks Down on Ads Tracking You across the Web, and
Advertisers Are Preparing for the Worst," in CNBC.
Guynn, Jessica (2015), "Google Photos Labeled Black People 'Gorillas'," in USA Today.
Hansotia, Behram and Brad Rukstales (2002), "Incremental Value Modeling," Journal of
Interactive Marketing, 16 (3), 35.
Harper, F Maxwell, Xin Li, Yan Chen and Joseph A Konstan (2005), "An Economic Model of
User Rating in an Online Recommender System," in International conference on user
modeling: Springer, 307-316.
Herlocker, Jonathan L, Joseph A Konstan, Loren G Terveen and John T Riedl (2004),
"Evaluating Collaborative Filtering Recommender Systems," ACM Transactions on
Information Systems (TOIS), 22 (1), 5-53.
Herlocker, Jonathan L. and Joseph A. Konstan (2001), "Content-Independent Task-Focused
Recommendation," IEEE Internet Computing, 5 (6), 40-47.
Hofacker, Charles F, Edward Carl Malthouse and Fareena Sultan (2016), "Big Data and
Consumer Behavior: Imminent Opportunities," Journal of Consumer Marketing.
Hopkins, Claude C (1968), Scientific Advertising: New Line Publishing.
Howe, Daniel C and Helen Nissenbaum (2017), "Engineering Privacy and Protest: A Case Study
of AdNauseam," in IWPE@SP, 57-64.
Johnson, Garrett A, Randall A Lewis and Elmar I Nubbemeyer (2017), "Ghost Ads: Improving
the Economics of Measuring Online Ad Effectiveness," Journal of Marketing Research,
54 (6), 867-884.
Kaushik, Avinash (2019), "Inside Google Marketing: How We Measure the Bottom-Line Impact
of Our Advertising Campaigns."
Kireyev, Pavel, Koen Pauwels and Sunil Gupta (2016), "Do Display Ads Influence Search?
Attribution and Dynamics in Online Advertising," International Journal of Research in
Marketing, 33 (3), 475-490.
Lewis, Randall A and David H Reiley (2014), "Online Ads and Offline Sales: Measuring the
Effect of Retail Advertising Via a Controlled Experiment on Yahoo!," Quantitative
Marketing and Economics, 12 (3), 235-266.
Li, Xiang and Devin Guan (2014), "Programmatic Buying Bidding Strategies with Win Rate and
Winning Price Estimation in Real Time Mobile Advertising," in Pacific-Asia Conference
on Knowledge Discovery and Data Mining: Springer, 447-460.
Line, Nathaniel D, Tarik Dogru, Dahlia El-Manstrly, Alex Buoye, Ed Malthouse and Jay
Kandampully (2020), "Control, Use and Ownership of Big Data: A Reciprocal View of
Customer Big Data Value in the Hospitality and Tourism Industry," Tourism
Management, 80, 104106.
Lipsman, Andrew (2012), "For Display Ads, Being Seen Matters More Than Being Clicked," in
New Research from Pretarget and Comscore Suggests that Buyer Conversion is More
Highly Correlated with Ad Viewability and Hover than with Clicks or Gross Impressions,
ComScore.
Liu, Yan and Venkatesh Shankar (2015), "The Dynamic Impact of Product-Harm Crises on
Brand Preference and Advertising Effectiveness: An Empirical Analysis of the
Automobile Industry," Management Science, 61 (10), 2514-2535.
Liu, Yan, Venkatesh Shankar and Wonjoo Yun (2017), "Crisis Management Strategies and the
Long-Term Effects of Product Recalls on Firm Value," Journal of Marketing, 81 (5), 30-
48.
Liu-Thompkins, Yuping and Edward C Malthouse (2017), "A Primer on Using Behavioral Data
for Testing Theories in Advertising Research," Journal of Advertising, 46 (1), 213-225.
Malthouse, Edward C and Hairong Li (2017), "Opportunities for and Pitfalls of Using Big Data
in Advertising Research," Journal of Advertising, 46 (2), 227-235.
Malthouse, Edward C, Ewa Maslowska and Judy U Franks (2018), "Understanding
Programmatic Tv Advertising," International Journal of Advertising, 37 (5), 769-784.
Maslowska, Ewa, Su Jung Kim, Edward C Malthouse and Vijay Viswanathan (2019), "Online
Reviews as Customers’ Dialogues with and About Brands," in Handbook of Research on
Customer Engagement: Edward Elgar Publishing.
McDonald, Aleecia M and Lorrie Faith Cranor (2010), "Americans' Attitudes About Internet
Behavioral Advertising Practices," in Proceedings of the 9th annual ACM workshop on
Privacy in the electronic society, 63-72.
Naik, Prasad A and Kay Peters (2009), "A Hierarchical Marketing Communications Model of
Online and Offline Media Synergies," Journal of Interactive Marketing, 23 (4), 288-299.
Narang, Unnati and Venkatesh Shankar (2019), "Mobile App Introduction and Online and
Offline Purchases and Product Returns," Marketing Science, 38 (5), 756-772.
Neumann, Nico, Catherine E Tucker and Timothy Whitfield (2019), "Frontiers: How Effective Is
Third-Party Consumer Profiling? Evidence from Field Studies," Marketing Science.
Odden, Lee (2018), "Trust in Marketing Is at Risk. These Cmos and Marketing Influencers Share
How to Fix," in TopRank Marketing.
Olaya, Diego, Kristof Coussement and Wouter Verbeke (2020), "A Survey and Benchmarking
Study of Multitreatment Uplift Modeling," Data Mining and Knowledge Discovery, 1-36.
Perlich, Claudia, Brian Dalessandro, Rod Hook, Ori Stitelman, Troy Raeder and Foster Provost
(2012), "Bid Optimizing and Inventory Scoring in Targeted Online Advertising," in
Proceedings of the 18th ACM SIGKDD international conference on Knowledge
discovery and data mining: ACM, 804-812.
Peters, Kay, Yubo Chen, Andreas M Kaplan, Björn Ognibeni and Koen Pauwels (2013), "Social
Media Metrics—a Framework and Guidelines for Managing Social Media," Journal of
Interactive Marketing, 27 (4), 281-298.
Scaturro, Michael (2019), "Elon Musk Is Making Implants to Link the Brain with a
Smartphone," in CNN Business.
Segijn, Claire M (2019), "A New Mobile Data Driven Message Strategy Called Synced
Advertising: Conceptualization, Implications, and Future Directions," Annals of the
International Communication Association, 43 (1), 58-77.
Segijn, Claire M and Iris van Ooijen (2020), "Perceptions of Techniques Used to Personalize
Messages across Media in Real Time," Cyberpsychology, Behavior, and Social
Networking, 23 (5), 329-337.
Shankar, Shreya, Yoni Halpern, Eric Breck, James Atwood, Jimbo Wilson and D Sculley (2017),
"No Classification without Representation: Assessing Geodiversity Issues in Open Data
Sets for the Developing World," arXiv preprint arXiv:1711.08536.
Sloane, Garett (2019), "Amazon Tests Audio Ads on Alexa Music - and Here’s a Look at Its
Pitch Deck," in AdAge.
Smit, Edith G, Guda Van Noort and Hilde AM Voorveld (2014), "Understanding Online
Behavioural Advertising: User Knowledge, Privacy Concerns and Online Coping
Behaviour in Europe," Computers in Human Behavior, 32, 15-22.
Spangler, Todd (2019), "Facebook Target of Antitrust Probe by State Attorneys General," in
Variety.
Strycharz, Joanna, Guda van Noort, Natali Helberger and Edith Smit (2019), "Contrasting
Perspectives–Practitioner’s Viewpoint on Personalised Marketing Communication,"
European Journal of Marketing.
Taylor, Charles R and Hae-Kyong Bang (1997), "Portrayals of Latinos in Magazine
Advertising," Journalism & Mass Communication Quarterly, 74 (2), 285-303.
Wang, Wei-Lin, Edward Carl Malthouse, Bobby Calder and Ebru Uzunoglu (2019), "B2B
Content Marketing for Professional Services: In-Person Versus Digital Contacts,"
Industrial Marketing Management, 81, 160-168.
Wojcik, Stefan, Solomon Messing, Aaron Smith, Lee Rainie and Paul Hitlin (2018), Bots in the
Twittersphere: An Estimated Two-Thirds of Tweeted Links to Popular Websites Are
Posted by Automated Accounts-Not Human Beings: Pew Research Center.
Yun, Joseph T, Utku Pamuksuz and Brittany R L Duff (2019), "Are We Who We Follow?
Computationally Analyzing Human Personality and Brand Following on Twitter,"
International Journal of Advertising, 38 (5), 776-795.
FIGURE 1
COMPUTATIONAL ADVERTISING MEASUREMENT SYSTEM
FIGURE 2
METRICS IMPORTANT TO DIFFERENT STAKEHOLDERS
Order of boxes does not reflect importance
TABLE 1
DATA INPUTS AND OUTPUTS IN COMPUTATIONAL ADVERTISING
Process: Enterprise strategy
Objectives: Quantify the marketplace in which the firm operates; quantify the firm's business
objectives for advertising; quantify the firm's financial metrics for investment decisions
Metrics: Business and Brand Value (e.g., Total shareholder value; Return on capital; Pricing
power; Brand equity; Customer loyalty; Lifetime value)

Process: Advertising Strategic Planning
Objectives: Define the strategic plan (e.g., Consumer insight, Brand communications, Media
paid/owned/earned)
Metrics: Revenue, Sales & Profitability (e.g., Revenue, Sales Margins, Customer Acquisition,
Retention & Cross-selling, Channel, Audience & Campaign ROI)

Process: Advertising Tactical Execution
Objectives: Define the tactical executions (e.g., Targeting, Messaging, Consumer journeys,
Channel delivery, Sales distribution)
Metrics: Consumer Behaviors, Actions & Interactions (e.g., Traffic, Responses, Likes, Posts,
Shares, Following, Sign-ups, Registrations, Comments, Complaints)
ResearchGate has not been able to resolve any citations for this publication.
Article
Full-text available
The amount of online messages that are personalized based on people's characteristics and interests is growing. Due to technological advancements, it has become possible to personalize messages across media in real time. However, little is known about people's perceptions of these different personalization techniques, while this can have important implications for message effectiveness and the privacy debate. A survey with U.S. adults (N = 1,008) showed that in the context of real-time personalization, all personalization techniques are seen as unacceptable and they are all associated with perceptions of surveillance. This applies to all generations, but younger generations are more likely to accept and to perceive less surveillance than older generations. Furthermore, we found that, of all predictors, perceived surveillance and attitudes toward personalization were the strongest predictors of acceptance of all personalization techniques. The results advance theory by differentiating between personalization techniques and introducing privacy cynicism and mobile device dependency as factors that positively relate to acceptance of personalization techniques. Practically, the results contribute to the debate on consumer agency related to people's personal data and inform media literacy programs.
Conference Paper
Full-text available
The strategy of obfuscation has been broadly applied—in search, location tracking, private communication, anonymity—and has thus been recognized as an important element of the privacy engineer’s toolbox. However there remains a need for clearly articulated case studies describing not only the engineering of obfuscation mechanisms but providing a critical appraisal of obfuscation’s fit for specific socio-technical applications. This is the aim of our paper, which presents our experiences designing, implementing, and distributing AdNauseam, an open-source browser extension that leverages obfuscation to frustrate tracking by online advertisers.
Article
Full-text available
Uplift modeling is an instrument used to estimate the change in outcome due to a treatment at the individual entity level. Uplift models assist decision-makers in optimally allocating scarce resources. This allows the selection of the subset of entities for which the effect of a treatment will be largest and, as such, the maximization of the overall returns. The literature on uplift modeling mostly focuses on queries concerning the effect of a single treatment and rarely considers situations where more than one treatment alternative is utilized. This article surveys the current literature on multitreatment uplift modeling and proposes two novel techniques: the naive uplift approach and the multitreatment modified outcome approach. Moreover, a benchmarking experiment is performed to contrast the performances of different multitreatment uplift modeling techniques across eight data sets from various domains. We verify and, if needed, correct the imbalance among the pretreatment characteristics of the treatment groups by means of optimal propensity score matching, which ensures a correct interpretation of the estimated uplift. Conventional and recently proposed evaluation metrics are adapted to the multitreatment scenario to assess performance. None of the evaluated techniques consistently outperforms other techniques. Hence, it is concluded that performance largely depends on the context and problem characteristics. The newly proposed techniques are found to offer similar performances compared to state-of-the-art approaches.
Article
Full-text available
Measuring the causal effects of digital advertising remains challenging despite the availability of granular data. Unobservable factors make exposure endogenous, and advertising’s effect on outcomes tends to be small. In principle, these concerns could be addressed using randomized controlled trials (RCTs). In practice, few online ad campaigns rely on RCTs and instead use observational methods to estimate ad effects. We assess empirically whether the variation in data typically available in the advertising industry enables observational methods to recover the causal effects of online advertising. Using data from 15 U.S. advertising experiments at Facebook comprising 500 million userexperiment observations and 1.6 billion ad impressions, we contrast the experimental results to those obtained from multiple observational models. The observational methods often fail to produce the same effects as the randomized experiments, even after conditioning on extensive demographic and behavioral variables. In our setting, advances in causal inference methods do not allow us to isolate the exogenous variation needed to estimate the treatment effects. We also characterize the incremental explanatory power our data would require to enable observational methods to successfully measure advertising effects. Our findings suggest that commonly used observational approaches based on the data usually available in the industry often fail to accurately measure the true effect of advertising.
Article
Mobile devices have become ingrained in people’s lives. They facilitate data collection and are often combined with other media simultaneously. Synced advertising is a new strategy in this hybrid media environment that makes use of personalized advertising based on people’s current media use. Despite its frequent use in the industry, work is needed to understand how this mobile message strategy works. First, this article conceptualizes the phenomenon of synced advertising and describes a set of implications for theory and practice. Second, the article reviews and synthesizes existing theories in related fields. Finally, the article builds on these theories by formulating propositions to guide future research. The work is relevant to different domains, such as advertising, health communication, and political communication.
Article
In this study, we present a computational method for analyzing the congruence between the personality of a brand’s Twitter account and the personality of its followers. We investigated attachment to brands on Twitter by computing personalities through a machine-learned computational analysis of Twitter postings rather than traditional personality tests. By studying three different brands, results revealed that on average, brand followers have personalities that are more congruent with the personality of the brands they follow compared to users who do not follow those brands. Taking these findings into consideration, we discuss some considerations for advertising researchers and practitioners, as well as provide a new tool, the Brand Analytics Environment (BAE), to allow individuals without computer programming backgrounds to conduct this method themselves.
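Personality congruence of the kind this abstract describes is commonly operationalized as a similarity measure between trait vectors (e.g., Big Five scores inferred from text). The following sketch uses cosine similarity; the trait vectors and scores are hypothetical and do not reproduce the authors' actual pipeline.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length trait vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical Big Five scores (openness, conscientiousness, extraversion,
# agreeableness, neuroticism) inferred from posting history.
brand        = [0.70, 0.50, 0.60, 0.40, 0.30]
follower     = [0.68, 0.52, 0.55, 0.45, 0.35]
non_follower = [0.20, 0.90, 0.10, 0.80, 0.70]

# The study's finding, in miniature: followers' trait vectors tend to be
# more congruent with the brand's than non-followers' vectors.
follower_more_congruent = cosine(brand, follower) > cosine(brand, non_follower)
```

Averaging such similarity scores across a brand's followers and a comparison group of non-followers yields the kind of congruence contrast the study reports.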
Article
While the large-scale harvesting of consumer data is a common practice in the hospitality and tourism industries, the seemingly unassailable right of companies to collect and share consumer data is not without critics. The purpose of this paper is to explore the nature of data-based value co-creation under varying conditions of consumer control and benefits. Academic and press articles were used to explore the nature of big data value co-creation in a wide range of hospitality, tourism, and other industries. A forward-looking approach was adopted by considering the implications of policy and technology as key mechanisms for sharing the power dynamic regarding the ownership, control and use of personal data. The results suggest that reciprocal big data value creation can be seen as a function of the level of benefit and control afforded to consumers regarding the use of their data. Four types of reciprocal big data value creation are proposed.
Article
This paper models the relationship between a retailer’s mobile app launch and product purchases and returns in its online and offline channels.
Article
Artificial intelligence in programmatic advertising constitutes fertile ground for marketing communication, with tremendous opportunities. Yet, despite its touted benefits, contemporary implementations of programmatic advertising do not harness self-generative technologies, with the result that different consumers are exposed to identical content. Consequently, we advance a smart generation system of personalized advertising copy (SGS-PAC) that can automatically personalize advertising content to align with the needs of individual consumers. Analytical results from a user experiment involving about 80 subjects show that personalized advertising copy generated by SGS-PAC can increase click rates on online advertising platforms. Findings from this study bear significant implications for the application of artificial intelligence in online advertising.
Article
Purpose The purpose of this paper is to provide insights into personalisation from a practitioner’s perspective to bridge the practitioner-academia gap and steer the research agenda. A wide scope of research has investigated personalisation from a consumer perspective. The current study aims at bridging the consumer and practitioner perspectives by entering into a dialogue about the practical application of personalisation. It takes the personalisation process model by Vesanen and Raulas (2006) as the starting point. Design/methodology/approach Led by the exploratory character of the study, semi-structured expert interviews were conducted with marketers, market researchers and online privacy specialists. Findings The results showcase how practitioners view the issues present in consumer research. First, they are overly positive about personalisation. Second, they are aware of constraining factors; findings showcase best practices to mitigate them. Finally, practitioners are aware of controversies surrounding personalisation and thus engage in ethical discussions on personalisation. Research limitations/implications This study shows that practitioners have somewhat different beliefs about the utility and appreciation of personalised marketing practices than consumers. It also shows awareness of some of the key concerns of consumers, and that such awareness translates into organisational and technological solutions that can even go beyond what is currently mandated by law. Six insights into personalised marketing as well as expectations for the future of the phenomenon are discussed to steer the research agenda. Practical implications Insights into the practice of personalisation contribute to a shared understanding of this phenomenon between involved actors, such as marketers, advertisers, and consumer representatives.
In addition, implications for lawmakers are discussed, suggesting that the implementation of privacy laws needs more clarity and that actions aiming at improving consumer knowledge are needed. Originality/value The paper contributes to the literature first, by drafting a descriptive map of personalisation from a practitioners’ perspective and contrasting it with the perspective stemming from consumer research and, second, by offering insights into the current developments and direct implications for practice and future research.