Dominique M. Hanssens & Koen H. Pauwels
Demonstrating the Value of Marketing
Marketing departments are under increased pressure to demonstrate their economic value to the firm. This challenge is
exacerbated by the fact that marketing uses attitudinal (e.g., brand awareness), behavioral (e.g., brand loyalty), and
financial (e.g., sales revenue) performance metrics, which do not correlate highly with each other. Thus, one metric could
view marketing initiatives as successful, whereas another could interpret them as a waste of resources. The resulting
ambiguity has several consequences for marketing practice. Among these are that the scope and objectives of
marketing differ widely across organizations. There is confusion about the difference between marketing effectiveness
and efficiency. Hard and soft metrics and offline and online metrics are typically not integrated. The two dominant tools for
marketing impact assessment, response models and experiments, are rarely combined. Risk in marketing planning and
execution receives little consideration, and analytic insights are not communicated effectively to drive decisions. The
authors first examine how these factors affect both research and practice. They then discuss how the use of marketing
analytics can improve marketing decision making at different levels of the organization. The authors identify gaps in
marketing’s knowledge base that set the stage for further research and enhanced practice in demonstrating marketing’s
value.
Keywords: accountability, marketing effectiveness, efficiency, return on marketing investment, marketing value
assessment
The Difficulty of Marketing Value
Assessment
I want marketing to be viewed as a profit center, not a cost
center.
—A chief executive officer
I have more data than ever, less staff than ever, and more
pressure to demonstrate marketing impact than ever.
—A chief marketing officer
Marketing is at a crossroads. Managers are frustrated by
the gap between the promise and the practice of effect
measurement, big data, and online/offline integration.
Caught between financial accountability and creative flexibility,
most chief marketing officers (CMOs) do not last long at their
companies (Nath and Mahajan 2011). Top management has
woken up to the fact that their companies make multimillion-
dollar marketing decisions on the basis of less data and analytics
than they devote to thousand-dollar operational changes. Customer and market data management, product innovation and
launch, international budget allocation, online search opti-
mization, and the integration of social and traditional media
are just some of the profitable growth drivers that greatly
benefit from analytical insights and data-driven action. Yet
marketing value assessment, defined as the identification
and measurement of how marketing influences business
performance as well as the accurate calculation of return on
marketing investment (ROMI), remains an elusive goal for
most companies, which are struggling to integrate big and
small data and marketing analytics into their marketing
decisions and operations.
Why is marketing value assessment so challenging? To
begin with, the term “marketing” refers to several things: a
management philosophy (customer centricity), an organiza-
tional function (the marketing department), and a set of specific
activities or programs (the marketing mix). However, regardless
of the intended use of the term, marketing aims to create and
stimulate favorable customer attitudes with the goal of ulti-
mately boosting customer demand. This demand, in turn,
generates sales and profits for the brand or firm, which can
enhance its market position and financial value. This sequence
of influences has been termed the “chain of marketing pro-
ductivity” (Rust et al. 2004), as depicted in Figure 1.
As a result, marketing has multiple facets, some attitu-
dinal, some behavioral, and some financial. However, the
relation between the metrics that assess these facets is com-
plex and nonlinear (Gupta and Zeithaml 2006), and their
average correlations are below .5 (Katsikeas et al. 2016). For
example, product differentiation tends to be associated with
higher customer profitability but lower acquisition and re-
tention rates (Stahl et al. 2012). Similarly, online behavior
and offline surveys yield different information to explain and
predict brand sales (Pauwels and Van Ewijk 2013). Likewise,
some attitudinal brand metrics (esteem, relevance, and knowl-
edge) are associated with higher sales but not with higher prices,
while others (energized differentiation) show the opposite
pattern (Ailawadi and Van Heerde 2015).
This makes it difficult for researchers to synthesize
findings across studies of marketing impact, and it makes it
difficult for organizations to choose which metrics to rely on
when making resource allocation decisions. For example, advertising is only deemed financially successful if its ability to increase awareness results in higher sales and/or profit margins.

Dominique M. Hanssens is Distinguished Research Professor of Marketing, Anderson School of Management, University of California, Los Angeles (e-mail: dominique.hanssens@anderson.ucla.edu). Koen H. Pauwels is Professor of Marketing, Özyeğin University (e-mail: koen.pauwels@ozyegin.edu.tr).

© 2016, American Marketing Association. Journal of Marketing: AMA/MSI Special Issue, Vol. 80 (November 2016), 173–190. ISSN: 0022-2429 (print), 1547-7185 (electronic). DOI: 10.1509/jm.15.0417
Current efforts in marketing measurement often do not go all
the way in connecting metrics to each other. For instance, many
balanced scoreboards and dashboards do not tell managers how
their marketing inputs relate to customer insight metrics and to
product market performance metrics. Consistent with this notion,
in a personal communication, Lehmann uses the term “flowboards” for dashboards connecting metrics, while Pauwels (2014)
defines analytic dashboards as a concise set of interconnected
metrics. Indeed, reconciling multiple perspectives on marketing
value requires causality to be shown among marketing actions
and multiple performance outcomes (e.g., customer attitudes,
product markets, financial markets; i.e., quantifying the arrows in
Figure 1). Connecting the metrics is especially challenging if data
and decisions exist in silos within the organization. However,
the consumer or customer is the target and recipient of all these
actions, the combination of which will create the consumer’s
attitude toward the brand and, eventually, his or her purchase
behavior. In assessing marketing’s value, we therefore pay close
attention to the integration of marketing activities as they affect
consumer behavior. In this context, Court et al. (2009) argue that
the critical task is to describe the process that generates sales for
the firm and to identify the bottlenecks that impede profitable
business growth.
In addition to relating performance metrics to each other (the
metrics challenge), these metrics also need to be connected to
marketing activity. Indeed, assessing marketing value requires
various demand functions that quantify how changes in mar-
keting activity influence changes in these dependent variables
(e.g., with response elasticities). Demand functions are often too
complex for senior managers to intuitively understand and estimate.

FIGURE 1
The Chain of Marketing Productivity
Source: Rust et al. (2004).
Notes: EVA = economic value analysis; MVA = marketing value analysis.

Consequently, marketing analytics expertise is needed,
either in-house or through specialized suppliers, which in turn
creates an organizational challenge because those who practice
marketing tend to be different from those who measure it. A final
necessity in marketing value assessment is effective communi-
cation within the organization, including to decision makers who
may not be fluent in the technical aspects of value measurement.
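To make the notion of a response elasticity mentioned above concrete, here is a minimal sketch, assuming an illustrative log-log demand specification and synthetic data; the functional form, variable names, and numbers are our own assumptions, not taken from the article.

```python
import numpy as np

# Illustrative log-log demand function: ln(sales) = a + b * ln(adspend) + noise.
# In this specification the coefficient b is the advertising elasticity:
# a 1% change in spending is associated with roughly a b% change in sales.
rng = np.random.default_rng(0)
adspend = rng.uniform(50, 500, size=200)      # hypothetical weekly ad spend
true_a, true_b = 4.0, 0.10                    # b = .10 mirrors typical ad elasticities
sales = np.exp(true_a + true_b * np.log(adspend) + rng.normal(0, 0.05, 200))

# Ordinary least squares on the log-transformed data recovers the elasticity.
X = np.column_stack([np.ones_like(adspend), np.log(adspend)])
coef, *_ = np.linalg.lstsq(X, np.log(sales), rcond=None)
print(f"estimated advertising elasticity: {coef[1]:.3f}")
```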
Despite the challenges, the benefits of “marketing smarter”
are substantial, as both academic studies and business cases
demonstrate. Even a small improvement in the use of marketing analytics yields, on average, an 8% higher return on assets relative to peer companies (Germann, Lilien, and Rangaswamy 2013). This benefit increases to 21% for firms in highly competitive industries. Organizations of any size and in any industry have gained a sustainable competitive advantage
from using marketing analytics. However, even the large U.S.
companies that participated in the CMO Survey (2016) report
that marketing analytics are used in only 35% of all marketing
decisions. This percentage is likely even lower for small
and medium-sized firms across the world.
The causality implied by the chain of marketing productivity
increases the pressure for good performance metrics, causal links
between metrics and marketing actions, and effective communi-
cation to demonstrate the value of a firm’s marketing. This article
discusses the challenges of obtaining those three things. We first
provide a general overview, critically examining the knowledge
base and practice of marketing value assessment in organizations.
We then discuss marketing objectives and how they determine
the choice of marketing metrics. Next, we turn our attention to the
research methods that drive marketing value assessment—namely,
the use of models, surveys, and experiments. Those methods have
generated several important findings about marketing value. Then,
because marketing analysts and marketing decision makers are
typically not the same people, we examine ways of improving
how marketing value is communicated within the organization.
We conclude with a brief summary of current knowledge and
important areas for further research.
The Influence of Marketing
Objectives on Marketing Value
Metrics
As organizations grow and marketing technologies evolve, mar-
keting tasks become increasingly specialized and complex. A
vice president for sales and marketing may be replaced by two
vice presidents, one for sales and another for marketing. Within
marketing, separate departments may focus on advertising and
customer service. Advertising itself may be divided into brand
and direct, offline and online. Each of these people or depart-
ments is held accountable for increasingly focused business
objectives and performance metrics. In customer service, the
performance measure may be the Net Promoter Score, while
brand recognition scores may be used to gauge the performance
of the brand advertising team, and CPM (cost per 1,000 pros-
pects touched) may be used for the direct advertising team.
The result is an increasingly siloed marketing department
in which each specialized function has its own objectives,
with little consistency across functions. Another consequence
may be the imposition of inappropriate efficiency metrics that
make marketing less impactful. In some cases, marketing
may be treated as an expense rather than an investment.
What is needed are guidelines for (1) reconciling different
marketing objectives, (2) distinguishing between marketing
effectiveness and efficiency, (3) defining the scope of mar-
keting, and (4) distinguishing between marketing budget set-
ting and budget allocation.
Reconciling Different Objectives for Marketing
Among the multitude of objectives marketing managers aim
to achieve are gains in sales volume and growth, market
share, profits, market penetration, brand equity, stock price,
and a variety of consumer mindset metrics, such as awareness
and consideration. Table 1 presents an overview of the focus
of different performance assessments, their benefits, and their
drawbacks.
Marketing scholars can no longer assume that profit
maximization is the sole goal of marketing (see Keeney and
Raiffa 1993). When Natter et al. (2007) optimized dynamic
pricing and promotion planning for a retailing company,
the firm initially agreed to maximize profits, but the recommendation of higher prices met with substantial resistance
from the purchasing managers, whose supplier discounts
depend on sales volume, and from local branch managers,
who insisted on keeping a market leadership position in
their city. After further discussion, they decided to combine
profits, total sales volume, and local market share objec-
tives in an overall goal function for the model to optimize.
The resulting model yielded recommendations that were more
acceptable to the managers, who successfully implemented
them.
Despite individual contributions such as Natter et al. (2007),
marketing academia and practice have not produced a set of
generalizable weights for using different objectives under dif-
ferent conditions. Instead, marketing practice tends to focus on
case studies of each company’s unique situation and, within the
firm, on individual executives’ siloed departments.
Further research should attempt to bridge marketing ob-
jectives and metrics across functional, geographical, and life
cycle boundaries. Bronnenberg, Mahajan, and Vanhonacker
(2000) provide a good example: they demonstrate that, in one
product category, consumer liking and distribution are dominant
success metrics for brands in the early phases of the category life
cycle, with pricing and advertising becoming important only
later. Similarly, Pauwels, Erguncu, and Yildirim (2013) show
that brand liking matters more in mature markets, but brand
consideration is more important in emerging markets. Research
should also investigate the optimal weighting of objectives on the
basis of hard performance measures, along the lines of research
that combines model-based and managerial judgment (Blattberg
and Hoch 1990). Recently, the notion that models should not
ignore human decision makers has reemerged within a big-data
context as algorithmic accountability (Dwoskin 2014). The goal
is to tweak social media classification algorithms not for max-
imum efficiency but to avoid human-relations mistakes (Lohr
2015). A widely shared example is that of Target, which sent out pregnancy-related coupons to teenagers for whom its algorithm predicted pregnancy (Hill 2012).
TABLE 1
Types of Performance Outcomes
Aspect of Performance | Advantages | Disadvantages | Considerations
Customer mindset
•Causally close (often closest) to
marketing actions
•May be unique to marketing
performance outcomes vs. other
business disciplines
•Commonly used to set marketing-
specific goals and assess marketing
performance in practice
•Primary data may be difficult and costly
to collect if direct from customers
•Secondary data from research vendors
may not align well with theorized
constructs or data from other vendors
•Sampling: current customers versus
past customers versus all potential
customers in the marketplace
•Possible demographic effects on
measures
•Noise in survey measures (primary
and secondary data)
•Only allows for goal-based assessment
if collected with or supplemented by
primary data
•Transaction-specific versus overall
evaluations
Customer behaviors
•Causally close to marketing actions
•May be unique to marketing
performance outcomes versus other
business disciplines
•Commonly used to set marketing-
specific goals and assess
performance in practice
•Direct observation shows revealed
preferences
•Primary data may be difficult and costly
to collect if direct self-reports from
customers
•Observed behavior data may require
working with firms and can be difficult to
collect from multiple firms
•Differences across firms in how
observed behaviors are defined and
calibrated
•Noise in survey measures (primary
data)
•Only allows for goal-based assessment
if collected or supplemented by primary
data
Customer-level outcomes
•Causally close to marketing actions
•May be unique to marketing
performance outcomes versus other
business disciplines
•Commonly used to set marketing-
specific goals and assess
performance in practice
•May require working directly with firms
and may be difficult to work with multiple
firms
•Differences across firms in how
economic outcomes are determined
and calculated
•Only allows for goal-based assessment
if collected or supplemented by primary
data
•Noise in survey measures (primary
data)
Product-market-level outcomes
•Causally close to marketing actions
•May be unique to marketing
performance outcomes versus other
business disciplines
•Commonly used to set marketing-
specific goals and assess
performance in practice
•Unit sales data are difficult to obtain
from secondary sources for most
industries
•Even firms in the same industry may
differently define the markets in which
they compete
•Higher level of aggregation, so may be
less diagnostic
•How to define the “market”
•Only allows for goal-based assessment
if collected or supplemented by primary
data
•Noise in survey measures (primary
data)
Accounting
•Well-defined and standardized measures
•Revenue-related items commonly
used to set marketing-specific goals
and assess marketing performance
in practice
•Secondary data availability
•For primary survey data, specific
items likely to have the same
meaning across firms
•Corporate level, so may be further away
from marketing actions and less
diagnostic
•Not forward looking
•May undervalue intangible assets
•Mostly ignores risk
•Treats most marketing expenditures as
an expense
•Potential differences between firms and
industries in their accounting practices,
policies, and norms
•Differences in measures across
countries
•Only allows for goal-based assessment
if collected or supplemented by primary
data
•Noise in survey measures (primary
data)
Financial market
•Investors (and analysts) are forward
looking
•May better value intangible assets
•Finance theory suggests that
investors may be more goal
agnostic (but time frames and even
criteria may be goal related from the
firm’s perspective)
•Secondary data availability
•Corporate level, so may be further away
from marketing actions and less
diagnostic
•Publicly traded firms only, which tend to
be larger
•Difficulties in assessing firms across
different countries (and financial
markets)
•May be subject to short-term
fluctuations unconnected with a firm’s
underlying performance
•Risk adjustment
•Public/larger firm sample-selection bias
•Assumes primacy of shareholders
among stakeholders, but this may not
be true in some countries
•Assumes the financial market is efficient
and participants are well informed of the
marketing phenomena being studied
•Only allows for goal-based assessment
if collected or supplemented with
primary data
•Noise in survey measures (primary
data)
Source: Katsikeas et al. (2016).
Marketing is in a unique
position to contribute to the debate on the use of such algo-
rithmic predictions by applying the rich existing literature on
quantifying the consequences of loss in customer goodwill and
estimating the probabilities of these loss scenarios.
Effectiveness and Efficiency
When we understand the target objectives of decision makers,
a key question is whether they give primacy to effectiveness or
efficiency in reaching these goals. Effectiveness refers to the
ability to reach the goal; efficiency refers to the ability to do so
with the lowest resource usage. For instance, mass media ad-
vertising may be effective in reaching the vast majority of pro-
spective customers, but it is not very efficient, whereas online
advertising may be very efficient but not as effective because
it reaches fewer prospective customers.
The value of marketing can be expressed in terms of either
effectiveness or efficiency. Return on marketing investment
deals with efficiency. When efficiency is the goal, the result is
almost always a budget reduction through the elimination of
the least efficient marketing programs. However, the firm may
be more interested in the effectiveness of a marketing action,
which may be better expressed as return minus investment,
without dividing by the investment as in the standard return
on investment (ROI) formula from finance. As an illustration,
consider two mutually exclusive projects (e.g., alternative ad
messages aimed at the same segment), with returns of $100
million and $10 million, respectively, and investment costs of
$80 million and $2 million at the same level of risk. The first
project has the larger net return ($20 million is greater than $8
million), but the second project has the larger ROI (25% is less
than 400%). Which project should a manager prefer?
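A small calculation makes the contrast concrete. The sketch below uses the two hypothetical projects from the example above and computes the effectiveness view (return minus investment) and the efficiency view (ROI) side by side.

```python
# Two mutually exclusive projects from the example above (figures in $ millions).
projects = {
    "A": {"gross_return": 100, "investment": 80},
    "B": {"gross_return": 10, "investment": 2},
}

for name, p in projects.items():
    net_return = p["gross_return"] - p["investment"]   # effectiveness view
    roi = net_return / p["investment"]                  # efficiency view (ROI)
    print(f"Project {name}: net return = ${net_return}M, ROI = {roi:.0%}")

# Output: Project A wins on net return ($20M vs. $8M),
# while Project B wins on ROI (400% vs. 25%).
```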
The trade-off between effectiveness and efficiency is par-
ticularly salient when there is a conflict between short-term and
longer-term goals. Price promotional tactics, for example, may
be optimized for their short-term profitability, but the repeated
use of such tactics is known to erode brand equity over a longer
time span (Mela, Gupta, and Lehmann 1997). Efficiency-driven
marketing decisions should be supported only when they do not
jeopardize the long-term viability of the brand.
Ultimately, firms want to strike a balance between effec-
tiveness and efficiency goals. To accomplish this, beverage
company Diageo displays marketing actions on a 2 × 2 matrix
that juxtaposes their effectiveness (on defined objectives) with
their efficiency (ROMI). Actions without sufficient effective-
ness are likely to be canceled, no matter how high their ROMI,
while effective but inefficient actions are reexamined to improve
efficiency in the future (Pauwels and Reibstein 2010). A
company may benefit from instituting a threshold return value
that marketing programs must achieve to be supported. Ex-
amples of such thresholds are the firm’s cost of capital and
its economic profit (Biesdorf, Court, and Willmott 2013).
Research is needed to establish what the thresholds for impact
and efficiency should be.
Beyond defining and relating multiple objectives, we
also need to conceptually and empirically relate effective-
ness and efficiency in reaching these objectives. Measuring
the effectiveness or the efficiency of marketing is not an easy
task. It is important to measure not only the percentage return
of any spending amount but also its magnitude. Conceptual and
empirical models of marketing effectiveness show diminishing
returns (e.g., Kireyev, Pauwels, and Gupta 2016; Little 1979),
implying that ROI (efficiency) is maximized at levels of
marketing spending that are below the profit-maximizing (effectiveness) level (Pauwels and Reibstein 2010). We propose that the
goal should be to maximize the total effectiveness when a
certain threshold is achieved, even if that reduces the overall
efficiency (Farris et al. 2015). However, our proposal may be
more applicable to large organizations, which have plenty of
resources and opportunities, than to small ones. Further re-
search is needed to determine the best mix of effectiveness
and efficiency for smaller organizations and in dire times.
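To illustrate this point, here is a minimal numerical sketch assuming a concave (square-root) response function and made-up margin figures; it shows that the spending level that maximizes ROI lies below the one that maximizes net profit.

```python
import numpy as np

# Assumed concave (square-root) sales response with an illustrative margin;
# all numbers are invented purely to show the shape of the argument.
def net_profit(spend, margin_times_response_scale=20.0):
    return margin_times_response_scale * np.sqrt(spend) - spend

spend = np.linspace(1, 400, 4000)
profit = net_profit(spend)
roi = profit / spend

print(f"spend that maximizes net profit: {spend[np.argmax(profit)]:.0f}")
print(f"spend that maximizes ROI:        {spend[np.argmax(roi)]:.0f}")
# Under diminishing returns, ROI peaks at a much lower spending level than
# profit (here at the bottom of the grid), so maximizing efficiency alone
# forgoes profitable volume that an effectiveness criterion would capture.
```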
The Scope of Marketing Within the Organization
The scope of marketing is one of the key determinants of its
objectives and of the effectiveness/efficiency decisions that the
marketing department makes (e.g., Webster, Malter, and
Ganesan 2003). In some organizations, the marketing depart-
ment is only responsible for a subset of the marketing mix, such
as executing advertising campaigns and running sales promo-
tions. Marketing decision makers are typically more junior in
such organizations. Pricing, distribution, and product decisions
are made elsewhere in the organization, by more senior decision
makers. In our experience, this situation is typical in emerging
countries, in engineering-dominated companies, and in
business-to-business industries.
At the other extreme, a few organizations consider the
marketing department to be the true profitable growth driver
and both hold it accountable for profitable growth and
provide it with the necessary resources and authority to
achieve it. Examples include Procter & Gamble and Diageo,
which are marketing-dominated companies in business-to-
consumer industries (Pauwels 2014). Most companies fall
somewhere between these extremes; they may hold mar-
keting responsible for pricing, promotion, and branding, but
not for creating successful new products (which is often the
domain of research and development or a new product de-
velopment group) or expanding distribution (which is often
the domain of the sales organization).
The scope of marketing also has a major impact on the data
collection that underlies marketing value assessment. The
broader the scope, the more variables are included in marketing
databases and, generally, the lower the level of granularity of
these databases. For example, digital attribution models have
a very narrow scope (determining which combination and
sequencing of digital media impressions produces the highest
consumer response) but can be executed daily or even hourly
(see, e.g., Li and Kannan 2014). In contrast, complete marketing-
mix models that include product innovation and sales call metrics
in addition to various marketing communication and sales pro-
motion variables are typically executed monthly or weekly. The
latter, however, assign a much broader responsibility to marketing
than do the former. At the same time, greater data granularity
necessitates more advanced econometrics. A detailed discussion
of econometric advances in market response modeling is beyond
the scope of this article and may be found in Hanssens (2014).
How has academic research advanced the understanding
of the importance of marketing scope? Far too little, argue
Lee, Kozlenkova, and Palmatier (2015). In a recent review,
they call for structural marketing: explicit consideration of
organizational structure when assessing the value of mar-
keting. They hypothesize that moving to a customer-facing
structure increases effectiveness but reduces efficiency in
obtaining data on how products perform. A few academic
articles have investigated whether a more customer-focused
organizational structure induces a market orientation, with
mixed findings. Likewise, the 2015 Marketing Science Institute
conference on “Frontiers in Marketing” featured several man-
agement questions and comments on the cost–benefit trade-offs
of customer-focused teams.
Our recommendation is twofold: we agree with Lee,
Kozlenkova, and Palmatier’s (2015) call for more research on
the impact of organizational structure on market-related out-
comes, but we would also like to see more attention paid to the
relationship between marketing performance and marketing
scope. To what extent does excellent performance help mar-
keting increase its scope and get it a “seat at the table” (Webster,
Malter, and Ganesan 2003)? Or is it the communication of such
performance (i.e., “marketing the marketing department”) that
matters most? Because the answer may depend on the industry
and company setting, we recommend further research on the
boundary conditions of the interplay between organizational
structure, marketing actions, and performance outcomes.
Marketing Decisions: Budgets or Allocations?
It is important to know whether marketing actions are con-
sidered tactical or strategic in assessing their value. Broadly
speaking, managerial decisions are either budget (investment)
or allocation (execution) decisions (Mantrala, Sinha, and
Zoltners 1992). For example, a CMO receives a $100 million
budget from his or her CEO, for whom this $100 million
represents an investment. The CMO allocates this budget to
traditional media, digital media, and sponsorships. The owners
of these three marketing groups make subsequent allocation
decisions for their respective (smaller) budgets, and so on.
Setting aside prevailing accounting standards that generally
force these allocations to be expensed in the spending period,
any marketing investment decision becomes an allocation
decision one level down in the hierarchy.
The deeper in the organizational hierarchy one goes, the
more tactical the allocation decisions become, and the more
junior the decision makers are. For example, the decision
to advertise on channel 4 rather than channel 7 is tactical
relative to the higher-order decision to allocate 40% of the
marketing budget to television advertising. At the same
time, the deeper one goes in the hierarchy, the more detailed
the available databases are and, therefore, the more opportu-
nity for analytics-enhanced decision making. Such tactical
decisions lend themselves to continuous data collection and
decision automation, which is a decentralizing force in the
organization (Bloom et al. 2014). However, analytics and
decision support systems should support the different decision-
making modes of optimizing (typical for very structured,
tactical marketing problems), reasoning, analogizing, and
creating (typical for more strategic marketing problems)
(Wierenga and Van Bruggen 2012).
Academic research in marketing has tended to focus
on tactical decisions rather than on strategy. For example,
product line and distribution elasticities are at least seven
times higher than advertising elasticities, which makes them
strategically more relevant (Ataman, Van Heerde, and Mela
2010; Shah, Kumar, and Zhao 2015), but the abundance
of data on the latter has resulted in many more academic
publications on advertising effects than on distribution or
product line effects on business performance. This tendency
is amplified by the increased availability of micro-level mar-
keting data, especially in digital marketing.
Academic research specifically on strategy versus tactics
has focused mainly on the relative merits of setting the budget
size or allocating a given budget (e.g., Mantrala, Sinha, and
Zoltners 1992). More recently, Holtrop et al. (2015) show that
competitive reactions on a strategic level differ substantially
from reactions at a tactical level. Interestingly, strategic
actions (presumably by senior managers) follow marketing
theory expectations, whereas tactical actions (presumably by
junior managers) often violate research recommendations
by (1) retaliating when unwarranted and with an ineffective
marketing instrument and (2) accommodating with an effective
marketing instrument. Manchanda, Rossi, and Chintagunta
(2004) obtain similar findings. Both articles focus on the
pharmaceuticals industry; their important results regarding
suboptimal marketing resource allocations are in need of
replication in different sectors.
In marketing practice, the focus on marketing tactics
benefits the organization’s accountability and profitability
but rarely creates sustained business growth, which is a
more strategic objective. For business growth, product and
process innovation become more important, as evidenced
by empirical work demonstrating the positive impact of inno-
vation on firm value (e.g., Sorescu and Spanjol 2008).
Analytics in the product innovation area has focused
mainly on measuring consumer response to new product
offerings—in particular, using conjoint analysis. The internal
customer of such work is typically the product development
group, which is a separate entity from marketing, with a
separate budget. As a result, the insights from one function are
rarely incorporated in the other; for example, the results from
conjoint analyses (used by the product development group) are
typically not included in marketing-mix models (used by the
marketing group). The critical element of product appeal (e.g.,
conjoint utility) may therefore be missing from demand models,
resulting in lower-quality sales forecasts.
A powerful illustration of the strategic importance of in-
novation is in investor reactions to new product launches, as
measured by stock returns. Not only is investor reaction
typically positive, despite the costs and the risk involved, but
it occurs well ahead of the typical diffusion pattern of the new
product. As an example, when Honda introduced the “sunken
third-row seat” innovation in its minivan, the Odyssey, the
innovation effect was fully absorbed in its stock price in
approximately 12 weeks, whereas the sales diffusion of the
product takes much longer. One can surmise that investors realize
the financial value of such an innovation after the first few
weeks of positive consumer feedback and then assume that
the marketing of the innovation will be well executed, so
that the new product can reach its full market potential (Pauwels
et al. 2004).
We recommend a broad definition of marketing in the
organization and a commensurate broad inclusion of business
functions in the generation of demand models for marketing
resource allocation. This task can be complex because data
from a variety of sources need to be combined in an integrated
data and analytics platform. Importantly, such a platform can
become the much-needed integrator of intelligence for senior
management decisions and, as such, a centralizing force in the
modern enterprise (Bloom et al. 2014). This means that the
same strategic asset—the data and analytics platform—serves
as both a centralizing (of intelligence) and a decentralizing
(of execution) force, whereby both directions offer tangible
advantages to the firm.
Methods and Findings About
Assessing Marketing Value
Marketing value measurement has both a methodological
and a knowledge component. We focus on these two here,
leaving the third component, communication of marketing
value, to the next section.
Methods: Models, Surveys, and Experiments
Marketing impact can be assessed empirically in two ways:
by modeling historical data (secondary data) and by running
surveys and experiments (primary data). Both methods have
their proponents and advantages; however, neither is typically
sufficient by itself to convince decision makers of the value of
marketing and to induce change in marketing decision making.
The use of historical data sources has benefited tremen-
dously from improvements in consumer and marketing
databases and from developments in statistics (mainly
econometrics) and computer science. On the data side, recent
history has seen the emergence of scanner databases; customer
relationship management databases; and digital search, social
media, and mobile-marketing databases. On the modeling side, a
steady stream of econometric and computer science advances
has delivered the improvements in estimation methodology
necessary to deal with these novel data (Hanssens 2014; Ilhan,
Pauwels, and Kübler 2016; Murphy 2012).
Criticism of models estimated on historical data stems
mainly from their limitations in capturing “reasons why” (as
shown in surveys) or causal connections (as shown in exper-
imental manipulations). A survey may show that one consumer
visited the brand’s website for reasons of purchase interest,
whereas another visited to rationalize his or her choice for a
competing brand—information not obtainable from models
estimated on historical data.
In particular, the “two geneities” (heterogeneity and endo-
geneity) are challenging for marketing modelers. Heterogeneity
(i.e., differences in response to marketing among consumers)
has been addressed successfully thanks to simulated Bayes es-
timators (for a comprehensive review, see Rossi, Allenby, and
McCulloch 2005). Endogeneity (i.e., the existence of decision
rules in marketing that may bias the results of statistical response
estimation) continues to pose major challenges, which are dis-
cussed in Rossi (2014). However, as marketing databases be-
come more granular (monthly data intervals become weekly,
daily, hourly, or even real time), the endogeneity challenge is
easier to handle because the response models become more
recursive in nature. In these higher-frequency databases, atten-
tion shifts to long-term impact measurement, in particular the
testing for persistent effects, for which modern time-series
techniques are readily available (see Hanssens, Parsons, and
Schultz 2001; Leeflang et al. 2009).
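To make the persistence logic concrete, here is a hedged sketch that fits a small vector-autoregressive model to synthetic weekly data and inspects cumulative impulse responses; the data-generating process, variable names, and lag settings are illustrative choices, and statsmodels' VAR is only one of several suitable tools.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Synthetic weekly data: advertising shocks carry over into sales with decay.
rng = np.random.default_rng(1)
n = 156  # three years of weekly observations
adv = 100 + rng.normal(0, 10, n)
sales = np.empty(n)
sales[0] = 200.0
for t in range(1, n):
    sales[t] = (200 + 0.6 * (sales[t - 1] - 200)
                + 0.8 * (adv[t - 1] - 100) + rng.normal(0, 5))

data = pd.DataFrame({"adv": adv, "sales": sales})

# Fit a VAR and trace the cumulative response of sales to an advertising shock.
results = VAR(data).fit(maxlags=8, ic="aic")
irf = results.irf(26)              # impulse responses over 26 weeks
cum = irf.cum_effects              # shape: (periods + 1, n_vars, n_vars)
adv_idx, sales_idx = 0, 1
print("cumulative sales response to a one-unit adv shock after 26 weeks:",
      round(cum[-1, sales_idx, adv_idx], 2))
# The simulated process implies a long-run multiplier of 0.8 / (1 - 0.6) = 2.0.
# If the cumulative response keeps growing instead of leveling off, that is a
# first signal of persistent (long-term) effects, to be confirmed with the
# formal unit-root and persistence tests from the time-series literature
# cited above.
```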
Field experiments, by contrast, require customers and/
or managers to react to an intervention at the time of data
collection and allow for a direct comparison of treatment and
control conditions, thereby removing concerns about endo-
geneity. Unfortunately, field experiments are often costly to
conduct, limited to changing only one or a few decision vari-
ables at a time, and require trust in the organization that dis-
appointing outcomes will not be held against the manager.
For example, managers and salespeople often object to being
part of the control group for a potentially impactful marketing
action. Even online, where experiments are relatively easy to
implement, companies often refuse to do so (Ariely 2010).
Finally, marketing experiments are run for a limited amount of
time and therefore are typically unable to detect long-term
effects of a particular marketing action. Exceptions include
longitudinal single-source field experiments (e.g., Lodish
et al. 1995) and digital-marketing experiments in which,
under the right circumstances, subjects can be tracked dig-
itally after the experiment has concluded in order to infer
long-term effects.
The best insights on marketing value will come from the
combined use of secondary and primary data. Indeed, taken
together, models, surveys, and experiments provide the ben-
efits of highest decision impact at a moderate cost and risk. Yet
what is the best sequence? In our experience, a field experi-
ment on a strategic decision is perceived as too risky without a
model or survey to justify the treatment proposal. For instance,
furniture company Inofec (Wiesel, Arts, and Pauwels 2011)
first had analysts run a response model based on historical data.
After simulating potential scenarios based on the model output,
management decided to double spending on one marketing
channel (paid search) and to halve it on the other (direct mail).
In the ensuing field experiment, the treatment condition earned
14 times the net profit earned by the control condition.
Modeling the data of the field experiment revealed that paid
search continued to yield high returns but that the reduced
direct-mail budget began to break even. As a result, the
company further experimented with increasing paid search
but kept direct mail at its new level.
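As a stylized sketch of the "simulate scenarios from the fitted model" step before experimenting, the code below evaluates a proposed reallocation under an assumed constant-elasticity response model; the spend levels, margin, and elasticities are invented for illustration and are not Inofec's actual figures.

```python
# Stylized what-if simulation from a fitted multiplicative (constant-elasticity)
# response model: sales = baseline * (search/search0)^e_search * (mail/mail0)^e_mail.
# All figures below are invented for illustration.
baseline_sales = 500_000.0          # weekly revenue at current spending
margin = 0.30                       # contribution margin
current = {"paid_search": 5_000.0, "direct_mail": 10_000.0}
elasticity = {"paid_search": 0.25, "direct_mail": 0.05}

def weekly_profit(spend: dict) -> float:
    lift = 1.0
    for channel, e in elasticity.items():
        lift *= (spend[channel] / current[channel]) ** e
    revenue = baseline_sales * lift
    return margin * revenue - sum(spend.values())

scenario = {"paid_search": 2 * current["paid_search"],     # double paid search
            "direct_mail": 0.5 * current["direct_mail"]}   # halve direct mail

print(f"profit, current plan:  {weekly_profit(current):,.0f}")
print(f"profit, proposed plan: {weekly_profit(scenario):,.0f}")
# If the simulated gain looks material, the proposal graduates to a field
# experiment before full rollout, as in the Inofec case.
```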
In situations in which both approaches are feasible,
we recommend the sequence of model, experiment, model,
experiment (MEME) to obtain the maximum impact of
analytics-driven decision making. At the same time, sur-
veys and other methods should be used to provide insight
into the “why” and “how” of customer behavior. Further
research should analyze whether the MEME sequence is
the most productive across situations, consider other possible
sequences, and establish boundary conditions. Regardless of
the method used, a critical question for management is whether
market conditions will have changed by the time the actual
decision is made. The beliefs that change outpaces analytic
insights and that past patterns do not apply to the future hinder
the use of marketing analytics in many organizations.
Findings on Marketing Investments and Allocations
Previously, we discussed investments and allocations in
terms of their relationship to strategy and tactics. Next,
we discuss findings more broadly. Table 2 shows dif-
ferences between allocation and investment decisions on
several fronts. Managers and academics are keenly interested
in decision rules for both, as is evident from the fact that this
topic appears frequently among the biennial research priori-
ties disseminated by the Marketing Science Institute.
Notably, most applications in marketing analytics (includ-
ing analytics exploiting big data) focus on the deep dive for
tactical allocations (see Table 2). Insofar as these contributions
overemphasize areas in which good data are readily available,
they run the risk of being bogged down in details and failing
to see the forest for the trees. In contrast, when complete
marketing-mix data are used along with econometric methods
for inferring long-term impact, marketing analytics can also be
very helpful for strategic investment decisions and for quan-
tifying risk in such decisions (e.g., Leeflang et al. 2009).
In academic research, empirical generalizations on sales
response functions provide valuable guidance for marketing
spending (Hanssens 2015). Table 3 provides a quantitative
overview, expressed as sales or market value elasticity esti-
mates. These relate directly to marketing spending rules by
virtue of the fact that, at optimality, a firm should allocate re-
sources in proportion to its response elasticities (Dorfman and
Steiner 1954). Table 3 also indicates the extent to which the
marketing variable is an organic growth driver (i.e., its impact
on sales is sustained rather than temporary). This is an im-
portant distinction because it identifies the strategic nature of
marketing activities. Although price promotions and adver-
tising for existing brands (which often consume the majority of
marketing’s budget and effort) are not major organic growth
drivers of company performance, marketing assets (e.g., cus-
tomer satisfaction, brand equity) and actions (e.g., distribution,
innovation) have a strong impact on long-term company value.
In an example from the French market, Ataman, Van Heerde,
and Mela (2010) demonstrated across 70 brands in 25 con-
sumer product categories that only breadth of distribution (.61)
and length of product line (1.29) had strong long-term sales
elasticities. By contrast, long-term elasticities of advertising
(.12) and sales promotion (–.04) were small or negative.
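As a hedged illustration of that allocation heuristic, the sketch below splits a fixed budget in proportion to assumed elasticities picked from the ranges in Table 3; a real application would also account for costs, constraints, margins, and the long-term effects discussed here.

```python
# Allocate a fixed budget across marketing instruments in proportion to their
# (assumed) sales response elasticities, in the spirit of Dorfman-Steiner.
# Elasticity values are illustrative picks from the ranges reported in Table 3.
budget = 10_000_000.0
elasticities = {
    "advertising": 0.10,
    "sales_calls": 0.35,
    "distribution": 1.00,
}

total = sum(elasticities.values())
allocation = {k: budget * e / total for k, e in elasticities.items()}

for instrument, amount in allocation.items():
    print(f"{instrument:>13}: {amount:>12,.0f}")
# Note: this first-order rule ignores cost differences, interactions, and the
# organic (long-term) growth distinction highlighted in Table 3.
```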
At this point, generalizations—expressed as response elasticities—exist for many quantifiable marketing inputs, along with expected ranges and distinctions between short-term and long-term effects on sales.
TABLE 2
A Comparison of Allocating and Investing Marketing Resources
| Allocating | Investing
Resources | Budget is received from senior management | Budget is created for junior management
Objectives | Efficiency, accountability of resource use | Stimulating profitable growth for the brand or firm
Use of analytics | Detailed analysis of (typically) one marketing-mix element | Integration across the marketing mix
Key challenges/risks | Exaggerated belief in the strategic importance of one’s own silo | Large financial consequences
Examples | Media-mix allocations; dynamic pricing | Product portfolio decisions across international markets
TABLE 3
Response Elasticities Summaries
| Typical Elasticity | Range | Drivers | Organic Growth Driver?
Advertising | .1 | 0 to .3 | Product newness, durables | Minor
Sales calls | .35 | .27 to .54 | Early life cycle, European markets | Major
Distribution | >1 | .6 to 1.7 | Brand concentration, high-revenue categories, bulky items | Major
Price | -2.6 | -2.5 to -5.4 | Stockkeeping unit level versus brand level, sales versus market share, early life cycle, durables | Minor
Price promotion | -3.6 | -2 to -12 | Storables versus perishables | No
E-word of mouth | Positive | .24 (volume), .42 (valence) | Low trialability, private consumption, independent review sites, less competitive categories | Possibly
Innovation (a) | Positive | N.A. | Radical versus incremental innovations | Major
Brand and customer assets (a) | .33 (brand), .72 (customer) | | | Major
(a) On firm value.
Source: Hanssens (2015).
Notes: N.A. = not applicable.
It is also apparent that
firms generally deviate from optimal (profit-maximizing)
spending in the marketing mix (i.e., they either over- or
underspend). However, because the spending objectives of a
firm or brand at any point in time are typically not known to
the researcher, this conclusion about apparent suboptimality
in spending remains tentative. One important conclusion that
can be drawn from Table 3 is that marketing communications
(i.e., advertising and sales calls) have the lowest elasticities.
Their relatively flat response curves imply that they are un-
likely to be the sole drivers of major performance change.
However, when combined with one or more of the other
marketing-mix elements, their impact can be substantial. For
example, a recent study of high-level digital cameras dem-
onstrated that when a camera brand receives highly positive
reviews, advertising can have positive trend-setting effects
on brand sales (Hanssens, Wang, and Zhang 2016). During
these fleeting windows of opportunity, the combination of
high perceived product quality and advertising produces
long-lasting impact that neither driver can achieve by itself.
Such findings illustrate that the timing and sequencing of
marketing initiatives can be determining factors of their
impact.
Recent research has identified conditions in which the
most value is generated, such as distribution in emerging
countries (e.g., Pauwels, Erguncu, and Yildirim 2013), new
product launch during recessions (e.g., Talay, Pauwels, and
Seggie 2012), and owned (vs. paid online) media for lesser-
known products and for services (Demirci et al. 2014). We
call for further research on these and other influential market
conditions.
Researchers should not only help companies identify their
response functions but also derive where on the function
companies’ current spending lies. This enables firms to deter-
mine whether to allocate more or less to various marketing
activities than in previous years. Mantrala et al. (2007) demon-
strate this for the publishing industry. An alternative approach
is to run marketing experiments to assess alternative levels of
expenditure and different programs and their resulting impact.
This was done, for example, by the U.S. Navy to determine
optimal levels of recruiters and advertising support to reach its
manpower goals (Morey and McCann 1980). More recently,
the advent of the digital marketing era has allowed for a more
extended use of experimental designs to make advertising more
effective. This is achieved principally through an improved
understanding of the consumer journey (i.e., What are pros-
pects’individual propensities to buy and how can they be
increased through various targeted marketing efforts?; see, e.g.,
Li and Kannan 2014).
Connecting and Integrating Soft Metrics and
Hard Metrics
Whereas finance practice is the domain of hard, monetary
performance metrics, marketing practice has traditionally been
the domain of soft, attitudinal metrics. The marketing literature
has discussed attitude metrics at least since Colley’s (1961)
work on the effect of advertising on how targeted customers
think and feel. Recent literature has demonstrated that includ-
ing such attitude (or “purchase funnel”) metrics in market re-
sponse models increases their predictive and diagnostic power
(Hanssens et al. 2014; Pauwels, Erguncu, and Yildirim 2013;
Srinivasan, Vanhuele, and Pauwels 2010). Furthermore, the
digital age has provided even more metrics of (prospective)
customer behavior in customers’ online decision journey (Court
et al. 2009; Lecinski 2011). A key question is how to integrate
soft (attitude) and hard (behavior) metrics, both conceptually
and in empirical models (Marketing Science Institute 2014).
A recent study by Pauwels and Van Ewijk (2013) ad-
dresses this question both conceptually and empirically for
36 brands in 15 categories, including services, durables, and
fast-moving consumer goods. They observe that survey-based
attitude metrics typically move more slowly (i.e., have a lower
variance) than weekly sales, while online behavior metrics
move faster than weekly sales. Thus, attitudes and online
actions represent, respectively, slow and fast lanes on the
road to purchase. Dynamic system models reveal dual cau-
sality among survey-based attitudes and online actions, leading
to the framework in Figure 2.
Although this road-to-purchase framework is inspired by
the classical Think–Feel–Do distinction, it recognizes that the
digital age provides many more metrics regarding customer
behavior, including online search, clicks, website visits, and
(social media) expressions of consumption and (dis)sat-
isfaction. Online behavior does not simply reflect underlying
attitudes (e.g., a known brand obtains higher click-through on
its ads); it also shapes them. For instance, consumers shop-
ping for their next smartphone may begin with a few brands in
mind but then discover new ones online through reviews,
(price) comparison sites, and social media, which increase their
thoughts and feelings about those new brands (Court et al.
2009). This “zero moment of truth” (Lecinski 2011) of online discovery now precedes consumers’ observing the brand at retail in the “first moment of truth” and consuming it in the “second moment of truth.”

FIGURE 2
Integrative Model of Attitudes and Actions on the Consumer Road to Purchase
Source: Pauwels and Van Ewijk (2013).
Only a few studies to date have quantified the connection
between soft and hard metrics in ways that managers can use.
Srinivasan, Vanhuele, and Pauwels (2010) analyze a large
number of consumer products and report strong cumulative
sales elasticities for advertising awareness (.29), consumer
consideration (.37), and consumer liking (.59). A recent
meta-analysis in digital marketing reveals that the sales elastici-
ty of electronic word of mouth averages .42 for valence
(sentiment) and .24 for volume (You, Vadakkepatt, and Joshi
2015). These elasticity results compare favorably with those
in Table 3.
Although recent studies have provided some guidance on
integrating soft metrics and online behavior into marketing
analytics, more research is needed to learn the best ways to
model the consumer decision journey and shed light on
whether there are models that are more appropriate than the
decision funnel (Marketing Science Institute 2014, p. 4). The
findings are likely to be nuanced and to vary depending on
the category (high involvement or low involvement) and
existing brand strength (Demirci et al. 2014). This is an
important agenda because attitudinal and transactional met-
rics are not highly correlated, and thus brands run the risk of
focusing on the wrong performance metric in conducting their
marketing valuations.
Dealing with Risk
Risk considerations have had little systematic coverage in mar-
keting academia or practice. Studies of the relationship
between marketing and firm value (the bottom box in Figure 1)
have discussed risk factors because they are critical in investor
valuation of assets or future income streams. Whereas the fi-
nance literature has focused mainly on systematic risk (i.e., risk
faced by all companies in the market), the marketing literature
offers insights into idiosyncratic risk (i.e., risk tied to unique
circumstances of the specific company). For example, Rao
and Bharadwaj (2008, 2016) demonstrate that effective mar-
keting not only generates future cash flows but also lowers
the working capital that is required to accommodate different
scenarios in the economic environment. These authors argue
convincingly that demonstrating the connection between mar-
keting and firm value is essential if marketing is to be a
part of strategic planning in the enterprise. An empowered
CMO—defined as a proficient demand forecaster and
marketing decision maker—is uniquely able to do this
because of his or her “outside-in view” and knowledge
about likely consumer response to different business ini-
tiatives. Drawing on that knowledge, the CMO can project
cash flows and required working capital (both of which
drive firm value) under different economic scenarios and
then advise top management on the best course of action for
the firm’s shareholders. As such, marketing’s ability to man-
age business risk is an integral part of its value creation for
the firm.
In practical terms, an empowered CMO needs to show-
case his or her ability to manage marketing-induced risk, given
uncertainty about consumer, retailer, and competitive reactions
and the timing of these responses (Pauwels 2014). Most studies
that have examined the consequences of risk for marketing
planning, execution, and results monitoring have performed
scenario analyses that contrast best and worst cases on the basis
of estimated standard errors of response coefficients. Only one
academic article to date, by Albers (1998), has formalized
this process. After specifying the response functions dis-
cussed in the previous section, Albers decomposes the devi-
ation between actual and predicted performance into (1) incorrect
market response assumptions (planning variance), (2) devia-
tions of actual marketing actions from planned ones (execu-
tion variance), and (3) misanticipation of competitive reactions
(reaction variance). Each of these variances can be decomposed
further into the separate effects of single marketing instruments.
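One way to make such a decomposition operational is the telescoping sketch below, which splits the gap between actual and planned sales into planning, execution, and reaction components under an invented response function; Albers (1998) provides the formal treatment, and all numbers here are illustrative assumptions.

```python
# Illustrative decomposition of (actual - planned) sales into planning,
# execution, and reaction variance, in the spirit of Albers (1998).
def sales(own_spend, competitor_spend, elasticity):
    """Toy response: sales rise in own spend, fall in competitor spend."""
    return 1_000.0 * own_spend ** elasticity * competitor_spend ** -0.10

# Plan: assumed elasticity, planned own spend, expected competitor spend.
plan = sales(own_spend=100, competitor_spend=100, elasticity=0.10)

# Reality: true elasticity was lower, execution overspent, competitor reacted.
actual = sales(own_spend=120, competitor_spend=140, elasticity=0.08)

# Telescoping decomposition (each step changes one assumption at a time):
planning = sales(120, 140, 0.08) - sales(120, 140, 0.10)   # wrong response assumption
execution = sales(120, 140, 0.10) - sales(100, 140, 0.10)  # actions deviated from plan
reaction = sales(100, 140, 0.10) - sales(100, 100, 0.10)   # competitor deviated from plan

print(f"total deviation:      {actual - plan:8.1f}")
print(f"  planning variance:  {planning:8.1f}")
print(f"  execution variance: {execution:8.1f}")
print(f"  reaction variance:  {reaction:8.1f}")
# The three components sum to the total deviation by construction; the ordering
# of the steps is a modeling choice and matters when effects interact.
```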
Planning variance. Incorrect market response assump-
tions can stem from faulty predictions of market size (driven
by business cycle or other consumption trends that affect
the entire sector) or market share (driven by brand-specific
actions such as advertising messaging or relative price).
Understanding the extent of deviation that results from each
factor helps companies adjust future predictions and also
assign accountability to the proper party (industry forecasters
or brand managers). Although benchmarks exist for mar-
keting effect size (see Table 3), the timing of marketing wear-
in and wear-out effects remains uncertain in practice and is
relatively underresearched.
While early research (Little 1970) has suggested the pos-
sibility of wear-in times for marketing campaigns, empirical
evidence has mainly covered sales effects of advertising, new
product introductions, and point-of-purchase actions. The peak
sales effect of advertising occurs relatively quickly, typically
within two months (Pauwels 2004; Tellis 2004), and the wear-
in times for mindset metrics (e.g., awareness, liking, consid-
eration) are just over two months (Srinivasan, Vanhuele, and
Pauwels 2010). In contrast, new product introductions typically
take several months or years to take off (Golder and Tellis
1997). As can be expected, point-of-purchase actions work
either immediately or not at all (Pauwels 2004), with price
promotions standing out as the most studied marketing action
(Srinivasan et al. 2004). The effect of distribution changes
seems to take longer (2.1 months on average in Srinivasan,
Vanhuele, and Pauwels [2010]). Further investigation of dis-
tribution is important because distribution stands out as the
most impactful marketing action (Ataman, Van Heerde, and
Mela 2010; Bronnenberg, Mahajan, and Vanhonacker 2000).
Finally, we know very little about the timing of ROIs in new
(digital) media such as paid search, banner ads, and word-of-
mouth referrals. Notable exceptions include DeHaan, Wiesel,
and Pauwels’s (2015) study of 11 online and 3 offline adver-
tising forms for an online retailer and Trusov, Bucklin, and
Pauwels’s (2009) report that wear-out times are substan-
tially higher for word-of-mouth referrals than for traditional
marketing actions for a social networking site.
Similarly, we know little about the impact and temporal
effects of marketing spending on brand and customer value, as
opposed to sales response. In modeling terms, marketing brand
value effects are generally captured by state-space models with
Kalman filters (e.g., Naik, Prasad, and Sethi 2008) or by
Bayesian dynamic linear models (e.g., Ataman, Van Heerde,
and Mela 2010). The idea is that insofar as marketing induces
purchases that yield satisfactory consumer associations
with the brand, future purchases may occur without marketing
support, thus increasing baseline demand for the brand. Like-
wise, marketing actions may decrease price sensitivity and
thus increase the price premium (Ataman et al. 2016). Other
researchers have tracked the connection between marketing
spending, customer acquisition, and the value these actions
bring to the firm (Rust et al. 2004). Despite these methodo-
logical developments, we do not yet have a strong empirical
knowledge base on how marketing creates brand and customer
value over time.
Empirical generalizations on wear-in and wear-out effects
are necessary for managerial advice in cases in which data are
missing (Lehmann 2006). We need studies analyzing return
timing for investments in new media and new (emerging)
markets. Moreover, the timing of returns may systematically
vary by medium and target audience, a possibility that should
be taken into consideration when deciding on campaigns.
Considerable research is still required to determine the con-
tribution of marketing spending to a brand’s value as well as
when the firm realizes this value. Conversely, more research is
needed on the impact of cessation or reduction of marketing,
especially its long-term consequences. On that topic, Sloot,
Fok, and Verhoef (2006) find that assortment reductions lower
category sales in the short run, but less so in the long run.
Although Li and Kannan (2014) find virtually no short-term
sales loss from stopping paid search for a well-known brand,
Kireyev, Pauwels, and Gupta (2016) show substantial long-
term sales loss from reducing display and search ads for a
lesser-known brand. Finally, Ailawadi, Lehmann, and Neslin
(2001) report that Procter & Gamble’s strategic decision to
reduce price-promotional spending across 24 product cate-
gories resulted in a drop in long-term market shares but a gain
in profitability. More research of this type will help the CMO
project the impact of alternative marketing plans.
Execution variance and reaction variance. Execution
variance is very important in practice but has received virtually no attention in academic research (Albers 1998). Marketing executions
often stray from their plan because of third-party factors (e.g.,
the ad agency did not place billboards in time because of local regulations and insufficient temporary staff) or for in-
ternal reasons, such as lower-level managers reacting more
strongly to competitive moves than necessary (Holtrop et al.
2015). Albers (1998) provides the illustrative example of a
product manager decreasing the price more than planned and
switching the allocation away from distribution to adver-
tising. Because such occurrences are widespread, execution
variance and its consequences require further academic
research.
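To illustrate how such deviations might be attributed, the sketch below decomposes a hypothetical gap between planned and actual profit contribution into price, cost, volume, and interaction components; this is a generic plan-versus-actual variance decomposition in the spirit of, but not identical to, Albers's (1998) framework, and all figures are invented.

```python
# Hypothetical plan and actual figures for one product (units and currency arbitrary).
plan   = {"price": 10.0, "volume": 1000, "unit_cost": 6.0}
actual = {"price":  9.4, "volume": 1150, "unit_cost": 6.2}

def contribution(x):
    return (x["price"] - x["unit_cost"]) * x["volume"]

gap = contribution(actual) - contribution(plan)

# Change one driver at a time while holding the others at plan.
price_var   = (actual["price"] - plan["price"]) * plan["volume"]
cost_var    = -(actual["unit_cost"] - plan["unit_cost"]) * plan["volume"]
volume_var  = (plan["price"] - plan["unit_cost"]) * (actual["volume"] - plan["volume"])
interaction = gap - (price_var + cost_var + volume_var)  # joint effect of simultaneous changes

print(f"Total gap {gap:+.0f} = price {price_var:+.0f} + cost {cost_var:+.0f} "
      f"+ volume {volume_var:+.0f} + interaction {interaction:+.0f}")
```

Each component can then be traced to the party responsible for it (e.g., a deeper-than-planned price cut versus a volume forecast error), which is precisely the accountability question raised above.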
In contrast, academic research on competitive reaction is
plentiful, including research on its nature (aggressive, accom-
modating, or neutral), its speed, and its absence as a result of
competitors’ unawareness or inability to react (Chen 1996).
Notably, managers often overestimate the incidence of competitive reaction (e.g., Holtrop et al. 2015): research has shown that a lack of reaction is the dominant response, at least for advertising and price promotions (Steenkamp et al. 2005).
Even when there is a retaliatory competitive reaction, it typ-
ically decreases the sales benefit from price promotions across
fast-moving consumer goods categories by only 10% (Pauwels
2007). Competitive response has a similarly small impact on
the sales benefits of new product introductions, advertising, and
distribution activity (Pauwels 2004). Further research is needed to determine the boundary conditions of reaction size and variance. If competitive response variance is high, the firm may want to start a “competitive intelligence” initiative.
Beyond competitors, other market players (e.g., retailers)
also influence the ROMI, as does the marketing organiza-
tion itself—for example, through decision rules that favor
repeating past successes (Dekimpe and Hanssens 1999). The
marketing literature has focused thus far on estimating cus-
tomer and competitor response to marketing actions, but
much less so on the sector ecosystem response that includes
other players and the company’s own decision rules and
heuristics (Dekimpe and Hanssens 1999). A few notable
exceptions include studies on retailer pricing showing, for
example, that retailers tend to increase a promoted price back
to its regular level slowly rather than abruptly (Pauwels 2004;
Srinivasan et al. 2004; Tsiros and Hardesty 2010). Company
decision rules/heuristics include the managerial tendency to
weigh past prices when setting future prices (Krishna, Mela,
and Urbany 2000). Managers should be aware of such tendencies in their company’s decision making and investigate whether such habits remain appropriate in the current market environment.
The reaction of market players in offline environments has
been assessed by dynamic system modeling in data-rich envi-
ronments (e.g., Pauwels 2004) and by role playing in data-scarce
environments, such as one-shot negotiations (Armstrong 2001).
Further research is needed on the role of market player reac-
tions in worldwide competition in online environments. As for
research on the marketing–finance interface, more insights are
needed to assess whether investors react appropriately to mar-
keting actions and, thus, how valuable information about
investor reaction is for marketing decision making.
A key research priority is to go beyond documenting
reactions toward understanding the impact of that reaction
on the ROI of the initiating action. For marketing-mix actions,
is it really the case that the majority of the net sales impact
derives not from customer reaction but from support from
other marketing actions (Pauwels 2004)? For strategic mar-
keting actions, how does one assess likely competitive reac-
tion in deciding on location, product quality, and regular price
level?
In conclusion, when marketing plans do not materialize
as anticipated, the reasons can be various, as formalized by
Albers (1998). Only when an organization can identify the
reasons that apply to its own history can it take the right
corrective actions. Risk analysis in marketing planning
is more important to organizations than the paucity of prior
research suggests and, as such, it is one of the most promising areas for further research. This is especially important
if marketing is to become an integral part of strategic and
financial planning.
Communicating Marketing Value
Within the Organization
After defining and measuring marketing value, it is important
to properly communicate this value within the organization.
This creates closed-loop learning (see the feedback loops in
Figure 1), which both justifies future marketing activities and
examines them for increased effectiveness and/or efficiency.
Internally communicating the value of marketing requires
(1) communicating multiple objectives in marketing dash-
boards, (2) adapting communication to the style of the deci-
sion maker, and (3) adapting communication to the marketing
organization.
Communicating Multiple Objectives in
Marketing Dashboards
In addition to their stated objectives, decision makers also
have personal objectives such as retaining their jobs and
growing their career prospects. The use of marketing ana-
lytics is often impeded by a perception that analytics compete
with people in the organization. Some managers may be
fearful that the spread of analytics in decision making will
eventually make them redundant in the organization. This
need not be the case, as people and data (including models)
have distinct competencies and weaknesses (e.g., Blattberg
and Hoch 1990), which we summarize in Table 4.
Managers tend to excel at diagnosing new situations on
the basis of their experience and integrating a variety of cues,
especially so-called “broken-leg cues” (unusual situations
that may not have prior history but are intuitively known
to be important). However, human decision makers are also
subjective in their judgment and tend to rush to a decision
overconfidently, without properly accounting for uncertainty
and risk. These weaknesses are well addressed by models,
which account for uncertainty and weigh different cues on the
basis of past data and “optimal”rules. However, the rules
may be too rigid for a new situation, and the output of a model
inevitably depends on human inputs, which the model is not
designed to question.
Given those strengths and weaknesses, organizations
should design decision support systems that take advantage
of the distinctive competencies of managers while using tech-
nology to compensate for managers’ inherent weaknesses.
For example, after a firm’s business goals for the next
quarter or year are set, marketing planning should start with
analytics or dashboard input. Then, decision makers need to
judge the extent to which unique circumstances require some
of the model outputs to be adjusted. Cross-functional input
is paramount in this exercise, and there needs to be a sense
of internal ownership of the analytics platform across the
business functions. Finally, business objectives need to be
tied to resource allocations. Corstjens and Merrihue (2003)
give the example of global marketing resource allocation
at Samsung: when a model-inspired reduction in marketing
budget for product category Z in country X was enacted, the
sales quota for the manager in charge of ZX was lowered as
well, and vice versa for marketing spending increases. Such
coordinated actions help create a culture in which managers
view models and dashboards as their friends, not their nem-
eses. Automation of marketing decisions is likely to in-
crease for tactical decisions in stable markets, but less so
for strategic decisions, such as choosing new organic growth
options, setting the rules for automation, and reacting to
unexpected changes in turbulent markets (Bucklin, Lehmann,
and Little 1998).
To combine the best of model-based and human-based
strengths, researchers have proposed the use of an analytic
marketing dashboard (Pauwels et al. 2009). Like the dashboard of a car, a marketing analytics dashboard brings the main objectives and their metrics together in a single display. It provides “a concise set of interconnected performance drivers to be viewed in common throughout the organization” (Pauwels 2014, p. 7). Figure 3 shows the dashboard that Inofec managers used to project the profits expected from changes to price discounts and to offline and online marketing communication, which created the organizational buy-in to run experiments demonstrating actual profit hikes (Wiesel, Arts, and Pauwels 2011).
FIGURE 3: Marketing Analytic Dashboard for Inofec (Source: Pauwels 2014).
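A dashboard projection of this kind can be approximated with a simple elasticity-based what-if calculation, as in the sketch below; the baseline figures, channel names, and elasticities are hypothetical placeholders rather than the Inofec estimates.

```python
# Hypothetical baseline and revenue elasticities of spend (not the Inofec estimates).
baseline = {"revenue": 500_000.0, "margin": 0.35,
            "spend": {"catalog": 40_000.0, "online_ads": 20_000.0}}
elasticity = {"catalog": 0.10, "online_ads": 0.25}

def project_profit(spend_changes):
    """Project profit for proportional spend changes, e.g., {'online_ads': 0.30} for +30%."""
    revenue, total_spend = baseline["revenue"], 0.0
    for channel, base_spend in baseline["spend"].items():
        pct = spend_changes.get(channel, 0.0)
        revenue *= (1.0 + pct) ** elasticity[channel]   # constant-elasticity response
        total_spend += base_spend * (1.0 + pct)
    return baseline["margin"] * revenue - total_spend

current = project_profit({})
scenario = project_profit({"online_ads": 0.30, "catalog": -0.10})
print(f"Projected profit change from the scenario: {scenario - current:+,.0f}")
```

Presenting a handful of such scenarios side by side is the kind of projection that, in the Inofec case, created the buy-in to validate the most promising changes experimentally.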
Such communication tools make it possible to integrate
diverse business activities (some of them qualitative) with
performance outcomes. This helps managers in at least five
ways (Pauwels 2014). First, a dashboard enforces con-
sistency in measures and measurement procedures across
departments and business units. For example, Avaya pro-
vides business communication solutions in over 50 coun-
tries and diverse markets, with varying marketing tactics.
Before the dashboard project, the company had no common systems around the globe, used different definitions of what constituted a “qualified lead” (a key performance metric in the handoff from marketing to sales for business-to-business companies), and showed little regional interest in gathering metrics.
Second, a dashboard helps monitor performance. Mon-
itoring may be both evaluative (who or what performed
well?) and developmental (what have we learned?). Google
provides a good example: dashboard metrics are early indi-
cators of performance, and if a dip occurs in, for example,
the trust-and-privacy metric, the company takes corrective
action.
Third, a dashboard may be used to plan goals and strat-
egies. For example, TD Ameritrade’s corporate scorecards,
developed by the strategic planning department, led to a
dashboard that plugs into the planning cycle and is tied to
quarterly compensation.
TABLE 4
Advice for Communication in Analytic and Intuitive Companies

Analytic Decision Making                 Intuitive Decision Making
Present estimates                        Visualize effectiveness
Discuss assumptions                      Focus on main insights
Optimize allocation                      Adjust allocation
Optimize budget                          Adjust budget
Examples: Procter & Gamble, Allstate     Examples: Campbell’s, Inofec
Fourth, a dashboard may be used to communicate to
important stakeholders. The dashboard communicates not
only performance but also, through the choice of metrics, the
things an organization values. Vanguard’s dashboard, for example, enabled it to share with its corporate board its focus
on customer loyalty, feedback, and word of mouth.
Finally, a dashboard offers a good starting point for im-
portant discussions, such as when management sets stretch
targets without providing additional resources. For instance,
the U.S. division of an automotive company was instructed
to increase profits despite longer innovation cycles and lower
advertising budgets. Analytics and dashboard tools helped
the division present what-if scenarios and make its case to
headquarters that trade-offs were necessary by quantifying
the relation between marketing actions and profits.
Dashboards also allow for more effective communication
with marketing partners, especially as companies move to
performance-based compensation of agency work. As the
sales impact of performance metrics may differ across countries,
managers should use dashboard insights to set specific
metric targets (Pauwels, Erguncu, and Yildirim 2013). In
the case of the U.S. division of the aforementioned auto-
motive company, brand consideration was a more important
performance metric in an emerging market, while brand liking
was more important in a mature market. Further research is
needed to generate empirical generalizations and boundary
conditions in this regard.
Adapting Communication to the Style of the
Decision Maker
In their review of ISMS-MSI Practice Prize finalists, Lilien,
Roberts, and Shankar (2013) detail the characteristics of success-
ful marketing science applications. They advocate estimat-
ing simple, easy-to-use models and obtaining organizational
buy-in through, among other things, speaking the same
language as influential executives. Marketing analytics
customers strongly differ in their decision-making lan-
guage, with some companies favoring a more analytic style
and others using a more intuitive style. We recommend
communicating marketing analytics according to the com-
pany’s style.
When decision makers have a more analytical style,
presenting estimates and elasticities straight from the ana-
lytics helps them understand exactly what is going on and
how decision optimality is affected, for example, when deciding how to allocate marketing budgets on the basis of their relative elasticities. Even in such cases, though, it is best to provide the proper context, for example, by comparing television advertising elasticities with online advertising elasticities for online performance metrics, as Figure 4 shows.
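For instance, under a constant-elasticity (multiplicative) response and a fixed total budget, the profit-maximizing allocation assigns budget shares in proportion to the channels' elasticities, a standard result in the response-modeling literature; the sketch below applies that rule to purely hypothetical television and online elasticities rather than the Figure 4 estimates.

```python
# Hypothetical elasticities of a performance metric with respect to channel spend.
elasticities = {"television": 0.05, "online_display": 0.12, "paid_search": 0.18}
total_budget = 1_000_000.0

# With a multiplicative (constant-elasticity) response and a fixed total budget,
# the optimal allocation is proportional to the elasticities.
total_e = sum(elasticities.values())
allocation = {channel: total_budget * e / total_e for channel, e in elasticities.items()}

for channel, budget in allocation.items():
    print(f"{channel:>15}: {budget:>12,.0f} ({elasticities[channel] / total_e:.0%} of budget)")
```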
Decision makers with a more analytical style require more
information on the analytics assumptions and the uncertainty
around the performance projections. Academic researchers are
typically well versed in such explanations. In contrast, decision
makers with a more intuitive decision style may be averse to
discussions on confidence intervals, functional form, and error
distribution assumptions. Communicating analytics insights
in such environments requires more visualization, such as the
heat map of the projected profit consequences of changes to
marketing actions shown in Figure 5.
Figure 5 shows the highest profit (8.51; units disguised)
at a specific combination of price ($45) and advertising
budget ($3.25 million) but also communicates how close
other combinations are to this maximum projected profit. For
instance, at a current price level of $35, the decision maker
may feel uncomfortable with prices over $40, perhaps fear-
ing a customer backlash not included as a model variable.
The decision maker can look up the highest possible profit
and associated marketing actions for prices below $40. After
adjusting the price in this model-suggested direction, more
data and insights will then be available for recalibration
of analytics and intuition. Alternatively, the decision maker
might decide to allocate only the $2 million communica-
tion budget provided by his or her superior, the investor (see
Table 1). The heat map provides the decision maker with not
only the best outcome under the given budget (a projected
profit of 7.79) but also a quantitative argument for why profits
can be increased by up to 9% if the advertising budget is increased
toward its optimal level. As such, the heat map enables deci-
sion makers to tweak model-derived optimal allocations, which
provides a level of decision comfort. Decision comfort has been
shown to be an important contributor to managers’ willingness to adopt analytics in decision making (Parker, Lehmann, and Xie 2016).
FIGURE 4: Comparison of Television and Online Marketing Elasticities on Online Performance Metrics (Source: Pauwels and Van Ewijk 2013).
FIGURE 5: Profit Heat Map of the Interaction of Price and Advertising (Source: Pauwels 2014).
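A display like the Figure 5 heat map can be generated by evaluating projected profit over a grid of price and advertising levels and reading off the best cell, with or without a budget cap; the response surface and all parameter values in the sketch below are hypothetical and illustrate only the mechanics, not the disguised figures reported above.

```python
import numpy as np

# Hypothetical constant-elasticity response surface; all parameters are illustrative.
prices = np.arange(30.0, 50.5, 0.5)        # $ per unit, restricted to the observed range
ad_budgets = np.arange(1.0, 4.05, 0.05)    # advertising in $ million, observed range

P, A = np.meshgrid(prices, ad_budgets, indexing="ij")
demand = 1.4e9 * P ** -2.0 * A ** 0.15     # assumed price and advertising elasticities
profit = (P - 20.0) * demand - A * 1e6     # assumed unit cost of $20; ad spend in dollars

best = np.unravel_index(np.argmax(profit), profit.shape)
print(f"Grid optimum: price ${prices[best[0]]:.2f}, ad budget ${ad_budgets[best[1]]:.2f}M")

# Constrained read-off, e.g., when the superior caps the communication budget at $2M.
capped = np.where(A <= 2.0, profit, -np.inf)
best_capped = np.unravel_index(np.argmax(capped), capped.shape)
print(f"Best under a $2M cap: price ${prices[best_capped[0]]:.2f}, "
      f"ad budget ${ad_budgets[best_capped[1]]:.2f}M")
```

Plotting the profit matrix with a color scale yields the heat map itself, and restricting the grid to the range of past data is the safeguard discussed next.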
As for the danger of analytics users misunderstanding
the model’s assumptions, note that the heat map in Figure 5
restricts the decision maker’s range of potential price and
advertising levels. Contacts at the company preferred this
restriction to the range of the past data over showing increasing confidence intervals as users consider options farther away from the mean(s) of past marketing level(s). The contacts felt that although the latter might be appropriate for decision makers with an analytical style and background, it
would confuse other decision makers to the point that they
might not trust or use the model.
Empirical studies have shown that intuition may be better
than analysis in certain conditions—for example, for novices
under time pressure to make complex decisions (Wierenga
2011). Further research is needed to specify such conditions
for marketing decisions and to show how intuition and
analysis interact.
Adapting Communication to the Marketing
Organization
Beyond decision-making styles, the structure and organization of the marketing team matter in communication about marketing analytics. At least one analyst should be
included in a decision-making team. During discussions
about, for example, increasing spending on a marketing
action, the analyst could remind others that it has a small
sales elasticity. Such early inclusion of analytics insights
may reduce decision makers’ resistance to model-based
objections to proposals in which they are emotionally invested;
moreover, it may help companies guard against the tendency
of decision makers to cherry-pick the data and models
that generate results supporting a priori beliefs (Soyer and
Hogarth 2015). An example at a high strategic level is the
appointment of an algorithm to the board of directors of a
venture capital company (Wile 2014). In this way, analytics
has an independent vote in deciding which new venture
proposals to fund and can break the tie when the human
voters are split.
Conclusions
The multidimensional nature of marketing is expressed in
a variety of performance metrics—attitudinal, behavioral,
and financial—that turn out to be weakly interrelated. This
makes it difficult to assess marketing’s value and often re-
sults in skepticism about marketing’s contributions and a
reduction in the role of marketing at senior levels of decision
making. As the digital age marches on, new marketing applications are created (e.g., mobile targeting), which may push marketing toward an increasingly tactical function in organizations.
This has led us to study marketing value assessment from
three perspectives: metrics, models, and communication. Fol-
lowing the chain of marketing productivity (Figure 1), we
postulate that successful marketing value assessment needs to
reconcile the different performance metrics that are available,
combine historical data analysis with marketing experiments,
and significantly enhance the communication of analytical
results to an audience of decision makers who are not ana-
lytically oriented. Marketing educators can help bridge
this gap by integrating the assessment and the communication
of marketing value in their teaching. The current growth in
marketing and business analytics programs offers a clear op-
portunity in this regard.
We offer a brief review of what is currently known about
metrics, models, and communication, along with suggestions
for specific avenues for further research. First, we know that
market orientation and the use of marketing metrics improve
marketing performance, but we do not yet know how this
marketing performance (as opposed to marketing commu-
nication) drives the scope of marketing in the organization.
Second, we have rules for optimizing profits and sales, but
not for weighting different marketing objectives. Third, we
know how to measure effectiveness and efficiency, but not the
conditions under which each is most appropriately pursued,
nor do we know when it is best to use automated marketing
programs (which focus on efficiency). Fourth, empirical gene-
ralizations regarding response elasticities enable us to optimize
marketing allocations in the short run, but not yet to quantify
marketing synergies for organic growth, nor to identify
which conditions favor top-down allocations and which favor
bottom-up allocations.
Fifth, we know a lot about marketing elasticities on hard
performance metrics but know little about how marketing
affects soft performance metrics and how these relate to hard
performance under different conditions. Still unknown is
whether the complicated relation between soft performance
metrics and sales is better characterized by strong average
effects with large confidence intervals (high elasticity with
high noise) or by small average effects with tight confidence
intervals (low, precise elasticity). Furthermore, we need
to detect and explain outlier brands that buck the average
relationship among metrics (Ailawadi and Van Heerde 2015).
Sixth, risk has been decomposed in terms of performance
variance but is not yet quantified in the timing of these
performance returns. Moreover, we are limited in the advice
we can provide on the risk of stopping marketing activities
and optimal competitive reaction. Finally, we know several
generalities about communicating marketing value (e.g., visu-
alizations), but we have little insight into success factors
for different communication methods and for intuitive and
analytical decision making.
Our overall conclusions are as follows. First, marketing
value assessment is essential if marketing as a discipline wants
to exert an influence at the highest levels of the organization.
Its influence will also determine the scope of its role in the
organization, which could range from tactical execution of
advertising and promotion policies to being a fundamental
driver of organic growth.
Second, significant advances in data quality and quantity,
along with new analytical methods, have served marketing
value assessment well both in academia and in industry. Most
of these advances have occurred at the tactical level. In
particular, digitization allows for a much improved under-
standing of the connection between soft (attitudinal) and hard
(transactional) metrics.
Third, marketing analytics technology has been used mainly
for resource allocation decisions, not investment decisions.
Media mix and digital attribution models, for example, are
widely accepted and used. This evolution pushes market-
ing practice in an automated, programmatic direction, not
unlike the automated trading of securities on Wall Street. It
also necessitates the use of visualization methods to suc-
cessfully communicate the complexities of marketing value
creation.
Finally, to better serve the strategic aspect of marketing,
which is the key interest of senior management in the orga-
nization, databases will need to be better integrated across
the elements of the marketing mix, broadly defined. This
presents an opportunity for providers of enterprise resource-
planning solutions: by including customer and marketing data
in their systems, they can provide a unified data platform that
will allow for a cross-functional view of marketing and the
value of marketing in the organization.
REFERENCES
Ailawadi, Kusum, Donald R. Lehmann, and Scott A. Neslin (2001),
“Market Response to a Major Policy Change in the Marketing
Mix: Learning from Procter & Gamble’s Value Pricing Strat-
egy,”Journal of Marketing, 65 (January), 44–61.
——— and Harald van Heerde (2015), “Consumer-Based and
Sales-Based Brand Equity: How Well Do They Align?”working
paper, Tuck School of Business, Dartmouth College [available at
https://www.researchgate.net/publication/282602155_Consumer-
Based_and_Sales-Based_Brand_Equity_How_Well_Do_They_
Align].
Albers, Sönke (1998), “A Framework for Analysis of Sources of
Profit Contribution Variance Between Actual and Plan,”Inter-
national Journal of Research in Marketing, 15 (2), 109–22.
Ariely, Dan (2010), “Why Businesses Don’t Experiment,”Harvard
Business Review, (April), [available at https://hbr.org/2010/04/
column-why-businesses-dont-experiment].
Armstrong, Scott (2001), Principles of Forecasting: A Handbook
for Researchers and Practitioners. Berlin: Kluwer Academic
Publishers.
Ataman, Berk, Koen Pauwels, Shuba Srinivasan, and Marc Vanhuele
(2016), “Advertising’s Long-Term Impact on Brand Price Elas-
ticity Across Brands and Categories,”working paper, [available at
http://ssrn.com/abstract=2783096].
———, Harald J. van Heerde, and Carl F. Mela (2010), “The Long-
Term Effect of Marketing Strategy on Brand Sales,”Journal of
Marketing Research, 47 (October), 866–82.
Biesdorf, Stefan, David Court, and Paul Willmott (2013), “Big Data:
What’s Your Plan?”McKinsey Quarterly, (March), [available at
http://www.mckinsey.com/business-functions/business-technology/
our-insights/big-data-whats-your-plan].
Blattberg, Robert C. and Stephen J. Hoch (1990), “Database Models
and Managerial Intuition: 50% Model + 50% Manager,” Man-
agement Science, 36 (8), 887–99.
Bloom, Nicholas, Luis Garicano, Raffaella Sadun, and John Van
Reenen (2014), “The Distinct Effects of Information Technology
and Communication Technology on Firm Organization,”Man-
agement Science, 60 (12), 2859–85.
Bronnenberg, Bart J., Vijay Mahajan, and Wilfried Vanhonacker (2000),
“The Emergence of Market Structure in New Repeat-Purchase
Categories: A Dynamic Approach and an Empirical Application,”
Journal of Marketing Research, 37 (February), 16–31.
Bucklin, Randolph E., Donald R. Lehmann, and John D.C. Little
(1998), “From Decision Support to Decision Automation: A
2020 Vision,”Marketing Letters, 9 (3), 235–46.
Chen, Ming-Jer (1996), “Competitor Analysis and Interfirm Rivalry:
Toward a Theoretical Integration,”Academy of Management
Review, 21 (1), 100–34.
CMO Survey (2016), “TopLine Results,”(February), (accessed
February 28, 2016), [available at http://cmosurvey.org/files/
2016/02/The_CMO_Survey-Topline_Report-Feb-2016.pdf].
Colley, Russell H. (1961), Defining Advertising Goals for Mea-
sured Advertising Results. New York: Association of National
Advertisers.
Corstjens, Marcel and Jeffrey Merrihue (2003), “Optimal Market-
ing,”Harvard Business Review, (October), 3–8.
Court, David, Dave Elzinga, Susan Mulder, and Ole Jørgen Vetvik
(2009), “The Consumer Decision Journey,”McKinsey & Company,
(accessed July 20, 2016), [available at http://www.mckinsey.
com/business-functions/marketing-and-sales/our-insights/the-
consumer-decision-journey].
DeHaan, Evert, Thorsten Wiesel, and Koen Pauwels (2015), “The
Effectiveness of Different Forms of Online Advertising for Pur-
chase Conversion in a Multiple-Channel Attribution Framework,”
International Journal of Research in Marketing, (published elec-
tronically December 17), [DOI: 10.1016/j.ijresmar.2015.12.001].
Dekimpe, Marnik G. and Dominique M. Hanssens (1999), “Sus-
tained Spending and Persistent Response: A New Look at Long-
Term Marketing Profitability,”Journal of Marketing Research,
36 (November), 1–31.
Demirci, Ceren, Koen Pauwels, Shuba Srinivasan, and Gokhan
Yildirim (2014), “Conditions for Owned, Earned and Paid Media
Impact and Synergy,”Report 14-101, Marketing Science Institute.
Dorfman, Robert and Peter O. Steiner (1954), “Optimal Advertising
and Optimal Quality,”American Economic Review, 44 (5), 826–36.
Dwoskin, Elizabeth (2014), “Trends to Watch in 2015: From
Algorithmic Accountability to the Uber of X,”The Wall Street
Journal, (December 8), [available at http://blogs.wsj.com/digits/
2014/12/08/trends-to-watch-in-2015-from-algorithmic-accountability-
to-the-uber-of-x/].
Farris, Paul W., Dominique M. Hanssens, James D. Lenskold, and
David J. Reibstein (2015), “Marketing Return on Investment:
Seeking Clarity for Concept and Measurement,”Applied Mar-
keting Analytics, 1 (3), 267–82.
Germann, Frank, Gary L. Lilien, and Arvind Rangaswamy (2013),
“Performance Implications of Deploying Marketing Analytics,”
International Journal of Research in Marketing, 30 (2), 114–28.
Golder, Peter N. and Gerald J. Tellis (1997), “Will It Ever Fly:
Modeling the Takeoff of Really New Consumer Durables,”Mar-
keting Science, 16 (3), 256–70.
Gupta, Sunil and Valarie Zeithaml (2006), “Customer Metrics and
Their Impact on Financial Performance,” Marketing Science, 25 (6),
718–39.
Hanssens, Dominique M. (2014), “Econometric Models,”in The
History of Marketing Science, Russell S. Winer and Scott A.
Neslin, eds. Singapore: World Scientific Publishing, 99–128.
——— (2015), Empirical Generalizations About Marketing Impact,
2nd ed. Cambridge, MA: Marketing Science Institute.
———, Leonard J. Parsons, and Randall L. Schultz (2001), Market
Response Models: Econometric and Time-Series Analysis, 2nd
ed. Boston: Kluwer Academic Publishers.
———, Koen H. Pauwels, Shuba Srinivasan, Marc Vanhuele, and
Gokhan Yildirim (2014), “Consumer Attitude Metrics for
Guiding Marketing Mix Decisions,”Marketing Science, 33 (4),
534–50.
———, Fang Wang, and Xiao-Ping Zhang (2016), “Performance
Growth and Opportunistic Marketing Spending,”International
Journal of Research in Marketing, (published electronically
March 16), [DOI: 10.1016/j.ijresmar.2016.01.008].
Hill, Kashmir (2012), “How Target Figured Out a Teen Girl
Was Pregnant Before Her Father Did,”Forbes, (February 16),
[available at http://www.forbes.com/sites/kashmirhill/2012/02/
16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-
father-did/#419a7ea534c6].
Holtrop, Niels, Jaap E. Wieringa, Maarten J. Gijsenberg, and Peter
Stern (2015), “Competitive Reactions to Personal Selling: The
Difference Between Strategic and Tactical Actions,”working
paper, University of Groningen.
Ilhan, Behice Ece, Koen Pauwels, and Raoul Kübler (2016), “Dancing
with the Enemy: Broadened Understanding of Engagement in
Rival Brand Dyads,”Report 16-107, Marketing Science Institute.
Katsikeas, Constantine S., Neil A. Morgan, Leonidas C. Leonidou,
and G. Tomas M. Hult (2016), “Assessing Performance Out-
comes in Marketing,”Journal of Marketing, 80 (March), 1–20.
Keeney, Ralph L. and Howard Raiffa (1993), Decisions with Multiple
Objectives: Preferences and Value Trade-Offs. Cambridge, UK:
Cambridge University Press.
Kireyev, Pavel, Koen Pauwels, and Sunil Gupta (2016), “Do Display Ads
Influence Search? Attribution and Dynamics in Online Advertising,”
International Journal of Research in Marketing, (published elec-
tronically October 13), [DOI: 10.1016/j.ijresmar.2015.09.007].
Krishna, Aradhna, Carl Mela, and Joel Urbany (2000), “Inertia in
Pricing,”working paper, University of Notre Dame.
Lecinski, Jim (2011), Winning the Zero Moment of Truth. Mountain
View, CA: Google.
Lee, Ju-Yeon, Irina V. Kozlenkova, and Robert W. Palmatier
(2015), “Structural Marketing: Using Organizational Structure
to Achieve Marketing Objectives,”Journal of the Academy of
Marketing Science, 43 (1), 73–99.
Leeflang, Peter, Tammo Bijmolt, Jenny van Doorn, Dominique M.
Hanssens, Harald van Heerde, Peter Verhoef, et al. (2009), “Lift
Versus Base: Current Trends in Marketing Dynamics,”Inter-
national Journal of Research in Marketing, 26 (1), 13–20.
Lehmann, Donald R. (2006), “The Metrics Imperative,”in Review of
Marketing Research, Vol. 2, Naresh K. Malhotra, ed. Bingley,
UK: Emerald Group Publishing, 177–202.
Li, Hongshuang and P.K. Kannan (2014), “Attributing Conversions
in a Multichannel Online Marketing Environment: An Empirical
Model and a Field Experiment,”Journal of Marketing Research,
51 (February), 40–56.
Lilien, Gary L., John H. Roberts, and Venkatesh Shankar (2013),
“Effective Marketing Science Applications: Insights from the
ISMS-MSI Practice Prize Finalist Papers and Projects,”Mar-
keting Science, 32 (2), 229–45.
Little, John D.C. (1970), “Models and Managers: The Concept of
a Decision Calculus,”Management Science, 16 (8), B466–85.
——— (1979), “Aggregate Advertising Models: The State of the
Art,”Operations Research, 27 (4), 629–67.
Lodish, Leonard M., Magid Abraham, Stuart Kalmenson, Jeanne
Livelsberger, Beth Lubetkin, Bruce Richardson, et al. (1995),
“How T.V. Advertising Works: A Meta-Analysis of 389 Real
World Split Cable T.V. Advertising Experiments,”Journal of
Marketing Research, 32 (May), 125–39.
Lohr, Steven (2015), “Maintaining a Human Touch as the Algo-
rithms Get to Work,”The New York Times, (April 7), [available
at http://www.nytimes.com/2015/04/07/upshot/if-algorithms-know-
all-how-much-should-humans-help.html].
Manchanda, Puneet, Peter E. Rossi, and Pradeep K. Chintagunta (2004),
“Response Modeling with Nonrandom Marketing-Mix Variables,”
Journal of Marketing Research, 41 (November), 467–78.
Mantrala, Murali K., Prasad A. Naik, Shrihari Sridhar, and Esther
Thorson (2007), “Uphill or Downhill? Locating the Firm on a
Profit Function,”Journal of Marketing, 71 (April), 26–44.
———, Prabhakant Sinha, and Andris A. Zoltners (1992), “Impact
of Resource Allocation Rules on Marketing Investment-Level
Decisions and Profitability,”Journal of Marketing Research,
29 (May), 162–75.
Marketing Science Institute (2014), Research Priorities 2014–2016.
Cambridge MA: Marketing Science Institute.
Mela, Carl F., Sunil Gupta, and Donald R. Lehmann (1997), “The
Long-Term Impact of Promotion and Advertising on Consumer
Brand Choice,” Journal of Marketing Research, 34 (May),
248–61.
Morey, Richard C. and John M. McCann (1980), “Evaluating and
Improving Resource Allocation for Navy Recruiting,”Man-
agement Science, 26 (12), 1198–210.
Murphy, Kevin P. (2012), Machine Learning: A Probabilistic
Perspective. Cambridge, MA: MIT Press.
Naik, Prasad A., Ashutosh Prasad, and Suresh P. Sethi (2008),
“Building Brand Awareness in Dynamic Oligopoly Markets,”
Management Science, 54 (1), 129–38.
Nath, Pravin and Vijay Mahajan (2011), “Marketing in the C-Suite:
A Study of Chief Marketing Officer Power in Firms’Top
Management Teams,”Journal of Marketing, 75 (January),
60–77.
Natter, Martin, Thomas Reutterer, Andreas Mild, and Alfred Taudes
(2007), “An Assortmentwide Decision-Support System for Dy-
namic Pricing and Promotion Planning in DIY Retailing,”
Marketing Science, 26 (4), 576–83.
Parker, Jeffrey R., Donald R. Lehmann, and Yi Xie (2016), “De-
cision Comfort,”Journal of Consumer Research, (published
electronically February 1), [DOI: 10.1093/jcr/ucw010].
Pauwels, Koen (2004), “How Dynamic Consumer Response,
Competitor Response, Company Support and Company Inertia
Shape Long-Term Marketing Effectiveness,”Marketing Sci-
ence, 23 (4), 596–610.
——— (2007), “How Retailer and Competitor Decisions Drive the
Long-Term Effectiveness of Manufacturer Promotions for Fast
Moving Consumer Goods,” Journal of Retailing, 83 (3), 297–308.
——— (2014), It’s Not the Size of the Data—It’s How You Use It:
Smarter Marketing with Analytics and Dashboards. New York:
American Management Association.
———, Tim Ambler, Bruce Clark, Pat LaPointe, David Reibstein,
Bernd Skiera, et al. (2009), “Dashboards as a Service: Why,
What, How, and What Research Is Needed?”Journal of Service
Research, 12 (2), 175–89.
———, Selin Erguncu, and Gokhan Yildirim (2013), “Winning
Hearts, Minds and Sales: How Marketing Communication Enters
the Purchase Process in Emerging and Mature Markets,”Inter-
national Journal of Research in Marketing, 30 (1), 57–68.
——— and David Reibstein (2010), “Challenges in Measuring
Return on Marketing Investment,”in Review of Marketing
Research, Vol. 6, Naresh K. Malhotra, ed. Bingley, UK: Emerald
Group Publishing, 107–24.
———, Jorge Silva-Risso, Shuba Srinivasan, and Dominique M.
Hanssens (2004), “New Products, Sales Promotions and Firm
Value: The Case of the Automobile Industry,”Journal of Mar-
keting, 68 (October), 142–56.
——— and Bernadette van Ewijk (2013), “Do Online Behavior
Tracking or Attitude Survey Metrics Drive Brand Sales? An
Integrative Model of Attitudes and Actions on the Consumer
Boulevard,”Report 13-118, Marketing Science Institute.
Rao, Ramesh and Neeraj Bharadwaj (2008), “Marketing Initiatives,
Expected Cash Flows, and Shareholders’Wealth,”Journal of
Marketing, 72 (January), 16–26.
——— and ——— (2016), “The Importance of Empowered Chief
Marketing Officers to Corporate Decision-Making,”working
paper, University of Texas at Austin.
Rossi, Peter E. (2014), “Even the Rich Can Make Themselves Poor:
A Critical Examination of IV Methods in Marketing Applica-
tions,”Marketing Science, 33 (5), 655–72.
———, Greg M. Allenby, and Robert McCulloch (2005), Bayesian
Statistics and Marketing. Hoboken, NJ: John Wiley & Sons.
Rust, Roland T., Tim Ambler, Gregory Carpenter, V. Kumar, and
Raj Srivastava (2004), “Measuring Marketing Productivity: Cur-
rent Knowledge and Future Directions,”Journal of Marketing,
68 (October), 76–89.
Shah, Denish, V. Kumar, and Yi Zhao (2015), “Diagnosing Brand
Performance: Accounting for the Dynamic Impact of Product
Availability with Aggregate Data,”Journal of Marketing Re-
search, 52 (April), 147–65.
Sloot, Laurens M., Dennis Fok, and Peter Verhoef (2006), “The
Short- and Long-Term Impact of an Assortment Reduction on
Category Sales,”Journal of Marketing Research, 43 (Novem-
ber), 536–48.
Sorescu, Alina and Jelena Spanjol (2008), “Innovation’s Effect
on Firm Value and Risk: Insights from Consumer Packaged
Goods,”Journal of Marketing, 72 (March), 114–32.
Soyer, Emre and Robin Hogarth (2015), “Fooled by Experience,”
Harvard Business Review, (May), [available at https://hbr.org/
2015/05/fooled-by-experience].
Srinivasan, Shuba, Koen Pauwels, Dominique M. Hanssens, and
Marnik Dekimpe (2004), “Do Promotions Benefit Retailers,
Manufacturers, or Both?”Management Science, 50 (5), 617–29.
———, Marc Vanhuele, and Koen Pauwels (2010), “Mind-Set
Metrics in Market Response Models: An Integrative Approach,”
Journal of Marketing Research, 47 (August), 672–84.
Stahl, Florian, Mark Heitmann, Donald R. Lehmann, and Scott A.
Neslin (2012), “The Impact of Brand Equity on Customer
Acquisition, Retention, and Profit Margin,” Journal of Marketing, 76 (July), 44–63.
Steenkamp, Jan-Benedict E.M., Vincent Nijs, Dominique Hanssens,
and Marnik Dekimpe (2005), “Competitive Reactions to
Advertising and Promotion Attacks,”Marketing Science,
24 (1), 35–54.
Talay, M. Berk, Koen Pauwels, and Steven Seggie (2012), “To
Launch or Not to Launch in Recessions? Evidence from over 60
Years in the Automobile Industry,”Report 12-109, Marketing
Science Institute.
Tellis, Gerard J. (2004), Effective Advertising: Understanding When,
How, and Why Advertising Works. Thousand Oaks, CA: Sage
Publications.
Trusov, Michael, Randolph E. Bucklin, and Koen Pauwels (2009),
“Effects of Word-of-Mouth Versus Traditional Marketing:
Findings from an Internet Social Networking Site,”Journal of
Marketing, 73 (September), 90–102.
Tsiros, Michael and David M. Hardesty (2010), “Ending a Price
Promotion: Retracting It in One Step or Phasing It Out Grad-
ually,”Journal of Marketing, 74 (January), 49–64.
Webster, Frederick E., Alan J. Malter, and Shankar Ganesan (2003),
“Can Marketing Regain Its Seat at the Table?”Report 03-003,
Marketing Science Institute.
Wierenga, Berend (2011), “Managerial Decision Making in Mar-
keting: The Next Research Frontier,”International Journal of
Research in Marketing, 28 (2), 89–101.
——— and Gerrit van Bruggen (2012), Marketing Management
Support Systems: Principles, Tools and Implementation. Berlin:
Springer.
Wiesel, Thorsten, Joep Arts, and Koen Pauwels (2011), “Practice
Prize Paper: Marketing’s Profit Impact: Quantifying Online and
Offline Funnel Progression,”Marketing Science, 30 (4), 604–11.
Wile, Mike (2014), “A Venture Capital Firm Just Named an Algorithm
to Its Board of Directors—Here’s What It Actually Does,” Business
Insider, (May 13), [available at http://www.businessinsider.com/
vital-named-to-board-2014-5#ixzz3haj0RUQh].
You, Ya, Gautham G. Vadakkepatt, and Amit M. Joshi (2015), “A
Meta-Analysis of Electronic Word-of-Mouth Elasticity,”Jour-
nal of Marketing, 79 (March), 19–39.