Chapter 18
Addressing Endogeneity in Marketing Models
Dominik Papies, Peter Ebbes, and Harald J. Van Heerde
18.1 Introduction
The marketing literature uses regression models based on observational data for
causal inferences. Endogeneity issues are a threat to inferring causal effects.
Endogeneity—the correlation between the regressors and the model error term—
will lead to inconsistent estimates of the regression effects and potentially erroneous
conclusions. We discuss this in more detail in Sect. 18.2. The standard approach to
deal with endogeneity is to use an instrumental variables (IV) approach. In Sect.
18.3, we briefly introduce this technique before we highlight key aspects of IV selection.
One recent development in the domain of endogeneity correction is the quest for
instrument-free methods that allow researchers to correct for endogeneity without
the need to search for and justify IVs. We review these new techniques and highlight
their strengths and weaknesses, point out their identifying assumptions, and discuss
good practices when using these approaches. These methods are discussed in Sects.
18.4 and 18.5.
D. Papies (✉)
School of Business and Economics, University of Tübingen, Nauklerstr. 47,
Tübingen 72074, Germany
P. Ebbes
Department of Marketing, HEC Paris, 1, Rue de la Libération, Jouy-en-Josas 78351, France
H.J. Van Heerde
School of Communication, Journalism and Marketing, Massey University, Private Bag 102904,
Auckland 0745, New Zealand
© Springer International Publishing AG 2017
P.S.H. Leeflang et al. (eds.), Advanced Methods for Modeling Markets,
International Series in Quantitative Marketing, DOI 10.1007/978-3-319-53469-5_18
Section 18.6 then moves on to several extensions of IV estimation and we
consider the cases of multiple endogenous regressors, interactions that involve
endogenous regressors, including squared terms, and binary endogenous regressors.
An important theme in this chapter is that we should only try to correct for
endogeneity if there is a real concern that endogeneity is an important problem.
Including a rather complete set of control variables in the regression model is of
paramount importance. In several instances, controlling for endogeneity can lead to
more biased estimates, less significant estimates, and inferior predictions. Section
18.7 develops this discussion in more detail.
Our goal is to provide a mostly non-technical but comprehensive discussion of
endogeneity problems and their solutions. While we discuss recent and advanced
topics in IV estimation, our focus is on providing guidelines and best practices when
it comes to addressing an endogeneity problem in marketing models.
18.2 Endogeneity: Consequences and Caveats
18.2.1 What Is Endogeneity?1
Market response modeling is centered around the estimation of the effects of market-
ing activities on performance. However, marketing managers are often strategic in
their use of marketing activities and adapt them in response to factors unobserved by
the researcher. From an econometrician’s perspective, these management decisions
are endogenous to their expected effects on market performance. Empirical market
response models that seek to estimate the causal effect of marketing instruments
need to account for such strategic planning of marketing activities, or otherwise
may suffer from an endogeneity problem, leading to biased estimates of the effects
of the marketing activities on performance.
To illustrate the problem, consider a city with just one hotel. Suppose we have
data on hotel prices and demand for rooms for this hotel. What may be unknown to
a researcher is that the city has hosted a few major events throughout the year, and
the hotel manager capitalized on these events by raising prices during days of peak
demand. If the researcher now estimates a regression model for the effect of price on
demand, the estimated effect will be distorted because she did not include the major
events in the model. That is, during these events, there will be observations with high
prices and high demand, which goes against the commonly assumed downward-
sloping demand curve and which will bias the negative effect of price on demand
toward zero. This is the essence of the endogeneity problem: distorted estimates
due to a correlation between an independent variable (price in the example) and
unobserved factors that are part of the error for demand (major events in this case).
1See also Vol. I, Sects. 6.5–6.7.
In mathematical terms we can consider a simple demand model, in which y_t is
the market response (e.g., demand), and p_t is the price for rooms in week t. We are
interested in estimating² the effect of price on demand (β_p):

y_t = β_0 + β_p p_t + ε_t    (18.1)
We might be tempted to estimate this equation using Ordinary Least Squares
(OLS). When we do so, we implicitly assume that price is exogenous. However, an
endogeneity problem arises when price is correlated with the error of the demand
equation, i.e., when Cov(p_t, ε_t) ≠ 0, and we say that price is endogenous. The
implication then is that the OLS estimates are "distorted", or in more formal terms,
they are biased (i.e., have an expectation that is unequal to their true values) and
they are inconsistent (i.e., they do not converge to their true values when the sample
size grows to infinity).
In this stylized example, we have an endogeneity problem because the major
events were omitted from the model. If the researcher had observed one or more
variables describing the major events, she should have included these as control
variables in the model (e.g., dummy variables for the weeks during which the
event took place), and this would have taken care of the omitted variable problem.
Unfortunately, in many applications it is impossible to enumerate all possible
demand drivers, measure them, and include them in the model. Thus, the problem
of endogeneity can often not be addressed by control variables alone.
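This omitted-variable mechanism is easy to reproduce in a small simulation. The sketch below uses hypothetical variable names and parameter values (not taken from the chapter): when the event indicator is omitted, the OLS price effect is biased toward zero; once it is included as a control, the true effect is recovered.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5_000

# Unobserved demand shifter: major events in some periods (hypothetical values)
event = rng.binomial(1, 0.2, n)
# The manager raises prices during events, so price correlates with the omitted variable
price = 100 + 30 * event + rng.normal(0, 5, n)
# True demand: downward-sloping in price (slope -1), higher during events
demand = 500 - 1.0 * price + 25 * event + rng.normal(0, 10, n)

def ols(X, y):
    """OLS coefficients via the normal equations."""
    return np.linalg.solve(X.T @ X, X.T @ y)

X_short = np.column_stack([np.ones(n), price])        # omits 'event'
X_full = np.column_stack([np.ones(n), price, event])  # controls for 'event'

b_short = ols(X_short, demand)  # price slope biased toward zero
b_full = ols(X_full, demand)    # price slope close to the true -1
```

In the short regression the positive price–event correlation pushes the estimated price effect toward zero; the full regression removes the bias, illustrating why a rich set of controls matters.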
A second example is where we have a cross-section of observations. Suppose
we observe demand for hotels (i = 1, …, I) in a city for one given year (one
observation per hotel). Now y_i is annual demand and p_i is the price for rooms in
hotel i (for simplicity, let's assume prices are set on a yearly basis). We are interested
in estimating the effect of price on demand (β_p):

y_i = β_0 + β_p p_i + ε_i    (18.2)
Hotels differ in quality levels, which are not observed by the researcher. Higher-
quality hotels will be in higher demand and they are therefore able to charge higher
prices. This can again lead to a correlation between price and the error term, which
captures unobserved quality, i.e., we have an endogeneity problem. There may be
several other ways in which price and the error term in Eq. (18.1) or (18.2) are
correlated, so that an endogeneity problem can still arise even if all omitted variables
are controlled for (Bascle 2008). One way is when price is measured with error,
such that we have a measurement error problem. Another way is when price and
demand are determined simultaneously, as it is the case, for example, in an auction
for commodities.
² For the sake of exposition we assume here that the observations are independent across time and
we do not consider autocorrelation in the errors (e.g., Verbeek 2012, Chap. 4).
Endogeneity has arguably become the number one issue in empirical marketing
studies since the late 1990s. The reason is that valid causal effects are often the
desired outcomes of a market response model. A researcher would likely want
to tell a manager: “If you change the marketing variable by x%, the performance
changes by y%”. To make such a statement, consistently estimated parameters that
do not have an endogeneity bias are essential. In experimental studies in which the
researcher has full control over how the values of the regressors are set (i.e., random
variation of prices), estimating valid causal effects is usually straightforward, and
endogeneity is not an issue.
In many empirical applications, however, experimental studies are infeasible,
and the researcher has to rely on observational data. Here, the researcher does not
have full control over how the values of the regressors are set: she observes
variation in the marketing instruments, but the source of that variation is beyond her
control. In these situations, causal statements (e.g., changing a marketing instrument
by x% leads to a performance change of y%) can only be made with the help
of identifying assumptions (e.g., Pearl 2009). We explore different methods, their
underlying assumptions, and the research implications of these assumptions in this chapter.
18.2.2 How Strong are Endogeneity Biases?
Most academic marketing papers focus on causal effects and hence endogeneity
considerations are relevant. This is not only apparent in the cases we discuss
in this chapter, but also evident in meta-analyses on marketing effectiveness.
Bijmolt et al. (2005) study 1851 price elasticities that were published across
40 years in 81 articles, and report an average elasticity of −2.62. One of the
factors that drives price elasticity estimates is whether or not the estimation
method accounts for endogeneity. Demand is substantially more price elastic when
controlling for endogeneity (elasticity = −3.74) than when endogeneity is ignored
(elasticity = −2.47). In other words, without endogeneity controls, the price
elasticity estimate is biased toward zero, similar to the hotel example from before.
In general, the sign of an omitted variable bias is the sign of the correlation between
the included and excluded variable. Using this logic, a positive bias in the elasticity
implies there must be a positive correlation between price and the unobserved
demand shock: managers raising prices in case of a positive shock in demand.
The meta-analysis of advertising elasticities by Sethuraman et al. (2011) also
shows a significant effect of endogeneity correction. The average short-term
advertising elasticity is 0.12 across 751 estimates from 56 studies published between
1960 and 2008. They find the advertising elasticity is lower when endogeneity is
not incorporated than when it is accounted for. In other words, the omission of
endogeneity leads to a negative bias in the estimates, which is consistent with
Villas-Boas and Winer (1999). This negative bias is consistent with a negative
correlation between advertising and the unobserved demand shock: a manager
increasing advertising in case of a shortfall in demand.
Albers et al. (2010) analyse 506 personal selling elasticities from 75 articles and
find an average personal selling elasticity of 0.34. Interestingly, the mean predicted
elasticity when endogeneity is not taken into account is 0.37, while it is 0.28 when
endogeneity is controlled for. That is, the personal selling elasticity is overestimated
when endogeneity is not incorporated, which is in contrast with the direction of the
endogeneity bias for advertising.3
The previous discussion illustrates the two main points of this chapter. First, in
many empirical settings, endogeneity matters, and ignoring it may lead to erroneous
conclusions. Second, the magnitude and direction of the endogeneity bias (i.e.,
whether the correlation between the error and the endogenous regressors is positive
or negative) is not always apparent but depends on the way managers react to
unobserved demand shocks. However, while there are several ways to address
endogeneity as we discuss in this chapter, none of these are perfect. In certain cases,
it may be even better not to correct for endogeneity at all, as “the cure may be worse
than the disease” (Bound et al. 1993). We will elaborate on these cases throughout
the rest of the chapter.
18.2.3 Caveats in Addressing Endogeneity
Importantly, if the goal of the market response model is purely predictive, i.e., to provide
forecasts for future observations, the advice is not to correct for endogeneity (Ebbes
et al. 2011). The reason is that any endogeneity correction in a linear regression
model will tilt the fitted line away from the best-fitting OLS model, both in- and out-of-
sample. We elaborate more on this caveat in Sect. 18.7.
A second caveat is that “treating variables as endogenous variables” is not the
same as “correcting for endogeneity.” VAR models and other vector-based time
series models (VARX, VEC) treat multiple variables as endogenous. For example,
sales and price may be stacked in a bivariate vector in a VAR model (see Chap. 4).
This vector becomes the endogenous (dependent) variable, which is explained by
lagged values of the same vector. VAR models can be used to study the dynamic
relationship between price and sales. The current-period effect between price and
sales is captured through the covariance of their error terms. This means that the
effect is bidirectional (e.g., sales affects price as much as price affects sales). There
is nothing in a VAR model that tries to correct for endogeneity bias when inferring
the effect of one variable on another. To correct for endogeneity in VAR(X) or
VEC models certain assumptions need to be imposed (e.g., structural VAR models,
Gijsenberg et al. 2015) and/or IVs need to be employed (e.g., Van Heerde et al.
3See also Kremer et al. (2008).
18.3 Instrumental Variable (IV) Estimation
18.3.1 How IV Works
If a researcher (or reviewer) has strong theory- or evidence-based arguments that
there is a relevant correlation between one or more regressors and the model error
term, then the most common method to estimate the parameters of interest is through
IV estimation. The general idea behind IV estimation is that the observed variation
in the independent variable can be decomposed into an exogenous part and an
endogenous part.
Figure 18.1 provides a stylized illustration. The full set of observations (left
panel) is then split into those representing exogenous variation, i.e., independent
of the error term in demand (middle panel), and those representing endogenous
variation, i.e., correlated with the error term in demand. To estimate the regression
effects, the IV approach uses—instead of the observed variation in the endogenous
regressor—only the exogenous variation (middle panel).
Rather than literally splitting observations, IV isolates the exogenous variation
by using an auxiliary (= additional) regression, called the "first-stage regression".
To illustrate more concretely how the IV approach works, let us revisit our main
equation of interest, which is a market response model for the hotel market with
price as the only regressor⁴:

y_i = β_0 + β_p p_i + ε_i    (18.3)
The endogeneity problem arises because p_i is correlated with the error of the
demand equation, i.e., Cov(p_i, ε_i) ≠ 0. In the IV approach, we introduce an auxiliary
regression and regress the endogenous regressor p_i on all variables in the market
response model (18.3) that are not correlated with the error term and an additional
Fig. 18.1 Decomposing the variation in the independent variable, where observed
variation = exogenous variation + endogenous variation
⁴ We use the cross-sectional case (Eq. 18.2) as the leading example. The same logic applies to the
time-series case (Eq. 18.1), which would in addition require a discussion of dealing with potential
autocorrelation, which is beyond the scope of this chapter.
variable z that is not part of the market response model (18.3). This additional
variable z_i is called an "instrumental variable". As will become clear below, this
variable has to capture the exogenous variation in price, and thus must not be
correlated with the error term. In our simple illustration in (18.3), there are no further
exogenous variables, and hence the first-stage regression becomes:

p_i = γ_0 + γ_z z_i + ν_i    (18.4)
The most common IV estimator is the two-stage least squares (2SLS) approach,
which can be computed in two simple steps. First, we estimate (18.4) with OLS and
use the OLS estimates from (18.4) to compute predicted values for p_i using the fitted
model. Second, we replace p_i in (18.3) with these predicted values (p̂_i = γ̂_0 + γ̂_z z_i),
resulting in:

y_i = β_0 + β_p p̂_i + ε_i*    (18.5)

The β_p estimated from (18.5) using OLS is now a consistent estimate for the effect
of p_i on y_i. The resulting estimator β̂_p is the 2SLS estimate for β_p.
So why does the IV approach work? In step 1, we compute the predicted values
for p_i using only exogenous variables. Hence, by construction, p̂_i is exogenous. Then
in step 2, we replace the endogenous price by the exogenously predicted price. This
variable is not correlated with the error term, and we can therefore simply use OLS
to estimate the effect of price. Of course, we need to make sure that the predicted
price is "meaningful": the exogenous variables, particularly the IV, need to
have sufficient predictive power for price, otherwise we are substituting a
useless variable into the main equation in step 2 that has little to do with the original
(endogenous) price variable. We can also see that we need an additional variable z
that does not appear in Eq. (18.3), because otherwise p̂_i will be perfectly collinear
with the other exogenous variables in the main equation, and the OLS regression in
the second stage will not work either.
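The two steps can be sketched in a few lines of code. The data-generating values below are hypothetical, and `quality` plays the role of the unobserved demand shifter that makes price endogenous:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

z = rng.normal(0, 1, n)        # instrument: exogenous by construction
quality = rng.normal(0, 1, n)  # unobserved by the researcher
price = 2.0 + 1.0 * z + 1.0 * quality + rng.normal(0, 1, n)
y = 10.0 - 1.0 * price + 2.0 * quality + rng.normal(0, 1, n)  # true beta_p = -1

def ols(X, y):
    """OLS coefficients via the normal equations."""
    return np.linalg.solve(X.T @ X, X.T @ y)

# Naive OLS: biased because price correlates with the error (via quality)
b_ols = ols(np.column_stack([np.ones(n), price]), y)

# Step 1 (first stage): regress price on the instrument, form fitted values
Z = np.column_stack([np.ones(n), z])
gamma = ols(Z, price)
price_hat = Z @ gamma

# Step 2: regress y on the fitted (exogenous) part of price
b_2sls = ols(np.column_stack([np.ones(n), price_hat]), y)
# b_ols[1] is biased upward; b_2sls[1] is close to the true -1
```

Note that the second-stage standard errors from this manual version are incorrect; in practice one would use a packaged IV routine, as discussed below.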
Figure 18.2 graphically shows for 10 hypothetical observations the three vari-
ables at play here: observed price (= p) as a solid black line, the instrumental
variable z as a solid grey line, and the predicted price variable as a dashed black line.
Under the assumption that z is a valid instrument, the extent to which it correlates
with price represents the exogenous variation in price. In the example, it varies in
steps: first 100, then 200, then 125, and finally 175. This is the type of variation we
would like to use to estimate the model. The remaining variation in price is
endogenous, and we would like to remove it. The way we do this is by regressing
price on z, leading to p̂_i (= "p-hat"), which is the dashed black line in Fig. 18.2. p̂_i
represents the exogenous variation in price, and it follows the same pattern as the IV. In
the model, we use p̂_i instead of p_i to capitalize on the exogenous variation in price.
We would like to be very clear: 2SLS only works if we have (a) suitable instru-
mental variable(s) z. Therefore, we must impose and accept two main assumptions
for z_i to be suitable. The first assumption is that the instrument is strong, i.e., z_i
must be strongly related to p_i. The second assumption is that the instrument is
Fig. 18.2 Exogenous variation in p
valid (= exogenous), i.e., z_i must be uncorrelated with the structural error ε_i from the
main Eq. (18.3). We use the terms "valid IV" and "exogenous IV" interchangeably
in this chapter. In practical applications, the assumption of a valid IV is arguably
the most problematic, as it cannot be tested directly. Thus, it truly is
an assumption. We will explore these two assumptions in detail in the following
sections.
The IV estimator is a standard estimator that is implemented in many statistical
and econometric software packages. In Stata, for instance, the IV approach above
could be carried out with "ivreg":

ivreg y (p = z), first

We recommend using this or similar off-the-shelf estimation packages instead
of manually performing the steps we outlined above. Manually performing
these steps is prone to mistakes, and the standard errors for β̂_0 and β̂_p will be
incorrect. In case we have to conduct the 2SLS steps manually and want to obtain
correct standard errors, we have to calculate the residuals based on the observed
independent variable(s) p_i rather than the predicted independent variable(s) p̂_i
(Wooldridge 2010, p. 97 and p. 101):

ε̂_i = y_i − β̂_0 − β̂_p p_i
Next, the error variance is estimated as σ̂² = Σ_i ε̂_i² / (N − K),
where N is the number of observations and K is the number of independent
variables (in our example K = 2). The covariance matrix of β̂ is calculated as
Σ̂ = σ̂² (X̂′X̂)⁻¹, where X̂ is a matrix with the observations in rows and the
regressors in the columns (in our example, a vector of ones and the values p̂_i; Wooldridge 2010,
p. 102). The standard errors are the square roots of the diagonal of Σ̂.
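As a sketch, the correct and the naive standard errors can be computed side by side on hypothetical data. The key line is that the residuals use the observed p_i while the covariance matrix uses X̂:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2_000
z = rng.normal(size=n)
q = rng.normal(size=n)  # unobserved quality
p = 1.0 + 1.0 * z + q + rng.normal(size=n)
y = 5.0 - 1.0 * p + 2.0 * q + rng.normal(size=n)  # true beta_p = -1

X = np.column_stack([np.ones(n), p])  # observed regressors
Z = np.column_stack([np.ones(n), z])
p_hat = Z @ np.linalg.solve(Z.T @ Z, Z.T @ p)
X_hat = np.column_stack([np.ones(n), p_hat])

beta = np.linalg.solve(X_hat.T @ X_hat, X_hat.T @ y)  # 2SLS coefficients

# Correct residuals: use observed p, not p_hat (Wooldridge 2010, p. 97)
resid = y - X @ beta
k = X.shape[1]
sigma2 = resid @ resid / (n - k)
cov = sigma2 * np.linalg.inv(X_hat.T @ X_hat)
se = np.sqrt(np.diag(cov))  # correct 2SLS standard errors

# Naive (incorrect) SEs from the second-stage OLS use p_hat residuals
resid_wrong = y - X_hat @ beta
sigma2_wrong = resid_wrong @ resid_wrong / (n - k)
se_wrong = np.sqrt(np.diag(sigma2_wrong * np.linalg.inv(X_hat.T @ X_hat)))
# se and se_wrong differ; only se is valid for inference
```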
In the IV approach we achieve identification (i.e., we identify a causal effect of
p on y) by relying on additional exogenous information by means of the exclusion
restriction: we exclude the exogenous instrument from the main equation. In the
previous example, we computed the 2SLS estimator. However, there are two other
estimation approaches that leverage the IV in the presence of an endogenous
regressor: the control function (CF) approach (Sect. 18.3.2) and the Limited
Information Maximum Likelihood (LIML) approach, which is used to estimate a
simultaneous system of equations (Sect. 18.3.3).
18.3.2 Control Function Approach
The approach that is closest in nature to 2SLS is the so-called control function (CF)
approach (Ebbes et al. 2011; Petrin and Train 2010; Wooldridge 2015). For the linear
model, the CF approach is exactly equivalent to 2SLS. However, the CF approach
is better suited for addressing endogeneity for a non-continuous dependent variable
(Sect. 18.6.4) and offers an alternative for addressing endogenous interaction effects
(Sect. and squared terms (Sect.
So what does the CF approach look like for the linear model? After fitting the
first-stage regression the way we described above for the 2SLS estimator, we use
the predicted values p̂_i to compute the fitted residuals ν̂_i = p_i − p̂_i and subsequently
include these as an additional regressor in the main Eq. (18.2), resulting in:

y_i = β_0 + β_p p_i + β_ν ν̂_i + ε_i*    (18.7)

The idea is that the control function (ν̂_i) captures the endogenous part of p_i.
That new variable is then included, resulting in Eq. (18.7). In Eq. (18.7) we
are now "controlling for" the unobserved variation that makes price endogenous.
Subsequently, we can estimate (18.7) with OLS to obtain a consistent estimate of
β_p. In contrast, the 2SLS approach eliminates the endogenous variation in p_i by
using p̂_i instead of p_i in Eq. (18.5). Interestingly, in linear models, both approaches
will yield the exact same results when the same IV is used. Not surprisingly, both
approaches require the same two main assumptions for z_i: z_i must be a strong and
valid (exogenous) instrument.
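A minimal sketch of the CF estimator on simulated data (hypothetical names and values); in the linear model the CF point estimate coincides with 2SLS:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5_000
z = rng.normal(size=n)
q = rng.normal(size=n)  # unobserved quality
p = 1.0 + 0.8 * z + q + rng.normal(size=n)
y = 4.0 - 1.0 * p + 1.5 * q + rng.normal(size=n)  # true beta_p = -1

def ols(X, y):
    return np.linalg.solve(X.T @ X, X.T @ y)

# First stage: the residuals capture the endogenous part of price
Z = np.column_stack([np.ones(n), z])
nu_hat = p - Z @ ols(Z, p)

# Control function: include nu_hat as an extra regressor next to observed p
b_cf = ols(np.column_stack([np.ones(n), p, nu_hat]), y)

# 2SLS for comparison: replace p by its fitted value p_hat = p - nu_hat
p_hat = p - nu_hat
b_2sls = ols(np.column_stack([np.ones(n), p_hat]), y)
# b_cf[1] and b_2sls[1] coincide (up to floating point) in the linear model
```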
Two additional remarks are warranted. First, the inclusion of ν̂_i in (18.7) is a
computationally easy version of the Hausman test for the presence of endogeneity
that we will discuss in Sect.
Second, the OLS sampling standard errors for (β̂_0, β̂_p, β̂_ν) from (18.7) will
be incorrect because ν̂_i is an estimated quantity (Karaca-Mandic and Train 2003;
Petrin and Train 2002, footnote 3). For the general CF approach (across linear and
nonlinear models), we should use a bootstrap approach to approximate the correct
standard errors.⁵ In such an approach, ν̂_i needs to be sampled repeatedly, and for
each sampled value of ν̂_i, the regression model (18.7) is estimated. More specifically,
we take M bootstrap samples to estimate the first-stage regression (18.4). Each
bootstrap sample has N observations, which are sampled, with replacement, from
the original set of N observations. Each bootstrap sample is used to estimate Eq.
(18.4), which leads to M sets of fitted residuals, denoted as ν̂_i^(m), for i = 1, …, N
and m = 1, …, M. The fitted residuals of bootstrap sample m are subsequently used
in the following equation, which is estimated with OLS:

y_i = β_0 + β_p p_i + β_ν ν̂_i^(m) + ε_i*

Each bootstrap sample results in a different point estimate for β_p, denoted by β̂_p^(m),
m = 1, …, M. The standard deviation among this set of M estimates
is now combined with the OLS standard error from Eq.
(18.7) to obtain the corrected standard error for β̂_p.
Karaca-Mandic and Train (2003, footnote 3) note that this bootstrapping approach
is a reasonable approximation of the standard errors that can also be derived via
an asymptotic formula. We note that this approach is not the same as computing
the residuals from (18.4) once and then bootstrapping by sampling repeatedly from
those residuals.
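The bootstrap procedure can be sketched as follows (hypothetical data). Note that only the first-stage regression is re-estimated on each bootstrap sample, while Eq. (18.7) is then fitted on the full original sample:

```python
import numpy as np

rng = np.random.default_rng(3)
n, M = 1_000, 200
z = rng.normal(size=n)
q = rng.normal(size=n)  # unobserved quality
p = 1.0 + 0.8 * z + q + rng.normal(size=n)
y = 4.0 - 1.0 * p + 1.5 * q + rng.normal(size=n)  # true beta_p = -1
Z = np.column_stack([np.ones(n), z])

def ols(X, y):
    return np.linalg.solve(X.T @ X, X.T @ y)

betas = np.empty(M)
for m in range(M):
    idx = rng.integers(0, n, n)             # resample first-stage observations, with replacement
    gamma_m = ols(Z[idx], p[idx])           # first stage on bootstrap sample m
    nu_hat_m = p - Z @ gamma_m              # residuals for ALL N original observations
    X_m = np.column_stack([np.ones(n), p, nu_hat_m])
    betas[m] = ols(X_m, y)[1]               # beta_p from Eq. (18.7), sample m

sd_boot = betas.std(ddof=1)  # spread induced by first-stage estimation
# The chapter combines sd_boot with the OLS standard error from (18.7);
# one common combination (an assumption here) is sqrt(se_ols**2 + sd_boot**2).
```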
18.3.3 Simultaneous Equations
An alternative way of addressing the endogeneity problem is to directly model
the correlation between the error term of the endogenous regressor Eq. (18.4)
⁵ For the linear model, we do not need the bootstrap method. We can calculate ε̂_i = y_i −
β̂_0 − β̂_p p_i (hence we exclude the control function in the calculation of the residuals). Next,
the error variance is estimated as σ̂² = Σ_i ε̂_i² / (N − K), where N is the number
of observations and K is the number of independent variables. The covariance matrix of β̂ is
calculated as Σ̂ = σ̂² (X′X)⁻¹, where X is a matrix with the observations in rows and the
regressors in the columns (in our example, a vector of ones and the values p_i and ν̂_i). The standard
errors are the square roots of the diagonal of Σ̂.
and the error term of the main Eq. (18.3). This can be achieved through a
system of equations consisting of Eqs. (18.3)and(18.4), which we then estimate
simultaneously, correlating the errors of both equations. We would typically assume
a multivariate normal error term, which would then lead to the Limited Information
Maximum Likelihood estimator (LIML).
A limitation of LIML is the multivariate normality assumption, which is not
required in 2SLS or CF estimation. Again, many statistical and econometrical
software packages offer estimation commands for LIML. In Stata for instance,
one approach that is an approximation of a ML estimator is to use an iterative
flavor of “sureg” in combination with the “isure” option to estimate the equations
simultaneously (Gao and Lahiri 2000; Pagan 1979):
sureg (y = p) (p = z), isure
Again, in order to obtain consistent estimates using LIML, we must rely on the
same assumption regarding the instruments (i.e., strength and exogeneity) as in the
classical IV case.
Similar approaches to the ones that we describe above can also be implemented
in a Bayesian framework (for a discussion, see e.g., Kleibergen and Zivot 2003
and Chap. 16). Several authors use Bayesian simultaneous equations to control for
endogeneity (e.g., Ataman et al. 2008,2010). Again, the IVs must be strong and
exogenous, and we are therefore making the same assumptions as in the classical
IV case. Also in the Bayesian context, the simultaneous equation approach requires
an assumption on the distribution of the error term, and multivariate normality is the
typical choice (e.g., Ataman et al. 2008,2010).
There are also spatial approaches to deal with endogeneity using simultaneous
equations. Bronnenberg and Mahajan (2001) argue that the unobserved actions of
retailers cause a measurable joint spatial dependence among the marketing variables
and sales. They construct a covariance matrix between these variables based on
spatial proximity of stores and use this to obtain consistent estimates. Van Dijk
et al. (2004) argue that store similarity (rather than store proximity) could underlie
the spatial dependence between shelf space and sales. They use store characteristics
to construct the covariance matrix between these variables to obtain a consistent
estimate for the shelf space elasticity.
18.3.4 Simulation
To illustrate the basic IV approach and the different estimators that we discussed,
we present a brief simulation study that considers a cross-sectional case. Suppose
a researcher has cross-sectional sales data for i = 1, …, 175 hotels in a certain
local geographic region (e.g., an island in a remote area). The researcher observes
only sales for 1 year as well as the room prices that each hotel charges in that year
(there is no variation within the year). The researcher does not observe the overall
service quality of the hotels, but managers and consumers do observe quality. When
the service quality of the hotel is high, the hotel manager tends to charge a higher
price. Let us assume the researcher wants to estimate the simple demand model
in Eq. (18.3). An endogeneity issue arises because the researcher does not observe
quality, which is then an omitted variable that becomes part of the error term ε_i. As
managers set prices using information on quality, there now is a positive correlation
between the observed price p_i and the error term ε_i. We would expect OLS to be
biased upward (Bijmolt et al. 2005) so that, as the price effect β_p is negative, the
OLS estimate of price is less negative (or potentially even positive). We expect an
upward bias in OLS because we now observe higher demand with higher prices,
given this price-setting behavior.
We simulated 500 datasets (details on the simulation settings are available upon
request) and compute for each dataset the four estimators discussed before: OLS, 2SLS,
CF and LIML. Here we consider a case where we have a valid IV which explains
about 33% of the variance in price (the correlation between the instrument and price
is about 0.58). The true price effect is −1. The results (Table 18.1) show that OLS
in this example is biased upward by about 20%. The three IV estimators perform
equally well and recover the true value. In this case where we have a linear model,
the CF and the 2SLS approach are identical, and with exact identification 2SLS and
LIML are equivalent, hence the identical rows. We can also see that the three
IV estimators are less efficient than OLS, as they have wider confidence intervals.⁶
18.3.5 Selecting the IVs
The consistency of any IV-based estimate (including 2SLS, the CF and LIML)
critically depends on the strength of the IV. Furthermore, it also critically depends
on whether the IV is exogenous. The proper selection of one or more IVs is therefore
the critical decision in the implementation of an IV approach. In many settings it is
Table 18.1 Simulation results IV

Estimator   True value   Mean estimate   SD      95% CI
OLS         −1           −0.834          0.060   [−0.955, −0.714]
2SLS        −1           −1.008          0.113   [−1.235, −0.781]
CF          −1           −1.008          0.113   [−1.235, −0.781]
LIML        −1           −1.008          0.113   [−1.235, −0.781]

Correlation between IV and price = 0.58. Correlation between IV and error term = 0.00
⁶ Strictly speaking, "efficiency" refers to asymptotic standard errors. An efficient estimator has the
lowest standard errors within a class of estimators when the sample size goes to infinity. In this
chapter we use the term "efficiency" somewhat loosely as a synonym for "low standard errors" or
"tight confidence intervals."
appropriate to follow a three-step approach when implementing an IV estimation.
First, we should assess the strength of candidate instruments, i.e., the degree to
which the instruments are correlated with the endogenous regressor (Sect.
Second, if the instruments are sufficiently strong, we can proceed to assess whether
they are exogenous (Sect. Once these two requirements are fulfilled, we
would believe the IV is suitable and the researcher can formally assess whether the
IV estimates differ from non-corrected estimates such as OLS (Sect. Instrument Strength
An instrument is strong if it is correlated with the endogenous regressor. Intuitively,
this means that zmust have a strong and significant effect on pin (18.4). A weak
or insignificant estimate for in the first-stage regression is a cause for concern
and most likely means that the instrument is not sufficiently strong. The most
important source for understanding whether an instrument is strong or not is a
good understanding of how the data at hand came about. Theory should provide
a clear prediction of why and how the instrument affects the endogenous variable
and we must have a clear understanding of why there is exogenous variation in the
endogenous regressor. For example, see Germann et al. (2015, p. 8) for a detailed
theoretical development in the context of estimating the effect of a chief marketing
officer (CMO) on firm performance.
Bound et al. (1995) provide a demonstration of the role of instrument strength.
They reanalyze a paper by Angrist and Krueger (1991) and replace the original
instrument (quarter of birth) by randomly generated quarter of birth. This random
instrument generates the same results as the original instruments, which casts strong
doubts on the appropriateness of the original instruments.
Several tests exist to formally assess the strength of an instrument. These tests typically measure the extent to which the R² of the first-stage regression changes due to the inclusion of the instrument. Loosely speaking, these tests compare the R² from a first-stage regression without the instrument(s) to a first-stage regression including the instrument(s). The R² from the latter should be significantly larger. A popular way of assessing the strength of an instrument is to assess the change in R² with an F-test that also considers the number of instruments required to achieve this change in R². Stock et al. (2002) highlight the importance of using instruments that
produce a sufficiently large F-statistic. Failure to use strong instruments may result
in severe biases of the estimated coefficients (Rossi 2014).
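To make the incremental F-statistic concrete, it can be computed by hand from the two first-stage R² values. The sketch below uses simulated data; all variable names and parameter values are hypothetical illustrations, not the chapter's actual simulation design:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 175

# Hypothetical data: 'cost' is the candidate IV, 'x' an exogenous control
cost = rng.normal(size=n)
x = rng.normal(size=n)
price = 1.0 + 0.6 * cost + 0.3 * x + rng.normal(size=n)

def r_squared(y, X):
    """R-squared from an OLS regression of y on X (X includes an intercept)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    tss = (y - y.mean()) @ (y - y.mean())
    return 1.0 - resid @ resid / tss

ones = np.ones(n)
X_restricted = np.column_stack([ones, x])        # first stage without the IV
X_full = np.column_stack([ones, x, cost])        # first stage with the IV
r2_r = r_squared(price, X_restricted)
r2_f = r_squared(price, X_full)

q = 1                              # number of instruments added
k = X_full.shape[1]                # parameters in the full first stage
F = ((r2_f - r2_r) / q) / ((1.0 - r2_f) / (n - k))
print(f"R2 without IV: {r2_r:.3f}  with IV: {r2_f:.3f}  incremental F: {F:.1f}")
```

These are exactly the three statistics recommended below: the restricted R², the full R², and the incremental F-statistic.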
One situation in which the application of a standard (univariate) F-test will not
work is when two or more regressors, say, advertising and price, are treated as
endogenous. In this case, two IVs are required. If we assume that one instrument
predicts advertising as well as price, and the other instrument is a weak predictor of
both, then two separate F-tests for the two first-stage regressions will not uncover
the resulting identification problem. A multivariate F-test may be used instead when
there is more than one endogenous variable, which we will discuss in Sect. 18.6.1.
594 D. Papies et al.
We urge researchers to report three key statistics to allow readers to assess instrument strength: the R² of the first-stage regression without IVs, the R² of the first-stage regression with IVs, and the appropriate incremental F-statistic.

Simulation (effects of weak instrument)
In the following simulation study, we illustrate the problems arising from a weak
instrument, and we continue our previous example with data for 175 hotels. As
before, we suspect there is an endogeneity problem. While the researcher has an
IV that she believes is exogenous, the correlation between the instrument and the
endogenous regressor price is weak. In our simulation study, we set the correlation
between price and the instrument to 0.03. For instance, the IV may be the cost of
cleaning supplies, which turns out to be only a small portion of the total cost a hotel
incurs. Therefore, the IV has barely any influence on setting the prices of a hotel
rooms, and weakly correlates with the endogenous variable price.
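To give a flavor of how such a Monte Carlo study can be set up, the following sketch generates many datasets with an endogenous price and a very weak (but valid) instrument, and compares the sampling distributions of OLS and 2SLS. The data-generating values are illustrative assumptions, not the chapter's exact design (which is available from the authors):

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 175, 500
true_beta = -1.0
ols_est, iv_est = [], []

for _ in range(reps):
    quality = rng.normal(size=n)     # unobserved by the researcher
    z = rng.normal(size=n)           # valid but very weak IV
    price = 0.05 * z + 0.8 * quality + rng.normal(size=n)
    y = true_beta * price + quality + rng.normal(size=n)

    X = np.column_stack([np.ones(n), price])
    Z = np.column_stack([np.ones(n), z])
    ols_est.append(np.linalg.lstsq(X, y, rcond=None)[0][1])
    # 2SLS via the just-identified IV estimator (Z'X)^{-1} Z'y
    iv_est.append(np.linalg.solve(Z.T @ X, Z.T @ y)[1])

ols_est, iv_est = np.array(ols_est), np.array(iv_est)
print("OLS  mean/sd:", ols_est.mean().round(3), ols_est.std().round(3))
print("2SLS mean/sd:", iv_est.mean().round(3), iv_est.std().round(3))
```

With an instrument this weak, the 2SLS sampling distribution is far more dispersed than that of OLS, which is the pattern the chapter's Table 18.2 illustrates.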
From Table 18.2 we can see that the performance of the IV approaches decreases
dramatically compared to Table 18.1 (where we had a strong IV). Across the
simulated datasets, the IV estimators exhibit a very strong bias with a very large
variation in the sampling distributions, indicating strong loss of efficiency of the
IV models due to the weak instrument. The LIML estimator behaves slightly better
under weak instruments than 2SLS and CF. The bias in the OLS estimator is of the
same direction and magnitude as in the previous example (Table 18.1). Hence, when
we have very weak instruments, this is one example where the “cure” (IV-based
methods) may be worse than the "disease" (OLS).

Instrument Exogeneity
A valid instrument is uncorrelated with the error from the main equation, i.e., Cov(z, ε) = 0. Unfortunately—and this is arguably the largest drawback of IV—
this assumption cannot be tested directly. Therefore, it is of critical importance to
provide theoretical arguments that can support this assumption. It is impossible
to overstate the relevance of proper theoretical arguments in this context. The
consistency of the coefficient that we are interested in depends on whether this
Table 18.2 Simulation results with very weak but valid instrument

Estimator  True value  Mean estimate  SD      95% CI
OLS        −1          −0.830         0.060   [−0.950, −0.71]
2SLS       −1          −2.060         66.502  [−130.944, 135.064]
CF         −1          −2.060         66.502  [−130.945, 135.065]
LIML       −1          −1.289         3.011   [−7.310, 4.732]
Correlation between IV and price = 0.03. Correlation between IV and error term = 0.00
assumption is met. In other words: whether or not the IV estimate of βp in (18.3) is the causal effect of p on y depends on whether we can provide convincing theoretical arguments that z is uncorrelated with ε. Any IV analysis should be accompanied by such a theoretical discussion. For instance, Germann et al. (2015, p. 8) provide extensive theoretical support for why their IV (CMO prevalence) is
exogenous. They first provide an analysis of what omitted variables could affect the
dependent variable, and then provide theoretical support for the exogeneity of their
instrument, drawing on theories in organizational processes and culture (see also
Levitt 1996).
In many situations, it is useful to think of an endogeneity problem in a marketing
model as an omitted variable problem, where the omitted variable is related to
both the dependent variable and the (endogenous) independent variable. To provide
theoretical reasons for or against an endogeneity problem, it is helpful to describe
the process that is unobserved but related to, for instance, price and demand in
the hotel case, or CMO presence and firm performance in the case of Germann
et al. (2015). The argument of instrument validity then becomes traceable for the
observer. Returning to the hotel example, we argued that an endogeneity problem arises because quality remains unobserved to the researcher, but hotels set prices based on quality. When we try to find an IV for this situation, the argument
always needs to refer to the question of whether a candidate IV is correlated with
unobserved factors driving demand. For example, cost of cleaning supplies (which
we used as an IV for price) is unlikely to be related to the unobserved factors that drive demand for hotel rooms, such as quality, and hence the IV is valid. However, it turned out to be weak as well. We will give additional guidance for finding IVs in
Sect. 18.3.6.
Many researchers feel uncomfortable with the fact that the central assumption of
a model is untestable (e.g., Rossi 2014), and indeed, there is no direct way of testing
whether the IV in (18.4) is valid. However, in the special case where more IVs than endogenous variables are available (i.e., the model is over-identified), over-identification tests can shed some light on the adequacy of instruments. The test
utilizes the notion that the residuals from the second stage regression (Eq. 18.5)
should be unrelated to the instruments. To assess this, we would store the residuals
from estimating (18.5), i.e., ε̂i, and regress these on all exogenous regressors including all the IVs (in the previous explanations, we mostly considered a single IV, such that zi and z are scalars; when we have multiple IVs, then zi and z become vectors). The resulting R² multiplied by the number of observations is χ²-distributed and can be used to test the null hypothesis of valid instruments. A large p-value indicates that the null of valid instruments cannot be rejected and "we can have some confidence in the set of instruments used up to a point" (Wooldridge 2010, p. 135). This test is sometimes also called the Sargan test;⁷ the Hansen J-test is similar but allows for heteroskedastic errors (e.g., Bascle 2008).
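The nR² form of the over-identification test is straightforward to compute by hand. Below is a minimal sketch with two hypothetical (valid) IVs for one endogenous regressor, so there is one over-identifying restriction; all data-generating values are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 175

# Two hypothetical IVs for one endogenous regressor -> over-identified model
z1, z2 = rng.normal(size=n), rng.normal(size=n)
quality = rng.normal(size=n)                      # unobserved demand driver
price = 0.5 * z1 + 0.5 * z2 + 0.8 * quality + rng.normal(size=n)
y = -1.0 * price + quality + rng.normal(size=n)

ones = np.ones(n)
X = np.column_stack([ones, price])
Z = np.column_stack([ones, z1, z2])

# 2SLS: project X on Z, then regress y on the fitted values
P = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
beta_2sls = np.linalg.lstsq(P, y, rcond=None)[0]
resid = y - X @ beta_2sls                         # residuals use the actual X

# Sargan: regress the 2SLS residuals on all exogenous variables incl. the IVs;
# n * R^2 is chi-squared with df = (#IVs - #endogenous regressors) = 1
g = np.linalg.lstsq(Z, resid, rcond=None)[0]
u = resid - Z @ g
r2 = 1.0 - u @ u / ((resid - resid.mean()) @ (resid - resid.mean()))
sargan = n * r2
print("Sargan statistic:", round(sargan, 3))
```

With valid instruments (as simulated here) the statistic should typically be small relative to the χ²(1) critical value; the associated p-value is then large, and the null of valid instruments is not rejected.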
7 See Vol. I, p. 210.
So, up to what point can we have confidence in this test for instrument validity?
It is important to use this test with caution and to realize its limitations. First, the
test can only be conducted when we have over-identification, i.e., we have a larger
number of (strong) instruments than endogenous regressors. Second, the test will
not tell us which instrument is suspect, it will only test the set of instruments.
This means that if we reject the null hypothesis that the IVs are exogenous, we
have to reject the complete set of instruments. Hence, we will not know which
specific instrument caused the test to reject. Third, we must assume that at least one candidate IV is exogenous; otherwise the estimates underlying the test are inconsistent and biased (Murray 2006). Some of these problems become less severe when at
least two more instruments than needed for identification are available (e.g., Bascle
2008). Irrespective of these refinements, we urge researchers not to rely on this test
as the sole measure for assessing the validity of instruments. Instead, theoretical
arguments for the validity are much more important. See also Murray (2006) for more advice on this issue.

Simulation (effects of invalid instrument)
In the following simulation, we illustrate the problems arising from an invalid
instrument. We consider the same case as before where unobserved quality is the
problem, leading to a correlation between price and the error. Now we use a different
instrument: number of personnel employed per hotel room. This is a strong IV
because it will correlate with the price the hotel charges. However, it is an invalid IV
because higher quality hotels tend to employ more staff, which will result in a better
experience to guests, which leads to more demand (ceteris paribus). Hence this IV will be correlated with the model error term εi, which means it is not valid.
In the simulation study, we make the instrument invalid by setting the correlation
between the instrument and the error term to 0.1, while keeping the instrument
strong (correlation of 0.53 between IV and price). The true parameter values are
chosen such that the bias in OLS is about the same as in the previous simulation
examples. We can see from Table 18.3 that the IV estimators are now biased, with
approximately the same direction and magnitude as OLS. Hence, the researcher
would conclude for instance that there is no endogeneity bias in OLS (as the
difference between OLS and IV is empirically negligible), whereas there is a
considerable endogeneity bias as all estimators underestimate price sensitivity by
about 20%.
The bias from an endogenous IV in the IV approach is exacerbated when the
instrument is also weak (Bound et al. 1995). In the previous case, we had an
endogenous IV which was fairly strong, as the correlation between the endogenous
Table 18.3 Simulation results with invalid but strong instrument

Estimator  True value  Mean estimate  SD     95% CI
OLS        −1          −0.823         0.065  [−0.952, −0.694]
2SLS       −1          −0.833         0.136  [−1.105, −0.561]
CF         −1          −0.833         0.136  [−1.105, −0.561]
LIML       −1          −0.833         0.136  [−1.105, −0.561]
Correlation between IV and price = 0.53. Correlation between IV and error term = 0.10
Table 18.4 Simulation results with invalid and weak instrument

Estimator  True value  Mean estimate  SD     95% CI
OLS        −1          −0.800         0.074  [−0.948, −0.652]
2SLS       −1          −0.568         0.535  [−1.637, 0.502]
CF         −1          −0.568         0.535  [−1.637, 0.502]
LIML       −1          −0.561         0.435  [−1.432, 0.309]
Correlation between IV and price = 0.25. Correlation between IV and error term = 0.10
instrument and the endogenous regressor was 0.53. We now reduce the strength and
set the correlation between the instrument and the regressor to 0.25. We keep the
correlation between the instrument and the error term at 0.10. That is, the instrument
is invalid (endogenous) to the same extent as before but now has a medium correlation with price (it is certainly not weak). We see that the IV approaches
(2SLS, CF, LIML) are now more biased than OLS and also much less efficient
(Table 18.4). Here, the researcher could wrongfully conclude that the OLS estimator
has a downward bias whereas, in fact, it has an upward bias. Hence, having an
endogenous IV which is only moderately strong again represents a case "where the cure is worse than the disease".

Test for Presence of Endogeneity
Once we have verified that the instruments are sufficiently strong and have argued that they are valid, we can proceed with a Hausman test for the presence of endogeneity (Verbeek 2012, p. 152).⁸ The essence of this test (also known as
the Durbin-Wu-Hausman test) is to test whether there is a significant difference
between the uncorrected set of parameter estimates and the endogeneity-corrected
set (Wooldridge 2010, p. 130). The null hypothesis is that there is no difference, and
a rejection of the null indicates a significant difference. If we trust the instrument,
we may conclude that the endogeneity-corrected estimates are preferred. The
equivalent test is for the significance of the control function term βc in Eq. (18.7) as
explained in Sect. 18.3.2.
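In the control-function form, the Durbin-Wu-Hausman test reduces to a t-test on the coefficient of the first-stage residual in the augmented regression. A minimal sketch on simulated data follows; the data-generating values are illustrative assumptions, not the chapter's design:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 175
quality = rng.normal(size=n)               # unobserved demand driver
z = rng.normal(size=n)                     # strong, valid IV
price = 0.6 * z + quality + rng.normal(size=n)
y = -1.0 * price + 1.5 * quality + rng.normal(size=n)

ones = np.ones(n)
# First stage: regress price on the IV and keep the residuals
Z = np.column_stack([ones, z])
nu = price - Z @ np.linalg.lstsq(Z, price, rcond=None)[0]

# Control function regression: y on price and the first-stage residuals
X = np.column_stack([ones, price, nu])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ beta
s2 = e @ e / (n - X.shape[1])
cov = s2 * np.linalg.inv(X.T @ X)
t_cf = beta[2] / np.sqrt(cov[2, 2])        # t-test on the CF term
print("CF-term coefficient:", round(beta[2], 3), " t-stat:", round(t_cf, 2))
```

A significant t-statistic on the control-function term rejects the null of no endogeneity; the coefficient on price in this regression is the 2SLS estimate.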
Importantly, we cannot use standard model fit criteria (e.g., R2or holdout sample
fit) to assess whether endogeneity correction is successful (Ebbes et al. 2011). When
we use 2SLS estimation or any other endogeneity-correcting approach, we sacrifice fit in the hope of obtaining (more) consistent parameter estimates. The OLS line is
the line that minimizes the sum of squared residuals, and 2SLS will tilt the fit line
with the aim of obtaining a consistent estimate of the true slope. This happens at
the expense of no longer minimizing the sum of squared residuals. This is the case
both in- and out-of-sample: the OLS fitted line should beat the 2SLS fitted line.
Unlike other advances in marketing modeling over the last decades (e.g., unobserved
parameter heterogeneity, nonlinear functional forms), the success or failure of 2SLS
cannot be assessed from the in- and out-of-sample fit. Only when there are two competing endogeneity-correcting approaches that are in theory equally valid can we use in- and out-of-sample fit as a criterion for which approach should be preferred (Ebbes et al. 2011).
18.3.6 Where to Find Suitable Instruments
When we face an endogeneity problem that we believe is substantial enough such
that it warrants the use of IVs, the question arises which variables can serve as
potential instruments. We will consider a set of potential sources.

Lagged Variables
Quite frequently, researchers rely on lagged values of the potentially endogenous
regressor as instruments, e.g., lagged prices (e.g., Rossi 2014; Villas-Boas and
Winer 1999). The appeal is that lagged marketing variables are often strong
predictors of current marketing variables, and hence they will typically satisfy the
condition of being a strong IV.
However, are lagged regressors also valid instruments? Rossi (2014) argues that
lagged prices will be invalid in a setting in which frequent sales promotions and
consumer stockpiling co-occur. The household’s inventory, and therefore demand,
will then be related to past prices. Similarly, we can argue that reference prices
(Winer 1986), which are formed on the basis of past prices, may invalidate
lagged prices as instruments. More generally, lagged regressors will only be valid
instruments if unobserved demand shocks are restricted to the current period. Since
this will often not be the case in marketing we recommend considerable caution in
using lagged regressors as instruments in general.
Is there something we can do to make lagged variables more valid instruments?
We believe there may be two approaches to achieve this. The first approach is to use
longer lags as IVs. For example, rather than using last period’s values, go back two,
three or more periods. The longer the lag, the less likely that the regressor was set
deliberately based on a future demand shock, making the instrument more valid. For
example, Ataman et al. (2010) use the Sargan test to determine how far the IV must be lagged before the exclusion restriction is satisfied. At the same time, the strength of the IV will
likely suffer from longer lags because it becomes more removed from the current
period endogenous regressor. Using (longer) lags as IVs also breaks down in case of (severe) autocorrelation in the error term of the main equation: autocorrelation means that any contemporaneous correlation between the regressor and the error term (a.k.a. endogeneity) carries over into the future, making the lagged regressor less valid yet again as an IV, because the assumption that it is exogenous cannot be maintained.
The second approach where a case for using lagged regressors as IVs could be
made is when the mechanism through which these lagged regressors affect current
demand is explicitly included in the model. Following up on the example of price
promotions, a price promotion may lead to consumer stockpiling, which means that
the consumer needs to buy less of the product in the next period. Using a lagged
price promotion variable as an IV for current price promotion is not valid, because
lagged price promotion is (negatively) correlated with current demand because of
the consumer stockpiling, which is often an unobserved variable in many models.
However, we may be able to observe or model a consumer’s inventory and use it as
a control variable in the demand equation. This is in line with consumer theory that
says that lagged price promotion affects demand via a consumer’s inventory. Once
we control for inventory, the lagged price promotion variable is no longer (or much
less) correlated with the current demand error term, making it a more valid IV. Of
course, this argument does hinge upon the correct measurement or approximation
of inventory.
This general principle of including the relevant mechanism as an explicit term in
the demand model to justify the validity of lagged regressors as an IV can also be
applied in other cases, as long as the relevant mechanism(s) is (are) sufficiently well
represented in the model. For example: advertising affects demand via brand equity,
and it has a direct effect on demand as well. Suppose the entire dynamic (over-time)
effect of advertising goes via brand equity, and there is no autocorrelation in the
demand error term. We now may want to estimate a model that regresses demand
on advertising and brand equity, while accounting for the possible endogeneity in
advertising. We can now argue that, since we include the key mechanism through
which lagged advertising affects demand in the model (i.e., brand equity), we
resolve the omitted variable problem that made the IV potentially invalid. Hence,
the IV will have little correlation with the demand error term and can therefore serve as a valid IV for current advertising.
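To make the lagged-IV idea concrete, the sketch below constructs a setting in which lagged price is a defensible IV: the demand shocks are iid (no autocorrelation, no stockpiling), so last period's price is correlated with the current price but not with the current demand shock. All names and values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
T = 500
true_beta = -1.0

# Prices follow an AR(1); the firm observes the current iid demand shock
# and prices on it, so current price is endogenous but lagged price is not.
eps = rng.normal(size=T)                  # iid demand shocks
price = np.zeros(T)
for t in range(1, T):
    price[t] = 0.7 * price[t - 1] + 0.5 * eps[t] + rng.normal()
y = true_beta * price + eps

# Use last period's price as the IV for current price
p_now, p_lag, y_now = price[1:], price[:-1], y[1:]
X = np.column_stack([np.ones(T - 1), p_now])
Z = np.column_stack([np.ones(T - 1), p_lag])
beta_ols = np.linalg.lstsq(X, y_now, rcond=None)[0][1]
beta_iv = np.linalg.solve(Z.T @ X, Z.T @ y_now)[1]
print("OLS:", round(beta_ols, 3), " IV with lagged price:", round(beta_iv, 3))
```

Here OLS is biased toward zero while the lagged-price IV recovers the true effect; the moment demand shocks become autocorrelated, this justification collapses, as argued above.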
In sum, while we do not explicitly recommend the use of lagged regressors as IVs, we do not see reasons to dismiss them in general. Rather, if the researcher can provide solid theoretical arguments speaking to the strength and exogeneity of the IV, then lagged regressors may be appropriate IVs in an IV regression.

Costs
Several publications rely on costs as an instrument for endogenous price. Rooderkerk et al. (2013) for instance tackle price endogeneity of liquid detergent and use
costs for key ingredients (alkalines and chlorines), packaging (i.e., plastics) as well
as transportation (i.e., diesel) as instruments. The idea behind costs as instrument
is that firms will adjust price in response to cost shocks, but costs are unobserved
by consumers, and may therefore be unrelated to unobserved demand shocks. In
a similar vein, Dinner et al. (2014) use the price index for advertising from the U.S. Bureau of Labor Statistics as an IV for advertising.
There are, however, at least two caveats that researchers should be aware of
when using costs as IVs. First, costs are often difficult to observe. For example,
for strategic reasons, many firms are reluctant to reveal the wholesale costs they
face. In that case, researchers have to use external, more aggregate sources to
approximate costs, such as quarterly statistics available from national statistics
agencies (e.g., Dinner et al. 2014; Rooderkerk et al. 2013). These cost data are likely
to be valid instruments (as they most likely have little relation with unobserved
factors shaping demand of the focal product). Their strength, however, may be
compromised because they are far removed from the focal endogenous regressor.
What's more, oftentimes they are not measured at the same frequency (e.g., a weekly observed endogenous regressor and a quarterly observed price index from the Central Bureau of Statistics), which limits their ability to explain variation in the endogenous regressor.
Second, consider the case of a retailer setting prices in an endogenous manner.
If manufacturers or other suppliers of goods to this retailer are aware of these
unobserved demand shocks and adjust their prices accordingly, the retailer’s costs
will be correlated with the unobserved demand shocks, and costs become invalid
instruments (Rossi 2014). How can upstream firms have knowledge of demand
shocks? Rossi (2014) gives the example that manufacturers anticipate advertising
and promotional campaigns. Further, let us consider hotel prices in a small US
college town with a popular college football team. Hotel prices will soar on football
weekends, but costs may also increase on these weekends because of overtime pay
or because staff may require higher wages (Otter et al. 2011). As a last example,
prices for music downloads may be endogenous because retailers adjust their prices
to unobserved shocks in artists' popularity. Music labels, however, may be aware of shocks in popularity and adjust the prices that download stores have to pay to labels
in response to these shocks.
In sum, costs are among the most promising candidates when looking for valid
instruments. In each instance, however, a researcher has to carefully assess whether
the exogeneity assumption is reasonable and whether they are indeed sufficiently
strong.

Different Markets, Industries or Brands
In many situations, unobserved demand shocks may be restricted to local markets,
but firms share costs structures across different markets (Hausman 1996). In these
cases, prices from other markets may be suitable instruments (e.g., Rooderkerk
et al. 2013). However, this only holds if unobserved demand shocks are indeed
restricted to local markets, which is unlikely to be the case when national advertising
expenditures are an important element of the marketing mix (Rossi 2014), and
Hausman’s (1996) approach has been questioned (see comments to Hausman 1996).
Nevo (2001) also contains a detailed discussion of these considerations.
A similar strategy relies on the use of different brands, firms, or categories.
Van Heerde et al. (2013), for instance, use the marketing instruments from brands
in other categories as instruments for the focal brand’s marketing activities. The
assumptions underlying this approach are the same as above: the different brands
or categories share common cost structures, but the unobserved demand shocks
are restricted to the focal brands. It is important to choose these “other” brands,
firms, or categories sufficiently different from the focal market (to ensure instrument
validity) yet not too far away (to ensure instrument strength). For example, the IVs used by Dinner et al. (2014) for the focal (high-end) retailer's advertising are advertising expenditures by (low-end) retailers that compete in a very different price tier (to ensure instrument validity), yet they are from the same broad industry (clothing and apparel retailing) to ensure instrument strength.
Germann et al. (2015, p. 8) use CMO prevalence as the primary IV for
estimating the effect of CMO presence on firm performance. They compute CMO
prevalence from the sample firms’ peers, which are firms that operate in the same
primary two-digit Standard Industrial Classification (SIC) code(s) as the focal
firm. They provide theoretical arguments using organizational theory and argue
that this instrument meets the exclusion restriction in the context of their study,
although they recommend to include time fixed effects to capture shocks that are common across industries (such as an economy-wide boom), which could influence both firm performance (their dependent variable) and CMO prevalence (their IV). They also investigate a robustness specification in which they include time fixed effects interacted with industry-specific fixed effects to control for possible unobserved time shocks at the industry level that could invalidate the instrument. Hence, carefully argued robustness analyses can also help provide theoretical support for the IVs.
18.4 Panel Data
18.4.1 Introduction: Endogeneity and Panel Data
As has become apparent above, correcting for endogeneity comes at a cost (Rossi
2014). It is hard to find suitable instruments. Furthermore, the IV estimator is
generally less efficient than OLS, in particular when the instruments are weak.
On top of that, it will be biased when the instrument is invalid. This implies that
researchers should only consider an IV approach to endogeneity correction if other
options are infeasible or insufficient.
Another opportunity lies in panel data, where we have multiple time series
observations per response unit. In some cases, these can be used to control for
endogeneity without needing observed IVs. For that reason, we recommend that
researchers carefully assess if they can address endogeneity concerns by exploiting
the panel structure of the data. A panel data structure is very common in marketing,
where we observe data across a cross-section (e.g., consumers, firms, brands, stores, countries) as well as across time (e.g., purchase occasions, days, weeks, months, quarters, years).⁹
Consider the following demand model, where we extend Eq. (18.3) by a time dimension (e.g., we observe weekly sales and prices per hotel). yit is then demand for hotel i in week t:

yit = β0 + βp pit + αi + λt + εit.   (18.11)
We can now think of the error as containing three separate components. One component varies across hotels and time: εit. Another component varies across hotels but not across time, i.e., these are unobserved hotel characteristics that affect a hotel's demand (αi). These could be the hotel's location, quality of the facility, or the management quality, as these aspects may often be considered fairly stable for a period of time (e.g., for the duration of the panel). Lastly, there may be a component (λt) that varies across time but not across hotels. An example is seasonality or holidays affecting demand for hotels. When the factors αi and λt are not explicitly accounted for in the estimation, they will be part of the (total) error term ε̃it = αi + λt + εit. If these factors αi and λt are correlated with price, then an endogeneity problem arises because now prices are correlated with the total error ε̃it.
Fortunately, the panel structure of the data allows us to eliminate two of these unobserved components and any endogeneity problem arising from these. We first illustrate this idea for the case of a model in which there is an unobserved component αi but no λt. By estimating (18.11) with fixed effects (FE), for example by including one dummy variable per hotel, all time-invariant hotel characteristics are controlled for. Because in most marketing applications we often have large cross-sections leading to a model with many fixed effects, a simpler (yet equivalent) approach is to use a "within-transformation" that eliminates αi. As such, we need to calculate the mean demand and mean price across t for each i, i.e., ȳi and p̄i, by averaging (18.11), resulting in:¹⁰

ȳi = β0 + βp p̄i + αi + ε̄i.   (18.12)

We then take the difference between (18.11) and (18.12):
(yit − ȳi) = βp (pit − p̄i) + (εit − ε̄i).   (18.13)

It becomes apparent that the within-transformation removes the hotel-specific unobserved effect (αi) and potential distortions that arise from its correlation with pit. We can now simplify (18.13) by defining ÿit = (yit − ȳi), p̈it = (pit − p̄i), and ε̈it = (εit − ε̄i):

ÿit = βp p̈it + ε̈it.   (18.14)

We can now consistently estimate (18.14) with OLS under the assumption that ε̈it is uncorrelated with p̈it. Vis-à-vis our hotel example, this assumption implies that there are no time-varying unobserved demand shocks that managers take into account when setting prices, and that potential price endogeneity in this case arises solely from time-invariant hotel characteristics such as quality.¹¹
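The within estimator can be computed directly by demeaning each hotel's series, without estimating any dummy variables. A minimal sketch with simulated panel data (all parameter values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
N, T = 175, 52
true_beta = -1.0

alpha = rng.normal(size=N)                 # unobserved hotel quality
# Price depends on quality -> correlated with the composite error
price = 1.0 + 0.8 * alpha[:, None] + rng.normal(size=(N, T))
y = true_beta * price + alpha[:, None] + rng.normal(size=(N, T))

# Within transformation: subtract each hotel's mean, eliminating alpha_i
y_w = y - y.mean(axis=1, keepdims=True)
p_w = price - price.mean(axis=1, keepdims=True)
beta_fe = (p_w.ravel() @ y_w.ravel()) / (p_w.ravel() @ p_w.ravel())

# Pooled OLS for comparison (biased here)
X = np.column_stack([np.ones(N * T), price.ravel()])
beta_ols = np.linalg.lstsq(X, y.ravel(), rcond=None)[0][1]
print("Pooled OLS:", round(beta_ols, 3), " FE (within):", round(beta_fe, 3))
```

Pooled OLS is biased toward zero because price and the hotel effect are correlated, while the within (FE) estimator recovers the true slope without any instrument.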
A similar reasoning applies when a time-varying unobserved component is present that is constant across hotels (λt). In that case, we can use a within-transformation to remove the time component and estimate the resulting transformed model with OLS. We can also simultaneously correct for both the cross-sectional component αi and the time-varying component λt, for instance by estimating Eq. (18.14) including time fixed effects; we refer to Verbeek (2012,
Chap. 10) for details.
We note that another approach commonly discussed in panel applications is
the random effects estimator that takes the unobserved intercepts αi as random variables. We note that this approach does not account for endogeneity, and it
has even slightly stronger exogeneity assumptions regarding the identification of
10 This part relies heavily on Wooldridge (2010, p. 300). We recommend this as further reading
for those interested in more details. Another very useful econometric resource is Verbeek (2012;
Chap. 10).
11 It is important to realize that the panel structure of data does not necessarily refer to repeated
observations over time alone. It can also encompass other cases of a nested or multi-level data
structure, e.g., brands are observed across multiple stores, schools contain multiple classes, which
contain multiple students. The estimation approach applies to these cases as well (e.g., Ebbes et al.
2004; Kim and Frees 2006,2007).
the regression effects than OLS. An (informal) discussion of the main identifying assumptions of panel model applications in marketing is given by Germann et al. (2015).
We now illustrate these panel data approaches in a simulation study, in which
we have panel data with 52 weeks of observations for 175 hotels. We consider four scenarios. In
scenario 1, there are no unobserved differences between hotels (i.e., αi = 0); λt is a common weekly demand shock (e.g., a home football game) that is observed by hotel managers and used to set prices, but it is not observed by researchers, hence Cov(pit, λt) ≠ 0:¹²

yit = β0 + βp pit + λt + εit.   (18.15)
In scenario 2, the hotels exhibit unobserved (to the researcher) quality aspects (αi). The quality information drives demand but is also used to set prices: Cov(pit, αi) ≠ 0. There are no time shocks: λt = 0. The demand model now is:

yit = β0 + βp pit + αi + εit.   (18.16)
Scenario 3 is the combination of the previous two, with unobserved shocks that are common to all hotels and unobserved shocks that are hotel-specific but time-invariant, and the firm sets prices based on both types of shocks: Cov(pit, λt) ≠ 0 and Cov(pit, αi) ≠ 0:

yit = β0 + βp pit + αi + λt + εit.   (18.17)
Scenario 4 is the same as scenario 3 in that it combines the two previous demand
shocks of scenarios 1 and 2. In addition, we now assume that the price is also
set based on demand shocks that vary across hotels and time, i.e., Cov(pit, λt) ≠ 0, Cov(pit, αi) ≠ 0, and Cov(pit, εit) ≠ 0.
As before, we simulate 500 datasets (details are available upon request) and
estimate the following approaches: OLS, OLS with week dummies, FE, FE with
week dummies, and 2SLS. We omit the CF approach and LIML as these perform at
a similar level as 2SLS. We assume that the researcher observes a cost instrument
that she obtained from the local chamber of commerce, which provided her with weekly data on labor cost for maintenance and cleaning staff. Hence, this instrument captures cost at a weekly level, but is the same for all hotels. After plotting a time series of
Sales, Price and Cost for one hotel, she observes that cost increases throughout
the year following a step function; prices tend to increase, and sales have a small
negative trend because of the price increase. This pattern is representative for the
other hotels.
^12 For scenarios 1–3, we assume Cov(p_it, ε_it) = 0.
18 Addressing Endogeneity in Marketing Models 605
18.4.2 Results for Scenario 1: Only Time Shocks
When the endogeneity comes from unobserved time shocks only, just including time
dummies in OLS (or FE or Random Effects (RE)) recovers the true parameters well
(Table 18.5). We note that we have employed the cross-sectional 2SLS approach,
ignoring the panel structure in the data. This approach yields estimates that are
approximately unbiased but not efficient.^13 Hence, we conclude that in such a
scenario, the recommended course of action is the inclusion of time dummies rather
than 2SLS.
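The mechanics of this scenario can be sketched in a few lines. The following is a minimal Python simulation with illustrative parameter values (a smaller panel and assumed effect sizes, not the exact design behind Table 18.5):

```python
import numpy as np

# Scenario 1: a common weekly shock lam_t drives both price and demand.
rng = np.random.default_rng(0)
N, T = 100, 52                                   # hotels, weeks (illustrative)
week = np.tile(np.arange(T), N)
lam = rng.normal(size=T)                         # unobserved weekly demand shock
p = 0.5 * lam[week] + rng.normal(size=N * T)     # managers set price on the shock
y = 2 + 1.0 * p + lam[week] + rng.normal(size=N * T)  # true beta_p = 1

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_ols = ols(np.column_stack([np.ones(N * T), p]), y)[1]   # biased upward
D = np.eye(T)[week]                              # one dummy per week, no intercept
b_dum = ols(np.column_stack([p, D]), y)[0]       # week dummies absorb lam_t
print(round(b_ols, 2), round(b_dum, 2))
```

Because the week dummies absorb λ_t entirely, the remaining price variation is exogenous, which is why no instrument is needed in this scenario.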
18.4.3 Results for Scenario 2: Only Cross-Sectional Shocks
When the unobserved component is purely cross-sectional, 2SLS will resolve the
endogeneity problem (see Table 18.6). It requires, however, the availability of
suitable instruments. FE, in contrast, does not require instruments but works equally
well and is slightly more efficient.
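The FE logic can be sketched the same way: demeaning each hotel's series removes α_i, so OLS on the within-transformed data recovers β_p. A minimal Python sketch with illustrative parameter values:

```python
import numpy as np

# Scenario 2: a time-invariant hotel effect alpha_i drives both price and demand.
rng = np.random.default_rng(1)
N, T = 100, 52
hotel = np.repeat(np.arange(N), T)
alpha = rng.normal(size=N)                        # unobserved hotel quality
p = 0.5 * alpha[hotel] + rng.normal(size=N * T)   # price reflects quality
y = 2 + 1.0 * p + alpha[hotel] + rng.normal(size=N * T)  # true beta_p = 1

def within(v, g, G):
    """Within-transformation: subtract the group (hotel) mean."""
    return v - (np.bincount(g, v, G) / np.bincount(g, None, G))[g]

b_ols = np.polyfit(p, y, 1)[0]                    # biased: Cov(p, alpha) != 0
b_fe = np.polyfit(within(p, hotel, N), within(y, hotel, N), 1)[0]
print(round(b_ols, 2), round(b_fe, 2))
```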
18.4.4 Results for Scenario 3: Both Time and Cross-Sectional Shocks
When there are both time-varying and time-invariant omitted variables (i.e.,
a combination of scenarios 1 and 2), we see that the FE approach with time dummies
Table 18.5 Simulation results scenario 1 (only unobserved time shocks)

Estimator                                     True value  Mean estimate  SD     95% CI
OLS                                           1           0.834          0.066  [0.702, 0.967]
2SLS                                          1           1.011          0.128  [0.754, 1.268]
OLS with time fixed effects                   1           1.000          0.011  [0.979, 1.022]
FE (cross-sectional)                          1           0.833          0.067  [0.699, 0.967]
FE (cross-sectional) and time fixed effects   1           1.000          0.011  [0.979, 1.022]
RE (cross-sectional)                          1           0.835          0.066  [0.703, 0.967]
RE (cross-sectional) and time fixed effects   1           1.000          0.011  [0.979, 1.022]
^13 We also note that in this example the instrument only varies across time and not across hotels,
similar to the omitted variable. Hence, in separating out the exogenous and endogenous variation,
which are both constant across hotels, we only have the time dimension of the data.
Table 18.6 Simulation results scenario 2 (only unobserved cross-sectional shocks)

Estimator                                     True value  Mean estimate  SD     95% CI
OLS                                           1           0.832          0.052  [0.728, 0.937]
2SLS                                          1           1.001          0.012  [0.977, 1.025]
OLS with time fixed effects                   1           0.717          0.026  [0.665, 0.770]
FE (cross-sectional)                          1           1.000          0.007  [0.986, 1.014]
FE (cross-sectional) and time fixed effects   1           1.000          0.011  [0.979, 1.021]
RE (cross-sectional)                          1           0.976          0.011  [0.953, 0.998]
RE (cross-sectional) and time fixed effects   1           0.953          0.012  [0.928, 0.978]
Table 18.7 Simulation results scenario 3 (both unobserved time and cross-sectional shocks)

Estimator                                     True value  Mean estimate  SD     95% CI
OLS                                           1           0.827          0.069  [0.689, 0.964]
2SLS                                          1           1.012          0.129  [0.754, 1.271]
OLS with time fixed effects                   1           0.776          0.039  [0.698, 0.855]
FE (cross-sectional only)                     1           0.892          0.069  [0.755, 1.030]
FE (cross-sectional) and time fixed effects   1           1.000          0.011  [0.979, 1.022]
RE (cross-sectional only)                     1           0.888          0.068  [0.751, 1.025]
RE (cross-sectional) and time fixed effects   1           0.990          0.011  [0.968, 1.011]
is the only panel approach that is approximately unbiased. This approach controls
both for the unobserved demand shocks at the hotel level (by applying the within-
transformation) and for the unobserved demand shocks that are time varying but
constant across hotels (by including time fixed effects). The RE approach with time
dummies also performs very well, but still suffers a bit from the demand shocks
at the hotel level. In the presence of a good IV (as we have here), the 2SLS approach
also yields approximately unbiased results but is inefficient. As before, we have
employed the cross-sectional 2SLS approach, ignoring the panel structure in the
data. Here we could improve on its efficiency by appropriately accounting for the
correlated error structure within hotels (Tables 18.7 and 18.8).
18.4.5 Results for Scenario 4: Both Time and Cross-Sectional Shocks, Plus a Correlation Between Price and the Error Term
In this scenario, we need an observed IV as neither controlling for time (week
dummies) nor controlling for unobserved hotel random intercepts (fixed effects) is
Table 18.8 Simulation results scenario 4 (both unobserved time and cross-sectional shocks, and
a correlation between price and the error term)

Estimator                                     True value  Mean estimate  SD     95% CI
OLS                                           1           0.818          0.067  [0.685, 0.952]
2SLS                                          1           1.003          0.087  [0.829, 1.178]
OLS with time fixed effects                   1           0.705          0.035  [0.635, 0.775]
FE (cross-sectional only)                     1           0.875          0.064  [0.748, 1.002]
FE (cross-sectional) and time fixed effects   1           0.799          0.010  [0.779, 0.820]
RE (cross-sectional only)                     1           0.873          0.064  [0.745, 1.000]
RE (cross-sectional) and time fixed effects   1           0.795          0.010  [0.774, 0.816]
sufficient. The 2SLS approach works very well here, as it can account for endogeneity
coming from time and cross-sectional shocks, as well as for endogeneity coming
from the correlation between price p_it and the error term ε_it. We could potentially
further improve the 2SLS approach by including week dummies to reduce its
standard error. We note that the cost instrument we use here, although constant
across hotels, is relatively strong (the correlation between the instrument and price
is about 0.64). In the absence of such a high-quality instrument, none of the other
approaches (OLS, FE, or RE) would estimate the price effect well, despite the
presence of panel data.
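A sketch of why the observed instrument is needed here, again as a minimal Python simulation with illustrative parameter values (not the exact design behind Table 18.8):

```python
import numpy as np

# Scenario 4: price is correlated with the idiosyncratic error eps_it, so
# dummies/FE cannot help; a weekly cost shifter serves as the instrument.
rng = np.random.default_rng(2)
N, T = 100, 52
week = np.tile(np.arange(T), N)
cost = rng.normal(size=T)                       # weekly labor cost (instrument)
eps = rng.normal(size=N * T)                    # demand shock seen by managers
p = 0.8 * cost[week] + 0.6 * eps + rng.normal(size=N * T)  # Cov(p, eps) != 0
y = 2 + 1.0 * p + eps                           # true beta_p = 1

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

Z = np.column_stack([np.ones(N * T), cost[week]])
p_hat = Z @ ols(Z, p)                           # first stage on the instrument
b_2sls = ols(np.column_stack([np.ones(N * T), p_hat]), y)[1]
b_ols = ols(np.column_stack([np.ones(N * T), p]), y)[1]
print(round(b_ols, 2), round(b_2sls, 2))
```

Note that, as in the text, the instrument varies only across weeks, yet it still isolates exogenous price variation because it is uncorrelated with ε_it.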
In sum, our simulation examples show that:
- the fixed-effects approach with time dummies is a very robust approach across three of the four scenarios (scenarios 1–3);
- only when the endogeneity arises at the lowest level in the model (Cov(p_it, ε_it) ≠ 0, as in scenario 4) do we need an observed instrument.
We need to wonder in which situations we may encounter a setting like scenario 4 (Rossi
2014). Marketing researchers, in many cases, have access to rich data that usually
contain a time and a cross-sectional dimension (see also the discussion in Germann
et al. 2015). Therefore, it is our recommendation that researchers attempt to address
endogeneity concerns by exploiting the panel structure of their data. Under the
assumptions that we outlined, this will address potential endogeneity concerns.
Because these assumptions, like most identifying assumptions, are untestable,
researchers need a strong theory and understanding of the data generating process
in order to judge how reasonable these assumptions are. If the assumption is reasonable
that the endogeneity is concentrated in the cross-section (i.e., time-invariant,
at the hotel level) or the time dimension (i.e., at the week level, invariant across
hotels), researchers should rely on fixed-effects estimation. If one believes that this
assumption is too strong, i.e., that the idiosyncratic component ε_it is correlated with
a regressor after including control variables and fixed effects (e.g., for time, cross-
sectional units, or both), only then should the analysis be complemented with more
advanced but potentially costly methods, such as an IV approach.
18.5 IV-Free Methods
18.5.1 Introduction to IV-Free Methods
The difficulty of finding suitable instruments (e.g., Rossi 2014; Stock et al. 2002)
has sparked researchers’ interest in finding ways of accounting for endogeneity
in observational data without the need to use observed instruments. At least two
approaches have been discussed in the marketing literature.^14
Ebbes et al. (2005) develop the method of latent instrumental variables (LIV)
that provides identification through latent, discrete components in the endogenous
regressors. Similar to the observed IV approach, the LIV approach shares the
underlying idea that the endogenous regressor is a random variable that can be separated
into two components, i.e., p = τ + ν, where τ represents the exogenous variation
and ν the endogenous variation. The endogenous component ν is correlated with the
error term of the main regression equation through a bivariate normal distribution.
The LIV model can be estimated using a maximum likelihood approach, as we will
discuss in more detail below.
Park and Gupta (2012) introduce a method that directly models the correlation
between the endogenous regressor and the error using Gaussian copulas. Simply put,
the copula connects the marginal distributions of two or more variables that follow
any distribution (e.g., normal, non-normal). Park and Gupta (2012) add a copula
term to the model that represents the correlation between the endogenous variable
and the error term. By including this term, the effect of the endogenous regressor
can be estimated consistently. Both latent IV and Gaussian copulas exploit non-
normality in the endogenous regressor, and normality of the error term(s). We will
now discuss these two methods in more detail.
18.5.2 The Latent Instrumental Variables (LIV) Approach

LIV Approach
Similar to observed IV methods, the LIV approach decomposes the endogenous
variable into two parts: a source of exogenous information and an endogenous error
term. Without an observed IV to provide exogenous identifying information, the
LIV method requires distributional assumptions for the exogenous source (the latent
instrument) and the endogenous error term. Ebbes et al. (2005) approximate the
unobserved instrument by a latent discrete variable. That is, in the basic LIV model
^14 In addition to the two approaches we discuss here, Lewbel (1997) introduced a method, framed
as a potential solution to measurement error, that obtains identification from higher moments when
the endogenous regressor is skewed. Ebbes et al. (2009) provide a detailed analysis.
the main regression equation for Y is augmented with a linear, additive specification
for the endogenous regressor:

p_i = θ z̃_i + ν_i,

where θ is a 1×L vector of latent category means, which are different, and z̃_i is
an unobserved L×1 indicator variable. The probability of category l = 1, 2, ..., L
is π_l, with Σ_{l=1}^{L} π_l = 1, π_l > 0, and L > 1. The endogenous error term ν_i is
correlated with the error term ε_i. With bivariate normally distributed error terms,
it can be shown that the resulting LIV model belongs to the class of normal
mixture models with L components. As such, the LIV approach is a parametric,
likelihood-based approach. The parameters of the LIV model can be estimated
through maximum likelihood estimation by numerically optimizing the likelihood
function. The likelihood equation for observation i of the basic LIV model is given
in Eq. (5) of Ebbes et al. (2005). Ebbes et al. (2005, 2009) show that identification
of the LIV model requires that the latent instrument has a non-normal distribution,
assuming bivariate normally distributed error terms ε_i and ν_i. Based on a set of
simulation studies, they conclude that, when applied sensibly, the LIV approach
may be a useful alternative in situations where an endogenous regressor is present
but no good quality observed IVs are available.
The LIV approach has recently been applied, and extended beyond the linear
(cross-sectional) regression model, in several studies in marketing, among which
e.g., Abhishek et al. (2015); Grewal et al. (2010, 2013); Lee et al. (2015); Ma
et al. (2014); Narayan and Kadiyali (2015); Rutz et al. (2012); Rutz and Trusov
(2011); Sabnis and Grewal (2015); Saboo and Grewal (2012); Sonnier et al. (2011);
Srinivasan et al. (2013), and Zhang et al. (2009).
Good practice using LIV requires at least the following tasks for the researcher
(Ebbes et al. 2005). First, like traditional IV, the LIV approach assumes that p_i can
be decomposed linearly into two parts: an exogenous part and an endogenous
part. This is an assumption that cannot be tested. Hence, the development of
theoretical arguments based on an analysis of the problem motivating the investigation
of endogeneity remains crucial, as is standard in traditional IV estimation, and
should precede the application of LIV. Second, when a researcher finds evidence of
endogeneity using LIV, we suggest that the researcher contrasts the LIV estimates
and implications with traditional approaches, such as OLS. Here, the magnitude and
direction of the differences should conform to theoretical predictions. Third, the LIV
approach should be fitted with a range of choices for L (the number of categories of
the latent instrument). We suggest that the researcher fits at least L = 2, 3, 4, and 5.
When the estimated coefficient β_p varies strongly for different choices of L, then the
latent instrument is likely ill-defined, and the approach may not be suitable. Fourth,
the researcher should always investigate normality of the error term, particularly ε_i.
One can start with the OLS residuals, followed by an analysis of the fitted residuals
from the LIV model. And, lastly, the LIV approach exploits non-normality of the
endogenous regressor. Hence, one should always test that the endogenous regressor
is not normally distributed.

Simulation (LIV Approach)
We consider the same panel data example as in the previous section. Suppose that the
researcher observes weekly sales and price data of 175 hotels for 1 year, where price
is endogenous because it is correlated with the model error term ε. For simplicity, we
do not consider time and cross-sectional shocks. The researcher knows that prices
are also set based on cost, where cost is largely unaffected by local market drivers. If
she had access to cost data, she would have used cost as an observed IV. Hence, she
believes that prices also exhibit exogenous variation. She will use the LIV approach
to capture the exogenous variation and estimate the regression parameters.
We simulate 500 datasets and take a Gamma(1,1) distribution for the true
but unobserved instrument. Note that the LIV approach is misspecified, as the
"true" instrument is not discrete. However, Ebbes et al. (2009) show that this is
not problematic for LIV. The correlation of this unobserved instrument with price is
0.58 (which corresponds to an R² of 0.33). This true instrument is not used in
estimating the LIV model parameters because the researcher would not have access
to it. Instead, the LIV method tries to infer the latent IV. The results for OLS and
the LIV approach with L = 2, 3, 4 are given in Table 18.9.
Hence, the LIV approach works well when there exists exogenous variation that
is not normally distributed. When we test for non-normality of the endogenous
regressor p, we reject normality in 100% of the cases. We see that the LIV2, LIV3,
and LIV4 results are similar, but taking more categories results here in some small
efficiency gains. Furthermore, residual checks for ε_i using the LIV fitted residuals
show that these are normally distributed (we used the Anderson-Darling test, and
examined the skewness and kurtosis of the fitted residuals).
Table 18.9 Simulation results LIV approach

Estimator  Mean estimate  SD     95% CI
OLS        0.833          0.009  [0.816, 0.850]
LIV2       0.999          0.040  [0.920, 1.079]
LIV3       0.999          0.039  [0.921, 1.077]
LIV4       1.000          0.038  [0.923, 1.076]
Table 18.10 Illustration of the Gaussian copula method

Column 1: endogenous    Column 2: empirical       Column 3: inverse normal CDF
regressor p, sorted     cumulative density        of the cumulative density
from low to high        function H(p)             function, p* = Φ⁻¹(H(p))
100                     0.01                      -2.32
105                     0.02                      -2.05
...                     ...                       ...
390                     0.99                      2.32
400                     1.00 → 0.99               2.32
18.5.3 Gaussian Copula
Park and Gupta (2012) propose to correlate the normally distributed error term
with the non-normally distributed endogenous regressor directly through a copula
structure. Hence, they treat the endogenous variable as a random variable from any
(non-normal) marginal population distribution, which is correlated with the normal
error term of the main equation through a copula.
The Gaussian copula method to correct for endogeneity (Park and Gupta 2012) is
rather simple to implement through a CF approach^15 and has been used in a number
of recent marketing papers (e.g., Burmester et al. 2015; Datta et al. 2015). Similar
to the CF approach, it boils down to adding an extra term to the regression model
of interest. This term is p* = Φ⁻¹(H(p)), where H(p) is the empirical cumulative
density function (CDF) of p, and Φ⁻¹ is the inverse normal CDF.
Table 18.10 shows how it works for an example case with N = 100. First, we
sort the observations for the endogenous regressor p from low to high in column 1.
Suppose the minimum p observed is 100, the next is 105, and the highest two values are
390 and 400, respectively. Column 2 contains H(p), which is the probability mass
of observing a value less than or equal to that value. So for the lowest observation it
is 1/N = 0.01, for the second-lowest observation it is 2/N = 0.02, and so forth.
For the highest observation (p = 400), H(p) is 1.00, which has to be set to a value
just below that (1 − (1/N) = 0.99) to avoid an error in the next step. The next step
(column 3) is to calculate the inverse normal CDF: p* = Φ⁻¹(H(p)).
This p* term is the copula CF term; it controls for the correlation between
the error term and the endogenous regressor. In the spirit of the CF approach, we
add this term to the main Eq. (18.3) while keeping the original endogenous regressor p_i
in the equation:

y_i = β_0 + β_p p_i + β_c p*_i + ε_i.    (18.20)
^15 We thank Sungho Park for sharing his Gauss code with us.
We can now consistently estimate Eq. (18.20) with OLS. The parameter estimate of
central interest is β̂_p. The estimate of β̂_c allows us to test whether there is a significant
presence of endogeneity, which is the Hausman test discussed before.
The OLS standard errors, however, are incorrect because p*_i is an estimated
quantity. Park and Gupta (2012) suggest using bootstrapping. Each bootstrap
sample (m = 1, ..., M) has N observations drawn with replacement from the
original N observations. Next, for each bootstrap sample m, we estimate Eq. (18.21)
with OLS:

y_i^(m) = β_0 + β_p p_i^(m) + β_c p*_i^(m) + ε_i^(m),    (18.21)

where each sample m = 1, ..., M leads to a different point estimate for β_p, denoted
by β̂_p^(m). The standard deviation among this set of M estimates is now used as the
standard error:

se(β̂_p) = sqrt[ (1/(M−1)) Σ_{m=1}^{M} (β̂_p^(m) − β̄_p)² ],

where β̄_p = (1/M) Σ_{m=1}^{M} β̂_p^(m).
Given how simple the endogeneity correction via Gaussian copulas is, the
question arises whether this method exhibits serious downsides. First and foremost,
the key identifying assumption is the non-normal distribution of the endogenous
regressor. Our simulations below show that the method fails if the distribution of
the endogenous regressor is "too normal". Hence, we urge researchers to carefully
establish that the endogenous regressor is truly non-normal (e.g., through visual
inspection and tests such as the Kolmogorov-Smirnov or Shapiro-Wilk test).
Our simulations show that, conditional on a non-normal endogenous regressor
and a normal structural error, the method resolves the endogeneity bias and is about
as efficient as IV. Table 18.11 contains the results of a small simulation study in
which we continue the example from the LIV simulation above, where the endogenous regressor
is non-normally distributed. In our example, the endogenous regressor follows either
a Gamma, F, χ², t, or Poisson distribution, and we vary each distribution across a
set of three different shape parameters and perform 500 replications. This leads to a
Table 18.11 Simulation results Gaussian copulas method across all simulated cases

Estimator                  Mean estimate  SD     95% CI
OLS                        0.820          0.101  [0.817, 0.822]
Copula corrected estimate  0.995          0.071  [0.993, 0.997]
total of 7500 datasets. We discard 56 datasets in which a Shapiro-Wilk test does not
reject the null hypothesis of a normal distribution with p < 0.05.^16
Similar to the case of latent instruments, we highlight the relevance of
assessing the distributional assumptions, i.e., we must ascertain that the endogenous
regressor is not normally distributed and that the error is approximately normal.
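The entire copula correction, including the bootstrap for the standard error, can be sketched compactly. The following is a minimal Python version with an illustrative data-generating process in which the regressor has a Gamma(1,1) marginal and is linked to the normal error through an exact Gaussian copula (so the method's identifying assumption holds by construction); all parameter values are assumptions for illustration:

```python
import numpy as np
from scipy.stats import norm, gamma

# Endogeneity via an exact Gaussian copula with copula correlation 0.6.
rng = np.random.default_rng(3)
n = 2000
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n)
eps = z[:, 0]
p = gamma.ppf(norm.cdf(z[:, 1]), a=1.0)          # non-normal endogenous regressor
y = 2 + 1.0 * p + eps                            # true beta_p = 1

def copula_term(p):
    """p* = Phi^{-1}(H(p)), with the empirical CDF capped below 1 (Table 18.10)."""
    ranks = np.argsort(np.argsort(p)) + 1.0
    return norm.ppf(np.minimum(ranks / len(p), 1 - 1 / len(p)))

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

X = np.column_stack([np.ones(n), p, copula_term(p)])
b_ols = ols(X[:, :2], y)[1]                      # ignores endogeneity
b_cop = ols(X, y)[1]                             # copula-corrected estimate

# Bootstrap the standard error: p* is an estimated quantity, so plain OLS
# standard errors on the augmented regression are incorrect.
boot = []
for _ in range(200):
    i = rng.integers(0, n, size=n)
    Xb = np.column_stack([np.ones(n), p[i], copula_term(p[i])])
    boot.append(ols(Xb, y[i])[1])
se = np.std(boot, ddof=1)
print(round(b_ols, 2), round(b_cop, 2), round(se, 3))
```

Note that p* is recomputed within each bootstrap sample, mirroring how it is an estimated quantity in the original data.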
18.6 Advanced Topics in IV Estimation
When estimating more complex applied models that go beyond the standard
textbook case (e.g., multiple endogenous marketing instruments), IV estimation
offers challenges that are often not well understood. We will cover several of those.
18.6.1 Multiple Endogenous Regressors, Interactions, and Squared Terms

Multiple Endogenous Regressors
So far, we have mostly considered cases with one endogenous regressor and
one IV. However, the IV problem can easily be extended to a case with more than
one endogenous regressor. Consider the following model:

y = β_0 + β_1 p_1 + β_2 p_2 + β_3 w + ε,    (18.23)

where both p_1 and p_2 are potentially endogenous. We now require at least two IVs
and two first-stage regressions, which both contain all exogenous information (i.e.,
all exogenous regressors w and all instruments z):

p_1 = γ_0 + γ_1 z_1 + γ_2 z_2 + γ_3 w + ν_1, and    (18.24)
p_2 = δ_0 + δ_1 z_1 + δ_2 z_2 + δ_3 w + ν_2.    (18.25)
Equations (18.24) and (18.25) show that both first-stage regressions share the same
set of predictors. It is important to realize that each instrument needs to be uniquely
associated with one endogenous regressor, i.e., in this case of two endogenous
regressors one instrument, say z_1, must be strongly correlated with p_1, and the other
instrument z_2 must then be strongly correlated with p_2. We can still have that, in
addition, z_2 also correlates with p_1 and z_1 also correlates with p_2. But we cannot
have that z_1 correlates with both p_1 and p_2, while z_2 correlates with neither. Nor can
we have that z_1 and z_2 both correlate with p_1 and neither correlates with p_2.

^16 The standard deviation of the OLS estimates in Table 18.11 appears large. The reason is that
the distribution of the estimates is not normal, which is due to outliers that arise because of the
non-normal distribution of the endogenous regressor.
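The 2SLS steps for this case can be written out directly. A minimal Python sketch with assumed coefficient values (both first stages use all exogenous regressors and all instruments, as required):

```python
import numpy as np

# Two endogenous regressors p1, p2, two instruments z1, z2, one exogenous w.
rng = np.random.default_rng(4)
n = 5000
w, z1, z2 = rng.normal(size=(3, n))
e = rng.normal(size=n)                               # structural error
p1 = 1.0 * z1 + 0.3 * z2 + 0.5 * w + 0.5 * e + rng.normal(size=n)
p2 = 0.3 * z1 + 1.0 * z2 + 0.5 * w + 0.5 * e + rng.normal(size=n)
y = 2 + 1.0 * w + 1.0 * p1 - 0.5 * p2 + e            # true effects: 1.0 and -0.5

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Both first stages share the SAME predictor set: all of w, z1, z2.
Z = np.column_stack([np.ones(n), w, z1, z2])
p1_hat = Z @ ols(Z, p1)
p2_hat = Z @ ols(Z, p2)
b = ols(np.column_stack([np.ones(n), w, p1_hat, p2_hat]), y)
print(round(b[2], 2), round(b[3], 2))
```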
This explains why it is not sufficient, in the case of more than one endogenous
regressor, to perform independent tests for the strength of the instruments (e.g., F-tests)
for both first-stage regressions. Rather, we have to use a multivariate F-test (e.g., the
Angrist-Pischke F-test, Angrist and Pischke 2009, pp. 217–218, or the more recent
Sanderson-Windmeijer F-test, Sanderson and Windmeijer 2015). Importantly, we
recommend using an estimator that produces IV estimates directly, such as the ivreg
or ivreg2 command in Stata. The required code makes it directly obvious that we
use the same IVs (z1 and z2) for both endogenous regressors (p1 and p2):

ivreg2 y w (p1 p2 = z1 z2), first

The option "first", specified after the comma, produces output for the first-stage
regressions that includes a multivariate F-test as well as an over-identification
(Sargan) test if the number of IVs exceeds the number of endogenous variables. This
output should routinely be examined and discussed in any IV analysis.

Interaction Terms
In marketing we are often interested in estimating interactions that involve an
endogenous regressor. Consider for example a case where we expect that the price
effect changes over time. We can capture this effect by interacting price with an
(exogenous) seasonal dummy, say w, as follows:

y = β_0 + β_p p_1 + β_w w + β_pw p_1·w + ε.    (18.26)

To obtain consistent estimates we must treat the interaction p_1·w as a separate
endogenous regressor with its own first-stage regression and its own instrument:

p_1 = γ_0 + γ_1 z_1 + γ_2 w + γ_3 z_1·w + ν_1, and    (18.27)
p_1·w = δ_0 + δ_1 z_1 + δ_2 w + δ_3 z_1·w + ν_2.    (18.28)

We use z_1 as the instrument for p_1 and the interaction between w and z_1 as the
instrument to identify the interaction p_1·w. Again, we use the same set of regressors
in both first-stage regressions (Wooldridge 2015, p. 429). Note that the implication
is that we would need to argue that z_1·w is a valid instrument for p_1·w, just like with
any IV.
However, we can also use the CF approach that we discussed above to make
life a bit easier when dealing with interactions that involve endogenous regressors
(Wooldridge 2015, p. 428). If we wish to estimate (18.26) while controlling for
the endogeneity of p_1 and p_1·w by means of a control function, it is sufficient
(Wooldridge 2015, p. 428) to estimate only one first-stage regression (18.29) and
add the fitted residuals as an additional regressor to (18.26), as shown in (18.30):

p_1 = γ_0 + γ_1 z_1 + γ_2 w + ν_1,    (18.29)
y = β_0 + β_p p_1 + β_w w + β_pw p_1·w + β_ν ν̂_1 + ε.    (18.30)

It is important to keep in mind that in this control function approach, the standard
errors need to be derived using bootstrapping, as described in Sect. 18.3.2.
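A minimal Python sketch of this one-residual control function, with illustrative parameter values:

```python
import numpy as np

# CF for an interaction: one first stage for p1; its residual, added to the
# main equation, controls for the endogeneity of both p1 and p1*w.
rng = np.random.default_rng(5)
n = 5000
w, z1, nu = rng.normal(size=(3, n))
e = 0.6 * nu + rng.normal(0, 0.8, size=n)          # Cov(p1, e) != 0 via nu
p1 = 1 + 1.0 * z1 + 0.5 * w + nu
y = 2 + 1.0 * p1 + 0.5 * w + 0.5 * p1 * w + e      # true interaction = 0.5

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

Z = np.column_stack([np.ones(n), w, z1])
res = p1 - Z @ ols(Z, p1)                          # single first-stage residual
b = ols(np.column_stack([np.ones(n), p1, w, p1 * w, res]), y)
print(round(b[1], 2), round(b[3], 2))
```

(The point estimates are consistent, but, as noted in the text, their standard errors would have to come from a bootstrap over both stages.)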
This approach is also feasible if we implement the Gaussian copula approach
(see above) in an application with an interaction term. Here, including just
the one correction term (p*) will be sufficient to address the regressor's endogeneity
as well as the endogeneity in the interaction term.

Squared Terms
Similar considerations arise when we have to deal with squared terms of endogenous
regressors, for instance if we wish to test whether (endogenous) price has a nonlinear
effect on sales due to, e.g., saturation:

y = β_0 + β_1 p_1 + β_2 p_1² + ε.    (18.31)

We cannot estimate one first-stage regression for p_1, compute the predicted values
p̂_1 and p̂_1², and use these two terms in the second-stage regression instead of p_1
and p_1², imitating a 2SLS approach. This is occasionally termed the "forbidden
regression" (Wooldridge 2010, p. 267). Instead, we can choose one of the following
three approaches. The first two approaches have in common that they treat p_1 and
p_1² as two separate endogenous variables, which accordingly require two separate
first-stage regressions and two different instruments (Wooldridge 2010, p. 266).
Approach 1 uses the squared instruments as an additional source of identification:

p_1 = γ_0 + γ_1 z_1 + γ_2 z_1² + ν_1, and    (18.32)
p_1² = δ_0 + δ_1 z_1 + δ_2 z_1² + ν_2.    (18.33)

Approach 2 includes the square of the projection of the endogenous regressor on the
instrument as the instrument for the squared endogenous regressor:

p_1 = γ_0 + γ_1 z_1 + ν_1,    (18.34)
p_1² = δ_0 + δ_1 z_1 + δ_2 p̂_1² + ν_2,    (18.35)

where p̂_1² is the squared predicted value of p_1 after estimating (18.34).
Lastly, we could consider the square of p_1 as an interaction of p_1 with itself, such that
we could apply the CF approach discussed above for interactions. That is, we would
estimate the first-stage regression (18.34) and include its residuals as an
additional regressor in (18.31).
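Approach 2 can be sketched as follows (a minimal Python simulation with illustrative parameter values):

```python
import numpy as np

# Approach 2 for a squared endogenous regressor: instrument p1^2 with the
# squared projection of p1 on z1. (Never simply square the first-stage fit
# and plug it in -- that is the "forbidden regression".)
rng = np.random.default_rng(6)
n = 5000
z1, nu = rng.normal(size=(2, n))
e = 0.6 * nu + rng.normal(0, 0.8, size=n)
p1 = 1 + z1 + nu                                  # endogenous regressor
y = 2 + 1.0 * p1 + 0.5 * p1 ** 2 + e              # true quadratic effect = 0.5

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

Xz = np.column_stack([np.ones(n), z1])
p1_proj = Xz @ ols(Xz, p1)                        # projection of p1 on z1

# 2SLS treating p1 and p1^2 as two endogenous regressors,
# with instrument set {z1, p1_proj^2}.
Z = np.column_stack([np.ones(n), z1, p1_proj ** 2])
p1_hat = Z @ ols(Z, p1)
p1sq_hat = Z @ ols(Z, p1 ** 2)
b = ols(np.column_stack([np.ones(n), p1_hat, p1sq_hat]), y)
print(round(b[1], 2), round(b[2], 2))
```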
18.6.2 Binary or Categorical Endogenous Regressor
Suppose we are interested in the effect of a binary endogenous variable d_1 on y:

y = β_0 + β_1 d_1 + ε.    (18.36)
To obtain a consistent estimate for this effect, researchers have different options.
The easiest option arises from the fact that IV does not make any assumption on
the nature of the endogenous regressor. Hence, we can just use the 2SLS approach
discussed above and not worry about the binary nature of the endogenous regressor
(Wooldridge 2010, p. 90). Leenheer et al. (2007) use this approach to study the effect
of loyalty program membership (an endogenous 0–1 variable) on share of wallet.
This approach, however, may be less efficient than the following three alternatives.
A first alternative approach uses a probit model^17 in the first stage,
P(d_1 = 1) = Φ(zδ), instead of a linear model, and then uses the fitted probabilities,
P̂(d_1 = 1), as an instrument for the endogenous binary regressor (Wooldridge
2010, p. 939), i.e., the first stage is the linear regression d_1 = δ_0 + δ_1 P̂(d_1 = 1) + ν.
The predicted values computed from this linear regression are then used in the main
equation in place of d_1. In other words, the fitted probabilities from the first-stage
probit are used as IVs for d_1. Note that this approach is not equivalent to using the
predicted probabilities P̂(d_1 = 1) as a regressor in the main equation, which will
lead to inconsistent estimates and should thus be avoided (Wooldridge 2010, p. 941).
A second alternative approach also fits a first-stage probit but includes a
generalized residual as a control function (e.g., Germann et al. 2015; Wooldridge
2010, p. 949), which is given by d_1 λ(zδ̂) − (1 − d_1) λ(−zδ̂), where λ(·) =
φ(·)/Φ(·) is the inverse Mills ratio, i.e., the ratio of the normal pdf and cdf.^18
A third alternative approach is also in the spirit of the CF approach: it uses
the first-stage probit (or another non-linear estimator) to compute the residuals,
d_1 − P̂(d_1 = 1), and includes these as additional regressors in the main equation.
^17 See Vol. I, Sect.
^18 See, for example, Franses and Paap (2001, p. 138).
This approach is known as the two-stage residual inclusion (2SRI) and is discussed
in detail in Terza et al. (2008). Danaher et al. (2015) provide an application of 2SRI
in marketing to a setting where the endogenous variable of interest is whether or not
consumers obtained a mobile coupon.
We are not aware of any study that compares the various approaches of dealing
with binary or categorical endogenous variables.
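The "easiest option" mentioned above, plain 2SLS that ignores the binary nature of the regressor, can be sketched as follows (an illustrative Python simulation with assumed parameter values):

```python
import numpy as np

# Plain 2SLS with a binary endogenous regressor d1 and instrument z.
rng = np.random.default_rng(7)
n = 20000
z = rng.normal(size=n)                        # instrument
u = rng.normal(size=n)
d1 = (z + u > 0).astype(float)                # endogenous 0/1 regressor
e = 0.5 * u + rng.normal(0, 0.8, size=n)      # Cov(d1, e) != 0 through u
y = 2 + 1.0 * d1 + e                          # true effect = 1

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_ols = ols(np.column_stack([np.ones(n), d1]), y)[1]
Z = np.column_stack([np.ones(n), z])
d_hat = Z @ ols(Z, d1)                        # linear first stage
b_2sls = ols(np.column_stack([np.ones(n), d_hat]), y)[1]
print(round(b_ols, 2), round(b_2sls, 2))
```

2SLS makes no assumption about the nature of d_1, so the linear first stage is legitimate here; the probit-based alternatives above mainly promise efficiency gains.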
18.6.3 Selection Models
A case that is closely related to a binary endogenous variable is a selection model.
The key difference, however, is that in a selection model the binary variable is not an
independent variable in the main equation, but it determines whether the regressand
of interest is observed or not. Specifically, consider the following main equation
for y1:
However, y1is only observed when a binary variable, y2equals 1. Suppose we have
the following model for y2:
One example is a household’s weekly expenditure in a supermarket. First the
household needs to choose to buy something in the supermarket (y2D1) and
conditional on this decision, the household decides to spend a certain monetary
amount y1(Van Heerde et al. 2008). The error terms "1and "2follow a bivariate
normal distribution with a nonzero correlation. Estimating Eq. (18.39) without
taking into account the selection process leads to biased estimates (Wooldridge
2010, p. 804). Instead, we first need to estimate a probit model for Eq. (18.40):
P.y2D1/ D˚.ı
0Cı1w2/: (18.41)
Next, we calculate the "inverse Mills ratio" (Wooldridge 2010, p. 805):

λ̂ = φ(δ̂_0 + δ̂_1 w_2) / Φ(δ̂_0 + δ̂_1 w_2),    (18.42)

where φ(·) is the pdf of the standard normal distribution. Finally, we add the inverse
Mills ratio to the main equation:

y_1 = β_0 + β_1 x_1 + β_λ λ̂ + ε.    (18.43)
Equation (18.43) can be estimated consistently with OLS. Note that w_2 in (18.41)
and (18.42) is not an instrument in the sense that we would have to make equally strong
assumptions as in the case of IV estimation. Rather, we choose w_2 to
avoid that the predicted probabilities from (18.41), and hence the "inverse Mills
ratio", are a linear combination of the regressors in (18.39), as this would lead to
problems of multicollinearity (Wooldridge 2010, p. 806). We refer to Wooldridge
(2010, Chap. 19) for more detail on selection models.
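The two steps can be sketched as follows. This is a minimal Python version in which the probit is fit by maximum likelihood with scipy; the data-generating process and all parameter values are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Heckman two-step: probit for selection, then OLS with the inverse Mills ratio.
rng = np.random.default_rng(8)
n = 10000
x1, w2, e2 = rng.normal(size=(3, n))
e1 = 0.6 * e2 + rng.normal(0, 0.8, size=n)     # correlated errors (rho > 0)
sel = 0.5 + 1.0 * w2 + 0.8 * x1 + e2 > 0       # y1 observed only if sel is True
y1 = 2 + 1.0 * x1 + e1                         # true slope = 1

# Step 1: probit for P(sel = 1) by maximum likelihood.
Xp = np.column_stack([np.ones(n), w2, x1])
q = 2 * sel - 1                                # +1 selected, -1 not selected

def negll(d):
    return -np.sum(norm.logcdf(q * (Xp @ d)))

d_hat = minimize(negll, np.zeros(3), method="BFGS").x

# Step 2: inverse Mills ratio, added to the outcome equation on selected cases.
xb = Xp @ d_hat
imr = norm.pdf(xb) / norm.cdf(xb)
Xo = np.column_stack([np.ones(n), x1, imr])[sel]
b = np.linalg.lstsq(Xo, y1[sel], rcond=None)[0]

Xn = np.column_stack([np.ones(n), x1])[sel]    # naive OLS on selected sample
b_naive = np.linalg.lstsq(Xn, y1[sel], rcond=None)[0]
print(round(b_naive[1], 2), round(b[1], 2))
```

Here w_2 enters the selection equation but not the outcome equation, which keeps the inverse Mills ratio from being collinear with x_1, exactly the role of w_2 described above.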
18.6.4 Limited Dependent Variables
In many marketing settings, the dependent variable is not a continuous measure such
as sales, but it has a more limited distribution, i.e., it can only assume discrete values
or it can only obtain values within a certain range. Examples include:
- a binary dependent variable (e.g., does the consumer buy or not);
- a multinomial dependent variable (which brand does the consumer buy);
- a bivariate binary dependent variable (does the consumer buy online (yes or no) and/or offline (yes or no));
- a truncated dependent variable (monetary amount spent, which has to be positive);
- a fractional dependent variable (share of wallet, between 0 and 1);
- a discrete dependent variable (how many hotel nights does a consumer book).^19
Such "limited-dependent" variables are very common in marketing applications,
and very often there are endogeneity concerns surrounding the independent variables.
To address these concerns, the CF approach is the recommended course of
action (Petrin and Train 2010). For example, suppose the dependent variable is
brand choice, and price is the endogenous regressor. We can set up a utility model
for brand choice:

Utility(brand j) = β_0j + β_1 p_j + ε_j.    (18.44)

To account for the endogeneity of p_j, we estimate a first-stage regression that follows
the same reasoning as in the standard IV case (i.e., valid and strong instruments),
and save the residuals ν̂_j = p_j − z_j γ̂. We add these residuals to the utility equation
as the control function term:

Utility(brand j) = β_0j + β_1 p_j + β_2 ν̂_j + ε_j.    (18.45)
^19 Compare Sects. 8.2 and 8.5, Vol. I.
In addition, the researcher has to make an assumption on the distribution of εj. If
we assume a multivariate normal distribution, Eq. (18.45) is estimated as a probit
model. If εj has an extreme value distribution, the logit model follows
(Petrin and Train 2010). As described in Sect. 18.3.1, the standard errors for the
estimates of Eq. (18.45) need to be corrected for the fact that ν̂j is an estimated
quantity. The bootstrap approach discussed in Sect. 18.3.2 for the control function
can be implemented for this brand choice model in the same manner, as well as in
many other nonlinear or limited dependent variable models.
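As a sketch of this control function logic, the following simulation (our own illustrative assumption, not the chapter's data) estimates a binary buy/no-buy logit with an endogenous price, once naively and once with the first-stage residual included:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 8000

cost = rng.normal(size=n)            # instrument: cost shifter
xi = rng.normal(size=n)              # unobserved demand shock (in the error)
price = 1.0 + 0.8 * cost + xi        # firms price on the demand shock

# Utility as in (18.44): xi makes price endogenous.
utility = 1.0 - 1.5 * price + 2.0 * xi + rng.logistic(size=n)
buy = (utility > 0).astype(float)

# First stage: regress price on the instrument, save the residuals.
Z = np.column_stack([np.ones(n), cost])
resid = price - Z @ np.linalg.lstsq(Z, price, rcond=None)[0]

def neg_loglik(beta, X, y):
    """Negative log-likelihood of a binary logit."""
    p = np.clip(1.0 / (1.0 + np.exp(-X @ beta)), 1e-10, 1 - 1e-10)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Second stage: logit with the residual as the control function term (18.45).
X_cf = np.column_stack([np.ones(n), price, resid])
X_naive = np.column_stack([np.ones(n), price])
b_cf = minimize(neg_loglik, np.zeros(3), args=(X_cf, buy)).x
b_naive = minimize(neg_loglik, np.zeros(2), args=(X_naive, buy)).x
print("naive price coefficient:", b_naive[1])
print("CF price coefficient:", b_cf[1])
```

In this stylized setup the residual fully absorbs the demand shock, so the CF estimate lands near the true price coefficient, while the naive logit is biased toward zero and beyond. In a real application the standard errors would additionally need to be bootstrapped, as noted above.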
Andrews and Ebbes (2014) investigate the properties of IV approaches in
logit-based demand models for store-level data. Their study highlights the robust
performance of the CF approach as proposed by Petrin and Train (2010). They
also find good properties of readily available panel-based instruments that can
be computed from the data at hand. These store-mean-centered instruments are
used in a first-stage regression for price, and the residuals are then included in
the logit-based demand model as control functions. The validity of these readily
available instruments rests on the same assumptions as the fixed effects approaches
discussed before, i.e., that the unobserved demand shocks are common
to stores. However, as standard fixed effects approaches are only available for
linear regression models, the approach proposed in Andrews and Ebbes (2014)
demonstrates how the fixed effects approach can potentially be extended to logit-based
panel data models without having to estimate a large set of (store-level) fixed effects.
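A sketch of how such a panel-internal, store-mean-centered instrument can be constructed (the data frame and column names are illustrative assumptions, not the procedure's required naming):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
stores, weeks = 20, 52
df = pd.DataFrame({
    "store": np.repeat(np.arange(stores), weeks),
    "week": np.tile(np.arange(weeks), stores),
    "price": rng.normal(5.0, 1.0, size=stores * weeks),
})

# Internal instrument: price deviation from its store mean. By construction
# it is orthogonal to time-invariant store-level factors, mirroring the
# identifying assumption of a fixed-effects approach.
df["price_demeaned"] = df["price"] - df.groupby("store")["price"].transform("mean")
print(df[["store", "week", "price", "price_demeaned"]].head())
```

The demeaned price would then serve as the instrument in the first-stage regression for price, with the residuals entering the logit demand model as control functions.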
18.7 Discussion
18.7.1 Endogeneity: A Thorny Issue
Endogeneity is, as Van Heerde et al. (2005) put it, often used as a crutch by reviewers
to justify recommending that a marketing article be rejected. The issue is
that endogeneity can potentially invalidate causal inferences. Outsiders can always
raise this concern for observational data, and it can never be fully excluded based on
arguments alone or on statistical grounds. Correcting for endogeneity requires
additional assumptions and conditions, it can make the problem worse, and we are
never sure which estimate is the true one. Finally, correcting for endogeneity leads
to worse in- and out-of-sample fit (Ebbes et al. 2011). In sum, endogeneity is truly
a thorny issue.
While we agree that endogeneity is a potentially very serious issue, we also
believe that it is key to establish a best practice around it. First, as researchers, we
have to think carefully about what aspect of the variation in the regressor is potentially
endogenous. Very often in marketing, endogeneity problems arise from omitted
variables. A first piece of advice therefore is to think through exactly what information
is missing, and then make every effort to collect data and add additional control
variables to the model.
For example, in the hotel example discussed throughout this chapter, endogeneity
arose because of (1) unobserved demand shocks due to major events and (2) firms
capitalizing on unobserved quality differences. Rather than right away jumping to
an IV-based or IV-free approach to address endogeneity, our advice is to collect
additional data. This is in line with Germann et al. (2015, p. 4), who suggest
starting with "rich data models" before considering IV estimation. The main goal is
to collect an extensive set of data such that the most relevant control variables
that correlate with the dependent and independent variables are included. Of course,
such an approach is only feasible if such an extensive set of control variables
is available. Similarly, Rossi (2014) argues that a convincing argument must be
made that there is a serious endogeneity problem given the set of control variables,
as in the absence of good IVs, the researcher is better off trying to measure these
unobservable variables and including them in the model. Drawing on the hotel
example that we used throughout this chapter, an event calendar showing the major
events allows us to capture these events through dummy variables. Including such
dummies in the model would address the endogeneity arising from managers raising
prices during these events. Quality ratings from websites such as TripAdvisor allow
us to make the previously unobserved quality differences observable, and including
them in the model would alleviate much, if not all, of the endogeneity problem due
to managers setting prices based on quality differences.
Second, in a panel data setting, if the endogeneity concern arises because firms
set their marketing instruments based on cross-sectional differences, (firm) fixed
effects fully address this problem. Similarly, if the endogeneity arises because firms
set their marketing instruments based on seasonal patterns, fixed time effects can
completely take care of the problem. These fixed effects can be considered as control
variables that can take care of some of the most pressing endogeneity issues in panel data.
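The following simulation sketches this point: a time-invariant unobserved quality drives both price and sales, biasing pooled OLS, while the within (fixed effects) estimator recovers the true price effect. All numbers and names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
firms, periods = 200, 20
quality = rng.normal(size=firms)                     # unobserved, per firm
q = np.repeat(quality, periods)
price = 5.0 + 1.0 * q + rng.normal(size=firms * periods)   # price reflects quality
sales = 10.0 - 2.0 * price + 3.0 * q + rng.normal(size=firms * periods)

# Pooled OLS: biased because quality sits in the error and correlates with price.
X = np.column_stack([np.ones(firms * periods), price])
b_pooled = np.linalg.lstsq(X, sales, rcond=None)[0]

# Within (fixed effects) estimator: demean by firm, so quality drops out.
fid = np.repeat(np.arange(firms), periods)

def demean(v):
    means = np.bincount(fid, weights=v) / periods
    return v - means[fid]

b_fe = np.linalg.lstsq(demean(price).reshape(-1, 1), demean(sales), rcond=None)[0]
print("pooled OLS price effect:", b_pooled[1])
print("fixed effects price effect:", b_fe[0])
```

The within transformation is numerically equivalent to including a full set of firm dummies, which is why firm fixed effects fully absorb endogeneity driven by cross-sectional differences.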
Third, it is crucial that the researcher describes potential endogeneity problems in
the order of importance and indicates which control variables are included to address
the problem. Researchers should combine this with a clear statement regarding their
identifying assumption, i.e., under which assumption the estimates can be treated as
causal. Fourth, it is easier to raise endogeneity concerns than to invalidate them,
i.e., the burden of proof rests with the researcher. Therefore, reviewers should
not raise these concerns lightly, and not raise them without explicitly providing
strong arguments why the problem may be substantial. Further, reviewers should
acknowledge that carefully chosen control variables—as described above—can
often address a very large part of the endogeneity problem. Of course, a reviewer
can always argue for more far-fetched endogeneity issues, but the question then
becomes whether it is truly worth addressing these, because the cure (ill-chosen IVs,
for example) can be worse than the disease (the remaining bias after controlling for
a variety of endogeneity mechanisms), as we have shown over and over again in this chapter.
Using IV also affects the way we have to think about the coefficient of interest
reflecting a causal effect in the sense that it is an average treatment effect. Strictly
speaking, we can only consider the estimate an average treatment effect if all units
(e.g., hotels, brands) respond to the instrument in the same way. Coming back to our
hotel example, this would imply that all hotels increase their prices by the same
amount in response to a one-unit increase in cost. We could not interpret the price
coefficient as causal, for instance, if some hotels would react to cost shocks and
others would not. Intuitively speaking, there is then no exogenous variation in price
that is captured by the cost instrument for some hotels, and for those hotels the
price effect in the demand model becomes essentially un-identified. Consequently,
the estimated price effect in the demand model using 2SLS cannot be interpreted as
an average treatment effect anymore. We recommend that these aspects are taken into
consideration when interpreting the results of an IV model, and refer the reader to
Bascle (2008) for more details.
Some may wonder whether we should use IV at all, given the limitations and
recent discussions (e.g., Rossi 2014). IVs need to satisfy two conditions that are
essentially polar opposites: they need to be strong (be a strong predictor of the
endogenous regressor) but exogenous at the same time (so not correlated at all with
the error term of demand). Hence, theoretical arguments that make an instrument
exogenous also work to make the instrument weak (and vice versa). Importantly,
only the strength condition can be statistically tested. While we can provide some
statistical evidence for the exogeneity assumption (if we have more IVs than
endogenous regressors), this can be done only to a limited extent and some even
argue exogeneity cannot be tested at all (Rossi 2014).
IV-free methods such as the LIV approach or the Gaussian copulas rely on yet
different conditions that are not always satisfied (e.g., the regressor may have a
normal distribution whereas it should not, or assumptions regarding the structural
error terms are violated).
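A simple diagnostic in this spirit is to test the endogenous regressor for non-normality before applying a copula or LIV correction; the sketch below (with simulated, illustrative prices) uses a Shapiro-Wilk test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
price_skewed = rng.lognormal(mean=1.0, sigma=0.5, size=500)   # clearly non-normal
price_normal = rng.normal(loc=3.0, scale=0.5, size=500)       # approximately normal

p_skewed = stats.shapiro(price_skewed).pvalue
p_normal = stats.shapiro(price_normal).pvalue
print("skewed regressor p-value:", p_skewed)
print("normal regressor p-value:", p_normal)
```

A very small p-value (the skewed case) is consistent with the non-normality the copula and LIV approaches require; a large one warns that the identifying assumption likely fails and the correction should not be trusted.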
It is important to note that every non-experimental empirical research study that
seeks to make causal inferences must rest on identifying assumptions that are not
testable without experiments (Pearl 2009). Therefore, when we use an IV approach,
we replace one assumption (the error is uncorrelated with the independent variable)
by a new assumption (the instrument is uncorrelated with the error). Only theory
and conceptual arguments can tell us which approach is more
appropriate. Recognizing that there are many potential models available to deal with
endogeneity, Germann et al. (2015) suggest that researchers explore the meaning
of the various models’ identifying assumptions in light of their context and then
determine the appropriate specifications, as opposed to mechanically estimating
many potential models and then only reporting the model or models that provide the
most desirable results. In that sense, researchers should see themselves as "regression
engineers" instead of "regression mechanics" (Angrist and Pischke 2009), and
researchers should understand and discuss the meaning of the model identifying
assumptions for their particular context (Germann et al. 2015, pp. 10–11). This
could include a discussion of a subset of models whose identifying assumptions
make the most sense given the researcher's problem context. In particular, robust
findings would strengthen the beliefs in the findings and conclusions. But Germann
et al. (2015) also note that if the findings are not robust, then a discussion of the
identifying assumptions is needed to assess which results, if any, to believe and
whether an altogether different approach is needed.
18.7.2 What Should Be Reported
We believe that an empirical study in marketing should report the following to
address endogeneity concerns:
1. What are the core potential endogeneity problems and which control variables
in the model are included to avoid the problem? As discussed in Sect. 18.7.1,
these can be dummies for special events, variables capturing quality differences
or cross-sectional and time fixed effects, and many more. All potential solutions
to endogeneity problems should be accompanied by clear statements on the
identifying assumptions. Germann et al. (2015) discuss the respective identifying
assumptions for many common regression models in marketing (e.g.,
Table 2 in Germann et al. 2015).
2. What remaining potential endogeneity problem is there? If there is not a plausible
problem, there is no need to address it.
3. If there is a potential endogeneity problem, consider an IV approach or an IV-
free approach, or ideally a combination of observed IV and IV-free approaches
to check the robustness of the findings (e.g., Grewal et al. 2010 and Narayan
and Kadiyali 2015 use an LIV approach to assess robustness of their findings).
For the IV-based approaches (including control functions), strong and detailed
theoretical arguments for the validity of the IVs should be provided. In the case
of over-identification, these may be augmented with Sargan or Hansen J-tests.
For the strength of the IVs, the appropriate statistics must be reported (see Sect.
18.3.1). Finally, the Hausman test for the presence of endogeneity should be
reported. If the Hausman test indicates there is no significant endogeneity
issue, the advice is to use the non-corrected estimates (e.g., OLS) as the focal estimates.
For the IV-free approaches, the underlying assumptions must be investigated (e.g., for
both the LIV and the Gaussian copula approach, the endogenous regressor must be non-normally
distributed, and the structural error term(s) must be normally distributed). We
have discussed good practices for both approaches in this chapter.
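The Hausman test mentioned in item 3 can be implemented in regression form (the Durbin-Wu-Hausman variant): include the first-stage residual in the structural equation and t-test its coefficient. The sketch below uses simulated, illustrative data and names:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 2000
z = rng.normal(size=n)                         # instrument
v = rng.normal(size=n)                         # source of endogeneity
x = 0.9 * z + v                                # endogenous regressor
y = 1.0 + 2.0 * x + 1.5 * v + rng.normal(size=n)

# First stage: regress x on z, save the residual.
Z = np.column_stack([np.ones(n), z])
vhat = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]

# Augmented regression: a significant coefficient on vhat signals endogeneity.
X = np.column_stack([np.ones(n), x, vhat])
b = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ b
sigma2 = resid @ resid / (n - X.shape[1])
cov = sigma2 * np.linalg.inv(X.T @ X)
t_stat = b[2] / np.sqrt(cov[2, 2])
p_val = 2 * stats.t.sf(abs(t_stat), df=n - X.shape[1])
print("t on first-stage residual:", t_stat, "p-value:", p_val)
```

A significant coefficient on the residual rejects exogeneity of x; an insignificant one supports reporting the non-corrected (e.g., OLS) estimates, in line with the advice above.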
Figure 18.3 gives a roadmap on how to address endogeneity issues in marketing,
following the guidelines we have laid out in this chapter. In the presence of
panel data, the roadmap discussed in Germann et al. (2015, pp. 3–11), which
considers rich data models, unobserved effects models, IV models, and panel
internal instrument models, provides more details.
Fig. 18.3 Roadmap for addressing endogeneity concerns in empirical marketing studies
18.7.3 Software
To the best of our knowledge, Stata is the commercial package that has the most
complete and up-to-date set of procedures to deal with endogeneity concerns. The
Stata package ivreg2 has a very comprehensive set of tests and estimation methods.
We refer to Baum et al. (2007) for details. Further, researchers can write their own
code in Stata, e.g., for the Gaussian copula approach to correct for endogeneity.
R is a free-of-charge statistical software platform that is open source (https://). Here,
researchers can implement the relevant estimators themselves, or use one of several
endogeneity-addressing packages, e.g., for standard IV estimation (ivmodel, ivpack,
and tosls), IV estimation for panel data (ivpanel and ivfixed), IV estimation for
probit models (ivprobit), and Bayesian IV estimation.
Matrix languages such as GAUSS and Matlab are ideally suited to implement
estimation procedures that are not available off the shelf. Examples include the
Gaussian Copula and LIV approaches to correct for endogeneity—both of which
we have programmed in GAUSS ourselves.
References

Abhishek, V., Hosanagar, K., Fader, P.S.: Aggregation bias in sponsored search data: the curse and
the cure. Mark. Sci. 34, 59–77 (2015)
Albers, S., Mantrala, M.K., Sridhar, S.: Personal selling elasticities: a meta-analysis. J. Mark. Res.
47, 840–853 (2010)
Andrews, R.L., Ebbes, P.: Properties of instrumental variables estimation in logit-based demand
models: finite sample results. J. Model. Manag. 9, 261–289 (2014)
Angrist, J.D., Krueger, A.B.: Does compulsory school attendance affect schooling and earnings?
Q. J. Econ. 106, 979–1014 (1991)
Angrist, J.D., Pischke, J.S.: Mostly Harmless Econometrics. An Empiricist’s Companion. Prince-
ton University Press, Princeton (2009)
Ataman, M.B., Mela, C.F., Van Heerde, H.J.: Building brands. Mark. Sci. 27, 1036–1054 (2008)
Ataman, M.B., Van Heerde, H.J., Mela, C.F.: The long-term effect of marketing strategy on brand
sales. J. Mark. Res. 47, 866–882 (2010)
Bascle, G.: Controlling for endogeneity with instrumental variables in strategic management
research. Strateg. Organ. 6, 285–327 (2008)
Baum, C.F., Schaffer, M.E., Stillman, S.: Enhanced routines for instrumental variables/GMM
estimation and testing. Stata J. 7, 465–506 (2007)
Bijmolt, T.H.A., Van Heerde, H.J., Pieters, R.G.M.: New empirical generalizations on the
determinants of price elasticity. J. Mark. Res. 42, 141–156 (2005)
Bound, J., Jaeger, D.A., Baker, R.: The cure can be worse than the disease: a cautionary tale
regarding instrumental variables. Working Paper 137, National Bureau of Economic Research (1993)
Bound, J., Jaeger, D.A., Baker, R.M.: Problems with instrumental variables estimation when the
correlation between the instruments and the endogenous explanatory variable is weak. J. Am.
Stat. Assoc. 90, 443–450 (1995)
Bronnenberg, B.J., Mahajan, V.: Unobserved retailer behavior in multimarket data: joint spatial
dependence in market shares and promotion variables. Mark. Sci. 20, 284–299 (2001)
Burmester, A.B., Becker, J.U., Van Heerde, H.J., Clement, M.: The impact of pre- and post-launch
publicity and advertising on new product sales. Int. J. Res. Mark. 32, 408–417 (2015)
Danaher, P.J., Smith, M.S., Ranasinghe, K., Danaher, T.S.: Where, when, and how long: factors
that influence the redemption of mobile phone coupons. J. Mark. Res. 52, 710–725 (2015)
Datta, H., Foubert, B., Van Heerde, H.J.: The challenge of retaining customers acquired with free
trials. J. Mark. Res. 52, 217–234 (2015)
Dinner, I.M., Van Heerde, H.J., Neslin, S.A.: Driving online and offline sales: the cross-channel
effects of traditional, online display, and paid search advertising. J. Mark. Res. 51, 527–545 (2014)
Ebbes, P., Böckenholt, U., Wedel, M.: Regressor and random-effects dependencies in multilevel
models. Statistica Neerlandica. 58(2), 161–178 (2004)
Ebbes, P., Papies, D., Van Heerde, H.J.: The sense and non-sense of holdout sample validation in
the presence of endogeneity. Mark. Sci. 30, 1115–1122 (2011)
Ebbes, P., Wedel, M., Böckenholt, U.: Frugal IV alternatives to identify the parameter for an
endogenous regressor. J. Appl. Econ. 24, 446–468 (2009)
Ebbes, P., Wedel, M., Böckenholt, U., Steerneman, T.: Solving and testing for regressor-error
(in)dependence when no instrumental variables are available: with new evidence for the effect
of education on income. Quant. Mark. Econ. 3, 365–392 (2005)
Franses, P.H., Paap, R.: Quantitative Models in Marketing Research. Cambridge University Press,
Cambridge (2001)
Gao, C., Lahiri, K.: Further consequences of viewing LIML as an iterated Aitken estimator. J.
Econ. 98, 187–202 (2000)
Germann, F., Ebbes, P., Grewal, R.: The chief marketing officer matters! J. Mark. 79(3), 1–22 (2015)
Gijsenberg, M.J., Van Heerde, H.J., Verhoef, P.C.: Losses loom longer than gains: modeling the
impact of service crises on perceived service quality over time. J. Mark. Res. 52, 642–656 (2015)
Grewal, R., Chandrashekaran, M., Citrin, A.V.: Customer satisfaction heterogeneity and share-
holder value. J. Mark. Res. 47, 612–626 (2010)
Grewal, R., Kumar, A., Mallapragada, G., Saini, A.: Marketing channels in foreign markets: control
mechanisms and the moderating role of multinational corporation headquarters–subsidiary
relationship. J. Mark. Res. 50, 378–398 (2013)
Hausman, J. A.: Valuation of new goods under perfect and imperfect competition. In: Bresnahan,
T. F., Gordon R. J. (eds.) The Economics of New Goods, pp. 207–248. University of Chicago
Press, Chicago (1996)
Karaca-Mandic, P., Train, K.: Standard error correction in two-stage estimation with nested
samples. Econ. J. 6, 401–407 (2003)
Kim, J.-S., Frees, E.W.: Omitted variables in multilevel models. Psychometrika. 71, 659–690 (2006)
Kim, J.-S., Frees, E.W.: Multilevel modeling with correlated effects. Psychometrika. 72, 505–533 (2007)
Kleibergen, F., Zivot, E.: Bayesian and classical approaches to instrumental variable regression. J.
Econ. 114, 29–72 (2003)
Kremer, S.T.M., Bijmolt, T.H.A., Leeflang, P.S.H., Wieringa, J.E.: Generalizations on the effec-
tiveness of pharmaceutical promotional expenditures. Int. J. Res. Mark. 25, 234–246 (2008)
Lee, J.-Y., Sridhar, S., Henderson, C.M., Palmatier, R.W.: Effect of customer-centric structure on
long-term financial performance. Mark. Sci. 34, 250–268 (2015)
Leenheer, J., Van Heerde, H.J., Bijmolt, T.H.A., Smidts, A.: Do loyalty programs really enhance
behavioral loyalty? An empirical analysis accounting for self-selecting members. Int. J. Res.
Mark. 24, 31–47 (2007)
Levitt, S.D.: The effect of prison population size on crime rates: evidence from prison overcrowd-
ing litigation. Q. J. Econ. 111, 319–351 (1996)
Lewbel, A.: Constructing instruments for regressions with measurement error when no additional
data are available, with an application to patents and R&D. Econometrica. 65, 1201–1213 (1997)
Ma, L., Krishnan, R., Montgomery, A.L.: Latent homophily or social influence? An empirical
analysis of purchase within a social network. Manag. Sci. 61, 454–473 (2014)
Murray, M.P.: Avoiding invalid instruments and coping with weak instruments. J. Econ. Perspect.
20, 111–132 (2006)
Narayan, V., Kadiyali, V.: Repeated interactions and improved outcomes: an empirical analysis of
movie production in the United States. Manag. Sci. 62, 591–607 (2015)
Nevo, A.: Measuring market power in the ready-to-eat cereal industry. Econometrica. 69, 307–342 (2001)
Otter, T., Gilbride, T.J., Allenby, G.M.: Testing models of strategic behavior characterized by
conditional likelihoods. Mark. Sci. 30, 686–701 (2011)
Pagan, A.: Some consequences of viewing LIML as an iterated Aitken estimator. Econ. Lett. 3,
369–372 (1979)
Park, S., Gupta, S.: Handling endogenous regressors by joint estimation using copulas. Mark. Sci.
31, 567–586 (2012)
Pearl, J.: Causality: Models, Reasoning, and Inference. Cambridge University Press, Cambridge (2009)
Petrin, A., Train, K.: Omitted product attributes in discrete choice models, Working paper,
University of Chicago and University of California, Berkeley (2002)
Petrin, A., Train, K.: A control function approach to endogeneity in consumer choice models. J.
Mark. Res. 47, 3–13 (2010)
Rooderkerk, R.P., Van Heerde, H.J., Bijmolt, T.H.A.: Optimizing retail assortments. Mark. Sci. 32,
699–715 (2013)
Rossi, P.: Even the rich can make themselves poor: a critical examination of IV methods in
marketing applications. Mark. Sci. 33, 655–672 (2014)
Rutz, O.J., Bucklin, R.E., Sonnier, G.P.: A latent instrumental variables approach to modeling
keyword conversion in paid search advertising. J. Mark. Res. 49, 306–319 (2012)
Rutz, O.J., Trusov, M.: Zooming in on paid search ads—a consumer-level model calibrated on
aggregated data. Mark. Sci. 30, 789–800 (2011)
Sabnis, G., Grewal, R.: Cable news wars on the internet: competition and user-generated content.
Inf. Syst. Res. 26, 301–319 (2015)
Saboo, A.R., Grewal, R.: Stock market reactions to customer and competitor orientations: the case
of initial public offerings. Mark. Sci. 32, 70–88 (2012)
Sanderson, E., Windmeijer, F.: A weak instrument-test in linear IV models with multiple
endogenous variables. J. Econ. 190, 212–221 (2015)
Sethuraman, R., Tellis, G.J., Briesch, R.A.: How well does advertising work? Generalizations from
meta-analysis of brand advertising elasticities. J. Mark. Res. 48, 457–471 (2011)
Sonnier, G.P., McAlister, L., Rutz, O.J.: A dynamic model of the effect of online communications
on firm sales. Mark. Sci. 30, 702–716 (2011)
Srinivasan, R., Sridhar, S., Narayanan, S., Sihi, D.: Effects of opening and closing stores on chain
retailer performance. J. Retail. 89, 126–139 (2013)
Stock, J.H., Wright, J.H., Yogo, M.: A survey of weak instruments and weak identification in
generalized method of moments. J. Bus. Econ. Stat. 20, 518–529 (2002)
Terza, J.V., Basu, A., Rathouz, P.J.: Two-stage residual inclusion estimation: addressing endogene-
ity in health econometric modeling. J. Health Econ. 27, 531–543 (2008)
Van Dijk, A., Van Heerde, H.J., Leeflang, P.S.H., Wittink, D.R.: Similarity-based spatial methods
for estimating shelf space elasticities from correlational data. Quant. Mark. Econ. 2, 257–277 (2004)
Van Heerde, H.J., Dekimpe, M.G., Putsis, W.P.J.: Marketing models and the Lucas critique. J.
Mark. Res. 42, 15–21 (2005)
Van Heerde, H.J., Gijsbrechts, E., Pauwels, K.H.: Winners and losers in a major price war. J. Mark.
Res. 45, 499–518 (2008)
Van Heerde, H.J., Gijsenberg, M.J., Dekimpe, M.G., Steenkamp, J-B.E.M: Price and advertising
effectiveness over the business cycle. J. Mark. Res. 50, 177–193 (2013)
Verbeek, M.: A Guide to Modern Econometrics, 4th edn. Wiley, Hoboken (2012)
Villas-Boas, J.M., Winer, R.S.: Endogeneity in brand choice models. Manag. Sci. 45, 1324–1338 (1999)
Winer, R.S.: A reference price model of brand choice for frequently purchased products. J.
Consum. Res. 13, 250–256 (1986)
Wooldridge, J.M.: Econometric Analysis of Cross Section and Panel Data, 2nd edn. MIT,
Cambridge (2010)
Wooldridge, J.M.: Control function methods in applied econometrics. J. Hum. Resour. 50, 420–445 (2015)
Zhang, J., Wedel, M., Pieters, R.: Sales effects of attention to feature advertisements: a Bayesian
mediation analysis. J. Mark. Res. 46, 669–681 (2009)