The Effects of Adopting and Using a Brand's Mobile
Application on Customers' Subsequent Purchase Behavior☆
Su Jung Kim, Rebecca Jen-Hui Wang & Edward C. Malthouse
Iowa State University, 116 Hamilton Hall, Ames, IA 50011, USA
Northwestern University, 2001 Sheridan Road, 4th floor, Evanston, IL 60208, USA
Northwestern University, 1845 Sheridan Road, Evanston, IL 60208, USA
Mobile applications (apps) have become an important platform for brands to interact with customers, but few studies have tested their effects on
app adopters' subsequent brand purchase behavior. This paper investigates whether adopters' spending levels change after they use a brand's
app. Using a unique dataset from a coalition loyalty program with implementations of propensity score matching and difference-in-difference-in-
difference methods, we compare the spending levels of app adopters with those of non-adopters. Specifically, we examine whether the use of the
app's two main interactive features, information lookups and check-ins, influences adopters' spending levels. We find that app adoption and
continued use of the branded app increase future spending. Furthermore, customers who adopt both features show the highest increase. However,
we also observe a "recency effect": when customers discontinue using the app, their spending levels decrease. Our findings suggest that sticky
apps that attract continued use can be a persuasive marketing tool because they provide portable, convenient, and interactive engagement
opportunities, allowing customers to interact with the brand on a habitual basis. We recommend that firms prioritize launching a mobile app
to communicate with their customers, but they should also keep in mind that a poorly designed app, which customers abandon after only a few
uses, may in fact hurt their brand experience and company revenues.
© 2015 Direct Marketing Educational Foundation, Inc., dba Marketing EDGE.
Keywords: Mobile app; Log data; Location check-ins; Interactivity; Stickiness; Purchase behavior; Propensity score matching; Difference-in-difference-in-difference
The rapid adoption of smartphones and subsequent development of mobile applications ("app" or "apps" hereafter) have
been changing the ways in which customers interact with a brand.
According to comScore (2014, October 7), smartphone penetration
in the United States reached 72% as of August 2014. The
switch from feature phones to smartphones is happening globally,
and many parts of the world are embracing the adoption of
smartphones (Meena 2014, August 8). The rapid growth of
mobile technologies comes with the proliferation of various types
of apps. Over 110 billion apps were downloaded in 2013
(Ingraham 2013, October 22; Welch 2013, July 24), and the number
is expected to surpass 268 billion by 2017 (Fox 2013, Sep 19).
Mobile apps account for more than 50% of time spent on digital
media (Lipsman 2014, June 25), suggesting that apps have
deeply penetrated the daily lives of smartphone users.
Companies have welcomed mobile apps as an additional
communication channel to attract new customers and increase
brand loyalty among existing ones (Wang, Kim, and Malthouse
2016). They realize that customers use a variety of app features
to perform diverse tasks such as searching, retrieving, and
sharing information, passing time with entertainment content,
paying bills, and navigating maps. Therefore, companies have
started to use apps to increase brand awareness and enhance
brand experience. Given younger customers' high expectations
regarding a brand's savvy use of mobile technology (Annalect
☆ We acknowledge support from Northwestern University's IMC Spiegel Digital and Database Research Center and its Executive Director Tom Collinger. We thank the Air Miles Rewards Program for providing access to their data.
Journal of Interactive Marketing 31 (2015) 28–41
2015, January 28), it becomes increasingly important for
companies to offer branded apps that can provide a seamless,
convenient, and enjoyable customer experience.
Despite the growing interest in apps and their potential
marketing impact, there is a dearth of research on the use of
branded apps as a persuasive communication channel or loyalty
building platform that can influence future spending levels. This
paper addresses such shortcomings and examines the adoption
(i.e., downloading and logging into the app at least once) and uses
of two interactive features (i.e., information lookups and location
check-ins). We analyze the effect of app adoption on purchase
behavior by comparing the spending levels of adopters with those
of matched non-adopters, whose pre-adoption demographic and
behavioral characteristics are similar. We also investigate the
impact of using different app features on purchase behavior.
Lastly, we examine the impact of repeated and discontinued use of
a branded app. Most studies have examined the positive outcomes
of adopting and/or using apps, but we know little about how
customers' spending levels change if they cease using the apps. About
20% of apps are used only once after being downloaded (Hosh
2014, June 11), and nearly half of customers report that they would
delete an app if they found a single bug in it (SmartBear 2014).
Thus, understanding how customers' purchase behavior changes
after abandoning a branded app has major marketing implications.
This paper offers two substantive contributions. First, by
linking customers' app use with their actual spending, this study
provides quantified evidence that shows how adoption of a
branded app impacts customers' spending at its firm. Second, it
discusses the mechanisms for how a branded app leads to an
increase in future spending, using the concepts of interactivity and
stickiness, which have been mostly discussed in the web context.
By applying these concepts to mobile apps, this study expands
our understanding of how the use of interactive technology
influences purchase behavior. Lastly, we offer an empirical
contribution by using improved measures for key variables—app
use and purchase behavior—taken from customers' app logs and
transaction data. Given the issues of using self-reported measures
of mobile phone use (Boase and Ling 2013) and purchase
intentions (van Noort, Voorveld, and van Reijmersdal 2012), our
study adds to the mobile marketing literature by incorporating
behavioral measures of app use and purchase histories.
Literature Review and Hypotheses
Values Created by Using a Mobile App
A growing body of literature has identified distinctive
characteristics of mobile devices and discussed their implications.
Perceived ubiquity is one of the most important aspects of mobile
services (Balasubramanian, Peterson, and Jarvenpaa 2002; Watson
et al. 2002). Okazaki and Mendez (2013) find that perceived
ubiquity is a multidimensional construct consisting of continuity
(i.e., “always on”), immediacy, portability, and searchability. In a
marketing context, mobile media are distinct from mass marketing
channels because the former allow for location-specific, wireless,
and portable communication (Shankar and Balasubramanian
2009). Similarly, Lariviere et al. (2013) contend that mobile
devices are unique in that they are portable, personal, networked,
textual/visual and converged. They explain that these unique
characteristics provide value to customers, including information
(e.g., searching for information), identity (e.g., expressing
personality), social (e.g., sharing experience or granting/gaining
social approval), entertainment and emotion (e.g., killing time
by streaming music/movies or playing games), convenience
(e.g., speedy multitasking such as paying bills), and monetary
value (e.g., getting discount or promotion offers).
Motivated by these types of value and convenience, customers
may initially adopt an app; however, what makes a branded app
powerful is its interactive nature, which allows them to experience
the brand by using its advanced features (Bellman et al. 2011;
Calder and Malthouse 2008; Mollen and Wilson 2010; Novak,
Hoffman, and Yung 2000; Yadav and Varadarajan 2005). Kim,
Lin, and Sung (2013) document interactive features that promote
the persuasive effectiveness of branded apps. Their content
analysis of 106 apps of global brands finds that most employ
attributes such as vividness, novelty, motivation, control,
customization, and feedback. Their findings suggest that, unlike
computer-based websites, branded apps provide "anytime,
anywhere" interactivity with navigation and control features that
customers can easily use in the mobile environment. Furthermore,
these apps also increase customers' enjoyment and willingness to
continue the relationship with the brands by giving them a sense of control.
Although there are numerous studies on mobile marketing,
few have specifically examined the financial effect of using branded
apps. One exception is a study by Bellman et al. (2011), which
conducts an experiment to see whether using popular branded
apps impacts brand attitude and purchase intention. They
demonstrate that using branded apps increases interest in the
brand and product category. They also find that apps with an
informational/user-centered style had larger effects on increasing
purchase intention than those with an experiential style because
the former focus on the users, not on the phone, thus encouraging
them to make personal connections with the brand.
Reflecting on a limited number of previous studies, we see
that the persuasive effectiveness of branded apps is attributable
to the rich user experience made possible by interacting with the
app and the brand. Branded apps allow customers to get easy
access to information, enjoy entertainment, receive customized
coupons, and experience the brand on the move. Also, the
interactive features deepen the customer–brand relationship and
serve as antecedents of positive attitude toward the brand, purchase
intention, and ultimately purchase behavior. Based on the review
of existing literature, we propose the following hypothesis.
H1. The adoption of a branded app increases subsequent
spending with a brand.
Experiences of Using Interactive Features of a Mobile App
Research on human–computer interaction provides theoretical
explanations for how branded apps create awareness, attitudes,
intentions, and behavior. Previous studies have shown that there
are five mechanisms by which technology influences the process
of persuasion: (1) triggering cognitive heuristics, (2) enabling the
receiver to be the source, (3) creating greater user engagement
through interactivity, (4) constructing alternative realities, and
(5) affording easier access to information (Sundar et al. 2013).
Among these, greater user engagement and easier access to
information through interactive features seem highly relevant to
mobile technologies and merit further explanation.
The concept of interactivity has long been discussed since the
emergence of the Internet. After thoroughly reviewing previous
literature on this concept, Kiousis (2002) defines interactivity as
"the degree to which a communication technology can create a
mediated environment in which participants can communicate
(one-to-one, one-to-many, and many-to-many), both synchronously
and asynchronously, and participate in reciprocal message
exchanges (third-order dependency). For human users, it
additionally refers to the ability to perceive the experience as a
simulation of interpersonal communication and increase their
awareness of telepresence" (p. 372). In a mobile context, Lee
(2005) differentiates between mobile and online interactivity by
highlighting features only available on a mobile platform. In
addition to the four components of online interactivity (user control,
responsiveness, personalization, and connectedness), mobile
interactivity provides ubiquitous connectivity (i.e., mobility and
ubiquity) and contextual offers (i.e., personal identity and
localization). He finds that all the elements but personalization
increase customers' trust in mobile commerce, which in turn
improves attitudes toward, and behavioral intentions to engage
in, mobile commerce.
Gu, Oh, and Wang (2013) discuss the nature and benefits of
machine interactivity and person interactivity. Machine interactivity
refers to interactions between humans and the medium. It
provides users with access to dynamic information and the ability
to customize the information they receive. Previous studies show
that the different tools available to access embedded information,
such as hyperlinks and drag-and-zoom features, produce positive
attitudes toward the interface and message content (Sundar et al.
2010). Person interactivity, on the other hand, is defined as
interactions between humans through a medium. It offers a sense
of social connectedness and empowerment among users by
providing a channel for customers to communicate with the
brand and/or other customers. Gu, Oh, and Wang (2013) show
that person interactivity influences customers' perceived reliability
of a website, which in turn increases the financial performance of
the given brands. More specifically, interactivity increases the
perception of usefulness, ease of use, and enjoyment among those
who adopt the technology (Coursaris and Sung 2012). A recent
study on mobile location-based retail apps shows that interactivity
has a positive influence on affective involvement, which in turn
leads to downloading and using the apps (Kang, Mun, and …).
In this study, we focus on the use of two interactive features
that are available from a branded app of a coalition loyalty
company: information lookups and sponsor location check-ins.
Customers of the loyalty program can find information on their
loyalty point balances and transaction records. Additionally,
they can search for, and email the information about, reward
items that they are interested in. These features motivate the
customers to navigate the app, which provides brand informa-
tion according to their individual preferences through a high
level of machine interactivity. Sponsor location check-in is
another feature that enables customers to find nearby sponsor
storefronts using their phones and "check in" at the stores.
Customers can play a monthly "Check-In Challenge Game,"
make a list of the stores that they have visited, and/or share their
check-in locations on social media. This feature provides
entertainment and location-based information, and it instills
a sense of social connectedness by allowing virtual interactions
with other customers and/or brands (i.e., person interactivity).
Thus, we propose our second hypothesis:
H2. The more interactive features app adopters use, the higher
the increase in their subsequent spending.
Mobile App Stickiness: Repeated vs. Discontinued Use
Up to this point, we maintain that when customers adopt a
brand's app and use its interactive features, their subsequent
spending increases. Questions remain as to whether and how subsequent
spending changes when app-adopting customers develop a
habit of using it repeatedly. We use the concept of "stickiness"
from extant literature to posit how repeated use of an app
influences purchase behavior.
Website stickiness is defined as "the ability of web sites to draw
and retain customers" (Zott, Amit, and Donlevy 2000, p. 471)
and is usually measured as the length and/or frequency of website
visits. Stickiness is regarded as the level of users' value-expectation
when they visit a site. In other words, if website users decide that a
website meets their expectations and provides enjoyable experiences,
they are more likely to come back until the visiting behavior
becomes a routine. Li, Browne, and Wetherbe (2006) approach
stickiness from a relational perspective, arguing that users do not
perceive a website as separate from the organization that
provides it, but rather treat it as a representative of the organization. Thus,
the continued relationship with the website leads to commitment
and trust toward the website as well as the organization that
provides it. Previous studies support this relational approach:
they find a positive association between website stickiness and
relational concepts such as trust, commitment, and loyalty (Li,
Browne, and Wetherbe 2006), which positively influences
purchase intention (Lin 2007).
Furner, Racherla, and Babb (2014) apply website stickiness
to a mobile context and introduce the concept of mobile app
stickiness (MASS). According to their framework, the outcome
of mobile app stickiness can manifest in the forms of trust in
the provider, positive word-of-mouth behavior, and commer-
cial transactions on the app. If customers find that a branded
app provides a novel brand experience that has not been
possible through other media channels or it fulfills customers'
informational needs (e.g., product reviews, store locations,
coupons) or entertainment needs (e.g., games, check-ins), this
will build up trust in the value of the app and the provider,
which then increases the level of commitment and repeated
use. As suggested by the literature on website stickiness, we
expect that stickier apps will eventually lead to an increase in
purchase behavior. We use continued patronage as an
indicator of app stickiness in this study, following the
approach in website stickiness research.
In contrast, if consumers find the app useless or irrelevant
(i.e., "less sticky"), they are likely to form a negative
attitude toward the brand for its lack of understanding of its
customers' needs. Hollebeek and Chen (2014) discuss how
consumers' perception of brands' actions, quality/performance,
innovativeness, responsiveness, and delivery of promise can
create both positively and negatively valenced brand engagement.
We adopt their conceptual model and predict that a branded app
that fails to meet consumers' expectations will create negative
brand attitudes, which will result in a decrease in purchase
intention or actual purchase behavior.
In sum, we expect that downloading and using a well-
designed branded app will improve the customers' attitude
toward, and increase the purchase of, the brand's products or
services. On the contrary, if the app does not satisfy their needs,
they may abandon it, develop negative attitudes toward the
brand, and even decrease future purchases. Thus, we posit:
H3. Repeated use of the app increases subsequent spending.
H4. Discontinuing the use of the app decreases subsequent spending.

Data

We have data from Air Miles Reward Program (AMRP) in
Canada, which has been operating since 1992 and is one of the
largest coalition loyalty programs in the world. Customers of
the loyalty program earn reward points for purchasing at
partnering sponsors across various categories, such as grocer-
ies, gas, banking, automobile repairs, and other types of stores.
They can redeem their points for various types of rewards,
including merchandise, gift cards, and flight tickets.
In 2012, AMRP launched a mobile app on Android and iOS.
After logging in, customers can check point balances, transaction
histories and reward items as shown in Fig. 1a. Besides looking
up information, one notable feature is sponsor location check-in,
which allows customers to find nearby sponsors (e.g., gas,
grocery, or toy stores) with GPS-enabled mobile devices and
"check in" at the stores (Fig. 1b). App users can play the
"Check-In Challenge," a game that picks the 50 customers with the
most check-ins each month and rewards them with double points
(Fig. 1c). The check-in feature is similar to other location-based
apps such as Foursquare (Frith 2014).
On August 30, 2012, AMRP released an app update and
advertised two particular features—balance checking and
location check-in. Using this release date to separate the
customers' point accruals before and after app adoptions, we
analyze 10,776 app adopters, who downloaded and logged in
with the app in September 2012. Besides their app login and
location check-in histories, we also have their point accrual
histories six months prior (from March 2012 to August 2012)
and afterwards (from September 2012 to February 2013). The
company also provided a control group of another 10,766
customers who had never adopted the app during or prior to
February 2013. To reduce potential selection bias, we match each
app adopter with a non-adopter using demographics and pre-
period point accruals, which serve as a proxy for measuring their
spending behavior. This unique dataset and quasi-experimental
design allow us to estimate the effects of adopting and using the
branded app on customers' spending levels.
Propensity Score Model and Matching
As with all observational studies, our research may be subject
to selection biases because customers are not randomly assigned
to app adoption. Therefore, adopters and non-adopters may have
preexisting differences that also influence their post-period point
accruals, which is our dependent variable of interest and a proxy
that measures their spending behavior. Given the potential
confounds, we employ propensity score matching, which is one
Fig. 1. Screenshots of the brand's mobile app.
of the most widely used methods to reduce selection bias
(Rosenbaum and Rubin 1985; Rubin 2007; Stuart 2010).
Rosenbaum and Rubin (1985) define matched sampling as a
method for selecting untreated units to form a control group
“that is similar to a treated group with respect to the distribution
of observed covariates.”They show that, conditional on those
observed variables, treatment in the matched sample is, in effect,
randomized, thus reducing selection bias. However, matching on
every characteristic becomes challenging when the dataset has
many dimensions. To overcome the curse of dimensionality,
extant literature suggests calculating, and matching on, the
propensity scores (Rosenbaum and Rubin 1983). We first
estimate a propensity score model relating customers' propensity
to become app users with their characteristics. We then calculate
the estimated propensity for each user and select a subset of the
control customers that have similar estimated values.
We now specify the propensity score model that estimates
the probability, or propensity, of app adoption. Denoted by P_i,
it is defined as:

P_i ≡ Pr[Adopt_i = 1 | d_i, ln(v_i + 1)],   (1)

where Adopt_i is a binary variable that indicates whether
customer i adopts the mobile app in September 2012; vector d_i
contains demographics such as age and gender; and vector v_i
includes behavioral characteristics exhibited during the six
months prior to app adoption, i.e., total points accrued in each
month from March through August 2012, total points accrued
by spending on banking, retail, food, gas, and other miscellaneous
charges, total number of sponsors visited, number of
points used for reward redemptions, and the customer's tenure with
the program in days. All numerical behavioral variables are
log-transformed to symmetrize the distributions and reduce the
influence of extreme observations. We use binary logistic
regression to estimate P_i.
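As a sketch of this step (not the authors' code), the binary logit in Eq. (1) can be fit by Newton-Raphson on simulated data; the covariate names, coefficients, and sample size below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Hypothetical covariates: one demographic dummy d_i and one
# log-transformed behavioral variable, ln(v_i + 1).
d = rng.integers(0, 2, n).astype(float)        # e.g., "is aged 18-34"
ln_v = np.log1p(rng.poisson(12.0, n))          # e.g., ln(points in Aug + 1)
X = np.column_stack([np.ones(n), d, ln_v])     # add intercept column

true_beta = np.array([-2.0, 0.8, 0.1])         # illustrative coefficients
p_true = 1.0 / (1.0 + np.exp(-X @ true_beta))
adopt = (rng.random(n) < p_true).astype(float) # observed adoption indicator

# Fit the binary logit by Newton-Raphson (iteratively reweighted least squares).
beta = np.zeros(3)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    W = p * (1.0 - p)
    grad = X.T @ (adopt - p)
    hess = (X * W[:, None]).T @ X
    beta += np.linalg.solve(hess, grad)

pscore = 1.0 / (1.0 + np.exp(-X @ beta))       # estimated propensity scores
```

In practice this would be run on the full covariate vectors d_i and ln(v_i + 1) described above rather than two simulated columns.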
Having estimated the propensity score model, we compute the
estimated probability of adopting the app, i.e., the propensity score P̂_i,
for each customer. We then employ 1:1 matching (1 control to 1
app adopter) using the nearest-neighbor matching algorithm with
replacement. We evaluate whether covariate balance improves by
calculating the normalized differences (NDs) in means for all
covariates and comparing the two sets of NDs before and after
matching (Imbens and Rubin 2015). The ND is defined as:

ND_j = (x̄_jt − x̄_jc) / √((s²_jt + s²_jc) / 2),

where ND_j is the normalized difference in means for covariate j, x̄_jt
is the mean of the covariate for the treated units, x̄_jc is the mean of
the covariate for the control units, s²_jt is the variance of covariate
j for the treated units, and s²_jc is the variance of covariate j for
the control units. Table 1 shows that before matching, nine out of
twenty-one NDs are greater than 0.10. After matching, the NDs
range from 0.0004 to 0.0282. Except for three variables,
ln(Points Accrued in March + 1), ln(Points Accrued in July + 1),
and ln(Points Accrued from Banking + 1), which already had
very small NDs to begin with, all other covariates' balance
improved after matching.
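A minimal sketch of 1:1 nearest-neighbor matching with replacement and the ND balance diagnostic, on simulated propensity scores (illustrative data and sizes, not AMRP's):

```python
import numpy as np

def normalized_diff(x_t, x_c):
    # ND_j = (mean_t - mean_c) / sqrt((var_t + var_c) / 2)
    return (x_t.mean() - x_c.mean()) / np.sqrt(
        (x_t.var(ddof=1) + x_c.var(ddof=1)) / 2.0)

def nn_match_with_replacement(ps_treated, ps_control):
    # For each treated unit, pick the control with the closest propensity
    # score; controls can be reused (matching "with replacement").
    return np.abs(ps_treated[:, None] - ps_control[None, :]).argmin(axis=1)

rng = np.random.default_rng(1)
ps_t = rng.beta(4, 2, 300)      # hypothetical adopter propensity scores
ps_c = rng.beta(2, 4, 300)      # hypothetical non-adopter scores
cov_t = 2.0 + ps_t + rng.normal(0, 0.5, 300)  # covariate correlated with ps
cov_c = 2.0 + ps_c + rng.normal(0, 0.5, 300)

match_idx = nn_match_with_replacement(ps_t, ps_c)
nd_before = normalized_diff(cov_t, cov_c)
nd_after = normalized_diff(cov_t, cov_c[match_idx])  # duplicates keep 1:1 ratio
```

Because matching is with replacement, `match_idx` may repeat control indices; duplicating those records mirrors the paper's handling of reused controls.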
Whether matched sampling should be performed with or
without replacement depends on the original dataset. Matched
sampling with replacement yields better matches because any
controls that are already matched with treated units remain
available for the next matching iteration. Obtaining closer
matches is especially important when the original reservoir of
control units is small, as in our study. Our final matched sample
consists of all app adopters (n = 10,776) and 5,127 distinct non-
adopter customers. In the following regression models that test
our hypotheses, if a control customer is matched with two (or
more) mobile adopters, his records are duplicated in order to
maintain the one-to-one ratio. After obtaining a matched sample
and a propensity score for each of its customers, we include the
score as a covariate in the regressions that test our hypotheses.
Effects of App Adoption by Feature Usage (H1 and H2)
Having obtained the propensity scores and a matched
sample, we proceed with a difference-in-difference-in-difference
(DDD) regression, which is an extension of the difference-in-
difference (DD) model that is used when there is only one
treatment of interest. The motivation behind a DD model is to
control for any a priori individual differences between treated and
control units. Even though we already account for observed
characteristics using propensity score matching, the adopters and
non-adopters may still have unobserved differences. By assuming
that those unobserved characteristics for both the treated and the
control groups remain unchanged before and after the treatment
shock (i.e., app's update release), a DD model accounts for the
groups' fixed effects and seasonality to derive the treatment effect
by comparing the before-and-after changes in the dependent
variable of the treated group with that of the control group. A
DDD model extends a DD model, and it is used when there are
two treatments present instead of one. In our study, we are
interested in the effect of adopting two app features, namely,
information lookup and sponsor check-ins. We develop a
balanced, monthly-level, cross-sectional panel from the cus-
tomers' purchase histories and demographic profiles. The time
horizon includes both the pre- (March 2012 to August 2012) and
post-adoption periods (September 2012 through February 2013).
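The panel-construction step can be sketched as follows, assuming hypothetical wide-format accrual data; the field names and sizes are illustrative:

```python
import numpy as np

# Hypothetical wide-format data: 12 monthly point accruals per customer,
# Mar 2012 - Feb 2013 (6 pre- and 6 post-adoption months).
rng = np.random.default_rng(2)
n_cust, months = 4, 12
accruals = rng.poisson(10, size=(n_cust, months))
looks_up = np.array([0, 1, 1, 0])    # L_i: uses information lookups
checks_in = np.array([0, 0, 1, 1])   # C_i: uses sponsor check-ins

# Long-format rows: one observation per customer-month, with six
# post-period dummies T_t (all zero in the pre-period months).
panel = []
for i in range(n_cust):
    for t in range(months):
        T = [0] * 6
        if t >= 6:                   # Sep 2012 onward is post-adoption
            T[t - 6] = 1
        panel.append({
            "customer": i, "month": t,
            "ln_y": float(np.log1p(accruals[i, t])),  # ln(y_it + 1)
            "L": int(looks_up[i]), "C": int(checks_in[i]),
            "LC": int(looks_up[i] * checks_in[i]),
            "T": T,
        })
```

Each customer contributes exactly 12 rows, giving the balanced panel the model requires.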
The DDD model is specified as:

ln(y_it + 1) = α_1 + α_1i + α_1p + β_1′(L_i T_t) + β_2′(C_i T_t) + β_3′(L_i C_i T_t) + β_4 L_i + β_5 C_i + β_6 L_i C_i + β_7′ T_t + β_8′ d_i + β_9′ ln(ṽ_i + 1) + β_10 P̂_i + ε_it,
Readers can find the rationale for choosing propensity score matching over
the two-stage-least-squares (2SLS) methods with instrumental variables in
Extant literature (e.g., Imbens and Rubin 2015, p. 277) suggests that, as a
general rule of thumb, NDs should be close to zero, and if they are greater than
0.25, regression analyses may become unreliable.
where y_it is the number of points accrued (1 point is earned by
spending approximately $30 CAD) by customer i in month t, and
α_1 is the fixed intercept. We classify each customer in the
matched sample into one of four categories: non-adopters,
adopters who only do information lookups with the app, adopters
who only do sponsor check-ins, and adopters who use both
features. The coefficients of L_i and C_i account for the group
fixed effects of being a customer who looks up information or one who
checks in. If a customer does both, his fixed effect, which is
assumed to be constant before and after the app update release, is
captured by the coefficient of L_i C_i in addition to those of the main
effects. In order to both control for seasonality and
observe how the influence of app use may vary over time, we use
vector T_t, which contains six binary variables indicating whether points
have been accrued during a given month from September 2012
through February 2013 (e.g., the first element of T_t equals 1 for
t = September 2012). To test H1 and H2, therefore, we observe the
signs and magnitudes of vectors β_1, β_2, and β_3, whose estimates
give us the percent changes in y_it after each type of app adoption.
Finally, we control for any observed heterogeneity,
including customers' demographics (d_i); pre-adoption behavior,
ln(ṽ_i + 1), which contains the total number of sponsors visited, the number of
points used for reward redemptions, and the customer's tenure with the
program in days; and the propensity score (P̂_i). We also control for
customers' unobserved heterogeneity at both the matched-pair
level, α_1p, and at the individual level, α_1i. Because we have done
propensity score matching, i.e., each adopter is paired with a
non-adopter, we have a multi-level structure in which individuals
within a matched pair are correlated, and observations within an
individual are also correlated. Thus, unobserved heterogeneity is
accounted for both within a matched pair (α_1p) and at the individual
level (α_1i). Each of α_1p and α_1i is assumed to be i.i.d. normally
distributed with mean 0 and variances σ²_p and σ²_i, respectively.
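The DDD logic can be sketched with a simplified pooled OLS on simulated data: a single post-period dummy stands in for the six monthly dummies T_t, and the random effects α_1i and α_1p are omitted. All magnitudes and names are illustrative, not the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(3)
n, T = 400, 12                      # customers x months (6 pre, 6 post)
L = rng.integers(0, 2, n)           # lookup adopter indicator
C = rng.integers(0, 2, n)           # check-in adopter indicator

rows, y = [], []
for i in range(n):
    base = rng.normal(2.5, 0.5)     # customer-level heterogeneity
    for t in range(T):
        post = 1.0 if t >= 6 else 0.0
        # Simulated treatment effects: lookups +0.10, check-ins +0.15,
        # both features an extra +0.10 (illustrative magnitudes only).
        mu = base + post * (0.10 * L[i] + 0.15 * C[i] + 0.10 * L[i] * C[i])
        y.append(mu + rng.normal(0, 0.3))
        # Design row: intercept, group mains, post, and DDD interactions.
        rows.append([1.0, L[i], C[i], L[i] * C[i], post,
                     post * L[i], post * C[i], post * L[i] * C[i]])

X = np.array(rows)
beta, *_ = np.linalg.lstsq(X, np.array(y), rcond=None)
# beta[5:8] are the DDD-style interaction estimates for L, C, and L*C.
```

The interaction coefficients recover the simulated effects because the customer-level baseline cancels out of the pre/post group contrasts, which is the point of the DD/DDD design.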
Quantifying the Effect of App Stickiness and Disengagement
(H3 and H4)
The aforementioned DDD model allows us to assess spending
changes after adopting the app. Now we proceed with examining
the effect of repeated use as posited in H3, which is tested with
It has been shown that in the DD framework, an estimation implementation that
uses a stack of observations with saturated explanatory variables (i.e., treatment,
time, and their interaction) is equivalent to taking the paired difference (Angrist
and Pischke 2009, pp. 233–234). Taking the paired difference has the
disadvantage of not being able to easily control for fixed effects due to
demographics and other observable heterogeneity. Thus, in our model, the
dependent variable of interest is the log-transformed points accrued in a given
month, rather than a difference.
Similar to a traditional DD model, our model uses observations in any given
month prior to the app update release on August 30, 2012 as the baseline.
Pre-adoption point accruals are excluded because they are the dependent
variable in the model, per the DDD framework.
The descriptive statistics and the correlation matrix of focal variables are
available from the corresponding author upon request.
Descriptive statistics of propensity score model variables, before and after matching.
App-adopter Non-adopter Normalized difference In means
Before matching After matching Before matching After matching
Mean Std. dev. Mean Std. dev. Mean Std. dev.
Is aged 18 to 34 0.4410 0.4965 0.1814 0.3854 0.4408 0.4965 0.5841 0.0004
Is aged 35 to 44 0.2105 0.4077 0.1561 0.3630 0.2124 0.4090 0.1409 0.0047
Is aged 45 to 54 0.1281 0.3342 0.1853 0.3886 0.1334 0.3400 0.1578 0.0157
Is aged 55 to 64 0.0684 0.2524 0.1566 0.3634 0.0658 0.2479 0.2819 0.0104
Is aged 65 plus 0.0282 0.1656 0.1425 0.3496 0.0295 0.1692 0.4179 0.0078
Is female 0.3389 0.4734 0.3988 0.4897 0.3433 0.4748 0.1244 0.0093
Is male 0.3299 0.4702 0.2717 0.4449 0.341 0.4741 0.1272 0.0235
Pre-adoption purchase behavior
ln(Points Accrued in Mar + 1) 2.2653 1.71 2.2592 1.7035 2.2532 1.6913 0.0036 0.0071
ln(Points Accrued in Apr + 1) 2.2771 1.6952 2.3183 1.7014 2.2721 1.6884 0.0243 0.0030
ln(Points Accrued in May + 1) 2.5226 1.7533 2.4971 1.7463 2.5167 1.7493 0.0146 0.0034
ln(Points Accrued in Jun + 1) 2.5408 1.7597 2.5664 1.7562 2.5511 1.7612 0.0146 0.0059
ln(Points Accrued in Jul + 1) 2.4080 1.6857 2.4277 1.6916 2.3772 1.6978 0.0117 0.0182
ln(Points Accrued in Aug + 1) 2.4853 1.7151 2.4050 1.7268 2.4724 1.7178 0.0467 0.0075
ln(Points Accrued from Grocery + 1) 2.8969 1.9744 3.0000 2.0102 2.8811 1.9688 0.0517 0.0080
ln(Points Accrued from Retail + 1) 1.1762 1.5215 1.2067 1.4980 1.2049 1.5148 0.0202 0.0189
ln(Points Accrued from Gas + 1) 1.6599 1.8171 1.3167 1.7125 1.6548 1.7940 0.1944 0.0028
ln(Points Accrued from Banking + 1) 1.2967 2.3615 1.2906 2.3640 1.2808 2.3805 0.0026 0.0067
ln(Points Accrued from Other + 1) 1.0846 1.6352 0.9827 1.5916 1.0665 1.6013 0.0632 0.0112
ln(Number of Sponsors Visited + 1) 1.4177 0.5345 1.3791 0.5137 1.4208 0.5069 0.0736 0.0060
ln(Redeemed Points + 1) 0.4485 1.7383 0.5687 1.9397 0.4391 1.7123 0.0653 0.0054
ln(Tenure in Days + 1) 7.7595 0.9022 8.0336 0.8936 7.7331 0.9715 0.3053 0.0282
Note. After matching, 10,776 mobile app adopters are paired with 10,776 non-adopters as a control group with replacement (5,127 distinct non-adopters).
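The normalized differences in the table can be reproduced from the reported means and standard deviations. A minimal sketch, assuming the common definition |m_t − m_c| / sqrt((s_t² + s_c²)/2):

```python
import math

def normalized_diff(mean_t, sd_t, mean_c, sd_c):
    """Normalized difference in means between treated and control groups:
    |mean_t - mean_c| / sqrt((sd_t^2 + sd_c^2) / 2)."""
    return abs(mean_t - mean_c) / math.sqrt((sd_t ** 2 + sd_c ** 2) / 2)

# "Is aged 18 to 34": adopters vs. non-adopters before matching...
before = normalized_diff(0.4410, 0.4965, 0.1814, 0.3854)
# ...and vs. the matched control group after matching
after = normalized_diff(0.4410, 0.4965, 0.4408, 0.4965)
print(round(before, 4), round(after, 4))  # 0.5841 0.0004, matching the table row
```

The sharp drop from 0.5841 to 0.0004 is exactly what the matching step is meant to achieve: near-identical covariate distributions across the two groups.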
S.J. Kim et al. / Journal of Interactive Marketing 31 (2015) 28–41
the log–log model depicted below. Using data from the app
adopters and their matched non-adopters, we regress each
customer's weekly spending from September 2012 to February
2013 on the causal variables that represent repeated use and
discontinued use. We specify our model as follows:

ln(y_iw + 1) = α_2 + a_2i + a_2p
    + γ_1 ln(Cumulative_Look_{i,w−1} + 1) + γ_2 ln(Cumulative_Check_{i,w−1} + 1)
    + γ_3 Is_Before_First_Look_iw + γ_4 Is_Before_First_Check_iw
    + γ_5 ln(Look_Recency_iw + 1) + γ_6 ln(Check_Recency_iw + 1)
    + γ_7 ln(Look_Frequency_iw + 1) + γ_8 ln(Check_Frequency_iw + 1)
    + γ_9 (L_iw × Age_i) + γ_10 (C_iw × Age_i)
    + γ_11 (L_iw × Gender_i) + γ_12 (C_iw × Gender_i)
    + γ_13 Is_iOS_User_i + γ_14 Is_Android_User_i + γ_15 d_i + ε_iw
Here, y_iw is the number of points accrued in week w and, as we
have already mentioned, is a close approximation of the customer's
spent dollars; α_2 is the fixed intercept. We represent repeated
app use with two variables: the cumulative number of information
lookups (Cumulative_Look_{i,w−1}) and the cumulative number of
sponsor check-ins (Cumulative_Check_{i,w−1})
by the end of the previous week, w−1. To verify H3, we observe
the signs and magnitudes of γ_1 and γ_2.
H4 posits that discontinued use of the app will decrease
future purchases. We operationalize discontinued use with
recency measures of lookups (Look_Recency_iw) and check-ins
(Check_Recency_iw), recording the days since the last app use for a
given feature by the week's end. Prior to customers' app feature
adoptions, their recency measures are set to 0 days, and binary
variables, Is_Before_First_Look_iw and Is_Before_First_Check_iw,
are set to 1, indicating that they have yet to use the app feature
(Is_Before_First_Look_iw for not having adopted information
lookups yet, and Is_Before_First_Check_iw for not having adopted
check-ins). After app adopters' first uses of an app feature, their
corresponding binary variables are set to 0. Note that smaller
values of recency correspond to more recent events. Thus, we
expect negative estimates of γ_5 and γ_6.
Besides capturing the causal variables of interest, namely, the
effect of repeated use and discontinued use of the app, we control
for other behavioral variables and interaction terms. Since app
use within the week might influence spending, y_iw, we include
the number of information lookups a customer did during the
week (Look_Frequency_iw) and the number of sponsor check-ins
(Check_Frequency_iw). We also include two-way interaction
terms, L_iw × Age_i and C_iw × Age_i, indicating whether customer
i belongs to a particular age group and whether he does
information lookups or sponsor check-ins during week w,
respectively. As with the propensity score model, we use
customers with unknown age as baseline, and categorize the
rest of the customers into one of five age groups (aged 18–34;
35–44; 45–54; 55–64; and 65 and older). Similarly, we include
two-way interaction terms for gender, L_iw × Gender_i and
C_iw × Gender_i, to indicate whether the customer belongs to a gender
group and whether he does information lookups or sponsor check-
ins during week w. Again, we use customers with unknown gender
as baseline, and categorize the rest as either male or female. We
also control for the platform that operates the app since conven-
tional wisdom suggests that iOS users may be more affluent than
Android users, and thus customers may have different spending
levels depending on which mobile platforms they use. As with age
and gender, we use unknown platform as the baseline, and denote
Is_iOS_User_i as the binary variable that indicates whether we
observe customer i being an iOS user. Similarly, Is_Android_User_i
indicates whether the customer used Android anytime during our
study period. Finally, we control for weekly seasonality for each of
the 25 weeks, as well as customers' observed heterogeneity (d_i)
and unobserved heterogeneity (a_2i and a_2p).
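The covariate construction described above can be made concrete with a short sketch. This is a hypothetical data layout, not the authors' code: it derives the lookup-side covariates (cumulative use through the previous week, the before-first-use flag, and recency, approximated in whole weeks) from one customer's weekly lookup counts; the check-in covariates would be built identically.

```python
import math

def lookup_covariates(weekly_lookups):
    """weekly_lookups: lookup counts for weeks 1..W for one customer.
    Returns, per week w: ln(Cumulative_Look_{w-1} + 1), Is_Before_First_Look_w,
    and ln(Look_Recency_w + 1). Recency is fixed at 0 before first use and
    approximated here in 7-day steps afterwards."""
    rows = []
    cum_prev = 0      # lookups accumulated through week w-1
    last_use = None   # week of the most recent lookup
    for w, n in enumerate(weekly_lookups, start=1):
        if last_use is None and n == 0:
            before_first, recency_days = 1, 0    # feature not yet adopted
        else:
            before_first = 0
            last = w if n > 0 else last_use
            recency_days = 7 * (w - last)        # days since the last-use week
        rows.append({
            "week": w,
            "ln_cum_look_prev": math.log(cum_prev + 1),
            "is_before_first_look": before_first,
            "ln_look_recency": math.log(recency_days + 1),
        })
        cum_prev += n
        if n > 0:
            last_use = w
    return rows

rows = lookup_covariates([0, 2, 0, 1])
print([r["is_before_first_look"] for r in rows])  # [1, 0, 0, 0]
```

The flag switches off permanently at the week of first use, and recency resets to zero whenever the feature is used again, mirroring the variable definitions in the model.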
Propensity Score Model
First, we estimate the propensity score model, which predicts
the likelihood of a customer adopting the app. Table 2 shows the
estimates. Younger customers are more likely to adopt than older
customers, and the oldest customers are the least likely to adopt.
Males are more likely to adopt than females. Customers with
higher point accruals in August just before the adoption (β̂ = 0.078,
p < .001) are more likely to adopt the app. Those with point
accruals due to spending on gas (β̂ = 0.09, p < .001) and
miscellaneous charges (β̂ = 0.02, p < .05) are also more likely to
adopt. Customers who used more points to redeem for rewards
are less likely to adopt (β̂ = −0.026, p < .01), suggesting that
after a large reward redemption, they are less likely to use the app
to learn more about the loyalty program or to check their point
balances. After the propensity score calculations, we match up
each app adopter with a control customer, i.e., a non-adopter.
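The matching step can be sketched as nearest-neighbor matching with replacement on the estimated propensity scores. The scores below are made up for illustration; in the paper they come from the logistic model in Table 2.

```python
def match_with_replacement(treated, control):
    """treated, control: dicts mapping customer id -> propensity score.
    Pairs each treated unit with the control unit whose score is closest.
    Matching is with replacement, so one control can serve several adopters."""
    return {t: min(control, key=lambda c: abs(control[c] - score))
            for t, score in treated.items()}

adopters = {"A1": 0.81, "A2": 0.79, "A3": 0.30}
non_adopters = {"N1": 0.80, "N2": 0.33, "N3": 0.10}
print(match_with_replacement(adopters, non_adopters))
# {'A1': 'N1', 'A2': 'N1', 'A3': 'N2'} -- N1 is reused, i.e., "with replacement"
```

Reuse of controls is why the study reports 10,776 matched pairs but only 5,127 distinct non-adopters.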
Effects on Point Accruals After App Adoption (H1 and H2)
After obtaining the propensity scores and a matched sample,
we begin our regression analyses to test our hypotheses. We use
a random intercept DDD model that incorporates both observed
heterogeneity and unobserved heterogeneity. In addition to the
DDD model specified in Eq. (3), as a robustness check, we also
run an alternative DDD model, where instead of T, we denote
time-fixed effect before and after the app update release with a
binary variable t, which is set to 1 if the given month is after the
app update release, and 0 otherwise, while also controlling for
seasonality month-by-month.
The correlation table of main variables suggests that our independent variables
of interest regarding H3 and H4 may be highly correlated. To ensure that our
reported results for H3 and H4 are not influenced by potential multicollinearity,
we conduct four additional sets of random effect models as
robustness checks: (1) the first alternative model examines the effect of
cumulative use of lookups and check-ins while excluding all of the recency
variables, (2) the second alternative model examines the effect of app use
recency while excluding the effects of cumulative app use, (3) the third
alternative model examines the effect of discontinued lookups without any of
the check-in variables, and (4) the fourth alternative model examines the effect
of discontinued check-ins without any of the lookup variables. The results of
these four alternative models remain similar to those from our original model,
and in some cases their standard errors are smaller, suggesting even higher
levels of significance. In sum, our findings pertaining to H3 and H4 remain
unchanged after having conducted numerous robustness checks.
Thus, Table 3 presents two sets of estimate
results—the overall effect on monthly spending after adoption,
and the effect broken down by month from September 2012 to
February 2013 per specifications in Eq. (3). The latter reveals
whether and by how much the effect of app adoption persists.
Any combination of feature use leads to an increase in
subsequent point accruals (and therefore spending), which
confirms H1. Overall, being an adopter who does information
lookups leads to an exp(0.213) − 1 = 24% increase (β̂ = 0.213,
p < .001) in post-adoption spending, while using sponsor
check-ins gives an effect of exp(0.177) − 1 = 19% (β̂ = 0.177,
p < .001). Thus, customers who use both features increase
their spending by exp(0.213 + 0.177) − 1 = 48% after adopting
the app. We therefore confirm H2—using different combinations
of app features leads to different effects on subsequent spending.
Results from the month-to-month model show how the effect
changes over time. The impact on subsequent spending is the
greatest immediately after the adoption: compared to the monthly
spending average in the pre-adoption period, in September, being
a user who does information lookups leads to an exp(0.314) −
1 = 37% increase (β̂ = 0.314, p < .001) in point accruals, while
using sponsor check-ins gives exp(0.280) − 1 = 32% (β̂ = 0.280,
p < .001). The effects attenuate afterwards. In October, being a
user of information lookups leads to a 23% (β̂ = 0.207, p < .001)
increase, while using sponsor check-ins yields 14% (β̂ = 0.133,
p < .001). By the end of our study period (February 2013), the
effects are 22% (β̂ = 0.199, p < .001) and 16% (β̂ = 0.150,
p < .01) for information lookups and sponsor check-ins, respec-
tively. The interaction effect from being a user of both features
is positive but insignificant for all months except for September,
right after the app adoption (β̂ = −0.122, p < .05). Therefore, for
customers that use both features, their September spending
increases by exp(0.314 + 0.280 − 0.122) − 1 = 60%, and their
February spending increases by exp(0.199 + 0.150) − 1 = 42%.
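Because the dependent variable is log-transformed, a dummy coefficient β translates into a percentage change of exp(β) − 1, with coefficients summed when several dummies switch on together. The reported figures can be verified in a few lines (a sketch of the arithmetic only, using the Table 3 coefficients):

```python
import math

def pct_change(*betas):
    """Percentage change implied by log-linear dummy coefficients,
    summed when several dummies switch on together."""
    return round((math.exp(sum(betas)) - 1) * 100)

print(pct_change(0.213))                 # 24 -> overall lookup effect
print(pct_change(0.177))                 # 19 -> overall check-in effect
print(pct_change(0.213, 0.177))          # 48 -> both features, overall
print(pct_change(0.314, 0.280, -0.122))  # 60 -> both features, September
print(pct_change(0.199, 0.150))          # 42 -> both features, February
```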
Without complex econometrics, Fig. 2 presents model-free
line charts that show how different types of app users increase
their point accruals over time. As with the DDD model, we
classify all app adopters into one of three types: adopters who
only do information lookups with the app during the post-
adoption period of September 2012 to February 2013, adopters
who only do check-ins, and adopters who use both features. The
charts also show point accruals of each type's matched control
group, i.e., non-adopters. The gap between the lines of adopters
and non-adopters shows a moderate increase in September for
customers who do information lookups and a sharp increase
for customers who use both features, suggesting that the
more features the adopters use, the more they outspend the
matched non-adopters.
Effects of Repeated App Use on Point Accruals (H3)
Confirming H1 and H2 establishes that using the branded app
is associated with an increase of 19%–48% in point accruals (and
therefore spending) over the course of six months after adoption.
Using a separate model of a balanced weekly panel, constructed
from customers' purchase histories and demographic profiles, we
now seek to determine the effect of repeated use on spending (H3).
We operationalize repeated app use with two variables of interest:
the cumulative numbers of information lookups and sponsor
check-ins a customer has done by the end of the previous week. As
shown in Table 4, both variables have positive and significant
influence on point accruals. If customers increase their cumulative
information lookups by 10%, their weekly point accruals increase
by (1.10^0.0182) − 1 = 0.17% (β̂ = 0.0182, p < .001), and if they
increase their cumulative check-ins by 10%, their weekly point
accruals increase by (1.10^0.0181) − 1 = 0.17% (β̂ = 0.0181,
p < .01). Based on the matched sample, which spans from
September 2012 to February 2013, on average, customers' weekly
point accruals are 11.9 points, and their cumulative information
lookups and check-ins are 1.7 and 1.5, respectively. Thus, by increasing
their cumulative information lookups by 1 unit, their weekly point
We also performed sensitivity analyses according to Rosenbaum bounds
(Keele 2010; Rosenbaum 2005, 2010). Results show that for customers that
only do information lookups or check-ins, unobserved variables will have to
inﬂuence the odds of app adoption by 20% for the effect on point accruals to
become null. For customers that do both check-ins and information lookups,
they will have to change the odds of app adoption by 50% to nullify the effect.
As a robustness check, we estimate a third DDD model using the sample of
5,127 control customers and matching it to 5,127 app adopters. The ﬁndings
remain the same. See Appendix B for details.
Table 2. Propensity score model estimates.
Dependent variable Is becoming a user of the app
Estimate (Std. err.)
Intercept –0.4041 ⁎⁎ (0.1534)
Is aged unknown (Baseline)
Is aged 18–34 1.2291 ⁎⁎⁎ (0.0481)
Is aged 35–44 0.6428 ⁎⁎⁎ (0.0512)
Is aged 45–54 –0.0171 (0.0534)
Is aged 55–64 –0.4597 ⁎⁎⁎ (0.0599)
Is aged 65+ –1.2260 ⁎⁎⁎ (0.0749)
Is gender unknown (Baseline)
Is female –0.2091 ⁎⁎⁎ (0.0370)
Is male 0.1288 ⁎⁎⁎ (0.0387)
ln(Points Accrued in Mar + 1) –0.0006 (0.0145)
ln(Points Accrued in Apr + 1) –0.0207 (0.0153)
ln(Points Accrued in May + 1) –0.0034 (0.0152)
ln(Points Accrued in Jun + 1) –0.0515 ⁎⁎⁎ (0.0151)
ln(Points Accrued in Jul + 1) –0.0427 ⁎⁎ (0.0154)
ln(Points Accrued in Aug + 1) 0.0776 ⁎⁎⁎ (0.0145)
ln(Points Accrued from Grocery) 0.0166 (0.0125)
ln(Points Accrued from Retail) –0.0139 (0.0122)
ln(Points Accrued from Gas) 0.0934 ⁎⁎⁎ (0.0106)
ln(Points Accrued from Banking) 0.0272
ln(Points Accrued from Other) 0.0174 ⁎(0.0102)
ln(Number of Sponsors Visited + 1) 0.0786 (0.0488)
ln(Redeemed Points + 1) –0.0262 ⁎⁎ (0.0084)
ln(Tenure in Days + 1) –0.0162 (0.0185)
Note. 10,776 app adopters and 10,776 non-adopters before matching.
Standard errors are in parentheses. ⁎ p < .05. ⁎⁎ p < .01. ⁎⁎⁎ p < .001.
accruals increase by ((1 + 1.7)/1.7)^0.0182 − 1 = 0.85%, or 0.10 points,
which is equivalent to $3.0 CAD a week. If the customers increase
their check-ins by 1 unit, their weekly point accruals increase by
((1 + 1.5)/1.5)^0.0181 − 1 = 0.93%, or 0.11 points, which is equivalent to
$3.3 CAD. H3 is confirmed.
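In the log–log specification the coefficient is an elasticity, so one extra use from the sample average x̄ raises weekly accruals by ((x̄ + 1)/x̄)^β − 1. A quick sketch of the dollar conversion, using the averages above and the 1 point ≈ $30 CAD rate (function names are ours, for illustration):

```python
def one_unit_effect(avg, beta):
    """Pct. change in weekly point accruals from one extra use, given the
    log-log elasticity `beta` and the sample average `avg` of the regressor."""
    return ((avg + 1) / avg) ** beta - 1

AVG_WEEKLY_POINTS = 11.9   # sample average, Sep 2012 - Feb 2013
CAD_PER_POINT = 30.0       # 1 point ~ $30 CAD of spending

for label, avg, beta in [("lookup", 1.7, 0.0182), ("check-in", 1.5, 0.0181)]:
    lift = one_unit_effect(avg, beta)
    extra_points = AVG_WEEKLY_POINTS * lift
    # reproduces the reported 0.85% (~$3.0) and 0.93% (~$3.3) per week
    print(f"+1 {label}: {lift:.2%} ≈ {extra_points:.2f} points "
          f"≈ ${extra_points * CAD_PER_POINT:.2f} CAD/week")
```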
Effects of Discontinued Use on Point Accruals (H4)
H4 predicts that discontinuing the use of the app has a
negative influence on spending. We operationalize discontinued
use with recency, the number of days since using the feature.
Table 3. Estimates of focal parameters, DDD model (H1 and H2).
Dependent variable ln(Points Accrued in Month t+1)
Lookup Check-in Lookup × Check-in
Is Post-Adoption 0.2131 ⁎⁎⁎ (0.0128) 0.1770 ⁎⁎⁎ (0.0353) 0.0148 (0.0421)
Sep 0.3140 ⁎⁎⁎ (0.0169) 0.2804 ⁎⁎⁎ (0.0457) –0.1218 ⁎(0.0541)
Oct 0.2069 ⁎⁎⁎ (0.0175) 0.1329 ⁎⁎ (0.0488) 0.0745 (0.0569)
Nov 0.2189 ⁎⁎⁎ (0.0188) 0.1443 ⁎⁎ (0.0528) 0.0754 (0.0614)
Dec 0.1805 ⁎⁎⁎ (0.0195) 0.1546 ⁎⁎ (0.0536) 0.0075 (0.0626)
Jan 0.1594 ⁎⁎⁎ (0.0188) 0.2001 ⁎⁎⁎ (0.0507) 0.0187 (0.0602)
Feb 0.1987 ⁎⁎⁎ (0.0186) 0.1498 ⁎⁎ (0.0500) 0.0347 (0.0592)
Is aged 18 to 34 –1.1045 ⁎⁎⁎ (0.0641)
Is aged 35–44 –0.5035 ⁎⁎⁎ (0.0431)
Is aged 45–54 0.2314 ⁎⁎⁎ (0.0304)
Is aged 55–64 0.5321 ⁎⁎⁎ (0.0434)
Is aged 65 plus 0.9811 ⁎⁎⁎ (0.0694)
Is female 0.1604 ⁎⁎⁎ (0.0214)
Is male –0.0412 ⁎(0.0197)
Pre-adoption behavioral characteristics and propensity score
ln(Number of Sponsors Visited + 1) 1.5273 ⁎⁎⁎ (0.0150)
ln(Redeemed Points + 1) 0.1343 ⁎⁎⁎ (0.0045)
ln(Tenure in Days + 1) 0.0864 ⁎⁎⁎ (0.0088)
Propensity score 3.5277 ⁎⁎⁎ (0.2010)
Fixed intercept –2.0356 ⁎⁎⁎ (0.1080)
Matched pair random intercept covariance 0.0062 (0.0108)
Individual random intercept covariance 1.0292 ⁎⁎⁎ (0.0152)
Note. The coefficients labeled under “Is Post-Adoption”are estimated from an alternative model that examines the overall treatment effect. Coefficients labeled by
month, i.e., Sep through Feb, are estimated using the model specified in Eq. (3), which breaks down the treatment effect by month after app adoption. Estimates for
demographics, pre-adoption behavioral characteristics, propensity score, fixed intercepts, and random intercept covariance are the same between the two models.
Estimates for seasonality effects are not reported. 10,776 app adopters and 10,776 paired non-adopters as a control group with replacement (5,127 distinct
non-adopters). Total number of observations = 10,776 × 2 × 12 months = 258,624.
Robust standard errors per White (1980) are in parentheses. ⁎ p < .05. ⁎⁎ p < .01. ⁎⁎⁎ p < .001.
Fig. 2. Average monthly point accruals by app adoption type, before and after adoption. Note. One point is equivalent to approximately $30 CAD in spending. Number of adopters of both features = 2,531. Number of adopters of information lookups = 7,652. Number of adopters of check-ins = 573. Total of 10,776 adopters and 10,776 paired non-adopters as a control group with replacement (5,127 distinct non-adopters).
Both variables' coefficients are negative and significant
(information lookup recency: β̂ = −0.012, p < .05; sponsor
check-in recency: β̂ = −0.017, p < .01), suggesting that as the
number of days since the last information lookup or check-in
increases, spending levels drop. If customers increase their
information lookup recency by 10%, their weekly point accruals
decrease by 1 − (1.10^−0.012) = 0.11%. If they increase their
check-in recency by 10%, their weekly point accruals decrease by
1 − (1.10^−0.017) = 0.16%. On average, customers' information
lookup and check-in recencies are 3.8 and 8.5 days, respectively.
If customers lapse in their information lookups by one day, their
weekly point accruals drop by 1 − ((3.8 + 1)/3.8)^−0.012 = 0.28%, which is a
decrease of 0.03 points, or $1.0 CAD a week. If customers lapse in
their check-ins by one day, their weekly point accruals drop by
1 − ((8.5 + 1)/8.5)^−0.017 = 0.19%, which is a decrease of 0.02 points, or
sixty-seven cents CAD a week. The results suggest that as
customers cease using the app, the brand's revenues will decrease.
H4 is confirmed.
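The recency effect follows the same elasticity arithmetic, now with negative coefficients. A sketch of the one-day-lapse computation, using the averages reported above (function names are ours, for illustration):

```python
def lapse_effect(avg_recency, beta):
    """Pct. drop in weekly point accruals when recency grows by one day
    from its sample average, given the (negative) elasticity `beta`."""
    return 1 - ((avg_recency + 1) / avg_recency) ** beta

AVG_WEEKLY_POINTS = 11.9   # sample average, Sep 2012 - Feb 2013
CAD_PER_POINT = 30.0       # 1 point ~ $30 CAD of spending

for label, avg, beta in [("lookup", 3.8, -0.012), ("check-in", 8.5, -0.017)]:
    drop = lapse_effect(avg, beta)
    lost_points = AVG_WEEKLY_POINTS * drop
    # reproduces the reported 0.28% (~$1.00) and 0.19% (~$0.67) weekly drops
    print(f"{label} lapse of 1 day: -{drop:.2%} ≈ {lost_points:.3f} points "
          f"≈ ${lost_points * CAD_PER_POINT:.2f} CAD/week")
```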
Mobile apps are examples of “non-push marketing contacts”
(Shankar and Malthouse 2007, p. 3), through which customers
choose to interact with a brand. It is important to understand their
effectiveness so that firms know whether to invest in mobile
strategies. This study examines the effects of adopting and
interacting with a brand's app on subsequent spending levels.
It also tests how the use of different interactive app features
influences purchases. Additionally, it explores the effects of app
stickiness (i.e., repeated app use) and discontinued app use. We
find that (1) branded app adoption has a positive effect on
purchase behavior; (2) the positive effect on spending persists for
at least six months after adoption; (3) the positive effect of branded
app adoption is elevated when customers become more active and
use more features available on the branded app; (4) branded app
use has a cumulative effect: as customers repeatedly use the app,
their spending levels increase even more; and (5) discontinuing
the use of the branded app is associated with reduced future
spending. These findings are consistent with previous research on
the effectiveness of branded apps (Bellman et al. 2011), mobile
interactivity (Gu, Oh, and Wang 2013), mobile advertising (Rettie,
Grandcolas, and Deakins 2005), and positively- and negatively-
valenced brand engagement (Hollebeek and Chen 2014). We also
find that the concept of stickiness, which has been mostly tested in
a web context, can be applied to a mobile context. Consistent with
the framework of Furner, Racherla, and Babb (2014), we find that
the level of stickiness (i.e., repeated use of the app) increases
purchase behavior. To the best of our knowledge, this research is
also one of the first empirical studies to show that an app's failure
to satisfy customers' needs hurts brand sales, as customers
discontinue using it.
To check whether propensity score matching yields different estimates from
a null model that potentially suffers from selection bias, we conduct a Durbin–
Wu–Hausman test, comparing estimates from the null model without
propensity score matching to one that incorporates it. The null hypothesis is
rejected, suggesting that the two models yield different coefﬁcient estimates.
The test results are available from the corresponding author upon request.
Table 4. Estimates of focal parameters, log–log model (H3 and H4).
Variable Estimate (Std. err.)
ln(Information Lookups in Week w+ 1) 0.3314 ⁎⁎⁎ (0.0207)
ln(Check-ins in Week w+ 1) 0.0364 ⁎(0.0177)
ln(Cumulative Lookups by Week w−1 + 1) 0.0182 ⁎⁎⁎ (0.0053)
ln(Cumulative Check-ins by Week w−1 + 1) 0.0181 ⁎(0.0090)
Has no lookups so far by week w –0.0765 ⁎⁎⁎ (0.0225)
Has no check-ins so far by week w –0.05114
ln(Information Lookup Recency in Days in Week w + 1) –0.0120 ⁎
ln(Check-in Recency in Days in Week w + 1) –0.0170 ⁎ (0.0070)
Demographics and interactions
Is aged 18 to 34 0.0685 (0.1516)
Is aged 18 to 34 × has lookups in week w –0.0448 ⁎ (0.0210)
Is aged 18 to 34 × has check-ins in week w 0.1141 ⁎⁎ (0.0429)
Is aged 35 to 44 0.0762 (0.0830)
Is aged 35 to 44 × has lookups in week w –0.0810 ⁎⁎⁎ (0.0244)
Is aged 35 to 44 × has check-ins in week w 0.0814 (0.0563)
Is aged 45 to 54 0.1032 ⁎⁎⁎ (0.0135)
Is aged 45 to 54 × has lookups in week w –0.1326 ⁎⁎⁎ (0.0280)
Is aged 45 to 54 × has check-ins in week w 0.0460 (0.0617)
Is aged 55 to 64 0.0238 (0.0560)
Is aged 55 to 64 × has lookups in week w –0.0965 ⁎⁎ (0.0357)
Is aged 55 to 64 × has check-ins in week w –0.0853 (0.0750)
Is aged 65 plus –0.0510 (0.1241)
Is aged 65 plus × has lookups in week w –0.0873 (0.0541)
Is aged 65 plus × has check-ins in week w 0.2185 (0.1546)
Is female 0.0068 (0.0252)
Is male –0.0008 (0.0166)
Is female × has lookups in week w –0.0464 ⁎ (0.0189)
Is female × has check-ins in week w 0.0512 (0.0461)
Is male × has lookups in week w –0.0326
Is male × has check-ins in week w –0.0131 (0.0441)
Is iOS user 0.0711 ⁎⁎⁎ (0.0082)
Is Android user 0.0494 ⁎⁎⁎ (0.0099)
Pre-period behavioral characteristics and propensity score
ln(Points Accrued in Mar + 1) 0.0481 ⁎⁎⁎ (0.0034)
ln(Points Accrued in Apr + 1) 0.0346 ⁎⁎⁎ (0.0042)
ln(Points Accrued in May + 1) 0.0389 ⁎⁎⁎ (0.0035)
ln(Points Accrued in Jun + 1) 0.0342 ⁎⁎⁎ (0.0068)
ln(Points Accrued in Jul + 1) 0.0617 ⁎⁎⁎ (0.0061)
ln(Points Accrued in Aug + 1) 0.1148 ⁎⁎⁎ (0.0094)
ln(Points Accrued from Grocery + 1) 0.0926 ⁎⁎⁎ (0.0037)
ln(Points Accrued from Retail + 1) 0.0295 ⁎⁎⁎ (0.0034)
ln(Points Accrued from Gas + 1) 0.0467 ⁎⁎⁎ (0.0109)
ln(Points Accrued from Banking + 1) 0.0237 ⁎⁎⁎ (0.0033)
ln(Points Accrued from Other + 1) 0.0270 ⁎⁎⁎ (0.0041)
ln(Number of Sponsors Visited + 1) –0.1559 ⁎⁎⁎ (0.0140)
ln(Redeemed Points + 1) 0.0172 ⁎⁎⁎ (0.0038)
ln(Tenure in Days + 1) 0.0249 ⁎⁎⁎ (0.0042)
Propensity score –0.1708 (0.5209)
Note. 10,776 app adopters and 10,776 paired non-adopters as a control group with
replacement (5,127 distinct non-adopters). Total number of observations =
10,776 × 2 × 25 weeks = 538,800. Estimates for seasonality effects are not reported.
Robust standard errors per White (1980) are in parentheses. ⁎ p < .05. ⁎⁎ p < .01. ⁎⁎⁎ p < .001.
Our results provide substantive implications for the theory of
interactivity and stickiness, as well as cross-channel advertising.
First, this study expands the concept of interactivity and stickiness
to a mobile context. Previous research has shown that greater
engagement through interactivity generates favorable attitudes
toward message creators (Sundar et al. 2013). Our study confirms
that greater engagement with the interactive app features (and
presumably with the brand) produces a higher level of subsequent
spending. Customers who adopt a branded app and continue
to interact with it will deepen their relationships with the brand.
By logging in with the app and using its features, customers
experience multiple touch points with the brand. If we consider the
continuous and immediate nature of mobile technologies, adopters
have more opportunities to be reminded of the values that the brand
offers and build a relationship of trust and commitment. Thus,
providing a stickier app that encourages adopters to use it
repeatedly and frequently is a key mobile strategy. In addition,
habitual use of an app integrates the brand into the customer's life
(Wang, Malthouse, and Krishnamurthi 2015). The optimization of
cross-channel advertising can maximize the marketing potential of
each communication channel.
However, the finding that abandoning an app decreases
subsequent purchase behaviors warns against releasing an app
whose value to customers is not clear, so that it seems unhelpful or
irrelevant to them. Why customers decrease their spending levels
after they stop using the branded app is beyond the scope of this
paper. However, we speculate that the decline may be attributable
to negative or dissatisfying experiences that the customers develop
while exploring the app. Hollebeek and Chen (2014) identified
perceived brand value as one of the key antecedents of brand
engagement behavior. If consumers negatively assess the benefits
of a branded app, they are more likely to experience negatively-
valenced brand engagement such as negative brand attitude and
sharing negative word-of-mouth, which then becomes an
antecedent of decreasing sales. If that is the case, companies
should be cautious about making apps available on the market-
place before their functionalities are fully tested. It may be even
more harmful than not providing any app. Also, app recency can
serve as an early indicator of customer attrition and should be
monitored as a trigger event (Malthouse 2007).
This study has a few limitations. Data wise, we do not have
complete information on what specific functions the customers
used (e.g., when they looked up information, were they checking
their point balances or browsing reward items), which makes it
impossible to estimate the effect of each function beyond the
broad categories of information lookups and check-ins. Our study
also assumes that customers did not look up information when
they did a check-in, which may lead to overestimating the latter's
effect. Additionally, we do not have session times, so a ten-minute
session is assumed to impose the same effect as a one-minute
session. It is possible that accounting for usage time may yield
more nuanced insights. Finally, even though the sample size is
large, this study uses data from only one brand that provides
services and offerings across multiple categories. Hence, our
findings are conditional on the fact that a brand's app successfully
delivers value and interactivity to its customers, and that its
products are relevant to customers' needs.
Methodologically, while our research design uses propensity
score matching to reduce confounding due to non-random
assignment to app adoption, our approach does not fully solve the
issues of endogenous selection bias (Elwert and Winship 2014).
Our models control for observed confounders, but any unob-
served confounders, e.g., brand attitude or Internet experience,
are assumed away. Ideally, we would incorporate instrumental
variables and two-stage-least-square (2SLS) regressions in
addition to propensity score matching. For instance, a potentially
suitable instrumental variable could be customers' participation
on social media sites or terms of their smartphone contracts, but
unfortunately we do not have this type of data. Another way to
validate the findings from our observational study would be
conducting experiments that allow for better control of app
use and measure pre- and post-adoption spending behavior
(e.g., Armstrong 2012; Goldfarb and Tucker 2011).
Despite these limitations, this paper offers important sugges-
tions for future research. Few studies have examined the effect of
app adoption and usage with actual feature usage data that are
linked to purchase behaviors of the same individuals. Future
research should test the uses of other interactive features and see
what specific ones are the most effective in changing brand
attitude, purchase intention, or loyalty, and under which
conditions. Future studies should also test the effects of branded
apps in other industries.
Since this study concerns what happens after customers adopt
and repeatedly use (or stop using) a branded app, the process in
which a branded app gets discovered in the app store and downloaded
is not investigated in this study, but merits attention from both
academics and professionals. The technology acceptance model
(TAM) provides a conceptual framework for identifying factors
that motivate app adoption. TAM maintains that Perceived
Usefulness (PU) and Perceived Ease of Use (PEOU) are two
fundamental determinants of technology adoption (Davis 1989).
Recent literature on TAM suggests that there are four types of
external factors that influence the level of PU and PEOU:
individual differences, system characteristics, social influence,
and facilitating conditions (Venkatesh and Bala 2008). Given the
informational and hedonic nature of mobile apps and the influence of
peer reviews on technology adoption, how system characteristics
and social influence, among the four external factors, play a role in
consumers' adoption of branded apps becomes an important area
for future research. For example, what are the key design features
that affect the perceived level of interactivity and/or cognitive and
affective involvement? How much do consumers rely on ratings or
reviews of apps when there is a need to download an app? How do
these two external factors influence PU and PEOU? Are there
differences in motivating factors between branded app adoption
and generic app adoption?
From a practical standpoint, our findings suggest that
companies should provide branded apps that can offer a unique
brand experience without any technical hassles. A recent
survey from Forrester shows that only 20% of digital business
professionals update their apps frequently enough to fix bugs
and handle OS updates, signaling the lack of mobile strategies
in industry (Ask et al. 2015, March 23). Marketers should think
about what user interface and features can be implemented to
increase interactivity and user engagement (Kiousis 2002; Lee
2005). Companies should also monitor the app marketplace for
customers' evaluations on their branded apps. Negative word-
of-mouth on a branded app may signal the brand's lack of
readiness in the fast-changing mobile environment. With the
explosive growth of mobile technologies and app culture,
customers' expectations of a useful and enjoyable mobile
experience will become the norm. Whether companies can meet
those expectations is a determining factor in maximizing the
impact of mobile applications.
Below we provide our rationale for choosing propensity score
matching over 2SLS with instrumental variables by comparing
the two methods.
Propensity score matching differs from 2SLS in two ways.
First, after propensity scores are calculated, researchers
perform the following procedures: check that covariate balance
has improved, match and/or weight the sample with the
propensity scores, and then perform regression analyses using the
matched or weighted sample. 2SLS, on the other hand, does not
drive at covariate balance or a “perfect stratification” (Morgan
and Winship 2015, p. 291).
Second, propensity score matching improves a regression
sample's covariate balance by selecting a subset of control units
that are similar to the treated units based on observable variables.
This assumes that all confounds are accounted for when observed
covariates exhibit similar distributional patterns between the
treated and the control groups. In contrast, conventional
implementations of 2SLS regressions do not improve covariate
balance across all observed variables. Instead, they derive
causality of the endogenous variable of interest, i.e., variable x,
by assuming that the instrumental variable can only influence the
dependent variable through x. If the instrument influences the
dependent variable directly, then 2SLS also yields biased results. In some
cases 2SLS bias may be greater than the OLS bias because even
with a small direct effect between the dependent variable and the
instrument z, the parameter estimate can be biased upward by
1/Corr(z,x), and standard error be biased downward, resulting in
significant but incorrect results and conclusions.
Furthermore, an instrumental variable that is only weakly
correlated with xcan lead to inconsistent parameter estimations
(Bound, Jaeger, and Baker 1995; Nelson and Startz 1990), and
there are no statistical tests that can definitively show the
degree to which an instrumental variable is valid (Morgan and
Winship 2015, p. 301).
Finally, 2SLS only addresses confounding of one instrumented
variable at a time and does not account for covariate
balance across all variables. Given the advantages and
disadvantages of the methods, one would ideally have a good
instrumental variable and apply both approaches. However, in
our dataset, there are no variables that affect app adoption and
yet are not directly correlated with point accrual behavior.
Thus, instead of using an ill-suited instrumental variable, we
use propensity score matching to control for confounds due to
customers' demographics and preexisting behavior.
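A stripped-down version of the post-matching estimation step can be sketched as a plain difference-in-differences contrast on matched pairs; this is a hypothetical stand-in for the paper's full DDD mixed models, with a simulated treatment effect of 0.2 on log spend:

```python
import numpy as np

rng = np.random.default_rng(2)
n_pairs = 500
months = 12
post = np.arange(months) >= 6   # adoption occurs at month 6

# Simulated log-spend paths for adopters and their matched non-adopters;
# matched pairs share a baseline level, and adopters gain 0.2 post-adoption.
base = rng.normal(5.0, 1.0, n_pairs)
adopter = base[:, None] + 0.2 * post + rng.normal(0, 0.3, (n_pairs, months))
control = base[:, None] + rng.normal(0, 0.3, (n_pairs, months))

# DD estimate: (adopter post - pre change) minus (control post - pre change).
dd = ((adopter[:, post].mean() - adopter[:, ~post].mean())
      - (control[:, post].mean() - control[:, ~post].mean()))
print(dd)  # recovers roughly the simulated 0.2 effect
```

The shared baseline cancels in the double difference, which is the intuition behind using matched pairs before differencing; the paper's models add the third difference and random intercepts on top of this.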
Appendix B. Estimates of focal variables, DDD models, matching on non-adopters
Dependent variable ln(Points Accrued in Month t+1)
Lookup Check-in Lookup × Check-in
Is Post-Adoption 0.2000*** (0.0111) 0.2378*** (0.0350) –0.0063 (0.0402)
Sep 0.2872*** (0.0208) 0.3028*** (0.0657) –0.1118 (0.0755)
Oct 0.2067*** (0.0208) 0.2235*** (0.0657) –0.0115 (0.0755)
Nov 0.2112*** (0.0208) 0.2381*** (0.0657) 0.0389 (0.0755)
Dec 0.1718*** (0.0208) 0.2082** (0.0657) 0.0233 (0.0755)
Jan 0.1463*** (0.0208) 0.2217*** (0.0657) 0.0024 (0.0755)
Feb 0.1770*** (0.0208) 0.2325*** (0.0657) 0.0211 (0.0755)
Is aged 18–34 –2.3575*** (0.0601)
Is aged 35–44 –0.8697*** (0.0370)
Is aged 45–54 0.4036*** (0.0321)
Is aged 55–64 0.6834*** (0.0397)
Is aged 65 plus 1.5517*** (0.0591)
Is female 0.3491*** (0.0223)
Is male –0.1306*** (0.0212)
Pre-adoption behavioral characteristics and propensity score
ln(Number of Sponsors Visited + 1) 1.3564*** (0.0181)
ln(Redeemed Points + 1) 0.1707*** (0.0048)
ln(Tenure in Days + 1) 0.0089 (0.0098)
Propensity score (i.e., estimated probability of not adopting the app) –15.5305*** (0.3681)
Fixed intercept 6.0630*** (0.1828)
Matched pair random intercept covariance 0.0050 (0.0142)
Individual random intercept covariance 0.9336*** (0.0182)
S.J. Kim et al. / Journal of Interactive Marketing 31 (2015) 28–41
Notes to Appendix B
Note. As a robustness check, we use “no adoption” as the treatment and first estimate a propensity score model on 5,127 non-adopters and 10,776 app adopters, and then
pair up each non-adopter with a distinct app adopter. Estimates from the propensity score model are not reported. After obtaining a non-adopter matched sample, we
perform the DDD models per our research design. The regression sample includes 5,127 non-adopters and 5,217 paired app adopters. Total number of observations =
5,127 × 2 × 12 months = 190,836. Conclusions regarding H1 and H2 remain the same.
The coefficients labeled “Is Post-Adoption” are estimated from an alternative model that examines the overall treatment effect. Coefficients labeled by month,
i.e., Sep through Feb, are estimated using the model specified in Eq. (3), which breaks down the treatment effect by month after app adoption. Estimates for
demographics, pre-adoption behavioral characteristics, propensity score, fixed intercepts, and random intercept covariance are the same between the two models.
Estimates for seasonality effects are not reported.
p < .10, *p < .05, **p < .01, ***p < .001. Standard errors are in parentheses.
References

Angrist, Joshua David and Jörn-Steffen Pischke (2009), Mostly Harmless Econometrics: An Empiricist's Companion. Princeton, NJ: Princeton University Press.
Annalect (2015), “#GenerationTech: Millennials & Technology,” http://www.annalect.com/generationtech-millennials-technology/ (January 28).
Armstrong, J. Scott (2012), “Illusions in Regression Analysis,” International Journal of Forecasting, 28, 3, 689–94.
Ask, Julie A., Jeffrey S. Hammond, Carrie Johnson, and Laura Naparstek (2015), “The State of Mobile App Development: Few eBusiness Teams Keep Pace With Customer App Expectations,” https://www.forrester.com/The+State+Of+Mobile+App+Development/fulltext/-/E-res120267 (March 23).
Balasubramanian, Sridhar, Robert A. Peterson, and Sirkka L. Jarvenpaa (2002), “Exploring the Implications of M-commerce for Markets and Marketing,” Journal of the Academy of Marketing Science, 30, 4, 348–61.
Bellman, Steven, Robert F. Potter, Shiree Treleaven-Hassard, Jennifer A. Robinson, and Duane Varan (2011), “The Effectiveness of Branded Mobile Phone Apps,” Journal of Interactive Marketing, 25, 4, 191–200.
Boase, Jeffrey and Rich Ling (2013), “Measuring Mobile Phone Use: Self-Report Versus Log Data,” Journal of Computer-Mediated Communication, 18, 4, 508–19.
Bound, John, David A. Jaeger, and Regina M. Baker (1995), “Problems with Instrumental Variables Estimation when the Correlation Between the Instruments and the Endogenous Explanatory Variable Is Weak,” Journal of the American Statistical Association, 90, 430, 443–50.
Calder, Bobby J. and Edward C. Malthouse (2008), “Media Engagement and Advertising Effectiveness,” in Kellogg on Advertising and Media, Bobby J. Calder, editor. Hoboken, NJ: Wiley, 1–36.
comScore (2014), “comScore Reports August 2014 U.S. Smartphone Subscriber Market Share,” (October 7).
Coursaris, Constantinos K. and Jieun Sung (2012), “Antecedents and Consequents of a Mobile Website's Interactivity,” New Media & Society, 14, 7, 1128–46.
Davis, Fred D. (1989), “Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology,” MIS Quarterly, 13, 3, 319–40.
Elwert, Felix and Christopher Winship (2014), “Endogenous Selection Bias: The Problem of Conditioning on a Collider Variable,” Annual Review of Sociology, 40, 1, 31–53.
Fox, Zoe (2013), “Global App Downloads to Pass 100 Billion This Year,” (September 19).
Frith, Jordan (2014), “Communicating Through Location: The Understood Meaning of the Foursquare Check-in,” Journal of Computer-Mediated Communication, 19, 4, 890–905.
Furner, Christopher P., Pradeep Racherla, and Jeffry S. Babb (2014), “Mobile App Stickiness (MASS) and Mobile Interactivity: A Conceptual Model,” The Marketing Review, 14, 2, 163–88.
Goldfarb, Avi and Catherine Tucker (2011), “Online Display Advertising: Targeting and Obtrusiveness,” Marketing Science, 30, 3, 389–404.
Gu, Rui, Lih-Bin Oh, and Kanliang Wang (2013), “Differential Impact of Web and Mobile Interactivity on E-retailers' Performance,” Journal of Organizational Computing and Electronic Commerce, 23, 4, 325–49.
Hollebeek, Linda D. and Tom Chen (2014), “Exploring Positively- Versus Negatively-valenced Brand Engagement: A Conceptual Model,” Journal of Product & Brand Management, 23, 1, 62–74.
Hosh, Dave (2014), “App Retention Improves — Apps Used Only Once Declines to 20%,” http://info.localytics.com/blog/app-retention-improves (June 11).
Imbens, Guido W. and Donald B. Rubin (2015), Causal Inference for Statistics, Social, and Biomedical Sciences: An Introduction. New York, NY: Cambridge University Press.
Ingraham, Nathan (2013), “Apple Announces 1 Million Apps in the App Store, More Than 1 Billion Songs Played on iTunes Radio,” http://www.theverge.
Kang, Ju-Young M., Jung Mee Mun, and Kim K.P. Johnson (2015), “In-store Mobile Usage: Downloading and Usage Intention Toward Mobile Location-based Retail Apps,” Computers in Human Behavior, 46, 210–7.
Keele, Luke (2010), “An Overview of rbounds: An R Package for Rosenbaum Bounds Sensitivity Analysis with Matched Data,” white paper, Columbus, OH.
Kim, Eunice, Jhih-Syuan Lin, and Yongjun Sung (2013), “To App or Not to App: Engaging Consumers via Branded Mobile Apps,” Journal of Interactive Marketing.
Kiousis, Spiro (2002), “Interactivity: A Concept Explication,” New Media & Society.
Lariviere, Bart, Herm Joosten, Edward C. Malthouse, Marcel Van Birgelen, Pelin Aksoy, Werner H. Kunz, and Ming-Hui Huang (2013), “Value Fusion: The Blending of Consumer and Firm Value in the Distinct Context of Mobile Technologies and Social Media,” Journal of Service Management, 24, 3.
Lee, ThaeMin (2005), “The Impact of Perceptions of Interactivity on Customer Trust and Transaction Intentions in Mobile Commerce,” Journal of Electronic Commerce Research.
Li, Dahui, Glenn J. Browne, and James C. Wetherbe (2006), “Why Do Internet Users Stick with a Specific Web Site? A Relationship Perspective,” International Journal of Electronic Commerce, 10, 4, 105–41.
Lin, Judy Chuan-Chuan (2007), “Online Stickiness: Its Antecedents and Effect on Purchasing Intention,” Behaviour & Information Technology, 26, 6, 507–16.
Lipsman, Andrew (2014), “Major Mobile Milestones in May: Apps Now Drive Half of All Time Spent on Digital,” (June 25).
Malthouse, Edward C. (2007), “Mining for Trigger Events With Survival Analysis,” Data Mining and Knowledge Discovery, 15, 3, 383–402.
Meena, Satish (2014), “Forrester Research World Mobile and Smartphone Adoption Forecast, 2014 to 2019 (Global),” https://www.forrester.com/2014+To+2019+Global/fulltext/-/E-RES118252 (August 8).
Mollen, Anne and Hugh Wilson (2010), “Engagement, Telepresence and Interactivity in Online Consumer Experience: Reconciling Scholastic and Managerial Perspectives,” Journal of Business Research, 63, 9–10, 919–25.
Morgan, Stephen L. and Christopher Winship (2015), Counterfactuals and Causal Inference: Methods and Principles for Social Research. New York, NY: Cambridge University Press.
Nelson, Charles R. and Richard Startz (1990), “The Distribution of the Instrumental Variables Estimator and its t-Ratio when the Instrument Is a Poor One,” Journal of Business, 63, 1, S125–40.
Novak, Thomas P., Donna L. Hoffman, and Yiu-Fai Yung (2000), “Measuring the Customer Experience in Online Environments: A Structural Modeling Approach,” Marketing Science, 19, 1, 22–42.
Okazaki, Shintaro and Felipe Mendez (2013), “Perceived Ubiquity in Mobile Services,” Journal of Interactive Marketing, 27, 2, 98–111.
Rettie, Ruth, Ursula Grandcolas, and Bethan Deakins (2005), “Text Message Advertising: Response Rates and Branding Effects,” Journal of Targeting, Measurement and Analysis for Marketing, 13, 4, 304–12.
Rosenbaum, Paul R. and Donald B. Rubin (1983), “The Central Role of the Propensity Score in Observational Studies for Causal Effects,” Biometrika, 70, 1, 41–55.
——— and ——— (1985), “Constructing a Control Group Using Multivariate Matched Sampling Methods That Incorporate the Propensity Score,” The American Statistician, 39, 1, 33–8.
——— (2005), “Sensitivity Analysis in Observational Studies,” in Encyclopedia of Statistics in Behavioral Science, Brian S. Everitt and David C. Howell, editors. Chichester, UK: Wiley.
——— (2010), Design of Observational Studies. New York, NY: Springer.
Rubin, Donald B. (2007), “The Design Versus the Analysis of Observational Studies for Causal Effects: Parallels with the Design of Randomized Trials,” Statistics in Medicine, 26, 1, 20–36.
Shankar, Venkatesh and Sridhar Balasubramanian (2009), “Mobile Marketing: A Synthesis and Prognosis,” Journal of Interactive Marketing, 23, 2.
——— and Edward C. Malthouse (2007), “The Growth of Interactions and Dialogs in Interactive Marketing,” Journal of Interactive Marketing, 21, 2.
SmartBear (2014), “2014 State of Mobile,” http://www2.smartbear.com/rs/
Stuart, Elizabeth A. (2010), “Matching Methods for Causal Inference: A Review and a Look Forward,” Statistical Science, 25, 1, 1–21.
Sundar, S. Shyam, Qian Xu, Saraswathi Bellur, Haiyan Jia, Jeeyun Oh, and Guan-Soon Khoo (2010), “Click, Drag, Flip, and Mouse-over: Effects of Modality Interactivity on User Engagement with Web Content,” paper presented at the International Communication Association conference, Suntec City, Singapore.
———, Jeeyun Oh, Hyunjin Kang, and Akshaya Sreenivasan (2013), “How Does Technology Persuade? Theoretical Mechanisms for Persuasive Technologies,” in The Sage Handbook of Persuasion: Developments in Theory and Practice, James P. Dillard and Lijiang Shen, editors. Thousand Oaks, CA: Sage Publications, 388–404.
van Noort, Guda, Hilde A.M. Voorveld, and Eva A. van Reijmersdal (2012), “Interactivity in Brand Web Sites: Cognitive, Affective, and Behavioral Responses Explained by Consumers' Online Flow Experience,” Journal of Interactive Marketing, 26, 4, 223–34.
Venkatesh, Viswanath and Hillol Bala (2008), “Technology Acceptance Model 3 and a Research Agenda on Interventions,” Decision Sciences, 39.
Wang, Rebecca Jen-Hui, Edward C. Malthouse, and Lakshman Krishnamurthi (2015), “On the Go: How Mobile Shopping Affects Customer Purchase Behavior,” Journal of Retailing, 91, 2, 217–34.
———, Su Jung Kim, and Edward C. Malthouse (2016), “Branded Apps and Mobile Platforms as New Tools for Advertising,” in The New Advertising: Branding, Content, and Consumer Relationships in the Data-driven Social Media Era, Ruth E. Brown, Valerie K. Jones, and Ming Wang, editors. Santa Barbara, CA: Praeger.
Watson, Richard T., Leyland F. Pitt, Pierre Berthon, and George M. Zinkhan (2002), “U-commerce: Expanding the Universe of Marketing,” Journal of the Academy of Marketing Science, 30, 4, 333–47.
Welch, Chris (2013), “Google: Android App Downloads Have Crossed 50 Billion, over 1M Apps in Play,” http://www.theverge.com/2013/7/24/4553010/google-50-billion-android-app-downloads-1m-apps-available (July 24).
White, Halbert (1980), “A Heteroskedasticity-consistent Covariance Matrix Estimator and a Direct Test for Heteroskedasticity,” Econometrica, 48, 4, 817–38.
Yadav, Manjit S. and Rajan Varadarajan (2005), “Interactivity in the Electronic Marketplace: An Exposition of the Concept and Implications for Research,” Journal of the Academy of Marketing Science, 33, 4, 585–603.
Zott, Christoph, Raphael Amit, and Jon Donlevy (2000), “Strategies for Value Creation in E-commerce: Best Practice in Europe,” European Management Journal, 18, 5, 463–75.