Received: 18 January 2023 | Accepted: 10 July 2023
DOI: 10.1002/mar.21873
RESEARCH ARTICLE
Consumer engagement with AI‐powered voice assistants:
A behavioral reasoning perspective
Fulya Acikgoz¹ | Rodrigo Perez-Vega² | Fevzi Okumus³ | Nikolaos Stylos¹
¹University of Bristol Business School, University of Bristol, Bristol, United Kingdom
²Henley Business School, University of Reading, Reading, United Kingdom
³Rosen College of Hospitality Management, The University of Central Florida, Orlando, Florida, USA
Correspondence
Nikolaos Stylos, University of Bristol Business
School, Howard House, Queens Ave, Bristol
BS8 1SD, UK.
Email: n.stylos@bristol.ac.uk
Abstract
This study draws upon Behavioral Reasoning Theory and the Technology Acceptance
Model to investigate consumer engagement with AI‐powered voice assistants. The
study creates a theoretical model to examine the effects of reasons for and reasons
against using voice assistants. This research positions attitudes towards using voice assistants and willingness to provide personal information as key constructs.
The current study tests data from 491 voice assistant users via mTurk, and we utilize
a multimethod analysis scheme including the partial least squares technique and
the fuzzy set qualitative comparative analysis approach to provide an assessment of
the proposed model. Findings indicated that while privacy cynicism has a negative impact upon attitudes towards using voice assistants, the countervailing values of trust, perceived usefulness, and ease of use have an offsetting positive impact. The
study also highlights the moderating role of habit on the behavioral mechanisms
driving consumer engagement via willingness to provide privacy information. This
research advances the emerging literature on voice assistants with respect to
privacy‐related factors driving consumer engagement.
KEYWORDS
artificial intelligence, behavioral reasoning theory, engagement, privacy cynicism, technology
acceptance model, trust, voice assistants
1 | INTRODUCTION
Voice assistants represent a new form of voice‐enabled services that
simultaneously integrate elements of artificial intelligence with digital
devices. Voice assistants and connected devices have become increasingly popular as technology and capabilities have improved (Han & Yang, 2018; Jones, 2018). According to Statista
(2022), more than four billion digital voice assistants were used globally
in 2020, and 8.4 billion voice assistants are projected for 2024.
The distinction in terminology between voice assistants,
chatbots, and intelligent speakers has been a source of confusion
(Lister et al., 2020). Chatbots function as conversational agents within
text‐based interfaces, utilizing natural language processing to simulate
human‐like conversations (Ling et al., 2021). Intelligent speakers, on
the other hand, are physical devices with built‐in voice assistant
capabilities, enabling control over services and devices through spoken
commands. Voice assistants primarily focus on voice‐based interac-
tions, comprehending and responding to spoken commands and
inquiries. While chatbots operate through text, intelligent speakers
combine physical functionality with voice assistants, and voice‐
assistants prioritize voice interactions. These technologies enhance
user experiences across different platforms and devices.
Voice assistants enable basic tasks such as calling, messaging,
and seeking information (Saad et al., 2017). Voice assistants are
integrated into apps for an array of functions: reading the news,
online shopping, e‐recommendations, and interacting with brands
(McLean & Osei‐Frimpong, 2019). Voice assistants represent a more
natural interaction between consumers and the brand than
information-retrieval and text-based reading, thus motivating marketers to search for new and novel ways to increase usage of voice‐
enabled technologies (Mari et al., 2020). Voice assistants rely on
Natural Language Processing technologies to understand consumers'
voices and machine learning to improve and adapt to the commands
of users (De Barcelos Silva et al., 2020). Voice assistants are designed
to continuously “listen” to the users, and accordingly, are ever ready
to meet users' needs (Jones, 2018). However, concerns have been
expressed about how voice data are handled by voice assistants and
respective managing organizations. These concerns might function as
inhibitors and impede adoption and usage of voice assistants. For
instance, privacy risks, privacy concerns, and general security have
negative impacts on users' adoption of voice assistants according to
Hoy (2018). Easwara Moorthy and Vu (2015) also found that
consumers will avoid using voice assistants because of privacy
concerns. Therefore, examining the factors that stimulate and/or
restrain consumers' voice assistant usage is vital for marketing
decision‐making and strategy formulation with this emerging
technology.
Despite the importance and popularity of voice assistants,
scholars have focused mostly on chatbots and intelligent speakers
(Jiménez et al., 2021; Lei et al., 2021; Loureiro et al., 2021). Some
scholars have acknowledged that ethical aspects of Artificial
Intelligence‐powered (AI) technologies need further investigation
(Belk et al., 2023; Dwivedi et al., 2021; Flavián & Casaló, 2021;
Mariani et al., 2022). Specifically, since voice assistants represent a
young and emerging market (Flavian et al., 2023), scholars have
focused on drivers affecting voice assistants' usage (Aeschlimann
et al., 2020; Pitardi & Marriott, 2021). For instance, Lim, Kumar, et al.
(2022) found that increasing usage of voice‐assistants led to more
concerns about user privacy. Subsequently, privacy-related issues derived from using voice assistants have stimulated research on the adoption of voice assistants that focuses on barriers (Balakrishnan
et al., 2021). Inhibitors/barriers such as privacy concerns, privacy
cynicism, and privacy risks need continued in‐depth examination and
research inclusion to facilitate the development of a useful theoreti-
cal model. Balakrishnan et al. (2021) stated that technical features of
voice assistants have been extensively examined in literature, but
there appears to be a dearth of empirical studies that delve deeper
into user attitudes.
To date, only a few studies have investigated the fundamental reasons-for consumer engagement with voice assistants
(McLean & Osei‐Frimpong, 2019; Moriuchi, 2019; Prentice et al.,
2023). Market predictions indicate that digital voice assistants will
likely exceed 8.4 billion units by the year 2024—even greater than
the global population (Laricchia, 2022). The identification and clarification of the barriers as well as the enablers affecting consumer engagement with voice assistants are thus ever more significant. This study examines enablers (reasons-for) and barriers (reasons-against) of consumer engagement with voice assistants; in particular, consumers' attitudinal disposition towards using voice assistants as well as their willingness to provide private information are explored.
Moriuchi (2019) found that daily shopping transactions corre-
spond to habitual purchases that require minimal contemplation for
those consumers who use voice assistants. Furthermore, Ye and Potter (2011) have shown that activities such as using a web browser, like eating, drinking, and commuting to work, have become an integral part of ordinary users' daily routines. Actions such as these are
frequent and repetitive, thus providing an ideal environment for the
formation of habits (Ye & Potter, 2011). Previous research also indicates that certain behaviors become stronger as habit formation strengthens (Hu et al., 2018). Despite these significant indicators, the moderating role of habit formation has not been examined in the context of voice assistants. To address this gap, this study introduces "habit" as a moderator to more thoroughly comprehend the relationships between consumer attitude towards voice assistants, willingness to provide privacy information, and engagement with voice assistants.
Theoretically, this study makes five key contributions. First,
the study clarifies the drivers of engagement with voice assistants by
proposing and empirically testing a conceptual model that integrates
two theories from the technology adoption and consumer behavior
literature. Second, this study responds to recent calls to enable
theoretical cross‐fertilization in the fields of new technologies,
consumer research, and marketing to advance knowledge in these
areas further (Mariani et al., 2022) by integrating Behavioral
Reasoning Theory (Westaby, 2005) and the Technology Acceptance
Model (Davis et al., 1989) to determine the drivers of consumer
engagement. In this regard, the Technology Acceptance Model serves as the bedrock for testing consumer use of voice assistants, while Behavioral Reasoning Theory is employed to build a comprehensive perspective of potential avoidance of using voice assistants. Third,
this study provides empirical evidence of the effects of different
privacy aspects, such as privacy cynicism, which have not been
examined before regarding attitudes towards voice assistants,
willingness to provide privacy information, and consumer engage-
ment with this technology. Fourth, this is the first study to investigate
the moderating role of habit regarding AI‐powered voice assistants.
Lastly, the study provides a fuzzy set qualitative comparative analysis
(fsQCA) to confirm the findings extracted from the structural
equation modeling.
The structure of the current paper is as follows: the theoretical
framework and relevant theories follow the introduction which leads
to hypotheses development. A detailed account of methodological
procedures and the research design comes next and is followed by
the demonstration of findings. The paper continues with a discussion
of findings and implications and concludes with the limitations and
suggestions for further research.
2 | THEORETICAL BACKGROUND
2.1 | Background on voice assistants
The research field of voice assistants is expanding with greater
emphasis on comprehending the reasons underlying individual
acceptance/usage. Previous studies examined the consequences on
children resulting from their interaction with voice assistants
(Aeschlimann et al., 2020), the impact of voice assistants on
consumer‐brand engagement (McLean et al., 2021), the effect of
voice assistants on consumers' attitudes and behaviors
(Poushneh, 2021), and the drivers that shape trust and attitudes
towards usage intention (Pitardi & Marriott, 2021).
Furthermore, psychological and design‐specific factors impact
smart voice assistant usage and word‐of‐mouth according to Mishra
et al. (2022). Also, Aw et al. (2022) investigated the effect of human‐
like attributes, technology attributes, and contextual factors that influence the continuance usage of digital voice assistants in the
shopping context. Furthermore, Maroufkhani et al. (2022) focused
on the effect of brand credibility and the hedonic, utilitarian, and
social presence factors on brand loyalty and continuance intention in the voice assistant context. Recently, studies have examined what
motivates users to adopt voice assistants for different purposes
such as enhancing fashion shopping (Kautish et al., 2023),
consumer–brand relationships, and consumers' well‐being (Kang &
Shao, 2023), the well‐being and emotional connection that users
have with both AI devices and their associated brands (Prentice et al.,
2023), and the relationship between customer experience, satisfac-
tion and recommendation (De Oliveira et al., 2023).
Unlike the above studies, Malodia et al. (2022) drew upon
decision avoidance theory and formulated research questions seeking
to demonstrate why users delay or avoid using voice assistants for
transactions. Concurrently, Jain et al. (2022) examined whether the
credibility of a brand can alleviate concerns about privacy risks. Other
studies have referenced users' privacy issues deriving from voice
assistants (Brill et al., 2019; Hoy, 2018). Although enriching and enlightening, the most current research on voice assistants has nonetheless not investigated the "reasons-for" and "reasons-against" within a single model.
2.2 | Behavioral reasoning theory
Behavioral Reasoning Theory (Westaby, 2005) provides context-specific reasons why people adopt and maintain certain behaviors and clarifies why people do or do not support them. This theory postulates that individuals' reasons and/or motives interlink an array of constructs in the behavioral mechanism; values and beliefs precede the formation of attitudes, intentions, and behaviors and are key determinants of consumer decision-making (Gupta & Arora, 2017). These insights describe two important categories, "reasons-for" and "reasons-against", which are subjective determinants that users draw upon to support their behaviors in certain contexts (Lalicic & Weismayer, 2021).
In this regard, Behavioral Reasoning Theory provides a
comprehensive understanding derived from the theory of planned
behavior (Ajzen, 1991) and the theory of reasoned action (Ajzen & Fishbein, 1975). Behavioral Reasoning Theory is a recently
developed marketing theory (Sahu et al., 2022) and has been
utilized in different technology contexts: mobile shopping adop-
tion (Gupta & Arora, 2017), mobile banking adoption (Gupta &
Arora; Vakola, 2016), tangible product and service innovation
(Claudy et al., 2015), the Internet of Things (Sivathanu, 2018), AI‐
enabled travel service agents (Lalicic & Weismayer, 2021). Behav-
ioral Reasoning Theory indicates that users engage various
cognitive paths and/or processes in behavioral decision‐making.
This theory is contextual and enables a profound understanding of the determinants at play in both the adoption and the resistance of technologies (Delgosha & Hajiheydari, 2020).
Behavioral Reasoning Theory is advantageous for researchers
when compared to other theories in identifying possible reasons for
and/or against adoption. This theory aids in investigations of specific
reasons in particular contexts while examining distinct cognitive
routes in users' technology adoptions (Ryan & Casidy, 2018). Indeed,
Sahu et al. (2020) advocated for further advancing Behavioral
Reasoning Theory as an explanatory behavioral theory by testing it
in other contexts with multiple methods and considering other
moderators and mediators which could provide a more holistic
perspective to customer decision processes. To this end, this study
contributes to Behavioral Reasoning Theory by empirically testing it in a setting where privacy plays an important role in developing positive attitudes toward engaging with or declining new technologies (voice
assistants). Furthermore, this study aims to assess the robustness of
this theory by triangulating the results with other methods to confirm
or reject its predictive power.
2.3 | Technology acceptance model
The Technology Acceptance Model (Davis et al., 1989) emphasizes two quintessential factors underpinning the acceptance and usage of new technology (Moriuchi, 2019, 2023). This model enfolds four main constructs: context-enabled variables (perceived usefulness, perceived ease-of-use), attitude, usage intention, and usage behavior
(Davis et al., 1989). With these core constructs, Technology
Acceptance Model provides an explanatory capability for under-
standing how people accept and use new technology products and
services and has been further developed and adopted to explain
acceptance of continuous information and communication technol-
ogies (Pitardi & Marriott, 2021). This model has been utilized to
determine the circumstances or determinants that ease technology
into everyday business affairs (Moriuchi, 2019; Teo, 2016) and has
been employed in different technology contexts such as virtual
reality devices (Lee et al., 2019), augmented reality (Rese et al., 2017),
mobile applications (Vahdat et al., 2021), FinTech adoption (Singh
et al., 2020), and e‐commerce (Al‐Maghrabi & Dennis, 2011).
Previous studies such as Pal et al. (2020) examined users' intention to adopt voice-enabled devices. Moriuchi (2019) also examined the two constructs derived from the Technology Acceptance Model to understand their impact on engagement and loyalty for voice assistants. Kowalczuk (2018) provided a theoretical model based on this model to understand the effect of enabling as well as preventing features on smart speakers' usage and adoption. A more detailed and in-depth explanation of engagement with voice assistants, however, remains to be developed. Context-specific constructs (ease-of-use
and perceived usefulness) play an important role in understanding the
acceptance of specific technologies and are pivotal in this study's
hypothetical model (see Figure 1).
3 | HYPOTHESES DEVELOPMENT
3.1 | Reasons against using voice assistants
Voice assistants inescapably introduce fundamental privacy chal-
lenges and issues that have appeared in mainstream news
(Perez, 2019). Therefore, the hypothesized model in this study
considers privacy‐relevant factors as potential reasons‐against using
voice assistants. Privacy concern is one of the negative aspects of new digital Artificial Intelligence technology and refers to users'
concerns related to personal information gathering/storing, data
usage without permission, the potential misuse of private informa-
tion, and/or unauthorized sharing with third parties (Choi et al., 2018;
Xu et al., 2013), or concerns through data gathering, information
retrieval, and/or data-mining (Vimalkumar et al., 2021). Particularly in the context of voice assistants, Easwara Moorthy and Vu (2015) found
that users are not keen on using voice assistants in public settings
because of privacy concerns.
In sum, users have difficulty handling the complicated trade-offs between the benefits of technology adoption and the challenges of information privacy concerns. Hence, heightened user privacy concerns also impact user attitudes and behavioral intentions in various settings
(Ofori et al., 2016). Additional extant literature demonstrates that
privacy concerns negatively affect attitude and behavioral intention
(e.g., Bailey et al., 2017; Min & Kim, 2015; Ofori et al., 2016;
Schomakers et al., 2022). Therefore, the following hypothesis
can be derived:
Hypothesis 1. Privacy concerns have a negative influence
on attitudes toward using voice assistants.
Another potential reason‐against using voice assistants is privacy
risk. Privacy risks refer to the perceived threat to a user's privacy
because of the increasing amount of information gathered without
the user's awareness and subsequently losing control over one's
personal information (Lee, 2009). Privacy risks are perceived as the
FIGURE 1 Hypothesized research model.
“expectation of losses due to the disclosure of individual information”
(Xu et al., 2011, p. 804). Fortes and Rita (2016) stated that risk occurs
whenever information is misused and can result from losing users'
personal information in the online shopping environment. Therefore,
privacy risks may hinder users from sharing their personal informa-
tion and even cause them to provide inaccurate information (Abri
et al., 2009) and/or cause negative attitudes towards any specific
service (Walter & Abendroth, 2020). Even though privacy risks have
been examined across different settings (Duan & Deng, 2022; Walter
& Abendroth, 2020), the research team of Kim et al. (2019) highlighted that privacy risk can vary across different outcomes. Recently,
Bateman (2020) stated that voice assistant devices have enough
features to create privacy risks for users. Thus:
Hypothesis 2. Privacy risks have a negative influence on
attitudes toward using voice assistants.
Privacy cynicism is the last factor hypothesized as a potential
inhibitor of voice assistant usage. Privacy cynicism is derived from
psychology and organizational literature and refers to “an attitude of
uncertainty, powerlessness, and mistrust towards the handling of
personal data by online services, rendering privacy protection
behavior subjectively futile” (Hoffmann et al., 2016, p. 5). To manage
or dispel privacy‐related issues, privacy cynicism is used by
consumers as a cognitive coping mechanism (Lutz et al., 2020).
Cynicism emerges whenever expectations are not met in an
organization (Andersson, 1996). Moreover, cynicism may diminish
one's efficiency in achieving a task by decreasing one's sense of
effectiveness (Schaufeli et al., 1996). Especially as highlighted by Choi
et al. (2018), cynicism is related to disappointment and desperation.
Hence, cynicism is considered a construct that has a negative
relationship with other constructs due to corresponding negative
emotions or issues (Lutz et al., 2020). In this vein, it is assumed that
users who are worried about privacy issues have developed privacy
cynicism as a coping mechanism. Lyu et al. (2023) recently found a significant negative effect of privacy cynicism in the context of facial recognition services. Consequently, the following hypothesis is proposed:
Hypothesis 3. Privacy cynicism has a negative influence on
attitudes toward using voice assistants.
3.2 | Reasons for using voice assistants
According to Corritore et al. (2003), trust can be identified as a user's attitude of confident expectation, in a situation of risk, that their vulnerabilities will not be exploited. Trust represents a significant foundation for building a successful interaction between the user and
the agent (Moussawi & Benbunan‐Fich, 2020). Trust is vital in
boosting positive consumer behaviors and encouraging behaviors
such as adoption and continuance intention (Hong & Cha, 2013).
Hence, trust is one of the most pivotal antecedents of engaging with technology (Lu et al., 2016). The literature is replete with examples of
trust as a significant determinant in different kinds of relationships in
Information Systems (Foehr & Germelmann, 2020; Malodia
et al., 2023). Moreover, trust is considered a positive predictor of
attitude (Pitardi & Marriott, 2021). Despite the research that has
focused on offline and online trust, Foehr and Germelmann (2020)
stated that more research on trust between users and voice assistants is needed. Therefore, the following hypothesis is proposed:
Hypothesis 4. Trust has a positive influence on attitude
towards using voice assistants.
Perceived usefulness and perceived ease‐of‐use are additional
concepts that may influence voice assistant usage. Perceived usefulness expresses the degree to which users believe that using a technology increases their performance (Davis et al., 1989). Perceived ease‐of‐use
refers to a user's cognitive effort necessary to understand and make
use of new technology (Gefen et al., 2003). The influences of both
perceived ease‐of‐use and perceived usefulness bear upon different
outcomes such as satisfaction (Ofori et al., 2016), attitude (Bailey
et al., 2017; Walter & Abendroth, 2020), intention to use and word‐
of‐mouth intention (Cai et al., 2022) and behavioral intention
(Sepasgozar et al., 2019). In the voice assistant context, Moriuchi
(2019) stated that while perceived usefulness positively impacts
attitudes toward voice assistants and engagement with using voice
assistants, ease‐of‐use only affects attitudes toward voice‐assistants.
Recently, Choung et al. (2022) reported that users' attitudes towards artificial intelligence-based voice assistants were positively impacted by perceived ease-of-use and the usefulness of the technology. However,
scant attention has been given to the relationship between perceived
usefulness, perceived ease‐of‐use, and attitude toward voice
assistants. Therefore:
Hypothesis 5. Perceived usefulness has a positive influence
on attitude towards using voice assistants.
Hypothesis 6. Perceived ease‐of‐use has a positive
influence on attitude towards using voice assistants.
3.3 | Attitude, willingness to provide privacy information, and engagement with voice assistants
Attitude is a tendency to respond favorably or unfavorably towards a
specific situation within a given context. One's positive or negative
predisposition may decisively shape future intentions and subsequent
behaviors such as loyalty, repurchase, and satisfaction. In the past
two decades, researchers have investigated the effect of attitude on
different outcomes such as behavioral intention in different contexts
based on the Technology Acceptance Model, the Theory of Planned
Behavior, and the Theory of Reasoned Action (Fortes & Rita, 2016;
Venkatesh et al., 2003; Walter & Abendroth, 2020). However, the
effects of attitudes toward the most recent and rapidly emerging
technologies, such as voice assistants, are still largely unknown.
The focus upon “engagement” in this study concerns specific interactive encounters with technology, as stated by Brodie et al. (2013). Research conducted by Moriuchi (2019) found that consumer engagement with new technology is affected by attitude, social norms, and perceived control. Therefore, this study assumes that the more positive the attitude toward using voice assistants, the more likely users are to engage with voice assistants. Furthermore, willingness to
provide private information as a result of one's attitude towards
using voice assistants is regarded as another potential key factor.
Even while users benefit from voice assistants, they also have
concerns related to their privacy and security issues (Hoy, 2018).
However, if users hold a positive attitude towards using voice
assistants, then they might be more willing to share private
information. Although Kim and Kim (2018), Kim et al. (2019), and Trang and Weiger (2021) examined certain factors affecting willingness to provide privacy information, these scholars overlooked the effect of attitude on this willingness. Thus, it is hypothesized:
Hypothesis 7. Attitude toward using voice assistants has a positive influence on willingness to provide privacy information.
Hypothesis 8. Attitude toward using voice assistants has a positive influence on engagement with using voice assistants.
Lastly, providing users' privacy information to voice assistants
may potentially lead to higher engagement with voice assistants.
Users may think they will be served better when sharing private information, which may contribute to fostering higher engagement with voice assistants. Cultivating higher engagement with voice assistants may, in turn, necessitate more of users' privacy information to enable greater personalization and meaningful marketing strategies. Within the existing literature, little attention is given to
the consequences of willingness to provide privacy information. Since
the effect of willingness to provide privacy information on engagement with using voice assistants has not yet been fully examined, the
following hypothesis is rendered:
Hypothesis 9. Willingness to provide privacy information has
a positive influence on engagement with using voice assistants.
3.4 | The moderating role of habit
Habit is described as the result of an automatic and unconscious
response to a stimulus that generates an impulse to act (Gardner,
2015). Considering habit as a mechanism for understanding technology use has been determined to be essential (Venkatesh et al., 2003). It is believed that the more frequently we repeat an action, the stronger the habit formed and the more likely we are to repeat it again (Iranmanesh
et al., 2022). This is because habits are antecedents of consumers'
assessments and intentions regarding technology use. Customers are
satisfied with the services they habitually receive that fulfill their
needs and expectations (Amoroso & Chen, 2017). These hypotheses
follow:
Hypothesis 10a. Habit strengthens the relationship between
attitude towards using voice assistants and engagement with
using voice assistants.
Hypothesis 10b. Habit strengthens the relationship between
attitude towards using voice assistants and willingness to
provide privacy information.
Hypothesis 10c. Habit strengthens the relationship between
willingness to provide privacy information and engagement with
using voice assistants.
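To make the logic of these moderation hypotheses concrete, the sketch below shows how an interaction effect of this kind can be probed with a mean-centered product term in an ordinary least squares regression. This is only an illustrative analogue of the interaction terms later estimated within PLS-SEM; the data are simulated and the variable names are hypothetical.

```python
# Illustrative moderation test in the spirit of H10b: does habit strengthen
# the attitude -> willingness-to-provide-information link? Simulated data;
# the paper estimates the corresponding interaction inside PLS-SEM.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 491  # same size as the study's final sample
df = pd.DataFrame({"attitude": rng.normal(size=n),
                   "habit": rng.normal(size=n)})
# Outcome simulated with a positive interaction, for demonstration only.
df["willingness"] = (0.3 * df["attitude"] + 0.2 * df["habit"]
                     + 0.15 * df["attitude"] * df["habit"]
                     + rng.normal(scale=0.8, size=n))

# Mean-center the predictors before forming the product term to reduce
# collinearity between the main effects and the interaction.
df["att_c"] = df["attitude"] - df["attitude"].mean()
df["hab_c"] = df["habit"] - df["habit"].mean()

model = smf.ols("willingness ~ att_c * hab_c", data=df).fit()
print(model.summary().tables[1])  # att_c:hab_c row is the moderation effect
```

A significant positive coefficient on the product term would correspond to habit strengthening the focal relationship, mirroring what H10b predicts.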
4 | METHODOLOGY
4.1 | Sampling and data collection
The sampling in this study followed a conservative approach to sample size considerations. After performing a statistical power analysis via G*Power 3.1, a minimum target of 388 survey responses was adopted (effect size = 0.5; α = 0.10; power = 0.90; df = 621; critical χ² = 680.08) (Faul et al., 2007). Empirical data were collected
from Amazon's Mechanical Turk (mTurk) in November 2020. mTurk is
a very common instrument for social and behavioral sciences for
collecting high‐quality and cost‐effective data from a trustworthy
resource (Prentice et al., 2023).
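For readers who prefer a scriptable alternative to G*Power, the same a-priori chi-square power calculation can be approximated as follows; the n_bins argument encodes the degrees of freedom (df = n_bins − 1), and under these inputs the result should land near the 388 responses reported above.

```python
# Sketch: a-priori sample size for a chi-square test with effect size
# w = 0.5, alpha = 0.10, power = 0.90 and df = 621, mirroring the G*Power
# settings reported in the text.
from statsmodels.stats.power import GofChisquarePower

analysis = GofChisquarePower()
n_required = analysis.solve_power(
    effect_size=0.5,  # Cohen's w
    alpha=0.10,       # significance level used in the study
    power=0.90,       # target statistical power
    n_bins=622,       # df = n_bins - 1 = 621
)
print(f"Minimum required sample size: {n_required:.0f}")
```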
The field research study gathered data from 555 participants who have used voice assistants. Careful observation of this initial sample revealed that 26 respondents had not used voice assistants, and an additional 38 respondents failed the attention checks. After removing these responses, a final pool of 491 respondents comprised the final sample. This usable sample size is more than sufficient for data analysis because it exceeds the suggested sample size for employing the PLS technique, which is 128 in this case (α = 0.10; max number of arrows = 6) (Hair et al., 2022). Approximately 96% of the respondents who used voice assistants were from the United States. The USA sample is timely and appropriate for understanding the reasoning for/against using voice assistants since Statista (2022) found that about 50% of Americans use voice assistants. Respondents included 491 users (250 females and 241 males) between 18 and 73 years of age. Individuals between 24 and 45 years old constituted the primary user group. More than 75% of the respondents have bachelor's and master's degrees. Further, respondents mostly used Google Assistant (39.3%), Alexa (26.9%), Siri (22%), and Cortana (3.7%) (Table 1).
4.2 | Construct measures
The proposed scales in the questionnaire measure perceived ease‐of‐
use, perceived usefulness, privacy concern, privacy risk, privacy
ACIKGOZ ET AL.
|
2231
cynicism, attitude, willingness to provide privacy information, and
engagement. Thirty‐six items were measured on a 5‐point Likert scale
ranging from strongly disagree to strongly agree. A five-point Likert scale increases respondents' response rate and response quality while reducing their frustration levels (Babakus & Mangold, 1992). Perceived ease-of-use and perceived usefulness were adapted from Ratten (2015) with four
and five items, respectively. This study adopted a scale for privacy
concerns and privacy risks based on the four items of Xu et al. (2011).
We measured privacy cynicism with five items according to Choi et al.
(2018), and for habit we used the three items proposed by Hsiao et al.
(2016). We also measured willingness to provide private information
and trust with three items from the study by Kim et al. (2019). Lastly,
attitude with five items and engagement with four items were
measured by borrowing from the scale of Moriuchi (2019).
4.3 | Data analysis
Partial Least Squares‐based (PLS‐based) Structural Equation Model-
ing (SEM) with SmartPLS4 software assessed the rigor of the
hypothetical model. This technique was selected due to the guiding
objective of data analysis, theory building and prediction, rather than
confirming relationships based on a given framework (Hair
et al., 2022). Given the exploratory nature of this study, both Behavioral Reasoning Theory and the Technology Acceptance Model enabled the examination of the proposed hypotheses via PLS, which greatly increases the explained variance of the dependent variables (Fotiadis & Stylos, 2017). Additionally, the multivariate normality assumption was relaxed because the data sample exceeded 200 cases. In short, PLS is the most suitable
technique to empirically test the novel conceptualization of user
behavior within the digital voice assistant technology context.
4.4 | Common method bias (CMB)
The field study gathered data from a single source, as it relies upon self-report questionnaires within a cross-sectional design. The CMB check is based on Harman's one-factor method using principal component factor analysis. Findings indicate the largest explained variance was 28.2%, which is less than the 50% threshold suggested by Podsakoff et al. (2003). Therefore, CMB is not an issue. Additionally, inter-construct variance inflation factors (VIFs) were also checked against Kock's (2015) recommendation that they should be less than 5; the largest value was 3.18 and the smallest was 1.83, so CMB is not a critical issue.
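As an illustration, the two checks just described can be scripted along the following lines; items (an item-level response matrix) and scores (construct-level scores) are hypothetical placeholders, not the study's data.

```python
# Sketch of the CMB checks: Harman's single-factor test (first unrotated
# component should explain < 50% of the variance) and inter-construct
# VIFs (should be < 5 per Kock, 2015).
import pandas as pd
from sklearn.decomposition import PCA
from statsmodels.stats.outliers_influence import variance_inflation_factor

def harman_single_factor(items: pd.DataFrame) -> float:
    """Share of total variance captured by the first principal component."""
    standardized = (items - items.mean()) / items.std()
    pca = PCA(n_components=1).fit(standardized)
    return float(pca.explained_variance_ratio_[0])

def vif_table(scores: pd.DataFrame) -> pd.Series:
    """VIF of each construct score regressed on all the others."""
    x = scores.assign(const=1.0)  # include an intercept column
    return pd.Series(
        [variance_inflation_factor(x.values, x.columns.get_loc(c))
         for c in scores.columns],
        index=scores.columns,
    )
```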
5 | RESULTS
5.1 | Measurement (outer) model evaluation
Data analysis began with assessing the measurement items which
loaded between 0.747 and 0.939. The items exceeded the suggested
threshold of 0.708 for factor loadings (Hair et al., 2019,2022). The
average variance extracted (AVE) ranged from 0.622 to 0.774, above the 0.5 threshold for all constructs (Hair et al., 2022), which shows acceptable convergent validity. The internal consistency of the
scales demonstrated composite reliability (CR) ranging from 0.865 to
0.915, and Cronbach's α varying from 0.786 to 0.877, which are all higher than the minimum value of 0.70. In toto, this indicates
internal consistency and reliability criteria are met for all constructs
(see Supporting Information: Appendix A).
Next, discriminant validity was assessed per Fornell and Larcker (1981). The AVE values of the constructs were found to be greater than any of the cross-loadings with other factors (see Supporting Information: Appendix B). Furthermore, discriminant validity was checked using the Heterotrait-Monotrait (HTMT) ratio (see Supporting Information: Appendix C), and the criteria were met according to Henseler et al.'s (2015) proposed values of less than 0.85 or 0.90. Overall, discriminant validity was well-established for the factorial structure.
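For illustration, the reliability and convergent validity indices reported above can be computed from standardized outer loadings as follows; the example loadings are hypothetical values inside the 0.747–0.939 range reported, not the study's actual estimates.

```python
# Sketch of AVE and composite reliability from standardized loadings,
# following Fornell and Larcker (1981):
#   AVE = mean(loading^2)
#   CR  = (sum of loadings)^2 / [(sum of loadings)^2 + sum(1 - loading^2)]
import numpy as np

def ave(loadings: np.ndarray) -> float:
    return float(np.mean(loadings ** 2))

def composite_reliability(loadings: np.ndarray) -> float:
    squared_sum = loadings.sum() ** 2
    return float(squared_sum / (squared_sum + np.sum(1 - loadings ** 2)))

trust_loadings = np.array([0.81, 0.85, 0.88])  # hypothetical construct
print(ave(trust_loadings), composite_reliability(trust_loadings))
# Fornell-Larcker check: sqrt(AVE) of each construct should exceed its
# correlations with every other construct.
```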
5.2 | Structural (inner) model evaluation
The results of a bootstrapping test with 10,000 subsamples and a one-tailed test according to Hair et al. (2022) appear in Table 2. Path
coefficients showed the relationship strength between dependent
and independent constructs. It appears that privacy concern
TABLE 1 Sample demographic information.
Variable Category % – N (491)
Gender Female 50.9–250
Male 49.1–241
Age 18–24 8.6–42
25–34 38.7–190
35–44 26.1–128
45–54 15.3–75
Above 54 11.4–56
Education level High school 24–118
Bachelor 53.6–263
Master's degree 17.9–88
Doctoral Degree 1.8–9
Professional Degree 2.6–13
Which voice assistants do you
use most?
Siri 22.0–108
Alexa 26.9–132
Cortana 3.7–18
Google's Assistant 39.3–193
Missing 8.1–40
surprisingly had a significant and positive influence on attitude
towards using voice assistants (β= 0.151, T= 2.952), hence H1 was
rejected. Moreover, the negative relationship between privacy risk and attitude toward using voice assistants (β = −0.064, T = 1.285) was statistically significant at the 10% level. Therefore, H2 was supported. Privacy cynicism (β = −0.132, T = 2.243) has a negative impact on attitudes towards using voice assistants, hence H3 was supported.
Regarding reasons for using voice assistants, trust (β= 0.140,
T= 2.712), ease‐of‐use (β= 0.386, T= 8.210) and perceived useful-
ness (β= 0.414, T= 7.331) exhibit significant and positive impacts on
attitude towards using voice assistants. Therefore, support is
demonstrated for H4, H5, and H6, respectively. Additionally, attitude
towards using voice assistants has a positive impact on willingness to
provide personal information and engagement with using voice
assistants, respectively based on H7 (β= 0.288, T= 6.147), and H8
(β= 0.258, T= 6.553). Further, the effect of willingness to provide
privacy information on engagement with using voice assistants was
significantly positive, so H9 (β= 0.292, T= 5.991) was supported.
Lastly, we examined the moderating role of habit on the relationship between attitude towards using voice assistants and engagement with using voice assistants in light of H10a (β = −0.003, T = 0.085); on the relationship between attitude towards using voice assistants and willingness to provide privacy information, H10b (β = 0.138, T = 3.526); and on the relationship between willingness to provide privacy information and engagement with using voice assistants, H10c (β = 0.060, T = 1.645). The moderating role of habit in H10b and in H10c was confirmed; however, habit did not show a moderating role in H10a. Finally, our analysis reveals that willingness to provide privacy information positively mediates the relationship between attitude towards using voice assistants and engagement with using voice assistants
TABLE 2 Hypotheses results.
Original sample (O) | Sample mean (M) | Standard deviation (STDEV) | T statistics (|O/STDEV|) | p values
H1: Privacy Concern ‐> Attitude towards using
voice assistant
0.151 0.141 0.051 2.960 0.002***
H2: Privacy Risk ‐> Attitude towards using voice
assistant
−0.064 −0.056 0.050 1.285 0.099*
H3: Privacy Cynicism ‐> Attitude towards using
voice assistant
−0.132 −0.125 0.059 2.243 0.012**
H4: Trust ‐> Attitude towards using voice
assistant
0.140 0.134 0.052 2.712 0.003***
H5: Ease‐of‐use ‐> Attitude towards using voice
assistant
0.386 0.391 0.047 8.210 0.000***
H6: Perceived usefulness ‐> Attitude towards
using voice assistant
0.414 0.416 0.057 7.331 0.000***
H7: Attitude towards using voice assistant ‐>
Willingness to provide privacy information
0.288 0.286 0.047 6.147 0.000***
H8: Attitude towards using voice assistant >
Engagement with using voice assistant
0.258 0.255 0.039 6.553 0.000***
H9: Willingness to provide privacy information ‐>
Engagement with using voice‐assistant
0.292 0.293 0.049 5.991 0.000***
H10a: Habit*Attitude towards using voice
assistants ‐> Engagement with using voice‐
assistant
−0.003 −0.004 0.035 0.085 0.466
H10b: Habit*Attitude towards using voice
assistants ‐> Willingness to provide privacy
information
0.138 0.136 0.039 3.526 0.000***
H10c: Habit* Willingness to provide privacy
information > Engagement with using voice
assistant
0.060 0.060 0.037 1.645 0.050**
Specific indirect effect
Attitude > Willingness to provide privacy
information > Engagement with using voice
assistant
0.084 0.084 0.022 3.761 0.000***
Note: PLS results of the research model (*p < 0.10, **p < 0.05, and ***p < 0.01, one-tailed).
Abbreviation: PLS, Partial Least Squares.
(β = 0.084, T = 3.761). In summation, all structural (inner) model hypotheses outcomes and standardized regression weights are depicted in Figure 2.
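The bootstrapping logic behind Table 2 can be sketched in simplified form as follows; a single path is approximated here by an OLS slope rather than a full PLS estimation, so this is an illustrative analogue of the SmartPLS procedure rather than a reproduction of it.

```python
# Sketch: resample respondents with replacement 10,000 times, re-estimate
# a path coefficient each time, and form a one-tailed p-value from the
# bootstrap t-statistic (estimate / bootstrap standard deviation).
import numpy as np
from scipy import stats

def bootstrap_path(x: np.ndarray, y: np.ndarray,
                   n_boot: int = 10_000, seed: int = 1):
    rng = np.random.default_rng(seed)
    slope = np.polyfit(x, y, 1)[0]          # original-sample estimate (O)
    n = len(x)
    boot = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)         # resample with replacement
        boot[b] = np.polyfit(x[idx], y[idx], 1)[0]
    t_stat = slope / boot.std(ddof=1)       # |O / STDEV|, as in Table 2
    p_one_tailed = 1 - stats.norm.cdf(abs(t_stat))
    return slope, t_stat, p_one_tailed
```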
5.3 | The fsQCA model
Considering the PLS findings, this study reexamined the role of
attitude towards using voice assistants and engagement with using
voice assistants through fsQCA to provide a more holistic and
comprehensive understanding of its outcomes and consequences.
The purpose of fsQCA is to evaluate the multiple complex
antecedent conditions (or causal recipes) that lead to high member-
ship in the two outcome conditions, which are attitude toward using
voice assistants and engagement with using voice assistants.
FsQCA is a hybrid approach that combines qualitative and quantitative features to explore various cases that demonstrate phenomena under complicated conditions (Ragin, 2009). This approach strengthens the
results of theoretical models initially investigated with SEM
(Bawack et al., 2021) by increasing the understanding of the
mechanisms behind the users' perceptions of voice assistants'
engagement which were not clarified through PLS‐SEM.
fsQCA begins with a calibration process of interval-scale variables to identify set configurations. The calibration yields scores that vary from 0 (not a member) to 1 (full member), with 0.5 denoting the highest vagueness in membership (Ragin, 2009). Summated measures are calculated by summing the items measuring each construct and are calibrated according to the breakpoints 0.95, 0.50, and 0.05, respectively (Ragin et al., 2008).
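A minimal sketch of this direct calibration, assuming Ragin's (2008) logistic transform with log-odds anchored at ±3 for the 0.95 and 0.05 breakpoints, is shown below; the anchor values in the example are hypothetical.

```python
# Sketch of direct calibration: map an interval-scale summated measure
# onto fuzzy membership scores in [0, 1] using three anchors
# (full membership = 0.95, crossover = 0.50, full non-membership = 0.05).
import numpy as np

def calibrate(raw, full, cross, non):
    raw = np.asarray(raw, dtype=float)
    log_odds = np.where(
        raw >= cross,
        3.0 * (raw - cross) / (full - cross),   # scale up to +3 log-odds
        3.0 * (raw - cross) / (cross - non),    # scale down to -3 log-odds
    )
    return 1.0 / (1.0 + np.exp(-log_odds))

# A 5-point summated attitude scale with hypothetical anchors:
print(calibrate([1.5, 3.0, 4.5], full=4.5, cross=3.0, non=1.5))
# -> approximately [0.047, 0.50, 0.953]
```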
Then, the findings of fsQCA analysis are arrayed in a truth table
forged via an algorithmic two‐phase rational procedure. The first
phase creates a truth‐table spreadsheet from the main data to
determine the causal and outcome conditions to integrate into the
analysis (Valaei et al., 2017). Due to the large sample size in this study (n > 100), only configurations with a minimum frequency of three were analyzed (Bawack et al., 2021). Configurations retained in this
analysis (consistency) encompassed at least 80% of the cases,
denoting the extent to which a causal solution leads to the outcome
(in this study, engagement) (Ragin et al., 2008).
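The consistency and coverage figures reported in the solution tables below follow the standard fuzzy-set formulas, sketched here with hypothetical membership scores.

```python
# Sketch of fsQCA diagnostics for a causal recipe X (the fuzzy
# intersection, i.e., element-wise minimum, of its conditions) and an
# outcome Y (Ragin, 2008):
#   consistency(X -> Y) = sum(min(X, Y)) / sum(X)
#   coverage(X -> Y)    = sum(min(X, Y)) / sum(Y)
import numpy as np

def consistency(x: np.ndarray, y: np.ndarray) -> float:
    return float(np.minimum(x, y).sum() / x.sum())

def coverage(x: np.ndarray, y: np.ndarray) -> float:
    return float(np.minimum(x, y).sum() / y.sum())

trust_m = np.array([0.9, 0.6, 0.2, 0.8])        # hypothetical memberships
usefulness_m = np.array([0.8, 0.7, 0.3, 0.9])
recipe = np.minimum(trust_m, usefulness_m)       # fuzzy "AND" of conditions
engagement_m = np.array([0.85, 0.65, 0.25, 0.9])
print(consistency(recipe, engagement_m), coverage(recipe, engagement_m))
```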
Two models were formed: one to express attitude towards using voice assistants and another to render consumer engagement with using voice assistants, thus providing different solutions. The
presence of a condition is typically exhibited with (●), the absence/
negation with a crossed‐out circle (⊗), and the “do not care”condition
with a space (Bawack et al., 2021; Fiss, 2011; Pappas & Wood-
side, 2021).
FIGURE 2 Validated research model.
The first outcome condition in Table 3 shows that six possible pathways lead to attitude towards using voice assistants. The findings illustrate an overall solution coverage of 0.851 and consistency of 0.932. Solution 1, the set with the highest consistency (0.994) and satisfactory coverage (0.489), revealed that the high presence of all antecedents is a sufficient condition for the development of attitude toward using voice assistants. Solution 2, with high consistency (0.951) and coverage (0.826), demonstrated that the presence of perceived usefulness and the absence of privacy cynicism, privacy risk, and ease-of-use would lead to attitude toward using voice assistants. Solution 3, with high consistency (0.950) and coverage (0.824), indicated that the presence of perceived usefulness and the absence of privacy cynicism, ease-of-use, and trust would result in attitude towards using voice assistants. Solution 4 has substantial consistency (0.951) and important coverage (0.788), highlighting that the presence of ease-of-use and the absence of the other antecedents are key conditions that lead to attitude towards using voice assistants. Solution 5 shows high consistency (0.960) and sufficient coverage (0.670), indicating that the presence of trust and the absence of the reasons-against factors and of ease-of-use would constitute attitude towards using voice assistants. Lastly, Solution 6, with satisfactory consistency (0.873) and coverage (0.777), denotes that the presence of privacy concerns and the absence of privacy cynicism and trust would result in an attitude toward using voice assistants.
The second outcome, shown in Table 4, highlights the coverage and the consistency of the eight combinations that adequately demonstrate high engagement with using voice assistants. These findings demonstrate an overall solution coverage of 0.734 and consistency of 0.946, underscoring the empirical and theoretical importance of the findings (Paykani et al., 2018). Solution 1 combines the highest consistency (0.973) with notable coverage (0.608), hence providing the best explanation for engagement with using voice assistants, and illustrates that the presence of each antecedent is an essential condition for engagement with using voice assistants. Solution 2, with acceptably high consistency (0.805) and significant coverage (0.742), indicates that the presence of privacy concerns, privacy risks, and attitudes towards using voice assistants, along with the absence of the other antecedents, would entice engagement with using voice assistants. Solution 3 presents substantial consistency (0.797) and coverage (0.884); this condition illustrates that the presence of perceived usefulness and the absence of privacy cynicism, privacy concern, privacy risk, ease-of-use, and willingness to provide privacy information would lead to more engagement with using voice assistants. Solution 4 has important consistency (0.819) and coverage (0.872). The similarity between Solution 4 and Solution 3 lies in the presence of perceived usefulness together with attitude, and the absence of privacy cynicism, privacy concern, ease-of-use, and willingness to provide privacy information, which would lead to engagement with using voice assistants. Solution 5 demonstrates important consistency (0.815) and high coverage (0.847); it highlights that the presence of ease-of-use and attitude, and the absence of the remaining antecedents, are conducive to engagement with using voice assistants. Solution 6 is notably different, showing high consistency (0.890) and important coverage (0.787); the presence of a willingness to provide privacy information and the absence of the remaining antecedents facilitate engagement with using voice assistants. Solution 7 includes important consistency (0.895) and substantial coverage (0.800), pointing out that the presence of trust and the absence of the remaining antecedents lead to engagement with using voice assistants. Lastly, Solution 8 is also remarkably consistent (0.922) and has substantial coverage (0.782). This solution set proposes that the presence of perceived usefulness, attitude, and willingness to provide privacy information, together with the absence of the privacy-related factors and ease-of-use, provide engagement with using voice assistants.
6 | DISCUSSION, CONCLUSIONS, AND FUTURE RESEARCH
As stated by Flavian et al. (2023), voice assistants still represent a
young market, and voice‐enabled technologies are becoming more
widely adopted. Considering the increase in the usage of voice assistants in people's daily lives, this study focused on the effect of the reasons for and the reasons against using voice assistants upon engagement with using voice assistants, through both attitudes towards using voice assistants and willingness to provide privacy information. Drawing on Behavioral Reasoning Theory and the Technology Acceptance Model, we formulated hypotheses proposing that the reasons against using voice assistants (privacy concerns, privacy risk, and privacy cynicism) and the reasons for using voice assistants (trust, perceived usefulness, and ease-of-use) influence attitude towards using voice assistants, and consequently willingness to provide privacy information and engagement with using voice assistants. Moreover, we also examined the moderating role of habit in the voice assistant context.
TABLE 3 fsQCA Findings.
Model: Attitude = f(privacy cynicism, privacy risk, privacy concern,
trust, perceived usefulness, ease‐of‐use)
Solution
Configuration 1 2 3 4 5 6
Reasons against
Privacy
cynicism
●○○○○○
Privacy concern ● ○○●
Privacy risk ●○ ○○
Reasons for
Ease‐of‐use ●○○● ○○
Perceived
usefulness
●●●○
Trust ● ○○●○
Consistency 0.994 0.951 0.950 0.951 0.960 0.873
Row coverage 0.489 0.826 0.824 0.788 0.670 0.777
Unique
coverage
0.007 0.002 0.002 0.008 0.005 0.034
Overall solution
consistency
0.932
Overall solution
coverage
0.851
Note: Blank, not considered in the solution; hollow circles, absence of the variable; black circles, presence of the variable.
First, this study found that privacy concern positively affects
attitude toward using voice assistants. The fsQCA findings also suggested that the presence of privacy concerns is essential for shaping attitudes toward using voice assistants. Specifically, two out of six
solutions indicated that privacy issues play a significant role in
shaping relevant attitudes in using voice assistants. In the strongest
solution, which involves all constructs proposed in the hypothetical
model, all three privacy items (cynicism, concern, risk) are activated.
This finding is consistent with Shin's study (2010). However, this
finding is still not entirely congruent with previous studies showing
the negative relationship between privacy concerns and attitudes
toward using voice assistants (Pitardi & Marriott, 2021). The reason is
that the escalation of privacy concerns among individuals triggers a
heightened level of scrutiny regarding the handling, storage, and
utilization of their personal data. This increased awareness prompts
users to actively assess the extent to which voice assistants provide
privacy protection. Particularly, when users have increased privacy
concerns, then this leads to greater awareness, empowers them to
exert control over their personal information, and stimulates the
demand for features that enhance privacy. Consequently, users may
develop a more positive attitude toward using voice assistants.
Simultaneously, managers could leverage privacy as a competitive
advantage, creating a conducive environment that encourages
users to engage with voice‐assistants. Furthermore, even though
Vijayasarathy (2004) found a nonsignificant relationship between
privacy concern and attitude, he hypothesized that privacy would
positively affect attitude in online shopping. Another alternative
explanation is that users already know that new technology has many
privacy issues. Therefore, privacy concerns may not be perceived as a
factor affecting attitude towards using voice assistants. On the
contrary, it may be assumed that voice assistants are secure regarding privacy issues and that privacy concern is essential to shaping a favorable attitude towards using voice assistants.
Second, the negative effect of privacy risk on attitude towards
using voice assistants is significant. This finding is consistent with the
existing literature (Duan & Deng, 2022; Fortes & Rita, 2016; Walter &
Abendroth, 2020), but it is not consistent with Vimalkumar et al.'s
study (2021) who found that privacy risk does not have any impact
on users' adoption behaviors in voice assistant context. Similarly, the
fsQCA findings showed that only the presence of privacy risk is
insufficient to lead to an attitude toward using voice assistants. One
possible explanation for this finding could be that users do not
perceive any risks associated with voice assistants if they do not use
them for risky tasks (Pitardi & Marriott, 2021). Since privacy risk has
been newly examined in the context of voice assistants, the
relationship between privacy risk and attitude toward using voice
assistants needs more investigation. Another notable contribution, as per the relevant hypothesis, was the negative effect of privacy
TABLE 4 fsQCA findings.
Model: Engagement = f(privacy cynicism, privacy risk, privacy concern, trust, attitude, willingness to provide privacy information, perceived
usefulness, ease‐of‐use)
Solution
Configuration 1 2 3 4 5 6 7 8
Reasons‐against
Privacy cynicism ○○○○○○○○
Privacy concern ●● ○○○○○○
Privacy risk ●● ○ ○○○○
Reasons‐for
Ease‐of‐use ●○○○● ○○○
Perceived usefulness ●○●●○○○●
Trust ●○○○○○●
Attitude towards using voice
assistants
●● ●● ○○●
Willingness to provide privacy
information
●○○○○● ○●
Consistency 0.973 0.805 0.79 0.819 0.815 0.890 0.890 0.922
Row coverage 0.608 0.742 0.884 0.872 0.847 0.787 0.800 0.782
Unique coverage 0.009 0.002 0.003 0.002 0.001 0.004 0.005 0.014
Overall solution consistency 0.946
Overall solution coverage 0.734
Note: Blank, not considered in the solution; hollow circles, absence of the variable; black circles, presence of the variable.
cynicism on attitude toward using voice assistants; this finding builds
on previous studies (Acikgoz & Vega, 2022) and offers new insights in
this area.
Furthermore, we found that trust positively impacts attitudes
toward using voice assistants. Only the study by Pitardi and Marriott (2021) has investigated the relationship between trust and attitude toward using voice assistants in an integrated manner. Even though
trust has been examined in different contexts for analyzing the
efficiency of voice assistants (Loureiro et al., 2021), some scholars
(McLean et al., 2021) state that trust in voice assistants remains an issue despite these devices making users' lives much easier.
Perceived usefulness and ease‐of‐use also influence positive
attitudes toward using voice assistants. The results of this study have
been confirmed in other technology‐based contexts (Bailey
et al., 2017; Walter & Abendroth, 2020). The findings in this study's
voice assistant context concur with Moriuchi (2019) regarding the
effect of perceived usefulness on attitude toward using voice‐
assistants. Moreover, Pitardi and Marriott's study (2021) also validate
the findings herein showing the positive effect of perceived
usefulness and ease‐of‐use on attitude towards using voice assis-
tants. Additionally, Kang and Namkung (2018) also demonstrated
that perceived usefulness is a more effective determinant factor than
ease‐of‐use for specifying users' attitudes toward any technology.
This study examined the effect of attitude towards using voice
assistants on willingness to provide personal information and the
findings demonstrate that users with a favorable attitude toward using
voice assistants are more willing to share their personal information.
Even though Kim et al. (2019) identified factors influencing the
willingness to provide privacy information, they did not include the
effect of attitude on willingness to provide personal information. Shortly
thereafter, Cao and Wang (2022) called for research investigating potential drivers other than privacy concerns that could impact information disclosure. However, both attitudes
towards using voice assistants and willingness to provide personal
information were examined in this study and found to have a significant
positive effect on engagement with using voice assistants. This study
explored factors that affect engagement with using voice assistants and
answered the demand for integrating technologies with marketing as
stated by Moriuchi (2019). The findings of this study show that
willingness to provide privacy information positively mediates the
relationship between attitude towards using voice assistants and
engagement with using voice assistants.
Lastly, habit as a moderator plays an important role in the relationship between attitude towards using voice assistants and willingness to provide privacy information, and in the relationship between willingness to provide privacy information and engagement with using voice assistants. The
fsQCA solutions on engagement have also shown that privacy
concern and privacy risk contribute to the two best solutions. Overall, the other six solutions, which did not include the privacy factors, demonstrated a much lower consistency in their proposed structure. Privacy is thus key to modelling the decision-making of the consumer behavioral mechanism with respect to engaging with voice assistants.
6.1 | Theoretical implications
This research study makes several theoretical contributions. First, by
integrating technology adoption and consumer behavior theories,
researchers can holistically examine the drivers behind engagement, going beyond enablers to also consider potential barriers around privacy dimensions. While some driving factors have been explored
in previous research studies, privacy dimensions have not been
sufficiently examined empirically though these are important for this
type of technology. Although research on voice assistants has gained
increasing attention, there has been limited investigation from a
privacy‐related perspective. Thus, this study responds to Mehta's
(2022) call for research on investigating the relationship between the
use of AI‐technology and users' privacy‐related concerns. Moreover,
Dwivedi et al. (2021) emphasized the necessity for research on the ethics of AI technology related to privacy and security from both
internal (user) and external (manager) stakeholders' perspectives.
Hence, this study contributes to understanding the factors comprising “reasons against” using voice assistants. However, although
Balakrishnan et al. (2021) have investigated enablers and inhibitors
to resistance toward adopting AI‐powered voice assistants, this study
adopted a different perspective by adding privacy‐related constructs
along with perceived ease‐of‐use and perceived usefulness. This
study's theoretical model explains actual behavior, setting it apart
from most studies in the extant literature.
This empirical study is also distinctive in testing privacy cynicism.
Even though cynicism is well developed in the management and
organization literature, Lutz et al. (2020) maintain that privacy
cynicism still needs further conceptual work. Acikgoz and Vega (2022)
likewise called for research on privacy cynicism in AI-powered
technologies, because privacy cynicism may offer a fuller explanation
for the incongruence between users' privacy attitudes and their
privacy behaviors (Van Ooijen et al., 2022). This study accordingly
investigated the effect of privacy cynicism in the voice assistant
context.
Another theoretical implication of this study concerns the effect of
trust on voice-assistant usage (Foehr & Germelmann, 2020; Pitardi &
Marriott, 2021). Although Pitardi and Marriott (2021) sought to
understand what leads users to trust voice assistants, trust in
AI-powered contexts has been examined mostly in chatbot settings (Lei
et al., 2021; Loureiro et al., 2021; Mostafa & Kasamani, 2022). By
examining the effect of users' trust on attitudes toward voice
assistants, this study substantially contributes to clarifying the
role of trust in the voice assistant context.
Furthermore, the relationships between attitude towards using
voice assistants, willingness to provide personal information, and
engagement with using voice assistants have not been previously
examined. This study sheds light on user engagement with using
voice assistants as influenced by consumers' attitudes towards using
voice assistants and their willingness to provide privacy information.
Also, habit as a moderator has not previously been examined in the
voice assistant context. Voice assistants are daily usage devices for
many consumers, and their everyday use saw a dramatic uptick during
the Covid-19 pandemic, to the point that these devices have become
part of users' daily habits. Hence, examining the role of habit in the
voice assistant context also makes a substantive contribution to the
existing literature.
6.2 | Managerial implications
The findings of this study indicate a significant linkage between
privacy-related factors and attitude toward using voice assistants,
which in turn leads to willingness to provide privacy information and
engagement with using voice assistants. Marketing practitioners and
managers of voice-assistant brands can strategize and formulate
marketing promotions or organizational offerings built on
privacy-related determinants. Furthermore, voice assistant managers
who pay attention to these factors and develop user experiences
accordingly stand to enhance customer-based brand equity (brand
awareness, brand association, brand love).
Managers can turn privacy concerns into a competitive advantage by
emphasizing that voice assistants provide high-level privacy
protection, thereby creating a positive attitude towards using voice
assistants. To build on privacy-related determinants in voice
assistant experiences, managers can enhance privacy policies,
implement privacy-enhancing features, educate users about specific
privacy measures, incorporate privacy messaging in marketing, ensure
compliance with privacy regulations, and engage with users for
feedback. In this way, privacy concerns can be converted into a reason
for using voice assistants rather than a reason against them. Voice
assistant brands, companies, and managers would also be wise to focus
on alleviating users' privacy cynicism toward voice assistants.
Privacy cynicism as a way of coping with privacy concerns is
legitimate and needs to be factored into business decisions and into
how firms interface with users of voice assistants.
Since a significant negative relationship between privacy risk and
attitude toward using voice assistants was supported by the data
analysis, managers and companies should ensure that voice assistants
do not create privacy risks. Privacy risks may generate serious
concerns among users and restrain their use of voice assistants
(McLean & Osei-Frimpong, 2019). To mitigate privacy risks associated
with voice assistants, managers should prioritize data security
measures, adopt privacy by design principles, practice data minimiza-
tion, ensure transparency and obtain user consent, provide user
control over privacy settings, conduct regular privacy audits, and
promote privacy training and awareness. These actions aim to instill
user confidence, address privacy concerns, and foster a trustworthy
environment, ultimately leading to increased user adoption and
satisfaction with voice assistant technology.
In addition to the motives for utilizing voice assistants, managers
ought to devise tactics that enhance trust, perceived ease‐of‐use, and
perceived usefulness. To achieve this, managers can employ
strategies such as transparent communication, user‐friendly design,
personalization and customization options, reliable and accurate
responses, proactive assistance, and continuous improvement. By
focusing on these areas, managers can improve the overall user
experience, increase user satisfaction, and maximize the value
derived from voice assistant technology.
The findings illustrate that trust has a weaker impact on attitude
towards using voice assistants than the other explanatory factors;
hence, building users' trust in their voice assistant experience
should be addressed. One approach is to enlist credible, expert
influencers who can inspire their followers by promoting voice
assistants. Lastly, managers can communicate the benefits that users
derive from utilizing these technologies.
Managers should be aware that positive attitudes towards using voice
assistants increase willingness to provide personal information and,
in turn, engagement with using voice assistants. Obtaining personal
information may be important for managers seeking to personalize and
thereby improve the experience of voice assistant users. Managers can
provide incentives (such as advanced search capabilities or a smart
working environment) to users who share their privacy information. The
attractiveness of such offers can strengthen users' inclination to
disclose information.
6.3 | Limitations and suggestions for future research
This research study examines only the effects of reasons for and
reasons against using voice assistants on engagement with using voice
assistants. Future research may look at different outcomes, such as
continuance intention of using voice assistants or electronic
word-of-mouth (e-WOM) intention, among others indicated in Table 5.
This research combined the Technology Acceptance Model and Behavioral
Reasoning Theory, but future research may focus more on the latter,
drawing on other theories to delve into the reasons for and against
using voice assistants with different antecedents. For example, future
research may develop a different perspective by employing Privacy
Calculus Theory combined with Protection Motivation Theory, which may
alternatively be utilized in understanding the privacy issues that new
technology devices create. Privacy cynicism, in particular, requires
more research in this context; future studies may consider its effects
on or through different constructs, for example by employing mediators
such as commitment (Hernandez-Ortega & Ferreira, 2021).
Another pivotal point is that this study relied on a survey using
self-reported measures of ownership and habitual usage of
voice-assistant devices, with the quantitative data collected to
evaluate the research model. Although this is not a limitation per se,
future research might incorporate semi-structured or in-depth
interviews to understand the voice assistant context more fully, as
well as other methods that measure actual behavior. Another suggestion
is to aim for continuous observation of consumer engagement with voice
assistants, since usage is an ongoing behavior that would fit
longitudinal research schemes. Lastly, future studies may investigate
participants' inclination to use voice assistants within a specific
service industry, such as retailing, tourism, or hospitality (e.g.,
buying tickets), or other service contexts.
ACKNOWLEDGMENTS
The authors have nothing to report.
CONFLICT OF INTEREST STATEMENT
The authors declare no conflict of interest.
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the
corresponding author upon reasonable request.
ORCID
Fulya Acikgoz http://orcid.org/0000-0003-0357-3771
Rodrigo Perez‐Vega http://orcid.org/0000-0003-1619-317X
Nikolaos Stylos http://orcid.org/0000-0003-1626-0088
REFERENCES
Abri, D. A., McGill, T., & Dixon, M. (2009). Examining the impact of
E‐privacy risk concerns on citizens' intentions to use E‐government
services: An Oman perspective. Journal of Information Privacy and
Security,5(2), 3–26.
Acikgoz, F., & Vega, R. P. (2022). The role of privacy cynicism in consumer
habits with voice‐assistants: A technology acceptance model
perspective. International Journal of Human–Computer Interaction,
38, 1138–1152.
Aeschlimann, S., Bleiker, M., Wechner, M., & Gampe, A. (2020).
Communicative and social consequences of interactions with
voice‐assistants. Computers in Human Behavior,112, 106466.
Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior
and Human Decision Processes,50(2), 179–211.
Ajzen, I., & Fishbein, M. (1975). A Bayesian analysis of attribution
processes. Psychological Bulletin,82(2), 261–277.
Al‐Maghrabi, T., & Dennis, C. (2011). What drives consumers' continuance
intention to e‐shopping? Conceptual framework and managerial
implications in the case of Saudi Arabia. International Journal of Retail
& Distribution Management,39(12), 899–926.
Amoroso, D. L., & Chen, Y. A. N. (2017). Constructs affecting
continuance intention in consumers with mobile financial apps: A dual
factor approach. Journal of Information Technology Management, 28(3),
1–24.
Andersson, L. M. (1996). Employee cynicism: An examination using a
contract violation framework. Human Relations,49, 1395–1418.
Aw, E. C. X., Tan, G. W. H., Cham, T. H., Raman, R., & Ooi, K. B. (2022).
Alexa, what's on my shopping list? Transforming customer experi-
ence with digital voice‐assistants. Technological Forecasting and
Social Change,180, 121711.
Awad, N. F., & Krishnan, M. S. (2006). The personalization privacy
paradox: An empirical evaluation of information transparency and
the willingness to be profiled online for personalization. MIS
Quarterly,30,13–28.
Babakus, E., & Mangold, W. G. (1992). Adapting the SERVQUAL scale to
hospital services: An empirical investigation. Health Services
Research,26(6), 767–786.
Bailey, A. A., Pentina, I., Mishra, A. S., & Ben Mimoun, M. S. (2017). Mobile
payments adoption by US consumers: An extended TAM.
International Journal of Retail & Distribution Management,45(6),
626–640.
Balakrishnan, J., Dwivedi, Y. K., Hughes, L., & Boy, F. (2021).
Enablers and inhibitors of AI-powered voice assistants: A dual-factor
approach by integrating the status quo bias and technology acceptance
model. Information Systems Frontiers. https://doi.org/10.1007/
s10796-021-10203-y
Bawack, R. E., Wamba, S. F., & Carillo, K. D. A. (2021). Exploring the role of
personality, trust, and privacy in customer experience performance
during voice shopping: Evidence from SEM and fuzzy set
qualitative comparative analysis. International Journal of Information
Management,58, 102309. https://doi.org/10.1016/j.ijinfomgt.2021.
102309
Belk, R. W., Belanche, D., & Flavián, C. (2023). Key concepts in artificial
intelligence and technologies 4.0 in services. Service Business,17(1),
1–9.
Blut, M., Kulikovskaja, V., Hubert, M., Brock, C., & Grewal, D. (2023).
Effectiveness of engagement initiatives across engagement plat-
forms: A meta‐analysis. Journal of the Academy of Marketing Science,
1–25.
Brill, T. M., Munoz, L., & Miller, R. J. (2019). Siri, Alexa, and other digital
assistants: a study of customer satisfaction with artificial intelligence
applications. Journal of Marketing Management,35(15–16), 1401–1436.
Cai, R., Cain, L. N., & Jeon, H. (2022). Customers' perceptions of hotel AI‐
enabled voice‐assistants: Does brand matter? International Journal of
Contemporary Hospitality Management,34(8), 2807–2831.
TABLE 5 Suggestions and research questions for future studies.
Suggested variables, theories, and techniques for future research
Variables: Continuance intention of using digital technology (Yan
et al., 2021); electronic word-of-mouth (e-WOM) intention (Wandoko &
Panggati, 2022); commitment (Hernandez-Ortega & Ferreira, 2021)
Theories: Privacy Calculus Theory (Awad & Krishnan, 2006); Protection
Motivation Theory (Rogers, 1975); Communication Privacy
Management Theory (Petronio & Caughlin, 2006)
Techniques: In-depth interviews and/or online focus groups (Stylos
et al., 2021); continuous observation of consumers' engagement
(Blut et al., 2023); experimental design (Whang & Im, 2021);
longitudinal schemes (Lim, Rasul, et al., 2022)
Some example research questions for future research
RQ1: How do the reasons that users have for using or not using voice
assistants impact their intention to share their experiences through
electronic word-of-mouth (e-WOM)? What are the underlying factors that
drive this relationship and shape their decision to spread the word
about their voice assistant experiences?
RQ2: How can theories such as Privacy Calculus Theory and Protection
Motivation Theory be combined with Behavioral Reasoning Theory to gain
a deeper understanding of why users choose to use or avoid voice
assistants?
RQ3: How does commitment act as a mediator in the relationship between
users' attitudes towards voice assistants and their engagement,
influencing the decisions individuals make regarding their privacy
concerns?
RQ4: How can qualitative methods such as semi-structured or in-depth
interviews provide a more comprehensive understanding of how users
interact with voice assistants and the contextual factors that shape
their behavior?
RQ5: What are the specific challenges and opportunities faced by
different industries, such as retailing, tourism, or hospitality, in
effectively engaging consumers through voice assistants?
Cao, G., & Wang, P. (2022). Revealing or concealing: Privacy information
disclosure in intelligent voice‐assistant usage‐a configurational
approach. Industrial Management & Data Systems,122(5),
1215–1245.
Choi, H., Park, J., & Jung, Y. (2018). The role of privacy fatigue in online
privacy behavior. Computers in Human Behavior,81,42–51.
Choung, H., David, P., & Ross, A. (2022). Trust in AI and its role in
the acceptance of AI technologies. International Journal of
Human–Computer Interaction, 39(9), 1727–1739. https://doi.org/
10.1080/10447318.2022.2050543
Claudy, M. C., Garcia, R., & O'Driscoll, A. (2015). Consumer resistance to
innovation—A behavioral reasoning perspective. Journal of the
Academy of Marketing Science,43(4), 528–544.
Corritore, C. L., Kracher, B., & Wiedenbeck, S. (2003). Online trust:
Concepts, evolving themes, a model. International Journal of Human‐
Computer Studies,58(6), 737–758.
Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of
computer technology: A comparison of two theoretical models.
Management Science,35, 982–1003.
de Barcelos Silva, A., Gomes, M. M., da Costa, C. A., da Rosa Righi, R.,
Barbosa, J. L. V., Pessin, G., De Doncker, G., & Federizzi, G. (2020).
Intelligent personal assistants: A systematic literature review. Expert
Systems with Applications,147, 113193. https://doi.org/10.1016/j.
eswa.2020.113193
Delgosha, M. S., & Hajiheydari, N. (2020). On‐demand service platforms
pro/anti adoption cognition: Examining the context‐specific reasons.
Journal of Business Research,121, 180–194.
Duan, S. X., & Deng, H. (2022). Exploring privacy paradox in contact
tracing apps adoption. Internet Research,32(5), 1725–1750.
Dwivedi, Y. K., Hughes, L., Ismagilova, E., Aarts, G., Coombs, C., Crick, T.,
Duan, Y., Dwivedi, R., Edwards, J., Eirug, A., Galanos, V.,
Ilavarasan, P. V., Janssen, M., Jones, P., Kar, A. K., Kizgin, H.,
Kronemann, B., Lal, B., Lucini, B., …Williams, M. D. (2021). Artificial
Intelligence (AI): Multidisciplinary perspectives on emerging chal-
lenges, opportunities, and agenda for research, practice, and policy.
International Journal of Information Management,57, 101994.
Easwara Moorthy, A., & Vu, K. P. L. (2015). Privacy concerns for use of
voice activated personal assistant in the public space. International
Journal of Human‐Computer Interaction,31(4), 307–335.
Faul, F., Erdfelder, E., Lang, A. G., & Buchner, A. (2007). G* Power 3:
A flexible statistical power analysis program for the social, behavioral,
and biomedical sciences. Behavior Research Methods, 39(2), 175–191.
Fiss, P. C. (2011). Building better causal theories: A fuzzy set approach to
typologies in organization research. Academy of Management Journal,
54(2), 393–420.
Flavián, C., Akdim, K., & Casaló, L. V. (2023). Effects of voice‐assistant
recommendations on consumer behavior. Psychology & Marketing,
40, 328–346.
Flavián, C., & Casaló, L. V. (2021). Artificial intelligence in services: Current
trends, benefits, and challenges. The Service Industries Journal,
41(13–14), 853–859.
Foehr, J., & Germelmann, C. C. (2020). Alexa, can I trust you? Exploring
consumer paths to trust in smart voice‐interaction technologies.
Journal of the Association for Consumer Research,5(2), 181–205.
Fortes, N., & Rita, P. (2016). Privacy concerns and online purchasing
behaviour: Towards an integrated model. European Research on
Management and Business Economics,22(3), 167–176.
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models
with unobservable variables and measurement error. Journal of
Marketing Research,18(1), 39–50.
Fotiadis, A. K., & Stylos, N. (2017). The effects of online social networking
on retail consumer dynamics in the attractions industry: The case of
‘E‐da’theme park, Taiwan. Technological Forecasting and Social
Change,124, 283–294.
Gardner, B. (2015). Defining and measuring the habit impulse: Response
to commentaries. Health Psychology Review, 9(3), 318–322.
Gefen, D., Karahanna, E., & Straub, D. W. (2003). Trust and TAM in online
shopping: An integrated model. MIS Quarterly,27,51–90.
Gupta, A., & Arora, N. (2017). Understanding determinants and barriers of
mobile shopping adoption using behavioral reasoning theory. Journal
of Retailing and Consumer Services,36,1–7.
Hair, J. F., Hult, T. G. M., Ringle, C. M., & Sarstedt, M. (2022). A primer on
partial least squares structural equation modeling (PLS‐SEM) (3rd ed.).
SAGE Publications.
Hair, J. F., Sarstedt, M., & Ringle, C. M. (2019). Rethinking some of the
rethinking of partial least squares. European Journal of Marketing,
53(4), 566–584.
Han, S., & Yang, H. (2018). Understanding adoption of intelligent personal
assistants: A parasocial relationship perspective. Industrial Management
&DataSystems,118(3), 618–636.
Henseler, J., Ringle, C. M., & Sarstedt, M. (2015). A new criterion for
assessing discriminant validity in variance‐based structural equation
modeling. Journal of the Academy of Marketing Science,43(1),
115–135.
Hernandez‐Ortega, B., & Ferreira, I. (2021). How smart experiences build
service loyalty: The importance of consumer love for smart voice
assistants. Psychology & Marketing,38(7), 1122–1139.
Hoffmann, C. P., Lutz, C., & Ranzini, G. (2016). Privacy cynicism: A new
approach to the privacy paradox. Cyberpsychology: Journal of
Psychosocial Research on Cyberspace,10(4), 1–18.
Hong, I. B., & Cha, H. S. (2013). The mediating role of consumer trust in an
online merchant in predicting purchase intention. International
Journal of Information Management,33(6), 927–939.
Hoy, M. B. (2018). Alexa, Siri, Cortana, and more: An introduction to
voice‐assistants. Medical Reference Services Quarterly,37(1), 81–88.
Hsiao, C. H., Chang, J. J., & Tang, K. Y. (2016). Exploring the influential
factors in continuance usage of mobile social apps: Satisfaction,
habit, and customer value perspectives. Telematics and Informatics,
33(2), 342–355.
Hu, T., Stafford, T. F., Kettinger, W. J., Zhang, X. P., & Dai, H. (2018).
Formation and effect of social media usage habit. Journal of
Computer Information Systems,58(4), 334–343.
Iranmanesh, M., Min, C. L., Senali, M. G., Nikbin, D., & Foroughi, B. (2022).
Determinants of switching intention from web‐based stores to retail
apps: Habit as a moderator. Journal of Retailing and Consumer
Services,66, 102957. https://doi.org/10.1016/j.jretconser.2022.
102957
Jain, S., Basu, S., Dwivedi, Y. K., & Kaur, S. (2022). Interactive voice‐
assistants–Does brand credibility assuage privacy risks? Journal of
Business Research,139, 701–717.
Jones, V. K. (2018). Voice‐activated change: Marketing in the age of
artificial intelligence and virtual assistants. Journal of Brand Strategy,
7(3), 233–245.
Kang, J. W., & Namkung, Y. (2018). The effect of corporate social
responsibility on brand equity and the moderating role of ethical
consumerism: The case of Starbucks. Journal of Hospitality & Tourism
Research,42(7), 1130–1151.
Kang, W., & Shao, B. (2023). The impact of voice‐assistants' intelligent
attributes on consumer well‐being: Findings from PLS‐SEM
and fsQCA. Journal of Retailing and Consumer Services,70,
103130.
Kautish, P., Purohit, S., Filieri, R., & Dwivedi, Y. K. (2023). Examining the
role of consumer motivations to use voice‐assistants for fashion
shopping: The mediating role of awe experience and eWOM.
Technological Forecasting and Social Change,190, 122407.
Kim, D., Park, K., Park, Y., & Ahn, J. H. (2019). Willingness to provide
personal information: Perspective of privacy calculus in IoT services.
Computers in Human Behavior,92, 273–281.
Kim, M. S., & Kim, S. (2018). Factors influencing willingness to provide
personal information for personalized recommendations. Computers
in Human Behavior,88, 143–152.
Kock, N. (2015). Common method bias in PLS‐SEM: A full collinearity
assessment approach. International Journal of e‐Collaboration,11(4),
1–10.
Kowalczuk, P. (2018). Consumer acceptance of smart speakers: A mixed
methods approach. Journal of Research in Interactive Marketing,12(4),
418–431.
Lalicic, L., & Weismayer, C. (2021). Consumers' reasons and perceived
value co‐creation of using artificial intelligence‐enabled travel
service agents. Journal of Business Research,129, 891–901.
Laricchia, F. (2022). Number of voice assistants in use worldwide from
2019 to 2024 (in billions). Statista. Available from: https://www.
statista.com/statistics/973815/worldwide-digital-voice-assistant-
in-use/
Lee, J., Kim, J., & Choi, J. Y. (2019). The adoption of virtual reality devices:
The technology acceptance model integrating enjoyment, social
interaction, and strength of the social ties. Telematics and Informatics,
39,37–48.
Lee, M. C. (2009). Factors influencing the adoption of Internet banking: An
integration of TAM and TPB with perceived risk and perceived
benefit. Electronic Commerce Research and Applications,8(3),
130–141.
Lei, S. I., Shen, H., & Ye, S. (2021). A comparison between chatbot and
human service: Customer perception and reuse intention.
International Journal of Contemporary Hospitality Management,
33(11), 3977–3995.
Lim, W. M., Kumar, S., Verma, S., & Chaturvedi, R. (2022). Alexa, what do
we know about conversational commerce? Insights from a system-
atic literature review. Psychology & Marketing,39(6), 1129–1155.
Lim, W. M., Rasul, T., Kumar, S., & Ala, M. (2022). Past, present, and future
of customer engagement. Journal of Business Research,140,
439–458.
Ling, E. C., Tussyadiah, I., Tuomi, A., Stienmetz, J., & Ioannou, A. (2021).
Factors influencing users' adoption and use of conversational
agents: A systematic review. Psychology & Marketing,38(7),
1031–1051.
Lister, K., Coughlan, T., Iniesto, F., Freear, N., & Devine, P. (2020).
Accessible conversational user interfaces: Considerations for design.
In Proceedings of the 17th International Web for All Conference
(pp. 1–11).
Loureiro, S. M. C., Japutra, A., Molinillo, S., & Bilro, R. G. (2021). Stand by me:
Analyzing the tourist–intelligent voice‐assistant relationship quality.
International Journal of Contemporary Hospitality Management,33(11),
3840–3859.
Lu, B., Fan, W., & Zhou, M. (2016). Social presence, trust, and social
commerce purchase intention: An empirical research. Computers in
Human Behavior,56, 225–237.
Lutz, C., Hoffmann, C. P., & Ranzini, G. (2020). Data capitalism and the
user: An exploration of privacy cynicism in Germany. New Media &
Society,22(7), 1168–1187.
Lyu, T., Guo, Y., & Chen, H. (2023). Understanding people's intention to
use facial recognition services: The roles of network externality and
privacy cynicism. Information Technology & People.https://doi.org/
10.1108/itp-10-2021-0817
Malodia, S., Ferraris, A., Sakashita, M., Dhir, A., & Gavurova, B. (2023). Can
Alexa serve customers better? AI‐driven voice‐assistant service
interactions. Journal of Services Marketing,37(1), 25–39.
Malodia, S., Kaur, P., Ractham, P., Sakashita, M., & Dhir, A. (2022). Why do
people avoid and postpone the use of voice‐assistants for
transactional purposes? A perspective from decision avoidance
theory. Journal of Business Research,146, 605–618.
Mari, A., Mandelli, A., & Algesheimer, R. (2020). The evolution of marketing in
the context of voice commerce: A managerial perspective. In HCI in
Business, Government and Organizations: 7th International Confer-
ence, HCIBGO 2020, Held as Part of the 22nd HCI International
Conference, HCII 2020, Copenhagen, Denmark, July 19–24, 2020,
Proceedings 22 (pp. 405–425). Springer International Publishing.
Mariani, M. M., Perez‐Vega, R., & Wirtz, J. (2022). AI in marketing,
consumer research and psychology: A systematic literature review
and research agenda. Psychology & Marketing,39(4), 755–776.
Maroufkhani, P., Asadi, S., Ghobakhloo, M., Jannesari, M. T., &
Ismail, W. K. W. (2022). How do interactive voice assistants build
brands' loyalty? Technological Forecasting and Social Change,183,
121870.
McLean, G., & Osei‐Frimpong, K. (2019). Hey Alexa examines the
variables influencing the use of artificial intelligent in‐home voice‐
assistants. Computers in Human Behavior,99,28–37.
McLean, G., Osei‐Frimpong, K., & Barhorst, J. (2021). Alexa, do voice‐
assistants influence consumer brand engagement?–Examining the
role of AI‐powered voice‐assistants in influencing consumer brand
engagement. Journal of Business Research,124, 312–328.
Mehta, P., Jebarajakirthy, C., Maseeh, H. I., Anubha, A., Saha, R., &
Dhanda, K. (2022). Artificial intelligence in marketing: A meta‐
analytic review. Psychology & Marketing,39(11), 2013–2038.
Min, J., & Kim, B. (2015). How are people enticed to disclose personal
information despite privacy concerns in social network sites? The
calculus between benefit and cost. Journal of the Association for
Information Science and Technology, 66(4), 839–857.
Mishra, A., Shukla, A., & Sharma, S. K. (2022). Psychological determinants
of users' adoption and word‐of‐mouth recommendations of smart
voice‐assistants. International Journal of Information Management,67,
102413.
Moriuchi, E. (2019). Okay, Google!: An empirical study on voice‐assistants
on consumer engagement and loyalty. Psychology & Marketing,36(5),
489–501.
Moriuchi, E. (2023). “Alexa, lock my front door”: An empirical study on
factors affecting consumer's satisfaction with VCA‐controlled
security devices. Psychology & Marketing,40, 169–189.
Mostafa, R. B., & Kasamani, T. (2022). Antecedents and consequences of
chatbot initial trust. European Journal of Marketing,56(6),
1748–1771.
Moussawi, S., & Benbunan‐Fich, R. (2020). The effect of voice and humour
on users' perceptions of personal intelligent agents. Behaviour &
Information Technology,40(15), 1603–1626. https://doi.org/10.
1080/0144929x.2020.1772368
Ofori, K. S., Larbi‐Siaw, O., Fianu, E., Gladjah, R. E., & Boateng, E. O. Y.
(2016). Factors influencing the continuance use of mobile social
media: The effect of privacy concerns. Journal of Cyber Security and
Mobility,4(3), 105–124.
Oliveira, G. G., Lizarelli, F. L., Teixeira, J. G., & Mendes, G. H. S. (2023).
Curb your enthusiasm: Examining the customer experience with
Alexa and its marketing outcomes. Journal of Retailing and Consumer
Services,71, 103220.
Pal, D., Arpnikanondt, C., Funilkul, S., & Razzaque, M. A. (2021). Analyzing
the adoption and diffusion of voice‐enabled smart‐home systems:
empirical evidence from Thailand. Universal Access in the Information
Society,20, 797–815.
Pappas, I. O., & Woodside, A. G. (2021). Fuzzy‐set Qualitative Compara-
tive Analysis (fsQCA): Guidelines for research practice in Information
Systems and marketing. International Journal of Information
Management,58, 102310.
Paykani, T., Rafiey, H., & Sajjadi, H. (2018). A fuzzy set qualitative
comparative analysis of 131 countries: Which configuration of the
structural conditions can explain health better? International Journal
for Equity in Health, 17, 1–13.
Perez, S. (2019). 41% of voice-assistant users have concerns about
trust and privacy, report finds. TechCrunch. https://techcrunch.com/2019/
04/24/41-of-voice-assistant-users-have-concerns-about-trust-and-
privacy-report-finds/
Petronio, S., & Caughlin, J. P. (2006). Communication privacy management
theory: Understanding families. In D. O. Braithwaite & L. A. Baxter
(Eds.), Engaging theories in family communication: Multiple perspectives
(pp. 35–49). Sage Publications.
Pitardi, V., & Marriott, H. R. (2021). Alexa, she's not human but…Unveiling
the drivers of consumers' trust in voice‐based artificial intelligence.
Psychology & Marketing,38(4), 626–642.
Podsakoff, P. M., MacKenzie, S. B., Lee, J. Y., & Podsakoff, N. P. (2003).
Common method biases in behavioral research: A critical review of
the literature and recommended remedies. Journal of Applied
Psychology,88(5), 879–903.
Poushneh, A. (2021). Humanizing voice‐assistant: The impact of voice‐
assistant personality on consumers' attitudes and behaviors. Journal
of Retailing and Consumer Services,58, 102283.
Prentice, C., Loureiro, S. M. C., & Guerreiro, J. (2023). Engaging with
intelligent voice assistants for wellbeing and brand attachment.
Journal of Brand Management.https://doi.org/10.1057/s41262-
023-00321-0
Ragin, C. C. (2009). Redesigning social inquiry: Fuzzy sets and beyond.
University of Chicago Press.
Ragin, C. C., Strand, S. I., & Rubinson, C. (2008). User's guide to fuzzy‐set/
qualitative comparative analysis. University of Arizona,87,1–87.
Ratten, V. (2015). A cross‐cultural comparison of online behavioural
advertising knowledge, online privacy concerns and social network-
ing using the technology acceptance model and social cognitive
theory. Journal of Science & Technology Policy Management,6(1),
25–36.
Rese, A., Baier, D., Geyer‐Schulz, A., & Schreiber, S. (2017). How
augmented reality apps are accepted by consumers: A comparative
analysis using scales and opinions. Technological Forecasting and
Social Change,124, 306–319.
Bateman, R. (2020). Voice‐assistants and privacy issues.https://www.
termsfeed.com/blog/voice-assistants-privacy-issues
Rogers, R. W. (1975). A protection motivation theory of fear appeals and
attitude change. The Journal of Psychology,91(1), 93–114.
Ryan, J., & Casidy, R. (2018). The role of brand reputation in organic food
consumption: A behavioral reasoning perspective. Journal of