Fig 1 - uploaded by Elisa B. Schweiger
Conceptual model

Source publication
Article
Full-text available
Widespread, and growing, use of artificial intelligence (AI)–enabled voice assistants (VAs) creates a pressing need to understand what drives VA evaluations. This article proposes a new framework wherein perceptions of VA artificiality and VA intelligence are positioned as key drivers of VA evaluations. Building from work on signaling theory, AI, t...

Contexts in source publication

Context 1
... full model, including the predictions related to VA features, mediators, moderators, and VA evaluations, is shown in Fig. ...
Context 2
... this representativeness. Larger horizontal bars indicate that the term appears more frequently in reviews on that topic. For example, "connect" is highly representative of the connectivity topic, and we should expect to find that term fairly often in reviews related to the topic of connectivity. Returning to our overarching framework (Fig. 1), this topic mining analysis helped us focus our examination, because the first four topics reflect VA features that are likely to influence perceptions of VA artificiality and VA intelligence. The latter three topics relate to the VAs' hardware and set-up (i.e., connectivity, smart home, and speakers) and are less likely to influence ...
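The idea of a term being "representative" of a topic can be illustrated with a toy calculation. The sketch below uses made-up review snippets and a simple smoothed frequency-ratio score; it illustrates the concept only and is not the authors' actual topic-mining procedure:

```python
from collections import Counter

# Hypothetical review snippets, pre-grouped by topic (labels assumed for illustration)
reviews_by_topic = {
    "connectivity": ["easy to connect to wifi", "connect drops sometimes",
                     "bluetooth connect works well"],
    "speakers":     ["great sound from the speaker", "speaker volume is loud",
                     "sound quality impresses"],
}

def representative_terms(groups, top_n=3):
    """Rank terms by how much more frequent they are in one topic
    than in all other topics combined (with add-one smoothing)."""
    counts = {t: Counter(w for doc in docs for w in doc.split())
              for t, docs in groups.items()}
    totals = Counter()
    for c in counts.values():
        totals.update(c)
    result = {}
    for t, c in counts.items():
        # frequency inside the topic vs. frequency everywhere else
        score = {w: (c[w] + 1) / (totals[w] - c[w] + 1) for w in c}
        result[t] = sorted(score, key=score.get, reverse=True)[:top_n]
    return result

terms = representative_terms(reviews_by_topic)
```

A term like "connect" scores highly for the connectivity topic because it is frequent there and rare elsewhere, which is what the horizontal bars in the figure encode.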
Context 3
... test the proposed model ( Fig. 1) among respondents from the online Prolific platform. First, using a screening study, we identified participants likely to have an Amazon Echo (or related) device, by asking 3,000 respondents if they owned an Amazon Echo device, for how long, and if they would be willing to participate in a follow-up ...
Context 4
... used the statistical package SmartPLS 3.3 for PLS-SEM, employing 10,000 bootstrap resamples to obtain robust standard errors and t-statistics for the parameters in our model (Hair et al., 2017a, b). Measurement models do not apply to single-item constructs, so we excluded natural speech, task range, accuracy, age, length of ownership, and gender measures from our reliability and validity assessments (Hair et al., 2017a, b). For internal reliability, we considered composite reliability, which takes the outer loadings of the indicator variables into account, and Cronbach's alpha, which is a more conservative measure (Hair et al., 2017a, b). ...
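The bootstrapping procedure mentioned here is generic: resample cases with replacement, re-estimate the model each time, and use the spread of the estimates as a standard error. A minimal Python sketch on made-up data, with an OLS slope standing in for a PLS-SEM path coefficient (this is not SmartPLS, and the numbers are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data standing in for a single structural path (x -> y)
n = 200
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(scale=1.0, size=n)

def slope(x, y):
    """OLS slope of y on x (stand-in for a path coefficient)."""
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

# Bootstrap: resample cases with replacement, re-estimate each time
n_boot = 10_000
estimates = np.empty(n_boot)
for b in range(n_boot):
    idx = rng.integers(0, n, size=n)
    estimates[b] = slope(x[idx], y[idx])

se = estimates.std(ddof=1)      # bootstrap standard error
t_stat = slope(x, y) / se       # t-statistic for the original estimate
```

A large number of resamples such as 10,000 mainly reduces Monte Carlo noise in the standard errors; the same logic runs over the full PLS-SEM estimation inside SmartPLS.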
Context 5
... models do not apply to single-item constructs, so we excluded natural speech, task range, accuracy, age, length of ownership, and gender measures from our reliability and validity assessments (Hair et al., 2017a, b). For internal reliability, we considered composite reliability, which takes the outer loadings of the indicator variables into account, and Cronbach's alpha, which is a more conservative measure (Hair et al., 2017a, b). Both measures indicated good internal reliability, and the values for all the multi-item constructs exceeded 0.7 (Hair et al., 2017a, b; Hulland, 1999). ...
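Both indices mentioned above have closed-form expressions: Cronbach's alpha depends on the item variances and the total-score variance, while composite reliability is computed from the standardized outer loadings. A minimal numpy sketch on simulated (hypothetical) item data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_obs, k_items) data matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def composite_reliability(loadings):
    """Composite reliability from standardized outer loadings."""
    lam = np.asarray(loadings, dtype=float)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())

# Simulate one construct: three items driven by a single latent factor
rng = np.random.default_rng(1)
latent = rng.normal(size=500)
items = np.column_stack([0.8 * latent + rng.normal(scale=0.6, size=500)
                         for _ in range(3)])

alpha = cronbach_alpha(items)
cr = composite_reliability([0.8, 0.8, 0.8])
```

With three items all loading 0.8, both indices come out near 0.84, comfortably above the 0.7 threshold cited in the text.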
Context 6
... the Fornell-Larcker criterion arguably might not indicate discriminant validity accurately (Henseler et al., 2015), we also checked the heterotrait-monotrait (HTMT) ratio of correlations, that is, the ratio of between-trait to within-trait correlations, to identify true correlations among constructs. The HTMT values ranged between 0.006 and 0.756, below the conservative threshold of 0.85 (Hair et al., 2017a, b). Thus, the HTMT analysis corroborated the AVE findings; the data set has adequate discriminant validity. ...
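The HTMT ratio described here averages the correlations between items of different constructs and divides by the geometric mean of the average within-construct item correlations. A numpy sketch on simulated two-construct data (loadings and factor correlation are hypothetical):

```python
import numpy as np

def htmt(X1, X2):
    """Heterotrait-monotrait ratio for two item blocks of shape (n_obs, k)."""
    def mean_offdiag(X):
        R = np.corrcoef(X, rowvar=False)
        iu = np.triu_indices_from(R, k=1)
        return R[iu].mean()
    R_all = np.corrcoef(np.column_stack([X1, X2]), rowvar=False)
    k1 = X1.shape[1]
    between = R_all[:k1, k1:].mean()                       # heterotrait correlations
    mono = np.sqrt(mean_offdiag(X1) * mean_offdiag(X2))    # monotrait average
    return between / mono

# Simulate two correlated latent factors, three items each
rng = np.random.default_rng(2)
n = 1000
f1 = rng.normal(size=n)
f2 = 0.4 * f1 + np.sqrt(1 - 0.4 ** 2) * rng.normal(size=n)
X1 = np.column_stack([0.8 * f1 + 0.6 * rng.normal(size=n) for _ in range(3)])
X2 = np.column_stack([0.8 * f2 + 0.6 * rng.normal(size=n) for _ in range(3)])

value = htmt(X1, X2)
```

With a true factor correlation of 0.4, the ratio lands near 0.4, well under the 0.85 cutoff; as the factor correlation approaches 1, HTMT approaches 1 and discriminant validity fails.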
Context 7
... participants completed the (i) artificiality and intelligence items (as in Study 2), (ii) a two-item warmth scale and a two-item competence scale, and (iii) a three-item purchase intention scale pertaining to the Amazon Echo. Warmth and competence items were answered on a seven-point Likert scale, from 'not at all descriptive' to 'very descriptive' (Aaker et al., 2012). Regarding warmth, participants were asked to indicate the extent to which they found the terms "warm" and "friendly" described the VA. ...
Context 8
... warmth, participants were asked to indicate the extent to which they found the terms "warm" and "friendly" described the VA. Regarding competence, participants were asked to indicate the extent to which they found the terms "competent" and "capable" described the VA (Aaker et al., 2012). ...
Context 9
... develop a model for VA evaluations (Fig. 1), we build on extant theory pertaining to VAs, AI, technology adoption, and signaling. We conceptualize VA features as signals of VA artificiality or VA intelligence, which in turn affect VA evaluations, and these effects are moderated by various signal receiver characteristics. Study 1, based on text-mining of more than 150,000 ...

Citations

... However, as shown in Appendix B, only a few studies have tangentially examined the cognitive, psychological, and emotional factors affecting VA engagement and experiences. Existing studies have predominantly used deductive and experimental approaches to investigate the use of VAs, with few examining user reviews (Guha et al. 2023;Jiménez-Barreto et al. 2023), resulting in a lack of alternative and more holistic views on the phenomenon. This study addresses this gap by focusing on both cognitive and emotional factors that drive user engagement with VAs, using a text-analysis approach based on user reviews. ...
... Reviews, as a key source of consumer information, provide authentic, original evaluations of UX. Limited research has utilized user reviews to analyze user behavior and experience with VA (Guha et al. 2023;Jiménez-Barreto et al. 2023). Therefore, we employ a text analytics method, leveraging a large volume of user reviews from the Google VA platform, one of the most widely used VA platforms worldwide. ...
... et al. 2023) and text-mining analyses (Guha et al. 2023) on VA, none have examined the cognitive dimensions of VA usage. To the best of our knowledge, our study is one of the first to use a mixed-method approach to investigate users' emotions and psychological responses to VA usage. ...
Article
Full-text available
Although studies have explored user experience (UX) with artificial intelligence (AI)‐powered voice assistants (VAs), the cognitive, psychological, and emotional factors affecting user engagement and experience with VAs have yet to be investigated in depth. Drawing on cognitive absorption and broaden‐and‐build theories, this study investigates how VAs captivate users, enhance positive emotions, and influence the overall UX. To address these questions, we conduct three types of analyses—exploratory, confirmatory, and configurational—utilizing data from 125,600 user reviews on a popular VA platform (Google Assistant) from 2017 to 2024. First, we employ bidirectional encoder representations from transformers (BERT)‐based topic modeling to unearth relevant cognitive and psychological factors and use multiple regression analyses to test their impact on UX. Our findings both confirm and contradict previous affirmations about these factors' influence on user engagement and experience with VAs. Next, we apply fuzzy‐set qualitative comparative analysis (fsQCA) to explore counterfactual causal configurations that contribute to a higher UX. Contributing to existing theory and knowledge of the psychological antecedents of user experiences with AI, our study identifies several combinations of cognitive and emotional factors that can improve user interactions with VAs.
... Research has shown that tech-savvy users are more likely to effectively harness the creative and innovative potential of GAI tools, enabling value co-creation and enhancing user experience (Demir and Demir, 2023; Zhang et al., 2022). As a composite of experiences, skills and abilities, tech-savviness reflects an individual's propensity to adopt new technologies rapidly and evaluate them based on existing technical knowledge (Guha et al., 2022). Accordingly, it can be conceptualized as a boundary condition related to individual competence that influences GAI satisfaction. ...
... Despite experiencing dissatisfaction with the interaction process, tech-savvy users are more likely to derive greater benefits from GAI services, leading to higher satisfaction with the value co-creation outcome. Additionally, signals in the service process are less pronounced for users with higher levels of knowledge or expertise (Guha et al., 2022). In comparison to low-tech-savvy individuals, tech-savvy individuals demonstrate a reduced impact of lower process satisfaction on outcome satisfaction, because they are able to compensate for the lack of technological adaptation (e.g.
... The study adapted the scales for satisfaction with process and outcome from Ivanov and Cyr's (2014) research and adapted the item descriptions to reflect ChatGPT-augmented context. Additionally, three items from Guha et al. (2022) were used to measure users' tech-savviness. All the constructs and measures are presented in Appendix 1. ...
Article
Purpose Generative Artificial Intelligence (GAI) offers innovative services to users. For GAI companies, it is crucial to ensure user satisfaction in the face of fierce competition. Unlike traditional AI in automation, GAI services are augmentation and designed for value co-creation with users, which transform users into empowered stakeholders. This study investigates the interrelationship and boundaries of GAI satisfaction by distinguishing between satisfaction with process and outcome. It adopts an affordance actualization perspective, integrated with attachment theory, to examine how affordances influence GAI satisfaction through both affective and cognitive dimensions. Design/methodology/approach An online survey with ChatGPT-augmented contexts was conducted with 529 respondents, and the collected data are analyzed via partial least squares-based structural equation modeling. Findings The co-creation fosters a positive correlation between the process and the outcome, with individual competence (i.e. tech-savviness) as a boundary condition. Perceived creativity and enjoyment are identified as affective affordances and are predictors of GAI identity, while perceived credibility and serendipity are identified as cognitive affordances and are predictors of GAI dependency. GAI identity and dependency further enhance users’ satisfaction with process and outcome. Originality/value This study extends knowledge on GAI satisfaction by distinguishing satisfaction with process and outcome and captures the element of user heterogeneity by individual competence. This study also develops an affective-cognitive framework of GAI affordances and conceptualizes the actualization process, which contributes to empirical experience and practical applications.
... Furthermore, having the option to receive and choose voice output for the chatbot, gave participants the agency to explore modalities beyond text messages. While prior work showed that voice interaction with chatbots often failed to engage people due to perceived artificiality [35], the 70 voice options provided in our study included diverse characters and accents that are more distinctive and interesting. As TikTok creators enjoyed using Volcano TTS for their own content [34], our participants also enjoyed exploring the voice types that gave them comfort and supportiveness or looking for voices that could best match their constructed personas. ...
Preprint
Personalized support is essential to fulfill individuals' emotional needs and sustain their mental well-being. Large language models (LLMs), with great customization flexibility, hold promises to enable individuals to create their own emotional support agents. In this work, we developed ChatLab, where users could construct LLM-powered chatbots with additional interaction features including voices and avatars. Using a Research through Design approach, we conducted a week-long field study followed by interviews and design activities (N = 22), which uncovered how participants created diverse chatbot personas for emotional reliance, confronting stressors, connecting to intellectual discourse, reflecting mirrored selves, etc. We found that participants actively enriched the personas they constructed, shaping the dynamics between themselves and the chatbot to foster open and honest conversations. They also suggested other customizable features, such as integrating online activities and adjustable memory settings. Based on these findings, we discuss opportunities for enhancing personalized emotional support through emerging AI technologies.
... Amazon Alexa, Google Assistant), or vehicles (e.g. Volkswagen's IDA; Coker and Thakur, 2023;Guha et al., 2023). This widespread integration allows consumers to utilize VAs for simple tasks, such as turning on lights, and more complex tasks, such as information search or shopping (Melumad, 2023). ...
Article
Full-text available
Purpose: Voice assistants (VAs) could be a game changer in conversational commerce. However, consumers often reject purchase recommendations by VAs due to disfluency and low credibility of VAs’ messages. To overcome these barriers and increase recommendation adoption, this study examines which language style leads to voice commerce success and how consumer-related boundary conditions shape this effect. Design/methodology/approach: In three experiments, the authors collected data from consumers listening to VA recommendations in different styles and commerce settings. The first study examines the effect of figurative (vs literal) language on recommendation adoption, while the second study focuses on the moderating impact of consumers’ consumption goals (hedonic vs utilitarian). The third study delves into the underlying mechanisms leading to enhanced visual fluency in voice interactions. Findings: The experiments demonstrate that figurative language increases visual fluency, with no differences across consumption goals (hedonic vs utilitarian). However, figurative language enhances credibility in a hedonic context while reducing it in a utilitarian one. Visual fluency and credibility mediate the effect of figurative language on recommendation adoption. The results further reveal the crucial role of arousal for triggering visual fluency in response to figurative language. Originality/value: This research examines how and when using language styles from human-to-human communication context can enhance service interactions in the voice-driven marketplace. It shows that a VA’s figurative language can lead to a trade-off between visual fluency and credibility, but a match with the right consumer goal eliminates it. Therefore, this research provides new insights on when embodied voice systems either limit or boost voice commerce success.
... We collected data from chatbot users via the Prolific platform to test the hypothesized model. Following Guha et al. (2023), a screening study was initially conducted to identify participants who had previously interacted with chatbots, by asking 1,000 respondents screening questions (e.g. whether they had used a chatbot before, how often, and whether they would be willing to take part in a follow-up study). ...
... This study employed SmartPLS4 and the PLS-SEM approach to analyze the collected data (Ringle et al., 2022; Sarstedt et al., 2017). PLS-SEM is commonly preferred for testing complex models (Sarstedt et al., 2017) and achieves high levels of statistical power for hypothesis testing (Guha et al., 2023). Notably, earlier technology studies, including research on chatbots, have employed this approach (e.g.
Article
Full-text available
Purpose Drawing on the technology affordance and affinity theories, this study proposes a framework explaining the antecedents and consequences of customers’ smart experiences (CSEs) in the artificial intelligence (AI) chatbot context. Design/methodology/approach The quantitative approach employing an online survey was adopted to obtain data from chatbot users ( N = 761) and analyzed using structural equation modeling. Findings Results from a survey study show that chatbot affordances, including interactivity (two-way communication, active control and synchronicity), selectivity (customization and localization), information (argument quality and source credibility), association (connectivity and sense of safety) and navigation positively affect CSEs (hedonic and cognitive), leading to customer chatbot stickiness through affinity. Originality/value Our study provides evidence that supports and extends the affordances and affinity lens by highlighting the roles of specific chatbot affordances that contribute to a positive-smart experience and subsequently enhances customer chatbot stickiness through affinity.
... In addressing the evolving landscape of AIVA, previous studies have typically focused on general variables like anthropomorphism [19,52,69], human emotions [15,46,61,81], trust [23,77,102], visual elements [58,78,95], technology-related factors [3,17,63], and benefits [12], García de Blanes Sebastián et al., 2022, [33,60], or have been limited to specific industries such as education [20,87], hospitality [13,27,51], healthcare [21,29,99] and finance [8,64,84] in explaining the behavior of AI assistant users. Despite numerous studies, research focusing on the voice of AIVA or the user context remains scarce. ...
Article
Full-text available
The fourth industrial revolution has accelerated the development of artificial intelligence (AI). AI has permeated our lives by being installed as an assistant on smartphones. This study tries to pinpoint the critical factors influencing continued intention to use AI voice assistants. It provides a theoretical framework in which explanatory factors include attitude, interaction, novelty value, voice attractiveness, and discomfort. Data were gathered from 256 users of AI voice assistants. The partial least squares structural equation modeling (PLS-SEM) was used to empirically analyze the data. The findings reveal that attitude impacts continuance intention. Interaction significantly determines both continuance intention and attitude. Novelty value and voice attractiveness are the key factors in forming attitude. Discomfort was found to hurt continuance intention. The findings of this study might offer useful guidelines for future study and application of AI voice assistants.
... Prolific was selected because it is generally considered a credible and valid source of data; thus, it has been widely used in many recent hospitality studies (e.g., T. Zhang et al., 2024) and provides more screening choices, allowing us to recruit higher-quality participants (Tandon et al., 2023). A recent study highlights that conducting a screening study before the main study helps to recruit targeted participants and collect high-quality data (Guha et al., 2023). As a result, a screening study was performed to identify participants with relevant experience (e.g., who had visited a smart hotel before). ...
... The PLS-SEM approach using SmartPLS (version #4) was applied for data analysis, a frequently used approach in the social sciences, particularly when social scientists have complex models to determine the associations among latent variables/constructs with a limited sample size (Sarstedt et al., 2022). Further, it generally achieves high statistical power to test hypotheses (Guha et al., 2023), while this two-stage technique allows for testing the measurement model for reliability (the first step) and the structural model for the associations among the constructs (the second step) (Hair et al., 2019). ...
Article
Full-text available
Recent advancements in artificial intelligence and smart technologies have fundamentally transformed the operational dynamics within the hospitality industry. Specifically, smart hotels have employed the latest technologies in service delivery to improve tourists' intentions to revisit. Drawing on the cognitive emotion theory (CET), this study investigates the impact of smart service interactional experience (SSIE), a recently introduced construct, on tourists' intentions to revisit smart hotels, with the mediating role of emotions and the moderating role of technophilia. Analysis of the data collected (N = 312) indicates that SSIE influences tourists' emotions, both positive and negative, and their revisit intentions of smart hotels. These emotions further significantly impact revisit intention and play mediating roles in linking SSIE to revisit intention. The findings also reveal the moderating effect of technophilia in strengthening the association between positive emotion and revisit intention. Based on the study findings, we discuss practical implications for concerned stakeholders.
... Despite their high relevance, research on digital voice assistants for shopping purposes is still scarce. Nonetheless, the use of voice assistants is growing exponentially, greatly facilitated by their installations in various digital devices (Guha et al. 2023). Voice shopping is rooted in the spoken language interaction between consumers and VA (Hu et al. 2023) capable of processing complex service requests (Malodia et al. 2023). ...
Article
Full-text available
Voice is becoming a frontline interface across the customer journey. However, the role of digital assistants, such as chatbots and voice assistants, in shopping remains underexplored and a comprehensive understanding of the value that these assistants provide is still missing. This study addresses this gap by applying consumption value theory to identify the dimensions influencing the use of such digital assistants for shopping. By integrating this theory into the technology acceptance model, we develop a coherent framework to analyze the values driving digital assistant adoption. The research model highlights the importance of epistemic, conditional, emotional, and functional values in determining the perceived usefulness of digital assistants for shopping, while social value and ease of use are less impactful. Trust positively influences both attitude and behavioral intention, while perceived risk negatively affects attitude but not intention; digital assistants in conversational commerce share similarities with smartphones in mobile shopping. When comparing chatbots and smartphones, trust has a stronger impact on consumer attitudes in mobile shopping. In contrast, perceived usefulness plays a more prominent role in shaping attitudes toward the use of voice assistants rather than chatbots. The empirical findings also highlight significant differences across demographic groups.
... First, we contribute to signaling theory (Connelly et al. 2011; Griskevicius, Tybur, and Van den Bergh 2010). Research to date has leveraged signaling theory to explain customer acceptance of AI-enabled voice assistants (VAs), theorizing about VA features as signals (Guha et al. 2023), or applied signaling theory to examine whether human-AI teaming impacts customer acceptance of chatbots (Li et al. 2024). In line with these works, self-promotional signals are used to establish a particular reputation with a target audience. ...
... Noteworthy is also that research to date on self-promotional signals has largely focused on a human perspective (e.g., Scopelliti, Loewenstein, and Vosgerau 2015; Sezer 2022) but has not included AI agents as a basis of self-promotional activities. Overall, we extend work on signaling theory applied to other technology realms (e.g., Guha et al. 2023; Guo et al. 2020; Li et al. 2024) by integrating self-promotional signals into the discourse and demonstrating that customer perceptions of those signals can differ depending on whether they are associated with AI versus human-based service provisions. ...
Article
Full-text available
As companies actively invest in self‐promotion of Artificial Intelligence (AI) empowered services to sustain their competitive advantage, there is a growing potential for such promotional activities to backfire. Bridging signaling theory with the resource‐based view, this research reveals that companies’ self‐promotion of AI resources can reduce customers’ willingness to engage with AI‐based (vs. human‐based) services. Four studies, including text mining and experiments, demonstrate that companies’ self‐promotion of AI‐based resources has a detrimental effect on willingness to engage, and concurrently perceived as exaggeration. In contrast, companies’ self‐promotion about human‐related resources yields beneficial outcomes, since such promotional signals contribute to the enhancement of human capital. The findings suggest that self‐discrepancy and trust are the key underlying factors driving the effects as customers may experience a discrepancy between their expectations of human‐like service interactions and actual AI capabilities. Additionally, findings reveal the moderating effect of honest (vs. self‐promotional) framing on the relationship between service type (AI vs. human) and willingness to engage. Customer perceptions of AI appear less influenced by presentation style compared to perceptions of human resources. This research provides valuable insights into how customers respond to companies’ self‐promotion of AI resources and emphasizes the need for promotional alignment with customers’ expectations about AI.
... The key role of these systems in enhancing human efficiency in ever more complex tasks is reflected not only in their numerous offerings on the market but also in a growing body of research. For example, Malodia et al. (2023) investigate the implications for customers' trust, Guha et al. (2023) evaluate voice assistants (VAs), Xiong et al. (2023) explore the acceptance of AI-based assistants, and Perry et al. (2023) investigate the impact of AI code assistants on software security.