Article

An Initial Model of Trust in Chatbots for Customer Service—Findings from a Questionnaire Study


Abstract

Chatbots are predicted to play a key role in customer service. Users’ trust in such chatbots is critical for their uptake. However, there is a lack of knowledge concerning users’ trust in chatbots. To bridge this knowledge gap, we present a questionnaire study (N = 154) that investigated factors of relevance for trust in customer service chatbots. The study included two parts: an explanatory investigation of the relative importance of factors known to predict trust from the general literature on interactive systems, and an exploratory identification of other factors of particular relevance for trust in chatbots. The participants were recruited as part of their dialogue with one of four chatbots for customer service. Based on the findings, we propose an initial model of trust in chatbots for customer service, including chatbot-related factors (perceived expertise and responsiveness), environment-related factors (risk and brand perceptions) and user-related factors (propensity to trust technology).

RESEARCH HIGHLIGHTS
• We extend the current knowledge base on natural language interfaces by investigating factors affecting users’ trust in chatbots for customer service.
• Chatbot-related factors, specifically perceived expertise and responsiveness, are found particularly important to users’ trust in such chatbots, but so are environment-related factors such as brand perception and user-related factors such as propensity to trust technology.
• On the basis of the findings, we propose an initial model of users’ trust in chatbots for customer service.


... Additional factors have been identified as particularly important for trust in LLMs. These include transparency [29,54], access to a human operator [44], and perceived privacy and robust data security [29,48]. These insights provide a robust basis for examining how users develop trust when choosing between organizations' customized LLMaaS and commercial LLMs. ...
... Customizations of LLMaaS in corporate design could serve as a salient trust cue for users, increasing the perceived trustworthiness of the system in two dimensions: (a) purpose and (b) process. The perceived purpose of the system may appear more trustworthy when the customizations incorporate university branding, as the university is likely to be seen as a benevolent provider with minimal financial or marketing-driven motives [44]. This perception may be further enhanced due to a sense of familiarity evoked by the university's branding [3]. ...
... Prior research on AI implementation [67] and attitudes towards LLMs [7] suggests that organizational trust can significantly shape users' trust in the system. In line with this, Nordheim et al. [44] have demonstrated that customers' trust in a company's service chatbot is influenced by their trust in the company itself. We therefore assume: H2: The level of organizational trust moderates the effect of system type (customized vs. commercial) on user trust, such that the effect is stronger when organizational trust is higher and weaker when it is lower. ...
Preprint
Full-text available
As the use of Large Language Models (LLMs) by students, lecturers and researchers becomes more prevalent, universities - like other organizations - are pressed to develop coherent AI strategies. LLMs as-a-Service (LLMaaS) offer accessible pre-trained models, customizable to specific (business) needs. While most studies prioritize data, model, or infrastructure adaptations (e.g., model fine-tuning), we focus on user-salient customizations, like interface changes and corporate branding, which we argue influence users' trust and usage patterns. This study serves as a functional prequel to a large-scale field study in which we examine how students and employees at a German university perceive and use their institution's customized LLMaaS compared to ChatGPT. The goals of this prequel are to stimulate discussions on psychological effects of LLMaaS customizations and refine our research approach through feedback. Our forthcoming findings will deepen the understanding of trust dynamics in LLMs, providing practical guidance for organizations considering LLMaaS deployment.
... Drawing from the research of Nordheim et al. [9] on trust in customer service chatbots, we can identify key challenges and considerations for chatbot implementation in benefits enrollment. ...

... Nordheim et al. [9] highlight that users' trust in chatbots is influenced by their perception of the chatbot's ability to handle sensitive information securely. This underscores the importance of implementing and communicating strong privacy and security measures in benefits enrollment chatbots. ...

... • Maintenance and updates: Regular updates to both chatbots and existing systems must be carefully managed to avoid disruptions in service. While not directly addressed in Nordheim et al.'s study, these integration challenges can impact the chatbot's ability to provide accurate and timely information, which their research identifies as a key factor in building user trust [9]. ...
Article
Full-text available
This article examines the implementation of artificial intelligence-powered chatbots in public sector benefits enrollment processes, focusing on their potential to streamline operations, enhance user experience, and reduce administrative burdens. Through a comprehensive analysis of case studies in healthcare and social security benefits programs, we demonstrate that chatbots can significantly improve efficiency, accuracy, and accessibility in enrollment procedures. Our findings indicate a 30-40% reduction in processing times and a marked increase in application completion rates. However, challenges related to data privacy, system integration, and user acceptance persist. This article contributes to the growing body of literature on digital transformation in public administration by providing empirical evidence of chatbots' effectiveness in benefits enrollment. We conclude that while chatbots offer promising solutions to longstanding issues in public sector service delivery, their successful implementation requires careful consideration of technical, ethical, and user-centric factors. Our study has important implications for policymakers and public administrators seeking to leverage AI technologies to enhance public service efficiency and accessibility.
... Participants completed survey questions about their experience completing the main task. For each AI tutor, there were nine questions measuring perceived intelligence [67,68], perceived enjoyment [47,68], perceived usefulness [47,68], perceived trust [25,71], perceived sense of connection [43,84,86,87], and perceived human-likeness [43,84,86,87]. All questions were answered in a 7-point Likert scale. ...
... • I trust the responses provided by [AI TUTOR NAME]. [25,71]
• [AI TUTOR NAME]'s behavior and response can meet my expectations. [25,71]
• I enjoy interacting with [AI TUTOR NAME]. [47,68]
• I feel a strong sense of connection with [AI TUTOR NAME]. ...
Preprint
Intelligent tutoring systems (ITS) using artificial intelligence (AI) technology have shown promise in supporting learners with diverse abilities; however, they often fail to meet the specific communication needs and cultural nuances needed by d/Deaf and Hard-of-Hearing (DHH) learners. As large language models (LLMs) provide new opportunities to incorporate personas to AI-based tutors and support dynamic interactive dialogue, this paper explores how DHH learners perceive LLM-powered ITS with different personas and identified design suggestions for improving the interaction. We developed an interface that allows DHH learners to interact with ChatGPT and three LLM-powered AI tutors with different experiences in DHH education while the learners watch an educational video. A user study with 16 DHH participants showed that they perceived conversations with the AI tutors who had DHH education experiences to be more human-like and trustworthy due to the tutors' cultural knowledge of DHH communities. Participants also suggested providing more transparency regarding the tutors' background information to clarify each AI tutor's position within the DHH community. We discuss design implications for more inclusive LLM-based systems, such as supports for the multimodality of sign language.
... features, especially their capacity to converse in natural language, may make trust even more essential (Nordheim et al., 2019). ...
... Although AI and automation have been around for a while, they are currently becoming more integrated into our daily lives (Abdel Wahab, 2023). Chatbots can act as a first line of assistance in customer service by providing an easily accessible and low-threshold source of help and information for frequently asked questions and support tasks (Nordheim et al., 2019). If customers have a positive experience using chatbots, they will be satisfied with the companies that provide them. ...
... The concept of trust means "the extent to which a user is confident in, and willing to act on the basis of, the recommendations, actions, and decisions of an artificially intelligent decision support." According to interpersonal connection theories, trust acts as a social glue in relationships, groups, and societies (Nordheim et al., 2019). Before adopting and using technology, people must trust it. ...
... This frustration arises from the perception that the technology is rigid with generic responses and limited contextual adaptability [4][5]. Such limitations may reduce the quality of the end-user experience [4] and erode trust [6][7], ultimately hampering adoption rates [8] and the potential for value creation within the industry. ...
... Since potential users, in part based on their previous experiences with chatbots for customer service, tend to have modest expectations, it is crucial to adequately manage these expectations and ensure quality in the use cases where LLMs are indeed implemented [cf. 6,27], with providers and deployers needing to show clear benefits to encourage use and build trust. • User uptake of advanced features will likely be gradual. ...
... Trust is a multidimensional construct that includes five factors: truthfulness, credibility, honesty, sincerity, and reliability (Toader et al. 2019). Furthermore, numerous scholars have studied trust in social media (Tang and Liu 2015), AI (Glikson and Woolley 2020), robots (Hancock et al. 2011), and chatbots (Nordheim et al. 2019). ...
... Consistent with the literature (e.g., Ghandeharioun et al. 2019; Mostafa and Kasamani 2022; Nordheim et al. 2019; Toader et al. 2019), a major finding of this study is that the functional and emotional aspects of the WHO's chatbot that appeal to the public (i.e., its ease of use, usefulness, emotional connection, and social presence) facilitate their trust in the chatbot. Such trust in turn encourages sharing and advocacy of the WHO's chatbot on social media as well as donation to the WHO's initiatives. ...
Article
Full-text available
With the growing utilization of artificial intelligence-powered chatbot services in the realm of nonprofit communication and the increasing significance of public engagement on social media platforms, researchers are facing a pivotal question: How can nonprofit organizations effectively harness chatbot technologies to influence the public's social media engagement and foster donation intention? In this study, we explore this question by integrating theoretical insights from social exchange theory, the service robot acceptance model, and pertinent prior literature. A survey involving 591 chatbot users located in the United States was conducted to explore the interplay between the functional and emotional values attributed to chatbots and their impacts on users' trust in the World Health Organization's chatbot services, subsequently influencing social media engagement and donation intention. The findings of this study have valuable theoretical and practical implications for nonprofit organizations seeking to optimize their use of chatbot technologies, with the goals of enhancing user engagement and encouraging donation behavior.
... Although AI chatbots have been promoted as engines for enhancing customer experience, there are mixed views on AI chatbots, and there is still limited research in this domain (Jang et al., 2021). It is widely acknowledged that significant attention has been focused on the adoption of AI chatbots, while little has been documented on how users perceive the information provided by them (Nordheim et al., 2019; Svikhnushina & Pu, 2022). The lack of attention to, and understanding of, the perceived usefulness of or trust in AI chatbot recommendations may hinder current efforts to transform service delivery through AI chatbot integration (Nordheim et al., 2019; Wube et al., 2022). ...
... The nascent literature on AI chatbots has investigated the drivers of continuance intention from different perspectives. ...
Article
Full-text available
While there has been a significant increase in the use of artificial intelligence (AI) in marketing, limited research has focused on the user experience of AI-powered chatbots and their implications in developing countries. Therefore, this study developed a research model to test the interrelationship between gratification, intention to accept recommendations from AI chatbots, and continuance intention using the Uses and Gratification (U&G) theory. This study gathered data from 410 customers of commercial banks in Tanzania, employing a quantitative survey approach that was analyzed using structural equation modeling. The results indicate that the gratification dimensions significantly drive continuance intention through the intention to accept recommendations from AI chatbots. The study recommends that marketers ensure that AI chatbots, serving as service agents, offer enjoyment and emotional support to customers to enhance continuance intention. The proposed research model developed and tested in this study is among the very few studies that examine AI chatbots’ continuance intentions in the context of developing countries. This study introduces the intention to accept AI chatbots’ recommendations as a mechanism that can promote continuance intention when fueled by gratification.
... Silva et al. [18] identified that chatbot trust has a significant impact on perceived usefulness, satisfaction, and attitude. To this, Nordheim et al. [61] added that chatbot, environment, and user-related factors are required to develop the framework for chatbot trust. A study by Hildebrand & Bergner [62] confirmed that conversational chatbots result in greater trust levels as compared to non-conversational chatbots. ...
... Nordheim et al. [61]: primary data were collected from 154 Norwegian chatbot users using survey methodology. ...
Article
The Fintech industry, particularly banks, has witnessed a profound transformation with the integration of Artificial Intelligence chatbots, redefining customer experience and engagement. As Fintech firms increasingly integrate AI chatbots into their platforms, understanding customer perceptions becomes paramount for strategic decision-making and sustained success. To unravel the complexities of this convergence, a holistic examination is needed, encompassing not only the technological aspects but also the strategic dimensions that underpin competitive advantage. In this context, the role of intellectual property, particularly patents, emerges as a critical factor shaping the innovation landscape. This research aims to comprehensively investigate customers' perceptions towards AI chatbots in the Fintech industry, with a specific focus on technological convergence. The study seeks to analyze the impact of cutting-edge AI chatbot technologies, including those protected by patents, on user attitudes and overall customer experience within the dynamic fintech landscape. This study provides a comprehensive review of 40 empirical studies on AI chatbots in the fintech industry, particularly the banking sector, featuring patented innovations using the PRISMA methodology. Study outcomes illustrate emerging themes related to consumer behavior and response to financial chatbots in terms of acceptance and adoption intention. Additionally, four key factors that influence how people perceive, anticipate, and engage with fintech chatbots, namely satisfaction, trust, anthropomorphism, and privacy, are explored. In conclusion, the finance industry's effective integration and broad use of AI chatbots is dependent on the convergence of four factors: satisfaction, privacy, trust, and anthropomorphism. The current study offers a strong basis for analysing and resolving the obstacles to AI chatbot acceptance and deployment in the financial sector by addressing all these elements extensively.
This exploration of technological convergence in fintech industry by analyzing customers' behavior and response to financial chatbots not only contributes to a comprehensive understanding of its intricacies but also serves as a foundation for development and deployment of user-centric fintech chatbots.
... The effectiveness of chatbot content holds significant importance in the banking system as it directly influences user trust and satisfaction. Research indicates that user trust in chatbots is a key factor in their acceptance (Nordheim et al., 2019). Additionally, the customer experience with banking chatbots has a substantial impact on brand loyalty and is influenced by perceived risk (Trivedi, 2019). ...
... The result of this study is supported by previous studies; in particular, Zhu et al. (2021) explored the determinants of user experience and satisfaction with mental health chatbots, offering a new perspective to understand the influence of chatbot visibility on user attitudes. Additionally, Nordheim et al. (2019) emphasized the critical nature of users' trust in chatbots for their uptake, highlighting the importance of chatbot visibility in shaping user attitudes. ...
Conference Paper
Full-text available
This study investigates constructs influencing the intention to use chatbots in the banking sector, considering their role in enhancing customer connectivity and streamlining transactions. Chatbots offer immediate assistance, revolutionizing customer service and operational efficiency. With the emergence of chatbots recently, the banking industry brings the advantages of chatbots to accelerate the speed as well as convenience in supporting user demand. Data from 463 Vietnamese users were collected via questionnaires and analyzed using Partial Least Squares Structural Equation Modeling (PLS-SEM). The findings underscore the importance of convenience, visibility, association, and information content quality in shaping user attitudes toward banking chatbots. However, privacy concerns, stemming from potential misuse by third parties or misleading practices, act as barriers to user adoption. In conclusion, the study recommends that banks and government entities prioritize improving the perceived usefulness of chatbots and ensuring robust security measures. This approach will foster user trust and encourage wider acceptance of chatbot services in the banking industry.
... While the existing literature on GenAI has extensively addressed technological and ethical implications, along with learning elements of GenAI (Chu 2023), there has been inadequate research on different facets of user trust in this technology, as well as the antecedents and consequences of trust. Indeed, the lack of trust in conversational AI among users has been well-documented, and as a result, the question of establishing trust in AI agents has emerged as a pivotal research focus in this domain (Nordheim et al. 2019; Hu et al. 2021b; Cheng et al. 2022). To address these research gaps in the AI literature and the specific domain of GenAI, we focused on examining human-like trust in GenAI chatbots from a social perspective (i.e., human-like trust). ...
Article
Full-text available
Amid the pervasive integration of AI technologies across societal and industrial domains, understanding users’ trust in these systems becomes increasingly crucial. This study addresses the growing need to understand users’ trust in Generative Artificial Intelligence (GenAI) and explores the societal implications of this type of trust. Based on the socio-technical systems theory, this work employs the FAT (Fairness, Accountability, Transparency) framework and humanness factors of AI, anthropomorphism, social presence, and emotions, as antecedents of users’ human-like trust, which is proposed to influence users’ attitudes, perceived performance, and behavioral intentions. Structural equation modeling analysis (N = 244) reveals that fairness significantly enhances trust, while accountability and transparency do not. Social presence and emotions positively impact trust, whereas anthropomorphism shows no significant effect. Furthermore, trust shapes users’ attitudes, perceived performance, and behavioral intentions toward GenAI systems. This study contributes to the AI adoption and user trust literature by illuminating the main antecedents of human-like trust and showing its impact on user acceptance from a social-technical perspective. Beyond the academic contribution, this research highlights the broader societal relevance of user trust in GenAI, particularly regarding public concerns over black box issues and humanness features of GenAI systems.
... Ability pertains to technical capabilities safeguarding user privacy, while benevolence prioritizes users' interests, and integrity ensures ethical behavior. In this study, trust focuses on secure payments and transactions, crucial in online retail given trust's impact on new technology perceptions, particularly with sensitive information and payment processes [2, 10, 49, 55-57]. Trust reduces hesitation and vulnerability during transactions, shaping positive attitudes and purchasing behavior. ...
... In order to achieve this, participants were recruited through their dialogue with one of four customer support chatbots. In light of the study's findings, we propose a preliminary research model of trust in service chatbots that takes into account the propensity to trust technology, perceived risk and brand, and perceived chatbot attributes (expertise and responsiveness) [21]. ...
Article
Full-text available
Chatterbots, also known as chatbots, have become essential for improving human-computer interaction in a number of fields, including e-commerce, healthcare, education, and customer support. From rule-based systems like ELIZA to contemporary AI-driven solutions employing modern machine learning (ML) techniques, this review paper examines the development of chatbots. It highlights how ML technologies, such as decision trees (DT), support vector machines (SVM), linear regression, and natural language processing (NLP), can be used to build chatbots that are more context-aware, responsive, and adaptive. The paper highlights important advances including deep learning, multimodal capabilities, and continuous learning mechanisms by looking at recent advancements and the mathematical models that support these techniques. These developments have driven an increasing support for chatbots by allowing them to provide personalized interactions, enhance accessibility, and reduce repetitive tasks. In order to open the door for further study and applications, this paper aims to bring light on the challenges and the efficacy of using ML into chatbot building.
... ChatCharlie is not a customer support agent, but rather a conversational agent designed to collect and securely store textual responses to a series of contextually relevant questions. ChatCharlie has been designed to reduce perceptions of unresponsiveness by timing messages optimally [39-41], ensuring witnesses are not overwhelmed by information and recall requests, but that response lag does not appear so unhuman-like as to feel uncomfortable and/or frustrating. ChatCharlie offers users a human-like name, which can increase perceptions of an authentic interaction [42], especially when paired with an informal communication style. ...
Article
Full-text available
Initial account interviews (IAi) offer eyewitnesses more immediate opportunities to answer a series of brief questions about their experiences prior to an in-depth, more formal investigative interview. An IAi is typically elicited in-person near/at the scene of a crime using broadly systematic questioning. Retrieval practice can improve subsequent recall in some contexts, but there is a dearth of research centred on the potential costs and benefits of a quick IAi. Furthermore, where an in-person IAi is impossible, no alternative quick provision exists. Given the systematic nature of the IAi protocol, we developed a conversational chatbot as a potential alternative. Using a mock-witness paradigm, we investigated the memory performance of adults from the general population during in-depth in-person interviews one week after having provided an IAi 10 min post event either (1) in person, (2) via the ChatCharlie chatbot, or (3) no IAi (control). IAi conditions yielded significantly improved event recall during later investigative interviews versus the Control. Accounts were more accurate and complete, and more correct information was remembered without increased errors, indicating the potential of digital agents for IAi purposes. Findings concur with predictions from theoretical understanding of episodic memory consolidation and the empirical eyewitness literature regarding the benefits of practice in some contexts.
... These systems are designed to comprehend natural language inputs and generate appropriate responses, making them indispensable tools across multiple industries due to their ability to provide instant, round-the-clock customer support, streamline business processes, and enhance user engagement [2], [3]. They are widely used in customer service to handle inquiries, troubleshoot problems, and facilitate transactions, thereby improving customer satisfaction and operational efficiency [4]. In healthcare, chatbots assist in patient monitoring, appointment scheduling, and providing medical information, making healthcare more accessible and efficient [5], [6]. ...
Article
Full-text available
In Natural Language Processing (NLP) and Artificial Intelligence (AI), chatbots, which are software programs designed to facilitate human-computer interaction through natural language, are becoming increasingly important. However, creating an effective chatbot remains a complex task, as it must accurately interpret user input and generate appropriate responses. This study presents TrBot, a general-purpose Turkish chatbot that utilizes deep learning techniques, specifically a seq2seq model with Long Short-Term Memory (LSTM) layers. This architecture allows TrBot to manage sequential dependencies and effectively generate coherent responses, offering advantages in handling the complex morphological structure of Turkish. In contrast to earlier Turkish chatbots that were application-specific, TrBot is designed for broad conversational use across various topics. In this study, we also proposed and created two comprehensive datasets: a QA dataset with 40,702 entries and a conversation dataset with 304,446 entries, both specifically designed to enhance TrBot’s performance. Trained on these datasets, TrBot achieved an accuracy of 80% on the QA dataset and 70% on the dialog dataset, with BLEU scores of 0.90 and 0.77 respectively, indicating substantial enhancements in response quality. In comparison, a Transformer-based model exhibited reduced training times but achieved lower accuracies of 60% on the QA dataset and 50% on the dialog dataset, with BLEU scores of 0.76 and 0.61 respectively, given the limited size of the datasets and available computational resources. The development of TrBot has significant implications, offering potential benefits in areas such as customer support, language learning, and other fields that require robust Turkish conversational capabilities.
This study demonstrates that with adequate data and appropriate modeling techniques, it is possible to create effective conversational agents for complex languages like Turkish, paving the way for further advancements in this domain.
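The BLEU scores quoted in the abstract above can be understood via a simplified sentence-level BLEU computation. The sketch below is a generic illustration of the metric only (function names are my own, not the study's evaluation code; production evaluations typically use an established toolkit such as sacreBLEU):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(reference, candidate, max_n=2):
    """Simplified sentence-level BLEU: geometric mean of clipped
    n-gram precisions up to max_n, times a brevity penalty."""
    ref, cand = reference.split(), candidate.split()
    precisions = []
    for n in range(1, max_n + 1):
        ref_counts = Counter(ngrams(ref, n))
        cand_counts = Counter(ngrams(cand, n))
        # Clip each candidate n-gram count by its count in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(len(cand) - n + 1, 0)
        if total == 0:
            return 0.0
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0  # no smoothing in this sketch
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty discourages overly short candidates.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * geo_mean
```

A perfect match scores 1.0, a candidate sharing no n-grams with the reference scores 0.0, and partial overlaps fall in between, which is how corpus-level figures like 0.90 vs. 0.77 rank the two models' response quality.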
... The increasing prevalence of conversational agents in diverse applications, from customer service to healthcare, has brought user trust to the forefront of human-agent communication research [6, 9, 22, 33, 43]. Understanding user trust is crucial for conversational agent design as it significantly affects effective interaction, shapes user satisfaction and engagement, and directs adoption of agent systems [38, 41, 50]. Yet, understanding how user trust develops and evolves during interactions remains notoriously challenging. ...
Preprint
Full-text available
Trust plays a fundamental role in shaping the willingness of users to engage and collaborate with artificial intelligence (AI) systems. Yet, measuring user trust remains challenging due to its complex and dynamic nature. While traditional survey methods provide trust levels for long conversations, they fail to capture its dynamic evolution during ongoing interactions. Here, we present VizTrust, which addresses this challenge by introducing a real-time visual analytics tool that leverages a multi-agent collaboration system to capture and analyze user trust dynamics in human-agent communication. Built on established human-computer trust scales (competence, integrity, benevolence, and predictability), VizTrust enables stakeholders to observe trust formation as it happens, identify patterns in trust development, and pinpoint specific interaction elements that influence trust. Our tool offers actionable insights into human-agent trust formation and evolution in real time through a dashboard, supporting the design of adaptive conversational agents that respond effectively to user trust signals.
... User experience refers to how individuals perceive and respond to the use or anticipated use of a product, system, or service (Følstad and Brandtzaeg 2020). Various studies have examined specific aspects of users' experiences with chatbots, focusing on perceptions (trust, enjoyment, satisfaction) and responses (continuance, purchase) (Nordheim et al. 2019). ...
Article
Full-text available
This article explores the ethical considerations, prompt management strategies, and linguistic challenges in eliciting responses from large language models, specifically ChatGPT 3.5 and GPT-4.0, on controversial topics. Using the case study of sex robots influencers, the research investigates how these models deal with sensitive and explicit content while adhering to OpenAI's ethical guidelines. A series of tailored prompts were employed to evaluate and compare the models' performance, including their ability to provide accurate answers, their adherence to ethical standards, and their linguistic adaptability. The findings reveal significant differences between ChatGPT 3.5 and GPT-4.0 in their willingness to engage with controversial topics and the quality of their responses. ChatGPT 3.5 demonstrated a more cautious approach, frequently avoiding direct engagement with sensitive content, while GPT-4.0 exhibited a more nuanced understanding but occasionally provided less accurate information. Both models emphasized ethical considerations, redirecting users towards broader discussions on societal and ethical implications. This study highlights the essential role of prompt management and linguistic adjustments in influencing AI model behavior and demonstrates the limitations and ethical challenges associated with generative AI in addressing controversial subjects. The article concludes with recommendations for future research and the ethical use of AI in complex, sensitive discussions.
... How to build trust in AI agents is an important research topic today (Nordheim et al., 2019). People who are perceived as being similar are frequently regarded as more trustworthy, whereas people who are not similar are regarded as less trustworthy (see Lauren et al., 2009). ...
Conference Paper
Full-text available
With the development of AI technologies, especially generative AI (GAI) like ChatGPT, GAI is increasingly assisting people in various tasks. However, people may have different requirements for GAI when using it for different kinds of tasks. For instance, when brainstorming new ideas, people may want GAI to propose different ideas that supplement theirs with different problem-solving perspectives, but for decision-making tasks, they may prefer GAI to adopt a problem-solving process similar to people's and make a similar or even the same decision as they would. We conducted an online experiment examining how perceived similarities between GAI and human task-solving influence people's intention to use GAI, mediated by trust, for four task types (creativity, planning, intellective, and decision-making tasks). We demonstrate that the effect of similarity on trust (and so intent to use AI) depends on the type of task. This paper contributes to understanding the impact of task types on the relationship between perceived similarity and GAI adoption, with implications for future use of GAI in various task contexts.
... Studies suggest that familiarity can help reduce perceived risks and enhance trust, especially when consumers have encountered reliable and relevant responses in prior interactions (Komiak and Benbasat, 2006;Nordheim et al., 2019). For instance, users who have had positive experiences with chatbots that provide accurate and personalized recommendations tend to view these systems as more competent, leading to increased satisfaction and adoption intentions (Dwivedi et al., 2023a;Jiménez-Barreto et al., 2023). ...
Article
- GenAI integration enhances perceived chatbot usefulness, human-likeness, and familiarity.
- Privacy concerns increase post-GenAI integration, despite unchanged trust levels.
- Familiarity directly and indirectly boosts chatbot adoption intentions.
- Adoption determinants remain stable across customer journey stages post-GenAI.
- SRAM extended to evaluate GenAI's impact on retail chatbot adoption.

This study investigates the influence of Generative Artificial Intelligence (GenAI) on consumer adoption of retail chatbots, focusing on how GenAI impacts key adoption determinants, the role of familiarity and assessing its effects across different stages of the customer journey. We conducted two waves of surveys, one pre- and one post-GenAI integration, to compare consumer perceptions across three customer service tasks. Using the Service Robot Acceptance Model (SRAM) as a framework, we found that GenAI enhances consumer perceptions of chatbot usefulness, human-likeness, and familiarity, thereby increasing adoption intentions. However, trust remains largely unchanged, and privacy concerns have risen post-GenAI. Additionally, the relationships remain stable across customer journey stages, with familiarity playing a key role. Our findings extend SRAM to the retail context with GenAI, offering new insights into the temporal stability of chatbot adoption factors. It underscores familiarity's dual role (direct and indirect) in fostering adoption, while highlighting that GenAI impacts specific aspects of consumer interaction. These findings provide insights for retailers to leverage GenAI-powered chatbots to enhance customer engagement and satisfaction.
... The grouping of items resulted in five factors (four latent variables and one outcome variable) proposed in the conceptual part of the paper (see appendix). Validated questionnaire items were used for: K-OL [24], Digitalization [55] and [56], Time Management [57] and [58], Trust [59] and OP [60]. ...
Article
Full-text available
While digitalization and robotics are a reality for companies and contribute to value creation, few studies have examined their impact on operational performance. This study examines how digitalization in knowledge-intensive companies contributes to improving operational performance, emphasizing the importance of trust and effective knowledge-oriented leadership to create a positive context for its implementation. The study is conducted by surveying ten engineers with senior positions in companies with a high level of robotics and digitalization maturity. Through qualitative comparative analysis with fuzzy sets (fsQCA) and Partial Least Squares Structural Equation Modeling (PLS-SEM), combinations of factors leading to business success from the perspective of digitalization are identified. Findings reveal that trust in digital technology and effective leadership are crucial for improving operational efficiency in an increasingly digitized business environment. This study provides valuable insights into how the integration of advanced digital technologies through organizational factors such as knowledge-oriented leadership can contribute to improved operational performance, offering practical perspectives to managers on how to handle digitalization in knowledge-intensive organizations.
... With the help of artificial intelligence (AI) and natural language processing, chatbots are designed as 'Conversational Agents' (CA) to deliver services similar to human customer services agents [4]. Their popularity is due to the advantages in service effectiveness, cost savings and improved customer experience [5]. Companies increasingly adopt chatbots to assist or even replace human customer service agents during service encounters [6]. ...
Article
Full-text available
Chatbots are widely used in customer service contexts today. People use chatbots for pragmatic reasons, such as checking delivery status and refund policies. The purpose of the paper is to investigate which factors of user experience and chatbot service quality influence user satisfaction and electronic word-of-mouth. A survey was conducted in July 2024 to collect responses in Hong Kong about users’ perceptions of chatbots. Contrary to previous literature, entertainment and warmth perception were not associated with user experience and service quality. Social presence was associated with user experience, but not service quality. Competence was relevant to both user experience and service quality, which has important implications for digital marketers and brands adopting chatbots to enhance their service quality.
... In customer service, AI's impact is already being felt, and its influence will continue to expand. Chatbots and virtual assistants are becoming commonplace, providing immediate responses to customer inquiries and enhancing the overall customer experience (Nordheim et al., 2019). As AI systems become more advanced, they will be able to handle increasingly complex customer interactions, leading to higher satisfaction rates and stronger customer loyalty. ...
Article
Full-text available
An AI-driven end-to-end workflow optimization and automation system can revolutionize small and medium-sized enterprises (SMEs) by addressing inefficiencies and resource constraints that hinder productivity and growth. These enterprises often rely on manual processes and fragmented data systems, limiting their ability to scale and compete effectively. Through AI integration, SMEs can enhance productivity, reduce errors, and drive growth, making them more resilient in a competitive landscape. AI-driven workflow optimization combines several core technologies: data integration, process mapping, predictive analytics, and automation through tools like Robotic Process Automation (RPA). Data integration consolidates disparate data sources into a centralized repository, allowing for a comprehensive view of operations. AI algorithms analyze this data to map current workflows, identify bottlenecks, and suggest optimal pathways for task completion. Predictive analytics enables SMEs to make informed decisions, forecast demand, and optimize supply chain processes, while RPA automates repetitive tasks, reducing human error and allowing employees to focus on more strategic activities. An AI-driven system offers key advantages to SMEs, including increased efficiency, cost savings, and enhanced decision-making. By automating routine tasks such as invoice processing, inventory management, and customer service responses, SMEs can reduce operational costs and increase task completion speed. AI-powered dashboards and predictive analytics provide real-time insights into performance metrics, empowering SMEs to make data-driven decisions swiftly. Additionally, AI-based workflow optimization enhances customer experience through faster response times and consistent service quality. Implementing this system follows a phased approach: initial assessment, pilot testing, full-scale deployment, and ongoing improvement. 
A pilot phase allows SMEs to validate the system’s efficacy within a controlled environment, gathering feedback to refine processes before broader adoption. Training employees to work with AI-based tools and addressing potential resistance are critical to successful implementation. AI-driven workflow automation is essential for SMEs aiming to grow sustainably. While challenges such as data security, integration with legacy systems, and resistance to change exist, the benefits, ranging from increased efficiency to scalability, outweigh these limitations. For SMEs, adopting AI-driven systems not only enhances current performance but also builds a foundation for long-term resilience and competitiveness. Keywords: Artificial Intelligence, Automation System, SMEs.
... Constructs, scale items, and descriptive statistics. Constructs and scale items [source] (the Mean, S.D., and factor loading columns of the original table are not recoverable here):
Privacy concerns [89]: I am concerned that the information I submit to ChatGPT could be misused; I am concerned that a person can find private information I submit to ChatGPT; I am concerned about providing personal information to ChatGPT because of what others might do with it; I am concerned about providing personal information to ChatGPT because it could be used in a way I did not foresee.
Perceived ease of use [35]: My interaction with ChatGPT would be clear and understandable; I would find ChatGPT easy to use; Learning to operate ChatGPT would be easy for me.
Perceived usefulness [90]: I find ChatGPT useful in my daily life; Using ChatGPT helps me to accomplish things more quickly; Using ChatGPT increases my productivity; Using ChatGPT helps me to perform many things more conveniently; ChatGPT services are compatible with my values; ChatGPT services are compatible with my current needs; ChatGPT services are compatible with the way I like to interact with a chatbot.
Reuse intention [91]: I will use ChatGPT on a regular basis in the future; I will frequently use ChatGPT in the future; I will strongly recommend others to use ChatGPT.
Trust [92]: I experience ChatGPT as trustworthy; *I do not think ChatGPT will act in a way that is disadvantageous for me; I'm suspicious of ChatGPT [reversed]; ChatGPT appears deceptive [reversed]; I trust ChatGPT.
Note: The sign * means that the item was removed based on scale purification ...
Article
Full-text available
This study delves into the factors influencing reuse intention and trust toward generative artificial intelligence (AI) chatbots, with a particular focus on the Iranian context—a region where the adoption of such technologies is rapidly evolving yet remains underexplored. Utilizing a framework grounded in the Technology Acceptance Model and the Diffusion of Innovation theory, this research complements those theories with the construct of privacy concerns. Data were collected via a questionnaire distributed to 567 Iranian ChatGPT users and analyzed using Partial Least Squares Structural Equation Modelling. The findings reveal that perceived usefulness, perceived ease of use, and compatibility significantly enhance trust toward ChatGPT among Iranian users. The findings also reveal that compatibility, perceived usefulness, and trust enhance ChatGPT users’ reuse intention. Interestingly, privacy concerns negatively impact trust but not reuse intention, highlighting a privacy paradox. This study expands upon existing literature by highlighting the intricate dynamics of trust and adoption, especially in an understudied context, Iran, offering practical insights for developers to enhance user experience through human-centric design. The results underscore the necessity for usability, compatibility with user values, and robust privacy measures to foster sustained user engagement and trust in the context of generative AI chatbots.
... Consequently, it is no surprise that many industries use CAs as means of interaction with their customer base and to facilitate service provisioning; for instance in e-health (e.g., Laumer et al. 2019), digital education (e.g., Winkler and Söllner 2018), or customer service (e.g., Qiu and Benbasat 2009;Huang and Rust 2020). Various industries including the digital service industry have already recognized the immense potential of CAs which is why CA adoption is expected to remain steady and increase even more in the future (Nordheim et al. 2019). Especially for automating service interaction and providing a more engaging mode of interaction, CAs can become a key component in providing digital services in the near future (Hollebeek et al. 2021). ...
Conference Paper
Full-text available
Lots of organizations use subscription business models. However, with increasing competition and technological progress, switching costs for customers are decreasing. This development can translate to serious issues for subscription-based businesses, requiring action. Traditionally, businesses used mailings or calls, which are costly, time-consuming and often not effective. In this research-in-progress paper, we explore conversational churn prevention as a potential remedy. We present a conversational agent with persuasive design features (e.g., nudges) and first results from a pre-study. We conduct a between-subjects experiment and interviews for the mixed-methods evaluation of our pre-study. Our work contributes to theory by presenting more insights into the interaction quality of conversational agents in the context of churn prevention of digital services and the role of persuasive design. We support practitioners by guiding them towards more effective use of conversational agents to improve their services and to predict churn.
Article
Full-text available
This paper investigates the relationship between the usability and responsiveness of artificial intelligence chatbots and customer satisfaction in e-retailing, while examining the mediating role of both extrinsic and intrinsic values on customer satisfaction. The sample comprised 390 active users of e-commerce in Lahore, Pakistan, who had frequent interactions with chatbots. The data was analysed using Pearson correlation and Hayes' PROCESS macro. The findings revealed a significant relationship between usability, responsiveness, and customer satisfaction. Furthermore, extrinsic and intrinsic values during the online customer experience were found to fully mediate the relationship between usability, responsiveness and customer satisfaction. The findings of the study provide useful implications for marketing practitioners to leverage AI technologies in optimising their marketing endeavours. Keywords: Chatbot Usability, Chatbot Responsiveness, Technology
Article
The trust that users place in generative artificial intelligence (AI) can significantly influence their intentions and behaviors regarding its usage. Nonetheless, our current knowledge regarding user trust in generative AI is limited. To address this research gap, the study conducted semistructured interviews with 29 participants to investigate the factors that may influence user trust, using ChatGPT as an illustrative example. The findings led to the identification of several factors that account for user trust in ChatGPT. These factors encompass user-related aspects (such as technology attitude, innovativeness, and risk perception), information-related factors (including information source, information quality, and information values), technology-related factors (covering system quality, technology quality, and technology ethics), organization-related factors (encompassing organizational structure, brand reputation, and cultural context), and environment-related factors (involving policy environment, network environment, and social environment). In conclusion, the study formulated and presented the ChatGPT user trust framework as a theoretical model to comprehend user trust in generative AI. A thorough discussion of the framework is provided along with suggestions for future applications.
Chapter
Today's commercial sector widely recognises AI and generative AI as catalysts and game-changers. AI and generative AI are empowered to assist organisations in developing new products and can also completely transform commercial processes in an organised way without sacrificing standards and quality. While AI and generative AI are indeed beneficial and efficient, they do come with a few inherent problems. Furthermore, successfully implementing AI and generative AI is a costly endeavor. Therefore, authorities must ensure the cost-effective availability of these technologies, allowing small-scale firms to integrate them into their operations. In general, we are optimistic that this research will provide valuable insights to the whole business community on how to effectively manage their organisations.
Article
Full-text available
Purpose
This study examines the strategic value of chatbots in service industries from a managerial perspective, focusing on operational efficiency, cost reduction, customer satisfaction, and leveraging chatbot-generated data for strategic decision-making.
Methodology
Employing a mixed-methods approach, we conducted qualitative interviews with Chief Technology Officers from seven service firms and a quantitative survey of 287 industry professionals involved in technology implementation.
Findings
The qualitative findings reveal that firms adopt chatbots primarily to enhance operational efficiency, reduce costs, and meet evolving customer expectations for instant, 24/7 service. Chatbots significantly reduce workload, improve response times, optimize resources, and enhance customer satisfaction through consistent and accessible service. Additionally, chatbot-generated data emerges as an asset for informing product development and service improvements. Motivations for chatbot adoption significantly impact operational outcomes, customer satisfaction, and implementation challenges. Operational impact and customer satisfaction strongly influence overall chatbot effectiveness and the intention to expand their use.
Originality
We explore chatbot adoption from a managerial perspective, combining qualitative and quantitative methods. By emphasizing the strategic use of chatbot-generated data for decision-making and demonstrating how overcoming implementation challenges can enhance chatbot effectiveness, the research provides novel insights that extend beyond the current literature on AI technologies in service industries.
Practical Implications
The study emphasizes the importance of aligning chatbot implementation with organizational objectives, proactively addressing implementation challenges, and investing in advanced capabilities to enhance customer experiences. These findings offer practical insights for executives who leverage chatbot technologies to optimize operations, enhance customer satisfaction, and gain a competitive advantage in service industries.
Article
The purpose.
This study aims to investigate the application and impact of anthropomorphic design in chatbot digital avatars, focusing on the transition from mechanistic to human-like interfaces. The research analyzes abstract and figurative anthropomorphism, examining their respective roles in improving user engagement and experience. By exploring how abstract designs simplify human traits for enhanced recognition and figurative designs emulate real human expressions for more natural interaction, the study underscores the importance of emotional communication in chatbot design. The findings suggest potential avenues for future research in affective computing and human-computer interaction.
Methodology.
Literature analysis method: A comprehensive review of existing literature on anthropomorphic design, chatbot digital avatars, and human-computer interaction was conducted to establish a theoretical basis for the study. Interdisciplinary research method: The theories and methods of different disciplines were integrated to examine various chatbot digital avatars using abstract and concrete anthropomorphic designs to understand the design choices and their impact on user experience.
The research results.
The study identifies two main categories of anthropomorphic design for chatbot digital avatars: abstract and figurative. Abstract avatars use simplified, cartoon-like expressions to convey emotions, creating friendly and approachable digital personas that enhance user acceptance. Figurative avatars, on the other hand, emphasize realism, imitating human body language, expressions, and voice tones to provide a deeper interactive experience. These design styles offer unique benefits, allowing customization based on specific user needs and application contexts.
The scientific novelty.
This research offers a comparative framework for analyzing abstract and figurative anthropomorphic designs in chatbot avatars, focusing on their impact on user engagement and emotional response. Unlike previous studies centered on functionality, this study highlights the psychological effects of avatar design, categorizing elements like facial expressions and gestures to optimize user-centered design. This contributes to human-computer interaction by enhancing understanding of how avatar design fosters emotional connection and user satisfaction.
The practical significance.
The findings of this study offer valuable insights for designers and developers in the chatbot and AI interface design field. By understanding the impact of abstract and figurative anthropomorphic elements, designers can make more informed choices in selecting avatar styles that align with user preferences and application needs. Abstract avatars may be particularly effective in contexts that require approachability and simplicity, such as customer service applications. In contrast, figurative avatars can enhance user engagement in scenarios that benefit from realism and emotional depth, such as healthcare or education. This research also provides actionable guidelines for implementing anthropomorphic elements – such as facial expressions, gestures, and voice modulation – that foster emotional connection and user satisfaction, ultimately contributing to a more effective and enjoyable human-computer interaction experience.
Chapter
Research and real-world applications of chatbots have increased dramatically as a result of their popularity in digital marketing. A fundamental query that still needs to be answered is: What makes a good chatbot experience? This study presents a novel method for assessing chatbot customer assistance quality by looking at how it affects customer relationships on different digital channels. We also examine how traditional hotline services and chatbot interactions differ. According to our research, implementing this new framework can result in a large rise in repeat business, a decrease in service costs, a boost in customer satisfaction, positive word-of-mouth, and brand loyalty. Additionally, the paper identifies specific service domains where chatbots can outperform human phone support.
Article
Full-text available
Chatbots are becoming popular in the rapidly developing field of Artificial Intelligence (AI) to facilitate more effective communication between businesses and customers. AI-powered banking chatbots are gaining popularity and present novel opportunities to provide 24/7 front-line support and customised banking assistance. Despite these advantages, banking chatbots are not widely used and have not been adopted as customer service in Indian banks. This research paper explores the obstacles associated with the widespread adoption of banking chatbots in the financial landscape. As disruptive technologies like Artificial Intelligence and Natural Language Processing (NLP) continue to reshape the banking industry globally, understanding the specific barriers to chatbot integration becomes imperative. The current research contributes to the AI discipline by holistically examining the barriers to banking chatbot adoption in India using the Interpretive Structural Modelling (ISM) methodology. The study employs a three-step approach by identifying key barriers to adopting banking chatbots through an extensive literature review and experts' opinions. Then, the Interpretive Structural Modelling (ISM) methodology creates a hierarchical model. For this, data is collected from subject matter experts to develop the interpretive model. Thirdly, MICMAC analysis is conducted to classify and sort the corresponding variables based on their driving and dependence power. The analysis reveals that the absence of AI guidelines, lack of human touch and lack of auditability and transparency of AI systems are some of the critical barriers to the deployment of AI banking chatbots, requiring special focus to streamline the regulatory framework and anthropomorphic features of AI chatbots for successful implementation and deployment. Recommendations for practitioners and other stakeholders and research limitations are also discussed.
Article
Purpose
Chatbots emerge as a prominent trend within the context of evolving communication settings and enhancing customer experience to improve firms' total quality management strategies. Specifically, users’ initial trust in such chatbots is critical for their adoption. Under the realm of technology acceptance theories, the present research aims to investigate drivers (perceived ease of use, performance expectancy, compatibility, social influence and technology anxiety) and impacts (customer experience and chatbot usage intention) of chatbot initial trust, among Generation Z considered as the more tech-savvy generation, in the particular telecommunication services context.
Design/methodology/approach
Research data were collected using an online questionnaire-based survey to test research hypotheses. A sample of 385 students was selected in Tunisia using a convenience sampling technique. Data were then analyzed through structural equation modeling by AMOS 23.
Findings
The results highlighted that, except for perceived ease of use and performance expectancy, all determinants have a significant influence on chatbot initial trust (positive impact of social influence and compatibility and negative impact of technology anxiety). Furthermore, chatbot initial trust positively stimulates customer experience with chatbots and chatbot usage intention.
Practical implications
Our results provide particular insights to chatbot developers seeking to enhance trust-building features in these systems and telecommunication operators to better understand user adoption and improve chatbot-based customer interactions among Generation Z in emergent markets.
Originality/value
This paper attempts to consolidate and enrich the existing body of chatbot initial trust literature by emphasizing the role of customer experience with chatbots and technology anxiety, as two pivotal consumer-related factors that have not yet been treated together in one study.
Article
Full-text available
The integration of chatbots in the financial sector has significantly improved customer service processes, providing efficient solutions for query management and problem resolution. These automated systems have proven to be valuable tools in enhancing operational efficiency and customer satisfaction in financial institutions. This study aims to conduct a systematic literature review on the impact of chatbots in customer service within the financial sector. A review of 61 relevant publications from 2018 to 2024 was conducted. Articles were selected from databases such as Scopus, IEEE Xplore, ARDI, Web of Science, and ProQuest. The findings highlight that efficiency and customer satisfaction are central to the perception of service quality, aligning with the automation of the user experience. The bibliometric analysis reveals a predominance of publications from countries such as India, Germany, and Australia, underscoring the academic and practical relevance of the topic. Additionally, essential thematic terms such as “artificial intelligence” and “advanced automation” were identified, reflecting technological evolution in this field. This study provides significant insights for future theoretical, practical, and managerial developments, offering a framework to optimize chatbot implementation in highly regulated environments.
Chapter
Full-text available
This volume includes seven contributions devoted to inclusive language, understood in two interrelated meanings: the inclusion of women in discourses dominated by masculine forms and the inclusion of people who do not identify with binary gender identities. The goal is to stimulate respectful and informed discussion about linguistic structures, with a focus on Italian. The research presented, based on qualitative and quantitative corpus analyses, psycholinguistic experimental studies and questionnaire-based sociolinguistic investigations, describes specific language varieties or text types, highlights the main structural differences between Italian and German, investigates the supposed neutrality of the so-called ‘unmarked’ (or even ‘inclusive’) masculine or the public perception of inclusive language, and reflects on the use of grammatical desinences in the construction of gender identity in chatbots. A concerning scenario emerges from all the contributions, particularly for the degree of visibility of women in Italian cultural discourse and for the lack of denotation of prestige associated with the female gender.
Book
Full-text available
This volume includes seven contributions devoted to inclusive language, understood in two interrelated meanings: the inclusion of women in discourses dominated by masculine forms and the inclusion of people who do not identify with binary gender identities. The goal is to stimulate respectful and informed discussion about linguistic structures, with a focus on Italian. The research presented, based on qualitative and quantitative corpus analyses, psycholinguistic experimental studies and questionnaire-based sociolinguistic investigations, describes specific language varieties or text types, highlights the main structural differences between Italian and German, investigates the supposed neutrality of the so-called ‘unmarked’ (or even ‘inclusive’) masculine or the public perception of inclusive language, and reflects on the use of grammatical desinences in the construction of gender identity in chatbots. A concerning scenario emerges from all the contributions, particularly for the degree of visibility of women in Italian cultural discourse and for the lack of denotation of prestige associated with the female gender.
Chapter
In today's fast-paced technological world, client engagement is crucial for successful corporate operations. As AI and automation improve procedures, traditional face-to-face communication channels are diminishing. Companies must prioritize consumer engagement to maintain sustainable relationships and capitalize on the internet as the primary communication channel. Chatbots are a significant technological advancement in this context. Our study identifies a gap in the literature, prompting an exploration of customer trust, perceptions of chatbots versus human support, challenges, ethical considerations, and chatbots' potential to foster emotional connections and enhance loyalty. This investigation provides novel perspectives and valuable insights into customer engagement with chatbots. The findings offer practical implications for businesses seeking to optimize chatbot usage, establish customer trust, and leverage emotional connections to cultivate increased loyalty, bridging gaps in the literature and shedding light on critical aspects of AI-driven customer interactions.
Article
While chatbots are increasingly used for customer service, there is a knowledge gap concerning the impact of Conversational Breakdown in such chatbot interactions. In a 2x4 factorial design online experiment, we studied how Conversational Breakdown impacts user emotion and trust in a chatbot for customer service, given variations in task criticality and breakdown task order. Here, 257 participants were randomly assigned to complete high- or low-criticality tasks with a prototype chatbot for customer service, experiencing Conversational Breakdown for the first, second, third, or none of their tasks. The task set was decided from a 63-participant pre-study. We found significant impact of Conversational Breakdown, including a marked order effect on overall trust, as well as a bounce-back effect on task-specific trust and emotion after subsequent successful task completion. We found no post-interaction effect of Task Criticality. Based on our findings, we discuss theoretical and practical implications and suggest future research.
Chapter
This chapter provides a comprehensive overview of the literature on the application of chatbots in customer service and recruiting, emphasizing the function of chatbots in improving customer service. As a tool for streamlining hiring procedures and automating customer service encounters, chatbots have grown in popularity in recent years. The literature review indicates that chatbots significantly benefit the recruitment process, offering improved efficiency, increased candidate engagement, and reduced workload for HR personnel. Chatbots have also proven effective in improving customer service interactions, with benefits including increased customer satisfaction, reduced wait times, and improved customer engagement. However, the review also highlights several challenges that must be addressed to fully realize the potential of chatbots in these areas. These challenges include issues related to user acceptance and perceptions of chatbot technology, as well as concerns about privacy and data security. Therefore, this study provides recommendations for addressing these challenges.
Conference Paper
Full-text available
Chatbots are increasingly offered as an alternative source of customer service. For users to take up chatbots for this purpose, it is important that users trust chatbots to provide the required support. However, there is currently a lack of knowledge regarding the factors that affect users' trust in chatbots. We present an interview study addressing this knowledge gap. Thirteen users of chatbots for customer service were interviewed regarding their experience with the chatbots and factors affecting their trust in these. Users' trust in chatbots for customer service was found to be affected (a) by factors concerning the specific chatbot, specifically the quality of its interpretation of requests and advice, its human-likeness, its self-presentation, and its professional appearance, but also (b) by factors concerning the service context, specifically the brand of the chatbot host, the perceived security and privacy in the chatbot, as well as general risk perceptions concerning the topic of the request. Implications for the design and development of chatbots and directions for future work are suggested.
Article
Full-text available
Chatbots, as a new information, communication and transaction channel, enable businesses to reach their target audience through messenger apps such as Facebook, WhatsApp or WeChat. Unlike traditional chats, chatbots are not handled by human agents; instead, software leads the conversation. Recent chatbot developments in customer service and sales are remarkable. However, in the field of public transport, little research has been published on chatbots so far. With chatbots, passengers can look up timetables, buy tickets and have a personal, digital travel advisor providing real-time and context-relevant information about their trips. Chatbots collect and provide different data about users and their journeys in public transportation systems, including travel, product, service and content preferences, usage patterns, and demographic and location-based data. Chatbots have many advantages for both companies and mobile users. They enable new user touch points, improve convenience, reduce service, sales and support costs, and support one-to-one marketing, new forms of data collection and deep learning. Using chatbots, smartphone users can reach a company anytime and anywhere. The surveyed users of an investigated prototype are remarkably open to new mobile services, and they quickly adapt to this technology.
Conference Paper
Full-text available
There is a growing interest in chatbots, which are machine agents serving as natural language user interfaces for data and service providers. However, no studies have empirically investigated people’s motivations for using chatbots. In this study, an online questionnaire asked chatbot users (N = 146, aged 16–55 years) from the US to report their reasons for using chatbots. The study identifies key motivational factors driving chatbot use. The most frequently reported motivational factor is “productivity”; chatbots help users to obtain timely and efficient assistance or information. Chatbot users also reported motivations pertaining to entertainment, social and relational factors, and curiosity about what they view as a novel phenomenon. The findings are discussed in terms of the uses and gratifications theory, and they provide insight into why people choose to interact with automated agents online. The findings can help developers facilitate better human–chatbot interaction experiences in the future. Possible design guidelines are suggested, reflecting different chatbot user motivations.
Conference Paper
Full-text available
How much do visual aspects influence users' perception of whether they are conversing with a human being or a machine in a mobile-chat environment? This paper describes a study on the influence of typefaces using a blind Turing-test-inspired approach. The study consisted of two user experiments. First, three different typefaces (OCR, Georgia, Helvetica) and three neutral dialogues between a human and a financial adviser were shown to participants. The second experiment applied the same study design, but the OCR font was replaced with the Bradley font. For each of our two independent experiments, participants were shown three dialogue transcriptions and three typefaces, counterbalanced. For each dialogue-typeface pair, participants had to classify adviser conversations as human- or chatbot-like. The results showed that machine-like typefaces biased users towards perceiving the adviser as a machine but, unexpectedly, handwritten-like typefaces did not have the opposite effect. These effects were, however, influenced by users' familiarity with artificial intelligence and other participant characteristics.
Conference Paper
Full-text available
Users are rapidly turning to social media to request and receive customer service; however, a majority of these requests are not addressed in a timely manner, or at all. To overcome this problem, we created a new conversational system to automatically generate responses to users' requests on social media. Our system integrates state-of-the-art deep learning techniques and is trained on nearly 1M Twitter conversations between users and agents from over 60 brands. The evaluation reveals that over 40% of the requests are emotional, and that the system is about as good as human agents at showing empathy to help users cope with emotional situations. Results also show that our system outperforms an information retrieval system based on both human judgments and an automatic evaluation metric.
Article
Full-text available
We interact daily with computers that appear and behave like humans. Some researchers propose that people apply the same social norms to computers as they do to humans, suggesting that social psychological knowledge can be applied to our interactions with computers. In contrast, theories of human-automation interaction postulate that humans respond to machines in unique and specific ways. We believe that anthropomorphism, the degree to which an agent exhibits human characteristics, is the critical variable that may resolve this apparent contradiction across the formation, violation, and repair stages of trust. Three experiments were designed to examine these opposing viewpoints by varying the appearance and behavior of automated agents. Participants received advice that deteriorated gradually in reliability from a computer, avatar, or human agent. Our results showed (a) that anthropomorphic agents were associated with greater trust resilience, a higher resistance to breakdowns in trust; (b) that these effects were magnified by greater uncertainty; and (c) that incorporating human-like trust-repair behavior largely erased differences between the agents. Automation anthropomorphism is therefore a critical variable that should be carefully incorporated into any general theory of human-agent trust as well as novel automation design.
Article
Full-text available
This article explores whether people more frequently attempt to repair misunderstandings when speaking to an artificial conversational agent if it is represented as fully human. Interactants in dyadic conversations with an agent (the chatbot Cleverbot) spoke to either a text screen interface (agent's responses shown on a screen) or a human body interface (agent's responses vocalized by a human speech shadower via the echoborg method) and were either informed or not informed prior to interlocution that their interlocutor's responses would be agent-generated. Results show that an interactant is less likely to initiate repairs when an agent-interlocutor communicates via a text screen interface, as well as when they explicitly know their interlocutor's words to be agent-generated. That is to say, people demonstrate the most "intersubjective effort" toward establishing common ground when they engage an agent under the same social psychological conditions as face-to-face human–human interaction (i.e., when they both encounter another human body and assume that they are speaking to an autonomously communicating person). This article's methodology presents a novel means of benchmarking intersubjectivity and intersubjective effort in human-agent interaction.
Article
Full-text available
Trust plays an important role in many Information Systems (IS)-enabled situations. Most IS research employs trust as a measure of interpersonal or person-to-firm relations, such as trust in a Web vendor or a virtual team member. Although trust in other people is important, this paper suggests that trust in the information technology (IT) itself also plays a role in shaping IT-related beliefs and behavior. To advance trust and technology research, this paper presents a set of trust in technology construct definitions and measures. We also empirically examine these construct measures using tests of convergent, discriminant, and nomological validity. This study contributes to the literature by providing: a) a framework that differentiates trust in technology from trust in people, b) a theory-based set of definitions necessary for investigating different kinds of trust in technology, and c) validated trust in technology measures useful to research and practice.
Conference Paper
Full-text available
Though customer support is argued to be a useful source of usability insight, how to benefit from customer support in usability evaluation has hardly been made the subject of scientific research. In this paper, we present an approach to gathering usability insight from users when they call customer support. We also present a case implementation of this approach: an evaluation of a telecom operator's customer website. We find that the proposed approach provides insight into usability problems, technical issues, and issues of a strategic character. Though the majority of the website users called customer support because they were obstructed in their attempt to use available self-service support options, a substantial proportion of the users called customer support as a planned part of their task. On the basis of the study findings, we present practical implications and suggest future research.
Article
Full-text available
This article addresses generalized trust, a construct that is examined in various scientific disciplines and assumed to be of central importance to understanding the functioning of individuals, groups, and society at large. We share four basic lessons on trust: (a) Generalized trust is more a matter of culture than genetics; (b) trust is deeply rooted in social interaction experiences (that go beyond childhood), networks, and media; (c) people have too little trust in other people in general; and (d) it is adaptive to regulate a “healthy dose” of generalized trust. Each of these lessons is inspired and illustrated by recent research from different scientific disciplines.
Article
Full-text available
This paper reports on the linguistic accuracy of five renowned “chatbots,” with an evaluator (an ESL teacher) chatting with each chatbot for about three hours. The chatting consisted of a series of set questions/statements (determined as being in the domain of an ESL learner) – aimed at assessing the accuracy and felicity of the chatbots' answers at the grammatical level. Results indicate that chatbots are generally able to provide grammatically acceptable answers, with three chatbots returning acceptability figures in the 90% range. When meaning is factored in, however, a different picture emerges, with the chatbots often providing meaningless, nonsensical answers, and the accuracy rate for the joint categories of grammar and meaning falling below 60%. The paper concludes on the note that although chatbots as “conversation practice machines” do not yet make robust chatting partners, improvements in chatbot performance bode well for future developments.
Article
Full-text available
Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.
Article
Full-text available
The Service Recovery Paradox (SRP) has emerged as an important effect in the marketing literature. However, empirical research testing the SRP has produced mixed results, with only some studies supporting this paradox. Because of these inconsistencies, a meta-analysis was conducted to integrate the studies dealing with the SRP and to test whether studies' characteristics influence the results. The analyses show that the cumulative mean effect of the SRP is significant and positive on satisfaction, supporting the SRP, but nonsignificant on repurchase intentions, word-of-mouth, and corporate image, suggesting that there is no effect of the SRP on these variables. Additional analyses of moderator variables find that design (cross-sectional versus longitudinal), subject (student versus nonstudent), and service category (hotel, restaurant, and others) influence the effect of SRP on satisfaction. Finally, implications for managers and directions for future research are presented.
Article
Full-text available
One component in the successful use of automated systems is the extent to which people trust the automation to perform effectively. In order to understand the relationship between trust in computerized systems and the use of those systems, we need to be able to effectively measure trust. Although questionnaires regarding trust have been used in prior studies, these questionnaires were theoretically rather than empirically generated and did not distinguish between three potentially different types of trust: human-human trust, human-machine trust, and trust in general. A 3-phased experiment, comprising a word elicitation study, a questionnaire study, and a paired comparison study, was performed to better understand similarities and differences in the concepts of trust and distrust, and among the different types of trust. Results indicated that trust and distrust can be considered opposites, rather than different concepts. Components of trust, in terms of words related to trust, were similar across the three types of trust. Results obtained from a cluster analysis were used to identify 12 potential factors of trust between people and automated systems. These 12 factors were then used to develop a proposed scale to measure trust in automation.
Article
Full-text available
Electronic commerce is an increasingly popular business model with a wide range of tools available to firms. An application that is becoming more common is the use of self-service technologies (SSTs), such as telephone banking, automated hotel checkout, and online investment trading, whereby customers produce services for themselves without assistance from firm employees. Widespread introduction of SSTs is apparent across industries, yet relatively little is known about why customers decide to try SSTs and why some SSTs are more widely accepted than others. In this research, the authors explore key factors that influence the initial SST trial decision, specifically focusing on actual behavior in situations in which the consumer has a choice among delivery modes. The authors show that the consumer readiness variables of role clarity, motivation, and ability are key mediators between established adoption constructs (innovation characteristics and individual differences) and the likelihood of trial.
Article
Full-text available
Three experiments were conducted to examine perceptions of a natural language computer interface (conversation bot). Participants in each study chatted with a conversation bot and then indicated their perceptions of the bot on various dimensions. Although participants were informed that they were interacting with a computer program, participants clearly viewed the program as having human-like qualities. Participants agreed substantially in their perceptions of the bot’s personality on the traits from the five-factor model (Experiment 1). In addition, factors that influence perceptions of human personalities (e.g., whether one uses another’s first name and response latency) also affected perceptions of a bot’s personality (Experiments 2 and 3). Similar to interactions with humans, the bot’s perceived neuroticism was inversely related to how long individuals chatted with it.
Conference Paper
Full-text available
A large amount of research attempts to define trust, yet relatively little research attempts to experimentally verify what makes trust needed in interactions with humans and technology. In this paper we identify the underlying elements of trust-requiring situations: (a) goals that involve dependence on another, (b) a perceived lack of control over the other, (c) uncertainty regarding the ability of the other, and (d) uncertainty regarding the benevolence of the other. Then, we propose a model of the interaction of these elements. We argue that this model can explain why certain situations require trust. To test the applicability of the proposed model to an instance of human-technology interaction, we constructed a website which required subjects to depend on an intelligent software agent to accomplish a task. A strong correlation was found between subjects’ level of trust in the software and the ability they perceived the software as having. Strong negative correlations were found between perceived risk and perceived ability, and between perceived risk and trust.
Article
Full-text available
Many have speculated that trust plays a critical role in stimulating consumer purchases over the Internet. Most of the speculations have rallied around U.S. consumers purchasing from U.S.–based online merchants. The global nature of the Internet raises questions about the robustness of trust effects across cultures. Culture may also affect the antecedents of consumer trust; that is, consumers in different cultures might have differing expectations of what makes a web merchant trustworthy. Here we report on a cross-cultural validation of an Internet consumer trust model. The model examined both antecedents and consequences of consumer trust in a Web merchant. The results provide tentative support for the generalizability of the model.
Article
Full-text available
Chatbots are computer programs that interact with users using natural language. This technology started in the 1960s; the aim was to see if chatbot systems could fool users into believing they were real humans. However, chatbot systems are not only built to mimic human conversation and entertain users. In this paper, we investigate other applications where chatbots could be useful, such as education, information retrieval, business, and e-commerce. A range of chatbots with useful applications, including several based on the ALICE/AIML architecture, are presented in this paper.
Article
Full-text available
Valid measurement scales for predicting user acceptance of computers are in short supply. Most subjective measures used in practice are unvalidated, and their relationship to system usage is unknown. The present research develops and validates new scales for two specific variables, perceived usefulness and perceived ease of use, which are hypothesized to be fundamental determinants of user acceptance. Definitions for these two variables were used to develop scale items that were pretested for content validity and then tested for reliability and construct validity in two studies involving a total of 152 users and four application programs. The measures were refined and streamlined, resulting in two six-item scales with reliabilities of .98 for usefulness and .94 for ease of use. The scales exhibited high convergent, discriminant, and factorial validity. Perceived usefulness was significantly correlated with both self-reported current usage (r=.63, Study 1) and self-predicted future usage (r =.85, Study 2). Perceived ease of use was also significantly correlated with current usage (r=.45, Study 1) and future usage (r=.59, Study 2). In both studies, usefulness had a significantly greater correlation with usage behavior than did ease of use. Regression analyses suggest that perceived ease of use may actually be a causal antecedent to perceived usefulness, as opposed to a parallel, direct determinant of system usage. Implications are drawn for future research on user acceptance.
Article
Full-text available
We evaluate and quantify the effects of human, robot, and environmental factors on perceived trust in human-robot interaction (HRI). To date, reviews of trust in HRI have been qualitative or descriptive. Our quantitative review provides a fundamental empirical foundation to advance both theory and practice. Meta-analytic methods were applied to the available literature on trust and HRI. A total of 29 empirical studies were collected, of which 10 met the selection criteria for correlational analysis and 11 for experimental analysis. These studies provided 69 correlational and 47 experimental effect sizes. The overall correlational effect size for trust was r = +0.26, with an experimental effect size of d = +0.71. The effects of human, robot, and environmental characteristics were examined, with a particular evaluation of the robot dimensions of performance and attribute-based factors. Factors related to the robot itself, specifically its performance and attributes, were the largest contributors to the development of trust in HRI; environmental factors played only a moderate role, and there was little evidence for effects of human-related factors. The findings provide quantitative estimates of the human, robot, and environmental factors influencing HRI trust. Specifically, the current summary provides effect size estimates that are useful in establishing design and training guidelines with reference to robot-related factors of HRI trust. Furthermore, results indicate that improper trust calibration may be mitigated by the manipulation of robot design. However, many future research needs are identified.
Article
Full-text available
Our task is to adopt a multidisciplinary view of trust within and between firms, in an effort to synthesize and give insight into a fundamental construct of organizational science. We seek to identify the shared understandings of trust across disciplines, while recognizing that the divergent meanings scholars bring to the study of trust also can add value.
Article
Full-text available
Two experiments are reported which examined operators' trust in and use of the automation in a simulated supervisory process control task. Tests of the integrated model of human trust in machines proposed by Muir (1994) showed that models of interpersonal trust capture some important aspects of the nature and dynamics of human-machine trust. Results showed that operators' subjective ratings of trust in the automation were based mainly upon their perception of its competence. Trust was significantly reduced by any sign of incompetence in the automation, even one which had no effect on overall system performance. Operators' trust changed very little with experience, with a few notable exceptions. Distrust in one function of an automatic component spread to reduce trust in another function of the same component, but did not generalize to another independent automatic component in the same system, or to other systems. There was high positive correlation between operators' trust in and use of the automation; operators used automation they trusted and rejected automation they distrusted, preferring to do the control task manually. There was an inverse relationship between trust and monitoring of the automation. These results suggest that operators' subjective ratings of trust and the properties of the automation which determine their trust, can be used to predict and optimize the dynamic allocation of functions in automated systems.
Article
Full-text available
We provide an empirical demonstration of the importance of attending to human user individual differences in examinations of trust and automation use. Past research has generally supported the notions that machine reliability predicts trust in automation, and trust in turn predicts automation use. However, links between user personality and perceptions of the machine with trust in automation have not been empirically established. On our X-ray screening task, 255 students rated trust and made automation use decisions while visually searching for weapons in X-ray images of luggage. We demonstrate that individual differences affect perceptions of machine characteristics when actual machine characteristics are constant, that perceptions account for 52% of trust variance above the effects of actual characteristics, and that perceptions mediate the effects of actual characteristics on trust. Importantly, we also demonstrate that when administered at different times, the same six trust items reflect two types of trust (dispositional trust and history-based trust) and that these two trust constructs are differentially related to other variables. Interactions were found among user characteristics, machine characteristics, and automation use. Our results suggest that increased specificity in the conceptualization and measurement of trust is required, future researchers should assess user perceptions of machine characteristics in addition to actual machine characteristics, and incorporation of user extraversion and propensity to trust machines can increase prediction of automation use decisions. Potential applications include the design of flexible automation training programs tailored to individuals who differ in systematic ways.
Article
Scientists are learning more about what makes robots and chatbots engaging.
Conference Paper
With the rise of social media and advancements in AI technology, human-bot interaction will soon be commonplace. In this paper we explore human-bot interaction in Stack Overflow, a question and answer website for developers. For this purpose, we built a bot emulating an ordinary user answering questions concerning the resolution of git error messages. In a first run this bot impersonated a human, while in a second run the same bot revealed its machine identity. Despite being functionally identical, the two bot variants elicited quite different reactions.
Conference Paper
The past four years have seen the rise of conversational agents (CAs) in everyday life. Apple, Microsoft, Amazon, Google and Facebook have all embedded proprietary CAs within their software and, increasingly, conversation is becoming a key mode of human-computer interaction. Whilst we have long been familiar with the notion of computers that speak, the investigative concern within HCI has been upon multimodality rather than dialogue alone, and there is no sense of how such interfaces are used in everyday life. This paper reports the findings of interviews with 14 users of CAs in an effort to understand the current interactional factors affecting everyday use. We find user expectations dramatically out of step with the operation of the systems, particularly in terms of known machine intelligence, system capability and goals. Using Norman's 'gulfs of execution and evaluation' [30] we consider the implications of these findings for the design of future systems.
Article
The notion that companies must go above and beyond in their customer service activities is so entrenched that managers rarely examine it. But a study of more than 75,000 people interacting with contact-center representatives or using self-service channels found that over-the-top efforts make little difference: All customers really want is a simple, quick solution to their problem. The Corporate Executive Board's Dixon and colleagues describe five loyalty-building tactics that every company should adopt: Reduce the need for repeat calls by anticipating and dealing with related downstream issues; arm reps to address the emotional side of customer interactions; minimize the need for customers to switch service channels; elicit and use feedback from disgruntled or struggling customers; and focus on problem solving, not speed. The authors also introduce the Customer Effort Score and show that it is a better predictor of loyalty than customer satisfaction measures or the Net Promoter Score. And they make available to readers a related diagnostic tool, the Customer Effort Audit. They conclude that we are reaching a tipping point that may presage the end of the telephone as the main channel for service interactions and that managers therefore have an opportunity to rebuild their service organizations and put reducing customer effort firmly at the core, where it belongs.
Article
The context in which service is delivered and experienced has, in many respects, fundamentally changed. For instance, advances in technology, especially information technology, are leading to a proliferation of revolutionary services and changing how customers serve themselves before, during, and after purchase. To understand this changing landscape, the authors engaged in an international and interdisciplinary research effort to identify research priorities that have the potential to advance the service field and benefit customers, organizations, and society. The priority-setting process was informed by roundtable discussions with researchers affiliated with service research centers and networks located around the world and resulted in 12 service research priorities.
Article
We examine how susceptible jobs are to computerisation. To assess this, we begin by implementing a novel methodology to estimate the probability of computerisation for 702 detailed occupations, using a Gaussian process classifier. Based on these estimates, we examine expected impacts of future computerisation on US labour market outcomes, with the primary objective of analysing the number of jobs at risk and the relationship between an occupation's probability of computerisation, wages and educational attainment.
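The Gaussian process classification described in this abstract can be sketched at toy scale. The following is a minimal NumPy-only stand-in: full GP classification uses a Laplace or EP approximation, but GP regression on ±1 labels squashed through a logistic conveys the idea. The occupation features, labels and kernel settings here are hypothetical, not the paper's actual O*NET data.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    # Squared-exponential kernel between the row vectors of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_class_prob(X_train, y_pm1, X_test, noise=1e-2, length_scale=1.0):
    """GP regression on +/-1 labels, squashed through a logistic.

    A simplified stand-in for full GP classification (Laplace/EP);
    the logistic slope of 3.0 below is a heuristic sharpening factor.
    """
    K = rbf_kernel(X_train, X_train, length_scale)
    K[np.diag_indices_from(K)] += noise          # jitter for stability
    alpha = np.linalg.solve(K, y_pm1)            # weights on training points
    mean = rbf_kernel(X_test, X_train, length_scale) @ alpha
    return 1.0 / (1.0 + np.exp(-3.0 * mean))     # P(occupation computerisable)

# Hypothetical hand-labelled occupations with two made-up features,
# e.g. [routine-task share, social-intelligence requirement].
X_train = np.array([[0.9, 0.1], [0.8, 0.2], [0.2, 0.9], [0.1, 0.8]])
y_pm1 = np.array([1.0, 1.0, -1.0, -1.0])         # +1 = deemed computerisable
X_test = np.array([[0.85, 0.15], [0.15, 0.85]])
p = gp_class_prob(X_train, y_pm1, X_test)
```

The routine-heavy test occupation lands near the computerisable cluster and gets a high probability; the social-skill-heavy one gets a low probability.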
Article
We consider customer service chat (CSC) systems where customers can receive real time service from agents using an instant messaging (IM) application over the Internet. A unique feature of these systems is that agents can serve multiple customers simultaneously. The number of customers that an agent is serving determines the rate at which each customer assigned to that agent receives service. We consider the staffing problem in CSC systems with impatient customers where the objective is to minimize the number of agents while providing a certain service level. The service level is measured in terms of the proportion of customers who abandon the system in the long run. First we propose effective routing policies based on a static planning LP, both for the cases when the arrival rate is observable and for when the rate is unobservable. We show that these routing policies minimize the proportion of abandoning customers in the long run asymptotically for large systems. We also prove that the staffing solution obtained from a staffing LP, when used with the proposed routing policies, is asymptotically optimal. We illustrate the effectiveness of our solution procedure in systems with small to large sizes via numerical and simulation experiments.
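The staffing logic in this abstract can be illustrated with a crude fluid (deterministic) approximation rather than the paper's static planning LP: pick the concurrency level that maximizes an agent's total throughput, then staff enough agents to carry the arrival flow that must not abandon. The service rates and abandonment target below are hypothetical.

```python
import math

def fluid_staffing(arrival_rate, service_rates, max_abandon_frac):
    """Minimum agent count under a fluid approximation.

    service_rates maps a concurrency level k (customers served at once)
    to the per-customer service rate at that level; an agent's total
    throughput at level k is k * service_rates[k].
    """
    best_throughput = max(k * mu for k, mu in service_rates.items())
    must_serve = arrival_rate * (1.0 - max_abandon_frac)  # flow to be served
    return math.ceil(must_serve / best_throughput)

# Hypothetical rates: serving more chats at once slows each one down,
# but total throughput still peaks at concurrency 2 or 3 (12 chats/hour).
agents = fluid_staffing(100.0, {1: 10.0, 2: 6.0, 3: 4.0}, 0.05)
```

With 100 arrivals per hour and at most 5% abandonment, 95 chats/hour must be served at 12 chats/hour per agent, so the sketch staffs 8 agents; the paper's LP additionally handles routing and proves asymptotic optimality, which this back-of-envelope version does not.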
Article
Self-service technologies (SSTs) are increasingly changing the way customers interact with firms to create service outcomes. Given that the emphasis in the academic literature has focused almost exclusively on the interpersonal dynamics of service encounters, there is much to be learned about customer interactions with technology-based self-service delivery options. In this research, the authors describe the results of a critical incident study based on more than 800 incidents involving SSTs solicited from customers through a Web-based survey. The authors categorize these incidents to discern the sources of satisfaction and dissatisfaction with SSTs. The authors present a discussion of the resulting critical incident categories and their relationship to customer attributions, complaining behavior, word of mouth, and repeat purchase intentions, which is followed by implications for managers and researchers.
Article
Unlike the traditional bricks-and-mortar marketplace, the online environment includes several distinct factors that influence brand trust. As consumers become more savvy about the Internet, the author contends they will insist on doing business with Web companies they trust. This study examines how brand trust is affected by the following Web purchase-related factors: security, privacy, brand name, word-of-mouth, good online experience, and quality of information. The author argues that not all e-trust building programs guarantee success in building brand trust. In addition to the mechanism depending on a program, building e-brand trust requires a systematic relationship between a consumer and a particular Web brand. The findings show that brand trust is not built on one or two components but is established by the interrelationships between complex components. By carefully investigating these variables in formulating marketing strategies, marketers can cultivate brand loyalty and gain a formidable competitive edge.
Article
Ten conditions of trust were suggested by 84 interviews of managers, and two previous studies of managerial trust. Statements made in the interviews and the studies were used to develop a content theory of trust conditions and derive scales measuring them. The scales were generated with an iterative procedure using a total of 1531 management students. The scales were assessed for homogeneity, reliability, and validity with several samples: 180 managers and 173 of their subordinates, 111 machine operators, and four different samples of management students (n = 380, n = 129, n = 290, and n = 132). Construct validity was supported by showing that the scale measures behaved as hypothesized with respect to measures of other variables, a manipulation of expectations, and the reciprocity of trust in vertical dyads.
Article
Scholars in various disciplines have considered the causes, nature, and effects of trust. Prior approaches to studying trust are considered, including characteristics of the trustor, the trustee, and the role of risk. A definition of trust and a model of its antecedents and outcomes are presented, which integrate research from multiple disciplines and differentiate trust from similar constructs. Several research propositions based on the model are presented.
Article
The present studies were designed to test whether people are "polite" to computers. Among people, an interviewer who directly asks about him- or herself will receive more positive and less varied responses than if the same question is posed by a 3rd party. Two studies were designed to determine if the same phenomenon occurs in human-computer interaction. In the 1st study, 30 Ss performed a task with a text-based computer and were then interviewed about the performance of that computer on 1 of 3 loci: (1) the same computer, (2) a pencil-and-paper questionnaire, or (3) a different (but identical) text-based computer. Consistent with the politeness prediction, same-computer participants evaluated the computer more positively and more homogeneously than did either pencil-and-paper or different computer participants. Study 2, with 30 participants, replicated the results with voice-based computers.
Article
In this study a psychometric instrument specifically designed to measure human-computer trust (HCT) was developed and tested. A rigorous method similar to that described by Moore and Benbasat (1991) was adopted. It was found that both cognitive and affective components of trust could be measured and that, in this study, the affective components were the strongest indicators of trust. The reliability of the instrument, measured as Cronbach's alpha, was 0.94. This instrument is the first of its kind to be specifically designed to measure HCT and shown empirically to be valid and reliable.
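Cronbach's alpha, the reliability figure of 0.94 reported in this abstract, is straightforward to compute from an item-score matrix: it compares the sum of the individual item variances to the variance of the total score. A sketch with made-up questionnaire scores (1-5 Likert responses, hypothetical):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # sample variance per item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of summed scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Five hypothetical respondents answering three trust items.
scores = np.array([[4, 5, 4],
                   [2, 2, 3],
                   [5, 5, 5],
                   [3, 4, 3],
                   [1, 2, 1]])
alpha = cronbach_alpha(scores)
```

Because the three columns move together across respondents, alpha comes out high; perfectly correlated items would give exactly 1.0.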
Article
Trust is emerging as a key element of success in the on-line environment. Although considerable research on trust in the offline world has been performed, to date empirical study of on-line trust has been limited. This paper examines on-line trust, specifically trust between people and informational or transactional websites. It begins by analysing the definitions of trust in previous offline and on-line research. The relevant dimensions of trust for an on-line context are identified, and a definition of trust between people and informational or transactional websites is presented. We then turn to an examination of the causes of on-line trust. Relevant findings in the human–computer interaction literature are identified. A model of on-line trust between users and websites is presented. The model identifies three perceptual factors that impact on-line trust: perception of credibility, ease of use and risk. The model is discussed in detail and suggestions for future applications of the model are presented.
Conference Paper
Given the importance of credibility in computing products, the research on computer credibility is relatively small. To enhance knowledge about computers and credibility, we define key terms relating to computer credibility, synthesize the literature in this domain, and propose three new conceptual frameworks for better understanding the elements of computer credibility. To promote further research, we then offer two perspectives on what computer users evaluate when assessing credibility. We conclude by presenting a set of credibility-related terms that can serve in future research and evaluation endeavors.
Article
This study examines the impact of culture on trust determinants in computer-mediated commerce transactions. Adopting trust-building foundations from cross-culture literature and focusing on a set of well-established cultural constructs as groups of culture ...
Article
Despite the phenomenal growth of Internet users in recent years, the penetration rate of Internet shopping is still low and one of the most often cited reasons is the lack of consumers’ trust (e.g. Hoffman et al., 1999). Although trust is an important concept in Internet shopping, there is a paucity of theory-guided empirical research in this area. In this paper, a theoretical model is proposed for investigating the nature of trust in the specific context of Internet shopping. In this model, consumers’ trust in Internet shopping is affected by propensity to trust and two groups of antecedent factors, namely, "trustworthiness of Internet vendors" and "external environment". Trust, in turn, reduces consumers’ perceived risk in Internet shopping. As an important step towards the rigorous testing of the model, the necessary measurement instrument has been developed with its reliability and validity empirically tested. The psychometric properties of the measurement instrument have been investigated using both a classical approach (based on Cronbach’s alpha and exploratory factor analysis) and a contemporary approach (based on structural equation modeling techniques), as a way of methods triangulation for validating instrument properties. The resulting instrument represents a rigorously developed and validated instrument for the measurement of various important trust related constructs. This research contributes to the development of trust theory in e-commerce and adds to the repository of rigorous research instruments for IS survey researchers to use.
Article
Eliza is a program operating within the MAC time-sharing system at MIT which makes certain kinds of natural language conversation between man and computer possible. Input sentences are analyzed on the basis of decomposition rules which are triggered by key words appearing in the input text. Responses are generated by reassembly rules associated with selected decomposition rules. The fundamental technical problems with which Eliza is concerned are: (1) the identification of key words, (2) the discovery of minimal context, (3) the choice of appropriate transformations, (4) generation of responses in the absence of key words, and (5) the provision of an editing capability for Eliza scripts. A discussion of some psychological issues relevant to the Eliza approach as well as of future developments concludes the paper.
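The keyword/decomposition/reassembly cycle this abstract describes can be sketched in a few lines. The rules below are hypothetical illustrations, not Weizenbaum's actual DOCTOR script, and real Eliza also reflects pronouns during reassembly (e.g. "my" → "your"), which this sketch omits.

```python
import re

# Each rule pairs a decomposition pattern (keyword plus captured context)
# with a reassembly template that reuses the captured fragment.
RULES = [
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."  # response in the absence of key words

def respond(sentence):
    # Scan rules in order; the first matching keyword wins.
    for pattern, template in RULES:
        m = pattern.search(sentence)
        if m:
            return template.format(*m.groups())
    return DEFAULT
```

For example, `respond("I need a refund")` reassembles the captured fragment into "Why do you need a refund?", while input with no keyword falls through to the content-free continuation prompt.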