Chapter

Human Versus Machine: Contingency Factors of Anthropomorphism as a Trust-Inducing Design Strategy for Conversational Agents


Abstract

Conversational agents are increasingly popular in various domains of application. Due to their ability to interact with users in human language, anthropomorphizing these agents to positively influence users’ trust perceptions seems justified. Indeed, conceptual and empirical arguments support the trust-inducing effect of anthropomorphic design. However, an opposing research stream that has been widely overlooked provides evidence that human-likeness reduces agents’ trustworthiness. Based on a thorough analysis of the psychological mechanisms underlying the contradictory theoretical positions, we propose that the agent substitution type acts as a situational moderator variable on the positive relationship between anthropomorphic design and agents’ trustworthiness. We argue that different agent types are associated with distinct user expectations that influence the cognitive evaluation of anthropomorphic design. We further discuss how these differences translate into neurophysiological responses and propose an experimental set-up using a combination of behavioral, self-report and eye-tracking data to empirically validate our proposed model.




... Since humans place social expectations on computers, anthropomorphization increases users' trust in computer agents (i.e., the more human-like a computer is, the more trust humans place in it). The second point of view is that humans place more trust in computerized systems than in other humans (Seeger & Heinzl, 2018). According to this view, high-quality automation leads to a more fruitful interaction because the machine seems to be more objective and rational than a human. ...
... According to this view, high-quality automation leads to a more fruitful interaction because the machine seems to be more objective and rational than a human. Humans tend to trust computer systems more than other humans because humans are expected to be imperfect while the opposite is true for automation (Dijkstra, Liebrand, & Timminga, 1998; Seeger & Heinzl, 2018). ...
Article
Chatbots are used frequently in business to facilitate various processes, particularly those related to customer service and personalization. In this article, we propose novel methods of tracking human-chatbot interactions and measuring chatbot performance that take into consideration ethical concerns, particularly trust. Our proposed methodology links neuroscientific methods, text mining, and machine learning. We argue that trust is the focal point of successful human-chatbot interaction and assess how trust as a relevant category is being redefined with the advent of deep-learning-supported chatbots. We propose a novel method of analyzing the content of messages produced in human-chatbot interactions, using the Condor Tribefinder system we developed for text mining that is based on a machine learning classification engine. Our results will help build better social bots for interaction in business or commercial environments.
... Another reason is their discontent with the technology [70]. The uncanniness may also result from too much anthropomorphism, which can reduce the user's trust level, since having bad intentions is attributed to human-like behaviour (but not to machines) [60]. The discrepancy between expectation and actual performance can lead to the users' anger and frustration, subsequently lowering the adoption rate [70]. ...
... Consumers perceive lower trustworthiness in Alexa compared to computers. Trust as a factor in the adoption and usage of voice assistants has been discussed intensively in the literature [11,60] and therefore seems to play a pivotal role in their usage. Although the exact reason for the lower trustworthiness remains unknown in this experiment, manufacturers should proactively minimise skepticism towards voice assistants and AI technology with multiple approaches. ...
... First, the factors that influence user trust in conversational systems are not limited to Competence Perception, which was the only dimension investigated in our study. Anthropomorphism [68], security and privacy [20] are additional relevant dimensions of user trust. However, these dimensions are frequently discussed in the context of user trust in customer service chatbots and are influenced by additional personal characteristics, such as affective states [2] and privacy concerns [65]. ...
Preprint
Full-text available
Conversational recommender systems (CRSs) imitate human advisors to assist users in finding items through conversations and have recently gained increasing attention in domains such as media and e-commerce. Like in human communication, building trust in human-agent communication is essential given its significant influence on user behavior. However, inspiring user trust in CRSs with a "one-size-fits-all" design is difficult, as individual users may have their own expectations for conversational interactions (e.g., who, user or system, takes the initiative), which are potentially related to their personal characteristics. In this study, we investigated the impacts of three personal characteristics, namely personality traits, trust propensity, and domain knowledge, on user trust in two types of text-based CRSs, i.e., user-initiative and mixed-initiative. Our between-subjects user study (N=148) revealed that users' trust propensity and domain knowledge positively influenced their trust in CRSs, and that users with high conscientiousness tended to trust the mixed-initiative system.
... As a fourth process, algorithmic recommendations can be persuasive as a result of automation bias, which refers to people attributing greater trust to machines and their recommendations compared to other sources of recommendation (Sundar, 2020). Automated decisions made by algorithms can be perceived as more objective and rational than decisions made by humans (Clerwall, 2014; Seeger and Heinzl, 2018) and, as a result, lead to more favorable responses to content provided by algorithms (e.g., Graefe et al., 2018). Therefore, the automation bias mechanism could serve as an explanation of how users evaluate algorithm-driven communication. ...
Article
Full-text available
Purpose The purpose of this study is to introduce a comprehensive and dynamic framework that focuses on the role of algorithms in persuasive communication: the algorithmic persuasion framework (APF). Design/methodology/approach In this increasingly data-driven media landscape, algorithms play an important role in the consumption of online content. This paper presents a novel conceptual framework to investigate algorithm-mediated persuasion processes and their effects on online communication. Findings The APF consists of five conceptual components: input, algorithm, persuasion attempt, persuasion process and persuasion effects. In short, it addresses how data variables are inputs for different algorithmic techniques and algorithmic objectives, which influence the manifestations of algorithm-mediated persuasion attempts, informing how such attempts are processed and their intended and unintended persuasive effects. Originality/value The paper guides future research by addressing key elements in the framework and the relationship between them, proposing a research agenda (with specific research questions and hypotheses) and discussing methodological challenges and opportunities for the future investigation of the framework.
... Coming to the effect, questions on providing greetings were assessed. No specific change was noted in either version [14]. ...
... Considering subjective aspects that can influence the user experience in interaction, people tend to believe computers more than humans, among other effects that a more human-like chatbot may provoke in the user's perception; cf. (Schuetzler et al., 2019), (Seeger & Heinzl, 2018). And there is also a notion that, even with the advances in AI and the widespread dissemination of bots in customer service, there is still a long way to go before conversational agents present something close to human characteristics; cf. ...
Article
Full-text available
This article proposes a reflection on two important aspects of cutting-edge efforts toward Digital Transformation: the use of Artificial Intelligence supporting conversational agents and the issue of usability for improving the user experience (UX). The discussion focuses both on the potential these technologies present for leveraging Digital Transformation and increasing UX, and on the barriers or constraints they can constitute if they do not adhere to a user-centric approach.
... There are some ways to recover lost trust in an intelligent agent. For instance, the agent could employ anthropomorphic cues without altering other behavioral features (Pak et al., 2012;Seeger and Heinzl, 2018;de Visser et al., 2016) or explain why it failed (Dzindolet et al., 2003). However, because the study of trust repair in the context of computers is relatively new, there is a lack of knowledge regarding how to repair lost trust in the agent. ...
Article
Trust is essential in individuals’ perception, behavior, and evaluation of intelligent agents. Because it is the primary motive for people to accept new technology, it is crucial to repair trust when it is damaged. This study investigated how intelligent agents should apologize to recover trust, and how the effectiveness of the apology differs when the agent is human-like rather than machine-like, based on two seemingly competing frameworks: the Computers-Are-Social-Actors paradigm and automation bias. A 2 (agent: human-like vs. machine-like) × 2 (apology attribution: internal vs. external) between-subjects design experiment was conducted (N = 193) in the context of the stock market. Participants were presented with a scenario in which they made investment choices based on an artificial intelligence agent’s advice. To trace the trajectory of the initial trust-building, trust violation, and trust repair process, we designed an investment game consisting of five rounds of eight investment choices (40 investment choices in total). The results show that trust was repaired more efficiently when a human-like agent apologized with internal rather than external attribution. However, the opposite pattern was observed among participants who had machine-like agents; the external rather than internal attribution condition showed better trust repair. Both theoretical and practical implications are discussed.
... The attribution of a humanlike mind was found by de Visser et al. (2016) to be much facilitated by a humanlike appearance of robots, and to hold also, and more specifically, for recommendation agents. In summary, behavioral and physiological measures from different studies confirm that individuals feel greater trust for computer agents that display more anthropomorphic features (de Visser et al., 2017; Seeger & Heinzl, 2018). ...
Article
Advances in artificial intelligence provide new tools of digital assistance that retailers can use to support consumers while shopping. The aim of this research is to examine how consumers react as a function of assistants’ appearance (human-like vs. non-human-like) and activation (automatic vs. human-initiated). We advance a model of sequential mediation whose empirical validation on 400 participants in two studies shows that non-anthropomorphic digital assistants lead to higher psychological reactance. In turn, reactance affects perceived choice difficulty, which positively reflects on choice certainty, perceived performance and, ultimately, satisfaction. Thus, although reactance might appear to be a negative outcome, it eventually leads to higher satisfaction. Furthermore, initiation (system vs. user initiation) does not activate the chain of effects, but significantly interacts with anthropomorphism, so that individuals exhibit lower reactance when confronted with human-like digital assistants activated by the consumer. Overall, reactance is highest for non-human-like digital assistants that are computer-initiated.
... Hence, chat-bot designers deliberately build in traits that engender trust from users, such as predictability, natural language traits and anthropomorphism. In the latter case, the amount of anthropomorphism is critical, as too much similarity to humans can create an 'uncanny valley' situation where the user becomes put off by the small differences between the chat-bot and a real human [11]. Studies have shown that people interacting with chat-bots are likely to become overly trusting, handing over personal information and even passwords without proper consideration [12]. ...
Chapter
This chapter describes the latest developments in Conversational Intelligent Tutoring Systems (CITS), which are e-learning systems that adopt Computational Intelligence approaches to mimic a human tutor. CITS deliver a personalized, human-like tutorial via natural language. Like expert human tutors, CITS automatically profile a learner’s knowledge and skills and also profile and adapt to an individual’s affective traits (such as mood or personality) to improve their learning. The chapter explores the challenges for CITS design that prevent them from moving into the mainstream and highlights scalability issues that the research community must address. Two cutting-edge CITS are described. Oscar CITS dynamically profiles learning styles using learner behaviour throughout the tutorial conversation, and adapts at a question level, resulting in significantly better learning gain. Hendrix 2.0 CITS uses a bank of artificial neural networks to automatically profile a learner’s comprehension using webcam images, and intervenes to help improve learner motivation and avoid impasse. The considerable issue of trust in AI is discussed, and a study of public opinion on ethical use of AI in education showed that more needs to be done in educating the public on the benefits and risks of AI in education.
Article
Full-text available
Due to its immense popularity amongst marketing practitioners, online personalized advertising is increasingly becoming the subject of academic research. Although advertisers need to collect a large amount of customer information to develop customized online adverts, the effect of how this information is collected on advert effectiveness has been surprisingly understudied. Equally overlooked is the interplay between consumer’s emotions and the process of consumer data collection. Two studies were conducted with the aim of closing these important gaps in the literature. Our findings revealed that overt user data collection techniques produced more favourable cognitive, attitudinal and behavioral responses than covert techniques. Moreover, consistent with the self-validation hypothesis, our data revealed that the effects of these data collection techniques can be enhanced (e.g., via happiness and pride), attenuated (e.g., via sadness), or even eliminated (e.g., via guilt), depending on the emotion experienced by the consumer while viewing an advert.
Article
The rampant misinformation amid the COVID-19 pandemic demonstrates an obvious need for persuasion. This article draws on the fields of digital rhetoric and rhetoric of science, technology, and medicine to explore the persuasive threats and opportunities machine communicators pose to public health. As a specific case, Alexa and the machine’s performative similarities to the Oracle at Delphi are tracked alongside the voice-based assistant’s further resonances with the discourses of expert systems to develop an account of the machine’s rhetorical energies. From here, machine communicators are discussed as optimal deliverers of inoculations against misinformation in light of the fact that their performances are attended by rhetorical energies that can enliven persuasions against misinformation.
Article
Full-text available
Recently, WhatsApp allowed commercial brands to initiate private chat conversations with users through their direct messaging platform. With more than 1 billion users, it is important to have insights into their trust in brands on WhatsApp, as well as their willingness to disclose personal information to these brands. Using data from a national representative survey, we find that the perceived security, perceived privacy, and perceived socialness of WhatsApp as a platform are positively associated with trusting brands on that messaging platform. In turn, brand trust positively influences consumers' intentions to disclose information to brands on WhatsApp. Finally, these results are also compared with Facebook Messenger; there are significant differences between the two messaging platforms.
Article
Full-text available
People are increasingly using social networking sites (SNSs) like Facebook and MySpace to engage with others. The use of SNSs can have both positive and negative effects on the individual; however, the increasing use of SNSs might indicate that people seek out SNSs because they have a positive experience when they use them. Few studies have tried to identify which particular aspects of the social networking experience make SNSs so successful. In this study we focus on the affective experience evoked by SNSs. In particular, we explore whether the use of SNSs elicits a specific psychophysiological pattern. Specifically, we recorded skin conductance, blood volume pulse, electroencephalogram, electromyography, respiratory activity, and pupil dilation in 30 healthy subjects during a 3-minute exposure to (a) a slide show of natural panoramas (relaxation condition), (b) the subject's personal Facebook account, and (c) a Stroop and mathematical task (stress condition). Statistical analysis of the psychophysiological data and pupil dilation indicates that the Facebook experience was significantly different from stress and relaxation on many linear and spectral indices of somatic activity. Moreover, the biological signals revealed that Facebook use can evoke a psychophysiological state characterized by high positive valence and high arousal (Core Flow State). These findings support the hypothesis that the successful spread of SNSs might be associated with a specific positive affective state experienced by users when they use their SNSs account.
Conference Paper
Full-text available
This study reports the results of a laboratory experiment exploring interactions between humans and a conversational agent. Using the ChatScript language, we created a chat bot that asked participants to describe a series of images. The two objectives of this study were (1) to analyze the impact of dynamic responses on participants' perceptions of the conversational agent, and (2) to explore behavioral changes in interactions with the chat bot (i.e. response latency and pauses) when participants engaged in deception. We discovered that a chat bot that provides adaptive responses based on the participant's input dramatically increases the perceived humanness and engagement of the conversational agent. Deceivers interacting with a dynamic chat bot exhibited consistent response latencies and pause lengths while deceivers with a static chat bot exhibited longer response latencies and pause lengths. These results give new insights on social interactions with computer agents during truthful and deceptive interactions.
Article
Full-text available
Assessment of text relevance is an important aspect of human–information interaction. For many search sessions it is essential to achieving the task goal. This work investigates text relevance decision dynamics in a question-answering task by direct measurement of eye movement using eye-tracking and brain activity using electroencephalography (EEG). The EEG measurements are correlated with the user's goal-directed attention allocation revealed by their eye movements. In a within-subject lab experiment (N = 24), participants read short news stories of varied relevance. Eye movement and EEG features were calculated in three epochs of reading each news story (early, middle, final) and for periods where relevant words were read. Perceived relevance classification models were learned for each epoch. The results show that reading epochs where relevant words were processed could be distinguished from other epochs. The classification models show increasing divergence in processing relevant vs. irrelevant documents after the initial epoch. This suggests differences in the cognitive processes used to assess texts of varied relevance levels and provides evidence for the potential to detect these differences in information search sessions using eye tracking and EEG.
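The epoch-level relevance classification described above can be sketched roughly as follows. The feature set, the synthetic data, and the choice of a logistic-regression classifier are illustrative assumptions, not the study's actual pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical per-epoch features: mean fixation duration (ms),
# fixation count, and an EEG band-power value, for reading epochs
# labeled as perceived-relevant (1) vs. perceived-irrelevant (0).
n = 200
X_relevant = rng.normal(loc=[220.0, 14.0, 0.8], scale=[30.0, 3.0, 0.2], size=(n, 3))
X_irrelevant = rng.normal(loc=[180.0, 10.0, 1.1], scale=[30.0, 3.0, 0.2], size=(n, 3))
X = np.vstack([X_relevant, X_irrelevant])
y = np.array([1] * n + [0] * n)

# Learn a per-epoch relevance classifier and estimate its accuracy
# with 5-fold cross-validation.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(round(scores.mean(), 2))
```

In the study's design, one such model would be trained per reading epoch (early, middle, final), allowing the divergence between relevant and irrelevant documents to be compared across epochs.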
Thesis
Full-text available
Information and communication services have become ubiquitous in our everyday life and, in turn, research on Ubiquitous Information Systems (UIS) has received increasing attention. UIS services can elicit both negative and positive emotions, which are not necessarily perceived consciously by individuals but which may still have an impact on predictors and outcomes of UIS service use. Due to the limitations of psychological self-reports in uncovering these automatic cognitive processes, the current work investigates emotional stimuli of UIS services with neurophysiological data. In particular, we choose electrodermal activity as an indicator of physiological arousal and assess its utility for the design and use of UIS services. To account for the neurophysiological nature of electrodermal activity and to investigate its value in relation to established self-report instruments, we integrate the stimulus-organism-response paradigm with a two-systems view of cognitive processing. Against the background of this theoretical framework, we hypothesise relationships between breakdown events of UIS services (the emotional stimuli), physiological arousal and perceived ease of use (manifestations of the organism's automatic and inferential cognitive processes), and task performance (the response of the organism). We also consider physiological learning processes related to generalisation effects. In order to test the hypotheses, we use empirical data from two studies. Results indicate that electrodermal activity is a useful measure for the design and use of UIS services, even though generalisation effects can reduce its reliability. Moreover, we demonstrate that electrodermal activity is related to perceived ease of use and task performance. We finally discuss the theoretical and practical implications of our results, examine the limitations of the current work and outline future research.
Article
Full-text available
Recently, researchers have started using cognitive load in various settings, e.g., educational psychology, cognitive load theory, or human-computer interaction. Cognitive load characterizes a task's demand on the limited information processing capacity of the brain. The widespread adoption of eye-tracking devices has led to increased attention to objectively measuring cognitive load via pupil dilation. However, this approach requires a standardized data processing routine to reliably measure cognitive load. This technical report presents CEP-Web, an open source platform providing state-of-the-art data processing routines for cleaning pupillary data, combined with a graphical user interface enabling the management of studies and subjects. Future developments will include support for analyzing the cleaned data as well as support for Task-Evoked Pupillary Response (TEPR) studies.
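The kind of pupillary cleaning routine such a platform standardizes typically involves blink removal, gap interpolation, and smoothing. A generic sketch of that idea (not CEP-Web's actual implementation; the blink convention and window size are assumptions):

```python
import numpy as np
import pandas as pd

def clean_pupil_trace(samples, blink_value=0.0, smooth_window=5):
    """Illustrative pupil-diameter cleaning: mark blink samples
    (often recorded as zeros by eye trackers) as missing, interpolate
    the gaps linearly, then apply a centered moving-average smoother."""
    s = pd.Series(samples, dtype=float)
    s[s == blink_value] = np.nan                     # blinks -> missing
    s = s.interpolate(limit_direction="both")        # fill gaps linearly
    return s.rolling(smooth_window, min_periods=1, center=True).mean()

# A short trace with a two-sample blink in the middle.
trace = [3.1, 3.2, 0.0, 0.0, 3.4, 3.5, 3.4]
cleaned = clean_pupil_trace(trace)
print(cleaned.isna().sum())  # 0: no missing samples remain
```

A standardized routine like this matters because uncorrected blink artifacts would otherwise dominate any pupil-dilation-based cognitive load estimate.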
Conference Paper
Full-text available
As information systems (IS) are increasingly able to create highly engaging and interactive experiences, the phenomenon of flow is considered a promising vehicle to understand pre-adoptive and post-adoptive IS user behavior. However, despite a strong interest of researchers and practitioners in flow, the reliability, validity, hypothesized relationships, and measurement of flow constructs in current IS literature remain challenging. By reviewing extant literature in top IS outlets, this paper develops an integrative theoretical framework of flow antecedents, flow constructs, and flow consequences within IS research. In doing so, we identify and discuss four major flow streams in IS research and indicate future research directions.
Article
Full-text available
Because technostress research is multidisciplinary in nature and therefore benefits from insights gained from various research disciplines, we expected a high degree of measurement pluralism in technostress studies published in the Information Systems (IS) literature. However, because IS research, in general, mostly relies on self-report measures, there is also reason to assume that multi-method research designs have been largely neglected in technostress research. To assess the status quo of technostress research with respect to the application of multi-method approaches, we analyzed 103 empirical studies. Specifically, we analyzed the types of data collection methods used and the investigated components of the technostress process (person, environment, stressors, strains, and coping). The results indicate that multi-method research is more prevalent in the IS technostress literature (approximately 37% of reviewed studies) than in the general IS literature (approximately 20% as reported in previous reviews). However, our findings also show that IS technostress studies significantly rely on self-report measures. We argue that technostress research constitutes a nurturing ground for the application of multi-method approaches and multidisciplinary collaboration.
Chapter
Full-text available
I report results from an experiment on the relationship between visual website complexity and users’ mental workload. Applying a pupillary based workload assessment as a NeuroIS methodology, I found indications that a balanced level of navigation complexity, i.e., the number of (sub)menus, in combination with a balanced level of information complexity, is the best choice from a user’s mental workload perspective.
Article
Full-text available
The use of video has become well established in education, from traditional courses to blended and online courses. It has grown both in its diversity of applications as well as its content. Such educational video however is not fully accessible to all students, particularly those who require additional visual support or students studying in a foreign language. Subtitles (also known as captions) represent a unique solution to these language and accessibility barriers, however, the impact of subtitles on cognitive load in such a rich and complex multimodal environment has yet to be determined. Cognitive load is a complex construct and its measurement by means of single indirect and unidimensional methods is a severe methodological limitation. Building upon previous work from several disciplines, this paper moves to establish a multimodal methodology for the measurement of cognitive load in the presence of educational video. We show how this methodology, with refinement, can allow us to determine the effectiveness of subtitles as a learning support in educational contexts. This methodology will also make it possible to analyse the impact of other multimedia learning technology on cognitive load.
Thesis
Full-text available
Business process models have gained significant importance due to their critical role for managing business processes. In particular, process models support the common understanding of a company’s business processes, enable the discovery of improvement opportunities, and serve as drivers for the implementation of business processes. Still, a wide range of quality problems have been observed. For example, literature reports on error rates between 10% and 20% in industrial process model collections. Most research in the context of quality issues of process models puts a strong emphasis on the outcome of the process modeling act by analyzing the resulting model. However, it is rarely considered that process model quality is presumably dependent on the process followed to create the process model. This thesis strives for addressing this gap by specifically investigating the process of creating process models. In this context, different actions on several levels of abstraction might be considered, including elicitation and formalization of process models. During elicitation information is gathered, which is used in formalization phases for actually creating the formal process model. This thesis focuses on the formalization of process models, which can be considered a process by itself—the Process of Process Modeling (PPM). Due to the lack of an established theory, we follow a mixed method approach to exploratively investigate the PPM. This way, different perspectives are combined to develop a comprehensive understanding. In this context, we attempt to address the following research objectives. First, means for recording and performing a detailed analysis of the PPM are required. For this, a specialized modeling environment, Cheetah Experimental Platform (CEP), is developed, allowing a systematic investigation of the PPM. Further, a visualization for the PPM, i.e., Modeling Phase Diagrams (MPDs), is presented to support data exploration and hypotheses generation.
Second, we attempt to observe and categorize reoccurring behavior of modelers to develop an understanding on how process models are created. Finally, we investigate factors that influence the PPM to understand why certain behavior can be observed. The findings are condensed to form a model on the factors that influence the PPM. Summarized, this thesis proposes means for analyzing the PPM and presents initial findings to form an understanding on how the formalization of process models is conducted and why certain behavior can be observed. While the results cannot be considered an established theory, this work constitutes a first building block toward a comprehensive understanding of the PPM. This will ultimately improve process model quality by facilitating the development of specialized modeling environments, which address potential pitfalls during the creation of process models.
Article
Full-text available
We present physiological text annotation, which refers to the practice of associating physiological responses with text content in order to infer characteristics of the user's information needs and affective responses. Text annotation is a laborious task, and implicit feedback has been studied as a way to collect annotations without requiring any explicit action from the user. Previous work has explored behavioral signals, such as clicks or dwell time, to automatically infer annotations, while physiological signals have mostly been explored for image or video content. We report on two experiments in which physiological text annotation is studied first to (1) indicate perceived relevance and then to (2) indicate affective responses of the users. The first experiment tackles the user’s perception of the relevance of an information item, which is fundamental to revealing the user’s information needs. The second experiment is then aimed at revealing the user’s affective responses towards a relevant text document. Results show that physiological user signals are associated with relevance and affect. In particular, electrodermal activity was found to be different when users read relevant content than when they read irrelevant content, and was found to be lower when reading texts with negative emotional content than when reading texts with neutral content. Together, the experiments show that physiological text annotation can provide valuable implicit inputs for personalized systems. We discuss how our findings help design personalized systems that can annotate digital content using human physiology without the need for any explicit user interaction.
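The core statistical comparison behind such a finding, testing whether electrodermal activity differs between reading conditions, can be sketched as follows. The data here are synthetic and the effect size is an assumption for illustration; the paper's actual measurements and analysis are not reproduced:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical mean electrodermal activity levels (microsiemens) per
# participant while reading relevant vs. irrelevant documents.
eda_relevant = rng.normal(loc=4.2, scale=0.6, size=40)
eda_irrelevant = rng.normal(loc=3.6, scale=0.6, size=40)

# Independent-samples t-test on the two reading conditions.
t, p = stats.ttest_ind(eda_relevant, eda_irrelevant)
print(round(p, 4))
```

In a personalized system, a reliable per-condition difference like this is what would let implicit physiological signals stand in for explicit relevance annotations.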
Article
Full-text available
Recommender systems have become a natural part of the user experience in today's online world. These systems are able to deliver value both for users and providers and are one prominent example where the output of academic research has a direct impact on advancements in industry. In this article, we have briefly reviewed the history of this multidisciplinary field and looked at recent efforts in the research community to consider the variety of factors that may influence the long-term success of a recommender system. The list of open issues and success factors is still far from complete, and new challenges arise constantly that require further research. For example, the huge amounts of user data and preference signals that become available through the Social Web and the Internet of Things not only lead to technical challenges such as scalability, but also to societal questions concerning user privacy. Based on our reflections on the developments in the field, we finally emphasize the need for a more holistic research approach that combines the insights of different disciplines. We urge that research focus even more on practical problems that matter and are truly suited to increase the utility of recommendations from the viewpoint of the users.
Conference Paper
Full-text available
Research on the process of process modeling (PPM) studies how process models are created. It typically uses the logs of the interactions with the modeling tool to assess the modeler's behavior. In this paper we suggest introducing an additional stream of data (i.e., eye tracking) to improve the analysis of the PPM. We show that, by exploiting this additional source of information, we can refine the detection of comprehension phases (introducing activities such as "semantic validation" or "problem understanding") as well as provide more exploratory visualizations (e.g., combined modeling phase diagram, heat maps, fixation distributions), both static and dynamic (i.e., movies with the evolution of the model and eye tracking data on top).
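One of the visualizations mentioned above, the fixation heat map, can be sketched as a coarse spatial binning of fixation durations over the modeling canvas. The grid cell size, the coordinate format, and the fixation values below are assumptions for illustration, not details taken from the paper.

```python
from collections import Counter

def fixation_heatmap(fixations, cell=100):
    """Bin (x, y, duration_ms) fixations into a grid of `cell`-pixel squares
    and accumulate total fixation duration per grid cell."""
    heat = Counter()
    for x, y, duration in fixations:
        heat[(x // cell, y // cell)] += duration
    return heat

# Hypothetical fixations on the modeling canvas: (x_px, y_px, duration_ms).
fixations = [(120, 80, 250), (130, 90, 400), (560, 300, 180)]
heat = fixation_heatmap(fixations)
# The first two fixations fall into the same 100-pixel cell (1, 0),
# so their durations accumulate there.
```

Rendering the resulting counter as a color map over the model canvas would yield the kind of static heat-map view the paper describes.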
Article
Full-text available
Previous studies were able to demonstrate different verbally stated affective responses to environments. In the present study we used objective measures of emotion. We examined startle reflex modulation as well as changes in heart rate and skin conductance while subjects virtually walked through six different areas of urban Paris using the StreetView tool of Google maps. Unknown to the subjects, these areas were selected based on their median real estate prices. First, we found that price highly correlated with subjective rating of pleasantness. In addition, relative startle amplitude differed significantly between the area with lowest versus highest median real estate price while no differences in heart rate and skin conductance were found across conditions. We conclude that interaction with environmental scenes does elicit emotional responses which can be objectively measured and quantified. Environments activate motivational and emotional brain circuits, which is in line with the notion of an evolutionary developed system of environmental preference. Results are discussed in the frame of environmental psychology and aesthetics.
Article
Full-text available
We interact daily with computers that appear and behave like humans. Some researchers propose that people apply the same social norms to computers as they do to humans, suggesting that social psychological knowledge can be applied to our interactions with computers. In contrast, theories of human-automation interaction postulate that humans respond to machines in unique and specific ways. We believe that anthropomorphism, the degree to which an agent exhibits human characteristics, is the critical variable that may resolve this apparent contradiction across the formation, violation, and repair stages of trust. Three experiments were designed to examine these opposing viewpoints by varying the appearance and behavior of automated agents. Participants received advice that deteriorated gradually in reliability from a computer, avatar, or human agent. Our results showed (a) that anthropomorphic agents were associated with greater trust resilience, a higher resistance to breakdowns in trust; (b) that these effects were magnified by greater uncertainty; and (c) that incorporating human-like trust repair behavior largely erased differences between the agents. Automation anthropomorphism is therefore a critical variable that should be carefully incorporated into any general theory of human-agent trust as well as novel automation design.
Article
We have created an automated kiosk that uses embodied intelligent agents to interview individuals and detect changes in arousal, behavior, and cognitive effort by using psychophysiological information systems. In this paper, we describe the system and propose a unique class of intelligent agents, which are described as Special Purpose Embodied Conversational Intelligence with Environmental Sensors (SPECIES). SPECIES agents use heterogeneous sensors to detect human physiology and behavior during interactions, and they affect their environment by influencing human behavior using various embodied states (i.e., gender and demeanor), messages, and recommendations. Based on the SPECIES paradigm, we present three studies that evaluate different portions of the model, and these studies are used as foundational research for the development of the automated kiosk. The first study evaluates human–computer interaction and how SPECIES agents can change perceptions of information systems by varying appearance and demeanor. Instantiations that had the agents embodied as males were perceived as more powerful, while female embodied agents were perceived as more likable. Similarly, smiling agents were perceived as more likable than neutral demeanor agents. The second study demonstrated that a single sensor measuring vocal pitch provides SPECIES with environmental awareness of human stress and deception. The final study ties the first two studies together and demonstrates an avatar-based kiosk that asks questions and measures the responses using vocalic measurements.
Article
Background: Child and adolescent obesity is increasingly prevalent, and can be associated with significant short- and long-term health consequences. Objectives: To assess the efficacy of lifestyle, drug and surgical interventions for treating obesity in childhood. Search methods: We searched CENTRAL on The Cochrane Library Issue 2 2008, MEDLINE, EMBASE, CINAHL, PsycINFO, ISI Web of Science, DARE and NHS EED. Searches were undertaken from 1985 to May 2008. References were checked. No language restrictions were applied. Selection criteria: We selected randomised controlled trials (RCTs) of lifestyle (i.e. dietary, physical activity and/or behavioural therapy), drug and surgical interventions for treating obesity in children (mean age under 18 years) with or without the support of family members, with a minimum of six months follow up (three months for actual drug therapy). Interventions that specifically dealt with the treatment of eating disorders or type 2 diabetes, or included participants with a secondary or syndromic cause of obesity were excluded. Data collection and analysis: Two reviewers independently assessed trial quality and extracted data following the Cochrane Handbook. Where necessary authors were contacted for additional information. Main results: We included 64 RCTs (5230 participants). Lifestyle interventions focused on physical activity and sedentary behaviour in 12 studies, diet in 6 studies, and 36 concentrated on behaviorally orientated treatment programs. Three types of drug interventions (metformin, orlistat and sibutramine) were found in 10 studies. No surgical intervention was eligible for inclusion. The included studies varied greatly in intervention design, outcome measurements and methodological quality. Meta-analyses indicated a reduction in overweight at 6 and 12 months follow up in: i) lifestyle interventions involving children; and ii) lifestyle interventions in adolescents with or without the addition of orlistat or sibutramine.
A range of adverse effects was noted in drug RCTs. Authors' conclusions: While there is limited quality data to recommend one treatment program to be favoured over another, this review shows that combined behavioural lifestyle interventions compared to standard care or self-help can produce a significant and clinically meaningful reduction in overweight in children and adolescents. In obese adolescents, consideration should be given to the use of either orlistat or sibutramine, as an adjunct to lifestyle interventions, although this approach needs to be carefully weighed up against the potential for adverse effects. Furthermore, high quality research that considers psychosocial determinants for behaviour change, strategies to improve clinician-family interaction, and cost-effective programs for primary and community care is required.
Article
When unit prices were posted on separate shelf tags in a supermarket, consumer expenditures decreased by 1%. When unit prices were displayed also on an organized list, consumer savings were 3%. In addition, the list format caused a 5% increase in the market shares of store brands. The benefits to both consumers and retailers justify the cost of providing unit price information on a widespread basis.
Article
This article discusses the role of commonly used neurophysiological tools such as psychophysiological tools (e.g., EKG, eye tracking) and neuroimaging tools (e.g., fMRI, EEG) in Information Systems research. There is heated interest now in the social sciences in capturing presumably objective data directly from the human body, and this interest in neurophysiological tools has also been gaining momentum in IS research (termed NeuroIS). This article first reviews commonly used neurophysiological tools with regard to their major strengths and weaknesses. It then discusses several promising application areas and research questions where IS researchers can benefit from the use of neurophysiological data. The proposed research topics are presented within three thematic areas: (1) development and use of systems, (2) IS strategy and business outcomes, and (3) group work and decision support. The article concludes with recommendations on how to use neurophysiological tools in IS research along with a set of practical suggestions for developing a research agenda for NeuroIS and establishing NeuroIS as a viable subfield in the IS literature.
Article
Several independent lines of research bear on the question of why individuals avoid decisions by postponing them, failing to act, or accepting the status quo. This review relates findings across several different disciplines and uncovers 4 decision avoidance effects that offer insight into this common but troubling behavior: choice deferral, status quo bias, omission bias, and inaction inertia. These findings are related by common antecedents and consequences in a rational-emotional model of the factors that predispose humans to do nothing. Prominent components of the model include cost-benefit calculations, anticipated regret, and selection difficulty. Other factors affecting decision avoidance through these key components, such as anticipatory negative emotions, decision strategies, counterfactual thinking, and preference uncertainty, are also discussed.
Conference Paper
Considerable progress regarding impact factors of process model understandability has been achieved. For example, it has been shown that layout features of process models have an effect on model understandability. Even so, it appears that our knowledge about the modeler’s behavior regarding the layout of a model is very limited. In particular, research focuses on the end product or the outcome of the process modeling act rather than the act itself. This paper extends existing research by opening this black box and introducing an enhanced technique enabling the visual analysis of the modeler’s behavior towards layout. We demonstrate examples showing that our approach provides valuable insights to better understand and support the creation of process models. Additionally, we sketch challenges impeding this support for future research.
Chapter
In the history of science, all influential theories sooner or later begin to live their own life, detached from the author. Sometimes, especially in the case of complex theories, the new life (or lives) of a theory may also become detached from the original theory. This is also true about the theory created by Vygotsky and elaborated by Luria. We can find increasingly many approaches that all declare a close relationship to the original Vygotskian cultural-historical psychology. Luria’s neuropsychology, based on principles formulated by Vygotsky, is also commonly mentioned in neuropsychology textbooks today. Yet, the ways in which the Vygotskian cultural-historical psychology is used today seem to deviate from the original. Deviation from old theories would not be a problem in itself, because new evidence and more developed conceptualizations of the studied phenomena often provide rational grounds for modifying older approaches. In the case of cultural-historical psychology, however, contemporary modifications do not always rely on strong evidence or advanced theoretical thinking. There are strong reasons to suggest that Vygotsky’s theory has been misinterpreted by mainstream scholars of today (e.g. Mahn, 2010; Veresov, 2010). The same can be said about Luria’s neuropsychology. For instance, one of the experts in Luria’s theory, Christensen, wrote in the introduction to a book dedicated to Luria’s legacy: “The Zeitgeist, or cultural tradition, in many countries was not as yet ready for an evaluation based on a theory as advanced as Luria’s”; she continued: “On the whole through all areas presented in this volume, support has been obtained for the updating of Luria’s legacy with the purpose of reintroducing his methods into clinical work with brain-injured patients” (Christensen, 2009, p. 11; my emphasis). 
In this chapter, I am going to show that Vygotsky–Luria’s cultural-historical approach to neuropsychology is pregnant with promises for many new discoveries that may lead to fundamental changes in our understanding of the human mind. These discoveries become possible only when we ask questions that are rarely asked today – questions that directly follow from cultural-historical neuropsychology.
Article
Evidence has indicated that the neuroactive hormone oxytocin is essential for prosocial behavior, particularly trust. Exogenous administration of oxytocin has been shown to increase trust in humans. However, one may argue that, except the administration of oxytocin in nonhealthy patient groups (e.g., those with autism or anxiety disorders) to alleviate negative symptoms, external administration of oxytocin has little relevance in normal life. Music, a ubiquitous stimulus in human society, has been shown to increase oxytocin in medical therapy scenarios. Considering this evidence, we conducted a trust game experiment with a sample of healthy humans and investigated music’s effects on the (a) trustor’s oxytocin levels (blood sample measurement), (b) investment amount (trust behavior measurement), and (c) perception of the other player’s trustworthiness (self-report). The results of our exploratory study show that an increase in oxytocin levels over 40 trials in a trust game increased perceived trustworthiness in the no-music condition but had no impact on investment amount (i.e., trust behavior). Moreover, music had no effect on oxytocin, trust behavior, or perceived trustworthiness. Thus, unlike prior research showing that music listening may increase self-reported trust in another individual, in the present study we found no effect of music on trust (on either a physiological or behavioral level). We surmise that this finding is a result of both the type of music played during task execution and music preferences. Thus, future research must carefully manipulate music features (e.g., pitch, rhythm, timbre, tempo, meter, contour, loudness, and spatial location) and consider a listener’s music preferences to better understand music’s effects on physiological, perceived, and behavioral trust.
Conference Paper
Search effort is an important aspect of Interactive Information Retrieval (IIR). Prior findings show that higher cognitive ability searchers tend to perform more actions than lower ability searchers. In an eye-tracking lab study we investigated the effects of working memory (WM) on search effort. The findings show that higher WM searchers perform more actions and that most significant differences are in time spent on reading results pages. We also show that behavior of high and low WM searchers changes differently in the course of a search task performance.
Article
As human-machine communication has yet to become prevalent, the rules of interactions between humans and intelligent machines need to be explored. This study aims to investigate a specific question: During human users' initial interactions with artificial intelligence, would they reveal their personality traits and communicative attributes differently than in human-human interactions? A sample of 245 participants was recruited to view six targets' twelve conversation transcripts on a social media platform: half with a chatbot, Microsoft's Little Ice, and half with human friends. The findings suggested that when the targets interacted with Little Ice, they demonstrated different personality traits and communication attributes than when interacting with humans. Specifically, users tended to be more open, more agreeable, more extroverted, more conscientious and self-disclosing when interacting with humans than with AI. The findings not only echo Mischel's cognitive-affective processing system model but also complement the Computers Are Social Actors Paradigm. Theoretical implications were discussed.
Article
Evaluative conditioning (EC) effects on established liked and disliked brands were measured via self report, startle reflex modulation (SRM), heart rate (HR), skin conductance (SC), and the Implicit Association Test (IAT). Baseline measures were compared with measures taken after 1, 6, and 16 conditioning procedures. The aim was to determine how the different measures are differently sensitive to EC effects. Although self-report indicated conditioning effects already after 1 conditioning procedure and in both directions, the authors believe this to be an artifact due to a regression to the mean effect and thus reject this finding. Similarly, HR and SC did not show any sensitivity to conditioning effects. However, SRM and the IAT revealed significant conditioning effects, but more than 1 conditioning procedure was needed to cause changes. Most importantly, SRM, the only implicit measure of raw affective processing (subcortical), did show a significant EC effect after six conditioning procedures, but only in the case of disliked brands turning into more liked brands. Because implicit measures are assumed to be more sensitive to deep subcortical affective processing, it is concluded that this level of affective processing is more easily influenced by evaluative conditioning than higher order (cortical) processing levels. The findings are discussed in terms of different aspects of brand attitude (affective and cognitive) that seem to be differently affected by EC. Implications for marketers and advertisers are suggested.
Article
Previous research has shown that individual decision making is often characterized by inertia—that is, a tendency for decision makers to choose options that maintain the status quo. In this study, I conduct a laboratory experiment to investigate two potential determinants of inertia in uncertain environments: (i) regret aversion and (ii) ambiguity-driven indecisiveness. I use a between-subjects design with varying conditions to identify the effects of these two mechanisms on choice behavior. In each condition, participants choose between two simple real gambles, one of which is the status quo option. The findings indicate that regret aversion and ambiguity-driven indecisiveness are equally important determinants of inertia, which in turn plays a major role in individual decision making. (JEL Codes: C91, D01, D03, D81).
Article
Sociomateriality is gaining acceptance in the IS field as a way of taking technology more seriously, but not without its share of criticism. Technology itself is defined in many different ways. Coupled with the broader debates surrounding the complex issues and controversies around the relationship between the social and the material, discussions on how technology is tied to work and organizations will continue to develop. The goal of this special issue is to contribute towards clarifying what the tenets of sociomateriality mean for IS research. Beginning with this editorial that elaborates on material agency in IT, the articles in the special issue discuss post-humanism and notion of separability and inseparability, compare the tenets of sociomateriality with critical realism, propose a method for researching sociomateriality, and elaborate on how a view of ontological fusion provides a more holistic view of a digitally-infused society.
Article
Recent accounts of problematic electronic gaming machine (EGM) gambling have suggested attentional pathology among at-risk players. A putative slot machine zone is characterized by an intense immersion during game play, causing a neglect of outside events and competing goals. Prior studies of EGM immersion have relied heavily upon retrospective self-report scales. Here, the authors attempt to identify behavioral and psychophysiological correlates of the immersion experience. In samples of undergraduate students and experienced EGM users from the community, they tested 2 potential behavioral measures of immersion during EGM use: peripheral target detection and probe-caught mind wandering. During the EGM play sessions, electrocardiogram data were collected for analysis of respiratory sinus arrhythmia (RSA), a measure of calming self-regulation governed by the parasympathetic nervous system. Subjective measures of immersion during the EGM play session were consistently related to risk of problem gambling. Problem gambling score, in turn, significantly predicted decrements in peripheral target detection among experienced EGM users. Both samples showed robust RSA decreases during EGM play, indicating parasympathetic withdrawal, but neither immersion nor gambling risk were related to this change. This study identifies peripheral attention as a candidate for quantifying game immersion and its links with risk of problem gambling, with implications for responsible gambling interventions at both the game and venue levels.
Article
Major emergencies are high-stakes, ambiguous, dynamic, and stressful events. Emergency response commanders rely on their expertise and training to mitigate these factors and implement action. The Critical Decision Method was used to interview 31 commanders from the police (n = 12), fire and rescue (n = 15), and ambulance services (n = 4) in the United Kingdom about challenges to decision making. Transcripts were analyzed in 2 ways: (a) using thematic analyses to categorize the challenges to incident command and (b) grounded theory to develop a theoretical understanding of how challenges influenced decision processing. There were 9 core challenges to incident command, themed into 2 categories: (a) those relating to the perceived characteristics of the incident itself; and (b) those relating to uncertainties about (inter)personal dynamics of the team(s) responding. Consideration of challenges featured prominently in decision makers’ prospective modeling, especially when thinking about goal accomplishment (i.e., What if I deploy now? What if I do not?). Commanders were motivated to save life (attack/approach goal), yet also sought to prevent harm (defend/avoid goal). Challenges led commanders to redundantly deliberate about what to do; their prospective modeling was related to the anticipation of potential negative consequences that might arise both for acting (attack) and not acting (defend). Commanders identified this difficult trade-off, yet described how experience and their responsibility as a commander gave them confidence to overcome decision inertia. Future research is needed to identify whether decision making training on how to anticipate and overcome difficult cognitive trade-offs would lead to more flexible and expedient commanding.
Article
The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimension feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensure the high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. We here extend this result to non-separable training data. High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
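The core idea of the abstract — a linear decision surface built in an implicit high-dimensional feature space via a polynomial input transformation — can be sketched with a kernel-based decision function. The support vectors, coefficients, and bias below are assumed to be already given (e.g., by a trained solver), not computed here; this is an illustration of the decision rule, not of SVM training.

```python
def poly_kernel(u, v, degree=2):
    """Polynomial kernel: the inner product of u and v in an implicit
    high-dimensional feature space, computed without mapping explicitly."""
    return (1 + sum(a * b for a, b in zip(u, v))) ** degree

def decision(x, support_vectors, labels, alphas, b):
    """SVM decision rule: a kernel-weighted vote over the support vectors.
    alphas (dual coefficients) and b (bias) are assumed pre-trained."""
    s = sum(a * y * poly_kernel(sv, x)
            for sv, y, a in zip(support_vectors, labels, alphas))
    return 1 if s + b >= 0 else -1
```

With hypothetical support vectors (1, 1) and (-1, -1) carrying labels +1 and -1, the decision function separates points by which corner of the plane they lie near, even though the separating surface lives in the quadratic feature space.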
Article
We have long envisioned that one day computers will understand natural language and anticipate what we need, when and where we need it, and proactively complete tasks on our behalf. As computers get smaller and more pervasive, how humans interact with them is becoming a crucial issue. Despite numerous attempts over the past 30 years to make language understanding (LU) an effective and robust natural user interface for computer interaction, success has been limited and scoped to applications that were not particularly central to everyday use. However, speech recognition and machine learning have continued to be refined, and structured data served by applications and content providers has emerged. These advances, along with increased computational power, have broadened the application of natural LU to a wide spectrum of everyday tasks that are central to a user's productivity. We believe that as computers become smaller and more ubiquitous [e.g., wearables and Internet of Things (IoT)], and the number of applications increases, both system-initiated and user-initiated task completion across various applications and web services will become indispensable for personal life management and work productivity. In this article, we give an overview of personal digital assistants (PDAs); describe the system architecture, key components, and technology behind them; and discuss their future potential to fully redefine human-computer interaction.
Chapter
Using electroencephalography (EEG), this study aims at extracting three features from an instantaneous mental workload measure and linking them to different aspects of the workload construct. An experiment was designed to investigate the effect of two workload inductors (task difficulty and uncertainty) on the extracted features, along with a subjective measure of mental workload. Results suggest that both subjective and objective measures of workload are able to capture the effect of task difficulty; however, only accumulated load was found to be sensitive to task uncertainty. We discuss how the three EEG measures derived from instantaneous workload can be used as criteria for designing more efficient information systems.
Article
We investigated differences in reading strategies in relation to information search task goals and perceived text relevance. Our findings demonstrate that some aspects of reading when looking for a specific target word are similar to reading relevant texts to find information, while other aspects are similar to reading irrelevant texts to find information. We also show significant differences in pupil dilation on final fixations on relevant words and on relevance decisions. Our results show feasibility of using eye-tracking data to infer timing of decisions made on information search tasks in relation to the required depth of information processing and the relevance level.
Article
Warning messages are fundamental to users’ security interactions. Unfortunately, they are largely ineffective, as shown by prior research. A key contributor to this failure is habituation: decreased response to a repeated warning. Previous research has only inferred the occurrence of habituation to warnings, or measured it indirectly, such as through the proxy of a related behavior. Therefore, there is a gap in our understanding of how habituation to security warnings develops in the brain. Without direct measures of habituation, we are limited in designing warnings that can mitigate its effects. In this study, we use neurophysiological measures to directly observe habituation as it occurs in the brain and behaviorally. We also design a polymorphic warning artifact that repeatedly changes its appearance in order to resist the effects of habituation. In an experiment using functional magnetic resonance imaging (fMRI; n = 25), we found that our polymorphic warning was significantly more resistant to habituation than were conventional warnings in regions of the brain related to attention. In a second experiment (n = 80), we implemented the four most resistant polymorphic warnings in a realistic setting. Using mouse cursor tracking as a surrogate for attention to unobtrusively measure habituation on participants’ personal computers, we found that polymorphic warnings reduced habituation compared to conventional warnings. Together, our findings reveal the substantial influence of neurobiology on users’ habituation to security warnings and security behavior in general, and we offer our polymorphic warning design as an effective solution for practice.
Chapter
Neuroscience research on human motivation in the workplace is still in its infancy. There is a large industrial and organizational (IO) psychology literature containing numerous theories of motivation, relating to prosocial and productive, and, less so, "darker" antisocial and counterproductive, behaviors. However, the development of a viable over-arching theoretical framework has proved elusive. In this chapter, we argue that basic neuropsychological systems related to approach, avoidance, and their conflict, may provide such a framework, one which we discuss in terms of the Reinforcement Sensitivity Theory (RST) of personality. We argue that workplace behaviors may be understood by reference to the motivational types that are formed from the combination of basic approach, avoidance, and conflict-related personalities. We offer suggestions for future research to explore workplace behaviors in terms of the wider literature on the neuroscience of motivation.
Article
System-generated alerts are ubiquitous in personal computing and, with the proliferation of mobile devices, daily activity. While these interruptions provide timely information, research shows they come at a high cost in terms of increased stress and decreased productivity. This is due to dual-task interference (DTI), a cognitive limitation in which even simple tasks cannot be simultaneously performed without significant performance loss. Although previous research has examined how DTI impacts the performance of a primary task (the task that was interrupted), no research has examined the effect of DTI on the interrupting task. This is an important gap because in many contexts, failing to heed an alert (the interruption itself) can introduce critical vulnerabilities. Using security messages as our context, we address this gap by using functional magnetic resonance imaging (fMRI) to explore how (1) DTI occurs in the brain in response to interruptive alerts, (2) DTI influences message security disregard, and (3) the effects of DTI can be mitigated by finessing the timing of the interruption. We show that neural activation is substantially reduced under a condition of high DTI, and the degree of reduction in turn significantly predicts security message disregard. Interestingly, we show that when a message immediately follows a primary task, neural activity in the medial temporal lobe is comparable to when attending to the message is the only task. Further, we apply these findings in an online behavioral experiment in the context of a web-browser warning. We demonstrate a practical way to mitigate the DTI effect by presenting the warning at low-DTI times, and show how mouse cursor tracking and psychometric measures can be used to validate low-DTI times in other contexts. Our findings suggest that although alerts are pervasive in personal computing, they should be bounded in their presentation.
The timing of interruptions strongly influences the occurrence of DTI in the brain, which in turn substantially impacts alert disregard. This paper provides a theoretically grounded, cost-effective approach to reduce the effects of DTI for a wide variety of interruptive messages that are important but do not require immediate attention.