Article

Abstract

A strong debate has ensued in the computing community about whether Embodied Conversational Agents (ECAs) are beneficial and whether we should pursue this direction in interface design. Proponents cite the naturalness and power of ECAs as strengths, and detractors feel that ECAs disempower, mislead, and confuse users. As this debate rages on, relatively little systematic empirical evaluation on ECAs is actually being performed, and the results from this research have been contradictory or equivocal. We propose a framework for evaluating ECAs that can systematize the research. The framework emphasizes features of the agent, the user, and the task the user is performing. Our goal is to be able to make informed, scientific judgments about the utility of ECAs in user interfaces. If intelligent agents can be built, are there tasks or applications for which an ECA is appropriate? Are there characteristics (in appearance, in personality, etc.) the ECA should have? What types of users will be more productive and happy by interacting with an ECA? Our initial experiment within this framework manipulated the ECA’s appearance (realistic human versus iconic object) and the objectivity of the user’s task (editing a document versus deciding what to pack on a trip). We found that the perception of the ECA was strongly influenced by the task while features of the ECA that we manipulated had little effect.


... We use the term "automation" to refer collectively to the variety of computer-driven technologies that humans interact with toward some interaction goal (like McDermott and ten Brink [27] use "automation" and de Visser et al. [11] use the general term "robot") and "agent" to refer to a single entity that represents an underlying automated system (like Catrambone, Stasko and Xiao [5]). Studying automation anthropomorphism can illuminate the dynamics of both human-human and human-automation trust. ...
... Research on embodied conversational agents (ECAs) has taken this view, noting that the appropriateness of an anthropomorphic representation for a technological system can vary across tasks [5] and across users [3]. Cassell aptly summarizes the role that humanness can play in user perceptions and behavior when discussing ECAs: "I want both to help users steer their way through complex descriptions of the world and to prod them into automatically applying such a theory of mind as will allow them to not have to spend their time constructing awkward new theories of the machine's intelligence on the fly" [4]. ...
... There are two general predictions regarding trust appropriateness suggested by this existing literature: 1) humanlike interfaces may elicit more appropriate trust since we are socially accustomed and biologically attuned to interactions with other humans, the view expressed in research on ECAs [3][4][5], or 2) the objectively non-human nature of automated systems may make our tendency to anthropomorphize a vulnerability, leading us to misguided expectations and inappropriate trust. Since we do not use the decreasing reliability paradigm used in de Visser et al.'s experiments, and because trust appropriateness has not been explicitly measured before, H2 is a somewhat naive hypothesis based on this first prediction, that greater humanness allows for better (i.e., more appropriate) trust calibration: H2. Participants interacting with more humanlike agents will have more appropriate trust than those interacting with less humanlike agents. ...
... Note that this score measures users' depressive moods at the time of measurement but does not diagnose them. The following survey items gauge users' attitudes towards chatbots based on the literature [12,20,25]. Participant responses were assessed using five-point Likert scales, and the score for each variable was calculated by averaging the items if there were multiple questions. ...
... We adapted this scale to a five-point format, which demonstrated satisfactory internal consistency reliability exceeding 0.7 (Cronbach's α = 0.87, mean = 3.32, SD = 0.93). • Truthfulness: This dimension assesses the level of trust users place in the chatbot [20]. We used a single question to evaluate whether users trust the chatbot's accuracy, by asking them if they generally rely on the information it shares; 'what the chatbot says is mostly true'. ...
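The scale construction described in these excerpts (averaging several five-point Likert items into one score and checking internal consistency against the conventional 0.7 threshold) is straightforward to reproduce. The sketch below is a minimal illustration with made-up responses, not the authors' analysis code; the function name and data are assumptions for the example.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of Likert responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 6 respondents x 4 five-point Likert items.
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [2, 2, 1, 2],
    [5, 4, 5, 5],
    [3, 4, 3, 3],
    [4, 4, 5, 4],
])

alpha = cronbach_alpha(responses)
scores = responses.mean(axis=1)  # per-respondent scale score = mean of the items

print(f"Cronbach's alpha = {alpha:.2f} (values >= 0.7 are usually taken as acceptable)")
print(f"Scale mean = {scores.mean():.2f}, SD = {scores.std(ddof=1):.2f}")
```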
Conference Paper
Natural language processing is enabling machines to communicate with humans naturally, yet the dynamics of extended user-chatbot interactions remain largely unexplored. This study characterizes the conversational styles, demographics, psychologies, and emotional tendencies of the most active users (i.e., top 1% by message count) of a commercial chatbot platform (SimSimi.com), whom we refer to as superusers. We analyzed the linguistic patterns and topics of 1,988,971 messages written by 1,994 superusers over a period of three years. We further surveyed 76 users to observe their emotional dispositions and perceptions towards the chatbot. We find that SimSimi superusers empathize and humanize the chatbot more than less active users, and they show a higher tendency to share personal and negative feelings. Our findings suggest that chatbots require new design considerations for users who are vulnerable due to their high anthropomorphism and openness toward machines. Our work also shows that chatbots should have functions to offer social support when necessary.
... But as one tries to trace it, two distinct approaches stand out despite early unified frameworks [Tha96]. Works on autonomous virtual agents represent a first line of research [CSX04]. In this perspective, embodiment aims to provide a natural interface with agent functionalities by using human-human interaction routines [CSX04]. ...
... Works on autonomous virtual agents represent a first line of research [CSX04]. In this perspective, embodiment aims to provide a natural interface with agent functionalities by using human-human interaction routines [CSX04]. The other line of research deals with user embodiment, for example in collaborative interfaces [BBF*95]. ...
Article
Full-text available
A dichotomy exists in the way virtual embodiments are currently studied: embodied entities are considered by conversational approaches as other selves whereas avatar approaches study them as users' hosts. Virtual reality applications such as in our case study often propose a different, in-between embodiment experience. In the context of a virtual visit to a house for sale, this paper aims at examining the user's self-reported embodiment perception resulting from such a hybrid experience. To induce variability in this embodiment experience, we manipulated avatar representations (high versus low anthropomorphism) and frame of reference (egocentric versus exocentric). Results show the importance of the entity humanness to foster both experiences. When controlling for humanness, having a conversational experience appears uncorrelated to an avatar experience. This highlights the need to study these hybrid experiences as a combination of both approaches.
... Although many studies have investigated the effects of interface characters (see [3] [17]) and overviews of relevant factors exist (e.g. [16] [1]), thus far an integrative model is missing. Therefore, we aim to model the relations between design factors of interface characters and user perceptions, engagement, and use [19]. ...
Article
Full-text available
People who are confronted with an interface character, for example an embodied agent, perceive both its appearance as well as the action possibilities (affordances) it offers for task execution. Appearance aspects such as aesthetics explain user engagement with the interface character (Engagement process). Outcome expectations of using affordances explain use intentions (Interaction process). Interestingly, dependencies between these processes may exist. Appearance aspects may influence use intentions (people are more keen to use a graphically nicely designed than a graphically ugly designed system), and affordances may influence user engagement with the interface character (people are more engaged with an interface character that offers help than an interface character that obstructs task completion). As our theoretical basis, we use the I-PEFiC model, which integrates the Engagement and Interaction processes. We propose an experimental study based on this model which will give indications on how bonds between humans and interface characters are established or broken down.
... Which is the most likeable?). Also worthy of mention are Catrambone et al.'s [13] efforts, which combine personal information about users (e.g., gender and personality) and their opinions regarding an ECA (e.g., 'The agent was friendly', 'annoying' or 'cold') with interaction performance measures. An interesting observation from their study was that the perception of the ECA was strongly influenced by the nature of the task. ...
... This can make users overly optimistic about the conversational capabilities of the system, leading ultimately to disappointment with the interaction. The ECA might also be distracting and hinder the users' concentration (new users especially) on the goals of the interaction [13,38]. The purpose of a behaviour sequence at dialogue initiation should therefore be to present a human-like interface that captures the attention of the users, focuses it on the interaction goal and directs users straight to its pursuit. ...
Article
This paper explores the use of embodied conversational agents (ECAs) and their visual communicative ability to improve interaction with spoken language dialogue systems (SLDSs) through an experimental case study in the application context of secure access by speaker verification followed by remote home automation control. After identifying a set of typical interaction problems with SLDSs and associating with each of them a particular ECA gesture or behaviour, we conducted a comparative evaluation based on ITU recommendations for the evaluation of spoken dialogue systems. User tests were carried out dividing the test users into two groups, each facing a different interface setup: one with an ECA, and the other only with voice output. The ECA group encountered fewer interaction problems. Users’ impressions, however, were similar in both groups, with a slight advantage observed for the ECA group. In particular, the ECA seems to help users to better understand the flow of the dialogue and reduce confusion. Results also suggest that rejection (based on privacy and security concerns) is a dimension in its own right that may influence subjective evaluation parameters closely related to user acceptance. Keywords: Embodied conversational agents, Non-verbal communication, Robustness, Comparative evaluation, Spoken dialogue system, Voice authentication, Speaker verification, Gesture design
... The chatbot we designed for experimentation was not a task-oriented chatbot but rather a social chatbot. Therefore, we assessed the chatbot's helpfulness and likeability by referencing prior studies on social chatbots (Catrambone, Stasko, and Xiao 2004; Chin, Molefi, and Yi 2020). The helpfulness-related questions centred on the convenience of use for agent interaction, the utility of the agent, and intelligence, and were measured using five items (Helpfulness: Cronbach's α = 0.81, mean = 2.09, SD = 0.68). ...
Article
Chatbots possess great potential benefits, yet concerns persist regarding users adopting inappropriate, offensive language. This research delved into the influence of user characteristics on verbally aggressive behaviours towards social chatbots. Employing a mixed-method study, we examined individual characteristics such as personal dispositions, offensive language patterns, academic majors, and prior experiences with conversational agents. Findings from a ten-day field experiment involving 33 participants using a real-world Telegram-based chatbot app unveiled that users' anthropomorphism, computer-related major, and gender significantly impact their moral emotions and evaluations of the chatbot's capabilities. Moreover, employing offensive language towards the chatbot detrimentally impacted users' perceptions of its abilities, helpfulness, and likability. The research findings advocate for ongoing monitoring and effective resolution of users' behaviours regarding the use of offensive language in their interactions with a chatbot. Additionally, the results underscore the importance of incorporating diverse perspectives into chatbot design to address biases and offensive utterances.
... The experimental tasks needed both breadth and depth to test the social facilitation effect but needed to be applicable to the realm of virtual humans. It seems likely that virtual humans would be most helpful with high level cognitive tasks [78]. Some tasks can be opinion-like (e.g., choosing what to bring on a trip), and others can be more objective (e.g., implementing edits in a document). ...
Article
Full-text available
As a virtual human is provided with more human-like characteristics, will it elicit stronger social responses from people? Two experiments were conducted to address these questions. The first experiment investigated whether virtual humans can evoke a social facilitation response and how strong that response is when people are given different cognitive tasks that vary in difficulty. The second experiment investigated whether people apply politeness norms to virtual humans. Participants were tutored either by a human tutor or a virtual human tutor that varied in features and then evaluated the tutor’s performance. Results indicate that virtual humans can produce social facilitation not only with facial appearance but also with voice. In addition, performance in the presence of a voice-synced facial appearance seems to elicit stronger social facilitation than in the presence of voice only or face only. Similar findings were observed with the politeness norm experiment. Participants who evaluated their tutor directly reported the tutor’s performance more favorably than participants who evaluated their tutor indirectly. This valence toward the voice-synced facial appearance did not differ statistically from the valence toward the human tutor condition. The results suggest that designers of virtual humans should be mindful about the social nature of virtual humans.
... We adopted items from Bartneck et al.'s [8] study to assess the anthropomorphism, likability, and perceived intelligence of an agent. We included an extra questionnaire item, used in prior research [15,18], to measure the agents' tone clarity. For this post-survey, responses to the questionnaire items were measured using five-point Likert scales. ...
Conference Paper
With the popularity of AI-infused systems, conversational agents (CAs) are becoming essential in diverse areas, offering new functionality and convenience, but simultaneously, suffering misuse and verbal abuse. We examine whether conversational agents' response styles under varying abuse types influence those emotions found to mitigate people's aggressive behaviors, involving three verbal abuse types (Insult, Threat, Swearing) and three response styles (Avoidance, Empathy, Counterattacking). Ninety-eight participants were assigned to one of the abuse type conditions, interacted with the three spoken (voice-based) CAs in turn, and reported their feelings about guiltiness, anger, and shame after each session. The results show that the agent's response style has a significant effect on user emotions. Participants were less angry and more guilty with the empathy agent than the other two agents. Furthermore, we investigated the current status of commercial CAs' responses to verbal abuse. Our study findings have direct implications for the design of conversational agents.
... Potential users of chatbots can differ greatly with regard to their demographic (age, gender, etc.) or geographic data (e.g., country of origin), but also with regard to their usage behavior (sporadic users, experienced users, etc.) (Catrambone et al. 2004). For this reason, a chatbot's social cues can also affect different user groups differently, which is why users' individual characteristics should be taken into account in socio-technical design. ...
Chapter
Full-text available
Chatbots are software-based systems that interact with humans via natural language. Many companies are increasingly turning to chatbots to support customers in finding information about products or services and in carrying out simple processes. Nevertheless, user acceptance of chatbots is currently still low. One reason for this is that interaction with chatbots rarely feels natural and human. There is therefore growing recognition that, in addition to a good technical platform, further factors should be considered when designing chatbots. The socio-technical design of chatbots therefore focuses on a chatbot's social cues. These social cues (e.g., smiling, language style, or response speed) play a major role not only in interpersonal communication but also in interaction with chatbots. This article explains the fundamentals of the socio-technical design of chatbots, illustrates the effect of social cues with a research example, and critically discusses the anthropomorphization of chatbots.
... Moreover, the system should help chatbot engineers to account for various influencing factors that impact user reactions towards a chatbot social cue design (C3). In this context, Catrambone et al. (2004) propose to account for eleven features of the user and an additional eleven features of the context and task, which makes a chatbot social cue design decision highly complex. To reduce the decision complexity, a chatbot social cue configuration system should account for all potential influencing factors and then provide a suitable social cue design recommendation (R3). ...
Conference Paper
Full-text available
Social cues (e.g., gender, age) are important design features of chatbots. However, choosing a social cue design is challenging. Although much research has empirically investigated social cues, chatbot engineers have difficulties to access this knowledge. Descriptive knowledge is usually embedded in research articles and difficult to apply as prescriptive knowledge. To address this challenge, we propose a chatbot social cue configuration system that supports chatbot engineers to access descriptive knowledge in order to make justified social cue design decisions (i.e., grounded in empirical research). We derive two design principles that describe how to extract and transform descriptive knowledge into a prescriptive and machine-executable representation. In addition, we evaluate the prototypical instantiations in an exploratory focus group and at two practitioner symposia. Our research addresses a contemporary problem and contributes with a generalizable concept to support researchers as well as practitioners to leverage existing descriptive knowledge in the design of artifacts.
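To make the idea of a social cue configuration system more concrete, the sketch below encodes a few invented descriptive-knowledge findings as machine-readable rules and looks up a recommendation for a given user and context profile. The feature names, rules, and sources are illustrative assumptions only, not the knowledge base or architecture described by the authors.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    """One empirically derived finding, stated in a machine-readable form."""
    condition: dict      # user/context features the finding applies to
    cue: str             # social cue the finding concerns
    recommendation: str  # prescriptive design advice derived from the finding
    source: str          # publication the rule was extracted from

# Invented placeholder rules standing in for empirically grounded findings.
RULES = [
    Rule({"user_experience": "novice", "task": "information_search"},
         "emoticons", "use sparingly to appear approachable", "Doe et al. 2019"),
    Rule({"user_experience": "expert", "task": "information_search"},
         "emoticons", "avoid; experts may rate them as unprofessional", "Roe 2021"),
    Rule({"user_age_group": "older_adults", "task": "transactional"},
         "response_delay", "add a short typing delay to seem less machine-like", "Poe 2020"),
]

def recommend(profile: dict) -> list:
    """Return (cue, recommendation, source) for every rule whose condition matches."""
    return [(r.cue, r.recommendation, r.source)
            for r in RULES
            if all(profile.get(k) == v for k, v in r.condition.items())]

if __name__ == "__main__":
    profile = {"user_experience": "novice", "task": "information_search"}
    for cue, advice, source in recommend(profile):
        print(f"{cue}: {advice} [{source}]")
```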
... Thus, both publications provide different recommendations to increase information disclosure depending on the context. This becomes even more complicated when we further account for different user variables such as demographics (e.g., age, gender), geography (e.g., country of origin), and usage behavior (e.g., sporadic users, experienced users) [33]. As a result, it is very complex to aggregate, combine, and apply the existing Ω-knowledge to investigate social cues of chatbots (problem 2). ...
Chapter
Full-text available
In Design Science Research (DSR) it is important to build on descriptive (Ω) and prescriptive (Λ) state-of-the-art knowledge in order to provide a solid grounding. However, existing knowledge is typically made available via scientific publications. This leads to two challenges: first, scholars have to manually extract relevant knowledge pieces from the data-wise unstructured textual nature of scientific publications. Second, different research results can interact and exclude each other, which makes an aggregation, combination, and application of extracted knowledge pieces quite complex. In this paper, we present how we addressed both issues in a DSR project that focuses on the design of socially-adaptive chatbots. Therefore, we outline a two-step approach to transform phenomena and relationships described in the Ω-knowledge base in a machine-executable form using ontologies and a knowledge base. Following this new approach, we can design a system that is able to aggregate and combine existing Ω-knowledge in the field of chatbots. Hence, our work contributes to DSR methodology by suggesting a new approach for theory-guided DSR projects that facilitates the application and sharing of state-of-the-art Ω-knowledge.
... In addition to guilt and shame, we measured anger using the measurement items of Izard's DES IV [6]. We also measured helpfulness, enjoyment, and tone of clarity items from Catrambone et al.'s [2] study to assess the usability of an agent. The responses were all measured using a five-point Likert scale. ...
Conference Paper
Verbal abuse is a hostile form of communication ill-intended to harm the other person. With a plethora of AI solutions around, the other person being targeted may be a conversational agent. In this study, involving 3 verbal abuse types (Insult, Threat, Swearing) and 3 response styles (Avoidance, Empathy, Counterattacking), we examine whether a conversational agent's response style under varying abuse types influences those emotions found to mitigate people's aggressive behaviors. Sixty-four participants, assigned to one of the abuse type conditions, interacted with the three conversational agents in turn and reported their feelings about guiltiness, anger, and shame after each session. Our study results show that, regardless of the abuse type, the agent's response style has a significant effect on user emotions. Participants were less angry and more guilty with the empathetic agent than the other two agents. Our study findings have direct implications for the design of conversational agents.
... Obviously, the degree to which the character resembles a real person or living creature is likely to influence the user's experience and behavior in human-character communication (e.g., Berry et al., 2005). Next to realism, other factors are also likely to influence human-character interaction, as overviews of relevant factors show (e.g., Catrambone et al., 2004). Despite such important insights, there is a need for an integrative model that takes into account the (interdependent) effects of a variety of factors that may explain human-character interaction. ...
Article
The nature of humans interacting with interface characters (e.g. embodied agents) is not well understood. The I-PEFiC model provides an integrative perspective on human–character interaction, assuming that the processes of engagement and user interaction exchange information in explaining user responses with interface characters. An experiment using the Sims2 game was conducted to test the effects of aesthetics (beautiful versus ugly, as engagement factor) and affordances (help versus obstacle, as interaction factor) of interface characters on use intentions, user engagement, and user satisfaction. Results of the experiment showed that (1) people tended to use helpful characters more than obstructing characters, (2) user engagement was enhanced by beauty and perceived affordance of the character whereas (3) intentions to use the character were not affected by good looks, and (4) the most satisfied users were those that were engaged with the character as well as willing to use it. This stresses the importance of enhancing affordances so to increase user engagement with interface characters. The I-PEFiC model provided a valuable framework to study the (interdependent) effects of relevant factors in human–character interaction.
... How users respond to a (facially similar) embodied agent depends not only on the affordances an agent provides to the user, but also on a variety of user characteristics, such as gender, age, ethnicity, education, computer experience, and others (Catrambone et al. 2004; Ruttkay et al. 2004). For example, unskilled Word users might find proactive text-editing suggestions of an agent helpful, whereas skilled Word users might find them counterproductive for their tasks. ...
Article
Full-text available
embodied agents: in one case the agent was designed to look somewhat similar to the user, and in the other case the agent was designed to look dissimilar. We varied between subjects how helpful the agent was for a given task. Results showed that the facial similarity manipulation sometimes affected participants' responses, even though they did not consciously detect the similarity. Specifically, when the agent was helpful, facial similarity increased participants' ratings of involvement. However, when exposed to unhelpful agents, male participants had negative responses to the similar-looking agent compared to the dissimilar one. These results suggest that using facially similar embodied agents has a potentially large downside if that embodied agent is perceived to be unhelpful.
... Overviews of relevant factors (e.g., Ruttkay et al., 2004; Catrambone et al., 2004) indicate that factors related to the character's outer appearance, as well as factors related to the character's behavior, the user, and the task, all potentially explain user responses. Therefore, this study takes a more fine-grained perspective by considering a variety of factors mentioned in the literature. ...
Article
Full-text available
Human-like characters in the interface may evoke social responses in users, and literature suggests that realism is the most important factor herein. However, the effects of interface characters on the user are not well understood. We developed an integrative framework, called I-PEFiC, to explain 'persona' and realism effects on the user. We tested an important part of the model using an experimental design in which 140 middle school students were class-wise shown an informative virtual reality demonstration that incorporated either a realistic or an unrealistic (fantasy) interface character, or no character. Findings show, first, no persona effect on task performance. We discuss how user engagement might be related to persona effects. Second, designed realism of the interface character contributed to user engagement when controlled for various user perceptions. Moreover, perceived aesthetics and task-relevance further influenced user engagement. Third, user engagement and task performance combined better predicted satisfaction than either one of the factors alone. In sum, several appearance- and task-related factors contributed to user engagement and user satisfaction. Thus, realism is not all.
Chapter
Full-text available
Antecedents of technology acceptance (TA) are known to be positively associated with measures such as usage intention, behavioral intention, attitude, and satisfaction. Although technology acceptance has been investigated widely in prior research, it is not currently clear which variables or factors drive technology acceptance under different service contexts or conditions. To examine the strength of these effects in the artificial intelligence literature, we adopt a meta-analysis approach. We have scoped the literature on artificial intelligence, acceptance measures, and factors affecting acceptance. We narrowed our search to the business context to find AI-based tools that users, consumers, and customers interact with transactionally, such as chatbots. Preliminary findings show that AI-based technology factors affect acceptance differently in various service industry contexts. These results have critical implications for researchers and practitioners studying which types of AI-based technology strengthen consumer use in different service contexts. These preliminary findings will be extended to look at interactive relationships of factors affecting acceptance in different contexts. Keywords: Technology acceptance, Artificial intelligence, Meta-analysis, AI factors
Chapter
The increasing prevalence of automated and autonomous systems necessitates design that facilitates user trust calibration. Trust repair and trust dampening have been suggested as behaviors with which a system can correct inappropriate states of user trust, yet trust dampening has received less attention in the literature. This paper aims to address this with a 2 (agent anthropomorphism: low, high) × 3 (message: control, apology, trust dampening) between-subject experiment which observes the effects of trust dampening delivered by anthropomorphic interface agents. Agent stimuli were chosen based on a pretest of 58 participants, after which the main experiment was conducted online with 225 participants. Results indicate that trust dampening increased perceptions of system integrity and may improve trust appropriateness, suggesting that lowering expectations via trust dampening messages is a viable approach for automated system designers. Keywords: Trust, Dampening, Calibration, Appropriateness, Computers are social actors, Anthropomorphism
Article
Full-text available
Intelligent agents in the form of avatars in networked virtual worlds (VWs) are a new form of embodied conversational agent (ECA). They are still a topic of active research, but promise soon to rival the sophistication of virtual human agents developed on stand-alone platforms over the last decade. Such agents in today's VWs grew out of two lines of historical research: Virtual Reality and Artificial Intelligence. Their merger forms the basis for today's persistent 3D worlds occupied by intelligent characters serving a wide range of purposes. We believe ECA avatars will help to enable VWs to achieve a higher level of meaningful interaction by providing increased engagement and responsiveness within environments where people will interact with and even develop relationships with them.
Article
We discuss our approach to developing a novel modality for the computer-delivery of Brief Motivational Interventions (BMIs) for behavior change in the form of a personalized On-Demand VIrtual Counselor (ODVIC), accessed over the internet. ODVIC is a multimodal Embodied Conversational Agent (ECA) that empathically delivers an evidence-based behavior change intervention by adapting, in real-time, its verbal and nonverbal communication messages to those of the user’s during their interaction. We currently focus our work on excessive alcohol consumption as a target behavior, and our approach is adaptable to other target behaviors (e.g., overeating, lack of exercise, narcotic drug use, non-adherence to treatment). We based our current approach on a successful existing patient-centered brief motivational intervention for behavior change---the Drinker’s Check-Up (DCU)---whose computer-delivery with a text-only interface has been found effective in reducing alcohol consumption in problem drinkers. We discuss the results of users’ evaluation of the computer-based DCU intervention delivered with a text-only interface compared to the same intervention delivered with two different ECAs (a neutral one and one with some empathic abilities). Users rate the three systems in terms of acceptance, perceived enjoyment, and intention to use the system, among other dimensions. We conclude with a discussion of how our positive results encourage our long-term goals of on-demand conversations, anytime, anywhere, with virtual agents as personal health and well-being helpers.
Article
Full-text available
Conversational agents are attributed humanlike characteristics; in particular, they are often assumed to have a gender. There is evidence that gender sets up expectations that have an impact on user experiences with agents. The objective of this paper is to explore gender affordances of conversational agents. Our examination takes a holistic approach to the analysis of the application of gender stereotypes to nine chatterbots: six embodied (three male and three female), two disembodied (male and female), and a robot embodiment. Building on social psychology research, we test the persistence of gender stereotypes in the selection of conversation topics and in the elicitation of disinhibition and verbal abuse. Our study is based on quantitative textual analysis of interaction logs. A dictionary of English sexual slang and derogatory terms was developed for this study. Results show that gender stereotypes tend to affect interaction more at the relational (style) level than at the referential (content) level of conversation. People attribute negative stereotypes to female-presenting chatterbots more often than they do to male-presenting chatterbots, and female-presenting chatterbots are more often the objects of implicit and explicit sexual attention and swear words. We conclude by calling for a more informed analysis of user interactions that considers the full range of user interactions.
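Dictionary-based quantitative textual analysis of the kind described here can be sketched in a few lines: count how many logged messages contain a lexicon term and compare rates across agents. The lexicon entries and log lines below are innocuous placeholders, not the study's dictionary of sexual slang and derogatory terms.

```python
import re

# Placeholder lexicon standing in for the study's purpose-built dictionary.
LEXICON = {"stupid", "useless", "shut up"}
PATTERN = re.compile("|".join(re.escape(term) for term in sorted(LEXICON)))

# Hypothetical interaction logs, one list of user messages per chatterbot.
logs = {
    "female_presenting_bot": ["you are useless", "shut up already", "tell me a joke"],
    "male_presenting_bot":   ["tell me a joke", "you are slow today"],
}

def abuse_rate(messages):
    """Share of messages containing at least one lexicon term."""
    hits = sum(bool(PATTERN.search(message.lower())) for message in messages)
    return hits / len(messages)

for bot, messages in logs.items():
    print(f"{bot}: {abuse_rate(messages):.0%} of messages contain lexicon terms")
```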
Conference Paper
Full-text available
Serious games offer an opportunity for learning communication skills by practicing conversations with one or more virtual characters, provided that the character(s) behave in accordance with their assigned properties and strategies. This paper presents an approach for developing virtual characters by using the Belief-Desire-Intentions (BDI) concept. The BDI-framework was used to equip virtual characters with personality traits, and make them act accordingly. A sales game was developed as context: the player-trainee is a real-estate salesman; the virtual character is a potential buyer. The character could be modeled to behave either extravert or introvert; agreeable or non-agreeable; and combinations thereof. A human subjects study was conducted to examine whether naïve players experience the personality of the virtual characters in accordance with their assigned profile. The results unequivocally show that they do. The proposed approach is shown to be effective in creating individualized characters, it is flexible, and it is relatively easy to scale, adapt, and re-use developed models.
Article
Full-text available
In everyday life, one increasingly encounters virtual characters. Virtual (online) worlds, on the one hand, are populated by avatars, that is, mediated human interlocutors represented by a virtual body. Examples are Second Life or Massively Multiplayer Online Role-Playing Games (MMORPG) like World of Warcraft. On the other hand, websites, pedagogical programs, or technological devices are increasingly being equipped with virtual characters programmed to act autonomously and to conduct basic verbal and nonverbal interaction with the human user. The present special issue focuses on the latter form of virtual characters that have been developed to facilitate human-technology interaction. These agents are developed for, and sometimes already employed in, various areas of application: commercial applications such as so-called chatbots on websites (e.g., Anna of the Ikea website), virtual trainers such as that of the Wii fit, or virtual agents in navigation systems (http://www.charamel.com/). Research groups are developing and testing virtual agents for information kiosks (Cassell et al., 2002; Jung & Kopp, 2003), as health advisors (Bickmore, Pfeifer, & Jack, 2009), as TV/VCR assistants (Krämer, Tietz, & Bente, 2003) or as virtual teachers and tutors (Graesser et al., 2008; Lester, Towns, Callaway, Voerman, & FitzGerald, 2000; Rickel & Johnson, 2000) (for an overview see Krämer, 2008a). Although the development of and research on virtual agents is first and foremost being advanced by researchers from computer science, psychological research now also plays an important role. In order to equip virtual agents with humanlike communication skills and to iteratively optimize the systems with the help of evaluation studies, psychological knowledge as well as psychological methods are needed. However, psychological research on embodied agents is fruitful not only with regard to applied research, but also with respect to fundamental research. In this regard, Krämer and Bente (2007; Krämer, Bente, Troitzsch, & Eschenburg, 2009) distinguished basic research and applied research (realization research and evaluation research). (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
Numerous studies have tested the effects of pedagogical agents on learning and the influence of their specific appearance. What has not been analyzed, however, is whether an agent can have indirect effects when it is employed as a tutor for learning strategies rather than directly teaching the relevant learning material. In a between-subjects design (N = 45) we compared two different kinds of pedagogical agents – a cartoon-like rabbit and a realistic anthropomorphic agent – with a control group that was not tutored by an animated agent but was informed by voice only. Results showed no clear advantages for the agents compared to voice-based tutoring with regard to indirect learning effects, but they did demonstrate that the appearance of the agent matters. The rabbit-like agent was not only preferred, but people exposed themselves longer to the tutoring session when the rabbit provided feedback.
Chapter
Three questions have to be answered before designing a speech application: who will use it, why will they use it and how often will they use it? A designer needs answers to all of these questions to best be able to address the needs of the target group. This chapter will outline a methodical procedural model which describes the workflow required to build a speech application that is properly designed for its target groups. The workflow covers the analysis of requirements, specification, implementation, production, delivery and operation. This chapter also provides an overview of the most important information we need to describe a voice user interface, and where this information can be found. It also provides an overview of current and future technical developments in the field of speech processing and their relevance for the design of dialogues in future. We will then recommend 11 design features which, according to our experience, help the designer of a voice user interface to exploit knowledge about the user and to focus the design of the dialogue on the user’s abilities, their competence, expectations and needs.
Conference Paper
This paper presents the results of a field study of the COHIBIT museum exhibit. Its purpose is to convey knowledge about car-technology and virtual characters in an entertaining way. Two life-sized virtual characters interact with the visitors and support them in constructing car models with specific tangible car modules. The evaluation should reveal what makes the exhibit successful in terms of an overall user impression and how users rate the employed virtual characters. The focus of the evaluation is to measure the user experience that manifests in aspects like attraction, rejection and entertainment. Based on an analysis of relevant systems and evaluation models, an on-site evaluation for the COHIBIT exhibit has been designed. The results show that the positive user experience of the exhibit is mainly based on the non-task-specific aspects like virtual characters, joy of use and entertainment.
Conference Paper
Full-text available
While the history of traditional media in post-conflict peace building efforts is rich and well studied, the potential for interactive new media technologies in this area has gone unexplored. In cooperation with the Truth and Reconciliation Commission of Liberia, we have constructed a novel interactive kiosk system, called MOSES, for use in that country's post-conflict reconciliation effort. The system allows the sharing of video messages between Liberians throughout the country, despite the presence of little or no communications infrastructure. In this paper, we describe the MOSES system, including several innovative design elements. We also present a novel design methodology we employed to manage the various distances between our design team and the intended user group in Liberia. Finally, we report on a qualitative study of the system with 27 participants from throughout Liberia. The study found that participants saw MOSES as giving them a voice and connecting them to other Liberians throughout the country; that the system was broadly usable by low-literate, novice users without human assistance; that the embodied conversational agent used in our design shows considerable promise; that users generally ascribed foreign involvement to the system; and that the system encouraged heavily group-oriented usage.
Conference Paper
Full-text available
Our review surveys a range of human-human relationship models and research that might provide insights to understanding the social relationship between humans and virtual humans. This involves investigating several social constructs (expectations, communication, trust, etc.) that are identified as key variables that influence the relationship between people and how these variables should be implemented in the design for an effective and useful virtual human. This theoretical analysis contributes to the foundational theory of human computer interaction involving virtual humans.
Article
Full-text available
The objective was to investigate whether virtual humans produce social facilitation effects. When people do an easy task and another person is nearby, they tend to do that task better than when they are alone. Conversely, when people do a hard task and another person is nearby, they tend to do that task less well than when they are alone. This phenomenon is referred to in the social psychology literature as social facilitation. The present study investigated whether virtual humans can evoke a social facilitation response. Participants were given different tasks to do that varied in difficulty. The tasks involved anagrams, mazes, and modular arithmetic. They did the tasks alone, in the company of another person, or in the company of a virtual human on a computer screen. For easy tasks, performance in the virtual human condition was better than in the alone condition, and for difficult tasks, performance in the virtual human condition was worse than in the alone condition. As with a human, virtual humans can produce social facilitation. The results suggest that designers of virtual humans should be mindful about the social nature of virtual humans; a design decision as to when and how to present a virtual human should be a deliberate and informed decision. An ever-present virtual human might make learning and performance difficult for challenging tasks.
Article
Full-text available
How “real” are computer personalities? Using a psychological criterion for “reality,” 2 studies tested whether people respond to computer personalities the same way they tend to respond to human personalities. In Experiment 1, dominant and submissive subjects were randomly matched with a computer endowed with the personality characteristics associated with dominance or submissiveness (N = 48). Consistent with similarity-attraction theory in interpersonal interaction, subjects were more attracted to the similar computer compared to the dissimilar computer. Experiment 2 (N = 88) used the same experimental design to assess users' psychological responses to changes in personality-based behavior in computers. Consistent with gain-loss theory in interpersonal interaction, changes in the direction of similarity had a more positive effect on attraction than a consistently similar personality. Loss effects were not obtained. The findings suggest that computer personalities are psychologically real to users.
Article
Full-text available
With the success of multimedia and mobile devices, human-computer interfaces combining several communication modalities such as speech and gesture may lead to more "natural" human-computer interaction. Yet, developing multimodal interfaces requires an understanding (and thus the observation and analysis) of human multimodal behavior. In the field of annotation of multimodal corpus, there is no standardized coding scheme. In this paper, we describe a coding scheme we have developed. We give examples on how we applied it to a multimodal corpus by producing descriptions. We also provide details about the software we have developed for parsing such descriptions and for computing metrics measuring the cooperation between modalities. Although this paper is concerned with the input side (human towards machine) and thus deals with the annotation of human behavior observed in multimodal corpora, we also provide some ideas on how it might be of use for specifying cooperation between output modalities in multimodal agents.
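The coding scheme itself is not reproduced on this page, but one common way to quantify cooperation between modalities is to measure how much annotations from two streams overlap in time. The sketch below illustrates that idea under the assumption of simple (start, end, label) annotations on a shared timeline; the annotations and the metric are invented for the example and are not the authors' scheme or software.

```python
# Each annotation is (start_s, end_s, label) on a common timeline;
# annotations within a stream are assumed not to overlap each other.
speech  = [(0.0, 1.2, "put"), (1.2, 2.0, "that"), (2.0, 3.1, "there")]
gesture = [(1.0, 2.2, "deictic:object"), (2.4, 3.0, "deictic:location")]

def overlap(a, b):
    """Temporal overlap in seconds between two annotations."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def cooperation_ratio(stream_a, stream_b):
    """Fraction of stream_a's total duration that co-occurs with stream_b."""
    total = sum(end - start for start, end, _ in stream_a)
    covered = sum(overlap(a, b) for a in stream_a for b in stream_b)
    return covered / total if total else 0.0

print(f"Speech covered by gesture: {cooperation_ratio(speech, gesture):.0%}")
print(f"Gesture covered by speech: {cooperation_ratio(gesture, speech):.0%}")
```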
Article
Full-text available
Students viewed an animation depicting either the process of lightning formation or how car brakes work and listened to a corresponding narration describing the steps. The entire animation and narration were presented at the same time (concurrent), the entire narration was presented before or after the entire animation (successive large bites), or short portions of the narration were presented before or after corresponding short portions of the animation for each successive portion of the presentation (successive small bites). Overall, the concurrent and successive small bites groups performed significantly better than the successive large bites groups on remembering the explanation in words (retention), generating solutions to transfer problems (transfer), and selecting verbal labels for elements in a line drawing (matching), but they did not differ significantly from each other. Results are consistent with a dual-process model of working memory in which learners are more likely to construct connections between words and corresponding pictures when they are held in working memory at the same time. The purpose of this research is to examine theory-based design principles for promoting constructivist learning in multimedia environments. To address this goal, it is necessary to clarify what is meant by multimedia environments, constructivist learning, and theory-based design principles, and to specify predictions concerning ways of designing multimedia environments for constructivist learning.
Conference Paper
Full-text available
This paper presents a new experimental paradigm for the study of human-computer interaction. Five experiments provide evidence that individuals' interactions with computers are fundamentally social. The studies show that social responses to computers are not the result of conscious beliefs that computers are human or human-like. Moreover, such behaviors do not result from users' ignorance or from psychological or social dysfunctions, nor from a belief that subjects are interacting with programmers. Rather, social responses to computers are commonplace and easy to generate. The results reported here present numerous and unprecedented hypotheses, unexpected implications for design, new approaches to usability testing, and direct methods for verification.
Conference Paper
Full-text available
This study examines whether people would interpret and respond to paralinguistic personality cues in computer-generated speech in the same way as they do human speech. Participants used a book-buying website and heard five book reviews in a 2 (synthesized voice personality: extrovert vs. introvert) by 2 (participant personality: extrovert vs. introvert) balanced, between-subjects experiment. Participants accurately recognized personality cues in TTS and showed strong similarity-attraction effects. Although the content was the same for all participants, when the personality of the computer voice matched their own personality: 1) participants regarded the computer voice as more attractive, credible, and informative; 2) the book review was evaluated more positively; 3) the reviewer was more attractive and credible; and 4) participants were more likely to buy the book. Match of user voice characteristics with TTS had no effect, confirming the social nature of the interaction. We discuss implications for HCI theory and design.
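The extrovert/introvert voice manipulation described here is typically realized through paralinguistic parameters such as speech rate, pitch range, and loudness. The snippet below shows how such settings might be expressed as standard SSML prosody markup; the specific parameter values are assumptions for illustration, not the ones used in this study.

```python
# Illustrative prosody settings; the study's actual values are not given here.
VOICE_PROFILES = {
    # Extroversion is commonly cued by faster, louder speech with more pitch variation.
    "extrovert": {"rate": "fast", "pitch": "+15%", "volume": "loud"},
    # Introversion by slower, softer, flatter speech.
    "introvert": {"rate": "slow", "pitch": "-10%", "volume": "soft"},
}

def to_ssml(text: str, personality: str) -> str:
    """Wrap text in an SSML <prosody> element matching the requested personality."""
    p = VOICE_PROFILES[personality]
    return (f'<speak><prosody rate="{p["rate"]}" pitch="{p["pitch"]}" '
            f'volume="{p["volume"]}">{text}</prosody></speak>')

print(to_ssml("This mystery novel keeps you guessing until the last page.", "extrovert"))
```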
Conference Paper
Full-text available
A study was conducted with 78 subjects to evaluate the comprehensibility of synthetic speech for various tasks ranging from short, simple e-mail messages to longer news articles on mostly obscure topics. Comprehension accuracy for each subject was measured for synthetic speech and for recorded human speech. Half the subjects were allowed to take notes while listening, the other half were not. Findings show that there was no significant difference in comprehension of synthetic speech among the five different text-to-speech engines used. Those subjects that did not take notes performed significantly worse for all synthetic voice tasks when compared to recorded speech tasks. Performance for synthetic speech in the non note-taking condition degraded as the task got longer and more complex. When taking notes, subjects also did significantly worse within the synthetic voice condition averaged across all six tasks. However, average performance scores for the last three tasks in this condition show comparable results for human and synthetic speech, reflective of a training effect.
Conference Paper
Full-text available
We investigated subjects' responses to a synthesized talking face displayed on a computer screen in the context of a questionnaire study. Compared to subjects who answered questions presented via text display on a screen, subjects who answered the same questions spoken by a talking face spent more time, made fewer mistakes, and wrote more comments. When we compared responses to two different talking faces, subjects who answered questions spoken by a stern face, compared to subjects who answered questions spoken by a neutral face, spent more time, made fewer mistakes, and wrote more comments. They also liked the experience and the face less. We interpret this study in the light of desires to anthropomorphize computer interfaces and suggest that incautiously adding human characteristics, like face, voice, and facial expressions, could make the experience for users worse rather than better.
Conference Paper
Full-text available
Personification of interface agents has been speculated to have several advantages, such as a positive effect on agent credibility and on the perception of learning experience. However, important questions less often addressed so far are what effect personification has on more objective measures, such as comprehension and recall, and furthermore, under what circumstances this effect (if any) occurs. We performed an empirical study with adult participants to examine the effect of the PPP Persona not only on subjective but also on objective measures. In addition, we tested it both with technical and non-technical domain information. The results of the study indicate that the data from the subjective measures support the so-called persona effect for the technical information but not for non-technical information. With regard to the objective measures, however, neither a positive nor a negative effect could be found. Implications for software development are discussed.
Article
Full-text available
Cognitive load theory has been designed to provide guidelines intended to assist in the presentation of information in a manner that encourages learner activities that optimize intellectual performance. The theory assumes a limited capacity working memory that includes partially independent subcomponents to deal with auditory/verbal material and visual/2- or 3-dimensional information, as well as an effectively unlimited long-term memory holding schemas that vary in their degree of automation. These structures and functions of human cognitive architecture have been used to design a variety of novel instructional procedures based on the assumption that working memory load should be reduced and schema construction encouraged. This paper reviews the theory and the instructional designs generated by it.
Article
Full-text available
Thesis (M.S.), Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1996, by Tomoko Koda. Includes bibliographical references (p. 129-133).
Article
Full-text available
This research examined the ability of an anthropomorphic interface assistant to help people learn and use an unfamiliar text-editing tool, with a specific focus on assessing proactive assistant behavior. Participants in the study were introduced to a text editing system that used keypress combinations for invoking the different editing operations. Participants then were directed to make a set of prescribed changes to a document with the aid either of a paper manual, an interface assistant that would hear and respond to questions orally, or an assistant that responded to questions and additionally made proactive suggestions. Anecdotal evidence suggested that proactive assistant behavior would not enhance performance and would be viewed as intrusive. Our results showed that all three conditions performed similarly on objective editing performance (completion time, commands issued, and command recall), while the participants in the latter two conditions strongly felt that the assistant's help was valuable.
Article
Full-text available
Two data sources--self-reports and peer ratings--and two instruments--adjective factors and questionnaire scales--were used to assess the five-factor model of personality. As in a previous study of self-reports (McCrae & Costa, 1985b), adjective factors of neuroticism, extraversion, openness to experience, agreeableness-antagonism, and conscientiousness-undirectedness were identified in an analysis of 738 peer ratings of 275 adult subjects. Intraclass correlations among raters, ranging from .30 to .65, and correlations between mean peer ratings and self-reports, from .25 to .62, showed substantial cross-observer agreement on all five adjective factors. Similar results were seen in analyses of scales from the NEO Personality Inventory. Items from the adjective factors were used as guides in a discussion of the nature of the five factors. These data reinforce recent appeals for the adoption of the five-factor model in personality research and assessment.
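A hedged sketch of the cross-observer agreement analysis summarized here: average the peer ratings for each subject and correlate them with self-reports on the same factor. The data are invented and the computation is a plain Pearson correlation, not the authors' full analysis (which also included intraclass correlations and questionnaire scales).

```python
import numpy as np

# Invented scores for 8 subjects on one adjective factor (e.g., extraversion).
self_reports = np.array([3.2, 4.1, 2.5, 3.8, 4.5, 2.9, 3.3, 4.0])
peer_ratings = np.array([   # three peer raters per subject
    [3.0, 3.5, 3.1], [4.3, 3.9, 4.0], [2.2, 2.8, 2.6], [3.5, 4.0, 3.7],
    [4.6, 4.2, 4.4], [3.1, 2.7, 3.0], [3.0, 3.6, 3.2], [3.8, 4.1, 4.2],
])

mean_peer = peer_ratings.mean(axis=1)           # average across the peer raters
r = np.corrcoef(self_reports, mean_peer)[0, 1]  # self/peer agreement
print(f"Self-report vs. mean peer rating: r = {r:.2f}")
```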
Article
Full-text available
This paper details results of an experiment to empirically evaluate the effectiveness and user acceptability of human-like synthetic agents in a multi-modal electronic retail scenario. The synthetic personae played the roles of interactive conversational sales assistants. The range of life-like personae differed with respect to gender and technology. Participants took part in the controlled experiment, which involved them eavesdropping on spoken dialogues between a customer and each of the synthetic personae. They also completed questionnaires and took part in a debriefing interview designed to elicit information relating to the effectiveness, believability and perceived quality of each of the personae. Results show that participants expected a high level of realistic and human-like verbal and non-verbal communicative behaviour in the synthetic personae. This was demonstrated in the strong preference for personae that exhibited natural facial expressions, gestures and emotions. It was...
Article
Full-text available
Building trust with users is crucial in a wide range of applications, such as financial transactions, and some minimal degree of trust is required in all applications to even initiate and maintain an interaction with a user. Humans use a variety of relational conversational strategies, including small talk, to establish trusting relationships with each other. We argue that such strategies can also be used by interface agents, and that embodied conversational agents are ideally suited for this task given the myriad cues available to them for signaling trustworthiness. We describe a model of social dialogue, an implementation in an embodied conversational agent, and an experiment in which social dialogue was demonstrated to have an effect on trust for users with a disposition to be extroverts. Keywords: embodied conversational agent, trust, social interface, natural language, small talk, personality.
Chapter
This book describes research in all aspects of the design, implementation, and evaluation of embodied conversational agents as well as details of specific working systems. Embodied conversational agents are computer-generated cartoonlike characters that demonstrate many of the same properties as humans in face-to-face conversation, including the ability to produce and respond to verbal and nonverbal communication. They constitute a type of (a) multimodal interface where the modalities are those natural to human conversation: speech, facial displays, hand gestures, and body stance; (b) software agent, insofar as they represent the computer in an interaction with a human or represent their human users in a computational environment (as avatars, for example); and (c) dialogue system where both verbal and nonverbal devices advance and regulate the dialogue between the user and the computer. With an embodied conversational agent, the visual dimension of interacting with an animated character on a screen plays an intrinsic role. Not just pretty pictures, the graphics display visual features of conversation in the same way that the face and hands do in face-to-face conversation among humans. This book describes research in all aspects of the design, implementation, and evaluation of embodied conversational agents as well as details of specific working systems. Many of the chapters are written by multidisciplinary teams of psychologists, linguists, computer scientists, artists, and researchers in interface design. The authors include Elisabeth Andre, Norm Badler, Gene Ball, Justine Cassell, Elizabeth Churchill, James Lester, Dominic Massaro, Cliff Nass, Sharon Oviatt, Isabella Poggi, Jeff Rickel, and Greg Sanders.
Conference Paper
We discuss current approaches to the development of natural language dialogue systems, and claim that they do not sufficiently consider the unique qualities of man-machine interaction as distinct from general human discourse. We conclude that empirical studies of this unique communication situation are required for the development of user-friendly interactive systems. One way of achieving this is through the use of so-called Wizard of Oz studies. We describe our work in this area. The focus is on the practical execution of the studies and the methodological conclusions that we have drawn on the basis of our experience. While the focus is on natural language interfaces, the methods used and the conclusions drawn from the results obtained are of relevance also to other kinds of intelligent interfaces.
Article
Agents have become a predominant area of research and development in human interfaces. A major issue in the development of these agents is how to represent them and their activities to the user. Anthropomorphic forms have been suggested, since they provide a great degree of subtlety and afford social interaction. However, these forms may be problematic since they may be inherently interpreted as having a high degree of agency and intelligence. An experiment is presented which supports these contentions.
Article
In this [book] chapter, we cover four main topics. First, we consider the overall questions of evaluating interactive systems. Second, we discuss ways in which speech-based systems and embodied conversational agents are both types of conversational systems, and explain that embodied conversation is nevertheless different from speech-only interaction. Third, we present a detailed formative evaluation that we proposed for a speech-based agent interface: the DARPA Communicator program. Fourth, we then correspondingly discuss the ways that the process of measurement and evaluation of embodied conversational agents is very much related to evaluating speech-only interfaces: we present metrics and evaluation that seem likely to be useful for five important aspects of embodied conversational agents. When one thinks about the overall questions of measuring and evaluating interactive systems, there are two immediate questions to answer. First, why should we evaluate interactive systems at all? Second, if we do evaluate, how do we accomplish the task? Measurement and evaluation is a critical part of both research and development, to focus effort, identify progress, select the best system, and assess success. Measurement and evaluation help by identifying where research and development effort is needed, as well as by identifying success. In order to judge progress in anything, we need a systematic method of evaluation. If we are looking for the best system, then the systems must have comparable evaluations. One of the best methods to achieve comparability is to use the same data and run the various systems on that data. In addition, we need the notion of "ground truth." That is, what answer should the algorithm produce? Then various metrics can be examined such as correctness of answer and the time needed to produce a correct answer.
Article
Suppose that sometimes he found it impossible to tell the difference between the real men and those which had only the shape of men, and had learned by experience that there were only two ways of telling them apart: first, that these automata never answered in word or sign, except by chance, to questions put to them; and second, that though their movements were often more regular and certain than those of the wisest men, yet in many things which they would have to do to imitate us, they failed more disastrously than the greatest fools.
Article
Headpedal is a software company that specializes in the production of character interfaces for the web. The goal of this paper is to put forward our vision of Character User Interfaces (CHUI™s), present real-world examples culled from experience with our customers, and to describe a production process that uses state-of-the-art technology wherever possible, but also takes into account real-world constraints. Our hope is that this paper will inform researchers about the state-of-the-market for character interfaces, and assist them in judiciously applying their efforts to the development of technologies for individualizing characters.
Article
Since the beginning of recorded history, people have been fascinated with the idea of non-human agencies. Popular notions about androids, humanoids, robots, cyborgs, and science fiction creatures permeate our culture, forming the unconscious backdrop against which software agents are perceived. The word "robot," derived from the Czech word for drudgery, became popular following Karel Capek's 1921 play RUR: Rossum Universal Robots. While Capek's robots were factory workers, the public has also at times embraced the romantic dream of robots as "digital butlers" who, like the mechanical maid in the animated feature "The Jetsons," would someday putter about the living room performing mundane household tasks. Despite such innocuous beginnings, the dominant public image of artificially intelligent embodied creatures often has been more a nightmare than a dream. Would the awesome power of robots reverse the master-slave relationship with humans? Everyday experiences of computer users with the mysteries of ordinary software, riddled with annoying bugs, incomprehensible features, and dangerous viruses reinforce the fear that the software powering autonomous creatures would pose even more problems. The more intelligent the robot, the more capable of pursuing its own self-interest rather than its master's. The more humanlike the robot, the more likely to exhibit human frailties and eccentricities. Such latent concerns cannot be ignored in the design of software agents; indeed, there is more than a grain of truth in each of them! Though automata of various sorts have existed for centuries, it is only with the development of computers and control theory since World War II that anything resembling autonomous agents has begun to appear. Norman (1997) observes that perhaps "the most relevant predecessors to today's intelligent agents are servomechanisms and other control devices, including factory control and the automated takeoff, landing, and flight control of aircraft." However, the agents now being contemplated differ in important ways from earlier concepts.
Article
In the conclusion to his article, `Consciousness as an engineering issue' (JCS, 2 (1995), pp. 52-66), Donald Michie argues that the inclusion of intelligent computer systems in workgroups will lead to a blurring of the distinction between human and machine consciousness. He also refers to the increasing use of intelligent agent software in commercial applications. Given the exponential growth in the availability of on-line information through networked computer systems, AI routines are being developed to filter information, based on the user's own stated needs and preferences. In this article Jaron Lanier, who originated the term `virtual reality', argues that the use of intelligent agents will devalue human intelligence and creativity and diminish the role of conscious experience. The mind-body debate needs to move beyond the confines of academic philosophy as it has important implications for practical issues such as the design of computer systems.
Article
This paper introduces the multi-dimensional concept of anthropocentrism with respect to computers, the tendency to believe that (1) computers do not possess human physical and psychological capabilities; and (2) it is not acceptable for computers to fill routinized (e.g., auto mechanic), interpretive (e.g., newspaper reporter), and personal (e.g., baby sitter) roles traditionally held only by people. A mail survey (n=133) of individuals in Northern California focuses on individual differences rather than differences between technologies. As suggested by the literature on ethnocentrism, experience with other cultures and education are strong predictors of the dimensions of anthropocentrism; surprisingly, experience with computers fails as a predictor.
Article
Over the last years, the animation of interface agents has been the target of increasing interest. Largely, this increase in attention is fuelled by speculated effects on human motivation and cognition. However, empirical investigations on the effect of animated agents are still small in number and differ with regard to the measured effects. Our aim is two-fold. First, we provide a comprehensive and systematic overview of the empirical studies conducted so far in order to investigate effects of animated agents on the user's experience, behaviour and performance. Second, by discussing both implications and limitations of the existing studies, we identify some general requirements and suggestions for future studies.
Conference Paper
Animated characters are common in user interfaces, but important questions remain about whether characters work in all situations and for all users. This experiment tested the effects of different character presentations on user anxiety, task performance, and subjective evaluations of two commerce websites. There were three character conditions (no character, a character that ignored the user, and a character that closely monitored work on the website). Users were separated into two groups that had different attitudes about accepting help from others: people with control orientations that were external (users thought that other people controlled their success) and those with internal orientations (users thought they were in control). Results showed that the effects of monitoring and individual differences in thoughts about control worked as they do in real life. Users felt more anxious when characters monitored their website work and this effect was strongest for users with an external control orientation. Monitoring characters also decreased task performance, but increased trust in website content. Results are discussed in terms of design considerations that maximize the positive influence of animated agents.
Conference Paper
Most interactive programs have assumed interaction with a single user. We propose the notion of 'Social Interaction' as a new interaction paradigm between multiple humans and computers. Social interaction requires, first, that the computer have a model of multiple participants; second, that its behaviors be determined not only by internal logic but also by perceived external situations; and finally, that it actively join the interaction. An experimental system with these features was developed. It consists of three subsystems: a vision subsystem that processes motion video input to examine the external situation, an action/reaction subsystem that generates actions based on the internal logic of a task as well as situated reactions triggered by the perceived external situation, and a facial animation subsystem that generates a three-dimensional face capable of various facial displays. From experiments using the system with a number of subjects, we found that subjects generally tended to try to interpret the facial displays of the computer. Such involvement prevented them from concentrating on the task. We also found that subjects never recognized situated reactions of the computer that were unrelated to the task, although they unconsciously responded to them. These findings seem to imply subliminal involvement of the subjects caused by facial displays and situated reactions.
Article
Apple's Chairman of the Board discusses the future of three core technologies—hypermedia, simulation and artificial intelligence—and the role each will play in education. The following speech was presented to an audience of teachers almost two years ago. Its message, however, is as timely and inspirational today as we prepare for a new era.
Article
Current approaches to the development of natural language dialogue systems are discussed, and it is claimed that they do not sufficiently consider the unique qualities of man-machine interaction as distinct from general human discourse. It is concluded that empirical studies of this unique communication situation are required for the development of user-friendly interactive systems. One way of achieving this is through the use of so-called Wizard of Oz studies. The focus of the work described in the paper is on the practical execution of the studies and the methodological conclusions drawn on the basis of the authors' experience. While the focus is on natural language interfaces, the methods used and the conclusions drawn from the results obtained are of relevance also to other kinds of intelligent interfaces.
Article
In this paper we set out to provide a full checklist to compare ECAs from four points of view: design, usability, practical usage, and user perception. By listing all the factors which contribute to the 'mind' and 'body' aspects of an ECA, we hope to create a common ground for comparing ECAs from a technical, design point of view. By identifying usability aspects and methods to measure them, ECAs could be compared from the point of view of usefulness. Thirdly, there are the aspects of how the agent is 'subjectively experienced' by the user. Finally, there are aspects of practical applicability, such as cost. Ideally, the outcome of dedicated evaluation experiments could serve as 'design guidelines' to define the 'best' ECA for a given purpose.
The Illusion of Life Revisited
  • T Barker
Emotional Expression
  • G Collier
The Relationship between Visual Abstraction and the Effectiveness of a Pedagogical Character-Agent
  • H Haddah
  • J Klobas