Article · PDF Available

Survey of Existing Interactive Systems

Authors:
RINDI
... This paper describes the scale construction and how these new scales can be used in the UEQ+ framework [3,4] to measure the UX of systems based on voice interaction. A few questionnaires [6,7,8,9] measure the usability of voice systems, but they concentrate solely on the task-related aspects of an interaction and ignore non-task-related, hedonic aspects of voice interactions. ...
... This article describes the construction of voice interaction scales for the UEQ+ framework. The modular concept of the UEQ+ is based on various scales, which allows the measurement of product-specific UX aspects [9]. The extension of the UEQ+ scale type 1 by voice interaction closes a gap in the UEQ+ and demonstrates a new method for the flexible evaluation of voice assistance systems. ...
Conference Paper
Full-text available
The UEQ+ is a modular framework for the construction of UX questionnaires. Researchers can pick the scales that fit their research question from a list of 16 available UX scales. Currently, no UEQ+ scales are available for measuring the quality of voice interactions. Given that this type of interaction is increasingly essential to the usage of digital products, this is a severe limitation on the products and usage scenarios that can be evaluated with the UEQ+. In this paper we describe the construction of three specific scales to measure the UX of voice interactions. In addition, we discuss how these new scales can be combined with existing UEQ+ scales in evaluation projects. CCS CONCEPTS • Human-centred computing • Human-computer interaction • HCI design and evaluation methods
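The modular scoring idea behind the UEQ+ can be sketched as follows: per-scale item means are combined into a single importance-weighted KPI. The sketch below is an illustration under stated assumptions, not code from the paper; the scale names, ratings, and importance weights are invented.

```python
# Hedged sketch of UEQ+-style aggregation: items are rated on the usual
# -3..+3 range, each selected scale gets a 1..7 importance rating, and the
# KPI is the importance-weighted mean of the scale means. All numbers and
# scale names below are illustrative assumptions.

def scale_mean(item_ratings):
    """Mean of one participant's item ratings on a single UEQ+ scale."""
    return sum(item_ratings) / len(item_ratings)

def ueq_plus_kpi(scales):
    """Importance-weighted KPI over the selected scales.

    scales: list of (importance, item_ratings) pairs, where importance is
    the participant's 1..7 rating of how important that UX aspect is.
    """
    total_importance = sum(imp for imp, _ in scales)
    return sum(imp / total_importance * scale_mean(items)
               for imp, items in scales)

# Example: two hypothetical voice-interaction scales and one classic scale.
scales = [
    (7, [2, 1, 2, 1]),   # e.g. "Response Quality", rated very important
    (5, [0, 1, 1, 0]),   # e.g. "Comprehensibility"
    (3, [-1, 0, 1, 0]),  # e.g. "Efficiency"
]
print(round(ueq_plus_kpi(scales), 2))  # → 0.87
```

In practice the published UEQ+ analysis materials perform this aggregation across all participants; the sketch only shows the shape of the computation for a single respondent.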
... There already exist several questionnaires [4,5,6,7] that measure the usability of voice systems, that is, the task-related aspects of an interaction with such a system. But concentrating on such task-related properties alone is not enough. ...
... Using the utterance data from the dialogue script and the chat logs collected from our Wizard of Oz experiments, we intend to populate the ontology with instance-level data and test it with randomized selections from user utterances. We also plan to report a qualitative assessment with the Trindi tick-list [86], a survey for dialogue systems, with prospective users. ...
Article
Full-text available
Background: In the United States and parts of the world, human papillomavirus vaccine uptake is below the prescribed coverage rate for the population. Some studies have noted that dialogue that communicates the risks and benefits, as well as patient concerns, can improve uptake levels. In this paper, we introduce an application ontology for health information dialogue called the Patient Health Information Dialogue Ontology, for patient-level human papillomavirus vaccine counseling and potentially for any health-related counseling. Results: The ontology's class-level hierarchy is segmented into four basic levels: Discussion, Goal, Utterance, and Speech Task. The ontology also defines core low-level utterance interactions for communicating human papillomavirus health information. We discuss the design of the ontology and the execution of the utterance interaction. Conclusion: With an ontology that represents patient-centric dialogue to communicate health information, we have an application-driven model that formalizes the structure of health information communication and a reusable scaffold that can be integrated into software agents. Our next step will be to develop the software engine that will use the ontology and automate the dialogue interaction of a software agent.
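The four-level class hierarchy named in the abstract can be pictured as a simple nested structure. The sketch below only illustrates the levels listed above (Discussion, Goal, Utterance, Speech Task); it is not taken from the ontology itself, which defines many classes and properties at each level.

```python
# Minimal, purely illustrative sketch of the four-level hierarchy named in
# the abstract: Discussion > Goal > Utterance > Speech Task.

HIERARCHY = {
    "Discussion": {
        "Goal": {
            "Utterance": {
                "SpeechTask": {},
            },
        },
    },
}

def depth(tree, level=0):
    """Number of nesting levels in the hierarchy."""
    if not tree:
        return level
    return max(depth(subtree, level + 1) for subtree in tree.values())

print(depth(HIERARCHY))  # → 4, the four basic levels
```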
... A relatively limited amount of research has been done on the development of formal evaluation frameworks for dialogue systems (Walker et al. 1998; Paek 2007). However, one of these, the TRINDI "tick list" (Bohlin et al. 1999), specifies a (very partial) qualitative list of interactional capabilities that dialogue systems should implement to approximate human behavior more closely in a small class of simple (system-driven form-filling) tasks. The capabilities include the following. ...
Chapter
Full-text available
Automated dialogue systems represent a promising approach for health care promotion, thanks to their ability to emulate the experience of face-to-face interactions between health providers and patients and the growing ubiquity of home-based and mobile conversational assistants such as Apple’s Siri and Amazon’s Alexa. However, patient-facing conversational interfaces also have the potential to cause significant harm if they are not properly designed. In this chapter, we first review work on patient-facing conversational interfaces in healthcare, focusing on systems that use embodied conversational agents as their user interface modality. We then systematically review the kinds of errors that can occur if these interfaces are not properly constrained and the kinds of safety issues these can cause. We close by outlining design recommendations for avoiding these issues.
... The work presented here builds on the "Trindi Tick-list" (Bos et al., 1999) which was constructed in the TRINDI project 1 to examine whether certain dialogue behaviours can be reliably manifested by a dialogue system. The original tick-list is still being used (Hofmann et al., 2014), and there have been later revisions and amendments (although these remain to be published). ...
Preprint
Full-text available
Perceptions of system competence and communicative ability, termed partner models, play a significant role in speech interface interaction. Yet we do not know what the core dimensions of this concept are. Taking a psycholexical approach, our paper is the first to identify the key dimensions that define partner models in speech agent interaction. Through a repertory grid study (N=21), a review of key subjective questionnaires, an expert review of the resulting word pairs, and an online study of 356 users of speech interfaces, we identify three key dimensions that make up a user's partner model: 1) perceptions of competence and capability; 2) assessment of human-likeness; and 3) a system's perceived cognitive flexibility. We discuss the implications for partner modelling as a concept, emphasising the importance of salience and the dynamic nature of these perceptions.
Book
With recent advances in natural language understanding techniques and far-field microphone arrays, natural language interfaces, such as voice assistants and chatbots, are emerging as a popular new way to interact with computers. They have made their way out of the industry research labs and into the pockets, desktops, cars and living rooms of the general public. But although such interfaces recognize bits of natural language, and even voice input, they generally lack conversational competence, or the ability to engage in natural conversation. Today's platforms provide sophisticated tools for analyzing language and retrieving knowledge, but they fail to provide adequate support for modeling interaction. The user experience (UX) designer or software developer must figure out how a human conversation is organized, usually relying on commonsense rather than on formal knowledge. Fortunately, practitioners can rely on conversation science. This book adapts formal knowledge from the field of Conversation Analysis (CA) to the design of natural language interfaces. It outlines the Natural Conversation Framework (NCF), developed at IBM Research, a systematic framework for designing interfaces that work like natural conversation. The NCF consists of four main components: 1) an interaction model of "expandable sequences," 2) a corresponding content format, 3) a pattern language with 100 generic UX patterns and 4) a navigation method of six basic user actions. The authors introduce UX designers to a new way of thinking about user experience design in the context of conversational interfaces, including a new vocabulary, new principles and new interaction patterns. User experience designers and graduate students in the HCI field as well as developers and conversation analysis students should find this book of interest.
Chapter
The permanent use of smartphones impacts the automotive environment. People tend to use their smartphone's Internet capabilities manually while driving, which endangers the driver's safety. Therefore, an intuitive in-car speech interface to the Internet is crucial in order to reduce driver distraction. Before developing an in-car speech dialog system for a new domain, one has to examine which speech-based human-machine interface concept is the most intuitive. This work-in-progress report describes the design of various human-machine interface concepts that include speech as the main input and output modality. These concepts are based on two different dialog strategies: a command-based and a conversational speech dialog. Different graphical user interfaces, one including an avatar, have been designed in order to best support the speech dialog strategies and to raise the level of naturalness in the interaction. For each human-machine interface concept, a prototype that allows for an online hotel booking has been developed. These prototypes will be evaluated in driving-simulator experiments on usability and driving performance.
Chapter
Intelligent Environments (IEs) consist of various entities and provide and execute different tasks in parallel. Since neither the set of entities nor the tasks may remain constant while the user interacts with the system, we speak of the changing nature of such environments. We have identified adaptation as the major characteristic of a spoken dialogue manager (SDM) that allows for consistent interface provision. We have presented several approaches to SDM that partly cover specific aspects of adaptation or adaptivity in Sect. 2.4. However, in order to develop a system that provides adaptive spoken dialogue within IEs, it is necessary to give a general definition of adaptation. This definition concerns the three main stakeholders involved in spoken interaction: the user(s), the spoken dialogue system (SDS), and the IE. Of course, the fourth party involved is the adaptive spoken dialogue manager (ASDM), which plays a key role: while the ASDM must handle adaptation, the other parties provoke it. In the following we discuss our proposed definition and provide a complete description of adaptivity regarding the stakeholders mentioned above.
Chapter
In this chapter we report on the experiments and evaluation sessions we carried out with the OwlSpeak ASDM. During the implementation phase and the course of the initial investigations, we established an evaluation strategy that covers three aspects: system integrity, dialogue optimisation, and practicability.
Conference Paper
Full-text available
We present the dialogue module of the speech-to-speech translation system VERBMOBIL. We follow the approach that the solution to dialogue processing in a mediating scenario cannot depend on a single constrained processing tool, but on a combination of several simple, efficient, and robust components. We show how our solution to dialogue processing works when applied to real data, and give some examples where our module contributes to the correct translation from German to English.
Technical Report
Full-text available
This document describes the design and implementation of TRAINS-96, a prototype mixed-initiative planning assistant system. The TRAINS-96 system helps a human manager solve routing problems in a simple transportation domain. It interacts with the human using spoken, typed, and graphical input and generates spoken output and graphical map displays. The key to TRAINS-96 is that it treats the interaction with the user as a dialogue in which each participant can do what they do best. The TRAINS-96 system is intended as both a demonstration of the feasibility of realistic mixed-initiative planning and as a platform for future research. This document describes both the design of the system and such features of its use as might be useful for further experimentation. Further references and a comprehensive set of manual pages are also provided.
Article
Full-text available
The Trains project is an effort to build a conversationally proficient planning assistant. A key part of the project is the construction of the Trains system, which provides the research platform for a wide range of issues in natural language understanding, mixed-initiative planning systems, and representing and reasoning about time, actions and events. Four years have now passed since the beginning of the project. Each year we have produced a demonstration system that focused on a dialog that illustrates particular aspects of our research. The commitment to building complete integrated systems is a significant overhead on the research, but we feel it is essential to guarantee that the results constitute real progress in the field. This paper describes the goals of the project, and our experience with the effort so far. This paper is to appear in the Journal of Experimental and Theoretical AI, 1995. The TRAINS project has been funded in part by ONR/ARPA grant N00014-92-J-1512, U.S. Air ...
Conference Paper
Full-text available
This paper describes a system that leads us to believe in the feasibility of constructing natural spoken dialogue systems in task-oriented domains. It specifically addresses the issue of robust interpretation of speech in the presence of recognition errors. Robustness is achieved by a combination of statistical error post-correction, syntactically- and semantically-driven robust parsing, and extensive use of the dialogue context. We present an evaluation of the system using time-to-completion and the quality of the final solution that suggests that most native speakers of English can use the system successfully with virtually no training.
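One of the ingredients mentioned above, statistical error post-correction of the recognizer output, can be illustrated with a deliberately simplified sketch: map each out-of-vocabulary token to its closest in-vocabulary word. This is not the TRAINS implementation (which used a statistically trained post-correction model); the toy lexicon and similarity cutoff below are invented for illustration.

```python
# Hedged sketch of lexicon-driven post-correction of speech-recognizer
# output. Each token not found in the domain vocabulary is replaced by its
# closest in-vocabulary word by string similarity; unknown tokens with no
# close match pass through unchanged.

from difflib import get_close_matches

LEXICON = {"send", "the", "train", "from", "avon", "to", "bath"}  # toy vocabulary

def post_correct(tokens, lexicon=LEXICON):
    corrected = []
    for tok in tokens:
        if tok in lexicon:
            corrected.append(tok)
        else:
            match = get_close_matches(tok, lexicon, n=1, cutoff=0.6)
            corrected.append(match[0] if match else tok)
    return corrected

print(post_correct(["sent", "the", "tran", "from", "avon", "to", "bath"]))
# → ['send', 'the', 'train', 'from', 'avon', 'to', 'bath']
```

A real post-corrector would be trained on the recognizer's actual error patterns rather than plain string similarity, but the interface is the same: a noisy token sequence in, a corrected sequence out, before robust parsing and dialogue context take over.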
Article
Let me start by briefly describing the cooperative process. EAGLES (Expert Advisory Group on Language Engineering Standards) is an initiative of the Commission of the European Union, whose purposes include producing specifications and guidelines and encouraging cooperation between industry and academia, and between European countries. The initiative comprised five working groups, one of which, the Spoken Language Working Group (WG5), is responsible for the handbook under review.
Conference Paper
Verbmobil is a speech-to-speech translation system for spontaneously spoken negotiation dialogs. The current system translates 74.2% of spontaneously spoken German input. We give an overview of the Verbmobil system. After introducing the Verbmobil scenario and the unique constraints of the project, we describe the underlying system architecture and its realization. The progress achieved on the end-to-end translation rate owes much to the increase of the word recognition rate from 45% in 1993 to 87% in 1996. In order to achieve the envisaged coverage on the uncertain speech recognizer output, deep and shallow approaches to the analysis and transfer problem had to be combined.
Conference Paper
The ambitious goal of the Verbmobil project is the development of a portable speech-to-speech translation system dealing with face-to-face dialogs. It constitutes a new generation of translation systems in which spontaneously spoken language, speaker independence and speaker adaptability are among the main features. Verbmobil brings together researchers from the fields of speech processing, computational linguistics and artificial intelligence and goes beyond the state of the art in these areas. Besides the speech and language processing issues, the specific constraints of the project represent an extreme challenge for project management, software engineering, and test and evaluation of the system: size and complexity (150 researchers from 29 organizations at different sites on three continents are involved in the software development), and integration of heterogeneous software (in order to reuse existing software, hardware and know-how, only a few restrictions were imposed on the partners). The article describes the Verbmobil scenario, the system architecture and the system evolution.
• Yes. In cases where the user responds to something not explicitly asked for by the system:
System: Von wo nach wo möchten Sie fahren? ("From where to where would you like to travel?")
User: Ich möchte gerne am Montag fahren. ("I would like to travel on Monday.")