Multimodal interfaces are expected to improve the input and output capabilities of increasingly sophisticated applications. Several approaches aim at formally describing multimodal interaction. However, they rarely treat it as a continuous flow of actions, preserving its dynamic nature and considering all modalities at the same level. This work proposes a model-based approach called Practice-oriented Analysis and Description of Multimodal Interaction (PALADIN) that addresses these shortcomings when describing sequential multimodal interaction. It arranges a set of parameters to quantify multimodal interaction as a whole, in order to minimise the existing differences between modalities. Furthermore, interaction is described stepwise to preserve the dynamic nature of the dialogue process. PALADIN defines a common notation to describe interaction in different multimodal contexts, providing a framework to assess and compare the usability of systems. Our approach was integrated into four real applications to conduct two experiments with users. The experiments demonstrate the validity and effectiveness of the proposed model for analysing and evaluating multimodal interaction.
... (S3) PALADIN [138,139] is a metamodel aimed at describing multimodal interaction in a uniform and dynamic way. Its design arranges a set of parameters to quantify multimodal interaction, minimising the existing differences between modalities. ...
... In order to ease its integration into research and production systems, a supporting framework was developed. It is called the Instantiation Framework (IF); its implementation is open-source and can be downloaded from  as well. The IF is aimed at serving as a bridge between the interaction source (e.g., a filter extracting live interaction from an application, an application simulating user-system interaction, an interaction log) and the PALADIN instances. ...
... The current implementation of PALADIN and the IF can be easily integrated into the source code of a Java application. In  it is carefully described how PALADIN can be used with or without the IF, and how these tools are integrated into an application to implement multimodal interaction analysis. The IF provides a facade from which it is easily notified by an external tool instrumenting interaction. ...
... It structures all these data to be the basis for the implementation of QoE analysis and inference processes. We propose to base this meta-model design on an existing one , , which is being developed in parallel to this work, in a joint effort between the Cátedra SAES  and the Telekom Innovation Laboratories  to quantify interaction in multimodal contexts. The base meta-model describes interaction by turn, i.e., each time the user or the system takes part in the dialog, following a dialog structure, i.e., a set of ordered system- and user-turns. ...
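The turn-based dialog structure described in this snippet can be sketched as a minimal data model (an illustrative Python sketch, not the actual PALADIN meta-model; all class and field names here are assumptions):

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class Actor(Enum):
    USER = "user"
    SYSTEM = "system"

@dataclass
class Turn:
    """One contribution to the dialog, annotated with its modality and timing."""
    actor: Actor
    modality: str          # e.g. "speech", "touch", "gui"
    start_ms: int
    end_ms: int

    @property
    def duration_ms(self) -> int:
        return self.end_ms - self.start_ms

@dataclass
class Dialogue:
    """An ordered set of system- and user-turns, kept in dialog order."""
    turns: List[Turn] = field(default_factory=list)

    def add(self, turn: Turn) -> None:
        self.turns.append(turn)   # preserves the step-by-step structure

    def turns_by(self, actor: Actor) -> List[Turn]:
        return [t for t in self.turns if t.actor is actor]

# Example: a two-turn exchange (system prompt, then a touch answer)
d = Dialogue()
d.add(Turn(Actor.SYSTEM, "speech", 0, 1200))
d.add(Turn(Actor.USER, "touch", 1500, 1900))
```

Keeping every turn in order, rather than only aggregate counts, is what allows the stepwise analysis the snippet refers to.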
... The process is completely automatic, as data in Model A instances are used by ATL to fill data fields in the Model B instance according to the rules. Some validation tests were conducted in the context of the PALADIN project , . The participants used multimodal input (speech + touch) to book a restaurant on an Android smartphone. ...
... The naturalness attribute is present in the MOS-X and SUISQ-R questionnaires, referring to intonation and rhythm; they probe whether the voice resembles a human's, and whether the system sounded like a normal person rather than a robotic synthetic voice. The remaining questionnaires have no items that differentiate between command-based and conversational interaction; for example, AttrakDiff has been used to evaluate conversation-based dialogue systems  as well as systems that use command-based voice interaction . Teixeira et al.  use the ICF-US test to evaluate a system aimed at older adults that uses a command-based voice interface. ...
Voice user interfaces (VUIs) have been increasingly used in everyday settings and are growing in popularity. These interfaces predominantly support eyes-free and hands-free interactions. This kind of experience is still a nascent field compared to other input methods such as touch or the keyboard/mouse. Thus, it is important to identify tools used to evaluate the usability of VUIs. This article presents a systematic review in which we analyzed 57 articles; it describes nine questionnaires used for evaluating the usability of VUIs, assessing the potential suitability of these questionnaires to measure different types of interactions and various usability dimensions. We found that these questionnaires were used to evaluate the usability of voice-only and voice-added VUIs: AttrakDiff, ICF-US, MOS-X, SUISQ-R, SUS, SASSI, UEQ, PARADISE and USE, with SUS being the most commonly used. However, its items do not directly assess voice quality, although it evaluates the general user interaction with a system. All the questionnaires include items related to three usability dimensions (effectiveness, efficiency, and satisfaction). The questionnaire with the most homogeneous coverage regarding the number of items per aspect of usability is SASSI. It is common practice to use multiple questionnaires to obtain a more complete measurement of usability. We see a need for more usability research on the differences between voice interaction with diverse display types (voice-first, voice-only, voice-added) and dialog types (command-based and conversational), and on how usability affects user expectations of VUIs.
... By having access to them, the evaluator is able to analyze current data and get a better grasp of the evaluation's current status, making small changes to it if required. The major difference of DynEaaS, when compared to other evaluation frameworks such as those proposed by Navarro et al. , Ickin et al.  and Witt , is that it specifically addresses the context of use and emphasizes the need to collect the data at the best possible time, or at least to contextualize it as well as possible. For example, it makes far more sense to ask a user about an application feature right after he has used (or had problems with) it than to ask the same questions at the end of the evaluation session, when most of the impressions have probably faded; or it might not be a good time to enrol the user in providing feedback if he/she is leaving for an appointment. ...
Multimodal user interfaces provide users with different ways of interacting with applications. This has advantages both in providing interaction solutions with additional robustness in environments where a single modality might result in ambiguous input or output (e.g., speech in noisy environments), and for users with some kind of limitation (e.g., hearing difficulties resulting from ageing) by yielding alternative and more natural ways of interacting. The design and development of applications supporting multimodal interaction involves numerous challenges, particularly if the goals include developing multimodal applications for a wide variety of scenarios, designing complex interaction and, at the same time, proposing and evolving interaction modalities. These require the choice of an architecture, development and evaluation methodologies, and the adoption of principles that foster constant improvements at the interaction-modalities level without disrupting existing applications. Based on previous and ongoing work by our team, we present our approach to the design, development and evaluation of multimodal applications covering several devices and application scenarios.
... The design we propose in this work is based on PALADIN [16,17,18], a model aimed at quantifying and dynamically (i.e., step by step) describing the interaction process in multimodal environments. ...
This article describes a new approach to modelling the quality of experience (QoE) of users in mobile environments. The model presented is named CARIM, and it attempts to answer the following questions: how can QoE be measured in mobile environments from the analysis of user-system interaction? How can different QoE measurements be compared and contrasted? To this end, CARIM uses a set of parameters to describe, step by step, the interaction between the user and the system, the context in which this interaction takes place, and the level of quality perceived by users. These parameters are structured within a model, which provides (1) a common representation of how the interaction process unfolds in different mobile environments and (2) a basis for computing QoE automatically as well as for comparing different interaction records.
CARIM is a run-time model that enables the dynamic analysis of interaction, as well as decision making based on a given QoE level at execution time. Some applications use this at run time to adapt themselves and thus provide a better experience to users.
In conclusion, CARIM provides unified criteria with which to compute, analyse and compare QoE in mobile systems of different natures.
GUI testing is essential to ensure the validity and quality of system response, but applying it to a development is not straightforward: it is time-consuming, requires specialized personnel, and involves complex activities that are sometimes implemented manually. GUI testing tools help support these processes. However, integrating them into software projects may be troublesome, mainly due to the diversity of GUI platforms and operating systems in use. This work presents the design and implementation of Open HMI Tester (OHT), an application framework for the automation of testing processes based on GUI introspection. It is cross-platform, and provides an adaptable design aimed at supporting major event-based GUI platforms. It can also be integrated into ongoing and legacy developments using dynamic library preloading. OHT provides a robust and extensible basis to implement GUI testing tools. A capture-and-replay approach has been implemented as a proof of concept. Introspection is used to capture essential GUI and interaction data. It is also used to simulate real human interaction in order to increase robustness and tolerance to changes between testing iterations. OHT is being actively developed by the open-source community and, as shown in this paper, it is ready to be used in current software projects.
Designing interactive computer systems to be efficient and easy to use is important so that people in our society may realize the potential benefits of computer-based tools .... Although modern cognitive psychology contains a wealth of knowledge of human ...
In the MATIS project a multimodal system has been developed for train timetable information. The aim of the project was to obtain guidelines for designing multimodal interfaces for information systems. The MATIS system accepts input both in spoken and in graphical mode (no keyboard input) and provides feedback in the same two modes. The user can choose at any time which of the input modalities (s)he prefers to use for a certain action. A user test was carried out in which 25 subjects were asked to evaluate the system. For comparison, users were also asked to test a GUI (Graphical User Interface) version of the train timetable information system as well as a speech-only version of the system. We measured the efficiency and the effectiveness of the interaction and the user satisfaction with all three systems.
This essay is a personal reflection from an Artificial Intelligence (AI) perspective on the term HCI. Especially for the transfer of AI-based HCI into industrial environments, we survey existing approaches and examine how AI helps to solve fundamental problems of HCI technology. The user and the system must have a collaborative goal. The concept of collaborative multimodality could serve as the missing link between traditional HCI and intuitive human-centred designs in the form of, e.g., natural language interfaces or intelligent environments. Examples are provided in the medical imaging domain.
We demonstrate how the Tycoon framework can be put to practice with the Anvil tool in a concrete case study. Tycoon offers a coding scheme and analysis metrics for multimodal communication scenarios. Anvil is a generic, extensible and ergonomically designed annotation tool for videos. In this paper, we describe the Anvil tool, the Tycoon scheme/metrics, and their implementation in Anvil for a video sample. A new Anvil feature, motivated by the Tycoon scheme, is presented: non-temporal annotation objects – an important concept, we argue, of general interest. We also outline future plans for automatizing Tycoon metrics computation using Anvil plug-ins.
The lack of suitable training and testing data is currently a major roadblock in applying machine-learning techniques to dialogue management. Stochastic modelling of real users has been suggested as a solution to this problem, but to date few of the proposed models have been quantitatively evaluated on real data. Indeed, there are no established criteria for such an evaluation. This paper presents a systematic approach to testing user simulations and assesses the most prominent domain-independent techniques using a large DARPA Communicator corpus of human-computer dialogues. We show that while recent advances have led to significant improvements in simulation quality, simple statistical metrics are still sufficient to discern synthetic from real dialogues.
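As a rough illustration of the kind of simple statistical metric the abstract refers to, one can compare the dialogue-act frequency distributions of a real and a simulated corpus; the divergence measure and act labels below are assumptions for the sketch, not the paper's exact evaluation procedure:

```python
import math
from collections import Counter

def kl_divergence(p_counts, q_counts, smoothing=1e-6):
    """KL divergence between two dialogue-act frequency distributions.

    A high value means the simulated corpus (q) is easy to tell apart
    from the real one (p). Smoothing avoids log(0) for unseen acts.
    """
    keys = set(p_counts) | set(q_counts)
    p_total = sum(p_counts.values()) + smoothing * len(keys)
    q_total = sum(q_counts.values()) + smoothing * len(keys)
    div = 0.0
    for k in keys:
        p = (p_counts.get(k, 0) + smoothing) / p_total
        q = (q_counts.get(k, 0) + smoothing) / q_total
        div += p * math.log(p / q)
    return div

# Hypothetical act counts from a real corpus and two simulators
real = Counter({"request": 40, "inform": 50, "bye": 10})
identical = Counter(real)                               # perfect simulator
skewed = Counter({"request": 90, "inform": 5, "bye": 5})  # poor simulator
```

A simulator whose act distribution matches the real corpus scores near zero; a skewed one scores clearly higher, which is exactly the discriminative power such simple metrics still have.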
The MITRE Corporation's Evaluation Working Group has developed a methodology for evaluating multi-modal groupware systems and capturing data on human-human interactions. The methodology consists of a framework for describing collaborative systems, a scenario-based evaluation approach, and evaluation metrics for the various components of collaborative systems. We designed and ran two sets of experiments to validate the methodology by evaluating collaborative systems. In one experiment, we compared two configurations of a multi-modal collaborative application using a map navigation scenario requiring information sharing and decision making. In the second experiment, we applied the evaluation methodology to a loosely integrated set of collaborative tools, again using a scenario-based approach. In both experiments, multi-modal, multi-user data were collected, visualized, annotated, and analyzed.
A key advantage of taking a statistical approach to spoken dialogue systems is the ability to formalise dialogue policy design as a stochastic optimization problem. However, since dialogue policies are learnt by interactively exploring alternative dialogue paths, conventional static dialogue corpora cannot be used directly for training and instead, a user simulator is commonly used. This paper describes a novel statistical user model based on a compact stack-like state representation called a user agenda which allows state transitions to be modeled as sequences of push- and pop-operations and elegantly encodes the dialogue history from a user's point of view. An expectation-maximisation based algorithm is presented which models the observable user output in terms of a sequence of hidden states and thereby allows the model to be trained on a corpus of minimally annotated data. Experimental results with a real-world dialogue system demonstrate that the trained user model can be successfully used to optimise a dialogue policy which outperforms a hand-crafted baseline in terms of task completion rates and user satisfaction scores.
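The agenda idea, encoding the user's pending dialogue acts as a stack manipulated by push- and pop-operations, can be sketched as follows (a simplified illustration; the act notation and class interface are assumptions, not the paper's model):

```python
class UserAgenda:
    """Stack-like representation of the user's pending dialogue acts.

    The next act to utter sits on top; reactions to system turns
    (clarifications, corrections) are pushed above the original goals.
    """
    def __init__(self, goals):
        # deepest goal at the bottom, first planned act on top
        self._stack = list(reversed(goals))

    def push(self, act):
        """React to a system turn, e.g. schedule a correction."""
        self._stack.append(act)

    def pop(self):
        """Emit the next user act (a state transition in the model)."""
        return self._stack.pop()

    def __len__(self):
        return len(self._stack)

# The user plans to state a destination, then a travel date.
agenda = UserAgenda(["inform(dest=London)", "inform(date=Friday)"])
# The system misrecognises the destination, so a correction is pushed
# on top of the agenda and is uttered before the remaining goals.
agenda.push("negate(dest=London)")
next_act = agenda.pop()
```

The appeal of this representation is that the whole dialogue history, from the user's point of view, is compressed into the current stack contents.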
What are the most suitable interaction paradigms for navigational and informative tasks for pedestrians? Is there an influence of social and situational context on multimodal interaction? Our study takes a closer look at a multimodal system on a handheld device that was recently developed as a prototype for mobile navigation assistance. The system allows visitors of a city to navigate, to get information on sights, and to use and manipulate map information. In an outdoor evaluation, we studied the usability of such a system on site. The study yields insight into how multimodality can enhance the usability of handheld devices and their future services. We show, for example, that for our more complicated tasks multimodal interaction is superior to classical unimodal interaction.
In this paper we present our work toward the creation of a multimodal expressive Embodied Conversational Agent (ECA). Our agent, called Greta, exhibits nonverbal behaviors synchronized with speech. We are using the taxonomy of communicative functions developed by Isabella Poggi  to specify the behavior of the agent. Based on this taxonomy, a representation language, the Affective Presentation Markup Language (APML), has been defined to drive the animation of the agent . Lately, we have been working on creating not a generic agent but an agent with individual characteristics, concentrating on the behavior specification for an individual agent. In particular, we have defined a set of parameters to change the expressivity of the agent's behaviors. Six parameters have been defined and implemented to encode gesture and face expressivity. We have performed perceptual studies of our expressivity model.
Multimodal interaction enables the user to employ different modalities such as voice, gesture and typing for communicating with a computer. This paper presents an analysis of the integration of multiple communication modalities within an interactive system. To do so, a software engineering perspective is adopted. First, the notion of “multimodal system” is clarified. We aim at proving that two main features of a multimodal system are the concurrency of processing and the fusion of input/output data. On the basis of these two features, we then propose a design space and a method for classifying multimodal systems. In the last section, we present a software architecture model of multimodal systems which supports these two salient properties: concurrency of processing and data fusion. Two multimodal systems developed in our team, VoicePaint and NoteBook, are used to illustrate the discussion.
The development and the evaluation of multimodal interactive systems on mobile phones remains a difficult task. In this paper we address this problem by describing a component-based approach, called ACICARE, for developing and evaluating multimodal interfaces on mobile phones. ACICARE is dedicated to the overall iterative design process of mobile multimodal interfaces, which consists of cycles of designing, prototyping and evaluation. ACICARE is based on two complementary tools that are combined: ICARE and ACIDU. ICARE is a component-based platform for rapidly developing multimodal interfaces. We adapted the ICARE components to run on mobile phones and we connected them to ACIDU, a probe that gathers customers' usage on mobile phones. By reusing and assembling components, ACICARE enables the rapid development of multimodal interfaces as well as the automatic capture of multimodal usage for in-field evaluations. We illustrate ACICARE using our contact manager system, a multimodal system running on the SPV c500 mobile phone.
Designing and implementing multimodal applications that take advantage of several recognition- based interaction techniques (e.g. speech and gesture recognition) is a difficult task. The goal of our research is to explore how simple modelling techniques and tools can be used to support the designers and developers of multimodal systems. In this paper, we discuss the use of finite state machines (FSMs) for the design and prototyping of multimodal commands. In particular, we show that FSMs can help designers in reasoning about synchronization patterns problems. Finally, we describe an implementation of our FSM-based approach, in a toolkit whose aim is to facilitate the iterative process of designing, prototyping and testing multimodality.
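A minimal sketch of the FSM idea for a multimodal command, here a "put that there"-style interaction fusing speech and pointing gestures (the states, event names and the command itself are illustrative assumptions, not the toolkit's actual notation):

```python
# Transition table: (current state, input event) -> next state.
# The speech fragments and pointing gestures must arrive in a
# synchronised order before the multimodal command is complete.
TRANSITIONS = {
    ("idle",         "speech:put_that"): "await_source",
    ("await_source", "gesture:point"):   "await_there",
    ("await_there",  "speech:there"):    "await_target",
    ("await_target", "gesture:point"):   "done",
}

def run(events, state="idle"):
    """Feed a sequence of input events through the FSM."""
    for ev in events:
        state = TRANSITIONS.get((state, ev), "error")
        if state == "error":
            break
    return state

# A well-formed command reaches "done"; a missing gesture is rejected.
ok = run(["speech:put_that", "gesture:point", "speech:there", "gesture:point"])
bad = run(["speech:put_that", "speech:there"])
```

Laying the synchronization pattern out as an explicit transition table is what lets designers reason about which event orderings are legal before any recognizers are wired in.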
Representing the behaviour of multimodal interactive systems in a complete, concise and non- ambiguous way is still a challenge for formal description techniques. Indeed, multimodal interactive systems embed specific constraints that are either cumbersome or impossible to capture with classical formal description techniques. This is due to both the multiple facets of a multimodal system and the strong temporal constraints usually encountered in this kind of systems. This paper presents a formal description technique dedicated to the engineering of interactive multimodal systems. Its aim is to provide a precise way for describing, analyzing and reasoning about multi-modal interactive systems prior to their implementation. One of the basic components for multi-modal systems is the fusion mechanisms. This paper focuses on this component and, in order to exemplify the approach, the formal description technique is used for the modelling and the analysis of one fusion mechanism. Lastly, benefits and limitations of the approach are discussed.
We propose the CARE properties as a simple way of characterising and assessing aspects of multimodal interaction: the Complementarity, Assignment, Redundancy, and Equivalence that may occur between the interaction techniques available in a multimodal user interface. We provide a formal definition of these properties and use the notion of compatibility to show how the system CARE properties interact with user CARE-like properties in the design of a system. The discussion is illustrated with MATIS, a Multimodal Air Travel Information System.
In this paper, the efficiency and usage patterns of input modes in multimodal dialogue systems are investigated for both desktop and personal digital assistant (PDA) working environments. For this purpose a form-filling travel reservation application is evaluated that combines the speech and visual modalities; three multimodal modes of interaction are implemented, namely "Click-To-Talk", "Open-Mike" and "Modality-Selection". The three multimodal systems are evaluated and compared with the "GUI-Only" and "Speech-Only" unimodal systems. Mode and duration statistics are computed for each system, for each turn and for each attribute in the form. Turn time is decomposed into interaction and inactivity time, and the statistics for each input mode are computed. Results show that multimodal and adaptive interfaces are superior in terms of interaction time, but not always in terms of inactivity time. Also, users tend to use the most efficient input mode, although our experiments show a bias towards the speech modality.
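The decomposition of turn time into interaction and inactivity time can be sketched as follows (an assumed formulation for illustration: interaction time is the total duration of active-input spans within the turn, inactivity time is the remainder):

```python
def decompose_turn(turn_start, turn_end, activity_spans):
    """Split total turn time into (interaction, inactivity) time.

    activity_spans: (start, end) intervals during which the user was
    actively providing input; assumed non-overlapping and within the
    turn boundaries (clipped defensively just in case).
    """
    interaction = sum(
        max(0.0, min(e, turn_end) - max(s, turn_start))
        for s, e in activity_spans
    )
    total = turn_end - turn_start
    return interaction, total - interaction

# A 10 s turn in which the user actively provided input for 3 s + 2 s.
inter, inactive = decompose_turn(0.0, 10.0, [(1.0, 4.0), (6.0, 8.0)])
```

Separating the two components matters because, as the abstract notes, an interface can win on interaction time while still losing on inactivity (thinking or waiting) time.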
In this paper we report on our experience on the design and evaluation of multimodal user interfaces in various contexts. We introduce a novel combination of existing design and evaluation methods in the form of a 5-step iterative process and show the feasibility of this method and some of the lessons learned through the design of a messaging application for two contexts (in car, walking). The iterative design process we employed included the following five basic steps: 1) identification of the limitations affecting the usage of different modalities in various contexts (contextual observations and context analysis) 2) identifying and selecting suitable interaction concepts and creating a general design for the multimodal application (storyboarding, use cases, interaction concepts, task breakdown, application UI and interaction design), 3) creating modality-specific UI designs, 4) rapid prototyping and 5) evaluating the prototype in naturalistic situations to find key issues to be taken into account in the next iteration. We have not only found clear indications that context affects users' preferences in the usage of modalities and interaction strategies but also identified some of these. For instance, while speech interaction was preferred in the car environment users did not consider it useful when they were walking. 2D (finger strokes) and especially 3D (tilt) gestures were preferred by walking users.
In this paper, we propose two new objective metrics, relative modality efficiency and multimodal synergy, that can provide valuable information and identify usability problems during the evaluation of multimodal systems. Relative modality efficiency (when compared with modality usage) can identify suboptimal use of modalities due to poor interface design or information asymmetries. Multimodal synergy measures the added value from efficiently combining multiple input modalities, and can be used as a single measure of the quality of modality fusion and fission in a multimodal system. The proposed metrics are used to evaluate two multimodal systems that combine pen/speech and mouse/keyboard modalities respectively. The results provide much insight into multimodal interface usability issues, and demonstrate how multimodal systems should adapt to maximize modalities synergy resulting in efficient, natural, and intelligent multimodal interfaces.
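A plausible, deliberately simplified formulation of the two proposed metrics might look as follows; the paper's exact definitions may differ, and the numbers below are hypothetical:

```python
def relative_efficiency(time_per_mode, usage_per_mode):
    """Assumed formulation: a modality's share of completed work
    (usage) divided by its share of total interaction time.
    Values > 1 suggest the modality does more work per unit time;
    comparing this against raw modality usage can expose suboptimal
    use of a mode (e.g. due to interface design)."""
    total_time = sum(time_per_mode.values())
    total_use = sum(usage_per_mode.values())
    return {
        m: (usage_per_mode[m] / total_use) / (time_per_mode[m] / total_time)
        for m in time_per_mode
    }

def multimodal_synergy(t_multimodal, t_best_unimodal):
    """Assumed formulation: relative time saved by the multimodal
    system versus the best unimodal one; higher is better fusion."""
    return 1.0 - t_multimodal / t_best_unimodal

# Hypothetical data: speech completes half the work in 30% of the time.
eff = relative_efficiency({"speech": 30.0, "pen": 70.0},
                          {"speech": 50, "pen": 50})
# Hypothetical data: the multimodal system takes 8 s vs. 10 s unimodal.
synergy = multimodal_synergy(8.0, 10.0)
```

Under these assumptions speech would be flagged as the more efficient mode, and the 20% synergy value would summarize the added value of combining modalities in a single number.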
While multimodal interfaces are becoming more and more used and supported, their development is still difficult and there is a lack of authoring tools for this purpose. The goal of this work is to discuss how multimodality can be specified in model-based languages and to apply such a solution to the composition of graphical and vocal interactions. In particular, we show how to provide structured support that aims to identify the most suitable solutions for modelling multimodality at various levels of detail. This is obtained using, amongst other techniques, the well-known CARE properties in the context of a model-based language able to support service-based applications and modern Web 2.0 interactions. The method is supported by an authoring environment, which provides some specific solutions that can be modified by the designers to better suit their specific needs, and is able to generate implementations of multimodal interfaces in Web environments. An example of modelling a multimodal application and the corresponding, automatically generated, user interfaces is reported as well.
Over the past few years, multimodal interaction has been gaining importance in virtual environments. Although multimodality makes interaction with the environment more intuitive and natural, the development cycle of such an environment is often a long and expensive process. In our overall field of research, we investigate how model-based design can help shorten this process by designing the application with the use of high-level diagrams. In this scope, we present 'NiMMiT', a graphical notation suitable for expressing multimodal user interaction. We elaborate on the NiMMiT primitives and illustrate the notation in practice with a comprehensive example.
USer Interface eXtensible Markup Language (USIXML) consists of a User Interface Description Language (UIDL) allowing designers to apply a multi-path development of user interfaces. In this development paradigm, a user interface can be specified and produced at and from different, and possibly multiple, levels of abstraction while maintaining the mappings between these levels if required. Thus, the development process can be initiated from any level of abstraction and proceed towards obtaining one or many final user interfaces for various contexts of use at other levels of abstraction. In this way, the model-to-model transformation which is the cornerstone of Model-Driven Architecture (MDA) can be supported in multiple configurations, based on composition of three basic transformation types: abstraction, reification, and translation.
This paper defines the problem space of distributed, migratable and plastic user interfaces, and presents CAMELEON-RT, a technical answer to the problem. CAMELEON-RT is an architecture reference model that can be used for comparing and reasoning about existing tools as well as for developing future run-time infrastructures for distributed, migratable, and plastic user interfaces. We have developed an early implementation of a run-time infrastructure based on the precepts of CAMELEON-RT.
The use of tangible multimodal (TMM) systems, which let users of safety-critical systems continue to employ the physical objects, language and symbology of their workplace, is described. TMM users are capable of updating digital systems and of collaborating digitally with colleagues, as are users of more traditional systems. TMM has the portability, high resolution, scalability and physical properties of pen and paper and can meet the needs of officers in the field, in particular robustness to computer failure. TMM systems enable users to employ physical objects in their workplace along with natural spoken language, sketch, gesture and other input modalities to interact with information and with co-workers.
One important evolution in software applications is the spread of service-oriented architectures in ubiquitous environments. Such environments are characterized by a wide set of interactive devices, with interactive applications that exploit a number of functionalities developed beforehand and encapsulated in Web services. In this article, we discuss how a novel model-based UIDL can provide useful support both at design and runtime for these types of applications. Web service annotations can also be exploited for providing hints for user interface development at design time. At runtime the language is exploited to support dynamic generation of user interfaces adapted to the different devices at hand during the user interface migration process, which is particularly important in ubiquitous environments.
This paper describes a novel approach to model the quality of experience (QoE) of users in mobile environments. The Context-Aware and Ratings Interaction Model (CARIM) addresses the open questions of how to quantify user experiences from the analysis of interaction in mobile scenarios, and how to compare different QoE records to each other. A set of parameters are used to dynamically describe the interaction between the user and the system, the context in which it is performed and the perceived quality of users. CARIM structures these parameters into a uniform representation, supporting the dynamic analysis of interaction to determine QoE of users and enabling the comparison between different interaction records. Its run-time nature allows applications to make context- and QoE-based decisions in real-time to adapt themselves, and thus provide a better experience to users. As a result, CARIM provides unified criteria for the inference and analysis of QoE in mobile scenarios. Its design and implementation can be integrated (and easily extended if needed) into many different development environments. An experiment with real users comparing two different interaction designs and validating user behavior hypotheses proved the effectiveness of applying CARIM for the assessment of QoE in mobile applications.
Quality of Service (QoS) and Quality of Experience (QoE) have to be considered when designing, building and maintaining services involving multimodal human–machine interaction. In order to guide the assessment and evaluation of such services, we first develop a taxonomy of the most relevant QoS and QoE aspects which result from multimodal human–machine interactions. It consists of three layers: (1) The quality factors influencing QoS and QoE related to the user, the system, and the context of use; (2) the QoS interaction performance aspects describing user and system behavior and performance; and (3) the QoE aspects related to the quality perception and judgment processes taking place within the user. For each of these layers, we then provide metrics which are able to capture the QoS and QoE aspects in a quantitative way, either via questionnaires or performance measures. The metrics are meant to guide system evaluation and make it more systematic and comparable.
This paper describes a general framework for evaluating and comparing the performance of multimodal dialogue systems: PROMISE (Procedure for Multimodal Interactive System Evaluation). PROMISE is a possible extension to multimodality of the PARADISE framework (1, 2) used for the evaluation of spoken dialogue systems, where we aimed to solve the problems of scoring multimodal inputs and outputs, weighting the different recognition modalities, and of how to deal with non-directed task definitions and the resulting, potentially uncompleted tasks by the users. PROMISE is used in the end-to-end evaluation of the SmartKom project, in which an intelligent computer-user interface that deals with various kinds of oral and physical input is being developed. The aim of SmartKom is to allow a natural form of communication within man-machine interaction.
In this paper we present a novel approach for prototyping, testing and evaluating multimodal interfaces, OpenWizard. OpenWizard allows the designer and the developer to rapidly evaluate a non-fully functional multimodal prototype by replacing one modality, or a composition of modalities that are not yet available, by Wizard of Oz techniques. OpenWizard is based on a conceptual component-based approach for rapidly developing multimodal interfaces, an approach first implemented in the ICARE software tool and more recently in the OpenInterface tool. We present a set of Wizard of Oz components that are implemented in OpenInterface. While some Wizard of Oz (WoZ) components are generic to be reused for different multimodal applications, our approach allows the integration of tailored WoZ components. We illustrate OpenWizard using a multimodal map navigator.
Keywords: Multimodality, Wizard of Oz, Component-based approach
With the technical advances and market growth in the field, the issues of evaluation and usability of spoken language dialogue systems, unimodal as well as multimodal, are as crucial as ever. This paper discusses those issues by reviewing a series of European and US projects which have produced major results on evaluation and usability. Whereas significant progress has been made on unimodal spoken language dialogue systems evaluation and usability, the emergence of, among others, multimodal, mobile, and domain-oriented systems continues to pose entirely new challenges to research in evaluation and usability.
In the context of Model Driven Engineering, models are the main development artifacts and model transformations are among the most important operations applied to models. A number of specialized languages have been proposed, aimed at specifying model transformations. Apart from the software engineering properties of transformation languages, the availability of high quality tool support is also of key importance for the industrial adoption and ultimate success of MDE. In this paper we present ATL: a model transformation language and its execution environment based on the Eclipse framework. ATL tools provide support for the major tasks involved in using a language: editing, compiling, executing, and debugging.
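The core idea of a transformation language such as ATL is a set of declarative rules, each matching source-model elements by type and producing corresponding target elements. The Python sketch below illustrates only that concept; it is not ATL syntax, and the `Class`-to-`Table` mapping is a standard textbook example chosen for illustration.

```python
# Concept sketch of rule-based model transformation in the spirit of ATL:
# each rule matches a source element kind and creates a target element.
# This illustrates the idea only; it is not ATL's actual syntax or API.

def class_to_table(element):
    return {"kind": "Table", "name": element["name"]}

def attribute_to_column(element):
    return {"kind": "Column", "name": element["name"]}

RULES = {"Class": class_to_table, "Attribute": attribute_to_column}

def transform(source_model):
    # Apply the matching rule to every source element that has one.
    return [RULES[e["kind"]](e) for e in source_model if e["kind"] in RULES]

source = [{"kind": "Class", "name": "Person"},
          {"kind": "Attribute", "name": "age"}]
print(transform(source))
# [{'kind': 'Table', 'name': 'Person'}, {'kind': 'Column', 'name': 'age'}]
```

In ATL itself such rules are written declaratively and executed by the ATL virtual machine inside Eclipse, with tool support for editing, compiling and debugging as the abstract notes.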
In this paper we present a new approach to the automation of usability evaluation for interactive systems. Design ideas or complete systems are modeled as a conditional state machine. Then, user interactions with the system are simulated on the basis of tasks, by first searching for possible solution paths and then generating deviations from these paths under consideration of user groups and system attributes. The approach has been implemented into a workbench which supports the modeling of the system and the evaluation of the simulations. We present first results for the reliability of the approach in modeling interactions with a spoken dialog system.
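Searching a state machine for solution paths, from which simulated user deviations can then be derived, can be sketched as a breadth-first search over labeled transitions. The transition table below (a toy date-confirmation dialog) and all names are assumptions made for this example, not the workbench's actual model.

```python
from collections import deque

# Toy conditional state machine for a spoken-dialog task (illustrative only):
# state -> list of (user/system action, next state).
TRANSITIONS = {
    "start":       [("ask_date", "date_prompt")],
    "date_prompt": [("say_date", "confirm"), ("silence", "date_prompt")],
    "confirm":     [("yes", "done"), ("no", "date_prompt")],
}

def solution_paths(start, goal, max_len=6):
    # Breadth-first search for action sequences that reach the goal state;
    # simulated deviations would then be generated around these paths.
    paths, queue = [], deque([(start, [])])
    while queue:
        state, actions = queue.popleft()
        if state == goal:
            paths.append(actions)
            continue
        if len(actions) < max_len:
            for action, nxt in TRANSITIONS.get(state, []):
                queue.append((nxt, actions + [action]))
    return paths

print(solution_paths("start", "done", max_len=3))
# [['ask_date', 'say_date', 'yes']]
```

Bounding the path length keeps the search finite despite self-loops such as the `silence` transition, mirroring how a simulation must cap dialog length.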
While multimodal systems are an active research field, there is no agreed-upon set of multimodal interaction parameters which would allow quantifying the performance of such systems and their underlying modules, and which would therefore be necessary for a systematic evaluation. In this paper we propose an extension to established parameters describing the interaction with spoken dialog systems, in order to be used for multimodal systems. Focussing on the evaluation of a multimodal system, three usage scenarios for these parameters are given.
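Parameters of this kind are typically computed by aggregating over a logged dialogue. The sketch below is an illustrative assumption of what such a computation might look like: the log format and the three parameters (user turns, per-modality usage, an error-rate-like share of misunderstood inputs) are chosen for the example, not taken from the proposed parameter set.

```python
# Illustrative computation of interaction parameters over a logged
# multimodal dialogue; log schema and parameter names are assumptions.
log = [
    {"actor": "user",   "modality": "speech",   "ok": True},
    {"actor": "system", "modality": "speech",   "ok": True},
    {"actor": "user",   "modality": "gesture",  "ok": False},  # misrecognized input
    {"actor": "user",   "modality": "gesture",  "ok": True},
    {"actor": "system", "modality": "graphics", "ok": True},
]

def parameters(log):
    user = [t for t in log if t["actor"] == "user"]
    per_modality = {}
    for t in user:
        per_modality[t["modality"]] = per_modality.get(t["modality"], 0) + 1
    return {
        "user_turns": len(user),
        "modality_usage": per_modality,         # how often each input modality was used
        "input_error_rate": sum(not t["ok"] for t in user) / len(user),
    }

print(parameters(log))
```

Because every modality contributes to the same counters, speech and gesture inputs are measured with the same yardstick, which is the point of a modality-neutral parameter set.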
This paper describes a multimodal architecture to control 3D avatars with speech dialogs and mouse events. We briefly describe the scripting language used to specify the sequences and the components of the architecture supporting the system. We then focus on the evaluation procedure that is proposed to test the system. The discussion of the evaluation results points to the future work to be accomplished.
Computer-supported cooperative work (CSCW) holds great importance and promise for modern society. This paper provides an overview of seventeen papers comprising a symposium on CSCW. The overview also discusses some relationships among the contributions ...
The evaluation of the usability and the learnability of a computer system may be performed with predictive models during the design phase. It may be done on the executable code as well as by observing the user in action. In the latter case, data collected in vivo must be processed. Our goal is to provide software support for performing this difficult and time-consuming task. This article presents an early analysis and experience towards the automatic evaluation of multimodal user interfaces. With this end in view, a generic Wizard of Oz platform has been designed to allow the observation and automatic recording of subjects' behavior while interacting with a multimodal interface. We then show how recorded data can be analyzed to detect behavioral patterns, and how deviations of such patterns from a data flow-oriented task model can be exploited by a software usability critic.