About
73 Publications · 16,332 Reads
1,635 Citations
Publications (73)
This workshop aims at identifying, examining, structuring and sharing educational resources and approaches to support the process of teaching/learning Human-Computer Interaction (HCI) Engineering. The broadening of the range of available interaction technologies and their applications, often in safety- and mission-critical areas, to novel and l...
Sliders are widely used on mobile devices. Envisioning mobile devices that can dynamically deform to raise tangible controls from the screen surface, tangible sliders offer the benefit of eyes-free interaction. However, reaching for distant values with one hand is problematic: users need to change their handgrip, which is not comfortable. To...
When working with mobile Augmented Reality (AR), users often need to visualize off-screen points of interest (POIs). These POIs belong to the context since they are not directly observable in the 3D first-person view on screen. The aim is to present the 3D direction and distance of each POI in a 3D first-person view. The context in mobile AR can in...
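Purely as an illustration (not taken from the publication, with an assumed world-to-camera convention), the direction and distance of an off-screen POI can be derived from the camera pose roughly as follows:

```python
import numpy as np

def poi_direction_and_distance(camera_position, camera_rotation, poi_position):
    """Return the POI's unit direction in the camera frame and its distance.

    camera_position: (3,) camera position in world coordinates
    camera_rotation: (3, 3) rotation matrix mapping world to camera coordinates
    poi_position:    (3,) POI position in world coordinates
    (Assumed conventions, for illustration only.)
    """
    offset_world = np.asarray(poi_position, dtype=float) - np.asarray(camera_position, dtype=float)
    distance = float(np.linalg.norm(offset_world))
    # Express the offset in the camera frame, then normalise to a unit direction.
    direction_camera = camera_rotation @ offset_world
    return direction_camera / distance, distance

# Example: a POI 10 m ahead and 5 m to the right of an identity-oriented camera.
direction, dist = poi_direction_and_distance([0.0, 0.0, 0.0], np.eye(3), [5.0, 0.0, 10.0])
print(direction, dist)
```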
We present EXHI-bit, a mechanical structure for prototyping unique shape-changing interfaces that can be easily built in a fabrication laboratory. EXHI-bit surfaces consist of inter-weaving units that slide in two dimensions. This assembly enables the creation of unique expandable handheld surfaces with continuous transitions while maintaining the...
We address the problem of mobile distal selection of physical objects when pointing at them in augmented environments. We focus on the disambiguation step needed when several objects are selected with a rough pointing gesture. A usual disambiguation technique forces the users to switch their focus from the physical world to a list displayed on a ha...
Several ways for selecting physical objects exist, including touching and pointing at them. Allowing the user to interact at a distance by pointing at physical objects can be challenging when the environment contains a large number of interactive physical objects, possibly occluded by other everyday items. Previous pointing techniques highlighted t...
The context of this work is to develop, adapt and integrate augmented reality related tools to enhance the emotion involved in cultural performances. Part of the work was dedicated to augmenting a stage in a live performance, with dance as an application case. In this paper, we present a milestone of this work, an augmented dance show that brings t...
A subject's emotional expression is of course influenced by his personality, but several other factors can play a role. His position, the constraints he undergoes, or the setting of his working space can all deeply impact emotional expression through movement. When performing an automatic emotion recognition, an evaluator must be able to set up som...
Technological advances in hardware manufacturing led to an extended range of possibilities for designing physical–digital objects involved in a mixed system. Mixed systems can take various forms and include augmented reality, augmented virtuality, and tangible systems. In this very dynamic context, it is difficult to compare existing mixed systems...
Analysis of emotion recognition is a young but maturing research field, for which there is an emerging need for engineering models and in particular design models. Addressing these engineering challenges of emotion recognition, we reuse and adapt results from the research field of multimodal interaction, since the expression of an emotion is intrin...
In this paper we present a novel approach for prototyping, testing and evaluating multimodal interfaces, OpenWizard. OpenWizard allows the designer and the developer to rapidly evaluate a non-fully functional multimodal prototype by replacing one modality or a composition of modalities that are not yet available by wizard of oz techniques. OpenWiza...
Human-Computer Interaction (HCI) is no longer restricted to interaction between users and computers via keyboard and screen: currently one of the most challenging aspects of interactive systems is the integration of the physical and digital aspects of interaction in a smooth and usable way. The design challenge of such mixed reality (MR) systems li...
We present OpenWizard, a wizard of oz component-based approach for rapidly prototyping and testing multimodal applications. OpenWizard allows the designer and the developer to rapidly test a non-fully functional multimodal prototype by replacing one modality or a composition of modalities that are not yet available by wizard of oz techniques. We il...
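As a hedged sketch of the general wizard-of-oz substitution idea (the class and function names below are hypothetical, not the OpenWizard API): a wizard-driven component exposes the same interface as the not-yet-available modality component, so the rest of the prototype stays unchanged.

```python
from abc import ABC, abstractmethod

class SpeechInput(ABC):
    """Common interface for a speech-input modality component (hypothetical)."""
    @abstractmethod
    def next_command(self) -> str: ...

class SpeechRecognizer(SpeechInput):
    """The real recogniser, assumed not yet integrated in the prototype."""
    def next_command(self) -> str:
        raise NotImplementedError("recogniser not available yet")

class WizardSpeechInput(SpeechInput):
    """Wizard-of-oz stand-in: a human operator types what the user just said."""
    def next_command(self) -> str:
        return input("wizard> ")

def run_prototype(speech: SpeechInput) -> None:
    # The application only depends on the SpeechInput interface, so the wizard
    # component can be swapped in (and later out) without touching other code.
    command = speech.next_command()
    print(f"executing: {command}")

if __name__ == "__main__":
    run_prototype(WizardSpeechInput())
```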
The CARE properties (Complementarity, Assignment, Redundancy and Equivalence) define various forms that multimodal input interaction can take. While Equivalence and Assignment express the availability and respective absence of choice between multiple input modalities for performing a given task, Complementarity and Redundancy describe relationships...
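Read purely as an illustrative paraphrase (the helper below and its arguments are hypothetical, not from the publication), the four relations can be sketched like this:

```python
from enum import Enum, auto

class CARE(Enum):
    COMPLEMENTARITY = auto()  # several modalities must be combined to build one command
    ASSIGNMENT = auto()       # a single modality is the only way to perform the task
    REDUNDANCY = auto()       # several modalities used together convey the same information
    EQUIVALENCE = auto()      # any one of several available modalities is sufficient alone

def care_relation(usable_modalities, used_together, same_information):
    """Rough classification of how modalities relate for one task (illustrative only)."""
    if len(usable_modalities) == 1:
        return CARE.ASSIGNMENT
    if not used_together:
        return CARE.EQUIVALENCE
    return CARE.REDUNDANCY if same_information else CARE.COMPLEMENTARITY

# "Put that there": speech and a pointing gesture are combined into one command.
print(care_relation({"speech", "pointing"}, used_together=True, same_information=False))
```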
To address the difficulties encountered by designers of mixed interactive systems during the design step, we propose a new approach for the design phase. This article introduces the articulation of an informal method, the focus group, with a formal mixed interaction model. This articulation allows a better integration of the design step into...
Multimodal interaction software development presents a particular challenge because of ever increasing number of novel interaction devices. In this paper, we present the OpenInterface Interaction Development Environment (OIDE) that addresses the design and development of multimodal interfaces. To illustrate our approach, we present a multimodal sli...
The area of multimodal interaction has expanded rapidly. However, the implementation of multimodal systems still remains a difficult task. Addressing this problem, we describe the OpenInterface (OI) framework, a component-based tool for rapidly developing multimodal input interfaces. The OI underlying conceptual component model includes both generi...
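For illustration only (the component names below are generic, not the actual OI component model), the component-based idea can be sketched as chaining transformation and composition components into an input pipeline:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Event:
    modality: str
    value: object

# A component is modelled here simply as a function from events to events.
Component = Callable[[List[Event]], List[Event]]

def scale(factor: float) -> Component:
    """Generic transformation component: rescale numeric event values."""
    def run(events: List[Event]) -> List[Event]:
        return [Event(e.modality, e.value * factor)
                if isinstance(e.value, (int, float)) else e for e in events]
    return run

def keep_highest_priority(priority: List[str]) -> Component:
    """Generic composition component: keep events of the highest-priority modality present."""
    def run(events: List[Event]) -> List[Event]:
        for modality in priority:
            chosen = [e for e in events if e.modality == modality]
            if chosen:
                return chosen
        return events
    return run

def pipeline(components: List[Component], events: List[Event]) -> List[Event]:
    # Chaining components mimics the graphical assembly of an input pipeline.
    for component in components:
        events = component(events)
    return events

events = [Event("accelerometer", 0.4), Event("speech", "zoom in")]
print(pipeline([scale(2.0), keep_highest_priority(["speech", "accelerometer"])], events))
```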
The power and versatility of multimodal interfaces result in an increased complexity of the code to be developed. To address this software development problem, multimodal interfaces call for development tools that satisfy specific requirements such as the fusion of data from different interaction modal...
In this paper, we illustrate the OpenInterface Interaction Development Environment (OIDE) that addresses the design and development of multimodal interfaces. Multimodal interaction software development presents a particular challenge because of the ever increasing number of novel interaction devices and modalities that can be used for a given interact...
In this paper we address the problem of the development of multimodal interfaces. We describe a three-dimensional characterization space for software components along with its implementation in a component-based platform for rapidly developing multimodal interfaces. By graphically assembling components, the designer/developer describes the transfor...
We present a test-bed platform for the iterative design of multimodal games on a mobile phone or a PDA. While our test-bed platform is general to multimodal systems, in this paper we focus on games on mobile devices, since games are intrinsically multimodal, thus enriching the gaming experience and potential for interaction innovations. As part of...
ABSTRACT Mixed systems aim to blend the physical and digital worlds. In this context, the mixed interaction model makes it possible to define a mixed object and the interaction with such an object within a mixed system. We refine our definition of a mixed object by identifying characteristics and capitalising on results from the literat...
In this paper we present a preliminary study for designing interactive systems that are sensible to human emotions based on the body movements. To do so, we first review the literature on the various approaches for defining and characterizing human emotions. After justifying the adopted characterization space for emotions, we then focus on the move...
Output multimodal interaction involves choice and combination of relevant interaction modalities to present information to the user. In this paper, we present a framework based on reusable software components for rapidly developing output multimodal interfaces by choosing and combining interaction modalities. Such an approach enables us to quickly...
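Purely as an illustration (the component names are generic, not the framework's actual components), choosing and combining output modalities can be pictured as routing one piece of information to one or several output components:

```python
def screen(text: str) -> None:
    print(f"[display] {text}")

def speech(text: str) -> None:
    print(f"[text-to-speech] {text}")

OUTPUT_COMPONENTS = {"screen": screen, "speech": speech}

def present(text: str, modalities: list) -> None:
    """Send the same information to every chosen output modality component."""
    for name in modalities:
        OUTPUT_COMPONENTS[name](text)

present("Battery low", ["screen", "speech"])  # redundant combination of two modalities
present("New message", ["screen"])            # assignment to a single modality
```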
Reading has arguably the longest and richest history of any domain for scientifically considering the impact of technology on the user. From the 1920s to the 1950s, Miles Tinker [1963] and other researchers ran hundreds of user tests that examined the effects of different fonts and text layout variables, such as the amount of vertical space between...
ABSTRACT The development and the evaluation of multimodal interactive systems on mobile phones remain a difficult task. In this paper we address this problem by describing a component-based approach, called ACICARE, for developing and evaluating multimodal interfaces on mobile phones. ACICARE is dedicated to the overall iterative design pro...
In this paper we focus on the design of Computer Assisted Surgery (CAS) systems and more generally Augmented Reality (AR) systems that assist a user in performing a task on a physical object. Digital information or new actions are defined by the AR system to facilitate or to enrich the natural way the user would interact with the real environment....
INTRODUCTION Software tools for the construction of interactive systems such as user interface toolkits, application skeletons, and user interface generators, alleviate the activity of programming but do not eliminate software architecture modelling. For example, the "callback" mechanism made popular by X-Window, does not enforce the distinction be...
One trend in Human Computer Interaction is to extend the sensory-motor capabilities of computer systems to better match the natural communication means of humans.
In this paper we focus on the design of mixed reality (MR) systems. We propose two design spaces that can be useful in a top-down (abstract to concrete) design method for MR systems. The first design space consists of an organized framework of abstract interaction situations for describing mixed systems. Each situation is depicted by an ASUR diagra...
In this paper we present ASUR++, a notation for describing, and reasoning about the design of, mobile interactive computer systems that combine physical and digital objects and information: mobile mixed systems. ASUR++ helps a designer to specify the key characteristics of such systems and to focus on the relationship between physical objects and a...
This chapter is concerned with the usability and implementation of multifeature systems such as multimodal and multimedia user interfaces. We show how the usability of such systems can be characterized in terms of the relations they are able to maintain between the interaction languages and the I/O devices they support. Interaction language and dev...
In this paper we present ASUR++, a notation for describing, and reasoning about the design of, mobile interactive computer systems that combine physical and digital objects and information: mobile mixed systems. ASUR++ helps a designer to specify the key characteristics of such systems and to focus on the relationship between physical objects and a...
Integrating computer-based information into the real world of the user is becoming a crucial challenge for the designers of interactive systems. The Augmented Reality (AR) paradigm illustrates this trend. Information is provided by an AR system to facilitate or to enrich the natural way in which the user interacts with the real environment. We focu...
Systems combining the real and the virtual are becoming more and more prevalent. The Augmented Reality (AR) paradigm illustrates this trend. In comparison with traditional interactive systems, such AR systems involve real entities and virtual ones. And the duality of the two types of entities involved in the interaction has to be studied during the...
Integrating virtual information and new actions into the real world of the user is becoming a crucial challenge for the designers of interactive systems. The Augmented Reality (AR) paradigm illustrates this trend. Virtual information or new actions are defined by the AR system to facilitate or to enrich the natural way the user would interact with the...
Multifeature user interfaces support multiple interaction techniques which may be used sequentially or concurrently, and independently or combined synergistically (Nigay, Coutaz 1993a). New interaction aspects must be considered, such as the fusion and fission of information, and the nature of temporal constraints. The availability of multiple inte...
In this paper, we first present a brief review of approaches used for studying and designing Augmented Reality (AR) systems. The variety of approaches and definitions in AR requires classification. We define two intrinsic characteristics of AR systems, task focus and nature of augmentation. Based on these two characteristics, we identify four class...
This paper introduces a Dimension Space describing the entities making up richly interactive systems. The Dimension Space is intended to help designers understand both the physical and virtual entities from which their systems are built, and the tradeoffs involved in both the design of the entities themselves and of the combination of these entitie...
One of the recent design goals in Human Computer Interaction has been to extend the sensory-motor capabilities of computer systems to combine the real and the virtual in order to assist the user in his environment. Such systems are called Augmented Reality (AR). Although AR systems are becoming more prevalent, we still do not have a clear understandi...
The need for ensuring that usability measurement results can contribute to the ongoing development of a software product in a formative way is the main theme of this paper. It is recognized that acquiring, structuring, and analysing data about the actual progression of a product's development is a challenging task. Even more difficult, is the probl...
This article reports our reflection on software architecture modelling for multi-user systems (or groupware). First, we introduce the notion of software architecture and make explicit the design steps that most software designers in HCI tend to blend in a fuzzy way. Building on general concepts and practice from main stream software engineering, we...
This article is concerned with the usability and implementation of multimodal user interfaces. We show how the usability of such systems can be characterized in terms of the relations they are able to maintain between the modalities they support. Equivalence, assignment, redundancy, and complementarity of modalities form an interesting set of rel...
This paper is concerned with the usability and implementation of multifeature systems such as...
The combined use of multiple interaction techniques such as speech and computer vision as well as video and the automatic generation of text and realistic images has opened the way to a new world of experience: that of multimedia and multimodal interaction. The unexpected and sudden success of this new area of research has resulted in a profusion...
We propose two topics for discussion at the workshop and describe one of our contributions to the field. One activity relevant for the workshop would be to classify current formalisms and provide recommendations for using a particular class of notations given a goal and a specific development context. The second issue to consider is the difficult...
Software tools for the construction of interactive systems such as user interface toolkits, application skeletons, and user interface generators, alleviate the activity of programming but do not eliminate software architecture modelling. For example, the "callback" mechanism made popular by X-Window, does not enforce the distinction between domain...
Multimodal interactive systems support multiple interaction techniques such as the synergistic use of speech and direct manipulation. The flexibility they offer results in an increased complexity that current software tools do not address appropriately. One of the emerging technical problems in multimodal interaction is concerned with the fusion of...
This paper discusses software architectures of multimodal systems. The recent availability of new input technologies brought a whole new type of systems, able to support communication with the user through multiple interaction channels. Multimodal systems that allow modalities to be combined seem to be the most promising in the field of multimodal...
One of the new design goals in HCI is to extend the sensory-motor capabilities of computer systems to better match the natural communication means of human beings. This article proposes a framework that should help reasoning about current and future Multi-Sensory-Motor systems (MSM). To do so, we adopt a system centered perspective although we draw...
One of the new design goals in Human Computer Interaction is to extend the sensory-motor capabilities of computer systems to better match the natural communication means of human beings. This article proposes a dimension space that should help reasoning about current and future Multi-Sensori-Motor systems (MSM). To do so, we adopt a system center...
One of the new design goals in Human Computer Interaction is to extend the sensory-motor capabilities of computer systems to better match the natural communication means of human beings. This article proposes a dimension space that should help reasoning about current and future Multi-Sensori-Motor systems (MSM). To do so, we adopt a system centered...
Multimodal interaction enables the user to employ different modalities such as voice, gesture and typing for communicating with a computer. This paper presents an analysis of the integration of multiple communication modalities within an interactive system. To do so, a software engineering perspective is adopted. First, the notion of “multimodal sy...
One of the new design goals in HCI is to extend the sensory-motor capabilities of computer systems to better match the natural communication means of human beings. This article proposes a framework that should help reasoning about current and future Multi-Sensory-Motor systems (MSM). To do so, we adopt a system centered perspective although we draw...
This document presents MATIS along the lines defined for the description of common exemplars: overview, reference material available, hardware and software platforms, usage, and future plans. MATIS allows an end-user to obtain information about flight schedules using speech, mouse, keyboard, or a synergistic combination of these techniques. At any...
This paper discusses multimodal systems. We have defined a taxonomy of multimodal interactions and built two example multimodal systems. We consider several levels where the fusion can take place in the software architecture of those systems. We emphasize the need for a common abstract representation to represent input data and commands, and to com...
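A minimal sketch of the "common abstract representation" idea (the field names and the flight example are hypothetical, only loosely inspired by the flight-information scenario above): each modality is translated into the same partial-command structure, and fusion merges partial commands that fall within a short time window.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PartialCommand:
    """Modality-independent representation of a possibly incomplete command."""
    action: Optional[str] = None                # e.g. "show-flights"
    slots: dict = field(default_factory=dict)   # e.g. {"from": "Paris"}
    timestamp: float = 0.0

def fuse(a: PartialCommand, b: PartialCommand, window: float = 1.0) -> Optional[PartialCommand]:
    """Merge two partial commands if they are close enough in time (complementarity)."""
    if abs(a.timestamp - b.timestamp) > window:
        return None
    return PartialCommand(action=a.action or b.action,
                          slots={**a.slots, **b.slots},
                          timestamp=min(a.timestamp, b.timestamp))

# Speech supplies the action and one slot; a pointing click supplies the missing slot.
speech_part = PartialCommand(action="show-flights", slots={"from": "Paris"}, timestamp=10.2)
click_part = PartialCommand(slots={"to": "Boston"}, timestamp=10.6)
print(fuse(speech_part, click_part))
```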
This paper discusses software architectures of multimodal systems. The recent availability of new input technologies brought a whole new type of systems, able to support communication with the user through multiple interaction channels. Multimodal systems that allow modalities to be combined seem to be the most promising in the field of multimodal...
This paper describes a specific Distributed Display Environment (DDE) developed for a Computer-Assisted Surgery (CAS) system. A mini LCD screen is used to display surgical guidance information. The resulting DDE is described according to a multimodal viewpoint.
Multimodal interactive systems support multiple interaction techniques such as the synergistic use of speech and direct manipulation. The flexibility they offer results in an increased complexity that current software tools do not address appropriately. One of the emerging technical problems in multimodal interaction is concerned with the fusion of...
We propose the CARE properties as a simple way of characterising and...
The growing interest of designers for mixed interactive systems is due to the dual need of users to both benefit from computers and stay in contact with the physical world. Based on two intrinsic characteristics of mixed interactive systems, target of the task and nature of augmentation, we identify four classes of systems. We then refine this taxo...