Conference Paper

Chapter 17: Multimodal Interfaces – A Generic Design Approach


Abstract

Integrating new input-output modalities, such as speech, gaze, gestures, and haptics, into user interfaces is currently considered a significant potential contribution to implementing the concept of Universal Access (UA) in the Information Society (see, for example, Oviatt, 2003). UA in this context means providing everybody, including disabled users, with easy human-computer interaction in any context of use, and especially in mobile contexts. However, the cost of developing an appropriate specific multimodal user interface for each piece of interactive software is prohibitive. A generic design methodology, together with generic reusable components, is needed to master the complexity of designing and developing interfaces that allow flexible use of alternative modalities, in meaningful combinations, according to the constraints of the interaction environment or the user's motor and perceptual capabilities. We present a design approach meant to facilitate the development of generic multimodal user interfaces, based on best practice in software and user interface design and architecture.
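The abstract stays at the architectural level, but the idea of generic reusable components that normalize alternative modalities can be illustrated with a minimal sketch. All names below (InteractionEvent, ModalityAdapter, ModalityManager) are hypothetical illustrations, not the chapter's actual components or API:

```typescript
// Hypothetical sketch, not the chapter's actual design: concrete
// modalities (speech, gaze, gesture, haptics, ...) are wrapped by
// adapters that emit one normalized event vocabulary, so application
// logic never depends on which modalities are currently available.

// A modality-independent interaction event.
interface InteractionEvent {
  intent: "select" | "activate" | "navigate" | "dictate";
  target?: string;   // logical UI element, if any
  payload?: string;  // e.g. dictated text
  modality: string;  // provenance, useful for adaptation and logging
}

// Every concrete input modality is hidden behind the same contract.
interface ModalityAdapter {
  readonly name: string;
  start(emit: (event: InteractionEvent) => void): void;
  stop(): void;
}

// The generic layer: modalities can be enabled or disabled at run time
// to match the interaction environment or the user's capabilities,
// without touching the application code that consumes the events.
class ModalityManager {
  private adapters = new Map<string, ModalityAdapter>();

  constructor(private handle: (event: InteractionEvent) => void) {}

  enable(adapter: ModalityAdapter): void {
    this.adapters.set(adapter.name, adapter);
    adapter.start(this.handle);
  }

  disable(name: string): void {
    this.adapters.get(name)?.stop();
    this.adapters.delete(name);
  }
}
```

Under this kind of scheme, a speech adapter and a gaze adapter would both translate recognizer output into the same InteractionEvent values, which is one way to realize the flexible, meaningful combinations of alternative modalities the abstract argues for.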


Citations

Conference Paper
This article presents the design and user evaluation of a multimodal service built on an experimental multimodal platform. It describes the technical realization of the platform and the development of the service, and then reports the results of an experiment in which users interacted with it. The results show that users prefer multimodal interaction and are more efficient with this kind of interface.
Conference Paper
Ambient Intelligence (AmI) scenarios place strong emphasis on interaction through natural interfaces, so that people perceive the presence of smart objects only when needed. As a possible way to achieve relaxed and enjoyable interaction with the intelligent environments envisaged by AmI, the environment could be equipped with suitably designed multimodal interfaces that offer the opportunity to communicate through multiple natural interaction modes. This paper discusses the challenges faced when designing multimodal interfaces that allow natural interaction with systems, with special attention to speech-based interfaces. It describes an application built to serve as a test bed and to support evaluation sessions aimed at ascertaining the impact of multimodal natural interfaces on users and at assessing their usability and accessibility.
Article
Interaction with upcoming Smart Environments requires research on methods for designing a new generation of human–environment interfaces. The paper outlines an original approach to the design of multimodal applications that, while applicable to today's devices, also aims to be flexible enough to remain valid through the transition to future Smart Environments. These environments will likely be structured in a more complex manner, requiring that interaction with the services they offer be made available through the integration of multimodal and unimodal interfaces embedded in objects of everyday use. In line with the most recent research tendencies, the approach centres not only on the user interface part of a system, but on the design of a comprehensive solution, including a dialogue model meant to provide a robust support layer on which multimodal interaction builds. The paper discusses the specific characteristics of the approach and of a sample application being developed to validate it, along with some implementation details.
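The article describes its dialogue model only at a high level. As a minimal, purely hypothetical sketch (the identifiers are illustrative assumptions, not the article's design), such a support layer could be a small state machine that every unimodal or multimodal front-end feeds with interpreted user acts:

```typescript
// Hypothetical sketch: a dialogue model as a modality-independent
// support layer. Speech UIs, touch UIs, or everyday smart objects
// would all translate user input into UserAct values and route them
// through handle(), keeping dialogue logic free of modality details.

type DialogueState = "idle" | "awaitingConfirm";

interface UserAct {
  kind: "requestService" | "confirm" | "cancel";
  service?: string; // e.g. "lights" or "heating" in a Smart Environment
}

class DialogueModel {
  private state: DialogueState = "idle";
  private pending?: string;

  // Returns the system's next prompt; front-ends render it in whatever
  // output modality fits the context (speech, text, ambient display).
  handle(act: UserAct): string {
    switch (this.state) {
      case "idle":
        if (act.kind === "requestService" && act.service) {
          this.pending = act.service;
          this.state = "awaitingConfirm";
          return `Activate ${this.pending}?`;
        }
        return "Which service do you need?";
      case "awaitingConfirm":
        this.state = "idle";
        return act.kind === "confirm"
          ? `${this.pending} activated.`
          : "Cancelled.";
    }
  }
}
```

Because the dialogue state lives below the interface layer, the same interaction remains consistent whether it is carried out through a single modality or a combination of them, which matches the article's argument for a robust support layer on which multimodal interaction builds.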