Integrating new input-output modalities, such as speech, gaze, gestures, and haptics, into user interfaces is currently considered
a significant potential contribution to implementing the concept of Universal Access (UA) in the Information Society (see,
for example, Oviatt, 2003). UA in this context means providing everybody, including disabled users, with easy human-computer
interaction in any context of use, and especially in mobile contexts. However, the cost of developing an appropriate specific
multimodal user interface for each interactive software application is prohibitive. A generic design methodology, along with generic reusable
components, is needed to master the complexity of the design and development of interfaces that allow flexible use of alternative
modalities, in meaningful combinations, according to the constraints in the interaction environment or the user’s motor and
perceptual capabilities. We present a design approach meant to facilitate the development of generic multimodal user interfaces,
based on best practices in software and user interface design and architecture.