Citations

... The engine splits a single DOM tree into multiple DOM sub-trees and then dynamically maps the generated sub-trees to each device. E. Braun et al., in the paper "Accessing Web Applications with multiple context-aware devices", introduced a solution to extend single authoring to multiple federated devices using a central server [21]. J. Chmielewski et al., in the paper "Application Architectures for Smart Multi-Device Applications", proposed a new application architecture (Device Independent Architecture) that makes the process of developing smart multi-device applications much easier [22]. ...
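The split-and-map step described in the excerpt above can be illustrated with a short TypeScript sketch. The device model, the data-requires attribute, and both function names are invented for illustration; they are not taken from the cited engine.

interface Device {
  id: string;
  capabilities: Set<string>; // e.g. "video", "touch", "large-screen"
}

// Split the application's root element into one sub-tree per top-level child.
function splitIntoSubtrees(root: Element): Element[] {
  return Array.from(root.children).map(child => child.cloneNode(true) as Element);
}

// Map each sub-tree to the first device offering the capability it declares
// (encoded here in a hypothetical data attribute), defaulting to the first device.
function mapSubtreesToDevices(subtrees: Element[], devices: Device[]): Map<string, Element[]> {
  const assignment = new Map<string, Element[]>();
  for (const subtree of subtrees) {
    const needed = subtree.getAttribute("data-requires") ?? "any";
    const target = devices.find(d => needed === "any" || d.capabilities.has(needed)) ?? devices[0];
    assignment.set(target.id, [...(assignment.get(target.id) ?? []), subtree]);
  }
  return assignment;
}

Each entry of the returned map could then be serialised and pushed to the corresponding device for rendering.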
Conference Paper
Full-text available
The ever-growing ubiquity and power of mobile devices and the trend of televisions becoming more than just dumb displays for moving images are two sides of the same coin. Both types of devices have outgrown their original purpose and are now used for tasks that previously required a PC or laptop. At the same time, users often own and use multiple devices in parallel, e.g. when watching TV or while working. Traditional application models, however, are focused on single devices and screens, although apps for the "second screen" have been getting a lot of interest recently. In this paper we propose a new approach for designing and developing multi-screen applications using standard web technologies and outline how to enable traditional web applications to become multi-screen-ready.
... The following sections explain our runtime infrastructure in more detail. Much of this is also explained in [3][4][5]. ...
Article
Full-text available
Ubiquitous computing spaces, which have displays generously embedded into the environment, allow interaction with graphical user interfaces in a much more casual manner than desktop computers, which tie the user to a particular desk. But simply putting desktop applications on a ubiquitous display will not make their use casual. We propose applications that can roam among displays as well as personal devices (PDAs, etc.). Applications may also use a combination of public and personal devices simultaneously for interaction. We especially focus on associating displays with speech-based personal devices. Such combinations can be used to present interaction in an effortless, casual, and multimodal manner.
... In recent years, various context management models and architectures have been proposed in the literature. The most popular of these include the Context Toolkit [23], Mediacup [24], Aura [25], Cooltown [26], Owl context service [27], Kimura System [28], HotTown [29], Solar [30], CoBrA [31], CORTEX [32], Context Stack [33], CASS [34], CONTEXT [35], SOCAM [36], ContextPhone [37], etc. Yet, none of these systems provides both semantic and spatial context queries and VID-based privacy and security enforcement. ...
Conference Paper
Full-text available
The emerging pervasive computing services will eventually lead to the establishment of a context marketplace, where context consumers will be able to obtain the information they require from a plethora of context providers. In this marketplace, several aspects need to be addressed, such as: support for flexible federation among context stakeholders, enabling them to share data when required; efficient query handling based on navigational, spatial or semantic criteria; performance optimization, especially when management of mobile physical objects is required; and enforcement of privacy and security protection techniques concerning the sensitive context information maintained or traded. This paper presents mechanisms that address the aforementioned requirements. These mechanisms establish a robust, spatially-enhanced distributed context management framework and have already been designed and carefully implemented.
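The paper does not publish a concrete API here, but a combined semantic and spatial context query of the kind described above could look roughly like the following TypeScript sketch; ContextEntry, queryContext, and the privacy callback are assumptions made for illustration.

interface ContextEntry {
  id: string;
  type: string;                              // semantic type, e.g. "Printer" or "Person"
  position: { lat: number; lon: number };
  owner?: string;                            // used by the privacy check
}

// Rough great-circle distance in metres (haversine); precise enough for a sketch.
function distanceMetres(a: { lat: number; lon: number }, b: { lat: number; lon: number }): number {
  const R = 6371000;
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(b.lat - a.lat);
  const dLon = toRad(b.lon - a.lon);
  const h = Math.sin(dLat / 2) ** 2 +
            Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

// Return entries of a given semantic type within a radius, filtered by a
// caller-supplied privacy predicate (a stand-in for real access control).
function queryContext(store: ContextEntry[], type: string,
                      centre: { lat: number; lon: number }, radiusMetres: number,
                      mayAccess: (e: ContextEntry) => boolean): ContextEntry[] {
  return store.filter(e => e.type === type &&
                           distanceMetres(e.position, centre) <= radiusMetres &&
                           mayAccess(e));
}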
... This information is used in conjunction with a world model to detect that a user is looking at a device, and to automatically associate it. Our other sensor systems include infrared proximity sensing, where small active badges worn by users detect when they are facing similar tags attached to devices [7], and Bluetooth proximity sensing. ...
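The Bluetooth-proximity variant mentioned in the excerpt could, in the simplest case, associate the user with the device whose signal appears strongest, as in this toy TypeScript sketch; the scan-result shape and the RSSI threshold are made up for illustration.

// Illustrative Bluetooth-proximity association: pick the device whose measured
// signal strength suggests it is closest, provided it is close enough at all.
interface ScanResult { deviceId: string; rssi: number; } // rssi in dBm, higher = closer

function associateByProximity(scan: ScanResult[], minRssi = -60): string | null {
  const candidates = scan.filter(r => r.rssi >= minRssi);
  if (candidates.length === 0) return null;    // nothing near enough to associate
  candidates.sort((a, b) => b.rssi - a.rssi);  // strongest signal first
  return candidates[0].deviceId;
}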
Conference Paper
Full-text available
One of the ideas of ubiquitous computing is that computing resources should be embedded ubiquitously in the environment, making them available to any nearby users. Some researchers have applied this to interaction, and tried to embed an abundance of interactive devices, such as touch screens, in rooms and whole buildings. The opposite concept is that of a single personal mobile device, which users carry at all times and use for all interactions. Because both concepts have different strengths, we explore building interfaces for federations of personal mobile and stationary embedded devices, exploiting the capabilities of both rather than forcing users to choose between them. We have developed an infrastructure that coordinates multiple devices for that purpose: groups of devices work together to render a user interface. As one of the main challenges for such federated user interfaces we have identified their authoring. How should the interface be divided into multiple parts, and can that decision be made by a computer rather than a human designer?
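The abstract ends with the question of whether the division of an interface can be decided by a computer. One naive way to automate such a decision is capability scoring, as in the TypeScript sketch below; the element and device models and the scoring weights are invented for illustration, not taken from the paper.

interface AbstractElement { id: string; kind: "text" | "image" | "list" | "speech-prompt"; isPrivate?: boolean; }
interface FederatedDevice { id: string; screenArea: number; hasSpeaker: boolean; isPersonal: boolean; }

// Score how well a device suits an element: bigger screens favour visual output,
// private elements favour personal devices, speech prompts require a speaker.
function score(el: AbstractElement, dev: FederatedDevice): number {
  let s = 0;
  if (el.kind === "speech-prompt") s += dev.hasSpeaker ? 10 : -100;
  else s += Math.log(dev.screenArea + 1);
  if (el.isPrivate) s += dev.isPersonal ? 5 : -50;
  return s;
}

// Assign each element of the interface to the best-scoring device of the federation.
function divideInterface(elements: AbstractElement[], devices: FederatedDevice[]): Map<string, string> {
  const placement = new Map<string, string>(); // element id -> device id
  for (const el of elements) {
    const best = devices.reduce((a, b) => (score(el, a) >= score(el, b) ? a : b));
    placement.set(el.id, best.id);
  }
  return placement;
}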
... The other clients we implemented are for PDAs (Java), cell phones (J2ME), and voice recognition. The implementation is explained in more detail in [5]. ...
Article
Some ubiquitous computing visions propose to embed an abundance of input and output devices in the environment. Depending on the context, the user can interact either with her limited personal device or with input and output devices from the surrounding infrastructure. But what about situations in which the user needs the capabilities of both types of devices at the same time? A user might need to display something privately while needing the screen real estate of a large wall-mounted display. For such situations we propose using mobile devices together with those provided by the environment. We have developed a runtime environment that coordinates multiple devices to do just that. Currently we are exploring the usability of using multiple devices concurrently. We are also investigating the problems of authoring such interfaces.
Conference Paper
Next generation applications will support a variety of modalities that will be provided by more than one device. Users can carry multiple devices with them or just use fixed devices in their environment. The applications will adapt their rendering to changes in the user's context. To enable widespread use of such applications, an infrastructure has to be developed that supports a convenient authoring process for applications as well as a comprehensive runtime providing both multimodality and context services. This paper describes the Multimodality Services Component as part of a context-aware runtime for multimodal applications, which was developed as part of the EMODE project. In particular, the Multimodality Services Component is responsible for enabling multimodal interaction with EMODE applications and for the adaptation and transformation of modality-independent user interface descriptions into modality-specific parts, which can be used for generating user interfaces on the devices in use. Furthermore, the Multimodality Services Component performs input coordination and interaction with the associated business logic.
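To make the adaptation step more concrete, the following TypeScript sketch shows how a single modality-independent "selection" interactor might be transformed into a graphical and into a voice-specific representation; the interactor model and the two renderer functions are assumptions, not the actual EMODE interfaces.

// Modality-independent description of one interactor (illustrative model only).
interface SelectionInteractor { prompt: string; options: string[]; }

// Graphical rendering: produce HTML markup for a screen device.
function toGraphical(i: SelectionInteractor): string {
  const items = i.options.map(o => `<li><button>${o}</button></li>`).join("");
  return `<p>${i.prompt}</p><ul>${items}</ul>`;
}

// Voice rendering: produce a spoken prompt plus the set of accepted answers.
function toVoice(i: SelectionInteractor): { prompt: string; grammar: string[] } {
  return {
    prompt: `${i.prompt} Say one of: ${i.options.join(", ")}.`,
    grammar: i.options.map(o => o.toLowerCase()),
  };
}

// The same interactor, transformed once per modality.
const choose: SelectionInteractor = { prompt: "Choose a playback device.", options: ["TV", "Tablet", "Phone"] };
const asHtml = toGraphical(choose);
const asVoice = toVoice(choose);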
Conference Paper
The envisioned architecture allows users to access a single application using multiple devices while taking advantage of their different modalities. Users of this system can fully control which devices they federate rather than relying on automatic federations, e.g. based on location information. As most users are mobile, the architecture enables federations between arbitrary devices in the environment. This implies a need for flexibility with respect to changes in the availability of processing components, which are distributed among the federated devices and infrastructure servers. This flexibility is necessary for ensuring a robust system, which minimises perceivable interruption due to environmental changes and maximises overall availability. Beyond that, the system ensures availability even without an infrastructure connection.
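The abstract does not spell out how component substitution works; one simple way to obtain the described availability without an infrastructure connection is a local-fallback pattern, sketched below in TypeScript. The Recognizer interface, both implementations, and the remote endpoint are hypothetical.

// Prefer a processing component on an infrastructure server, but switch to a
// local implementation when the server cannot be reached, so the user perceives
// as little interruption as possible.
interface Recognizer { recognize(audio: ArrayBuffer): Promise<string>; }

class RemoteRecognizer implements Recognizer {
  constructor(private url: string) {}
  async recognize(audio: ArrayBuffer): Promise<string> {
    const res = await fetch(this.url, { method: "POST", body: audio });
    if (!res.ok) throw new Error(`remote recognizer failed: ${res.status}`);
    return res.text();
  }
}

class LocalRecognizer implements Recognizer {
  async recognize(_audio: ArrayBuffer): Promise<string> {
    return "local-result"; // stand-in for an on-device recognizer
  }
}

async function recognizeWithFallback(audio: ArrayBuffer, remote: Recognizer, local: Recognizer): Promise<string> {
  try {
    return await remote.recognize(audio);
  } catch {
    return local.recognize(audio); // infrastructure unreachable: degrade gracefully
  }
}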