Conference Paper

The Proximity Toolkit: Prototyping Proxemic Interactions in Ubiquitous Computing Ecologies

DOI: 10.1145/2047196.2047238. In: Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology (UIST 2011), Santa Barbara, CA, USA, October 16–19, 2011
Source: DBLP

ABSTRACT: People naturally understand and use proxemic relationships (e.g., their distance and orientation towards others) in everyday situations. However, only a few ubiquitous computing (ubicomp) systems interpret such proxemic relationships to mediate interaction (proxemic interaction). A technical problem is that developers find it challenging and tedious to access proxemic information from sensors. Our Proximity Toolkit solves this problem. It simplifies the exploration of interaction techniques by supplying fine-grained proxemic information between people, portable devices, large interactive surfaces, and other non-digital objects in a room-sized environment. The toolkit offers three key features. (1) It facilitates rapid prototyping of proxemic-aware systems by supplying developers with the orientation, distance, motion, identity, and location information between entities. (2) It includes various tools, such as a visual monitoring tool, that allow developers to visually observe, record, and explore proxemic relationships in 3D space. (3) Its flexible architecture separates sensing hardware from the proxemic data model derived from these sensors, which means that a variety of sensing technologies can be substituted or combined to derive proxemic information. We illustrate the versatility of the toolkit with proxemic-aware systems built by students.
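The abstract's first feature, supplying distance and orientation between tracked entities, can be sketched in a few lines. The following Python snippet is purely illustrative and is not the toolkit's actual API (which is not shown in this record); the `Entity` class, field names, and coordinates are assumptions chosen to show how such proxemic relationships might be computed from tracked 2D positions and headings:

```python
import math
from dataclasses import dataclass

@dataclass
class Entity:
    """A tracked person, device, or object (hypothetical model, not the toolkit's)."""
    name: str
    x: float        # position in room coordinates (meters)
    y: float
    heading: float  # facing direction in degrees, 0 = +x axis

def distance(a: Entity, b: Entity) -> float:
    """Euclidean distance between two tracked entities."""
    return math.hypot(b.x - a.x, b.y - a.y)

def facing_angle(a: Entity, b: Entity) -> float:
    """Absolute angle (degrees) between a's heading and the direction to b.
    0 means a is facing b directly; 180 means a faces away from b."""
    bearing = math.degrees(math.atan2(b.y - a.y, b.x - a.x))
    diff = (bearing - a.heading + 180.0) % 360.0 - 180.0
    return abs(diff)

# Example: a person standing 2 m away, facing a display head-on.
person = Entity("person", 0.0, 0.0, 0.0)
display = Entity("display", 2.0, 0.0, 180.0)
print(distance(person, display))      # → 2.0
print(facing_angle(person, display))  # → 0.0 (directly facing)
```

A proxemic-aware system would layer behavior on top of such values, e.g., revealing more interface detail as `distance` shrinks or `facing_angle` approaches zero.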

  • ABSTRACT: In this position paper, I introduce my view of gestures, manipulations, and spatial cognition and argue why they will play a key role in future multi-device interaction. I conclude that gestural input will greatly improve how we interact with future interactive systems, provided that we fully acknowledge the benefits of manipulations versus gestures, do not force users to interact in artificial gestural sign languages, and design for users' spatial abilities.
    Workshop “Gesture-based Interaction Design” (In conjunction with CHI 2014 Conference), Toronto, Canada; 04/2014
  • ABSTRACT: We present two experiments examining the impact of navigation techniques on users' navigation performance and spatial memory in a zoomable user interface (ZUI). The first experiment, with 24 participants, compared egocentric body movements with traditional multi-touch navigation. The results indicate a 47% decrease in path length and a 34% decrease in task time in favor of egocentric navigation, but no significant effect on users' spatial memory immediately after a navigation task. However, a second experiment with 8 participants revealed a significant long-term effect: in a recall task administered after a 15-minute distractor task, egocentric body movements showed a 27% advantage in spatial memory. Furthermore, a workload questionnaire revealed that egocentric navigation was significantly more physically demanding but less mentally demanding.
    ACM International Conference on Interactive Tabletops and Surfaces (ITS '13), St Andrews, Scotland, UK; 10/2013
  • ABSTRACT: Many interactions naturally extend across smartphones and devices with larger screens. Indeed, data might be received on the mobile but more conveniently processed with an application on a larger device, or vice versa. Such interactions require spontaneous data transfer from a source location on one screen to a target location on the other device. We introduce a cross-device Drag-and-Drop technique to facilitate these interactions involving multiple touchscreen devices, with minimal effort for the user. The technique is a two-handed gesture, where one hand is used to suitably align the mobile phone with the larger screen, while the other is used to select and drag an object between devices and choose which application should receive the data.
    Mobile and Ubiquitous Multimedia 2013, Luleå, Sweden; 12/2013
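The align-then-drag interaction described in the last abstract can be sketched as a coordinate mapping plus a drop-target dispatch. The following Python sketch is an assumption-laden caricature, not the paper's implementation: the `Alignment` class, the simple translation-only mapping, and the rectangular handler regions are all hypothetical:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Alignment:
    """Offset of the phone's origin in large-display coordinates,
    established when one hand aligns the phone with the big screen
    (hypothetical: assumes no rotation or scaling)."""
    offset_x: float
    offset_y: float

def phone_to_display(a: Alignment, px: float, py: float) -> tuple:
    """Map a touch point on the phone into large-display coordinates."""
    return (a.offset_x + px, a.offset_y + py)

def drop(payload, target: tuple, handlers: list) -> Optional[object]:
    """Deliver the dragged payload to the application region containing
    the drop point; handlers are ((x0, y0, x1, y1), callback) pairs."""
    x, y = target
    for (x0, y0, x1, y1), handler in handlers:
        if x0 <= x <= x1 and y0 <= y <= y1:
            return handler(payload)
    return None  # drop landed outside every registered region

# Example: phone held at (100, 50); drag ends at phone point (10, 20).
align = Alignment(100.0, 50.0)
target = phone_to_display(align, 10.0, 20.0)   # → (110.0, 70.0)
handlers = [((0, 0, 200, 200), lambda p: ("editor", p))]
print(drop("item", target, handlers))           # → ('editor', 'item')
```

The two-handed split in the paper maps naturally onto this structure: one hand continuously updates `Alignment`, while the other hand's drag endpoint selects both the target coordinates and, via the region hit-test, the receiving application.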
