Visualizing Interaction in Digitally Augmented Spaces:
Steps Toward a Formalism for Location-Aware and
Token-Based Interactive Systems
Yngve Dahl and Dag Svanæs
Norwegian University of Science and Technology (NTNU),
Department of Computer and Information Science
Sem Sælandsvei 7-9
7491 Trondheim, Norway
{Yngve.Dahl,Dag.Svanes}@idi.ntnu.no
Abstract. Location- and token-based methods of interaction form two broad sets of interaction techniques for ubiquitous computing systems. Few tools are currently available that help designers attend to the physicality that characterizes human-computer interaction with such systems, or to how users experience them. This paper reports on ongoing work toward a visual formalism that addresses these limitations. The current approach is inspired by established storyboard techniques, and aims to complement de facto modeling formalisms such as UML.
Keywords: Ubiquitous computing; Modeling formalisms; Interaction design;
Embodied interaction; Visual design.
1 Introduction
Human-computer interaction is currently not only about PCs with mouse and
keyboard, but involves a number of different devices that can be used in many
different environments. Mobile phones, portable computers, and wireless networks
are examples of some of the innovations that have helped make computer interaction
“beyond the desktop” a reality. The new interactive possibilities this has opened up form the basis of an interaction paradigm commonly referred to as ubiquitous computing (UbiComp) [1]. Investigating methods that support seamless
interaction with computer systems in mobile settings has become a major branch of
HCI research over the last 10 to 15 years.
While conventional desktop interaction in many ways builds on the assumption that the user is sitting in a chair at a table with a PC, with a standard set of input and output devices at his or her disposal, assumptions about the physical and social use situation, or about the availability and appropriateness of interaction devices, often cannot be made for UbiComp. Generally, interaction with such systems tends to be
more physical in nature, and is usually mediated through sensors embedded in the
user’s surroundings. These sensors are typically sensitive to certain aspects of the
environment in which they are embedded (e.g. the physical position of users). This approach is often referred to as context-aware computing.
Methods of interaction in context-aware environments vary, and can be mediated through, e.g., the user’s physical location, digitally augmented physical objects, pointing and gesturing, speech, and eye-movement tracking. Dourish [2] fittingly uses the term embodied interaction to denote such methods.
One of the key implications UbiComp has for interaction design is that the
usability of the employed interaction techniques becomes much more dependent on the physical and social aspects of the immediate use situation [3, 4]. By contrast, the usability of the keyboard and mouse in conventional systems is relatively stable,
and designers can assume that these devices form appropriate tools of interaction for
all desktop applications.
Although computer devices are increasingly used in mobile situations and
changing environments, we find few formalisms and notations suitable to guide
thinking about accommodating the physical aspects of this kind of human-computer
interaction. Conventional computer system modeling formalisms (e.g. UML class and
object diagrams) are often inappropriate for this purpose because they are principally
intended to describe how computer systems work on the “inside”—their primary
concern is how information is represented in, and exchanged between, software objects. Even UML use-case diagrams and formal HCI methods like task analysis, although they aim to represent a user perspective, are to a large extent blind to the physicality that characterizes UbiComp systems. We will in the following refer to this
class of models as system models.
Our primary objective with the current paper is to describe some of the results of
ongoing work on a visual formalism allowing location-aware and token-based
interactive systems to be described from the perspective of users in situ, with a focus
on the physicality of the use situations. The resulting representations are physical
models in the sense that they describe the physical reality of the human-computer
interaction.
We will account for the background and inspiration for the current work, and
describe some selected notational building blocks. To illustrate the added value of
having a formal way of describing the physicality of interaction in digitally
augmented spaces, we will present a generic use scenario and compare two UML
diagrams (use-case and sequence diagrams) that match the example scenario with
models created using the presented notation.
Lastly, the paper will discuss some directions for further development of the
current approach based on feedback from an expert group evaluation.
2 Background and Inspiration
The formalism discussed in this paper has its origin in Svanæs’ phenomenological
perspective on context-aware computing described in Ref. [5]. It has subsequently
been modified, applied, and evaluated in more recent work [3, 6]. The formalism has
also been used as part of internal discussions on design solutions, and in student
exercises.
In many ways, the current modeling technique takes its inspiration from the
storyboard technique, which was developed by Walt Disney and his staff in the early
days of animated films. It allows snapshots or frames of an unfolding event to be
captured and sequentially put together. Disney and his designers used storyboarding
as a management tool, and to facilitate creative thinking.
Storyboarding was quickly adopted as a modeling technique for graphical user
interface design [7]. With the development of electronic sketching and animation
tools (e.g. SILK [8] and Flash), visualization of dynamic interactive behavior has also
become possible. In recent work, storyboarding has also been recommended as a useful
technique for off-the-desktop interaction design [9, 10].
Storyboards are typically used as informal prototyping tools during early phases of
a design process to create rough, sketch-based models [11]. The complexity of
computer-based systems, however, has raised the need for more formal ways to
describe such systems. The advantages of modeling formalisms are well known.
Formal descriptions of computer systems can act as a common language for
developers and other stakeholders, and as a way of creating a common understanding of the system being built. They can make it easier to compare and detect similarities between alternative design models, and to reuse previous solutions when developing new computer systems. In addition, many computerized modeling tools are able to
turn formal models into source code.
3 Notation
The employed notation is built around a set of components that can be used to describe interaction in digitally augmented space. For the current purpose we will only use a selected set of components out of the more extensive collection described in earlier work [6]. Specifically, we will focus on users, tokens, virtual zones, and computer devices. The symbols that represent these components are illustrated in Fig. 1. All of the above components can be associated with, or act as links to, particular information objects (e.g. web pages, voice messages, etc.). In Fig. 1 such references are denoted with an “a”.
Tokens, virtual zones, and computer devices come in fixed and mobile variants.
Mobile components can be moved around by users (mobile virtual zones are
considered relative to a user’s physical position).
Tokens, as typically used in UbiComp, are tangible objects that can contain
references to digital information objects, or uniquely identify a particular person (e.g. the user). A token can be either a digital or a non-electronic artifact. Tokens
typically act as passive interaction elements in UbiComp solutions, i.e. users must
actively scan tokens with appropriate readers to get access to the associated
information.
Virtual zones correspond to the detection area of location and presence
technologies that can respond to the physical presence of users. Examples of
technologies that can facilitate this include GPS, WLAN positioning, Bluetooth, IR,
and face recognition. The size, shape, and accuracy of a virtual zone depend on the
applied sensor technology. Similar to tokens, virtual zones can contain links to digital
information. As opposed to tokens, interaction with location-aware systems typically
requires no conscious action from the user. Entering a digitally augmented area is
more likely to be a consequential act and part of an ongoing activity [12].
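Whether a user is "inside" a virtual zone is ultimately a geometric question settled by the sensor technology. As a rough illustration only (the formalism prescribes no implementation), the Python sketch below assumes a circular zone in a two-dimensional plane; all names, such as VirtualZone and contains, are ours.

```python
import math
from dataclasses import dataclass
from typing import Optional

@dataclass
class VirtualZone:
    """Illustrative stand-in for a sensor's detection area.

    Assumes a circular zone in a 2D plane; real zones (GPS, WLAN
    positioning, Bluetooth, IR) have technology-dependent size,
    shape, and accuracy.
    """
    center_x: float
    center_y: float
    radius: float
    info_ref: Optional[str] = None  # optional link to an information object

    def contains(self, x: float, y: float) -> bool:
        """True if the position (x, y) falls inside the detection area."""
        return math.hypot(x - self.center_x, y - self.center_y) <= self.radius

# Entering the zone requires no conscious action from the user; the
# system merely observes that a tracked position now satisfies the test.
lobby = VirtualZone(0.0, 0.0, radius=5.0, info_ref="http://example.org/lobby")
assert lobby.contains(1.0, 2.0)
assert not lobby.contains(10.0, 0.0)
```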
Computer devices correspond to the media that respond to a user’s (inter)actions in
digitally augmented spaces. They can contain 0..N information object references, and
can present the actual information objects to users in response to events taking place in the digitally augmented spaces in which they are located. A computer device presenting an information object is denoted with a “flashing” device symbol (see Fig. 1).
Users initiate events in ubiquitous computing environments through their interaction with the other interaction components. For location-based solutions, user interaction typically occurs through physical presence or proximity. Token-based interaction, on the other hand, typically implies that the user must bring the token into physical contact with, or into the immediate proximity of, a corresponding reader or device.
Fig. 1. Notation. Users, virtual zones, tokens, and computer devices are represented with
distinct symbols. An “a” denotes a contained information object reference.
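As one possible concretization (not part of the formalism itself), the following Python sketch renders the component types as simple data classes. The field names, and the mobile flag, are our own assumptions drawn from the descriptions above.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Token:
    """Tangible object; carries an information object reference,
    a user identity, or both."""
    info_ref: Optional[str] = None
    user_id: Optional[str] = None
    mobile: bool = True  # tokens also come in fixed variants

@dataclass
class ComputerDevice:
    """Medium that responds to events; holds 0..N information object refs."""
    info_refs: List[str] = field(default_factory=list)
    mobile: bool = False

    def present(self, info_ref: str) -> None:
        # Stand-in for the "flashing" device symbol of Fig. 1.
        print(f"presenting information object: {info_ref}")

@dataclass
class User:
    """Initiates events through presence, proximity, or token use."""
    user_id: str
    device: Optional[ComputerDevice] = None
    tokens: List[Token] = field(default_factory=list)
```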
4 Comparing Conventional System Models and Physical Models
To illustrate the added value of the physical design models that can be created with
the described notation, we will address a generic use scenario:
“A mobile user wants to retrieve electronic information related to his current
physical location.”
Ubiquitous and context-aware computing in general builds on the premise of
changing use situations, and mobility is often the cause of such change. Providing
users with the possibility to access digital information while on the move has
motivated research on ubiquitous computing for a number of application areas, such as
guiding [13], hospitals [3, 12], entertainment and gaming [14], and m-commerce [15].
4.1 System Models
Fig. 2 shows a possible UML use-case diagram for a location-aware or token-based
design solution for the example scenario. It is assumed that all interaction elements
(users, tokens, virtual zones, and computer devices) have unique identities that can be
registered by the sensors in use.
Initially, a user indicates which information he wants to retrieve by walking into a
virtual zone or scanning an appropriate token. The event is detected by the employed
sensor technology, which registers the unique identities of the relevant interaction
elements. Based on the registered sensor data, the correct information object is found and presented on the user’s computer device.
Fig. 2. Use-case diagram for a location-aware or token-based interactive system
A corresponding sequence diagram is illustrated in Fig. 3. In the current setup it is
assumed that the information objects that the user can access are web pages, and that
the interpreter-object finds the matching URL based on the provided sensor data.
Fig. 3. Sequence diagram for example scenario
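In the simplest case, the interpreter object of Fig. 3 reduces to a table lookup from registered identities to URLs. The following is a minimal sketch under that assumption; the table contents and function names are hypothetical.

```python
from typing import Callable, Dict, Optional, Tuple

# Hypothetical lookup table from (user id, zone or token id) to a URL.
URL_TABLE: Dict[Tuple[str, str], str] = {
    ("user-42", "zone-lobby"): "http://example.org/lobby",
    ("user-42", "token-7"): "http://example.org/chart",
}

def interpret(user_id: str, element_id: str) -> Optional[str]:
    """Resolve registered sensor identities to the matching URL."""
    return URL_TABLE.get((user_id, element_id))

def on_sensor_event(user_id: str, element_id: str,
                    display: Callable[[str], None]) -> None:
    """The sensor registers identities, the interpreter finds the URL,
    and the user's computer device presents the page."""
    url = interpret(user_id, element_id)
    if url is not None:
        display(url)

on_sensor_event("user-42", "zone-lobby", display=print)
```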
4.2 Physical Models
Fig. 2 and Fig. 3 show how information is exchanged between actors and software
objects. With regard to the example scenario, the respective diagrams are essentially identical for all conventional client-server solutions, independent of the employed sensor technology and interaction tools.
How, then, can we expect the same service to look from a user’s perspective? Which metaphors are suitable for describing physical interaction with location-aware and token-based interactive systems? The notation described in Sect. 3 allows us to put together a number of alternatives, some of which have frequently been employed in UbiComp designs.
Figs. 4-6 show different variants of location-based interaction. In Fig. 4 the user
carries a mobile computer device, which responds and presents the associated
information object as the user enters a virtual zone. Fig. 5 shows an alternative model
where the same information object is presented on a fixed device located in the virtual
zone that the user enters. The model illustrated in Fig. 6 shows a fixed computer
device that initially contains a reference to one specific information object. This is
presented as the user (i.e. his associated mobile virtual zone) comes within a given
range of the device. In this configuration, the mobile virtual zone identifies the user and works as an access key to the information object.
Fig. 4. The user enters a virtual zone, causing his mobile computer device to present the
associated information object
Fig. 5. The user enters a virtual zone containing a fixed computer device. This causes the
device to present the information object linked to the virtual zone.
Fig. 6. An information object is presented as the user comes within a given range of a fixed device. The device is dedicated to presenting one particular information object.
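The three location-based variants differ only in where the information object reference lives and in which device responds. The sketch below makes that distinction explicit; the function names mirror the figure numbers and, like all identifiers here, are our own.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Zone:
    info_ref: Optional[str] = None  # Figs. 4-5: reference carried by the zone
    owner_id: Optional[str] = None  # Fig. 6: mobile zone identifying its user

@dataclass
class Device:
    info_refs: List[str] = field(default_factory=list)

    def present(self, ref: str) -> None:
        print("presenting", ref)

def fig4(mobile_device: Device, zone: Zone) -> None:
    """Fig. 4: the user enters the zone; the device the user carries
    presents the zone's information object."""
    if zone.info_ref:
        mobile_device.present(zone.info_ref)

def fig5(fixed_device: Device, zone: Zone) -> None:
    """Fig. 5: the user enters the zone; a fixed device located in the
    zone presents the zone's information object."""
    if zone.info_ref:
        fixed_device.present(zone.info_ref)

def fig6(fixed_device: Device, mobile_zone: Zone, known_user: str) -> None:
    """Fig. 6: the user's mobile zone reaches the device and works as
    an access key to the object the device is dedicated to."""
    if mobile_zone.owner_id == known_user and fixed_device.info_refs:
        fixed_device.present(fixed_device.info_refs[0])
```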
Figs. 7-9 show three token-based variants of the same scenario. In Fig. 7 the user’s
mobile computer device responds as it scans a fixed token with a link to an
information object. In Fig. 8 the token and the computer device have switched roles
from the model illustrated in Fig. 7—the computer device is now fixed, and presents
the information object associated with the particular mobile token that the user scans.
The user might carry multiple tokens, each pointing to a different information object.
The model shown in Fig. 9 involves the same components as the model illustrated
in Fig. 8, but the reference associated with the token now identifies the user, rather
than an information object. Thus, the token is analogous to the mobile virtual zone in Fig. 6.
Fig. 10 illustrates a modified version of the model provided in Fig. 9. Instead of using an external representation in the form of a token for user identification, identification now happens directly (e.g. through fingerprint recognition) as the user touches the computer device. As in the model shown in Fig. 9, the fixed computer device is initially dedicated to presenting one particular information object.
Fig. 7. The user carries a mobile computer device, and scans a fixed token containing a
reference to an information object
Fig. 8. The user carries one or more mobile tokens. Each token can contain a reference to a
specific information object.
Fig. 9. The mobile token identifies the user, and provides access to the information object
associated with the fixed computer device
Fig. 10. The user triggers a specific information object associated with the computer device by
touching the device
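The pivotal difference between the token-based variants of Figs. 8 and 9 is what the token’s reference denotes: an information object, or the user. The sketch below contrasts the two readings, again with hypothetical names.

```python
from typing import Callable, List, Optional

def on_token_scanned(token_ref: str, identifies_user: bool,
                     device_refs: List[str],
                     present: Callable[[str], None]) -> Optional[str]:
    """Resolve a scan according to what the token's reference denotes.

    Fig. 8: the token carries a reference to an information object,
            which the device then presents.
    Fig. 9: the token identifies the user and merely unlocks the
            object already associated with the fixed device.
    """
    if identifies_user:
        ref = device_refs[0] if device_refs else None  # Fig. 9 reading
    else:
        ref = token_ref                                # Fig. 8 reading
    if ref is not None:
        present(ref)
    return ref

# Fig. 8: the scanned token points directly at an information object.
on_token_scanned("http://example.org/menu", identifies_user=False,
                 device_refs=[], present=print)
# Fig. 9: the token acts as an access key to the device's own object.
on_token_scanned("user-42", identifies_user=True,
                 device_refs=["http://example.org/schedule"], present=print)
```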
Some of the models described above can be regarded as emerging interaction
design patterns for ubiquitous computing. The current formalism, however, also
supports modeling of more specific scenarios, which combine various methods of
interaction. Models of more extended scenarios can be found in earlier work [6].
5 Suggestions for Further Development
To assess the applicability of the formalism we have previously conducted a
preliminary evaluation with a usability expert group, described in detail in Ref. [6].
The evaluation suggested that the formalism was intuitive, and that the expert group
quickly managed to combine the notational building blocks into meaningful
interaction models. Moreover, it revealed that the formalism promotes discussion and reflection on design solutions.
Feedback from the expert group has also inspired investigation of how to further
develop the formalism. In particular, we have considered ways to complement the formalism through sublevels, or supplementary representations, of implementation-specific aspects of a design. The possibility to construct models of computer systems at various levels of abstraction is a central aspect of de facto modeling formalisms such as UML. For example, a UML sequence diagram may hide details, showing only the interaction between actors and the system, or may provide an in-depth view depicting detailed logic at the object level. Flexibility with regard to level of detail is also an essential feature of storyboards [16].
The level of detail supported by the formalism in its current form allows interaction to be described in terms of presence, proximity, or touch (immediate proximity). One feasible way to achieve a more detailed or implementation-specific view is to supplement the interaction models with icons or visual hints indicating, e.g., the concrete interaction tools that implement the model, and how users can operate them. The graphic language for touch-based interactions described by Arnall [17] can serve as an inspiration for developing a suitable iconography for this purpose. Fig. 11 and Fig. 12 give examples of how such visual representations may supplement models built with the presented notation.
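Presence, proximity, and touch can be read as three bands on a single distance scale, so one conceivable supplementary representation is simply a set of thresholds. The cut-off values in the sketch below are invented for illustration and would in practice depend on the sensor technology.

```python
def interaction_level(distance_m: float) -> str:
    """Classify an interaction by distance; the thresholds are invented
    and would in practice depend on the sensor technology."""
    if distance_m <= 0.01:  # physical contact / immediate proximity
        return "touch"
    if distance_m <= 1.0:   # within reach of a reader or device
        return "proximity"
    return "presence"       # merely inside the detection area

assert interaction_level(0.0) == "touch"
assert interaction_level(0.5) == "proximity"
assert interaction_level(3.0) == "presence"
```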
Fig. 11 shows the token-based interaction model illustrated in Fig. 8, with a visual hint indicating how the computer device’s response is triggered: a barcode card being inserted the correct way into a terminal slot. Fig. 12 demonstrates the same principle applied to an interaction model where computer devices as well as information object references are mobile. In this model, information is exchanged between mobile devices that are brought physically close to each other. The associated visual hints show two potential ways to achieve this. The upper hint shows handshake-augmented interpersonal information exchange, a technique employed in the iBand system [18]. Alternatively, as shown in the lower hint, the same model can be realized using handheld computer devices that can exchange information over short physical distances (e.g. via IR communication).

Fig. 11. Token-based interaction. The visual hint to the right suggests how the current interaction technique can be implemented.

Fig. 12. Two users exchanging digital information as their computer devices are brought physically close to each other. The hints suggest two different ways to achieve this.
6 Summary and Conclusions
The current paper has described some of the results of ongoing work on a visual modeling formalism for describing location- and token-based interaction in digitally augmented spaces. We have argued through examples that the physically oriented perspective the formalism offers can complement conventional system modeling formalisms such as UML. Finally, we have demonstrated how informal icons or visual hints can supplement the presented formalism by representing more implementation-specific aspects of a design model.

Essentially, the paper has shown that what is seemingly trivial from a software perspective, and can basically be handled through a small number of method calls, opens up many potential interaction design solutions in a UbiComp environment. Ubiquitous computing has in many ways contributed to expanding the design space for interactive systems. The presented modeling formalism can be considered a tool for exploring that design space.
References
1. Weiser, M.: The Computer for the 21st Century. Scientific American 265, 66–75 (1991)
2. Dourish, P.: Seeking a Foundation for Context-Aware Computing. Human-Computer
Interaction 16, 229–241 (2001)
3. Dahl, Y., Svanæs, D.: A Comparison of Location and Token-Based Interaction Techniques for Point-of-Care Access to Medical Information. Personal and Ubiquitous Computing (to appear)
4. Svanæs, D., Alsos, O.A.: Interaction Techniques for Using Handhelds and PCs Together
in a Clinical Setting. NordiCHI 2006, Oslo, Norway (2006)
5. Svanæs, D.: Context-Aware Technology: A Phenomenological Perspective. Human-
Computer Interaction 16, 379–400 (2001)
6. Dahl, Y.: Toward a Visual Formalism for Modeling Location and Token-Based Interaction in Context-Aware Environments. IASTED-HCI 2007 (2007, to appear)
7. Verplank, W.L.: Designing Graphical User Interfaces. Tutorial notes. CHI’86 Conference,
Boston, MA (1986)
8. Landay, J.A., Myers, B.A.: Sketching Interfaces: Toward More Human Interface Design.
Computer 34, 56–64 (2001)
9. Li, Y., Hong, J.I., Landay, J.A.: Using Electronic Tools in the Iterative Design of a
Context-Aware Tour Guide: A Case Study. CS Technical Report, University of California,
Berkeley (2005)
10. Dow, S., Saponas, T.S., Li, Y., Landay, J.A.: External representations in ubiquitous computing design and the implications for design tools. In: Proceedings of the 6th ACM Conference on Designing Interactive Systems, pp. 241–250. University Park, PA, USA (2006)
11. Landay, J.A., Myers, B.A.: Interactive sketching for the early stages of user interface
design. In: Proceedings of the SIGCHI conference on Human factors in computing
systems, ACM Press/Addison-Wesley Publishing Co, Denver, Colorado, United States
(1995)
12. Dahl, Y.: You Have a Message Here: Enhancing Interpersonal Communication in a
Hospital Ward with Location-Based Virtual Notes. Methods of Information in
Medicine 45, 602–609 (2006)
13. Cheverst, K., Davies, N., Mitchell, K., Friday, A.: Experiences of developing and
deploying a context-aware tourist guide: the GUIDE project. In: Proceedings of the 6th
annual international conference on Mobile computing and networking, pp. 20–31. ACM
Press, Boston, Massachusetts, United States (2000)
14. Chalmers, M., Bell, M., Brown, B., Hall, M., Sherwood, S., Tennent, P.: Gaming on the
Edge: Using Seams in Ubicomp Games. In: Proc. ACM Advances in Computer
Entertainment (ACE) (2005)
15. Rahlff, O.-W.: CybStickers – Simple Shared Ubiquitous Annotations for All. UbiPhysics
workshop, UbiComp 2005, Tokyo, Japan (2005)
16. van der Lelie, C.: The value of storyboards in the product design process. Personal
Ubiquitous Computing 10, 159–162 (2006)
17. Arnall, T.: A graphic language for touch-based interactions. In: Mobile Interaction with the Real World (MIRW 2006), workshop held in conjunction with MobileHCI 2006, Espoo, Finland (2006)
18. Kanis, M., Winters, N., Agamanolis, S., Cullinan, C., Gavin, A.: iBand: A Wearable
Device for Handshake-Augmented Interpersonal Information Exchange. In: Adjunct
Proceedings of the Sixth International Conference on Ubiquitous Computing. Nottingham,
U.K. (2004)