Proceedings
Peripheral Interaction: Embedding HCI in Everyday Life
Workshop at INTERACT 2013, 14th IFIP TC13 Conference on Human-Computer Interaction
Edited by:
Doris Hausen, University of Munich (LMU)
Saskia Bakker, Eindhoven University of Technology
Elise van den Hoven, University of Technology, Sydney, and Eindhoven University of Technology
Andreas Butz, University of Munich (LMU)
Berry Eggen, Eindhoven University of Technology
A Volume in the Workshop Proceedings Series
of the INTERACT 2013 Conference
This publication is a Volume in the Workshop Proceedings Series of the INTERACT 2013
Conference.
Copyright © 2013 Authors of Individual Contributions.
Permission to make digital or hard copies of all or part of this work for personal or classroom
use is granted without a fee provided that the copies are not made or distributed for profit or
commercial advantage, that the copies bear this notice and the full citation on the first page.
Copyrights for components of this work owned by others than the authors and INTERACT
2013 must be honoured. Abstracting with credit is permitted. To copy otherwise, to republish,
to post on servers, or to redistribute to lists, requires prior specific permission from the
individual authors of the contributions.
Publication Data
ISBN: 978-0-620-57411-2
Editors: Doris Hausen, Saskia Bakker, Elise van den Hoven, Andreas Butz, Berry Eggen
Publisher: INTERACT 2013
Place of Publication: Cape Town, South Africa
Date of Publication: September 2013
Table of Contents
Peripheral Interaction: Embedding HCI in Everyday Life
Doris Hausen, Saskia Bakker, Elise van den Hoven, Andreas Butz, Berry Eggen ... 1
A Context Server to Allow Peripheral Interaction
Borja Gamecho, Luis Gardeazabal, Julio Abascal ... 5
Peripheral Interaction in the context of DJing
Mayur Karnik ... 11
On the Relevance of Freehand Gestures for Peripheral Interaction
Sebastian Loehmann ... 15
Towards Ambient Notifications
Heiko Müller, Martin Pielot, Rodrigo de Oliveira ... 21
Peripheral interaction for sports – exploring two modalities for real-time feedback
Stina Nylander, Jakob Tholander, Alex Kent ... 27
Animal-Inspired Peripheral Interaction: Evaluating a Dog-Tail Interface for Communicating Robotic States
Ashish Singh, James E. Young ... 33
Micro Manage Me! Peripheral Context Annotation for Efficient Time Management
Bernhard Slawik ... 39
The building is the program
Andrew Cyrus Smith, Helene Gelderblom ... 45
Peripheral Interaction: Embedding HCI in Everyday Life
Doris Hausen1, Saskia Bakker2, Elise van den Hoven3,2, Andreas Butz1, Berry Eggen2
1 Human-Computer-Interaction Group, University of Munich (LMU), Germany
2 Industrial Design Department, Eindhoven University of Technology, the Netherlands
3 Faculty of Design, Architecture & Building, University of Technology, Sydney, Australia
doris.hausen@ifi.lmu.de, s.bakker@tue.nl,
elise.vandenhoven@uts.edu.au, andreas.butz@ifi.lmu.de,
j.h.eggen@tue.nl
Abstract. The comparison of actions in the physical world with actions on interactive devices reveals a remarkable difference. In daily life we easily perform several tasks in parallel: e.g. when drinking coffee while reading this text, drinking may be in the background or periphery of attention. In contrast, we almost always have to focus our attention on each digital device we interact with. Considering the growing number of devices competing for our attention, novel interaction techniques have to be explored that offer Peripheral Interaction with digital devices. We believe that this approach helps interactive technology become better embedded in everyday routines. This workshop aims at bringing together researchers and practitioners from different disciplines, to share their experiences with human-computer interaction (HCI) for the everyday routine and to shape a shared understanding of Peripheral Interaction.
Keywords: peripheral interaction; human attention; trained routines; calm
technology; ambient information; interaction design.
1 Introduction
Computing technology has become increasingly present in everyday life. This creates opportunities as well as challenges for interaction design. One of these challenges is the seamless integration of technology in our everyday routines. A large body of related work, in areas such as calm technology [10] and ambient displays [6], addresses this by aiming to move away from presenting information in a salient way, toward presenting it subtly, blended into the environment. Though these areas target background perception of information, we now see a growing interest in background interaction with computing technology [1-5, 8], which is the focus of this workshop.
This vision, which we call Peripheral Interaction, is based on the observation that in everyday life, many actions occur outside the focus of attention [2]. For example, we can easily tie our shoelaces while having a conversation or drink from a cup while reading a book. These actions are seamlessly embedded in everyday routines.
Similar to everyday actions, Peripheral Interactions are interactions with technology that occur outside the focus of attention and fluently blend into everyday life. This workshop aims to bring together a community of researchers and practitioners with various backgrounds (e.g. computer science, interaction design, interactive arts, psychology, product design, social science), to discuss and create a common ground for future research on Peripheral Interaction. Besides people working on Peripheral Interaction or directly related topics, we especially invite those interested in better fitting interactive technologies into everyday life, and challenge them to think of their work as Peripheral Interaction. The workshop addresses the following questions:
What is Peripheral Interaction? The term Peripheral Interaction is used in various ways, for example to describe interfaces located on the side of the user's visual field [3]; to describe brief actions performed in parallel to other activities [5, 8]; or to encompass both background perception and interaction [1]. Several other terms are also known that describe related interaction styles, such as eyes-free interaction [7] and implicit interaction [9]. The first goal of this workshop is to create a common understanding and comprehensive definition of Peripheral Interaction.
How to put Peripheral Interactions into practice? To gain a common understanding of Peripheral Interaction, not only high-level definitions but also practical, interaction-level knowledge is required. In this workshop, we will discuss how (potential) Peripheral Interactions can be put into practice through the participants' presentations. Based on this we will explore the common attributes of Peripheral Interaction. This is relevant to (1) recognize Peripheral Interaction, (2) support Peripheral Interaction researchers, evaluators and designers, and (3) find opportunities to evaluate and improve existing interactions from the perspective of Peripheral Interaction.
How to evaluate Peripheral Interaction? A major challenge of Peripheral Interaction is evaluating it. Most evaluation methods known in HCI seem unsuitable for evaluating whether an interactive system blends into everyday life. To assess this main goal of Peripheral Interaction, one needs to deploy a system in an everyday context for a period of time [6]. Since this approach is demanding and time-consuming, it would be interesting to explore alternatives. Using the participants' experiences as a starting point, we will discuss evaluation strategies that are suitable for Peripheral Interaction.
2 Workshop Goals
This workshop has the following four main goals. (1) To create and bring together a community of artists, practitioners, engineers, designers and researchers with various backgrounds who are directly or indirectly working on Peripheral Interaction. (2) To share and discuss definitions in order to create a common understanding of Peripheral Interaction. (3) To share and discuss (potential) examples of Peripheral Interaction, in order to identify their common attributes. (4) To share and discuss evaluation strategies suitable for Peripheral Interaction.
3 Structure of the One-Day Workshop
Before the Workshop. Potential participants submit a position paper (up to six pages), addressing the authors' work and its (direct or indirect) relation to Peripheral Interaction. Participants may bring demonstrators or videos to show their work, but this is by no means a requirement.
During the Workshop. The workshop will kick off with a presentation by each participant, in the form of a talk, video or demo (chosen by the participant). Next, participants will informally get to know each other in a "speed-date" by sharing views on Peripheral Interaction, followed by a keynote by Albrecht Schmidt, entitled "Creating Seamless Transitions between Central and Peripheral User Interfaces". In the afternoon, interaction examples will be enacted and discussed in small groups to establish common ground for Peripheral Interaction. After a break, one group will do a creative activity on design for Peripheral Interaction and another group will explore evaluation strategies. The workshop will wrap up by summarizing the results, and thereby aims to lay the foundations for a structured exploration of this new interaction paradigm.
After the Workshop. Accepted submissions will be included in the workshop proceedings, published as a technical report as well as on the workshop's webpage. This webpage (www.peripheralinteraction.com) will also host a blog and a forum for the continuation of community-building on Peripheral Interaction after the workshop.
References
1. Bakker, S., van den Hoven, E., and Eggen, B. FireFlies: Physical Peripheral Interaction
Design for the Everyday Routine of Primary School Teachers. Accepted for TEI 2013.
2. Bakker, S., van den Hoven, E., and Eggen, B. Acting by hand: Informing interaction de-
sign for the periphery of people’s attention. Interact Comput 24, 3 (2012), 119-130.
3. Edge, D. and Blackwell, A.F. Peripheral tangible interaction by analytic design. In Proc. TEI, ACM Press (2009), 69–76.
4. Hausen, D., Boring, S., Lueling, C., Rodestock, S., and Butz, A. StaTube: facilitating state management in instant messaging systems. In Proc. TEI, ACM Press (2012), 283–290.
5. Hausen, D. and Butz, A. Extending Interaction to the Periphery. In Proc. "Embodied Interaction: Theory and Practice in HCI", Workshop at CHI, (2011), 61–68.
6. Hazlewood, W.R., Stolterman, E., and Connelly, K. Issues in evaluating ambient displays
in the wild: two case studies. In Proc. CHI, ACM Press (2011), 877–886.
7. Oakley, I. and Park, J.-S. Designing eyes-free interaction. In Proc. HAID, Springer-Verlag
(2007), 121–132.
8. Olivera, F., García-Herranz, M., Haya, P.A., and Llinás, P. Do Not Disturb: Physical Inter-
faces for Parallel Peripheral Interactions. In Proc. INTERACT, Springer-Verlag (2011),
479–486.
9. Schmidt, A. Implicit human computer interaction through context. Personal Technologies
4, 2-3 (2000), 191-199.
10. Weiser, M. and Brown, J.S. The Coming Age of Calm Technology. In Beyond Calculation: The Next Fifty Years of Computing. Springer-Verlag, (1997), 75–85.
A Context Server to Allow Peripheral Interaction
Borja Gamecho, Luis Gardeazabal and Julio Abascal
Egokituz: Laboratory of HCI for Special Needs, University of the Basque Country
(UPV/EHU), Donostia, Spain
{borja.gamecho, luis.gardeazabal, julio.abascal}@ehu.es
Abstract. This paper presents research on creating a mobile context server application that provides other applications with complex context information. The main objective is to avoid disrupting or overwhelming users with explicit requests for data that can be obtained otherwise, by interpreting combined sensor data. It is mainly aimed at mobile devices used by people with disabilities, to allow them to interact with local services supplied by means of ubiquitous computing.
Keywords: Context awareness, people with disabilities, accessible ubiquitous
computing.
1 Introduction
There is an increasing variety of services provided by local machines, such as ATMs,
information kiosks, vending machines, etc. These services are frequently inaccessible
for people with disabilities because they are equipped with rigid user interfaces.
Nevertheless, the application of Ubiquitous Computing techniques allows access to
intelligent machines through wireless networks by means of mobile devices.
Smartphones can provide an excellent way to interact with ubiquitous services that
would otherwise be inaccessible. People with disabilities can benefit from this type of
interaction if they are provided with accessible mobile devices that are well adapted to
their characteristics and needs.
The INREDIS1 project created a ubiquitous computing environment to allow
people with disabilities to interact with locally provided services. In this project our
laboratory developed EGOKI [1], an automatic interface generator that is able to
create adapted and accessible user interfaces that are downloaded to the user device
when she or he wants to access a ubiquitous service.
Nevertheless, when users are immersed in an “intelligent environment” they can
become overwhelmed by the quantity of explicit interactions that they have to manage
through their mobile device. For this reason we are working on ways to enhance a
mobile device’s context awareness to ease the interaction with the aforementioned
services.
1 http://www.inredis.es/default.aspx
2 Related Work
User attention is an important concern for interaction with a ubiquitous system. The work of Weiser & Brown (1997) distinguishes two levels of attention: central and peripheral. Central attention focuses on the main task being addressed by the user, while peripheral attention relates to "what we are attuned to without attending to explicitly" [2]. Additionally, in multitasking environments the user's attention can be negatively affected by interruptions. Leiva et al. (2012) reported that interruptions while interacting with an application can delay the completion of a task by up to four times in a mobile environment [3]. Thus, two conclusions can be drawn: it is desirable to ensure that users can pay attention to applications around them without feeling overwhelmed; and one should attempt to maximize the user's focus on a single central task, reducing shifting between tasks.
Two different ways to address the user's attention are described below. On the one hand, it is possible to merge background interaction with peripheral attention. The work of Bakker et al. (2012) [4] presents an interactive system called FireFlies to explore the way in which primary school teachers are able to manage secondary tasks in the periphery of their attention. The intention is to study "how interaction with technology can fluently blend into people's everyday routines, similar to the way in which interactions with the physical world are a part of routines". Using this approach, tasks that would otherwise require direct attention or cognitive effort disappear from the users' central attention. On the other hand, a slightly different approach is to consider implicit human-computer interaction. Schmidt (2000) [5] defined this as "An action performed by the user that is not primarily aimed to interact with a computerized system but which such a system understands as input." That work studies different sources of implicit information, the most relevant to this work being "sensing context using sensors". Schmidt described sensor-based perception as a way to recognize the implicit context and illustrated some examples that can help to manage interruptions and limit the need for input when users are interacting with computers. Therefore, the implicit context can be useful to free a user's attention from a specific task.
Concerning the supporting technology, two approaches stand out in the literature. The first frees the user's attention by using wearable devices. Saponas (2010) defines always-available interaction, describing methods to interact with a mobile device without using it explicitly [6]. Likewise, a user can receive notifications from applications at a glance using smartwatches as a second screen2. The second approach enhances the context-awareness of ubiquitous applications using smartphones. Smartphones and the sensors within them are useful to characterize activities and recognize context information. Reddy et al. (2010) were able to distinguish between the movements of a smartphone user (stationary, walking, running, biking, and travelling in a motor vehicle) using the GPS receiver and the accelerometer [7]. In a similar way, the work of Wiese et al. (2013) recognizes whether a mobile is in a bag, in a pocket or in the hand [8].
2 Sony SmartWatch (http://www.sony.com/SmartWatch) or Pebble (http://getpebble.com/)
3 A Context Server for Peripheral Interaction
In our case, peripheral interaction includes all the implicit activities that are conducted to interact with an application. Our objective is therefore to collect, by means of sensors, any type of information that helps the device to manage the interaction, thus minimizing the need for explicit user participation. This is called context information, and we gather it by means of sensors that are located in the mobile device, worn by the user, or deployed in the environment.
Usually, each mobile application has to collect and process data from the sensors available in the device in order to adapt the interaction to the context. This is frequently done in real time and in competition with other applications, which limits the possibilities to extract complex results.
Our approach focuses on a context server application that collects data from the sensors, combines them, and extracts complex information that can be directly used by other applications.
3.1 From Sensing to Perception
In order to determine what information is provided in each case, we created a sensor
taxonomy that classifies the different types of sensors that are currently found in
mobile devices or worn by users. This taxonomy allows us to work with “abstract
sensors” independently of their specific datasets.
To extract combined information we developed an ontology of sensors, including
rules that specify the type of information that can be obtained from the combination
of different sets of sensors.
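To make the idea concrete, the following is a minimal sketch of such a server in Python, assuming hypothetical class names, sensor kinds and thresholds; the taxonomy, ontology and APIs of the actual system are not published here, so this only illustrates the principle of rule-based combination of abstract sensors.

```python
# Illustrative sketch only: all names and thresholds below are hypothetical.

class AbstractSensor:
    """A sensor type from the taxonomy, independent of the concrete device."""
    def __init__(self, kind, read_fn):
        self.kind = kind      # e.g. "microphone", "accelerometer", "gps"
        self.read = read_fn   # callable returning the latest raw reading

class ContextServer:
    """Collects readings from registered sensors and applies combination
    rules to derive higher-level context facts for client applications."""
    def __init__(self):
        self.sensors = {}     # kind -> AbstractSensor
        self.rules = []       # (required sensor kinds, inference function)

    def register(self, sensor):
        self.sensors[sensor.kind] = sensor

    def add_rule(self, required, infer):
        self.rules.append((set(required), infer))

    def context(self):
        """Run every rule whose required sensors are currently available."""
        facts = {}
        for required, infer in self.rules:
            if required <= set(self.sensors):
                readings = {k: self.sensors[k].read() for k in required}
                facts.update(infer(readings))
        return facts

server = ContextServer()
server.register(AbstractSensor("microphone", lambda: 78.0))    # dB, stubbed
server.register(AbstractSensor("accelerometer", lambda: 2.1))  # variance, stubbed

# Rule combining two sensors into complex context facts.
server.add_rule(
    ["microphone", "accelerometer"],
    lambda r: {"noisy": r["microphone"] > 70, "walking": r["accelerometer"] > 1.5},
)
print(server.context())  # {'noisy': True, 'walking': True}
```

Client applications would then query the derived facts instead of polling raw sensors themselves, which is exactly the separation of concerns the context server is meant to provide.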
3.2 From Perception to Interaction
Our context server can contribute to peripheral interaction by providing applications with valuable information that would otherwise have to be explicitly requested from the user.
The context provider can also help developers to make use of the context in a simpler way. For example, the context server allows applications to select the most appropriate modality for interacting with a user with communication restrictions, due to disability or to a situational impairment. For instance, if the microphone detects that the local noise level is too high, the application can avoid voice commands and prioritize text or images; or, if the inertial sensors detect that the user is walking, driving or riding a bicycle, touch input can be switched to voice input.
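A hedged sketch of such a modality-selection policy, reusing the context facts from the previous sketch; the fact names and the policy itself are illustrative assumptions, not the authors' actual rules.

```python
def choose_modalities(facts):
    """Return preferred output/input modalities from inferred context facts.
    Fact names ("noisy", "walking") and the policy are illustrative only."""
    if facts.get("noisy"):
        # Noise drowns out speech: prefer text or images over voice commands.
        return {"output": "text_or_images", "input": "touch"}
    if facts.get("walking"):
        # Hands and eyes are busy: switch touch input to voice input.
        return {"output": "speech", "input": "voice"}
    return {"output": "default", "input": "touch"}

print(choose_modalities({"noisy": True}))    # avoid voice commands in noise
print(choose_modalities({"walking": True}))  # switch to voice while moving
```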
In addition, some applications for people with disabilities use the server to perform their tasks without disturbing the user. Four examples of freeing users' attention using our context server approach are described below.
3.2.1 Affective Interaction
Affective computing focuses on detecting and reacting to emotions by using
computers. Emotional information can be useful to understand and detect the context
of the user when interacting with an application. The work of Haag et al. (2004)
presents an example of inferring a user’s mood and emotions using physiological
signals [9] obtained via sensor devices that measure heart rate variation, perspiration,
respiration rate, skin temperature, etc.
The context server application can detect and manage data from the wearable sensor devices and infer information about the user's mood to feed to applications. This is valuable for peripheral interaction. For instance, it is possible to avoid the stressful situations that occur when a user has to attend to too many tasks simultaneously. Similarly, tasks can be rearranged automatically to distinguish the enjoyable ones from the annoying ones.
3.2.2 Smart Wheelchair
Smart wheelchairs are robotic platforms that assist people with mobility restrictions in navigating the physical environment. They are equipped with sensors (sonar, laser range finders, bump sensors, etc.) in order to perceive elements that can affect navigation. Accordingly, diverse modes of operation have been developed to assist the user, including collision avoidance, wall following and close approach to objects [10].
Controlling a smart wheelchair with a joystick can become a stressful task. Situations such as approaching a narrow space or going through a door may require a high level of concentration. In such a scenario, the context server application would discover and integrate the wheelchair's sensors. The collected data are helpful to infer when the user is facing a stressful situation. The context server application provides this information to the wheelchair, which can trigger automatic guidance procedures.
3.2.3 Smart Traffic Lights
There are smart traffic lights that assist people with special needs to cross the street
safely. For instance, current Audible Pedestrian Signals3 (APS) attached to traffic
lights help people with vision impairments to know when they can walk across a
pedestrian crossing. In addition, works such as UCARE [11] present prototypes for scenarios where impaired users can negotiate, via their mobile devices, the time required to cross the road. If the user has to handle the device when approaching a pedestrian crossing, his/her attention is disrupted. With the context server application, however, this task can be moved to the periphery. The speed and position of the user are gathered using accelerometers and GPS and sent to the traffic lights to activate the APS. Moreover, the mobile device can negotiate, in the background, the time required to cross without the explicit participation of the user.
3 APS are also called accessible pedestrian signals: http://www.apsguide.org/index.cfm
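The sketch below illustrates this scenario under stated assumptions (hypothetical function names, a planar distance approximation, invented coordinates and an invented safety margin); the actual UCARE negotiation protocol is not reproduced here.

```python
import math

def distance_m(lat1, lon1, lat2, lon2):
    """Rough planar approximation; adequate for distances of tens of metres."""
    dx = (lat2 - lat1) * 111_320
    dy = (lon2 - lon1) * 111_320 * math.cos(math.radians(lat1))
    return math.hypot(dx, dy)

def crossing_time_s(walking_speed_mps, road_width_m, margin=1.5):
    """Green-phase duration to request: width / speed plus a safety margin.
    The 50% margin is an illustrative assumption, not a UCARE parameter."""
    return road_width_m / max(walking_speed_mps, 0.1) * margin

# Background loop sketch: when GPS places the user near a known crossing,
# negotiate the crossing time without explicit user participation.
user, crossing = (43.3203, -1.9845), (43.3205, -1.9846)  # invented coordinates
if distance_m(*user, *crossing) < 30.0:
    request = crossing_time_s(walking_speed_mps=0.8, road_width_m=12.0)
    print(f"request APS green phase of {request:.0f} s")  # sent to the lights
```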
3.2.4 Peripheral Interaction with EGOKI
As mentioned in the introduction, EGOKI is a UI generator for ubiquitous services.
The user’s abilities, device characteristics and service functionalities are taken into
account to create an accessible UI. For each function of the service or application,
EGOKI selects the appropriate input/output elements to ensure a suitable interaction
[1].
The context server application empowers EGOKI to allow peripheral interaction for some applications. Firstly, it helps to detect appropriate input and output methods; for instance, by allowing the use of gestures when a wearable device with accelerometers or an electromyogram is detected. Secondly, it helps to choose the communication modality accurately. For instance, when blind users are in a noisy environment, avoiding speech and audible channels would be problematic; instead, the volume of the user's device should be adapted to the noise level. Finally, when the context server application provides accurate information about the user, the UI generation process avoids having to ask the user explicitly for that information. For instance, when an application needs the user's location, it is provided by the context server and EGOKI excludes that input element from the final UI.
Therefore, some ubiquitous services will not require explicit attention from the user and, owing to the change of modality, will run in the periphery of the user's attention.
4 Discussion and Conclusion
There are a number of issues that require our attention when merging peripheral
interaction with the context server application.
To begin with, peripheral interaction can conflict with a well-established practice of activity-aware systems. As mentioned by Mahmud et al. (2009) [12], activity-aware systems must inform users so that they can correct failures in activity recognition, to avoid mistakes and manage uncertainty. This would increase the number of interactions that a user must perform and would consequently draw his/her attention more than necessary.
In addition, context information depends on the set of sensors detected by the context server application. Users can be affected by a loss of smartness when the availability of sensors changes. This is related to the "masked uneven conditioning" challenge stated by Satyanarayanan (2001) [13].
Moreover, the application domain is a key factor for activity recognition. Activity and emotion recognition techniques are frequently less accurate "in the field" than in the laboratory. In a similar way, the accuracy of the results depends on the person.
Finally, the impact on a user’s privacy must be considered, because large quantities
of data about the user are collected and logged. These data must be protected to avoid
their unauthorized use; for instance, by commercial applications.
The combination of sensor data allows the interpretation of the context at a higher
level, providing mobile applications with implicit methods of interaction that
augment communication without disrupting the user’s attention for routine
adjustments.
Acknowledgments
EGOKITUZ is funded by the Department of Education, Universities and Research of
the Basque Government (grant IT395-10). Borja Gamecho holds a PhD scholarship
from the Research Staff Training Programme of the Basque Government.
References
1. Abascal, J., Aizpurua, A., Cearreta, I. et al.: Automatically Generating Tailored Accessible
User Interfaces for Ubiquitous Services. In: 13th international ACM SIGACCESS
conference on Computers and accessibility, pp. 187--194. ACM, New York (2011)
2. Weiser, M., Brown, J. S.: The coming age of calm technology. In: Denning P.J., Metcalfe
R.M. (eds.) Beyond calculation, pp. 75--85. Springer, New York (1997)
3. Leiva, L., Böhmer, M., Gehring, S., Krüger, A.: Back to the app: the costs of mobile
application interruptions. In: 14th international conference on Human-computer interaction
with mobile devices and services, pp. 291--294. ACM, New York (2012)
4. Bakker, S., van den Hoven, E., Eggen, B.: FireFlies: supporting primary school teachers
through open-ended interaction design. In 24th Australian Computer-Human Interaction
Conference, pp. 26--29. ACM, New York (2012)
5. Schmidt, A.: Implicit human computer interaction through context. Personal and
Ubiquitous Computing, 4(2-3), 191--199. Springer-Verlag (2000)
6. Saponas, T. S.: Supporting everyday activities through always-available mobile computing. PhD thesis, University of Washington, Seattle (2010)
7. Reddy, S., Mun, M., Burke, J., Estrin D., Hansen M., Srivastava M.: Using Mobile Phones
to Determine Transportation Modes. J. ACM Trans. Sen. Netw. 6, 2, Article 13 (2010)
8. Wiese, J., Saponas, T. S., Brush, A.: Phoneprioception: Enabling Mobile Phones to Infer
Where they are Kept. In: SIGCHI Conference on Human Factors in Computing Systems, pp. 2157--2166. ACM, New York (2013)
9. Haag, A., Goronzy, S., Schaich, P., Williams, J.: Emotion recognition using bio-sensors:
First steps towards an automatic system. In: André, E., Dybkjær, L., Minker, W.,
Heisterkamp, P. (eds.) Affective Dialogue Systems. LNCS, vol. 3068, pp. 36--48.
Springer, Berlin Heidelberg (2004)
10. Simpson, R. C.: Smart wheelchairs: A literature review. J. Rehabil. Res. Dev., 42(4), 423--
436 (2005)
11. Vales-Alonso, J., Egea-López, E., Muñoz-Gea, J. P., García-Haro, J., Belzunce-Arcos, F., Esparza-García, M. A., González-Castaño, F. J.: UCARE: Context-aware services for disabled users in urban environments. In: The Second Mobile Ubiquitous Computing, Systems, Services and Technologies, 2008, pp. 197--205. IEEE Press (2008)
12. Mahmud, N., Vermeulen, J., Luyten, K., Coninx, K.: The five commandments of activity-
aware ubiquitous computing applications. In: Duffy, V.G. (eds.) Digital Human Modeling.
LNCS, vol. 5620, pp. 257--264. Springer, Berlin Heidelberg (2009)
13. Satyanarayanan, M.: Pervasive computing: Vision and challenges. IEEE Personal Comm.,
8(4), pp. 10--17. IEEE press (2001)
Peripheral Interaction in the context of DJing
Mayur Karnik
Madeira Interactive Technologies Institute, University of Madeira
mayurkarnik@gmail.com
Abstract. DJs constantly negotiate between their social and technical roles while performing, and often encounter conflicts between their needs to interact with the audience and with their tools. In recent times, HCI researchers have focused on tools and systems for DJs to manage interactions with their audience. However, there is a need to design 'calm' systems that help DJs manage their social interactions better and that interfere minimally with their primary tasks. Interpreting this problem through the lens of Peripheral Interaction holds the promise of suggesting appropriate solutions that might lead to a better understanding of the broader fields of crowd-computer interaction and designing for spectators.
Keywords: Peripheral Interaction, DJs, Nightclubs
1 Introduction
DJs adopt a wide variety of social and technical roles while performing in nightclubs. As musicians operating in an inherently technology-led domain, their performances involve interacting with tools and their audience [1]. These interactions often occur in busy settings and compete with each other, putting a strain on the DJ's attention. DJs could benefit from immediate feedback from the audience while performing, but tend to avoid direct interaction since doing so interferes with their more important tasks, such as browsing music libraries and manipulating controls to manage the music stream. Moreover, the context of their work (usually dark settings) makes it difficult for them to easily shift their attention back and forth between their tools and the audience, resulting in scenarios where audience interaction becomes limited to body gestures and direct observations of the crowd. An interpretation of this problem through the lens of 'Peripheral Interaction' could point to new ways of approaching this design space and consequently contribute to a richer understanding of the broader fields of crowd-computer interaction [2] and designing for spectators [3].
2 Related Work
HCI researchers have shown considerable interest in recent times in understanding the needs and work contexts of DJs, and have proposed technologies for them to manage their work better. Gates et al. classify some of the early works on nightclub-specific interactive technologies into the domains of audience-centered applications, DJ-centered applications, and applications for DJ-audience interaction. These applications took advantage of sensors, mobile devices and communication technologies in the form of playful applications, performative spaces, automation and mixing tools, and systems based on bio-feedback [1]. More recently, Ahmed et al. conducted ethnographic studies of DJs and give a good account of recent studies that proposed multi-modal prototypes (e.g. wireless, mobile, haptic, and multi-touch) as DJ tools [4].
However, we argue that most of these proposals require the DJ to pay direct attention to the tools, and hence run the risk of interfering with the DJ's intensive primary task: playing music.
3 Observations
Our previous work briefly describes some in-situ observations on how the resident DJs we studied negotiate their social interactions while performing [5]. We noted that the DJs' social circles acted as a resource for receiving feedback. This highlights the need to differentiate the degrees of relationship that a DJ has amongst the audience. We are interested in exploring how technology can help the DJ manage a two-way interaction with the audience in a 'calm' [6] way, without a substantial increase in his or her cognitive load.
As part of this process, the lead author of this work has been engaged in long-term ethnographic studies of DJs and, in the spirit of 'overt participant observation', has performed 12 gigs over the last two years in the capacity of both a DJ and a VJ. In one of the recent VJ gigs, an interesting phenomenon was observed: people familiar with the VJ rolled empty plastic bottles to his feet to draw his attention, which they needed in order to show appreciation of particular moments during the performance. Others in the audience observed and imitated this behavior, and it gradually turned into a playful and socially acceptable way of expressing appreciation. Another observation was that while VJs project and control visuals directly based on the music, the DJs are often unable to see the projected visuals because of the need to direct their attention to their primary task. Both these observations point to the need for understanding the periphery of performers' attention and how some of these social interactions can be supported by designing non-intrusive interfaces.
4 Peripheral Interaction
We are currently working on a few design directions that have resulted in a number of concepts for nightclub settings. One of the concepts is a tangible interface or an interactive system for DJs, connected to projectors beaming colored blobs downwards onto the crowds on the dance floor. The DJs will be able to interact with sections of the crowd by manipulating these color blob projections. However, one of our primary concerns is to design the interaction paradigms in such a way that they are playful and useful but at the same time interfere minimally with the DJ's interaction with the music-making tools.
The presentation at the workshop will be structured around a series of edited video
snippets illustrating performers’ behavior as they seek to engage audience members as
a secondary task to their core performance activities.
References
1. Carrie Gates, Sriram Subramanian, and Carl Gutwin. 2006. DJs' perspectives on interac-
tion and awareness in nightclubs. In Proceedings of the 6th conference on Designing Inter-
active systems (DIS '06). ACM, New York, NY, USA, 70-79.
2. Barry Brown, Kenton O'Hara, Timothy Kindberg, and Amanda Williams. 2009. Crowd
computer interaction. In CHI '09 Extended Abstracts on Human Factors in Computing Sys-
tems (CHI EA '09). ACM, New York, NY, USA, 4755-4758.
3. Stuart Reeves, Scott Sherwood, and Barry Brown. 2010. Designing for crowds. In Pro-
ceedings of the 6th Nordic Conference on Human-Computer Interaction: Extending
Boundaries (NordiCHI '10). ACM, New York, NY, USA, 393-402.
4. Ahmed Ahmed, Steve Benford, and Andy Crabtree. 2012. Digging in the crates: an ethnographic study of DJs' work. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '12). ACM, New York, NY, USA, 1805-1814.
5. Mayur Karnik, Ian Oakley, and Valentina Nisi. 2013. Performing online and offline: How
DJs use social networks. To appear in Proceedings of Interact 2013, Cape Town.
6. Weiser, M. and Brown, J.S. The Coming Age of Calm Technology. In Beyond Calculation: The Next Fifty Years of Computing. Springer-Verlag, (1997), 75–85.
Towards Ambient Notifications
Heiko Müller, Martin Pielot, Rodrigo de Oliveira
OFFIS and Telefonica Research
Abstract. In this paper we report on two studies for displaying information in the periphery of the user's attention. One experiment explores the use of ambient light to inform users of upcoming tasks in an office scenario, while the other investigates whether vibro-tactile displays can become peripheral. We show that both modalities have potential for conveying information outside a user's focussed attention.
Key words: Ambient light display, reminder, interruptions, user studies.
1 Background and Motivation
Everyday life is filled with information competing for our attention. While at work, we receive notifications of incoming mail and reminders of the next meeting, on top of phone calls and interruptions by colleagues. Additionally, there may be many more information sources trying to get our attention. Smartphones deliver push notifications whenever a contact writes a message in a chat, the Facebook timeline gets updated, or a tweet is retweeted, to name a few.
Iqbal and Bailey [5] define notification as a visual cue, auditory signal, or
haptic alert generated by an application or service that relays information to a
user outside her current focus of attention. On smartphones, notifications are
typically delivered instantly, e.g., when the user receives a message or when a
meeting is about to begin.
Instant delivery of notifications has been extensively studied in the context of information workers. One particular challenge is that instantly delivered notifications may interrupt the receiver during other tasks. Czerwinski et al. [3] highlight that people find it difficult to return to disrupted tasks after being interrupted by, e.g., instant messages, calls, or an engagement with a colleague. They conducted a diary study with 11 office workers and found that interrupted tasks were not resumed immediately after 40% of the interruptions. As a solution, they suggested helping interrupted users return to the interrupted task by grouping applications and folders by task.
Cutrell et al. [2] conducted a study in which 16 participants performed a task of searching books in a list organized either by title or topic. They compared performance between search type (concrete title versus abstract topic), notification, and marker. Their results show that notifications make tasks much slower, and their effect is more salient when the user is in the middle of a cognitively demanding task.
Iqbal et al. [6] studied the effect of email notifications on the desktop computers of office workers. For two weeks, they monitored the application usage of 20 Microsoft employees. They found that the study participants spent roughly one third of their working time in Outlook and one third working in their primary applications. Turning off notifications had no significant effect on this distribution. On average, participants received 3 email notifications per hour, and 25% of notifications led users to immediately switch to the email client. When checking Outlook right after receiving a notification, participants switched back twice as fast, indicating that Outlook notifications were triggering more opportunistic changes between applications. Outlook was accessed 19–22 times per hour, or roughly every three minutes. In the second week of the study, participants were asked to turn off email notifications. While 8 participants checked emails more frequently, 12 participants checked them less often, which indicates that notifications can influence people in at least two ways: either by creating the urge to respond immediately or by serving as a form of awareness.
Mark et al. [7] studied the negative effects of interruptions by email through
a radical approach. For 5 work days, they completely cut off 13 information
workers from email usage. Their findings reveal that, without email, the workers
multitasked less, spent more consecutive time on tasks, and had a decreased
stress level.
Adamczyk et al. [1] studied the difference between delivering interruptions during and after completing a task. 16 graduate students had to fulfill different tasks (correcting text, writing text, web search) on a PC. From time to time, they were interrupted by a full-screen pop-up showing news. The results show that people felt a higher workload, measured by the NASA Task Load Index, when the interruptions were delivered during the tasks. Fogarty et al. [4] showed that it is possible to predict human interruptibility with simple sensors.
However, while delivering an email notification can be deferred until the user
has completed a task, other notifications, such as calendar entry reminders, have
to be delivered on time.
2 Ambient Notifications
With the concept of Ambient Notifications, we pursue the idea of slowly and gently drawing a person's attention towards an upcoming notification over time. While the users can stay focused on the primary task, they will slowly be made aware of the upcoming event. According to Matthews et al. [8], (peripheral) displays can target different attentional levels, ranging from pre-attention to focussed attention. The typical notification alarm jumps directly from absence of attention to full attention. With Ambient Notifications, we aim at moving continuously from pre-attention to focussed attention by slowly increasing the saliency of the displayed cues. This allows users to be aware of the upcoming notification before it is actually due. We assume that this can reduce anxiety and allow workers to finish tasks in time, as opposed to leaving them unfinished when, e.g., a meeting is beginning.
The challenge to solve is how to convey information in parallel to a work task, in particular how to continuously increase the peripheral display's saliency so that it slowly becomes more and more present in the mind of the worker. We report on two studies investigating the use of ambient light and vibro-tactile patterns. For ambient light, we provide evidence that by continuously changing the color of an illuminated office wall behind the monitor, we can keep users aware of an approaching appointment. For vibro-tactile patterns, we provide first evidence that continually repeated vibration patterns can be perceived in the periphery of attention at all.
2.1 Ambient Timer
With Ambient Timer [9], we created a system to unobtrusively and continuously remind users of upcoming events in an office scenario. Ambient Timer exploits the user's peripheral vision to convey information on an upcoming task around a computer monitor, in a way that lets the user still focus on the primary task she is executing on the screen (see Figure 1).
Fig. 1. Ambient Timer illuminating the surroundings of the monitor
We built an RGB-LED frame which we mounted to the back of a monitor. The light emitted by the LEDs was then reflected from the wall the monitor was placed against. Exploring the design space, we created continuous light patterns designed to increase obtrusiveness over time (in terms of Matthews' classification, we continually increase obtrusiveness to slowly shift from pre-attention to divided attention) in order to slowly make users aware of upcoming tasks while still giving them the chance to wrap up their primary task in a sensible way. We then conducted a lab experiment with controlled light conditions to test our system against traditional reminding techniques. 12 participants were asked to conduct writing tasks while keeping track of when to finish in time. We found that our system is at least competitive with traditional reminding techniques such as notification popups or users checking the clock.
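As an illustration of such a light pattern, the sketch below maps time-to-event to an RGB colour that drifts continuously from a calm, dim hue to a salient one; the constants and the specific colour trajectory are illustrative assumptions, not the patterns evaluated in the study.

```python
def light_color(minutes_left, ramp_start=15.0):
    """Map time-to-event to an RGB colour whose saliency rises continuously.
    Illustrates a smooth ramp from an unobtrusive (pre-attention) cue to a
    salient one; the published system's actual patterns differ."""
    t = 1.0 - min(max(minutes_left, 0.0), ramp_start) / ramp_start
    red = int(60 + 195 * t)        # drift toward bright, warm red
    green = int(60 * (1.0 - t))
    blue = int(120 * (1.0 - t))    # start as a calm, dim blue
    return (red, green, blue)

for m in (20, 10, 5, 1, 0):
    print(m, light_color(m))       # saliency grows as the event approaches
```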
2.2 Peripheral Perception of Vibration Patterns
While light has been shown to be a powerful modality for designing ambient displays, it may have disadvantages if the goal is to keep the interaction private or to avoid polluting the environment with more information. The sense of touch, in contrast, offers strong potential for personal, private information presentation. For example, Tam et al. [11] recently presented a timing tool for oral presentations that sends different signals to presenters indicating that 3, 1, or 0 minutes are left before finishing the talk. At each of the intervals, a wristband would start generating different vibration cues, which would "terminate after an interval, but allowed the speaker to stop them earlier by pressing the wristband" [11]. As such, these vibration cues can still be seen as interruptions, which attract attention at three points in time rather than continuously, as the Ambient Timer does.
Hence, we recently explored the question whether continuous vibro-tactile patterns can become peripheral at all [10]. For three days, we exposed 15 subjects to a continual vibration pattern, emitted by a mobile device carried in the trouser pocket. The subjects set the vibration to an intensity at which they could barely perceive it. At random intervals, the vibration stopped. In this case, the subject had to take the phone out of the pocket and acknowledge this event by pressing a button. When doing so, they were presented with a short questionnaire to gather subjective feedback. Subjects did not acknowledge these events immediately, as they would have if the vibration had been in their focussed attention, but on average after 15.2 minutes (median = 8.3 min, s = 19.6). At the same time, they reported not being annoyed by the signal in 94.4% of the events. These results indicate that the stimuli were perceived in the periphery of attention, i.e. outside of focussed attention, while subjects remained aware of them.
While we have yet to investigate how well people perceive subtle, continuous
changes in the vibration pattern, this shows that there is an opportunity to use
peripheral vibro-tactile displays to deliver ambient notifications.
3 Future Work
In future work, we need to deepen our understanding of how to manipulate the perceived saliency of a peripheral display. For vibro-tactile patterns, we have just shown that conveying information in the periphery of attention is possible. What is missing is a way to continuously increase saliency over time. For the Ambient Timer, we have shown how to increase saliency in a lab study. However, first informal tests have shown that in an actual work context other factors appear to be present which influence the perceived salience. Future work hence needs to test these displays in-situ in order to identify these factors and provide us with an understanding of how to control for them. Taking things a step further, future work has to focus on how users will not only perceive information in the periphery of their attention but also control the information device in a way that does not require their focussed attention.
References
1. P. D. Adamczyk and B. P. Bailey. If not now, when?: the effects of interruption at
different moments within task execution. In Proc. CHI ’04, pages 271–278, New
York, NY, USA, 2004. ACM.
2. E. Cutrell, M. Czerwinski, and E. Horvitz. Notification, disruption, and memory: Effects of messaging interruptions on memory and performance. In Proc. INTERACT '01, 2001.
3. M. Czerwinski, E. Horvitz, and S. Wilhite. A diary study of task switching and
interruptions. In Proc. CHI ’04, pages 175–182, New York, NY, USA, 2004. ACM.
4. J. Fogarty, S. E. Hudson, C. G. Atkeson, D. Avrahami, J. Forlizzi, S. Kiesler, J. C.
Lee, and J. Yang. Predicting human interruptibility with sensors. ACM Trans.
Comput.-Hum. Interact., 12(1):119–146, mar 2005.
5. S. T. Iqbal and B. P. Bailey. Oasis: A framework for linking notification delivery
to the perceptual structure of goal-directed tasks. ACM Trans. Comput.-Hum.
Interact., 17(4):15:1–15:28, Dec. 2010.
6. S. T. Iqbal and E. Horvitz. Notifications and awareness: a field study of alert usage
and preferences. In Proc. CSCW ’10. ACM, 2010.
7. G. Mark, S. Voida, and A. Cardello. ”a pace not dictated by electrons”: an empirical
study of work without email. In Proc. CHI ’12, pages 555–564, New York, NY,
USA, 2012. ACM.
8. T. Matthews, A. K. Dey, J. Mankoff, S. Carter, and T. Rattenbury. A toolkit for
managing user attention in peripheral displays. In Proc. UIST, 2004.
9. H. Mueller, A. Kazakova, M. Pielot, W. Heuten, and S. Boll. Ambient timer -
unobtrusively reminding users of upcoming tasks with ambient light. In Proc.
INTERACT ’13, 2013.
10. M. Pielot and R. de Oliveira. Peripheral vibro-tactile displays. In Proc. MobileHCI
’13, 2013.
11. D. Tam, K. MacLean, J. McGrenere, and K. J. Kuchenbecker. The design and
field observation of a haptic notification system for timing awareness during oral
presentations. In Proc. CHI ’13. ACM, 2013.
Peripheral interaction for sports – exploring two modalities for real-time feedback
Stina Nylander, Jakob Tholander, Alex Kent
Mobile Life @ SICS, Swedish Institute of Computer Science, Box 1263,
16429, Kista, Sweden
stny@sics.se, jakobth@dsv.su.se, alexkent@mac.com
Abstract. We believe that sports is a domain that would both provide valuable input to the area of peripheral interaction and benefit from peripheral interaction itself. We present two pilot studies on peripheral interaction for cross-country skiing and golf, using vibration feedback and audio feedback respectively. We believe the results of these initial studies are encouraging and aim to pursue the concept of peripheral interaction for the sports domain.
Keywords: Sports, real-time feedback, body movement.
1 Introduction
In her keynote speech at CHI 2010, Genevieve Bell pointed to sports as one of the domains that have been largely forgotten in Human-Computer Interaction (HCI) research, even though work is starting to emerge. We argue that HCI research in sports could contribute to the general problem of how to develop interaction models for a range of complex and variable settings where traditional hand-eye interaction is not sufficient, i.e. settings for peripheral interaction. Sports and physical activity provide challenging examples of such settings, and design principles and interaction techniques are potentially transferable to other mobile domains, such as social and leisure activities in nature.
2 Peripheral interaction in sports
Our take on peripheral interaction comes from the sports domain, where interactive technology has long been an integral part. However, most technology either supports data collection for post-analysis (GPS watches, heart rate monitors, or research prototypes like the XC trainer [1]) or provides visual interfaces (such as pulse watches) which can be rather difficult to handle during intense sports sessions. There are exceptions in HCI research, e.g. Spelmezan's work on snowboarding [2] and Stienstra's work on skating [3], but they are few. We have conducted initial experiments with tactile and audio feedback during sports to explore how we can design interaction that fits into the activity without breaking the experience or focus of the athletes. We argue that sports technology could benefit from peripheral interaction due to a number of characteristics of sports and physical activity in general:
- many sports involve the whole body and thus require a mental focus on the activity and the bodily movement, making it difficult for athletes to focus on visual user interfaces,
- many sports use physical props such as ski poles or golf clubs, or in other ways occupy parts of the athlete's body, such as holding the reins during horseback riding or the handlebar of a bike, preventing athletes from holding devices for interaction,
- athletes, both elite and recreational, strongly appreciate the experience of doing sports and prefer not to have their focus on that experience disturbed by technology [4, 5].
This list is in no way exhaustive, but gives some insight into how we see the relationship between sports and peripheral interaction.
3 Experimenting with two different modalities
To investigate how peripheral interaction could be used in sports, we have explored two modalities for real-time feedback in two different sports: tactile feedback for cross-country skiing and audio feedback for golf.
3.1 Skiing and vibration feedback
Figure 1: One of our skiers on the treadmill.
The study was carried out at the Swedish Winter Sports Research Centre in Östersund, Sweden. Four Swedish elite skiers participated, recruited by test leaders at the research centre.
The purpose was to explore how vibrational feedback is perceived during a sport activity, to what extent it integrates with or disrupts the experience, and how the perception of vibrations is affected by physical activity, and vice versa.
The skiers were equipped with a cell phone strapped around the chest, and skied on a treadmill using different skating techniques at various speeds and inclinations for approximately 30 minutes each; see figure 1.
Different vibration signals were remotely triggered in the phone attached to the skiers' chests. Signals varied in length and repetition. They were all of the same strength (internal to the phone). Skiers were instructed to acknowledge and comment on the vibrations when they felt them. A post-session interview was carried out after the skiing session. The whole session was video and audio recorded.
Overall, the skiers were very positive about the idea of vibrational feedback on their skiing technique. They all said they clearly perceived the vibration, and did not describe the experience as intrusive or distracting. Several of them would have preferred a stronger, more distinct vibration to make it easier to perceive while focusing on the skiing at higher levels of fatigue.
As stated above, the vibration strength did not vary during the session, but the skiers expressed that they had experienced variations in strength. Possible reasons for this could be variations in tension in the upper body, as well as variations in focus and concentration at different speeds and techniques, and different levels of fatigue. For instance, one of them said that you need to be really focused to ski fast, so you block out a lot of stuff. This suggests that the strength should possibly be increased as skiing intensity increases, but also that the feedback should not attempt to convey too much information, as it may disturb the focus of the skier and thus potentially be counter-productive.
The skiers believed that vibration feedback on their skiing technique would be helpful during training sessions. In particular, they foresaw using it during high-intensity sessions where they would be especially focused on maintaining a correct technique despite a high level of fatigue. Moreover, they reported that skiing technique in general is more in focus at higher workloads, since that is when loss of technique is most costly. Consequently, it is in these situations that skiers would benefit most from interactive training support. During slower skiing, the technique is usually less critical, so feedback would not be as valuable.
Examples where they themselves saw the usefulness of real-time feedback concerned technical details such as transferring weight from side to side and keeping the appropriate angles in hips or knees, as well as help in keeping specific technique-training details in mind and reminders to think about technical improvements that they could be working on.
The skiers also saw connections to video analysis, motion capture and other interactive tools that they use to analyze skiing technique. Such tools could be used to reveal important details that need improvement. Combined with real-time feedback mechanisms in the field, these could then be used to prompt skiers to think about those details and keep them constantly in mind during training sessions.
3.2 Golf and audio feedback
For golf we created a system where a sensor attached to the golf club (see figure 2) records accelerometer data which is mapped to real-time audio feedback. The system was implemented as an iPhone app using Pure Data to generate the sound (see [6] for details on the system). Our aim with the feedback was to mirror the movement and support golfers in making their own interpretation of the swing, rather than to provide a corrective system, inspired by the Interactional Empowerment philosophy [7].
Figure 2: Sensor attached to the golf club.
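As a rough illustration of such a movement-to-sound mapping (the actual Pure Data patch is described in [6]; the mapping and constants below are invented for illustration), faster phases of the swing could simply be made to sound higher:

```python
import math

def swing_pitch_hz(ax, ay, az, base_hz=220.0, span_hz=660.0, max_g=8.0):
    """Map instantaneous acceleration magnitude to a pitch, so faster phases
    of the swing sound higher. Constants are illustrative assumptions, not
    the prototype's actual synthesis parameters."""
    g = math.sqrt(ax * ax + ay * ay + az * az) / 9.81   # magnitude in g
    level = min(g, max_g) / max_g                       # clamp to [0, 1]
    return base_hz + span_hz * level                    # 220 Hz .. 880 Hz

# One call per accelerometer sample would drive the synthesizer:
print(round(swing_pitch_hz(3.0, 25.0, 60.0)))  # mid-swing sample -> ~767 Hz
```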
We have tested the system in three iterative sessions with experienced golfers, to get feedback on the concept of real-time audio feedback on the swing. Typically during testing, users hit four or five golf balls and were then asked to comment on the experience and their understanding of the system; see the setting in figure 3. They tried different sounds and different timings of the feedback. The sessions were video recorded, and the system's audio output was recorded in sync with the video.
Figure 3: The setting of our test sessions.
A few themes came up that are interesting for future development and tuning of the
system, as well as providing input to the design of peripheral interaction in general:
Interpretation of discrete audio feedback. Participants had some difficulty in
perceiving real-time feedback since they were focused on swinging and did not have
full attention on the feedback. They speculated that this had to do with our audio
memory being less trained than our visual memory. It might also be the case
that audio feedback on a discrete movement such as the golf swing requires more
interpretation than on a continuous movement such as running or cross-country skiing.
For a continuous movement, athletes can listen for a change in the audio, while for the
golf swing they cannot do that.
Timing. From that, we naturally came to discuss timing, and from the second
test session on we provided a mode of the system in which the feedback was played
directly after the swing instead of during it. We explored different delays to investigate
how the timing helps users relate the feedback to the movement and how to make it
feel connected to the movement.
Variation in the feedback. Participants wanted larger differences in the audio
feedback. In the current version of the system they reported that they could hear dif-
ferences in the feedback between various types of shots, but the differences were
quite small and difficult to notice.
In all, participants were positive about audio feedback and had many ideas on how to
make it more useful as a golf training tool, for example allowing users to calibrate the
system by saving successful shots, creating reversed feedback where the system is
silent for good swings and gives audio feedback when the golfer deviates too much,
or extending the system to give feedback already on the stance, before the swing starts.
4 Discussion
We have presented initial results from a pilot study on the design of peripheral inter-
action in the form of real-time vibrational and audio feedback in sport activities.
Overall, this work targets the design of services for movement-based and bodily engag-
ing settings in the wild. Our overall conclusion is that well-designed real-time feed-
back can be provided for a variety of purposes without disrupting or disturbing the
actual sporting experience. Moreover, even though the feedback we provided was
relatively basic, the athletes saw usages that went beyond what we had foreseen when
designing the study. This points to the possibility of using simple, easy-to-use devices
when designing for complex settings and activities.
References
1. Norström, C., Holst, A., Nylander, S., Tholander, J., Jonassson, A., Höök, K., Holmberg, H.-
C.: Internet of Sports: a pilot case study from XC skiing. Submitted to Sports Engineering
(2013)
2. Spelmezan, D. An Investigation into the Use of Tactile Instructions in Snowboarding. In
Proceedings of MobileHCI. (2012) 417-426
3. Stienstra, J., Bruns Alonso, M., Wensveen, S., Kuenen, S. How to Design for Transformation
of Behavior through Interactive Materiality. In Proceedings of NordiCHI. (2012) 21-30
4. Tholander, J., Johansson, C. Design qualities for Whole Body Interaction - Learning from
Golf, Skateboarding and BodyBugging. In Proceedings of NordiCHI. (2010) 493-502
5. Höök, K. Transferring qualities from horseback riding to design. In Proceedings of Nor-
diCHI. (2010) 226-235
6. Kent, A. A prototype sonification system for the golf swing. Master's Thesis, Department of
System and Computer Science, Stockholm University (2013)
7. Höök, K., Ståhl, A., Sundström, P., Laaksolahti, J. Interactional Empowerment. In Proceed-
ings of CHI. (2008) 647-656
Animal-Inspired Peripheral Interaction
Evaluating a Dog-Tail Interface for Communicating Robotic States
Ashish Singh, James E. Young
Department of Computer Science
University of Manitoba
Winnipeg, MB, Canada R3T 2N2
{ashish, young}@cs.umanitoba.ca
Abstract. Animals use emotions for communicating how they feel, e.g., cats
arch their back and dogs show their teeth when angry. We believe that allowing
robots to communicate using animal-inspired interfaces (e.g., wagging a tail)
will help people understand robots’ states in terms of affect (e.g., happy, sad,
etc.), serving as a clear peripheral awareness channel. This understanding can
help people decide when and how to interact with a robot. For example, by ap-
pearing scared, a robot can suggest that it needs help. As a first investigation, we
built a robotic dog-tail prototype and conducted a user study to ex-
plore how various parameters of tail movement (e.g., speed) influence people's
perception of affect. The results from this study indicated that people interpret
tail motions in consistent terms of valence and arousal. We formed an initial set
of design guidelines from the results, and further conducted a design workshop
by inviting people working as interaction-designers to design tail motions for
various states of robots working in different scenarios (e.g., search and rescue),
using our design guidelines. Finally, in this paper, we briefly discuss the user
study we conducted, present our initial set of guidelines, discuss the steps we
took for testing them, and how we improved them so that they can be readily
used by Human-Robot Interaction (HRI) designers to convey affective states of
their robots.
Keywords: human-robot interaction, animal-inspired interfaces, affective com-
puting.
1 Introduction
In the rapidly advancing field of HRI, many robotic interfaces, designs and proto-
types are built to help people in their day-to-day lives (e.g., the iRobot Roomba vacu-
um cleaner robot cleans the floor while moving). Interaction with robots can be
challenging if people are not aware of the present state of the robot, such as low
battery. In addition, it is important for robots not to bother people too intru-
sively with status updates, but to maintain a peripheral presence that lets people
know how and when to interact with them. For example, a dishwasher shows an indi-
cator light while it is working, and the sound it makes while cleaning
provides peripheral awareness.
Part of the affective computing tradition in human-computer interaction is to in-
corporate human- or animal-like affect and emotion directly into interfaces [6, 8], for
example, a picture frame that uses an ambient color display to communicate emo-
tion between people when they are apart [2]. There is a well-established application of
ideas from affective computing to human-robot interaction, where impressions of
robotic affect can be used to help users gain high-level state information without re-
quiring them to read complex sensory information [9].
One way of communicating robotic affect is to use animal-inspired interfaces (e.g.,
dog ears and tails). Zoological research tells us that dogs can convey a broad range of
states through their tails, for example, suggesting a happy state by wagging, high
arousal or self-confidence by raising, or fear by lowering their tail [1, 3]. In addition,
we believe that people understand basic dog tail language such as wagging and high
vs. low tail posture. This can be leveraged to understand the present affective state of
the robot. For example, when a robot is wagging its tail, it could be considered to be
happy (doing its task and not needing attention).
To investigate this, we built a robotic tail prototype to enable an iRobot Create (a
disc-shaped robot that resembles a Roomba except that it does not have a vacuum)
to communicate its states (Fig. 1). In addition, we conducted a formal exploratory
user study (20 participants) to investigate how people perceived the affect of three tail
behaviors: wags (tail moving in horizontal, vertical and circular patterns), static
postures (tail keeps a pose), and discrete gestures, such as raising and lowering the
tail, which happened at timed points. Movement parameters were systematically
varied (e.g., high, medium and low speeds and wag sizes, height and offset of the
wag, and so forth), resulting in 26 distinct tail motions. Participants rated each
motion in terms of valence and
arousal using Self-Assessment Manikin (SAM), a psychological instrument for rating
affective states on Russell's circumplex model of affect [4, 5]: this classifies affect on
an arousal dimension (level of energy) and valence dimension (positive versus nega-
tive). We found significant results via within-subjects repeated-measures Analysis of
Variance (ANOVAs). One such result is Speed by Wag type (as shown in Fig. 2). The
results from this study (published in full detail [7]) were used to form a set of
preliminary design guidelines to help HRI designers in conveying affective states via
a dog-tail interface.

Fig. 1. A person notices the ambient tail state of a cleaning robot
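To illustrate the kind of per-condition summary plotted in Fig. 2, the sketch below averages SAM ratings (valence and arousal on the -3 to 3 scale used in the study) per tail motion. The rating values are invented; they are not the study's data.

    from statistics import mean

    # Illustrative only: per-condition means of SAM ratings, the kind of
    # summary behind Fig. 2. All rating values below are invented.
    ratings = {  # (wag type, speed) -> list of (valence, arousal), each -3..3
        ("horizontal", "high"): [(2, 2), (3, 2), (2, 3)],
        ("vertical", "high"): [(-1, 2), (-2, 3), (-1, 2)],
    }

    for condition, values in ratings.items():
        valence = mean(v for v, a in values)
        arousal = mean(a for v, a in values)
        print(condition, "valence=%.2f arousal=%.2f" % (valence, arousal))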
Although we had developed our design guidelines, we did not yet know whether they
could be readily used by HRI designers, or whether they could be further improved to
be easier to read and use. To investigate this, we conducted a design workshop where
we invited people working as interaction designers and asked them to design tail
behaviors for a set of possible states of robots working in different scenarios (e.g., a
healthcare robot taking care of people at a hospital).
In this paper, we briefly describe: our preliminary design guidelines, a design
workshop we conducted to evaluate our approach, and the results of this workshop.
We believe that this is an initial step in exploring how animal-inspired interfaces can
be used by robots to communicate affective states to help people decide when and
how to interact with them, for peripheral awareness.
2 Preliminary Design Guidelines
We found that the tail was able to convey a broad range of affective states and that
people reliably interpreted the tail motions in a consistent fashion. Through informal
pilots, we summarized our results into design guidelines for HRI designers for com-
municating affective robotic states via dog-tail interfaces. Our design guidelines
describe each tail behavior in terms of: motion type and parameter (e.g., horizontal
wagging at high speed), the level of happiness (valence) and energy (arousal), and a
descriptive keyword (an emotional adjective) conveyed by that particular tail behavior
(Table 1). Some of the tail characteristics that emerge from our guidelines are:
- A higher tail projects a more positive valence (e.g., happier), and a lower tail a
  more negative valence (e.g., sadder).
- A smaller wag-size projects more arousal (e.g., energetic) and a larger wag-size
  projects less arousal (e.g., lazier).
- A higher speed projects a higher valence and arousal (e.g., elated) and a lower
  speed projects a lower valence and a lower arousal (e.g., uninterested).

Fig. 2. Average responses (error bars are 95% confidence interval) for low to high speeds of
horizontal, vertical, and circular wagging. Significant effects (p<.05) were found of: a) speed
on both valence and arousal, and b) wag type on both valence and arousal. In addition, for
valence, vertical wagging was rated significantly lower than horizontal and circular (no
significant results were found between horizontal and circular wags). For arousal, all wag
types were rated significantly different (for full statistical details see [7]).
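As an illustration of how these three parameters could drive an actual tail, the sketch below generates a sinusoidal wag trajectory from speed, wag size and height. The servo ranges and the concrete parameter values are our own assumptions, not values from the study.

    import math

    # Illustrative sketch: a horizontal wag as a sinusoidal angle over time,
    # parameterized by the speed, wag size and height discussed above.

    def wag_angle(t, speed_hz=2.0, wag_size_deg=40.0, height_deg=60.0):
        """Tail angle (degrees) at time t (seconds) for a horizontal wag.

        speed_hz     -- wag frequency; higher reads as more positive and aroused
        wag_size_deg -- peak-to-peak sweep of the wag
        height_deg   -- static elevation; higher reads as more positive
        """
        return height_deg + (wag_size_deg / 2.0) * math.sin(2 * math.pi * speed_hz * t)

    # A "joyful or elated" configuration: high tail, fast wag.
    print([round(wag_angle(t / 20.0, speed_hz=3.0), 1) for t in range(10)])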
3 Informal Design Workshop
To investigate whether our design guidelines are easy to understand and easy to use,
or need further improvement, we conducted an informal design workshop in which
interaction designers used our guidelines to communicate the states of various robots
that might work in different scenarios (e.g., search and rescue). Through this work-
shop, we verified that our design guidelines can actually be used for designing
robotic states, and we asked participants to point out unclear or confusing parts that
might need further improvement.
Our design workshop was conducted with 6 participants (5 males, 1 female) as
follows: they were first brought into our experiment space, and we briefly explained
the purpose of the workshop and their involvement. Next, we presented 6 robotic
scenarios using cue-cards that contained details of robots working in a particular
scenario (e.g., a domestic environment) and some of the states these robots can
communicate (e.g., looking for dirt, in the case of a utility robot). We used 6 different
cue-cards (one for each participant): search and rescue, robot player, robot learner,
robotic teacher, security guard robot, and domestic robots. We explained our design
guidelines to the participants (using a simplified version and a video) and gave them
sheets listing some robotic states, such as a robot looking for a victim (in the search
and rescue scenario). Next, we asked them to add further states that they felt could
be communicated in the given scenario, and to design tail behaviors for all the listed
states. In the end, participants filled in a post-study questionnaire in which we asked
them to describe their overall experience, positive and negative points about our
guidelines, and suggestions for improving them.
Results. Participants described our guidelines as “very useful,” “thorough,” “easy to
follow,” and “helpful.” Most of the participants were able to design the tail behaviors
for the listed states; however, one participant wanted to use sound and LEDs
for one state (a robotic teacher being harassed) and one participant suggested the use
of other tail motions not in our vocabulary, such as a tail moving in a cross-motion
and “wobbling” during horizontal wagging. One participant noted that “action gestures
[discrete tail actions at given times] should be used for events and not states, since
they are not continuous or static like wagging or postures.”
In addition, to improve our guidelines, one participant suggested a “reverse-index”
to avoid the complexity that might arise because the descriptive keywords were listed
according to the categorized tail behaviors. We added an index (lookup index,
Table 2a) to our guidelines by assigning a number to each row in Table 1, and made
Table 2b by sorting the descriptive keywords alphabetically and placing the
appropriate index value next to them. This improvement aims at making the process
of designing a tail behavior for a specific affective state quicker and easier.
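In software, the reverse index amounts to a simple keyword-to-row lookup. The sketch below shows the idea with a handful of entries; the index values follow Table 2b, while the textual motion descriptions paraphrase the corresponding rows of Table 1.

    # Sketch of the reverse-index idea as a lookup structure (partial).
    REVERSE_INDEX = {
        "joyful or elated": [3],
        "aggressive or astonished": [11, 12, 15],
        "fatigued": [17, 19],
        "concentrating": [20],
    }

    TAIL_MOTIONS = {  # paraphrased from Table 1
        3: "horizontal wagging at high speed",
        11: "vertical wagging at high speed",
        12: "vertical wagging with a small wag-size",
        15: "circular wagging at medium speed",
        17: "raising or lowering gesture",
        19: "static posture, parallel to floor",
        20: "static posture, high",
    }

    def motions_for(keyword):
        """Return candidate tail motions conveying the given affective keyword."""
        return [TAIL_MOTIONS[i] for i in REVERSE_INDEX.get(keyword, [])]

    print(motions_for("joyful or elated"))  # ['horizontal wagging at high speed']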
4 Future Work
Although we have learnt how various tail parameters are perceived by people,
and how they can be used to communicate affective robotic states, the question
remains how these parameters can be combined with one another; for example, how
a tail behavior with a large wag size and high speed will be perceived compared to
one with a small wag size and low speed. In the short term, we will conduct a formal
user study combining the tail parameters (e.g., speed and wag-size by wag type) to
investigate how people perceive the resulting robotic states. Next, we aim to conduct
studies investigating how tail usage relates to the type of robot (e.g., humanoid robots
like Nao).
Ultimately, this tail exploration is part of a larger program exploring how other
animal-inspired interfaces (e.g., cat ears to suggest aggressive and relaxed behavior,
dog-like pawing to exhibit playfulness, etc.) can be used by robots to communicate
their states.
category | sub-type | parameter | value | happiness | energy | descriptive keywords | lookup index
continuous wagging | horizontal | speed | low | medium | medium | modest | 1
continuous wagging | horizontal | speed | medium | s. more* | s. more* | wondering | 2
continuous wagging | horizontal | speed | high | more | more | joyful or elated | 3
continuous wagging | horizontal | wag-size | small | - | more | strong, mighty or powerful | 4
continuous wagging | horizontal | wag-size | large | - | less | interested | 5
continuous wagging | horizontal | height | low | less | - | contempt | 6
continuous wagging | horizontal | height | parallel to floor | medium | - | awed | 7
continuous wagging | horizontal | height | high | more | - | wonder | 8
continuous wagging | vertical | speed | low | lesser | lesser | solemn | 9
continuous wagging | vertical | speed | medium | lesser | medium | shy or disdainful | 10
continuous wagging | vertical | speed | high | lesser | more | aggressive | 11
continuous wagging | vertical | wag-size | small | - | more | aggressive | 12
continuous wagging | vertical | wag-size | large | - | less | selfish or quietly indignant | 13
continuous wagging | circular | speed | low | medium | medium | reverent | 14
continuous wagging | circular | speed | medium | s. more* | more | aggressive or astonished | 15
continuous wagging | circular | speed | high | more | e. more* | overwhelmed | 16
action gestures | raising | speed | low, medium and high | - | - | shy, selfish, disdainful or weary | 17
action gestures | raising | height | low and high | - | - | shy, selfish, disdainful, weary, timid or fatigued | 17
action gestures | lowering | speed | low, medium and high | - | - | shy, selfish, disdainful or weary | 17
action gestures | lowering | height | low and high | - | - | shy, selfish, disdainful, weary, timid or fatigued | 17
static postures | - | height | low | very less | very less | lonely | 18
static postures | - | height | parallel to floor | less | less | fatigued | 19
static postures | - | height | high | medium | s. less* | concentrating | 20

Table 1. Preliminary design guidelines
*s. more = slightly more, s. less = slightly less, and e. more = even more
References
1. Brown, S.E.: Self Psychology and the Human–Animal Bond: An Overview. The Human-Animal Bond and Self
Psychology: Toward a New Understanding. 12, 1, 67–86 (2004).
2. Chang, A., Resner, B., Koerner, B., Wang, X., Ishii, H.: LumiTouch. In proceedings of the international conference
extended abstracts on Human Factors in Computing Systems - CHI ’01. pp. 313 (2001).
3. Galac, S., Knol, B.W.: Fear-Motivated Aggression in Dogs: Patient Characteristics, Diagnosis and Therapy. Animal
Welfare. 6, 1, 9–15 (1997).
4. Morris, J.D.: Observations: SAM: The Self-Assessment Manikin, an Efficient Cross-Cultural Measurement of
Emotional Response. Journal of Advertising Research. 0021-8499, 63–68 (1995).
5. Russell, J.A.: A Circumplex Model of Affect. Journal of Personality and Social Psychology. 39, 6, 1161–1178
(1980).
6. Shibata, T., Tashima, T., Tanie, K.: Subjective Interpretation of Emotional Behavior Through Physical Interaction
Between Human and Robot. In proceedings of IEEE international conference on Systems, Man, and Cybernetics.
pp. 1024–1029 (1999).
7. Singh, A., Young, J.E.: A Dog Tail for Utility Robots: Exploring Affective Properties of Tail Movement. In
proceedings of the 14th IFIP TC13 international conference on Human-Computer Interaction - INTERACT ’13.
(2013).
8. Singh, A., Young, J.E.: Animal-Inspired Human-Robot Interaction: a Robotic Tail for Communicating State. In
proceedings of the ACM/IEEE international conference on Human-Robot Interaction - HRI ’12. pp. 237–238 (2012).
9. Young, J.E., Xin, E., Sharlin, E.: Robot Expressionism Through Cartooning. In Proceedings of the ACM/IEEE
international conference on Human-Robot Interaction - HRI ’07. pp. 309 (2007).
Table 2. Reverse-index tables suggested by participants: a) the part that attaches to Table 1,
and b) the part that can be referred to by HRI designers to find the tail motion for a specific
affective state.

a)
descriptive keywords | lookup index
modest | 1
wondering | 2
joyful or elated | 3
strong, mighty or powerful | 4
interested | 5
contempt | 6
awed | 7
wonder | 8
solemn | 9
shy or disdainful | 10
aggressive | 11
aggressive | 12
selfish or quietly indignant | 13
reverent | 14
aggressive or astonished | 15
overwhelmed | 16
shy, selfish, disdainful or weary | 17
shy, selfish, disdainful, weary, timid or fatigued | 17
shy, selfish, disdainful or weary | 17
shy, selfish, disdainful, weary, timid or fatigued | 17
lonely | 18
fatigued | 19
concentrating | 20

b)
descriptive keywords | lookup index
aggressive or astonished | 11, 12, 15
awed | 7
concentrating | 20
contempt | 6
fatigued | 17, 19
interested | 5
joyful or elated | 3
lonely | 18
modest | 1
overwhelmed | 16
reverent | 14
selfish or quietly indignant | 13
shy or disdainful | 10, 17
shy, selfish, disdainful or weary | 10, 17
shy, selfish, disdainful, weary, timid or fatigued | 10, 17
solemn | 9
strong, mighty or powerful | 4
wonder or wondering | 8, 2
Micro Manage Me! – Peripheral Context Annotation for
Efficient Time Management
Bernhard Slawik
University of Munich (LMU), Human-Computer-Interaction Group,
Amalienstrasse 17, 80333 Munich, Germany
{bernhard.slawik}@ifi.lmu.de
Abstract. Planning ahead in a world that seems to get more complex every day
can be a challenging task. PIM (Personal Information Management)
applications try to minimize the mental workload, but are too cumbersome for
planning rather insignificant tasks. Due to its static nature, PIM data is prone to
unforeseen changes in the real world and therefore requires a certain amount of
precognition to be planned successfully. Systems exist that use sensor data to
derive a rough sense of context in order to proactively show notifications when
certain triggers occur. In contrast to that, the proposed system leverages
peripheral interaction with physical tags to gain qualitative information on a
user's current situation and intents. It uses the data to suggest an efficient order
of completion for even small tasks that would otherwise have been regarded as too
insignificant to plan.
Keywords: Peripheral Interaction, Wearable Computing
1 Introduction
In daily life, people are confronted with an ever-growing number of things to keep
track of: appointments to attend, mails to read, chores, pledges, and things they have
always longed to do.
To overcome this complexity of life, calendars, to-do lists, memos and PIM
(Personal Information Management) software are used. And still a certain complexity
of use remains: techniques like setting up appointments in a calendar to finish tasks
at the right time are common practice, as are meta-techniques and self-
management practices like GTD (Getting Things Done). But due to their static nature,
calendar appointments are prone to unforeseen changes in a user's immediate schedule
and hence require a high degree of precognition to be planned successfully. Furthermore,
the time overhead of explicitly planning a task (pulling out the device, switching it on,
starting the PIM application, entering text, putting the device back) creates a new
class of tasks which are considered too insignificant to plan this way. Those are then
kept in mind and tend to be forgotten.
Other tasks can only be completed under given preconditions or only in certain
places, so they are kept in to-do lists, hoping to be read at the right time and in the
right place. For an automatic system to work proactively in those situations, it
has to make assumptions about what the current situation actually is. Such systems rely
on sensors or external data sources (e.g. position, time, weather forecast) to estimate a
user's context and show reminders. But since this context is algorithmically derived
from continuous sensor data, it might not correctly reflect the user's real immediate
situation, like entering or leaving a room, because of the limited (temporal or spatial)
resolution of the sensors.
This calls for a system that can precisely capture the user's context and respond
to changes in real time. This is achieved by incorporating explicit user actions that
happen in the periphery of attention while (or even before) the context change is
actually happening.
2 Proposed System
Instead of relying merely on context information derived from quantitative sensor
data, the proposed system leverages physical tags (bar codes, QR tags and/or RFID
tags) that are peripherally scanned in order to gain more qualitative context
information on what the user is doing right now or even planning on doing next. Since
these context tags, or “ConTags”, are explicitly scanned by the user, they are
expected to convey a higher feeling of control and less lag than existing proactive task
planners that are not triggered by explicit user actions.
ConTags can not only signal that the user is entering a new situation, they can also be
used to plan new tasks, like “empty the trash” by scanning the corresponding ConTag
that is conveniently placed at the trash can. Having such fine-grained information
on what a user is doing or planning to do (like leaving for work, going to the bathroom
or sitting down to do some work), the system can propose an execution pipeline
scheduling tasks at the most efficient times and in the most efficient order.
The goal is to create a system that works in the background, capturing information
on the user's current and planned tasks, and only springs into attention when it finds a
task that best fits into the user's immediate schedule, context and free resources.
A wrist-worn smart watch, equipped with suitable sensors for reading the context
tags peripherally, is used in a first prototype. Data is processed either on the watch
itself or on a wirelessly connected smart phone. Notifications are conveyed to the user
via the smart watch display, sound, vibration and/or a connected head-up display.
The optimal mode of notification is still to be evaluated.
Fig. 1. Prototype using acoustic bar codes (top left) and smart watches
equipped with a camera (top right) and microphone (bottom)
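A minimal sketch of this behaviour is shown below, under our own assumptions about tag naming and matching (the prototype does not specify these): scanning a task ConTag queues the task together with the context in which it should surface, and scanning a context ConTag reports the tasks that have become due.

    from dataclasses import dataclass, field

    # Hypothetical sketch of the ConTag pipeline: all tag identifiers and
    # the matching rule are invented for illustration.

    @dataclass
    class ConTagSystem:
        pending: dict = field(default_factory=dict)  # task -> required context
        context: str = "unknown"

        def scan(self, tag):
            if tag.startswith("task:"):              # e.g. a tag on the trash can
                task, _, ctx = tag[5:].partition("@")
                self.pending[task] = ctx
            elif tag.startswith("ctx:"):             # e.g. a tag on the door handle
                self.context = tag[4:]
                return self._due_tasks()
            return []

        def _due_tasks(self):
            due = [t for t, ctx in self.pending.items() if ctx == self.context]
            for t in due:
                del self.pending[t]
            return due

    system = ConTagSystem()
    system.scan("task:empty the trash@leaving-room")
    print(system.scan("ctx:leaving-room"))  # ['empty the trash']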
3 Related Work
A key aspect of the proposed system is the peripheral nature of the interaction,
meaning that it is designed to be done in parallel to a main task [8], causing only
micro-interruptions or no interruption of the main task at all [2][5]. This attribute sets
the proposed system apart from other context annotation systems [6] that require the
user's full attention while entering data. The complexity of the interaction, and hence
the mental resources needed to complete the side task, strongly affects how well it can
be done peripherally or automatically [1], and how it impacts the performance of the
main task. This is why ergonomics must also be taken into consideration when
selecting technologies for peripherally annotating context.
4 Peripherally Annotating Context
Capturing information on the user's context is a crucial and challenging task for this
system. Asking the user to annotate each action using text entry or speech input
requires too much engagement and is therefore considered not to be peripheral
(happening at the periphery of attention).
Using RFID tags and a body-worn reader seems to be a more subtle approach than
text entry, but carrying an always-on RFID antenna near the body might bring power
consumption problems as well as raise health concerns. Requiring users to pull out
and activate an NFC enabled smart phone for every action they do is not considered
peripheral and would impact the intended use of the system. RFID technology can,
41
however, be incorporated into the system for annotating situations where the user's
action itself leverages RFID technology, like checking in to work using an RFID pass.
Printed 1D or 2D bar codes, like RFID tags, can be read without physical contact,
but require a camera to be pointed at them on every use [7]. This, again, would
require the user to pull out and activate a camera phone or wear an always-on camera
[3][9] which raises privacy and power consumption concerns. But since bar codes are
easy to produce and already incorporated into a variety of products, optical scanning
of bar codes can optionally be incorporated into the system for capturing interaction
with said products, like “having a reading break” by scanning a book or “having
breakfast” by scanning the cereal box.
Another method, which combines the advantages of being rather easy to produce and
requiring less power than RFID while being always on, is the use of acoustic bar
codes [4]: like a printed bar code, information is stored in a series of lines, but
instead of black lines on a white background, acoustic bar codes use grooves that are
engraved along the surface of a (3D printed) object. These grooves can be read by
scratching a microphone over them and capturing the resulting clicking sounds. The
relative temporal distance between these clicks can be decoded back into binary
information. Although privacy concerns might still arise from carrying an always-on
audio recording device, the system requires only a small amount of power for
recording audio and can easily be implemented in wearable devices like smart
watches. Swiping the hand across a surface is expected to be a rather non-engaging
action, classifying context annotation using acoustic bar codes as a viable peripheral
interaction.
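The decoding step can be sketched compactly. Assuming a simple two-gap code in which a wide gap between clicks encodes a 1 and a narrow gap a 0 (a simplification of the scheme in [4]), normalizing against the median gap makes the result independent of swipe speed:

    from statistics import median

    # Simplified sketch: the *relative* gaps between click times carry the
    # bits, which is what makes the code robust to swipe velocity. Real
    # systems also handle click detection and framing; this threshold
    # scheme is illustrative.

    def decode_clicks(click_times):
        """Decode a swipe's click timestamps (seconds) into bits."""
        gaps = [b - a for a, b in zip(click_times, click_times[1:])]
        threshold = median(gaps)
        return [1 if g > threshold else 0 for g in gaps]

    # The same pattern swiped slowly or quickly decodes identically.
    slow = [0.00, 0.10, 0.30, 0.40, 0.60]
    fast = [0.00, 0.05, 0.15, 0.20, 0.30]
    print(decode_clicks(slow), decode_clicks(fast))  # [0, 1, 0, 1] twice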
5 Intended Use
Having detailed information on the user's context allows a PIM system to better
estimate whether a reminder is suitable and worth interrupting the user in the current
situation. It can also be used to input new information, like adding tags to business
cards or calendars to signal a new appointment when swiping them.
Incorporating this kind of context information might also prove interesting for micro-
blogging and live journal applications, because ConTags are not limited to carrying
ad-hoc information, but can also signal what a user is about to do next, like leaving
home, finishing work or meeting other people. This is ideally implemented by adding
tags to physical objects that are directly connected to the intended action, like a
ConTag on the door handle for signaling leaving the room, or ConTags on the bed
stand for signaling going to and getting out of bed.
Using that data, the system can, for example, recommend taking out the trash once
it has been marked as “full”, just in time when the user is about to leave the room, or
switch all systems to silent mode the second a user gets into bed.
6 Open Questions
At the time of writing, the prototype is not yet ready for evaluation.
Experiments are planned to investigate, among other things, the following questions:
Ergonomics: How does the mere presence of the context annotation device affect
users in the completion of a set of common tasks?
Peripheral Interaction: How does the annotation task impact the completion of the
main task? Is it disruptive? Does it cause significant time overhead?
Optimization: How can the collected data be best used to optimize a user's
schedule?
Future Work: How can other fields of research profit from having timely and
accurate context information?
References
1. Bakker, S., van den Hoven, E., Eggen, B.: Design for the Periphery. In: EuroHaptics 2010,
pp. 71--80 (2010)
2. Edge, D., Blackwell, A. F.: Peripheral tangible interaction by analytic design. In:
Proceedings of the 3rd International Conference on Tangible and Embedded Interaction,
ACM (2009)
3. Google Glass, http://www.google.com/glass/start/
4. Harrison, C., Xiao, R., Hudson, S.: Acoustic barcodes: passive, durable and inexpensive
notched identification tags. In: Proceedings of the 25th annual ACM symposium on User
interface software and technology, ACM 2012, pp. 563--568 (2012)
5. Hausen, D., Butz, A.: Extending Interaction to the Periphery. In: CHI 2011, pp. 61--64
(2011)
6. Kern, N., Schiele, B., Schmidt, A.: Recognizing context for annotating a live life
recording. In: Personal and Ubiquitous Computing, v.11 n.4, pp. 251--263 (2007)
7. Maurer, M., De Luca, A., Hang, A., Hausen, D., Hennecke, F., Löhmann, S., Palleis, H.,
Richter, H., Stusak, S., Tabard, A., Tausch, S., von Zezschwitz, E., Schwamb, F.,
Hussmann, H., Butz, A.: Long-Term Experiences with an Iterative Design of a QR-Code-
Based Payment System for Beverages. To appear in: Proceedings of the 14th IFIP TC13
Conference on Human-Computer Interaction, INTERACT 2013 (2013)
8. Olivera, F., García-Herranz, M., Haya, P. A., Llinás, P.: Do not disturb: physical interfaces
for parallel peripheral interactions. In: INTERACT 2011,
pp. 479--486 (2011)
9. SenseCam, http://research.microsoft.com/en-us/um/cambridge/projects/sensecam/
Appendix: Biography
Bernhard Slawik is a first year PhD student at the Human-Computer-Interaction
Group at the University of Munich (LMU), Germany. His research focuses mainly on
wearable computing and its social implications.
http://www.medien.ifi.lmu.de/team/bernhard.slawik/
http://www.bernhardslawik.de/
The building is the program
Andrew Cyrus Smith1,2, Helene Gelderblom2
1CSIR Meraka Institute, Pretoria, South Africa
2University of South Africa, Pretoria, South Africa
acsmith@csir.co.za,geldejh@unisa.ac.za
Abstract. We present interaction with a physical building as a hypothet-
ical example of peripheral interaction. The state of the building’s win-
dows provides input to an algorithm which produces abstract art as the
result of the interaction. This paper assumes the principles of autoto-
pography and Gestalt when considering the use of physical objects for
peripheral interaction and computer program definition. By including
the Internet of Things in the discussion on peripheral interaction, the
latter is no longer constrained to geographically co-located stimuli and
responses.
Keywords: internet of things, computer program, peripheral interaction.
1 Introduction
Individuals often modify their environment towards self-determined objectives. For
example, a person might turn on a desk lamp or open a window. These examples of
individualistic actions are peripheral to the ultimate objectives of reading a book or
breathing fresh air. Not only are these actions peripheral, but they are also executed at
the periphery of an individual’s attention.
The result of an action may be instantaneous (a lit lamp) or gradual (fresher air). A
delay may therefore exist between an action and its outcome. Also, an action may
manifest itself remotely. An example of an action with both delayed and remote re-
sults is when a window is opened at one end of a long passage to allow the air in all
interconnected offices to be refreshed.
An individual action may affect multiple persons. Conversely, the actions of multi-
ple persons may affect an individual. Therefore, one-to-many and many-to-one rela-
tions between actions and results are possible.
In the lamp and window scenarios it would be quite feasible to enhance these phys-
ical devices with computational abilities and have them interact with each other when
manipulated. Such human-initiated action-reaction, which incorporates computation-
ally enhanced physical devices, is generically called Tangible Interaction (TI) (Bask-
inger & Gross 2010). When such interaction takes place not in the focus but at the
periphery of an individual's attention, it is called Peripheral Interaction (PI).
45
The Internet of Things (IoT) is an internet-supported action-reaction phenomenon
that connects geographically dispersed sensors, computational devices, and actuators.
The geographically dispersed sensing and acting dimension of PI can be enhanced by
exploiting the IoT to make the relationship between multiple actions and multiple
reactions even more multifaceted. The almost unlimited geographic distances which
IoT affords to PI can only be fully realised if Hornecker’s space-centered view of TI
(Hornecker & Buur 2006) is applied to PI. We call TI which includes both PI and IoT,
space-centered peripheral interaction (SPI).
In this paper we explore SPI by considering an individual’s hypothetical peripheral
interaction with a physical building. Here, the building is computationally enhanced
and receives input from its windows, and reacts by producing abstract two-
dimensional art.
This paper approaches SPI from the theoretic standpoints of autotopography and
Gestalt. Section two provides the theoretical perspective to our approach. Section
three discusses, with examples, objects and their relationships. In section four we
consider the potential relationship between objects and computer programs. Section
five concludes.
2 Autotopography and Gestalt School of Thought
Autotopography (auto = one's own, and topography = place, both from the Greek)
is the behaviour a person exhibits by adjusting the physical environment to
“construct a sense of themselves”, through arranging physical objects to create “a
physical map of memory, history and belief”. According to van den Hoven,
external memory is a subset of distributed cognition, and one of the functions served
by external memory is to reduce memory load by facilitating memory recollection
(van den Hoven 2004).
Petrelli et al. (Petrelli et al. 2008) studied, among other things, (1) what types of
objects persons used for autotopography, (2) the way in which these objects were
used, and (3) what made these objects suitable for this purpose. These studies revealed
that the appearance of the physical objects was not always important, but rather the
“time or emotion” they represented.
As far as the use of generic objects to recall memory is concerned, van den Hoven
states that these are not ideal for this purpose because they all look the same, and
suggests that personal objects would be better suited for this purpose
“…because the mental model is created by the user herself and not imposed by the
system.” Yet van den Hoven also states that a single object can have different
meanings to different persons. It thus seems plausible that a generic object could be
used to recall memory if the person has emotion attached to the object.
The Gestalt theory of perception states that sensations are not perceived in isola-
tion, but are “…assembled into perceptual experiences… called a Gestalt” (Kasschau
2003, p224). According to the Gestalt school of thought, the brain constructs percep-
tions from sensations based on the principles of proximity, continuity, similarity,
simplicity, and closure (Kasschau 2003, p224).
3 Objects and Their Relationship
We consider computer programming with the premise that the spatial relationship
between a set of objects carries information for the person who has placed and orient-
ed the objects.
3.1 In Physical Space
A physical artefact can be considered an ‘object’ or a ‘thing’, depending on its con-
text. When an artefact is considered in isolation from its surroundings, it is
classified as a ‘thing’, but when it is considered in context with its surroundings it is
classified as an ‘object’ (Latour 2004, p233). Objects ‘gather’ meaning because of
their relation to other things (Boradkar 2010).
3.2 In Print
The lines, colours, and curves of a drawing are at times interesting to some in that
these two-dimensional prints contain a story (Suda 2010). This is also called “visuali-
sation” of data and has become a subject of study in its own right. It has also been
suggested that a “language of charts and graphs” exists (Suda 2010). The pur-
pose of visualisation graphs and charts is to convey the complicated messages
contained in a data set to the observer in a simple way. Suda compares the story that
graphs tell the reader to the story carried by text, for example in a novel, or the
story conveyed to the observer by a cartoon or painting. Examples are, respectively,
a painting, a plan for an electrical circuit, and a building plan for a dwelling.
These are interpreted by the observer. Depending on the observer's training and cul-
tural background, the three examples will each convey some message to the observer.
The nature of the message could range from being of no interest or value, to
instructive or informative, to philosophical. The message can be both subjective and
objective at the same moment in time, depending on the observer and the circum-
stance in which it is being observed. For example, a painting could elicit
a philosophical discussion amongst a group of artists viewing it at the Musée du
Louvre in Paris, while for a young electronic engineer it may have very little
value, simply representing something a renowned person created long ago. The con-
verse could be stated about the electrical diagram when viewed by the young engineer
and the group of artists; it has little value to the artists, but to the engineer it repre-
sents a very specific assembly of physical objects that can transform an electrical
signal.
Dondis (Dondis 1973, p17) explains that “when we see… it is a multidimensional
process”, that is, we see many things at the same time and impose
“compositional forces” on what we are seeing. We are thus not looking at an
image as one would read a manuscript line by line, but taking notice of the complete
image all at once and deriving the “compositional forces” therein. Dondis states that
visual literacy is acquired through training and learning, and this explains why an
electrical engineer, an artist, and an architect would each identify with ease, respec-
tively, an electrical circuit diagram, the message in a painting, and the designed
function of a building.
3.3 In Art
The previous subsection considered patterns created by engineers, for engineers.
Here, we consider patterns created by artists.
Artists sometimes personify art; objects depicted in a pencil drawing on paper have
been described as "a carafe with mugs as bodyguards..." (Clement & Kamena 2000).
This supports our thinking that, to the observer, there seems to exist a relationship
between objects. In Clement's example, there exists a relationship between the
carafe and the mugs: that of a master and those whose function it is to
protect the master. Here the carafe is the master, and the mugs are the bodyguards.
Next we consider how this relationship may be made clear by adding another dimen-
sion to it: the dimension of forces. Fig. 1 depicts Clement's description
of the bodyguards as a force diagram. In the diagram, the red objects guard the yel-
low object from approaches by the blue objects. 'Force lines' emanating from the red
and blue objects indicate the direction and strength of these forces; the length of a
force line is proportional to the magnitude of the force. Solid force lines are repel-
ling forces, and dashed force lines represent the force propelling an object in the
direction of the arrow. The solid line linking objects indicates the bodyguard/master
relationship.
Fig. 1. ‘Bodyguards’ (red) repel ‘invaders’ (blue). Inspired by Clement (Clement &
Kamena 2000).
Fig. 2. James Stirling, New State Gallery, Germany (Fichner-Rathus 2012, p28).
4 A relationship between Objects and Computer Programs
Art on canvas, and engineering drawings, may also include straight lines and geomet-
rical symbols.
Our research considers extending the two-dimensional relationship between
art, engineering, and computer programs to possible three-dimensional corre-
spondences between them.
The vertical lines in Stirling’s New State Gallery architecture (Fig. 2) may remind
one of the sequential and uninterrupted execution of instructions in a computer pro-
gram. The multiple vertical lines may represent multiple simultaneous streams of
code being executed in a computer program, commonly known in the field of com-
puter science as parallel execution of multiple program threads.
This is just one discussion of what the architecture might represent if it were to be
interpreted as the logic for a computer program. It would be for the designer of the
physical language to define the meaning of the physical artefact.
4.1 A relationship between Architecture and Computer Programs
We now consider how architecture could be interpreted as a computer program.
Fig. 3. Window positions translate to Logo program parameters.
Consider an array of windows (Fig. 3, top) and assume that the state of each window
can be interpreted as a computer program instruction. Using this approach we antici-
pate that an office complex could be regarded as a computer program. We illustrate
this concept using indoor photographs of windows along a passage linking two sec-
tions of an office complex. In this example some of the windows are fixed and others
can be opened. The angle to which a particular window is opened is determined by
the user and can vary between zero and 90 degrees. Let us assume that this angle
represents the angle a Logo turtle (Abelson & diSessa 1980) turns.
We use the following mapping: if the window opens to the left from the user's
point of view, the Logo turtle turns to the left, and conversely for the right. We do
not yet have a means to instruct the Logo turtle to move forward or backward. To add
this ability to our bag of instructions, let us agree that the turtle moves a fixed amount
forward immediately after each turn instruction has been executed. As we do not have
a mechanism to state how far the Logo turtle should move forward, let us make this
an arbitrary constant of, say, 20 units. The angle and direction through which the Logo
turtle rotates are simply the same angle and direction in which the window has been
opened. We further assume the Logo pen is always down. Fig. 3, bottom left, shows
the result.
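The mapping is straightforward to express in code. The sketch below uses Python's turtle module as a stand-in for Logo; the six window angles are hypothetical readings, written as signed angles where a negative value means the window opens to the left from the user's point of view.

    import turtle

    # Illustrative sketch of the window-to-Logo mapping described above.
    WINDOW_ANGLES = [-45.0, 0.0, 45.0, -45.0, 90.0, 0.0]  # hypothetical readings
    STEP = 20  # fixed forward motion after each turn, as assumed above

    def run(repetitions=1):
        t = turtle.Turtle()
        for _ in range(repetitions):
            for angle in WINDOW_ANGLES:
                if angle < 0:
                    t.left(-angle)   # window opens left -> turn left
                else:
                    t.right(angle)   # window opens right -> turn right
                t.forward(STEP)      # the pen is always down, so this draws
        turtle.done()

    run(repetitions=1)  # repeated executions overlay the pattern, as in Fig. 3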
5 Conclusion
We have explained why peripheral interaction can be considered a special case
of tangible interaction, and how the inclusion of the Internet of Things enhances the
spatial quality of interaction. Space-centered peripheral interaction (SPI) was used to
describe the resulting interaction form. The potential of SPI was illustrated by means
of a hypothetical computationally enhanced physical building which produces abstract
art in response to the status of its windows.
References
1. Abelson, H. & diSessa, A. A. (1980), Turtle Geometry The Computer as a Medium for
Exploring Mathematics, The MIT Press.
2. Baskinger, M. & Gross, M. (2010), ‘Cover story: tangible interaction = form + computing’,
interactions 17(1), 6–11.
3. Boradkar, P. (2010), Designing Things: A Critical Introduction to the Culture of Objects,
Berg Publishers.
4. Clement, S. & Kamena, M. (2000), The joy of art - a creative guide for beginning painters,
Harry N. Abrams, Inc. Translated from the French by Anthony Roberts.
5. Dondis, D. A. (1973), A primer of visual literacy, The MIT Press.
6. Fichner-Rathus, L. (2012), Foundations of art & design, Wadsworth.
7. Hornecker, E. & Buur, J. (2006), Getting a grip on tangible interaction: a framework on
physical space and social interaction, in ‘CHI ’06: Proceedings of the SIGCHI conference
on Human Factors in computing systems’, ACM Press, New York, NY, USA, pp. 437–446.
8. Kasschau, R. A. (2003), Understanding psychology, Glencoe/McGraw-Hill.
9. Latour, B. (2004), ‘Why has critique run out of steam? From matters of fact to matters of
concern’, Critical Inquiry 30(2), pp. 225–248.
10. Petrelli, D., Whittaker, S. & Brockmeier, J. (2008), Autotopography: what can physical
mementos tell us about digital memories?, in ‘Proceedings of the SIGCHI Conference on
Human Factors in Computing Systems’, ACM, pp. 53–62.
11. Suda, B. (2010), A Practical Guide to Designing with Data, Five Simple Steps.
12. van den Hoven, E. (2004), Graspable cues for everyday recollection, PhD thesis, Eindhoven
University of Technology.