9 Collaborative Interactions in Future Crisis Rooms
Hans-Christian Jetter1 Johannes Schöning1,2 Roman Rädle3 Harald Reiterer3 Yvonne Rogers1
1 Intel ICRI Cities, University College London, United Kingdom
2 Hasselt University, Belgium
3 University of Konstanz, Germany
h.jetter@ucl.ac.uk johannes.schoening@uhasselt.be roman.raedle@uni-konstanz.de
harald.reiterer@uni-konstanz.de y.rogers@ucl.ac.uk
9.1 Abstract
In this paper, we tie together different streams of research from our institutions to present a joint vision of
collaborative computer-supported interactions in future crisis rooms. We envision novel interaction and visualisation
techniques for the next generation of crisis rooms that will better support collaborative search, analysis, and
comparison of data during sensemaking activities. One example of such a sensemaking activity is the production
of daily situation reports. We focus on how users can benefit from device ecologies consisting of large wall displays,
interactive tabletop surfaces and tangible user interfaces during such activities.
9.2 Introduction
As researchers in the fields of Human-Computer Interaction (HCI) and Information Visualisation (InfoVis),
we combine technological and human aspects in our research. For us, novel devices, sensors and
visualisations are enabling technologies that, if they are designed and combined in an appropriate user-centred
manner [7], can support collaborative activities to a much greater extent than desktop personal computers with
traditional WIMP (Windows, Icons, Menus, Pointer) applications. Therefore, rather than focusing on individual
technologies or single WIMP applications, our research emphasis here is on how to support collaborative activities of
multiple people in an interactive physical space through multiple devices, displays and post-WIMP interaction and
visualisation techniques.
We believe that future crisis rooms should be designed holistically as collaborative interactive spaces with careful
consideration of the users’ individual interactions, their social interactions, their workflows and their physical
environment [8]. For example, we need to facilitate the switching between different topics, representations and
phases of the workflow. We must also consider how to design lures, salient information, and grabbers for our
attention [6] and how to employ seams, bridges and niches to let users benefit from device ecologies [9]. To
approach this goal, we here propose different designs and technologies that are based on our previous research and
combine them to illustrate what collaboration in future crisis rooms could look like.
9.3 Components of a Future Crisis Room
Before we describe the individual components and their interplay, we introduce a simple scenario of use as a starting
point. In this scenario, two or more users are searching Twitter feeds to find tweets that contain relevant information
about a specific situation, e.g., a humanitarian crisis in a city struck by a natural or man-made catastrophe. They
want to use keywords (e.g., hashtags) or different facets (e.g., time, geo location, number of retweets) to narrow
down the flood of incoming tweets to a manageable number. In the following, we describe a setup that enables this
collaborative search for selected facets and keywords in large amounts of tweets by integrating a tabletop system
with tangible user interface elements into a setup with a large video wall. This setup complements the one
described in a further submission to this workshop by the University of Konstanz
[3]. To illustrate the interplay between the different components, we created a video sketch4 that shows the
workflow of a tangible and collaborative exploration of crisis data on a tabletop and a subsequent analysis of the
filtered data in a multi-focus visualisation on a large wall-sized display.
4 http://hci.uni-konstanz.de/researchprojects/crisis-room
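The faceted narrowing-down described in this scenario can be sketched in a few lines. The record fields and function names below are illustrative assumptions, not Twitter's actual API or the system's real data model:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical tweet record; field names are illustrative.
@dataclass
class Tweet:
    text: str
    hashtags: set
    timestamp: datetime
    retweets: int

def narrow_down(tweets, required_tags=None, since=None, min_retweets=0):
    """Keep only tweets that satisfy all of the given facet constraints."""
    result = []
    for t in tweets:
        if required_tags and not required_tags <= t.hashtags:
            continue  # keyword facet: all required hashtags must be present
        if since and t.timestamp < since:
            continue  # time facet
        if t.retweets < min_retweets:
            continue  # popularity facet
        result.append(t)
    return result
```

Each facet (keywords, time, number of retweets) acts as an independent filter, so users can add or relax constraints one at a time until the stream of incoming tweets becomes manageable.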
Figure 17: Combination of a video wall showing a geographic visualisation and a tabletop for searching and
filtering tweets.
9.3.1 Tabletop for Collaborative Search and Filtering
The tabletop system for collaborative search and filtering is based on “Facet-Streams”, a hybrid visual-tangible user
interface that was designed, implemented and evaluated at the University of Konstanz [1]. It enables co-located
collaborative search by combining techniques of information visualisation with tangible and multi-touch interaction5
(see Figure 18). It harnesses the expressive power of facets and Boolean logic without exposing users to complex
formal notations. User studies revealed how the system unifies visual and tangible expressivity with simplicity in
interaction, supports different search strategies and collaboration styles, and turns search into a fun and social
experience [1]. More recently, this system was extended to support keyword and faceted search in large sets of tweets
and to visualise and manipulate them on external devices such as video walls (see Figure 17).
Figure 18: Facet-Streams combines a filter/flow metaphor with tangible user interface elements.
5 Video: http://www.youtube.com/watch?v=giDF9lKhCLc
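The filter/flow logic behind Facet-Streams can be illustrated with a minimal sketch: each tangible token carries one facet predicate, tokens chained along a stream are conjoined (AND), and several streams merging into one node are disjoined (OR). The combinators and predicates below are our own illustration, not the system's implementation:

```python
# Boolean composition of facet predicates, in the spirit of filter/flow.
def AND(*preds):
    return lambda item: all(p(item) for p in preds)

def OR(*preds):
    return lambda item: any(p(item) for p in preds)

# Illustrative facet predicates over dict-shaped tweets.
has_tag = lambda tag: (lambda t: tag in t["hashtags"])
popular = lambda n: (lambda t: t["retweets"] >= n)

# A query network assembled from tokens: (#flood OR #quake) AND retweets >= 100
query = AND(OR(has_tag("flood"), has_tag("quake")), popular(100))
```

Because the Boolean structure is expressed by the physical arrangement of tokens on the table, users compose such queries without ever seeing a formal notation.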
9.3.2 Optical User Identification
To enable a truly collaborative search activity, it is important to track the users’ identity throughout this task to later
use this information when creating the daily report. Recent work on “Carpus” at Hasselt University6 shows how
multi-user collaboration on interactive surfaces can be enriched significantly if touch points can be associated with a
particular user [2]. Carpus is a non-intrusive, high-accuracy technique for mapping touches to their corresponding
users in a collaborative environment. By mounting a high-resolution camera above any interactive surface, it is able
to identify touches reliably without any extra instrumentation, and users are able to move around the crisis room
(see Figure 19). The technique, which leverages the back of users’ hands as identifiers, supports walk-up-and-use
situations in which multiple people interact on a shared surface. Using such an identification technique to extend the
search will greatly enhance the possible design space and will enable better support of individual users and their
needs during collaboration. The technique could also be used to identify users when they interact via touch with
stereoscopic data, as proposed in [12].
Figure 19: Carpus enables multi-user identification on interactive surfaces by analyzing the back of users’ hands.
(left) Illustration of the technical setup. (right) Carpus identifies two users and distinguishes between their left and
right hand.
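The attribution step at the heart of such a system can be sketched as follows. We assume the overhead camera has already detected hand regions and classified each to a user; a touch is then attributed to the user whose hand centroid lies closest to the touch point. This is a simplified stand-in, not Carpus's actual pipeline:

```python
import math

def associate_touch(touch, hands):
    """Attribute a touch point to a user.

    touch: (x, y) coordinates of the touch on the surface.
    hands: list of (user_id, (cx, cy)) hand-region centroids, as delivered
           by a hypothetical hand-identification stage.
    """
    user, _ = min(hands, key=lambda h: math.dist(touch, h[1]))
    return user
```

With touch points tied to identities, each filter token or query edit on the shared surface can be logged per user and later attributed in the daily report.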
9.3.3 Space-folding Techniques for Geographical Visualisation
When specific information is found, it is also important to provide different (often geographic) visualisation
techniques to present this information (in our case geo-tagged Twitter feeds) in the daily report. Please refer to [3]
for detailed information. As an example, Schwarz et al. used a space-folding technique (see Figure 17) that
enables multiple users to collaboratively explore map data and to focus on different geographic regions while
sustaining their spatial context and creating spatial awareness among group members [4].
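The core idea of space folding can be conveyed with a one-dimensional sketch: focus intervals keep their original scale, while the context between them is compressed by a constant factor. The function and its parameters are our illustration; see Schwarz et al. [4] for the actual technique:

```python
def fold(x, foci, squeeze=0.25):
    """Map a world coordinate x to screen space.

    foci: sorted, non-overlapping (start, end) intervals shown at scale 1;
    everything between and around them is compressed by `squeeze`.
    """
    screen = 0.0  # screen position accumulated so far
    pos = 0.0     # world position accumulated so far
    for (a, b) in foci:
        if x <= a:
            return screen + (x - pos) * squeeze  # in compressed context
        screen += (a - pos) * squeeze
        if x <= b:
            return screen + (x - a)              # inside a focus region
        screen += b - a
        pos = b
    return screen + (x - pos) * squeeze
```

Because the mapping is monotone, spatial order is preserved: users focusing on different regions still see where their regions lie relative to one another, which sustains the shared spatial context.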
9.3.4 Lenses for Multi-user Interaction with Geographic Data
In [5], a lens concept is used to allow synchronous multi-user interaction with geographic visualisations. These lenses
are GUI widgets that behave like scalable, zoomable magnifying glasses. GeoLenses are fully multi-user
capable while remaining intuitive to use. Bier et al. first introduced the notion of the “magic lens” in a UI in
1993 [10]. Bier et al.’s original lenses are transparent or semi-transparent user interface elements, which can be
placed over objects to change their appearance and/or facilitate interaction. Since then, the concept has been applied
frequently to geographic visualisation [11] to overcome the problems of multi-user interaction with spatial
information.
6 Video: http://www.youtube.com/watch?v=HNQfjnw4Aw4
Figure 20: A GeoLens visualizes location-dependent geographic data similar to a “magic lens”.
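A minimal geometric sketch of such a lens: points inside the lens radius are magnified around the lens centre, points outside pass through unchanged. The names and the sharp boundary (real lenses typically blend smoothly at the rim) are assumptions for illustration, not GeoLens internals:

```python
def lens_transform(point, centre, radius, zoom):
    """Apply a circular magic-lens magnification to a 2-D point."""
    px, py = point
    cx, cy = centre
    dx, dy = px - cx, py - cy
    if dx * dx + dy * dy >= radius * radius:
        return point  # outside the lens: identity
    # inside the lens: scale the offset from the lens centre
    return (cx + dx * zoom, cy + cy * 0 + dy * zoom)
```

Because each lens only transforms points within its own radius, several users can place independent lenses over different regions of the same map without interfering with one another, which is what makes the concept attractive for multi-user interaction.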
9.4 Conclusion
In the previous section we highlighted four different components and their interplay to show how to better support
collaborative search, analysis, and comparison of data during sensemaking activities in future crisis response rooms.
We still see great potential for ICT to better support the heterogeneous teams in crisis control rooms
and their various, often highly complex, tasks and activities. This clearly involves designing user-centred tools,
interactive visualisations and novel user interfaces for crisis response rooms, alongside the teams working in
these rooms, to provide them with information that they can readily understand and act upon to save lives.
9.5 About the authors
The newly founded Intel Collaborative Research Institute (ICRI) on Sustainable Connected Cities [6] is led by Yvonne
Rogers (UCL), who has been researching interactive tabletops and shareable user interfaces “in the wild” for many
years [7], Julie McCann (Imperial College London, UK) and Duncan Wilson (Intel).
ICRI Cities is also the home of Johannes Schöning, who works on tabletop and mobile interaction technologies,
and Hans-Christian Jetter, who joined ICRI Cities in April 2013 from the University of Konstanz, where he
worked with Roman Rädle and Harald Reiterer on the design, implementation, and evaluation of collaborative
interactive spaces and user interfaces for knowledge work.
Harald Reiterer heads the Human-Computer Interaction Group at the University of Konstanz [13]. His research
focuses on the development of new interaction techniques and visualisations for distributed user interface
environments like control rooms.
Roman Rädle is a PhD student at the Human-Computer Interaction Group of the University of Konstanz. His current
research focuses on proxemic interaction techniques to support spatial navigation in large information spaces.
9.6 References
[1] Jetter, Hans-Christian, Gerken, Jens, Zöllner, Michael, Reiterer, Harald, and Milic-Frayling, Natasa (2011), 'Materializing the
query with facet-streams: a hybrid surface for collaborative search on tabletops', Proc. CHI '11 (ACM), 3013-22.
[2] Ramakers, Raf, Vanacken, Davy, Luyten, Kris, Schöning, Johannes, and Coninx, Karin (2012), 'Carpus: A Non-Intrusive
User Identification Technique for Interactive Surfaces', Proc. UIST ’12 (ACM), 35-44.
[3] Butscher, Simon, Müller, Jens, Weiler, Andreas, Rädle, Roman, Reiterer, Harald, Scholl, Marc H. (2013, Jan). Multi-user
Twitter Analysis for Crisis Room Environments. (submitted to this workshop)
[4] Schwarz, Tobias, Butscher, Simon, Müller, Jens, & Reiterer, Harald (2012, May). Content-aware navigation for large
displays in context of traffic control rooms. In Proceedings of the International Working Conference on Advanced Visual
Interfaces (pp. 249-252). ACM.
[5] von Zadow, Ulrich, Daiber, Florian, Schöning, Johannes, & Krüger, Antonio (2010). GlobalData: multi-user interaction with
geographic information systems on interactive surfaces. In ACM International Conference on Interactive Tabletops and
Surfaces 2010 (pp. 318-318). ACM.
[6] Schöning, Johannes, Rogers, Yvonne, Bird, Jon, Capra, Licia, McCann, Julie A., Prendergast, David, & Sheridan, Charles. Intel
Collaborative Research Institute-Sustainable Connected Cities. In Proc. of AmI 2012.
[7] Rogers, Yvonne, Sharp, Helen, & Preece, Jenny (2011). Interaction Design: Beyond Human Computer Interaction. 3rd
Edition.
[8] Jetter, Hans-Christian, Geyer, Florian, Schwarz, Tobias, Reiterer, Harald (2012), Blended Interaction – Toward a Framework
for the Design of Interactive Spaces, Workshop “Designing Collaborative Interactive Spaces” (DCIS 2012) at AVI 2012, HCI
Group, Univ. of Konstanz, May 2012.
[9] Tim Coughlan, Trevor D. Collins, Anne Adams, Yvonne Rogers, Pablo A. Haya, Estefanía Martín (2012) The conceptual
framing, design and evaluation of device ecologies for collaborative activities. International Journal of Human-Computer
Studies, Volume 70, Issue 10, October 2012, Pages 765–779
[10] Bier, Eric A., et al. “Toolglass and magic lenses: the see-through interface.” Proceedings of the 20th annual conference on
Computer graphics and interactive techniques. ACM, 1993.
[11] Carpendale, Sheelagh, John Light, and Eric Pattison. “Achieving higher magnification in context.” Proceedings of the 17th
annual ACM symposium on User interface software and technology. ACM, 2004.
[12] Schöning, Johannes, Steinicke, Frank, Krüger, Antonio, Hinrichs, Klaus. “Bimanual interaction with interscopic multi-touch
surfaces.” Human-Computer Interaction–INTERACT 2009. Springer Berlin Heidelberg, 2009. 40-53.
[13] Reiterer, Harald: Human-Computer Interaction Group, University of Konstanz, Germany. In interactions, Nov+Dec 2011
(pp.82-85). ACM.