9 Collaborative Interactions in Future Crisis Rooms
Hans-Christian Jetter1 Johannes Schöning1,2 Roman Rädle3 Harald Reiterer3 Yvonne Rogers1
1 Intel ICRI Cities, University College London, United Kingdom
2 Hasselt University, Belgium
3 University of Konstanz, Germany
h.jetter@ucl.ac.uk johannes.schoening@uhasselt.be roman.raedle@uni-konstanz.de
harald.reiterer@uni-konstanz.de y.rogers@ucl.ac.uk
9.1 Abstract
In this paper, we tie together different streams of research from our institutions to present a joint vision of
collaborative computer-supported interactions in future crisis rooms. We envision novel interaction and visualisation
techniques for the next generation of crisis rooms that will better support collaborative search, analysis, and
comparison of data during sensemaking activities. Such a sensemaking activity for example could be the production
of daily situation reports. We focus on how users can benefit from device ecologies consisting of big wall displays,
interactive tabletop surfaces and tangible user interfaces during such activities.
9.2 Introduction
As researchers in the fields of Human-Computer Interaction (HCI) and Information Visualisation (InfoVis), we combine technological and human aspects in our research. For us, novel devices, sensors and visualisations are enabling technologies that, when designed and combined in an appropriate user-centred manner [7], can support collaborative activities to a much greater extent than desktop personal computers running traditional WIMP (Windows, Icons, Menus, Pointer) applications. Therefore, rather than focusing on individual technologies or single WIMP applications, our emphasis here is on how to support the collaborative activities of multiple people in an interactive physical space through multiple devices, displays and post-WIMP interaction and visualisation techniques.
We believe that future crisis rooms should be designed holistically as collaborative interactive spaces, with careful consideration of the users' individual interactions, their social interactions, their workflows and their physical environment [8]. For example, we need to facilitate switching between different topics, representations and phases of the workflow. We must also consider how to design lures, salient information, and grabbers for our attention [6], and how to employ seams, bridges and niches to let users benefit from device ecologies [9]. To approach this goal, we propose different designs and technologies that are based on our previous research and combine them to illustrate what collaboration in future crisis rooms could look like.
9.3 Components of a Future Crisis Room
Before we describe the individual components and their interplay, we introduce a simple scenario of use as a starting
point. In this scenario, two or more users are searching Twitter feeds to find tweets that contain relevant information
about a specific situation, e.g., a humanitarian crisis in a city struck by a natural or man-made catastrophe. They
want to use keywords (e.g., hashtags) or different facets (e.g., time, geo location, number of retweets) to narrow
down the flood of incoming tweets to a manageable amount. In the following, we describe a setup that enables this
collaborative search for selected facets and keywords in large amounts of tweets by integrating a tabletop system
with tangible user interface elements into a setup with a large video wall. This setup can be regarded as
complementary to the setup that is described in a further submission to this workshop by the University of Konstanz
[3]. To illustrate the interplay between the different components, we created a video sketch4 that shows the
workflow of a tangible and collaborative exploration of crisis data on a tabletop and a subsequent analysis of the
filtered data in a multi-focus visualization on a large wall-sized display.
4 http://hci.uni-konstanz.de/researchprojects/crisis-room
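The faceted narrowing described in this scenario can be sketched in a few lines of code. The tweet fields and function names below are illustrative assumptions, not part of the systems described in this paper; real Twitter data carries many more attributes.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical minimal tweet record; field names are assumptions
# for illustration only.
@dataclass
class Tweet:
    text: str
    hashtags: set
    created_at: datetime
    geo: tuple      # (lat, lon) or None if the tweet is not geo-tagged
    retweets: int

def facet_filter(tweets, hashtag=None, after=None, min_retweets=0, bbox=None):
    """Narrow a tweet stream by the facets from the scenario:
    keyword/hashtag, time, geolocation and number of retweets."""
    for t in tweets:
        if hashtag and hashtag not in t.hashtags:
            continue
        if after and t.created_at < after:
            continue
        if t.retweets < min_retweets:
            continue
        if bbox:
            if t.geo is None:
                continue
            lat_min, lon_min, lat_max, lon_max = bbox
            lat, lon = t.geo
            if not (lat_min <= lat <= lat_max and lon_min <= lon <= lon_max):
                continue
        yield t
```

Each facet only removes tweets, so adding facets monotonically narrows the flood of incoming tweets down to a manageable amount, which is exactly the interaction the tabletop setup supports collaboratively.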
Figure 17: Combination of a video wall showing a geographic visualisation and a tabletop for searching and
filtering tweets.
9.3.1 Tabletop for Collaborative Search and Filtering
The tabletop system for collaborative search and filtering is based on “Facet-Streams”, a hybrid visual-tangible user interface that was designed, implemented and evaluated by the University of Konstanz [1]. It enables co-located collaborative search by combining techniques of information visualisation with tangible and multi-touch interaction5 (see Figure 18). It harnesses the expressive power of facets and Boolean logic without exposing users to complex formal notations. User studies revealed how the system unifies visual and tangible expressivity with simplicity in interaction, supports different search strategies and collaboration styles, and turns search into a fun and social experience [1]. More recently, this system was extended to support keyword and faceted search in large amounts of tweets and to visualise and manipulate them on external devices such as video walls (see Figure 17).
Figure 18: Facet-Streams combines a filter/flow metaphor with tangible user interface elements.
5 Video: http://www.youtube.com/watch?v=giDF9lKhCLc
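The filter/flow metaphor expresses Boolean queries without formal notation: facet tokens chained in a stream combine by AND, and parallel streams merging into one node combine by OR. The following is a minimal sketch of that idea only; the actual Facet-Streams network semantics described in [1] are richer, and all names here are simplified assumptions.

```python
# Sketch of the filter/flow idea: chaining = AND, merging = OR.

def facet(attribute, accept):
    """A facet token: keeps items whose attribute value passes `accept`."""
    return lambda items: [i for i in items if accept(i[attribute])]

def stream(*tokens):
    """Chain tokens sequentially: logical AND of their predicates."""
    def run(items):
        for token in tokens:
            items = token(items)
        return items
    return run

def merge(*streams):
    """Merge parallel streams into one: logical OR (union, no duplicates)."""
    def run(items):
        seen, out = set(), []
        for s in streams:
            for item in s(items):
                if id(item) not in seen:
                    seen.add(id(item))
                    out.append(item)
        return out
    return run

hotels = [
    {"name": "A", "price": 80, "stars": 4},
    {"name": "B", "price": 150, "stars": 5},
    {"name": "C", "price": 60, "stars": 2},
]
# (price < 100 AND stars >= 4) OR stars == 5
query = merge(stream(facet("price", lambda p: p < 100),
                     facet("stars", lambda s: s >= 4)),
              stream(facet("stars", lambda s: s == 5)))
```

The example uses a product (hotel) search as in the original Facet-Streams studies; each `facet` call corresponds to one tangible token placed on the tabletop.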
9.3.2 Optical User Identification
To enable a truly collaborative search activity, it is important to track the users’ identities throughout this task and to later use this information when creating the daily report. Recent work on “Carpus” by Hasselt University6 shows how multi-user collaboration on interactive surfaces can be enriched significantly if touch points can be associated with a particular user [2]. Carpus is a non-intrusive, high-accuracy technique for mapping touches to their corresponding users in a collaborative environment. By mounting a high-resolution camera above any interactive surface, it is able to identify touches reliably without any extra instrumentation, and users are able to move around the crisis room (see Figure 19). The technique, which leverages the back of users’ hands as identifiers, supports walk-up-and-use situations in which multiple people interact on a shared surface. Using such an identification technique to extend the search greatly enlarges the possible design space and enables better support of individual users and their needs during collaboration. The technique could also be used to identify users when interacting via touch with stereoscopic data, as proposed in [12].
Figure 19: Carpus enables multi-user identification on interactive surfaces by analyzing the back of users’ hands.
(left) Illustration of the technical setup. (right) Carpus identifies two users and distinguishes between their left and
right hand.
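Once the overhead camera has recognised hands and labelled them with user identities, each touch reported by the surface still has to be associated with one of those hands. The sketch below illustrates only this final association step under the assumption that hand positions arrive as labelled detections; the actual Carpus pipeline, which identifies users from dorsal-hand features, is described in [2].

```python
import math

def associate_touches(touches, hands, max_dist=150.0):
    """Assign each touch to the nearest detected hand within max_dist pixels.

    touches: list of (x, y) touch coordinates from the surface.
    hands:   list of {'user': id, 'pos': (x, y)} detections from the
             overhead camera (a hypothetical data structure).
    Returns one user id per touch, or None if no hand is close enough.
    """
    assignments = []
    for tx, ty in touches:
        best_user, best_d = None, max_dist
        for hand in hands:
            hx, hy = hand["pos"]
            d = math.hypot(tx - hx, ty - hy)
            if d < best_d:
                best_user, best_d = hand["user"], d
        assignments.append(best_user)
    return assignments
```

A touch with no hand nearby maps to None, which a walk-up-and-use system could treat as an anonymous interaction until the user's hand is recognised.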
9.3.3 Space-folding Techniques for Geographical Visualisation
When specific information is found, it is also important to provide different (often geographic) visualisation techniques to present this information (in our case geo-tagged Twitter feeds) in the daily report; see [3] for detailed information. As an example, Schwarz et al. used a space-folding technique (see Figure 17) that enables multiple users to collaboratively explore map data and to focus on different geographic regions while sustaining their spatial context and creating spatial awareness among group members [4].
9.3.4 Lenses for Multi-user Interaction with Geographic Data
In [5], a lens concept is used to allow synchronous multi-user interaction with geographic visualisations. These “GeoLenses” are GUI widgets that act as movable, scalable and zoomable magnifying lenses. GeoLenses are fully multi-user capable while still being intuitive to use. Bier et al. first introduced the notion of the “magic lens” in a UI in 1993 [10]. Their original lenses are transparent or semi-transparent user interface elements that can be placed over objects to change their appearance and/or facilitate interaction. Since then, the concept has been applied frequently to geographic visualisation [11] to overcome the problems of multi-user interaction with spatial information.
6 Video: http://www.youtube.com/watch?v=HNQfjnw4Aw4
Figure 20: A GeoLens visualizes location-dependent geographic data similar to a “magic lens”.
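The core of the magic-lens idea can be sketched as follows: a movable circular region that swaps the representation of the data beneath it. This is a minimal sketch of the concept only, not the GlobalData implementation from [5]; the point and style representations are assumptions for illustration.

```python
import math

class GeoLens:
    """A circular 'magic lens': data under the lens is rendered with the
    lens's data layer instead of its base style."""
    def __init__(self, cx, cy, radius, layer):
        self.cx, self.cy, self.radius = cx, cy, radius
        self.layer = layer      # name of the data layer shown inside

    def contains(self, x, y):
        return math.hypot(x - self.cx, y - self.cy) <= self.radius

def render(points, lenses):
    """points: list of (x, y, base_style). Each point keeps its base style
    unless it lies under a lens; with overlapping lenses, the last one in
    the list (the topmost) wins."""
    out = []
    for x, y, base_style in points:
        style = base_style
        for lens in lenses:
            if lens.contains(x, y):
                style = lens.layer
        out.append((x, y, style))
    return out
```

Because each lens only changes rendering inside its own circle, several users can open and move independent GeoLenses over the same base map without interfering with one another, which is the multi-user property the section describes.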
9.4 Conclusion
In the previous section, we highlighted four different components and their interplay to show how to better support collaborative search, analysis, and comparison of data during sensemaking activities in future crisis response rooms. We still see a lot of potential for ICT to better support the heterogeneous teams in crisis control rooms and their various, often highly complex, tasks and activities. This clearly involves designing user-centred tools, interactive visualisations and novel user interfaces for crisis response rooms, alongside the teams working in these rooms, to provide them with information that they can readily understand and act upon to save lives.
9.5 About the authors
The newly founded Intel Collaborative Research Institute (ICRI) on Sustainable Connected Cities [6] is led by Yvonne Rogers (UCL), Julie McCann (Imperial College London, UK) and Duncan Wilson (Intel). Yvonne Rogers has been researching interactive tabletops and shareable user interfaces “in the wild” for many years [7].
ICRI Cities is also the home of Johannes Schöning, who is working on tabletop and mobile interaction technologies, and Hans-Christian Jetter, who joined ICRI Cities in April 2013 from the University of Konstanz, where he worked with Roman Rädle and Harald Reiterer on the design, implementation, and evaluation of collaborative interactive spaces and user interfaces for knowledge work.
Harald Reiterer heads the Human-Computer Interaction Group at the University of Konstanz [13]. His research
focuses on the development of new interaction techniques and visualisations for distributed user interface
environments like control rooms.
Roman Rädle is a PhD student at the Human-Computer Interaction Group of the University of Konstanz. His current research focuses on proxemic interaction techniques to support spatial navigation in large information spaces.
9.6 References
[1] Jetter, Hans-Christian, Gerken, Jens, Zöllner, Michael, Reiterer, Harald, and Milic-Frayling, Natasa (2011), 'Materializing the query with Facet-Streams: a hybrid surface for collaborative search on tabletops', Proc. CHI '11 (ACM), 3013-3022.
[2] Ramakers, Raf, Vanacken, Davy, Luyten, Kris, Schöning, Johannes, and Coninx, Karin (2012), 'Carpus: A Non-Intrusive User Identification Technique for Interactive Surfaces', Proc. UIST '12 (ACM), 35-44.
[3] Butscher, Simon, Müller, Jens, Weiler, Andreas, Rädle, Roman, Reiterer, Harald, and Scholl, Marc H. (2013), 'Multi-user Twitter Analysis for Crisis Room Environments' (submitted to this workshop).
[4] Schwarz, Tobias, Butscher, Simon, Müller, Jens, and Reiterer, Harald (2012), 'Content-aware navigation for large displays in context of traffic control rooms', Proc. AVI '12 (ACM), 249-252.
[5] von Zadow, Ulrich, Daiber, Florian, Schöning, Johannes, and Krüger, Antonio (2010), 'GlobalData: multi-user interaction with geographic information systems on interactive surfaces', Proc. ITS '10 (ACM), 318.
[6] Schöning, Johannes, Rogers, Yvonne, Bird, Jon, Capra, Licia, McCann, Julie A., Prendergast, David, and Sheridan, Charles (2012), 'Intel Collaborative Research Institute - Sustainable Connected Cities', Proc. AmI '12.
[7] Rogers, Yvonne, Sharp, Helen, and Preece, Jenny (2011), Interaction Design: Beyond Human-Computer Interaction, 3rd edition.
[8] Jetter, Hans-Christian, Geyer, Florian, Schwarz, Tobias, and Reiterer, Harald (2012), 'Blended Interaction - Toward a Framework for the Design of Interactive Spaces', Workshop "Designing Collaborative Interactive Spaces" (DCIS 2012) at AVI 2012, HCI Group, Univ. of Konstanz, May 2012.
[9] Coughlan, Tim, Collins, Trevor D., Adams, Anne, Rogers, Yvonne, Haya, Pablo A., and Martín, Estefanía (2012), 'The conceptual framing, design and evaluation of device ecologies for collaborative activities', International Journal of Human-Computer Studies, 70 (10), 765-779.
[10] Bier, Eric A., et al. (1993), 'Toolglass and magic lenses: the see-through interface', Proc. SIGGRAPH '93 (ACM).
[11] Carpendale, Sheelagh, Light, John, and Pattison, Eric (2004), 'Achieving higher magnification in context', Proc. UIST '04 (ACM).
[12] Schöning, Johannes, Steinicke, Frank, Krüger, Antonio, and Hinrichs, Klaus (2009), 'Bimanual interaction with interscopic multi-touch surfaces', Human-Computer Interaction - INTERACT 2009 (Springer), 40-53.
[13] Reiterer, Harald (2011), 'Human-Computer Interaction Group, University of Konstanz, Germany', interactions, Nov+Dec 2011 (ACM), 82-85.
A variety of computing technologies, in addition to the personal computer, are now commonly used in many settings. As networking infrastructures mature, it is increasingly feasible and affordable to consider closer integration and use of these heterogeneous devices in tandem. However, little is known about how best to design or evaluate such ‘device ecologies’; in particular, how best to combine devices to achieve a desired type of collaborative user experience. A central concern is how users switch their attention between devices, to utilize the various elements to best effect. We describe here the development of an ecology of devices for groups of students to use when engaged in collaborative inquiry-learning activities. This included a multi-touch tabletop, laptops, projections, video streams and telephone. In situ studies of students and tutors using it in three different settings showed how individuals and groups switched their foci between the multiple devices. We present our findings, using a novel method for analysing users’ transitions between foci, identifying patterns and emergent characteristics. We then discuss the importance of designing for transitions that enable groups to appropriately utilise an ecology of devices, using the concepts of seams, bridges, niches and focal character.