Crowdsourcing and Crowdtasking in Crisis
Management
Lessons Learned From a Field Experiment Simulating a Flooding in the City of the Hague
Michael Middelhoff, Adam Widera,
Roelof P. van den Berg and Bernd Hellingrath
Westfälische Wilhelms-Universität Münster
Münster, Germany
{michael.middelhoff, adam.widera,
roelof.vandenberg, bernd.hellingrath}@wi.uni-muenster.de
Daniel Auferbauer, Denis Havlik
and Jasmin Pielorz
AIT Austrian Institute of Technology GmbH.
Vienna, Austria
{daniel.auferbauer, denis.havlik,
jasmin.pielorz}@ait.ac.at
Abstract—The EU FP7 project DRIVER conducts a number of
experiments that explore new approaches for addressing known
deficiencies in crisis management. The “Interaction with Citizens”
experiment campaign focuses on testing the usability and
acceptance of various methods and tools that facilitate crisis
communication via several channels. These include: informing,
alerting, micro-tasking, incident information crowdsourcing from
volunteers, and usage of this information to improve situational
awareness. The results highlight that volunteer motivation in a
serious-game-like scenario is important to simulate participation
in crisis events. We also argue that the scenario complexity
needs to be low enough to avoid difficulties in communicating
with non-professional participants, in addition to coping with
external influences in a field experiment. In this paper, we present lessons
learned from the final experiment of this campaign that
investigated two-way communication solutions between crisis
managers and citizens or unaffiliated volunteers in a simulated
flooding scenario in the city of The Hague.
Keywords—crisis management; unaffiliated volunteers; decision
support; crowdsourcing; crowdtasking; micro-tasking; personalized
alerting
I. INTRODUCTION
The rise of social networking has allowed ad-hoc groups of
citizens to organize large-scale activities in a flexible manner.
From a crisis manager’s point of view, the appearance of such
loosely coordinated groups of unaffiliated volunteers is both a
blessing and a curse, as they do not fit into the hierarchical
procedures prevalent in crisis management and are difficult to
control.
Unlike professional response organizations such as fire
brigades or medical first responders, these ad-hoc groups lack a command
structure, mechanisms to distinguish information from
misinformation, as well as procedures to prioritize and split tasks
among themselves. The merit of unaffiliated volunteers has been
demonstrated on various occasions [1]. Nevertheless, the
absence of coordination mechanisms and missing situational
awareness can render such groups inefficient. This happens in
particular when (too) many volunteers are concentrating on few,
evident tasks, while omitting to address equally important, but
less visible needs. Furthermore, crisis managers are not aware of
acting volunteers without coordination mechanisms in place. In
the worst case scenario, the positive energy of the ad-hoc
volunteers could turn into the potentially very destructive energy
of a smart mob [2] and even increase adverse effects during a
crisis. Whether in order to better benefit from resources offered
by unaffiliated volunteers or simply to avoid the worst case
scenario, crisis management professionals need to improve their
ability to communicate with citizens.
Many organizations already use social networks for crisis
communications [3]. However, the type of information that is
posted through social media is often not very different from what
is posted through mass media. The one notable exception from
this rule is provided by interactive web-based crisis maps. Tools
like Ushahidi allow citizens to easily obtain relevant information
according to their geographic position, e.g. reports on crisis
situations and needs in their neighborhood [5].
A more crucial problem is that general-purpose social media
does not facilitate many-to-one communication. This is a major
shortcoming from the point of view of first responders. In crisis
situations, these organizations can allocate only a small number
of people for monitoring social media and communicating with
their users. A related issue is the one of trust and validity of
information. In social networks, real information and
misinformation are posted alike, so that distinguishing between
the two is difficult. The various ways to use volunteers (and the
information received from them), ranging from passive social
media data mining via dedicated crowdsourcing tools to
crowdtasking of the volunteers, are discussed in [4].
The aforementioned issues have to be addressed during all
disaster phases and are influenced by the specific disaster
scenario. It is therefore reasonable to study new approaches to
support crisis communication in a series of field experiments,
simulating different disasters along multiple disaster phases. The
resulting lessons learned are then used to draw generalized
guidelines and further develop ICT tools to support crisis
managers and volunteers alike.
The need for improved crisis communication is addressed by
the European project “Driving Innovation in Crisis Management
for European Resilience” (DRIVER, http://driver-project.eu/).
DRIVER evaluates
emerging crisis management solutions in three key areas: civil
society resilience, responder coordination as well as training and
learning. These solutions are evaluated in a series of experiments
targeting various gaps in the European crisis management that
were previously discovered by the “Aftermath Crisis
Management System-of-systems Demonstration Phase 1”
(ACRIMAS) project team [5].
In this paper, we present lessons learned from a field
experiment that investigates two-way communication solutions
between crisis managers and citizens or unaffiliated volunteers
in a simulated flooding scenario and address the following gaps:
(1) informing and involving the society via improved crisis
communication; (2) coordination and tasking of unaffiliated
volunteers; (3) dissemination of disaster alerts and other relevant
information to citizens; and (4) the collection of information
relevant in crisis situations, such as, needs and observations from
citizens. Starting with an overview of crowdtasking and
crowdsourcing for interacting with citizens in disaster response,
we discuss alternative approaches. The paper continues with the
experiment setup and methodology of the performed experiment
campaign. The main contribution of this paper is a set of lessons
learned regarding the planning and execution of experiments in
crisis and disaster management, based on these findings. First
results of the initial experiments were presented in [5]; in this
paper, they are consolidated with the final results and presented
as lessons learned. We conclude with an outlook on future work.
II. RELATED WORK
Although the term crowdsourcing was coined by Howe [6],
there is still no clearly defined, commonly accepted
understanding of it. Different characteristics have been developed, which
describe for example, communication paradigms in
crowdsourcing [7]. Liu [8] presents an overview of works
elaborating on crowdsourcing characteristics from different
domains. Crowdsourcing in crisis management can be
differentiated into two major approaches: (1) data oriented and
(2) tool oriented [9]. Data oriented approaches include an
aggregation, mining, and processing of data from a set of sources
such as social media platforms. Tool oriented means that the
focus is on communication between actors and the disaster
management system. In order to have an effective
crowdsourcing approach in place, it is often required to combine
multiple crowdsourcing configurations to strategically leverage
the people, information, and resources that converge during
crisis situations [8]. The conducted field experiment therefore
aims at integrating crowdsourcing ICT solutions.
Crowdtasking has been investigated as a form of
crowdsourcing in several prior works. The concept was
originally discussed by Neubauer et al. [11] as a new approach
to volunteer management. Crowdtasking and its implications
have since been investigated in several other papers: Schimak et
al. [4] discuss use cases of crowdtasking as well as issues with
information quality, volunteer self-organisation and user
incentives. Flachberger et al. [12] have given an overview of
crowdtasking (“crowd tasking” in their paper) as well as the
prototype implementation and the national research project that
shaped the concept. Auferbauer et al. have described the
crowdtasking workflow [13] as well as the implications that this
shift towards ICT has for social inclusion [14]. Efforts to
evaluate crowdtasking by fielding a prototype implementation
have previously been described [15]. As mentioned above, a
preliminary insight paper into the DRIVER experimentation
discussed here has been published recently [5].
III. EXPERIMENT DESIGN
All experiments conducted in the DRIVER “Interaction with
Citizens” campaign concentrate on the following functions:
• Provision of context-aware and timely information tailored
to the specific needs of different societal groups over various
channels, in order to improve their understanding of the crisis
situation and to minimize adverse impacts.
• Context-aware (micro-)tasking of non-affiliated volunteers
to perform real and virtual tasks.
• Efficient gathering of situational information about an
incident from volunteers.
• Efficient usage of the received information from volunteers
to improve the situational awareness of crisis managers and
consequently their handling of the crisis.
The hypothesis leading the experiment states that modern ICT
technology can be used to improve communication between
crisis managers and citizens. On the one hand, this addresses
information gathering from citizens via crowdsourcing and on
the other hand, information sharing via broadcast as well as
location- or skill-based selection of recipients. In addition, the
experiment also addressed directed tasking of volunteers in the
field. The hypothesis further includes that this can be achieved
without overwhelming crisis managers due to information
overload or the effort in processing and distributing information,
and that the tested methodologies and tools are complementary
and not overlapping. We will now describe two tools for
interacting with citizens that were evaluated in the experiment:
CrowdTasker and GDACSmobile.
A. CrowdTasker
The concept of crowdtasking has been realized as a
prototype implementation, as was already mentioned in Section
II. The technological implementation of the crowdtasking
workflow was dubbed “CrowdTasker” and features the majority
of functionality defined in the theoretical concept. CrowdTasker
consists of three distinct components:
1. Web interface for crisis or community managers to
define and publish events and tasks.
2. Smartphone application for volunteers to receive,
acknowledge and execute tasks.
3. Web interface for crisis managers to visualize
results – this component is provided by external
tools (also present during the experimentation).
Fig. 1. CrowdTasker Workflow
The workflow of CrowdTasker is depicted in Fig. 1. The
components enumerated above are color-coded with purple
representing the web interface for task definition, green being
the smartphone app and grey representing the visualization of
information from volunteers. In the preparation phase,
volunteers may register via the smartphone application
(available through the Google Play Store) and create an account
and login data. If there is cause (e.g. a flooding) for
requesting voluntary helpers, a community manager or other
personnel of a relief organization compiles an “event” on
CrowdTasker’s web interface. A description of the cause will be
included with the event as information for potential volunteers.
Volunteer requirements and restrictions are defined for the
event, such as current GPS location, home address or skillset
(e.g. language or medical experience). The event is then
published to all pre-registered volunteers that fit the
requirements.
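The location- and skill-based recipient selection described above can be sketched as a simple filter over volunteer profiles. The field names, the Haversine distance check, and the skill matching below are our own illustrative assumptions, not CrowdTasker's actual data model:

```python
import math

def distance_km(a, b):
    """Approximate great-circle (Haversine) distance between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def eligible(volunteer, event):
    """A volunteer matches an event if they are within its radius
    and cover all required skills."""
    close = distance_km(volunteer["location"], event["center"]) <= event["radius_km"]
    skilled = set(event["required_skills"]) <= set(volunteer["skills"])
    return close and skilled

# Hypothetical pre-registered volunteers and a flooding event near The Hague
volunteers = [
    {"name": "A", "location": (52.08, 4.31), "skills": ["dutch", "first_aid"]},
    {"name": "B", "location": (48.21, 16.37), "skills": ["german"]},
]
event = {"center": (52.08, 4.30), "radius_km": 10, "required_skills": ["first_aid"]}

# Only matching volunteers receive the participation request
matched = [v["name"] for v in volunteers if eligible(v, event)]
```

The event would then be published only to the volunteers in `matched`.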
Volunteers receive the published event as a request for
participation on their smartphone applications. If they agree to
participate, they are eligible for tasks that will be published later
as part of this event.
On the web interface, disaster response personnel can now
define tasks for the event they have published. Each task consists
of an arbitrary number of steps. Each step has a well-defined end
result that dictates the response options on the volunteer’s
smartphone application. Possible types of task steps are:
choosing from pre-defined answers to a question, submitting a
photo, submitting a number or submitting free text. A task may
be defined as any combination of these step types. For example:
a task to estimate the extent of a flooding consists of 1)
answering a multiple choice question about water levels on the
street at the volunteers’ current location, 2) taking a photo of said
street and 3) inputting a number representing affected neighbors.
Tasks defined on CrowdTasker’s web interface are then
published to all volunteers that have previously accepted the
participation request for this event. Tasks are presented to the
volunteers on their smartphone application (and may be
accepted or declined individually). The application guides the
user through each task step, whereby the user interface only
allows for the requested input at each step. After all steps of the
task are completed, the volunteer’s input is transmitted to the
CrowdTasker servers. User input for each step is bundled with
the GPS location at the time of interaction. Data received from
volunteers may be relayed to services such as common
operational picture tools or a common information space for
appropriate visualization.
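The task model above (an ordered list of typed steps, each dictating the permitted input on the volunteer's device, with every response bundled with a GPS fix) can be illustrated as follows. The step-type names and validation rules are hypothetical, not the actual CrowdTasker schema:

```python
# Illustrative validators, one per step type named in the text
STEP_VALIDATORS = {
    "multiple_choice": lambda step, answer: answer in step["options"],
    "photo": lambda step, answer: isinstance(answer, bytes),
    "number": lambda step, answer: isinstance(answer, (int, float)),
    "free_text": lambda step, answer: isinstance(answer, str),
}

# The flooding example from the text: question, photo, number
flood_task = [
    {"type": "multiple_choice", "prompt": "Water level on your street?",
     "options": ["dry", "ankle-deep", "knee-deep", "impassable"]},
    {"type": "photo", "prompt": "Take a photo of the street."},
    {"type": "number", "prompt": "How many neighbors are affected?"},
]

def validate_response(task, answers, gps):
    """Check each answer against its step type and bundle it with the GPS
    position at the time of interaction, as described in the workflow."""
    assert len(task) == len(answers), "every step needs an answer"
    out = []
    for step, answer in zip(task, answers):
        if not STEP_VALIDATORS[step["type"]](step, answer):
            raise ValueError(f"invalid input for step type {step['type']}")
        out.append({"prompt": step["prompt"], "answer": answer, "gps": gps})
    return out

result = validate_response(flood_task, ["knee-deep", b"<jpeg bytes>", 12],
                           (52.08, 4.31))
```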
B. GDACSmobile
GDACSmobile aims to support two main target groups:
people concerned with disaster relief, and the (affected)
population itself. Both groups will be able to use the application
for sharing information, thus creating better situational
awareness, which is crucial for effective disaster response. The
general workflow is depicted in the following figure.
Fig. 2. GDACSmobile Actors
Although both groups, registered users as well as public
users, will be able to use the application, different rights and
roles are assigned to the users. People concerned with disaster
relief will be referred to as authorized persons. These authorized
persons are assumed to be working for a professional
humanitarian organization (governmental or non-governmental)
providing professional response services to the local population.
Information retrieved from these users is assumed to be highly
accurate due to their professional background and is thus
classified as authorized and trustworthy.
The public users, i.e. the affected population, will also be
able to use the GDACSmobile application, but are provided with
less functionality. Primarily, not all information
which is relevant for professional disaster relief operations is
directly needed by the affected population. Although the
application will try to enable the local population to start self-
organization, the primary focus lies on information retrieval to
obtain a better awareness of the current situation. Consequently,
the population will be provided with a limited ability to assess
the current needs situation and submit this data to the server and
thus to professional helpers receiving the information.
This also affects the workflow within GDACSmobile. All
users provide observations as reports assigned to a category
(e.g. infrastructure or health needs), including further details
such as an image, text, and geo-location. Reports from public
users are reviewed by professionals or trained volunteers to
filter out wrong or worthless information, while those from
registered users are directly accepted and visible to crisis
managers. Accepted public reports are furthermore visible to all
users on their devices. Crisis
managers are also able to share information by providing public
reports, for example to highlight locations offering shelter.
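The review logic described above, where reports from authorized users are trusted directly while public reports pass a moderation step, might be sketched like this; the status names and fields are assumptions for illustration only:

```python
def triage_report(report, reviewed_ok=None):
    """Decide report visibility: authorized sources are trusted directly,
    public reports become visible only after a positive review."""
    if report["source"] == "authorized":
        return "accepted"          # professional source, visible immediately
    if reviewed_ok is None:
        return "pending_review"    # public report awaiting moderation
    return "accepted" if reviewed_ok else "rejected"

# A professional field report and a report from the affected population
pro = {"source": "authorized", "category": "infrastructure", "text": "Bridge closed"}
pub = {"source": "public", "category": "health", "text": "Injured person at market"}

assert triage_report(pro) == "accepted"
assert triage_report(pub) == "pending_review"
assert triage_report(pub, reviewed_ok=True) == "accepted"
```

Only reports with status `"accepted"` would be pushed to the crisis managers' map and, for public reports, back to all users.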
C. Roles of Tools
Apart from CrowdTasker and GDACSmobile there were
several other tools in use during the experiment for which we
will give a brief overview. Each tool adds to the methodology
applied in the experiment and fulfills a specific role, as listed in
Table I. These roles are not exclusive to single tools, which
creates areas of cooperation. They align with the gaps
introduced above, which are addressed by the experiment scenario.
(1) Observing: collection of geo-located crisis
information from volunteers (e.g. citizens).
(2) Tasking: assigning tasks to groups of volunteers and
collecting results and completion notifications.
(3) Informing: providing crisis information to volunteers
and the general public.
(4) Alerting: providing targeted information to a defined
group of recipients.
TABLE I. ASSIGNMENT OF TOOLS TO ROLES WITHIN EXPERIMENT

Role       | Gap           | Tool (primary underlined)
Observing  | (1), (3), (4) | GDACSmobile, CrowdTasker
Tasking    | (1), (2), (4) | CrowdTasker
Informing  | (1), (3)      | SafeTrip, GDACSmobile, CrowdTasker
Alerting   | (1), (3)      | DEWS
GDACSmobile collects geo-located and categorized
observations from volunteers within the affected area. These
observations are then reviewed and cleared from false
information. As mentioned in chapter I, information quality and
reliability is an important aspect for crowdsourcing and needs to
be taken into account. In addition, CrowdTasker can collect
observations in form of observation tasks. The responses and the
revised GDACSmobile data add to the situational overview of
the crisis managers. CrowdTasker is the only tool in the
experiment able to create and assign tasks that address a
specific need. This enables crisis managers to not only
passively consume information, but also to actively ask for
information or to guide volunteers in their actions. In terms of sharing
information, multiple tools are able to support crisis managers.
Primarily, general information about the region can be published
via SafeTrip to the general public. A more detailed, but also
more localized, level of information can be shared by an
overview map in GDACSmobile for public observations from
the field, or in the form of tasks in CrowdTasker. Targeted
alerting is supported by DEWS, which does not offer a mobile
application, but utilizes classical communication channels like
e-mail and SMS.
D. COP tools and information flow
All tools jointly feature directed communication from
volunteers to crisis managers, crisis managers to volunteers and
bidirectional communication between both groups. Between the
tools and the crisis managers is another layer, which groups the
information into a situational overview, called the Common
Operational Picture (COP). This layer is supported by two
additional tools, LifeX and csWeb, which consolidate
information from the above-mentioned applications and
display it on maps. The information sharing between the
tools is realized by the Common Information Space (CIS)
using the Common Alerting Protocol (CAP) standard.
Each tool transforms all or a subset of its information to the CAP
standard and publishes it to the CIS, from which the COP tools
read the information. It is thereby possible to add, remove, or
change tools in the systems without changes to one of the other
components, as long as the CAP standard is used for
communication. Thereby, the tools are also not dependent on
each other, in case of any technical or organizational issues
during the experiment. Fig. 3 shows a schematic of the
information flows across the layers of tools and actors. On the
bottom, volunteers can share information in their role as
observer using for example, GDACSmobile. The information is
forwarded via the CIS to the COP tools from which crisis
managers get a situational overview. They can issue tasks to be
fulfilled by volunteers using for example, CrowdTasker. Again,
feedback is collected and forwarded via the CIS and presented
in the COP. Finally, targeted alerts and information can be sent
to volunteers and citizens using for example, DEWS and
SafeTrip. For the communication from crisis managers to
volunteers, the two intermediate layers COP and CIS are omitted
as no information filtering or sharing among tools is needed. The
CIS itself is not visible to the actors directly.
Fig. 3. Communication layers and information flows
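As a rough illustration of the CIS exchange, a tool could serialize a volunteer observation into a minimal CAP-style XML message before publishing it. The element subset and values are simplified assumptions based on the CAP standard, not the project's actual mapping:

```python
import xml.etree.ElementTree as ET

def observation_to_cap(sender, event, description, lat, lon):
    """Wrap a volunteer observation in a minimal CAP-like alert structure."""
    alert = ET.Element("alert")
    ET.SubElement(alert, "sender").text = sender
    ET.SubElement(alert, "msgType").text = "Alert"
    info = ET.SubElement(alert, "info")
    ET.SubElement(info, "event").text = event
    ET.SubElement(info, "description").text = description
    area = ET.SubElement(info, "area")
    # CAP encodes a point as "lat,lon radius"; radius 0 marks a single point
    ET.SubElement(area, "circle").text = f"{lat},{lon} 0"
    return ET.tostring(alert, encoding="unicode")

# A GDACSmobile-style observation published to the CIS for the COP tools
msg = observation_to_cap("gdacsmobile", "flood", "Street under water",
                         52.08, 4.31)
```

Because every tool reads and writes this shared format, tools can be added or removed without changes to the other components, as described above.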
E. Experiment Scenario
To fulfill the objective of an appropriate scenario [16], the
experiment scenario is based on a storyline designed by
practitioners who were involved as experiment platform
providers. They defined a fictitious disaster event based on past
experience, which resulted in a more realistic and relevant
scenario compared to a tool-friendly situation developed by the
tool providers. The scenario included a ground truth describing
a flooding in a central region of The Hague. This ground truth
illustrated flood levels at different locations and further flood
related insights, e.g. displaced people, damaged infrastructure or
supply needs. To test the quality of the information flow from
the volunteers to the crisis managers as well as the review
process, an information conflict was designed between a
forecasted ground truth for crisis managers and an actual ground
truth for volunteers in the field. Only if volunteers provide
information in sufficient quantity and quality can crisis
managers recognize the new situation. Additionally, the
scenario was split into two phases. The morning session was
dedicated to disaster preparation and the afternoon to disaster
response. Therefore, the ground truth included information on
potential needs and damages before the crisis event, and
occurred incidents after the crisis event. With these two different
settings, the experiment studied the participation of volunteers
and the utility of tools changing with the disaster phase. The
crisis managers were tasked to perform their regular procedure
assisted by the tools. As an intermediary, each tool was supposed
to have an information manager from the crisis management
team who would transfer the information collected by the tools
to crisis management and back. Because the tools are not yet
embedded into crisis management practice, their novelty would
likely have distracted the crisis managers too much had they
been in direct contact with them. The experiment design was
defined over several telephone conferences, as well as a
preparation workshop and a rehearsal shortly before the
experiment. It was thereby assured that the scenario is realistic
for professionals and provides sufficient ground truth
information for the tools to operate.
F. Evaluation Methodology
Since the experiment is based on two components, the scenario
and the tools, it is necessary to evaluate both scenario-related
and tool-related aspects. With the goal of testing context-aware
informing and tasking of volunteers as well as evaluating the
value of these activities for both citizens and crisis managers,
the experiment is evaluated separately by volunteers using the
apps, professionals involved in the experiment, and dedicated
observers, according to the following matrix.
TABLE II. EVALUATION PERSPECTIVES

Aspect                      | Volunteers                                | Professionals                                | Observers
Methodology acceptance      | Citizens’ perspective: usability of       | Professionals’ perspective: informing,       | X
                            | information, performing tasks,            | alerting, tasking, situation awareness       |
                            | posting reports                           |                                              |
Impact on crisis management | Informing, involvement and tasking        | Situation awareness, information             | X
                            | of citizens                               | dissemination and crisis management          |
Tool usability              | Citizens’ perspective (mobile apps)       | Professionals’ perspective (backend          | X
                            |                                           | applications)                                |
Tool reliability            | Mobile apps                               | Backend applications                         | X
Experiment setup            | -                                         | -                                            | X
To evaluate the above perspectives, different evaluation
methods have been selected to address the experiment
participants: evaluator observations, debriefing discussions,
questionnaires for volunteers, and questionnaires for
professionals.
Aside from the active participants in the experiment,
additional personnel external to the experiment team were
dedicated to observing the experiment. They were tasked with
collecting observations on all evaluation perspectives as shown
in the table above. Each observation was recorded electronically
as a free form comment with a timestamp to align with the
experiment schedule.
After the morning (disaster preparation) and afternoon
(disaster response) sessions, a debriefing discussion was held
with the crisis managers involved in the experiment. The
discussion was held as an open forum to exchange ideas and
experiences from the experiment. The discussion was used to
reflect on the schedule and to discuss the achieved results from
the collected information.
The volunteers’ perspective in this experiment covered their
understanding of the methodology, the communicated crisis
information and tasking, as well as the usability of the mobile
applications. To gather the volunteers’ input on these issues, we
used online questionnaires.
For crisis managers, observers, and the experiment team, we
used a different online questionnaire. Responses are
distinguished according to these groups. From the professional
perspective the evaluation was focused on the methodology of
informing, tasking, alerting, and information gathering, and the
usability and impact of the collected information for the crisis
management, as well as usability of the tools regarding the
backend applications.
IV. RESULTS & LESSONS LEARNED
With the findings from the experiment, we can draw results
and lessons learned for similar experiment designs from four
different perspectives: design, organization, control and
evaluation.
A. Experiment setup
During the experiment design, one important aspect was the
conceptualization and implementation of the technical
integration of tools. As mentioned in section III, the tools can
share information with the CIS, which can then be displayed by
the COP tools. Although a collaborative specification of the
interface was designed and basic tests have been performed,
technical issues still arose in the experiment setup. Later,
evaluations of this issue by the experiment team showed that the
tests had not covered the complete functionality used in the
experiment. In the end, the experiment was not affected and all
issues were solved before the start of the experiment. As a lesson
learned one can conclude that rehearsal or further tests need to
cover the complete functionality needed for the experiment in
full detail.
The design decision to include two phases in the
experiment, disaster preparation and response, was motivated by
the idea to test the tools in more than one phase. Although this
resulted in valuable insights for the evaluation of the tool
usability for these phases, it also caused some negative
influences. Due to the short timeframe of half a day for each
phase, volunteers and crisis managers were not able to gain a
complete situational overview in detail. Also, in a real flooding
scenario, the preparation and response phase would last longer,
giving users more time to collect and use the gathered
information. Fig. 4 shows results taken from the questionnaire
answered by volunteers on their perception of the experiment.
The weakest points are the communication regarding the
experiment progress and what is expected from the volunteers.
Although there was rigorous contact and supervision of
volunteers, the complexity of the scenario was too difficult to be
conveyed easily to non-professionals. We conclude that at least
one experiment day should be conducted per disaster phase,
allowing all participants to adapt to the situation properly and
enabling a sufficient timeframe to make use of the tools’
functionality. Nevertheless, we see the progression across
multiple phases as relevant and valuable, which should at least
be conducted in consecutive experiments.
Fig. 4. Assessment of Experiment Perception of Volunteers
Complexity was further evaluated regarding the use of the
tools in the experiment. Fig. 5 shows results for CrowdTasker and
GDACSmobile on the impression of volunteers. Overall it can
be seen that CrowdTasker is perceived to be easier to learn and
use, while GDACSmobile gives better feedback and creates
more incentives for users.
Fig. 5. Assessment of Tool Impression on Volunteers
Several factors can explain this result and provide some
lessons learned. First, CrowdTasker starts with a simple tutorial
guiding through the first steps, while GDACSmobile targets an
intuitive design without tutorials. Second, as explained in
Section III, for GDACSmobile markers were distributed all over
the designated area, which required volunteers to walk around
and search for them, which supported the feeling of participation
in a serious game. CrowdTasker gave all required information
on the ground truth to the volunteers directly, to allow volunteers
to complete the tasks more easily. This impression was also
reported to the volunteer manager in the field, which supports
this conclusion. As a lesson learned, we suggest emphasizing the
game character in such an experiment to motivate volunteers. In
a real crisis event, this motivation is assumed to be replaced by
the willingness to help. Furthermore, the tool design needs to
account for the two mentioned problems of user training and user
motivation. Intuitive designs and tutorials need to be tested and
approved by unaffiliated users of the target group.
After the experiment, crisis managers, volunteers and the
tool providers were asked for which tasks volunteers
should be involved. The results in Fig. 6 show an interesting
tendency. While the tool providers are very cautious in sending
volunteers into actual crisis areas and would rather give them
tasks to complete in safe places, like off site or at home, crisis
managers are more interested in having volunteers on site and at
the incident location. Although the safety of volunteers was
addressed as an important concern in the debriefings, most help
is needed in actively responding to a crisis aftermath. Volunteers
themselves were split on these opinions, as they likely cannot
imagine how they can assist in the crisis. As a lesson learned we
see that it is important that tool providers and crisis managers
align their impression of volunteer involvement.
Fig. 6. Expected Involvement of Volunteers
B. Experiment organization
Prior to the experiment, the observers were introduced to
their task and were provided with the required observation
forms. Overall, the observers were able to collect valuable
insights for the experiment, but the quantity and level of detail
did not meet expectations. Later assessments showed that this
was mainly due to a lack of supervision of the observers. During
the experiment the observers reacted differently, resulting in
confusion about their task. Since all experiment personnel were
actively caught up in the experiment execution, this was not
realized in time. It is therefore reasonable to assign a dedicated
person to guide and supervise observers as it was done with the
volunteers by having a dedicated volunteer manager in the field.
Thereby, questions and confusions can be resolved without
involving the active experiment team.
An unforeseen event prior to the experiment hindered several
members of the crisis management team from participating in
the experiment as planned. This resulted in the crisis
management team not being at full capacity. While the tools
were operated according to the experiment setup,
communication with the crisis management was limited. The
few available crisis managers were not able to react to all the incoming
information. It was therefore not possible to evaluate the crisis
coordination in the experiment from the perspective of
professionals as planned. Future rounds of experiments should
have a backup plan or alternative solutions to cope with such
events or at least need to take this into account in the experiment
design.
With the experiment team composed of crisis managers, tool
providers, and platform providers, each participant had a clear
role during the experiment design and execution. This role model
allowed for structured communication and planning of the
experiment scenario and schedule during preparation. During the
experiment, however, some of the participants took over multiple
roles, which, as mentioned above for the supervision of observers,
partially resulted in role conflicts. Overall, it can be concluded
that each participant should hold only one role, or that possible
role conflicts need to be identified and avoided in advance.
C. Experiment control
The above-mentioned role conflicts also affected the control of
the experiment during its execution. In order to have an additional
layer of control above the active participants, one person or team
should be solely in charge of experiment control. In the presented
experiment, this role overlapped with that of the tool providers.
We identified a conflict of interest between tool providers and
experiment control in the given flooding scenario: experiment
control is supposed to guide participants through all scenario
elements equally, but tool providers unintentionally focus on the
elements most relevant to their respective tools. While this
conflict did not result in significant negative effects in the
presented experiment, it should be avoided by establishing a
dedicated experiment control team. Such a team also increases the
ability to react faster and more flexibly to unexpected influences,
such as the above-mentioned limited availability of crisis
managers.
D. Experiment evaluation
While the experiment yielded solid and usable results, there
were some lessons to be learned regarding the questionnaire design.
The experiment revealed an unexpectedly heterogeneous knowledge
base among the crisis managers, although all of them were
experienced in crisis situations. During the evaluation design, a
knowledge gap between tool providers and crisis managers was
accounted for by testing the questionnaires with sample groups.
Yet, the evaluation showed that the respondents understood some
questions differently. This was discussed in the debriefings and
led to the conclusion that it is necessary to characterize the
participating groups and to hold discussions with representatives
in advance in order to identify and overcome such knowledge gaps.
The above-mentioned knowledge gap leads to the conclusion that
you need to know your audience, or even better, carefully select
your audience. For researchers who are not sufficiently experienced
in the crisis and disaster management domain, or in the authority
structure of specific organizations, the different levels of
command may be difficult to grasp. Especially with a qualitative
approach, however, it is important to understand the level at which
participants of an experiment usually operate in their
organization. Professional responders' needs vary vastly between
the strategic, tactical, and operational levels of service. Group
discussions and debriefings during the experiment were at times
difficult due to the homogeneous command level of the participating
professionals, as most participants are usually not active in the
field but hold coordinating roles. A more careful selection of
participants or an arrangement of distinct groups would probably
have led to more fruitful interactions. This is not to say that
including representatives from different levels is discouraged:
input from different levels of the same organization can provide
valuable insights into their workflows. Based on our experiences
during the experiment, however, we do caution against mixing
different levels of service from different response organizations,
as this may lead the discussion to veer off track towards
fundamental issues that are subject to organizational culture.
V. CONCLUSION AND OUTLOOK
In this exercise we have seen the first tentative steps towards
an integrated European solution for volunteer management. The
tools we have fielded in this event are not meant to be
competitors, vying for the same target audience. Rather, each
fulfills a specific role in communicating and cooperating with
citizens. As an example, we wish to highlight the possible
synergies between the concepts represented by GDACSmobile
and CrowdTasker. While these tools may mistakenly be
considered as serving the same purpose (both are crowdsourcing
solutions), in reality they are complementary. GDACSmobile
follows a more open approach to crowdsourcing, where users
may submit any information they consider relevant. This creates
a wealth of information that needs to be validated not only for
submission quality and relevance but also truthfulness – which
is where CrowdTasker, with its crowdtasking approach, can
complement it by offering a way to a) task people with verifying
the information on scene or b) assign appropriate action tasks to
volunteers nearby. We argue that both approaches
need to be part of a complete solution for interacting with
citizens: the openness of GDACSmobile that allows for citizen
initiative, generating a lot of data, and the directedness of
CrowdTasker towards achieving a specific target.
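To illustrate the complementarity described above, the following sketch models the combined workflow in simplified form: openly crowdsourced reports are queued for validation, and verification tasks are dispatched to the nearest registered volunteer. All class and function names are illustrative assumptions for this paper's workflow description, not the actual APIs of GDACSmobile or CrowdTasker.

```python
from dataclasses import dataclass, field
from math import hypot

# Hypothetical data model: open crowdsourced incident reports
# (GDACSmobile-style) and registered volunteers (CrowdTasker-style).

@dataclass
class Report:
    report_id: str
    location: tuple   # (x, y) in an abstract coordinate system
    text: str
    verified: bool = False

@dataclass
class Volunteer:
    name: str
    location: tuple
    tasks: list = field(default_factory=list)

def dispatch_verification(reports, volunteers):
    """Assign each unverified report to the closest volunteer as a
    verification micro-task; return a report-to-volunteer mapping."""
    assignments = {}
    for report in reports:
        if report.verified:
            continue  # already validated, no task needed
        nearest = min(
            volunteers,
            key=lambda v: hypot(v.location[0] - report.location[0],
                                v.location[1] - report.location[1]),
        )
        nearest.tasks.append(f"Verify report {report.report_id} on scene")
        assignments[report.report_id] = nearest.name
    return assignments
```

In this sketch, the open channel generates the report stream while the directed channel consumes it, which mirrors the argument that citizen initiative and targeted tasking address different halves of the same validation problem.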
We have pointed out several lessons learned from the conducted
exercise and previous rounds of experiments. Introducing ICT
solutions into the field of crisis management is influenced by
various factors. Field exercises need to take into account that
there is no full control over the scenario and its development. We
showed that, apart from the planned ground truth, further
influences affect the information exchange between crisis
management and volunteers. These influences can be addressed by a
dedicated experiment control team and by in-depth preparation and
training of participants. The evaluation needs to take into account
that the executed exercise can differ from the planned schedule and
that knowledge gaps among the participants need to be addressed as
well. The selected evaluation techniques complemented each other to
overcome most of the identified issues and yielded valuable
insights for further tool development and future rounds of
experiments.
VI. ACKNOWLEDGEMENTS
The research leading to these results has received funding
from the European Union Seventh Framework Programme
(FP7/2007- 2013) under grant agreement n° 607798. We thank
the DRIVER “Interaction with Citizens” experiment team that
has been working together for several months in order to
prepare, conduct, and finally assess the field exercise at The
Hague. We would like to explicitly thank the crisis managers at
the Safety Region Haaglanden (http://www.vrh.nl/) and the
volunteers organized by the Dutch Red Cross present during the
days of the exercise. A special thanks goes out to Lex and Silvia
of VRH for their help in hosting the field exercise, marker
design, and coordination of the many volunteers.
VII. BIBLIOGRAPHY
[1] C. Reuter, O. Heger, and V. Pipek, "Combining Real and Virtual
Volunteers through Social Media," 2013, pp. 780–790.
[2] H. Rheingold, Smart Mobs: The Next Social Revolution. Cambridge,
Mass.: Perseus Publ., 2009.
[3] Pan American Health Organization, Information Management and
Communication in Emergencies and Disasters: Manual for Disaster
Response Teams. Washington, D.C., 2009.
[4] G. Schimak, D. Havlik, and J. Pielorz, Environmental Software
Systems. Infrastructures, Services and Applications, vol. 448, 2015.
[5] D. Havlik, J. Pielorz, and A. Widera, "Interaction with Citizens
Experiments: From Context-aware Alerting to Crowdtasking," in
Proceedings of the ISCRAM 2016 Conference, Rio de Janeiro,
Brazil, 2016.
[6] J. Howe, "The Rise of Crowdsourcing," Wired Magazine, vol. 14,
pp. 1–4, 2006.
[7] S. Roche, E. Propeck-Zimmermann, and B. Mericskay, "GeoWeb and
Crisis Management: Issues and Perspectives of Volunteered
Geographic Information," GeoJournal, vol. 78, no. 1, pp. 21–40,
2011.
[8] S. B. Liu, "Crisis Crowdsourcing Framework: Designing Strategic
Configurations of Crowdsourcing for the Emergency Management
Domain," Computer-Supported Cooperative Work (CSCW), special
issue on Crisis Informatics and Collaboration, 2014.
[9] M. Poblet, E. García-Cuesta, and P. Casanovas, "Crowdsourcing
Tools for Disaster Management: A Review of Platforms and
Methods," in AI Approaches to the Complexity of Legal Systems,
Lecture Notes in Computer Science, vol. 8929, pp. 261–274, 2013.
[10] M. Vollmer, M. Hamrin, H.-M. Pastuszka, M. Missoweit, and D.
Stolk, "Improving Aftermath Crisis Management in the European
Union," Bonn, Germany, 2012.
[11] G. Neubauer, A. Nowak, B. Jager, C. Kloyber, C. Flachberger, G.
Foitik, and G. Schimak, "Crowdtasking – A New Concept for
Volunteer Management in Disaster Relief," in Environmental
Software Systems. Fostering Information Sharing, vol. 413, J.
Hřebíček, G. Schimak, M. Kubásek, and A. Rizzoli, Eds. Springer
Berlin Heidelberg, 2013, pp. 345–356.
[12] C. Flachberger, G. Neubauer, C. Ruggenthaler, and G. Czech,
"Crowd Tasking – Realising the Unexploited Potential of
Spontaneous Volunteers," in Security Research Conference: 10th
Future Security Proceedings, 2015, pp. 9–16.
[13] D. Auferbauer, R. Ganhör, and H. Tellioğlu, "Moving towards
crowd tasking for disaster mitigation," in ISCRAM 2015 Conference
Proceedings – 12th International Conference on Information
Systems for Crisis Response and Management, 2015.
[14] D. Auferbauer, G. Czech, and H. Tellioğlu, "Communication
Technologies in Disaster Situations: Heaven or Hell?," in
Security Research Conference: 10th Future Security Proceedings,
2015, pp. 25–32.
[15] D. Auferbauer, R. Ganhör, H. Tellioğlu, and J. Pielorz,
"Crowdtasking: Field Study on a Crowdsourcing Solution for
Practitioners in Crisis Management," in Proceedings of the
ISCRAM 2016 Conference, 2016.
[16] I. R. Whitworth, S. J. Smith, G. N. Hone, and I. McLeod, "How do
we know that a scenario is 'appropriate'," in 11th International
Command and Control Technology Symposium, Cambridge, UK, 2006.