Nespeca et al.
Evaluating Platforms for Community Sensemaking
WiPe Paper – T12 - Designing for Resilience
Proceedings of the 15th ISCRAM Conference – Rochester, NY, USA May 2018
Kees Boersma and Brian Tomaszewski, eds.
Evaluating Platforms for Community
Sensemaking: Using the Case of the
Kenyan Elections
Vittorio Nespeca
TU Delft
Kenny Meesters
TU Delft
Tina Comes
TU Delft
The profusion of information technology has created new possibilities for local communities to self-organize
and respond to disruptive events. Along with the opportunities, there is also a series of challenges that need to be
addressed in order to improve societal resilience. One of these challenges is to make sense of the continuous
stream of information to create a coherent understanding and improve coordination.
The research presented in this paper focuses on the socio-technical requirements of IT platforms that support
sensemaking and coordination. Using a comprehensive evaluation exercise based on real data from the 2017
Kenyan elections, we examine the development, workflows and use of this shared situational awareness in a
group decision making process. In this manner, we identify requirements for resilience platforms and identify
further research directions.
Keywords: Sensemaking, community engagement, evaluation, requirements, resilience
Disruptive events, irrespective of their nature, often exceed the capacities of professional authorities – leading to
institutional voids and giving rise to localized response activities (IFRC 2016). At the same time, the situation is
often highly uncertain. Through the proverbial ‘information firehose’ decision-makers and affected communities
are confronted with conflicting or redundant information, and actionable information is increasingly hard to find
(Van de Walle and Comes 2015).
Given the uncertain and chaotic nature of these disruptive events, establishing an understanding of the crisis and attributing meaning to what is happening are crucial for the response (Weick 1993; Weick 2010). Sensemaking is a continuous process of information collection, enactment and retention (Sharoda and Reddy 2010). While this sensemaking process has been studied extensively for responders, such as Search and Rescue teams (Muhren and Van de Walle 2009), disaster coordinators (Weick 2010), and emergency services (Kuligowski 2011), there is limited research on sensemaking and decision-making at the community level.
However, communities play an important role in the response to disruptive events. The World Disaster Report
2013 by the International Federation of the Red Cross, highlighted that: “90% of the people saved in a disaster
are saved by local people” (IFRC 2013). Communities are not only the first on-site, but also have access to local
resources and capacities that can be deployed immediately. For sensemaking, the legacy or 'sensemaking trajectories' (Muhren et al. 2008) of the local communities are particularly relevant.
The increasing importance of mobile technology and social media has given communities more options to
create, share and analyze information. Increasingly, communities use collaborative platforms to self-organize.
These can be neighborhood apps fostering communications, or dedicated tools and platforms for crowdsourcing
(Shanley et al. 2013) or crisis mapping. One of the most recent examples has been provided in the 2017 Harvey
Hurricane response (Sebastian et al. 2017).
This paper sets out to study the impact of collaborative platforms on community sensemaking and decision-making, presenting the initial findings from an extensive evaluation session based on a real-world
case. In the next section we explore the theoretical background related to community sensemaking and decision-making. This background informed both the setup of our evaluation and our approach to collecting data. Next, we present the results from our various data collection efforts, including surveys, self-reporting and group discussions. We conclude the paper by discussing these results and the methodology used, and by presenting the key findings and future research opportunities.
A widely used definition of resilience sees it as a 'system's ability to resist, absorb, accommodate and recover from the effects of a hazard in a timely and efficient manner' (UNISDR 2009). However, many other definitions and insights have been developed. This includes discussions on whether resilience is the ability to bounce back to a previous state, as predominantly used in engineering (Bruneau et al. 2003; Comes and Van de Walle 2014; Comfort et al. 2010; Madni and Jackson 2009; Zobel and Khansa 2014), or an adaptive process of change and transformation, as is typical for the socio-ecological school of resilience and complex adaptive systems theory (Filatova et al. 2016; Folke 2006; Jerneck and Olsson 2008; Klein et al. 2003). The latter emphasizes the transformative effect that disruptive events have on communities and socio-ecological systems, requiring them not only to deal with the consequences of a disaster ('bouncing back') but also to reflect on the causes of such an event and implement measures to reduce future risk ('bouncing forward'). Increasingly, we see the latter approach become part of disaster risk reduction strategies, often referred to as 'building back better' (Fan 2013) and strengthening affected communities' capacities (Almedom 2008).
A different but related debate is whether resilience is linked to tangible assets and characteristics of a community, as argued by Norris et al. (2008), or is rather a process (Brown and Kulig 1996). However, as illustrated by Giddens (1979), the two aspects are linked: certain characteristics, infrastructures and/or assets of a community can enable or hinder the community-driven resilience process.
The recent survey on urban resilience by Meerow et al. (2016) highlights particularly well how fragmented the field of resilience research is with respect to the definition and scope of the term; the only consensus they found was that resilience is something positive. Perhaps most important from an information systems and ICT perspective is the missing combination of the engineering and design view of resilience with the adaptive and transformative theories from socio-ecological systems, which would expand the scope of analysis to complex adaptive socio-technological-ecological systems. We discuss here specifically the role of information sharing platforms for community resilience, thereby making headway in bridging the gap between (ICT) design and resilience in disaster response.
As illustrated, communities play a pivotal role in dealing with a disruptive event. These disruptive events test
the resilience of a community and reveal shortcomings. As such, communities are not only at the heart of
responding to disruptive events, but also benefit from understanding how they can improve their resilience when
disruptive events affect them.
Communities can be identified by three characteristics according to Frankenberg (1966): common interests
among people; or common ecology and locality; or a common social system or structure. This definition implies
a shared responsibility and interest in the resilience of their community and in dealing with disruptive events.
This shared responsibility among everyone in the community has been described by Ronan and Johnston (2005).
Moreover, communities are also driven to develop mechanisms to improve their resilience, whether pre- or post-
disaster (Kendra and Wachtendorf 2007; Taket 1999).
Theories on social capital in resilience stress the importance of social interaction in terms of networks and information flows for collective action (Adger 1999; Pelling and High 2005). Besides the links people keep to their close families or within their neighborhood ('bonding' social capital), networking capital stresses the importance of a community's external links, governed by economic or legal relations (Adger 2009); the latter are particularly important in institutional 'voids', marked by an absence of governmental actors (Klievink and Janssen 2014). Extending these concepts, Szreter and Woolcock (2004) define linking social capital as "norms of respect and networks of trusting relationships between people who are interacting across explicit, formal or institutionalized power or authority gradients in society". This is particularly important because, in disasters, the ability to connect quickly to professional decision makers and public authorities for the coordination of relief efforts can be crucial (Aldrich and Meyer 2015; Aldrich and Sawada 2015).
In the response to disasters, communities and professional actors hence need to work together, pooling their
knowledge, resources and response capacities. Accordingly, coordination in the context of disasters entails
information sharing, collaborative use of resources or expertise, joint policies and definition of responsibilities
(Comfort 2007). Information and communication technologies empower new actor groups to self-organize and
engage, taking on new roles and responsibilities.
At the same time, particularly the early chaotic phase of disasters is subject to ‘severe’ or ‘deep’ uncertainty
(Comes et al. 2015). The combination of uncertainty, time pressure, and proliferation of technology has wide
implications for coordination. The widespread access to data and analysis tools gives rise to an unprecedented
number of forecasts, predictions and analyses at any level, adding noise and uncertainty. Particularly, the
problem of politicizing data and the link between information and power has been increasingly in focus recently
(Comes and Adrot 2016).
The increasing pressure for accountability and transparency can lead to stalemates and long decision processes. But the longer it takes to decide, the more likely it is that, given the pressure to respond, reactive measures have already been implemented locally. This lack of coherence increases the risk of decisions being rendered obsolete or seriously flawed even before they are implemented. Under the pressure to respond, and given disrupted communication lines, localized response efforts emerge in these settings, making it particularly challenging to coordinate and align efforts. Nevertheless, coordination is a critical success factor in effective disaster response, as the situation is often so overwhelming that no single actor can deal with it single-handedly (Bharosa et al. 2010).
Sensemaking & Information
Sensemaking is the process that individuals go through to develop an image of what is happening or what people are doing (Weick 1995). This iterative process is a collaborative effort by different actors to develop a shared awareness. While this sensemaking process has been studied extensively for responders, such as Search and Rescue teams (Muhren and Van de Walle 2009), disaster coordinators (Weick 2010), and emergency services (Kuligowski 2011), there is limited research on sensemaking and building situational awareness at the community level. Over time, however, different platforms have emerged that support responders in this process, including crowd-sourced information, for example during the Kenyan elections (Goldstein and Rotich 2008).
Before, during and after the elections, a custom-built web platform called Ushahidi was used to collect reports from the public through different channels, including SMS and submissions on a web page (Okolloh 2009). The resulting information provided both government agencies and individual citizens with an understanding of the unfolding situation. It has been studied how such crowdsourcing platforms support mandated responders in their sensemaking process, for example after natural disasters such as the 2010 Haiti earthquake (Gao et al. 2011; Heinzelman and Waters 2010) or in conflict situations such as the Libya crisis (Stottlemyre and Stottlemyre 2012) or the Kenyan elections (Meier and Brodock 2008). However, there has been limited study of how such platforms and the resulting information support sensemaking at the community level.
Research gap
Building resilience is a process that is targeted at and, more importantly, driven by communities. It enables communities to reduce the impact of disruptive events and supports them in dealing with the resulting effects. Especially in these circumstances, communities depend on themselves. If we are to encourage the community resilience process, supporting sensemaking at the community level is an essential step. Nevertheless, the currently available platforms mostly support the sensemaking process of formal (mandated) responders, even though they could also support communities in their collective sensemaking process (Comes et al. 2017). To date, however, no research has investigated whether this is a valid assumption and how platforms should be designed to encourage such a community sensemaking process. This paper aims to address this research gap. In the study presented here, we use a comprehensive evaluation approach to examine the contribution of (augmented) crowdsourcing platforms to the sensemaking process of non-mandated responders in disruptive events.
When designing an evaluation to examine the above-mentioned effects, there are several considerations (Link et al. 2014; Meesters 2014). First and foremost, we need to design the content; this encompasses not only the objective, but also the tasks to be executed. Those objectives and tasks stem from the research objective outlined above: sensemaking in a disruptive event by non-mandated (experienced) responders. This is then translated into specific tasks. Next, we consider the delivery of the evaluation: how the evaluation is framed and presented to participants. This includes the scenario and background introduced to the participants as well as the workflows and tools at the disposal of the teams. Finally, data has to be collected during the scenario exercise in order to evaluate the initial (research) objective.
Evaluation Design
To examine whether the platform and services offer sensemaking support, the scenario had to fulfill the following criteria: (1) provide a context unknown to the participants, (2) provide a clear objective that encourages participants to come to a common situational understanding, (3) provide sufficient data to enable participants to reach that objective, and (4) provide a comprehensive and realistic scenario. The participants were students with various backgrounds in technology and management, with a common interest in and some introductory lectures on the design of participatory systems. A total of twelve students joined the evaluation.
Evaluation Delivery
The Kenyan elections, which initially took place on August 8, 2017, were highly disputed and led to several eruptions of violence. The results of the elections were also disputed and eventually annulled by the Supreme Court of Kenya, leading to another election cycle held in October 2017. As part of the government's effort to make the elections more transparent, the Uchaguzi initiative was re-launched. The goal of the initiative was: "... to help Kenya have a free, fair, peaceful, and credible general election in 2017." Uchaguzi aimed to support this objective through a broad network of civil society and citizen observations. It provided a platform that enabled citizens to report, with any technology available to them, any incidents significant to the election.
Fig 1. Real-world Uchaguzi workflow
This scenario was a good fit with the criteria outlined above: the participants were not closely familiar with the specific background and context of the Kenyan elections. Furthermore, the scenario provided a clear objective: mapping the reports of voting incidents and irregularities to support the Kenyan election board in assessing the validity of the elections, as well as supporting the security and safety forces in containing the violence. Additionally, access to sufficient data created a comprehensive and realistic scenario with enough depth for the participants to play their role and complete their objective. Finally, the validity of the scenario was ensured by basing it on the Uchaguzi deployment, which was part of the real elections and had proven results.
Using Uchaguzi ('election' in Swahili) as inspiration, the scenario was adapted for the evaluation. The participants took on the role of members of an 'Election Observation Mission' team, sent by the European Union in response to a request made by the Kenyan government. In this scenario the participants supported the election observation mission by collating, processing and analyzing the data sent to them by other observers in the team, closely resembling the real-world mandate, process and workflow of Uchaguzi. Aside from the participants themselves, the evaluation included several role-players to provide realism and support to the participants without breaking the scenario:
Head of mission: the head of mission provided the participants with the required objectives, necessary information and mandatory tasks. This included the data collection tools shown in the next section.
Kenyan Election Board: this role-player informed the participants about the impact of their work, provided general updates on the unfolding situation, and responded to country-specific inquiries when needed.
IT Support: IT support provided the participants with their own 'digital workspace', including dedicated email accounts, a shared workspace for their team on Google Drive, the pre-configured Ushahidi platforms, and a Slack channel where support was offered. Most importantly, IT support also provided daily data updates (injects) to the participants.
The data provided to the participants was created specifically for this exercise using the real-world data from the Uchaguzi deployment. A representative sample was drawn from the complete data-set, ensuring a similar ratio of 'published' (verified and useful) and 'unpublished' (less valuable) reports. The sample also had the same distribution across the distinct categories as the complete real-world data-set. The timestamps of the reports were updated to match the dates of the evaluation period.
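The sampling procedure described above can be sketched as proportionate stratified sampling followed by a timestamp shift. This is a minimal sketch: the field names (`category`, `status`, `timestamp`) are illustrative assumptions, not the actual Uchaguzi data schema.

```python
import random
from collections import defaultdict
from datetime import datetime

def stratified_sample(reports, fraction, seed=42):
    """Sample reports while preserving the ratio of published/unpublished
    reports within each category (proportionate stratified sampling)."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for r in reports:
        strata[(r["category"], r["status"])].append(r)
    sample = []
    for stratum in strata.values():
        k = max(1, round(len(stratum) * fraction))
        sample.extend(rng.sample(stratum, k))
    return sample

def shift_timestamps(reports, new_start):
    """Shift all timestamps so the earliest report falls on the first day
    of the evaluation period, keeping relative timing intact."""
    earliest = min(r["timestamp"] for r in reports)
    offset = new_start - earliest
    return [{**r, "timestamp": r["timestamp"] + offset} for r in reports]
```

Because sampling happens per (category, status) stratum, both the published/unpublished ratio and the category distribution of the full data-set carry over to the sample by construction.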
Participants were divided into three groups, each working in its own environment. In their role of 'digital election observers', the participants received a briefing at the start of the evaluation outlining their mandate, mission objective and the tools at their disposal. Each group was provided with its own 'digital workspace' as described above, centered on the Ushahidi platform. This platform was augmented with several services developed as part of the COMRADES research project. Using these tools, the team would receive, process and publish the 'reports' (data) received from the observers. While the participants were free to determine their own workflow, a generic daily workflow would encompass receiving the data (daily), cleaning and importing it, processing it, and publishing the results to the public.
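The generic daily workflow (receive, clean and import, process, publish) can be expressed as a simple pipeline. The function names, keyword-based categorization and report fields below are illustrative assumptions, not the actual Ushahidi/COMRADES tooling.

```python
def clean(raw_reports):
    """Drop empty submissions and strip whitespace from free-text fields."""
    cleaned = []
    for r in raw_reports:
        text = r.get("text", "").strip()
        if text:
            cleaned.append({**r, "text": text})
    return cleaned

def process(reports, categories):
    """Tag each report with a category when a known keyword appears;
    unmatched reports keep category=None for manual review."""
    out = []
    for r in reports:
        match = next((cat for cat, keywords in categories.items()
                      if any(k in r["text"].lower() for k in keywords)),
                     None)
        out.append({**r, "category": match})
    return out

def publish(reports):
    """Publish only the reports that were assigned a category."""
    return [r for r in reports if r["category"] is not None]

def daily_workflow(raw_reports, categories):
    # One daily cycle: clean the inject, process it, publish the results.
    return publish(process(clean(raw_reports), categories))
```

In the evaluation the middle step was done largely by hand (one or two members importing data, the rest fine-tuning entries); the sketch only makes the ordering of the steps explicit.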
Data collection
Data collection was carried out using three approaches: surveys, mission reports and a group discussion. The surveys focused on the individual participants, while the reports had to be filled in by each group as a whole; this was aimed at making the group work together and summarize its collective perspective and situational awareness. Finally, the group discussion was a collective meeting of all groups to determine whether the teams shared a common understanding of the local context and whether, given the same roles and data, they could come to a common decision.
The repetition of surveys and reports over time was aimed at tracking changes in the perspectives of the participants. To this end, three surveys and two reports were filled in by the participants at different times during the evaluation. Table 1 illustrates the schedule of the evaluation, including the main objective (task) for each day, the in-person meetings and the data collection for the evaluation. This schedule was presented and monitored by the head-of-mission role-player. The next sections describe the content and structure of the surveys, reports and group discussion.
Table 1. Schedule of the Evaluation

Day   | Participant Task            | Team Meeting            | Eval. Data Collection
Day 1 | System test                 | Briefing & Tutorials    | 1st survey
Day 2 | System test                 |                         |
Day 3 | Process observation reports | Q&A session             |
Day 4 | Process observation reports |                         |
Day 5 | Process observation reports |                         | Intermediate report
Day 6 | Process observation reports | Q&A session             | 2nd survey
Day 7 | Process observation reports |                         |
Day 8 | Process observation reports |                         | Final report
Day 9 |                             | Presentation / Debrief  | 3rd survey
The surveys aimed at evaluating the situational awareness of each individual participant and were structured according to the Situational Awareness Rating Technique (SART) as suggested by Selcon and Taylor (1990). This approach uses ten questions on a 7-point Likert scale, aimed at investigating three components of situational awareness: Situational Understanding, Attentional Demand and Attentional Supply. Situational Understanding represents the confidence a participant has that the situation at hand is completely understood. Attentional Demand relates to how challenging the current set of events is in terms of instability, variability and complexity. Attentional Supply refers to the amount of effort a participant is investing in understanding the course of events.
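In the 10-dimensional SART, the three components are commonly combined into a single situational-awareness score as SA = U - (D - S). A minimal sketch, assuming the standard SART dimension names from Taylor's questionnaire (the exact wording used in our surveys may differ):

```python
def sart_score(ratings):
    """Compute the overall SART situational-awareness score from ten
    7-point ratings, grouped into the three SART components.

    ratings: dict mapping dimension name -> rating in 1..7
    """
    demand = ["instability", "variability", "complexity"]
    supply = ["arousal", "concentration", "division_of_attention",
              "spare_capacity"]
    understanding = ["information_quantity", "information_quality",
                     "familiarity"]
    D = sum(ratings[d] for d in demand)          # Attentional Demand
    S = sum(ratings[s] for s in supply)          # Attentional Supply
    U = sum(ratings[u] for u in understanding)   # Situational Understanding
    # Common SART combination: SA = U - (D - S)
    return U - (D - S)
```

High demand lowers the score unless matched by attentional supply, which is why, in our results, rising Attentional Demand could coexist with an overall plateau in situational awareness.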
The reports were aimed at collecting open feedback from the teams. The participants were given a predefined structure to fill in, divided into two main parts: a situation report as part of the scenario exercise, and a section aimed at gathering feedback on the evaluation exercise itself as well as on the platform. The situation report included the following sections: workflows, IT tools, current situation, and advice on the validity of the election with recommendations for security force allocation. For this purpose, a template map was provided to the students to report security, staffing and voting issues.
With regard to the security force allocation, participants had to come to a final decision on the allocation of three police forces within the most affected areas of Kenya. This had to be included in the final report, along with the reasoning behind the choice based on their situational awareness. In the evaluation section, participants had to provide feedback on the following elements: mission and objective (e.g. feasibility of the exercise), and workflows and team organization (e.g. a critical perspective on the workflows chosen by the team), including the IT systems and services used.
Group Discussion
Once the final reports were delivered, the group discussion took place. First, each group presented its map with the final decision and the related argumentation. The groups then discussed to come to a joint decision. This discussion was facilitated by the researchers.
Based on the data collection strategy shown above, several results were gathered for each of the data collection
tools. Table 2 summarizes the results, while the following sections provide more details.
Table 2. Summary of Results for each of the Data Collection Tools adopted

Data Collection Tool | Evaluation Dimension   | Results for Situational Awareness                                          | Other Results
Surveys              | Individual participant | Increasing over time, affected by continuous information inflow            |
Mission reports      | Within group           | Different prioritization of areas                                          | Similar workflow for separate groups; need for contextual knowledge; socio-technical requirements
Group discussion     | Among groups           | Different prioritization of areas; different interpretation of information | Need for contextual knowledge; socio-technical requirements
Results of Surveys
Situational awareness seemed to increase during the simulation, reaching a plateau between the second and third surveys. As for the individual components: Situational Understanding increased over the simulation but likewise plateaued between the second and third surveys. Attentional Demand increased considerably between the first and second surveys and kept rising between the second and third, but with decreased momentum. Attentional Supply increased almost steadily over time. Table 3 shows the results from the surveys.
Table 3 Results of the Surveys averaged per group and for all groups (overall average)
Results of Mission Reports
In addition to the surveys, each group produced two reports: an intermediate report (approximately halfway through the evaluation) and a final report (at the end of the evaluation). The participants described the workflows they used throughout the evaluation period, which involved processing the incoming data using the various systems at their disposal. Based on these descriptions, it became clear that all groups followed the same approach: one or two team members were responsible for uploading the data to the platform, after which all group members would go over the results and fine-tune the entries on the platform as needed.
The participants were asked to provide advice to the Kenyan government based on the data they received and analyzed. They were also asked to describe how confident they felt in the recommendations and feedback provided in the report. For the final report only, the participants had to provide a final decision on the validity of the elections and on resource allocation. The resource allocation results showed that the groups prioritized different areas for the deployment of the three police forces (see Figure 2). The decision on the validity of the elections also differed between the groups.
Figure 2. Comparison of police force allocation among the three groups of participants. While there seems to be some agreement on Nairobi, there are different opinions on Kisumu and Mombasa.
Results of the Group Discussion
During the discussion the participants showed a collective understanding of the local context and could discuss the topic using the same terms. Nevertheless, their presentations of the final decisions showed the same differing prioritization of areas as the final reports (see Figure 2). The group discussion revealed that participants had different interpretations of the available information despite having the same role in the simulation, which made it difficult to reach a common decision. Differences among groups arose from the different prioritization of areas and from the perceived trustworthiness of information from specific sources.
Combining the results from the individual surveys, the group reports and the joint discussion with all participants, findings on several topics emerge. First, we discuss the use of the platform and its features by the participants. Second, we examine the way the participants handled, processed and interpreted the data provided to them. Next, we zoom in more specifically on the relation between the data and the sensemaking process. Finally, we reflect on the evaluation method itself.
Platform evaluation & socio-technical requirements
The platform to process and share data played a vital role in the evaluation setup. This platform, along with new features designed to support communities in their sensemaking process, was used by the participants to enter and map the incoming reports. Our evaluation proved useful for collecting socio-technical requirements for the further development and deployment of this platform. The feedback given by participants in the surveys, reports and group discussion emphasized the importance of training, both in the technical operation of the platform and in the workflows within the team.
Some of the feedback was more technical and directly related to possible additional features for the platform the participants used. This feedback included improvements to the user interface, the ability to process data in a more structured manner, and integrations with other services and platforms. Some of these elements were directly implemented by the developers of the platform, while others are on the list of possible future improvements. This ensures direct application of the research findings and underlines the importance of the HCI component of a platform, which can ultimately affect the goal of stimulating shared situational awareness and cooperation among stakeholders (Streefkerk et al. 2014). This direct feedback loop also highlights the evaluation approach as a tool for providing direct and actionable feedback to the developers of these platforms.
Information processing & sensemaking
Even though the situation evolved and new events started to emerge, the participants felt more confident in their
tasks. This was also indicated by the reduced need for external services that supported participants in
understanding specific keywords (places, names, terms, etc.): participants indicated in both the surveys and the
reports that the usefulness of these services decreased. By working with the data over a longer period, they
implicitly built up the tacit knowledge needed to understand the situation, essentially supporting their own
sense-making process. However, in the group discussions participants also mentioned that, as the evaluation
progressed, they felt the need to classify the information into the initial (pre-defined) categories in the system,
even though they were free to adapt them to their needs. This desire to fit the data to certain categories, trends
and previous findings indicates the risk of confirmation and fitting biases (Comes 2016).
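To make this tension concrete, a minimal sketch of keyword-based report categorization is given below. The category names, keyword sets and function are hypothetical illustrations, not features of the platform used in the study; the point is that a categorization scheme should let participants extend the categories rather than force new data into the initial ones:

```python
# Pre-defined categories seeded at the start of the deployment
# (hypothetical names and keywords, for illustration only).
categories = {
    "security": {"protest", "police", "violence"},
    "logistics": {"road", "transport", "queue"},
}

def classify(report_text, categories):
    """Return every category whose keywords occur in the report;
    fall back to 'uncategorized' so that reports are never forced
    into a pre-existing bucket."""
    words = set(report_text.lower().split())
    matches = [name for name, kws in categories.items() if words & kws]
    return matches or ["uncategorized"]

# Participants remain free to extend the scheme as the situation evolves,
# instead of fitting new data to the initial categories.
categories["rumors"] = {"unverified", "rumor", "alleged"}

print(classify("Unverified rumor of a protest near the station", categories))
# prints ['security', 'rumors']
```

A scheme like this mitigates, but does not remove, the fitting risk: the bias observed in the study was behavioral, with participants preferring the familiar categories even when extension was possible.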
Furthermore, in the final report and group discussions, the participants seemed to focus mainly on the latest data
received. Rather than building a comprehensive evaluation that included all the data gathered and processed
throughout the exercise, participants focused on the issues that the latest update revealed. The historical data
(and its underlying trends) was used selectively (or even discarded) in the final reports to highlight these issues,
essentially displaying an immediacy effect (Anderson 1965; Huber et al. 2011).
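The immediacy effect can be illustrated with a toy calculation (the numbers below are invented, not from the evaluation data): a summary built only from the latest update emphasizes whatever that update contains, while the full history provides the context against which it should be read:

```python
# Incident reports per update round during an evaluation (invented numbers);
# the final round contains a spike.
rounds = [12, 14, 11, 13, 12, 40]

latest = rounds[-1]
historical_mean = sum(rounds[:-1]) / len(rounds[:-1])

# A report built only on the latest round highlights the spike...
print(f"latest round: {latest} reports")
# ...while the historical data puts that spike in context.
print(f"historical mean: {historical_mean:.1f} reports per round")
```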
In the reports the participants noted the reliability of their self-reporting and stated that they felt confident in
the data they had processed and in their analysis, but lacked the contextual knowledge to assess how their
‘internal’ findings related to the real world: specifically, in relation to media reports, public awareness and,
most importantly, a historical baseline, for example from previous elections. Participants indicated that this
contextual knowledge would enable them to better frame and interpret the results from their data and to better
cater to the needs of the requesting agency. Such contextual knowledge is especially relevant for identifying
abnormalities (against a baseline), or for giving meaning and value to certain signals presented in specific
messages.
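A hedged sketch of how such a historical baseline could be used to flag abnormalities is shown below; the regions, counts and three-sigma threshold are illustrative assumptions, not data from the study:

```python
import statistics

# Hypothetical baseline: reports per observation round in previous elections,
# keyed by region (invented values for illustration).
baseline = {"A": [10, 12, 11], "B": [30, 28, 33]}

def is_abnormal(region, observed, baseline, threshold=3.0):
    """Flag an observation deviating more than `threshold` standard
    deviations from the region's historical baseline."""
    hist = baseline[region]
    return abs(observed - statistics.mean(hist)) > threshold * statistics.stdev(hist)

print(is_abnormal("A", 25, baseline))  # True: far above historical levels
print(is_abnormal("B", 31, baseline))  # False: within normal variation
```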
The evaluation setup and scenario provided the participants with an in-depth experience of handling crowd-
sourced information during a disruptive event. None of the participants had any prior experience with
crowdsourced information in general or with these situations specifically. Moreover, during the briefing the
participants indicated that they were not familiar with the details of the scenario, other than what had been
reported in the news. While the participants were not directly affected by the event, these properties are similar
to those of communities who aim to understand and make sense of an unknown situation.
The participants indicated in the group discussion that they felt the scenario was comprehensive, but that a clear
motivation was needed to feel (continuously) engaged. As they were not (in reality) affected by the elections and
had no other incentive, motivation was at times suboptimal. Additionally, the extended time period of the
evaluation made it challenging for them to stay involved and motivated. While the participants found the case
interesting as an introduction to this field, a more compressed, relatable case would have kept engagement
stronger over the longer period of the evaluation. These findings emphasize the importance of reciprocity for
information systems aimed at communities, as illustrated by the socio-technical requirements found within the
COMRADES project (Piccolo et al. 2017).
Despite these possible improvements, the evaluation methodology provided comprehensive feedback on the
platform, the use of information, and the sense-making process of non-professional responders at the community
level. It also demonstrated that the use of platforms and, more importantly, working directly with information
supports the community in building an understanding of the ongoing situation. However, there are caveats and
biases that
have to be taken into consideration and studied in more detail.
Various situational awareness platforms are now available to deal with social unrest resulting from conflict
situations. Nevertheless, these platforms are rarely evaluated scientifically to assess the support they actually
provide (especially for communities) and the requirements that are key to their design. Effectively evaluating
these platforms during real crises is impractical; scenario-based evaluations are therefore needed. One of the
challenges in developing such scenarios is the creation of realistic settings, able to emulate reality as closely as
possible in order to provide scientific grounds for the evaluation.
One key finding of this study is that it is possible to transform data recorded from a real situation into a scenario
for the evaluation of a situational awareness platform. The proposed methodology was used to set up a scenario
with social media data from the Kenyan election of August 8, 2017. The course of the election was stretched
from one to nine days in order to adapt the workload to the number of participants available compared to the
real case. Moreover, the extended schedule was aimed at giving the participants time to fill in the intermediate
surveys and write the intermediate reports. Given the limitations above, the findings of this research approach
have to be considered the qualitative results of an exploratory study.
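The time-stretching step can be sketched as a simple rescaling of message timestamps; the nine-fold factor comes from the setup described above, while the report structure and function name are illustrative assumptions rather than the scripts actually used:

```python
from datetime import datetime, timedelta

def stretch_timeline(reports, factor=9):
    """Rescale timestamps so that an event recorded over one day plays
    out over `factor` days, preserving relative order and spacing."""
    if not reports:
        return []
    t0 = min(r["timestamp"] for r in reports)
    return [{**r, "timestamp": t0 + (r["timestamp"] - t0) * factor}
            for r in reports]

# Illustrative reports from a single election day
reports = [
    {"id": 1, "timestamp": datetime(2017, 8, 8, 6, 0)},
    {"id": 2, "timestamp": datetime(2017, 8, 8, 12, 0)},
    {"id": 3, "timestamp": datetime(2017, 8, 9, 6, 0)},  # 24h after the first
]
stretched = stretch_timeline(reports)
# The last report now arrives nine days after the first.
assert stretched[-1]["timestamp"] - stretched[0]["timestamp"] == timedelta(days=9)
```

Because only the offsets from the first message are scaled, the relative ordering and spacing of the original stream are preserved, which keeps the scenario faithful to the real event's dynamics.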
The methodology nevertheless provided us with a tool to analyze the ability of the platform to create shared
situational awareness and a coordinated approach among stakeholders, as well as directions for future research.
In this respect, we were able to create a collective understanding even for people who were not initially familiar
with the context. This, however, only solves part of the problem: the participants interpreted the situation
differently, owing to their backgrounds, biases, etc. Therefore, even though we established a common narrative
and vocabulary that could be used to communicate effectively, additional efforts are needed to translate this into
a coordinated approach among different stakeholders.
Acknowledgments

We thank the COMRADES project for funding this research. Furthermore, we are grateful to our project partner
Ushahidi for providing data from the case study and IT support, and for promptly taking up the participants'
suggestions for possible new features.
References

Adger, N. W. 1999. "Social Vulnerability to Climate Change and Extremes in Coastal Vietnam," World
Development (27:2), pp. 249-269.
Adger, W. N. 2009. "Social Capital, Collective Action, and Adaptation to Climate Change," Economic
Geography (79:4), pp. 387-404.
Aldrich, D. P., and Meyer, M. A. 2015. "Social Capital and Community Resilience," American Behavioral
Scientist (59:2), pp. 254-269.
Aldrich, D. P., and Sawada, Y. 2015. "The Physical and Social Determinants of Mortality in the 3.11 Tsunami,"
Social Science & Medicine (124), pp. 66-75.
Almedom, A. M. 2008. "Resilience to Disasters: A Paradigm Shift from Vulnerability to Strength," African
Health Sciences (8:Suppl 1), p. S1.
Anderson, N. H. 1965. "Primacy Effects in Personality Impression Formation Using a Generalized Order Effect
Paradigm," Journal of personality and social psychology (2:1), p. 1.
Bharosa, N., Lee, J., and Janssen, M. 2010. "Challenges and Obstacles in Sharing and Coordinating Information
During Multi-Agency Disaster Response: Propositions from Field Exercises," Information Systems
Frontiers (12:1), pp. 49-65.
Brown, D. D., and Kulig, J. C. 1996. "The Concepts of Resiliency: Theoretical Lessons from Community
Research," Health and Canadian Society (4:1).
Bruneau, M., Chang, S. E., Eguchi, R. T., Lee, G. C., O’Rourke, T. D., Reinhorn, A. M., Shinozuka, M.,
Tierney, K., Wallace, W. A., and von Winterfeldt, D. 2003. "A Framework to Quantitatively Assess
and Enhance the Seismic Resilience of Communities," Earthquake Spectra (19:4), pp. 733-752.
Comes, T. 2016. "Cognitive Biases in Humanitarian Sensemaking and Decision-Making Lessons from Field
Research," Cognitive Methods in Situation Awareness and Decision Support (CogSIMA), 2016 IEEE
International Multi-Disciplinary Conference on: IEEE, pp. 56-62.
Comes, T., and Adrot, A. 2016. "Power as Driver of Inter-Organizational Information Sharing in Crises,"
Comes, T., Meesters, K., and Torjesen, S. 2017. "Making Sense of Crises: The Implications of Information
Asymmetries for Resilience and Social Justice in Disaster-Ridden Communities," Sustainable and
Resilient Infrastructure, pp. 1-13.
Comes, T., and Van de Walle, B. 2014. "Measuring Disaster Resilience: The Impact of Hurricane Sandy on
Critical Infrastructure Systems," ISCRAM2014, R. Hiltz (ed.), State College, pp. 195-204.
Comes, T., Wijngaards, N., and Van de Walle, B. 2015. "Exploring the Future: Runtime Scenario Selection for
Complex and Time-Bound Decisions," Technological Forecasting and Social Change (97), pp. 29-46.
Comfort, L. K. 2007. "Crisis Management in Hindsight: Cognition, Communication, Coordination, and
Control," Public Administration Review (67:s1), pp. 189-197.
Comfort, L. K., Boin, A., and Demchak, C. C. 2010. Designing Resilience: Preparing for Extreme Events.
University of Pittsburgh Pre.
Fan, L. 2013. "Disaster as Opportunity? Building Back Better in Aceh, Myanmar and Haiti," ODI: HPG
Working Group: http://www.odi.org.uk/sites/odi.org.uk/files/odi-assets/publications-opinion-files/8693.pdf.
Filatova, T., Polhill, J. G., and van Ewijk, S. 2016. "Regime Shifts in Coupled Socio-Environmental Systems:
Review of Modelling Challenges and Approaches," Environmental Modelling & Software (75).
Folke, C. 2006. "Resilience: The Emergence of a Perspective for Social–Ecological Systems Analyses," Global
environmental change (16:3), pp. 253-267.
Frankenberg, R. 1966. Communities in Britain: Social Life in Town and Country. Penguin Books.
Gao, H., Barbier, G., and Goolsby, R. 2011. "Harnessing the Crowdsourcing Power of Social Media for Disaster
Relief," Intelligent Systems, IEEE (26:3), pp. 10-14.
Giddens, A. 1979. "Agency, Structure," in Central Problems in Social Theory. Springer, pp. 49-95.
Goldstein, J., and Rotich, J. 2008. "Digitally Networked Technology in Kenya’s 2007–2008 Post-Election
Crisis," Berkman Center Research Publication (9), pp. 1-10.
Heinzelman, J., and Waters, C. 2010. "Crowdsourcing Crisis Information in Disaster-Affected Haiti."
Huber, M., Van Boven, L., McGraw, a. P., and Johnson-Graham, L. 2011. "Whom to Help? Immediacy Bias in
Judgments and Decisions About Humanitarian Aid," Organizational Behavior and Human Decision
Processes (115:2), pp. 283-293.
IFRC. 2013. "World Disaster Report. Technology and the Future of Humanitarian Action," Geneva.
IFRC. 2016. "Resilience: Saving Lives Today, Investing for Tomorrow," Geneva.
Jerneck, A., and Olsson, L. 2008. "Adaptation and the Poor: Development, Resilience and Transition," Climate
Policy (8:2), pp. 170-182.
Kendra, J. M., and Wachtendorf, T. 2007. "Community Innovation and Disasters," in Handbook of Disaster
Research. Springer, pp. 316-334.
Klein, R. J., Nicholls, R. J., and Thomalla, F. 2003. "Resilience to Natural Hazards: How Useful Is This
Concept?," Global Environmental Change Part B: Environmental Hazards (5:1-2), pp. 35-45.
Klievink, B., and Janssen, M. 2014. "Developing Multi-Layer Information Infrastructures: Advancing Social
Innovation through Public–Private Governance," Information Systems Management (31:3), pp. 240-
Kuligowski, E. D. 2011. Terror Defeated: Occupant Sensemaking, Decision-Making and Protective Action in
the 2001 World Trade Center Disaster. University of Colorado at Boulder.
Link, D., Meesters, K., Hellingrath, B., and Van de Walle, B. 2014. "Reference Task-Based Design of Crisis
Management Games," ISCRAM.
Madni, A. M., and Jackson, S. 2009. "Towards a Conceptual Framework for Resilience Engineering," IEEE
Systems Journal (3:2), pp. 181-191.
Meerow, S., Newell, J. P., and Stults, M. 2016. "Defining Urban Resilience: A Review," Landscape and urban
planning (147), pp. 38-49.
Meesters, K. 2014. "Towards Using Serious Games for Realistic Evaluation of Disaster Management It Tools,"
AIM SG, pp. 38-48.
Meier, P., and Brodock, K. 2008. "Crisis Mapping Kenya’s Election Violence: Comparing Mainstream News,
Citizen Journalism and Ushahidi," iRevolution Blog, October (23).
Muhren, W., Eede, G. V. D., and Van de Walle, B. 2008. "Sensemaking and Implications for Information
Systems Design: Findings from the Democratic Republic of Congo's Ongoing Crisis," Information
technology for development (14:3), pp. 197-212.
Muhren, W. J., and Van de Walle, B. 2009. "Sensemaking and Information Management in Humanitarian
Disaster Response: Observations from the Triplex Exercise," Proceedings of the 6th International
Conference on Information Systems for Crisis Response and Management (ISCRAM).
Norris, F. H., Stevens, S. P., Pfefferbaum, B., Wyche, K. F., and Pfefferbaum, R. L. 2008. "Community
Resilience as a Metaphor, Theory, Set of Capacities, and Strategy for Disaster Readiness," American
journal of community psychology (41:1-2), pp. 127-150.
Okolloh, O. 2009. "Ushahidi, or ‘Testimony’: Web 2.0 Tools for Crowdsourcing Crisis Information,"
Participatory learning and action (59:1), pp. 65-70.
Pelling, M., and High, C. 2005. "Understanding Adaptation: What Can Social Capital Offer Assessments of
Adaptive Capacity?," Global Environmental Change (15:4), pp. 308-319.
Piccolo, L., Meesters, K., and Roberts, S. (2017). “Co-designing for Community Resilience Beyond the Local.”
Ronan, K., and Johnston, D. 2005. Promoting Community Resilience in Disasters: The Role for Schools, Youth,
and Families. Springer Science & Business Media.
Sebastian, A., Lendering, K., Kothuis, B., Brand, A., Jonkman, S., van Gelder, P., Kolen, B., Comes, M.,
Lhermitte, S., and Meesters, K. 2017. "Hurricane Harvey Report: A Fact-Finding Effort in the Direct
Aftermath of Hurricane Harvey in the Greater Houston Region."
Selcon, S., and Taylor, R. 1990. "Evaluation of the Situational Awareness Rating Technique (SART) as a Tool
for Aircrew Systems Design," AGARD, Situational Awareness in Aerospace Operations.
Shanley, L., Burns, R., Bastian, Z., and Robson, E. 2013. "Tweeting up a Storm: The Promise and Perils of
Crisis Mapping."
Sharoda, P. A., and Reddy, M. C. 2010. "Understanding Together: Sensemaking in Collaborative Information
Seeking," in: CSCW. ACM, pp. 321-330.
Stottlemyre, S., and Stottlemyre, S. 2012. "Crisis Mapping Intelligence Information During the Libyan Civil
War: An Exploratory Case Study," Policy & Internet (4:3-4), pp. 24-39.
Streefkerk, J. W., Neef, M., Meesters, K., Pieneman, R., and van Dongen, K. 2014. "Hci Challenges for
Community-Based Disaster Recovery," International Conference on Digital Human Modeling and
Applications in Health, Safety, Ergonomics and Risk Management: Springer, pp. 637-648.
Szreter, S., and Woolcock, M. 2004. "Health by Association? Social Capital, Social Theory, and the Political
Economy of Public Health," International Journal of Epidemiology (33:4), pp. 650-667.
Taket, A. 1999. "Information, Systems and Information Systems: Making Sense of the Field."
UNISDR. 2009. "UNISDR Terminology for Disaster Risk Reduction," United Nations International Strategy
for Disaster Reduction (UNISDR), Geneva, Switzerland.
Van de Walle, B., and Comes, T. 2015. "On the Nature of Information Management in Complex and Natural
Disasters," Procedia Engineering (107), pp. 403-411.
Weick, K. E. 1993. "The Collapse of Sensemaking in Organizations: The Mann Gulch Disaster," Administrative
Science Quarterly (38:4), pp. 628-652.
Weick, K. E. 1995. Sensemaking in Organizations. Sage.
Weick, K. E. 2010. "Reflections on Enacted Sensemaking in the Bhopal Disaster," Journal of Management
Studies (47:3), pp. 537-550.
Zobel, C. W., and Khansa, L. 2014. "Characterizing Multi-Event Disaster Resilience," Computers & Operations
Research (42), pp. 83-94.