Measuring Innovations in Crisis Management
Adam Widera
Chair of Information Systems and Supply Chain Management, University of Münster
adam.widera@ercis.uni-muenster.de

Chiara Fonio
European Commission, Joint Research Centre
Chiara.FONIO@ec.europa.eu

Sandra Lechtenberg
Chair of Information Systems and Supply Chain Management, University of Münster
sandra.lechtenberg@uni-muenster.de

Bernd Hellingrath
Chair of Information Systems and Supply Chain Management, University of Münster
bernd.hellingrath@ercis.uni-muenster.de
ABSTRACT
Crisis management (CM) organizations regularly face the challenge of assessing the potential impact of a change in their socio-technical setup. Whether new software, a new tool, a simple workflow or a broader organizational structure becomes available, CM organizations need to estimate the potential added value under a high degree of uncertainty. In general, the more reliable the available information about the new solution is, the more informed the decisions are. One promising way of assessing the potential impact of new CM solutions is to apply them in a setup that is as realistic as possible and as secure as necessary. However, artificial scenarios like simulation exercises carry the risk of measuring the performance of the solution itself rather than its contribution to the CM operation. In this paper we review the state of the art in measuring crisis management performance, discuss the results in the context of performance measurement in general and present a performance measurement approach supporting a structured assessment of innovative CM solutions applied within a collaborative demonstration project.
Keywords
Performance measurement, crisis management, innovation.
INTRODUCTION
During CM operations even minor decisions can have a direct and major impact on human lives. In consequence, CM organizations rely on proven and well-known infrastructures, procedures and tools to run their operations. These circumstances lead to a relatively high reluctance towards any kind of change in the way CM operations are planned, executed and evaluated. Decision makers in CM need to be very careful and require clear evidence before introducing new ways of designing, planning and executing their operations.
Probably the most comprehensible way to reach decision-making reliability is to combine external reputation (e.g. through well-documented references of other organizations) with internal evaluations (e.g. through actual application and testing of new solutions). By combining both elements, the internal and the external perspective, decision makers are able to obtain relevant data supporting certain decisions as well as to establish trust within the organization towards the successful implementation of a new solution. Hence, CM organizations are in a better position to justify their decisions with appropriate data (evidence) as well as to ensure the required backing within the organization to support organizational change (organizational trust).
However, what if there is no appropriate external reputation CM organizations can refer to, for example because of different value systems in the commercial and the humanitarian sector? When thinking of “innovation” in its original meaning (Latin “innovare”, “to change into something new”), there probably cannot yet be a clear reputation. This of course does not mean that a new solution must lack any track record at all. On the contrary, CM organizations often encounter companies pitching solutions backed by well-acknowledged expertise which can still be innovative for the CM domain. If we think about technological innovations, e.g. RFID chips to track and monitor relief goods or mobile applications as communication means, there remains the uncertainty whether a piece of technology that is successful in one context shows the same performance in another application domain (Coletti et al. 2017). One reason for the potential mismatch is not only that the application requirements differ, but mainly that even a small technological change takes place in a broader socio-technical context which has to be elaborated first (Orlikowski 1993, Toyama 2015).
The internal reliability of a new solution, i.e. the provision of appropriate data proving the potential impact of the innovation, remains as an alternative way to identify the degree of innovation and to measure the potential impact. There are many elaborated and standardized approaches to measure attributes of specific artefacts, like the sensitivity analysis of a simulation run, the technology readiness level (TRL) of a software or the perceived usability of a tool. It can be questioned how well such measures satisfy the need of CM organizations to make informed decisions about investments in changes of legacy infrastructures, procedures or tools. Thus, the research question of this paper is how the potential impact of a solution can be measured for the specific application domain of CM. Are there any generic performance measurement approaches available and, if not, how could such approaches be designed in order to systematically identify and analyze relevant data?
In this paper we first present the results of a literature review on general performance measurement in crisis management. This step is necessary in order to clarify whether and how CM performance and effectiveness can be measured at all. Based on these findings we discuss how objective- and process-oriented performance measures can be identified in order to systematize and interrelate relevant metrics. Finally, we present a use case of an adjustable performance measurement architecture being deployed in an ongoing CM demonstration project.
LITERATURE REVIEW
Our literature analysis of publications dealing with performance measurement in crisis management within Scopus, the largest abstract and citation database for peer-reviewed literature (https://www.elsevier.com/solutions/scopus), results in more than 500 hits. In order to gather as broad a picture as possible, the following generic search string was applied: (“performance measurement” OR effectiveness) AND (“crisis management” OR “disaster relief”). Comparing the number of relevant sources per year, it becomes obvious that there is an ongoing increase of results starting in the early 2000s, with a peak in 2006, two years after the major South-East Asian tsunami disaster. Since then, publications have increased even more, with a maximum of more than 50 sources in 2013 (see Figure 1).
[Figure 1: bar chart of the number of publications per year, from before 1995 to 2017, on a scale of 0 to 60.]
Figure 1: Number of publications per year
The results stem from a wide variety of research areas, with Social Sciences, Business and Management, Engineering and Computer Science among the top ones (see Figure 2).
[Figure 2: pie chart of the subject areas of the search results (multiple answers possible): Social Science 28.7%, Business & Management 22.1%, Engineering 22.1%, Computer Science 21.3%, Medicine 14.8%, Economics 9.4%, Environmental Sciences 7.8%, Decision Sciences 7.0%, Mathematics 6.3%, Other 27.2%.]
Figure 2: Subject areas of search results
However, when reviewing the first sources it becomes apparent that, especially by including “effectiveness” in the search term, the search delivers many results which only measure the effectiveness of e.g. a model or an algorithm that has been developed. The actual CM performance and effectiveness is, in most cases, treated as an underlying assumption directly related to the model or algorithm effectiveness. Neglecting the measurement of operational applicability (relevance), however, significantly limits the conclusions regarding the potential impact of the investigated artifact. With the term “effectiveness” excluded, the search only leads to 14 results. The excluded sources are not necessarily concerned with the performance measurement of crisis management itself but rather try to provide a solution to a problem and aim at measuring the effectiveness of this specific solution, without relating it to the potential impact on CM effectiveness. Nonetheless, they are of interest, as even statements about the effectiveness of one single model, algorithm etc. require certain methods to be developed or applied.
Considering only sources which deal with trial-related performance measurement, by extending the search term with “AND (experiment OR trial)”, reduces the number of sources considerably. Only 34 out of the over 500 publications satisfy the refined search term. While the yearly number of publications shows a similar behavior as in the first search, the results of the refined search mostly stem from more technical research areas such as computer science or engineering (see Figure 3).
[Figure 3: pie chart of the subject areas of the refined search results (multiple answers possible): Computer Science 50.0%, Engineering 38.2%, Business & Management 20.6%, Social Sciences 14.7%, Decision Sciences 11.8%, Earth & Planetary Sciences 8.8%, Mathematics 8.8%, Agricultural & Biological Science 8.8%, Other 20.5%.]
Figure 3: Subject areas of refined search results
Reflecting on the overall results, it can be concluded that the area of performance measurement is highly multi-disciplinary. In order to showcase the wide range of identified approaches, rooted in both positivistic and interpretivist traditions, an exemplary selection is reviewed in the following subsection.
Insights from the literature review
Schulz and Heigh (2009) present a logistics performance measurement approach based on the case of a Development Indicator Tool which has been developed with the International Federation of Red Cross and Red Crescent Societies (IFRC). The project was initiated by the Logistics and Resource Mobilization Department (LRMD) of the IFRC, which defined a logistics strategy requiring concrete response-level scores to be achieved when delivering relief supplies. To this end, a descriptive approach is utilized, starting from explicating the strategic change initiated by the LRMD, which targets the establishment of regional instead of central supply chains through the formation of regional logistics units (RLUs). Schulz and Heigh (2009) conclude that the design and implementation process for performance measurement systems must be kept simple and requires ongoing improvement and development by the RLUs. It is essential to involve tool users and administrators besides developers at an early stage in order to achieve acceptance of the tool's introduction and to make all stakeholders familiar with it. Co-creation and mutual understanding of the involved parties is perceived as a key success factor. In this context, support from the organization's management is of particular importance. Furthermore, the key action behind performance measurement is seen in analyzing the needs for development, which are revealed by the actions' impact on defined target scores. In the case of the LRMD, the system is for instance supposed to become an efficient web-based application replacing data interchange via email. Driven by simplicity, the LRMD case is moreover supposed to foster the emergence of research contributions that provide conceptual and practical insights into system development and improvement in the context of humanitarian logistics.
Rongier et al. (2013) developed a method that supports decision making in real time during the response phase and worked on its implementation to demonstrate how performance indicators support crisis response management and the collaboration of stakeholders. The authors present a four-step procedure for a CM performance measurement system (PMS), applied in a case study at the French Red Cross. For that purpose, a web-based prototype of the PMS was developed and tested in practice. The tool's requirements and characteristics are listed, and its pages and database are illustrated before a conclusion is drawn. The identified KPIs are structured along the dimensions efficiency, relevance, expectations, satisfaction, agility, and impact. Due to the focus on a PMS dashboard, the actual measurements were identified in the literature and covered, for example, logistical areas like response times defined as “cycle times” (p. 1098) or response quantities (p. 1100). The actual PMS was presented for the first time in Rongier et al. (2010), which was also identified by our literature search. The underlying research methodology involved the same practitioner organization as the 2013 paper, but was related to a specific earthquake scenario.
Owen et al. (2016) investigate the challenges faced by representatives at the strategic level of emergency management and elaborate on their relevance for tactical front-line operations and political compliance. The analyzed data is retrieved from a former study on emergency management, and the method includes both surveys and interviews with representatives from 36 Australian organizations of the respective branch. In contrast to Rongier et al. (2013), the authors conclude “that before we can propose any revision of measures to assess emergency management performance, we need to understand the interaction between underlying values base and the tensions inherent in carrying out emergency management work.” (p. 186).
The work conducted by Wang (2012) is intended to provide guidance for organizational performance (OP) during crises through the design and development of a measurement framework that supports organizations in linking their OP with their strategies, technical systems and crisis management objectives. This is motivated by the identified need to derive improvement measures mitigating future incidents. In order to draw linkages across an organization's performance, strategy, objectives and technical systems, multi-dimensional frameworks are needed. Following a literature analysis, Wang (2012) compiled a set of performance indicators in the areas of information dissemination, involvement of top management in CM tasks, cooperation with stakeholders, financial measures, stakeholders' confidence in the sustainability of an organization, documentation, and time for making specified improvements (p. 679). The author acknowledges the limited generalizability of the framework due to the diversity of CM organizations and recommends further investigation of “(…) operational variables for a specific business context for empirical validation purposes” (p. 684).
One major conclusion of the sources is that the performance measurement of disaster relief operations is a difficult task requiring context-specific adjustments due to various challenges, which can be classified into four categories (Abrahamsson et al. 2010):
1. Evaluation based on value judgement: Any evaluation needs to be based on certain values, i.e. in order to be capable of assessing how successful an operation has been, there need to be values which define what counts as successful. At least implicitly, these values, which can also be understood or formulated as objectives of an operation, will always be based on subjective opinions and personal beliefs.
2. Complexity of crisis situations: The high complexity of crisis situations significantly affects the way a relief operation can be analyzed, understood and evaluated. Strong dependencies and complicated relationships, between actors as well as causal ones, lead to great difficulties when trying to understand what happened as well as why it happened.
3. Questionable validity of information: An evaluation of an operation has to be based on information about the course of events during the operation. Often such information is gained by conducting interviews etc. and is rarely based on e.g. ongoing data collection. Consequently, there is always the question how reliable humans are as a source of information and therefore how valid the information is on which an evaluation is based (in terms of generalizability of specific results).
4. Limiting operation conditions: Every disaster relief operation can have negative effects or outcomes which simply could not have been prevented, independently of how successful the operation has been. Any immediate and unavoidable casualties caused by the crisis should not be included in the evaluation of an operation. For example, the number of injured people is not relevant for the evaluation, while the time until they receive help is. Overall, it can be difficult to distinguish between the evaluation of the operation's performance itself and the analysis of what might have happened under different circumstances.
In general, these challenges make it difficult if not impossible to establish a generic performance measurement approach for crisis management in general, and in the context of measuring CM innovation in particular. What CM effectiveness is depends on many variables like the observed time frame, the type of CM entity, its organizational level or the specific crisis situation. Hence, the results of the literature analysis suggest that dedicated iterative and systematic procedures are necessary in order to develop appropriate performance measurement approaches able to give insights into the impact of specific solutions on CM performance. Owen et al. (2016) also conclude that before suggesting measures to assess crisis management, it is necessary to understand the complex and often intertwined challenges and relationships of crisis management and its actors. Only after making sense of the interaction between the underlying value base and the events and actions of an operation can methods to measure and evaluate its performance be developed (Owen et al. 2016).
Overall, the literature analysis shows that the high variety in scenarios, tasks, stakeholders etc. related to disasters results in a lack of generic performance indicators, especially when evaluating and measuring CM innovations. The results of the identified papers can still be very useful for the development of a specific performance measurement project, as the documented results may fit some of the wide variety of potential cases (e.g. an open set of performance indicators for the area of humanitarian logistics can be utilized for specific cases, see Widera and Hellingrath 2011). However, it appears necessary to develop a structured approach supporting the case-based identification of:
- Specific CM objectives and values (e.g. evacuation of an area or the setup and maintenance of an
internally displaced persons camp)
- Clear systematization of the relief operation (e.g. responsibilities or specific processes and workflows)
- Appropriate research methods covering both quantitative and qualitative data gathering and analysis
techniques (e.g. sensitivity analysis of simulation results in combination with focus groups of potential
simulation applicants in CM organizations)
- Differentiation of the investigated objects (e.g. controllable vs. uncontrollable variables, or operation-specific phenomena vs. a solution-specific perspective).
In the next section we briefly introduce considerations and existing approaches from the area of performance measurement that can be used for the measurement of CM innovations.
PERFORMANCE MEASUREMENT
Following the literature review above, the identification of innovative and value-adding solutions in CM can only be achieved through context-dependent and specific measurement, analysis and adjustment of their exemplary application in a secure environment such as trials, serious games or exercises. For a supporting performance measurement approach there are several guidelines and elaborated concepts to be considered. As discussed above, each indicator might have a weighted importance for the overall CM performance of a single organization or a dedicated scenario, which is why the identification of key performance indicators (KPIs) is useful. A systematization and categorization of potential KPIs prevents an isolated view and possible misinterpretation. However, because of the different actors involved in CM operations, different processes with specific objectives and relations need to be considered when identifying the KPIs relevant for an application.
The findings from the literature review described above offer a rich source of potentially appropriate indicators. They can be considered as an open set of CM KPIs. However, in order to ensure the relevance of the KPIs in particular CM applications, a specific set of KPIs needs to be developed for each application (or application context). Parmenter (2010) differentiates between result indicators (e.g. number of evacuated persons) and performance indicators (e.g. time needed to evacuate) as well as between key result indicators and KPIs. KPIs can be defined as business-relevant, numeric information and represent a set of measures focusing on those aspects of organizational performance that are most critical for the current and future success of the organization (Parmenter 2010). Thus, the indicators can be related to different elements (like objectives or processes) and weighted with respect to specific goals (Reichmann 1990).
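To illustrate this distinction, the following minimal Python sketch models result and performance indicators weighted against an operation goal. The indicator names, values, targets and weights are hypothetical examples invented for this paper's evacuation scenario; they are not prescribed by Parmenter (2010) or Reichmann (1990).

```python
# Hypothetical sketch of result vs. performance indicators (Parmenter 2010),
# weighted against an operation goal (Reichmann 1990). All figures invented.
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    kind: str                      # "result" (what was achieved) or
                                   # "performance" (how it was achieved)
    value: float
    target: float
    weight: float                  # relative importance for the operation goal
    lower_is_better: bool = False  # e.g. for time- or cost-based indicators

    def attainment(self) -> float:
        """Degree of target attainment; 1.0 means the target is exactly met."""
        if self.lower_is_better:
            return self.target / self.value
        return self.value / self.target

indicators = [
    Indicator("evacuated persons", "result", value=850, target=1000, weight=0.6),
    Indicator("evacuation time (h)", "performance", value=6, target=5,
              weight=0.4, lower_is_better=True),
]

# Weighted attainment towards the operation objective.
score = sum(i.weight * i.attainment() for i in indicators)
print(f"weighted objective attainment: {score:.2f}")  # 0.6*0.85 + 0.4*0.83
```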
Thus, it might be concluded that each performance measurement application is specific and hardly transferable. However, there are some general aspects to be considered for the specific identification of KPIs. Schulz and Heigh (2009) provided an overview of KPI requirements, which is presented in the following table.
Validity: Address the real performance drivers.
Relevance: Reveal decision-relevant information.
Cardinality: Cover a wide range of key issues under consideration.
Completeness: Use additional metrics if not all relevant issues can be covered by only one.
Comparability: Allow intra- and inter-organizational comparisons as well as comparisons over time.
Compatibility: Input data for calculating the metrics should be available from the existing systems.
Cost and benefit: Development and continuous measuring costs have to be contrasted with the resulting benefits.

Table 1: KPI Requirements (Schulz and Heigh 2009)
In addition, the following two points should be added: (1) manageability (Keller and Hellingrath 2007) and (2) adaptability (Preißler 2008). Considering the manageability of possible KPIs ensures more than just a reasonable economic relationship between the costs and benefits of measuring the KPI itself. This is especially important in view of today's technical opportunities to store huge amounts of data requiring advanced data analysis approaches, which CM organizations probably do not possess since they concentrate on their core competencies. Adaptable KPIs enable the necessary flexibility to cope with changing structures and processes. Thus, organizations using the selected KPIs do not have to invest in time-consuming redefinitions of metrics.
As a final note on KPI requirements, KPIs should be constructed in a way that allows measurement points to be assigned to specific process steps. As VDI 4400 (2002) points out, existing data collection tools are able to document relevant processes through the identification of events in the form of quantity and time data, as each KPI should be quantifiable. The following figure illustrates how measurement points can be assigned to processes within an example of a distribution process model.
Figure 4: Assignment of KPIs in a humanitarian organization (Widera and Hellingrath 2016)
The figure depicts an exemplary representation of how warehousing tasks can be structured along a process using the BPMN standard. This logical structure visualizes an ideal flow of the sequence, e.g. the shipment information should be verified (grey box marking the starting event) before the delivery is accepted. To each of these clearly defined processes, specific measurement points can be assigned. By setting specific measurement points, concrete stages in process sequences are predefined in order to evaluate the performance of the executed tasks. The KPI IP.4 represents the “Mean quality inspection costs per incoming goods item” and is an important performance driver in humanitarian logistics. Such a process orientation puts an organization in a position to analyze the various tasks it has to deal with. The identified open set of KPIs can be found in Widera and Hellingrath (2011, pp. 1335-1336).
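As a minimal sketch of this idea, the following Python fragment attaches a measurement point to one step of a simplified incoming-goods process and computes IP.4 from logged event data, in the spirit of VDI 4400's quantity and time data. The process steps, the event log and all cost figures are invented for illustration and do not reproduce the model in Figure 4.

```python
# Hypothetical sketch: measurement points attached to process steps, with
# events logged as quantity/time data (in the spirit of VDI 4400). The
# process, the event log and all figures are invented for illustration.
from statistics import mean

# Simplified incoming-goods process and the measurement points assigned to it.
process_steps = [
    "verify shipment information",
    "accept delivery",
    "quality inspection",
    "store goods",
]
measurement_points = {"quality inspection": "IP.4"}

# Events logged at the IP.4 measurement point: (item id, inspection cost in EUR).
inspection_events = [
    ("item-001", 4.20),
    ("item-002", 3.80),
    ("item-003", 5.10),
]

# IP.4: mean quality inspection costs per incoming goods item.
ip4 = mean(cost for _, cost in inspection_events)
step = next(s for s, kpi in measurement_points.items() if kpi == "IP.4")
print(f"IP.4 at step '{step}': {ip4:.2f} EUR per incoming goods item")
```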
In recent years, a large number of different performance measurement concepts has been proposed within the scientific community. Keller and Hellingrath (2007) presented an overview of existing frameworks for the area of logistics and supply chain management alone. The overview is summarized in the table below.
Benchmarking (KPI collections): VDI-Guidelines 4400, LogiBEST; SCOR; Sennheiser, ProdChain: Supply Chain Design Decomposition (SCDD)

Performance management, holistic logistics KPI collections and PMS: Kirchhausen (2002); Erdmann (2002); GIPP; SCOR; VDI-Guidelines 4400; WEKA Practical Handbook; VDI-Guidelines 2525; Grochla (1983); Syska (1990); Reichmann (1985); Pfohl (1994); Weber (1995); Gollwitzer, Karl (1998); Hieber (2002); Tableau de Bord (TdB); J. I. Case approach; Harmann approach; Caterpillar approach; Skandia approach; Data Envelopment Analysis; Performance Measurement Matrix; Performance Pyramid; Kaplan, Norton (1992): Balanced Scorecard; Quantum Performance Measurement approach; Brewer, Speh (2000): BSC approach; Stölzle et al. (2001): BSC approach; Weber et al. (2002): BSC approach; Jehle et al. (2002), SFB 559: BSC approach

KPI collections as reference works: German Logistics Association (BVL), BearingPoint (2002); Lindemann, Notz (2005): Supply Chain Scorecard; BiLog: Value Check; Keller, Stommel (2007), LiNet; Keller (2006), ILIPT

Cost-benefit analysis (KPI collections and PMS): Radke (1999)

Performance management, specific logistics KPI collections and PMS: Degen (1978); Martin (1979); Berg (1982); Wiethoff (1986); Dierks (1988); Sell (1978); Berg, Maus (1980); Fieten (1981); Schaab (1982); Budde, Schwarz (1983); Treptau (1982); Kwijas, Pieper-Musiol (1984); Konen (1985); Van der Meulen, Spijkerman (1985); Beamon (1999); Jacobsen, Nofen (2004), DynaMoZ; Ossola-Haring (1999)

Performance management, economic PMS: Du Pont System of Financial Control; ZVEI-PMS; Profitability-Liquidity (RL) PMS; Groll (1991); BiLog: Potenzial-Check; Schnetzler (2005), ProdChain: SCDD; SCM-Best

Potential analysis (KPI collections and PMS): Sennheiser (2005), ProdChain: SCDD

Table 2: Performance Measurement Concepts (Keller and Hellingrath 2007)
The overview of performance measurement systems (PMS) listed in the table above is organized by field of application. The following classifications were used: benchmarking, cost-benefit analysis and potential analysis, as well as performance measurement divided into economic, specific logistics and holistic logistics PMS. In the original table, each inter-organizational PMS is set in bold type.
It can be stated that commercial enterprises are able to fall back on a wide range of established and proven PMS. These offer several advantages and disadvantages with respect to fulfilling organization-specific and inter-organizational requirements for performance measurement. Keller and Hellingrath (2007) conclude that a general comparison of these different PMS is nearly impossible, as the KPIs differ in terms of classification, notation, definitions, calculations and applications. They propose to develop a holistic approach for performance measurement which can be positioned between the PMS investigated. The existing PMS and the developed framework cannot be discussed in detail in this work, but can be explored in further literature (Keller and Hellingrath 2007, Keller 2009). For CM performance measurement approaches these findings suggest that there is a good reason and a realistic chance to design and develop a generic PMS focusing on the identification of innovations in a secure (i.e. non-operational) context. For this purpose we present a use case of a demonstration project covering the area of innovations in crisis management. The main idea of the project is to develop a trial-oriented environment able to identify major innovations in CM.
USE CASE: TRIAL-ORIENTED PERFORMANCE MEASUREMENT
In this chapter we present the performance measurement approach developed in the project “Driving Innovation in Crisis Management for European Resilience” (DRIVER+, www.driver-project.eu). The demonstration project addresses the challenge for CM organizations to assess and integrate new solutions while coping with a rapidly changing infrastructure, evolving risks across cultural, administrative and national boundaries, and engaging with populations to enhance their resilience. The aim is to develop a pan-European test-bed supporting the evaluation of CM solutions in realistic but secure environments with regard to their true benefits and their overall suitability, before they are adopted by CM practitioners. For this purpose a dedicated methodology was developed, which cannot be discussed in detail in this paper. The DRIVER+ methodology provides a structured approach for a CM innovation test-bed supporting the involved stakeholders in identifying, designing, planning, executing and analyzing relevant trials.
A key element of this methodology is the performance measurement architecture. The architecture is structured
along performance measurement dimensions of DRIVER+ trials. It allows an explicit relation to tasks, processes
and organization- or mission-specific targets. Because of the functional complexity of specific measurement
“objects”, the first step is to categorize them according to the DRIVER+ logic. The following figure illustrates
the architecture of the DRIVER+ performance measurement dimensions.
Figure 5: Performance Measurement Dimensions in DRIVER+ Experiments
The three dimensions are the trial dimension, the solution dimension and, as the core DRIVER+ dimension, the CM dimension. All three performance measurement dimensions are served by an overall performance measurement trial support, where all potentially relevant guidelines, recommendations and trial data are collected, stored and processed (e.g. the actual KPI definition guidelines, generic KPIs, domain-specific KPIs or data storage policies).
(1) The trial dimension covers the perspective of the trial owner (i.e. the organization hosting a DRIVER+ trial) and measures all relevant data related to the predefined trial objectives. One example, in the case of a trial in the context of spontaneous volunteer management, could be the question of how many voluntary participants can be motivated to join a trial in order to fill sandbags needed to build a dike (a KPI could be “participating volunteers/required volunteers” or “participating volunteer profile/representative volunteer profiles”). The trial objectives are defined by the trial owner, but the main source is the CM practitioners' needs and, hence, the objectives of the missions being “simulated” in a trial. In order to “operationalize” the trial objectives, trial modules are derived (e.g. communication and coordination of volunteers taking part in the trial). Within such a module, the trial owner is able to define which processes are required to fulfil the objectives and to assign specific weightings. This step contains an estimation of the effectiveness of each process (in relation to the trial objectives). Once this task is done, the trial owner can apply the performance measurement guidelines to deduce specific and relevant KPIs.
(2) The CM dimension is, however, the key performance measurement area. In the context of an upcoming trial on a chemical spill, one exemplary KPI can be derived from the major objectives targeting the evacuation of the affected population (e.g. “number of evacuated persons/number of persons to be evacuated”). The identification of CM objectives, described as mission objectives, is the foremost place to indicate whether a change of a process, the application of a new technology or a training module has an impact on the CM performance. Besides, the CM objectives need to be understood as the determining element of the experiment objectives and the decision support objectives. Due to the different relief situations, stakeholders and time horizons, the measurement objects vary in terms of specific roles, tasks and processes. The question whether a particular performance is effective or not can only be evaluated once the involved actors, including their responsibilities and practices, are defined. These definitions have to be used to identify and configure the appropriate KPIs.
(3) Last but not least, the solution dimension must be measured in order to learn whether a particular piece of technology or a new process has the potential to drive innovation in CM. In the presented example it could be a solution supporting evacuation tasks through interaction with citizens; here one objective or solution function could be to identify the location of evacuees through the application of drones (one related KPI could be “time to locate evacuees with a drone/time to locate evacuees without a drone”). The solution objectives always relate to easing or supporting one particular task, decision problem or process, even if this is only defined as a new standard operating procedure. Hence, the decision support objectives build the first starting point for evaluating the performance of a particular solution. These objectives need to be derived from, or at least have a direct relation to, the CM objectives in terms of a practical impact. The identified objectives can be used to extract specific solution functions, which in turn can be used to derive appropriate KPIs. One important aspect here is that the KPIs need to have a relation to the CM KPIs. To give an example, a high usability of a software might be absolutely irrelevant if the software itself makes no contribution to the relevant CM performance (which does not mean that usability should not be measured, but its CM impact is key for the overall evaluation).
Keeping the three dimensions and their interrelations in mind, relevant KPIs able to assess the real impact of new solutions in CM can be identified in a clear and structured way. This process is supported by generic rules of performance measurement approaches (as discussed above), procedural guidelines and recommendations. Evaluation examples can be found e.g. in Detzer et al. (2016), Havlik et al. (2016), van den Berg et al. (2016) and Dubost et al. (2017).
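The following minimal Python sketch brings together the example ratio KPIs named for the three dimensions above; all counts and times are invented trial data used for illustration only, not results from DRIVER+.

```python
# Hypothetical sketch of the example ratio KPIs from the three DRIVER+
# dimensions discussed above. All raw counts and times are invented.

# (1) Trial dimension: recruitment for the volunteer-management trial.
participating_volunteers, required_volunteers = 42, 50
trial_kpi = participating_volunteers / required_volunteers

# (2) CM dimension: mission objective of the chemical-spill scenario.
evacuated_persons, persons_to_be_evacuated = 910, 1000
cm_kpi = evacuated_persons / persons_to_be_evacuated

# (3) Solution dimension: locating evacuees with vs. without the drone
# solution; a value below 1 means the drone speeds up the task. As argued
# above, this KPI only matters if it can be related to the CM KPI.
minutes_with_drone, minutes_without_drone = 12, 35
solution_kpi = minutes_with_drone / minutes_without_drone

print(f"trial KPI:    {trial_kpi:.2f}")     # share of required volunteers recruited
print(f"CM KPI:       {cm_kpi:.2f}")        # share of the population evacuated
print(f"solution KPI: {solution_kpi:.2f}")  # relative time to locate evacuees
```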
The main conclusion from both the literature review and our practical findings is that the lack of generic performance measurement approaches has to be answered with structured, case-specific support for the identification of innovation for CM organizations, focusing on the questions of what to measure and how to analyze it. Starting from a multi-dimensional framework (see Wang 2012) like the three-dimensional DRIVER+ approach, a structured and meaningful analysis of potential innovation impacts can be enabled in a relevant (practitioner-driven) and rigorous manner. The suggested way is to follow predefined guidelines and steps identifying specific CM objectives and values (see also Schulz and Heigh 2009 and Owen et al. 2016), an as clear as possible systematization of the CM operation (see Rongier et al. 2013), the application of mixed-research approaches covering both quantitative and qualitative data gathering and analysis techniques (see Coletti et al. 2017), and a clear differentiation of the investigated objects (e.g. Wiel et al. 2010).
SUMMARY AND OUTLOOK
We have presented and discussed the state of the art of general performance measurement in crisis management based on a literature review. The findings show that the high variety in scenarios, tasks, stakeholders and interdependencies related to disasters results in a lack of generic performance measurement approaches in the literature. In order to develop an appropriate measurement approach to evaluate CM innovations, existing works in the area of performance measurement research were discussed. We presented how objective- and process-oriented performance measures can be identified and developed in order to systematize and interrelate relevant metrics. Finally, we introduced a use case of an adjustable performance measurement architecture being deployed in the ongoing CM demonstration project DRIVER+.
The current state of the project allows the conclusion that the presented approach is applicable. It is especially the practitioners who are giving positive feedback when applying the performance measurement approach. Due to the clear structure between and within the dimensions, the definition of KPIs and the sense-making of the gathered data are eased significantly, which supports the creation of (internal) evidence on the trialed solutions. Sophisticated data analysis approaches for certain artefacts (like flight or machine learning algorithms) become much more useful because of their dedicated relation to the specific KPIs of the involved stakeholders.
However, the presented results reflect the conceptualization of the performance measurement approach and only first results from the development of the first trial, taking place in June 2018. We have incorporated some results of preparatory trials (exemplary objectives and KPIs introduced in the previous sections), but in order to provide an elaborated reflection on the application results it is necessary to incorporate further trials with other stakeholders and also to compare the gathered qualitative and quantitative data of the trial itself.
ACKNOWLEDGMENTS
The research leading to these results has received funding from the European Union Seventh Framework
Programme (FP7/2007- 2013) under grant agreement n° 607798.
REFERENCES
Abrahamsson, M.; Hassel, H.; Tehler, H. (2010) Towards a System-Oriented Framework for Analysing and
Evaluating Emergency Response, Journal of Contingencies and Crisis Management, 18, 1, pp. 14-25.
Blecken, A. (2010) Humanitarian Logistics: Modelling Supply Chain Processes of Humanitarian Organisations,
Bern Stuttgart Wien: Haupt Verlag.
Coletti, G., Mays, R., Widera, A. (2017). Bringing Technology and Humanitarian Values Together: A
Framework to Design and Assess Humanitarian Information Systems. In Proceedings of the International
Conference on Information and Communication Technologies for Disaster Management, Münster, Germany.
Detzer, S., Gruczik, G., Widera, A., & Nitschke, A. (2016). Assessment of Logistics and Traffic Management
Tool Suites for Crisis Management. In Proceedings of the European Transport Conference, Barcelona.
Dubost, L., Giroud, F., Boisnon, J.M., Clémenceau A., Quéré, B. (2017) Trialing a Common Operational
Picture in a Simulated Environment. In Proceedings of the International Conference on Information and
Communication Technologies for Disaster Management, Münster, Germany.
Havlik, D., Pielorz, J., Widera, A. (2016) Interaction with Citizens Experiments: From Context-aware Alerting
to Crowdtasking. ISCRAM 2016 Conference Proceedings – 13th International Conference on Information
Systems for Crisis Response and Management, At Rio de Janeiro, Brazil.
Keller, M (2009) Kennzahlenbasierte Wirtschaftlichkeitsbewertung der Integration von Unternehmen in
Produktions- und Logistiknetzwerken, Dissertation, Universität Dortmund.
Keller, M.; Hellingrath, B. (2007) Kennzahlenbasierte Wirtschaftlichkeitsbewertung in Produktions- und
Logistiknetzwerken, in: Otto, A., Obermaier, R.: Logistikmanagement: Analyse, Bewertung und Gestaltung
logistischer Systeme, Wiesbaden: Gabler, pp. 51-76.
Orlikowski, W. (1993) “Learning from Notes: Organizational Issues in Groupware Implementation.” The
Information Society 9 (3): 237–50.
Owen, C.; Brooks, B.; Bearman, C.; Curnin, S. (2016) Values and Complexities in Assessing Strategic-Level
Emergency Management Effectiveness, Journal of Contingencies and Crisis Management, 24, 3, pp. 181-190.
Parmenter, David (2010): Key Performance Indicators (KPI): Developing, Implementing, and Using Winning KPIs. John Wiley & Sons.
Rongier, C., Lauras, M., Galasso, F., & Gourc, D. (2013). “Towards a crisis performance-measurement system“,
International Journal of Computer Integrated Manufacturing, Vol 26 Iss 11, pp. 1087 - 1102.
Rongier, C., Gourc, D., Lauras, M., & Galasso, F. (2010). “Towards a performance measurement system to
control disaster response”, In: Working Conference on Virtual Enterprises, pp. 189 - 196, Springer, Berlin,
Heidelberg.
Reichmann, T. (1990) Controlling mit Kennzahlen. Grundlagen einer systemgestützten Controlling-Konzeption,
München.
Schulz, S. F.; Heigh, I. (2007) Logistics Performance Management in Action: Design and Piloting of a "Development Indicator Tool" for Regional Logistics Units of IFRC, Proceedings of the 1st International Cardiff/Cranfield Humanitarian Logistics Symposium, Faringdon, UK.
Toyama, K. (2015) Geek Heresy: Rescuing Social Change from the Cult of Technology. PublicAffairs.
van den Berg R., Widera, A., Lechtenberg, S., Middelhoff, M., & Hellingrath, B. (2016). Pictograms and Assessment Categories as Crisis Communication Language: Lessons From a Field Exercise with GDACSmobile. In Proceedings of the International Conference on Information and Communication Technologies for Disaster Management, Vienna, Austria.
Verein Deutscher Ingenieure (VDI) (2002) VDI 4400 Logistic Indicator for Distribution, VDI-Guidelines, Part 3.
Wang, W. T. (2012). “Evaluating organisational performance during crises: A multi-dimensional framework”,
Total Quality Management & Business Excellence, Vol. 23 Iss 5-6, pp. 673 - 688.
Widera, A., Lechtenberg, S., Gurczik, G., Bähr, S., & Hellingrath, B. (2017) Integrated Logistics and Transport
Planning in Disaster Relief Operations. Proceedings of the 14th International Conference on Information
Systems for Crisis Response and Management.
Widera, A., & Hellingrath, B. (2016). Making Performance Measurement Work in Humanitarian Logistics. The
Case of an IT-supported Balanced Scorecard. In Haavisto, I., Kovacs, G., & Spens, K. M. (Eds.), Supply Chain
Management for Humanitarians. Tools for Practice (1st ed., pp. 339–352). Kogan Page.
Widera, A.; Hellingrath, B. (2011) Performance Measurement in Humanitarian Logistics. Proceedings of the
NOFOMA conference, Harstad/Norway.
Wiel, W.M. van der et al. (eds.) (2010) Concept Maturity Levels: Bringing Structure to the CD&E Process. Proceedings of I/ITSEC 2010, Interservice/Industry Training, Simulation and Education Conference, Orlando, Florida, November 29 - December 2, 2010.