The RISKSUR EVA Tool (Survtool): a tool for the integrated evaluation of animal health surveillance systems
HAL Id: hal-02627242
Submitted on 26 May 2020

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Distributed under a Creative Commons Attribution - NonCommercial - NoDerivatives 4.0 International License
The RISKSUR EVA tool (Survtool): A tool for the integrated evaluation of animal health surveillance systems

Marisa Peyre, Linda Hoinville, James Njoroge, Angus Cameron, Daniel Traon, Flavie Goutard, Clémentine Calba, Vladimir Grosbois, Alexis Delabouglise, Viktor Varant, et al.

To cite this version:
Marisa Peyre, Linda Hoinville, James Njoroge, Angus Cameron, Daniel Traon, et al. The RISKSUR EVA tool (Survtool): A tool for the integrated evaluation of animal health surveillance systems. Preventive Veterinary Medicine, Elsevier, 2019, 173, 10.1016/j.prevetmed.2019.104777. hal-02627242
Marisa Peyre, Linda Hoinville, James Njoroge, Angus Cameron, Daniel Traon, Flavie Goutard, Clémentine Calba, Vladimir Grosbois, Alexis Delabouglise, Viktor Varant, Julian Drewe, Dirk Pfeiffer, Barbara Häsler

Animal, Santé, Territoires, Risques et Ecosystèmes (ASTRE), Centre de Coopération Internationale en Recherche Agronomique pour le Développement (CIRAD), TA C 22/E Campus International Baillarguet, 34398 Montpellier Cedex 5, France
Veterinary Epidemiology, Economics and Public Health Group, Royal Veterinary College, Hawkshead Lane, North Mymms, Hatfield AL9 7TA, UK
Tracetracker, Prinsens gate 5, 0152 Oslo, Norway
AusVet Animal Health Services, Wentworth Falls, New South Wales, Australia
Arcadia International E.E.I.G., Chaussée de Louvain 1220, 1200 Brussels, Belgium
Veterinary Epidemiology, Economics and Public Health Group and Leverhulme Centre for Integrative Research on Agriculture and Health, Department of Pathobiology and Population Sciences, Royal Veterinary College, Hawkshead Lane, North Mymms, Hatfield AL9 7TA, UK
Keywords: Animal health; Animal health economics; Decision tool
Information about infectious diseases at the global level relies on effective, efficient and sustainable national and international surveillance systems. Surveillance systems need to be regularly evaluated to ensure their performance and the quality of the data and information provided, as well as to allocate resources efficiently. Currently available frameworks for evaluation of surveillance systems in animal or human health often treat technical, process and socio-economic aspects separately instead of integrating them. The surveillance evaluation (EVA) tool, a support tool for the evaluation of animal health surveillance systems, was developed to provide guidance for integrated evaluation of animal health surveillance, including economic evaluation. The tool was developed by international experts in surveillance and evaluation in an iterative process of development, testing and revision, taking into account existing frameworks and guidance, scientific literature and expert opinion. The EVA tool encompasses a web interface for users to develop an evaluation plan, a Wiki classroom providing theoretical information on all required concepts, and a generic evaluation work plan to facilitate implementation and reporting of outputs to decision makers. The tool was tested by planning and conducting epidemiological and economic evaluations of surveillance for classical and African swine fever, bovine virus diarrhoea, avian influenza, and Salmonella Dublin in five European countries. These practical applications highlighted the importance of a comprehensive evaluation approach to improve the quality of the evaluation outputs (economic evaluation; multiple attribute assessment) and demonstrated the usefulness of the guidance provided by the EVA tool. At the same time, they showed that comprehensive evaluations might be constrained by practical issues (e.g. confidentiality concerns, data availability) and resource scarcity. In the long term, the EVA tool is expected to increase professional evaluation capacity and help optimise animal health surveillance system efficiency and resource allocation for both public and private actors of the surveillance systems.
1. Introduction
The development of efficient, effective and sustainable surveillance systems, in particular to detect emerging and exotic diseases in a timely manner, has gained importance in recent years (Anthony et al., 2012). Surveillance systems provide useful information for effective disease prevention and control, thereby improving food system productivity and food security, animal welfare, economic development and access to international trade. Information about infectious diseases at a global scale relies on national and international surveillance systems. The resources, capacity and reliability of national public and/or private surveillance systems can vary considerably, especially in countries characterized by weak economies, political instability (Jebara, 2004) and/or a limited surveillance tradition.

Received 7 February 2018; Received in revised form 12 March 2019; Accepted 17 September 2019
Corresponding author at: CIRAD, Campus International de Baillarguet TA A-117/E, F-34398 Montpellier, France. E-mail address: (M. Peyre).
Preventive Veterinary Medicine 173 (2019) 104777
0167-5877/ © 2019 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license.

To make best use of available resources, it is critical to perform timely and relevant evaluations of surveillance systems (Drewe et al., 2012). Evaluation is an essential step in the policy cycle and allows transparent interpretation of outputs, more objective decision-making and resource allocation, as well as improvements in system design and enhanced acceptance of system outputs by stakeholders (Jann and Wegrich, 2007). Given the almost continuous changes occurring in disease epidemiology, and therefore in surveillance system activities, it is essential to regularly (re-)evaluate surveillance, taking into account surveillance system organisation, effectiveness evaluation attributes, and economic assessment criteria and methods. This requires the design of comprehensive, practical, and affordable evaluation plans for timely assessment of not only the effectiveness and efficiency of a surveillance programme but also its underlying determinants (e.g. acceptability).
As described by Calba et al. (2015a) and Drewe et al. (2012), available frameworks and/or guidance for evaluation of surveillance systems in the animal and human health fields provide robust foundations, but could be expanded towards a more comprehensive, integrated approach. Indeed, none of the available guides provides a framework for a comprehensive evaluation that includes functional, process, technical and economic aspects simultaneously (Calba et al., 2015a, 2013a). Consequently, there is a need to integrate existing evaluation frameworks, practical methods and tools for the assessment of surveillance attributes and to provide a standardised evaluation terminology. Specific evaluation of surveillance (as opposed to the evaluation of disease interventions) has been performed only on limited occasions, and a variety of approaches and methods are used without a generally agreed protocol (Drewe et al., 2012). Indeed, more than 25 attributes have been described for the evaluation of animal health surveillance systems, making a complete evaluation – if all attributes are used – time-consuming and expensive. In some cases no methods have been described for the measurement of these attributes, and only a fraction of these evaluation attributes have been included in evaluation process templates and in practical case studies (Drewe et al., 2012; Hoinville et al., 2013; Calba et al., 2013b).
There are always three main parts in the evaluation process of surveillance: planning, implementation and reporting (Calba et al., 2015a). Guidance and support are needed for these three parts, and especially for the definition of the evaluation plan, involving: i) the description of the surveillance system/component under evaluation; ii) the socio-economic context and the rationale for evaluation of the surveillance; iii) the definition of a precise evaluation question; and iv) the choice of evaluation attributes to be measured. The choice of evaluation attributes and of the methods to measure them will depend on the type of evaluation considered (e.g. evaluation of the process vs. evaluation of the outputs of surveillance) and on the surveillance system and its socio-economic context (e.g. availability of resources, the structuring of the animal production industry, political will to develop or sustain animal breeding and production, etc.) (Peyre et al., 2011).
A survey conducted in seven European countries highlighted that decision-makers considered economic criteria to be important in decision-making for surveillance (Häsler et al., 2014; RISKSUR consortium, 2014). Yet economic evaluations of surveillance (EES) are sparse (Drewe et al., 2012; Undurraga et al., 2017; Wall et al., 2017), and available guidelines for the evaluation of surveillance fail to provide guidance on systematic economic appraisal (Calba et al., 2015a). The use of economic evaluation to inform surveillance system design has been limited so far, mainly due to a lack of appropriate guidance allowing for the practical use of these methods, and to limited understanding of and trust in their outputs among decision makers (Calba et al., 2015a).
The RISKSUR consortium investigated novel approaches for cost-effective surveillance and developed a web-based surveillance design and evaluation tool directed at users with advanced surveillance knowledge and skills. The main objective of the surveillance evaluation (EVA) tool was to provide a practical framework to guide users in planning and implementing integrated epidemiological and economic evaluations of surveillance systems. The EVA tool was developed by building on existing evaluation frameworks, methods and tools, taking into account input from expert meetings and discussions. The RISKSUR surveillance design framework complements the EVA tool to support the design, review and documentation of surveillance systems (Comin et al., 2016). The EVA tool development process, characteristics and application using practical case studies are described and discussed in this paper.
2. Methods
A five-stage process was used to develop the EVA tool, from the initial development of the evaluation approach to its validation and refinement: i) technical workshops of international surveillance experts (researchers and users) to develop and agree on a first conceptual model including the key elements of an evaluation plan; ii) expert opinion elicitation to review and score the relevance of evaluation attributes and the methods to assess them; iii) application of the model to practical case studies (referred to as development case studies); iv) development of the web tool; and v) application of the web tool to case studies (referred to as validation case studies) to test the feasibility and operability of the evaluation model and tool. This process is described in the following sections.
2.1. Technical workshop to develop the conceptual model
A one-day technical workshop was held in June 2013 (Montpellier, France), attended by 15 experts from six European countries. The experts were selected based on their expertise in surveillance, surveillance design, evaluation, economic evaluation and surveillance evaluation (Fig. 1). The objectives of the workshop were to review and select the key elements to be included in an integrated evaluation framework looking at system performance, process and value/impact (Fig. 2). At the end of the workshop, a conceptual model of the EVA tool was proposed, and a first version of the EVA tool was subsequently developed by the enlarged RISKSUR consortium (including the initial group of experts) based on this model (RISKSUR Consortium, 2013).
2.2. Expert opinion elicitation process
Two independent expert opinion elicitation rounds were implemented among the experts involved in the first technical workshop (Fig. 1) to i) define the list of evaluation attributes and their relevance level according to defined surveillance contexts; and ii) identify and validate attribute assessment methods. This process is described below.

Fig. 1. Field of expertise and number of experts per field involved in the EVA tool development process.
2.2.1. Evaluation attribute list and relevance
A list of evaluation attributes was developed by the RISKSUR project team based on data retrieved from the literature (List 1) (Calba et al., 2013b; Drewe et al., 2012). A three-stage expert opinion elicitation process was then conducted (Fig. 3). First, a technical workshop with 11 experts was held to review the evaluation attribute list, including definitions, and to identify the attributes relevant only for the evaluation process cycle (Bilal, 2001) (Fig. 4), resulting in List 2. Second, based on the outputs of the first technical workshop, a second workshop with 15 experts was held to define the relevance level of each attribute according to a specific evaluation context (i.e. a combination of evaluation objective and evaluation question). The relevance levels were defined in a qualitative manner using three categories: low, medium and high relevance. The experts were asked to justify their choices. The inputs were compiled and reviewed by the project team to identify discrepancies between experts and to produce a generic statement on the relevance level of each attribute. Four different degrees of agreement/disagreement were identified: 1) full agreement; 2) moderate disagreement; 3) disagreement; 4) strong disagreement. Finally, a third technical workshop with the same 15 experts was held to reach a consensus on the attributes with disagreement and strong disagreement, in order to generate a final list.
2.2.2. Attribute assessment methods (including economic analysis)
A literature review was conducted to identify available evaluation attribute assessment methods, including economic analysis techniques. A specific search algorithm was used to retrieve methods for each evaluation attribute, combining keywords for the attribute and for methods, e.g. (sensitivity + (methods OR tools)). Information on the type of method (qualitative, semi-quantitative, quantitative), the purpose of use, the target users, a descriptive summary, data and expertise requirements, as well as strengths and limits, was retrieved from the published literature. This information was compiled by the RISKSUR project team and sent out for review to the experts who developed the method and/or applied it in the field, to ensure that the compiled data were valid and relevant.

Fig. 2. Scale and complexity of different levels of health surveillance system evaluation: technical (looking at the performance of the system); process (looking at the factors affecting system performance); comprehensive (looking at the value of the system).

Fig. 3. Expert opinion process used to define and agree on the list and relevance of evaluation attributes.
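The keyword-combination step described above can be sketched as follows. This is a minimal illustration only: the attribute list and the boolean query format are assumptions for the example, not the actual RISKSUR search strings.

```python
# Sketch of the per-attribute search-string construction described above.
# The attribute names and boolean format are illustrative assumptions,
# not the project's actual search protocol.

METHOD_TERMS = ["methods", "tools"]

def build_query(attribute: str) -> str:
    """Combine an evaluation attribute with method keywords,
    e.g. 'sensitivity AND (methods OR tools)'."""
    return f"{attribute} AND ({' OR '.join(METHOD_TERMS)})"

# Illustrative subset of the 19 evaluation attributes
attributes = ["sensitivity", "timeliness", "acceptability"]
queries = [build_query(a) for a in attributes]
print(queries[0])  # sensitivity AND (methods OR tools)
```

One query string per attribute keeps the search reproducible across attributes while varying only the attribute keyword, which matches the systematic character of the review described here.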
2.3. Development case studies
The first version of the EVA tool was applied to six development case studies (Table 1) (Peyre et al., 2015). The case studies were selected to ensure representativeness of the different surveillance objectives, target species and hazards under surveillance. The development case studies allowed the logic of the tool to be tested and developed further. They also provided data on the evaluation attribute list, the degree of relevance of attributes according to a specific context, attribute assessment methods, and challenges linked to specific economic assessment techniques. This information was part of an iterative process of framework development, and relevant feedback was included in the expert opinion elicitation processes described above (Fig. 3).
2.4. EVA tool web application
An online web version of the EVA tool was developed by Tracetracker Ltd. The online tool was linked to a Wiki Classroom application that provides theoretical evaluation concepts and the EVA tool user manual. The operability of the web tool was validated using the validation case studies described below.
2.5. Validation case studies
Three case studies were selected to perform an integrated epidemiological and economic evaluation and thereby validate the feasibility, operability and usefulness of the EVA web tool: early detection of avian influenza in the UK; freedom from classical swine fever in wild boar in Germany; and case detection of Salmonella Dublin in cattle in Sweden.
3. Results
3.1. Outputs from technical expert workshops
The EVA tool framework was developed based on other existing frameworks, guidelines and methods available in the literature (Calba et al., 2015a,b). During the first technical workshop, the experts identified all common critical elements and essential evaluation steps from those guidelines to be included in the EVA tool, with the aim of providing a harmonised approach to surveillance evaluation based on the validated and available reference guides, including guidelines from the Centers for Disease Control and Prevention (CDC), WHO and OIE (Calba et al., 2015a,b).
To ensure adequate framing of the evaluation and to define the specific evaluation question, critical elements of the evaluation context need to be captured. The expert workshops highlighted the critical importance of the evaluation context (especially the surveillance objective and the evaluation needs) and of the evaluation question in defining the relevance of the evaluation attributes to be included in the evaluation process. The conceptual model of the EVA tool addressed these needs by defining four fundamental steps: what is my situation; why am I doing an evaluation; what to evaluate; and how.

The experts identified thirteen critical elements of the context (surveillance system and evaluation needs) that were deemed essential to frame the evaluation, define the evaluation question, and analyse and discuss the outputs of the evaluation (Table 2).
A list of 11 evaluation questions was defined by the experts to account for diverse evaluation needs (Table 3). A decision tree pathway was also developed to assist the user with the choice of the evaluation question. In this pathway, the users are guided through a series of questions (eleven in the longest pathway) to define their evaluation priorities (e.g. system or component evaluation; previous knowledge of effectiveness; need for economic analysis) and to identify the most relevant evaluation question. At the end of the pathway, the user is directed to the evaluation question list and the tool pre-selects the appropriate question to assist the evaluators in their final choice. Users who feel comfortable with the selection of the evaluation question can go directly to the list of evaluation questions.
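A decision pathway of this kind can be sketched as a chain of yes/no questions that ends in a pre-selected evaluation question. The branching logic below is an illustrative assumption (the paper does not publish the pathway itself); only the question codes follow the labels used in Table 3.

```python
# Minimal sketch of a decision-tree pathway that pre-selects an evaluation
# question. Question codes (Q2, Q3, Q4, Q10) follow the paper's Table 3
# labels, but the branching rules are an illustrative assumption, not the
# EVA tool's actual pathway.

def preselect_question(component_level: bool,
                       effectiveness_known: bool,
                       include_economics: bool) -> str:
    if not component_level:
        # System-level evaluation (Q8-Q11 in Table 3)
        return "Q10"
    if not include_economics:
        # Purely technical component evaluation
        return "Q2"
    # Economic component evaluation: least-cost comparison; the variant
    # depends on whether effectiveness is already known
    return "Q3" if effectiveness_known else "Q4"

print(preselect_question(True, True, True))  # Q3
```

In the tool, the answer at each node narrows the candidate questions until one is pre-selected; the user can still override the suggestion from the full list.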
3.2. General overview of the EVA tool framework
The EVA tool is freely available online and is shared under the principles of a non-commercial Creative Commons licence (i.e. the tool can be freely used and shared for any non-commercial purpose, but appropriate credit should be given, a link to the licence provided, and any changes made indicated). The tool is linked to the surveillance evaluation Wiki (surveillance-evaluation/), which is also freely available upon registration as a member.
Fig. 4. Evaluation process cycle, adapted from the Better Evaluation initiative rainbow framework.
The tool has been organized into three main sections to capture all the elements critical to an evaluation process, as highlighted by the experts during the iterative development of the tool (Fig. 5):
Section 1) a general introduction to the tool and essential information on evaluation concepts, including evaluation attributes and economic methods, to promote understanding of the evaluation process and economic evaluation;
Section 2) guidance on how to define an evaluation plan, based on Steps 1 and 2, with data entry on the evaluation context and the evaluation question, and Steps 3 and 4, where the tool facilitates the selection of relevant evaluation attributes and assessment methods (including economic analysis);
Section 3) guidance on how to perform the evaluation and how to report the outputs of the evaluation to decision makers.
Table 1
Overview of the case studies applied to develop and validate the EVA tool.

Case studies (columns):
1. Case detection of Salmonella Dublin in cattle in Sweden (target species: cattle; level: country)
2. Early detection of avian influenza in the United Kingdom (target species: laying hens; level: country)
3. Case detection of bovine virus diarrhoea virus in the UK (target species: cattle; level: country)
4. Demonstrate freedom from classical swine fever in wild boar in Germany (target species: wild boar; level: region)
5. Demonstrate freedom from bluetongue in ruminants in Germany (target species: ruminants; level: country)
6. Measuring prevalence of highly pathogenic avian influenza (target species: poultry; level: country)

Rows recorded for each case study (individual tick-mark entries are not fully recoverable from the source layout): hazard under surveillance; surveillance goal (case finding; demonstrate freedom from disease; early detection; prevalence estimate); surveillance structure (multi-component; single component); use of the case study (EVA tool development; EVA tool validation); organisational attribute evaluated (surveillance system organisation); functional attributes evaluated (acceptability; availability; engagement; simplicity; sustainability); performance attributes evaluated (coverage; detection fraction; precision; sensitivity (other than detection fraction); timeliness); economic attributes evaluated (cost; economic efficiency).

PRRS = Porcine Respiratory and Reproductive Syndrome, AD = Aujeszky's Disease, CSF = Classical Swine Fever, AI = Avian Influenza, ASF = African Swine Fever, BVD = Bovine Viral Diarrhoea, BHV1 = Bovine Herpes Virus 1, BT = Bluetongue.
Table 2
List of evaluation context elements included in the EVA tool and their relevance in the framing of the evaluation process.

- Surveillance objective: impacts the selection of evaluation attributes.
- Hazard name: provides information about the disease under evaluation, which will affect the complexity of the evaluation (e.g. between animal diseases and zoonotic diseases).
- Geographical area: provides information about the scale of the evaluation.
- Legal requirements: provide information about whether an effectiveness target needs to be met.
- Strengths and weaknesses of the current approach: provide summary information about the rationale behind the decision to evaluate; help the evaluator frame the evaluation question.
- Stakeholder concerns about the current approach: provide information about the involvement and interest of decision makers in the evaluation process; help the evaluator frame the evaluation question.
- Alternative strategies to consider: provide information about the type of evaluation required (based on a counterfactual or not).
- Do you want to evaluate or change the system or some components in the system?: provides information about the level of evaluation.
- How many components will you include in this evaluation?: provides information about the number of counterfactuals considered.
- Are you considering risk-based options?: relevant for the inclusion of the attribute "risk-based criteria definition" in the evaluation plan.
- Will you consider the costs of surveillance in your evaluation?: provides information about the interest in economic evaluation.
- Do you know the current cost of your system and/or components?: provides information about the data required.
- Do you have a budget constraint?: provides information for the economic evaluation (whether a budget target must be met).
3.3. Relevance of evaluation attributes
A total of 19 evaluation attributes were included in the final list consolidated within the RISKSUR project team (Table 4). The differences in the relevance of evaluation attributes mainly depended on the surveillance objective (e.g. early detection; freedom from disease; case finding), the evaluation question (e.g. value attributes, organisational attributes) and, in some situations, on the surveillance design (e.g. risk-based surveillance) (the full table of attribute relevance can be accessed online).

In the second stage of expert consultation, full agreement was reached on the relevance level of three attributes (acceptability; precision; simplicity), and moderate disagreement was observed for four attributes (negative predictive value (NPV); positive predictive value (PPV); sensitivity; risk-based criteria). Disagreement was observed for seven attributes: availability & sustainability; cost; compatibility; false alarm rate; multiple hazard; representativeness; timeliness. Strong disagreement was observed for two attributes: bias and coverage. Disagreements between experts were also observed for two other attributes (robustness and surveillance system organisation), but these were caused by misunderstanding of the two attribute definitions, which were subsequently revised. A consensus between experts was reached during the last stage of the expert consultation process to produce the final list presented in Table 4.
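The compilation of expert relevance ratings into the four agreement degrees (Section 2.2.1) can be sketched as follows. The classification thresholds are an illustrative assumption, since the paper does not specify how ratings were mapped to the four categories.

```python
from collections import Counter

# Sketch of compiling expert relevance ratings (low/medium/high) into the
# four agreement degrees described in Section 2.2.1. The threshold rule on
# the share of the modal rating is an illustrative assumption, not the
# RISKSUR team's actual procedure.

def agreement_degree(ratings: list) -> str:
    counts = Counter(ratings)
    top_share = counts.most_common(1)[0][1] / len(ratings)  # modal share
    if top_share == 1.0:
        return "full agreement"
    if top_share >= 0.8:
        return "moderate disagreement"
    if top_share >= 0.6:
        return "disagreement"
    return "strong disagreement"

print(agreement_degree(["high"] * 15))                          # full agreement
print(agreement_degree(["high"] * 8 + ["medium"] * 4 + ["low"] * 3))  # strong disagreement
```

Whatever the exact rule used, attributes falling in the two lowest agreement categories were the ones taken to the third workshop for consensus, as described above.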
3.4. Guidance on the evaluation attribute assessment methods
A list of 70 different methods and/or specific applications of a method was retrieved from the scientific literature. Their characteristics, including advantages, limits and the competences required to apply them, were validated by the relevant experts and included in the EVA tool and the Wiki. The number of methods validated for each evaluation attribute is indicated in Table 4.
Table 3
List of evaluation questions developed within the EVA tool and the evaluation criteria and methods linked to each question.

Evaluation at the component level:
- Q1. Assess whether one or more surveillance component(s) is/are capable of meeting a specified technical effectiveness target. Criteria: effectiveness. Method: effectiveness attribute assessment.
- Q2. Assess the technical effectiveness of one or more surveillance components. Criteria: effectiveness. Method: effectiveness attribute assessment.
- Q3. Assess the costs of surveillance components (out of two or more) that achieve a defined effectiveness target, where effectiveness is already known. Criteria: effectiveness, cost. Method: least-cost assessment.
- Q4. Assess the costs and effectiveness of surveillance components (out of two or more) to determine which achieves a defined effectiveness target at least cost, where the effectiveness needs to be determined. Criteria: effectiveness, cost. Method: least-cost assessment.
- Q5-Q7. Assess whether a surveillance component generates a net benefit, the biggest net benefit, or the biggest net benefit under a budget constraint, for society, industry, or animal holder(s):
  - benefit measured in monetary terms. Criteria: effectiveness, monetary benefit, cost. Method: cost-benefit assessment.
  - benefit measured in non-monetary terms or expressed as an effectiveness measure. Criteria: effectiveness, non-monetary benefit, cost. Method: cost-effectiveness assessment.
  - benefit measured in both monetary and non-monetary terms (or expressed as an effectiveness measure). Criteria: monetary benefit, non-monetary benefit/effectiveness, cost. Method: cost-benefit and cost-effectiveness assessment.

Evaluation at the system level:
- Q8. Assess the functional aspects of surveillance which may influence effectiveness. Criteria: effectiveness. Method: functional attribute assessment.
- Q9. Assess the technical effectiveness of one or more surveillance components and the functional aspects of surveillance that may influence effectiveness. Method: effectiveness and functional attribute assessment.
- Q10. Assess the technical effectiveness of the surveillance system. Method: effectiveness attribute assessment.
- Q11. Assess the surveillance structure, function and processes. Method: process assessment.
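The question-to-criteria-to-method mapping in Table 3 lends itself to a simple lookup structure. The sketch below covers only a subset of the questions, and the string labels are paraphrased from the table rather than taken from the tool's internals.

```python
# Sketch of Table 3 as a lookup from evaluation question to the evaluation
# criteria and method the EVA tool links to it. Covers a subset of the
# questions; labels are paraphrased from the table, not the tool's API.

QUESTION_PLAN = {
    "Q2": {"criteria": ["effectiveness"],
           "method": "effectiveness attribute assessment"},
    "Q3": {"criteria": ["effectiveness", "cost"],
           "method": "least-cost assessment"},
    "Q5": {"criteria": ["effectiveness", "monetary benefit", "cost"],
           "method": "cost-benefit assessment"},
    "Q11": {"criteria": ["structure", "function", "processes"],
            "method": "process assessment"},
}

def plan_for(question: str) -> str:
    """Summarise the method and criteria linked to a question code."""
    entry = QUESTION_PLAN[question]
    return f"{entry['method']} (criteria: {', '.join(entry['criteria'])})"

print(plan_for("Q3"))  # least-cost assessment (criteria: effectiveness, cost)
```

Once the decision pathway has pre-selected a question, a lookup of this kind is all that is needed to propose the matching criteria and assessment method to the evaluator.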
Fig. 5. General organisation of the EVA tool: Section 1) general introduction to evaluation concepts and economic methods; Section 2) guidance on how to define an evaluation plan; and Section 3) guidance on how to perform the evaluation and how to report the outputs of the evaluation to decision makers.
Table 4
Final list of evaluation attributes included in the EVA tool and number of related assessment methods validated by experts.

Functional attributes:
- Availability and sustainability: the ability to be operational when needed (availability) and the robustness and ability of the system to be ongoing in the long term (sustainability).
- Acceptability and engagement: willingness of persons and organisations to participate in the surveillance system, and the degree to which each of these users is involved in the surveillance. (Could also assess their beliefs about the benefits or adverse consequences of their participation in the system, including the provision of compensation for the consequences of disease.)
- Simplicity: refers to the surveillance system structure, ease of operation and flow of data through the system.
- Flexibility, adaptability: the ability to adapt to changing information needs or operating conditions with little additional time, personnel or allocated funds. The extent to which the system can accommodate collection of information about new health hazards or additional/alternative types of data; changes in case definitions or technology; and variations in funding sources or reporting methods should be assessed.
- Compatibility: compatibility with and ability to integrate data from other sources and surveillance components, e.g. One Health surveillance (part of data collection and data …).
- Multiple hazard: whether the system captures information about more than one hazard. (1 assessment method validated by experts)

Organisational attributes:
- Risk-based criteria definition: validity and relevance of the risk criteria selected and the approach/method used for their identification.
- Surveillance system organisation: an assessment of the organisational structures and management of the surveillance system, including the existence of clear, relevant objectives; the existence of steering and technical committees whose members have relevant expertise and clearly defined roles and responsibilities; stakeholder involvement; and the existence of effective processes for data management and dissemination of information.

Effectiveness attributes:
- Coverage: the proportion of the population of interest (target population) that is included in the surveillance activity.
- Representativeness: the extent to which the features of the population of interest are reflected by the population included in the surveillance activity; these features may include herd size, production type, age, sex, geographical location or time of sampling (important for some systems, e.g. for vector-borne disease).
- False alarm rate (inverse of specificity): proportion of negative events (e.g. non-outbreak periods) incorrectly classified as events (outbreaks). This is the inverse of the specificity but is more easily understood than specificity.
- Bias (accuracy): the extent to which a prevalence estimate produced by the surveillance system deviates from the true prevalence value. Bias is reduced as representativeness is improved.
- Precision: how closely defined a numerical estimate is. A precise estimate has a narrow confidence interval. Precision is influenced by prevalence, sample size and the surveillance approach used.
- Timeliness: timeliness can be defined in various ways. It is usually defined as the time between any two defined steps in a surveillance system; the time points chosen are likely to vary depending on the purpose of the surveillance activity. For planning purposes, timeliness can also be defined as whether surveillance detects changes in time for risk mitigation measures to reduce the likelihood of further spread. The precise definition of timeliness chosen should be stated as part of the evaluation process. Some suggested definitions are:
  - For early detection and demonstrating freedom: measured using time (time between introduction of infection and detection of the outbreak or presence by the surveillance system), or using case numbers (number of animals/farms infected when the outbreak or infection is detected).
  - For case detection to facilitate control: measured using time (time between infection of an animal or farm and its detection), or using case numbers (number of other animals/farms infected before the case is detected).
  - For detecting a change in prevalence: measured using time (time between an increase in prevalence and its detection), or using case numbers (number of additional animals/farms infected when the prevalence increase is identified).
- Sensitivity (detection probability and detection fraction): sensitivity of a surveillance system can be considered at three levels. Surveillance sensitivity (case detection probability) refers to the proportion of individual animals or herds in the population of interest that have the health-related condition of interest that the surveillance system is able to detect. Sensitivity could be measured in terms of detection fraction (number of cases detected divided by the coverage level) in a context of non-exhaustive coverage. Surveillance sensitivity (outbreak detection) refers to the probability that the surveillance system will detect a significant increase (outbreak) of disease. This
Novel methods which were developed as part of the RISKSUR project to assess the risk-based definition criteria (EVA Risk), acceptability and engagement and benefits (AccePT method), and effectiveness were also included in the EVA tool (Calba et al., 2015b; Grosbois et al., 2015).
3.5. Guidance on economic evaluation concepts and methods
The EVA tool further promotes the understanding and use of eco-
nomic evaluation by explaining relevant economic theory and chal-
lenges underpinning economic evaluation of surveillance. The re-
lationship between surveillance, intervention and loss avoidance along
with the value of information, and non-monetary benefits are described
and linked to economic analysis methods commonly used in animal
health. In order to promote best practices in economic evaluation of
surveillance, guidance and practical information on economic evalua-
tion is provided both in the tool itself and the Wiki. A series of relevant
questions that allow defining an economic evaluation question has been
developed to help frame the evaluation context and the evaluation
questions according to this context. Out of the 11 evaluation questions
defined in the tool, 5 are economic evaluation questions covering three
common types of economic evaluation methods: least-cost assessment,
cost-effectiveness and cost-benefit analysis (Table 3). These economic
analysis techniques are listed and described in the tool and linked to the economic evaluation methods described in detail in the evaluation Wiki.
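To illustrate how these three types of economic evaluation question differ in practice, the following sketch compares two hypothetical surveillance designs. All costs, sensitivities and losses avoided are invented for illustration; they are not taken from the EVA tool or its case studies.

```python
# Hypothetical comparison of two surveillance designs.
# All numbers are illustrative, not drawn from the EVA tool or case studies.
designs = {
    "passive":    {"cost": 50_000.0, "sensitivity": 0.70, "losses_avoided": 120_000.0},
    "risk_based": {"cost": 80_000.0, "sensitivity": 0.90, "losses_avoided": 180_000.0},
}

# Least-cost assessment: cheapest design meeting a required effectiveness level.
required_sensitivity = 0.65
feasible = {k: v for k, v in designs.items() if v["sensitivity"] >= required_sensitivity}
least_cost = min(feasible, key=lambda k: feasible[k]["cost"])

# Cost-effectiveness: cost per unit of (non-monetary) effectiveness.
cer = {k: v["cost"] / v["sensitivity"] for k, v in designs.items()}

# Cost-benefit: net benefit once effectiveness is translated into monetary losses avoided.
net_benefit = {k: v["losses_avoided"] - v["cost"] for k, v in designs.items()}

print(least_cost)                           # cheapest design meeting the constraint
print({k: round(r) for k, r in cer.items()})
print(max(net_benefit, key=net_benefit.get))
```

Note how the three question types can point to different designs: with these invented figures, the least-cost and cost-effectiveness criteria favour the cheaper design, while the cost-benefit criterion favours the more sensitive one.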
3.6. Guidance on how to report the evaluation outputs back to the decision makers
Detailed guidance and a roadmap on how to report the evaluation
outputs to decision makers has been integrated in the EVA tool and the
evaluation Wiki.
3.7. The EVA Wiki: a dynamic platform on evaluation concepts and methods
The EVA Wikispace was developed to gather and share extensive
information and references/links to support the successful use and
further development of the EVA tool (
surveillance-evaluation/). This information-sharing space helps engage the scientific community by allowing users to edit and add information, thereby keeping the tool relevant and up to date with the latest developments in the field of animal health surveillance evaluation. The EVA wiki is organised in a similar way to the EVA tool but
provides additional sections on important elements of evaluation and
economic evaluation concepts along with background and practical
information on the EVA tool and application examples.
3.8. Application of the EVA tool to case studies
The application of the tool for economic evaluation of surveillance
for classical and African swine fever, bovine virus diarrhoea, avian in-
fluenza, and Salmonella Dublin in five European countries provided
important feedback on the relevance, functionality, advantages, feasi-
bility and limits of the EVA tool for surveillance evaluation.
All evaluation questions selected were deemed feasible and could be
addressed using available methods and data.
For each case study, 4–9 evaluation attributes were identified by the
EVA tool as highly relevant for the evaluation and the users included
2–9 in their evaluations (Table 1). This choice was reported to be
mainly due to practical and timing issues (e.g. time to collect additional
Table 4 (continued)

may be an increase in the level of a disease that is currently present in the population, or the occurrence of any cases of a disease not currently present. Surveillance sensitivity (presence) refers to the probability that disease will be detected if present at a certain level (prevalence) in the population.
- PPV: probability that a health event is present given that a health event is detected. (2 assessment methods validated by experts)
- NPV: the probability that no health event is present given that no health event is detected. (1 assessment method validated by experts)
- Robustness: the ability of the surveillance system to produce acceptable outcomes over a range of assumptions about uncertainty by maximising the reliability of an adequate outcome. Robustness can be assessed using info-gap models.

Value attributes:
- Cost: the concept of economic cost includes 1) the losses due to disease (e.g. reduced milk yield, mortality), and 2) the resources required to detect the disease by a system (e.g. time, services, consumables for surveillance). In economic evaluation, the resources used to detect disease are compared with the disease losses with the aim of identifying an optimal balance where a higher economic efficiency is achieved. Estimation of the total economic cost stemming from losses and expenditures is called a disease impact assessment. Estimation of the resource expenditures only is called a cost analysis. (6 assessment methods validated by experts, including 2 unpublished from RISKSUR members)
- Benefit: the benefit of surveillance quantifies the monetary and non-monetary positive direct and indirect consequences produced by the surveillance system and assesses whether users are satisfied that their requirements have been met. This includes financial savings, better use of resources and any losses avoided due to the existence of the system and the information it provides. These avoided losses may include the avoidance of animal production losses, human mortality and morbidity, decreases in consumer confidence, threatened livelihoods, harmed ecosystems and utility loss. Often, the benefit of surveillance estimated as losses avoided can only be realised by implementing an intervention. Hence, it is necessary to also assess the effect of the intervention and look at surveillance, intervention and loss avoidance as a three-variable relationship. Further benefits of surveillance include maintained or increased trade, improved ability to react in case of an outbreak of disease, maintaining a structured network of professionals able to react appropriately against a (future) threat, maintaining a critical level of infrastructure for disease control, increased understanding about a disease, intellectual capital and social capital.

Functional = attributes aimed to evaluate the system function; effectiveness = attributes aimed to evaluate the system performance; organisational = attributes aimed to evaluate the system management and process.
data to assess acceptability and engagement). All case studies con-
ducted an assessment of the costs in comparison to one or more effec-
tiveness criteria; one case study translated the effectiveness measures
into a monetary benefit for inclusion in a cost-benefit analysis. Because
all case studies looked at new designs to either complement or replace
old designs, the analyses were prospective / ex ante. Users reported
difficulties in the estimation of fixed and variable costs, non-monetary
benefits, co-benefits resulting from using synergies, and the selection of
meaningful effectiveness measures for inclusion in economic analysis.
Importantly, the limits identified in the case studies were linked to the application of the evaluation methods rather than to the use of the tool itself.
4. Discussion
The EVA tool was developed to provide practical guidance on how to design integrated evaluation protocols for surveillance, conduct an evaluation and communicate the findings to facilitate decision-making.
The EVA tool provides a practical evaluation framework, which
guides users on the implementation of the evaluation and provides
essential elements for the interpretation of the results. Within the
RISKSUR project a complementary tool (surveillance design frame-
work) was also developed to support design or re-design of a surveil-
lance system (Comin et al., 2016). As for the EVA tool, the design fra-
mework does not take decisions for the users but provides specific
guidance to facilitate the design or re-design of surveillance system
according to the user’s specific needs. The design framework is also
complemented by a web interface and a Wiki classroom (https:// The combined set
of tools covers all the essential steps in the decision making cycle for
strategic planning of animal health surveillance (design – evaluation –
re-design) (Comin et al., 2016). It promotes understanding of critical
concepts, suitable methods, data and time requirements and is expected
to nurture the use of economic evaluation of surveillance, which is still
in its infancy (Häsler et al., 2015).
The evaluation question is the most important aspect of the eva-
luation process. Evaluation is intrinsically linked to action; it makes
little sense, and is of limited interest, to perform an evaluation without
a specific objective for action or at least the willingness to consider
action (the outcome may be to decide that no action is currently
needed). In order to guide the evaluator in the selection of an appropriate evaluation question, a list of evaluation questions was developed along with a selection guidance pathway and integrated within the EVA
tool. However, this list might not be exhaustive and could be reviewed
based on feedback from users of the tool and/or comments made in the
EVA wiki.
Until recently, recommendations on the choice of attributes to
evaluate animal health surveillance systems have been based on case
study application and methodological experience from public health
evaluation (Calba et al., 2015a). In 2011, Hoinville et al. (2013) pro-
vided a comprehensive list of evaluation attributes relevant to the
evaluation of animal health surveillance as an output of the first In-
ternational Conference on Animal Health Surveillance. Drewe et al.
(2015) provided an attribute selection matrix to aid with ranking of
evaluation attributes according to the surveillance objective of the
system under evaluation. However, these studies only provided limited
information on the relevance of the evaluation attributes according to a
specific context. Indeed, ranking of evaluation attributes was shown to
be a challenging process as it depends on many factors and degrees of interaction (Peyre et al., 2014). Within the RISKSUR project and with
the development of the EVA tool, we further contributed to this work by
i) identifying which attributes of the system are important for the
evaluation process rather than for the design process; ii) identifying the
contextual factors impacting on the priority of evaluation attributes; iii)
assessing the links between the attributes, and iv) promoting the use of
a comprehensive list of evaluation attributes with expert-defined relevance levels rather than a selection of attributes. In order to ensure maximum flexibility of the decision support tool without withholding information from the user, and to account for the difficulties in reaching expert consensus during the process, it was decided that the choice of the evaluation attributes to be included in the evaluation process would ultimately be determined by the user; the tool provides basic suggestions that can be considered by the user and overridden manually if needed.
A key innovative feature of the EVA tool is the provision of user-
friendly and practical guidance to support the design and conduct of
economic evaluation of surveillance. Economic theory underpinning
economic evaluation of surveillance is explained and challenges high-
lighted that accrue from application of differing paradigms. In parti-
cular, the three-variable relationship between surveillance, interven-
tion and loss avoidance; value of information, and non-monetary
benefits are elaborated and linked to economic analysis methods
commonly used in animal health. We identified and explained the use
of the most common economic evaluation criteria according to the
different surveillance objectives and evaluation questions. This represents added value in the guidance given to decision makers (technical advisers) to facilitate and promote the use of economic evaluation.
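The three-variable relationship between surveillance, intervention and loss avoidance can be sketched with a small hypothetical example: losses avoided through earlier detection are only realised once an intervention is triggered, so both surveillance and intervention costs enter the calculation. All values below are invented for illustration.

```python
# Hypothetical sketch: the benefit of surveillance is realised through intervention.
# Earlier detection -> smaller outbreak -> fewer losses. All values are invented.

def expected_losses(days_to_detection: float, daily_loss: float = 2_000.0) -> float:
    """Outbreak losses assumed, for simplicity, to grow linearly with detection delay."""
    return days_to_detection * daily_loss

baseline_days, enhanced_days = 30.0, 12.0  # detection delay without/with enhanced surveillance
surveillance_cost = 15_000.0               # extra cost of the enhanced surveillance component
intervention_cost = 10_000.0               # control measures triggered by detection

losses_avoided = expected_losses(baseline_days) - expected_losses(enhanced_days)
net_benefit = losses_avoided - (surveillance_cost + intervention_cost)
bcr = losses_avoided / (surveillance_cost + intervention_cost)  # benefit-cost ratio

print(losses_avoided, net_benefit, round(bcr, 2))
```

The point of the sketch is structural rather than numerical: evaluating surveillance in isolation (its cost alone) would miss both the intervention cost it triggers and the losses that the surveillance-plus-intervention pair avoids.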
The tool has also been developed as a collaborative tool to enable
regular update by users and to ensure its sustainability and relevance
over time.
Evaluation itself is only a means to an end: it helps to see what is
happening so that the surveillance system can be improved and changes
promoted based on systematic analysis and reflection. The purpose of
evaluations is to provide feedback to decision makers about program
operations and their (cost-)effectiveness so that their decisions can be as
fully informed as possible. Ideally, the ad hoc evaluation exercise is complemented by a deep analysis of the results, which are placed in the global context of policy and/or operational interventions. Potentially, this analysis
would lead to the identification of improvement measures at different
levels. Experienced administrators and evaluators know that this does
not often happen. Evaluations may be undertaken because they are
required, and the ad hoc evaluation reports are subsequently not analysed in full detail. This may occur for several reasons, including: failure to address directly the policy makers' or program administrators' principal questions (wrong selection of the evaluation question); failure to communicate the evaluation results in a way that can be readily understood by non-evaluation experts (what to communicate?); lack of a clear understanding of who the primary and secondary audiences for the results are (who to target?); failure to match the results of the evaluation with decision makers' planning, during which policy or programmatic operational decisions are made (when to target?); and evaluation findings perceived as too challenging to implement by stakeholders when no preparatory work is associated with the evaluation results, which could lead to resistance in implementing changes (are proposals for change subject to high acceptability and appropriation?).
The EVA tool also promotes the application of an integrated eva-
luation process. It generates a balanced suggestion of evaluation attri-
butes and measurement methods to assess not only effectiveness (e.g.
sensitivity) but also functional aspects influencing the overall perfor-
mance of a surveillance system (e.g. acceptability, flexibility) and
economic efficiency. The functional attributes are of critical importance
to generate meaningful recommendations for all stakeholders (Fig. 2).
The purpose of an evaluation and the research that goes into it is not just to tell whether or not the surveillance has been a success. The real value of evaluation lies in its ability to help identify and correct problems, as well as to celebrate progress. Further reflection on
how to make the surveillance even better and more effective is still
required. The results for process and impact should be analysed, and
changes made where they will gain greater effectiveness and/or efficiency. This integrated approach should promote uptake of the evaluation outcomes by helping the technical adviser to position them firmly within the complex process of decision-making.
The application of the EVA tool to practical case studies highlighted
the importance of considering comprehensive evaluation to improve
the quality of the evaluation outputs (economic evaluation; multiple
attributes assessment); and at the same time identified practical issues
and resource constraints to do so. The cost-effectiveness analysis (CEA)
of both Salmonella Dublin and Bovine Virus Diarrhoea in cattle de-
monstrated the challenges associated with interpreting these kinds of
outcomes. For example, the results for Salmonella Dublin demonstrated
that one surveillance design was cheaper than the other one in de-
tecting cases, but it was not clear what the value was of one detected
case and consequently how much money could be invested to detect
these cases.
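The interpretation problem described for the Salmonella Dublin CEA can be made concrete with a small sketch (all figures invented, not taken from the case study): a cost per detected case only supports a recommendation once a monetary value per detected case is assumed, and a break-even value makes that assumption explicit.

```python
# Illustrative sketch of the interpretation problem in a surveillance CEA.
# Figures are invented; they do not come from the Salmonella Dublin case study.
designs = {
    "current":     {"cost": 40_000.0, "cases_detected": 25},
    "alternative": {"cost": 30_000.0, "cases_detected": 22},
}

# Cost-effectiveness ratio: cost per detected case for each design.
cost_per_case = {k: v["cost"] / v["cases_detected"] for k, v in designs.items()}
cheaper = min(cost_per_case, key=cost_per_case.get)  # cheaper per detected case...

# ...but whether the missed cases matter depends on the (unknown) value of a detected case.
delta_cost = designs["current"]["cost"] - designs["alternative"]["cost"]
delta_cases = designs["current"]["cases_detected"] - designs["alternative"]["cases_detected"]
break_even_value = delta_cost / delta_cases  # value per case at which the designs are equivalent

print(cheaper, round(break_even_value))
```

With these invented figures, the alternative design is cheaper per detected case, but recommending it implicitly assumes a detected case is worth less than the break-even value; above that value, the more expensive design pays off. This is exactly the information the decision makers in the case study could not supply.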
Similarly, in the BVD case study, decision makers could not provide
clear information on what level of effectiveness would be desirable.
While the analysis provided information on the prevalence, distribu-
tion, and risk factors, it was difficult to judge whether the estimated
accuracy generated enough economic value to recover the additional
costs related to coordination and centralisation of data.
The CSF case study highlighted the importance of considering more than one evaluation attribute to provide meaningful results and to
discriminate between the different surveillance designs under evalua-
tion. Indeed, most surveillance designs (including the current one)
reached the target effectiveness value defined in terms of surveillance
system sensitivity. However, the timeliness, simplicity and acceptability
differed between the designs under evaluation. The combined analysis of all these attributes made it possible to identify the most effective and least-cost design (Schulz et al., 2017). The findings from
the case studies illustrated limitations in terms of CEA of surveillance
and identified common pitfalls. The feedback was used to underline the
importance of reflecting carefully on the attributes included in the CEA
and to ask what the outputs will mean in terms of value and whether
they will help to make a recommendation to decision-makers from an
economic point of view. Consequently, further information and refer-
ences were added to the Wiki to explain relevant concepts in more
detail. This feedback was important to be able to refine the tools and
provide further guidance for users. Indeed, by making the evaluator
aware of the limitations of the process (i.e. what could or should be
done), the robustness and usefulness of the evaluation can be increased
by generating higher confidence among decision makers in the eva-
luation outputs and recommendations.
5. Conclusion
The EVA tool was developed to integrate different evaluation di-
mensions in a structured way and to guide the users in the development
and implementation of their evaluation plans for surveillance. The
objective of the tool was to promote the use of comprehensive eva-
luation including economic evaluation by providing detailed informa-
tion on the available methods and relevance according to a specific
evaluation question and context. As such, the EVA tool contributes to
the implementation of robust and standardised evaluations of surveil-
lance activities and thereby helps to produce evidence-based informa-
tion relevant for surveillance decision-makers. This in turn promotes
data quality and stakeholder trust in the animal health status of a
country. In the long term, this will increase professional capacity and
help to improve resource allocation to surveillance for the benefit of all.
The authors intend to update the EVA tool periodically based on feedback from future users, which interested people can provide by using the collaborative wiki web platform.
The research leading to these results received funding from the
European Union’s Seventh Framework Programme for research, tech-
nological development and demonstration under grant agreement N°
310806 (RISKSUR). The authors would like to acknowledge all the
experts of the RISKSUR consortium who provided valuable support and
information, especially Katharina Staerk, Betty Bisdorff, Birgit Schauer,
Katja Schulz, Christophe Staubach, Ann Lindberg, Fernanda Dorea,
Arianna Comin and Timothée Vergne.
Fauci, A.S., Morens, D.M., 2012. The perpetual challenge of infectious diseases. N. Engl. J. Med. 366, 454–461.
Ayyub, B.M., 2001. Elicitation of Expert Opinions for Uncertainty and Risks, 1st ed. CRC Press, Boca Raton, Florida.
Calba, C., Comin, A., Conraths, F., Drewe, J., Goutard, F., Grosbois, V., Häsler, B., Horeth-
Bontgen, D., Hoinville, L., Lindberg, A., Peyre, M., Pfeiffer, D., Rodriguez Prieto, V.,
Rushton, J., Staerk, K., Schauer, B., Traon, D., Vergne, T., 2013a. Evaluation Methods
of Surveillance Systems and Current Practices. RISKSUR Deliverable 1.2. Available
at: (Accessed on 29th
September 2017).
Calba, C., Drewe, J., Goutard, F., Grosbois, V., Häsler, B., Hoinville, L., Peyre, M., Pfeiffer,
D., Vergne, T., 2013b. The Evaluation Attributes Used for Evaluating Animal Health
Surveillance Systems. RISKSUR Deliverable 1.3. Available at: http://www.fp7- (Accessed on 29th September 2017).
Calba, C., Goutard, F., Hoinville, L., Hendrikx, P., Lindberg, A., Saegerman, C., Peyre, M.,
2015a. Surveillance systems evaluation: a review of the existing guides. BMC Public
Health 15, 448.
Calba, C., Antoine-Moussiaux, N., Charrier, F., Hendrikx, P., Saegerman, C., Peyre, M.,
Goutard, F.L., 2015b. Applying participatory approaches in the evaluation of sur-
veillance systems: a pilot study on African swine fever surveillance in Corsica. Prev.
Vet. Med. 122, 389–398.
Comin, A., Häsler, B., Hoinville, L., Peyre, M., Dorea, F., Schauer, B., Snow, L., Stärk, K.,
Lindberg, A., Brouwer, A., van Schaik, G., Staubach, C., Schulz, K., Bisdorff, B.,
Goutard, F., Ferreira, J., Conraths, F., Cameron, A., Martínez-Avilés, M., Pfeiffer, D.,
2016. RISKSUR Tools: Taking Animal Health Surveillance into the Future Through
Interdisciplinary Integration of Scientific Evidence.
Drewe, J., Hoinville, L., Cook, A., Floyd, T., Stärk, K., 2012. Evaluation of animal and
public health surveillance systems: a systematic review. Epidemiol. Infect. 140,
Drewe, J.A., Hoinville, L.J., Cook, A.J.C., Floyd, T., Gunn, G., Stärk, K.D.C., 2015.
SERVAL: a new framework for the evaluation of animal health surveillance.
Transbound. Emerg. Dis. 62, 33–45.
Grosbois, V., Häsler, B., Peyre, M., Hiep, D.T., Vergne, T., 2015. A rationale to unify
measurements of effectiveness for animal health surveillance. Prev. Vet. Med. 120,
Häsler, B., Howe, K., Peyre, M., Vergne, T., Calba, C., Bisdorff, B., Comin, A., Lindberg, A.,
Brouwer, A., Snow, L., Schulz, K., Staubach, C., Martínez-Avilés, M., Traon, D.,
Hoinville, L., Stärk, K., Pfeiffer, D., Rushton, J., 2015. Economic evaluation of animal
health surveillance—moving from infancy to adolescence? In: ISVEE. November 3–7,
2015 Merida, Yucatan, Mexico.
Häsler, B., et al., 2014. Mapping of surveillance and livestock systems, infrastructure,
trade flows and decision-making processes to explore the potential of surveillance at
a systems level. In: 2nd ICAHS Conference. Cuba, May 2014.
Hoinville, L.J., Alban, L., Drewe, J.A., Gibbens, J.C., Gustafson, L., Häsler, B., Saegerman,
C., Salman, M., Stärk, K.D.C., 2013. Proposed terms and concepts for describing and
evaluating animal-health surveillance systems. Prev. Vet. Med. 112 (1–2), 1–12.
Jann, W., Wegrich, K., 2007. Theories of the policy cycle. Handbook of Public Policy
Analysis: Theory, Politics, and Methods. Public Administration and Public Policy.
Jebara, K.B., 2004. Surveillance, detection and response: managing emerging diseases at
national and international levels. Rev. Sci. Technol. 23 (2), 709–715.
Peyre, M., Hendrikx, P., Do Huu, D., Goutard, F., Desvaux, S., Roger, F., et al., 2011.
Evaluation of surveillance systems in animal health: the need to adapt the tools to the
contexts of developing countries, results from a regional workshop in South East Asia.
Epidémiologie Santé Anim. 59, 415–417.
Peyre, M., Hoinville, L., Haesler, B., Lindberg, A., Bisdorff, B., Dórea, F., Wahlström, H.,
Frössling, J., Calba, C., Grosbois, V., Goutard, F., 2014. Network analysis of sur-
veillance system evaluation attributes: a way towards improvement of the evaluation
process. In: Proceedings of the 2nd International Conference on Animal Health
Surveillance (ICAHS2). Havana, Cuba, 7–9 May.
Peyre, M., Haesler, B., Goutard, F., 2015. Case Study Selection for Economic Evaluation
Framework Development and Validation. RISKSUR Deliverable 5.20. Available at: (Accessed on 4th January).
Peyre, M.-I., Pham, H.T.T., Calba, C., Schulz, K., Delabouglise, A., Goutard, F.L., Roger, F.,
Antoine-Moussiaux, N., 2017. Animal health surveillance constraints in North and South: same-same but different? In: Proc. ICAHS – 3rd Int. Conf. Anim. Health Surveill.: Beyond Animal Health Surveillance. Rotorua, New Zealand, 1–5 May.
RISKSUR Consortium, 2013. The EVA Tool: An Integrated Approach for the Evaluation of Animal Health Surveillance Systems. Research Brief, No. 1.4. Available at: http:// (Accessed on 29 September 2017).
RISKSUR Consortium, 2014. Mapping of Surveillance Systems, Animal Populations, Trade
Flows, Critical Infrastructure and Decision Making Processes in Several European
Countries. Research Brief, No. 1.2. Available at: (Accessed on 29 September 2017).
Schulz, K., Peyre, M., Staubach, C., Schauer, B., Schulz, J., Calba, C., Häsler, B., Conraths,
F.J., 2017. Surveillance strategies for Classical Swine Fever in wild boar—a com-
prehensive evaluation study to ensure powerful surveillance. Sci. Rep. 7. https://doi.
Undurraga, E.A., Meltzer, M.I., Tran, C.H., Atkins, C.Y., Etheart, M.D., Millien, M.F.,
Adrien, P., Wallace, R.M., 2017. Cost-effectiveness evaluation of a novel integrated
bite case management program for the control of human rabies, Haiti 2014–2015.
Am. J. Trop. Med. Hyg. 96, 1307–1317.
Wall, B.A., Arnold, M.E., Radia, D., Gilbert, W., Ortiz-Pelaez, A., Stärk, K.D., Klink, E.V.,
Guitian, J., 2017. Evidence for more cost-effective surveillance options for bovine
spongiform encephalopathy (BSE) and scrapie in Great Britain. Eurosurveillance 22,
... Despite their differences, many systems shared common weaknesses, e.g., in data management and representativeness, and common threats, such as economic vulnerability and data access. Moreover, only two monitoring systems underwent evaluations, although this is an important practice to allow more transparent interpretation of outputs, more objective decisionmaking and resource allocation, as well as improvements in system design and enhanced acceptance of system outputs by stakeholders (Peyre et al., 2019). However, our analysis showed that solutions exist to these frequent challenges. ...
Full-text available
The monitoring of antimicrobial resistance (AMR) in bacterial pathogens of animals is not currently coordinated at European level. To fill this gap, experts of the European Union Joint Action on Antimicrobial Resistance and Healthcare Associated Infections (EU-JAMRAI) recommended building the European Antimicrobial Resistance Surveillance network in Veterinary medicine (EARS-Vet). In this study, we (i) identified national monitoring systems for AMR in bacterial pathogens of animals (both companion and food-producing) among 27 countries affiliated to EU-JAMRAI, (ii) described their structures and operations, and (iii) analyzed their respective strengths, weaknesses, opportunities and threats (SWOT). Twelve countries reported having at least one national monitoring system in place, representing an opportunity to launch EARS-Vet, but highlighting important gaps in AMR data generation in Europe. In total, 15 national monitoring systems from 11 countries were described and analyzed. They displayed diverse structures and operations, but most of them shared common weaknesses (e.g., data management and representativeness) and common threats (e.g., economic vulnerability and data access), which could be addressed collectively under EARS-Vet. This work generated useful information to countries planning to build or improve their system, by learning from others’ experience. It also enabled to advance on a pragmatic harmonization strategy: EARS-Vet shall follow the European Committee on Antimicrobial Susceptibility Testing (EUCAST) standards, collect quantitative data and interpret AMR data using epidemiological cut-off values.
Meat-borne hazards—such as pathogenic microbes and hazardous chemicals—can have a range of negative effects on the animals themselves and on human consumers. It is essential to control such hazards because the international trade in meat means outbreaks can rapidly affect many countries. Mitigation of risks arising from meat-borne hazards involves a careful balance of surveillance and intervention strategies. This article discusses the range of approaches that are available to reduce the risks of hazards in meat, using examples from around the world.
To facilitate cross‐sector integration of surveillance data it is necessary to improve and harmonize the meta‐information provided in surveillance data reports. Cross‐sector integration of surveillance results in sector‐specific reports is frequently difficult as reports with a focus on a single sector often lack aspects of the relevant meta‐information necessary to clarify the surveillance context. Such reporting deficiencies reduce the value of surveillance reports to the One Health community. The One Health Consensus Report Annotation Checklist (OH‐CRAC), described in this paper along with potential application scenarios, was developed to improve the current practice of annotating data presented in surveillance data reports. It aims to provide guidance to researchers and reporting officers on what meta‐information should be collected and provided to improve the completeness and transparency of surveillance data reports. The OH‐CRAC can be adopted by all One Health‐related sectors and due to its cross‐sector design, it supports the mutual mapping of surveillance meta‐information from sector‐specific surveillance reports on federal, national and international levels. To facilitate the checklist completion, OH‐CRAC is also available as an online resource that allows the collection of surveillance meta‐information in an easy and user‐friendly manner. Completed OH‐CRAC checklists can be attached as annexes to the corresponding surveillance data reports or even to individual data files regardless of the data source. In this way, reports and data become better interpretable, usable and comparable to information from other sectors, improving their value for all surveillance actors and providing a better foundation for advice to risk managers.
Q fever is a zoonotic disease caused by the bacterium Coxiella burnetii. Most recorded human clinical cases are linked to exposure to domestic ruminants shedding the bacterium. In domestic ruminants, because of the presence of latent carriers and the intermittent shedding of the bacterium, ELISA serological tests are recommended by the OIE as evidence of past or ongoing infection. However, there is no gold-standard method for determining with certainty the serological status of individual animals. To date, three ELISA tests (hereafter tests 1, 2 and 3) are marketed in Europe, but their diagnostic performance is poorly characterised. The thesis is divided into three parts, with the respective objectives of: (1) estimating the diagnostic performance, in cattle, sheep and goats, of the three ELISA tests marketed in Europe for the serological diagnosis of C. burnetii infection; (2) estimating the true seroprevalence of C. burnetii infection in these three domestic ruminant species on the basis of test 2, taking its diagnostic uncertainty into account; and (3) building an online tool to support the interpretation of serological screening schemes, integrating the estimates obtained in parts 1 and 2. The work draws on data collected through the Q fever surveillance scheme implemented between 2012 and 2014 in ten French departments. The database comprises 9,972 cattle, 5,024 goats and 7,632 sheep sampled from 1,602 randomly selected herds. All serum samples were analysed with test 2 by a panel of veterinary diagnostic laboratories, and about 20% (n = 4,319) were also analysed with tests 1, 2 and 3 at the national reference laboratory for Q fever.
The diagnostic performance of the three ELISA tests for each ruminant species was estimated using a latent class model accounting for conditional dependence between the three tests. The diagnostic uncertainty estimated for test 2 was then used to study the distribution of the proportion of seropositive herds across departments (between-herd seroprevalence) and of the proportion of seropositive animals within herds (within-herd seroprevalence) for each species. Finally, a freely accessible Shiny application intended for field stakeholders was developed to estimate the probability that a tested individual or screened herd is truly seropositive, given the herd characteristics, the epidemiological context and the results of the screening scheme. Taken together, this work supports better interpretation of results obtained with these ELISA tests in targeted screening or in Q fever surveillance programmes at the herd, regional or even national level. It thereby offers elements of response to the regulatory requirements recently brought into force under Regulation (EU) 2018/1882, which categorises Q fever as a category E disease subject to mandatory surveillance.
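The individual-level interpretation performed by the Shiny application described above rests on Bayes' rule: the probability that a test-positive animal is truly seropositive depends on test sensitivity, specificity and the prevalence in the population. A minimal sketch with illustrative parameter values (not the estimates from the thesis):

```python
def positive_predictive_value(se, sp, prevalence):
    """P(truly seropositive | positive ELISA result), via Bayes' rule."""
    true_pos = se * prevalence            # truly infected and test-positive
    false_pos = (1 - sp) * (1 - prevalence)  # uninfected but test-positive
    return true_pos / (true_pos + false_pos)

# Illustrative values only -- not the sensitivity/specificity
# estimated for tests 1, 2 or 3 in the thesis.
ppv = positive_predictive_value(se=0.90, sp=0.95, prevalence=0.10)
print(round(ppv, 3))  # 0.667
```

Extending this to the herd level would additionally account for the number of animals tested and the within-herd seroprevalence distribution, which is what the latent class model estimates feed into.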
Transmissible spongiform encephalopathies (TSEs) are an important public health concern. Since the emergence of bovine spongiform encephalopathy (BSE) during the 1980s and its link with human Creutzfeldt-Jakob disease, active surveillance has been a key element of the European Union's TSE control strategy. Success of this strategy means that now, very few cases are detected compared with the number of animals tested. Refining surveillance strategies would enable resources to be redirected towards other public health priorities. Cost-effectiveness analysis was performed on several alternative strategies involving reducing the number of animals tested for BSE and scrapie in Great Britain and, for scrapie, varying the ratio of sheep sampled in the abattoir to fallen stock (which died on the farm). The most cost-effective strategy modelled for BSE involved reducing the proportion of fallen stock tested from 100% to 75%, producing a cost saving of ca GBP 700,000 per annum. If 50% of fallen stock were tested, a saving of ca GBP 1.4 million per annum could be achieved. However, these reductions are predicted to increase the period before surveillance can detect an outbreak. For scrapie, reducing the proportion of abattoir samples was the most cost-effective strategy modelled, with limited impact on surveillance effectiveness.
Haiti has the highest burden of rabies in the Western hemisphere, with 130 estimated annual deaths. We present the cost‐effectiveness evaluation of an integrated bite case management program combining community bite investigations and passive animal rabies surveillance, using a governmental perspective. The Haiti Animal Rabies Surveillance Program (HARSP) was first implemented in three communes of the West Department, Haiti. Our evaluation encompassed all individuals exposed to rabies in the study area (N = 2,289) in 2014–2015. Costs (2014 U.S. dollars) included diagnostic laboratory development, training of surveillance officers, operational costs, and postexposure prophylaxis (PEP). We used estimated deaths averted and years of life gained (YLG) from prevented rabies as health outcomes. HARSP had higher overall costs (range: $39,568–$80,290) than the no‐bite‐case‐management (NBCM) scenario ($15,988–$26,976), partly from an increased number receiving PEP. But HARSP had better health outcomes than NBCM, with estimated 11 additional annual averted deaths in 2014 and nine in 2015, and 654 additional YLG in 2014 and 535 in 2015. Overall, HARSP was more cost-effective (US$ per death averted) than NBCM (2014, HARSP: $2,891–$4,735, NBCM: $5,980–$8,453; 2015, HARSP: $3,534–$7,171, NBCM: $7,298–$12,284). HARSP offers a cost‐effective human rabies prevention solution for countries transitioning from reactive to preventive strategies (i.e., comprehensive dog vaccination).
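The headline metric in this evaluation, US$ per death averted, is a simple ratio of programme cost to health outcome. A minimal sketch with hypothetical figures (not the study's inputs):

```python
def cost_per_death_averted(total_cost_usd, deaths_averted):
    """Average cost-effectiveness ratio: dollars spent per death averted."""
    if deaths_averted <= 0:
        raise ValueError("deaths_averted must be positive")
    return total_cost_usd / deaths_averted

# Hypothetical programme: $40,000 total cost, 10 deaths averted.
print(cost_per_death_averted(40_000, 10))  # 4000.0
```

The same ratio can be computed per years of life gained (YLG), as in the abstract; comparing two scenarios on incremental costs and incremental outcomes yields the incremental cost-effectiveness ratio instead.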
Surveillance of Classical Swine Fever (CSF) should not only focus on livestock, but must also include wild boar. To prevent disease transmission into commercial pig herds, it is therefore vital to have knowledge about the disease status in wild boar. In the present study, we performed a comprehensive evaluation of alternative surveillance strategies for CSF in wild boar and compared them with the currently implemented conventional approach. The evaluation protocol was designed using the EVA tool, a decision support tool to help in the development of an economic and epidemiological evaluation protocol for surveillance. To evaluate the effectiveness of the surveillance strategies, we investigated their sensitivity and timeliness. Acceptability was analysed and, finally, the cost-effectiveness of the surveillance strategies was determined. We developed 69 surveillance strategies for comparative evaluation between the existing approach and the novel proposed strategies. Sampling only sub-adults resulted in better acceptability and timeliness than the currently implemented strategy. Strategies based entirely on passive surveillance did not achieve the desired detection probability of 95%. In conclusion, the results of the study suggest that risk-based approaches can be an option to design more effective CSF surveillance strategies in wild boar.
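The 95% detection probability targeted above is commonly derived from the per-animal probability of yielding a positive result. A minimal sketch, assuming independent sampling from a large wild boar population, with illustrative prevalence and test-sensitivity values (not those of the study):

```python
import math

def detection_probability(n, prevalence, sensitivity):
    """P(at least one positive result among n sampled animals),
    assuming independent draws from a large population."""
    return 1 - (1 - prevalence * sensitivity) ** n

def samples_for_detection(prevalence, sensitivity, target=0.95):
    """Smallest sample size n achieving the target detection probability."""
    return math.ceil(math.log(1 - target) / math.log(1 - prevalence * sensitivity))

# Illustrative values: 2% design prevalence, 90% test sensitivity.
n = samples_for_detection(prevalence=0.02, sensitivity=0.9, target=0.95)
print(n, round(detection_probability(n, 0.02, 0.9), 3))  # 165 0.95
```

Under such a model, strategies relying on few passive-surveillance samples fall short of the 95% target simply because n is too small for the assumed design prevalence.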
Conference Paper
To enable wide-spread acceptance and adoption of risk-based surveillance approaches by stakeholders it is essential to provide those designing such systems with science-based frameworks guiding them through the systematic process of design and evaluation. The RISKSUR project has addressed this particular need through the development of integrated surveillance system design and evaluation frameworks and associated decision support tools (RISKSUR tools). This paper provides an overview of the RISKSUR tools and presents their application using several disease case studies relevant to EU member states. The RISKSUR tools provide user-friendly access to comprehensive, flexible and state-of-the-art integrated frameworks for animal health surveillance design and evaluation, thereby providing effective guidance during the complex decision making process. The tools will continue to be refined in response to user feedback and new methodological developments. Their availability in the public domain will facilitate access by users and allows widespread integration into training materials.
The implementation of regular and relevant evaluations of surveillance systems is critical in improving their effectiveness and their relevance whilst limiting their cost. The complex nature of these systems and the variable contexts in which they are implemented call for the development of flexible evaluation tools. Within this scope, participatory tools have been developed and implemented for the African swine fever (ASF) surveillance system in Corsica (France). The objectives of this pilot study were, firstly, to assess the applicability of participatory approaches within a developed environment involving various stakeholders and, secondly, to define and test methods developed to assess evaluation attributes. Two evaluation attributes were targeted: the acceptability of the surveillance system and its non-monetary benefits. Individual semi-structured interviews and focus groups were implemented with representatives from every level of the system. Diagramming and scoring tools were used to assess the different elements that compose the definition of acceptability. A contingent valuation method, associated with proportional piling, was used to assess the non-monetary benefits, i.e., the value of sanitary information. Sixteen stakeholders were involved in the process, through 3 focus groups and 8 individual semi-structured interviews. Stakeholders were selected according to their role in the system and to their availability. Results highlighted a moderate acceptability of the system for farmers and hunters and a high acceptability for other representatives (e.g., private veterinarians, local laboratories). Of the 5 farmers involved in assessing the non-monetary benefits, 3 were interested in sanitary information on ASF. The data collected via participatory approaches enable relevant recommendations to be made, based on the Corsican context, to improve the current surveillance system.
Regular and relevant evaluations of surveillance systems are essential to improve their performance and cost-effectiveness. With this in mind, several organizations have developed evaluation approaches to facilitate the design and implementation of these evaluations. In order to identify and compare the advantages and limitations of these approaches, we implemented a systematic review using the PRISMA guidelines (Preferred Reporting Items for Systematic Reviews and Meta-Analyses). After applying exclusion criteria and identifying additional documents via citations, 15 documents were retained. These were analysed to assess the field (public or animal health) and the type of surveillance systems targeted; the development process; the objectives; the evaluation process and its outputs; and the attributes covered. Most of the approaches identified were general and provided broad recommendations for evaluation. Several common steps in the evaluation process were identified: (i) defining the surveillance system under evaluation, (ii) designing the evaluation process, (iii) implementing the evaluation, and (iv) drawing conclusions and recommendations. A lack of information regarding the identification and selection of methods and tools to assess the evaluation attributes was highlighted, as well as a lack of consideration of economic attributes and sociological aspects.
Experts, despite their importance and value, can be double-edged swords. They can make valuable contributions from their deep base of knowledge, but those contributions may also contain their own biases and pet theories. Therefore, selecting experts, eliciting their opinions, and aggregating their opinions must be performed and handled carefully, with full recognition of the uncertainties inherent in those opinions. Elicitation of Expert Opinions for Uncertainty and Risks illuminates those uncertainties and builds a foundation of philosophy, background, methods, and guidelines that helps its readers effectively execute the elicitation process. Based on the first-hand experiences of the author, the book is filled with illustrations, examples, case studies, and applications that demonstrate not only the methods and successes of expert opinion elicitation, but also its pitfalls and failures. Studies show that in the future, analysts, engineers, and scientists will need to solve ever more complex problems and reach decisions with limited resources. This will lead to an increased reliance on the proper treatment of uncertainty and on the use of expert opinions. Elicitation of Expert Opinions for Uncertainty and Risks will help prepare you to better understand knowledge and ignorance, to successfully elicit expert opinions, to select appropriate expressions of those opinions, and to use various methods to model and aggregate opinions.