ORIGINAL ARTICLE

SurF: an innovative framework in biosecurity and animal health surveillance evaluation

Petra Muellner¹ | Jonathan Watts² | Paul Bingham² | Mark Bullians² | Brendan Gould² | Anjali Pande² | Tim Riding² | Paul Stevens² | Daan Vink² | Katharina D. C. Stärk³,⁴

¹ Epi-interactive, Wellington, New Zealand
² Ministry for Primary Industries, Wellington, New Zealand
³ SAFOSO AG, Bern-Liebefeld, Switzerland
⁴ Royal Veterinary College, London, UK

Correspondence
J. Watts, Ministry for Primary Industries, Wellington, New Zealand.
Email: Jonathan.Watts@mpi.govt.nz
Summary
Surveillance for biosecurity hazards is conducted by the New Zealand Competent Authority, the Ministry for Primary Industries (MPI), to support New Zealand's biosecurity system. Surveillance evaluation should be an integral part of the surveillance life cycle, as it provides a means to identify and correct problems and to sustain and enhance the existing strengths of a surveillance system. The surveillance evaluation framework (SurF) presented here was developed to provide a generic framework within which the MPI biosecurity surveillance portfolio, and all of its components, can be consistently assessed. SurF is an innovative, cross-sectoral effort that aims to provide a common umbrella for surveillance evaluation in the animal, plant, environment and aquatic sectors. It supports the conduct of the following four distinct components of an evaluation project: (i) motivation for the evaluation, (ii) scope of the evaluation, (iii) evaluation design and implementation and (iv) reporting and communication of evaluation outputs. Case studies, prepared by MPI subject matter experts, are included in the framework to guide users in their assessment. Three case studies were used in the development of SurF in order to assure practical utility and to confirm usability of SurF across all included sectors. It is anticipated that the structured approach and information provided by SurF will not only be of benefit to MPI but also to other New Zealand stakeholders. Although SurF was developed for internal use by MPI, it could be applied to any surveillance system in New Zealand or elsewhere.

KEYWORDS
biosecurity, evaluation, surveillance
This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
© 2018 The Authors. Transboundary and Emerging Diseases published by Blackwell Verlag GmbH
Received: 2 November 2017 | Accepted: 13 April 2018
DOI: 10.1111/tbed.12898
Transbound Emerg Dis. 2018;1–8. wileyonlinelibrary.com/journal/tbed

1 | INTRODUCTION

The New Zealand Ministry for Primary Industries (MPI) undertakes and invests significantly in a range of national biosecurity surveillance activities across the plant, animal, environmental and aquatic sectors (Acosta & White, 2011). Biosecurity surveillance aims to detect hazards such as infectious disease agents or introduced pests and inform their management. It is thereby part of the larger biosecurity system aimed at reducing biosecurity risks and facilitating trade. These activities underpin New Zealand's ability to enable trade and to protect itself from biological risks through the early detection of pests and diseases, and the provision of evidence of pest or disease freedom. Given the importance of these activities to New Zealand stakeholders, it is essential that the performance of these programmes can be assessed to provide assurances regarding the quality of their delivery and outputs. The importance of understanding, and being able to assess, the quality of surveillance programmes was a focus of New Zealand's Biosecurity Surveillance Strategy 2020 (Ministry of Agriculture and Forestry [MAF], 2009), which identified three strategic goals related to the delivery of quality surveillance:

• The most appropriate mix of surveillance activities is chosen to ensure surveillance programmes meet their specific objectives
• Surveillance delivery is effective, efficient and responsive to changes in the biosecurity environment
• The outputs of surveillance programmes can be relied upon by decision makers.
It is also critical to ensure that surveillance programmes are responsive to change and continually evolve to meet changing biosecurity needs in an efficient and responsive manner. As concluded by Drewe et al. (2015), evaluation can be used both to identify and correct problems and to protect, enhance and provide assurance on the strength of a surveillance system. Furthermore, in the animal health context, the assessment of surveillance systems is a component of both the import risk analysis and the veterinary services assessment procedures documented by the World Organization for Animal Health (Hendrikx et al., 2011).

The continuous evolution of surveillance systems therefore warrants periodic re-evaluation of their continued relevance and effectiveness and underscores the importance of surveillance evaluation in the surveillance life cycle (Figure 1).
The surveillance evaluation framework (SurF) was developed to provide a consistent generic framework for the assessment of the MPI biosecurity surveillance portfolio, including all of its components. It was also envisaged that, in achieving MPI's cross-sector requirements, this framework could be applied more broadly by others delivering biosecurity surveillance activities. This novel cross-sectoral effort aims to provide a common umbrella for surveillance evaluation in the animal, plant, environment and aquatic (including marine, aquaculture and freshwater) sectors. Here, we present technical details of the framework and its development.
2 | MATERIALS AND METHODS
In order to collate available information and example materials to inform development of the New Zealand biosecurity evaluation framework, a scoping review methodology was used to rapidly map the key concepts underlying surveillance evaluation in different sectors. The terminology proposed by Hoinville et al. (2013) was used wherever possible to align with existing standards. A surveillance evaluation framework was developed based on these findings. Three case studies were developed to test the framework and provide applied guidance to future users.
2.1 | Review methodology
A scoping review technique was used for the purpose of creating a common evidence base for the planning and development of the framework. Scoping reviews are considered a useful and increasingly popular way to collect and organize important background information and to gain an overview of the existing evidence base (Armstrong, Hall, Doyle, & Waters, 2011).

Initially, relevant documents were identified through discussions with stakeholders and surveillance experts. Reference lists of identified publications were considered as additional sources of information. As two extensive reviews, including a full and systematic review of surveillance evaluation in the animal and human health field, have recently been completed (Calba et al., 2015; Drewe, Hoinville, Cook, Floyd, & Stärk, 2012), it was considered most efficient to build on these rather than duplicating the work already conducted. However, to cover the most recent publications, the literature search query developed by Drewe et al. (2012) was re-run in Web of Science, covering articles published between 2011 and 15 February 2015.
To identify relevant non-animal surveillance publications, a scanning search of the scientific literature database Web of Science was conducted using the Boolean query: Topic = (surveillance) AND Title = ((surveillance AND (evaluat* OR analy* OR perform*)) OR (evaluat* AND perform*)) AND (environ* OR marine* OR plant*). Through the use of wildcards (*), articles containing any variation of each of the search terms were identified. All articles published in the last 20 years (1995 and later) were included. To cover unpublished work, the grey literature was investigated through a Google web search built on the core search terms as described above (surveillance AND (evaluat* OR analy* OR perform*)). The first 200 results were assessed and, if relevant, findings were included in this report.
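The title-screening step behind such a query can be mimicked programmatically when working with an exported title list. The sketch below is illustrative only: Web of Science applies its own query engine, and the example titles are invented; here each wildcard term is translated into an open-ended regular-expression word stem.

```python
import re

# Invented example titles standing in for an exported results list.
titles = [
    "Evaluation of a marine pest surveillance programme",
    "Performance analysis of plant health surveillance",
    "A history of biosecurity policy debates",
]

# A wildcard such as evaluat* matches any continuation of the word
# stem, which \w* reproduces in regex form.
eval_terms = re.compile(r"\b(evaluat|analy|perform)\w*", re.IGNORECASE)
sector_terms = re.compile(r"\b(environ|marine|plant)\w*", re.IGNORECASE)

# Keep only titles satisfying both AND-ed clauses of the query.
relevant = [t for t in titles
            if eval_terms.search(t) and sector_terms.search(t)]
print(relevant)
```

In this toy run the first two titles pass both clauses, while the policy-history title is screened out because it contains no evaluation term.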
FIGURE 1 Evaluation as part of the surveillance life cycle
2.2 | Framework development
A project team was assembled consisting of subject matter experts from the biosecurity sectors that the framework was aiming to cover. This included MPI experts from the environmental, aquatic, plant and terrestrial animal surveillance teams plus two external epidemiologists. Taking into account the literature review outcomes, the framework was specified during regular face-to-face group meetings that took place over an 18-month period. Case studies were prepared by MPI subject matter experts between September and December 2015, using data and information that were already available. The objective of the case studies was to provide a proof-of-concept approach, to demonstrate that the framework was robust, complete, fit-for-purpose and user-friendly across the different biosecurity sectors it is targeting. Further, the case studies were used to identify any framework components that needed rewording or further refinement.
3 | RESULTS
3.1 | Review results
The updated search by Drewe et al. identified a total of 1,531 articles. All titles were scanned by the assessor; if a title appeared relevant to this review, the abstract was retrieved and reviewed. Although a large number of titles were returned by the search, only one additional article (Hoinville et al., 2013) of relevance to the objectives of this review, and not included in the reference lists of Drewe et al. (2012) or Calba et al. (2015), could be identified. In addition to the animal and human health-focused publications, the literature searches specific to the environmental, marine and plant sectors delivered a total of 79 titles. The assessor scanned all titles returned, and no articles of relevance to the objective of this review could be identified. A complete list of all articles retrieved and assessed by the above-described protocols is available on request. The search of the grey literature identified one additional publication of relevance from public health surveillance (European Centre for Disease Prevention and Control, 2014).
In conclusion, although a structured search was conducted, no evaluation frameworks specific to surveillance in the environmental, aquatic or plant sectors were identified by the scoping review. Current efforts appear concentrated on the evaluation of public health and animal health surveillance; however, existing frameworks offered the flexibility to be adapted to support the wider context of New Zealand biosecurity surveillance. It was therefore decided to build SurF on previous work conducted nationally and internationally in the context of the evaluation of human and animal health surveillance. This included, in particular, the SERVAL framework (Drewe et al., 2015), the recently published guidelines by the European Centre for Disease Prevention and Control (ECDC) (2014), the Centers for Disease Control and Prevention (CDC) guidelines (2001) and the EVA tool (Comin et al., 2016; The RISKSUR Project Consortium, 2013).
3.2 | Framework development
Any framework for biosecurity surveillance evaluation has to be very flexible and generic, as not only programmes with different objectives but also programmes in different sectors have to be assessed. Following the scoping review and expert discussions, it was concluded that several existing evaluation frameworks, while not originating from a cross-sectoral biosecurity surveillance perspective, could be readily adapted to the New Zealand requirements. Following a series of expert meetings, it was concluded that SERVAL and EVA were the most suitable tools to build upon, as they offer the required flexibility to answer the diversity of evaluation questions that needed to be addressed while building on existing literature and good practice standards. Based on the findings of the review and the above considerations, SurF consists of four components, each supporting a distinct phase in the evaluation:

1. Motivation for the evaluation
2. Scope of the evaluation
3. Evaluation design and implementation
4. Reporting and communication of evaluation outputs.
Each component describes the activities and decisions related to a phase within an evaluation project. Table 1 provides a schematic overview of the four components and their individual content. The framework and the supporting guidance notes describe the aspects to be considered during each specific activity of the evaluation process. Depending on the situation and the system under evaluation, it might not be possible to assess or describe all components in full detail; any abbreviations from the full protocol are therefore documented to ensure consistency. Further, for convenience, SurF provides users with an evaluation template to support consistency of outputs (Supporting Information 1: SurF Evaluation Template).
SurF includes a total of 29 different attributes (Table 2), which are divided into core attributes (n = 10; highlighted in bold) and accessory attributes (n = 19). Inclusion and categorization of attributes were jointly decided by the different experts participating in the framework development, including experts representing each of the biosecurity sectors. However, attributes, their definitions and recommended methods for assessment build on existing frameworks, in particular SERVAL and EVA, but also the review of Drewe et al. (2015) and the Centers for Disease Control and Prevention (2001) and ECDC (2014) guidelines on surveillance evaluation and monitoring. SurF also includes some additional attributes, which were developed with the objectives and scope of SurF in mind, for example "Field and laboratory services". Also, some previously proposed attributes were modified to provide the framework with sufficient flexibility to be used across the whole spectrum of New Zealand's biosecurity surveillance portfolio. This was an important component of the development, as existing frameworks were focused on surveillance of human or animal disease while the biosecurity context of this project required extending several definitions to also encompass other risk organisms such as invasive aquatic species or pests of plants. Therefore, consideration was given to compatibility with plant and aquatic health and surveillance terminology. Ecological concepts and related terminology also had to be included to encompass the non-animal health sectors.
As in the SERVAL framework (Drewe et al., 2015), traffic-light coding is used in SurF to provide a standardized summary appraisal for each of the attributes.
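A minimal sketch of such a coding step is shown below. The three labels mirror the summary categories used in SurF's visual output, but the numeric 0–1 scale, the cut-off thresholds and the example scores are invented for illustration and are not part of the framework itself.

```python
# Hypothetical traffic-light coding of attribute appraisals.
LABELS = {
    "green": "Excellent or very good",
    "amber": "Good, though room for improvement",
    "red": "In need of attention",
}

def traffic_light(score: float) -> str:
    """Map an illustrative 0-1 appraisal score to a colour code.

    The thresholds are assumptions made for this sketch only.
    """
    if score >= 0.8:
        return "green"
    if score >= 0.5:
        return "amber"
    return "red"

# Example appraisal of three attributes of one surveillance system.
appraisal = {"Timeliness": 0.9, "Coverage": 0.6, "Flexibility": 0.3}
coded = {attr: traffic_light(s) for attr, s in appraisal.items()}
print({attr: LABELS[c] for attr, c in coded.items()})
```

The point of the standardized coding is that the same three-level summary can be produced for every attribute, regardless of how that attribute was actually assessed.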
Within SurF, attributes are grouped into five "Functional Attribute Groups" based on the logic presented in Figure 2. Each group includes at least one attribute that is considered to be a core attribute. Core attributes assess essential aspects common to all surveillance systems, and it is recommended that they be included in all evaluations; if for any reason this is not done, justification has to be provided. The choice of accessory attributes is left to the evaluator's judgement and is not specified in SurF. The choice will ultimately be situation- and sector-specific and may be influenced by factors such as the evaluation question, the surveillance objective or the surveillance system's design.
Detailed guidance for the assessment of each SurF attribute is given in dedicated guidance notes. While the aim was to align with existing standards such as those proposed by SERVAL (Drewe et al., 2015), the EVA tool (Comin et al., 2016; The RISKSUR Project Consortium, 2013) or Hoinville et al. (2013), at times the wording of the guidance had to be adapted to meet the needs of the non-terrestrial animal health sectors. For example, the text had to be extended to also apply to unwanted pest organisms (such as invasive plant or insect species) and hence had to consider, for example, an organism's habitat or the search efficiency of an activity. In addition, a "methods" catalogue has been compiled to further
TABLE 1 Overview of the evaluation process described in SurF

Identification of the system under evaluation
I. Motivation for the evaluation
  A. Evaluation trigger
  B. Context
II. Scope of the evaluation
  A. Evaluation objective
  B. Evaluation question(s)
  C. Time and resources
  D. Evaluation intensity
  E. Evaluation organization and composition of evaluation team
  F. Status of evaluation outputs
III. Evaluation design and implementation
  Design of the evaluation
    A. Select attributes from master list
    B. Choose methods to assess attributes
    C. Make an inventory of available information sources about the system
    D. Identify missing information
  Implementation of the evaluation
    A. Describe the surveillance system under evaluation
    B. Describe the surveillance system's objective(s)
    C. Describe the organizational structure
    D. Identify and engage surveillance system users
    E. Identify the target population and geographical coverage
    F. Describe the design of the surveillance system
    G. Describe the processes
    H. Collect data and information
    I. Assess the included attributes
IV. Reporting and communication of evaluation outputs
  A. State target audience
  B. Report main findings
  C. Summarize and synthesize results
  D. Provide guidance for interpretation of results
  E. Make recommendations
  F. Facilitate plain reporting
TABLE 2 List of core and accessory attributes included in SurF (n = 29). Core attributes are highlighted in bold

A. Organization & management
  1. Flexibility
  2. Organization and management
  3. Performance indicators and evaluation
B. Processes
  4. Data analysis
  5. Data and information collection
  6. Data management and storage
  7. Field and laboratory services
  8. Resource availability
  9. Technical competence and training
C. Technical implementation
  10. Acceptability and engagement
  11. Coverage
  12. Data completeness and correctness
  13. Interoperability
  14. Multiple utility
  15. RARR (reliability, availability, repeatability and robustness)
  16. Timeliness
D. Outputs
  17. Historical data
  18. Negative predictive value
  19. Positive predictive value
  20. Precision
  21. Representativeness and bias
  22. Sensitivity
  23. Specificity
E. Impact
  24. Benefit
  25. Decision support
  26. Efficiency
  27. External communication and dissemination
  28. Internal communication
  29. Utility
support attribute assessment by the various groups and to support the development of standard operating procedures. SurF further provides a visual output that allows for comparison of core performance between systems and within individual systems over time (Figure 3).
3.3 | Framework testing
Three case studies, comprising the National Apiculture Surveillance Programme (NASP), the Marine High Risk Site Surveillance Programme (MHRSS) and the Forestry High Risk Site Surveillance Programme (HRSS), were used to demonstrate how SurF can be used in ongoing surveillance activities. The case studies were also used in the development of SurF in order to assure practical utility and to confirm usability of SurF across all included sectors. In brief, the first collaboratively developed framework version was tested on the case studies, and the expert group was reconvened once all studies were completed. The expert group then jointly discussed the outcomes and, where required, made adjustments to the framework, mainly to improve the clarity of wording and application to the non-animal sectors. Since SurF builds on published animal health surveillance evaluation frameworks, its application to the selected animal health case study was straightforward, while a large part of the experts' discussions at this final stage of the development process focused on ensuring that SurF is fit-for-purpose for all biosecurity sectors it will be applied to. While no detailed results can be provided here for confidentiality reasons, overall the experts agreed that the framework could successfully be used to evaluate the diverse set of case studies and that the framework was ready to be rolled out for routine use within MPI.
4 | DISCUSSION
The MPI evaluation framework was designed to ensure consistency in the evaluation of different biosecurity surveillance systems by providing a robust process that is not sector- or context-specific. This should also make results of evaluations comparable and easily interpretable by managers. SurF draws from existing surveillance frameworks and, when appropriate, adopts what has been developed elsewhere. Its greatest innovation lies in the extension from animal health-specific designs to plant, environment and aquatic biosecurity surveillance and in combining this with animal health biosecurity surveillance under a common umbrella. This is a valuable new development, as it can provide organizations like MPI, whose mandate encompasses several sectors, with a standardized means to evaluate the surveillance activities under its care.
The aim was to develop a generic framework to allow sufficient flexibility for use across the wide range of MPI surveillance systems and to compare and assess system performance. While the standardized assessment of core attributes provides consistency between the assessments of different systems, the choice of accessory attributes allows users to tailor the evaluation to unique contexts. SurF provides users with a large amount of flexibility in the selection of attributes. This differs from recently published animal surveillance frameworks that emphasize alignment of attributes with specific surveillance objectives, for example early detection or freedom from disease (Comin et al., 2016; Drewe et al., 2015; The RISKSUR Project Consortium, 2013). Further, a substantial number of attributes are included in SurF to accommodate the diversity and unique context of MPI's surveillance systems. Although SurF was developed for internal use by MPI, it was envisaged to be useful for reviewing any biosecurity surveillance system, including surveillance conducted by others operating within the biosecurity system. While SurF has been built for surveillance evaluation in the animal, plant, environment and aquatic sectors, by extension it could also support human health surveillance, for example where mosquito surveillance programmes inform surveillance of vector-borne diseases.
Although a formal literature search was conducted, no evaluation frameworks specific to surveillance in the environmental, aquatic or plant sectors could be identified by this scoping review. SurF was built on the assumption that a cross-sector framework can be developed using existing frameworks and attributes, while performance indicators can be adapted to meet the needs and realities of the different sectors. While approaches within the different disciplines are slightly different (e.g., public health surveillance evaluations tend to be more qualitative than animal health surveillance evaluations), the general concepts are transferable and have informed the development of a biosecurity surveillance evaluation framework for New Zealand. There is a range of ways evaluation can be conducted, and this is matched by the diversity of possible evaluation questions. Specific evaluation design will be highly influenced by the evaluation question (Stärk, 2012), and each type of surveillance system will require a tailored evaluation effort (European Centre for Disease Prevention and Control, 2014). However, recommendations regarding the generic workflow of an evaluation and evaluation best practice have previously been made.
FIGURE 2 Logic of functional attribute groups (a–e) used in SurF: (a) Organization & management, (b) Processes, (c) Technical implementation, (d) Output, (e) Impact
FIGURE 3 Visual outputs of performance assessment of attributes using the SurF framework. The format allows comparison between different evaluations or systems (described here as "System 1" and "System 2"). Attributes assessed positively are always placed at the top of the process box, while those in potential need of attention are placed below. (Colour key: Excellent or very good; Good, though room for improvement; In need of attention.)
The case studies were commissioned with the goal of testing SurF and providing applied guidance to future SurF users. As such, they provide non-peer-reviewed example evaluations to illustrate the framework in use, ready at hand to support MPI users of the framework.
Attribute assessment by SurF is supported by a visual output. At the individual evaluation level, this allows quick assessment of a system's strengths and weaknesses and, in addition to the evaluation template, standardizes the reporting of SurF results across different evaluations. An additional element of SurF is the framework's ability to support the assessment of the performance of MPI's surveillance systems and programmes to provide assurances around the quality of delivery and the outputs of those programmes. This may include business intelligence reporting requirements such as the number of MPI surveillance systems that have elements in need of attention, or the percentage of systems with the majority of attributes rated as good or excellent. However, the latter functionality should be applied with caution, as it assumes that all attributes have the same weight, which is almost certainly not the case. Furthermore, previous results could be used to benchmark performance over time, if evaluations are conducted consistently and results are reported in a comparable format. We recommend using this feature mainly for providing a quick overview. Users should still refer to the detailed evaluation text to gain an in-depth understanding of each attribute and its assessment.
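Such roll-up indicators could be computed from coded results along the lines sketched below. The system names, ratings and the implicit equal weighting of attributes are all illustrative assumptions, not part of SurF; the equal weighting in particular is the very caveat noted above.

```python
# Invented traffic-light results for two surveillance systems.
results = {
    "System 1": ["green", "green", "amber", "red"],
    "System 2": ["green", "amber", "amber", "amber"],
}

# Number of systems with at least one element in need of attention.
needs_attention = sum(1 for r in results.values() if "red" in r)

# Percentage of systems where the majority of attributes are rated
# good or better (amber or green). Note that this treats every
# attribute as equally weighted, which is a questionable assumption.
majority_good = sum(
    1 for r in results.values()
    if r.count("green") + r.count("amber") > len(r) / 2
)
pct_majority_good = 100 * majority_good / len(results)
print(needs_attention, pct_majority_good)
```

Consistent with the recommendation above, numbers like these are best treated as a quick overview, with the detailed evaluation text remaining the authoritative record.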
As outlined by Drewe et al. (2012), until recently there has been little comprehensive evaluation taking into account all aspects of a programme, with quantitative indicators dominating at the cost of qualitative descriptors such as flexibility or acceptance of the programmes (Stärk, 2012). While economic evaluation is strongly recommended as an integral part of a comprehensive evaluation framework, it is not commonly done and can be practically challenging (Drewe et al., 2015). Stakeholder participation or consultation is highly recommended in the literature to capture a programme's acceptability, sustainability and impact (Calba et al., 2015). The importance of a high standard of documentation, including the value of visual outputs to support practical implementation of an evaluation effort, has been highlighted (Drewe et al., 2015). These were all important considerations in the development of SurF.
Differences in the use of terminology can pose major challenges to collaboration and cross-sectoral efforts such as SurF. However, the use of consistent, specified terminology that is understood across sectors facilitates internal and external communication and the implementation of any evaluation. The development of SurF aided the project team in understanding where terminology and methods differ between sectors, and this new appreciation will likely lead to improved cross-sectoral collaboration in the future. The proposed terminology is based on current good practice of animal surveillance evaluation in an international context (The RISKSUR Project Consortium, 2017) but was extended in close collaboration with subject matter experts to align with the requirements of other sectors. However, it is noted that terminology is dynamic and can vary between sectors. It was therefore recommended that terminology is discussed and updated regularly as the framework is being used, to assure a common understanding among users.
Designing and implementing surveillance programmes are becoming increasingly challenging (The RISKSUR Project Consortium, 2013) as factors like climate change and globalization affect population health and the risk of biosecurity incursions. A structured, transparent and logical evaluation process supports outputs that could become a source of assurance and credibility for the system examined (Drewe et al., 2015), both nationally and internationally. In our understanding, SurF is the first framework of its kind, providing a unique cross-sectoral approach to surveillance evaluation. SurF is accessible via the MPI website: https://www.mpi.govt.nz/dmsdocument/18091-surveillance-evaluation-framework-surf-main-document and https://mpi.govt.nz/dmsdocument/18094-surveillance-evaluation-framework-surf-appendix-1-surf-methods-catalogue.
ACKNOWLEDGEMENTS
This work would not have been possible without recent advances
in animal health surveillance evaluation. We would in particular
like to acknowledge SERVAL (Julian Drewe and colleagues) and the
EVA Tool (RISKSUR project team), which have both informed the
development of SurF. The recent work on surveillance terminology
by Linda Hoinville has also provided a foundation for this work to
build on. Funding was provided by the Ministry for Primary Industries
(New Zealand).
ORCID
Jonathan Watts http://orcid.org/0000-0001-6069-4137
Katharina D. C. Stärk http://orcid.org/0000-0002-0553-5499
REFERENCES
Acosta, H., & White, P. (2011). Atlas of biosecurity surveillance. May 2011. Wellington, New Zealand: Ministry of Agriculture and Forestry. Retrieved from http://www.mpi.govt.nz/mpi-surveillance-guide/atlas.pdf (accessed on 18/02/18)
Armstrong, R., Hall, B. J., Doyle, J., & Waters, E. (2011). Cochrane update. "Scoping the scope" of a Cochrane review. Journal of Public Health, 33, 147–150. https://doi.org/10.1093/pubmed/fdr015
Calba, C., Goutard, F. L., Hoinville, L., Hendrikx, P., Lindberg, A., Saegerman, C., & Peyre, M. (2015). Surveillance systems evaluation: A review of existing guides. BMC Public Health, 15, 448. https://doi.org/10.1186/s12889-015-1791-5
Centers for Disease Control and Prevention. (2001). Morbidity and Mortality Weekly Report, 50, 1–35.
Comin, A., Haesler, B., Hoinville, L., Peyre, M., Dórea, F., Schauer, B., ... Pfeiffer, D. U. (2016). RISKSUR tools: Taking animal health surveillance into the future through interdisciplinary integration of scientific evidence. Paper presented at the Annual Meeting of the Society for Veterinary Epidemiology and Preventive Medicine, Elsinore, Denmark.
Drewe, J. A., Hoinville, L. J., Cook, A. J. C., Floyd, T., Gunn, G., & Stärk, K. D. C. (2015). SERVAL: A new framework for the evaluation of animal health surveillance. Transboundary and Emerging Diseases, 62, 33–45. https://doi.org/10.1111/tbed.12063
MUELLNER ET AL.
|
7
Drewe, J. A., Hoinville, L. J., Cook, A. J. C., Floyd, T., & Stärk, K. D. C. (2012). Evaluation of animal and public health surveillance systems: A systematic review. Epidemiology and Infection, 140, 575–590. https://doi.org/10.1017/S0950268811002160
European Centre for Disease Prevention and Control. (2014). Data quality monitoring and surveillance system evaluation. Retrieved from http://www.ecdc.europa.eu/en/publications/Publications/Data-quality-monitoring-surveillance-system-evaluation-Sept-2014.pdf (accessed on 18/02/18)
Hendrikx, P., Gay, E., Chazel, M., Moutou, F., Danan, C., Richomme, C., ... Dufour, B. (2011). OASIS: An assessment tool of epidemiological surveillance systems in animal health and food safety. Epidemiology and Infection, 139, 1486–1496. https://doi.org/10.1017/S0950268811000161
Hoinville, L. J., Alban, L., Drewe, J. A., Gibbens, J. C., Gustafson, L., Hasler, B., ... Stärk, K. D. (2013). Proposed terms and concepts for describing and evaluating animal-health surveillance systems. Preventive Veterinary Medicine, 112, 1–12. https://doi.org/10.1016/j.prevetmed.2013.06.006
Ministry of Agriculture and Forestry (2009). Biosecurity surveillance strategy 2020. Wellington, New Zealand: MAF Biosecurity New Zealand.
Stärk, K. D. C. (2012). Evaluating surveillance programmes: Ensuring value for money. Veterinary Record, 171, 421–422. https://doi.org/10.1136/vr.e7124
The RISKSUR Project Consortium. (2013). RISKSUR Task 1.4. The EVA tool: An integrated approach for evaluation of animal health surveillance systems. Retrieved from http://www.fp7-risksur.eu/sites/default/files/documents/Deliverables/RISKSUR%20(310806)_D1.4.pdf (accessed on 18/02/18)
The RISKSUR Project Consortium. (2017). Glossary. Retrieved from http://www.fp7-risksur.eu/terminology/glossary (accessed on 18/02/18)
SUPPORTING INFORMATION
Additional supporting information may be found online in the
Supporting Information section at the end of the article.
How to cite this article: Muellner P, Watts J, Bingham P,
et al. SurF: an innovative framework in biosecurity and animal
health surveillance evaluation. Transbound Emerg Dis.
2018;00:1–8. https://doi.org/10.1111/tbed.12898