ORIGINAL ARTICLE

SurF: an innovative framework in biosecurity and animal health surveillance evaluation

Petra Muellner¹ | Jonathan Watts² | Paul Bingham² | Mark Bullians² | Brendan Gould² | Anjali Pande² | Tim Riding² | Paul Stevens² | Daan Vink² | Katharina DC Stärk³,⁴

¹Epi-interactive, Wellington, New Zealand
²Ministry for Primary Industries, Wellington, New Zealand
³SAFOSO AG, Bern-Liebefeld, Switzerland
⁴Royal Veterinary College, London, UK

Correspondence
J. Watts, Ministry for Primary Industries, Wellington, New Zealand.
Email: Jonathan.Watts@mpi.govt.nz
Summary
Surveillance for biosecurity hazards is conducted by the New Zealand competent authority, the Ministry for Primary Industries (MPI), to support New Zealand's biosecurity system. Surveillance evaluation should be an integral part of the surveillance life cycle, as it provides a means to identify and correct problems and to sustain and enhance the existing strengths of a surveillance system. The surveillance evaluation framework (SurF) presented here was developed to provide a generic framework within which the MPI biosecurity surveillance portfolio, and all of its components, can be consistently assessed. SurF is an innovative, cross-sectoral effort that aims to provide a common umbrella for surveillance evaluation in the animal, plant, environment and aquatic sectors. It supports the conduct of four distinct components of an evaluation project: (i) motivation for the evaluation, (ii) scope of the evaluation, (iii) evaluation design and implementation and (iv) reporting and communication of evaluation outputs. Case studies, prepared by MPI subject matter experts, are included in the framework to guide users in their assessment. Three case studies were used in the development of SurF in order to assure practical utility and to confirm usability of SurF across all included sectors. It is anticipated that the structured approach and information provided by SurF will not only be of benefit to MPI but also to other New Zealand stakeholders. Although SurF was developed for internal use by MPI, it could be applied to any surveillance system in New Zealand or elsewhere.
KEYWORDS
biosecurity, evaluation, surveillance
1 | INTRODUCTION
The New Zealand Ministry for Primary Industries (MPI) undertakes and invests significantly in a range of national biosecurity surveillance activities across the plant, animal, environmental and aquatic sectors (Acosta & White, 2011). Biosecurity surveillance aims to detect hazards such as infectious disease agents or introduced pests and to inform their management. It is thereby part of the larger biosecurity system aimed at reducing biosecurity risks and facilitating trade. These activities underpin New Zealand's ability to enable trade
and to protect itself from biological risks through the early detection of pests and diseases, and the provision of evidence of pest or disease freedom. Given the importance of these activities to New Zealand stakeholders, it is essential that the performance of these programmes can be assessed to provide assurances regarding the quality of delivery and outputs of these programmes. The importance of understanding, and being able to assess, the quality of surveillance programmes was a focus of New Zealand's Biosecurity Surveillance Strategy 2020 (Ministry of Agriculture and Forestry [MAF], 2009), which identified three strategic goals related to the delivery of quality surveillance:
• The most appropriate mix of surveillance activities is chosen to ensure surveillance programmes meet their specific objectives
• Surveillance delivery is effective, efficient and responsive to changes in the biosecurity environment
• The outputs of surveillance programmes can be relied upon by decision makers.
It is also critical to ensure that surveillance programmes are responsive to change and continually evolve to meet changing biosecurity needs in an efficient and responsive manner. As concluded by Drewe et al. (2015), evaluation can be used both to help identify and correct problems and to protect, enhance and provide assurance on the strengths of a surveillance system. Furthermore, in the animal health context, the assessment of surveillance systems is a component of both the import risk analysis and the veterinary services assessment procedures documented by the World Organization for Animal Health (Hendrikx et al., 2011).

The continuous evolution of surveillance systems therefore warrants periodic re-evaluation of their continued relevance and effectiveness and underscores the importance of surveillance evaluation in the surveillance life cycle (Figure 1).
The surveillance evaluation framework (SurF) was developed to provide a consistent generic framework for the assessment of the MPI biosecurity surveillance portfolio, including all of its components. It was also envisaged that, in meeting MPI's cross-sector requirements, this framework could be applied more broadly by others delivering biosecurity surveillance activities. This novel cross-sectoral effort aims to provide a common umbrella for surveillance evaluation in the animal, plant, environment and aquatic (including marine, aquaculture and freshwater) sectors. Here, we present technical details of the framework and its development.
2 | MATERIALS AND METHODS
In order to collate available information and example materials to inform development of the New Zealand biosecurity evaluation framework, a scoping review methodology was used to rapidly map the key concepts underlying surveillance evaluation in different sectors. The terminology proposed by Hoinville et al. (2013) was used wherever possible to align with existing standards. A surveillance evaluation framework was developed based on these findings. Three case studies were developed to test the framework and provide applied guidance to future users.
2.1 | Review methodology
A scoping review technique was used for the purpose of creating a common evidence base for the planning and development of the framework. Scoping reviews are considered a useful and increasingly popular way to collect and organize important background information and to gain an overview of the existing evidence base (Armstrong, Hall, Doyle, & Waters, 2011).
Initially, relevant documents were identified through discussions with stakeholders and surveillance experts. Reference lists of identified publications were considered as additional sources of information. As two extensive reviews, including a full and systematic review of surveillance evaluation in the animal and human health fields, had recently been completed (Calba et al., 2015; Drewe, Hoinville, Cook, Floyd, & Stärk, 2012), it was considered most efficient to build on these rather than duplicate the work already conducted. However, to cover the most recent publications, the literature search query developed by Drewe et al. (2012) was re-run in Web of Science, covering articles published between 2011 and 15 February 2015.
To identify relevant non-animal surveillance publications, a scanning search of the scientific literature database Web of Science was conducted using the Boolean query: Topic = surveillance AND Title = ((surveillance AND (evaluat* OR analy* OR perform*)) OR (evaluat* AND perform*)) AND (environ* OR marine* OR plant*). Through the use of wildcards (*), articles containing any variation of each of the search terms were identified. All articles published in the last 20 years (1995 and later) were included. To cover unpublished work, the grey literature was investigated through a Google web search built on the core search terms described above (surveillance AND (evaluat* OR analy* OR perform*)). The first 200 results were assessed and, if relevant, findings were included in this report.
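To make the wildcard semantics explicit, the following sketch re-implements the title filter of this query in Python using regular expressions, with \w* standing in for the wildcard (*). The function names are our own, and applying the sector terms to topic and title combined is an assumption, since the published query does not name their field.

```python
import re

# Word stems from the Boolean query; \w* plays the role of the wildcard (*).
EVAL_TERMS = r"\b(evaluat\w*|analy\w*|perform\w*)"
SECTOR_TERMS = r"\b(environ\w*|marine\w*|plant\w*)"

def title_matches(title: str) -> bool:
    """Title = ((surveillance AND (evaluat* OR analy* OR perform*))
                OR (evaluat* AND perform*))"""
    t = title.lower()
    return (("surveillance" in t and re.search(EVAL_TERMS, t) is not None)
            or (re.search(r"\bevaluat\w*", t) is not None
                and re.search(r"\bperform\w*", t) is not None))

def is_candidate(topic: str, title: str) -> bool:
    """Full query; sector terms are checked against topic and title combined."""
    return ("surveillance" in topic.lower()
            and title_matches(title)
            and re.search(SECTOR_TERMS, f"{topic} {title}".lower()) is not None)

# Toy usage on an invented record:
print(is_candidate("surveillance", "Evaluating marine pest surveillance"))  # True
```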
FIGURE 1 Evaluation as part of the surveillance life cycle
2.2 | Framework development
A project team was assembled consisting of subject matter experts from the biosecurity sectors that the framework was aiming to cover. This included MPI experts from the environmental, aquatic, plant and terrestrial animal surveillance teams plus two external epidemiologists. Taking into account the literature review outcomes, the framework was specified during regular face-to-face group meetings that took place over an 18-month period. Case studies were prepared by MPI subject matter experts between September and December 2015, using data and information that were already available. The objective of the case studies was to provide a proof of concept, demonstrating that the framework was robust, complete, fit for purpose and user-friendly across the different biosecurity sectors it targets. Further, the case studies were used to identify any framework components that needed rewording or further refinement.
3 | RESULTS
3.1 | Review results
The updated search based on the query of Drewe et al. (2012) identified a total of 1,531 articles. All titles were scanned by the assessor. If a title appeared relevant to this review, the abstract was retrieved and reviewed. Although a large number of titles were returned by the search, only one additional article (Hoinville et al., 2013) of relevance to the objectives of this review and not included in the reference lists of Drewe et al. (2012) or Calba et al. (2015) could be identified. In addition to the animal and human health-focused publications, the literature searches specific to the environmental, marine and plant sectors delivered a total of 79 titles. The assessor scanned all titles returned, and no articles of relevance to the objective of this review could be identified. A complete list of all articles retrieved and assessed by the above-described protocols is available on request. The search of the grey literature identified one additional publication of relevance from public health surveillance (European Centre for Disease Prevention and Control, 2014).
In conclusion, although a structured search was conducted, no evaluation frameworks specific to surveillance in the environmental, aquatic or plant sectors were identified by the scoping review. Current efforts appear concentrated on the evaluation of public health and animal health surveillance; however, existing frameworks offered the flexibility to be adapted to support the wider context of New Zealand biosecurity surveillance. It was therefore decided to build SurF on previous work conducted nationally and internationally in the context of the evaluation of human and animal health surveillance. This included, in particular, the SERVAL framework (Drewe et al., 2015), the recently published guidelines by the European Centre for Disease Prevention and Control (ECDC) (2014), the Centers for Disease Control and Prevention (CDC) guidelines (2001) and the EVA tool (Comin et al., 2016; The RISKSUR Project Consortium, 2013).
3.2 | Framework development
Any framework for biosecurity surveillance evaluation will have to be very flexible and generic, as not only programmes with different objectives but also programmes in different sectors have to be assessed. Following the scoping review and expert discussions, it was concluded that several existing evaluation frameworks, while not originating from a cross-sectoral biosecurity surveillance perspective, could be readily adapted to the New Zealand requirements. Following a series of expert meetings, it was concluded that SERVAL and EVA were the most suitable tools to build upon, as they offer the flexibility required to answer the diversity of evaluation questions that needed to be addressed while building on existing literature and good practice standards. Based on the findings of the review and the above considerations, SurF consists of four components, each supporting a distinct phase in the evaluation:

1. Motivation for the evaluation
2. Scope of the evaluation
3. Evaluation design and implementation
4. Reporting and communication of evaluation outputs.
Each component describes the activities and decisions related to a phase within an evaluation project. Table 1 provides a schematic overview of the four components and their individual content. The framework and the supporting guidance notes describe the aspects to be considered during each specific activity of the evaluation process. Depending on the situation and the system under evaluation, it might not be possible to assess or describe all components in full detail; any deviations from the full protocol are therefore documented to ensure consistency. Further, for convenience, SurF provides users with an evaluation template to support consistency of outputs (Supporting Information 1 SurF Evaluation Template).
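To make the structure of these four components concrete, the following minimal sketch captures an evaluation record in Python dataclasses. This is purely illustrative: the class and field names paraphrase Table 1 and are not defined by SurF itself.

```python
from dataclasses import dataclass, field

# Hypothetical record mirroring SurF's four components (Table 1).
# Field names paraphrase the framework text; none are prescribed by SurF.
@dataclass
class Motivation:
    trigger: str                 # I.A Evaluation trigger
    context: str                 # I.B Context

@dataclass
class Scope:
    objective: str               # II.A Evaluation objective
    questions: list[str]         # II.B Evaluation question(s)
    time_and_resources: str      # II.C Time and resources
    intensity: str               # II.D Evaluation intensity
    team: list[str]              # II.E Organization and composition of team
    output_status: str           # II.F Status of evaluation outputs

@dataclass
class Evaluation:
    motivation: Motivation       # component I
    scope: Scope                 # component II
    attributes: list[str] = field(default_factory=list)        # III: selected attributes
    findings: dict[str, str] = field(default_factory=dict)     # III: attribute -> appraisal
    recommendations: list[str] = field(default_factory=list)   # IV: reporting
```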
SurF includes a total of 29 different attributes (Table 2), which are divided into core attributes (n = 10; highlighted in bold) and accessory attributes (n = 19). Inclusion and categorization of attributes were jointly decided by the different experts participating in the framework development, including experts representing each of the biosecurity sectors. However, the attributes, their definitions and the recommended methods for assessment build on existing frameworks, in particular SERVAL and EVA, but also the review of Drewe et al. (2015) and the Centers for Disease Control and Prevention (2001) and ECDC (2014) guidelines on surveillance evaluation and monitoring. SurF also includes some additional attributes, which were developed with the objectives and scope of SurF in mind, for example "Field and laboratory services." Also, some previously proposed attributes were modified to provide the framework with sufficient flexibility to be used across the whole spectrum of New Zealand's biosecurity surveillance portfolio. This was an important component of the development, as existing frameworks were focused on surveillance of human or animal disease, while the biosecurity context of
this project required extending several definitions to also encompass
other risk organisms such as invasive aquatic species or pests of
plants. Therefore, consideration was given to compatibility with plant
and aquatic health and surveillance terminology. Ecological concepts
and related terminology also had to be included to encompass the
non-animal health sectors.
As in the SERVAL framework (Drewe et al., 2015), traffic-light coding is used in SurF to provide a standardized summary appraisal for each of the attributes.
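For illustration, the three-level appraisal could be encoded as follows. The level labels are taken from the legend of Figure 3; the enum itself is a hypothetical sketch, not part of the published framework.

```python
from enum import Enum

# Illustrative encoding of SurF's traffic-light appraisal levels.
# Labels follow the legend of Figure 3; the class itself is hypothetical.
class TrafficLight(Enum):
    GREEN = "Excellent or very good"
    AMBER = "Good, though room for improvement"
    RED = "In need of attention"

# Toy appraisal of two attributes (not real evaluation results):
appraisal = {"Timeliness": TrafficLight.GREEN, "Coverage": TrafficLight.RED}
```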
Within SurF, attributes are grouped into five "Functional Attribute Groups" based on the logic presented in Figure 2. Each group includes at least one attribute that is considered to be a core attribute. Core attributes assess essential aspects common to all surveillance systems, and it is recommended that they be included in all evaluations. If for any reason this is not done, justification has to be provided. The choice of accessory attributes is left to the evaluator's judgement and is not specified in SurF. The choice will ultimately be situation- and sector-specific and may be influenced by factors such as the evaluation question, the surveillance objective or the surveillance system's design.
Detailed guidance for the assessment of each SurF attribute is provided in dedicated guidance notes. While the aim was to align with existing standards such as those proposed by SERVAL (Drewe et al., 2015), the EVA tool (Comin et al., 2016; The RISKSUR Project Consortium, 2013) or Hoinville et al. (2013), at times the wording of the guidance had to be adapted to meet the needs of the non-terrestrial animal health sectors. For example, the text had to be extended to also apply to unwanted pest organisms (such as invasive plant or insect species) and hence had to consider, for example, an organism's habitat or the search efficiency of an activity. In addition, a methods' catalogue has been compiled to further support attribute assessment by the various groups and to support the development of standard operating procedures. SurF further provides a visual output that allows for comparison of core performance between systems and within individual systems over time (Figure 3).
TABLE 1 Overview of the evaluation process described in SurF
Identification of the system under evaluation
I. Motivation for the evaluation
A. Evaluation trigger
B. Context
II. Scope of the evaluation
A. Evaluation objective
B. Evaluation question(s)
C. Time and resources
D. Evaluation intensity
E. Evaluation organization and composition of evaluation team
F. Status of evaluation outputs
III. Evaluation design and implementation
Design of the evaluation
A. Select attributes from master list
B. Choose methods to assess attributes
C. Make an inventory of available information sources about
the system
D. Identify missing information
Implementation of the evaluation
A. Describe the surveillance system under evaluation
B. Describe the surveillance system’s objective(s)
C. Describe the organizational structure
D. Identify and engage surveillance system users
E. Identify the target population and geographical coverage
F. Describe the design of the surveillance system
G. Describe the processes
H. Collect data and information
I. Assess the included attributes
IV. Reporting and communication of evaluation outputs
A. State target audience
B. Report main findings
C. Summarize and synthesize results
D. Provide guidance for interpretation of results
E. Make recommendations
F. Facilitate plain reporting
TABLE 2 List of core and accessory attributes included in SurF
(n=29). Core attributes are highlighted in bold
Functional attribute
group Attribute
A. Organization &
management
1. Flexibility
2. Organization and management
3. Performance indicators and evaluation
B. Processes 4. Data analysis
5. Data and information collection
6. Data management and storage
7. Field and laboratory services
8. Resource availability
9. Technical competence and training
C. Technical
implementation
10. Acceptability and engagement
11. Coverage
12. Data completeness and correctness
13. Interoperability
14. Multiple utility
15. RARR (reliability, availability, repeatability and robustness)
16. Timeliness
D. Outputs 17. Historical data
18. Negative predictive value
19. Positive predictive value
20. Precision
21. Representativeness and bias
22. Sensitivity
23. Specificity
E. Impact 24. Benefit
25. Decision support
26. Efficiency
27. External communication and
dissemination
28. Internal communication
29. Utility
3.3 | Framework testing
Three case studies, comprising the National Apiculture Surveillance Programme (NASP), the Marine High Risk Site Surveillance Programme (MHRSS) and the Forestry High Risk Site Surveillance Programme (HRSS), were used to demonstrate how SurF can be used in ongoing surveillance activities. The case studies were also used in the development of SurF in order to assure practical utility and to confirm usability of SurF across all included sectors. In brief, the first collaboratively developed framework version was tested on the case studies, and the expert group was reconvened once all studies were completed. The expert group then jointly discussed the outcomes and, where required, made adjustments to the framework, mainly to improve the clarity of wording and application to the non-animal sectors. Since SurF builds on published animal health surveillance evaluation frameworks, its application to the selected animal health case study was straightforward, while a large part of the experts' discussions at this final stage of the development process focused on ensuring that SurF is fit for purpose for all biosecurity sectors it will be applied to. While no detailed results can be provided here for confidentiality reasons, overall the experts agreed that the framework could successfully be used to evaluate the diverse set of case studies and that it was ready to be rolled out for routine use within MPI.
4 | DISCUSSION
The MPI evaluation framework was designed to ensure consistency in the evaluation of different biosecurity surveillance systems by providing a robust process that is not sector- or context-specific. This should also make the results of evaluations comparable and easily interpretable by managers. SurF draws from existing surveillance frameworks and, where appropriate, adopts what has been developed elsewhere. Its greatest innovation lies in extending animal health-specific designs to plant, environment and aquatic biosecurity surveillance, and in combining these with animal health biosecurity surveillance under a common umbrella. This is a valuable new development, as it can provide organizations like MPI, whose mandate encompasses several sectors, with a standardized means to evaluate the surveillance activities under their care.
The aim was to develop a generic framework that allows sufficient flexibility for use across the wide range of MPI surveillance systems and supports the comparison and assessment of system performance. While the standardized assessment of core attributes provides consistency between the assessments of different systems, the choice of accessory attributes allows users to tailor the evaluation to unique contexts. SurF provides users with a large amount of flexibility in the selection of attributes. This differs from recently published animal surveillance frameworks that emphasize alignment of attributes with specific surveillance objectives, for example early detection or freedom from disease (Comin et al., 2016; Drewe et al., 2015; The RISKSUR Project Consortium, 2013). Further, a substantial number of attributes are included in SurF to accommodate the diversity and unique context of MPI's surveillance systems. Although SurF was developed for internal use by MPI, it was envisaged to be useful for reviewing any biosecurity surveillance system, including surveillance conducted by others operating within the biosecurity system. While SurF has been built for surveillance evaluation in the animal, plant, environment and aquatic sectors, it could, by extension, also support human health surveillance, for example where mosquito surveillance programmes inform surveillance of vector-borne diseases.
Although a formal literature search was conducted, no evaluation frameworks specific to surveillance in the environmental, aquatic or plant sectors could be identified by this scoping review. SurF was built on the assumption that a cross-sector framework can be developed using existing frameworks and attributes, while performance indicators can be adapted to meet the needs and realities of the different sectors. While approaches within the different disciplines differ slightly (e.g., public health surveillance evaluations tend to be more qualitative than animal health surveillance evaluations), the general concepts are transferable and have informed the development of a biosecurity surveillance evaluation framework for New Zealand. There is a range of ways evaluation can be conducted, matched by the diversity of possible evaluation questions. The specific evaluation design will be highly influenced by the evaluation question (Stärk, 2012), and each type of surveillance system will require a tailored evaluation effort (European Centre for Disease Prevention and Control, 2014). However, recommendations regarding the generic workflow of an evaluation and evaluation best practice have previously been made.
FIGURE 2 Logic of functional attribute groups (a–e) used in SurF: (a) Organization & management, (b) Processes, (c) Technical implementation, (d) Output, (e) Impact
FIGURE 3 Visual outputs of performance assessment of attributes using the SurF framework. The format allows comparison between different evaluations or systems (described here as "System 1" and "System 2"). Attributes assessed positively are always placed at the top of the process box, while those in potential need of attention are placed below. Legend: Excellent or very good; Good, though room for improvement; In need of attention
The case studies were commissioned with the goal of testing SurF and providing applied guidance to future SurF users. As such, they provide non-peer-reviewed example evaluations that illustrate the framework in use, ready at hand to support MPI users of the framework.
Attribute assessment by SurF is supported by a visual output. At the individual evaluation level, this allows quick assessment of a system's strengths and weaknesses and, in addition to the evaluation template, standardizes the reporting of SurF results across different evaluations. An additional element of SurF is the framework's ability to support the assessment of the performance of MPI's surveillance systems and programmes, providing assurances around the quality of delivery and the outputs of those programmes. This may include business intelligence reporting requirements such as the number of MPI surveillance systems that have elements in need of attention, or the percentage of systems with the majority of attributes rated as good or excellent. However, the latter functionality should be applied with caution, as it assumes that all attributes have the same weight, which is almost certainly not the case. Furthermore, previous results could be used to benchmark performance over time, if evaluations are conducted consistently and results are reported in a comparable format. We recommend using this feature mainly for providing a quick overview; users should still refer to the detailed evaluation text to gain an in-depth understanding of each attribute and its assessment.
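As a worked example of such business intelligence summaries, the sketch below counts systems with attributes in need of attention and the share of systems with a majority of attributes rated good or better. Ratings are represented as plain strings matching the traffic-light levels sketched earlier; treating the amber level as "good" is an assumption, and, as cautioned above, the calculation weights all attributes equally.

```python
# Each evaluation is a mapping attribute -> traffic-light level ("GREEN",
# "AMBER", "RED"); this representation is assumed for illustration.
def needs_attention(evaluation: dict[str, str]) -> bool:
    """True if any attribute is rated 'in need of attention' (RED)."""
    return any(level == "RED" for level in evaluation.values())

def share_mostly_good(evaluations: list[dict[str, str]]) -> float:
    """Percentage of systems where a majority of attributes are GREEN or AMBER.
    Note: every attribute is weighted equally, per the caveat in the text."""
    if not evaluations:
        return 0.0
    ok = sum(1 for e in evaluations
             if sum(level in ("GREEN", "AMBER") for level in e.values()) > len(e) / 2)
    return 100 * ok / len(evaluations)

# Toy portfolio of two systems (invented ratings):
portfolio = [{"Timeliness": "GREEN", "Coverage": "RED", "Sensitivity": "AMBER"},
             {"Timeliness": "RED", "Coverage": "RED", "Sensitivity": "GREEN"}]
print(sum(needs_attention(e) for e in portfolio))  # -> 2 systems need attention
print(share_mostly_good(portfolio))                # -> 50.0
```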
As outlined by Drewe et al. (2012), until recently there had been little comprehensive evaluation taking into account all aspects of a programme, with quantitative indicators dominating at the cost of qualitative descriptors such as flexibility or acceptance of the programmes (Stärk, 2012). While economic evaluation is strongly recommended as an integral part of a comprehensive evaluation framework, it is not commonly done and can be practically challenging (Drewe et al., 2015). Stakeholder participation or consultation is highly recommended in the literature to capture the programmes' acceptability, sustainability and impact (Calba et al., 2015). The importance of a high standard of documentation, including the value of visual outputs to support practical implementation of an evaluation effort, has been highlighted (Drewe et al., 2015). These were all important considerations in the development of SurF.
Differences in the use of terminology can pose major challenges to collaboration and cross-sectoral efforts such as SurF. However, the use of consistent, specified terminology that is understood across sectors facilitates internal and external communication and the implementation of any evaluation. The development of SurF aided the project team in understanding where terminology and methods differ between sectors, and this new appreciation will likely lead to improved cross-sectoral collaboration in the future. The proposed terminology is based on current good practice in animal surveillance evaluation in an international context (The RISKSUR Project Consortium, 2017) but was extended in close collaboration with subject matter experts to align with the requirements of other sectors. However, it is noted that terminology is dynamic and can vary between sectors. It was therefore recommended that terminology be discussed and updated regularly as the framework is being used, to assure a common understanding among users.
Designing and implementing surveillance programmes is becoming increasingly challenging (The RISKSUR Project Consortium, 2013) as factors like climate change and globalization affect population health and the risk of biosecurity incursions. A structured, transparent and logical evaluation process supports outputs that can become a source of assurance and credibility for the system examined (Drewe et al., 2015), both nationally and internationally. To our understanding, SurF is the first framework of its kind, providing a unique cross-sectoral approach to surveillance evaluation. SurF is accessible via the MPI website: https://www.mpi.govt.nz/dmsdocument/18091-surveillance-evaluation-framework-surf-main-document and https://mpi.govt.nz/dmsdocument/18094-surveillance-evaluation-framework-surf-appendix-1-surf-methods-catalogue.
ACKNOWLEDGEMENTS
This work would not have been possible without recent advances in animal health surveillance evaluation. We would in particular like to acknowledge SERVAL (Julian Drewe and colleagues) and the EVA tool (RISKSUR project team), which have both informed the development of SurF. The recent work on surveillance terminology by Linda Hoinville has also provided a foundation for this work to build on. Funding was provided by the Ministry for Primary Industries (New Zealand).
ORCID
Jonathan Watts http://orcid.org/0000-0001-6069-4137
Katharina DC Stärk http://orcid.org/0000-0002-0553-5499
REFERENCES
Acosta, H., & White, P. (2011). Atlas of Biosecurity Surveillance. Wellington, New Zealand: Ministry of Agriculture and Forestry. Retrieved from http://www.mpi.govt.nz/mpi-surveillance-guide/atlas.pdf (accessed on 18/02/18)
Armstrong, R., Hall, B. J., Doyle, J., & Waters, E. (2011). Cochrane update. 'Scoping the scope' of a Cochrane review. Journal of Public Health, 33, 147–150. https://doi.org/10.1093/pubmed/fdr015
Calba, C., Goutard, F. L., Hoinville, L., Hendrikx, P., Lindberg, A., Saegerman, C., & Peyre, M. (2015). Surveillance systems evaluation: A review of existing guides. BMC Public Health, 15, 448. https://doi.org/10.1186/s12889-015-1791-5
Centers for Disease Control and Prevention. (2001). Morbidity and Mortality Weekly Report, 50, 1–35.
Comin, A., Haesler, B., Hoinville, L., Peyre, M., Dórea, F., Schauer, B., ... Pfeiffer, D. U. (2016). RISKSUR tools: Taking animal health surveillance into the future through interdisciplinary integration of scientific evidence. Paper presented at the Annual Meeting of the Society for Veterinary Epidemiology and Preventive Medicine, Elsinore, Denmark.
Drewe, J. A., Hoinville, L. J., Cook, A. J. C., Floyd, T., Gunn, G., & Stärk, K. D. C. (2015). SERVAL: A new framework for the evaluation of animal health surveillance. Transboundary and Emerging Diseases, 62, 33–45. https://doi.org/10.1111/tbed.12063
Drewe, J. A., Hoinville, L. J., Cook, A. J. C., Floyd, T., & Stärk, K. D. C. (2012). Evaluation of animal and public health surveillance systems: A systematic review. Epidemiology and Infection, 140, 575–590. https://doi.org/10.1017/S0950268811002160
European Centre for Disease Prevention and Control. (2014). Data quality monitoring and surveillance system evaluation. Retrieved from http://www.ecdc.europa.eu/en/publications/Publications/Data-quality-monitoring-surveillance-system-evaluation-Sept-2014.pdf (accessed on 18/02/18)
Hendrikx, P., Gay, E., Chazel, M., Moutou, F., Danan, C., Richomme, C., ... Dufour, B. (2011). OASIS: An assessment tool of epidemiological surveillance systems in animal health and food safety. Epidemiology and Infection, 139, 1486–1496. https://doi.org/10.1017/S0950268811000161
Hoinville, L. J., Alban, L., Drewe, J. A., Gibbens, J. C., Gustafson, L., Hasler, B., ... Stärk, K. D. (2013). Proposed terms and concepts for describing and evaluating animal-health surveillance systems. Preventive Veterinary Medicine, 112, 1–12. https://doi.org/10.1016/j.prevetmed.2013.06.006
Ministry of Agriculture and Forestry (2009). Biosecurity Surveillance Strategy 2020. Wellington, New Zealand: MAF Biosecurity New Zealand.
Stärk, K. D. C. (2012). Evaluating surveillance programmes: Ensuring value for money. Veterinary Record, 171, 421–422. https://doi.org/10.1136/vr.e7124
The RISKSUR Project Consortium. (2013). RISKSUR Task 1.4. The EVA tool: An integrated approach for evaluation of animal health surveillance systems. Retrieved from http://www.fp7-risksur.eu/sites/default/files/documents/Deliverables/RISKSUR%20(310806)_D1.4.pdf (accessed on 18/02/18)
The RISKSUR Project Consortium. (2017). Glossary. Retrieved from http://www.fp7-risksur.eu/terminology/glossary (accessed on 18/02/18)
SUPPORTING INFORMATION
Additional supporting information may be found online in the
Supporting Information section at the end of the article.
How to cite this article: Muellner P, Watts J, Bingham P,
et al. SurF: an innovative framework in biosecurity and animal
health surveillance evaluation. Transbound Emerg Dis.
2018;00:1–8. https://doi.org/10.1111/tbed.12898