Do We Need More Evidence-Based Survey Guidance?


In Practice – Bulletin of the Chartered Institute of Ecology and Environmental Management
Issue 100 | June 2018
Big Ideas: Do We Need More Evidence-Based Survey Guidance?
Carlos Abrahams MCIEEM and Darryn J. Nash CEcol MCIEEM
Keywords: evidence-based, good practice, guidance, monitoring, survey
As ecologists and environmental managers, we rely on good quality baseline information. However, the survey methods we currently employ are often unsupported by scientific testing and are not proven to provide high quality outputs. As a community of practitioners, we should seek to change this, taking on board new research and technological developments – and building more evidence explicitly into our survey guidance.
As ecologists and environmental managers, the data we gather through survey and monitoring programmes is vitally important in all aspects of our work. It allows us to predict impacts with some level of confidence, track and anticipate trends in biodiversity, and assess whether our management interventions are working – or not. To generate good quality data though, we need good quality survey methods, which are developed, reviewed and updated in line with existing evidence, new scientific findings and technological developments (Figure 1).
To an extent, we already have reasonable survey methods, which have provided much useful information in national monitoring programmes or in site-based assessments. We are lucky in the UK to have a well-developed history of voluntary and professional work in the conservation sector, and long-established standards for surveying flora and fauna. However, if we consider the age of some extant survey guidance (such as the Great Crested Newt Mitigation Guidelines, English Nature 2001) against the pace of research and technological change, the need for ongoing updates becomes clear.

Figure 1. GPS-enabled tablets allow accurate field recording, with forms that can be customised to different types of survey or sites, to allow standardised data collection. Photo credit: Carlos Abrahams.
We all have a responsibility to ensure that our survey methods are fit for purpose. Both BS 42020 (BSI 2013) and the CIEEM Code of Professional Conduct require that methods used to undertake surveys should follow published good practice guidelines where they exist. However, if published guidance is out of date and/or better techniques have been developed, then we should take new, innovative approaches where these could provide a better outcome. To make this type of judgment call we should be basing our decisions on evidence of what actually works best for our particular needs. However, in the first instance, how much of our established and published good practice guidance is based on evidence? How frequently has testing of methods been undertaken, allowing comparisons between different survey approaches? And how many of our methods have been developed for site-based assessments by professionals, rather than for national monitoring by citizen scientists? For example, why do we still apply the Great Crested Newt Mitigation Guidelines recommendation of four visits for presence/absence surveys and six for population size class assessment (English Nature 2001), when recent publications (Kropfli et al. 2010, Sewell et al. 2013) state that up to six visits may be required to accurately record presence/absence at some ponds, and seven to eight surveys are needed to consistently gauge population numbers (although the population size class can probably be determined at the majority of sites from only four visits; Wynn 2013)?
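The arithmetic behind these visit numbers is worth making explicit. If each visit to an occupied pond detects the species with some probability, the chance of missing it on every visit shrinks geometrically, and the number of visits needed for a given level of confidence follows directly. A minimal sketch: the per-visit detectability values below are illustrative assumptions (of the order reported by Kropfli et al. 2010 for flashlight surveys), not figures taken from any guidance document.

```python
import math

def visits_needed(p_visit, confidence=0.95):
    """Minimum number of independent visits so that the probability of
    at least one detection at an occupied pond reaches `confidence`.
    Solves (1 - p_visit) ** n <= 1 - confidence for n."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - p_visit))

# Illustrative per-visit detectabilities, not values from any guidance.
for p in (0.3, 0.4, 0.5):
    print(f"p = {p}: {visits_needed(p)} visits for 95% confidence")
```

At a per-visit detectability of 0.4 this gives six visits, consistent with the 'up to six visits' finding cited above; at lower detectabilities the requirement rises quickly, which is exactly why fixed visit numbers in guidance need an evidential basis.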
CIEEM and its contributing members have
done a very useful job in recent years of
compiling the Sources of Survey Methods,
and following this up with A Guide to
Good Practice Guidance, as highlighted by
Sally Hayns in the December 2017 issue of
In Practice (Hayns 2017). Both resources
list a wide range of references, which form
the canon of our professional practice
as ecologists. In January 2016, CIEEM
also produced the excellent Principles of
Preparing Good Guidance for Ecologists
and Environmental Managers. This states
at PRINCIPLE IV that good guidance should
be explicitly based on good evidence:
‘All guidance should be evidence-based and should reference original sources, where available, that illustrate that the techniques recommended are appropriate… Where guidance is based on existing good practice, but the scientific evidence supporting it is limited, this should be stated and there should be sufficient flexibility in the guidance to allow for individuals to innovate. Scientific testing, e.g. comparative studies of different techniques, is strongly recommended where new approaches are suggested and the results should be published widely.’
This principle sets out an aspiration for our survey guidance that is not being regularly met in our current documentation. Any review of guidance drawn from a range of sources will show that the reasons being put forward for specific recommendations are often not clear or appropriately justified, even though the actual methods may be set out in great detail. This omission is well demonstrated in some of our most commonly used publications.
Survey Methods
Bats: The Bat Conservation Trust’s (BCT) Bat Surveys for Professional Ecologists (Collins 2016) is one of the best pieces of guidance that we have available, and has been repeatedly updated to its current third edition. However, some areas remain that could benefit from increased explanation and by reference to the scientific literature. When conducting bat surveys, a critical first step in determining the level of survey effort to be employed at a site is a habitat quality assessment into low, medium or high categories. This translates into the number of surveys that should be undertaken, with 1-3 emergence surveys, or 3-12 transects, being recommended. Although the guidance for this habitat assessment process has been improved in the third edition, it is still limited and qualitative, with no obvious basis in evidence. Furthermore, why does the guidance recommend one visit to low-potential roost features and three visits to high-potential features – and why this way round? Has this approach been tested to determine whether it will provide accurate information about roost presence or absence? If so, it would be very useful to see the underlying evidence. The inclusion of background research would serve to increase confidence in the method and would reassure bat surveyors that the recommendations will provide sound and valid data. However, the broad rules of thumb put forward as ‘good practice’ in the BCT guidance don’t appear to be based on scientific studies that determine how much survey is appropriate, or how survey effort should be programmed through the season. Research that has carried out method testing should be incorporated into guidance, and could help to improve the protocols for assessing building roosts (Underhill-Day 2017), inform the levels of survey effort needed to detect common or rare species at sampling locations (Skalak et al. 2012), and identify which type of bat detector we should be using to capture call data (Figure 2) (Adams et al. 2012).

Figure 2. Full-spectrum audio recording allows high quality acoustic data to be collected from vocal species groups, such as bats and birds. Photo credit: Carlos Abrahams.

Birds: There are a number of recognised survey methods for birds, depending on the habitats and taxa being targeted (Gilbert et al. 1998). However, many of these are designed for national survey programmes by volunteers, rather than being optimised for the needs of smaller-scale site assessments, such as EcIA studies. A notable exception is the windfarm survey guidance produced by the statutory authorities, e.g. Scottish Natural Heritage (2014). For breeding bird studies, the majority of consultants will probably use a territory mapping approach, based on the Common Birds Census (Marchant 1983). This method is useful for providing detailed information on the distribution of bird territories, but is time-consuming, and difficult to apply and interpret. As there is no set number of site visits for this method when used by consultants, the number of surveys carried out within EcIA studies is often determined by the consultant’s qualitative assessment of the site or their own established practice. The appropriate level of survey effort required to accurately assess the composition and species-richness of a bird assemblage in a particular location has not been determined in many cases (Calladine et al. 2009). In addition, territory mapping may not even be the best option for EcIA purposes: point counts, line transects or bioacoustic recording might provide equal or better quality data, and probably with less survey effort (Figure 3) (Abrahams and Denny 2018; Gregory et al. 2004).
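The effect of extra visits on how complete a bird species list becomes can be sketched in the same probabilistic terms. Assuming each species has its own per-visit detection probability (the values below are invented for illustration, not drawn from any survey dataset), the expected number of species recorded after n visits is the sum of each species’ chance of being detected at least once:

```python
def expected_richness(det_probs, n_visits):
    """Expected number of species recorded after n_visits, given each
    species' per-visit detection probability."""
    return sum(1 - (1 - p) ** n_visits for p in det_probs)

# Hypothetical assemblage: a few conspicuous species, several cryptic ones.
assemblage = [0.9, 0.9, 0.8, 0.6, 0.5, 0.3, 0.3, 0.2, 0.1]
for n in (2, 4, 6):
    print(f"{n} visits: ~{expected_richness(assemblage, n):.1f} of "
          f"{len(assemblage)} species expected")
```

The curve flattens as visits are added, but the cryptic species hold the recorded total below the true richness for many visits; this is the shape of the problem that Calladine et al. (2009) quantify for moorland breeding birds.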
Reptiles: Our current reptile survey
guidance consists principally of Froglife’s
(1999) ‘Advice Sheet 10: Reptile Surveys’.
There was an attempt to update this
with Natural England’s (2011) Mitigation
Guidelines (TIN102), which were rapidly
withdrawn, and the more recent survey
protocols from Sewell et al. (2013),
which incorporated seasonal variations
in detectability by species. This latter
document was perhaps the first major
advance in our approach to reptile survey
in the past two decades, but remains
unknown to many practising ecologists.
The lack of scientific support for established
methods and the need for improved
approaches was recently highlighted in a
review of reptile monitoring programmes
(Nash 2018), which showed that new
evidence is available to support the revision
of survey protocols (Figure 4).
Using Evidence
We need to use science more to tell us
the answers to two important questions:
(i) which survey methods are best – or at
least ‘good’, and (ii) how much survey
effort is needed to generate a sound
understanding of a study area? If we
want to develop robust and accurate
ecological baselines for Environmental
Impact Assessments (and other purposes),
then we should make sure that our
methods are up to the job. It may be that
the methods we currently employ are
just fine, and incorporating referenced
research into our existing guidance would
allow us to demonstrate this. If so, we
have no need for concern. However, if the
methods we use have no demonstrable
scientific basis then we need to recognise
this as an industry and develop new
protocols over time to promote the best
practicable methods for data collection,
clearly based on evidence. After all, this
is the absolute bedrock of our day-to-day
work, on which we base assessments,
make recommendations and stake our
reputations. How can we not take a more
evidence-based approach to survey?
Creating survey guidance is a hard and
thankless task. Building the content, gaining
agreement from a range of professionals
with their own views and experiences, and
then getting organisations to approve the
finished article will never be easy. Griffiths
et al. (2015) note that ‘The uptake of new methods by professional practice will … be strongly influenced by cost, practicality
and the explicit requirements of regulatory
authorities’. However, there is always room
for developments in practice where these
are supported by good argument and good
evidence, so each of us as individuals – and as a community of practitioners – is free to pave new ways where they are needed.
One could (correctly) argue that professional
judgment should be applied by all ecologists
when designing their surveys, and we
should all be prepared and able to go
beyond standard survey guidance. However,
we don’t always have time to keep up to
date with technical developments in all the
fields in which we might work. Accessing
information on methodological advances
can be difficult in itself, especially for those
who aren’t fortunate enough to have access
to the scientific literature.
To help develop a better scientific context
for our published guidance, there are a
number of ways forward. Firstly, any new
guidance that is produced should explicitly
state the evidence on which it is based,
and provide appropriate references. Or, if it is only based on best-guess rules of thumb, this should be stated clearly. Secondly, consultants, consultees and regulators should all take a more flexible approach to survey methods, and concentrate more on the quality (and meaning) of outputs rather than whether a standard protocol has been slavishly followed. Most importantly though, we would make a call for a ‘Survey Evidence’ initiative for ecologists, along similar lines to Conservation Evidence. This would gather, assess and disseminate research findings to allow optimal survey and monitoring recommendations to be developed. This could be done within an organisational setting or, perhaps better, in a crowd-sourced, Wikipedia-style, online forum to which anyone interested could contribute. Such an approach would allow new research findings to be added regularly, allowing constant ongoing development of scientifically supported survey methods and technological innovations – and rapid communication of these across the sector, instead of waiting for irregular approval by a formal authority. It would be independent, authoritative and available to all, demonstrating good practice for our work and enabling us to make better, informed decisions on how we gather data. It would require us to examine our established, and often outdated, methods. In the end, it would raise the questions we should all be asking ourselves. Is our good practice guidance actually proven to be good enough? And if not, how can we all make it better?

Figure 3. The use of bioacoustics is common practice for bat surveyors, but could be used effectively by ecologists studying other groups of species. Here an acoustic recorder is deployed to record capercaillie Tetrao urogallus in north-east Scotland. Photo credit: Carlos Abrahams.

Figure 4. The use of artificial cover objects (ACOs) has long been the mainstay of reptile surveys. In the absence of rigorous scientific testing, there are still disagreements over the number, material and colour of ACOs that should be used. Photo credit: Carlos Abrahams.

Acknowledgements
Many thanks to Dr Claire Wordley at Conservation Evidence, Dr Gill Kerby at CIEEM, and colleagues at Baker Consultants for comments on an earlier draft.

About the Authors
Carlos Abrahams is Technical Director at Baker Consultants in Derbyshire and a Senior Lecturer on the CIEEM-accredited BSc at Nottingham Trent University. He has been an ecology consultant for 17 years, following earlier work in countryside management. He has research interests in drawdown zones, amphibian ecology and bird bioacoustics.
Contact Carlos at:

Dr Darryn Nash is a Principal Ecologist with AECOM based in Bristol. He has recently completed his doctorate at the University of Kent, where he assessed whether translocation was effective in mitigating reptile-development conflict.
Contact Darryn at:

References
Abrahams, C. and Denny, M. (2018). A first test of unattended, acoustic recorders for monitoring capercaillie (Tetrao urogallus L.) lekking activity. Bird Study, in press.
Adams, A.M., Jantzen, M.K., Hamilton, R.M. and Fenton, M.B. (2012). Do you hear what I hear? Implications of detector selection for acoustic monitoring of bats. Methods in Ecology and Evolution, 3: 992–998.
BSI (2013). BS 42020: Biodiversity – Code of practice for planning and development. British Standards Institution, London.
Calladine, J., Garner, G., Wernham, C. and Thiel, A. (2009). The influence of survey frequency on population estimates of moorland breeding birds. Bird Study, 56(3): 381–388.
Collins, J. (ed.) (2016). Bat Surveys for Professional Ecologists: Good Practice Guidelines (3rd edn). The Bat Conservation Trust, London.
English Nature (2001). Great Crested Newt Mitigation Guidelines. English Nature, Peterborough.
Froglife (1999). Advice Sheet 10: Reptile Survey. Froglife, London.
Gilbert, G., Gibbons, D.W. and Evans, J. (1998). Bird Monitoring Methods. Pelagic Publishing Ltd, London.
Gregory, R.D., Gibbons, D.W. and Donald, P.F. (2004). Bird census and survey techniques. In: W.J. Sutherland, I. Newton and R.E. Green (eds), Bird Ecology and Conservation: A Handbook of Techniques, pp. 17–56. Oxford University Press, Oxford.
Griffiths, R.A., Foster, J., Wilkinson, J.W. and Sewell, D. (2015). Science, statistics and surveys: a herpetological perspective. Journal of Applied Ecology, 52: 1413–1417.
Hayns, S. (2017). A Guide to Good Practice Guidance: A new resource for CIEEM members. In Practice – Bulletin of the Chartered Institute of Ecology and Environmental Management, 98: 45.
Kropfli, M., Heer, P. and Pellet, J. (2010). Cost-effectiveness of two monitoring strategies for the great crested newt (Triturus cristatus). Amphibia-Reptilia, 31(3): 403–410.
Marchant, J. (1983). BTO Common Birds Census Instructions. British Trust for Ornithology, Tring.
Nash, D.J. (2018). An assessment of mitigation translocations for reptiles at development sites. Unpublished Ph.D. thesis, University of Kent.
Natural England (2011). Technical Information Note TIN102: Reptile Mitigation Guidelines. Natural England, Peterborough.
Scottish Natural Heritage (2014). Recommended bird survey methods to inform impact assessment of onshore wind farms. Scottish Natural Heritage. Available online; accessed 20 April 2018.
Sewell, D., Griffiths, R.A., Beebee, T.J.C., Foster, J. and Wilkinson, J.W. (2013). Survey Protocols for the British Herpetofauna, Version 1.0. DICE, Canterbury.
Skalak, S.L., Sherwin, R.E. and Brigham, R.M. (2012). Sampling period, size and duration influence measures of bat species richness from acoustic surveys. Methods in Ecology and Evolution, 3: 490–502.
Underhill-Day, N. (2017). The Bat Roost Trigger Index – A new systematic approach to facilitate preliminary bat roost assessments. In Practice – Bulletin of the Chartered Institute of Ecology and Environmental Management, 96: 37–42.
Wynn, J. (2013). Evaluation of survey methods to determine population-size class for great crested newts in England and Wales. In Practice – Bulletin of the Chartered Institute of Ecology and Environmental Management, 79: 24–25.