Methods in Ecology and Evolution

Published by Wiley
Online ISSN: 2041-210X
Publications
Spearman correlations between 15N and 13C atom percent excess (APE) isotopic enrichments and plant characteristics across species and sampling dates. Significant correlations (P < 0·005) after Bonferroni adjustments are in bold
Enrichment in 15N and 13C (APE) in shoots and roots of 12 grassland species belonging to three functional groups (grasses, non-leguminous forbs, leguminous forbs) after daily foliar labelling for 5 days. Means ± Maximum/Minimum values, n = 3. The ratio between enrichments in roots and shoots (R/S ratio) is also shown for each species. [Correction added after online publication 26 Nov 2010: incorrect minus signs removed from Y-axes]
Time course of the 15N enrichment in shoots and roots of 12 grassland species comprising the functional groups grasses, non-leguminous forbs and leguminous forbs during 4 weeks of foliar labelling. Means, n = 3.
Time course of the 13C enrichment in shoots and roots of 12 grassland species comprising the functional groups grasses, non-leguminous forbs and leguminous forbs during 4 weeks of foliar labelling. Means, n = 3. Note different Y-axis scales for shoots and roots.
Article
1.Labelling plants with 15N and 13C stable isotopes usually requires cultivation of plants in isotopically enriched soil and gas-tight labelling chambers - neither approach is suitable if one aims to investigate in situ species interactions in real plant communities. 2.In this greenhouse experiment, we tested a labelling method in which dual-labelled (15N, 13C) urea solution is brushed directly onto leaves of twelve temperate grassland species representing grasses, non-leguminous forbs and legumes. 3.Across all plant species, shoots (15N: 0·145; 13C: 0·090 atom percent excess, APE) and roots (15N: 0·051; 13C: 0·023 APE) were significantly enriched after five daily labelling events. Generally, isotopic enrichments were significantly higher in shoots than in roots. No clear pattern of absolute isotopic enrichment was observed between plant functional groups; however, grasses showed a more even allocation between shoots and roots than forbs and legumes. Isotopic enrichment after 4 weeks was lower, higher or unchanged relative to week one, depending on the species and plant part. 4.Considering the consistent enrichment levels and the simplicity of this method, we conclude that it can be applied widely in ecological studies of above-belowground plant–plant or plant–animal interactions, even in real plant communities.
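The atom percent excess (APE) values quoted above are simply the measured atom% of the heavy isotope minus its natural-abundance background. A minimal R sketch of that arithmetic; the measured atom% values below are hypothetical and the natural-abundance references are approximate textbook values:

```r
# Approximate natural-abundance references (not from the paper)
natural_15N <- 0.3663   # atom% 15N in atmospheric N2
natural_13C <- 1.11     # atom% 13C relative to VPDB

ape <- function(atom_percent_sample, atom_percent_reference) {
  atom_percent_sample - atom_percent_reference
}

ape(0.5113, natural_15N)   # a hypothetical labelled shoot: ~0.145 APE for 15N
ape(1.20,   natural_13C)   # a hypothetical labelled shoot: ~0.09 APE for 13C
```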
 
Major steps of oligotyping analysis. In step 1, reads that were identified as one taxon or a single OTU from all samples in a data set are gathered. In the hypothetical example given in the figure, reads with very subtle nucleotide variation (positions of variation are highlighted in green) are shared between three samples, A, B and C. In step 2, the collection of reads is analysed with Shannon entropy, which recovers the variable positions. In step 3, each read is assigned according to the bases it carries at the high-entropy positions, thus generating oligotypes (AC and TG in this mock example); finally, oligotype profiles, depicted as pie charts, are generated to explain differences among samples.
Bacteroides oligotype distribution inferred from the study published by Yatsunenko et al. (2012). Bars indicate the presence of an oligotype in a given community; a full-length bar represents oligotypes that occur in 100% of the analysed samples. The lower panel magnifies numbered regions in the cladogram. Numbers 1, 2 and 3 are Bacteroides oligotypes that are more than 97% similar over their full length, yet exhibit noteworthy differences in their geographical distribution. Light yellow background colour on the cladogram marks the oligotypes with perfect matches in NCBI's non-redundant nucleotide sequence database. Number 4 demonstrates several oligotypes that consistently occur in samples from the Malawian and Amerindian communities but not in samples from the United States. None of the oligotypes in Number 4 have perfect matches in NCBI's nr database. Number 5, on the other hand, shows several oligotypes with occurrence patterns in the Malawian and Amerindian communities similar to those shown in Number 4, but with a remarkably larger presence in the samples collected from the United States. In contrast to Number 4, 3 out of 4 oligotypes listed in Number 5 have perfect matches in NCBI's nr database.
Pelagibacter oligotype and OTU distribution in samples from Little Sippewissett Marsh. In panel (a), seasonal variation of two Pelagibacter oligotypes is shown based on their relative abundance. The representative sequence of Oligotype 1 is identical to HTCC1062 (predominant in polar regions) through the V4-V6 region, and the representative sequence of Oligotype 2 is identical to HTCC7211 (more abundant in tropical regions) at the V4-V6 region. These oligotypes are 99·57% identical to each other over their 459 nt amplicon lengths. The water temperature observed during the sampling is superimposed on the figure. In panel (b), the distribution of Pelagibacter oligotypes and 3% OTUs across all sampling stations is compared side by side. Data from each station consisted of temporal samples spanning a 17-month time period between May 2007 and September 2008. Each colour represents a different oligotype and OTU. Colour range order is defined by the relative abundance; therefore, identical colours do not suggest any correlation across panels.
Article
Bacteria comprise the most diverse domain of life on Earth, where they occupy nearly every possible ecological niche and play key roles in biological and chemical processes. Studying the composition and ecology of bacterial ecosystems and understanding their function are of prime importance. High-throughput sequencing technologies enable nearly comprehensive descriptions of bacterial diversity through 16S ribosomal RNA gene amplicons. Analyses of these communities generally rely upon taxonomic assignments through reference databases or clustering approaches using de facto sequence similarity thresholds to identify operational taxonomic units. However, these methods often fail to resolve ecologically meaningful differences between closely related organisms in complex microbial data sets. In this paper, we describe oligotyping, a novel supervised computational method that allows researchers to investigate the diversity of closely related but distinct bacterial organisms within the operational taxonomic units that canonical approaches identify in environmental 16S ribosomal RNA gene data sets. Our analysis of two data sets from two different environments demonstrates the capacity of oligotyping to discriminate distinct microbial populations of ecological importance. Oligotyping can resolve the distribution of closely related organisms across environments and unveil previously overlooked ecological patterns for microbial communities. The URL http://oligotyping.org offers an open-source software pipeline for oligotyping.
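The core of the oligotyping step described above (and in the figure caption earlier) is a per-position Shannon entropy calculation over aligned reads, followed by grouping reads according to their bases at the high-entropy positions. A minimal R sketch, assuming reads are already aligned and trimmed to equal length; the sequences and the entropy threshold are hypothetical:

```r
# Hypothetical aligned reads of equal length
reads <- c("ACGTACGA", "ACGTACGA", "ATGTACGG", "ATGTACGG", "ACGTACGA")

mat <- do.call(rbind, strsplit(reads, ""))   # one column per alignment position

shannon <- function(column) {
  p <- table(column) / length(column)
  -sum(p * log2(p))
}

entropy <- apply(mat, 2, shannon)
high <- which(entropy > 0.2)                 # arbitrary illustrative threshold

# Concatenate the bases at high-entropy positions to form oligotypes
oligotypes <- apply(mat[, high, drop = FALSE], 1, paste, collapse = "")
table(oligotypes)                            # oligotype profile of this read set
```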
 
Article
1. Measuring physiological and behavioural parameters in free-ranging animals – and therefore under fully natural conditions – is of general biological concern but difficult to perform. 2. We have developed a minimally invasive telemetry system for ruminants that is capable of measuring heart rate (HR), body temperature (Tb) and locomotor activity (LA). A ruminal transmitter unit was placed per os into the reticulum and was therefore located in close proximity to the heart. The unit detected HR by the use of an acceleration sensor and also measured Tb. HR and Tb signals were transmitted via short-distance UHF link to a repeater system located in a collar unit. The collar unit decoded and processed signals received from the ruminal unit, measured LA with two different activity sensors and transmitted pulse interval-modulated VHF signals over distances of up to 10 km. 3. HR data measured with the new device contained noise caused by reticulum contractions and animal movements that triggered the acceleration sensor in the ruminal unit. We have developed a software filter to remove this noise. Hence, the system was only capable of measuring HR in animals that showed little or no activity and in the absence of rumen contractions. Reliability of this ‘stationary HR’ measurement was confirmed with a second independent measurement of HR detected by an electrocardiogram in a domestic sheep (Ovis aries). 4. In addition, we developed an algorithm to correctly classify an animal as ‘active’ or ‘at rest’ during each 3-min interval from the output of the activity sensors. Comparison with direct behavioural observations on free-ranging Alpine ibex (Capra ibex) showed that 87% of intervals were classified correctly. 5. First results from applications of this new technique in free-ranging Alpine ibex underlined its suitability for reliable and long-term monitoring of physiological and behavioural parameters in ruminants under harsh field conditions. With the battery settings and measurement cycles used in this study, we achieved a system lifetime of approximately 2 years.
 
Number of published articles per year found in the Web of Knowledge (Thomson Reuters, New York, NY, USA) when searching for the term ‘multiplex PCR’ (in quotation marks) as topic.
Multiplex PCR conducted with standardised numbers of DNA templates and separated with QIAxcel (Qiagen) where an internal marker (15 and 3000 bp) is run with each sample. Description of Lanes: Pn, Pardosa nigra; Nr, Nebria rufescens; Oc, Oreonebria castanea; Mg, Mitopus glacialis; Nj, Nebria jockischii; Ng, Nebria germari; Col, Collembola; each with 10 000 double-stranded copies as template (tc); M1–M4 standardised DNA mixes. M1, 2100 tc per target; M2, 1000 tc per target; M3, 200 tc per target; M4, 100 tc per target; E, electropherogram of Lane M3.
Note: when a single target was present at high concentrations, signal strength was not balanced (e.g. Oc and Ng resulted in stronger signals); however, this did not occur at lower concentrations.
Article
1. Multiplex PCR is a valuable tool in many biological studies but it is a multifaceted procedure that has to be planned and optimised thoroughly to achieve robust and meaningful results. In particular, primer concentrations have to be adjusted to assure an even amplification of all targeted DNA fragments. Until now, total DNA extracts were used for balancing primer efficiencies; however, the applicability for comparisons between taxa or different multiple-copy genes was limited owing to the unknown number of template molecules present per total DNA. 2. Based on a multiplex system developed to track trophic interactions in high Alpine arthropods, we demonstrate a fast and easy way of generating standardised DNA templates. These were then used to balance the amplification success for the different targets and to subsequently determine the sensitivity of each primer pair in the multiplex PCR. 3. In the current multiplex assay, this approach led to an even amplification success for all seven targeted DNA fragments. Using this balanced multiplex PCR, methodological bias owing to variation in primer efficiency will be avoided when analysing field-derived samples. 4. The approach outlined here allows comparison of multiplex PCR sensitivity, independent of the investigated species, genome size or the targeted genes. The application of standardised DNA templates not only makes it possible to optimise primer efficiency within a given multiplex PCR, but it also makes it possible to adjust and/or compare the sensitivity of different assays. Along with other factors that influence the success of multiplex reactions, and which we discuss here in relation to the presented detection system, the adoption of this approach will allow for direct comparison of multiplex PCR data between systems and studies, enhancing the utility of this assay type.
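Generating standardised DNA templates as described above amounts to converting a measured dsDNA concentration into a copy number and then diluting to a target count. A hedged R sketch of that conversion, assuming an average mass of roughly 660 g/mol per base pair; the concentration and fragment length used are hypothetical:

```r
# Convert a dsDNA concentration into double-stranded template copies per microlitre
copies_per_ul <- function(conc_ng_per_ul, length_bp, mw_per_bp = 660) {
  avogadro <- 6.022e23
  (conc_ng_per_ul * 1e-9 * avogadro) / (length_bp * mw_per_bp)
}

copies_per_ul(0.5, 245)   # e.g. a hypothetical 245-bp amplicon at 0.5 ng/uL
# The stock can then be diluted to a target copy number (e.g. 10 000 copies)
# before assembling standardised template mixes such as M1-M4 above.
```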
 
Article
1. While classical life-history theory does not predict postreproductive life span (PRLS), it has been detected in a great number of taxa, leading to the view that it is a broadly conserved trait and to attempts to reconcile theory with these observations. We suggest an alternative: the apparently wide distribution of significant PRLS is an artefact of insufficient methods. 2. PRLS is traditionally measured in units of time between each individual’s last parturition and death, after excluding those individuals for whom this interval is short. A mean of this measure is then calculated as a population value. We show this traditional population measure (which we denote PrT) to be inconsistently calculated, inherently biased, strongly correlated with overall longevity, uninformative on the importance of PRLS in a population’s life history, unable to use the most commonly available form of relevant data and without a realistic null hypothesis. Using data altered to ensure that the null hypothesis is true, we find a false-positive rate of 0·47 for PrT. 3. We propose an alternative population measure, using life-table methods. Postreproductive representation (PrR) is the proportion of adult years lived which are postreproductive. We briefly derive PrR and discuss its properties. We employ a demographic simulation, based on the null hypothesis of simultaneous and proportional decline in survivorship and fecundity, to produce a null distribution for PrR based on the age-specific rates of a population. 4. In an example analysis, using data on 84 populations of human and nonhuman primates, we demonstrate the ability of PrR to represent the effects of artificial protection from mortality and of humanness on PRLS. PrR is found to be higher for all human populations under a wide range of conditions than for any nonhuman primate in our sample. A strong effect of artificial protection is found, but humans under the most adverse conditions still achieve PrR of >0·3. 5. PrT should not be used as a population measure and should be used as an individual measure only with great caution. The use of PrR as an intuitive, statistically valid and intercomparable population life-history measure is encouraged.
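A minimal R sketch of PrR as verbally defined above (the proportion of adult years lived that are postreproductive), using a hypothetical survivorship schedule and assumed ages at maturity and last reproduction; the paper's life-table treatment is more careful than this simple sum:

```r
# Hypothetical survivorship (lx) at integer ages 0..8
lx  <- c(1.00, 0.90, 0.80, 0.65, 0.50, 0.35, 0.20, 0.08, 0.02)
age <- 0:8
B   <- 2   # assumed age at maturity (start of adulthood)
M   <- 6   # assumed age at which reproduction has ceased

Tx  <- function(x) sum(lx[age >= x])   # approximate years lived at ages >= x
PrR <- Tx(M) / Tx(B)                   # postreproductive share of adult years
PrR
```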
 
Sample data from PLHD Biography 1
Sample data from PLHD Fertility 1
Article
1. The importance of data archiving, data sharing and public access to data has received considerable attention. Awareness is growing among scientists that collaborative databases can facilitate these activities. 2. We provide a detailed description of the collaborative life history database developed by our Working Group at the National Evolutionary Synthesis Center to address questions about life history patterns and the evolution of mortality and demographic variability in wild primates. 3. Examples from each of the seven primate species included in our database illustrate the range of data incorporated and the challenges, decision-making processes, and criteria applied to standardize data across diverse field studies. In addition to the descriptive and structural metadata associated with our database, we also describe the process metadata (how the database was designed and delivered) and the technical specifications of the database. 4. Our database provides a useful model for other researchers interested in developing similar types of databases for other organisms, while our process metadata may be helpful to other groups of researchers interested in developing databases for other types of collaborative analyses.
 
Comparison of analytical value (curve) with Monte Carlo calculation with 2000 samples (points) for the mean of rooted PD under rarefaction.
Comparison of analytical value (curve) with Monte Carlo calculation with 2000 samples (points) for the variance of rooted PD under rarefaction.
Phylogenetic diversity of mammal faunas for terrestrial ecoregions on the Australian continental shelf. Phylogenetic diversity is calculated for (a) all species present and (b) as an expected value after rarefaction to 25 species. Ecoregions are coloured light blue for low values to dark red for high values. The three highest ranked ecoregions in each case are indicated by number.
Rarefaction curve of samples from Srinivasan et al. (2012). The Nugent score is a diagnostic score for bacterial vaginosis, with 0 being ‘normal’ and 10 being classified as BV.
Article
Phylogenetic diversity (PD) depends on sampling depth, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time, but no such solution exists for PD. We have derived exact formulae for the mean and variance of PD under rarefaction. We confirm that these formulae are correct by comparing the exact mean and variance to those calculated by repeated random (Monte Carlo) subsampling of a dataset of stem counts of woody shrubs of Toohey Forest, Queensland, Australia. We also demonstrate the application of the method using two examples: identifying hotspots of mammalian diversity in Australasian ecoregions, and characterising the human vaginal microbiome. There is a very high degree of correspondence between the analytical and random subsampling methods for calculating the mean and variance of PD under rarefaction, although the Monte Carlo method requires a large number of random draws to converge on the exact solution for the variance. Rarefaction of mammalian PD of ecoregions in Australasia to a common standard of 25 species reveals very different rank orderings of ecoregions, indicating quite different hotspots of diversity than those obtained for unrarefied PD. The application of these methods to the vaginal microbiome shows that a classical score used to quantify bacterial vaginosis is correlated with the shape of the rarefaction curve. The analytical formulae for the mean and variance of PD under rarefaction are both exact and more efficient than repeated subsampling. Rarefaction of PD allows for many applications where comparisons of samples of different depth are required.
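The Monte Carlo benchmark described above can be reproduced in outline with the 'ape' and 'picante' packages: repeatedly draw a fixed number of individuals, tabulate the species they represent and compute rooted PD. A sketch with a simulated tree and simulated abundances (the paper compares such draws against its exact formulae):

```r
library(ape)
library(picante)

set.seed(1)
tree   <- rtree(10)                                   # random 10-species tree
counts <- setNames(rpois(10, lambda = 5) + 1, tree$tip.label)  # fake abundances
n_sub  <- 15                                          # rarefaction depth (individuals)

pd_one_draw <- function() {
  individuals <- rep(names(counts), counts)
  drawn <- sample(individuals, n_sub)                 # subsample without replacement
  comm  <- t(as.matrix(table(factor(drawn, levels = tree$tip.label))))
  rownames(comm) <- "subsample"
  pd(comm, tree, include.root = TRUE)$PD              # rooted PD of the subsample
}

draws <- replicate(2000, pd_one_draw())
mean(draws); var(draws)    # to be compared with the exact expressions
```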
 
Article
1. Next-generation sequencing (NGS) is being increasingly used in ecological and evolutionary studies. Though promising, NGS is known to be error-prone. Sequencing error can cause significant bias for population genetic analysis of a sequence sample. 2. We present jPopGen Suite, an integrated tool for population genetic analysis of DNA polymorphisms from nucleotide sequences. It is specially designed for data with a non-negligible error rate, although it serves well for ‘error-free’ data. It implements several methods for estimating the population mutation rate and population growth rate, and for conducting neutrality tests. 3. jPopGen Suite facilitates the population genetic analysis of NGS data in various applications and is freely available for non-commercial users at http://sites.google.com/site/jpopgen/.
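As an illustration of the kind of estimator such a tool implements, the classical Watterson estimator of the population mutation rate divides the number of segregating sites by the harmonic number a_n; the error-aware estimators in jPopGen Suite generalize this idea. A base-R sketch with hypothetical inputs:

```r
# Watterson's estimator: theta_W = S / a_n, with a_n = sum_{i=1}^{n-1} 1/i
watterson_theta <- function(n_sequences, segregating_sites) {
  a_n <- sum(1 / seq_len(n_sequences - 1))
  segregating_sites / a_n
}

watterson_theta(n_sequences = 20, segregating_sites = 35)  # hypothetical sample
```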
 
Article
1. Nucleotide sequences sampled at different times (serially sampled sequences) allow researchers to study the rate of evolutionary change and the demographic history of populations. Some phylogenies inferred from serially sampled sequences are described as having strong ‘temporal clustering’, such that sequences from the same sampling time tend to cluster together and to be the direct ancestors of sequences from the following sampling time. The degree to which phylogenies exhibit these properties is thought to reflect interesting biological processes, such as positive selection or deviation from the molecular clock hypothesis. 2. Here, we introduce the temporal clustering (TC) statistic, which is the first quantitative measure of the degree of topological ‘temporal clustering’ in a serially sampled phylogeny. The TC statistic represents the expected deviation of an observed phylogeny from the null hypothesis of no temporal clustering, as a proportion of the range of possible values, and can therefore be compared among phylogenies of different sizes. 3. We apply the TC statistic to a range of serially sampled sequence data sets, which represent both rapidly evolving viruses and ancient mitochondrial DNA. In addition, the TC statistic was calculated for phylogenies simulated under a neutral coalescent process. 4. Our results indicate significant TC in many empirical data sets. However, we also find that such clustering is exhibited by trees simulated under a neutral coalescent process; hence, the observation of significant ‘temporal clustering’ cannot unambiguously indicate the presence of strong positive selection in a population. 5. Quantifying topological structure in this manner will provide new insights into the evolution of measurably evolving populations.
 
Article
1. Many recent statistical applications involve inference under complex models, where it is computationally prohibitive to calculate likelihoods but possible to simulate data. Approximate Bayesian computation (ABC) is devoted to these complex models because it bypasses the evaluation of the likelihood function by comparing observed and simulated data. 2. We introduce the R package ‘abc’ that implements several ABC algorithms for performing parameter estimation and model selection. In particular, the recently developed nonlinear heteroscedastic regression methods for ABC are implemented. The ‘abc’ package also includes a cross-validation tool for measuring the accuracy of ABC estimates and for calculating the misclassification probabilities when performing model selection. The main functions are accompanied by appropriate summary and plotting tools. 3. R is already widely used in bioinformatics and several fields of biology. The R package ‘abc’ will make the ABC algorithms available to a large number of R users. ‘abc’ is a freely available R package under the GPL license, and it can be downloaded at http://cran.r-project.org/web/packages/abc/index.html.
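The accept/reject core of ABC can be illustrated in a few lines of base R: draw parameters from the prior, simulate data, and keep the draws whose summary statistics fall closest to the observed ones. The model, prior and tolerance below are hypothetical (estimating a Poisson mean from its sample mean); the same step underlies abc::abc() with the "rejection" method, before the optional regression adjustments mentioned above:

```r
set.seed(42)
observed    <- rpois(50, lambda = 4)   # pretend field data
obs_summary <- mean(observed)          # one summary statistic

n_sim   <- 1e5
prior   <- runif(n_sim, 0, 10)                          # prior draws for lambda
sim_sum <- sapply(prior, function(l) mean(rpois(50, l)))  # simulated summaries

tol       <- 0.01                                       # keep the closest 1% of draws
distance  <- abs(sim_sum - obs_summary)
posterior <- prior[distance <= quantile(distance, tol)]

mean(posterior); quantile(posterior, c(0.025, 0.975))   # approximate posterior
```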
 
Article
Auto-logistic and related auto-models, implemented approximately as autocovariate regression, provide simple and direct modelling of spatial dependence. The autologistic model has been widely applied in ecology since Augustin, Mugglestone and Buckland (J. Appl. Ecol., 1996, 33, 339) analysed red deer census data using a hybrid estimation approach, combining maximum pseudo-likelihood estimation with Gibbs sampling of missing data. However, Dormann (Ecol. Model., 2007, 207, 234) questioned the validity of auto-logistic regression, giving examples of apparent underestimation of covariate parameters in analysis of simulated "snouter" data. Dormann et al. (Ecography, 2007, 30, 609) extended this analysis to auto-Poisson and auto-normal models, reporting anomalies in all cases. All the above studies employ neighbourhood weighting schemes inconsistent with conditions (Besag, J. R. Stat. Soc., Ser. B, 1974, 36, 192) required for auto-model validity; furthermore, the auto-Poisson analysis fails to exclude cooperative interactions. We show that all "snouter" anomalies are resolved by correct auto-model implementation. Re-analysis of the red deer data shows that invalid neighbourhood weightings generate only small estimation errors for the full dataset, but larger errors occur on geographic subsamples. A substantial fraction of papers applying auto-logistic regression to ecological data use these invalid weightings, which are default options in the widely used "spdep" spatial dependence package for R. Auto-logistic analyses using invalid neighbourhood weightings will be erroneous to an extent that can vary widely. These analyses can easily be corrected by using valid neighbourhood weightings available in "spdep". The hybrid estimation approach for missing data is readily adapted for valid neighbourhood weighting schemes and is implemented here in R for application to sparse presence-absence data.
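A short sketch of the weighting choice at issue, using 'spdep' with simulated coordinates and an arbitrary distance cutoff: row-standardised weights (style = "W", the common default) are generally asymmetric, whereas binary weights (style = "B") are symmetric and so meet the condition auto-models require:

```r
library(spdep)

set.seed(1)
coords <- cbind(runif(100), runif(100))            # hypothetical site coordinates
nb     <- dnearneigh(coords, d1 = 0, d2 = 0.15)    # neighbours within 0.15 units

w_row_std <- nb2listw(nb, style = "W", zero.policy = TRUE)  # asymmetric weights
w_binary  <- nb2listw(nb, style = "B", zero.policy = TRUE)  # symmetric 0/1 weights

# An autocovariate built from the symmetric weights can then enter glm()
# as an ordinary covariate (autocovariate regression).
```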
 
False discovery rate (FDR) against significance threshold α for three scenarios (IM: island model, SS: Stepping Stone model and HS: Hierarchically Structured model) and monogenic/polygenic selection. The grey line is the expected identity relationship between the FDR and α. The models tested are BayeScan (blue dashed) and BayeScEnv (orange dotted, green dot-dashed and solid red) with different probabilities π of jumping away from the neutral model (M1) and different preferences p for the locus-specific model (M3). Note that p = 0 means the environmental model (M2) is tested against the neutral one only.
Power against significance threshold α for three scenarios (IM: island model, SS: Stepping Stone model and HS: Hierarchically Structured model) and monogenic/polygenic selection. The models tested are BayeScan (blue dashed) and BayeScEnv (orange dotted, green dot-dashed and solid red) with different probabilities π of jumping away from the neutral model (M1) and different preferences p for the locus-specific model (M3). Note that p = 0 means the environmental model (M2) is tested against the neutral one only.
Power against false positive rate (FPR), a.k.a. ROC curve, for three scenarios (IM: Island model, SS: Stepping Stone model and HS: Hierarchically Structured model) and monogenic/polygenic selection. The models tested are BayeScan (blue dashed) and BayeScEnv (orange dotted, green dot-dashed and solid red) with different probabilities π of jumping away from the neutral model (M1) and different preferences p for the locus-specific model (M3). Note that p = 0 means the environmental model (M2) is tested against the neutral one only.
Manhattan plot of the q-values for the human data set when using BayeScEnv with altitude (a), temperature (b), precipitation (c) or when using BayeScan (d). For altitude and temperature (a and b), genes mentioned in the text are displayed using black lines and genes associated with a significant GO term using grey lines. Top ‘stripes’ for BayeScan (d) are artefacts due to the finite number of iterations in the RJMCMC (e.g. 0, 1, 2, 3… iterations outside of the non-neutral model), corresponding to the discrete posterior probabilities obtained when these counts are divided by the total number of iterations.
Results from BayeScan and BayeScEnv on the human data set. FDR significance threshold was set to 5%. The total number of tested markers was 446 117
Article
Genome-scan methods are used for screening genome-wide patterns of DNA polymorphism to detect signatures of positive selection. There are two main types of methods: (i) "outlier" detection methods based on $F_{\text{ST}}$ that detect loci with high differentiation compared to the rest of the genome and (ii) environmental association methods that test the association between allele frequencies and environmental variables. In this article, we present a new $F_{\text{ST}}$-based genome-scan method, BayeScEnv, which incorporates environmental information in the form of "environmental differentiation". It is based on the F model but, in contrast to existing approaches, it considers two locus-specific effects, one due to divergent selection and another due to other processes such as differences in mutation rates across loci or background selection. Simulation studies showed that our method has a much lower false positive rate than an existing $F_{\text{ST}}$-based method, BayeScan, under a wide range of demographic scenarios. Although it had lower power, it leads to a better compromise between power and false positive rate. We apply our method to human and salmon data sets and show that it can be used successfully to study local adaptation. The method was developed in C++ and is available at http://github.com/devillemereuil/bayescenv.
 
Article
Complex systems of moving and interacting objects are ubiquitous in the natural and social sciences. Predicting their behavior requires models that mimic these systems with sufficient accuracy, while accounting for their inherent stochasticity. Though tools exist to determine which of a set of models is best relative to the others, there is currently no generic goodness-of-fit framework for testing how close the best model is to the real complex stochastic system. We propose such a framework, using a novel application of the Earth mover's distance, also known as the Wasserstein metric. It is applicable to any stochastic process where the probability of the model's state at time t is a function of the state at previous times. It generalizes the concept of a residual, often used to analyze 1D summary statistics, to situations where the complexity of the underlying model's probability distribution makes standard residual analysis too imprecise for practical use. We give a scheme for testing the hypothesis that a model is an accurate description of a data set. We demonstrate the tractability and usefulness of our approach by application to animal movement models in complex, heterogeneous environments. We detail methods for visualizing results and extracting a variety of information on a given model's quality, such as whether there is any inherent bias in the model, or in which situations it is most accurate. This work provides a fundamental toolkit to assess the quality of generic movement models of complex systems, in an absolute rather than a relative sense.
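For one-dimensional samples of equal size, the Earth mover's (1-Wasserstein) distance reduces to the mean absolute difference between the sorted values, which gives a feel for the metric the framework builds on. A base-R sketch with simulated step lengths; the paper applies the metric to full model state distributions, not just 1D summaries:

```r
# 1-Wasserstein distance between two equal-size 1D samples
emd_1d <- function(x, y) {
  stopifnot(length(x) == length(y))
  mean(abs(sort(x) - sort(y)))
}

set.seed(1)
observed_steps <- rgamma(500, shape = 2, rate = 1)     # "data" (hypothetical)
model_steps    <- rgamma(500, shape = 2, rate = 1.2)   # draws from a candidate model
emd_1d(observed_steps, model_steps)                    # discrepancy between the two
```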
 
Article
Group dynamic movement is a fundamental aspect of many species' movements. The need to adequately model individuals' interactions with other group members has been recognised, particularly in order to differentiate the role of social forces in individual movement from environmental factors. However, to date, practical statistical methods which can include group dynamics in animal movement models have been lacking. We consider a flexible modelling framework that distinguishes a group-level model, describing the movement of the group's centre, and an individual-level model, such that each individual makes its movement decisions relative to the group centroid. The basic idea is framed within the flexible class of hidden Markov models, extending previous work on modelling animal movement by means of multi-state random walks. While in simulation experiments parameter estimators exhibit some bias in non-ideal scenarios, we show that generally the estimation of models of this type is both feasible and ecologically informative. We illustrate the approach using real movement data from 11 reindeer (Rangifer tarandus). Results indicate a directional bias towards a group centroid for reindeer in an encamped state. Though the attraction to the group centroid is relatively weak, our model successfully captures group-influenced movement dynamics. Specifically, as compared to a regular mixture of correlated random walks, the group dynamic model more accurately predicts the non-diffusive behaviour of a cohesive mobile group.
 
Article
Extensive research shows that more species-rich assemblages are generally more productive and efficient in resource use than comparable assemblages with fewer species. But the question of how diversity simultaneously affects the wide variety of ecological functions that ecosystems perform remains relatively understudied, and it presents several analytical and empirical challenges that remain unresolved. In particular, researchers have developed several disparate metrics to quantify multifunctionality, each characterizing different aspects of the concept, and each with pros and cons. We compare four approaches to characterizing multifunctionality and its dependence on biodiversity, quantifying 1) magnitudes of multiple individual functions separately, 2) the extent to which different species promote different functions, 3) the average level of a suite of functions, and 4) the number of functions that simultaneously exceed a critical threshold. We illustrate each approach using data from the pan-European BIODEPTH experiment and the R multifunc package developed for this purpose, evaluate the strengths and weaknesses of each approach, and implement several methodological improvements. We conclude that an extension of the fourth approach that systematically explores all possible threshold values provides the most comprehensive description of multifunctionality to date. We outline this method and recommend its use in future research.
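A compact R sketch of the threshold approach favoured above: for each plot, count how many functions exceed a given fraction of each function's observed maximum, then repeat over a grid of thresholds. The data and the threshold grid are hypothetical; the multifunc package wraps this logic together with the diversity regressions:

```r
set.seed(1)
# Hypothetical plot-level function measurements (30 plots, 4 functions)
fmat <- as.matrix(data.frame(biomass = runif(30), nitrogen = runif(30),
                             decomposition = runif(30), carbon = runif(30)))

n_above <- function(thresh) {
  # number of functions per plot exceeding thresh * (function maximum)
  rowSums(sweep(fmat, 2, apply(fmat, 2, max) * thresh, ">="))
}

thresholds <- seq(0.05, 0.95, by = 0.05)
counts <- sapply(thresholds, n_above)     # plots x thresholds matrix
colnames(counts) <- thresholds
head(counts)
# Each column can then be regressed against species richness to trace how the
# diversity effect changes with the threshold chosen.
```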
 
Article
Birdsong often contains large amounts of rapid frequency modulation (FM). It is believed that the use or otherwise of FM is adaptive to the acoustic environment and also that there are specific social uses of FM such as trills in aggressive territorial encounters. Yet temporal fine detail of FM is often absent or obscured in standard audio signal analysis methods such as Fourier analysis or linear prediction. Hence, it is important to consider high-resolution signal processing techniques for analysis of FM in bird vocalizations. If such methods can be applied at big data scales, this offers a further advantage as large data sets become available. We introduce methods from the signal processing literature which can go beyond spectrogram representations to analyse the fine modulations present in a signal at very short time-scales. Focusing primarily on the genus Phylloscopus, we investigate which of a set of four analysis methods most strongly captures the species signal encoded in birdsong. We evaluate this through a feature selection technique and an automatic classification experiment. In order to find tools useful in practical analysis of large databases, we also study the computational time taken by the methods, and their robustness to additive noise and MP3 compression. We find three methods which can robustly represent species-correlated FM attributes and can be applied to large data sets, and that the simplest method tested also appears to perform the best. We find that features representing the extremes of FM encode species identity supplementary to that captured in frequency features, whereas bandwidth features do not encode additional information. FM analysis can extract information useful for bioacoustic studies, in addition to measures more commonly used to characterize vocalizations. Further, it can be applied efficiently across very large data sets and archives.
 
Article
We perform a Bayesian analysis on abundance data for ten species of North American duck, using the results to investigate the evidence in favour of biologically motivated hypotheses about the causes and mechanisms of density dependence in these species. We explore the capabilities of our methods to detect density dependent effects, both by simulation and through analyses of real data. The effect of the prior choice on predictive accuracy is also examined. We conclude that our priors, which are motivated by considering the dynamics of the system of interest, offer clear advances over the priors used by previous authors for the duck data sets. We use this analysis as a motivating example to demonstrate the importance of careful parameter prior selection if we are to perform a balanced model selection procedure. We also present some simple guidelines that can be followed in a wide variety of modelling frameworks where vague parameter prior choice is not a viable option. These will produce parameter priors that not only greatly reduce bias in selecting certain models, but improve the predictive ability of the resulting model-averaged predictor.
 
Article
Spatial ecological networks are widely used to model interactions between georeferenced biological entities (e.g., populations or communities). The analysis of such data often leads to a two-step approach where groups containing similar biological entities are first identified and the spatial information is used afterwards to improve the ecological interpretation. We develop an integrative approach to retrieve groups of nodes that are geographically close and ecologically similar. Our model-based spatially-constrained method embeds the geographical information within a regularization framework by adding some constraints to the maximum likelihood estimation of parameters. A simulation study and the analysis of real data demonstrate that our approach is able to detect complex spatial patterns that are ecologically meaningful. The model-based framework allows us to consider external information (e.g., geographic proximities, covariates) in the analysis of ecological networks and appears to be an appealing alternative for handling such data.
 
Article
1. Understanding how to find targets with very limited information is a topic of interest in many disciplines. In ecology, such research has often focused on the development of two movement models: i) the Lévy walk; and ii) the composite correlated random walk and its associated area-restricted search behaviour. Although the processes underlying these models differ, they can produce similar movement patterns. Due to this similarity and because of their disparate formulation, current methods cannot reliably differentiate between these two models. 2. Here, we present a method that differentiates between the two models. It consists of likelihood functions, including a hidden Markov model, and associated statistical measures that assess the relative support for and absolute fit of each model. 3. Using a simulation study, we show that our method can differentiate between the two search models over a range of parameter values. Using the movement data of two polar bears (Ursus maritimus), we show that the method can be applied to complex, real-world movement paths. 4. By providing the means to differentiate between the two most prominent search models in the literature, and a framework that could be extended to include other models, we facilitate further research into the strategies animals use to find resources.
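As a much-simplified illustration of the likelihood comparison, one can contrast a power-law (Lévy-like) and a shifted-exponential step-length distribution by AIC using their closed-form maximum-likelihood estimates; the paper's method additionally fits a hidden Markov model for the composite correlated random walk. The step lengths below are simulated:

```r
set.seed(1)
steps <- rexp(300, rate = 0.5) + 1          # hypothetical step lengths
xmin  <- min(steps)
n     <- length(steps)

# Power-law MLE (Levy-like tail): mu_hat = 1 + n / sum(log(x/xmin))
mu_hat  <- 1 + n / sum(log(steps / xmin))
ll_levy <- n * log(mu_hat - 1) - n * log(xmin) - mu_hat * sum(log(steps / xmin))

# Shifted-exponential MLE: lambda_hat = 1 / mean(x - xmin)
lambda_hat <- 1 / mean(steps - xmin)
ll_exp     <- n * log(lambda_hat) - lambda_hat * sum(steps - xmin)

aic <- c(levy = 2 - 2 * ll_levy, exponential = 2 - 2 * ll_exp)
aic   # lower AIC indicates the better-supported step-length model
```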
 
Article
Due to the availability of large molecular data sets, covariance models are increasingly used to describe the structure of genetic variation as an alternative to more heavily parametrised biological models. We focus here on a class of parametric covariance models that has received sustained attention lately and show that the conditions under which they are valid mathematical models have been overlooked so far. We provide rigorous results for the construction of valid covariance models in this family. We also outline how to construct alternative covariance models for the analysis of geographical variation that are both mathematically well behaved and easily implementable.
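The basic validity requirement alluded to above is that a candidate covariance matrix be symmetric with no negative eigenvalues. A small R sketch checking this for a hypothetical exponential-decay covariance built from pairwise distances:

```r
set.seed(1)
coords <- cbind(runif(20), runif(20))        # hypothetical sampling locations
d      <- as.matrix(dist(coords))
sigma  <- exp(-d / 0.3)                      # exponential covariance (valid by construction)

is_valid_cov <- function(m, tol = 1e-8) {
  # symmetric and positive semi-definite (all eigenvalues >= 0, up to tolerance)
  isSymmetric(m) &&
    all(eigen(m, symmetric = TRUE, only.values = TRUE)$values > -tol)
}

is_valid_cov(sigma)
```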
 
Quantile-quantile plots of p-values obtained on simulated data. Each point corresponds to a simulated data set. The y-axis gives the p-value returned by the (simple or partial) Mantel test. The null hypothesis tested is the independence between X and Y. Left column: simple Mantel test, the matrices DX and DY are obtained from independent random fields with zero mean (no deterministic spatial trend). Middle: partial Mantel test, the matrices DX and DY are obtained from independent random fields with zero mean (no deterministic spatial trend). Right column: partial Mantel test, the matrices DX and DY are obtained from independent random fields with a deterministic linear spatial trend. For the partial Mantel test (middle and right columns), each data set was analysed by each of the four permutation methods of Legendre and Fortin (2010). The p-values should be aligned along the diagonal. As soon as the data are spatially auto-correlated, the simple and partial Mantel tests produce an excess of Type I errors. The four permutation methods perform similarly, whatever the intensity of the auto-correlation.
Article
The simple and partial Mantel tests are routinely used in many areas of evolutionary biology to assess the significance of the association between two or more matrices of distances relative to the same pairs of individuals or demes. Partial Mantel tests rather than simple Mantel tests are widely used to assess the relationship between two variables displaying some form of structure. We show that, contrary to a widely shared belief, partial Mantel tests are not valid in this case, and their bias remains close to that of the simple Mantel test. We confirm that strong biases are expected under a sampling design and spatial correlation parameter drawn from an actual study. Mantel tests should not be used when auto-correlation is suspected in both of the variables compared under the null hypothesis. We outline alternative strategies. The R code used for our computer simulations is distributed as supporting material.
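For reference, these are the tests in question as implemented in 'vegan'. With the spatially independent data simulated below they behave correctly; with spatially autocorrelated x and y (as in the paper's simulations) the same calls yield inflated Type I error rates:

```r
library(vegan)

set.seed(1)
coords <- cbind(runif(50), runif(50))   # hypothetical sampling locations
x <- rnorm(50)                          # spatially independent variables
y <- rnorm(50)

d_geo <- dist(coords)
mantel(dist(x), dist(y), permutations = 999)                 # simple Mantel test
mantel.partial(dist(x), dist(y), d_geo, permutations = 999)  # partial Mantel test
```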
 
Article
1. Studies of stable isotope signatures can reveal and quantify trophic carbon transfer between organisms. However, preservation of the samples before analysis cannot always be avoided. Some preservation agents are known to alter tissue δ13C values considerably, but we do not yet understand how variation in such preservation artefacts may be determined by variation in body traits of different invertebrate species and life stages. 2. Here, we tested the effect of four different preservation methods on 13C signatures of two morphologically and ecologically distinct springtails, Folsomia candida and Orchesella cincta. These springtails were fed on the fungus Cladosporium cladosporioides grown on either a C3 or a C4 carbon source, resulting in springtails with two contrasting initial δ13C values. Subsequently, these springtails were preserved for 46 days. In addition, a juvenile–adult comparison was made for F. candida. 3. Freeze-drying and subsequent dry storage did not affect 13C signatures of either species; nor did killing springtails with liquid nitrogen and storing them at −80 °C. Preservation in 70% ethanol slightly depleted δ13C values of adult F. candida but not of O. cincta. In contrast, storage in saturated salt solution depleted both species considerably. Life stage affected preservation success significantly; storage in 70% ethanol depleted adult F. candida but not its juveniles. Initial δ13C values of the springtails did not interact with preservation artefacts, suggesting that the shifts in δ13C values are caused by effects of the preservative on the animal tissue rather than by its residues. 4. We recommend freeze-drying as a preservation method. However, our results suggest that interspecific differences (e.g. in body size and cuticle thickness) as well as intraspecific difference (e.g. life stage-dependent changes in the proportion of fat reserves) are important determinants of preservation effects on 13C signatures. This makes interpreting stable isotope data obtained from preserved springtails relatively difficult, especially when natural 13C abundances are used to study trophic interactions or interspecific functional differences. We predict that such complications also apply to other invertebrate taxa and types.
 
Article
1. While teaching statistics to ecologists, the lead authors of this paper have noticed common statistical problems. If a random sample of their work (including scientific papers) produced before doing these courses were selected, half would probably contain violations of the underlying assumptions of the statistical techniques employed. 2. Some violations have little impact on the results or ecological conclusions; yet others increase type I or type II errors, potentially resulting in wrong ecological conclusions. Most of these violations can be avoided by applying better data exploration. These problems are especially troublesome in applied ecology, where management and policy decisions are often at stake. 3. Here, we provide a protocol for data exploration; discuss current tools to detect outliers, heterogeneity of variance, collinearity, dependence of observations, problems with interactions, double zeros in multivariate analysis, zero inflation in generalized linear modelling, and the correct type of relationships between dependent and independent variables; and provide advice on how to address these problems when they arise. We also address misconceptions about normality, and provide advice on data transformations. 4. Data exploration avoids type I and type II errors, among other problems, thereby reducing the chance of making wrong ecological conclusions and poor recommendations. It is therefore essential for good quality management and policy based on statistical analyses.
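One of the data-exploration steps mentioned above, screening for collinearity, can be illustrated with variance inflation factors computed from the R² of regressing each covariate on the others. A base-R sketch with simulated covariates, one of which is made collinear on purpose:

```r
set.seed(1)
x1 <- rnorm(100); x2 <- rnorm(100)
x3 <- 0.9 * x1 + 0.1 * rnorm(100)            # deliberately collinear with x1
covariates <- data.frame(x1, x2, x3)

vif <- sapply(names(covariates), function(v) {
  # regress covariate v on all the others and convert R-squared to a VIF
  r2 <- summary(lm(reformulate(setdiff(names(covariates), v), v),
                   data = covariates))$r.squared
  1 / (1 - r2)
})
vif   # values well above ~3-10 are commonly taken to flag problematic collinearity
```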
 
Article
1. Tests of significance of the individual canonical axes in redundancy analysis allow researchers to determine which of the axes represent variation that can be distinguished from random. Variation along the significant axes can be mapped, used to draw biplots or interpreted through subsequent analyses, whilst the nonsignificant axes may be dropped from further consideration. 2. Three methods have been implemented in computer programs to test the significance of the canonical axes; they are compared in this paper. The simultaneous test of all individual canonical axes, which is appealing because of its simplicity, produced incorrect (highly inflated) levels of type I error for the axes following those corresponding to true relationships in the data, so it is invalid. The ‘marginal’ testing method implemented in the ‘vegan’ R package and the ‘forward’ testing method implemented in the program CANOCO were found to have correct levels of type I error and comparable power. Permutation of the residuals achieved greater power than permutation of the raw data. 3. R functions found in a Supplement to this paper provide the first formal description of the ‘marginal’ and ‘forward’ testing methods.
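For orientation, the 'marginal' axis tests referred to above are available in vegan as anova() on a constrained ordination object with by = "axis"; a sketch using the package's built-in dune data, where the particular choice of predictors is arbitrary:

```r
library(vegan)
data(dune)
data(dune.env)

# Redundancy analysis of the dune vegetation data on two arbitrary predictors
mod <- rda(dune ~ A1 + Management, data = dune.env)

# Permutation test of each canonical axis in turn ('marginal' axis tests)
anova(mod, by = "axis", permutations = 999)
```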
 
Article
1. An important question in macroecology is: Can we estimate a species’ abundance from its occurrence on a landscape? Answers to this question are useful for estimating population size from more easily acquired distribution data and for understanding the macroecological occupancy–abundance relationship. 2. Several methods have recently been developed to address this question, but no method is general enough to provide a common solution to all species because of the wide variation in spatial distribution of species. 3. In this study, we developed a mixed Gamma-Poisson model that generalizes the negative binomial model and can characterize spatial dependence in the abundance distribution across cells. Under this framework, without any extra information, the clumping parameter and species abundance can be estimated using a map aggregation technique. This model was tested using a set of empirical census data consisting of 299 tree species from a 50-ha stem-mapped plot in Panama. 4. A comparison showed that the new method outperformed the previous methods to an appreciable degree. Particularly for abundant species in a finely gridded map (5 × 5 m), its bias is very small and the method can also reduce the root mean square error by up to 30%. As with previous methods, however, the new method’s performance decreases as cell size increases. 5. As a by-product, the new method provides an approach to estimate spatial autocorrelation of species distribution, which is otherwise difficult to estimate for a presence/absence map.
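In the spirit of the Gamma-Poisson model above, the negative binomial gives the probability that a cell is empty as (1 + μ/k)^(−k), which can be inverted to recover the mean density μ from observed occupancy when the clumping parameter k is known. A hedged R sketch with hypothetical values; the paper itself estimates k from map aggregation rather than assuming it:

```r
# Invert the negative binomial zero probability to get mean density per cell,
# then scale up to total abundance across the mapped cells.
abundance_from_occupancy <- function(occupancy, k, n_cells) {
  mu <- k * ((1 - occupancy)^(-1 / k) - 1)
  mu * n_cells
}

# Hypothetical example: 35% of 20 000 grid cells occupied, clumping k = 0.5
abundance_from_occupancy(occupancy = 0.35, k = 0.5, n_cells = 20000)
```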
 
Article
1. Mark–recapture studies are often used to estimate population size based on a single source of individual identification data such as natural markings or artificial tags. However, with the development of molecular ecology, multiple sources of identification can be obtained for some species and combining them to obtain population size estimates would certainly provide better information about abundance than each survey can provide alone. 2. We propose an extension of the Jolly–Seber model to infer abundance by combining two sources of capture–recapture data. The need to merge both sources of data was motivated by studies of humpback whales in which both photo-identification and DNA from skin biopsy samples are often collected. As whales are not necessarily available to both sampling methods on any given occasion, they can appear twice in the combined data set if they were never sampled by both methods on the same occasion, i.e. photographed and genotyped at the same time. Our model thus combines the two sources of information by estimating the possible overlap. Monte Carlo simulations are used to assess the properties of the present estimator that is then used to estimate the size of the humpback whale population in New Caledonia. The new open-population estimator is also compared with classic closed-population estimators incorporating temporal and/or individual heterogeneity in the capture probability: the purpose was to evaluate which approach (closed or open population) was the least biased for an open population with individual heterogeneous capture probabilities. 3. When all assumptions are met, the estimator is unbiased as long as the probability of being double-tagged (e.g. photographed and biopsied on the same occasion) on every occasion is above 0·2. 4. The humpback whale case study in New Caledonia shows that our two-source Jolly–Seber (TSJS) estimator could be more efficient in estimating population size than models based only on one type of data. For monitoring purposes, the proposed method provides an efficient alternative to the existing approaches and a productive direction for future work to deal with multiple sources of data to estimate abundance. 5. R code for formatting the data and implementing the TSJS model is provided in Resource S5.
 
Article
1. Binomial mixture models use repeated count data to estimate abundance. They are becoming increasingly popular because they provide a simple and cost-effective way to account for imperfect detection. However, these models assume that individuals are detected independently of each other. This assumption may often be violated in the field. For instance, manatees (Trichechus manatus latirostris) may surface in turbid water (i.e. become available for detection during aerial surveys) in a correlated manner (i.e. in groups). However, correlated behaviour, affecting the non-independence of individual detections, may also be relevant in other systems (e.g. correlated patterns of singing in birds and amphibians).
 
Article
1. Tracking return migrations in songbirds has been impossible until recently when miniaturization of light-level loggers enabled observation of the first complete round trip. Although geolocators are extensively used on animals at sea, little is known about how accurate geolocators are for tracking terrestrial or forest-dwelling migrants. 2. To test the accuracy of geolocators for tracking migratory songbirds living in forested habitat, we calibrated geolocators to a source population located in central Europe and collected location estimates based on the source population calibration from stationary geolocators deployed over an 800 km NE to SW gradient in Western Europe. Additionally, we fit non-migratory songbirds (European blackbirds, Turdus merula) with geolocators for 12 months to compare known locations of individuals with locations estimated by geolocators. 3. We found an average error ±95% CI of 201 ± 43 km in latitude for stationary geolocators in forest habitat. Longitude error was considerably lower (12 ± 03 km). The most accurate geolocator was on average 23 km off target, the worst was on average 390 km off. 4. The winter latitude estimate error for geolocators deployed on sedentary birds was on average (±95% CI) 143 ± 62 km when geolocators were calibrated during the breeding season and 132 ± 75 km when they were calibrated during the winter. Longitude error for geolocators deployed on birds was on average (±95% CI) 50 ± 34 km. 5. Although we found error most likely due to seasonal changes in habitat and behaviour, our results indicate that geolocators can be used to reliably track long-distance forest-dwelling migrants. We also found that the low degree of error for longitude estimates attained from geolocators makes this technology suitable for identifying relatively short-distance movements in longitude.
 
The discrepancy between true latitude and latitude estimated by the threshold method where day length has a constant deviation. The deviation in kilometres is calculated for different latitudes and is drawn as a range, which approximately reflects the 90 and 10 percentiles in deviation owing to weather effects (2–17 min).
Deviation from ‘true’ day length (devDL) and solar midnight/noon (devMN) in min:s of stationary geolocators which were exposed to the effects of (a) weather, (b) topography plus weather and (c) vegetation plus weather. For deviations, mode values and 10 and 90 percentile values are given; extremes are symbolized as dots. The grey bars indicate the range of ±2 min, which is the potential highest accuracy of each sunrise and sunset time determined by refraction at the horizon.
Deviation (min:s) from true day length (devDL) and solar midnight/noon (devMN) for on-bird geolocators of (a) Arctic Tern for open-landscape habitat, (b) European Hoopoe for open woodland habitat and (c) Common Nightingale for woodland habitat. The grey bars indicate the range of ±2 min, which is the potential highest accuracy of each sunrise and sunset time determined by refraction at the horizon.
Two examples of latitudinal positions calculated through different calibration methods. Data come from (a) a stationary geolocator placed within a reed bed in Switzerland and (b) a Great Reed Warbler during winter in a stationary unknown position. The upper graphs show the latitudinal positions over a specific time period. Positions were derived through ‘civil twilight calibrations’ (open circles) and through ‘Hill–Ekstrom calibration’ (black circles). In the lower graphs (a2 and b2), the latitudinal positions derived through the different methods are compared. The black dotted line in a1/a2 gives the real position of the geolocator.
Article
1. Geolocation by light allows for tracking animal movements, based on measurements of light intensity over time by a data-logging device (‘geolocator’). Recent developments of ultra-light devices (<2 g) broadened the range of target species and boosted the number of studies using geolocators. However, an inherent problem of geolocators is that any factor or process that changes the natural light intensity pattern also affects the positions calculated from these light patterns. Although the most important factors have been identified, estimation of their effect on the accuracy and precision of the estimated positions has been lacking, even though it is very important for the analysis and interpretation of geolocator data.
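For context, the threshold positions that these factors perturb come from two simple relationships: longitude from the timing of solar noon, and latitude from day length via the sunrise equation. A rough R sketch that neglects refraction and sun-elevation corrections (which is precisely why calibration matters); the twilight times and solar declination below are hypothetical:

```r
# Hypothetical sunrise/sunset times (decimal hours, UTC) and solar declination
sunrise_utc <- 5.90
sunset_utc  <- 20.10
declination <- 15 * pi / 180          # radians, for the same day

# Longitude from the timing of solar noon (degrees, east positive)
solar_noon <- (sunrise_utc + sunset_utc) / 2
longitude  <- (12 - solar_noon) * 15

# Latitude from day length via the sunrise equation:
# cos(hour angle) = -tan(lat) * tan(declination)
day_length <- sunset_utc - sunrise_utc
hour_angle <- (day_length / 2) * 15 * pi / 180
latitude   <- atan(-cos(hour_angle) / tan(declination)) * 180 / pi

c(longitude = longitude, latitude = latitude)
```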
 
Article
1. Cougar (Puma concolor) populations, like other large carnivores, have increased during recent decades and may be recolonizing their former ranges in Midwestern North America. The dispersal routes taken by these animals from established populations are unknown and insight into these movements would facilitate their conservation and management. 2. We inferred the origin and migration route of four dispersing cougars using stable hydrogen (δD) and carbon (δ13C) isotope values along one of their claws. We compared isotopic variations within claws to regional and large-scale isoscapes of δD and δ13C values in prey species. Using a likelihood-based assignment approach, we predicted the most likely dispersal route of each cougar (among several least-cost dispersal paths to potential source populations) in a chronological sequence dating back from its final location. 3. Our model predicted the origin of a radio-collared short-distance disperser and inferences about the most likely dispersal corridors for two long-distance dispersers matched reported information from re-sighting events and genetic investigations. 4. Insights about the most likely migration corridors may help identify critical areas and guide future conservation efforts of cougars and other large carnivores. We encourage managers to extend regional isoscapes based on sedentary prey species as they prove to be valuable tools in isotopic tracking of long-distance migration. 5. Our isotopic approach may be extended to other metabolically inert tissues that grow continuously, to investigate dispersal paths of species of interest, provided that individuals disperse across known isotopically structured landscapes.
 
Article
1. Abundance estimation is a pervasive goal in ecology. The rate of detection by motion-sensitive camera traps can, in principle, provide information on the abundance of many species of terrestrial vertebrates that are otherwise difficult to survey. The random encounter model (REM, Rowcliffe et al. 2008) provides a means of estimating abundance from camera trap rate but requires camera sensitivity to be quantified. 2. Here, we develop a method to estimate the area effectively monitored by cameras, which is one of the most important codeterminants of detection rate. Our method borrows from distance sampling theory, applying detection function models to data on the position (distance and angle relative to the camera) where the animals are first detected. Testing the reliability of this approach through simulation, we find that bias depends on the effective detection angle assumed but was generally low at less than 5% for realistic angles typical of camera traps. 3. We adapted standard detection functions to allow for the possibility of smaller animals passing beneath the field of view close to the camera, resulting in reduced detection probability within that zone. Using a further simulation to test this approach, we find that detection distance can be estimated with little or no bias if detection probability is certain for at least some distance from the camera. 4. Applying this method to a 1-year camera trapping data set from Barro Colorado Island, Panama, we show that effective detection distance is related strongly positively to species body mass and weakly negatively to species average speed of movement. There was also a strong seasonal effect, with shorter detection distance during the wet season. Effective detection angle is related more weakly to species body mass, and again strongly to season, with a wider angle in the wet season. 5. This method represents an important step towards practical application of the REM, including abundance estimation for relatively small (<1 kg) species.
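A minimal sketch of the distance-sampling idea borrowed above: with a half-normal detection function, an effective detection radius follows directly from the maximum-likelihood estimate of σ obtained from first-detection distances. The distances below are simulated, and the paper's models additionally handle detection angles and reduced detection close to the camera:

```r
set.seed(1)
# Simulated first-detection distances (m), consistent with a half-normal
# detection function acting on uniformly distributed animals (Rayleigh draws)
distances <- 4 * sqrt(-2 * log(runif(200)))

# For g(r) = exp(-r^2 / (2 * sigma^2)) in a point-detection setting,
# the MLE of sigma^2 is sum(r^2) / (2n), and the effective detection
# radius (the radius of a circle with equal expected detections) is sigma * sqrt(2).
sigma_hat <- sqrt(sum(distances^2) / (2 * length(distances)))
edr       <- sigma_hat * sqrt(2)
edr
```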
 
Article
1. Researchers using the adaptive management paradigm consider learning about the behaviour of social-ecological systems as an inherent element of endeavours to improve the provision of ecosystem services. Learning-by-experience about social-ecological systems is a slow process attributable to system complexity. We review recent developments in systems modelling which support learning by creating a salient diversity of management alternatives and by translating science-based results into stakeholder perspectives. 2. Design-oriented learning cycles aimed at developing ecosystem services could be improved using systematic model-based diversification and selection of natural resource management alternatives. 3. Recent advances in spatially explicit computer-based ecological modelling and in visualization of results can effectively support repeated learning cycles. 4. Prioritization and weighing of conservation objectives and ecosystem services should be postponed until after the exploration of the synergies and trade-offs among objectives. 5. Investigating whether this evolutionary design approach can increase adoption of management adjustments and help to avoid lock-in onto unsustainable development trajectories should be part of efforts to understand the way we learn.
 
Summary of U-CARE LOF tests for the Hector's dolphin study
Sensitivity (sφ) of asymptotic population growth rate (λ) to the value of adult survival probability (φ), versus age at first reproduction (α) and φ/λ, using the population model in Eq. 1.
Summary of U-CARE LOF tests for the wolf study
Article
1. Mark–recapture studies are often used to estimate adult survival probability (ϕ), which is an important demographic parameter for long-lived species, as it can have a large impact on the population growth rate. We consider the impact of variation in capture probability among individuals (capture heterogeneity) on the estimation of ϕ from a mark–recapture study and thence on estimation of the asymptotic population growth rate (λ). 2. We review the mechanisms by which capture heterogeneity arises, methods of allowing for it in the analysis, and use simulation to assess the power of detecting three types of capture heterogeneity (two-group heterogeneity, trap-response and temporary emigration) using standard mark–recapture lack-of-fit tests. 3. We use simulation to assess the bias that can arise in the estimation of ϕ from a mark–recapture study when we do not allow for capture heterogeneity. Using a generic population model, we assess the effect this bias has on estimation of λ. 4. We use our results on the power of the lack-of-fit tests, together with a measure of the size of the bias relative to the standard error of the estimate of ϕ, to assess which situations might lead to an important level of undetected bias. Our results suggest that undetected bias is not likely to be an issue when there is trap-response, owing to the lack-of-fit tests having sufficient power to detect any trap-response that could lead to non-negligible bias. For two-group heterogeneity, the worst bias generally occurs when the difference between the capture probabilities for the two groups is moderate and both capture probabilities are low. For temporary emigration, the worst bias generally occurs when the rate of emigration and the capture probability are both low. 5. We illustrate the issues for conservation management using data from studies of Hector’s dolphin (Cephalorhynchus hectori) in New Zealand and wolves (Canis lupus) in France. 6. Previous studies have suggested that capture heterogeneity will generally lead to a relatively small bias in the estimate of ϕ. However, given the high sensitivity of the asymptotic population growth rate to adult survival, a small bias in ϕ might lead to nontrivial bias in the estimate of λ.
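The sensitivity of λ to adult survival referred to above (and in the figure caption) can be computed from any matrix population model with the standard eigenvector formula. The sketch below uses a generic age-structured matrix with invented parameter values, not the study's Eq. 1.

```python
import numpy as np

# Generic age-structured model (all values illustrative, not from the study):
alpha, phi, fec, juv_surv = 3, 0.90, 0.50, 0.60

# Leslie matrix with alpha pre-reproductive classes and one adult class.
n = alpha + 1
A = np.zeros((n, n))
A[0, alpha] = fec                 # only adults reproduce
for i in range(alpha):
    A[i + 1, i] = juv_surv        # juvenile progression
A[alpha, alpha] = phi             # adult survival

# Asymptotic growth rate = dominant eigenvalue; sensitivities from eigenvectors.
vals, W = np.linalg.eig(A)
k = np.argmax(vals.real)
lam = vals.real[k]
w = W[:, k].real                  # stable age structure
vals_t, V = np.linalg.eig(A.T)
v = V[:, np.argmax(vals_t.real)].real   # reproductive values
S = np.outer(v, w) / (v @ w)      # S[i, j] = d(lambda) / d(A[i, j])
print(f"lambda = {lam:.3f}, sensitivity to adult survival = {S[alpha, alpha]:.3f}")
```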
 
Article
1. Developing the next-generation of species distribution modelling (SDM) requires solutions to a number of widely recognised problems. Here, we address the problem of uncertainty in predictor variables arising from fine-scale environmental variation. 2. We explain how this uncertainty may cause scale-dependent ‘regression dilution’, elsewhere a well-understood statistical issue, and explain its consequences for SDM. We then demonstrate a simple, general correction for regression dilution based on Bayesian methods using latent variables. With this correction in place, unbiased estimates of species occupancy vs. the true environment can be retrieved from data on occupancy vs. measured environment, where measured environment is correlated with the true environment, but subject to substantial measurement error. 3. We then show how applying our correction to multiple co-occurring species simultaneously increases the accuracy of parameter estimates for each species, as well as estimates for the true environment at each survey plot – a phenomenon we call ‘neighbourly advice’. With a sufficient number of species, the estimates of the true environment at each plot can become extremely accurate. 4. Our correction for regression dilution could be integrated with models addressing other issues in SDM, e.g. biotic interactions and/or spatial dynamics. We suggest that Bayesian analysis, as employed here to address uncertainty in predictor variables, might offer a flexible toolbox for developing such next-generation species distribution models.
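Regression dilution itself is easy to demonstrate numerically. The sketch below simulates a linear (rather than occupancy) response for simplicity and applies the classical reliability-ratio correction with a known error variance; this is a simpler stand-in for the Bayesian latent-variable correction described in the abstract, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x_true = rng.normal(0.0, 1.0, n)            # true fine-scale environment
x_meas = x_true + rng.normal(0.0, 0.8, n)   # measured environment with error
y = 2.0 * x_true + rng.normal(0.0, 0.5, n)  # response to the TRUE environment

# Naive regression of y on the measured environment: the slope is attenuated
# towards zero by the reliability ratio var(x_true) / (var(x_true) + var(error)).
slope_naive = np.polyfit(x_meas, y, 1)[0]

# Method-of-moments correction, assuming the error variance (0.8**2) is known.
reliability = 1.0 / (1.0 + 0.8 ** 2)
print(f"naive slope = {slope_naive:.2f}, corrected = {slope_naive / reliability:.2f} (true value 2.0)")
```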
 
Article
1. Conservation objectives for non-breeding shorebirds (waders) are determined from their population size. Individual-based models (IBMs) have accurately predicted mortality rate (a determinant of population size) of these species, and are a tool for advising coastal management and policy. However, due to their complexity, the use of these IBMs has been restricted to specialist modellers in the scientific community, whereas, ideally, they should be accessible to non-specialists with a direct interest in coastal issues. 2. We describe how this limitation has been addressed by the development of WaderMORPH, a user-friendly interface to a shorebird IBM, MORPH, that runs within Microsoft Windows. WaderMORPH hides technical and mathematical details of parameterisation from the user and allows models to be parameterised in a series of simple steps. We provide an overview of WaderMORPH and its range of applications. WaderMORPH, its user guide and an example data set can be downloaded from http://individualecology.bournemouth.ac.uk.
 
Summarized percentages of positive correlations with biodiversity
Article
1. Structural diversity and niche differences within habitats are important for stabilizing species coexistence. However, land-use change leading to environmental homogenization is a major cause of the dramatic decline of biodiversity under global change. The difficulty in assessing large-scale biodiversity losses urgently requires new technological advances to evaluate land-use impacts on diversity across space in a timely and efficient manner. 2. While cost-effective aerial images have been suggested for potential biodiversity assessments in forests, correlation of canopy object variables such as gaps with plant or animal diversity has so far not been demonstrated using these images. 3. Here, we show that aerial images of canopy gaps can be used to assess floristic biodiversity of the forest understorey. This approach is made possible because we employed cutting-edge unmanned aerial vehicles and very high-resolution images (7 cm pixel−1) of the canopy properties. We demonstrate that detailed, spatially implicit information on gap shape metrics is sufficient to reveal a strong dependency between disturbance patterns and plant diversity (R2 up to 0·74). This is feasible because opposing disturbance patterns such as aggregated and dispersed tree retention correspond directly to different functional and dispersal traits of species and ultimately to different species diversities. 4. Our findings can be used as a coarse-filter approach to conservation in forests wherever light strongly limits regeneration and biodiversity.
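Gap shape metrics of the kind referred to above can be computed from a classified canopy image with standard image-analysis tools. The sketch below uses scikit-image on a toy binary gap raster; the specific metrics (area, perimeter and a circularity-based shape index) are generic choices and not necessarily those used in the study.

```python
import numpy as np
from skimage import measure

# Toy binary canopy-gap raster (1 = gap, 0 = closed canopy); in practice this
# would come from the classified UAV orthoimage.
gaps = np.zeros((60, 60), dtype=int)
gaps[5:15, 5:25] = 1            # an elongated gap
gaps[35:45, 35:45] = 1          # a compact gap

pixel = 0.07                    # m per pixel (7 cm resolution, as in the study)
for region in measure.regionprops(measure.label(gaps, connectivity=1)):
    area = region.area * pixel ** 2
    perimeter = region.perimeter * pixel
    shape_index = perimeter / (2.0 * np.sqrt(np.pi * area))  # 1 = circular, larger = more complex
    print(f"gap {region.label}: area {area:.2f} m2, perimeter {perimeter:.2f} m, shape index {shape_index:.2f}")
```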
 
Article
1. The fluid component of blood is widely used in ecophysiological investigations, including measures of immune function and stable isotope ecology. After blood collection, delayed separation of blood extracellular fluids from red blood cells is known to affect the concentration of a wide range of biochemical compounds in the resulting fluid, as does prevention of clotting (producing plasma) when compared with blood allowed to clot (producing serum). 2. One challenge when investigating immune function and stable isotope ecology, therefore, is distinguishing variation caused by the biological factors of interest from potential methodological artefacts. This study assesses how seven widely used measures of immune function and stable isotope composition respond both to delayed separation of the cellular and fluid components and to the clotting of blood samples from two species of waterfowl. 3. Samples that remained uncentrifuged for up to 12 h did not differ from those centrifuged within 15 min of sampling from the same individuals, indicating that samples from a wide range of field conditions may remain highly comparable. However, the outcome of three of the four immunological assays and two of the three isotopic analyses was highly dependent on the type of fluid, with higher immunological activity and higher relative concentrations of heavy carbon and total nitrogen in plasma compared to serum. 4. Researchers interested in immune function and stable isotope ecology may obtain the most useful results by ensuring that they use a single fluid type in their investigations.
 
Article
1. Hormone analyses are frequently used to support the management of wildlife; however, current techniques are not very field-friendly. In situ hormone monitoring is often expensive, time consuming and logistically difficult. Thus, a new method for assessing ovarian cycle activity non-invasively in free-ranging African elephants was developed. 2. The technique involves handshaking faecal samples in common organic solvents, use of environmentally stable antibody-coated microtitre plates and assessment of progestagen concentrations based on a visual colour change. 3. Studies using ex situ African elephants determined that handshaking faeces in a solution of isopropyl alcohol was effective for extracting the faecal progestagens (efficiency >90%). 4. Antibody-coated plates were stable for up to 3 months under a range of temperatures (4 to >38 °C), and the resulting faecal oestrous cycle progestagen profiles corresponded significantly to those of serum (r = 0·89, P < 0·01). 5. This field-friendly technique provided qualitative hormone data without the need for expensive equipment. Although developed for progestagen analyses in elephants, this approach should be adaptable to other steroids in a myriad of species. As such, it could change how hormones are measured in species under field conditions and provide new tools for making sensible conservation management decisions.
 
Article
1. Humans age, but how much more or less do we age compared with other species? Do humans age more than chimps, birds more than fish or sheep more than buffalos? In this article, I argue that current methods to compare patterns of ageing across species are limited because they confound two dimensions of age-specific change – the pace and the shape of ageing. 2. Based on the two axes of pace and shape, I introduce a new conceptual framework to classify how species age. 3. With this method, I rank species according to how strongly they age (shape) and how fast they age (pace). Depending on whether they are ranked by pace or by shape, species are ordered differently. 4. Alternative pace measures turn out to be highly correlated. Alternative shape measures are also highly correlated. The correlation between pace and shape ranking is negative but weak. Among the examples here, no species is long lived yet exhibits negligible ageing – contrary to the commonly held view that long-lived species are good candidates for negligible ageing. 5. Analysis of species in pace–shape space provides a tool to identify key determinants of the evolution of ageing for species across the tree of life.
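To make the pace-shape distinction concrete, the sketch below computes one possible pace measure (life expectancy) and one possible shape measure (how much mortality rises over an average lifetime) from a Gompertz mortality schedule; both the parameter values and the particular choice of measures are illustrative assumptions, not necessarily the paper's definitions.

```python
import numpy as np

# Gompertz hazard h(x) = a * exp(b * x); parameter values are illustrative only.
a, b = 0.02, 0.10
x = np.linspace(0.0, 120.0, 12001)
hazard = a * np.exp(b * x)

# Survivorship l(x) = exp(-cumulative hazard); life expectancy = integral of l(x).
increments = (hazard[1:] + hazard[:-1]) / 2.0 * np.diff(x)
survivorship = np.exp(-np.concatenate(([0.0], np.cumsum(increments))))
pace = np.trapz(survivorship, x)          # pace measure: life expectancy at age 0

# Illustrative shape measure: mortality at the age of life expectancy relative
# to mortality at age 0 (how strongly mortality rises over an average lifetime).
shape = np.exp(b * pace)
print(f"pace (life expectancy) = {pace:.1f}, shape (relative mortality increase) = {shape:.1f}")
```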
 
Article
1. T-RFLP is an established tool for high-throughput studies of microbial communities, which can, with care and practical validation, be enhanced to aid identification of specific organisms in a community by associating T-RFs from experimental runs with predicted T-RFs from a set of existing sequences. A barrier to this approach is the laborious process of selecting diagnostic restriction enzyme(s) for further validation. 2. Here, we describe directed terminal restriction analysis tool (DRAT), a software tool that aids the design of directed terminal-restriction fragment length polymorphism (DT-RFLP) strategies, to separate DNA targets based on restriction enzyme polymorphisms. The software assesses multiple user-supplied DNA sequences, ranks optimal restriction endonucleases for separating targets and provides summary information including the length of diagnostic terminal restriction fragments. A worked example suggesting enzymes uniquely separating selected arbuscular mycorrhizal fungal groups is presented. 3. This tool greatly facilitates identification of diagnostic restriction enzymes for user-designated groups within complex populations and provides expected product sizes for all designated groups.
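The underlying calculation, predicting the terminal restriction fragment produced by a candidate enzyme and checking whether the fragment lengths separate the target groups, can be sketched in a few lines. The sequences below are invented, and the cut position is simplified to the end of the recognition site; a real analysis would account for the exact cut position within each site.

```python
# Toy illustration of predicting terminal restriction fragment (T-RF) lengths:
# the T-RF is the distance from the labelled 5' end of an amplicon to the first
# cut site of a given enzyme. Sequences are invented; the recognition sites
# (HhaI: GCGC, MspI: CCGG) are real, but the exact cut position is simplified.
enzymes = {"HhaI": "GCGC", "MspI": "CCGG"}
sequences = {
    "target_1": "ATGCGCATTTCCGGAATCGATCGGCTA",
    "target_2": "ATTTCCGGAAGCGCATCGATCGGCTAA",
}

def predicted_trf(seq, site):
    """Length of the 5' terminal fragment, or None if the enzyme does not cut."""
    pos = seq.find(site)
    return None if pos == -1 else pos + len(site)

for enzyme, site in enzymes.items():
    trfs = {name: predicted_trf(seq, site) for name, seq in sequences.items()}
    separates = len(set(trfs.values())) == len(trfs)
    print(enzyme, trfs, "separates targets" if separates else "does not separate targets")
```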
 
Article
1. There is a clear need for field experiments to estimate the effects of global climate change, such as decreasing precipitation and rising atmospheric CO2 concentration. Adequate methods for controlled manipulations of these environmental parameters under field conditions are scarce, particularly in regard to multi-factor experiments. Here, we describe a new flexible rain shelter system, which can be assembled manually and easily combined with further experimental treatments in field studies. 2. Frames of tents with a ground area of 12 m × 20 m were assembled on a field site after the sowing of the maize and sorghum crop and after the equipment for free-air CO2 enrichment had been set up. The tents were equipped with transparent tarpaulins, which were installed on the frames only when high amounts of precipitation were forecast. In autumn, the entire experimental equipment was removed from the field site. 3. The rain shelter tents were operated in the growing seasons 2008 and 2010 for 9 and 20 days, with 54 mm and 176 mm of precipitation excluded, respectively. In the months with rain shelters in operation, pronounced reductions in precipitation were achieved (2008: 39·5%, 2010: 58·6%). The tent frames did not affect temperature or CO2 concentration, but slightly decreased incident photosynthetically active radiation (PAR) by 6·6%. In times with tarpaulins installed, PAR decreased by 24·1%. Comparing times without and with tarpaulins installed, the fraction of time in which the 1-min mean CO2 concentration was within ±20% limits of the setpoint decreased from 99·7% to 97·8% in 2008 and from 99·0% to 96·7% in 2010, respectively. 4. The rain shelter tents provide a suitable and versatile tool for excluding precipitation from larger areas in the field without relevant disturbances to the soil and aerial environment, except for a slight decrease in incident radiation, which can be accounted for, e.g. in the evaluation of plant growth data. Furthermore, they can easily be combined with further experimental treatments such as free-air CO2 enrichment.
 
Article
1. Investigating allometric relationships is interesting in its own right as well as being important when controlling for the effects of body mass on specific structures in other areas of research. Here, I explore a number of basic assumptions behind the study of allometry and demonstrate that they are incorrect in theory and in practice for one commonly studied allometric relationship, that between body mass and testes mass. 2. First, allometric relationships need not be linear. As a result, I recommend that the potential for nonlinearity should be considered and additive modelling (AM) or generalized additive modelling (GAM) used to explore this possibility before linear statistics are applied. In this respect, assessing and comparing the shape and the topology of allometric relationships is an important first step in any study which investigates or uses allometric relationships. This is particularly important when, as ecologists, we wish to control for the effects of allometry, where assuming linearity when it is not the case can lead to systematic, size-based biases in the results. 3. A corollary of relaxing the assumption that allometric relationships must be linear is that, for a given range of body masses examined, such relationships need not pass through the origin: a nonzero intercept could indicate nonlinearity between the lowest body mass examined and a theoretical body mass of zero. 4. Finally, I show that allometric relationships identified using ratio data as the dependent variable are not necessarily biased. In particular, for testes mass allometry, when ratio data are log-transformed, the relationship obtained is topologically homologous to allometric relationships identified using log testes mass as the dependent variable. That is, both produce the same fitted values for testes mass from a given body mass. In addition, the same residual value for each data point is obtained in both cases. 5. A lack of awareness of these assumptions means that allometric relationships may be misunderstood, leading to a failure to account correctly for the effects of body mass in studies where this allometry needs to be controlled for.
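A minimal sketch of the recommended check, comparing an ordinary linear allometric fit with an additive model, is shown below, assuming the pygam package is available; the data are simulated with deliberate curvature purely for illustration.

```python
import numpy as np
from pygam import LinearGAM, s

rng = np.random.default_rng(2)
log_mass = rng.uniform(0.0, 4.0, 200)
# Simulated log testes mass with deliberate curvature (illustrative data only).
log_testes = 0.7 * log_mass - 0.05 * log_mass ** 2 + rng.normal(0.0, 0.1, 200)

# Ordinary linear allometry: log(testes mass) = a + b * log(body mass).
b, a = np.polyfit(log_mass, log_testes, 1)

# Additive model with a smooth term; compare fitted shapes before assuming linearity.
gam = LinearGAM(s(0)).fit(log_mass.reshape(-1, 1), log_testes)
grid = np.linspace(0.0, 4.0, 5).reshape(-1, 1)
print("linear fit:", np.round(a + b * grid.ravel(), 2))
print("GAM fit:   ", np.round(gam.predict(grid), 2))
```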
 
Article
1. Closed ecological systems (CES) are small man-made ecosystems which do not have any material exchange with the surrounding environment. Recent ecological and technological advances enable the successful establishment and maintenance of CES, making them a suitable tool for detecting and measuring subtle feedbacks and mechanisms. 2. As part of an analogue (physical) C cycle modelling experiment, we developed a non-intrusive methodology to control the internal environment and to monitor atmospheric CO2 concentration inside 16 replicated CES. Whilst maintaining an air-tight seal of all CES, this approach allowed access to the CO2 measuring equipment for periodic re-calibration and repairs. 3. To ensure reliable cross-comparison of CO2 observations between individual CES units and to minimize the cost of the system, only one CO2 sampling unit was used. An ADC BioScientific OP-2 (open-path) analyser mounted on a swinging arm passed over a set of 16 measuring cells. Each cell was connected to an individual CES, with air continuously circulating between them. 4. Using this setup, we were able to continuously measure several environmental variables and the CO2 concentration within each closed system, allowing us to study minute effects of changing temperature on C fluxes within each CES. The CES and the measuring cells showed minimal air leakage during an experimental run lasting, on average, 3 months. The CO2 analyser assembly performed reliably for over 2 years; however, an early iteration of the present design proved sensitive to positioning errors. 5. We indicate how the methodology can be further improved and suggest possible avenues for future CES-based research.
 
Illustrative dispersal kernels generated using the WALD model for plants aged 11, 30 and 50. Importantly, older plants produce more seeds and, because they have greater release height, these seeds travel further on average (see also Table 1). The difference in the dispersal distributions is substantial.
Wavespeed (metres per year) estimated from both the individual-based models (IBM) and the analytical model parameterised with 4, 8, 20 and 60 age classes. Errors for the IBM model are standard deviations derived from 20 replicate simulations.
The elasticities of the analytical model’s wavespeed, c*, to transitions, aij, of the population projection matrix A (eqn 3) for 20 age classes. Key: elasticities for fecundity values (grey squares), elasticities for the survival of plants staying in the same age class (white circles), elasticities of progressions to the next age class (black triangles).
Individual-based model predictions of spread over a real landscape without management (a) and following removal of all plants except those in private gardens (b). Green = current, red = in 20 years' time, black = boundary line for gardens.
Article
1. Improving the understanding, prediction and management of range expansions is a key challenge for ecology. Over recent years, there has been a rapid increase in modelling effort focussed on range expansions and a shift from predominantly theoretical developments towards application. This is especially the case in the field of invasion biology and also in relation to reintroductions and species’ responses to climate change. 2. While earlier models were exclusively analytical, individual-based models (IBMs) are now increasingly widely used. We argue that instead of being viewed as competing methodologies, analytical and individual-based methods can valuably be used in conjunction. 3. We use a mechanistic wind dispersal model to generate age-specific dispersal kernels for the invasive shrub, Rhododendron ponticum. To demonstrate the utility of employing both modelling approaches, this information along with demographic parameters is incorporated into an IBM and an analytical, integrodifference model. From both models, the equilibrium rate of spread is calculated. 4. Estimates of wavespeeds were similar for the two models, although slower rates of spread were consistently projected by the IBM. Further, our results demonstrate the wavespeed to be sensitive to the characterisation of age structure in the model; when few age classes are used, much higher rates of spread are projected. 5. The analytical model is extremely efficient at providing elasticity analysis of the wavespeed, which can provide helpful information for management. We gain qualitatively similar results using the IBM but obtaining the results is time-consuming and, because the model is stochastic, they are noisy and harder to interpret. We argue that analytically derived transient elasticity analyses are needed for the many cases where success of control is measured on a relatively short time horizon. 6. To demonstrate the flexibility of the IBM approach, we run it on a real landscape comprising different habitat types. The comparison of two different control scenarios is an example of the utility of this approach for more tactical applications. 7. As a general conclusion of the study, we emphasise that analytical and individual-based approaches offer different, but complementary, advantages and suggest how their joint use can facilitate the improvement in biodiversity management at a range of spatial scales.
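A standard way to compute the equilibrium wavespeed of a stage-structured integrodifference model (following Neubert and Caswell) uses the projection matrix and the moment generating functions of the dispersal kernels. The sketch below uses an invented three-class matrix and a Gaussian kernel rather than the WALD kernels and parameters of the Rhododendron model, so the output is illustrative only.

```python
import numpy as np

# Wavespeed of a stage-structured integrodifference model:
# c* = min over s > 0 of (1/s) * ln(rho(H(s))), where H(s) = A * M(s) elementwise
# and M(s) holds moment generating functions of the dispersal kernels.
# Matrix and kernel values are invented; a Gaussian kernel (MGF = exp(0.5 * sigma^2 * s^2))
# is used here for simplicity instead of the WALD kernel.
A = np.array([[0.0, 0.0, 50.0],    # fecundity of the oldest class
              [0.3, 0.5, 0.0],     # survival and progression
              [0.0, 0.4, 0.9]])
sigma = np.array([[0.0, 0.0, 5.0], # only recruitment disperses (kernel sd, metres)
                  [0.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])

def wavespeed(A, sigma, s_grid=np.linspace(0.01, 2.0, 400)):
    speeds = []
    for s in s_grid:
        H = A * np.exp(0.5 * (sigma * s) ** 2)
        rho = np.max(np.abs(np.linalg.eigvals(H)))
        speeds.append(np.log(rho) / s)
    return min(speeds)

print(f"projected wavespeed: {wavespeed(A, sigma):.2f} metres per year")
```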
 
Article
1. Invasive ungulates with eruptive population dynamics can degrade sensitive habitats, harbour disease-causing pathogens and facilitate the spread of weedy plants. Hence, there is a need globally for cost-effective density reduction and damage mitigation strategies. User-friendly software tools that facilitate effective decision making by managers (who are not usually scientists) can help in understanding uncertainty and maximising benefits to native biodiversity within a constrained budget. 2. We designed an easy-to-use spreadsheet model – the Spatio-Temporal Animal Reduction (STAR) model – for strategic management of large feral ungulates (pigs, swamp buffalo and horses) within the World Heritage Kakadu National Park in Australia. The main goals of the model are to help park managers understand the landscape and population dynamics that influence the number and distribution of feral ungulates in time and space. 3. The model is a practical tool and methodological advance that provides a forecast of the effects and financial costs of proposed management plans. Feral animal management in the park is complex because populations cover an extensive area comprising diverse and difficult-to-access habitats. There are also large reservoir populations in the regions surrounding the park, and these can provide immigrants even after within-park control operations. To provide the optimal outcomes for the reduction of feral animals, STAR is spatially explicit in relation to habitat, elevation and regions of culling, and applies density-feedback models in a lattice framework (multi-layer grid) to determine the optimal cost–benefit ratio of control choices. A series of spatial and nonspatial optimisation routines yielding the best cost–benefit approaches to culling is provided. 4. The spreadsheet module is flexible and adaptable to other regions and species, and is made available for testing and modifying. Users can operate STAR without prior expert knowledge of animal management theory and application. The intuitive spreadsheet format could render it effective as a teaching or training tool for undergraduate students and landscape managers who might not have detailed ecological backgrounds.
 
Article
1. Estimating population size is a fundamental objective of many animal monitoring programmes. Capture–recapture methods are often used to estimate population size from repeated sampling of uniquely marked animals, but capturing and marking animals can be cost prohibitive and affect animal behaviours, which can bias population estimates. 2. We developed a method to construct spatially explicit capture–recapture encounter histories from locations of unmarked animals for estimating population size with conventional capture–recapture models. Prior estimates of the maximum distance individuals move in the population are used to set a summary statistic and process subsequent capture–recapture survey data. Animal locations are recorded as point coordinates during survey occasions, and the parameter of interest is the abundance of individual activity centres. 3. We applied this method to data from a point-coordinate capture–recapture survey of burrowing owls Athene cunicularia in the Imperial Valley of California, USA. We also used simulations to examine the utility of this technique for additional species with variable detection probabilities, levels of home range overlap and distributions of activity centres within a survey area. 4. The estimates from empirical and simulation studies were precise and unbiased when detection probabilities were high and territorial overlap was low. 5. This method of estimating population size from point locations fills a gap in non-invasive census and long-term monitoring methods available for conspicuous species and provides accurate estimates of burrowing owl territory abundance. The method requires high detection probabilities, low levels of home range overlap and that individuals use activity centres. We believe that these requirements can be met, with suitable survey protocols, for numerous songbird and reptile species.
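The core data-processing idea, linking detections from different occasions to the same putative activity centre whenever they fall within the prior maximum movement distance, can be sketched as follows. The coordinates, threshold and the simple greedy linking rule are all illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

# Detections of unmarked owls as (x, y) points per survey occasion (invented data).
occasions = [
    np.array([[0.0, 0.0], [100.0, 5.0]]),
    np.array([[3.0, 2.0], [98.0, 10.0], [200.0, 0.0]]),
    np.array([[1.0, -1.0], [202.0, 3.0]]),
]
max_move = 30.0   # prior estimate of the maximum distance an individual moves (m)

centres, histories = [], []   # putative activity centres and their encounter histories
for occ, points in enumerate(occasions):
    for p in points:
        dists = [np.linalg.norm(p - c) for c in centres]
        if dists and min(dists) <= max_move:
            histories[int(np.argmin(dists))][occ] = 1   # same putative individual
        else:                                           # start a new putative individual
            centres.append(p)
            history = [0] * len(occasions)
            history[occ] = 1
            histories.append(history)

for centre, history in zip(centres, histories):
    print(f"activity centre near {centre}: encounter history {history}")
```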
 
Article
1. When exploring spatially complex ecological phenomena using regression models, it is often unreasonable to assume a single set of regression coefficients can capture space-varying and scale-dependent relationships between covariates and the outcome variable. This is especially true when conducting analysis across large spatial domains, where there is an increased propensity for anisotropic dependence structures and non-stationarity in the underlying spatial processes. 2. Geographically weighted regression (GWR) and Bayesian spatially-varying coefficients (SVC) are the most common methods for modelling such data. This paper compares these methods for modelling data generated from non-stationary processes. The comparison highlights some strengths and limitations of each method and aims to assist those who seek appropriate methods to better understand spatially complex ecological systems. Both synthetic and ecological data sets are used to facilitate the comparison. 3. Results underscore the need for the postulated model to approximate the underlying mechanism generating the data. Further, results show that GWR and SVC can produce very different regression coefficient surfaces, and hence dramatically different conclusions can be drawn regarding the impact of covariates. The trade-off between the richer inferential framework of SVC models and their computational demands is also discussed.
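For readers unfamiliar with GWR, its essence is a separate weighted least-squares fit at each location, with weights declining with distance under a spatial kernel. The sketch below is a bare-bones numpy version with an invented data set and a fixed Gaussian bandwidth; real analyses would use dedicated GWR or SVC software and data-driven bandwidth selection.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
coords = rng.uniform(0.0, 10.0, (n, 2))
x = rng.normal(size=n)
beta_true = 0.5 + 0.2 * coords[:, 0]            # coefficient drifts from west to east
y = 1.0 + beta_true * x + rng.normal(0.0, 0.3, n)

def local_slope(site, coords, x, y, bandwidth=2.0):
    """Local slope at one site from Gaussian-kernel weighted least squares."""
    d = np.linalg.norm(coords - site, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    X = np.column_stack([np.ones_like(x), x])
    XtW = X.T * w
    return np.linalg.solve(XtW @ X, XtW @ y)[1]

for site in (np.array([1.0, 5.0]), np.array([9.0, 5.0])):
    print(f"site {site}: estimated local slope = {local_slope(site, coords, x, y):.2f}")
```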
 
Article
1. Geographic information systems (GIS) have recently proved useful for estimating the environmental niche of species across broad geographic regions. However, these niche-based GIS techniques have yet to be extensively applied to local systems. The assumptions of the methods are transferable across scales: species exist across a range of habitats, and these habitats represent a gradient of suitability that can be characterized using multivariate environmental data in association with known species occurrences.
 
Article
1. In recent years, evolutionary ecologists have become increasingly interested in antioxidants and oxidative stress. Information on redox systems can provide new insights into our understanding of life-history variation and animal responses to environmental stressors. 2. A common approach in ecological studies of animal antioxidant capacity has been to measure the total antioxidant capacity of serum or plasma. Some of these studies have suggested that most of the antioxidant capacity measured in plasma is made up of uric acid and that, therefore, estimates of antioxidant capacity should be corrected for the concentration of uric acid. 3. Here, I show that (i) the correlation between the plasma concentration of uric acid and plasma antioxidant capacity is method dependent and (ii) different assays for the quantification of circulating antioxidant capacity can provide information on different components of the antioxidant machinery. 4. To determine whether measurements of antioxidant capacity need to be corrected for the uric acid concentration in the sample, it is therefore important to take into account the biochemical properties of the assay used.
 
Article
1. To sustain the vital ecosystem service of pollination, new methodological developments are needed for research on the underlying factors of globally observed bee losses. In particular, robust laboratory methods for assessing adverse effects on honey bee brood are required. In addition, from a statistical point of view, the shared origin of test individuals must be considered when analysing ecotoxicological data. 2. To improve honey bee in vitro rearing, we adopted a nongrafting method to collect honey bee larvae without direct manipulation. Linear mixed-effects models used to evaluate the LD50, larval survival and prepupal weights integrated the colony background of the larvae as a random factor into the statistical analyses. The novel rearing approach and appropriate statistical tools for data analysis are illustrated in an in vitro case study on acute oral dimethoate toxicity. 3. We recommend our honey bee larvae collection approach for in vitro larvae-rearing applications because of (i) a mere 3% background mortality up to the prepupal stage, (ii) a high quantitative capacity and (iii) robust performance, which are great benefits for standardization. 4. The analyses indicate clear adverse effects of dimethoate, with a significant reduction in survival and prepupal weight. For second instars, the acute 48-h LD50 was 1·67 μg dimethoate per larva. 5. We conclude that both our larvae collection method and the applied statistical approaches will help to improve the quality of environmental risk assessment studies on honey bees, to secure honey bee pollination and to sustain biodiversity.
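For readers unfamiliar with how an LD50 is obtained from dose-response data, the sketch below fits a simple binomial dose-response curve with statsmodels and reads the LD50 off the fitted logit (LD50 = 10^(-intercept/slope) on a log10-dose scale). The data are invented, and the colony-level random factor described in the abstract is omitted here for brevity, so this is not the authors' mixed-model analysis.

```python
import numpy as np
import statsmodels.api as sm

# Invented dose-response data: dose per larva (micrograms) and deaths out of n tested.
dose = np.array([0.1, 0.5, 1.0, 2.0, 4.0, 8.0])
tested = np.full(6, 30)
deaths = np.array([1, 4, 12, 18, 26, 29])

log_dose = np.log10(dose)
X = sm.add_constant(log_dose)
fit = sm.GLM(np.column_stack([deaths, tested - deaths]), X,
             family=sm.families.Binomial()).fit()

intercept, slope = fit.params
ld50 = 10 ** (-intercept / slope)   # log10-dose at which predicted mortality is 50%
print(f"estimated LD50 = {ld50:.2f} micrograms per larva")
```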
 
Top-cited authors
Holger Schielzeth
  • Friedrich Schiller University Jena
David I Warton
  • UNSW Sydney
Alain Zuur
  • Highland Statistics
David Bryant
  • University of Otago
Jane Elith
  • University of Melbourne