R. M. Lark

British Geological Survey, Nottingham, England, United Kingdom

Publications (154) · 330.07 Total Impact Points

  •
    ABSTRACT: Marine spatial planning and conservation need underpinning with sufficiently detailed and accurate seabed substrate and habitat maps. Although multibeam echosounders enable us to map the seabed with high resolution and spatial accuracy, there is still a lack of fit-for-purpose seabed maps. This is due to the high costs involved in carrying out systematic seabed mapping programmes and the fact that the development of validated, repeatable, quantitative and objective methods of swath acoustic data interpretation is still in its infancy. We compared a wide spectrum of approaches including manual interpretation, geostatistics, object-based image analysis and machine-learning to gain further insights into the accuracy and comparability of acoustic data interpretation approaches based on multibeam echosounder data (bathymetry, backscatter and derivatives) and seabed samples with the aim to derive seabed substrate maps. Sample data were split into a training and validation data set to allow us to carry out an accuracy assessment. Overall thematic classification accuracy ranged from 67% to 76% and Cohen’s kappa varied between 0.34 and 0.52. However, these differences were not statistically significant at the 5% level. Misclassifications were mainly associated with uncommon classes, which were rarely sampled. Map outputs were between 68% and 87% identical. To improve classification accuracy in seabed mapping, we suggest that more studies on the effects of factors affecting the classification performance as well as comparative studies testing the performance of different approaches need to be carried out with a view to developing guidelines for selecting an appropriate method for a given dataset. In the meantime, classification accuracy might be improved by combining different techniques to hybrid approaches and multi-method ensembles.
    Continental Shelf Research 08/2014; 84:107-119. · 2.12 Impact Factor
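    A minimal sketch (not taken from the paper) of how the overall thematic accuracy and Cohen's kappa quoted above are typically computed from a confusion matrix of observed versus predicted substrate classes; the class counts are invented for illustration:

      import numpy as np

      def accuracy_and_kappa(confusion):
          # overall accuracy and Cohen's kappa from a square confusion matrix
          # (rows = observed class, columns = predicted class)
          confusion = np.asarray(confusion, dtype=float)
          n = confusion.sum()
          observed_agreement = np.trace(confusion) / n
          # chance agreement: product of row and column marginals, summed over classes
          chance_agreement = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / n**2
          kappa = (observed_agreement - chance_agreement) / (1.0 - chance_agreement)
          return observed_agreement, kappa

      # hypothetical validation counts for three substrate classes
      conf = [[40, 5, 2],
              [6, 30, 4],
              [3, 7, 10]]
      print(accuracy_and_kappa(conf))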
  •
    ABSTRACT: A Confidence Index is proposed that expresses the confidence of experts in the quality of a 3-D model as a representation of the subsurface at particular locations. The Confidence Index is based on the notion that the variation of the height of a particular geological surface represents general geological variability and local variability. The general variability comprises simple trends which allow the modeller to project surface structure at locations remote from direct observations. The local variability limits the extent to which borehole observations constrain inferences which the modeller can make concerning local fluctuations around the broad trends. The general and local geological variability of particular contacts are modelled in terms of simple trend surfaces and variogram models. These are then used to extend measures of confidence that reflect expert opinion so as to assign a confidence value to any location where a particular contact is represented in a model. The index is illustrated with an example from the East Midlands region of the United Kingdom.
    Proceedings of the Geologists Association 01/2014; · 1.76 Impact Factor
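    The separation of general (trend) variability from local (residual) variability of a geological surface can be sketched as follows; the planar trend and the empirical variogram of residuals are illustrative assumptions, not the trend-surface and variogram models fitted in the paper:

      import numpy as np

      # hypothetical borehole observations: x, y coordinates (m) and surface height (m)
      rng = np.random.default_rng(1)
      x, y = rng.uniform(0, 5000, 200), rng.uniform(0, 5000, 200)
      z = 30.0 + 0.002 * x - 0.001 * y + rng.normal(0.0, 1.5, 200)

      # general variability: first-order (planar) trend surface fitted by least squares
      A = np.column_stack([np.ones_like(x), x, y])
      beta, *_ = np.linalg.lstsq(A, z, rcond=None)
      residuals = z - A @ beta

      # local variability: empirical variogram of the trend-surface residuals
      lags = np.arange(0, 2000, 250)
      dist = np.hypot(x[:, None] - x[None, :], y[:, None] - y[None, :])
      sqdiff = 0.5 * (residuals[:, None] - residuals[None, :]) ** 2
      semivariance = [sqdiff[(dist > lo) & (dist <= lo + 250)].mean() for lo in lags]
      print(np.round(semivariance, 3))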
  •
    ABSTRACT: Spatial predictions of soil properties are needed for various purposes. However, the costs associated with soil sampling and laboratory analysis are substantial. One way to improve efficiencies is to combine measurement of soil properties with collection of cheaper-to-measure ancillary data. There are two possible approaches. The first is the formation of classes from ancillary data. A second is the use of a simple predictive linear model of the target soil property on the ancillary variables. Here, results are presented and compared where proximally sensed gamma-ray (γ-ray) spectrometry and electromagnetic induction (EMI) data are used to predict the variation in topsoil properties (e.g. clay content and pH). In the first instance, the proximal data are numerically clustered using a fuzzy k-means (FKM) clustering algorithm, to identify contiguous classes. The resultant digital soil maps (i.e. k = 2–10 classes) are consistent with a soil series map generated using traditional soil profile description, classification and mapping methods at a highly variable site near the township of Shelford, Nottinghamshire, UK. In terms of prediction, the calculated expected value of mean squared prediction error (i.e. σ²p,C) indicated that values of k = 7 and 8 were ideal for predicting clay and pH. Secondly, a linear mixed model (LMM) is fitted in which the proximal data are fixed effects but the residuals are treated as a combination of a spatially correlated random effect and an independent and identically distributed error. In terms of prediction, the expected value of the mean squared prediction error from a regression (σ²p,R) suggested that the regression models were able to predict clay content better than FKM clustering. The reverse was true with respect to pH, however. We conclude that both methods have merit. In the case of the clustering the approach is able to account for soil properties which have non-linear relationships with the ancillary data (i.e. pH), whereas the LMM approach is best when there is a strong linear relationship (i.e. clay).
    Geoderma 01/2014; 232–234:69–80. · 2.35 Impact Factor
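    A compact sketch of the fuzzy k-means clustering step applied to the ancillary (gamma-ray and EMI) data; the fuzziness exponent, the toy data and the simple update loop are assumptions, and this is not the implementation used in the study:

      import numpy as np

      def fuzzy_k_means(X, k, m=2.0, n_iter=100, seed=0):
          # fuzzy k-means: returns cluster centroids and a membership matrix U (n x k)
          rng = np.random.default_rng(seed)
          n = X.shape[0]
          U = rng.dirichlet(np.ones(k), size=n)          # random initial memberships
          for _ in range(n_iter):
              W = U ** m
              centroids = (W.T @ X) / W.sum(axis=0)[:, None]
              d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-12
              inv = d ** (-2.0 / (m - 1.0))
              U = inv / inv.sum(axis=1, keepdims=True)   # update memberships
          return centroids, U

      # hypothetical standardised gamma-ray and EMI readings at survey points
      X = np.random.default_rng(2).normal(size=(500, 4))
      centroids, U = fuzzy_k_means(X, k=7)
      hard_classes = U.argmax(axis=1)                    # defuzzified class for mapping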
  • R.M. Lark
    ABSTRACT: The multivariate cumulants characterize aspects of the spatial variability of a regionalized variable. A centred multivariate Gaussian random variable, for example, has zero third-order cumulants. In this paper it is shown how the third-order cumulants can be used to test the plausibility of the assumption of multivariate normality for the porosity of an important formation, the Bunter Sandstone in the North Sea. The results suggest that the spatial variability of this variable deviates from multivariate normality, and that this assumption may lead to misleading inferences about, for example, the uncertainty attached to kriging predictions.
    Spatial Statistics 01/2014;
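    On a regular transect, a third-order cumulant at a pair of lags can be estimated as the mean triple product of centred values; a minimal sketch, with simulated data standing in for the porosity observations:

      import numpy as np

      def third_order_cumulant(z, lag1, lag2):
          # estimate k3(h1, h2) = E[(Z(x)-mu)(Z(x+h1)-mu)(Z(x+h2)-mu)] on a regular transect
          z = np.asarray(z, dtype=float) - np.mean(z)
          hmax = max(lag1, lag2)
          n = len(z) - hmax
          return np.mean(z[:n] * z[lag1:lag1 + n] * z[lag2:lag2 + n])

      # for a Gaussian process the third-order cumulants should be close to zero
      z_gauss = np.random.default_rng(0).normal(size=5000)
      z_skewed = np.exp(z_gauss)                 # a lognormal (skewed) alternative
      for h1, h2 in [(0, 0), (1, 2), (2, 5)]:
          print(h1, h2, third_order_cumulant(z_gauss, h1, h2),
                third_order_cumulant(z_skewed, h1, h2))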
  •
    ABSTRACT: Deficiency or excess of certain trace elements in the soil causes problems for agriculture, including disorders of grazing ruminants. Geostatistics has been used to map the probability that trace element concentrations in soil exceed or fall below particular thresholds. However, deficiency or toxicity problems may depend on interactions between elements in the soil. Here we show how cokriging from a regional survey of topsoil geochemistry can be used to map the risk of deficiency, and the best management intervention, where both depend on the interaction between two elements. Our case study is on cobalt. Farmers and their advisors in Ireland use index values for the concentration of total soil cobalt and manganese to identify where grazing sheep are at risk of cobalt deficiency. We use topsoil data from a regional geochemical survey across six counties of Ireland to form local cokriging predictions of cobalt and manganese concentrations with an attendant distribution which reflects the joint uncertainty of these predictions. From this distribution we then compute conditional probabilities for different combinations of cobalt and manganese index values, and so for the corresponding inferred risk to sheep of cobalt deficiency and the appropriateness of different management interventions. We represent these results as maps, using a verbal scale for the communication of uncertain information. This scale is based on one used by the Intergovernmental Panel on Climate Change, modified in light of some recent research on its effectiveness.
    Geoderma 01/2014; 226–227:64–78. · 2.35 Impact Factor
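    The step from a joint (cokriging) prediction distribution to probabilities of combined cobalt and manganese index classes can be sketched by Monte Carlo sampling from a bivariate normal; the means, covariance and index thresholds below are placeholders, not values from the paper:

      import numpy as np
      from scipy.stats import multivariate_normal

      # hypothetical cokriging prediction at one grid node (e.g. on a transformed scale):
      # means, variances and cross-covariance of Co and Mn from the cokriging system
      mean = np.array([1.2, 6.5])
      cov = np.array([[0.20, 0.05],
                      [0.05, 0.30]])

      co_threshold, mn_threshold = 1.5, 6.0        # placeholder index boundaries
      draws = multivariate_normal(mean, cov).rvs(size=100_000, random_state=0)

      # joint probability of each Co x Mn index combination at this location
      p_low_co_high_mn = np.mean((draws[:, 0] < co_threshold) & (draws[:, 1] >= mn_threshold))
      p_low_co_low_mn = np.mean((draws[:, 0] < co_threshold) & (draws[:, 1] < mn_threshold))
      print(p_low_co_high_mn, p_low_co_low_mn)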
  •
    ABSTRACT: We analyzed data on nitrous oxide emissions and on soil properties that were collected on a 7.5-km transect across an agricultural landscape in eastern England using the discrete wavelet packet transform. We identified a wavelet packet "best basis" for the emission data. Wavelet packet basis functions are used to decompose the data into a set of coefficients that represent the variation in the data at different spatial frequencies and locations. The "best basis" for a set of data is adapted to the variability in the data by ensuring that the spatial resolution of local features is good at those spatial frequencies where variation is particularly intermittent. The best basis was shown to be adapted to represent such intermittent variation, most markedly at wavelengths of 100 m or less. Variation at these wavelengths was shown to be correlated particularly with chemical properties of the soil, such as nitrate content. Variation at larger wavelengths showed less evidence of intermittency and was found to be correlated with soil chemical and physical constraints on emission rates. In addition to frequency-dependent intermittent variation, it was found that the variance of emission rates at some wavelengths changed at particular locations along the transect. One factor causing this appeared to be contrasts in parent material. The complex variation in emission rates identified by these analyses has implications for how emission rates are estimated.
    Journal of Environmental Quality 07/2013; 42(4):1070-9. · 2.35 Impact Factor
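    A minimal sketch of a wavelet packet decomposition of transect data using the PyWavelets library; the wavelet, decomposition depth and entropy cost shown here are illustrative, and only hint at the best-basis search described in the paper:

      import numpy as np
      import pywt

      # simulated stand-in for emission rates observed at regular intervals along a transect
      rng = np.random.default_rng(0)
      signal = np.cumsum(rng.normal(size=1024)) + rng.normal(scale=2.0, size=1024)

      wp = pywt.WaveletPacket(data=signal, wavelet='db4', mode='symmetric', maxlevel=4)

      def shannon_cost(coeffs):
          # entropy-type cost used to compare candidate bases (smaller = more compact)
          c = np.asarray(coeffs, dtype=float)
          p = c ** 2 / np.sum(c ** 2)
          p = p[p > 0]
          return -np.sum(p * np.log(p))

      # cost of each packet node at level 3; a best-basis search compares parents with children
      for node in wp.get_level(3, order='freq'):
          print(node.path, round(shannon_cost(node.data), 3))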
  •
    ABSTRACT: The revised Environmental Protection Act Part 2A contaminated land Statutory Guidance (England and Wales) makes reference to 'normal' levels of contaminants in soil. The British Geological Survey has been commissioned by the United Kingdom Department for Environment, Food and Rural Affairs (Defra) to estimate contaminant levels in soil and to define what is meant by 'normal' for English soil. The Guidance states that 'normal' levels of contaminants are typical and widespread and arise from a combination of both natural and diffuse pollution contributions. Available systematically collected soil data sets for England are explored for inorganic contaminants (As, Cd, Cu, Hg, Ni and Pb) and benzo[a]pyrene (BaP). Spatial variability of contaminants is studied in the context of the underlying parent material, metalliferous mineralisation and associated mining activities, and the built (urban) environment, the latter being indicative of human activities such as industry and transportation. The most significant areas of elevated contaminant concentrations are identified as contaminant domains. Therefore, rather than estimating a single national contaminant range of concentrations, we assign an upper threshold value to contaminant domains. Our representation of this threshold is a Normal Background Concentration (NBC) defined as the upper 95% confidence limit of the 95th percentile for the soil results associated with a particular domain. Concentrations of a contaminant are considered to be typical and widespread for the identified contaminant domain up to (and including) the calculated NBC. A robust statistical methodology for determining NBCs is presented using inspection of data distribution plots and skewness testing, followed by an appropriate data transformation in order to reduce the effects of point source contamination.
    Science of The Total Environment 04/2013; 454-455C:604-618. · 3.26 Impact Factor
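    The Normal Background Concentration defined above, the upper 95% confidence limit of the 95th percentile within a domain, can be sketched with a nonparametric bootstrap; the simulated lognormal concentrations and the number of resamples are assumptions for illustration:

      import numpy as np

      def normal_background_concentration(x, n_boot=10_000, seed=0):
          # upper 95% confidence limit of the 95th percentile, by nonparametric bootstrap
          rng = np.random.default_rng(seed)
          x = np.asarray(x, dtype=float)
          boot_p95 = np.empty(n_boot)
          for b in range(n_boot):
              resample = rng.choice(x, size=x.size, replace=True)
              boot_p95[b] = np.percentile(resample, 95)
          return np.percentile(boot_p95, 95)   # one-sided upper 95% confidence limit

      # hypothetical topsoil Pb concentrations (mg/kg) for one contaminant domain
      pb = np.random.default_rng(1).lognormal(mean=3.5, sigma=0.6, size=400)
      print(round(normal_background_concentration(pb), 1), "mg/kg")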
  •
    ABSTRACT: Three-dimensional framework models are the state of the art to present geologists’ understanding of a region in a form that can be used to support planning and decision making. However, there is little information on the uncertainty of such framework models. This paper reports an experiment in which five geologists each produced a framework model of a single region in the east of England. Each modeller was provided with a unique set of borehole observations from which to make their model. Each set was made by withholding five unique validation boreholes from the set of all available boreholes. The models could then be compared with the validation observations. There was no significant between-modeller source of variation in framework model error. There was no evidence of systematic bias in the modelled depth for any unit, and a statistically significant but small tendency for the mean error to increase with depth below the surface. The confidence interval for the predicted height of a surface at a point ranged from ±5.6 m to ±6.4 m. There was some evidence that the variance of the model error increased with depth, but no evidence that it differed between modellers or varied with the number of close-neighbouring boreholes or distance to the outcrop. These results are specific to the area that has been modelled, with relatively simple geology, and reflect the relatively dense set of boreholes available for modelling. The method should be applied under a range of conditions to derive more general conclusions.
    Proceedings of the Geologists Association 04/2013; · 1.76 Impact Factor
  • R. Webster, R.M. Lark
    01/2013; Routledge. ISBN: 978-1849713672
  • R.M. Lark, C. Scheib
    ABSTRACT: It is important to understand how and where pollution and other anthropogenic processes compromise the ability of urban soil to serve as a component of the natural infrastructure. An extensive survey of the topsoil of the Greater London Area (GLA) in the United Kingdom has recently been completed by a non-probability systematic sampling scheme. We studied data on lead content from this survey. We examined an overall hypothesis that land use, as recorded at the time of sampling, is an important source of the variation of soil lead content, and we examined specific orthogonal contrasts to test particular hypotheses about land use effects. The assumption that the residuals from land use effects are independent random variables cannot be sustained because of the non-probability sampling. For this reason model-based analyses were used to test the hypotheses. One particular contrast, between the lead content in the soil of domestic gardens and that in the soil under parkland or recreational land, was modelled as a spatially dependent random variable, predicted optimally by cokriging. We found that land use is an important source of variation in lead content of topsoil. Industrial sites had the largest mean lead content, followed by domestic gardens. Detailed contrasts between land uses are reported. For example, the lead content in soil of parkland did not differ significantly from that of recreational land, but the soil in these two land uses, considered together, had significantly less lead than did the soil of domestic gardens. Local cokriging predictions of this contrast varied substantially, and were larger in outer parts of the GLA, particularly in the south west.
    Geoderma 01/2013; 209–210:65–74. · 2.35 Impact Factor
  •
    ABSTRACT: This paper illustrates the potential for statistical mapping of seabed sediment texture classes. It reports the analysis of legacy data on the composition of seabed sediment samples from the UK Continental Shelf with respect to three particle size classes (sand, mud, gravel). After appropriate transformation for compositional variables the spatial variation of the sediment particle size classes was modelled geostatistically using robust variogram estimators to produce a validated linear model of coregionalization. This was then used to predict the composition of seabed sediments at the nodes of a fine grid. The predictions were back-transformed to the original scales of measurement by a Monte Carlo integration over the prediction distribution on the transformed scale. This approach allowed the probability to be computed for each class in a classification of seabed sediment texture, at each node on the grid. The probability of each class, and derived information such as the class of maximum probability could therefore be mapped. Predictions were validated at a set of 2000 randomly sampled locations. The class of maximum probability corresponded to the observed class with a frequency of 0.7, and the uncertainty of this prediction was shown to depend on the absolute probability of the class of maximum probability. Other tests showed that this geostatistical approach gives reliable predictions with meaningful uncertainty measures. This provides a basis for rapid mapping of seabed sediment texture to classes with sound quantification of the uncertainty. Remapping to revised class definitions can also be done rapidly, which will be of particular value in habitat mapping where the seabed geology is an important factor in biotope modelling.
    Sedimentary Geology 12/2012; 281:35–49. · 1.80 Impact Factor
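    The back-transformation from a prediction distribution on log-ratio scales to class probabilities on the original composition scale can be sketched as below; the additive log-ratio parameterisation, the prediction mean and covariance, and the simple texture-class rule are illustrative assumptions rather than the paper's scheme:

      import numpy as np
      from scipy.stats import multivariate_normal

      # hypothetical kriging prediction of two additive log-ratios at one grid node:
      # alr1 = log(sand/gravel), alr2 = log(mud/gravel), with their prediction covariance
      mean = np.array([2.0, 0.5])
      cov = np.array([[0.40, 0.10],
                      [0.10, 0.60]])

      draws = multivariate_normal(mean, cov).rvs(size=50_000, random_state=0)
      sand, mud = np.exp(draws[:, 0]), np.exp(draws[:, 1])
      gravel = np.ones_like(sand)
      total = sand + mud + gravel
      sand, mud, gravel = sand / total, mud / total, gravel / total   # back to composition

      # a simple (illustrative) texture classification rule applied to each draw
      is_gravelly = gravel >= 0.05
      is_muddy_sand = (~is_gravelly) & (sand >= mud)
      is_sandy_mud = (~is_gravelly) & (sand < mud)
      print({'gravelly': is_gravelly.mean(),
             'muddy sand': is_muddy_sand.mean(),
             'sandy mud': is_sandy_mud.mean()})      # class probabilities at this node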
  •
    ABSTRACT: Numerous scientific challenges arise when designing a soil monitoring network (SMN), especially when assessing large areas and several properties that are driven by numerous controlling factors of various origins and scales. Different broad approaches to the establishment of SMNs are distinguished. It is essential to establish an adequate sampling protocol that can be applied rigorously at each sampling location and time. We make recommendations regarding the within-site sampling of soil. Different statistical methods should be associated with the different types of sampling design. We review new statistical methods that account for different sources of uncertainty. Except for those parameters for which a consensus exists, the question of testing method harmonisation remains a very difficult issue. The establishment of benchmark sites devoted to harmonisation and inter-calibration is advocated as a technical solution. However, to our present knowledge, no study has addressed crucial scientific issues such as how many calibration sites are necessary and how to locate them.
    Pedosphere 08/2012; 22(4):456–469. · 1.23 Impact Factor
  •
    ABSTRACT: This paper examines the weathering processes that have combined to produce the distribution of soil-regolith (SR) thickness across the Triassic Sherwood Sandstone Group outcrop (750 km2) in Nottinghamshire, UK. Archive borehole logs (n = 282) taken across the outcrop showed that SR thickness had mean and median depths of ~1·8 and 1·5 m, respectively. Cores were taken from a forested site to depths ~3 m for geochemical analysis. At this site the SR thickness was ~1·7 m. Analysis of the loss of elements, compared to bedrock using mass balance calculations (τ) showed that all the calcite and gypsum cement had been removed to depths of >3 m. Thus the major difference between the SR and the underlying saprolite was that the former exists as loose sand as opposed to a semi-durable rock. Scanning electron microscopy (SEM) analysis of core samples suggested that the non-durable rock or saprolite had greater cementation of clay particles. We propose that the mechanism through which the clay cement (and other interlocking grain bonds) was eased apart was through freeze–thaw processes associated with the summer ‘active layer development (ALD)’ during the last glacial activity in the UK. We tested this theory by developing a Monte Carlo simulation based on a simplified version of the Stefan equation. Current Arctic datasets of air and ground temperatures were obtained to provide reasonable starting conditions for input variables. These were combined with known data for thermal conductivity, bulk density and moisture content of the Sherwood Sandstone regolith. Model predictions (n = 1000) of the distribution of SR thickness accurately reflect the observed distribution thickness from the borehole logs. This is strong evidence that freeze–thaw and ‘ALD’ processes are major factors in determining the thickness of SR across this outcrop. British Geological Survey © NERC 2012
    Earth Surface Processes and Landforms 07/2012; 37(9). · 2.49 Impact Factor
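    The Monte Carlo simulation described above can be sketched with a simplified Stefan solution for thaw (active layer) depth; the equation form and the parameter ranges are placeholders rather than the values derived from the Arctic datasets used in the paper:

      import numpy as np

      L_FUSION = 3.34e5          # latent heat of fusion of water, J per kg

      def stefan_thaw_depth(k, thaw_index_degC_days, bulk_density, moisture):
          # simplified Stefan solution: thaw depth (m) from thermal conductivity k (W m-1 K-1),
          # surface thawing index (degree-days), dry bulk density (kg m-3) and
          # gravimetric moisture content (mass fraction)
          thaw_index_seconds = thaw_index_degC_days * 86_400.0
          return np.sqrt(2.0 * k * thaw_index_seconds / (bulk_density * moisture * L_FUSION))

      # Monte Carlo over plausible (placeholder) parameter ranges
      rng = np.random.default_rng(0)
      n = 1000
      k = rng.uniform(1.0, 2.5, n)                    # W m-1 K-1
      thaw_index = rng.uniform(300.0, 1200.0, n)      # degree-days above 0 degC
      bulk_density = rng.uniform(1400.0, 1700.0, n)   # kg m-3
      moisture = rng.uniform(0.05, 0.20, n)           # mass fraction

      depths = stefan_thaw_depth(k, thaw_index, bulk_density, moisture)
      print(np.percentile(depths, [5, 50, 95]))       # distribution of simulated thaw depths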
  • B. G. Rawlins, R. M. Lark, J. Wragg
    ABSTRACT: Regulatory authorities need to establish rapid, cost-effective methods to measure soil physical indicators - such as aggregate stability - which can be applied to large numbers of soil samples to detect changes of soil quality through monitoring. Limitations of sieve-based methods to measure the stability of soil macro-aggregates include: i) the mass of stable aggregates is measured, only for a few, discrete sieve/size fractions, ii) no account is taken of the fundamental particle size distribution of the sub-sampled material, and iii) they are labour intensive. These limitations could be overcome by measurements with a Laser Granulometer (LG) instrument, but this technology has not been widely applied to the quantification of aggregate stability of soils. We present a novel method to quantify macro-aggregate (1-2 mm) stability. We measure the difference between the mean weight diameter (MWD; μm) of aggregates that are stable in circulating water of low ionic strength, and the MWD of the fundamental particles of the soil to which these aggregates are reduced by sonication. The suspension is circulated rapidly through a LG analytical cell from a connected vessel for ten seconds; during this period hydrodynamic forces associated with the circulating water lead to the destruction of unstable aggregates. The MWD of stable aggregates is then measured by LG. In the next step, the aggregates - which are kept in the vessel at a minimal water circulation speed - are subject to sonication (18W for ten minutes) so the vast majority of the sample is broken down into its fundamental particles. The suspension is then recirculated rapidly through the LG and the MWD measured again. We refer to the difference between these two measurements as disaggregation reduction (DR) - the reduction in MWD on disaggregation by sonication. Soil types with more stable aggregates have larger values of DR. The stable aggregates - which are resistant to both slaking and mechanical breakdown by the hydrodynamic forces during circulation - are disrupted only by sonication. We used this method to compare macro-aggregate (1-2 mm) stability of air-dried agricultural topsoils under conventional tillage developed from two contrasting parent material types and compared the results with an alternative sieve-based technique. The first soil from the Midlands of England (developed from sedimentary mudstone; mean soil organic carbon (SOC) 2.5%) contained a substantially larger amount of illite/smectite (I/S) minerals compared to the second from the Wensum catchment in eastern England (developed from sands and glacial deposits; mean SOC=1.7%). The latter soils are prone to large erosive losses of fine sediment. Both sets of samples had been stored air-dried for 6 months prior to aggregate analyses. The mean values of DR (n=10 repeated subsample analyses) for the Midlands soil was 178μm; mean DR (n=10 repeat subsample analyses) for the Wensum soil was 30μm. The large difference in DR is most likely due to differences in soil mineralogy. The coefficient of variation of mean DR for duplicate analyses of sub-samples from the two topsoil types is around 10%. The majority of this variation is likely to be related to the difference in composition of the sub-samples. A standard, aggregated material could be included in further analyses to determine the relative magnitude of sub-sampling and analytical variance for this measurement technique. 
    We then used the technique to investigate whether - as previously observed - variations (range 1000 - 4000 mg kg-1) in the quantity of amorphous (oxalate extractable) iron oxyhydroxides in a variety of soil samples (n=30) from the Wensum area (range SOC 1 - 2%) could account for differences in aggregate stability of these samples.
    European Journal of Soil Science 04/2012; 64(1):2106-. · 2.65 Impact Factor
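    The disaggregation reduction (DR) statistic defined above is a difference of mean weight diameters measured before and after sonication; a minimal sketch with made-up laser granulometry size distributions:

      import numpy as np

      def mean_weight_diameter(size_class_midpoints_um, volume_fractions):
          # mean weight diameter (um) of a particle/aggregate size distribution
          w = np.asarray(volume_fractions, dtype=float)
          return float(np.sum(np.asarray(size_class_midpoints_um) * w / w.sum()))

      # hypothetical laser granulometry output for one 1-2 mm aggregate sub-sample
      midpoints = [2, 10, 50, 200, 800, 1500]            # size class midpoints, um
      stable_aggregates = [2, 5, 8, 15, 30, 40]          # after 10 s circulation only
      after_sonication = [10, 25, 30, 25, 8, 2]          # fundamental particles

      mwd_stable = mean_weight_diameter(midpoints, stable_aggregates)
      mwd_fundamental = mean_weight_diameter(midpoints, after_sonication)
      dr = mwd_stable - mwd_fundamental                  # disaggregation reduction, um
      print(round(mwd_stable), round(mwd_fundamental), round(dr))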
  •
    ABSTRACT: We consider approaches for calculating and mapping statistical predictions of soil organic carbon (SOC), and attendant uncertainty, from data across a region of France. The data were collected from farms across the region. To protect the anonymity of farms that contributed, the locations and values of individual observations were unavailable, and we were only able to use the average value, sample variance, and number of observations from each commune. Communes varied in size up to a maximum of 130 km², with a mean of 10 km². The uncertainty due to data being commune-wide averages—with sample error varying between communes as a result of variations in their size and the number of samples drawn from within them—raises an important methodological issue. We show how a residual maximum likelihood method can be used to estimate covariance parameters on the basis of this form of data and use the empirical best linear unbiased predictor to calculate predictions. Cross-validation shows that by properly representing the commune-wide averaged data, the predictions and attendant uncertainty assessments are more reliable than those from a naïve approach based on the summary means only. We compare maps produced using the approaches showing the SOC predictions and the attendant uncertainty. Copyright © 2012 John Wiley & Sons, Ltd.
    Environmetrics 01/2012; 23(2):129-147. · 1.10 Impact Factor
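    In outline, and using notation that is assumed rather than taken from the paper, a model of this form for commune-averaged data can be written as

      \bar{y}_i = \mathbf{m}(\mathbf{x}_i)^{\mathrm{T}}\boldsymbol{\beta} + u(\mathbf{x}_i) + \bar{\varepsilon}_i,
      \qquad \bar{\varepsilon}_i \sim \mathcal{N}\!\left(0,\; \sigma^2_{\varepsilon}/n_i\right),

    where \bar{y}_i is the mean of the n_i SOC observations in commune i (located approximately at x_i), m(x_i)^T β is the fixed-effect part, u(x_i) is a spatially correlated random effect whose covariance parameters are estimated by residual maximum likelihood, and the sampling-error variance of the commune mean shrinks as the number of samples n_i grows.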
  •
    ABSTRACT: We calculate statistical predictions of changes in soil organic carbon (SOC), and attendant uncertainty from areal data across a region of France. The data consist of measurements of SOC from farms across the region collected in two time periods: 1995–1999, and 2000–2004. To protect the anonymity of farms that contributed, the data were summarised by commune; we were only able to use the average value, sample variance and number of observations from each commune. We consider how we can use data of this form to map temporal changes in SOC. We account for the dependence between data from the two surveys through a linear model of coregionalization. Cross‐validation shows that by using the linear model of coregionalization to model inter‐survey dependence, we obtain better estimates of SOC changes and better uncertainty assessments. We compare maps produced using the approaches showing the estimated SOC changes and probabilities of SOC decrease between the times of the two surveys. Copyright © 2012 John Wiley & Sons, Ltd.
    Environmetrics 01/2012; 23(2):148-161. · 1.10 Impact Factor
  • R. M. Lark
    ABSTRACT: Soil monitoring and inventory require a sampling strategy. One component of this strategy is the support of the basic soil observation: the size and shape of the volume of material that is collected and then analysed to return a single soil datum. Many, but not all, soil sampling schemes use aggregate supports in which material from a set of more than one soil core, arranged in a given configuration, is aggregated and thoroughly mixed prior to analysis. In this paper, it is shown how the spatial statistics of soil information, collected on an aggregate support, can be computed from the covariance function of the soil variable on a core support (treated as point support). This is done via what is called here the discrete regularization of the core-support function. It is shown how discrete regularization can be used to compute the variance of soil sample means and to quantify the consistency of estimates made by sampling then re-sampling a monitoring network, given uncertainty in the precision with which sample sites are relocated. These methods are illustrated using data on soil organic carbon content from a transect in central England. Two aggregate supports, both based on a 20 m × 20 m square, are compared with core support. It is shown that both the precision and the consistency of data collected on an aggregate support are better than those of data collected on a core support. This has implications for the design of sampling schemes for soil inventory and monitoring.
    European Journal of Soil Science 01/2012; 63(1). · 2.65 Impact Factor
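    Discrete regularization amounts to averaging the core-support (point) covariance function over all pairs of core locations in the aggregate support; a minimal numerical sketch with an assumed exponential covariance:

      import numpy as np

      def exponential_covariance(h, sill=1.0, range_par=15.0):
          # assumed core-support (point) covariance function of lag distance h (m)
          return sill * np.exp(-h / range_par)

      def variance_of_aggregate_mean(core_xy, cov_func):
          # variance of the mean of values at the core locations, by discrete
          # regularization: (1/n^2) * sum_i sum_j C(|x_i - x_j|)
          xy = np.asarray(core_xy, dtype=float)
          d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
          return cov_func(d).mean()

      # aggregate support: e.g. cores at the corners and centre of a 20 m x 20 m square
      cores = [(0, 0), (20, 0), (0, 20), (20, 20), (10, 10)]
      print(variance_of_aggregate_mean(cores, exponential_covariance))
      print(exponential_covariance(0.0))   # core-support variance, for comparison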
  •
    ABSTRACT: The spatial variability of soil nitrogen (N) mineralisation has not been extensively studied, which limits our capacity to make N fertiliser recommendations. Even less attention has been paid to the scale-dependence of the variation. The objective of this research was to investigate the scale-dependence of variation of mineral N (MinN, N–NO3− plus N–NH4+) at within-field scales. The study was based on the spatial dependence of the labile fractions of soil organic matter (SOM), the key fractions for N mineralisation. Soils were sampled in an unbalanced nested design in a 4-ha arable field to examine the distribution of the variation of SOM at 30, 10, 1, and 0.12 m. Organic matter in free and intra-aggregate light fractions (FLF and IALF) was extracted by physical fractionation. The variation occurred entirely within 0.12 m for FLF and at 10 m for IALF. A subsequent sampling on a 5-m grid was undertaken to link the status of the SOM fractions to MinN, which showed uncorrelated spatial dependence. A uniform application of N fertiliser would be suitable in this case. The failure of SOM fractions to identify any spatial dependence of MinN suggests that other soil variables, or crop indicators, should be tested to see if they can identify different N supply areas within the field for a more efficient and environmentally friendly N management. © 2011 Elsevier B.V. All rights reserved
    Agriculture Ecosystems & Environment 01/2012; 147:66-72. · 2.86 Impact Factor
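    Using standard notation that is assumed rather than taken from the paper, the nested model behind the partition of variance across the 30, 10, 1 and 0.12 m stages can be written as

      y_{ijkl} = \mu + a_i + b_{ij} + c_{ijk} + \varepsilon_{ijkl},
      \qquad \operatorname{Var}(y_{ijkl}) = \sigma^2_{30} + \sigma^2_{10} + \sigma^2_{1} + \sigma^2_{0.12},

    where a_i, b_{ij}, c_{ijk} and \varepsilon_{ijkl} are independent random effects associated with the 30-, 10-, 1- and 0.12-m stages, and the variance components are estimated for the unbalanced design by residual maximum likelihood.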

Publication Stats

2k Citations
330.07 Total Impact Points

Institutions

  • 2012–2014
    • British Geological Survey
      Nottingham, England, United Kingdom
  • 1970–2012
    • Rothamsted Research
      Harpenden, England, United Kingdom
  • 2009
    • Agri-Food and Biosciences Institute
      Béal Feirste, N Ireland, United Kingdom
  • 2008–2009
    • University of Florida
      • Department of Soil and Water Science
      Gainesville, FL, United States
  • 2006–2008
    • Cranfield University
      Cranfield, England, United Kingdom
  • 2007
    • University of Reading
      Reading, England, United Kingdom
  • 1995–1998
    • University of Oxford
      • Department of Plant Sciences
      Oxford, England, United Kingdom
  • 1994–1996
    • University of Wales
      Cardiff, Wales, United Kingdom