“Kriging: A Method of Interpolation for Geographical Information Systems”

International Journal of Geographical Information Systems 07/1990; 4(3):313-332. DOI: 10.1080/02693799008941549
Source: DBLP


Geographical information systems could be improved by adding procedures for geostatistical spatial analysis to existing facilities. Most traditional methods of interpolation are based on mathematical as distinct from stochastic models of spatial variation. Spatially distributed data behave more like random variables, however, and regionalized variable theory provides a set of stochastic methods for analysing them. Kriging is the method of interpolation deriving from regionalized variable theory. It depends on expressing spatial variation of the property in terms of the variogram, and it minimizes the prediction errors which are themselves estimated. We describe the procedures and the way we link them using standard operating systems. We illustrate them using examples from case studies, one involving the mapping and control of soil salinity in the Jordan Valley of Israel, the other in semi-arid Botswana where the herbaceous cover was estimated and mapped from aerial photographic survey.
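The abstract describes kriging as interpolation whose weights come from a fitted variogram and which also returns an estimate of its own prediction error. A minimal sketch of ordinary kriging in Python (numpy only; the 1-D transect data, the exponential variogram model, and all parameter values are hypothetical, not taken from the paper):

```python
import numpy as np

def exp_variogram(h, nugget, sill, rng):
    """Exponential variogram model: semivariance as a function of lag h."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-h / rng))

def ordinary_kriging(xs, zs, x0, nugget=0.0, sill=1.0, rng=1.0):
    """Ordinary kriging prediction at x0 from samples (xs, zs).

    Solves the kriging system in variogram form and returns both the
    prediction and its kriging (prediction error) variance."""
    n = len(xs)
    # Semivariances between all pairs of sample points, bordered by a
    # row/column of ones for the Lagrange multiplier (weights sum to 1).
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_variogram(np.abs(xs[:, None] - xs[None, :]),
                              nugget, sill, rng)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = exp_variogram(np.abs(xs - x0), nugget, sill, rng)
    sol = np.linalg.solve(A, b)
    w, mu = sol[:n], sol[n]
    return w @ zs, w @ b[:n] + mu  # prediction, kriging variance

# Hypothetical transect data.
xs = np.array([0.0, 1.0, 2.5, 4.0])
zs = np.array([1.2, 1.5, 0.9, 1.1])
z_hat, kv = ordinary_kriging(xs, zs, x0=2.0, sill=0.5, rng=2.0)
```

With a zero nugget the predictor is an exact interpolator: at a sampled location it reproduces the observed value with zero kriging variance, and the variance grows as the target moves away from the data, which is the property the paper exploits for mapping and survey design.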

    • "To this effect, we computed Moran's I, an index of spatial autocorrelation (Moran, 1950), with an increasing neighborhood distance between observations (from 2 km to 30 km), in ArcGIS 10.0 (ESRI, Redlands, CA, USA). We applied a threshold criterion that the absolute index values were >0.3 as an indication of autocorrelation (e.g., Oliver and Webster, 1990; Hitziger and Ließ, 2014). "
    ABSTRACT: In existing carbon budget models, carbon stocks are not explicitly related to forest successional dynamics and environmental factors. Yet time-since-last-fire (TSLF) is an important variable for explaining successional changes and subsequent carbon storage. The objective of this study was to predict the spatial variability of aboveground biomass carbon (ABC) as a function of TSLF and other environmental factors across the landscape at regional scales. ABC was predicted using random forest models, both at the sample-plot level and at the scale of 2-km2 cells. This cell size was chosen to match the observed minimum fire size of the Canadian large fire database. The percentage variance explained by the empirical sample-plot level model of ABC was 50%. At that scale, TSLF was not significantly related to ABC. At the 2-km2 scale, ABC was influenced mainly by the proportions of cover density classes, which explained 83% of the variance. Changes in cover density were related to TSLF at the same 2-km2 scale, indicating that the increase in cover density following fire disturbance is a dominant mechanism through which TSLF acts upon ABC at the scale of landscapes.
    Forest Ecology and Management 01/2016; 360:170-180. DOI:10.1016/j.foreco.2015.10.035 · 2.66 Impact Factor
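The snippet quoted above computes Moran's I with binary neighborhoods of increasing radius and a >0.3 threshold for autocorrelation. A self-contained sketch of that index (numpy only; the 4 x 4 grid, the gradient values, and the distance threshold are invented for illustration, not the study's data):

```python
import numpy as np

def morans_i(values, coords, max_dist):
    """Moran's I with binary weights: w_ij = 1 if 0 < d(i, j) <= max_dist."""
    z = np.asarray(values, dtype=float)
    z = z - z.mean()
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    w = ((d > 0) & (d <= max_dist)).astype(float)
    n, s0 = len(z), w.sum()
    return (n / s0) * (z @ w @ z) / (z @ z)

# A smooth east-west gradient on a 4 x 4 grid is strongly autocorrelated,
# so Moran's I should comfortably exceed the 0.3 threshold quoted above.
gx, gy = np.meshgrid(np.arange(4.0), np.arange(4.0))
coords = np.column_stack([gx.ravel(), gy.ravel()])
i_stat = morans_i(gx.ravel(), coords, max_dist=1.0)
```

Increasing `max_dist` step by step, as the quoted study does from 2 km to 30 km, shows how far the autocorrelation persists before the index decays toward zero.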
    • "In this work we are focused on a specific kind of response surface (RS), the Gaussian processes (GP) surrogates, also known as kriging in many fields of applications. The GP surrogates have been widely used in machine learning [44], geostatistics [34], engineering optimizations [40], and most recently, uncertainty quantifications [8,9]. A number of GP-based methods have also been successfully implemented for failure probability estimation [28] [4]. "
    ABSTRACT: An important task of uncertainty quantification is to identify the probability of undesired events, in particular, system failures, caused by various sources of uncertainties. In this work we consider the construction of Gaussian process surrogates for failure detection and failure probability estimation. In particular, we consider the situation that the underlying computer models are extremely expensive, and in this setting, determining the sampling points in the state space is of essential importance. We formulate the problem as an optimal experimental design for Bayesian inferences of the limit state (i.e., the failure boundary) and propose an efficient numerical scheme to solve the resulting optimization problem. In particular, the proposed limit-state inference method is capable of determining multiple sampling points at a time, and thus it is well suited for problems where multiple computer simulations can be performed in parallel. The accuracy and performance of the proposed method are demonstrated by both academic and practical examples.
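A hedged sketch of the surrogate idea in this abstract: fit a Gaussian process to a few evaluations of a limit-state function g (assumed expensive in the paper's setting), then estimate the failure probability by Monte Carlo on the cheap surrogate mean. The kernel, the stand-in g, the training design, and all parameters below are illustrative assumptions, not the paper's optimal-design method:

```python
import numpy as np

def rbf(a, b, length=1.0, var=1.0):
    """Squared-exponential covariance between 1-D point sets a and b."""
    return var * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

def gp_mean(x_train, y_train, x_test, length=1.0, var=1.0, jitter=1e-6):
    """Posterior mean of a zero-mean GP conditioned on (x_train, y_train)."""
    K = rbf(x_train, x_train, length, var) + jitter * np.eye(len(x_train))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    return rbf(x_train, x_test, length, var).T @ alpha

def g(x):
    """Stand-in limit-state function; failure when g(x) > 0.
    A real application would call an expensive simulator here."""
    return x - 1.5

x_train = np.linspace(-3.0, 3.0, 15)   # a small design of experiments
y_train = g(x_train)                   # 15 "expensive" model runs
x_mc = np.random.default_rng(0).standard_normal(100_000)
# Failure probability estimated from the surrogate, not from g itself.
p_fail = np.mean(gp_mean(x_train, y_train, x_mc) > 0.0)
```

The paper's contribution is choosing the training points adaptively near the failure boundary rather than on a fixed grid as here; the surrogate-then-sample structure is the same.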
    • "The GP surrogates, which are also known as kriging, have been widely used in machine learning [25], geostatistics [21], engineering optimizations [24], and reliability analysis [3], just to name a few. The GP surrogate constructs the approximation of g(x) in a nonparametric Bayesian regression framework [20] [25]. "
    ABSTRACT: In this work we consider a class of uncertainty quantification problems where the system performance or reliability is characterized by a scalar parameter $y$. The performance parameter $y$ is random due to the presence of various sources of uncertainty in the system, and our goal is to estimate the probability density function (PDF) of $y$. We propose to use the multicanonical Monte Carlo (MMC) method, a special type of adaptive importance sampling algorithm, to compute the PDF of interest. Moreover, we develop an adaptive algorithm to construct local Gaussian process surrogates to further accelerate the MMC iterations. With numerical examples we demonstrate that the proposed method can achieve several orders of magnitude of speedup over the standard Monte Carlo method.
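The standard Monte Carlo baseline that this abstract's MMC method accelerates can be sketched as follows; the performance function g, the input distribution, and all sample sizes are hypothetical, and the point is only to show the PDF-of-a-scalar-output setup:

```python
import numpy as np

def g(x):
    """Hypothetical scalar performance of a system with 3 uncertain inputs."""
    return np.sum(x**2, axis=1)

rng = np.random.default_rng(1)
x = rng.standard_normal((200_000, 3))  # samples of the uncertain inputs
y = g(x)                               # here y is chi-squared with 3 dof

# Plain histogram estimate of the PDF of y: accurate in the bulk, but the
# tails need ever more samples, which is the regime adaptive importance
# sampling schemes such as MMC are designed to handle.
edges = np.linspace(0.0, 20.0, 41)
pdf, _ = np.histogram(y, bins=edges, density=True)
```

MMC replaces the fixed input distribution with an iteratively reweighted one that flattens the histogram of y, so rare values of y are sampled as often as common ones; the local GP surrogates then stand in for g during those iterations.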
