
# Geostatistical Analysis - Science topic

Explore the latest questions and answers in Geostatistical Analysis, and find Geostatistical Analysis experts.

Questions related to Geostatistical Analysis

In what ways can I obtain measurements of ozone and nitrogen dioxide concentrations in an area? Which satellites provide this type of data, and can I use Google Earth Engine to retrieve the data where available?

Reghais.A

Thanks

Hi, I was hoping someone could recommend papers that discuss the impact of using averaged data in random forest analyses or in making regression models with large data sets for ecology.

For example, if I had 4,000 samples each from 40 sites and did a random forest analysis (looking at predictors of SOC, for example) using environmental metadata, how would that compare with doing a random forest of the averaged sample values from the 40 sites (so 40 rows of averaged data vs. 4,000 raw data points)?

I ask this because many of the 4,000 samples are missing sample-specific environmental data in the first place, while other samples within the same site do have those data available.

I'm just a little confused about: 1) the appropriateness of interpolating average values based on missingness (best practices/warnings); 2) the drawbacks of using smaller, averaged sample sizes to deal with missingness, versus using incomplete data sets, versus using the much smaller set of only "complete" samples; and 3) the geospatial rules for linking environmental data with samples. (If 50% of plots in a site have soil texture data and 50% don't, yet they're all within the same site/area, what would be the best route for analysis? It may depend on the variable; I have ~50 soil chemical/physical variables.)

Thank you for any advice or paper or tutorial recommendations.
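Not an answer on which strategy is statistically preferable, but the two options described above (fill missing covariates with the site mean, or collapse to one averaged row per site) can be sketched in a few lines of NumPy. Everything here is a toy illustration — 4 sites, 5 samples each, invented values:

```python
import numpy as np

rng = np.random.default_rng(0)
site = np.repeat(np.arange(4), 5)                   # 4 sites, 5 samples each (toy scale)
x = rng.normal(loc=site.astype(float), scale=0.1)   # a per-sample covariate
x[[1, 7, 13]] = np.nan                              # some samples lack the covariate

# Option A: fill each missing value with the mean of the measured samples at its site
filled = x.copy()
for s in np.unique(site):
    m = site == s
    filled[m & np.isnan(x)] = np.nanmean(x[m])

# Option B: collapse to one averaged row per site (40 sites in the real case)
site_means = np.array([np.nanmean(x[site == s]) for s in np.unique(site)])
```

Option A keeps the full sample size but understates within-site variance of the imputed covariate; Option B avoids imputation entirely at the cost of throwing away within-site variation — the trade-off the question is really about.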

Many open-source programs exist in the field of geology and all its specializations (water resources, hydrology, hydrogeology, geostatistics, water quality, etc.) that many people are unaware of.

What software would you suggest to us?

Thanks

Reghais Azzeddine

I want to use the daily GPP data from "FLUXNET 2015". However, it provides two kinds of daily GPP ("GPP_DT_VUT_MEAN" and "GPP_NT_VUT_MEAN"):

- GPP_DT_VUT (Gross Primary Production, from the daytime partitioning method)
- GPP_NT_VUT (Gross Primary Production, from the nighttime partitioning method)

I want to know the difference between the daytime partitioning method and the nighttime partitioning method, and how I can obtain the daily GPP value. Thank you very much.

I have two datasets: one with 9 past cyclones, including their damage to the forest, wind speed, distance from the study site, and recovery area; another with future sea-level rise (SLR) projections and the potential loss area due to SLR.

- Using data from both disturbance-event datasets (loss area, recovery area, wind speed, predicted loss area from SLR), can I create any kind of disturbance risk/vulnerability/hazard index map of the study area?
- What kinds of statistical analysis can I include in my study with these limited datasets that will help me show some sort of relationship between "Loss Area" and the other variables?

We know that the dispersion variance is related to the domain size V and the support size v. The textbook says that if v is kept unchanged, the dispersion variance increases as the domain size V increases. I want to know why. Is there a mathematical proof?
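A sketch of the standard argument, using the usual mean-variogram notation (see, e.g., classical geostatistics texts): the dispersion variance of supports v within a domain V can be written as

```latex
D^2(v \mid V) \;=\; \bar{\gamma}(V,V) \;-\; \bar{\gamma}(v,v),
\qquad
\bar{\gamma}(V,V) \;=\; \frac{1}{|V|^{2}} \int_{V}\!\int_{V} \gamma(\mathbf{x}-\mathbf{y})\, d\mathbf{x}\, d\mathbf{y}.
```

Since a variogram is non-decreasing in |h|, enlarging V brings larger lags into the double integral, so the mean value gamma-bar(V,V) grows, while gamma-bar(v,v) is fixed because v is unchanged; hence D²(v|V) increases. The same conclusion follows from Krige's additivity relation, D²(v|G) = D²(v|V) + D²(V|G) ≥ D²(v|V) for nested supports v ⊂ V ⊂ G.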

We know that LU decomposition is an important method for the stochastic simulation of 2D random fields (RFs). Assume the covariance matrix of the regionalized RVs is C; it can be decomposed as C = LU by the LU algorithm, and a realization can then be generated as X = L·y, where y is a vector of independent standard normal random numbers. I want to know whether I can use LU decomposition for simulation when n observations exist as conditioning data. If so, how can this be demonstrated?
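A hedged sketch of the standard answer (conditional LU/Cholesky simulation in the style of Davis): order the nodes with the n conditioning data first, take the Cholesky factor L of the reordered covariance matrix, and replace the first n standard normals by L11⁻¹·z_obs. The grid, covariance model, and conditioning values below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 30)
h = np.abs(x[:, None] - x[None, :])
C = np.exp(-h / 3.0)                        # assumed exponential covariance model

obs = np.array([2, 11, 25])                 # indices holding conditioning data
sim = np.setdiff1d(np.arange(30), obs)
idx = np.concatenate([obs, sim])            # reorder: data nodes first
L = np.linalg.cholesky(C[np.ix_(idx, idx)] + 1e-10 * np.eye(30))

n = len(obs)
L11, L21, L22 = L[:n, :n], L[n:, :n], L[n:, n:]
z_obs = np.array([0.5, -1.0, 0.3])          # invented conditioning values

# Conditional realization at the unsampled nodes:
# the mean part L21 @ L11^{-1} @ z_obs equals the simple-kriging estimate,
# and L22 @ y adds a fluctuation with the correct conditional covariance.
y = rng.standard_normal(30 - n)
z_sim = L21 @ np.linalg.solve(L11, z_obs) + L22 @ y
```

Because L21·L11⁻¹ = C21·C11⁻¹, the deterministic part reproduces the simple-kriging predictor, and the realization honors the data exactly at the observation nodes.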

I want to simulate a random field Z(u) that has a nested variogram, say gamma(h) = gamma1(h) + gamma2(h) + gamma3(h), assuming the variogram is isotropic. Can I independently simulate three random fields, Z1(u) with correlation structure gamma1(h), Z2(u) with gamma2(h), and Z3(u) with gamma3(h), and then sum them up: Z(u) = Z1(u) + Z2(u) + Z3(u)?
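For independent Gaussian fields the answer is yes, because Cov(Z1 + Z2) = Cov(Z1) + Cov(Z2) when the components are independent, and variograms add accordingly. A quick numerical check, with two invented exponential structures on a 1-D grid:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 20.0, 40)
h = np.abs(x[:, None] - x[None, :])
C1 = 0.3 * np.exp(-h / 1.0)                 # short-range structure (assumed)
C2 = 0.7 * np.exp(-h / 8.0)                 # long-range structure (assumed)
L1 = np.linalg.cholesky(C1 + 1e-9 * np.eye(40))
L2 = np.linalg.cholesky(C2 + 1e-9 * np.eye(40))

m = 5000                                    # number of realizations
Z = L1 @ rng.standard_normal((40, m)) + L2 @ rng.standard_normal((40, m))
emp_cov = Z @ Z.T / m                       # empirical covariance of the summed field
```

Up to Monte Carlo error, emp_cov matches C1 + C2, confirming that the nested components can be simulated separately and summed — provided the components are simulated independently.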

I did an MCA using FactoMineR. I know how to interpret cos2, contributions, and coordinates, but I don't know how the values of v.test should be interpreted.

Thank you

Published research papers mainly show the use of cluster analysis, e.g., the agglomerative hierarchical clustering (AHC) technique, to divide the data into clusters sharing a common trait, followed by a one-way ANOVA and Duncan's test as a post-hoc test. However, no one details the procedure of how the multiple variables or clusters were utilized to prepare a single *site-specific management zone* map. This makes it difficult for the layman. Please share the details, or any video link, on preparing the zonation map in ArcGIS.

I am trying to make my own map of the urban tissue of Annaba (North Africa). I just want to know the best software I can use for doing that. I have only done a draft using ArtGem, and I want a more sophisticated one, but I'm not a specialist. I'm waiting for your help, thanks.

I have created a soil moisture proxy model that shows areas with higher or lower soil moisture; in other words, each pixel in the GIS raster data has a value. I developed and applied the model in ArcGIS 10.2.2. I want to find a high-resolution land surface temperature dataset that I could use to validate the model. By high resolution, I mean at least sub-kilometer. For my purposes it needs to be high resolution, but I am only trying to determine whether the two correlate on an ordinal scale. It would be preferable for it to be in raster format as well.

I know that MODIS is probably one of the best datasets, but I think its resolution is only 1 km. Does anyone have any other suggestions? Thanks so much for your time and consideration.

I want to carry out PCA on a set of chemical data, some of it in oxide form and some in elemental form. The oxides are in percent and the elements are in ppm.

I understand that the data have to be normalised/standardised before starting PCA. Now:

1) Do I have to convert all oxides to elements first?

2) Do I have to convert everything into a single unit (either percent or ppm)?

3) For normalisation, should I go for log base 10, log base 2, or the natural log? What is the best way to decide which one is ideal?

4) If some elements show a log10-normal distribution and some a ln-normal distribution, can I apply the transformations separately, or should a single method be followed for all?

5) Can I attempt the IDF-Normal method for normalisation of such data?

Kindly advise.
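Not advice on the geochemistry itself, but one part of questions 2)–4) can be checked directly: if each variable is z-scored (correlation-matrix PCA), the %-versus-ppm unit question becomes moot, and the choice of log base does not matter either, because logs in different bases differ only by a constant factor that standardization removes. A sketch with invented lognormal data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.lognormal(mean=0.0, sigma=1.0, size=(100, 6))   # invented skewed data

def zscore(A):
    return (A - A.mean(axis=0)) / A.std(axis=0)

Z10 = zscore(np.log10(X))        # log base 10 ...
Zln = zscore(np.log(X))          # ... and natural log give identical z-scores,
                                 # since log10(x) = ln(x) / ln(10)

# PCA on the standardized data via SVD
U, s, Vt = np.linalg.svd(Z10, full_matrices=False)
expl = s**2 / (s**2).sum()       # proportion of variance per component
```

The same cancellation applies to a unit conversion: multiplying a column by a constant (e.g., % to ppm) shifts its log by a constant, which the centering step removes, so the PCA is unchanged. Note this does not settle question 1) (oxide vs. element form), which changes the variables themselves, not just their scale.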

Dear all,

I know it might also depend on the distribution/behavior of the variable that we are studying. The sample spacing must be able to capture the spatial dependence.

But since **kriging** depends heavily on the variance computed within each lag distance, if we have few observations we might **fail to capture the spatial dependence**, because we would have few pairs of points within a specific lag distance. We would also have few lags. Especially when the points have a very **irregular distribution across the study area**, with many observations in one region and sparse observations in another, the accuracy of the variance computed per lag will also vary.

Therefore, I think that in such circumstances computing a **semivariogram** seems useless. What are the best practices if we still want to use **kriging** instead of other interpolation methods?

Thank you in advance

PS

The aim is to find the signal value at x_0 from signal values at x_i, i = 1,…,N using kriging, given as Z(x_0) = sum(w_i · Z(x_i)).

After fitting a non-decreasing curve to the empirical variogram, we solve the following equation to find the weights w_i:

Aw = B,

where A is the padded matrix containing the Cov(x_i, x_j) terms and B is the vector containing the Cov(x_i, x_0) terms.

In my simulation setup, the weights often have negative values (which is non-intuitive). Am I missing any step? As per my understanding, the choice of curve-fitting function affects A, and the weights are positive only if A is positive-definite. Is there a way to ensure that A is positive-definite?
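A minimal worked version of the system described above, with an assumed exponential covariance and invented 1-D locations; the padded (ordinary kriging) matrix carries the Lagrange-multiplier row that enforces the weights summing to 1. One caveat worth stating plainly: negative weights are not by themselves a bug — with clustered data, a perfectly valid positive-definite A routinely produces negative weights (the screen effect); positive-definiteness guarantees a unique solution, not positive weights.

```python
import numpy as np

xs = np.array([0.0, 1.0, 1.2, 5.0])         # invented sample locations (two clustered)
x0 = 0.8                                    # prediction location

def cov(h):
    return np.exp(-np.abs(h))               # assumed covariance model

n = len(xs)
A = np.ones((n + 1, n + 1))
A[:n, :n] = cov(xs[:, None] - xs[None, :])  # Cov(x_i, x_j) block
A[n, n] = 0.0                               # Lagrange-multiplier corner of the padding
B = np.ones(n + 1)
B[:n] = cov(xs - x0)                        # Cov(x_i, x0)

sol = np.linalg.solve(A, B)
w = sol[:n]                                 # kriging weights; sol[n] is the multiplier
```

Using a positive-definite covariance model (fit a valid model such as spherical, exponential, or Gaussian to the empirical variogram, rather than an arbitrary non-decreasing curve) ensures the system is well posed, but clustered configurations like the one above can still yield a negative weight on the screened point.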

We know that the estimation error (in percent) can be calculated as

EE_x = a · sqrt(var(x)) · 100 / x,

where EE_x is the estimation error, a is a constant, var(x) is the variance (from (co)kriging), and x is the value estimated by (co)kriging.

For an estimated value **below 1%**, the above equation may lead to large and meaningless estimation errors (thousands of percent). The question is: what should we do for estimated values of less than 1%?

On the other hand, we need to report the estimation error in percent in order to classify mineral resources based on their estimation error.

P.S.: an example is **phosphorus grades at an iron ore mine**. They vary between 0 and 1, but after performing (co)kriging, their estimation errors range around 1500%. This happens, more or less, despite performing compositional data analysis (CoDa).
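The blow-up is pure arithmetic: the formula divides an absolute error by the estimate, so a kriging variance that is tolerable at a 30% grade is enormous relative to a 0.05% grade. A toy calculation (the value a = 1.96 and the grades/variances are invented for illustration):

```python
import math

a = 1.96                                    # e.g. a 95% confidence factor (assumed)

def rel_error(var, x):
    return a * math.sqrt(var) * 100.0 / x   # EE_x = a * sqrt(var(x)) * 100 / x

low_grade = rel_error(0.01, 0.05)           # grade of 0.05%: relative error 392%
high_grade = rel_error(0.01, 30.0)          # grade of 30%: relative error ~0.65%
```

The same absolute uncertainty (sqrt(var) = 0.1) is reported as 392% in one case and under 1% in the other, which is why a relative-error criterion is effectively unusable near zero grades.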

I am interested in the relative suitability of the Geostatistical Analyst and Spatial Analyst tools in ArcGIS.

Both toolsets contain interpolation techniques; I am not sure which one should be used for mapping soil properties (like bearing capacity) from point data.

Hello! I have a database of geochemical analyses.

My questions are:

1 - Can I apply statistical measures to the same variable even though it was measured by several analytical methods?

E.g.: can I take the mean of SiO2 from whole-rock samples that were measured by X-ray fluorescence and by electron microprobe?

2 - Can I compute a linear correlation between two different variables measured by different analytical methods?

E.g.: a linear correlation between H2 that was measured by gas chromatography and FeO that was measured by electron microprobe?

3 - In which readings can I learn this theory?

I have a dataset with distances between beneficiaries and the nearest provision point (nearest hub).

I want to develop a model to explain the distances based on several attributes, like category of beneficiary and category of provision point, among others:

distance ~ cat_beneficiary + cat_provision + altitude + ...

I guess I should use a GLM, but I don't know which model would fit best with this kind of data (continuous and positive). Can I use a count-data model (like Poisson or NB)? Or do those only work with discrete data?

I attach a histogram.
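Count models like Poisson or negative binomial are intended for non-negative integers; for continuous, positive, right-skewed responses such as distances, a Gamma GLM with a log link is a usual first choice (a log-normal regression is another). As a hedged sketch of why it suits this data type, here is a self-contained IRLS fit of a Gamma/log-link GLM on simulated data — the covariates and coefficients are invented, and in practice you would call a GLM routine from your statistics package rather than hand-roll the loop:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
# hypothetical design: intercept, one continuous covariate, one binary category
X = np.column_stack([np.ones(n),
                     rng.uniform(0, 1, n),
                     rng.integers(0, 2, n).astype(float)])
beta_true = np.array([1.0, 0.8, -0.5])
y = rng.gamma(shape=5.0, scale=np.exp(X @ beta_true) / 5.0)  # positive, skewed "distances"

# Gamma GLM with log link fitted by IRLS; for this family/link the working
# weights are 1, so each step is an OLS fit to the working response z.
beta = np.linalg.lstsq(X, np.log(y), rcond=None)[0]          # OLS-on-log start
for _ in range(25):
    mu = np.exp(X @ beta)
    z = X @ beta + (y - mu) / mu                             # working response
    beta = np.linalg.lstsq(X, z, rcond=None)[0]
```

The recovered coefficients land close to the invented truth, and the model never predicts negative distances — the property the question is after.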

In geostatistics, structural analysis aims to obtain a single expression characterizing the spatial variation of a regionalized variable in different directions. The general expression is γ(h) = Σγi(h), where γi(h) represents the variogram model in the *i*-th direction, which is subject to some geometric transformation and is then isotropic.

My questions are:

(1) Does the expression γ(h) = Σγi(h) apply to the geometric anisotropic variogram? In detail: if the variograms in two orthogonal directions have the same nugget and sill but different ranges, we call this geometric anisotropy. The common practice is to apply a linear transformation [1, 0; 0, K] to the lag vector, after which one variogram model can be used for all directions.

(2) If geometric anisotropy can also be expressed as a sum of variograms in different directions, then it should be equivalent to the common approach mentioned in (1). But how are these two approaches related to each other?

(3) I think the core question is: why can the variograms in different directions be added?

Could anyone give me some explanations, and some illustrated examples?
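One hedged way to see the distinction behind (1)–(3): geometric anisotropy is usually written as a single structure evaluated on a transformed lag, while the additive form naturally describes zonal anisotropy, where separate components act on different lag coordinates:

```latex
% geometric anisotropy: one structure, anisotropy-corrected lag
\gamma(\mathbf{h}) \;=\; \gamma_0\!\left(\sqrt{\,h_x^2 + (K h_y)^2\,}\right)

% zonal anisotropy: nested sum, each term depending on one lag component
\gamma(\mathbf{h}) \;=\; \gamma_1(h_x) + \gamma_2(h_y)
```

The sum is a valid variogram because it is the variogram of a sum of independent random fields, Z = Z1 + Z2 — which also answers (3). A purely geometric anisotropy, by contrast, is not itself a sum of one-directional terms; it is a change of metric, which is why the linear transformation [1, 0; 0, K] is the standard treatment rather than the additive decomposition.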

My observations are points along a transect, irregularly spaced.

I aim at **finding the distance values that maximize the clustering of my observation attribute**, in order to use them in the subsequent LISA analysis (Local Moran's I).

I iteratively run the Global Moran's I function with PySAL 2.0, recreating a different distance-based weight matrix (binary, assigning 1 to neighbors and 0 to non-neighbors) with a search radius 0.5 m longer at every iteration.

At every iteration, I save z_sim,p_sim, I statistics, together with the distance at which these stats have been computed.

**From this information, what strategy is best to find distances that potentially reveal underlying spatial processes that (pseudo-)significantly cluster my point data?**

**PLEASE NOTE:**

- Esri style: the ArcMap Incremental Global Moran's I tool identifies *peaks of z-values* where p is significant as interesting distances
- Literature: I found many papers that simply choose the distance with the *highest absolute significant value of I*

**CONSIDERATIONS**

Because the number of observations in each neighborhood changes with the search radius, and the weight matrix changes with it, the I values are not directly comparable across distances.
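Independent of PySAL, the quantity being looped over can be computed directly, which makes the radius scan easy to inspect; a minimal 1-D transect version with binary distance-band weights (coordinates and attribute values invented):

```python
import numpy as np

def moran_I(y, coords, radius):
    """Global Moran's I with binary distance-band weights on a 1-D transect."""
    d = np.abs(coords[:, None] - coords[None, :])
    W = ((d > 0) & (d <= radius)).astype(float)   # 1 = neighbor, 0 = not
    z = y - y.mean()
    return (len(y) / W.sum()) * (z @ W @ z) / (z @ z)

coords = np.arange(6, dtype=float)                # regularly spaced toy transect
y = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])      # strongly clustered attribute

I_at = {r: moran_I(y, coords, r) for r in (1.0, 2.0, 5.0)}
```

At radius 1 the clustering shows (I = 0.6 here), while at a radius that connects everything I collapses to its expected value under no autocorrelation, −1/(n−1) = −0.2 — a concrete illustration of the consideration above that I values at different radii measure different things.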

I want to compare two maps (covering the same area):

- a raster map of the Topographic Position Index (TPI) with values ranging from 0 to 100. This is a continuous variable. I added a picture in attachment.
- a raster map with wetlands (permanent wetland, temporal wetland, no wetlands present), so this is a categorical variable. I added a picture in attachment.

I want to check whether the TPI map can predict wetlands. I want to look for similar spatial patterns.

I was thinking in terms of probability (e.g. Bayesian Networks), but I am not sure this is the right technique.

Does someone know a technique I could use to analyse these two maps? I am using ArcGIS but other software programs (like R) can also be used of course.

At present, I have a map of geological classification results. The classification is **categorical and ordered**, and **some areas in the map are vacant**.

I want to use geostatistical analysis to **interpolate the vacant areas**, for example with a kriging method.

So I want to know how to use these methods for discontinuous (categorical) data types, or whether there is a kriging method for such data.

Looking forward to your suggestions.

I first state that my domain of interest lies in the understanding of distances in the spatial, geographical domain. This does not involve distances in statistical non-spatial analysis.

Distances in the mathematical sense are characterised by the properties of positivity, separation, symmetry and triangular inequality. In the literature this last property is often wrongly interpreted. I state the hypothesis that the triangular inequality (d(AC) <= d(AB) + d(BC) whatever B) has for main purpose to guarantee the minimal nature of distances. I have so far listed three errors:

1. confusion with the non-Euclidean nature of geographical distances: the fact that distances do not follow the straight line is attributed to the triangular inequality, as in Müller 1982 ("Non-Euclidean geographic spaces: mapping functional distances", Geographical Analysis, vol. 14). This is clearly a misunderstanding of the non-Euclidean nature of transport and mobility realities.

2. considering non-optimal sets of measures (the word "measure" is used to avoid considering them as distances) containing triangular inequalities, as in Haggett 2001, p. 341 ("Geography: A Global Synthesis", Prentice Hall). This argument is not consistent with the idea of minimality that I mentioned earlier: what could a sub-optimal distance be? Do such distances exist in the spatial analysis of transport?

3. the existence of rest stops along transport routes creates the possibility that the additivity of time-distances (for instance) is not guaranteed. On the optimal route from A to C passing through B, if there is a need for a stop (for rest, for energy refuelling, or other similar purposes) in or around the point B, then the sum of the time-distances AB and BC is inferior to the overall AC distance, creating an apparent violation of the triangle inequality. This idea can be found in the reference article Huriot, Smith and Thisse 1989, "Minimum cost distances in spatial analysis", Geographical Analysis, p. 300. This argument could be overcome by using a non-continuous time-distance function to resolve the sub-additivity paradox.

All this discussion brings to the idea that distances are optimal in nature, and that in the spatial analysis of transport and mobility, the concept of distance contains an idea of optimality. This idea could link geography and economics through the principle of optimisation in the distribution of activities and the functioning of transport systems.

I would like to open here a discussion to test if my reasoning is consistent and solid: I welcome any counter arguments, examples, illustrations.

I'm looking for examples and analyses of autocorrelation tests performed on cross-sectional data and the issues encountered (data preparation for analysis, problems with estimation, etc.). I am not interested in spatial autocorrelation.

Many thanks in advance!

To do ArcGIS geostatistical analysis I need a China climatic zone shapefile. I searched for it in DIVA-GIS but it was not found.

I plan to use **sequential Gaussian simulation** (sGs) approaches in a study. However, when I apply the inverse **normal score transformation** (NST) to the data after the calculation, there are some errors (especially in negative results). Therefore, I simulated the data with a **Box-Cox transformation** instead of the NST and obtained more appropriate results.

Can the **Box-Cox transformation be used instead of the normal score transformation** in sequential Gaussian simulation (sGs)? Can I also find examples of such transformations being used before (to reference in the article)?

Thank you very much in advance

I am using ArcSWAT. I encounter some problems when I try to estimate the flow through these GIS files. I loaded my land use after the watershed delineation, but when I tried to add my lookup table and selected "Cropland data layer", I got this error message (error 1):

"The grid value 0 has not been defined.

Please define this grid value."

I double-clicked 'value 0' and selected an item for it (Crop-AGRR), but 'Land Use\Soils\Slope Definition' could not complete this time. Then I added my lookup table again, but selected "User Table" this time, just like when running Example1, which comes with the SWAT model, but I got another error message (error 2):

'An error has occurred in the application. Record the call stack sequence and description error.

Error call stack sequence

fromChoose2LUSoilsfunctions.vb

Error Number

5

Description

Argument ‘Length’ must be greater or equal to zero'

I appreciate it if someone could give me some suggestions.

Hi everyone!

I would greatly appreciate your help

These are my first weeks of self-studying GIS and geostatistical analysis for my hydrobiological research.

I have about 30 points on a lake with x, y, z information and species occurrence data.

I have just interpolated the z value (depth) and made a bathymetric map (isolines) of the lake using Golden Software Surfer 11.

Now I need to compare this information with the species occurrence data and maybe find some relation between them, but I don't clearly understand how I can do this.

Which software or tools should I use? Should I continue my work in Surfer? Are there any tutorials or literature for beginners? I'm stuck and in despair.

I've heard that cokriging may help, but I'm not sure. I also heard about MaxEnt, but as a beginner I can't understand how to work with this software, so I need some simple tutorials or something like that, I think.

And I am sorry for my English)

The original data are 2D and have been detrended using a second-order polynomial obtained by OLS. The variogram of the detrended data is shown in red in the attached figure. Conditional sequential Gaussian simulation was then performed and 100 realizations were obtained; their variograms are shown in gray in the figure. Why is there an apparent deviation? Is there something wrong with my procedure? Thank you.

The formula of Advanced Vegatation Index for Landsat 7 is:

AVI = [(B4 + 1) (256 - B3) (B4 - B3)] ^ 1/3

and for Landsat 8,

AVI = [(B5 + 1) (65536 - B4) (B5 - B4)] ^ 1/3

Now, if I transform the DN values into reflectance values, should the constants (256, 65536) be changed, or do they remain the same for reflectance as well?

When performing kriging with anisotropy, a very high length-to-width ratio of the sampled area can suggest an anisotropy that does not really exist. So my question is: is there a recommended ratio of the longest to the shortest sampling axis for kriging to avoid this effect? Or how else can I deal with this situation?

I have a set of disease cases in polygon form as an attribute of each city. There are some 180 cities (polygons): 2-5 of them recorded more than 300 cases, about 100 of them contain 0-2 cases, and the rest recorded 2-20 cases. I am going to evaluate the possible correlation between the illness and some environmental factors such as temperature, precipitation, etc.

However, the distribution of the disease data is severely non-normal and violates the assumptions of many statistical methods.

Do you have any suggestions in this case?

Hello everyone, I'm working on solute transport in soil. There are 40 sample points, and in the GS+ software the active lag distance defaults to half of the maximum sample distance. After calculation, only four or five plotted points were shown, but R2 is more than 0.9. Is this analysis reliable? How many plotted points are needed at least?

I have used AHP, Frequency Ratio (FR), and Fuzzy Logic (FL) to create land suitability maps in the GIS environment. Do you know other methods?

1) From 73 static levels I interpolated the groundwater table using three methods: Topo to Raster, IDW, and kriging. For the last two I used Geostatistical Analyst, so errors are calculated. However, I do not know how to validate the Topo to Raster interpolation. In addition, I interpolated the groundwater flow direction using ArcHydro Groundwater. Is this valid? Is this modeling?

2) I calculated the mixing fraction of an "X" water source using a 3-end-member mixing analysis (EMMA). I interpolated the mixing fraction using the same three methods. Topo to Raster (apparently) gave the best fit... still, I do not know how to validate the method (i.e., RMSE). Kriging overestimated a lot of values, and so did IDW.

Thanks in advance for your help.

Ku-band geostationary satellite elevation angles, for internet and TV.

Hello,

I want to extract this netCDF file: 'Sea_sur_temp_tos_Omon_MPI-ESM-MR_historical_r1i1p1_200001-200512.nc'. It has 8 variables in total: time, time_bnds, j (cell index along the second dimension), i (cell index along the first dimension), lat, lat_vertices, lon, lon_vertices, and tos (sea surface temperature). While extracting it in ArcGIS, the dimension box asks for j and i values instead of lat and lon. I don't understand what the values for i and j should be.

Kindly help me in this regard.

Thanks

I have cloud-masked time series images. I want to apply temporal gap filling by interpolating, through time, the values of cloud-free observations into the pixels that had clouds (NaN after cloud masking). Is there any software, methodology, or script I can use to perform such temporal gap filling?
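Dedicated tools exist, but simple per-pixel linear interpolation along the time axis can be sketched in plain NumPy; this is a toy version (a tiny invented stack, equally spaced dates), not a substitute for gap-filling methods that use spatial context:

```python
import numpy as np

def fill_time_gaps(stack):
    """Linearly interpolate NaNs along the time axis (axis 0) of a
    (time, rows, cols) stack; pixels that are NaN on every date stay NaN."""
    out = stack.copy()
    t = np.arange(out.shape[0], dtype=float)
    flat = out.reshape(out.shape[0], -1)        # view: edits write into `out`
    for k in range(flat.shape[1]):
        col = flat[:, k]
        bad = np.isnan(col)
        if bad.any() and (~bad).any():
            col[bad] = np.interp(t[bad], t[~bad], col[~bad])
    return out

# toy demo: a 5-date, 2x2 stack whose values ramp 0..4 through time
demo = np.arange(5, dtype=float)[:, None, None] * np.ones((5, 2, 2))
demo[2, 0, 0] = np.nan                          # "cloudy" pixel on date 2
filled = fill_time_gaps(demo)
```

With a variable revisit time, replace the `t = np.arange(...)` line with the actual acquisition dates (as day numbers); np.interp handles uneven spacing.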

I am trying to evaluate the impact of an intervention that was implemented in very poor areas (more poor people, underserved communities). In addition, the location of these areas was such that health services were limited for various administrative reasons. Thus, the intervention areas had two problems: (1) individuals residing in these areas were mostly poor, illiterate, and belonged to underserved communities; (2) the geographical location of the area also contributed to their vulnerability, as people with a similar profile but living elsewhere (non-intervention areas) had better access to services. I have cross-sectional data on health service utilization from both types of areas at endline. There is no baseline data available for intervention and control.

I want to do two analyses.

(1) Intent-to-treat analysis: here, I wish to compare service utilization between "areas" (irrespective of whether a household in an intervention area was actually exposed to the intervention). The aim is to see whether the intervention could bring some change at the "area" (village) level. My question is: can I use propensity score analysis for this, by matching intervention "areas" with control "areas" on aggregated values of covariates obtained from the survey and Census? For example, matching intervention areas with non-intervention areas in terms of % of poor households, % of illiterate population, etc.

(2) The second analysis is to examine the treatment effect: here I am using propensity score analysis at the individual level (comparing those who were exposed in intervention areas with matched unexposed people from non-intervention areas).

Is this the right way of analysing the data for my objective?

I have a time series of satellite-derived rasters. What is the best approach to characterize the spatial pattern of spatio-temporal variation by means of geostatistical analysis?

I would like to characterize how areas are affected by the temporal and spatial variation of a parameter derived by satellite. How can I relate this to another forcing driver? That one is measured as a numerical vector.

Is variogram analysis the best option? Or the empirical orthogonal function?

If one of these is feasible, how would you develop it with R or other open-source software?

The time series raster dataset has more than 50 images with a variable revisit time; the lag time ranges from 7 to 96 days.

Dear RG members

While reading a paper, I came across the statement "M 6.0 Ranau earthquake dated on June 05, 2015 coupling with intense and prolonged rainfall caused several mass movements such as debris flow, deep-seated and shallow landslides in Mesilou, Sabah" and was prompted to ask this question.

regards

IJAZ

Interested to know about flood return period.

I am using krige.cv() for cross validation of a data set with 1394 locations. In my code, empirical variogram estimation, model fitting, and kriging (by krige()) everything works fine. But when I use krige.cv() I get the following error.

Error: dimensions do not match: locations 2786 and data 1394

One can notice that 1394 × 2 = 2786. What could I be missing? Please note that there are no NA, NaN, or missing values in the data, variogram, or kriging results. Everything else works fine; it's just krige.cv() that does not work.

I am caught in a strange situation. I have been doing some kriging using the gstat package, but the data are exhibiting a lot of bad signs. They have a strong second-order trend surface, with almost all coefficients declared significant with three asterisks *** in lm(). Though R^2 is small, the empirical variograms are clearly different in the two cases, i.e., variogram(attribute~1) and variogram(attribute~SO(x, y)) have different sills. Furthermore, the directional variograms show both zonal and geometric anisotropies. When I try to fit a variogram model, I see a big change between the fitted ranges and sills of the simple (no anisotropy assumed) and the anisotropy-corrected variogram models. How do I deal with this analysis?

Is it valid to transform ilr coordinates using normal score, Box-Cox, or other transformations in order to perform Sequential Gaussian Simulation (SGS)?

Is it valid to use the combination of isometric log-ratios (ilr) with Direct Sequential Simulation without any transformation?

Thanks in advance.

I need the steps to project a WGS1984 ASCII file to a Kertau_RSO_Malaya_Meters ASCII file. I have been trying, but it is still not projected correctly. I cannot open it in FMP; it states that this file has no projection. Seeking an answer from all experts and professors. Thank you, and I appreciate your feedback.

I have also attached my original file for reference.

Dear all

Thanks in advance for comments, answers, papers etc.

Regards

Ijaz

Hello,

It would be really helpful if someone could suggest some sources where DMSP-OLS data with a thermal band are available.

I have soil contaminant samples collected at different depths/layers. I generated a contaminant surface for each depth/layer using the ArcGIS kriging tool. However, I need a vertical picture of the contaminant across layers, which I think I can achieve by interpolating the values between the different layers. As far as I know, ArcGIS can't do this, so I would be happy to know of any freeware I could use to achieve it. Many thanks.

I have 5 raster layers depicting different temperature levels across a given geographic space. I need to use a common "Stretched" color ramp to show how this phenomenon varies across space, where a given color in the color scale represents the same value in all the layers. Kindly see the attached sample. I need it in a "Stretched" style. I use ArcGIS 10.3.1.

I have tried a couple of things which haven’t worked yet.

I made a dummy raster with values spanning the min-max of the 5 rasters. The lowest value of all the rasters is 5 and the max is 74, so I created a dummy raster with a min value of 5 and a max of 74. Under Layer Properties > Symbology, I symbolized the dummy raster with a color ramp of choice, chose "Minimum-Maximum" under "Stretch", and chose "From Custom Settings (below)" under "Statistics". I then saved the layer as a layer file (.lyr).

The problem is that when I import the symbology or apply the layer file, all the rasters retain the same symbology and show the same min-max values (5 and 74). I need them to show their real value ranges, such as 34 to 58, with the colors reflecting that range within the common color scale/symbology.

How do I go about this? I need a quick way out. I am not experienced in Python or other programming languages, but with detailed steps I can also try that if it's the only way out.

AIC and BIC are information criteria used to assess model fit while penalizing the number of estimated parameters. As I understand it, when performing model selection, the model with the lowest AIC or BIC is preferred. In a situation I am working on, the model with the lowest AIC is not necessarily the one with the lowest BIC. Is there any reason to prefer one criterion over the other?
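The disagreement comes from the penalty terms. With k estimated parameters, n observations, and L the maximized likelihood:

```latex
\mathrm{AIC} = 2k - 2\ln L,
\qquad
\mathrm{BIC} = k \ln n - 2\ln L
```

For n ≥ 8, ln n > 2, so BIC penalizes extra parameters more heavily and tends to select the smaller model. Roughly speaking, AIC targets predictive accuracy while BIC targets consistent identification of the true model (when it is among the candidates), which is why the two can legitimately rank models differently.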

Hello everybody

I used the ArcMap Geostatistical Analyst to interpolate a surface for heavy metals. Before interpolation I divided my data set into training and test subsets. With the training data I interpolated a surface, and now I want to determine residuals for the test data to check the precision of the interpolation. How can I determine the predicted value at an unknown position?

Hi everyone, I have a point data set with 197 different coordinates. I am selecting 25% to be used for training. When I run MaxEnt with certain environmental layers, it uses only 8 presence records for training and 2 for testing. Any ideas why this is?

I want to compare blended data using regression kriging and Bayesian kriging. What is the advantage of Bayesian kriging compared to regression kriging? Does anyone have a recommended link/journal for learning Bayesian kriging? Many thanks in advance.

I'm modeling labor productivity on farm sites spread widely across the US, and I would like to include NDVI (or another vegetation index) as a predictor variable. I'm wondering if there is a month during the growing season that makes the most sense for comparing across disparate climate types. Thank you for any suggestions!

I am modeling the undrained behavior of clays and the drained behavior of sands under various static loading cases. I understand that the geostatic step is used in general analyses of soils. I was wondering if one can replace it with a static, general step under gravity loading. I am not interested in calculating pore pressure, and the soil is homogeneous in nature.

Hello all.

I have a data set with coordinates of locations where some fishes have been collected.

I want to be able to estimate the distances between each of those locations, but following the river path. I know how to estimate Euclidean distances, but those are computed as straight lines connecting each location, ignoring the river.

Is there any program, R function, or QGIS plugin that could calculate such distances? I have a shapefile for the river basin in question, which I can convert to a lines file in QGIS, but I can't get distance estimates between these locations along the river.

Thanks,

JP
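In R, the riverdist package computes exactly these along-network distances, and QGIS's network-analysis tools can route along a line layer. The underlying idea — snap each sampling point to the nearest node of the river network, then run a shortest-path search over the network edges — can be sketched in plain Python (toy network, hypothetical distances):

```python
import heapq
import math

def dijkstra(graph, start):
    """Shortest along-network distance from start to every node.

    graph: {node: [(neighbour, edge_length), ...]}
    """
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, math.inf):
            continue  # stale heap entry
        for nbr, length in graph[node]:
            nd = d + length
            if nd < dist.get(nbr, math.inf):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist

# Toy river: a main stem A-B-C with a tributary D joining at B.
# Edge lengths are along-channel distances (km).
river = {
    "A": [("B", 5.0)],
    "B": [("A", 5.0), ("C", 4.0), ("D", 3.0)],
    "C": [("B", 4.0)],
    "D": [("B", 3.0)],
}

d = dijkstra(river, "A")
print(d["C"])  # 9.0 km along the channel (A -> B -> C)
print(d["D"])  # 8.0 km (A -> B -> D), even if D is close to A as the crow flies
```

In practice the graph would be built from the vertices of the river lines shapefile, with edge lengths taken from the segment geometry.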

I plotted a variogram for a dataset containing 83 observation sites. The maximum distance between the observation points is 60 km, but the range of the variogram I obtained is 261 km. What does this mean?

[I thought that a range of 261 km means kriging can predict well within a radius of 261 km; probably my understanding is wrong.]

I want to know if there is an equation to compute the minimum number of samples to take in the field.
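One commonly cited starting point is Cochran's formula for estimating a mean: n = (z·σ/E)², where z is the critical value for the chosen confidence level, σ is an a-priori estimate of the standard deviation (e.g. from a pilot survey), and E is the tolerable margin of error. A minimal sketch, with hypothetical numbers:

```python
import math

def cochran_n(z, sigma, error):
    """Minimum sample size to estimate a mean within +/- error.

    n = (z * sigma / error)^2, rounded up.
    z: critical value (1.96 for 95% confidence)
    sigma: prior estimate of the population standard deviation
    error: tolerable margin of error (same units as sigma)
    """
    return math.ceil((z * sigma / error) ** 2)

# e.g. estimate mean soil pH to within 0.2 units at 95% confidence,
# assuming (hypothetically) sigma = 0.5 from a pilot survey
print(cochran_n(1.96, 0.5, 0.2))  # 25
```

For spatially correlated variables, purely statistical formulas like this are only a lower bound; the sampling design also has to cover the lag distances needed to fit the variogram.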

Hi, I'm looking for Aphonopelma chalcodes distribution data for use in a GIS; I need the data in GeoTIFF or shapefile format.

Thanks for your help.

**Can anyone please tell me the use of Normalized Rank in the Analytical Hierarchy Process as used in ArcGIS? Is it used to scale all the maps into one, since it takes values from 0 to 1 for all the features?**
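As far as I understand it, in AHP the pairwise-comparison matrix is column-normalized and row-averaged to produce criterion weights in [0, 1] that sum to 1; those normalized weights are then used to combine the criterion maps in a weighted overlay. A hedged sketch of that normalization step (toy matrix, hypothetical criteria):

```python
def ahp_weights(matrix):
    """Approximate AHP priority weights by column normalization.

    matrix[i][j] is the pairwise importance of criterion i over j
    on the Saaty 1-9 scale.  Each column is normalized to sum to 1,
    then rows are averaged, so the returned weights lie in [0, 1]
    and sum to 1.
    """
    n = len(matrix)
    col_sums = [sum(matrix[i][j] for i in range(n)) for j in range(n)]
    return [
        sum(matrix[i][j] / col_sums[j] for j in range(n)) / n
        for i in range(n)
    ]

# toy comparison matrix: slope vs geology vs land use (hypothetical)
m = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
]
w = ahp_weights(m)
print([round(x, 3) for x in w])  # weights in descending importance
print(round(sum(w), 6))          # sums to 1
```

The exact eigenvector method gives slightly different weights, but this column-normalization approximation is the one usually shown in GIS tutorials.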

The nugget effect is the value of the variogram when the lag distance is equal to zero, and according to the literature it depends on measurement error. I don't know if anyone can help me clarify this point.

Thank you, Louadj Yacine

Hi everyone

I want to add the coordinates of several points as a post map in Surfer. The problem is that after importing the data, the points are not connected in a straight line; they appear in a zigzag pattern. How can I fix this? The points are the coordinates of a ship's track, so they should appear as a straight line.

Thanks

Dear colleagues,

I am performing a geostatic procedure to model a tunnel in clay soil.

ABAQUS is used.

The soil model is Cam-Clay.

Initial stresses in the soil are specified.

Material data is fully specified.

Nevertheless, the program displays an error message:

The sum of initial pressure and elastic tensile strength for the porous elastic material in element 1, instance ground1-1, must be positive.

I downloaded MODIS Level-2 ocean colour images and displayed them in SeaDAS version 7.2. My area of interest is Lake Victoria. However, I am facing a problem with areas affected by cloud cover, which leaves no data available for those parts. Is there a way I can extrapolate from the available data in order to fill in the areas affected by cloud cover? Also, is there any other software I can use to perform that function?
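For rigorous gap filling of ocean-colour fields, methods such as DINEOF are the usual route. As a quick illustration of the simplest alternative, here is a hedged sketch that fills each cloud-masked cell with the mean of its valid neighbours, iterating until the gaps close (toy grid, hypothetical values):

```python
def fill_gaps(grid, max_iters=100):
    """Iteratively fill masked cells with the mean of valid neighbours.

    grid: 2-D list of floats, with None marking cloud-masked cells.
    Each pass fills every gap that touches at least one valid cell,
    so larger gaps close from the edges inward.
    """
    rows, cols = len(grid), len(grid[0])
    for _ in range(max_iters):
        updates = {}
        for r in range(rows):
            for c in range(cols):
                if grid[r][c] is not None:
                    continue
                neighbours = [
                    grid[rr][cc]
                    for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                    if 0 <= rr < rows and 0 <= cc < cols
                    and grid[rr][cc] is not None
                ]
                if neighbours:
                    updates[(r, c)] = sum(neighbours) / len(neighbours)
        if not updates:
            break
        for (r, c), v in updates.items():
            grid[r][c] = v
    return grid

# toy chlorophyll grid with one cloud-masked cell (None)
chl = [
    [1.0, 2.0, 3.0],
    [2.0, None, 4.0],
    [3.0, 4.0, 5.0],
]
print(fill_gaps(chl)[1][1])  # 3.0 (mean of 2.0, 4.0, 2.0, 4.0)
```

Note that this is interpolation, not real data: filled values should be flagged as such, and large cloud gaps filled this way will look artificially smooth.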

I am trying to find a reference for the following question:

Suppose we have 4 arrays (with the same probes on each).

The first and second are hybridized with sample 1.

The third and fourth are hybridized with sample 2.

Then we want to compare the probe signals between the samples.

As I understand it, we must carry out the RMA procedure for each array to correct the background, then construct the empirical signal distribution from ALL 4 ARRAYS and perform quantile normalization. So our input matrix for QN would consist of 4 columns.

But colleagues say that we must perform QN independently for the first and second, and for the third and fourth.

Who is right?

Thanks in advance.
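Whichever pooling is chosen, the quantile-normalization step itself is the same: each column is replaced rank-for-rank by the mean of the sorted values across the columns, forcing identical distributions. A minimal sketch (ties not handled, toy intensities):

```python
def quantile_normalize(columns):
    """Quantile-normalize a list of equal-length columns (arrays).

    Each column is replaced rank-for-rank by the mean of the sorted
    values across all columns, forcing identical distributions.
    Ties are not handled in this simplified version.
    """
    n = len(columns[0])
    sorted_cols = [sorted(col) for col in columns]
    # mean of the k-th smallest value across all columns
    rank_means = [sum(col[k] for col in sorted_cols) / len(columns)
                  for k in range(n)]
    result = []
    for col in columns:
        ranks = sorted(range(n), key=lambda i: col[i])
        out = [0.0] * n
        for rank, i in enumerate(ranks):
            out[i] = rank_means[rank]
        result.append(out)
    return result

# toy probe intensities from two arrays
a = [5.0, 2.0, 3.0]
b = [4.0, 1.0, 6.0]
na, nb = quantile_normalize([a, b])
print(na)  # [5.5, 1.5, 3.5]
print(nb)  # [3.5, 1.5, 5.5]
```

Pooling all 4 columns assumes the arrays share one underlying signal distribution across both samples; normalizing the pairs separately assumes each sample has its own. That assumption, not the algorithm, is what the two camps disagree about.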

When I kept the substrate width and copper width the same and applied the master/slave boundary condition, the simulation showed 4 topological errors. I am reproducing a published article and am facing this problem. How can I solve this issue? I am attaching the HFSS 14.0 file. Thanks in advance for all replies.

Hello fellow researchers. I am currently working on a research project on cokriging. Can anybody assist me with the construction of cross-variograms in a non-collocated setup?
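For the collocated part of the data, the classical experimental cross-variogram is γ₁₂(h) = (1/2N(h)) Σ [z₁(xᵢ) − z₁(xⱼ)][z₂(xᵢ) − z₂(xⱼ)] over the N(h) pairs separated by roughly h; for genuinely non-collocated data the pseudo cross-variogram is the usual workaround (gstat in R implements both, as far as I know). A hedged sketch of the collocated estimator with distance binning (toy data):

```python
import math

def cross_variogram(points, lags, tol):
    """Experimental cross-variogram for two collocated variables.

    points: list of (x, y, z1, z2) with both variables at each site.
    lags: list of lag-centre distances; tol: half-width of each bin.
    Returns a list of (lag, gamma12), with gamma12 = None for empty bins.
    """
    out = []
    for h in lags:
        total, count = 0.0, 0
        for i in range(len(points)):
            for j in range(i + 1, len(points)):
                xi, yi, z1i, z2i = points[i]
                xj, yj, z1j, z2j = points[j]
                d = math.hypot(xi - xj, yi - yj)
                if abs(d - h) <= tol:
                    total += (z1i - z1j) * (z2i - z2j)
                    count += 1
        out.append((h, total / (2 * count) if count else None))
    return out

# toy collocated samples: (x, y, variable1, variable2)
pts = [(0, 0, 1.0, 2.0), (1, 0, 2.0, 3.0), (2, 0, 4.0, 5.0)]
print(cross_variogram(pts, lags=[1.0, 2.0], tol=0.5))  # [(1.0, 1.25), (2.0, 4.5)]
```

The estimator above needs both variables at the same sites; in a non-collocated setup the pseudo cross-variogram replaces the product of same-site differences with differences taken across the two point sets.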

In landslide susceptibility analyses, we use geological and topographical parameters. How can we identify multicollinearity among these factors, to know which factor contributes most and which least?
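A standard screen is the variance inflation factor: regress each factor on the others and take VIF = 1/(1 − R²); equivalently, the VIFs are the diagonal of the inverse correlation matrix of the predictors. Values above about 10 (some use 5) are commonly taken to indicate problematic collinearity. A hedged pure-Python sketch (toy factor values):

```python
def pearson(a, b):
    """Pearson correlation of two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def invert(m):
    """Gauss-Jordan inverse of a small square matrix."""
    n = len(m)
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(m)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [v - f * w for v, w in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

def vifs(predictors):
    """VIF of each predictor: diagonal of the inverse correlation matrix."""
    k = len(predictors)
    corr = [[pearson(predictors[i], predictors[j]) for j in range(k)]
            for i in range(k)]
    inv = invert(corr)
    return [inv[i][i] for i in range(k)]

# toy factors: slope, elevation (nearly collinear with slope), road distance
slope = [10, 20, 30, 40, 50]
elevation = [100, 230, 280, 420, 500]
road_dist = [5, 3, 8, 2, 7]
print([round(v, 2) for v in vifs([slope, elevation, road_dist])])
```

Here the collinear slope/elevation pair gets very large VIFs, flagging that one of the two should be dropped or combined before fitting the susceptibility model.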