Science topic

# Spatial Analysis - Science topic

Geographical Analysis, Urban Modeling, Spatial Statistics

Questions related to Spatial Analysis

Despite going through the available literature and getting a basic understanding of how kernel density estimation works for home-range analysis, I cannot make Biotas work. Does anybody know of more detailed guidelines, manuals, or training videos for Biotas? The manual provided on their website says very little about navigating the software and setting the required parameters; for instance, when choosing a window width with least-squares cross-validation (LSCV), I have no idea what units are represented there. In my case, I have a limited dataset, say fewer than 100 GPS fixes of the Little Owl, which moves on a small scale. Moreover, it often roosts on the same spots, so my points are dense and sometimes even overlaid. I would like to avoid manipulating the raw data by excluding those overlapping points from the analysis, because I think they still carry important information. Would it still be possible to fit a fixed KDE with LSCV under such conditions?

Any help will be highly appreciated. Thanks a lot.
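A general note on the duplicated fixes, independent of Biotas: a fixed-kernel KDE itself has no problem with coincident points (each duplicate simply contributes an identical kernel); it is LSCV bandwidth selection that is known to break down when many pairwise distances are zero, driving the bandwidth towards zero. A toy 1-D Python sketch of the estimator itself (illustrative only, not Biotas code):

```python
import math

def gaussian_kde(points, h):
    """Fixed-bandwidth Gaussian KDE in 1-D; duplicated points just add identical kernels."""
    n = len(points)
    norm = n * h * math.sqrt(2 * math.pi)
    def f(x):
        return sum(math.exp(-0.5 * ((x - p) / h) ** 2) for p in points) / norm
    return f

# A duplicated roost fix at 0.0 is no problem for the estimator itself:
f = gaussian_kde([0.0, 0.0, 1.0, 2.0], h=0.5)
```

The duplicate at 0.0 simply doubles the density there; the trouble only starts when LSCV is asked to pick `h` from such data.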

I need this data to create a location map and conduct spatial analysis for my research.

Please suggest authentic sources only.

When we conduct geostatistical analysis of water quality, how do we pick the distance between two sampling points in canals and lakes? What is the grid schema? Please give examples.

Hello everyone,

I am new to geostatistics and ArcGIS, and I’m currently working on fitting a semivariogram in Ordinary Kriging (OK) for a spatial distribution map as part of my research. I’m encountering some issues with this process and would greatly appreciate any advice or assistance. My variogram appears quite flat, and there is no noticeable variation in the results after the run is finished.

If anyone has experience with this, please feel free to share your insights.

Thank you so much for your help!

When conducting ANOVA to analyze heavy-metal concentrations in soil, a Box-Cox transformation was applied to achieve normality, since the raw data were non-normally distributed. However, the transformed data differ substantially from the raw data in mean, median, and overall magnitude. For subsequent research on heavy-metal pollution evaluation (e.g., calculating the geo-accumulation index Igeo, Kriging interpolation of the spatial concentration distribution, and descriptive statistics), should the pre-transformed or post-transformed data be used?
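A common practice is to run the statistical tests on the transformed scale but compute and report concentration-based indices such as Igeo on the original scale; since the Box-Cox transform is invertible, summaries obtained on the transformed scale can always be mapped back. A minimal Python sketch of the forward/inverse pair (the λ and concentration values are hypothetical):

```python
import math

def boxcox(x, lam):
    """Box-Cox transform; lam = 0 gives the log transform as the limiting case."""
    return math.log(x) if lam == 0 else (x ** lam - 1) / lam

def inv_boxcox(y, lam):
    """Inverse transform, mapping results back to the original concentration scale."""
    return math.exp(y) if lam == 0 else (lam * y + 1) ** (1 / lam)

# Round-trip check with a made-up lambda and concentration:
back = inv_boxcox(boxcox(3.7, 0.5), 0.5)
```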

Dear Colleague,

I hope this message finds you well.

I am excited to announce the Call for Chapters for our upcoming book project titled "Applying Remote Sensing and GIS for Spatial Analysis and Decision-Making," scheduled to be published by IGI Global.

We are seeking contributions from researchers and practitioners who are passionate about exploring the application of remote sensing and GIS technologies in spatial analysis and decision-making processes. Your expertise and insights would greatly enrich the content of our book, and we cordially invite you to submit a proposal for a chapter.

Submission Deadline: May 19, 2024

For more details about the submission process and guidelines, please visit the following link: [https://www.igi-global.com/publish/call-for-papers/call-details/7509]

Should you have any inquiries or require further information, please do not hesitate to contact me. I am more than happy to assist you throughout the submission process.

Thank you for considering this opportunity to contribute to our publication. We look forward to receiving your proposals and collaborating with you on this exciting project.

Best regards,

Adil Moumane

Ibn Tofail University, Kenitra, Morocco

Hi there,

I am trying to use the variogramST function in R for spatio-temporal kriging.

The values I am working with are monthly, while variogramST's predefined "tunit" options are hours, days, or weeks.

I would appreciate it if anyone could tell me how to change it to months.

Thanks,

Hamideh

Dear Colleagues,

The aim of this Special Issue is to explore the latest advances in the fields of spatial analysis and regional science, focusing on integrated methodologies, strategies, and frameworks to guide urban planning decisions. To this end, we seek innovative approaches linking the regional and urban planning dimensions from a spatial analysis perspective.

This Special Issue will welcome manuscripts that link the following themes:

- Advanced spatial analysis tools, including spatial optimization, suitability analysis, land use/land cover modeling, and simulation modelling;
- Innovative geospatial methodologies for the examination of land use conflicts and land degradation due to urbanization;
- Models and methods evaluating the environmental and climate-related parameters of urban areas to support regional sustainability;
- Decision support systems and tools for urban areas, handling local to global connections on a regional scale;
- Exploration of regional science tools and methods for guiding sustainable urban development.

Keywords

- spatial analysis
- urban planning
- regional science
- spatial optimization
- land use modelling
- decision support systems
- sustainable development
- policy making
- environmental management
- climate change
- local-global connections

We look forward to receiving your original research articles and reviews.

Dr. Apostolos Lagarias
Dr. Poulicos Prastacos
Dr. Despina Dimelli
Dr. Alexandra Delgado-Jiménez

*Guest Editors*

My question relates to the implicit assumption that topologically associating domains (TADs) have to be contiguous along the genome. This seems odd to me, given that the DNA molecule exists in 3D space while this contiguity criterion relates only to the 1D genome coordinate, which might not be appropriate to delimit interactions in 3D space.

Consequently, I am wondering if I'm missing anything obvious that would justify imposing such a criterion to characterise TADs.

Thanking you in advance.

How is precipitation predicted, and how can we use satellite and gauge products such as the CPC MORPHing technique (CMORPH), the Global Satellite Mapping of Precipitation (GSMaP), the Tropical Rainfall Measuring Mission (TRMM), and the TRMM Multi-satellite Precipitation Analysis (TMPA)?

Precipitation is one of the most important components of the global water cycle. It is related to atmospheric circulation and to climate and climate change, and it is used for weather forecasting, hydrological process modeling, disaster monitoring, and more. Because precipitation varies widely in space and time, accurate and reliable precipitation products with high temporal and spatial resolution are needed for stakeholder decision-making at the local scale. Precipitation data are both temporal and spatial: a series of spatio-temporal events can indicate a tendency towards increasing rainfall in a particular region, and the spatio-temporal distribution directly affects the availability of water in rivers and watersheds.

The availability of precipitation data is an essential part of hydrological analysis; however, it is often insufficient and incomplete due to several factors, such as gaps in spatial and temporal observations, discontinuous precipitation time series, an uneven number of precipitation stations, a limited number of observers and observing systems, and manual data entry. It is also difficult to obtain surface precipitation in real time, since observational data require quality control before they can be used directly. At the same time, accurate, long-term spatio-temporal precipitation data are needed for climate change prediction, simulation studies, hydrological forecasting, the study of floods, landslides, droughts, and other disasters, and the management and investigation of water resources.

Several factors that contribute to uncertainty, such as observation errors, boundary or initial-condition errors, model or system errors, scale differences, and unknown parameter heterogeneity, have a significant impact on the performance of lumped and distributed hydrological models. Precipitation is usually the most important meteorological input in hydrological and water-quality studies, so its accurate measurement matters: for reliable and consistent forecasts of water quantity and quality, the accuracy of precipitation data, including intensity, duration, geographic pattern, and extent, has a significant impact on land-surface and hydrological model output.

Large-scale hydrological models often rely on remotely sensed precipitation data from satellite sensors due to the lack of ground-sensing equipment and rain gauge networks. Both gauges and satellites exhibit regional and temporal variability and measurement errors. Ground sensor networks such as rain gauges and radars provide the most direct observations of surface precipitation and deliver measurements with high temporal frequency, but these systems have significant drawbacks. Gauges are limited to point-scale observations and are also susceptible to misleading readings due to wind and evaporation effects. In addition, spatial interpolation of point-based observations adds uncertainty, on top of measurement errors, to the final gridded precipitation dataset. The distribution and density of gauges are critical factors for adequate measurement: several studies have shown that fragmented and irregular rain gauge networks have a significant impact on hydrological model uncertainty, and that this uncertainty decreases with increasing gauge density or an optimized distribution pattern. Ground radar networks, on the other hand, often provide continuous spatial coverage with high spatial and temporal resolution; however, their accuracy is affected by signal attenuation and extinction, surface scattering, beam illumination effects, and uncertainty in the reflectivity-rainfall-rate relationship.

The latest technologies, such as remote sensing, can overcome the lack or unavailability of precipitation data: satellite measurements make it possible to obtain precipitation data remotely, simplifying collection at any time and for any region. Satellites generally have several advantages over surface rain gauge stations for measuring precipitation amounts: high spatial and temporal resolution with a wide coverage area, near-real-time data, continuous recording, quick access, fewer weather effects, less field variability, and easy data collection thanks to free downloads. Several satellite-based precipitation products are now available, each with a different degree of accuracy, including the CPC MORPHing technique (CMORPH), the Global Satellite Mapping of Precipitation (GSMaP), the Tropical Rainfall Measuring Mission (TRMM), the TRMM Multi-satellite Precipitation Analysis (TMPA), and others.

I am using the DACE toolbox to generate an aerodynamic kriging surrogate model for wing morphing. How should I set the correlation models and regression models?

I want to make a correlation of AOD 550nm (MODIS MCD19A2, 1km) with PM2.5 (non-referenced sensors: low-cost sensors). For this, I have downloaded MODIS MCD19A2 files and selected SDS optical_depth_055 during post-processing on LAADS Web.

- How can I collocate MODIS AOD data on the minimum pixel level (probably 3*3 pixel window) so that it can be correlated with the point sensor? Probably I need a Python script or ENVI manual for this.
- How can this MCD19A2 AOD550nm data be pre-processed for cloud masking, and QA for good-quality data?
- Mostly, the MAPSS website (http://giovanni.gsfc.nasa.gov/aerostat/) is used for validation of MODIS (Terra: MOD & Aqua: MYD) products against AERONET. However, there is no option for MCD (Aqua & Terra combined). How can I validate MCD19A2 AOD 550 nm against AERONET AOD?
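On the first bullet, the collocation step itself is simple once the AOD grid is in memory: locate the pixel containing the sensor, then average the valid values in the 3×3 window around it. A pure-Python sketch of that step (the grid, indices, and fill value below are illustrative; in practice you would read optical_depth_055 with pyhdf or rasterio and apply the scale factor and QA mask first):

```python
def window_mean(grid, row, col, half=1, fill=-28672):
    """Mean of valid AOD values in a (2*half+1)^2 window centred on (row, col).
    grid is a 2-D list of raw optical_depth_055 values; `fill` is assumed to be
    the product's fill value (check the SDS attributes of your own file)."""
    vals = []
    for r in range(max(0, row - half), min(len(grid), row + half + 1)):
        for c in range(max(0, col - half), min(len(grid[0]), col + half + 1)):
            if grid[r][c] != fill:
                vals.append(grid[r][c])
    return sum(vals) / len(vals) if vals else None

# Toy 3x3 neighbourhood with one fill-value pixel at the centre:
aod = [[1, 2, 3], [4, -28672, 6], [7, 8, 9]]
mean_aod = window_mean(aod, 1, 1)  # averages the 8 valid neighbours
```

The same function handles image edges (the window is clipped) and returns None when every pixel in the window is fill, which is a useful flag for cloud-masked scenes.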

Hello all,

I am trying to learn how to conduct a Moran's I test in R for my four species distribution models generated in MaxEnt. I want to show that, hopefully, my four models exhibit little spatial autocorrelation and do not need to be redone.

I have found lots of people discussing the packages and functions used to complete this task but no scripts that are useful to learn from. I would like to understand the meanings behind the code and how it works. I was wondering if anyone had any tips or R-scripts that would help me?

Any direct help/useful information would be greatly appreciated.

Kind regards,

William
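For building intuition before reaching for packages such as ape or spdep, the global Moran's I statistic itself is short enough to write by hand; a dependency-free Python sketch (the R packages compute the same quantity, plus the inference around it; the values and weights below are made up):

```python
def morans_i(values, weights):
    """Global Moran's I:
    I = (n / W) * sum_ij w_ij (x_i - xbar)(x_j - xbar) / sum_i (x_i - xbar)^2."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    num = sum(weights[i][j] * dev[i] * dev[j] for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    w_total = sum(sum(row) for row in weights)
    return (n / w_total) * (num / den)

# Binary adjacency weights on a 1-D chain of 6 cells; values form two clusters:
vals = [1, 1, 1, 5, 5, 5]
W = [[1 if abs(i - j) == 1 else 0 for j in range(6)] for i in range(6)]
I = morans_i(vals, W)  # positive: neighbouring cells carry similar values
```

Seeing the statistic as a weighted covariance of deviations makes the package output (and the role of the weights matrix) much easier to interpret.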

I am trying to calculate access to public health facilities, and I want to ensure that buffering or other spatial analysis would be reasonable for calculating accessibility, given that the cluster points are not in their original positions.

Hello,

I want to interpolate temperature in an alpine region (Valle d'Aosta), and I would like to know how I can obtain better results. I have data from 87 meteorological stations distributed over an area of about 3,000 km², and also 8 stations in a smaller area (about 500 km²) within this region. As the region has complex topography, I chose kriging with external drift, using elevation as a covariable. In this case, which area should be used: the smaller one (about 500 km² with 8 stations) or the whole area (3,000 km² with 87 stations)?

Thanks

Dear Scholars,

Assume a mobile air pollution monitoring strategy using a network of sensors that move around the city, specifically a network of sensors that quantify PM2.5 at a height of 1.5 meters that lasts about 20 minutes. Clearly, using this strategy we would lose temporal resolution to gain spatial resolution.

If we would like to perform spatial interpolation to "fill" the empty spaces, what would you recommend? What do you think about it? What would be your approximations?

Regards

Dear all,

I have a land use shapefile (different classes) and PM2.5 values (stored in a station point shapefile). I would like to analyze the relationship between land use type and PM2.5 level. If I interpolate the PM2.5 level to a raster, is there any tool in ArcMap that can run a regression between land use types and PM2.5 level? Thank you.

I'm wondering whether you should distinguish between presence data of highly mobile species (e.g., raptors) and immobile species (e.g., plants). The dispersal of plants is limited to a certain distance, so occurrences might be clustered for that reason. Birds, on the other hand, should be able to search for suitable nesting sites. If nesting sites are close together, could that be an indicator of high suitability?

Thanks, Tim

I have fitted two cross-classified multilevel models with different sample sizes. I was wondering whether I can compare (say, higher or lower) the ICC between these two models.

Another question: from a cross-classified multilevel model, we can calculate the ICC for different contexts (such as students living in the same zone but attending schools in different zones, or students attending school in the same zone but living in different residential zones). Can we at least add up the ICCs to say that this total amount of variation is due to spatial dependency?
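As a reminder of what is being compared: each ICC is a variance ratio, so comparing ICCs across the two models compares proportions of total variance, not absolute variances, and (under the usual assumption of independent random effects) the zone-level shares do add. A toy Python illustration with hypothetical variance components:

```python
def icc(component_vars, all_components):
    """ICC: variance attributable to the given component(s), as a share of
    the total (all random-effect variances plus the residual)."""
    return sum(component_vars) / sum(all_components)

# Hypothetical variance components from a cross-classified model:
var_school_zone, var_home_zone, var_residual = 0.30, 0.20, 1.50
all_vars = [var_school_zone, var_home_zone, var_residual]

icc_school = icc([var_school_zone], all_vars)                  # one classification
icc_spatial = icc([var_school_zone, var_home_zone], all_vars)  # both zone effects combined
```

Here the combined "spatial" ICC is simply the sum of the two zone-level shares, which is the sense in which total variation "due to spatial dependency" can be reported.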

In particular, how do I distribute the porosity and permeability of the reservoir using SGSIM (Sequential Gaussian Simulation) or Kriging? I tried to run the software, yet it always fails, and I couldn't find a tutorial on YouTube. Do we need data for the cluster step, or can we skip it?

I am trying to run a spatio-temporal autoregressive (STAR) model. I therefore need to create a spatial weight matrix W with N × T rows and N × T columns to weight country interdependencies based on yearly trade data. Could someone please tell me how to create such a matrix in R or Stata?
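One common construction (an assumption here, since some designs also weight cross-period links) is the block-diagonal matrix W_NT = I_T ⊗ W_N, i.e., the cross-sectional weight matrix repeated once per year; in R this is `kronecker(diag(T), W)`. A dependency-free Python sketch of the same operation:

```python
def st_weights(T, W):
    """NT x NT spatio-temporal weight matrix I_T (Kronecker product) W_N:
    the cross-sectional matrix W repeated as T diagonal blocks, so only
    contemporaneous (same-year) spatial links are weighted."""
    n = len(W)
    size = n * T
    M = [[0.0] * size for _ in range(size)]
    for t in range(T):
        for i in range(n):
            for j in range(n):
                M[t * n + i][t * n + j] = W[i][j]
    return M

# Two countries that trade with each other, observed over three years:
W = [[0.0, 1.0], [1.0, 0.0]]
M = st_weights(3, W)  # 6 x 6: three identical 2 x 2 blocks on the diagonal
```

Ordering the stacked data as (year 1, all countries), (year 2, all countries), ... must match the block order of the matrix, whichever software builds it.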

A Moran's I (spatial autocorrelation) analysis has been prepared in ArcMap 10.4 and in GeoDa for comparison. Please find the results attached for your valuable input.

Hi, in the brightness temperature calculation I have seen two values for converting to Celsius.

Which one is correct? Why do some articles use 272.15 and others 273.15?

BT = K2 / ln(K1 / Lλ + 1) - 272.15

or

BT = K2 / ln(K1 / Lλ + 1) - 273.15
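273.15 is the exact offset between Kelvin and degrees Celsius, so the second formula is the correct one; 272.15 in some articles appears to be a typo or copying error. A worked Python example using the widely published Landsat 8 TIRS Band 10 constants (substitute the K1/K2 from your own scene's metadata if your sensor differs; the radiance value below is made up):

```python
import math

# Calibration constants for Landsat 8 TIRS Band 10 (from the scene's MTL metadata):
K1 = 774.8853   # W / (m^2 * sr * um)
K2 = 1321.0789  # Kelvin

def brightness_temp_c(radiance):
    """At-sensor brightness temperature via Planck inversion, converted to
    Celsius with the exact Kelvin offset 273.15."""
    bt_kelvin = K2 / math.log(K1 / radiance + 1.0)
    return bt_kelvin - 273.15

bt = brightness_temp_c(10.0)  # 10.0 is a made-up but plausible TIR radiance
```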


I produced some thermal maps of Girona's atmospheric urban heat island using a mobile car-transect method. I used Surfer 6.0 to draw the maps, but Surfer is not a geographical information system, which I think is a problem. Also, my transects have a very high spatial density of observation points in downtown Girona (15/km²) and a low density in rural areas (2/km²). I have always interpolated the isotherms with kriging. What is the best method to interpolate temperatures (kriging, IDW, etc.) in my area of interest, Girona and its environs? Can you give me bibliographic citations?

Dear all,

I have three 100 m² plots subdivided into 100 subplots of 1 m² each. In each subplot I counted the total number of individuals of a plant species, and I repeated the sampling for 7 years. I would like to study the distribution (aggregation) patterns of individuals within the main plots and see whether there are changes over time. I've been looking at examples using the SADIE (Spatial Analysis by Distance Indices) methodology, but I don't know whether it fits my study. Are there any recommended studies of this type? Is there an R package implementing the SADIE methodology? I would be thankful for any ideas.

Thank you and best regards

I produced three different raster layers by kriging points collected in situ in the same area in three different months. What is the best way to highlight the temporal changes that occurred at the test site during this time? I computed the standard deviation of the images, since it summarizes the variation in a single image; if I computed the differences between subsequent images instead, I would produce two images to represent the same result. Am I right?

Hi,

I have 26 sampling locations, relatively well distributed, but not enough to cover the study area adequately. I also have a digital elevation model, from which I can extract many more locations. An important detail is that the temperature data are all at low elevations; no temperature data are available at high elevations. Hence, I want to use the elevation data to inform the temperature interpolation through co-kriging, which works with non-collocated points. I prefer to use gstat, but I am not sure: a) what is an optimal number of elevation points to use (I know that I have 26 temperature points and that I need elevation points at high elevations)? b) is there an automatic process that provides reasonable kriging predictions, such as autoKrige, for co-kriging?

I need layers for a spatial analysis of Sakarya province in the eastern Marmara region of Turkey. Where can I find them for free?

Thank you in advance!

I am currently conducting research on: The Spatial Analysis of the Incidence and Users' Fear of Crime at Motor Parks in Ibadan.

What are the best methods for determining the sampling frame or study population, considering that the majority of the motor parks in the city have no official capacity figures or data on daily passenger patronage?

Also, what is the best sampling method for administering questionnaires to passengers in order to achieve the aim and objectives of the study?

Awaiting your scholarly contributions.

Both Ripley's K-function and Moran's I measure statistically significant clustering within data. However, how can we know which method performs better for our data?

What are the advantages and disadvantages of each method that can help in choosing the better one?
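One practical difference worth keeping in mind when choosing: Ripley's K operates on point locations across a whole range of distances, while Moran's I tests attribute values attached to fixed units under a single weights specification. For intuition, a naive (edge-uncorrected) Python sketch of K on a made-up point set:

```python
import math

def ripley_k(points, d, area):
    """Naive (edge-uncorrected) Ripley's K:
    K(d) = (A / n^2) * number of ordered pairs (i, j), i != j, within distance d."""
    n = len(points)
    pairs = sum(
        1
        for i in range(n)
        for j in range(n)
        if i != j and math.dist(points[i], points[j]) <= d
    )
    return area * pairs / (n * n)

pts = [(0, 0), (0, 1), (1, 0), (1, 1)]  # four corners of a unit square
k1 = ripley_k(pts, 1.0, 1.0)   # counts the 8 ordered side pairs
k2 = ripley_k(pts, 1.5, 1.0)   # also counts the 4 ordered diagonal pairs
```

Real implementations (e.g., spatstat in R) add edge correction and simulation envelopes, but the multi-scale character of K is already visible: it is a curve over d, whereas Moran's I yields one statistic per weights matrix.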

I am trying to investigate whether the autoregressive behaviour is the result of a spatial lag (SAR) or of spatially correlated errors (SEM).

**Diagnostic check on the weights matrix**

The output of **spatdiag, weights(W)** is below.

Diagnostic tests for spatial dependence in OLS regression

Fitted model: logHPRICE = SPOOL + BQUARTER + WFENCE + PSIZE + NROOMS

Weights matrix: name W; type distance-based (inverse distance); distance band 0.0 < d <= 10.0; row-standardized: yes

| Test | Statistic | df | p-value |
| --- | --- | --- | --- |
| Spatial error: Moran's I | -11.468 | 1 | 2.000 |
| Spatial error: Lagrange multiplier | 55.698 | 1 | 0.000 |
| Spatial error: robust Lagrange multiplier | 55.302 | 1 | 0.000 |
| Spatial lag: Lagrange multiplier | 2.544 | 1 | 0.111 |
| Spatial lag: robust Lagrange multiplier | 2.148 | 1 | 0.143 |

Hello, I performed spatio-temporal regression kriging (ordinary) on the residuals from a regression. I would like to know whether the spatio-temporal kriging predictor is an exact interpolator, i.e., whether the values predicted at the sample locations equal the observed values there.

Thanks for your answer.

Lucas

At the continental level, what did the spatial footprint of African trade routes look like before colonisation?

I would like to use the R package gstat to predict and map the distribution of water quality parameters using two methods (kriging and co-kriging).

I need a guide to code or resources for doing this.

Azzeddine

Hi, everyone. :)

Language maintenance and language shift are an interesting topic. Talking about Indonesia, our linguists note that as of 2022 Indonesia has 718 languages. Indonesia really cares about its existing languages.

One thing that is interesting, language maintenance and language shift are also influenced by geographical conditions.

To accommodate 718 different languages, Indonesia's geography consists of islands. If we move from island to island in Indonesia, the language used contrasts sharply; there is contact between our different languages.

Some literature states that language maintenance and language shift are strongly influenced by the concentration of speakers in an area.

So, in the developments related to the topic of language maintenance and language shift regarding geographical conditions, to what extent have linguists made new breakthroughs in this issue?

I think that the study of language maintenance and language shift in relation to regions is like the study of food availability or state territory, which makes the area the main factor in this maintenance.

I pose this question to all linguists: do you have a new point of view on the keywords language, maintenance, and geography?

Kind regards :)

What do you consider are the implications of Big Data on urban planning practice?

Is it proposed to use replicates (and if yes, how many) when doing spatial omics, using the same type of tissues but from different animals within the same phylum?

I was using the Geostatistical Analyst tool in ArcGIS to interpolate some data, but I found that when the semivariogram parameters are the same, disjunctive kriging (DK) and simple kriging (SK) give the same cross-validation result (and the same prediction result). I tried changing the transformation method and the semivariogram model, but the problem wasn't solved.

I searched many papers and found that a few researchers used both SK and DK, so I am not sure whether I made a mistake.

If anyone has met the same problem or knows why it occurs, please let me know. Thank you!

Hello

The most important question: is the variogram used in ordinary kriging the traditional variogram or the residual maximum likelihood (REML) variogram?

Does ordinary kriging (OK) include the trend, or does it not depend on the trend?

My greetings

Hi everyone,

I have to identify overlapping polygons, with one of the datasets containing thousands of polygons. I am using the sf package and its st_intersects function, as:

dataframe1 %>%
  st_filter(y = dataframe2, .predicate = st_intersects)

which takes about 6 seconds to compare each polygon of the first dataframe, and therefore days for my current dataframes.

The only way I have found so far to make it feasible is to first remove some polygons manually and then split the dataframe before running the intersection.

Would anyone have advice on how to make it faster?

thanks a lot!
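Independent of sf, the generic speed-up for polygon-polygon intersection is a bounding-box prefilter backed by a spatial index, which is what GEOS's STRtree does under the hood; a minimal pure-Python sketch of the idea, using a uniform grid instead of a tree (rectangle data hypothetical):

```python
from collections import defaultdict

def bbox_overlap(a, b):
    # a, b are (xmin, ymin, xmax, ymax) bounding boxes
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def build_grid(boxes, cell=10.0):
    """Index each box in every grid cell its bounding box touches."""
    grid = defaultdict(list)
    for i, (x0, y0, x1, y1) in enumerate(boxes):
        for gx in range(int(x0 // cell), int(x1 // cell) + 1):
            for gy in range(int(y0 // cell), int(y1 // cell) + 1):
                grid[(gx, gy)].append(i)
    return grid

def query(box, boxes, grid, cell=10.0):
    """Gather candidates from the touched cells only, then apply an exact
    bbox test; the expensive geometry intersection runs on the few survivors."""
    x0, y0, x1, y1 = box
    cand = set()
    for gx in range(int(x0 // cell), int(x1 // cell) + 1):
        for gy in range(int(y0 // cell), int(y1 // cell) + 1):
            cand.update(grid.get((gx, gy), ()))
    return sorted(i for i in cand if bbox_overlap(box, boxes[i]))

boxes = [(0, 0, 5, 5), (20, 20, 25, 25)]
grid = build_grid(boxes)
```

The index is built once and each query touches only a handful of cells, so total cost grows roughly linearly instead of quadratically; sf's `st_intersects` applies the same principle, so if it is still slow, per-polygon looping in the calling code is a likelier culprit than the predicate itself.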


The article "*Ethnographic Knowledge in Urban Planning – Bridging the Gap between the Theories of Knowledge-Based and Communicative Planning*", published on November 4th, 2021, has serious ethical problems, e.g., plagiarism, authorship, and duplication.

Are these datasets available through spatial analysis tools like ArcGIS? Are they available in libraries of programming tools like R or Python? Are they available on official websites of the Colombian government? Any reference to specific links or libraries is highly appreciated.

Please help!

How can I extend a GWR model to a GWRK model? I have obtained the residuals from the GWR, but I don't understand how to add residual kriging to extend the GWR model.

I am following the way a previous paper (PMID: 30948552) treated their spatial transcriptomic (ST) data. It seems they combined the expression matrices (it is not mentioned whether normalized or log-transformed) of different conditions, calculated a gene-gene similarity matrix (by Pearson rather than Spearman), and finally obtained some gene modules (clustered by L1 norm and average linkage) with different expression between conditions.

So I have several combinations of methods to imitate their workflow.

For the expression matrix, I have two choices: the first is a merged count matrix from the different conditions; the second is a normalized data matrix (by default the NormalizeData function in Seurat: log((count / total count of spot) * 10000 + 1)). For the correlation, I have used Spearman or Pearson to calculate a correlation matrix.

But, I got stuck.

When I use a count matrix, no matter which correlation method, I get a heatmap with a mostly positive-value pattern, which looks strange. And for the normalized data matrix (only Pearson calculated), I get a heatmap with a sparse pattern, which is indescribably strange too.

My questions:

- Which combinations of data and method should I use?
- Would this workflow weaken the correlation of the genes since some may have correlations only in specific condition?
- What do you think of my workflow overall?

Looking forward to your reply!

I have to prepare spatial maps of various soil properties, and I am confused about whether fitting a semi-variogram is compulsory for this or not.

I have some environmental covariates derived from the digital elevation model (slope, gradient, channel network distance, etc.) in raster format.

**I want to identify areas of similarity among the covariates and somehow delineate the smallest possible area (or areas) to serve as a reference area.**

- Soil data points will be collected to create predictive models in this reference area.
- The predictive models developed in the reference areas should fit when extrapolated to regions outside the reference area.
- Therefore, the covariates will cover this external area.

Dear everyone

I am sorry that I was not clear in my last message.

I am learning FRAGSTATS, and I have two questions: (1) how do I create a user-provided points raster or table, and (2) is the random points procedure repeatable?

(1) How do I create a user-provided points raster or table?

In FRAGSTATS, the input data for user-provided points must be either a grid or a table. The manual's example only provides the grid and table files; it does not describe how to generate them from a point vector file or from a table of coordinates. I tried rasterizing point vector files using *Rasterize (vector to raster)* in the GDAL module of QGIS, but the cells of the generated grid do not exactly match the cells of the land cover raster I want to analyse, as in the attached figure.

(2) Is the random points procedure repeatable?

FRAGSTATS has a random points procedure, and I wonder whether it is repeatable, like the set.seed() function in R: after fixing the random seed, you get exactly the same random numbers/points every time you run your code, and your results are reproducible despite the randomness involved. Is there any setting in FRAGSTATS to handle this?

Thank you and best regards.

I have prepared these two maps of Electrical Conductivity (EC). I used both IDW and Kriging to prepare them. Which one should I choose? The sampling sites are also pointed in the map.

You can see that the Kriging map is quite strange!

I am in a fix. Please help!

Hello!

So, I'm an R user currently working on a spatial analysis of panel data. From my experience over the past few days, I understand that the spml function in the splm package requires strict conditions to actually run the analysis (e.g., balanced panels and complete cases).

Recently, I tried to run the plm function instead, with a self-defined spatially lagged dependent variable. Will this actually yield the same result? Are there important spatial aspects left unaccounted for if I use plm instead?

Looking forward to responses. Thank you for your attention!

Can we draw any statistical and spatial relationship between fine-fraction parameters of road dust (Pd, Zn, Cr, and Ca) and some key air pollutants (like PM2.5, CO, NO, CH4, O3, HCHO, BC, NOx, SO2)? Has anyone done a spatial analysis using both of these datasets?

I have both datasets and would like to do some statistical and spatial analyses. Kindly suggest, if there is any possibility to draw any relationship.

I have average household income data for the 26 statistical divisions of Turkey (NUTS 2). I need to break this down and estimate it at the city level (81 subdivisions, NUTS 3) or town level (LAU) to assess the economic conditions of each local area. I have demographic data at the city and town levels. I did some research in the small area estimation field, but I couldn't find an exact method that addresses my research question. Can you please recommend any book, article, or method for this case? Thank you very much.

An inquiry about Interpolation Model Validation.

I ran IDW, ordinary kriging, and EBK interpolations, but the R² values for all these models (including the semivariograms for OK) rarely exceed 0.1 (sometimes 0.007 is the highest).

Is a model with an R² of 0.007 good for publication? I think this value indicates far too poor a prediction, but none of these models shows a decent R² value.

On the other hand, the RMSSE is really close to 1, and mean error is around 0.009.

What should I do now?

What can be the possible reason? Am I missing something? Should I try more complex models? Is the spatial distribution being controlled more by extrinsic factors (e.g., human interference)?

[120 samples were collected randomly from this study area].

I would gladly appreciate any suggestion.

I am running Geographically Weighted Regression (GWR) on a categorical dependent variable so my model is basically a Geographically Weighted Logistic Regression.

I have multiple independent variables, some numerical and some categorical.

While interpreting the results for the numerical variables is straightforward, I want to know how to identify the reference level of the categorical independent variables and how to interpret their coefficients.

Let's say I code males as 0 and females as 1; should the coefficients then be interpreted for females, since they are coded 1? What if I coded males as 1 and females as 2?
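For a 0/1 dummy, the category coded 0 is the reference level, and exp(coefficient) is the odds ratio of the category coded 1 relative to it; recoding to 1/2 leaves the slope unchanged and only shifts the intercept. A small numeric check with made-up coefficients:

```python
import math

# Logistic model: log-odds = b0 + b1 * x, where x encodes sex.
b0, b1 = -0.5, 0.8   # made-up coefficients

def prob(x, b0, b1):
    eta = b0 + b1 * x
    return 1.0 / (1.0 + math.exp(-eta))

# Coding males=0, females=1: males are the reference level, and
# exp(b1) is the odds ratio of females relative to males.
p_m, p_f = prob(0, b0, b1), prob(1, b0, b1)
odds_ratio = (p_f / (1 - p_f)) / (p_m / (1 - p_m))
print(round(odds_ratio, 4))          # equals exp(b1) ≈ 2.2255

# Coding males=1, females=2 only shifts the intercept: the model with
# b0' = b0 - b1 and the same slope b1 gives identical probabilities.
b0_shift = b0 - b1
assert abs(prob(1, b0_shift, b1) - p_m) < 1e-12
assert abs(prob(2, b0_shift, b1) - p_f) < 1e-12
```

The same logic carries over locally in a geographically weighted logistic regression: each local coefficient is a local log odds ratio against the reference category.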

I am looking at spatial correlation patterns between grid cells with increasing distances to each other (please see the attached figure 1). I have divided the distance into 11 bins, and in each bin, I have about 150 Pearson correlation values. The violin plot shows the distribution of correlation values within each bin.

Now I would like to estimate up to which distance the correlation can be considered significant.

In other words, how can I determine that a correlation below, e.g., 0.2 is not significant?

For example, would it be statistically meaningful to calculate the 2-sigma interval for each bin, assume that correlations below zero are not significant, and then remove any bin (distance) whose 2-sigma interval crosses that threshold (figure 2)?

Or, as another option, is the t-test for the Pearson correlation coefficient applicable in this case?

I am looking forward to hearing your ideas about it.
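The standard t-test for a single Pearson correlation uses t = r·sqrt(n−2)/sqrt(1−r²) with n−2 degrees of freedom. With roughly 150 correlation values per bin this is easy to compute, though note the caveat that the test assumes independent observations, which spatially correlated grid cells may violate:

```python
import math

def pearson_t(r, n):
    """t statistic for H0: rho = 0, with df = n - 2."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

# With ~150 pairs, test whether r = 0.2 differs from zero.
r, n = 0.2, 150
t = pearson_t(r, n)
print(round(t, 3))   # ≈ 2.483

# The two-sided 5% critical value of the t distribution with 148 df
# is ≈ 1.976, so even r = 0.2 is "significant" at this sample size --
# statistical significance and practical relevance are not the same thing.
assert t > 1.976
```

So a fixed cutoff like "r below 0.2 is not significant" does not follow from the t-test alone; the threshold depends on n, and for spatial data an effective sample size correction may be needed.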

I am trying to calculate the Stream Power Index (SPI) in ArcGIS. I have checked many videos and documents, but there is no certainty about the correct formula for the raster calculator. So I wrote the formulas below to learn which one is right; each one creates different results.

DEM Cell Size=10m

SPI_1 --> "flow_accumulation" * 10 * Tan("slope_degree" * 0.017453)

SPI_2 --> Ln("flow_accumulation" * 10 * Tan("slope_degree" * 0.017453))

SPI_3 --> Ln("flow_accumulation" + 0.001) * (("slope_percent" / 100) + 0.001)

SPI_4 --> Ln((("flow_accumulation" + 1) * 10) * Tan("slope_degree"))

SPI_5 --> "flow_accumulation" * Tan("slope_degree")

Also, when creating the slope raster, which option should I choose: DEGREE or PERCENT_RISE?

And the last question: when I calculate SPI with the formulas above, I get an SPI map that includes negative values. Is that correct? Are negative values a problem or not?
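As a cross-check outside ArcGIS, the common formulation SPI = ln(A_s · tan β) can be evaluated on a few made-up cells; the slope β must be in radians, which is what the · 0.017453 factor in the raster-calculator expressions does:

```python
import numpy as np

# Sketch of SPI = ln(A_s * tan(beta)), with A_s approximated as
# flow accumulation (cells) * cell size, and beta = slope in RADIANS.
# Values below are made up purely for illustration.
cell_size = 10.0
flow_acc = np.array([[1.0, 50.0], [200.0, 1000.0]])   # cells drained
slope_deg = np.array([[0.5, 5.0], [10.0, 30.0]])

slope_rad = np.deg2rad(slope_deg)   # same as multiplying by 0.017453
# A small offset avoids ln(0) on flat / zero-accumulation cells.
spi = np.log(flow_acc * cell_size * np.tan(slope_rad) + 0.001)
print(spi)
# Negative values are normal: they simply mean A_s * tan(beta) < 1
# (low accumulation and/or gentle slope), not an error.
```

This mirrors SPI_2 in the list above: the slope should come from the DEGREE output (converted to radians), and negative SPI values are expected wherever the argument of the logarithm is below 1.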

Which model is better for investigating outcome – exposure relationship spatially?

· Data are counties

· Dependent variable is incidence per county

· Independent variable is median of XXX per county

· Spearman’s correlation, significant and negative

· Spatial autocorrelation of incidence values close to zero

· Local clusters, detected but majorities are single counties

· Geographically weighted regression, local coefficients mix of positive and negative but global regression coefficient negative

With this background, should we go for geographically weighted regression or a Bayesian convolution model with structured and unstructured random effects?

We plan to test this hypothesis in Morocco within our research project, but we wonder whether the error in the absolute altitude could prevent these data from being used for this purpose.

I have a point shapefile of 12 islands (created using centroids in ArcMap) with attributes of macrobenthic species and their abundance. I would like to analyze the distribution of these species using ArcMap. Would kernel density be suitable for analyzing the species distribution? If not, which method should be used instead?

I am concerned about the number of points (too few) and the long distances between islands.

Thank you
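To see why KDE struggles with 12 widely spaced points, a minimal abundance-weighted 2-D Gaussian KDE can be sketched in plain NumPy (coordinates and abundances below are made up):

```python
import numpy as np

# Minimal 2-D Gaussian KDE, weighting each point by species abundance.
# With only 12 widely spaced points, the surface degenerates into isolated
# bumps around each island unless the bandwidth is inflated -- which is
# the core problem with using KDE in this setting.
rng = np.random.default_rng(1)
pts = rng.uniform(0, 100, size=(12, 2))                  # island centroids
abundance = rng.integers(5, 50, size=12).astype(float)   # species counts

def kde(grid_xy, pts, weights, bandwidth):
    # Sum of abundance-weighted Gaussian kernels at each grid location.
    d2 = ((grid_xy[:, None, :] - pts[None, :, :]) ** 2).sum(axis=2)
    k = np.exp(-d2 / (2 * bandwidth**2))
    return (k * weights).sum(axis=1) / (2 * np.pi * bandwidth**2 * weights.sum())

gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
dens_narrow = kde(grid, pts, abundance, bandwidth=2.0)
dens_wide = kde(grid, pts, abundance, bandwidth=20.0)
# A narrow bandwidth leaves most of the map at near-zero density.
print(dens_narrow.max(), dens_wide.max())
```

For discrete sampling units like islands, graduated symbols per island, or interpolation methods designed for sparse data, may communicate the pattern more honestly than a continuous density surface.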

I am computing a Taylor series expansion in which the distance of an object from the origin (0,0) is computed. I am expressing the distance in terms of the object's position plus velocity×time. I only wrote the first two terms of the Taylor series expansion.

Please check if the expansion in the document attached is correct.
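For reference, the first two terms of that expansion are d(t) = |p + v·t| ≈ |p| + (p·v/|p|)·t, i.e. the linear term is the radial component of velocity. This can be checked numerically with made-up position and velocity values:

```python
import math

# First-order Taylor expansion of the distance of a moving object from
# the origin: d(t) = |p + v t| ≈ |p| + (p·v / |p|) * t + O(t²).
px, py = 3.0, 4.0          # initial position, |p| = 5
vx, vy = 1.0, -2.0         # velocity

def dist(t):
    return math.hypot(px + vx * t, py + vy * t)

p_norm = math.hypot(px, py)
radial_speed = (px * vx + py * vy) / p_norm   # d'(0) = p·v / |p|

t = 1e-4
approx = p_norm + radial_speed * t
assert abs(dist(t) - approx) < 1e-6   # agreement to first order in t
print(p_norm, radial_speed)  # 5.0 -1.0
```

The neglected second-order term is (|v|² − (p·v/|p|)²) / (2|p|) · t², which quantifies the error of the two-term truncation.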

Fitted values: prediction on the dataset used for fitting the model

pred1.m0 <- predict(m0, newdata = wheatdata)

pred1.m0[1:5, ]

Genotype prediction

pred2.fit.SpATS <- predict(m0, which = "geno")

pred2.fit.SpATS[1:5, ]

Do the outputs of the two commands above contain BLUE or BLUP values?

I am studying the effect of the land use surrounding a location on the abundance of aphids at that location. To do this, I fit a linear model with land use as the independent variable and aphid abundance as the dependent variable. To check for spatial autocorrelation, I plot the correlogram of the Moran's I of the model residuals as a function of the lag distance.

However, I have multiple years of data: the aphids were observed each year together with the surrounding land use. How can I account for this temporal effect? Should I include a 'Year' variable in the linear model, and can I then just look at the correlogram of the whole dataset?

Thanks in advance.
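One pragmatic option is to fit the model with Year as a factor and then compute Moran's I on the residuals separately per year, so within-year spatial structure is not mixed with between-year pairs. A plain-NumPy sketch with made-up residuals and coordinates (none of this comes from the original data):

```python
import numpy as np

def morans_i(values, coords, max_dist):
    """Moran's I with binary weights: w_ij = 1 if 0 < distance <= max_dist."""
    z = values - values.mean()
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    w = ((d > 0) & (d <= max_dist)).astype(float)
    n, s0 = len(values), w.sum()
    return (n / s0) * (z @ w @ z) / (z @ z)

# Made-up demo: residuals of an aphid-abundance model at 30 sites x 3 years.
rng = np.random.default_rng(2)
coords = rng.uniform(0, 10, size=(30, 2))
for year in (2019, 2020, 2021):
    resid = rng.normal(0, 1, 30)          # stand-in for per-year residuals
    print(year, round(morans_i(resid, coords, max_dist=3.0), 3))
```

If the per-year correlograms look similar, pooling them (or averaging Moran's I across years per lag) is defensible; a mixed model with a random Year effect is the more formal alternative.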

I am working on a project on the spatial variability of soil EC, and I am a bit puzzled about the comprehensive steps for regression kriging and semivariogram output in QGIS.

I am studying walking behavior through agent-based modeling. Is AnyLogic suitable software for this, or would another tool be better?

Also, do you know of any example in which walking behavior was analyzed with AnyLogic?

Hi everyone!

I have origin points in different areas, and within these areas specific values in raster cells (50 m resolution). My goal is to sum up, in each traversed cell, the values of all previously traversed cells.

This movement can be calculated, for example, with Least Cost Path. With this tool I created a backlink raster, which shows me the movement.

But when I use the Least Cost Path tool to accumulate values, the accumulated values are scaled by the grid cell size.

Does anyone have an idea how to accumulate only the actual cell values, without the grid size factored in?

I tried this with Flow Accumulation, but some cells get no value because no other cell ends in them. Yet each cell needs the value carried over from the prior cell (or the cell's own value) into which it is "moving"/"flowing".
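If the traversed cells can be extracted from the backlink raster, the accumulation described above can be done directly on the cell values with no cell-size factor: walk the path from the origin and keep a running sum. A minimal sketch (the raster values and path indices are made up):

```python
import numpy as np

# Accumulate raw cell VALUES along a path, independent of cell size.
# The path is a made-up list of (row, col) indices, e.g. as extracted
# from a least-cost-path / backlink result, origin cell first.
values = np.array([
    [1.0, 4.0, 2.0],
    [3.0, 5.0, 1.0],
    [2.0, 2.0, 6.0],
])
path = [(0, 0), (1, 1), (2, 2)]

accum = np.full(values.shape, np.nan)   # cells off the path stay NaN
running = 0.0
for r, c in path:
    running += values[r, c]             # add only the cell's own value
    accum[r, c] = running

print(accum[2, 2])   # 1 + 5 + 6 = 12.0
```

Because only `values[r, c]` is added at each step, the result never picks up the 50 m grid spacing that distance-based accumulation tools multiply in.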

I hope someone could help me out with this issue.

Cheers!