Science topics: Geoscience › Geoinformatics › Spatial Analysis
Spatial Analysis - Science topic
Geographical Analysis, Urban Modeling, Spatial Statistics
Questions related to Spatial Analysis
I have fitted two cross-classified multilevel models with different sample sizes. I was wondering whether I can compare the ICCs between these two models (i.e., say one is higher or lower than the other).
Another question: from a cross-classified multilevel model, we can calculate ICC indices for different contexts (such as students living in the same zone but attending schools in different zones, or attending a school in the same zone but living in different zones). Can we at least add these ICC indices together to say that this total amount of variation is due to spatial dependency?
Especially regarding how to distribute reservoir porosity and permeability using SGSIM or kriging. I tried to run the software, but it always fails, and I couldn't find a tutorial on YouTube. Do we need the cluster data, or can we skip that step?
I am trying to run a spatio-temporal autoregressive (STAR) model. For this I need to create a spatial weight matrix W with N × T rows and N × T columns to weight country interdependencies based on yearly trade data. Could someone please tell me how to create such a matrix in R or Stata?
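One common construction, when spatial dependence is contemporaneous (observations interact only within the same year), is the Kronecker product W_ST = I_T ⊗ W. A minimal numpy sketch — the 3 × 3 trade-share matrix below is invented purely for illustration:

```python
import numpy as np

# Hypothetical example: N = 3 countries, T = 2 years. W is a
# row-standardised N x N matrix of trade shares (made-up values).
W = np.array([[0.0, 0.7, 0.3],
              [0.4, 0.0, 0.6],
              [0.5, 0.5, 0.0]])
T = 2

# Contemporaneous dependence makes the NT x NT matrix block-diagonal:
# W_ST = I_T (Kronecker product) W.
W_ST = np.kron(np.eye(T), W)
print(W_ST.shape)  # (N*T, N*T)
```

If W differs by year (e.g., yearly trade shares), stack the yearly matrices on the diagonal with `scipy.linalg.block_diag(W_2000, W_2001, ...)` instead; in R, `kronecker(diag(T), W)` and `Matrix::bdiag()` play the same roles.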
A Moran's I (spatial autocorrelation) analysis has been prepared in ArcMap 10.4 and GeoDa for comparison. Please find attached those results for your valuable input.
- Report.JPG (59.99 KB)
- Scatterplot.png (7.60 KB)
- Cluster Map.png (45.48 KB)
- LISA_Significant_Map.png (43.74 KB)
- Box plot.png (8.44 KB)
Hi, in the brightness temperature calculation I have seen two values used for converting to Celsius.
Which one is correct? Why do some articles use 272.15 and others 273.15?
BT = K2 / ln(K1 / Lλ + 1) − 272.15
BT = K2 / ln(K1 / Lλ + 1) − 273.15
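For what it's worth, the kelvin-to-Celsius conversion is defined as subtracting 273.15 (0 °C = 273.15 K), so the 272.15 variant in some articles looks like a propagated typo. A small sketch of the calculation — the K1/K2 values below are the published Landsat 8 TIRS band 10 constants, used here only for illustration; take yours from your own scene's MTL file:

```python
import math

def bt_celsius(L_lambda, K1, K2):
    # Brightness temperature (deg C) from TOA spectral radiance:
    # BT = K2 / ln(K1 / L + 1) - 273.15
    return K2 / math.log(K1 / L_lambda + 1.0) - 273.15

K1, K2 = 774.8853, 1321.0789  # Landsat 8 TIRS band 10 thermal constants
print(round(bt_celsius(10.0, K1, K2), 2))
```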
#research #urban-planning #users' health and sustainability
I produced thermal maps of Girona's atmospheric urban heat island using a mobile car-transect method. I used Surfer 6.0 to draw the maps, but Surfer is not a Geographic Information System, which I think is a problem. Also, my transects have a very high spatial density of observation points in downtown Girona (15/km²) and a low density in rural areas (2/km²). I have always interpolated the isotherms with kriging. What is the best method to interpolate temperatures (kriging, IDW, etc.) in my area of interest, Girona and its environs? Can you give me bibliographic citations?
I have three 100 m² plots subdivided into 100 subplots of 1 m². In each subplot I counted the total number of individuals of a plant species, and I repeated the sampling for 7 years. I would like to study the distribution (aggregation) patterns of individuals within the main plots and see if there are changes over time. I have been looking for examples using the SADIE methodology (Spatial Analysis by Distance Indices), but I don't know if it fits my study. Any recommendations for studies of this type? Is there any R package for applying the SADIE methodology?
I would be thankful for any ideas.
Thank you and best regards
I produced three raster layers by kriging points collected in situ in three different months over the same area. What is the best way to highlight the temporal changes that occurred in the test site during this period? I computed the standard deviation of the images, since it summarises the variation in a single image; if I instead differenced subsequent images, I would produce two images to represent the same result. Am I right?
I have 26 sampling locations, relatively well distributed, but not enough to cover the study area adequately. I also have a digital elevation model, from which I can extract many more locations. An important detail is that the temperature data are distributed at low elevations, with no temperature data available at high elevations. Hence, I want to use the elevation data to inform the temperature interpolation through co-kriging, which works with non-collocated points. I prefer to use gstat, but I am not sure: a) what is an optimal number of elevation points to use (I know that I have 26 temperature points, and that I need elevation points at high elevations)? b) is there an automatic process that provides reasonable kriging predictions, such as autoKrige, for co-kriging?
I need layers for spatial analysis of Sakarya province in the eastern Marmara region of Turkey. Where can I find them for free?
Thank you in advance!
I am currently conducting research on the spatial analysis of the incidence of, and users' fear of, crime at motor parks in Ibadan.
What is the best method for determining the sampling frame or study population, considering that the majority of motor parks in the city have no official capacity figures or data on daily passenger patronage?
Also, what is the best sampling method for administering a questionnaire to passengers so as to achieve the aim and objectives of the study?
Awaiting your scholarly contributions.
Both Ripley's K-function and Moran's I measure statistically significant clustering within data. However, how can we know which method performs better for our data?
What are the advantages and disadvantages of each method that can help in choosing between them?
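One practical distinction: Ripley's K works on point *locations* (is the point pattern itself clustered?), while Moran's I works on attribute *values* attached to spatial units. As a toy illustration of what Moran's I responds to, here is a self-contained sketch with invented lattice data — clustered values give a positive I, a checkerboard gives a negative one:

```python
import numpy as np

def morans_i(x, W):
    # Global Moran's I: I = (n / S0) * z'Wz / z'z
    z = x - x.mean()
    return len(x) / W.sum() * (z @ W @ z) / (z @ z)

# Toy 4 x 4 grid with rook-contiguity weights (hypothetical data).
n = 4
idx = lambda r, c: r * n + c
W = np.zeros((n * n, n * n))
for r in range(n):
    for c in range(n):
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < n and 0 <= cc < n:
                W[idx(r, c), idx(rr, cc)] = 1.0

clustered = np.array([[0, 0, 1, 1]] * n, dtype=float).ravel()  # two halves
checker = (np.indices((n, n)).sum(axis=0) % 2).ravel().astype(float)

print(morans_i(clustered, W))  # positive: neighbours share values
print(morans_i(checker, W))    # negative: neighbours alternate
```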
I am trying to investigate whether the autoregressive behaviour is the result of a spatial lag (SAR) or of spatially correlated errors (SEM).
A diagnostic check on the weights matrix produced the output below.
Diagnostic tests for spatial dependence in OLS regression
logHPRICE = SPOOL + BQUARTER + WFENCE + PSIZE + NROOMS
Type: Distance-based (inverse distance)
Distance band: 0.0 < d <= 10.0
Test                           Statistic   df   p-value
Spatial error:
  Moran's I                      -11.468    1     2.000
  Lagrange multiplier             55.698    1     0.000
  Robust Lagrange multiplier      55.302    1     0.000
Spatial lag:
  Lagrange multiplier              2.544    1     0.111
  Robust Lagrange multiplier       2.148    1     0.143
Hello, I performed spatio-temporal (ordinary) regression kriging on the residuals from a regression. I would like to know whether the ST kriging predictor is an exact interpolator, i.e., whether the values predicted at the sample data locations are equal to the observed values at those locations.
Thanks for your answer.
At the continental level, what did the spatial footprint of African trade routes look like before colonisation?
I would like to use the R package gstat to predict and map the distribution of water-quality parameters using two methods (kriging and co-kriging).
I need a guide to code or resources for doing this.
Hi, everyone. :)
Language maintenance and language shift is an interesting topic. Speaking of Indonesia, our linguists note that as of 2022 the country has 718 languages, and Indonesia cares greatly about its existing languages.
Interestingly, language maintenance and language shift are also influenced by geographical conditions.
Those 718 languages are accommodated by an archipelagic geography: moving from island to island in Indonesia, language use contrasts sharply, and different languages come into contact with one another.
Some literature states that language maintenance and language shift are strongly influenced by the concentration of speakers in an area.
So, regarding developments in the study of language maintenance and language shift in relation to geographical conditions, to what extent have linguists made new breakthroughs on this issue?
I think the study of language maintenance and shift in relation to regions parallels the study of food availability or state territory, which make the area the main factor in this maintenance.
I put this question to all linguists: do you have a new point of view on the keywords language, maintenance, and geography?
Kind regards :)
What do you consider are the implications of Big Data on urban planning practice?
Is it proposed to use replicates (and if so, how many) when doing spatial omics, using the same type of tissue but from different animals within the same phylum?
I was using the Geostatistical Analyst tool in ArcGIS to interpolate some data. I found that when the semivariogram parameters are the same, disjunctive kriging (DK) and simple kriging (SK) give the same cross-validation results (and the same prediction results). I tried changing the transformation method and the semivariogram model, but the problem was not solved.
I searched many papers and found that a few researchers used both SK and DK. I am not sure whether I made some mistake.
If anyone has met the same problem or knows why it occurs, please let me know. Thank you!
The most important question: is the variogram used in ordinary kriging the traditional variogram or the residual maximum likelihood (REML) variogram?
Does ordinary kriging (OK) include the trend, or is it independent of the trend?
I have to identify overlapping polygons, with one of the datasets containing thousands of polygons. I am using the sf package and its st_intersects function, as:
st_filter(dataframe1, y = dataframe2, .predicate = st_intersects)
which takes about 6 s to compare each polygon of the first dataframe, and hence days for my current dataframes.
The only way I have found so far to make it feasible is to first remove some polygons manually and then split the dataframe before running the intersection.
Would anyone have advice on how to make it faster?
Thanks a lot!
Assume a mobile air pollution monitoring strategy using a network of sensors that move around the city, specifically a network of sensors that quantify PM2.5 at a height of 1.5 meters that lasts about 20 minutes. Clearly, using this strategy, we would lose temporal resolution to gain spatial resolution.
If we would like to perform spatial interpolation to "fill" the empty spaces, what would you recommend? What do you think about it? What would be your approximations?
As a non-statistician, I have a (seemingly) complicated statistical question on my hands that I'm hoping to gather some guidance on.
For background, I am studying the spatial organization of a biological process over time (14 days), in a roughly-spherical structure. Starting with fluorescence images (single plane, no stacks), I generate one curve per experimental day that corresponds to the average intensity of the process as I pass through the structure; this is in the vein of line intensity profiling for immunofluorescence colocalization. I have one curve per day (see attached) and I'm wondering if there are any methods that can be used to compare these curves to check for statistical differences.
Any direction to specific methods or relevant literature is deeply appreciated, thank you!
Edit to add some additional information: the curves to be analysed will be averages of curves generated from multiple biological replicates, and will therefore have error associated with them. Across the various time points and conditions, the number of values per curve ranges roughly from 200 to 1000 (one per pixel).
The article "Ethnographic Knowledge in Urban Planning – Bridging the Gap between the Theories of Knowledge-Based and Communicative Planning", published on 4 November 2021, has serious ethical problems, e.g., plagiarism, authorship, and duplication.
Are these datasets available through spatial analysis tools like ArcGIS? Are they available in libraries of programming tools like R or Python? Are they available at official websites from the Colombian government? Any reference to the specific links or libraries is highly appreciated.
Please help !
How can I extend a GWR model to a GWRK model? I have obtained the residuals from the GWR, but I don't understand how to add residual kriging to extend the GWR model.
I am trying to use the variogramST function in R (gstat) for spatio-temporal kriging.
The values I am working with are monthly, while in variogramST the predefined "tunit" options are hours, days, or weeks.
I would appreciate it if anyone could tell me how to change it to months.
I have to prepare spatial maps of various soil properties, and I am confused about whether fitting a semivariogram is compulsory or not.
I have some environmental covariates derived from the digital elevation model (slope, gradient, channel network distance, etc.) in raster format.
I want to identify areas of similarity between the covariates and somehow identify the smallest possible size area (or areas) to serve as a reference area.
- Soil data points will be collected to create predictive models in this reference area.
- The predictive models developed in the reference areas should fit when extrapolated to regions outside the reference area.
- Therefore, the covariates will cover this external area.
I am following the way a previous paper (PMID: 30948552) treated their spatial transcriptomics (ST) data. It seems they combined the expression matrices (not stating whether normalized or log-transformed) of different conditions, calculated a gene–gene similarity matrix (Pearson rather than Spearman), and finally obtained gene modules (clustered by L1 norm and average linkage) with differential expression between conditions.
So I have several combinations of methods to imitate their workflow.
For the expression matrix, I have two choices: a merged count matrix from the different conditions, or a normalized data matrix (the default of Seurat's NormalizeData function, log((count / total count of spot) * 10000 + 1)). For the correlation, I have used Spearman or Pearson to calculate a correlation matrix.
But I got stuck.
When I use the count matrix, with either correlation method, I get a heatmap with a mostly positive pattern, which looks strange. With the normalized data matrix (Pearson only), I get a heatmap with a sparse pattern, which is also indescribably strange.
- Which combination of data and method should I use?
- Would this workflow weaken the gene correlations, since some genes may be correlated only under specific conditions?
- What do you think of my workflow overall?
Looking forward to your reply!
I am sorry that I was not polite and did not make the question clear in my last message.
I am learning FRAGSTATS, and I have two questions: (1) how to create a user-provided points raster or table, and (2) whether the random-points procedure is repeatable.
(1) How to create a user-provided points raster or table
In FRAGSTATS, user-provided points must be supplied as either a grid or a table. The manual's example only provides the grid and table files; it does not describe how to generate them from a point vector file or from a table of coordinates. I tried rasterizing the point vector file with the Rasterize (vector to raster) tool in QGIS's GDAL module, but the cells of the generated grid do not exactly match the cells of the land-cover raster I want to analyse, as in the attached figure.
(2) Is the random-points procedure repeatable?
FRAGSTATS has a random-points procedure, and I wonder whether it is repeatable — in the way that the set.seed() function in R fixes the random seed, so that you get exactly the same random numbers/points every time you run your code and your results are reproducible despite the randomness involved. Is there any setting in FRAGSTATS to handle this?
Thank you and best regards.
I have prepared these two maps of electrical conductivity (EC), one with IDW and one with kriging. Which one should I choose? The sampling sites are also marked on the maps.
You can see that the kriging map looks quite strange!
I am in a fix. Please help!
So, I'm an R user currently working on a spatial analysis of panel data. From my experience over the past few days, I understand that the spml function in the splm package imposes strict requirements before it will run (e.g., balanced panels and complete cases).
Recently, I tried running the plm function instead, with a self-defined spatially lagged dependent variable. Will this yield the same result? Are there important spatial aspects left unaccounted for if I use plm instead?
Looking forward to responses. Thank you for your attention!
Can we draw any statistical and spatial relationship between fine-fraction parameters of road dust (Pd, Zn, Cr, and Ca) and some key air pollutants (such as PM2.5, CO, NO, CH4, O3, HCHO, BC, NOx, SO2)? Has anyone done a spatial analysis using both of these datasets?
I have both datasets and would like to do some statistical and spatial analyses. Kindly suggest whether there is any possibility of drawing a relationship.
I have average household-income data for the 26 statistical regions of Turkey (NUTS 2). I need to break this down and estimate it at the city level (81 subdivisions, NUTS 3) or town level (LAU) to assess the economic conditions of each local area. I have demographic data at the city and town levels. I did some research in the small area estimation field, but I couldn't find a method that addresses exactly the research question above. Can you please recommend any book, article, or method for this case? Thank you very much.
An inquiry about interpolation model validation.
I ran IDW, ordinary kriging, and EBK interpolations, but the R² values for all these models (including the semivariograms for OK) rarely exceed 0.1 (sometimes 0.007 is the highest).
Is a model with an R² of 0.007 good enough for publication? I think this value indicates very poor prediction, but none of the models shows a decent R².
On the other hand, the RMSSE is really close to 1, and the mean error is around 0.009.
What should I do now?
What could be the reason? Am I missing something, or should I try more complex models? Is the spatial distribution controlled more by extrinsic factors (e.g., human interference)?
[120 samples were collected randomly from the study area.]
I would gladly appreciate any suggestions.
- Dacope_Paper_EC_2020.jpg (2.54 MB)
- Dacope_Paper_SOC_2020.jpg (2.29 MB)
- Dacope_Paper_STN_2020.jpg (2.29 MB)
- Dacope_Paper_CN_2020.jpg (1.86 MB)
I am running Geographically Weighted Regression (GWR) on a categorical dependent variable, so my model is essentially a geographically weighted logistic regression.
I have multiple independent variables, some numerical and some categorical.
While interpreting the results for numerical variables is straightforward, I want to know how to identify the reference level of the categorical independent variables and how to interpret their coefficients.
Say I code males as 0 and females as 1: should the coefficients then be interpreted for females, since they are coded 1? What if I coded males as 1 and females as 2?
I am looking at spatial correlation patterns between grid cells with increasing distances to each other (please see the attached figure 1). I have divided the distance into 11 bins, and in each bin, I have about 150 Pearson correlation values. The violin plot shows the distribution of correlation values within each bin.
Now I would like to estimate up to which distance we can consider the correlation significant.
In other words, how can I determine that correlations below, e.g., 0.2 are not significant?
For example, is it statistically meaningful to calculate the 2-sigma interval for each bin and, assuming correlations below zero are not significant, remove any bin (distance) whose 2-sigma interval crosses that threshold (figure 2)?
Or, alternatively, is the t-test for the Pearson correlation coefficient applicable in this case?
I am looking forward to hearing your ideas about it.
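On the second option: the standard t-test for a Pearson correlation uses t = r·√(n−2)/√(1−r²) with n−2 degrees of freedom, where n is the number of paired observations behind each correlation (not the ~150 correlation values per bin). Inverting it gives the smallest |r| that is significant at a given level, which could serve as a distance-independent threshold. A sketch — n = 150 is an assumed sample size here, purely for illustration:

```python
import math
from scipy import stats

def r_critical(n, alpha=0.05):
    # Smallest |r| significant at level alpha (two-sided) for a Pearson
    # correlation from n paired observations, inverting
    # t = r * sqrt(n - 2) / sqrt(1 - r**2) with n - 2 df.
    t_crit = stats.t.ppf(1.0 - alpha / 2.0, df=n - 2)
    return t_crit / math.sqrt(t_crit**2 + n - 2)

print(round(r_critical(150), 3))  # threshold shrinks as n grows
```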
Despite going through the available literature and getting a basic understanding of how kernel density estimation works when analysing home ranges, I cannot make Biotas work. Does anybody know of more detailed guidelines/manuals/training videos, etc., for Biotas? The manual provided on their website says very little about navigating the software and setting the required parameters. For instance, when choosing a window width (LSCV), I have no idea what units are used. In my case, I have a limited dataset — say, fewer than 100 GPS fixes of the little owl, which moves at a small scale. Moreover, it often roosts in the same spots, so my points are dense and sometimes even overlaid. I would like to avoid manipulating the raw data by excluding those overlapping points from the analyses, because I still think they carry important information. Would it still be possible to fit a fixed KDE with LSCV under such conditions?
Any help will be highly appreciated. Thanks a lot.
If I want to calculate, say, the global Moran's I index over observations dispersed in administrative units, to measure the level of clustering of a certain variable, is it appropriate to use the geographical data of the administrative units themselves? Observations within the same administrative unit would then have exactly the same position, and I do not want to measure clustering of observations across administrative units (i.e., I set the observations to be neighbours within each administrative unit). Is a spatial approach the correct way to examine whether observations in admin unit A have higher levels of the measured variable, and are thus clustered, compared to admin unit B, which has more of a mix of levels of the variable?
I am trying to calculate the Stream Power Index (SPI) in ArcGIS. I checked many videos and documents, but there is no certainty about the formula to use in the raster calculator, so I wrote the formulas below to learn which one is right. Each one produces different results.
DEM cell size = 10 m
SPI_1 --> "flow_accumulation" * 10 * Tan("slope_degree" * 0.017453)
SPI_2 --> Ln("flow_accumulation" * 10 * Tan("slope_degree" * 0.017453))
SPI_3 --> Ln("flow_accumulation" + 0.001) * (("slope_percent" / 100) + 0.001)
SPI_4 --> Ln((("flow_accumulation" + 1) * 10) * Tan("slope_degree"))
SPI_5 --> "flow_accumulation" * Tan("slope_degree")
Also, when creating the slope raster, which option should I choose: DEGREE or PERCENT_RISE?
And the last question: when I calculate SPI with the formulas above, I get SPI maps that include negative values. Is that correct? Are negative values a problem or not?
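On the negative values: SPI is usually taken as ln(A_s · tan β), with A_s the specific catchment area (contributing cells × cell size) and β the slope in radians (hence the 0.017453 degrees-to-radians factor), so the logarithm is negative wherever A_s · tan β < 1 — typically flat, low-accumulation cells. That is expected behaviour, not an error. A numpy sketch with made-up DEM derivatives illustrating this:

```python
import numpy as np

# Hypothetical 10 m DEM derivatives (invented values for illustration).
cell = 10.0
facc = np.array([[1.0, 50.0], [400.0, 9000.0]])    # cells draining in
slope_deg = np.array([[2.0, 8.0], [15.0, 30.0]])

# Specific catchment area per unit contour width; +1 avoids ln(0)
# on ridge cells where flow accumulation is zero.
As = (facc + 1.0) * cell

# SPI = ln(As * tan(beta)), slope converted from degrees to radians.
spi = np.log(As * np.tan(np.radians(slope_deg)))

# Negative entries simply mean As * tan(beta) < 1.
print(spi)
```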
Which model is better for investigating an outcome–exposure relationship spatially?
· Data are counties
· Dependent variable is incidence per county
· Independent variable is the median of XXX per county
· Spearman's correlation: significant and negative
· Spatial autocorrelation of incidence values: close to zero
· Local clusters: detected, but the majority are single counties
· Geographically weighted regression: local coefficients a mix of positive and negative, but the global regression coefficient negative
With this background, should we go for geographically weighted regression or a Bayesian convolution model with structured and unstructured random effects?
We intend to test this hypothesis in Morocco within our research project, but we wonder whether the error in the absolute altitude could prevent us from using these data for this goal.
I have a point shapefile of 12 islands (created using centroids in ArcMap) with attributes for macrobenthic species and their abundance. I would like to analyse the distribution of these species using ArcMap. Would kernel density be suitable for analysing the species distribution? If not, which method should be used instead?
I am concerned about the number of points (too few) and the long distances between islands.
I am computing a Taylor series expansion in which the distance of an object from the origin (0,0) is computed. I am expressing the distance in terms of the sum of the position and velocity × time of the object, and I wrote only the first two terms of the Taylor series expansion.
Please check whether the expansion in the attached document is correct.
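As a generic check (using my own symbols, not necessarily those in the attached document): for position \(\mathbf{p}\) and velocity \(\mathbf{v}\), the distance from the origin at time \(t\) and its first-order Taylor expansion about \(t = 0\) are

```latex
d(t) = \lVert \mathbf{p} + \mathbf{v}\,t \rVert
     = \sqrt{\lVert \mathbf{p} \rVert^{2}
             + 2\,(\mathbf{p} \cdot \mathbf{v})\,t
             + \lVert \mathbf{v} \rVert^{2} t^{2}}
\;\approx\; \lVert \mathbf{p} \rVert
          + \frac{\mathbf{p} \cdot \mathbf{v}}{\lVert \mathbf{p} \rVert}\,t ,
```

since \(d(0) = \lVert \mathbf{p} \rVert\) and \(d'(0) = (\mathbf{p} \cdot \mathbf{v}) / \lVert \mathbf{p} \rVert\). A correct two-term expansion should match this term by term.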
Fitted values: prediction on the dataset used for fitting the model.
pred1.m0 <- predict(m0, newdata = wheatdata)
pred1.m0[1:5, ]
pred2.fit.SpATS <- predict(m0, which = "geno")
Are the output values of the two commands above BLUEs or BLUPs?
I am studying the effect of the land-use surrounding a location on the abundance of aphids on this location. To do this I fit a linear model with the land-use as independent variable and the abundance of aphids as the dependent variable. To check for spatial autocorrelation I plot the correlogram with the Moran I of the model residuals in function of the lag distance.
However, I have multiple years of data, where the aphids have been observed each year together with the surrounding land use. How can I account for this temporal effect? Should I incorporate a 'Year' variable in the linear model, and can I then just look at the correlogram of the whole dataset?
Thanks in advance.
I am working on a project on the spatial variability of soil EC, and I am a bit puzzled about the full sequence of steps for regression kriging and semivariogram output in QGIS.
I am studying walking behaviour with agent-based modelling. Do you consider AnyLogic suitable for this, or would you recommend another software?
Also, do you know of any example in which walking behaviour was analysed with AnyLogic?
I have origin points in different areas, and within these areas specific values in raster cells (at 50 m resolution). My goal is to sum the values of all previously traversed cells into each cell along the path.
This movement can be calculated, for example, with Least Cost Path; with this tool I created a backlink raster, which shows me the movement.
But when I use the Least Cost Path tool to accumulate values, the respective values are scaled by the grid cell size.
Does anyone have an idea how to accumulate only the actual values, without the grid-size factor?
I tried this with flow accumulation, but some cells get no value because no other cell drains into them, even though they need the value passed on from the prior cell (or the cell's own value) through which the path is "moving"/"flowing".
I hope someone can help me out with this issue.
I have attached the OSM map of Pannipitiya, Sri Lanka. Looking at the map, what kind of geographical questions can you ask?
The following came to my mind:
1. Which places can house dwellers reach on foot within 1 minute (600 m ?)?
2. What is the calmest and quietest place for meditation?
Hope you are doing great.
I'm conducting a spatial analysis of an expressway in Malaysia; however, I don't know how to determine the length of the hotspots. The hotspot clustering uses the Getis-Ord Gi* method with a fixed bandwidth of 1000 m, and sometimes a hotspot appears in one location isolated from the others on the network, as in the attached image.
Could someone explain how I can measure and determine the length of the hotspots?
Would you please help me in this regard?
Fathi Salam Mohammed Alkhatni
As a part of my PhD, I conducted a study to assess health inequities in the Amaravati capital region of Andhra Pradesh using two composite indices built from health determinant indicators and health outcome indicators.
Health outcome indicator data were available at the sub-district level; these were interpolated to create a heatmap of the health outcome index. Health determinant data were available at the village level, so I created a choropleth map from the health determinants index.
The interpolated health outcome index map was then overlaid on the choropleth map of health determinants. This highlighted some interesting findings, i.e., areas of concern (villages). The colour combinations created by overlaying the two layers revealed areas with poor health outcomes and poor health determinants, and areas with poor health outcomes despite better determinants.
Kindly check these files and give your valuable opinions. Can this type of analysis be used to highlight areas with health inequities or not? Please comment on the method used and the results shown in the overlaid map.
I have 39 datasets of georeferenced disease severity data for which I would like to conduct a spatial analysis. As a part of this analysis, I would like to compare the amount of spatial autocorrelation present in each dataset.
For disease incidence data (count-based data), there is the SADIE procedure, which is widely used for this kind of task. In contrast, for disease severity data (continuous data), I am not aware of a statistic that can be or is used in that kind of way. The most popular statistic, Moran’s I, seems to be solely used in an inferential kind of way (presence or absence of spatial autocorrelation).
I am aware that the spatial weights matrix used for calculation of Moran’s I complicates the comparison between datasets. But, given a somewhat constant spatial weights matrix between datasets (for example Inverse Distance Weighted?), wouldn’t it be possible to compare the results? In addition, this GeoDa video https://www.youtube.com/watch?v=_J_bmWmOF3I seems to indicate that a comparison based on standardized z-values is in principle possible. Nevertheless, I am not aware of a published study in which this kind of analysis was carried out.
Therefore I would like to ask: Does anyone know of such studies? Or maybe of another statistic that would be better suited for this kind of purpose?
Any suggestions would be greatly appreciated.
Hi, I managed to estimate SEM, SAR, and SDM specifications for spatial panel models. However, to test whether the SDM can be simplified to SAR or SEM, a Wald or LR test is needed. I would be thankful if anyone could share the commands/instructions for the Wald test and LR test for spatial panels using Stata. Thanks.
I have an NYC taxi trips dataset containing multiple attributes such as pickup and dropoff coordinates, datetimes, trip distance, and so on, indexed by the "tpep_pickup_datetime" column as datetime64[ns]. I extracted some features from the pickup and dropoff datetime columns, such as month, day, and hour. I am focusing on datetime and location to do trip time prediction.
The problem I face is converting the dataset into fixed time-series intervals (1 or 10 minutes, for example; 1-minute bins give 1440 bins per day) ready for LSTM input. To state the essential point: I tried resampling the dataset based on pickup time, but the dataset contains many features, so it is difficult to convert it into a sequence of fixed-length intervals, because the data contain many trips at the same time (e.g., Sunday 8:00–9:00 am has approximately 2990 trips) but from different places.
So the main problem, briefly, is: "How do I convert or prepare a taxi dataset into a time series with fixed intervals?"
Thank you in advance.
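One way to get fixed-interval LSTM input is to aggregate rather than reshape: resample the trips onto a fixed 10-minute grid, so that the many simultaneous trips collapse into per-bin features (trip count, mean distance, etc.). A pandas sketch with a few invented rows (column names follow the TLC schema):

```python
import pandas as pd

# Tiny invented extract of the trips table.
df = pd.DataFrame({
    "tpep_pickup_datetime": pd.to_datetime(
        ["2020-01-05 08:01", "2020-01-05 08:05", "2020-01-05 08:12"]),
    "trip_distance": [1.2, 3.4, 2.0],
})

# Aggregate trips into fixed 10-minute bins: each bin becomes one
# time step, with trip count and mean distance as its features.
g = df.set_index("tpep_pickup_datetime").resample("10min")["trip_distance"]
binned = pd.DataFrame({"trips": g.size(), "mean_dist": g.mean()})

# Bins with no trips get count 0 and NaN mean; fill as appropriate.
binned["mean_dist"] = binned["mean_dist"].fillna(0.0)
print(binned)
```

The spatial dimension can be kept by grouping on a zone column as well (e.g., `groupby([pd.Grouper(freq="10min"), "zone"])` and unstacking zones into columns), giving one multivariate vector per time step.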
I am a novice researcher working on a project analysing water quality data from different water sources such as dams, rivers, and springs, as well as from secondary sources such as water treatment plants and households. I have collected the GPS coordinates of the sampling points, which appear in the attached image, and I am having difficulty finding the right methodology for analysing these results spatially. The microbiological parameters to be analysed include organisms such as E. coli, Salmonella spp., Shigella spp., Giardia spp., and Entamoeba histolytica.
Please help, and be kind :)
I have attached an image showing the sample locations; additionally, the study area covers six quaternary drainage basins.
Dear all, I am a beginner in Bayesian spatial modelling. I have structured and unstructured random effects in my formula. I extracted and mapped the posterior means, but I can't understand how to interpret a high/low value of the structured and unstructured effects.
The R code is as follows:
library(spdep)   # poly2nb, nb2INLA
library(INLA)
library(tmap)

nb <- poly2nb(map)
nb2INLA("map.adj", nb)  # write the adjacency file read on the next line
g <- inla.read.graph(filename = "map.adj")
map$re_u <- 1:nrow(map)
map$re_v <- 1:nrow(map)
hyperprior2 <- list(theta = list(prior = "loggamma", param = c(1, 0.0005)))
# re_u: spatially structured (Besag) effect; re_v: unstructured (iid) effect
formula2 <- obs ~ NDVI +
  f(re_u, model = "besag", graph = g, scale.model = TRUE, hyper = hyperprior2) +
  f(re_v, model = "iid", hyper = hyperprior2)
res2 <- inla(formula2, family = "poisson", data = map, E = E,
             control.predictor = list(compute = TRUE),
             control.compute = list(dic = TRUE))
map$RR <- res2$summary.fitted.values[, "mean"]
map$LL <- res2$summary.fitted.values[, "0.025quant"]
map$UL <- res2$summary.fitted.values[, "0.975quant"]
map$spatial <- res2$summary.random$re_u$mean     # structured (besag) effect
map$nonspatial <- res2$summary.random$re_v$mean  # unstructured (iid) effect
tm_shape(map) + tm_polygons("RR", alpha = 0.50)
tm_shape(map) + tm_polygons("spatial", alpha = 0.50)
tm_shape(map) + tm_polygons("nonspatial", alpha = 0.50)
Hello dear researchers, is there any source (other than the Bangladesh Meteorological Department) from which to obtain, for free, the daily meteorological data (air quality, rainfall, temperature, etc.) of the BMD stations of Bangladesh for the last 10 years?
I am not a spatial analyst or an expert in a related field, which is why I decided to contact you for advice or help in extracting the maximum information from the data I have.
Data (2 trials): routes crossing the study area (about 1250 sq. km), with about 1000 points along the lines, each representing a species-occurrence event, and a column with the number of occurrences (z data) attached to the x, y coordinates.
Which exploratory data analysis tools should I try in order to describe the pattern? I have ideas such as visualising the standard-distance circle (centrography) or a box plot of the events, for example. Maybe kernel density? And what can I do with the z data — maybe interpolation? I think it would be easier to work with transects, but I don't know what I can do with this kind of irregularly shaped data. Can I do tests or predictions with data like this, or are the samples too small and unrepresentative? For example, I have a hypothesis that the distribution of events is not random, and that the binomial probability and number (z) of events are higher in the central-southern part because of certain factors.
I look forward to your suggestions on which tools and tests I can use and which concepts I should learn about.
For now I am using R and QGIS for visualisation and analysis.
I also apologise if my English confuses you.
At the moment I am designing a spatial gene expression experiment using the 10X Visium assay. There are a few papers out there that have used this assay. There are also several packages available to analyze the data (e.g. Seurat). However, if I am correct, none of these methods take biological replicates into account. In other words, is it possible to align different slices of biological replicates and then perform differential expression analysis to compare conditions?
I am checking for spatial autocorrelation in my dataset. It comprises the ID of the nests, the longitude and latitude for each of the nest boxes and the number of fledged chicks for each nest box. I want to know if reproductive success is spatially autocorrelated in our bird colony.
For this, I computed the distance matrix for the nest boxes, giving the distance between each nest box and all the others. I then designed distance bands (distance lags) to calculate Moran's I for each lag specifically. As I have data for several years (2014–2020), I wonder if there is any way to obtain a mean Moran's I over all the years, instead of calculating an index for each year separately.
It is my first time doing these types of analysis so any advice would be very much appreciated!!
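One straightforward option, keeping the same distance-based weights W for every year, is to compute Moran's I per year and report the mean alongside the per-year values (so that an anomalous year stays visible). A self-contained numpy sketch with invented coordinates and fledgling counts; the yearly vectors are constant shifts of one base vector, which leaves Moran's I unchanged and makes the example easy to check:

```python
import numpy as np

def morans_i(x, W):
    # Global Moran's I: (n / S0) * z'Wz / z'z
    z = x - x.mean()
    return len(x) / W.sum() * (z @ W @ z) / (z @ z)

# Invented example: 5 nest boxes, inverse-distance weights, zero diagonal.
rng = np.random.default_rng(0)
xy = rng.random((5, 2))
d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
with np.errstate(divide="ignore"):
    W = np.where(d > 0, 1.0 / d, 0.0)

# Invented fledgling counts per year (constant shifts of one vector).
base = np.array([4.0, 2.0, 0.0, 5.0, 1.0])
years = {yr: base + (yr - 2014) for yr in range(2014, 2021)}

I_by_year = {yr: morans_i(v, W) for yr, v in years.items()}
mean_I = float(np.mean(list(I_by_year.values())))
print(I_by_year, mean_I)
```

In R, the same loop can be run with spdep::moran.test per year using a single listw object; if sample sizes differ between years, averaging the standardized z-scores instead of the raw I values is a common alternative.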