GIS Analysis - Science topic
Explore the latest questions and answers in GIS Analysis, and find GIS Analysis experts.
Questions related to GIS Analysis
We are looking to identify characteristics of a river from drone images in ArcGIS pro.
I have been able to successfully delineate the wetted channel, riparian zone and wood material based on hue, saturation and value in ImageJ, but we are hoping to find a similar tool in ArcGIS. I have investigated the classification tools, and they seem to be either designed for specific jobs (e.g. NDVI), reliant on a training dataset (e.g. supervised classification), or based on segmenting into groups (but this produces too many of the wrong groups, and we can't specify the colour characteristics of each group).
We are looking for a tool that lets you threshold the images based on known RGB or HSV values, but maybe this doesn't exist?
Thanks for your help!
Rich
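I am not aware of a built-in fixed-threshold tool either, but outside the ArcGIS toolboxes this kind of known-value HSV thresholding is simple to script. A minimal Python sketch using only the standard library; the threshold ranges and the "wetted channel" interpretation below are invented placeholders, not calibrated values:

```python
import colorsys

def hsv_mask(rgb_pixels, h_range, s_range, v_range):
    """Return a boolean mask for pixels whose HSV values fall in the given ranges.

    rgb_pixels: list of (r, g, b) tuples with components in 0..255.
    h_range, s_range, v_range: (low, high) tuples; h, s and v are in 0..1.
    """
    mask = []
    for r, g, b in rgb_pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        mask.append(h_range[0] <= h <= h_range[1]
                    and s_range[0] <= s <= s_range[1]
                    and v_range[0] <= v <= v_range[1])
    return mask

# Hypothetical thresholds for a "wetted channel" class (pure illustration):
pixels = [(30, 60, 120), (200, 180, 150)]   # a bluish pixel and a sandy pixel
water = hsv_mask(pixels, (0.5, 0.75), (0.3, 1.0), (0.1, 0.9))
```

The same per-pixel range test drops straight into NumPy/arcpy for whole rasters; the point is only that thresholding on known HSV values needs nothing more than this.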
How can I perform Least Cost Path Analysis taking only LULC into consideration?
We need to calculate the influence of 3 different sources of water intake (river, eutrophicated bay and sea) on the bay, located in the middle of these sources. You can see a map with explanations as the 1st attachment (Slide1).
We have a boat with a flow-through system of sensors, so we measure different parameters every 5 sec (approx. every 60 m). Is it possible to treat the narrow streams of water intake as 3 point sources of "pollution" and calculate the influence of each point on the bay? For example, to understand which source is more responsible for the increased level of pollution: river, bay or sea? Could this be done with kriging in ArcGIS Pro, and then perhaps visualized with arrows, where the length of each arrow represents the strength of the influence (2nd attachment, Slide2)?
Or maybe there are other suitable ways to do so? By the way, we don't have information about the speed of the flow in and out of the bay, however, it's probably possible to get it later. But we do have lots of other parameters like salinity, turbidity, fDOM, temperature etc.
I would really appreciate your help and suggestions on what articles I can read about it.
Thank you!
Based on your expertise and experience,
What are the Python packages that are commonly utilized for tasks related to GIS, remote sensing, and spatial data science in 2022?
and/or
What are the Python packages that you recommend for use in GIS, remote sensing, and spatial data science applications in 2023?
Please consider the following domains as reference:
## GIS ##
- Data management and processing
- Geospatial analysis
- Map production
- Web mapping
- etc
## Remote Sensing ##
- Image processing
- Feature extraction
- Change detection
- Image analysis
- etc
## Spatial Data Science ##
- Spatial statistics and modeling
- Machine learning
- Data visualization
- etc
Both Ripley's K-function and Moran's I measure statistically significant clustering within data. However, how can we know which method performs better for our data?
What are the advantages and disadvantages of each method which can help to choose a better method?
I would like to obtain a risk map for oak habitat using several ecological variables related to climate change.
Thanks for your help.
Dear community,
At present I am in a time-sensitive situation: my doctoral dissertation is scheduled to be submitted in the next three months, but I still need a publication to meet the requirements.
My PhD research includes the application of remote sensing and GIS for groundwater exploration.
I prepared a manuscript on the integration of RS, GIS, and AHP techniques to map groundwater potential zones.
Knowing that journals can take a long time to respond, which good-quality, free journals respond or review relatively quickly?
Thank you,
I am looking for an algorithm or GIS software that, for a solar panel or a larger area equipped with solar panels, determines which buildings in the vicinity are affected by glare from sunlight reflected off the panels.
Inputs are 3D planar polygons that describe the panel geometries, plus a surface model of the earth's surface. The algorithm's output would be the locations affected by the glare effects.
Does anyone know of algorithmic approaches or software solutions?
I have two incidents (pre and post incidents) for which NDVIs will be calculated and used to detect any change by subtracting. Now before calculating NDVI, I want to normalize radiometrically the post images with respect to the pre images using regression which uses pseudo-invariant target (PIF). I am looking to do this whole process in Google Earth Engine.
My questions are:
- Can anybody please kindly share the script?
- During selecting the PIFs, should I select them from the reference/base images or the target image?
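I cannot share a full GEE script, but the core of PIF-based relative normalization is an ordinary least-squares fit between the two images at the PIF locations (in GEE this is typically done with a linear-fit reducer), and the PIFs must be identifiable in both the reference and the target image, since the regression maps target radiometry onto the reference. A plain-Python sketch of that regression step, with invented band values:

```python
def pif_linear_fit(target_vals, reference_vals):
    """Least-squares gain and offset so that gain*target + offset ~= reference.

    PIFs are sampled in both images; the fitted line maps the post-event
    (target) radiometry onto the pre-event (reference) radiometry.
    """
    n = len(target_vals)
    mx = sum(target_vals) / n
    my = sum(reference_vals) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(target_vals, reference_vals))
    sxx = sum((x - mx) ** 2 for x in target_vals)
    gain = sxy / sxx
    offset = my - gain * mx
    return gain, offset

# Invented PIF samples: reference happens to equal 0.5*target + 0.03
target = [0.10, 0.20, 0.30, 0.40]
reference = [0.08, 0.13, 0.18, 0.23]
gain, offset = pif_linear_fit(target, reference)
normalized = [gain * v + offset for v in target]
```

After the fit, the gain/offset is applied to every pixel of the target image, so its PIF pixels match the reference image's radiometry.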
Hello, all:
I am looking for open-source downscaling algorithms or methods applicable to high-resolution remote sensing data (such as land cover / vegetation type).
Could somebody help me out? Appreciate that!
I want to get the coordinates of the vertices of a set of single-part polygons (all in the same .shp vector layer) using R. I would like to have a list with the x and y coordinates per polygon. Do you know how this can be obtained?
Thank you in advance!
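In R this is commonly done with the sf package: read the shapefile, then extract a coordinate matrix that carries a polygon index alongside each vertex (if I remember the sf API correctly, `sf::st_coordinates` returns such a matrix). The reshaping of that matrix into a per-polygon list can be sketched language-agnostically; here in Python, with invented rows, just to show the target structure:

```python
def coords_by_polygon(coord_rows):
    """Group (x, y, polygon_id) rows into {polygon_id: [(x, y), ...]}.

    Mimics reshaping a vertex matrix in which each row carries the vertex
    coordinates plus the index of the polygon it belongs to.
    """
    out = {}
    for x, y, pid in coord_rows:
        out.setdefault(pid, []).append((x, y))
    return out

# Invented vertex rows for two polygons (ids 1 and 2)
rows = [(0, 0, 1), (1, 0, 1), (1, 1, 1), (5, 5, 2), (6, 5, 2)]
per_poly = coords_by_polygon(rows)
```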
What do you consider are the implications of Big Data on urban planning practice?
How can Big Data optimize urban planning and design?
What can be regarded as Big Data in the context of urban planning and design?
What are the application opportunities of Big Data in urban planning?
Dear Scholars,
Assume a mobile air pollution monitoring strategy using a network of sensors that move around the city: specifically, sensors that quantify PM2.5 at a height of 1.5 meters, with each measurement run lasting about 20 minutes. Clearly, using this strategy we would lose temporal resolution to gain spatial resolution.
If we would like to perform spatial interpolation to "fill" the empty spaces, what would you recommend? What do you think about it? What would be your approximations?
Regards
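For mobile PM2.5 transects, inverse distance weighting (IDW) and kriging are the usual starting points; whichever is chosen, it is worth remembering that each interpolated surface mixes roughly 20 minutes of measurements. A minimal IDW sketch in plain Python, with invented sample values, just to make the weighting explicit:

```python
import math

def idw(sample_pts, sample_vals, query_pt, power=2.0):
    """Inverse-distance-weighted estimate at query_pt from scattered samples."""
    num = den = 0.0
    for (x, y), z in zip(sample_pts, sample_vals):
        d = math.dist((x, y), query_pt)
        if d == 0.0:
            return z                      # query coincides with a sample point
        w = 1.0 / d ** power
        num += w * z
        den += w
    return num / den

# Invented PM2.5 samples (ug/m3) along a short mobile transect
pts = [(0.0, 0.0), (100.0, 0.0)]
vals = [12.0, 20.0]
mid = idw(pts, vals, (50.0, 0.0))         # equidistant, so a simple average
```

Kriging replaces the fixed 1/d^p weights with weights derived from a fitted semivariogram, which is why it needs enough point pairs per lag to be reliable.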
I am using a NetCDF file comprising 12 sub-NetCDF images of different durations, as shown in the attached figure. I have tried to average/separate them using every tool, software package, and source I could find, including Origin, as shown in this link: https://www.youtube.com/watch?v=14-sLHOaaOg&ab_channel=OriginLabCorp. I have used the Make NetCDF Raster Layer tool, but it won't import the sub-NetCDF files in batch; it loads the images one at a time. I have also tried ArcGIS supporting tools (https://r-video-tutorial.blogspot.com/2016/07/time-averages-of-netcdf-files-from.html) as well as Python and R code, but I failed to separate the sub-NetCDF files. I am working with large datasets, and uploading images one by one for multiple parameters and multiple durations makes the work tedious and time-consuming. My work requires the final image as a raster file: either all images extracted at once, or an average of all images in one. Please recommend a solution.
I'm doing research for my degree thesis in architecture on the urban heat islands of the city of Naples - Italy.
I'm reclassifying the land surface temperature map in GIS, and I am looking for a method to classify ground temperatures precisely, according to classes that allow me to locate the heat islands.
I am working on Forest Canopy Density. There is a parameter called "Scaled Shadow Index (SSI)" used when computing Forest Canopy Density. In most of the papers I found, SSI is calculated by linearly transforming the Shadow Index. I have computed the Shadow Index, but I am not sure how to compute the Scaled Shadow Index. Kindly help me out. Moreover, if I am using Landsat 5 and 8 surface reflectance images for FCD mapping, and the reflectance values range from 0 to 1, is it still mandatory to normalize these surface reflectance data before calculating vegetation indices?
Dear colleagues, I'm trying to calculate the land surface temperature for the Bakun Catchment, Sarawak, Malaysia, using different Landsat satellite images from different years with the help of the Google Earth Engine Code Editor. I'm not sure why I'm getting incomplete results for just this region, and the pixel values are too high in some years (please check the attached screenshots). For NDVI and NDWI, the results do cover the whole region.
I tried the same code for other regions, in Pakistan, and the results are fine.
So my question is: does the Bakun Catchment, Sarawak, Malaysia require special consideration, are there issues with the satellite data, or is something wrong with the code? Also, would this region's LST really go down to -3 to -10?
- I am trying to use the DSAS tool (version 5) with both ArcMap 10.4.1 and ArcMap 10.8 on Windows 10. I am able to output the transects, but when I try to calculate the statistics I get an error message telling me to check the DSAS log (photo attached), which does not clearly indicate what the error is. How can I solve this problem? I have re-created the baseline multiple times and get the same error in both versions of ArcMap (10.4 and 10.8), even though I followed the guide and am using English (US) with the mm/dd/yyyy date format.
Dear Experts,
What is the best method and resource to create a detailed landcover map of an urban area?
I need these classes: green space, water body, farmland, bare land, building, and road.
I need the land cover maps of 2000, 2005, 2010, 2015, and 2020.
Thank you
Hello, I am a senior student at university, currently conducting a study on changes in urban structure (in terms of commercial activities) after a certain transportation project.
After the project, several pedestrian roads will be installed (corresponding to axial lines in space syntax analysis), so the number of people in the area is expected to grow considerably.
If I can quantify the difference in pedestrian numbers before and after the project's commencement, I will eventually use it as a variable in a model (e.g. regression) analyzing commercial activity.
I've been studying space syntax theory and notions but couldn't find materials with the technical information I need. As I understand it, the procedure is AutoCAD or QGIS > depthmap > R, but I have no idea how to do this in detail, and above all, I don't know how to use AutoCAD. So can anyone tell me how to analyze space syntax with QGIS? I've already downloaded depthmapX 0.35 and the QGIS Space Syntax Toolkit but don't know how to proceed. I'm despairing now.
So please lend a hand to this poor novice student :(
I am writing a paper on the importance of centralizing file-based spatial data that currently exist in data silos fragmented across file servers, email servers, web servers, document management systems, etc. Most solutions that consolidate spatial data target database-based spatial data; however, much spatial data still exists in files and remains unconsolidated.
One description missing from the paper is how enterprise spatial organisations are currently handling security of these spatial files, especially those that exist in data silos (fragmented in file servers, email servers, web servers, document management systems, etc).
Unfortunately, I do not know anyone in enterprise GIS organisations, and it is therefore hard to know how these organisations store and secure these data. Could anyone with an industry background please guide me?
- Are these files normally stored in data silos?
- Are there any movements towards consolidating these file data?
- Are there any software solutions that enable consolidating these spatial files?
Any online document/paper that describes this issue which I can cite would be well appreciated.
I am currently working on my research topic, "Safe Urban Mobility". By "safe" I mean mobility that ensures no transmission of infections during transport journeys in the time of the COVID-19 pandemic, especially given poor mobility choices.
After searching the literature, I found only a few studies addressing the topic.
Please share and discuss your perceptions of this topic.
I welcome all opinions.
Ahmed.
I have two datasets. One contains 9 past cyclones with their damage to the forest, wind speed, distance from the study site, and recovery area. The other contains future sea-level rise (SLR) projections and potential loss area due to SLR.
- By using data from both disturbance-event datasets (loss area, recovery area, wind speed, predicted loss area from SLR), can I create any kind of disturbance risk/vulnerability index or hazard indicator map of the study area?
- What kinds of statistical analysis can I include in my study with these limited datasets to show some sort of relationship between "Loss Area" and the other variables?
I wanted to get the TOA reflectance from MODIS data (Terra surface reflectance, MOD09A1), hence I used a preprocessing tool. I got the following reflectance values, which I believe are much lower than expected; there are many zeros after the decimal point. Could you please help me solve this?
(I was having problems downloading the data, so I preferred to use this preprocessing tool with data downloaded directly from the official website.)
Thank you in advance.
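One thing worth checking: MOD09A1 stores surface reflectance as scaled integers, and if I recall the product user guide correctly the scale factor is 0.0001, so applying the scale twice would produce exactly the kind of values with many leading zeros described above. A quick sanity check in Python (the DN values are invented):

```python
SCALE = 0.0001  # MOD09A1 reflectance scale factor, per the product user guide

def to_reflectance(dn_values):
    """Convert stored integer DNs to physical surface reflectance (0..1)."""
    return [dn * SCALE for dn in dn_values]

dns = [1234, 4500, 800]        # invented stored DNs
refl = to_reflectance(dns)
# Reflectance should land in a plausible 0..1 range; values like 0.00001234
# would suggest the scale factor was applied twice somewhere in the chain.
plausible = all(0.0 <= r <= 1.0 for r in refl)
```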
Respected All
The dataset available at https://swat.tamu.edu/data/india-dataset/ , in the file named "SOIL_WATERBASE.7z", contains only the raster along with the "VALUE" and "NAME" of the soil series, subsetted from HWSD_FAO.
But the database file requires a large number of inputs for different soil properties of these soil series, such as compaction, number of layers, texture, bulk density, available water capacity, and the proportions of sand, silt and clay, in a dbf file with these columns:
OBJECTID
MUID
SEQN
SNAM
S5ID
CMPPCT
NLAYERS
HYDGRP
SOL_ZMX
ANION_EXCL
SOL_CRK
TEXTURE
SOL_Z1
SOL_BD1
SOL_AWC1
SOL_K1
SOL_CBN1
CLAY1
SILT1
SAND1
ROCK1
SOL_ALB1
USLE_K1
SOL_EC1
SOL_Z2
SOL_BD2
SOL_AWC2
SOL_K2
SOL_CBN2
CLAY2
SILT2
SAND2
ROCK2
SOL_ALB2
USLE_K2
SOL_EC2
and so on, up to 10 layers (if available).
Another file, named "soil_HWSD_FAO.7z", has the global raster without any information about the soil properties used in the input database.
I have subsetted/clipped the Indian raster from the global one, although the global raster was missing spatial referencing. So if someone is subsetting the same for their area of interest, they will have to manually georeference the subsetted image before using it to create HRUs.
Kindly share if you have any idea about the source of the soil properties for these soil series.
The Geology of the Himalaya is a record of the most dramatic and visible creations of modern plate tectonic forces. I wonder how vegetation flourished in such a mountain system.
Can we draw any statistical or spatial relationship between fine parameters of road dust (Pd, Zn, Cr, and Ca) and key air pollutants (such as PM2.5, CO, NO, CH4, O3, HCHO, BC, NOx, SO2)? Has anyone done a spatial analysis using both of these datasets?
I have both datasets and would like to do some statistical and spatial analyses. Kindly suggest whether there is any possibility of drawing a relationship.
Is there any good software able to present the results of spatiotemporal analysis of GIS outputs well?
If we apply atmospheric correction to the data to enhance coherence, will it provide accurate deformation results in a densely vegetated mountainous region like the Himalayas?
I am running Geographically Weighted Regression (GWR) on a categorical dependent variable so my model is basically a Geographically Weighted Logistic Regression.
I have multiple independent variables, some numerical and some categorical.
While interpreting the results for numerical variables is straightforward, I want to know how to identify the reference level of the categorical independent variables and how to interpret their coefficients.
Let's say I code males as 0 and females as 1; should the coefficients be interpreted for females, as they are coded 1? What if I coded males as 1 and females as 2?
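For a binary predictor, the reference level is whichever category is coded 0: the coefficient is the log-odds difference of the level coded 1 relative to it. Recoding 0/1 as 1/2 is just a shift, so the slope is unchanged and only the intercept moves. A toy check with invented counts (for a single binary predictor, the group log-odds determine the fitted line exactly):

```python
import math

# Invented outcome counts: males 40 events / 60 non-events, females 50 / 50
logodds_m = math.log(40 / 60)   # log-odds in the male group
logodds_f = math.log(50 / 50)   # log-odds in the female group

# Coding males=0, females=1: intercept = male log-odds, slope = difference
b0_01 = logodds_m
b1_01 = logodds_f - logodds_m

# Coding males=1, females=2: the fitted line must pass through the same two
# group log-odds, so the slope per unit is identical; only the intercept shifts
b1_12 = (logodds_f - logodds_m) / (2 - 1)
b0_12 = logodds_m - b1_12 * 1
```

So interpretation is unaffected by 1/2 coding: exp(slope) is still the odds ratio of females versus males; what changes is that the intercept no longer equals the reference group's log-odds at code 0.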
I have worked on the RUSLE model and collected all the required data from field surveys and lab analysis.
But now I am stuck at the data validation step.
Please suggest a procedure for it.
Thank you
Hi,
Can I prove or model a reclamation in the past using GIS/RS? Can you provide me some discussion, readings, or techniques?
I am trying to calculate the Stream Power Index (SPI) in ArcGIS. I have checked many videos and documents, but there is no certainty about the formula to use in the raster calculator. So I wrote the formulas below to learn which one is right; each one produces different results.
DEM Cell Size=10m
SPI_1 --> "flow_accumulation" * 10 * Tan("slope_degree" * 0.017453)
SPI_2 --> Ln("flow_accumulation" * 10 * Tan("slope_degree" * 0.017453))
SPI_3 --> Ln("flow_accumulation" + 0.001) * (("slope_percent" / 100) + 0.001)
SPI_4 --> Ln(((flow_accumulation+1)*10) * Tan("slope_degree"))
SPI_5 --> "flow_accumulation" * Tan("slope_degree")
Also, when creating the slope raster, which option should I choose: DEGREE or PERCENT_RISE?
And the last question: when I calculate SPI with the formulas above, I get an SPI map that includes negative values. Is that correct? Are negative values a problem or not?
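For what it's worth, SPI is commonly defined as ln(As · tan β) with the slope β in radians, which matches the structure of SPI_2: the 0.017453 factor is just the degrees-to-radians conversion, and negative values simply mean As · tan β < 1, so they are expected, not an error. A small Python sketch (the +1 on flow accumulation and the epsilon are common conventions to avoid ln(0) on ridge and flat cells, not part of the ArcGIS tools themselves):

```python
import math

def spi(flow_acc_cells, cell_size, slope_deg):
    """Stream Power Index for one cell: ln(As * tan(slope)).

    As = specific catchment area, approximated here as
    (flow accumulation + 1) * cell size; the +1 avoids ln(0) on ridge cells.
    The slope must be converted from degrees to radians before tan().
    """
    a_s = (flow_acc_cells + 1) * cell_size
    beta = math.radians(slope_deg)            # same as multiplying by 0.017453
    return math.log(a_s * math.tan(beta) + 1e-6)  # epsilon guards flat cells

# A low-accumulation cell on a gentle slope gives a negative SPI,
# a high-accumulation cell on a steep slope a positive one; both are valid.
low = spi(flow_acc_cells=2, cell_size=10, slope_deg=1)
high = spi(flow_acc_cells=5000, cell_size=10, slope_deg=15)
```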
After the GIS 3D model is established, what data types need to be imported into ABAQUS? After importing, can the model be modified in the GUI?
The littoral zone is wider than the coastal zone: while the latter is a physical concept, the littoral zone is also understood in social and economic terms. Which criteria could we use to draw the limits of the littoral zone?
I have a point shapefile of 12 islands (created using centroids in ArcMap) with attributes of macrobenthic species and their abundance. I would like to analyze the distribution of these species using ArcMap. Would kernel density be suitable for analyzing the species distribution? If not, which method should be used instead?
I am concerned about the small number of points and the long distances between islands.
Thank you
Dear all,
Are there publications/guidelines about methodologies for identifying critical areas based on stormwater quality? I am especially interested in methods based on GIS analysis and water quality sampling. By sampling I mean quite practical approaches that municipalities can utilize when trying to map critical areas for stormwater quality management, with a city-scale focus. I assume that a city-scale project has to start with a GIS analysis of critical areas and then, as a second phase, continue with water quality sampling campaigns at selected sites.
At present, most studies construct a topic model of the text first, and then analyze the results in time and space. Is there a better way to combine the two?
Related Research and studies, books, and any other references, please?
One critical component of the study of land-use change is testing the level of agreement between pixels of a satellite-derived land-use map and a reference map (produced with authenticated ground information). This comparison remains the standard approach for checking the correctness of the image analysis. One of the indices developed for this task is the Kappa index. However, scholars such as Prof. Gil Pontius of Clark University have repeatedly affirmed that the Kappa index gives grossly inaccurate estimates, with misses and other erroneous outputs. My question is: what are the proffered alternatives to Kappa if we accept that the index is inaccurate and misleading? And how available are these alternatives in open-source software packages, so that we have inclusive and easily accessible tools for future land change science studies? These answers are needed because some young and early-career remote sensing experts are still glued to Kappa.
Thank you for your contributions to this inquiry.
Hi everyone!
I have origin points in different areas, and within these areas specific values in raster cells (at 50 m resolution). My goal is to sum up, in each cell along a route, the values of all previously traversed cells.
This movement can be calculated, for example, with Least Cost Path; with this tool I created a backlink raster, which shows me the movement.
But when I use the Least Cost Path tool to accumulate values, the respective values are multiplied by the grid cell size.
Does anyone have an idea how to accumulate only the actual values, without the grid size?
I tried this with Flow Accumulation, but some cells get no value because no other cell ends in them; yet they still need the value carried over from the prior cell (or the cell's own value) into which the path is "moving"/"flowing".
I hope someone could help me out with this issue.
Cheers!
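One workaround, assuming the least-cost path can be exported as an ordered list of cells (e.g. by tracing the backlink raster), is to do the running sum outside the cost tools, so that no cell-size weighting is ever applied. A minimal Python sketch with an invented raster and path:

```python
from itertools import accumulate

def accumulate_along_path(path_cells, value_raster):
    """Running sum of raster values along an ordered path of (row, col) cells.

    Unlike the cost-distance tools, this adds the raw cell values only;
    the cell size never enters the sum.
    """
    values = [value_raster[r][c] for r, c in path_cells]
    return list(accumulate(values))

value_raster = [
    [1, 4, 2],
    [3, 5, 1],
    [2, 2, 6],
]
path = [(0, 0), (1, 1), (2, 2)]   # e.g. a diagonal path traced from a backlink raster
sums = accumulate_along_path(path, value_raster)
```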
I have attached the OSM map of Pannipitiya, Sri Lanka. Looking at the map, what kinds of geographical questions can you ask?
The following came to my mind:
1. What are the places that house dwellers can walk to and reach within 1 minute (600 m ?) ?
2. What is the calmest and quietest place to meditate?
As a part of my PhD, I conducted a study to assess health inequities in Amaravati capital region of Andhra Pradesh using two composite indices made from health determinants indicators and health outcome indicators.
Health outcome indicator data were available at the sub-district level; these data were interpolated to create a heat map of the health outcome index. Health determinant data were available at the village level, so I created a choropleth map using the health determinants index.
The interpolated health outcome index map was then overlaid on the choropleth map of health determinants. This highlighted some interesting findings, i.e. areas of concern (villages): the colour combinations created by overlaying the two layers revealed areas with poor health outcomes and poor health determinants, as well as areas with poor health outcomes but better determinants.
Kindly check these files and give your valuable opinions. Can this type of analysis be used to highlight areas with health inequities or not? Please comment on the method used and the results obtained in the overlaid map.
The CI-110 Plant Canopy Imager gives two readings related to leaf area index (LAI): GF LAI and PAR LAI. Are these values final if I am trying to obtain the effective leaf area index, or do I need to do further calculations? If so, which value should I use as the effective leaf area index (GF LAI or PAR LAI)? I am trying to work on LAI prediction using remote sensing.
I am a novice researcher working on a project analyzing water quality data from primary sources such as dams, rivers, and springs, and from secondary sources such as water treatment plants and households. I have collected GPS coordinates of the water points, which appear in the attached image, but I am having difficulty finding the right methodology to analyse these results spatially. The microbiological parameters to be analysed include E. coli, Salmonella spp., Shigella spp., Giardia spp., and Entamoeba histolytica.
Please help, and be kind :)
I have attached an image showing the sample locations; the study area spans six quaternary drainage basins.
Hi. I'm searching for different methods to delimit a study area. For example, in hydrological studies it is usual to create a polygon for the watershed (using contour lines or a DEM). In environmental impact assessment, the direct and indirect impact areas determine the polygon to use. Other cases use political divisions (neighbourhoods, streets, estates...).
All of these base the delimitation on physical characteristics.
But how do you delimit the study area when the phenomenon intersects several watersheds, or when investigating a community that moves across several estates or political divisions? How do you know where the limits are for creating the polygon? And how do you know the delimitation is right, and that you are not losing information or data through bad delimitation?
Thanks for your answers.
Parallel drainage patterns are commonly observed over the estuarine environments, flood plains, and reservoirs of the area.
I have an image of a map (no latitude/longitude present on the map) which I have to convert into a shapefile to import into GPlates. How can I do so? Any help or ideas are much appreciated.
I am trying to delineate agricultural fields using Sentinel-2 imagery. I have been applying different image segmentation algorithms to a time series of this data set. My best output so far contains false positives in some non-agricultural zones (like forests). Hence, I'm looking for the best way to distinguish forests from agricultural fields as a post-processing step.
SAR data show decorrelation over highly vegetated terrain. Has any of the latest software overcome this shortcoming?
Hi,
I am creating stream orders for morphometric analysis.
What threshold value should we enter in the "Con" tool when creating streams (for example, "Value > 100")?
As is known, this affects all subsequent calculations, since it changes the number of Strahler stream orders.
How can I determine a threshold for the "Con" tool that is suitable for my basin?
Best wishes, Ahmet.
Dear all,
I know it might also depend on the distribution/behavior of the variable we are studying: the sample spacing must be able to capture the spatial dependence.
But since kriging depends heavily on the variance computed within each lag distance, with few observations we might fail to capture the spatial dependence, because we would have few pairs of points within a given lag distance, and few lags overall. Especially when the points are very irregularly distributed across the study area, with many observations in one region and sparse observations in another, this will also affect the accuracy of the variance estimated per lag.
Therefore, I think that in such circumstances computing a semivariogram seems useless. What are the best practices if we still want to use kriging instead of other interpolation methods?
Thank you in advance
PS
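One practical check before abandoning kriging is to compute the empirical semivariogram and count the pairs per lag bin; bins with very few pairs (a common rule of thumb asks for at least around 30) flag exactly the problem described above. A plain-Python sketch with invented points:

```python
import math

def empirical_semivariogram(points, values, lag_width, n_lags):
    """Return (gamma, pair_count) per lag bin for 2-D points.

    gamma(h) = average of 0.5 * (z_i - z_j)^2 over all point pairs whose
    separation distance falls in lag bin h.
    """
    sums = [0.0] * n_lags
    counts = [0] * n_lags
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d = math.dist(points[i], points[j])
            k = int(d // lag_width)
            if k < n_lags:
                sums[k] += 0.5 * (values[i] - values[j]) ** 2
                counts[k] += 1
    gammas = [s / c if c else None for s, c in zip(sums, counts)]
    return gammas, counts

# A clustered, irregular configuration: three close points and one far away
pts = [(0, 0), (1, 0), (2, 0), (10, 0)]
vals = [1.0, 2.0, 1.5, 8.0]
gammas, counts = empirical_semivariogram(pts, vals, lag_width=3.0, n_lags=4)
```

Here one lag bin ends up with zero pairs (gamma undefined), which is precisely the symptom of irregular sampling that makes the fitted variogram, and hence the kriging weights, unreliable.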
I need to write MATLAB code that processes a GIS image in order to extract the coordinates of the grid points that lie within the red region (R) and are at least a distance "d" from its boundary. Each point in R is given a weight w1 (attached figure). The same procedure is to be applied to the green region (G), but with w2 as the weight of any point in G. The gathered data are saved in a matrix of three rows: row 1 contains the abscissas, row 2 the ordinates, and row 3 the weights.
I am looking forward to getting your suggestions...thanks in advance.
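The question asks for MATLAB; since the logic is language-agnostic, here is a hedged sketch of the same procedure in Python (the label grid, the cell-based interior-distance test, and the weights are invented illustrations of the description above, not the actual GIS image):

```python
import math

def region_points(label_grid, label, d, weight):
    """Grid points inside `label` region lying at least distance d from its boundary.

    Returns three parallel rows: abscissas, ordinates and weights, i.e. the
    3-row matrix described in the question.
    """
    rows, cols = len(label_grid), len(label_grid[0])

    def interior(r, c):
        # A point is at least d from the boundary if every cell within radius d
        # (cells outside the grid count as non-region) shares its label.
        rad = math.ceil(d)
        for dr in range(-rad, rad + 1):
            for dc in range(-rad, rad + 1):
                if dr * dr + dc * dc > d * d:
                    continue
                rr, cc = r + dr, c + dc
                if not (0 <= rr < rows and 0 <= cc < cols) or label_grid[rr][cc] != label:
                    return False
        return True

    xs, ys, ws = [], [], []
    for r in range(rows):
        for c in range(cols):
            if label_grid[r][c] == label and interior(r, c):
                xs.append(c)              # abscissa = column index
                ys.append(r)              # ordinate = row index
                ws.append(weight)
    return [xs, ys, ws]

grid = [
    ["G", "G", "G", "G", "G"],
    ["G", "R", "R", "R", "G"],
    ["G", "R", "R", "R", "G"],
    ["G", "R", "R", "R", "G"],
    ["G", "G", "G", "G", "G"],
]
red = region_points(grid, "R", d=1, weight=0.7)
```

In MATLAB the same idea is usually expressed with a binary mask of the region and a distance transform of its complement, keeping pixels whose distance exceeds d; calling the function twice (once per region, with w1 and w2) and concatenating the columns gives the final 3-row matrix.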
Hi
I am working with the eCognition Developer software. Recently, I used it for UAV imagery segmentation. The image is from the fall season and has 4 bands (Green, Red, RedEdge and NIR).
My purpose is to classify tree species in dense forest, but the classification does not perform well: each tree is classified into several classes. My opinion is that the segmentation is not working well. I have tried a lot to improve the segmentation but have not achieved acceptable results.
Do you have an opinion on this?
Thank you all dear ones
I have been trying to download bands 10 and 11 of Landsat 8 for a GIS analysis; however, the images downloaded from both GloVis and EarthExplorer have been resampled to 30 m spatial resolution. Does anyone know how I can get the native 100 m spatial resolution TIR bands for my analysis?
Dear all!
I would like to invite you to take the survey "NSDI FOR YOU AND YOUR COUNTRY".
The results of this survey will help me to research the issues of the implementation of NSDI and get an overview of this policy.
I will be very grateful for your help, cooperation and sharing!
I am seeking the best current methods and datasets (highest possible resolution) for defining and assessing global land degradation - ideally with a time series. I know there are different ways of exploring this e.g. biomass, productivity, land use/cover etc., but I would appreciate any thoughts on current modelling, datasets/resources and novel approaches.
I am also interested in the best methods for quantitatively mapping/modelling land restoration (biophysical) on a global scale, and if possible, historic land reconstruction.
Thanks!
For my Masters I am looking into elephant space-use and its effect on vegetation change. I have elephant GPS collar data and was going to use Google Earth Engine to obtain NDVI values. I would like to establish elephant space-use across a fenced area in the form of either areas most frequently occupied (at the pixel level) or pixels the most time was spent in. I would then like to use that data to determine whether any vegetation changes (recent NDVI as a percentage of long term average NDVI) are linked to the elephant space-use (i.e. to determine whether vegetation change is elephant-mediated). I have looked at methods such as UD and BBMM but was wondering if there were more suggestions out there? Ultimately the method I use for space-use needs to have an output I can use to compare it to the NDVI values. Thanks in advance!
I have multiple lines outlining annual glacier extents (over 20 lines per glacier). I would like to calculate the average linear retreat of the glaciers. Is there a tool to calculate such distances in ArcGIS? Or is there a Python script I could use?
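In ArcGIS, the transect-based DSAS (Digital Shoreline Analysis System) add-in is the usual answer for this kind of change-rate question. As a scripted alternative, here is a rough Python sketch of the mean vertex-to-polyline distance between two extent lines (the coordinates are invented; averaging over all consecutive year pairs would give the mean annual retreat):

```python
import math

def point_segment_dist(p, a, b):
    """Euclidean distance from point p to the segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    seg2 = dx * dx + dy * dy
    if seg2 == 0.0:
        return math.dist(p, a)
    # Clamp the projection parameter so we measure to the segment, not the line
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg2))
    return math.dist(p, (ax + t * dx, ay + t * dy))

def mean_retreat(line_a, line_b):
    """Mean distance from the vertices of line_a to polyline line_b."""
    dists = []
    for p in line_a:
        segs = zip(line_b, line_b[1:])
        dists.append(min(point_segment_dist(p, a, b) for a, b in segs))
    return sum(dists) / len(dists)

# Two invented glacier front positions one year apart (metres)
front_2000 = [(0, 0), (50, 5), (100, 0)]
front_2001 = [(0, 20), (50, 25), (100, 20)]
retreat = mean_retreat(front_2000, front_2001)
```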
Dear Researchers,
I am applying interpolation by the kriging method using GIS, and the interpolation did not cover the whole area under consideration.
How can I fix this? Please guide me.
Regards
Naveed
I'm trying to do a pixel-wise correlation comparison of two datasets. Is there any standard procedure or code to perform this?
For example, a correlation coefficient map and an RMSE map of a particular region.
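Pixel-wise correlation is usually done by stacking both datasets along time and computing Pearson's r per pixel (in practice with numpy or xarray rather than loops). A dependency-free Python sketch of the idea, with two invented 3-date stacks:

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def correlation_map(stack_a, stack_b):
    """Per-pixel Pearson r between two stacks shaped [time][row][col]."""
    n_rows, n_cols = len(stack_a[0]), len(stack_a[0][0])
    return [[pearson([t[r][c] for t in stack_a],
                     [t[r][c] for t in stack_b])
             for c in range(n_cols)] for r in range(n_rows)]

# Two invented 3-date stacks over a 1x2 region: one pixel co-varies
# perfectly, the other varies in opposition
a = [[[1.0, 1.0]], [[2.0, 2.0]], [[3.0, 3.0]]]
b = [[[2.0, 3.0]], [[4.0, 2.0]], [[6.0, 1.0]]]
rmap = correlation_map(a, b)
```

An RMSE map follows the same pattern: replace the per-pixel pearson() call with the root of the mean squared difference of the two time series.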
I want to extract shorelines using a water mask index, namely MNDWI, and I decided to use Landsat data for it. There are two kinds of data sets: Collection-1 Level-1 and Collection-1 Level-2 (surface reflectance). Which is better for the analysis, Level-1 or Level-2?