Questions related to Satellite Image Analysis
I am seeking recommendations for established methods that utilize multispectral and Synthetic Aperture Radar (SAR) data for remotely sensing surface water bodies.
These could cover areas such as:
- Reflectance measurements
- Radiative transfer modeling
- Feature extraction
- Water Body Detection (WBD)
- Normalized Difference Water Index (NDWI)
- Modified Normalized Difference Water Index (MNDWI)
- Automatic Water Extraction Model in Complex Environment (AWECE)
I would greatly appreciate any guidance or references that could be provided on this topic. Thank you in advance for your assistance.
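For reference, the index-based detectors in the list above reduce to simple band arithmetic; a minimal numpy sketch of NDWI (McFeeters) and MNDWI (Xu), assuming the band arrays are already in reflectance:

```python
import numpy as np

def ndwi(green, nir):
    """McFeeters NDWI: (Green - NIR) / (Green + NIR); water tends to be > 0."""
    green, nir = green.astype(float), nir.astype(float)
    return (green - nir) / (green + nir + 1e-10)

def mndwi(green, swir):
    """Xu MNDWI: (Green - SWIR) / (Green + SWIR); suppresses built-up noise."""
    green, swir = green.astype(float), swir.astype(float)
    return (green - swir) / (green + swir + 1e-10)

# First pixel water-like (green > NIR), second vegetation-like (NIR >> green)
green = np.array([0.10, 0.08])
nir   = np.array([0.04, 0.30])
print(ndwi(green, nir))   # positive for the water-like pixel, negative for vegetation
```

A threshold of 0 is the usual starting point for a water mask, though site-specific tuning is normally needed.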
I have two incidents (pre- and post-incident) for which NDVI will be calculated and change detected by subtraction. Before calculating NDVI, I want to radiometrically normalize the post-incident images with respect to the pre-incident images using a regression based on pseudo-invariant features (PIFs). I would like to do this whole process in Google Earth Engine.
My questions are:
- Can anybody please share a script?
- When selecting the PIFs, should I select them from the reference/base image or the target image?
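For what it's worth, the regression itself is simple; a minimal numpy sketch of the PIF fit (in GEE the equivalent can be done with ee.Reducer.linearFit over the PIF pixels). PIFs are conventionally features assumed spectrally stable on both dates, and the fit maps the target (post) image onto the reference (pre) radiometry:

```python
import numpy as np

def pif_normalize(pre_band, post_band, pif_mask):
    """Fit pre = gain * post + offset over PIF pixels, then apply the
    gain/offset to the whole post image so it matches the pre radiometry."""
    x = post_band[pif_mask]            # target (to be normalized)
    y = pre_band[pif_mask]             # reference
    gain, offset = np.polyfit(x, y, 1) # least-squares line
    return gain * post_band + offset

# Toy example: post image has a simulated systematic gain/offset shift
pre  = np.array([[0.10, 0.20], [0.30, 0.40]])
post = 1.2 * pre + 0.05
mask = np.ones_like(pre, dtype=bool)   # pretend every pixel is a PIF
norm = pif_normalize(pre, post, mask)
print(np.allclose(norm, pre))          # → True
```

The fit is applied to the target image, but the PIF locations themselves must be identified on both images (they only work if they are unchanged between dates).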
I'm doing research for my degree thesis in architecture on urban heat islands in the city of Naples, Italy.
I'm reclassifying a land surface temperature map in GIS and am looking for a method to classify ground temperatures precisely, into classes that allow me to locate the heat islands.
I am working on Forest Canopy Density (FCD). There is a parameter called the Scaled Shadow Index (SSI) used when computing FCD. In most papers I found that the SSI is calculated by linearly transforming the Shadow Index (SI). I have computed the Shadow Index, but I am not sure how to compute the Scaled Shadow Index. Kindly help me out. Moreover, if I am using Landsat 5 and 8 surface reflectance images for FCD mapping, and the reflectance values already range from 0 to 1, is it still mandatory to normalize these surface reflectance data before calculating vegetation indices?
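On the SSI question: in the FCD literature the "linear transformation" is usually a stretch of the Shadow Index onto a fixed range, commonly 0-100. The simplest variant is a min-max stretch (an assumption; check the exact formulation in your reference paper, since some authors scale using statistics of low-canopy-density areas instead):

```python
import numpy as np

def scaled_shadow_index(si, new_min=0.0, new_max=100.0):
    """Linearly stretch the Shadow Index so its minimum maps to new_min
    and its maximum to new_max (min-max normalization)."""
    si = si.astype(float)
    si_min, si_max = np.nanmin(si), np.nanmax(si)
    return (si - si_min) / (si_max - si_min) * (new_max - new_min) + new_min

si = np.array([0.2, 0.5, 0.8])
print(scaled_shadow_index(si))   # 0, 50, 100
```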
I am working on calculating NDVI and EVI indices from different years (a 50-year time period) and comparing them. I have a few questions regarding the datasets that I have found on Google Earth Engine (GEE).
- For the Landsat 8 Surface Reflectance dataset on Google Earth Engine, do I have to do any additional preprocessing, or, since it is already atmospherically corrected, do I just have to scale it using the scaling factor?
- What is the difference between the Landsat Surface Reflectance datasets and Landsat TOA Reflectance datasets?
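On the scaling question: for Landsat Collection 2 Level-2 (surface reflectance) assets, the published rescaling is reflectance = DN × 0.0000275 − 0.2 (Collection 1 SR used DN × 0.0001 instead, so check which collection your GEE asset is). A minimal sketch:

```python
import numpy as np

# Landsat Collection 2 Level-2 surface reflectance rescaling
def scale_c2_sr(dn):
    """Convert raw quantized SR values to reflectance (Collection 2)."""
    return dn.astype(float) * 0.0000275 - 0.2

dn = np.array([7273, 14545])   # raw quantized SR values
print(scale_c2_sr(dn))         # roughly 0.0 and 0.2
```

TOA reflectance datasets, by contrast, are only corrected to top-of-atmosphere (sun angle and sensor calibration), not for atmospheric effects; SR datasets have additionally been atmospherically corrected.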
I am performing a time-series analysis, using Sentinel-2 for the latest data and Landsat-5 for the older data. In the time-series graph I see a significant rise in the output values for the last two years (the Sentinel imagery). The first step of my algorithm is to derive NDVI, so does the difference between the two sensors still matter? If so, how do I correct for it?
P.S. I am using top-tier products for all the imagery, so is there still a question of DN-to-TOA conversion?
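Sensor differences generally do still matter after computing NDVI, because the red and NIR band passes of Sentinel-2 MSI and Landsat-5 TM differ. One common workaround is to fit a linear transfer function on near-coincident acquisitions from an overlap period; a minimal sketch (the toy gain/offset below are made up, not published coefficients):

```python
import numpy as np

def fit_transfer(ndvi_sensor_a, ndvi_sensor_b):
    """Fit ndvi_b ~ gain * ndvi_a + offset on paired pixel samples."""
    gain, offset = np.polyfit(ndvi_sensor_a, ndvi_sensor_b, 1)
    return gain, offset

# Toy paired samples with a small simulated inter-sensor bias
a = np.array([0.1, 0.3, 0.5, 0.7])
b = 0.95 * a + 0.02
gain, offset = fit_transfer(a, b)
harmonized = gain * a + offset        # sensor A mapped onto sensor B's scale
print(np.allclose(harmonized, b))     # → True
```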
I want to simulate urban expansion using a time series of LULC maps derived from satellite images. Please suggest the most suitable model for urban simulation.
Thanks and regards.
I am trying to delineate agricultural fields using Sentinel-2 imagery. I have been applying different image segmentation algorithms to a time series of this data set. My best output so far has false-positive errors in some non-agricultural zones (like forests). Hence, I'm looking for the best way to distinguish forests from agricultural fields as a post-processing step.
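One post-processing heuristic worth testing (an assumption to validate against your data, not an established rule): cropland NDVI swings strongly over a season (sowing, peak, harvest) while closed forest stays comparatively stable, so per-pixel temporal variability can separate the two. The threshold below is purely illustrative:

```python
import numpy as np

def mask_stable_vegetation(ndvi_stack, std_threshold=0.08):
    """ndvi_stack: (time, rows, cols). True where the pixel's temporal
    std is low (forest-like) AND mean NDVI is high (dense vegetation)."""
    std = ndvi_stack.std(axis=0)
    mean = ndvi_stack.mean(axis=0)
    return (std < std_threshold) & (mean > 0.5)

# Toy series: pixel 0 is forest-like (stable high NDVI),
# pixel 1 is crop-like (strong seasonal swing)
stack = np.array([[[0.80], [0.20]],
                  [[0.82], [0.70]],
                  [[0.79], [0.30]]])
print(mask_stable_vegetation(stack))   # forest pixel True, crop pixel False
```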
I am trying to plot daily LST (land surface temperature) data for a region using MODIS and Landsat data. I have used the ESTARFM model to fuse MODIS and Landsat data to produce daily LST at Landsat resolution. However, MODIS data (Aqua MYD11A1) is only made available after a lag of a few days. Can somebody please tell me how many days of lag there is between the acquisition date and the date on which the data is made available, and whether it is possible to produce the LST data in near real time if we compute the LST ourselves?
I am working on a project in which I have to calculate LST using Landsat imagery. I used different algorithms; however, there is a significant difference between the LSTs calculated from the imagery and those measured in the field. The calculated brightness temperatures agree better with the field-measured LSTs. I think this is because these methods are based on vegetation cover (NDVI), and the study area has very sparse vegetation. Are there any other approaches that do not use vegetation indices?
I am using Landsat 8 and 5 satellite images for LULC classification. I am confused about which software and method are best for LULC classification.
Please help me.
Thanks and Regards
In TerrSet, I encountered an error while executing the Spatial Decision Modeller (MCE). When running the MCE, a message appears on the screen saying
"Your columns and rows are not the same"; I have attached the image below.
How can I resolve this error?
I need a suggestion to get rid of this sort of error. Waiting for a response from any experts.
Emissivity is a crucial parameter for calculating Land Surface Temperature (LST). One of the algorithms to calculate LST using a single thermal band of Landsat 8 (Band 10), Landsat 7 (Band 6) or Landsat 5 (Band 6) is based on the simplified Radiative Transfer Model (RTM) equation documented in Barsi et al. (2003). The RTM equation in question is the following:
Ltoa = τεLt + Lu + τ(1-ε)Ld
where, τ is the atmospheric transmission, ε is the emissivity of the surface, Lt is the radiance of a blackbody target of kinetic temperature t, Lu is the upwelling or atmospheric path radiance, Ld is the downwelling or sky radiance, Ltoa is the Top Of Atmosphere (TOA) radiance measured by the sensor.
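Solving the RTM for Lt and inverting the Planck relation gives the surface temperature; a minimal sketch (note that in Barsi et al.'s formulation the reflected downwelling term is also attenuated by τ; K1/K2 are the Landsat 8 Band 10 calibration constants from the product metadata, and the illustrative τ, Lu, Ld values below are made up, not from a real scene):

```python
import math

def surface_temperature(Ltoa, tau, eps, Lu, Ld,
                        K1=774.8853, K2=1321.0789):
    """Invert the single-channel RTM and the Planck function (Kelvin)."""
    Lt = (Ltoa - Lu - tau * (1.0 - eps) * Ld) / (tau * eps)  # blackbody radiance
    return K2 / math.log(K1 / Lt + 1.0)

# Illustrative values; in practice tau, Lu, Ld come from an atmospheric
# correction tool such as the NASA atmospheric correction calculator
print(surface_temperature(Ltoa=10.5, tau=0.85, eps=0.97, Lu=1.5, Ld=2.5))
```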
For this question, I'm only interested in estimating ε. Generally, I've been using the NDVI Threshold Method described in Sobrino et al. (2001), which is probably the most commonly used method for estimating emissivity. This works more or less okay for Landsat scenes acquired during the day. However, for night-time Landsat acquisitions, estimating emissivity with the NDVI method is illogical because the NIR and red bands mostly register only noise, due to the absence of reflected solar energy at night and the radiometric sensitivity of the sensor. That said, I'm aware of alternative ways to estimate emissivity, e.g., (1) an image classification method that assumes an emissivity for each class, (2) emissivity values from spectral libraries such as the ASTER spectral library (http://asterweb.jpl.nasa.gov), (3) the seasonal ASTER Global Emissivity Database (GED) provided by JPL, or (4) in-situ emissivity measurements. Except for the last method, which is rarely available, the others may not match the acquisition time of the night-time Landsat image closely enough. In such a case, what would be the best way to accurately obtain a land surface emissivity image for the corresponding night-time Landsat thermal image?
Does anybody have any other ideas? You're encouraged to add relevant references in addition to your answers/comments.
PS: USGS has developed a Landsat Surface Temperature product that is available for the US and the rest of the world, but I'm not sure whether it includes a night-time surface temperature product.
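For daytime scenes, the NDVI Threshold Method mentioned above is straightforward to implement; sketching it here mainly to underline why it breaks at night, when NDVI is meaningless (the endmember emissivities and 0.2/0.5 thresholds below are typical literature values, not universal constants):

```python
import numpy as np

def ndvi_threshold_emissivity(ndvi, eps_soil=0.97, eps_veg=0.99,
                              ndvi_s=0.2, ndvi_v=0.5):
    """Emissivity as a Pv-weighted mix of soil and vegetation endmembers,
    with Pv the squared, clipped fractional vegetation cover."""
    ndvi = np.asarray(ndvi, dtype=float)
    pv = np.clip((ndvi - ndvi_s) / (ndvi_v - ndvi_s), 0.0, 1.0) ** 2
    return eps_veg * pv + eps_soil * (1.0 - pv)

# Bare soil, mixed pixel, full vegetation
print(ndvi_threshold_emissivity([0.1, 0.35, 0.8]))
```

For night-time scenes, where this fails, a daytime emissivity map from a close anniversary date or the ASTER GED is probably the pragmatic fallback, accepting the temporal mismatch.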
My CNN model input requires around 30,000 square satellite image chips, the corner coordinates (lat/long) of which I have already compiled. However, I cannot figure out how to obtain the images for these specific coordinates in bulk using Google Earth Engine.
I have a CORINE 2000 land use data set and I want to update it using a WorldView satellite image with 8 channels. What do you think would be the best way?
I want to prospect for groundwater in rocky terrain using satellite imagery. I need to know the most suitable data to use, the method of extraction, and the procedure for achieving it. Thank you in anticipation of your useful contributions.
What are the procedures to make a lineament map using satellite images in ArcGIS or ERDAS IMAGINE? Is there any video tutorial, or a step-by-step procedure?
Dear esteemed researchers,
I have developed an algorithm for contrast enhancement of satellite images. I need some recent methods for comparison. I have managed to get 6 methods so far, but I need more. If you have Matlab code (even if it's Matlab p-code) and you would like to share it, please do. I will use it for comparison purposes only.
Thank you in advance, your help is much appreciated.
Though Anderson's 1976 classification is the most conventional LULC classification system, and NLCD 2011 is a subset of it, there is confusion with the barren and water classes at level 2. The major confusion is with class 7, Barren (71 Dry Salt Flats, 72 Beaches, 73 Sandy Areas other than Beaches, ...). What does the level-2 class 73 mean? Is it the whole coastline other than beaches (for instance, the yellow-highlighted areas in the attached image)? What if the sand holds moisture? Can't we classify it under the water class? If the latter is acceptable, it would seem to violate Anderson's classification scheme. I have found that the NLCD 2011 classification is a brief categorization whereas Anderson's is not. Can anyone suggest how to proceed with this issue?
Thank you very much in advance.
Does anyone have a fuzzy method to resolve mixed pixels in crop (sugarcane/rice) classification for a district-level area?
What is the accuracy of these images with regard to their geo-referencing, and how accurate are they for use with known control-point sites for geo-referencing?
I have no knowledge of how images are prepared for supervised classification in machine learning, and I would like to ask for help on this matter. I will be working with satellite images, and my variable of interest is the roofing material of houses. How do I prepare my training sample? What software is needed, and what would the program be? Your inputs and help will be greatly appreciated.
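As a minimal illustration of the workflow (labels digitized over known roofs, per-pixel band values as features, then a scikit-learn classifier; the random "spectra" below are stand-ins, not real data - in practice you would extract band values at labeled locations with e.g. rasterio):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_bands = 4
X_metal = rng.normal(0.6, 0.05, (100, n_bands))  # stand-in "metal roof" spectra
X_tile  = rng.normal(0.3, 0.05, (100, n_bands))  # stand-in "tile roof" spectra
X = np.vstack([X_metal, X_tile])
y = np.array([0] * 100 + [1] * 100)              # 0 = metal, 1 = tile

# Hold out 30% of the labeled pixels to estimate accuracy
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(accuracy_score(y_te, clf.predict(X_te)))   # separable toy data, near 1.0
```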
I want to implement intensity-based image registration for satellite images using mutual information. I want to calculate the mutual information between two images as a function of translation and rotation.
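A histogram-based mutual information estimate is a common starting point; a minimal numpy sketch (in practice you would evaluate this over a grid of candidate translations/rotations of one image and keep the transform that maximizes MI):

```python
import numpy as np

def mutual_information(img1, img2, bins=32):
    """Estimate MI from the joint intensity histogram of two images."""
    hist_2d, _, _ = np.histogram2d(img1.ravel(), img2.ravel(), bins=bins)
    pxy = hist_2d / hist_2d.sum()        # joint probability
    px = pxy.sum(axis=1)                 # marginal of img1
    py = pxy.sum(axis=0)                 # marginal of img2
    nz = pxy > 0                         # avoid log(0)
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))

rng = np.random.default_rng(1)
img = rng.random((64, 64))
noise = rng.random((64, 64))
# An image shares far more information with itself than with noise
print(mutual_information(img, img) > mutual_information(img, noise))  # → True
```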
Currently we are investigating dendrometric variables of some tree species in Bogotá, Colombia, and we need to calculate the crown diameter per tree (in an automatic or semi-automatic way), but we only have access to satellite images and an old inventory of tree heights.
We are searching for a method that can be used in Python or R or with an open-source software.
In spite of several campaigns and countless awareness activities, water wastage is still a persistent problem. In countries like India, especially in rural areas, this problem poses a great threat.
Is it possible to retrieve information relevant to this from the satellite image data and use it to build a systematic surveillance system for this?
If I have three successive satellite images (A, B, C) with almost 80% overlap, I can generate two disparity maps (AB and AC). The question is how to merge these two disparity maps to fill the gaps/holes and produce a more complete map, even with a few occluded areas.
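One simple merging scheme, assuming both disparity maps are referenced to image A and already scaled to a common baseline (the AB and AC baselines differ, so one map would first be rescaled by the baseline ratio), with holes marked as NaN:

```python
import numpy as np

def merge_disparity(d_ab, d_ac):
    """Fill holes in d_ab from d_ac; average where both maps are valid."""
    merged = np.where(np.isnan(d_ab), d_ac, d_ab)      # fill AB holes from AC
    both = ~np.isnan(d_ab) & ~np.isnan(d_ac)
    merged[both] = 0.5 * (d_ab[both] + d_ac[both])     # average where both exist
    return merged

d_ab = np.array([[1.0, np.nan], [3.0, np.nan]])
d_ac = np.array([[2.0, 5.0], [np.nan, np.nan]])
print(merge_disparity(d_ab, d_ac))
# valid-in-both averaged, AB holes filled from AC, AB-only kept,
# holes present in both maps remain NaN
```

A median or confidence-weighted combination (e.g. weighting by matching cost) would be a natural refinement over plain averaging.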
What scaling factor should be used to convert floating-point data into integers in order to perform unsupervised classification?
Can compression (before classification) increase the classification accuracy?
How can we improve the classification accuracy of remote sensing images (RSI) with the help of compression?
Dear researchers, can you please help me by providing links to related articles or your valuable opinion on these issues?
I am analyzing MODIS data in order to build a yield model from vegetation indices.
Recently I started using R, since I have found it is widely used in time-series remote sensing research.
I found several packages, but it is still confusing: what should I do first with my raster data downloaded from USGS, before I use the packages to analyze my stack of data?
If you have a best-practice tutorial, I would be glad if you could share it with me!
I am working on the paper "An Adaptive Pansharpening Method by Using Weighted Least Squares Filter". I downloaded the WLS filter code, but when I apply it to a satellite image it does not work. Please send me some more information about this.
I am going to estimate land surface temperature for a cloudy city and prepare an LST map for further analysis. I searched among Landsat images of the study area, but the cloud coverage is very high! (Since I prefer to use high-resolution images, I searched among Landsat images.)
I would be thankful if you provide me some solutions and suggestions.
SENTINEL-1 IW Level-1 products are Single Look Complex (SLC) and Ground Range Detected (GRD).
Which of them is better for soil moisture retrieval? Is there any difference when choosing one of them?
(The amount of data, and the data being free, are important to me.)
Below is the difference between these two products.
Level-1 GRD products consist of focused SAR data that has been detected, multi-looked and projected to ground range using an Earth ellipsoid model. The ellipsoid projection of the GRD products is corrected using the terrain height specified in the product general annotation. The terrain height used varies in azimuth but is constant in range. Phase information is lost. Ground range coordinates are the slant range coordinates projected onto the ellipsoid of the Earth. Pixel values represent detected magnitude. The resulting product has approximately square resolution pixels and square pixel spacing, with reduced speckle at the cost of reduced geometric resolution.
Level-1 SLC products consist of focused SAR data geo-referenced using orbit and attitude data from the satellite and provided in Zero-Doppler slant-range geometry and have been corrected for azimuth bi-static delay, elevation antenna pattern and range spreading loss. Slant range is the natural radar range observation coordinate, defined as the line-of-sight from the radar to each reflecting object. The products are in Zero-Doppler orientation where each row of pixels represents points along a line perpendicular to the sub-satellite track. The products include a single look in each dimension using the full TX signal bandwidth and consist of complex samples preserving the phase information.
SLC images are distributed as a GeoTIFF file per polarisation with pixel interleaved I and Q. Each I and Q value is 16 bits per pixel.
Does anyone know how frequently sand storms and dust storms that arise in the Middle East or North Africa travel to Pakistan and North India? I was wondering whether, in view of the already worsening air pollution levels in North India, dust and sand storms reaching the subcontinent may exacerbate the situation. How rare or common is it for such sand and dust storms to be carried from their place of origin (usually the Middle East and North Africa) and intermix with fog, or with haze intensified by smoke or other atmospheric pollutants, in a far-off location? Has any similar mixing of phenomena (dust storm and smog) been reported/documented/studied anywhere around the globe at any time, preferably also captured by polar-orbiting or geostationary satellites?
I was looking at a true-color (natural color) satellite image acquired around early afternoon on 29th Oct. 2017 by the Visible Infrared Imaging Radiometer Suite (VIIRS) on board the joint NASA/NOAA Suomi National Polar-orbiting Partnership (S-NPP) satellite. I've attached a screenshot of the image and provided the full link to the satellite imagery. These images have been stitched together to create a global mosaic; unlike MODIS, VIIRS does not show any data gaps (except sun glints!). I found this image particularly compelling because it clearly shows the sand storm picking up over northern Saudi Arabia and moving around Iraq, Iran and the Caspian Sea towards Afghanistan with the wind. I also think the Earth's west-to-east rotation plays a role in the movement and direction of the sand- and dust-laden wind, but their dynamics seem difficult to understand. The smog over North India and parts of Pakistan can be distinguished from the sand storm over the Middle East in this image. In North India this is the time of year when there are intentional crop fires due to the traditional slash-and-burn agricultural practice.
Does anyone know how to work with Sentinel-2 .jp2 files in R?
EarthExplorer provides .jp2 files for Sentinel-2, and I'm having trouble simply opening them in R (I'm using RStudio on Windows).
---- Answer update ----
What I was looking for was this:
Masita D. M. Manessa added an answer:
"I used the following steps (https://stackoverflow.com/questions/40044403/gdal-jp2-driver-missing-in-python):
gdal_chooseInstallation('JP2OpenJPEG') — it works."
Thank you Manessa! =)
(Machine learning & computer vision) I am looking for a public satellite image dataset with road and building masks. I already know of SpaceNet (NVIDIA, AWS) and the TorontoCity dataset (Wang et al.), but downloading SpaceNet via the AWS CLI is too inconvenient, and the TorontoCity dataset is not available yet.
I was reading some literature on forest canopy density change detection using Landsat images, and found that some papers normalize the Landsat bands before applying the formulas of the different indices used to derive the forest canopy density map. But I am not sure what normalization of bands means. Does it mean converting the band DN values into top-of-atmosphere reflectance values?
I am including 3 papers I have studied.
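If "normalization" does mean DN-to-TOA-reflectance conversion (it sometimes instead means a min-max stretch, so check each paper), the standard recipe uses the rescaling coefficients from the scene's MTL metadata plus a sun-angle correction; a sketch with the Landsat 8 Level-1 values (REFLECTANCE_MULT = 2.0e-05, REFLECTANCE_ADD = -0.1):

```python
import math

def dn_to_toa_reflectance(dn, sun_elev_deg, mult=2.0e-5, add=-0.1):
    """Convert a Level-1 DN to sun-angle-corrected TOA reflectance."""
    rho = mult * dn + add                          # uncorrected TOA reflectance
    return rho / math.sin(math.radians(sun_elev_deg))

# DN 10000 at 60 degrees sun elevation
print(dn_to_toa_reflectance(10000, sun_elev_deg=60.0))
```

For Landsat 5 TM, the conversion goes through radiance first (gain/bias per band, then the reflectance formula with Earth-Sun distance and solar irradiance), so the coefficients differ.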
I tried to select night-time Landsat 8 images through the "Night" option in the EarthExplorer browser (https://earthexplorer.usgs.gov/, Additional Criteria tab), and the answer is always "No Results Found". Does anyone know whether it is possible to download night-time Landsat images (in general) via Google Earth Engine, lv.eosda.com, USGS, etc.? I thought that some time ago it was possible to download night-time Landsat images.
Thank you in advance!
I am working on disparity estimation methods, using the sum of squared differences (SSD) method. The method is applied to two images with different views, and I obtained a disparity map containing zeros, ones and twos. My assumption is that regions covered with 0s in the map show similarity, and regions with 1s and 2s show dissimilarity. I would like to measure the similarity in the map in terms of the number of 1s covering the regions.
Is my methodology correct? Please advise me in this regard.
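On the interpretation: in SSD block matching the map values are disparities (the horizontal shift of the best-matching window), not similarity labels, so 0 means "best match at the same column" rather than "similar", and 1 or 2 means the match was found 1 or 2 pixels away. Counting how many pixels take each value is then just a histogram. A minimal 1-D search sketch:

```python
import numpy as np

def ssd_disparity(left, right, max_disp=2, half_win=1):
    """For each pixel, test candidate disparities 0..max_disp and keep
    the shift with minimum sum-of-squared-differences window cost."""
    rows, cols = left.shape
    disp = np.zeros((rows, cols), dtype=int)
    for r in range(half_win, rows - half_win):
        for c in range(half_win + max_disp, cols - half_win):
            patch = left[r-half_win:r+half_win+1, c-half_win:c+half_win+1]
            costs = [np.sum((patch - right[r-half_win:r+half_win+1,
                                          c-d-half_win:c-d+half_win+1])**2)
                     for d in range(max_disp + 1)]
            disp[r, c] = int(np.argmin(costs))
    return disp

# Toy pair: the right image is the left image shifted 1 px, so the
# recovered disparity in the valid interior should be 1 everywhere
left = np.tile(np.arange(10, dtype=float), (5, 1))
right = np.roll(left, -1, axis=1)
d = ssd_disparity(left, right, max_disp=2)
print(np.bincount(d.ravel()))   # histogram of disparity values
```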
Can anyone tell me the detailed procedure to estimate paddy crop acreage using SAR data? I also want to know which freely available SAR data would be best for this.
I'm doing research on mapping air pollution and its spatial distribution using a set of multi-date Landsat data. Can anybody help me with a step-wise methodology? That would help me a lot. Thanks in advance.
Hi everyone! I just want to ask this question since it is important, especially for multi-temporal studies. Based on experience, there are instances where using different kinds of Landsat images causes the images to be misaligned.
I'm a researcher currently working on land surface temperature over a particular area, and I want to know the difference between LST and skin temperature to help me choose the satellite datasets to be used in the research.
I have ship-borne gravity data along some profiles of a region, as well as satellite gravity gridded data of the region. I want to combine both the data and prepare a gridded gravity data of that region.
I am giving up: I cannot find a way to read EUMETSAT NWC SAF products such as cloud analysis, cloud top height, cloud masking, etc. The products are provided on a "space view perspective" grid in GRIB2 format, and it seems this projection is not supported by any well-known software for handling satellite products, nor by NCL, which I mostly use.
Has anyone managed to handle this kind of data?
I need to perform DN-to-reflectance conversion on a subset of an image, for a specific band, rather than on the full multispectral image. I am able to do it on the multispectral image.
Kindly let me know that:
- What should the incidence angle of RADARSAT or any other SAR data be for flood mapping?
- What would be the best combination of polarization (HH, VV, HV, etc.) and incidence angle for flood mapping and related studies?
Thanks and regards
Satellites (like TRMM) provide total column water vapor over the sea, but over land the albedo interferes. My objective is to measure water vapor in a cumulonimbus over a tropical Andean area.
I applied the FLAASH atmospheric correction tool in ENVI to Landsat images and got the result given below. The area is located in high mountains with a significant amount of snow and glaciers. My question is how I can validate this result, i.e., whether it is correct or not. Also, I see only green where the area is (mostly) covered by snow in the false-color combination, and pink in true color. I thought it was noise, but when I tried to remove the noise it stayed the same. What might be the reason for this? Can anybody help me in this regard?
Thanking you in advance.
Hi, I want to check whether the De Grandi speckle filter works on just 2 temporal SAR images, or whether a longer multi-temporal SAR stack is required to get the best results from it.
Thanks a million!
I need to know whether it is scientifically valid to fuse/pan-sharpen Landsat 8 surface reflectance products with the pan band of the respective scene. Landsat reflectance product details can be found at the link below. It should be mentioned that one needs to order the surface reflectance product separately, and that it contains only the 7 multispectral bands (30 m), not the thermal IR and pan bands. So, again, my question is: is it valid to fuse the 7 bands (30 m) of the surface reflectance product with the normal (not surface reflectance) pan band (15 m)?
N.B. It would be a great help if someone comes up with some reference.
Can anybody point me in the right direction? I can't apply multi-temporal speckle filtering to my 2007, 2008, 2009, 2010 and 2015 ALOS PALSAR mosaic datasets. It just gives me an error saying that the bands need to have a unit.
Does this tool only work on standard products (L1.1 and L1.5)?
I would like to work on the analysis of SAR images and optical/multispectral images. Hence, I need a freely available database of such images. Images of the same location should be captured at about the same time (maybe the same week or month).
I am undertaking a project on the identification of tree species using satellite data, namely the multispectral data generated by Sentinel-2. Is there a way to use these data to identify trees to species level, or failing that, to identify whether a tree is broadleaved/coniferous, etc.?
I am trying to convert the DN values of WorldView-3 multi-spectral to surface reflectance using the ENVI WV-3 Radiometric Calibration and FLAASH tools.
I can easily convert the DN values to radiance, but the calculated radiance values for the WV-3 coastal band do not seem right (large spike values; please see the attachment for an example of vegetation spectra).
As a result, after conducting the FLAASH atmospheric correction, the reflectance in coastal and blue bands (Right) is also abnormal (vegetation spectra in blue band even has negative values, please see the attachment). My best guess is that the atmospheric scattering has huge impacts on the short-wavelength bands of WV-3 on this imagery, and it is too "wrong" to be corrected by WV-3 Radiometric Calibrations and FLAASH. My study site is in tropical region (Brazil) and the WV-3 acquisition time is in June. Any suggestions? Thanks a lot!
I am undertaking a project looking at possible ways to identify tree species using satellite data, and am considering using the near-infrared imagery provided by Sentinel-2. However, to do this I will need a list of individual trees' spectral signatures, and I was wondering whether a complete, accessible database exists.
Hi, I have classified my satellite images, done change detection and produced my land surface temperature maps, but I have no access to land-use data or the national topographic maps of the study area.
How do I carry out my accuracy assessment for the classification and change detection?
How can I validate the LST?
Sometimes we prefer to use satellite data at a different resolution. When resampling, aggregating pixels by simple averaging, or resizing a data set to a different resolution, which should I do first:
A) apply the mask to negative values, cloud pixels and shadow pixels first, and then resample; or
B) perform the resampling/aggregation first, and then apply the mask?
Which is the correct sequence? Please explain.
thanks in advance
What is the use of the products "num_observations", "250m Reflectance Band Quality" and "obs_cov" in the MODIS data set? What do the bit number and bit combination in Table 1 below mean? How could I use them to mask clouds?
These were collected from the MODIS product description linked below.
Thanks in advance.
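On the bit numbers: QA layers are bit-packed integers, so each field in the table occupies a contiguous run of bits, and you decode a field by right-shifting to its start bit and masking its width. A sketch (the 2-bit field at bits 0-1 is illustrative; check the product's own table for the actual cloud-state bits and their code values):

```python
import numpy as np

def extract_bits(qa, start_bit, num_bits):
    """Decode a bit-packed QA field: shift to its start bit, mask its width."""
    return (qa >> start_bit) & ((1 << num_bits) - 1)

qa = np.array([0b00, 0b01, 0b10, 0b1101], dtype=np.uint16)
state = extract_bits(qa, 0, 2)    # hypothetical 2-bit field at bits 0-1
print(state)                      # → [0 1 2 1]
clear_mask = state == 0           # keep only pixels coded "clear" (assumed code 0)
```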
Research shows that remotely sensed satellite images can be used to study water and air quality. In low-income countries it is expensive to obtain high-resolution satellite images for such studies, so I am wondering whether one can buy a Phantom drone and use the images it takes to do similar studies.