Questions related to SAR Interferometry
Hi, I’m a beginner in satellite image analysis. I want to know the lat/lon coordinates of some bursts of a Sentinel-1 image. I looked at the file names in the downloaded zip but couldn’t find any promising files (file structure attached). Can someone show me how to obtain them?
Context: my goal is to generate a coherence image and project it in QGIS. I used SNAP, following this tutorial up to p. 12 (https://step.esa.int/docs/tutorials/S1TBX%20TOPSAR%20Interferometry%20with%20Sentinel-1%20Tutorial_v2.pdf), but the coordinates were somehow lost at the first step (importing the product and choosing bursts to produce a split file). I'm not sure why, but it apparently happens with other satellites too (https://earthenable.wordpress.com/2016/11/21/how-to-export-sar-images-with-geocoding-in-esa-snap/). I was able to produce the coherence image without coordinates, so I’m thinking that if I can get the coordinates from the Sentinel file, I can just add them to the GeoTIFF myself.
I also want to ask: is this idea wrong? Are the coordinates in the Sentinel file different from those of the coherence image, since it undergoes back-geocoding?
It is a method used to create an interferometric time series, developed by Hooper [2004-2007] following the persistent scatterer approach of Ferretti et al. [2001]. It produces surface line-of-sight deformation from the multi-temporal radar repeat passes and mitigates the temporal phase decorrelation due to instrument errors, DEM error, and the atmospheric contribution to the phase delay.
SNAP2StaMPS is a Python workflow developed by José Manuel Delgado Blasco and Michael Foumelis in collaboration with Prof. A. Hooper to automate the pre-processing of Sentinel-1 SLC data and its preparation for ingestion into StaMPS. Much appreciation goes to the great work of these honorable professors.
The StaMPS method follows 8 steps that are carried out in MATLAB on a virtual Unix machine or a Linux system.
If I understood this correctly, step 3 selects separate groups from the PS pixels initially selected in step 2 (each subset contains a pre-determined density of pixels per square kilometre, and those pixels have random phase). The step-2 selection was based on each pixel's calculated spatially correlated phase, the spatially uncorrelated DEM error (which is subtracted from the remaining phase), and temporal coherence.
Then, in step 4 (weeding), those groups of pixels are further filtered and oversampled. In each group, the pixel with the highest SNR is taken as a reference pixel and the noise of the adjacent neighbouring pixels is calculated; then, based on a pre-determined value ('weed_standard_deviation'), some of those neighbouring pixels are dropped and the others are kept as PS pixels.
A) Am I correct?
B) What is a pixel with random phase?
C) What is the pixel noise? Is it related to having multiple elementary scatterers, none of which is dominant, so that their backscattered signal is received with a different collective phase at each acquisition even if the ground containing those scatterers was stable over time?
D) Due to the language barrier, I have so far only read Hooper's 2007 paper.
E) What is the difference between the spatially uncorrelated DEM (look angle) error that is filtered in StaMPS step 3 and removed in step 5, and the spatially correlated look angle error that is removed in StaMPS step 7?
Some test results are attached, and I would appreciate it if someone could tell me how I may remove the persistent atmospheric contribution. I have only used the basic linear APS approach in the TRAIN toolbox developed by Bekaert [2015].
I'm working with SARscape, and during the "geocoding and radiometric calibration" process I encounter the message "Memory not found", even though there is no problem or limitation with the CPU, RAM, or free hard-disk space. The error report file is attached.
What might be the reason for the problem?
I have around 100 Sentinel-1 SLC images. I want to compute coherence between successive images (1->2, 2->3… and so on). For this, the conventional method is to read images 1 and 2, select the swath, apply orbit files, coregister them using the back-geocoding operator, apply ESD, and then compute the coherence; the same procedure is then repeated for images 2 and 3, and so on.
I want to discuss another approach here
- Read image 1 and coregister all other images (2,3,4…100) with respect to 1.
- Now, for images 1 and 2, we can compute the coherence as in the conventional method above.
- For the image-2 slave and the image-3 slave, we stack them together, considering them already coregistered via image 1, and compute coherence.
I want to understand whether this is a valid way to do it or not.
1. Can we assume in the case of SAR SLC product image 2 and image 3 are also coregistered when they are coregistered with image 1?
2. Do different incident angles affect the coregistration results?
3. Is there any effect of different range and azimuth pixel sizes?
4. What should be the optimal method to perform successive time-series analysis?
Hello dear researchers, I need guidance related to derampdemod. I am working with a single SLC IW product, and after applying derampdemod the signal shows stripes. The process is described in https://sentinels.copernicus.eu/documents/247904/1653442/Sentinel-1-TOPS-SLC_Deramping.pdf/b041f20f-e820-46b7-a3ed-af36b8eb7fa0?t=1533744826000 — has anyone worked with the process given in that link? I actually need to reproduce its Figure 2, the spectrum after derampdemod. I have attached one intensity image; after derampdemod I encountered stripes. Looking forward to your advice.
So far, I’ve tried to contact as many official distributors of SAR products as I can. One common response I got from these distributors is that, although they're happy to take orders for new acquisitions, they don't have archived data for my study areas in Ethiopia. Although near-real-time sub-metre acquisitions are very expensive, I could manage that. High-resolution (sub-metre) archived data are of high importance for my research, so it's unfortunate that I haven't succeeded in getting them so far.
To summarize, archived data collected between 2005 and 2010, in VV polarization, are mandatory for my study to conduct interferometric analyses. So I wonder if anyone can help me out of this problem. Any commercially available archived data with the above-mentioned requirements, in either ascending or descending orbit, are welcome.
Many open-source tools for SAR interferometry (InSAR) are available to everyone. But is there any persistent and distributed scatterer (PSDS) InSAR software available? If there is a real gap, I would like to help make such software available.
We perform differential SAR interferometry to analyze land movement. In the case of landslides, the area generally suffers from decorrelation. For confidence in the result, what minimum coherence threshold should be used so that outliers are removed?
I need to know the computing requirements for SAR time-series analysis, and how many SAR rasters (images) should be analyzed?
SAR data show decorrelation over highly vegetated terrain. Has any of the latest software overcome this shortcoming?
I have the scattering-matrix images (8 images: S11_real, S11_imaginary, and similarly for S22, S12, S21) and I need to create the coherency-matrix images (6 images: the diagonal and upper elements T11, T22, T33, T12, T13, T23). The sensor is monostatic, so S12 = S21. How can this be done using Python/MATLAB? Kindly share the required library/code or the equations.
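Since the question asks for the equations, here is a minimal NumPy sketch, assuming the eight real/imaginary bands have already been combined into complex arrays (e.g. S11 = S11_real + 1j*S11_imag). It forms the per-pixel Pauli scattering vector and the six independent coherency-matrix elements; in practice T is then multilooked (boxcar-averaged) before use.

```python
import numpy as np

def coherency_elements(S11, S12, S22):
    """Per-pixel Pauli coherency-matrix elements for monostatic data (S21 = S12).

    Inputs are complex arrays of identical shape; returns the six
    independent elements T11, T22, T33 (real) and T12, T13, T23 (complex).
    """
    # Pauli scattering vector k = (1/sqrt(2)) * [S11+S22, S11-S22, 2*S12]^T
    k1 = (S11 + S22) / np.sqrt(2)
    k2 = (S11 - S22) / np.sqrt(2)
    k3 = np.sqrt(2) * S12
    # T = k k^H (outer product), written out element by element
    T11 = np.abs(k1) ** 2
    T22 = np.abs(k2) ** 2
    T33 = np.abs(k3) ** 2
    T12 = k1 * np.conj(k2)
    T13 = k1 * np.conj(k3)
    T23 = k2 * np.conj(k3)
    return T11, T22, T33, T12, T13, T23
```

The remaining lower-triangle elements are just the conjugates (T21 = conj(T12), etc.), which is why only six images are needed.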
We performed some tests with a SAR system and observed that a change of chip temperature causes a significant change of (interferometric) phase values. However, this change seems to be stronger the further away a target (corner reflector) is, and it seems to be a linear trend. Is there an explanation for this? Why is there such a distance-dependent shift?
Thanks for your help and suggestions!
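One candidate explanation worth checking (an assumption, not a diagnosis of this particular system): if the temperature change drifts the oscillator/carrier frequency by df, the two-way delay tau = 2R/c converts that drift into a phase change d_phi = 2*pi*df*tau = 4*pi*R*df/c, which grows linearly with range R. A tiny sketch of the arithmetic, with a purely hypothetical drift value:

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def phase_shift(r_m, df_hz):
    """Interferometric phase change [rad] at range r_m [m] caused by a
    carrier-frequency drift df_hz [Hz]: d_phi = 2*pi*df*(2R/c)."""
    return 4.0 * math.pi * r_m * df_hz / C

# Doubling the range doubles the shift: a linear, distance-dependent trend,
# matching the observed behaviour qualitatively. df_hz is hypothetical here.
```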
I'm looking for a method to find the Doppler centroid of a complex SAR image. Can anyone suggest the easiest way to estimate the Doppler centroid of a Sentinel-1 image?
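For reference, one of the simplest estimators is the average phase of the azimuth autocorrelation at lag one (the correlation Doppler estimator usually attributed to Madsen). A minimal NumPy sketch, assuming the SLC block is already in memory as a complex array; note the estimate is only known modulo the PRF (the ambiguity number must come from the geometry or metadata):

```python
import numpy as np

def doppler_centroid(slc, prf):
    """Estimate the Doppler centroid [Hz] of a complex SLC block.

    Uses the average phase of the azimuth autocorrelation at lag one:
    slc has shape (azimuth, range), prf is the pulse repetition
    frequency in Hz. The result is ambiguous modulo the PRF.
    """
    acc = np.sum(slc[1:, :] * np.conj(slc[:-1, :]))
    return prf * np.angle(acc) / (2.0 * np.pi)
```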
I'm working on SAR image coregistration. I want to split my image into two looks for the spectral diversity method, but I have not found the complete formulas in any article.
Does anyone know how I can do this? Or do you know the paper that fully explains the algorithm?
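For illustration, a common way to form the two looks is to split the azimuth spectrum into its lower and upper halves and transform each half back. This NumPy sketch assumes baseband data (for TOPS data, deramping comes first) and ignores Doppler-centroid centring, which a real implementation must handle:

```python
import numpy as np

def split_looks(slc):
    """Split an SLC into two azimuth sub-band looks (for spectral diversity).

    slc: complex array (azimuth, range). The azimuth spectrum is shifted to
    centred order, cut in half, and each half is transformed back. A sketch:
    real processors centre the sub-bands on the Doppler centroid.
    """
    S = np.fft.fftshift(np.fft.fft(slc, axis=0), axes=0)
    n = S.shape[0]
    low, high = np.zeros_like(S), np.zeros_like(S)
    low[: n // 2] = S[: n // 2]      # lower azimuth-frequency half
    high[n // 2 :] = S[n // 2 :]     # upper azimuth-frequency half
    look1 = np.fft.ifft(np.fft.ifftshift(low, axes=0), axis=0)
    look2 = np.fft.ifft(np.fft.ifftshift(high, axes=0), axis=0)
    return look1, look2
```

The spectral-diversity shift estimate then comes from the phase of the cross-interferogram between the two looks.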
Without using the zip file with metadata, is there any way to use just the GeoTIFF for flood detection (using snappy or any other API)? Any insight will be highly appreciated.
When calculating interferometric coherence, why can't you do so on a pixel-by-pixel basis? I know the equation for estimating coherence, γ = |⟨S1·S2*⟩| / √(⟨S1·S1*⟩·⟨S2·S2*⟩), where S1 and S2 are the two single-look complex images and ⟨·⟩ denotes averaging over the estimation window. And I know this calculation uses a maximum-likelihood estimator, but why do you need to specify an estimation window, and why can't the estimation window size be 1?
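To make the role of the window concrete, here is a minimal windowed estimator (a sketch using SciPy's `uniform_filter` as the boxcar average). With a window of one pixel the numerator and denominator are identical, so the magnitude is exactly 1 for every pixel and carries no information; only spatial averaging makes the estimate meaningful:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def coherence(s1, s2, win=5):
    """Windowed coherence magnitude of two coregistered complex SLCs.

    Numerator and denominator are averaged over a win x win boxcar;
    with win = 1 the result is identically 1 for every nonzero pixel.
    """
    def boxcar(x):
        # uniform_filter works on real arrays, so filter re/im separately
        return uniform_filter(x.real, win) + 1j * uniform_filter(x.imag, win)
    num = boxcar(s1 * np.conj(s2))
    den = np.sqrt(uniform_filter(np.abs(s1) ** 2, win)
                  * uniform_filter(np.abs(s2) ** 2, win))
    return np.abs(num) / np.maximum(den, 1e-30)
```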
I am looking for a book that teaches how to design a repeat-pass interferometric SAR system, considering the necessary system parameters, and how to simulate them in MATLAB.
In the interferometric water cloud model, we estimate 3 unknown parameters via non-linear least-squares regression, but if we do not provide proper initial values for those parameters, the regression does not converge. I am trying to find out how to assign the initial parameters correctly.
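As an illustration of supplying initial values, here is a hedged SciPy sketch. The model function below is a stand-in three-parameter exponential, not the actual interferometric water cloud model, and the p0/bounds values are examples only; the point is that p0 should come from physically meaningful guesses (e.g. derived from the data extremes) and bounds keep the solver in a plausible region:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical 3-parameter model shape, standing in for the real model:
def model(x, a, b, c):
    return a + b * np.exp(-c * x)

# Synthetic data for illustration
x = np.linspace(0, 5, 50)
y = model(x, 0.2, 0.7, 1.3) + 0.01 * np.random.default_rng(0).standard_normal(50)

# p0 from data-driven guesses; bounds constrain the search so the
# iteration converges instead of wandering off.
p0 = [y.min(), y.max() - y.min(), 1.0]
popt, pcov = curve_fit(model, x, y, p0=p0, bounds=([0, 0, 0], [1, 1, 10]))
```

The same pattern (data-driven p0 plus bounds) applies whatever the three physical parameters are.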
I have written a Sentinel-1 Level-0 image formation tool in Matlab. I am able to extract and decode all the information in the Level-0 file, both for image formation and for the spacecraft position, and the tool can extract each burst and process it into a complex image chip. What I cannot yet do is determine an exact antenna beam centre ground/pointing location (which should then allow me to map the formed complex pixels to lat/lon). My current method/algorithm is biased in both lat (~0.3 degrees) and lon (~0.1 degrees) at a target latitude of ~45N. I have assumed that the velocity vector defines the x axis, the ECEF normal to the spacecraft defines the z axis, and their cross product is the y axis. I also assume the quaternions describe how to rotate the antenna look direction from the y axis towards the earth (though I am not confident in my application of the rotational values...). I further assume a WGS-84 earth model and an imaging location at sea level, and find where the rotated y axis intersects the WGS-84 model. But, as stated, this yields a significant earth-surface error relative to the positioning provided in the available Level-1 SLC product. I'm very open to exchanging information and discussing this less-documented-than-desired aspect of the algorithm.
I have written an image formation algorithm for Sentinel-1 Level-0 IW data. To properly form the image, one *must* perform the frequency unwrapping on the azimuth data. I have found that I am able to perform azimuth compression on this data without ever reaching the documented algorithmic step of the second unwrap.
How and why, exactly, does Sentinel-1 have this frequency wrapping of azimuth data? What benefit is there to the documented additional processing steps that lead to the documented second unwrapping?
The following data comes directly from S1B_IW_SLC__1SDV_20180426T063818_20180426T063845_010651_013707_1395.SAFE\annotation\s1b-iw1-slc-vv-20180426t063819-20180426t063844-010651-013707-004.xml:
I cannot find *any* set of equations to convert the provided quaternions (q0-q3) to pitch/roll/yaw and vice versa. Yes, I have started with all of the ESA published documents and then expanded out to multiple other documents on the web. I have coded every variant I have encountered in Matlab, but I am unable to get numbers that agree even to the ones digit. I have even tried different application sequences (instead of just the published 3-2-1 order).
Worse, from a documentation point of view, I believe that here Q3 is the scalar component, but almost all Sentinel-1 documentation defines Q0 to be the scalar component of the quaternion. I get that conventions are generally not consistent and/or both are acceptable - BUT a given system should be self-consistent.
As you might expect, this is a key step in associating each formed image pixel with an exact ground coordinate. Without an understanding of how to convert (q0-q3) to pitch/roll/yaw, any attempt I make to use them for vehicle rotation/orientation will clearly be wrong. "Close" isn't good enough - I need the full, exact understanding so I can precisely apply the vehicle orientation in my image formation.
Please, don't just point me to Wikipedia or some web page - try the numbers first - I have.
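For what it's worth, here is the standard aerospace 3-2-1 (yaw-pitch-roll) conversion for a scalar-FIRST unit quaternion, as a small Python sketch. Since the convention question is exactly the issue here, the docstring notes how to test the scalar-last ordering as well; trying both orderings on known data is the quickest way to find out which one a product actually uses:

```python
import math

def quat_to_euler321(q0, q1, q2, q3):
    """Scalar-FIRST unit quaternion (q0 scalar) -> 3-2-1 yaw/pitch/roll [rad].

    If the file actually stores the scalar component last, call
    quat_to_euler321(q3, q0, q1, q2) instead. The asin argument is clamped
    to guard against rounding just outside [-1, 1].
    """
    roll = math.atan2(2 * (q0 * q1 + q2 * q3), 1 - 2 * (q1 ** 2 + q2 ** 2))
    pitch = math.asin(max(-1.0, min(1.0, 2 * (q0 * q2 - q3 * q1))))
    yaw = math.atan2(2 * (q0 * q3 + q1 * q2), 1 - 2 * (q2 ** 2 + q3 ** 2))
    return yaw, pitch, roll
```

The frame in which the attitude is expressed (ECI vs. ECEF vs. orbital) matters as much as the ordering, so an apparent mismatch may also come from comparing angles in different frames.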
I want to make an interferogram using ALOS PALSAR-2, but the data I got are ALOS PALSAR-2 imagery at level 1.5, while the SARscape tutorial I learnt from uses ALOS PALSAR-2 level 1.1. Can this be done with level 1.5 data, and if so, where can I find a tutorial?
I need to find free SAR images (spaceborne or airborne).
In particular, I need complex SAR images as well as stereo SAR images of the same area,
so I can examine the results of both interferometric and radargrammetric algorithms.
I am currently working with SLC data (Level 1.1) acquired by the ALOS PALSAR sensor, but I am not able to display the image using MATLAB. Can someone please tell me the important parameters I have to look at while processing SLC data, as I wish to obtain the SPAN of the SLC data?
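A minimal Python sketch of the display side, assuming the CEOS samples have already been parsed into complex arrays (ALOS PALSAR Level 1.1 stores float32 I/Q pairs, but the CEOS reader itself is not shown here). The usual reason an SLC "won't display" is its huge dynamic range, so the amplitude should be shown in dB rather than with a linear stretch:

```python
import numpy as np

def make_complex(i, q):
    """Combine separately read I and Q bands into one complex array."""
    return i.astype(np.float32) + 1j * q.astype(np.float32)

def span(channels):
    """SPAN = total power, the sum of |S|^2 over the available channels
    (for a single-channel SLC this is simply |S|^2)."""
    return sum(np.abs(s) ** 2 for s in channels)

def to_db(power, floor_db=-40.0):
    """Log-scale a power image for display, clipped to a dB floor;
    pass the result to e.g. matplotlib's imshow with a gray colormap."""
    db = 10.0 * np.log10(np.maximum(power, 1e-30) / power.max())
    return np.clip(db, floor_db, 0.0)
```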
Kindly suggest a step-by-step procedure (cheat sheet or crib sheet) for SAR image analysis, particularly for creating interferogram fringes.
Also, please suggest a compatible open-source SAR image analysis tool.
Can anyone please help me? I don't know what the problem is: I downloaded Sentinel-1 data (Level-1 SLC) and opened it in the Sentinel-1 Toolbox. But when I add the products to the InSAR coregistration window, there are no layers to select for the master image, and the toolbox always reports the problem: "Operator 'CreateInSARStackOp': Value for 'Slave Bands' is invalid". What is the problem? Do I have to preprocess the SLCs?
I already debursted them, but that doesn't help either.
Thank you for your advice!
I have been trying to implement the Wavenumber Domain Algorithm (WDA) to process simulated FMCW-SAR data. For those who have access to the IEEE website, the paper describing this algorithm can be found at the link at the bottom. The algorithm is slightly different from the common WDA for pulsed SAR: the main difference is the Stolt interpolation, which now takes into account the movement of the sensor during the sweep.
This is my first time trying to focus SAR data, so I am having a bit of a hard time. I was wondering whether there are experienced SAR processing people out there who are perhaps familiar with FMCW-SAR too. If you are out there... well, just have a look at my Matlab code (see attachment).
The FFT-based spectrum of the signal was compared to the one obtained via the Principle of Stationary Phase (PoSP). It looks like there is a good match until the Stolt interpolation. I am not at all convinced by my Stolt interpolation, and I would like to ask two things:
- How should the vector containing the mapped range frequencies be defined? See the block "RE-MAPPING RANGE FREQUENCIES" in section "STOLT INTERPOLATION" of my Matlab code. What I did: I took the largest mapped range frequency and used it to define a new sampling frequency, which was then used to define the vector of mapped range frequencies. See code rows 219 - 234.
- The unambiguous range interval in FMCW-SAR is directly proportional to the sampling frequency. Unfortunately, when applying point 1, the slant-range extent is no longer equal to the unambiguous range. Compare 'Ru' to 'c*tp(end)/2'. Why is that?
Overall, I would say there is something quite wrong with my Stolt interpolation. Is there anyone who can help me out?
Update: Matlab attachment was eliminated, see new attachment in the response below.
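For comparison, here is the classic pulsed-SAR Stolt remapping as a NumPy sketch: for each azimuth-frequency row, the 2-D spectrum is re-evaluated on a shifted range-frequency axis via 1-D interpolation. The FMCW variant in the paper adds the motion-during-sweep terms on top of this mapping, so treat the sketch only as a structural reference, not as the algorithm under discussion:

```python
import numpy as np

def stolt_interp(S2, f_eta, f_tau, f0, v, c=299_792_458.0):
    """Classic pulsed-SAR Stolt mapping of a 2-D spectrum.

    S2: complex spectrum with axes (azimuth freq f_eta, range freq f_tau);
    f0: carrier frequency [Hz]; v: platform velocity [m/s].
    New and old range-frequency axes are related by
        f_tau_old = sqrt((f0 + f_tau_new)**2 + (c*f_eta)**2/(4*v**2)) - f0,
    evaluated row by row with np.interp on the real and imaginary parts
    (a proper implementation would use a windowed sinc interpolator).
    """
    out = np.zeros_like(S2)
    for i, fe in enumerate(f_eta):
        f_old = np.sqrt((f0 + f_tau) ** 2 + (c * fe) ** 2 / (4 * v ** 2)) - f0
        out[i] = (np.interp(f_old, f_tau, S2[i].real, left=0, right=0)
                  + 1j * np.interp(f_old, f_tau, S2[i].imag, left=0, right=0))
    return out
```

At zero azimuth frequency the mapping is the identity, which is a handy sanity check for any Stolt implementation.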
Interferogram generation is done on TanDEM-X pairs in CoSSC format. Can I now calculate the phase standard deviation and draw conclusions about how accurate the interferogram is over a specific (distributed) target? Additionally, how do multilooking of the interferogram and different coherence windows affect this accuracy?
I am planning to register missile-borne SAR images with optical pictures and want to know the differences between missile-borne and airborne SAR images.
As mentioned in Fialko et al. [2001], to obtain 3-D displacement from InSAR, a linear system of equations should be solved for each pixel:
1. Two equations from the ascending and descending orbits:
[Un sin φ − Ue cos φ] sin θ + Uu cos θ + δlos = dr    (1)
where
φ … the azimuth of the satellite heading vector (positive clockwise from north)
θ … the radar incidence angle at the reflection point
dr … the LOS displacement at the reflection point
δlos … the relative measurement error
2. One equation for the azimuth offset from the descending orbit:
Un cos φ + Ue sin φ + δazo = dazo    (2)
My question is how to obtain dazo from an unwrapped differential interferogram.
To resolve the 3-D surface displacement from DInSAR, it is required to know the incidence angle at each target pixel and the azimuth of the satellite heading vector (positive clockwise from north), to form three equations in three unknowns (Un, Ue, and Uu) for each pixel.
The heading azimuth differs according to the look direction of the satellite; I use ERS and ENVISAT data.
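Once φ, θ and dr are known per pixel for both orbits, plus dazo from the descending pass, the three equations above can be stacked and solved directly. A minimal NumPy sketch (angle conventions exactly as defined in Eqs. (1)-(2), all angles in radians, error terms absorbed into the residual):

```python
import numpy as np

def solve_3d(phi_a, th_a, dr_a, phi_d, th_d, dr_d, dazo_d):
    """Per-pixel 3-D displacement (Un, Ue, Uu) from ascending LOS,
    descending LOS, and the descending azimuth offset:
        [Un*sin(phi) - Ue*cos(phi)]*sin(theta) + Uu*cos(theta) = dr
         Un*cos(phi) + Ue*sin(phi)                             = dazo
    phi = heading azimuth (clockwise from north), theta = incidence angle.
    """
    A = np.array([
        [np.sin(phi_a) * np.sin(th_a), -np.cos(phi_a) * np.sin(th_a), np.cos(th_a)],
        [np.sin(phi_d) * np.sin(th_d), -np.cos(phi_d) * np.sin(th_d), np.cos(th_d)],
        [np.cos(phi_d),                 np.sin(phi_d),                0.0],
    ])
    b = np.array([dr_a, dr_d, dazo_d])
    # Least squares, so extra observations (e.g. both azimuth offsets)
    # can be appended as additional rows later
    return np.linalg.lstsq(A, b, rcond=None)[0]
```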
I am working on SAR interferometry with some TSX stripmap-mode images. I have some choices for "Processor Gain Attenuation" (0, 10, and 20 dB), but I have no idea what this value means.
Could you give me some suggestions?
Could you tell me which software is best for TSX interferometry processing? Thank you!
After forming an interferogram, correcting its phase, and applying a DEM to remove the topographic contribution, the result can be called a differential interferogram (DInSAR), which is used for measuring crustal displacement.
How do I create a displacement map from this DInSAR product?
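For the line-of-sight component, the standard conversion from unwrapped differential phase to displacement is d = −λφ/(4π), i.e. one fringe (2π) corresponds to λ/2 of LOS motion because the signal travels the path twice. A minimal sketch (the sign convention varies between processors, so it should be validated against an area of known motion):

```python
import numpy as np

def los_displacement(unwrapped_phase, wavelength):
    """Unwrapped DInSAR phase [rad] -> line-of-sight displacement.

    Output is in the units of wavelength; one fringe (2*pi rad)
    corresponds to wavelength/2 of LOS motion (two-way path).
    """
    return -wavelength * unwrapped_phase / (4.0 * np.pi)

# e.g. C-band (ERS/Envisat/Sentinel-1-like), wavelength ~ 0.056 m:
# one fringe corresponds to ~2.8 cm of LOS displacement
```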
I need to generate a DInSAR to study the surface displacement
In NEST, it is required to first create two pairs:
1. TOPO pair
2. DEFO pair
The information mentioned in the user manual is not enough,
and I need help with these steps in the software.
I am currently working on an HPC cluster with 16 available threads, so I put the command `export OMP_NUM_THREADS=16` at the top of my bash script, hoping that the number of threads would be set for all following GAMMA functions that use OpenMP. Unfortunately this is not working (e.g. when running offset_pwr): the default number of 4 threads is still used. Does anybody have any suggestions?
Hey there. I'm having trouble processing some full-polarimetric SLC Radarsat-2 imagery.
On NEST DAT 4C, on "SAR Tools | Multilooking", I have two options of multilooking: (1) is GR Square Pixel and (2) is independent looks.
My image has a pixel size of ~11 m in range and ~5 m in azimuth. When I select GR Square Pixel and configure 1 look in the range direction, NEST suggests 3 looks in azimuth; if I select 2 looks in range, NEST suggests 7 looks in azimuth.
I don't understand these multilook suggestions: they do not give square pixels. Is there any difference between "ground-range multilooking" and "normal" multilooking?
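To illustrate what the "GR Square Pixel" option aims at, here is a small sketch: it projects the slant-range pixel to ground range with an assumed incidence angle (read the real one from the product metadata) and picks the azimuth look count that makes the multilooked ground pixel roughly square. It uses hypothetical numbers and will not necessarily reproduce NEST's exact suggestions, which also account for the varying incidence angle across the swath:

```python
import math

def azimuth_looks(slant_rg_m, az_m, incidence_deg, rg_looks=1):
    """Azimuth looks giving a roughly square GROUND pixel.

    The slant-range pixel projects to ground range as
    slant / sin(incidence), which is why 'GR Square Pixel' suggests
    more azimuth looks than a naive slant-range ratio would.
    """
    ground_rg = slant_rg_m / math.sin(math.radians(incidence_deg))
    return max(1, round(rg_looks * ground_rg / az_m))
```

This is also the answer to the "normal" vs "ground-range" multilooking distinction: normal multilooking balances slant-range pixel sizes, while GR multilooking balances them after projection to the ground.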