SAR-SHARPENING IN THE KENNAUGH FRAMEWORK
APPLIED TO THE FUSION OF MULTI-MODAL SAR AND OPTICAL IMAGES
A. Schmitt1 and A. Wendleder2
1 Munich University of Applied Sciences, Department of Geoinformatics, Karlstraße 6, D-80333 Munich, schmitt@hm.edu
2 German Aerospace Center (DLR), Earth Observation Center, Oberpfaffenhofen, D-82234 Weßling, anna.wendleder@dlr.de
Commission I, WG I/6
KEY WORDS: SAR, Optical, Image fusion, Image sharpening, Polarimetry, Multispectral imaging, Multi-sensor data fusion
ABSTRACT:
The Kennaugh framework has proven to be a powerful tool for the preparation of multi-sensor SAR data in recent years. Using
intensity-based (an)isotropic diffusion algorithms like the Multi-scale Multi-looking or the Schmittlets, even robust pre-
classification change detection from multi-polarized images is enabled. The only missing point so far, namely the integration of
multi-mode SAR data in one image, is accomplished in this article. Furthermore, the Kennaugh decomposition is extended to multi-
spectral data as well. Hence, arbitrary Kennaugh elements, be it from SAR or optical images, can be fused. The mathematical
description of the most general image fusion is derived and applied to four scenarios. The validation section considers the distribution
of mean and gradient in the original and the fused images with the help of scatter plots. The results prove that the fused images adopt
the spatial gradient of the input image with the higher geometric resolution and preserve the local mean of the input image with the
higher polarimetric and thus also radiometric resolution. Regarding the distribution of the entropy and alpha angle, the fused images
are always characterized by a higher variance in the entropy-alpha plane and therewith a higher resolution in the polarimetric
domain. The proposed algorithm guarantees optimal information integration while ensuring the separation of intensity and
polarimetric/spectral information. The Kennaugh framework is now ready to be used for the sharpening of multi-sensor image data in
the spatial, radiometric, polarimetric, and even spectral domain.
1. INTRODUCTION
Earth observation satellites with their diversity of sensors
provide a variety of spectral, geometric, temporal, and
radiometric resolutions. Their rising number raises the issue of
image fusion in order to enhance interpretation capabilities of
image features (Pohl and van Genderen, 1998; Abdikan et al.,
2008) and to reduce the amount of data at the same time. For
instance, Pan-Sharpening combines a high resolution
panchromatic image with a low resolution multispectral image
and creates a multispectral image with higher-resolution
features. This improves the thematic interpretation enormously
and can be seen as state of the art nowadays. Cliche et al. (1985)
demonstrated that the spatial resolution of 20-m multispectral
SPOT data can be increased by integrating the 10-m
panchromatic channel. Chavez et al. (1991) compared three
different methods of Pan-Sharpening and found that distortions
of the spectral characteristics using a High-Pass Filter were
minimal. Equally, image fusion of panchromatic and SAR data
enhances the understanding and classification of objects due to
the combination of two disparate data sources: on the one hand,
optical data with information on the reflective and emissive
characteristics of the earth’s surface features, and on the other
hand, SAR data with information on surface roughness, texture,
and dielectric properties (Pohl and van Genderen, 1998;
Amarsaikhan et al., 2010). Amarsaikhan et al. (2010) used
optical and SAR data for the enhancement of urban features and
demonstrated that multi-source information could significantly
improve the interpretation and classification of land cover types.
However, image fusion of only SAR images, possibly acquired
in different frequencies or polarizations, is not well established
in practice. The so-called SAR-Sharpening primarily denotes an
increase of the spatial resolution. Depending on surface
roughness, texture, and dielectric properties of an object, each
frequency and each polarization exhibits a completely different
scattering behaviour. Additionally, SAR images are influenced
by high and diverse noise content: additive (white) noise and the
multiplicative speckle effect. Thus, the basic idea of combining
SAR images with different frequencies and polarizations is a
radiometric stabilization without reduction of the spatial
resolution. With respect to the interpretation of backscatter
values, this immediately leads to an increase of the information
content (Simone et al., 2001; Farina et al., 1996). This image
fusion is novel and promising as it supports the understanding
and interpretation of SAR image features due to different
electromagnetic signatures. Simone et al. (2001) combined
multi-frequency, multi-polarized, and multi-resolution intensity
images incoherently using the discrete wavelet transform. The
classification results underlined an improved discrimination of
land cover types. Weissgerber (2016) combined a single-polarized
high-resolution TerraSAR-X image and a quad-polarized
coarser-resolution TerraSAR-X image acquired under
interferometric conditions and thus coherent. The goal was to exploit
the scattering mechanisms of polarimetric SAR images even in
fine-structured urban environments. The method consequently
enhanced the spatial resolution of point-like targets while keeping
their polarimetric behaviour.
Our approach proposes a versatile SAR-Sharpening in the
Kennaugh framework. The idea is to establish a simple but
consistent mathematical description which supports both the
fusion of several SAR images and the fusion of SAR with optical
data. The Kennaugh framework is already in use for polarimetric
decomposition and data preparation and has proven to be suitable
in diverse applications (Schmitt and Brisco, 2013; Moser et al.,
2015; Bertram et al., 2016). Its advantage is the consistent
preparation of all SAR data independent of sensor, mode and
polarization. The final product consists always of geocoded,
calibrated, and normalized Kennaugh elements, i.e. one intensity
measure and up to nine polarimetric measures. The existing
framework is expanded to the integration of optical images as
well. Hence, SAR and optical Kennaugh elements are defined
ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume IV-1, 2018
ISPRS TC I Mid-term Symposium “Innovative Sensing – From Sensors to Methods and Applications”, 10–12 October 2018, Karlsruhe, Germany
This contribution has been peer-reviewed. The double-blind peer-review was conducted on the basis of the full paper.
https://doi.org/10.5194/isprs-annals-IV-1-133-2018 | © Authors 2018. CC BY 4.0 License.
which can be fused to one image. The fused images benefit from
the increased resolution in both the spatial and
polarimetric/spectral domain. Four scenarios are designed in order
to prove the added value of the fused image: (1) traditional SAR-
Sharpening in the spatial domain, (2) SyntheticQuadPol, (3)
SAR-Sharpening involving a panchromatic image, and (4) the
fusion of SAR and optical features provided by Sentinel-1 and Sentinel-2.
2. TEST SITES AND REMOTE SENSING DATA
This section introduces the four application scenarios for the
following methodology (Tab. 1). Scenarios 1 and 2 fuse two
SAR images: scenario 1 enhances the spatial resolution
of an ALOS-PALSAR-2 QuadPol StripMap (SM) by combining
it with a TerraSAR-X SpotLight (SL). The test site
covers the estuary of the Lech into the Danube near Rain am
Lech in Bavaria, Germany. This landscape is characterized by
canalized river courses, artificial lakes, floodplain forests,
agricultural areas, and settlements. Scenario 2 improves the
polarimetric resolution by fusing dual-co- and dual-cross-
polarized StripMaps (SM) of TerraSAR-X to a so-called
“SyntheticQuadPol” image. The test site covers the northern
part of Khayelitsha which is a district in Cape Town,
South Africa, with formal settlements, planned Townships and
informal, completely unorganized settlements in a relatively dry
environment. The remaining scenarios concern the fusion of
SAR images with optical data. Scenario 3 combines a QuadPol
acquisition of ALOS-PALSAR-2 with an aerial image over
Langwasser, a relatively new district in Nuremberg,
Germany. This test site contains very diverse urban structure
types: residential buildings (with varying orientation), parks, the
southern cemetery, the Nuremberg exhibition area, a railroad
shunting yard, and industrial buildings. Scenario 4 benefits from
the synergy of Sentinel-1 and Sentinel-2 by introducing SAR
intensity into an optical image and vice versa. The test site is
located near Osterseen in Bavaria, Germany. This area is an
extensive wetland with numerous swamp lakes popular as local
recreation area.
3. THE KENNAUGH FRAMEWORK
Traditional image fusion algorithms deal with one target and
one warp image (Brown, 1992). The target image commonly
defines the reference for the final fused image in terms of
geometry and radiometry and with respect to the polarimetric
and/or spectral bands. Our approach defines an independent,
earth-fixed, and practice-oriented reference frame, into which all
input images are transformed as follows.
3.1 Geometric frame
In most applications, earth-fixed coordinates are required in
order to combine the remotely sensed information with geo-
information data bases; hence, satellite images have to be
geocoded in a pre-processing step. Thanks to the high accuracy
of today’s positioning systems, satellite orbits can be
predicted with an accuracy of about 10 m, measured with 1 m,
and adjusted (in a post-processing step) with about 0.1 m
accuracy (Peter et al., 2017). With respect to the common pixel
sizes of 10 m at minimum in the Sentinel-1 mission (in square
ground-range pixels with a reasonable number of looks), the
orbit deviation delivered with the image ranges around a tenth
of the pixel size. Thus, geocoding is simply possible using orbit
data and a digital elevation model. Because of the weak
influence of atmospheric disturbances on the microwave band,
SAR acquisitions can be projected on the earth’s surface by
solving the Doppler equation for each range line with the
accuracy of a few meters or even less (Schubert et al., 2015).
Only the geocoding of very high resolution SAR acquisitions or
the interferometric analysis of image stacks requires the
consideration of atmospheric effects. Optical bands on the
contrary are much more affected by refraction. As the influence
in the geometry increases with the incidence angle, steep (near
nadir) acquisitions are generally preferred. In the case of
Sentinel-2, the maximum incidence angle is only 10°. Because
of its push broom characteristics, the central projection equation
can be solved for each row neglecting further distortions. The
gained geolocation accuracy does not exceed a few meters
according to recent studies (Vajsova and Åstrand, 2015).
3.2 Radiometric frame
As the pixels are geocoded onto the earth’s surface, the
radiometric frame should equally refer to the horizontal area. For SAR
acquisitions, this means that σ0 is calculated using the
β0-calibrated intensity values and the local incidence angle
(Schmitt et al., 2015), well aware that more recent,
sophisticated methods are preferable for rough terrain
(Small, 2011). The common models only concern the
backscatter intensity. All polarimetric channels are treated the
same way, although the impact of target orientation on
polarimetric measurements is well-known (Li et al., 2015). In
consequence, the applied calibration does not change the
polarimetric properties (see chapter 3.3). Optical data of
Sentinel-2 are already delivered as Top-Of-Atmosphere
calibrated products (Level 1C). The provided image value thus
directly reflects a multiple of the quotient of the measured
intensity to the solar illumination. Some images are also
available as Bottom-Of-Atmosphere (Level 2A) products.
Those are already corrected for atmospheric influences as far as
possible (ESA, 2018). It is recommended to use the best
calibration variant available, though the influence on the fusion
algorithm is almost negligible. The only important characteristic
is that all data sets (SAR and optical data) are normalized to
reflectance values referring to the horizontal plane similar to σ0.
3.3 Polarimetric frame
SAR sensors always transmit polarized microwaves in order to
enable coherent measurements needed for the synthetic aperture
Scenario | Acquisition date | Sensor        | Mode                       | Polarization | Looks | Target grid (m)
1        | 12.05.2017       | ALOS-PALSAR-2 | StripMap                   | HH/VV/HV/VH  | 0.5   | 2 x 2
1        | 20.04.2017       | TerraSAR-X    | SpotLight                  | HH/VV        | 1.1   | 2 x 2
2        | 22.11.2014       | TerraSAR-X    | StripMap                   | HH/VV        | 1.5   | 2.5 x 2.5
2        | 03.12.2014       | TerraSAR-X    | StripMap                   | VV/VH        | 1.5   | 2.5 x 2.5
3        | 10.05.2017       | Aerial Camera | -                          | -            | 25    | 1 x 1
3        | 12.05.2017       | ALOS-PALSAR-2 | StripMap                   | HH/VV/HV/VH  | 0.1   | 1 x 1
4        | 14.10.2017       | Sentinel-2    | -                          | -            | 1.0   | 10 x 10
4        | 15.10.2017       | Sentinel-1    | Interferometric Wide Swath | VV/VH        | 2.1   | 10 x 10
Table 1. Sensor characteristics and acquisition parameters of the available data sets for the four scenarios.
calculation. Today’s sensors typically measure S_HH, S_HV, S_VH,
or S_VV, the so-called elements of the Sinclair matrix in linear
polarization with horizontally or vertically oriented transmission
and reception (Moreira et al., 2013). The included absolute but
random phase impairs the direct interpretation of these complex
values. Therefore, different methods of forming intensity
measurements by removing the absolute phase have been
developed: inter alia the coherency matrix, the covariance
matrix, and the Mueller matrix which denotes the linear
transform of the real Stokes vector. In the special case of a
monostatic SAR system it reduces to the Kennaugh matrix
(Schmitt and Brisco, 2013) consisting of the total intensity K0
and up to nine polarimetric Kennaugh elements K1, …, K9. These can
be divided by the total intensity and result in the so-called
normalized Kennaugh elements ki ranging between −1 and
+1. The total intensity K0 can be related to the norm intensity
of 1 by the TANH scaling and results in the normalized
intensity element k0 with the identical data range (Schmitt et
al., 2015). The interpretation of polarimetric elements is quite
simple. The value zero means “no polarimetric information”.
Any deviation from zero indicates polarimetric information. The
sign shows the direction, for example positive values of k3 stand
for a higher even-bounce scattering and negative values for a
higher odd-bounce scattering in the dual-co-polarized case
(Moser et al., 2016). The strength of the effect can be expressed
in the unit-less TANH measure or traditionally in decibel. The
normalized Kennaugh elements hence enable the separation of
intensity from polarimetry (Ullmann et al., 2017). The
polarimetric information therefore can be combined with an
arbitrary intensity measure. For example, a combination with a
constant intensity of one SAR sensor (for study purposes), with
an intensity acquired by another SAR sensor, or even with a
reflectance acquired by a panchromatic optical sensor is
possible. Because of the incoherent illumination by the sun
without a fixed polarization direction, polarimetry cannot be
measured by optical satellite sensors. In summary, the only
cross connection between SAR and optics is the total intensity,
whereas SAR is able to provide additionally polarimetric
information about the illuminated targets.
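As a small numerical aside (with illustrative values), the normalization k0 = (K0 − 1)/(K0 + 1) introduced above is exactly a hyperbolic tangent applied to the logarithmic intensity, which is why it is referred to as TANH scaling:

```python
import numpy as np

# Normalized intensity by TANH scaling: k0 = (K0 - 1)/(K0 + 1).
# This equals tanh(0.5 * ln K0), i.e. a hyperbolic tangent applied to
# the logarithmic (dB-like) intensity -- hence the name "TANH scaling".
K0 = np.array([0.25, 1.0, 4.0])          # linear intensities, illustrative
k0 = (K0 - 1.0) / (K0 + 1.0)

assert np.allclose(k0, np.tanh(0.5 * np.log(K0)))
assert np.all((k0 > -1.0) & (k0 < 1.0))  # closed value range ]-1, +1[

# The norm intensity 1 maps to zero; deviations are symmetric in log scale.
assert k0[1] == 0.0 and np.isclose(k0[0], -k0[2])
```

The symmetry in log scale is what makes a backscatter value of 4 (here k0 = 0.6) exactly mirror a value of 1/4 (k0 = −0.6) around the norm intensity.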
3.4 Spectral frame
The spectral resolution is one key feature of optical sensors. We
distinguish panchromatic, multispectral, and hyperspectral
sensors. Panchromatic refers to only one image channel with a
large bandwidth. Multispectral sensors provide up to 15 bands
with medium band width. Hyperspectral images may consist of
more than a hundred narrow and highly correlated bands. This
article focusses on the four-channel image which is typical for
aerial sensor systems measuring blue, green, red, and infrared
reflectance values, gathered in the vector
R = [B, G, R, IR]^T. Furthermore, these four
bands are delivered in the maximum spatial resolution (10 m
pixel raster) in the products of Sentinel-2. The goal is the
separation of intensity from spectral information which is
reached by the traditional Hue-Saturation-Value (HSV)
transformation for R-G-B images. We defined an invertible
linear transform of four channels which is fully described by the
4-by-4 matrix A (Eq. 1). Out of the infinite number of possible
orthogonal transformations, the elements of A are chosen
according to the Kennaugh concept in polarimetry. The following
equation with total intensity and intensity differences with equal
weighting of positive and negative summands has been defined:

A = (1/2) ·
[  1   1   1   1 ]
[ −1  −1   1   1 ]
[  1  −1  −1   1 ]
[ −1   1  −1   1 ]   (1)
Assuming a uniform distribution of the intensity over the four
input channels (a grey scale image respectively), the expectation
value of each resulting spectral Kennaugh element is zero. By
analogy to the polarimetric Kennaugh elements any deviation
from zero can be interpreted as spectral information.
From wavelet theory, this transform might be interpreted as
Haar wavelet decomposition: the first row contains the low
pass, the second row reflects the band pass Haar wavelet of the
first scale in central position, the third row contains the same
Haar wavelet shifted by one channel, and the fourth row defines
the high pass Haar wavelet (Haar, 1910).
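The orthogonality and lossless invertibility of this transform can be sketched numerically. In the following minimal sketch, the sign pattern of the rows follows the Haar interpretation above, and the factor 1/2 is an assumption that makes A orthonormal, consistent with the stated invertibility by transposition:

```python
import numpy as np

# Spectral Kennaugh design matrix (Eq. 1): row 1 is the low pass,
# rows 2-3 are shifted band-pass Haar wavelets, row 4 is the high pass.
# The sign pattern and the 1/2 factor (making A orthonormal) are
# assumptions consistent with the Haar interpretation.
A = 0.5 * np.array([
    [ 1,  1,  1,  1],
    [-1, -1,  1,  1],
    [ 1, -1, -1,  1],
    [-1,  1, -1,  1],
], dtype=float)

# Orthonormal: A @ A.T is the identity, hence A^-1 = A^T.
assert np.allclose(A @ A.T, np.eye(4))

# An illustrative reflectance vector [blue, green, red, infrared]:
R = np.array([0.05, 0.08, 0.06, 0.30])
K = A @ R  # spectral Kennaugh elements K0..K3 (Eq. 2)

assert np.allclose(A.T @ K, R)                           # lossless inverse
assert np.isclose(np.linalg.norm(K), np.linalg.norm(R))  # length preserved

# A grey-scale pixel (equal reflectance in all bands) yields zero in
# K1..K3 -- any deviation from zero is spectral information.
assert np.allclose((A @ np.full(4, 0.1))[1:], 0.0)
```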
Back to matrix calculation, the design matrix A represents an
orthogonal matrix, which means that it is simply invertible by
transposition, A^−1 = A^T. The multiplication with A does not
change the length of the input colour vector, ‖A·R‖ = ‖R‖,
and the resulting dimensions are orthogonal and thus
independent of each other. The linear transform then unfolds to

K = [ K0, K1, K2, K3 ]^T = A · R   (2)
where the elements of K share the same characteristics as the
Kennaugh elements known from polarimetry. Hence, K0 is the
total intensity. The remaining elements resemble intensity
differences. In this manner, the proposed decomposition is
similar to the well-known Tasselled Cap transform, with the
main difference that the Tasselled Cap reduces the
dimensionality and hence does not represent an orthogonal
transform (Kauth and Thomas, 1976). All Kennaugh elements
can be projected onto a closed value range by division through
the total intensity. According to Schmitt et al. (2015) the
normalized elements can be defined as follows:

k0 = ( K0 − 1 ) / ( K0 + 1 ) ∈ ]−1, +1[   (3)

ki = Ki / K0   for i = 1, 2, 3,   ki ∈ ]−1, +1[   (4)
In consequence, these multi-spectral elements can be treated as
Kennaugh elements known from polarimetry. The inverse
transform is always possible by applying R = A^T · K. The
presented orthogonal transform allows the separation of
intensity from multispectral information. As mono-frequency
SAR sensors in general are not able to provide multispectral
information, the only cross connection between SAR and
optical data is again given by the total intensity.
4. SAR-SHARPENING
Thanks to the chosen geometric and radiometric frames, the
fusion requirements are already fulfilled by the pre-processing
steps. The delivered SAR data processed in the Multi-SAR
framework (Bertram et al., 2016), the optical data sets provided
by the Sentinel-2 mission, and the aerial image mosaic (LDBV
2018) can directly be used. Minor deviations resulting from an
outdated or coarse digital elevation model might occur
but are not addressed in this article. The question to be answered
in the following sections is how to optimally fuse intensity
measurements and how to replace intensity channels without
influencing polarimetry and spectral properties in a multi-sensor
data set.
4.1 Intensity Averaging
Intensity by definition represents a conservative potential field.
For instance, there is no negative intensity and the mean intensity
of an area - defined as the arithmetic mean of the available local
intensity measures - is always greater than zero. Hence, an
additive combination of intensity measures is prescribed. The
polarimetric and spectral Kennaugh decomposition in this sense is
nothing else than a linear combination of intensities. The
Kennaugh elements, be they polarimetric or spectral, can be
treated in the same way. In order to consider the potentially
varying spatial resolution of the input data, the number of looks
per pixel is introduced as weight. Assuming n intensity images
of the same area, the total number of looks is given by
l = Σ_s ls. From statistics, this can be interpreted as the mean
over l independent measurements available for the target pixel
area. The individual number of looks ls can be seen as the
quotient of the target pixel area a in the fused image by the
measured pixel area as, and provides an adequate sampling rate:

ls = a / as   (5)
The intensity fusion hence unfolds to the weighted arithmetic
mean of the input intensities K0,s in linear scale, including the
look numbers as weights:

K0,fus = ( Σ_s ls · K0,s ) / ( Σ_s ls )   (6)

The fused intensity K0,fus is given in linear scale again, i.e.
K0,fus ∈ [0, ∞[. This is also the typical data range of variance
measures. From radar theory, any intensity resembles a squared
deviation. The mean intensity over l measurements hence defines
the mean squared deviation, namely the variance.
As the use of normalized intensities is preferable with a view to
memory demand (Schmitt et al., 2015), the following equation
can be derived from Eq. 3 for the fused normalized intensity,
which is independent from polarimetric or spectral information:

k0,fus = ( Σ_s ls · k0,s / (1 − k0,s) ) / ( Σ_s ls / (1 − k0,s) )   (7)

In that way, the workaround over linear intensities can be
avoided. The normalized fused intensity shows a closed value
range k0,fus ∈ ]−1, +1[. With respect to statistics, the fused
intensity k0,fus equals the normalized deviation from a normal
distribution with an expected variance of one.
Regarding the definition of normalized polarimetric and spectral
Kennaugh elements respectively in Eq. 4, the calculation of the
fused elements consequently unfolds to

ki,fus = ( Σ_s ls · Ki,s ) / ( Σ_s ls · K0,s )
       = ( Σ_s ls · ki,s · (1 + k0,s) / (1 − k0,s) ) / ( Σ_s ls · (1 + k0,s) / (1 − k0,s) )
         for K0,fus > 0   (8)

In summary, fused intensity, polarimetric, and spectral
information can be expressed in Kennaugh elements in linear and
in TANH scale. The additive fusion as weighted arithmetic mean (see
Eq. 6) yields maximum stability for statistical reasons as long as
the images to be fused share exactly the same polarimetric or
spectral dimensions.
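The look-weighted averaging of this section can be sketched as follows; all values and variable names are illustrative, and the sketch also checks numerically that the normalized-domain fusion agrees with the TANH-scaled linear average:

```python
import numpy as np

# Look-weighted intensity averaging (Eqs. 5-8); all values illustrative.
# Two co-registered intensity images in linear scale:
K0_imgs = [np.array([0.8, 1.5, 2.0]), np.array([1.0, 1.2, 0.5])]
looks = [4.0, 1.0]  # l_s = a / a_s (Eq. 5): looks per target pixel and image

# Eq. 6: weighted arithmetic mean of the linear intensities.
K0_fus = sum(l * K for l, K in zip(looks, K0_imgs)) / sum(looks)

# TANH scaling (Eq. 3) and its inverse.
def normalize(K):
    return (K - 1.0) / (K + 1.0)

def denormalize(k):
    return (1.0 + k) / (1.0 - k)

# Eq. 7: the same fusion computed directly from normalized intensities,
# avoiding the workaround over linear scale.
k0_imgs = [normalize(K) for K in K0_imgs]
num = sum(l * k / (1.0 - k) for l, k in zip(looks, k0_imgs))
den = sum(l / (1.0 - k) for l, k in zip(looks, k0_imgs))
k0_fus = num / den

# Both routes agree, and the inverse TANH scaling recovers Eq. 6:
assert np.allclose(k0_fus, normalize(K0_fus))
assert np.allclose(denormalize(k0_fus), K0_fus)

# Eq. 8 (linear form): fused normalized element k_i = sum(l K_i) / sum(l K_0).
Ki_imgs = [np.array([0.1, -0.2, 0.3]), np.array([0.0, -0.1, 0.2])]
ki_fus = (sum(l * K for l, K in zip(looks, Ki_imgs))
          / sum(l * K for l, K in zip(looks, K0_imgs)))
```

The agreement of both routes reflects that Eq. 7 is an algebraic rearrangement of Eqs. 3 and 6, not a separate model.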
4.2 Intensity Substitution
The idea behind the Kennaugh decomposition is the separation of
intensity from polarimetric and/or spectral information in order to
remove illumination effects like insufficient topographic
calibration in SAR images or varying solar irradiance in optical
images. The image then decomposes to

K = K0 · [ 1, k1, …, k9 ]^T   or   K = K0 · [ 1, k1, k2, k3 ]^T   (9)

with up to nine normalized polarimetric elements in the SAR case
and three normalized spectral elements in the optical case.
Both the scalar intensity and the Kennaugh vector can be
substituted. For instance, the polarimetry acquired by a SAR
sensor can be spread by the intensity measured by an optical
sensor in order to retrieve smoother results. Vice versa, the
spectral Kennaugh elements of an optical image can be stretched
by the intensity acquired by a SAR sensor in order to introduce
image texture. The intensity is the only overlapping
dimension as stated before. Hence, both intensity measures can
potentially be fused according to Eq. 6, whereas the vectors of
polarimetric and spectral elements (see Eq. 9) are just
concatenated:

kS&O = [ 1, k1^SAR, …, k9^SAR, k1^Opt, …, k3^Opt ]^T   (10)
As only the intensity measure is fused, this approach is reasonable
if images with no overlap in the polarimetric or spectral domain
are available. The typical application is the fusion of a multi-
polarized SAR image with a multi-spectral optical image.
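A minimal sketch of the substitution for a single pixel, with illustrative values and hypothetical variable names:

```python
import numpy as np

# Intensity substitution (Eqs. 9-10) for one pixel; all values and
# variable names are illustrative, not from the paper.
K0_sar = 1.8                          # SAR total intensity
k_sar  = np.array([0.3, -0.1, 0.05])  # normalized polarimetric elements
K0_opt = 0.9                          # panchromatic/optical intensity
k_opt  = np.array([0.2, 0.0, -0.4])   # normalized spectral elements

# Eq. 9: each image decomposes into intensity times a normalized vector.
K_sar = K0_sar * np.concatenate(([1.0], k_sar))

# Substitution: spread the SAR polarimetry by the optical intensity --
# the normalized elements k_sar are untouched.
K_sharp = K0_opt * np.concatenate(([1.0], k_sar))
assert np.allclose(K_sharp[1:] / K_sharp[0], k_sar)

# Eq. 10: non-overlapping polarimetric and spectral elements are simply
# concatenated behind one common (here: averaged, Eq. 6) intensity.
K0_fus = (K0_sar + K0_opt) / 2.0
K_joint = K0_fus * np.concatenate(([1.0], k_sar, k_opt))
assert K_joint.shape == (7,)
```

The first assertion makes the key point explicit: replacing the intensity leaves the normalized polarimetric information unchanged.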
4.3 Intensity Fusion
The most general and most complicated case is the fusion of
several partially overlapping dimensions. In contrast to the
preceding sections, both requirements have to be fulfilled at the
same time: the stable additive combination from (Sec. 4.1) and
the isolated consideration of intensity and polarimetric/spectral
information from (Sec. 4.2). According to Eqs. 2 and 6, the fusion
of linear Kennaugh elements can be expressed in matrix notation:

l · K_fus = Σ_s ls · K_s = Σ_s ls · A · R_s = A · Σ_s ls · R_s   (11)
Obviously, it is completely irrelevant whether a collection of
Kennaugh vectors K_s or a collection of reflectance vectors R_s is
fused. Assuming that not all positions of K_s or R_s are filled, the
entity of measurements and the total number of looks l needed for
normalization purposes is no longer uniform. That is why a look
vector with entries li,s is introduced that attaches an individual
look number to each element Ki,s of K_s. The normalization leads
to an element-wise division by the corresponding look number:

Ki,fus = ( Σ_s li,s · Ki,s ) / ( Σ_s li,s )   (12)
The same problem occurs with the normalized Kennaugh
elements: the total intensity K0,fus as weighted sum over all
measurements is not the adequate calibration factor for all
entries, because each element possibly composes of only a subset
of all measurements. This is taken into account by the individual
look number li,s and a specific total intensity for each
polarimetric/spectral element:

ki,fus = ( Σ_s li,s · ki,s · K0,s ) / ( Σ_s li,s · K0,s )
         for Σ_s li,s · K0,s > 0   (13)
The total intensity K0,fus, which is the essential dimension of each
measurement, is calculated by applying the look numbers l0,s,
which are identical to the ls known from Eq. 11. The normalization
by the reference intensity of one finally leads to

k0,fus = ( Σ_s l0,s · K0,s − Σ_s l0,s ) / ( Σ_s l0,s · K0,s + Σ_s l0,s )   (14)
In summary, three cases of data fusion have been addressed: the
averaging of redundant measurements (as mathematical basis for
the whole data fusion approach), the substitution of independent
measurements (scenarios 3 and 4), and the fusion of partially
redundant measurements (scenarios 1 and 2). Those cases will be
subject to the following application and quality assessment.
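The element-wise look bookkeeping for partially overlapping inputs can be sketched as follows (illustrative values; a zero look entry marks an element that a given input does not measure):

```python
import numpy as np

# General fusion of partially overlapping Kennaugh vectors (Eqs. 12-14).
# Two inputs, four elements K0..K3: input 0 is quad-pol (all elements),
# input 1 is dual-pol and only contributes K0 and K1. Values illustrative.
K = np.array([[1.5, 0.3, -0.2, 0.1],      # input 0: K0..K3
              [2.0, 0.4,  0.0, 0.0]]).T   # input 1: K0, K1 only
l = np.array([[2.0, 4.0],                 # look vector l[i, s]
              [2.0, 4.0],
              [2.0, 0.0],                 # K2, K3: input 1 contributes nothing
              [2.0, 0.0]])

# Eq. 12: element-wise look-weighted average over the available inputs.
K_fus = (l * K).sum(axis=1) / l.sum(axis=1)

# K2 and K3 stem from input 0 alone:
assert np.allclose(K_fus[2:], K[2:, 0])

# Eq. 13: normalized elements need an element-specific total intensity,
# built from the same subset of inputs that measured element i.
K0_per_element = (l * K[0, :]).sum(axis=1) / l.sum(axis=1)
k_fus = K_fus[1:] / K0_per_element[1:]

# Eq. 14: normalized total intensity against the reference intensity 1.
l0 = l[0, :]
k0_fus = (((l0 * K[0, :]).sum() - l0.sum())
          / ((l0 * K[0, :]).sum() + l0.sum()))
assert -1.0 < k0_fus < 1.0
```

Note how the element-specific total intensity for K2 and K3 automatically falls back to the intensity of input 0, so the normalization stays consistent for elements measured by only a subset of the inputs.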
Figure 1. The fusion of dual-co-pol and quad-pol images.
Figure 2. The fusion of dual-pol images to one quad-pol image.
Figure 3. The fusion of quad-pol SAR with Aerial Orthophotos.
Figure 4. The fusion of Sentinel-1 and Sentinel-2.
5. RESULTS
This section illustrates the results of the data fusion approach:
Scenario 1 - A quad-pol image acquisition of ALOS-PALSAR-2
is fused with a dual-co-pol spotlight image of TerraSAR-X in
order to slightly enhance the spatial resolution and to stabilize the
co-polarized information according to Sec. 4.3, see Fig. 1.
Scenario 2 - Two dual-pol stripmap acquisitions of TerraSAR-X,
namely one dual-co-pol HH/VV and one dual-cross-pol VV/VH
measurement, are fused in order to generate a synthetic, but
adequate quad-pol image according to Sec. 4.3, see Fig. 2.
Scenario 3 - The intensity of a quad-pol image acquired by
ALOS-PALSAR-2 is replaced by the total intensity of the
channels measured by an airborne camera in order to enhance the
spatial resolution according to Sec. 4.2, see Fig. 3.
Scenario 4 - The images of the Sentinel-1 (Interferometric Wide
Swath, VV/VH) and Sentinel-2 (Blue-Green-Red-Infrared)
missions are fused in order to introduce SAR texture into the
multispectral image according to Sec. 4.2, see Fig. 4.
Figs 1-4 depict the input images, the fused data set, and a physical
map of the respective test site. The coordinates refer to UTM
Zone 32N, and to UTM Zone 34S for Fig. 2.
6. VALIDATION
The validation of image fusion algorithms is always a difficult
task for lack of adequate and comprehensive ground truth data.
Consequently, inter-comparison is the only feasible way. As input
images inherently differ in terms of sensor, wavelength,
illumination, and image generation, just to mention a few aspects,
measures that match both the input and the fused images are
required. We decided in favour of two isolated considerations:
first, spatial resolution and second, polarimetric resolution.
Spatial resolution is described by the local gradient: the higher the
gradient, the higher the resolution as long as the mean values are
not contaminated by noise. The noise contamination comes along
with a random change of the local value. Therefore, the local
intensity is plotted against the local gradient according to Schmitt
(2016). The left-hand side of Figs. 5-8 illustrates the distribution
of the two input images in red and green and the resulting
distribution of the fused images in blue. The polarimetric
resolution, generally called “polarimetric information content”, is
determined in the entropy-alpha plane. Entropy shows the
diversity of the local scattering, whereas the alpha angle indicates
the location of the mean backscattering in the polarimetric
domain and thus the scattering mechanism (Cloude and Pottier,
1996). Depending on the input polarizations, the scatter plot in the
entropy-alpha plane shows varying characteristics. In general, the
covered data range widens from a narrow to a broad band with
increasing polarimetric information (Cloude, 2007). The
distribution is again plotted in three colors: red and green for the
input images, and blue for the fused image.
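The entropy and mean alpha angle used on the right-hand side of Figs. 5-8 follow the eigen-decomposition of Cloude and Pottier (1996). A minimal Python sketch, assuming the per-pixel 3x3 coherency matrix T is already estimated (the function name is ours):

```python
import numpy as np

def entropy_alpha(T):
    """Cloude-Pottier entropy and mean alpha angle (degrees) from a
    3x3 Hermitian coherency matrix T (Cloude and Pottier, 1996)."""
    eigval, eigvec = np.linalg.eigh(T)       # real eigenvalues, ascending
    eigval = np.clip(eigval, 0.0, None)      # guard against numerical noise
    p = eigval / eigval.sum()                # pseudo-probabilities
    # Entropy with base-3 logarithm so that H lies in [0, 1].
    H = -np.sum([pi * np.log(pi) / np.log(3.0) for pi in p if pi > 0])
    # Alpha angle of each scattering mechanism from the first component
    # of the corresponding eigenvector; mean alpha is their p-weighted sum.
    alpha_i = np.degrees(np.arccos(np.abs(eigvec[0, :])))
    alpha = np.sum(p * alpha_i)
    return H, alpha
```

For a fully depolarized target (T proportional to the identity) the sketch returns H = 1 and a mean alpha of 60 degrees, i.e. the upper right of the feasible region; a deterministic surface scatterer yields H = 0 and alpha = 0.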
Mixed colors display the joint occurrence in two images: pink
stands for an overlay of the fused image with the first input
image, and turquoise for the accordance between the fused image
and the second input image. White indicates that all three
images share a high occurrence in the local feature plane. Pure red
or green means that features of the input images are dismissed in
the fused image; pure blue marks new information.
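The construction of these three-color feature planes can be sketched as follows; this is a simplified Python illustration with a central-difference gradient and is not the exact estimator of the paper:

```python
import numpy as np

def feature_plane(intensity, bins=64):
    """2D occurrence histogram of local intensity vs. local gradient
    magnitude for one image channel (cf. Schmitt, 2016)."""
    gy, gx = np.gradient(intensity)
    grad = np.hypot(gx, gy)
    hist, _, _ = np.histogram2d(intensity.ravel(), grad.ravel(), bins=bins)
    return hist / hist.max()                 # scale occurrences to [0, 1]

def rgb_overlay(h_input1, h_input2, h_fused):
    """Stack three feature-plane histograms into one RGB image:
    red/green for the input images, blue for the fused image. Joint
    occurrence then appears as mixed colors (pink, turquoise, white)."""
    return np.dstack([h_input1, h_input2, h_fused])
```

The same overlay applies unchanged to the entropy-alpha plane: only the two axes of the 2D histogram are exchanged.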
The validation of scenario 1 in Fig. 5 shows that the distribution
of the mean and the gradient is quite different in the two input
images. Nevertheless, the fused image is a good trade-off between
both input intensities: the overlay of TerraSAR-X and ALOS-
PALSAR-2 is completely covered by the fused image.
Additionally, both the pink and turquoise areas can be identified,
where the characteristics of one input image are captured. With
respect to the polarimetric resolution on the right side of Fig. 5,
the input images fill a small part of the feature plane, whereas the
fused image covers nearly the whole of the possible data range.
The validation of scenario 2 in Fig. 6 suggests that the input
images are quite similar in terms of mean and gradient which is
reasonable because both images are acquired by TerraSAR-X in
the same acquisition mode. The fused image necessarily shares
the same characteristics. Regarding the polarimetric properties on
the right-hand side of Fig. 6, the polarimetric information
contained in the dual-co-pol and dual-cross-pol images is quite
different. Nevertheless, the fused image fills the whole data range
and hence optimally integrates both sets of partial polarimetric
information.
The validation of scenario 3 in Fig. 7 shows the intensity fusion
whilst preserving the polarimetric properties. Both requirements
are perfectly met by the fused image. The distribution of the mean
and the gradient matches the distribution of the optical input
image. The polarimetric information is completely identical to the
quad-pol input image. Hence, the proposed image fusion
guarantees the separation of intensity and polarimetry.
A similar behaviour can be observed in the validation of
scenario 4 in Fig. 8. The polarimetric distribution of the fused
image follows the distribution of the SAR input image
independent of the spectral information content introduced by the
optical input image. The validation of the mean and the gradient
indicates that the image characteristics of the input acquisitions
are very different, which was expected (cf. Fig. 7). In contrast to
scenario 3, the fused image does not follow the optical intensity
exclusively, because Sentinel-1 and Sentinel-2 share a similar
spatial resolution and thus almost equal look numbers. Therefore,
the fused intensity reflects an improved mean of both inputs. In
summary, the proposed image fusion algorithm fulfils all
requirements in each of the four scenarios, which cover varying
multi-sensor input data as well as varying test sites.
Figure 5. 2D-distribution of TerraSAR-X dual-co-pol (red),
ALOS-PALSAR quad-pol (green), and the fused image (blue).
Figure 6. 2D-distribution of TerraSAR-X dual-co-pol (red),
dual-cross-pol (green), and the combined image (blue).
Figure 7. 2D-distribution of an aerial orthophoto (red), ALOS-
PALSAR quad-pol (green), and the fused image (blue).
Figure 8. 2D-distribution of Sentinel-2 R-G-B-IR (red),
Sentinel-1 VV/VH (green), and the fused image (blue).
ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume IV-1, 2018
ISPRS TC I Mid-term Symposium “Innovative Sensing – From Sensors to Methods and Applications”, 10–12 October 2018, Karlsruhe, Germany
This contribution has been peer-reviewed. The double-blind peer-review was conducted on the basis of the full paper.
https://doi.org/10.5194/isprs-annals-IV-1-133-2018 | © Authors 2018. CC BY 4.0 License.
7. CONCLUSION
This article introduces a versatile approach to SAR-Sharpening in
analogy to PAN-Sharpening known from optical data. It is based
on the Kennaugh framework known from SAR pre-processing.
The geometric frame is given by geocoded images in earth-fixed
coordinates. The radiometric frame refers to the horizontal
projection plane which requires σ0. The polarimetric frame is
given by the normalized Kennaugh elements, which decompose
multi-polarized measurements into one total intensity and several
intensity differences normalized to it. With respect to optical
images, multi-spectral Kennaugh elements are defined for the first
time. They share the same properties with polarimetric Kennaugh
elements and thus guarantee the easy fusion of SAR and optical
data sets. The fusion of partial measurements takes into account
the local number of data points and the backscatter intensity,
which refers to the reliability of the derived polarimetric or
spectral information. The normalization step always has to
comply with the total intensity of the corresponding Kennaugh
element. The general definition simplifies in the case of a
completely overlapping polarimetric and/or spectral domain, or in
the case of a pure intensity fusion. The validation considers the mean and the
gradient of the fused intensity as well as the polarimetric
information content depicted in the entropy-alpha plane. The four
scenarios prove that the separation of intensity and
polarimetric/spectral information is achieved on one hand, and the
fused images optimally integrate the information provided by
both input data sets on the other hand. This approach completes
the Kennaugh framework previously introduced for the pre-
processing of multi-sensor SAR data and the robust change
detection. It opens the door to the Kennaugh processing of optical
data sets and thus brings SAR and optical remote sensing another
small step closer together.
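The reliability-weighted fusion summarized above can be illustrated for a single normalized Kennaugh element; the weighting below (local look number times total intensity) is a simplified sketch of the idea only, not the exact formula of the paper:

```python
import numpy as np

def fuse_normalized_elements(k0_a, ki_a, looks_a, k0_b, ki_b, looks_b):
    """Sketch of the reliability-weighted fusion of one normalized
    Kennaugh element k_i = K_i / K_0 from two partial measurements.
    The weights combine the local look number (noise level) with the
    total intensity K_0 (signal strength); the fused element is
    re-normalized by the fused total intensity, as required above.
    Simplified illustration only -- not the paper's exact formula."""
    w_a = looks_a * k0_a
    w_b = looks_b * k0_b
    k0_fused = (looks_a * k0_a + looks_b * k0_b) / (looks_a + looks_b)
    ki_fused = (w_a * ki_a + w_b * ki_b) / (w_a + w_b)
    return k0_fused, ki_fused
```

The essential property carried over from the paper is the separation of the two domains: the fused total intensity depends only on the input intensities and look numbers, while the fused normalized element is a convex combination of the input elements and therefore stays within the valid range [-1, 1].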
8. ACKNOWLEDGEMENTS
The authors acknowledge the use of TerraSAR-X data (©DLR
2014 & 2017), Sentinel-1 and Sentinel-2 data (©ESA 2017),
ALOS-PALSAR-2 data (©JAXA 2017), and aerial orthophotos
(©Geobasisdaten: Bayerische Vermessungsverwaltung).
9. REFERENCES
Abdikan, S., Balik Sanli, F., Bektas Balcik, F., and Goksel, C., 2008.
Fusion of SAR images (PALSAR and RADARSAT-1) with multispectral
spot image: a comparative analysis of resulting images. In: Int. Arch.
Photogramm. Remote Sens. Spatial Inf. Sci., 37, 1197-1202.
Amarsaikhan, D., Blotevogel, H.H., van Genderen, J.L., Ganzorig, M.,
Gantuya, R., and Nergui, B., 2010. Fusing high-resolution SAR and
optical imagery for improved urban land cover study and classification.
Int. J. of Image and Data Fusion, 1:1, 83-97.
Bertram, A., Wendleder, A., Schmitt, A., and Huber, M., 2016. Long-term
Monitoring of water dynamics in the Sahel region using the Multi-SAR-
System. In: Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLI-
B8, 313-320, doi.org/10.5194/isprs-archives-XLI-B8-313-2016.
Brown, L.G., 1992. A Survey of Image Registration Techniques. ACM
Computing Surveys, 24 (4), 325-376.
Chavez, P. S., Sides, S.C., Anderson, J. A., 1991. Comparison of three
different methods to merge multiresolution and multispectral data: TM &
SPOT pan. Photogram. Engineering and Remote Sensing, 57, 295-303.
Cliche, G., Bonn, F., Teillet, P., 1985. Integration of the SPOT Pan
channel into its multispectral mode for image sharpness enhancement.
Photogrammetric Engineering and Remote Sensing, 51, 311-316.
Cloude, S.R., 2007. The dual polarisation entropy/alpha decomposition: a
PALSAR case study. In: Proc. PolInSAR 2007, Frascati, Italy. http://earth
.esa.int/workshops/polinsar2007/papers/75_cloude.pdf (28 March 2018).
Cloude, S.R. and Pottier, E., 1996. A Review of Target Decomposition
Theorems in Radar Polarimetry. IEEE Trans. on Geosci. and Remote
Sensing, 34 (2), 498-518.
ESA, 2018. Sentinel-2 products: Level-2A Algorithm Overview.
https://sentinel.esa.int/web/sentinel/technical-guides/sentinel-2-msi/level-
2a/algorithm (28 March 2018).
Farina, A., Costantini, M., Zirilli, F., 1996. Fusion of radar images:
techniques and applications. Military Technology (Miltech), 5, 34-40.
Haar, A., 1910. Zur Theorie der orthogonalen Funktionensysteme.
Mathematische Annalen 69, 331-371. https://link.springer.com/
article/10.1007%2FBF01456927 (28 March 2018).
Kauth, R.J. and Thomas, G.S., 1976. The Tasseled Cap - A Graphic
Description of the Spectral-Temporal Development of Agricultural Crops
as Seen by LANDSAT. In: LARS Symposia. 159.
http://docs.lib.purdue.edu/lars_symp/159 (28 March 2018).
LDBV, 2018. Luftbilder - Hochauflösende Senkrechtaufnahmen der Erd-
oberfläche. https://www.ldbv.bayern.de/produkte/luftbild/luftbilder.html.
Li, Y., Hong, W., Pottier, E., 2015. Topography retrieval from single-pass
POLSAR data based on the polarization-dependent intensity ratio. IEEE
Transactions on Geoscience and Remote Sensing, 53 (6), 3160-3177.
doi.org/10.1109/TGRS.2014.23694817.
Moreira, A., Prats-Iraola, P., Younis, M., Krieger, G., Hajnsek, I., and
Papathanassiou, K.P., 2013. A Tutorial on Synthetic Aperture Radar.
IEEE Geoscience and Remote Sensing Magazine, 1 (1), 6-43.
doi.org/10.1109/MGRS.2013.2248301.
Moser, L., Schmitt, A., Wendleder, A., 2016. Automated wetland
delineation from multi-frequency and multi-polarized SAR images in high
temporal and spatial resolution. In: Int. Ann. Photogramm. Remote Sens.
Spatial Inf. Sci., III (8), 57-64.
Moser, L., Schmitt, A., Wendleder, A., and Roth, A., 2015. Monitoring of
the Lac Bam Wetland Using Dual-Polarized X-Band SAR Data. MDPI
Remote Sensing, 8 (302), 1-31.
Peter, H., Jäggi, A., Fernández, J., Escobar, D., Ayuga, F., Arnold, D.,
Wermuth, M., Hackel, S., Otten, M., Simons, W., Visser, P., Hugentobler,
U., and Féménias, P., 2017. Sentinel-1A - First precise orbit determination
results. Advances in Space Research, 60, 879-892.
Pohl, C. and van Genderen, J.L., 1998. Multisensor image fusion in
remote sensing: concepts, methods and applications. Int. J. Remote
Sensing, 19 (5), pp. 823-854.
Schmitt, A., 2016. Multiscale and Multidirectional Multilooking for
SAR Image Enhancement. IEEE Transactions on Geoscience and
Remote Sensing, 54 (9), 5117-5134.
doi.org/10.1109/TGRS.2016.2555624.
Schmitt, A., Wendleder, A., Hinz, S., 2015. The Kennaugh element
framework for multi-scale, multi-polarized, multi-temporal and multi-
frequency SAR image preparation. ISPRS J. of Photogrammetry and
Remote Sensing, 102, 122-139.
Schmitt, A. and Brisco, B., 2013. Wetland Monitoring Using the
Curvelet-Based Change Detection Method on Polarimetric SAR
Imagery. MDPI Water, 5, 1036-1051. ISSN 2073-4441
doi.org/10.3390/w5031036.
Schubert, A., Small, D., Miranda, N., Geudtner, D., and Meier, E.,
2015. Sentinel-1A Product Geolocation Accuracy: Commissioning
Phase Results. In: MDPI Remote Sensing, 7, 9431-9449.
doi.org/10.3390/rs70709431.
Simone, G., Morabito, F.C., Farina, A., 2001. Multifrequency and
multiresolution fusion of SAR images for remote sensing applications.
In: Proceedings of the 3rd International Conference on Information Fusion,
FUSION 2000, Paris, France, July 10-13, 2000, pp. WeD3.10-17.
Small, D., 2011. Flattening Gamma: Radiometric Terrain Correction for
SAR Imagery. IEEE Transactions on Geoscience and Remote Sensing,
49 (8), 3081-3093. doi.org/10.1109/TGRS.2011.2120616.
Ullmann, T., Banks, S.N., Schmitt, A., and Jagdhuber, T., 2017.
Scattering Characteristics of X-, C- and L-Band PolSAR Data
Examined for the Tundra Environment of the Tuktoyaktuk Peninsula,
Canada. MDPI Applied Sciences, 7 (6), 595.
Vajsova, B. and Åstrand, P.J., 2015. New sensors benchmark report on
Sentinel-2A. Joint Research Center Technical Reports.
http://publications.jrc.ec.europa.eu/repository/bitstream/JRC99517/lb-
na-27674-en-n%20.pdf (28 March 2018).
Weissgerber, F., 2016. Resolution enhancement of polarimetric images
using a high resolution mono-channel image. In: 11th EUSAR Conference,
Hamburg, Germany.
... Remote sensing data fusion on hypercomplex bases [1] integrates multiple data sources using hypercomplex algebra, extending complex numbers to higher dimensions for processing multidimensional data. The combination of multi-sensor and multi-resolution data enables (i.a.) an improvement in the spatial resolution of satellite images, known as SAR sharpening [2]. The normalised Kennaugh elements resulting from this fusion allow for the first time the large-scale use of modern classification methods based on local, discrete, empirical distributions to distinguish land cover classes based on their polarimetric signature and spatial texture [3]. ...
... The Kennaugh elements then available are comparable to the Kennaugh elements output by the MultiSAR system at the German Aerospace Center in Oberpfaffenhofen, which have proven beneficial in numerous studies [4]. The calculation of optical Kennaugh elements from Sentinel-2 now puts both systems (radar and optical) on a common basis [2], on which they can either be further merged [1] or treated as separate datasets for comparability reasons. Now, sixty images with a ground sampling distance of 10 m are available for each year. ...
... In our study, we focus on predicting tree values using a data stack with 256 features consisting of multimodal, multi-temporal fused satellite datasets, as described in Sec. 2 (2015) capture the dynamic changes in vegetation over time, thereby increasing our understanding of tree characteristics. By utilizing UAV, we obtain precise reference data for the tree values, such as deciduous or coniferous tree type, crown volume, height, and crown base height, which serves as the basis for evaluating the predictive capabilities of the satellite data. ...
... The histogram classification which bases on the similarity measure for the comparison of two discrete local histograms was further studied for typical land cover classification based on reflectance data only [27]. Recently, a very promising new approach was published how to generate Kennaugh-like elements from optical multi-spectral data [28] in order to perform a SAR-sharpening gathering typical characteristics of optical and SAR images in one layer. Though, the sharpening of SAR by the fusion of multi-mode images delivered excellent results, the fusion of SAR and Optics still showed some weaknesses, for example, the combined use of SAR and Optics without radiometric fusion still was superior to the fused data set [29]. ...
... With respect to polarimetric SAR data, the Kennaugh decomposition was first renewed in Reference [13] and then extended to any set of multi-polarized SAR data available nowadays in Reference [11]. The calculation of Kennaugh-like elements even from multi-spectral data in order to sharpen SAR images was recently published in Reference [28]. The core of this technique is an orthogonal transform by a matrix A with following characteristics: ...
... Beyond SAR, most airborne camera or scanning systems provide a four-channel image containing Blue, Green, Red, and Infrared. For this case, an extended approach was already published in Reference [28] that formulated a four-by-four matrix with the required characteristics. For conformity reasons, an equivalent matrix is now created by the substitution of each element in C (see Equation (2)) by another complex number. ...
Article
Full-text available
This article spanned a new, consistent framework for production, archiving, and provision of analysis ready data (ARD) from multi-source and multi-temporal satellite acquisitions and an subsequent image fusion. The core of the image fusion was an orthogonal transform of the reflectance channels from optical sensors on hypercomplex bases delivered in Kennaugh-like elements, which are well-known from polarimetric radar. In this way, SAR and Optics could be fused to one image data set sharing the characteristics of both: the sharpness of Optics and the texture of SAR. The special properties of Kennaugh elements regarding their scaling—linear, logarithmic, normalized—applied likewise to the new elements and guaranteed their robustness towards noise, radiometric sub-sampling, and therewith data compression. This study combined Sentinel-1 and Sentinel-2 on an Octonion basis as well as Sentinel-2 and ALOS-PALSAR-2 on a Sedenion basis. The validation using signatures of typical land cover classes showed that the efficient archiving in 4 bit images still guaranteed an accuracy over 90% in the class assignment. Due to the stability of the resulting class signatures, the fuzziness to be caught by Machine Learning Algorithms was minimized at the same time. Thus, this methodology was predestined to act as new standard for ARD remote sensing data with an subsequent image fusion processed in so-called data cubes.
... In this methodology, Sentinel-2 reflectance values are initially normalised to balance the spectral channels by reducing the influence of the NIR band. These normalised values are then converted into Kennaugh-like elements via linear combinations [66], resulting in one total reflectance element and three spectral elements. Sentinel-1 provides VV-and VH-polarised SAR imagery in the C-band, which is sensitive to structures approximately 5 cm in size. ...
Article
Full-text available
Sinkholes are significant geohazards in karst regions that pose risks to landscapes and infrastructure by disrupting geological stability. Usually, sinkholes are mapped by field surveys, which is very cost-intensive with regard to vast coverages. One possible solution to derive sinkholes without entering the area is the use of high-resolution digital terrain models, which are also expensive with respect to remote areas. Therefore, this study focusses on the mapping of sinkholes in arid regions from open-access remote sensing data. The case study involves data from the Sentinel missions over the Mangystau region in Kazakhstan provided by the European Space Agency free of cost. The core of the technique is a multi-scale curvature filter bank that highlights sinkholes (and takyrs) by their very special illumination pattern in Sentinel-2 images. Marginal confusions with vegetation shadows are excluded by consulting the newly developed Combined Vegetation Doline Index based on Sentinel-1 and Sentinel-2. The geospatial analysis reveals distinct spatial correlations among sinkholes, takyrs, vegetation, and possible surface discharge. The generic and, therefore, transferable approach reached an accuracy of 92%. However, extensive reference data or comparable methods are not currently available.
... In order to fuse multi-polarized SAR and multi-spectral optical data, a common radiometric frame is necessary. One most interesting approach was mentioned in the context of SARsharpening [53] and later on explained as hyper-complex bases (HCBs) in detail [41]. The basic idea is to generate Kennaugh-like elements from the multi-spectral reflectances of Sentinel-2 that are compatible with the Kennaugh elements of Sentinel-1. ...
Article
Full-text available
Earth observation satellites offer vast opportunities for quantifying landscapes and regional land cover composition and changes. The integration of artificial intelligence in remote sensing is essential for monitoring significant land cover types like forests, demanding a substantial volume of labeled data for effective AI model development and validation. The Wald5Dplus project introduces a distinctive open benchmark dataset for mid-European forests, labeling Sentinel-1/2 time series using data from airborne laser scanning and multi-spectral imagery. The freely accessible satellite images are fused in polarimetric, spectral, and temporal domains, resulting in analysis-ready data cubes with 512 channels per year on a 10 m UTM grid. The dataset encompasses labels, including tree count, crown area, tree types (deciduous, coniferous, dead), mean crown volume, base height, tree height, and forested area proportion per pixel. The labels are based on an individual tree characterization from high-resolution airborne LiDAR data using a specialized segmentation algorithm. Covering three test sites (Bavarian Forest National Park, Steigerwald, and Kranzberg Forest) and encompassing around six million trees, it generates over two million labeled samples. Comprehensive validation, including metrics like mean absolute error, median deviation, and standard deviation, in the random forest regression confirms the high quality of this dataset, which is made freely available.
... The Kennaugh images pass through the production steps of the Multi-SAR system ( Fig.2) developed by the German Aerospace Center. The system is suitable for multi-scale, multi-sensor, multi-temporal, multi-frequency and multi-polarization SAR data [13]. The Multi-SAR system delivers geometrically corrected, geocoded, polarimetrically decomposed, radiometrically calibrated, speckle reduced, normalized and compressed image data used in this study [2]. ...
Conference Paper
Polarimetric signatures in L-band data measured by the spaceborne SAR sensor ALOS-2 are correlated with typical forest parameters derived by airborne laser scanning via single-tree segmentation and classification over a test site within the Bavarian Forest National Park. Forest parameters like tree species, tree height, crown volume, or crown base height gathered over a small part are transferred to the complete coverage by satellite-based SAR polarimetry. Three classifiers are trained, applied to the Multi-SAR-processed Kennaugh elements. The observed distinct correlations provide the basis for an in-deep examination of the polarimetric, temporal, and spectral signature of mid-European forests in near future.
... In literature, the DWT was applied on (i) SAR data for noise filtering of the Sentinel-1 data [10], (ii) for preclassification change detection from the Sentinel-1 multipolarized images [11], (iii) to retrieve the wind direction form a series of VV-Sentinel-1 images [12], (iv) for image classification [13], and for parameterizing the feedforward neural networks to improve remote sensing LC identification [14]. In this context, this paper shows the powerful use of the DWT components of the Sentinel-1 data, comparing to the use of the spatial PCA method, for a classification procedure. ...
Article
Full-text available
This article presents a new alternative for data resource, by applying the proposed methods of Principal Components Analysis (PCA) or of Discrete Wavelet Transformation (DWT) on the VV and VH polarization images of the Sentinel-1 radar satellite, aiming at a better classification of data. The study area concerns the Houareb site located in the city of Kairouan in central Tunisia. In addition to Sentinel-1 data, field truth data and the Euclidian Minimum Distance (EMD) criterion were used for classification and validation. Energy descriptors have been proposed in this study for classifications. Cross validation was used to evaluate the results of the classification. The best classification result was achieved using the DWT method applied on VH and VV images with an Overall Precision (OA) of 0.671 and 0.548, respectively, against an OA value of 0.371 and of 0.449 when the PCA method and the Minimum Distance (MDist) classifier were applied on the dual (VV; VH) polarization, respectively. The DWT transformation gives the highest Kappa Precision Coefficient (KPC) of 0.8.
Article
Full-text available
Accurately characterizing clouds and their shadows is a long-standing problem in the Earth Observation community. Recent works showcase the necessity to improve cloud detection methods for imagery acquired by the Sentinel-2 satellites. However, the lack of consensus and transparency in existing reference datasets hampers the benchmarking of current cloud detection methods. Exploiting the analysis-ready data offered by the Copernicus program, we created CloudSEN12, a new multi-temporal global dataset to foster research in cloud and cloud shadow detection. CloudSEN12 has 49,400 image patches, including (1) Sentinel-2 level-1C and level-2A multi-spectral data, (2) Sentinel-1 synthetic aperture radar data, (3) auxiliary remote sensing products, (4) different hand-crafted annotations to label the presence of thick and thin clouds and cloud shadows, and (5) the results from eight state-of-the-art cloud detection algorithms. At present, CloudSEN12 exceeds all previous efforts in terms of annotation richness, scene variability, geographic distribution, metadata complexity, quality control, and number of samples.
Preprint
Accurately characterizing clouds and their shadows is a long-standing problem in the Earth Observation community. Recent works showcase the necessity to improve cloud detection methods for imagery acquired by the Sentinel-2 satellites. However, the lack of consensus and transparency in existing reference datasets hampers the benchmarking of current cloud detection methods. Exploiting the analysis-ready data offered by the Copernicus program, we created CloudSEN12, a new multi-temporal global dataset to foster research in cloud and cloud shadow detection. CloudSEN12 has 49,400 image patches, including (1) Sentinel-2 level-1C and level-2A multi-spectral data, (2) Sentinel-1 synthetic aperture radar data, (3) auxiliary remote sensing products, (4) different hand-crafted annotations to label the presence of thick and thin clouds and cloud shadows, and (5) the results from eight state-of-the-art cloud detection algorithms. At present, CloudSEN12 exceeds all previous efforts in terms of annotation richness, scene variability, geographic distribution, metadata complexity, quality control, and number of samples. The dataset is made publicly available at https://cloudsen12.github.io/.
Article
Full-text available
13 This article presents a new data resource alternative, using spatial and frequency 14 transformations of images, aiming at a better classification of Sentinel-1 data. The 15 transformations of the image by the Principal Components Analysis method (PCA) and by the 16 Discrete Wavelet Transformation (DWT) were used for this purpose. The PCA and DWT 17 methods were applied to the VV and VH polarizations of Sentinel-1, covering a study area on 18 the Houareb site (city of Kairouan, central Tunisia). In addition to Sentinel-1 data, field truth 19 data are used for classification and validation. The minimum euclidean distance criterion and 20 the energy descriptors are proposed for classifications; the cross validation is used to assess the 21 two approaches. The classification with the best performances is carried out using the DWT 22 method with an Overall Precision (OA) of 0.674 against the OA value of 0.345 obtained when 23 the PCA is used. 24
Article
This article proposes methods of parameterizing the inputs of a feedforward neural network (FFNN) that classifies the land cover (LC) in remote-sensing (RS) images in order to speed up the classification process while maintaining high classification accuracy. FFNN training was optimized via two parameters, the learning time (LT) and the number of neurons (NN), by decorrelating the LC data in the RS image using either a discrete wavelet transform (DWT) or independent component analysis (ICA), although ICA can only be applied to a multiband image. The RS images used in this work have also been the focus of several previous attempts at LC classification using various methods. They consisted of a 4.6-m airborne resolution image with HH polarization that was acquired by a synthetic-aperture radar (SAR) and a 15-m multispectral resolution image acquired by the Terra satellite’s Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER). Corresponding field-truth data was used to validate the FFNN classifiers examined in this work. High resolution gives high accuracy information, so that the LC correlation increases, in this case. Classification results for the ASTER image showed that the NN of the FFNN classifier was reduced by more than half when the classifier was parameterized using a DWT, and by three-quarters when the classifier was parameterized using ICA. Results for the SAR image indicated that the NN of the FFNN classifier was halved when the classifier was parameterized using a DWT. Parameterization also reduced the LT of the classifier. The classification accuracy was assessed using a confusion matrix. The fast parameterized FFNN classifiers presented strong classification performance characteristics, similar to those of the original FFNN classifier, with overall accuracies that always exceeded 0.75 and sometimes reached 1. 
Subsequent work should focus on optimizing the FFNN further by automating two steps: (1) image decomposition using ICA or DWT and (2) FFNN classification.
Article
Full-text available
In this study, polarimetric Synthetic Aperture Radar (PolSAR) data at X-, C- and L-Bands, acquired by the satellites: TerraSAR-X (2011), Radarsat-2 (2011), ALOS (2010) and ALOS-2 (2016), were used to characterize the tundra land cover of a test site located close to the town of Tuktoyaktuk, NWT, Canada. Using available in situ ground data collected in 2010 and 2012, we investigate PolSAR scattering characteristics of common tundra land cover classes at X-, C- and L-Bands. Several decomposition features of quad-, co-, and cross-polarized data were compared, the correlation between them was investigated, and the class separability offered by their different feature spaces was analyzed. Certain PolSAR features at each wavelength were sensitive to the land cover and exhibited distinct scattering characteristics. Use of shorter wavelength imagery (X and C) was beneficial for the characterization of wetland and tundra vegetation, while L-Band data highlighted differences of the bare ground classes better. The Kennaugh Matrix decomposition applied in this study provided a unified framework to store, process, and analyze all data consistently, and the matrix offered a favorable feature space for class separation. Of all elements of the quad-polarized Kennaugh Matrix, the intensity based elements K0, K1, K2, K3 and K4 were found to be most valuable for class discrimination. These elements contributed to better class separation as indicated by an increase of the separability metrics squared Jefferys Matusita Distance and Transformed Divergence. The increase in separability was up to 57% for Radarsat-2 and up to 18% for ALOS-2 data.
Article
Full-text available
Sentinel-1A is the first satellite of the European Copernicus programme. Equipped with a Synthetic Aperture Radar (SAR) instrument the satellite was launched on April 3, 2014. Operational since October 2014 the satellite delivers valuable data for more than two years. The orbit accuracy requirements are given as 5 cm in 3D. In order to fulfill this stringent requirement the precise orbit determination (POD) is based on the dual-frequency GPS observations delivered by an eight-channel GPS receiver.
Article
Water scarcity is one of the main challenges posed by the changing climate. Especially in semi-arid regions, where water reservoirs are filled during the very short rainy season but have to store enough water for the extremely long dry season, the intelligent handling of water resources is vital. This study focuses on Lac Bam in Burkina Faso, the largest natural lake of the country and of high importance to the local inhabitants for irrigated farming, animal watering, and the extraction of water for drinking and sanitation. Given the competition for water resources, an independent area-wide monitoring system is essential for the acceptance of any decision maker. The following contribution introduces a weather- and illumination-independent monitoring system for automated wetland delineation with high temporal sampling (about two weeks) and high spatial sampling (about five meters). The similarities as well as the differences of the multispectral and multi-polarized SAR acquisitions by RADARSAT-2 and TerraSAR-X are studied. The results indicate that even basic approaches, without pre-classification time series analysis or post-classification filtering, already suffice to establish a monitoring system of prime importance for a whole region.
Article
Wetlands in semi-arid Africa are vital as a water resource for local inhabitants and for biodiversity, but they are prone to strong seasonal fluctuations. Lac Bam is the largest natural freshwater lake in Burkina Faso; its water is mixed with patches of floating or flooded vegetation, and is very turbid and sediment-rich. These characteristics, as well as the usual cloud cover during the rainy season, can limit the suitability of optical remote sensing data for monitoring purposes. This study demonstrates the applicability of weather-independent dual-polarimetric Synthetic Aperture Radar (SAR) data for the analysis of spatio-temporal wetland dynamics. A TerraSAR-X repeat-pass time series of dual-co-polarized HH-VV StripMap data, acquired at 11-day intervals and covering two years (2013-2015) from the rainy to the dry season, was processed to normalized Kennaugh elements and classified mono-temporally and multi-temporally. Land cover time series and seasonal duration maps were generated for the following four classes: open water, flooded/floating vegetation, irrigated cultivation, and land (non-wetland). The added value of dual-polarimetric SAR data is demonstrated by significantly higher multi-temporal classification accuracies: the overall accuracy (88.5%) exceeds the classification accuracy obtained with single-polarimetric SAR intensity data (82.2%). For relevant change classes involving flooded vegetation and irrigated fields, dual-polarimetric data (accuracies: 75%-97%) are preferred to single-polarimetric data (42%-87%). This study contributes to a better understanding of the dynamics of semi-arid African wetlands in terms of water areas, including water with flooded vegetation, and the location and timing of irrigated cultivation.
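For the dual-co-polarized HH/VV case used in that study, the Kennaugh matrix reduces to a handful of real-valued channels derived from the two complex scattering amplitudes. A minimal sketch of their computation and normalization by total intensity; sign and scaling conventions vary in the literature, so this follows one common choice and should not be read as the paper's exact definition:

```python
import numpy as np

def normalized_kennaugh_hh_vv(shh, svv):
    """Dual-co-polarized (HH/VV) Kennaugh elements, normalized by intensity.

    shh, svv: complex scattering amplitudes (scalars or arrays).
    Returns the total intensity K0 and the three normalized elements,
    each bounded to [-1, 1] by construction.
    """
    k0 = np.abs(shh) ** 2 + np.abs(svv) ** 2      # total intensity
    k3 = 2.0 * np.real(shh * np.conj(svv))        # HH-VV correlation (real part)
    k4 = np.abs(shh) ** 2 - np.abs(svv) ** 2      # co-pol intensity difference
    k7 = -2.0 * np.imag(shh * np.conj(svv))       # HH-VV correlation (imag part)
    eps = np.finfo(float).tiny                    # guard against division by zero
    return k0, k3 / (k0 + eps), k4 / (k0 + eps), k7 / (k0 + eps)
```

In-phase HH and VV returns (surface-like scattering) drive the normalized K3 towards +1, while a 180-degree phase shift (double-bounce, as under flooded vegetation) drives it towards -1, which is what makes these channels useful for the wetland classes above.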
Article
As the number of space-borne SAR sensors increases, a rising number of different SAR acquisition modes is in use, resulting in a higher variation within the image products. This variability in acquisition geometry, radiometry, and last but not least polarimetry raises the need for a consistent SAR image description incorporating all available sensors and acquisition modes. This paper therefore introduces the framework of the Kennaugh elements to comparably represent all kinds of multi-scale, multi-temporal, multi-polarized, multi-frequency, and hence multi-sensor data in a consistent mathematical framework. Furthermore, a novel noise model is introduced that estimates the significance and thus the (polarimetric) information content of the Kennaugh elements. This facilitates an advanced filtering approach, called multi-scale multi-looking, which is shown to improve the radiometric accuracy while preserving the geometric resolution of SAR images. The proposed methodology is finally demonstrated using sample applications that include TerraSAR-X (X-band), Envisat-ASAR, RADARSAT-2 (C-band), and ALOS-PALSAR (L-band) data as well as the combination of all three frequencies. Thus the suitability of the Kennaugh element framework for practical use in advanced SAR remote sensing is proven.
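The multi-scale multi-looking described in that paper is scale-adaptive and significance-tested; the underlying trade-off it exploits (averaging independent looks to gain radiometric accuracy at the cost of spatial resolution) can be illustrated with plain boxcar multi-looking on simulated single-look speckle. This is only a sketch of the principle, not of the cited algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Single-look intensity speckle over homogeneous backscatter is exponentially
# distributed, i.e. its coefficient of variation (std/mean) is 1.
truth = 1.0
single_look = rng.exponential(truth, size=(512, 512))

def multilook(img, n):
    """Boxcar multi-looking: average n x n blocks.

    Spatial resolution is traded for radiometric accuracy; with n*n
    independent looks the coefficient of variation drops by a factor n.
    """
    h, w = img.shape
    cropped = img[: h - h % n, : w - w % n]
    return cropped.reshape(h // n, n, -1, n).mean(axis=(1, 3))

ml = multilook(single_look, 4)                     # 16 looks per output pixel
cv_single = single_look.std() / single_look.mean() # ~1.0
cv_multi = ml.std() / ml.mean()                    # ~0.25
```

The Kennaugh framework's contribution is to make this averaging adaptive, keeping fine scales only where the noise model declares the local signal significant.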
Article
The two objectives of this study are to compare the performance of different data fusion techniques for the enhancement of urban features and subsequently to improve the classification of urban land cover types using a refined Bayesian classification. For the data fusion, wavelet-based fusion, the Brovey transform, Ehlers fusion, and principal component analysis are used and the results are compared. The refined Bayesian classification uses spatial thresholds defined from local knowledge and different features obtained through a feature derivation process. The result of the refined classification is compared with the results of a standard method and demonstrates a higher accuracy. Overall, the research indicates that multi-source information can significantly improve the interpretation and classification of land cover types, and that the refined Bayesian classification is a powerful tool to increase the classification accuracy.
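Of the fusion techniques compared there, the Brovey transform is the simplest to state: each multispectral band is rescaled per pixel so that the band sum matches the panchromatic intensity. A minimal sketch, assuming the multispectral bands have already been resampled to the panchromatic grid and taking the plain band sum as the intensity term:

```python
import numpy as np

def brovey_fusion(ms, pan):
    """Brovey transform pan-sharpening.

    ms:  multispectral bands, shape (bands, H, W), co-registered and
         resampled to the panchromatic grid.
    pan: panchromatic band, shape (H, W).
    Each band is scaled by pan / intensity, so the fused per-pixel band
    sum reproduces the panchromatic image.
    """
    eps = np.finfo(float).tiny                 # avoid division by zero
    intensity = ms.sum(axis=0)                 # per-pixel band sum
    return ms * (pan / (intensity + eps))[None, :, :]
```

Because the transform rescales all bands by a common factor, it injects the pan's spatial detail but is known to distort spectral content, which is one reason the study compares it against wavelet-based and PCA fusion.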
Article
By integrating SPOT's panchromatic channel with 10 m resolution into its multiband channels with 20 m resolution, it is possible to produce a high-resolution image suitable for photo interpretation. Three different integration algorithms have been tested on simulated SPOT data in order to produce color composite images using SPOT's multispectral mode (three channels) in combination with its 10 m resolution panchromatic mode. The algorithm that gave the best visual results used a different integration formula for the near-infrared channel than for the green and red channels, because the panchromatic band is less correlated with the infrared than with the visible channels. The result looks very similar to a color infrared air photo with high resolution and good spectral information quality.
Article
One fundamental task in wetland monitoring is the regular mapping of (temporarily) flooded areas, especially beneath vegetation. Owing to their independence of weather and illumination conditions, Synthetic Aperture Radar (SAR) sensors can provide a suitable database. Using polarimetric modes enables the identification of flooded vegetation by means of the typical double-bounce scattering. In this paper, three decomposition techniques (Cloude-Pottier, Freeman-Durden, and normalized Kennaugh elements) are compared in terms of identifying the flooding extent as well as its temporal change. The image comparison along the time series is performed with the help of the Curvelet-based Change Detection Method. The results indicate that the decomposition algorithm has a strong impact on the robustness and reliability of the change detection. The normalized Kennaugh elements turn out to be the optimal representation for Curvelet-based change detection processing. Furthermore, the co-polarized channels (same transmit and receive polarization, in horizontal (HH) and vertical (VV) direction respectively) appear to be sufficient for wetland monitoring, so that dual-co-polarized imaging modes could be an alternative to conventional quad-polarized acquisitions.
Article
In this paper we develop a dual-polarized version of the entropy/alpha decomposition method. We first develop the basic algorithms and then apply them to theoretical models of surface and volume scattering to demonstrate the potential for discrimination, classification, and parameter estimation. We then apply the formalism to dual-polarized data from the ALOS/PALSAR system operating in PLR and FBD modes to illustrate its application to forest classification, urban scattering characterization, and point target signature analysis for ship classification.
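In the dual-polarized case, the entropy/alpha decomposition reduces to an eigen-decomposition of a 2x2 Hermitian coherency matrix. A minimal sketch, assuming the coherency matrix has already been estimated from the data and following the usual Cloude-Pottier construction for the alpha angle (base-2 logarithm so that entropy lies in [0, 1] for two eigenvalues):

```python
import numpy as np

def dualpol_entropy_alpha(T):
    """Entropy / mean alpha angle from a 2x2 Hermitian coherency matrix.

    Eigenvalues yield pseudo-probabilities p_i; H = -sum(p_i * log2(p_i))
    lies in [0, 1]. Each alpha_i is taken from the magnitude of the first
    eigenvector component; the mean alpha is the p-weighted average, in degrees.
    """
    vals, vecs = np.linalg.eigh(T)                 # real eigenvalues, ascending
    vals = np.clip(vals, 0.0, None)                # guard against round-off
    p = vals / vals.sum()
    H = -np.sum([pi * np.log2(pi) for pi in p if pi > 0.0])
    alphas = np.arccos(np.clip(np.abs(vecs[0, :]), 0.0, 1.0))
    alpha_mean = np.degrees(np.sum(p * alphas))
    return H, alpha_mean
```

A fully depolarized pixel (equal eigenvalues) yields H = 1, while a single dominant mechanism yields H = 0 with alpha determined by that mechanism alone.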