Remote Sensing of Environment 260 (2021) 112468
Available online 24 April 2021
0034-4257/© 2021 The Author(s). Published by Elsevier Inc. This is an open access article under the CC BY-NC-ND license
CNN-based burned area mapping using radar and optical data
Miguel A. Belenguer-Plomer, Mihai A. Tanase, Emilio Chuvieco, Francesca Bovolo
Environmental Remote Sensing Research Group, Dep. of Geology, Geography and Environment, Universidad de Alcalá, Alcalá de Henares 28801, Spain
Center for Information and Communication Technology, Fondazione Bruno Kessler, Trento 38122, Italy
Editor: Marie Weiss
Keywords: Burned area mapping; Convolutional neural networks; Deep learning; Wildland fires
In this paper, we present an in-depth analysis of the use of convolutional neural networks (CNN), a deep learning method widely applied in remote sensing-based studies in recent years, for burned area (BA) mapping combining radar and optical datasets acquired by Sentinel-1 and Sentinel-2 on-board sensors, respectively. Combining active and passive datasets into a seamless wall-to-wall cloud cover independent mapping algorithm significantly improves existing methods based on either sensor type. Five areas were used to determine the optimum model settings and sensors integration, whereas five additional ones were utilised to validate the results. The optimum CNN dimension and data normalisation were conditioned by the observed land cover class and data type (i.e., optical or radar). Increasing network complexity (i.e., number of hidden layers) only resulted in rising computing time without any accuracy enhancement when mapping BA. The use of an optimally defined CNN within a joint active/passive data combination allowed for (i) BA mapping with accuracy similar to or slightly higher than that achieved in previous approaches based on Sentinel-1 (Dice coefficient, DC, of 0.57) or Sentinel-2 (DC of 0.70) only, and (ii) wall-to-wall mapping by eliminating information gaps due to cloud cover, typically observed for optical-based algorithms.
1. Introduction
Fire is one of the natural disturbance processes that generates significant social and economic consequences (Bowman et al., 2020; Chuvieco et al., 2010) and modifies terrestrial ecosystems by reducing biodiversity, changing water supply and liberating vegetation-sequestered carbon (Hansen et al., 2013; Aponte et al., 2016; Pausas and Paula, 2012; Lavorel et al., 2007). At global scale, emissions of aerosols and greenhouse gases (GHGs) from fires may modify the Earth's biochemical cycles and the radiative energy balance (Van Der Werf et al., 2017; Bowman et al., 2009; Jin and Roy, 2005). Fire-induced carbon emissions have been estimated at 2.2 PgC per year over the period 1997–2016 (Van Der Werf et al., 2017), which translates into 20–30% of the global emissions from burning fossil fuels that drive current global warming (Kloster et al., 2012; Flannigan et al., 2009). Besides, a direct relationship has been observed between rising Earth's temperature and fire severity (Hoffmann et al., 2002; Knorr et al., 2016). In the current global warming context, such a relationship may progressively reinforce the role of fire in climate change (Turco et al., 2019; Williams and Abatzoglou, 2016; Flannigan et al., 2006; Langenfelds et al., 2002). However, fires may also have the opposite effect by enabling global cooling processes through increased aerosols in the atmosphere, which induce negative radiative forcing (Ward et al., 2012). Such contrasting effects suggest a limited understanding of fire impact on global climate (Krawchuk et al., 2009; Liu et al., 2019).
Due to its undeniable climatic and environmental importance, fire is considered by the Global Climate Observing System (GCOS) as an Essential Climate Variable (ECV), i.e., a physical, biological or chemical variable, or a group of connected variables, capable of altering the climate system (Bojinski et al., 2014). The European Space Agency (ESA), through the Climate Change Initiative (CCI) programme, is generating remote sensing-based ECVs to improve climate modelling (Plummer et al., 2017; Hollmann et al., 2013). Fire has been included in the CCI programme since 2010 (Fire_cci project). Improving current BA products by developing new algorithms based on state-of-the-art Earth observation datasets, as well as generating a long-term time series of global BA, have been the main goals of the Fire_cci project (Chuvieco et al., 2018). One driving factor behind the project was the need for more accurate BA products that reduce current uncertainties when studying fire-induced climate impacts (Mouillot et al., 2014; Poulter et al., 2015).
* Corresponding author at: Environmental Remote Sensing Research Group, Dep. of Geology, Geography and Environment, Universidad de Alcalá, Alcalá de Henares 28801, Spain.
E-mail address: (M.A. Belenguer-Plomer).
Received 14 May 2020; Received in revised form 16 April 2021; Accepted 19 April 2021
Emissions from small-sized fires were of particular concern (Van Der Werf et al., 2017; Ramo et al., 2021).
Many BA global products have been released over the past decade, mostly based on optical imagery acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS), such as the MCD45 (Roy et al., 2008), MCD64 (Giglio et al., 2009, 2018), Fire_cci v5.0 (Chuvieco et al., 2018) and Fire_cci v5.1 (Lizundia-Loiola et al., 2020). However, such products have limitations, as small-sized fires are difficult to detect due to the coarse pixel spacing (>250 m). Such limitations generate uncertainty about the extent of the global burned area (Chuvieco et al., 2019). In order to reduce BA mapping uncertainty, imagery acquired by medium spatial resolution optical sensors such as Landsat-8 and Sentinel-2 is increasingly used to map BA at regional and global scales. Indeed, a recent study over sub-Saharan Africa based on Sentinel-2 images for 2016 quantified an increase of 80% over existing global BA products (MCD64A1 product Version 6) for the same region and year (Roteta et al., 2019). In addition to problems observed when detecting small-sized fires, global BA products are also affected by cloud cover, which limits the detection of burned pixels, particularly in tropical regions where fire activity occurs over short time spans and continuous cloud cover prevents BA mapping from optical sensors. In order to circumvent such limitations, active sensors (e.g., synthetic aperture radar, SAR) have been used as an alternative to optical imagery for mapping BA (Bourgeau-Chavez et al., 2002; French et al., 1999). The launch of ESA's Sentinel-1A and Sentinel-1B in April 2014 and April 2016, respectively, has greatly improved the availability of SAR images by operationally acquiring (i) dual-polarisation C-band imagery (i.e., vertical–vertical, VV, and vertical–horizontal, VH, polarisations), while (ii) providing precise orbital information, (iii) allowing for viewing geometries more suitable for vegetation monitoring through increased incidence angle, and (iv) improving spatial and temporal resolution, as the revisit period of the Sentinel-1 mission is three days when combining ascending and descending passes from Sentinel-1 A and B. Such advances, coupled with a free data access policy, have allowed for the development of SAR-based BA mapping algorithms (Belenguer-Plomer et al., 2019c). Indeed, a first large-scale BA product based on Sentinel-1 datasets was released recently for the Amazon basin for the year 2017 (https://www.esa-, last accessed March 15th, 2020).
Availability of near-concurrent active (Sentinel-1) and passive (Sentinel-2) datasets allows taking advantage of the similar spatial and temporal resolutions of radar and optical information. Nevertheless, few studies have considered combining such sensors when mapping BA. In addition, there is little consensus regarding the benefits of such data combination. Some studies noted that active–passive data might reduce the limitations associated with each data source (Verhegghen et al., 2016). On the contrary, other studies suggest limited to nil benefits (Brown et al., 2018). The potential of radar–optical based approaches depends on several limiting factors depending on the sensor type. Optical sensors are severely restricted by cloud cover or strong variations in solar illumination (Bourgeau-Chavez et al., 2002; French et al., 1999). Limitations to using SAR for fire mapping include the sensitivity of SAR backscatter to variations in soil moisture and steep topography (Belenguer-Plomer et al., 2018, 2019a). Besides, BA detection and mapping accuracy from both types of sensors is affected by the land cover class (Tanase et al., 2020). Previous studies which investigated the potential of combining SAR and optical data (SAR-O) for BA mapping did so only over relatively small study areas or single biomes, which reduced the validity of the results (Verhegghen et al., 2016; Brown et al., 2018; Stroppiana et al., 2015). Furthermore, the strengths and weaknesses of combining active and passive datasets within a single BA classification algorithm, as opposed to single sensor-based detection and post-detection fusion, have only been superficially analysed.
Deep learning methods have been widely applied in recent years in many remote sensing-based studies (Zhu et al., 2017). Among them, convolutional neural networks (CNN) are being extensively used for classifying satellite images (Ma et al., 2019), although few studies address BA detection and mapping (Ban et al., 2020; Pinto et al., 2020). The present research has been motivated by the limited literature on CNN applied to BA mapping and the need for a more profound understanding of its strengths and limitations over existing classification approaches, particularly the impact of different configurations on BA detection accuracy, as well as the relevance of the burned land cover, the level of fire severity, and water content variations of soil and vegetation on detection performance when using SAR data (Belenguer-Plomer et al., 2019c). This paper analyses the CNN potential for BA mapping when SAR and optical data are combined, considering a wide range of burning conditions. Data from Sentinel-1, Sentinel-2 and their combination have been used to test different CNN configurations for detecting burned pixels. The analysis was carried out over distinct ecosystems and biomes with significant fire activity. The specific objectives of the study were (i) to determine the optimum CNN parameters (i.e., image dimensionality for feature extraction, data normalisation, and the number of hidden layers) for each input dataset (i.e., radar, optical and SAR-O) and land cover class, and (ii) to find the optimal active–passive combination approach for BA mapping. The optimal configuration was validated over independent study areas.
2. Study areas and datasets
Ten Military Grid Reference System (MGRS) tiles, distributed over most of the biomes frequently affected by fires, were used as study areas. These tiles covered a broad range of terrestrial ecoregions, land cover classes, fire intensity (radiative power), as well as soil moisture and precipitation patterns over the considered fire periods (Table 1). Notice that no site was selected within the boreal region, since fire effects there were found to be too specific and not generalisable, such as fire-induced permafrost melting, which increases soil moisture (Bourgeau-Chavez et al., 2002; Kasischke et al., 1994). Thus, additional research focused on this biome must be carried out in future work. Five of the tiles (training tiles) were used to calibrate the algorithm, which included finding the optimum mapping configuration (i.e., CNN parameters and sensor combination). The remaining tiles (test tiles) were reserved for validating the results over independent sites, as well as checking the algorithm's generalisation capability (Fig. 1).
Ground range detected (GRD) C-band backscatter coefficient temporal series acquired by the Sentinel-1 A and B satellites using the interferometric wide (IW) swath mode were the source of radar information. Temporal series acquired by the MultiSpectral Instrument (MSI) on-board the Sentinel-2 A and B satellites were the source of optical information. Sentinel-1 and Sentinel-2 data were downloaded from the Copernicus Open Access Hub. As ancillary data, the enhanced Shuttle Radar Topography Mission (SRTM) Digital Elevation Model (DEM) at 30 m pixel spacing was considered when pre-processing both SAR and optical datasets (see Section 3.1). Ancillary datasets such as land cover information, as well as thermal anomalies due to active fires (i.e., hotspots), were also used within the BA mapping algorithm. The land cover information was extracted from the ESA land cover CCI product for the year 2015 (Land_Cover_cci), which uses the Land Cover Classification System (LCCS) (Di Gregorio, 2005). The LCCS legend was simplified to six landscapes (i.e., shrublands, grasslands, forests, crops, non-burnable and others, the latter including transitional woodland-shrub and sclerophyllous vegetation), as in our previous research, to simplify the BA mapping procedure (Belenguer-Plomer et al., 2019c). Hotspots from the VIIRS (Visible Infrared Imaging Radiometer Suite) (Schroeder et al., 2014) and MODIS (Giglio et al., 2016) sensors, at 375 m and 1 km spatial resolution, respectively, were downloaded from NASA's Fire Information for Resource Management System (FIRMS).
Reference re perimeters were used to validate the BA products. The
reference perimeters were derived from independent sensors (i.e.,
Landsat imagery) to avoid auto-correlation (Tanase et al., 2020).
Landsat-8 BOA (bottom of atmosphere) reectance images with cloud
cover below 70% were downloaded from the United States geological
M.A. Belenguer-Plomer et al.
Remote Sensing of Environment 260 (2021) 112468
survey repository (USGS) for each tile. The extraction of the reference
re perimeters is explained in detail in Section 3.4
3. Methods
3.1. Sentinel-1 pre-processing
Sentinel-1 GRD images were processed using the Orfeo ToolBox (OTB), an open-source software developed by the Centre National d'Études Spatiales (CNES), France (Inglada and Christophe, 2009). The processing chain has been utilised in previous studies (Belenguer-Plomer et al., 2019c,b; Ottinger et al., 2017; Bouvet et al., 2018) and when generating the FireCCIS1SA10 product, the first large-scale BA product from Sentinel-1 data for the Amazon basin. Sentinel-1 data processing may be divided into three steps: data preparation, geocoding, and multi-temporal filtering (Fig. 2). Sentinel-1 data were calibrated radiometrically to gamma nought (γ⁰) via a lookup table obtained from the product metadata. The calibrated imagery was orthorectified using topographical information from the SRTM DEM. Since ESA often provides Sentinel-1 images of the same relative orbit within distinct slices, images from the same orbit were mosaicked and then spatially trimmed to the coordinates of the MGRS tile. Lastly, the processed images of each orbit were temporally filtered (Quegan et al., 2000). All images were processed to the Sentinel-1 nominal resolution (20 m) and subsequently aggregated to 40 m to reduce speckle (Tanase and Belenguer-Plomer, 2018; Belenguer-Plomer et al., 2020).
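The final aggregation step (20 m to 40 m) is worth making concrete: averaging backscatter given in dB must be done in the linear power domain. A minimal sketch, with hypothetical function and variable names:

```python
import numpy as np

def aggregate_backscatter_db(gamma0_db, factor=2):
    """Block-average a gamma-nought image given in dB (e.g., 20 m -> 40 m
    for factor=2), averaging in the linear power domain as required."""
    linear = 10.0 ** (gamma0_db / 10.0)               # dB -> linear power
    h, w = gamma0_db.shape
    h, w = h - h % factor, w - w % factor             # trim to a multiple of factor
    blocks = linear[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return 10.0 * np.log10(blocks.mean(axis=(1, 3)))  # mean power, back to dB

img = np.full((4, 4), -10.0)           # uniform -10 dB scene
out = aggregate_backscatter_db(img)
print(out.shape)  # (2, 2)
```

A uniform scene keeps its dB value after aggregation, which is a quick sanity check that the dB/linear conversion is done in the right order.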
BA mapping is an iterative process in which the fire-detection interval is delimited by the temporal gap between two consecutive data
Table 1
Terrestrial ecoregions (Olson et al., 2001), predominant land cover classes (from CCI land cover, 2015), mean fire radiative power (FRP, derived from VIIRS thermal anomalies products), pre- and post-fire soil moisture (SM, from the SMAP product), and accumulated precipitation (from the CHIRPS product) for each MGRS tile. Notice that ± refers to the standard deviation.

| MGRS | Terrestrial ecoregion | Predominant land covers | FRP (MW) | SM pre-fire (m³/m³) | SM post-fire (m³/m³) | Rainfall (mm) |
|------|----------------------|-------------------------|----------|---------------------|----------------------|---------------|
| 10UEC | Tcf | F (76.7%), S (7.9%) and G (7.2%) | 17.5±24.6 | 0.11±0.03 | 0.11±0.03 | 2.4 |
| 10SEH | Mfws | G (24.59%), C (24.22%) and F (19.23%) | 10.0±4.51 | 0.1±0.03 | 0.17±0.03 | 4.79 |
| 20LQQ | TSTmbf | F (93.8%), C (3.7%) and S (2.1%) | 13.86±16.13 | 0.33±0.06 | 0.24±0.05 | 3.61 |
| 20LQP | TSTmbf | F (93.1%), C (5.7%) and S (1.01%) | 13.78±14.5 | 0.1±0.05 | 0.13±0.03 | 1.77 |
| 29TNG | Tbf | S (36.1%), F (26.5%) and C (10.6%) | 24.9±33.06 | 0.09±0.02 | 0.18±0.02 | 4.73 |
| 29TNE | Mfws | S (45%), F (28.3%) and C (12.7%) | 24.9±33.06 | 0.07±0.03 | 0.07±0.03 | 0.24 |
| 33NTG | TSTgss | F (89.4%), S (10.1%) and O (0.06%) | 9.03±7.37 | 0.23±0.04 | 0.09±0.03 | 91.2 |
| 36NXP | TSTgss | S (52.7%), F (41.3%) and C (4.7%) | 14.24±14.68 | 0.13±0.06 | 0.12±0.05 | 17.64 |
| 50JML | Mfws | G (70.7%), S (12.9%) and F (9.7%) | 13.03±13.69 | 0.11±0.02 | 0.07±0.01 | 146.63 |
| 52LCH | TSTgss | S (72.5%), O (25.7%) and G (0.4%) | 8.98±9.12 | 0.2±0.04 | 0.18±0.03 | 24.25 |

Terrestrial ecoregion: Tcf - Temperate Coniferous Forests; Mfws - Mediterranean Forests, woodlands and scrubs; TSTmbf - Tropical and subtropical moist broadleaf forests; Tbf - Temperate broadleaf and mixed forests; TSTgss - Tropical and subtropical grasslands, savannas and shrublands.
Land covers: F - Forests; S - Shrubs; G - Grasslands; C - Crops; O - Others.
CCI - Climate Change Initiative; VIIRS - Visible Infrared Imaging Radiometer Suite; MODIS - Moderate Resolution Imaging Spectroradiometer; SMAP - Soil Moisture Active Passive; CHIRPS - Climate Hazards Group InfraRed Precipitation with Station data.
Fig. 1. Location of the military grid reference system tiles used for training and test.
Fig. 2. Data chain pre-processing of SAR images with Orfeo ToolBox (Belenguer-Plomer et al., 2019c).
acquisitions. For each fire-detection interval (t_i), determined by two consecutive Sentinel-1 acquisition dates (t_{i-1} and t_i), the two most recent images acquired before t_{i-1} (i.e., pre-fire) and all images acquired up to 180 days after t_i (post-fire) were used as input for the CNN BA mapping algorithm. Both available polarisations (i.e., VV and VH) and their ratio (i.e., VH/VV) were considered for each SAR image sensing date. Notice that the log-ratio used in some SAR-based change detection studies was not included, since it had lower relevance than simple SAR ratios when monitoring fire effects (Belenguer-Plomer et al., 2019a). The 180-day post-fire interval accounted for fire-induced temporal variation of the backscattering process that may occur at some point after a fire event due to temporal decorrelation (Belenguer-Plomer et al.).
3.2. Sentinel-2 pre-processing
The ESA atmospheric correction algorithm, sen2cor (v.2.4.0), was used to derive Sentinel-2 surface reflectance images by correcting not only atmospheric but also topographic effects. Bi-cubic interpolation was subsequently used to resample the 20 m Sentinel-2 images to the pre-processed Sentinel-1 output pixel spacing of 40 m. Temporal composites of Sentinel-2 images were generated to reduce the number of cloud-affected pixels, using images acquired by both satellites for the selected bands (i.e., B02, B03, B04, B05, B06, B07, B8a, B11 and B12). Given a fire-detection interval (t_i), as determined by two consecutive acquisition dates of Sentinel-2 A and B (t_{i-1} and t_i), the sen2cor-based Scene Classification (SCL) was considered when generating the temporal composites for t_{i-1} and t_i. Pixels affected by clouds or shadows were gap-filled using data from Sentinel-2 imagery acquired at the closest date before t_{i-1} and after t_i, up to 30 days (Melchiorre and Boschetti, 2018) (Fig. 3).
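The gap-filling logic (keep the cloud-free observation closest in time, within 30 days) might look like the sketch below; this is an illustrative reading of the compositing rule, not the paper's implementation:

```python
import numpy as np

def temporal_composite(stack, dates, target, valid, max_gap=30):
    """Per-pixel composite: keep the valid (cloud-free) observation closest
    in time to `target`, searching outwards up to max_gap days.
    stack: (n_dates, rows, cols); valid: boolean mask, True = usable pixel."""
    order = sorted(range(len(dates)), key=lambda i: abs(dates[i] - target))
    composite = np.full(stack.shape[1:], np.nan)
    for i in order:                               # nearest acquisition first
        if abs(dates[i] - target) > max_gap:
            break
        fill = valid[i] & np.isnan(composite)     # still-unfilled pixels
        composite[fill] = stack[i][fill]
    return composite

# Two acquisitions at 0 and 10 days from the target; one cloudy pixel at day 0
stack = np.array([[[1.0, 2.0]], [[5.0, 6.0]]])
valid = np.array([[[True, False]], [[True, True]]])
comp = temporal_composite(stack, dates=[0, 10], target=0, valid=valid)
print(comp)  # [[1. 6.]]
```

The cloudy pixel is filled from the acquisition ten days away, while the clear pixel keeps its target-date value.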
Along with the surface reflectance for each of the two temporal composites (pre- and post-fire), the following indices were computed and fed into the CNN: (i) the Normalized Burn Ratio (García and Caselles, 1991) (NBR, Eq. (1)), (ii) the Normalized Difference Water Index (Gao, 1996) (NDWI, Eq. (3)), (iii) the Normalized Difference Vegetation Index (Rouse Jr et al., 1974; Tucker, 1979) (NDVI, Eq. (2)) and (iv) the Mid-InfraRed Burn Index (Trigg and Flasse, 2001) (MIRBI, Eq. (4)). These indices are part of the state of the art of BA mapping from optical datasets (Roteta et al., 2019; Loboda et al., 2007; Fraser et al., 2000).

NBR = (NIR − SWIR2)/(NIR + SWIR2) (1)

NDVI = (NIR − Red)/(NIR + Red) (2)

NDWI = (NIR − SWIR1)/(NIR + SWIR1) (3)

MIRBI = 10 × SWIR2 − 9.8 × SWIR1 + 2 (4)

where Red, NIR, SWIR1 and SWIR2 are the surface reflectances of bands 4 (650–680 nm), 8a (785–899 nm), 11 (1565–1655 nm) and 12 (2100–2280 nm) of the MSI on-board the Sentinel-2 satellites, respectively.
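The four indices can be computed element-wise from the band reflectances (scalars or NumPy arrays alike); a minimal sketch using the standard definitions cited above:

```python
def spectral_indices(red, nir, swir1, swir2):
    """BA spectral indices from Sentinel-2 reflectance (bands 4, 8a, 11, 12).
    Works element-wise on scalars or NumPy arrays."""
    nbr = (nir - swir2) / (nir + swir2)      # Normalized Burn Ratio, Eq. (1)
    ndvi = (nir - red) / (nir + red)         # NDVI, Eq. (2)
    ndwi = (nir - swir1) / (nir + swir1)     # NDWI (Gao, 1996), Eq. (3)
    mirbi = 10 * swir2 - 9.8 * swir1 + 2     # Mid-InfraRed Burn Index, Eq. (4)
    return nbr, ndvi, ndwi, mirbi

nbr, ndvi, ndwi, mirbi = spectral_indices(red=0.1, nir=0.4, swir1=0.2, swir2=0.1)
print(round(nbr, 2))  # 0.6
```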
3.3. SAR-optical data integration
As Sentinel-1 and Sentinel-2 acquisition dates may not coincide when capturing images over the same geographical area, the Sentinel-1 acquisition dates defined each fire-detection interval (t_i) when jointly using SAR and optical data, because of their complete spatial coverage (i.e., no missing pixels due to cloud cover). Then, Sentinel-2 images were matched to the Sentinel-1 dates for each detection period as follows when there was no temporally coincident image: for the pre-fire date, the closest Sentinel-2 image acquired before was selected as the t_{i-1} date, whereas for the post-fire date, the closest image acquired after was selected as the t_i date. Once the Sentinel-2 images were matched with the Sentinel-1 detection interval, cloud-related gaps were filled by carrying out the temporal compositing process (see Section 3.2). Subsequently, the Sentinel-1 radar-derived images (i.e., VV, VH and VH/VV ratio) acquired on t_{i-1} and t_i, as well as the Sentinel-2 temporal composites (i.e., spectral bands and spectral indices), were stacked and fed into the classification algorithm. Similar data combination approaches based on Sentinel-1 and Sentinel-2 had previously been used for vegetation monitoring (Sharma et al., 2018; Tavares et al., 2019), also employing CNN (Scarpa et al., 2018).
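The date-matching rule reduces to picking the closest Sentinel-2 acquisitions bracketing the Sentinel-1 interval; a sketch with illustrative names and dates:

```python
from datetime import date

def match_s2_to_interval(s2_dates, t0, t1):
    """Closest Sentinel-2 acquisition on/before the interval start (pre-fire)
    and on/after its end (post-fire); None if no such image exists."""
    pre = max((d for d in s2_dates if d <= t0), default=None)
    post = min((d for d in s2_dates if d >= t1), default=None)
    return pre, post

s2 = [date(2019, 1, 5), date(2019, 1, 15), date(2019, 1, 25)]
pre, post = match_s2_to_interval(s2, t0=date(2019, 1, 10), t1=date(2019, 1, 20))
print(pre, post)  # 2019-01-05 2019-01-25
```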
3.4. Reference burned perimeters and validation
The reference re perimeters were extracted from Landsat-8 surface
reectance. The extraction was based on the validation framework
previously established for BA products (Padilla et al., 2014, 2015, 2017;
Fernandez-Carrillo et al., 2018; Franquesa et al., 2020). A random for-
ests classier was trained using samples of burned, unburned and no
Fig. 3. Graphical representation of temporal composite formation. The fire-detection interval (t_i) is defined by the time span of two consecutive Sentinel-2 images, being dependent on the revisit period.
data pixels (i.e., clouds). These samples were selected through manual digitisation of polygons over a false colour composite (RGB: SWIR2, NIR, Red), which provided an experienced user with a clear visual distinction between burned, unburned and no data pixels. Input data for the random forests classifier were (i) band 5 (NIR; 0.85–0.88 μm) and band 7 (SWIR2; 2.11–2.29 μm) of the post-fire date, (ii) the post-fire NBR (Eq. (1)) and (iii) the temporal difference between pre- and post-fire NBR values (dNBR) from Landsat-8 images. Model training and scene classification were carried out iteratively, by including new training data in each iteration and re-running the classifier until the reference fire perimeters were considered accurate at close-up visual inspection.
Confusion matrices were used to validate the CNN-based BA maps (Table 2). The Dice coefficient (Eq. (5)) and the omission (Eq. (6)) and commission errors (Eq. (7)), which are widely used metrics when validating BA products, were computed from the matrix to assess the quality of the maps (Padilla et al., 2015).

DC = 2P11/(P1+ + P+1) (5)

OE = P21/P+1 (6)

CE = P12/P1+ (7)
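Using the Table 2 notation (P11 = burned in both detection and reference, P12 = commission, P21 = omission), Eqs. (5)–(7) can be computed as:

```python
def accuracy_metrics(p11, p12, p21, p22):
    """Dice coefficient, omission and commission errors (Eqs. (5)-(7))
    from the Table 2 confusion-matrix counts."""
    p1_plus = p11 + p12        # detected burned (row total)
    p_plus1 = p11 + p21        # reference burned (column total)
    dc = 2 * p11 / (p1_plus + p_plus1)
    oe = p21 / p_plus1
    ce = p12 / p1_plus
    return dc, oe, ce

dc, oe, ce = accuracy_metrics(p11=80, p12=20, p21=20, p22=880)
print(dc, oe, ce)  # 0.8 0.2 0.2
```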
3.5. Burned area mapping experimental setup
The BA mapping algorithm identies changes in C-band backscatter
and surface reectance associated with burning events. BA mapping was
carried out using (i) Sentinel-1 derived incoherent SAR-based metrics
(see Section 3.1), (ii) Sentinel-2 surface optical reectance (see Section
3.2) and (iii) combining SAR and optical selected datasets (see Section
3.3). Thus, up to three BA maps derived from different input datasets
were generated for each detection period. Hotspots and land cover in-
formation were used for algorithm training purposes (see Section 3.5.2).
3.5.1. Convolutional neural networks (CNN) background
Deep learning methods are increasingly applied to remote sensing problems (Zhu et al., 2017), with CNN being widely used in land cover classification, the retrieval of bio-geophysical variables (Ma et al., 2019) or BA detection and classification (Ban et al., 2020; Pinto et al., 2020). CNNs are structured in stages of convolution and pooling, followed by at least one fully connected layer (LeCun et al., 2015; Zhu et al., 2017). Each convolutional layer carries out a spatial–spectral feature extraction (Zhong et al., 2019), generating a set of filtered data where patterns such as edges are emphasised (Strigl et al., 2010). From the convolved filtered data, each neuron takes a vector and applies an activation function to a weighted linear summation (Eq. (8)) (Maggiori et al., 2016):

a = f(wx + b) (8)

where a is the neuron output, w is the weight given to the vector x, b is the bias value, and f is the activation function, which introduces non-linearity into the network and permits learning complex features from data (Agostinelli et al., 2014; Saha et al., 2019). The most common activation function in remote sensing applications is the rectified linear unit (ReLU) (Nair and Hinton, 2010), which keeps values greater than zero and converts the remaining ones to zero (Eq. (9)):

f(x) = x, if x ≥ 0; 0, otherwise (9)
A loss function is used to quantify the errors when classifying a training data vector, comparing the CNN-based prediction with the label of such a vector (Maggiori et al., 2016). The weights and biases of each neuron are adjusted using the backpropagation criterion during network training, carrying out multiple forward and backward iterations (Anantrasirichai et al., 2019) to minimise the errors via gradient descent (Schmidhuber, 2015). The activated data are sub-sampled to reduce the tensor size, which increases the receptive field of the next convolutional layer of the network (Kellenberger et al., 2018; Strigl et al., 2010). The last layer of the network performs the classification instead of feature extraction; thus, a fully connected neural network is used. Usually, such a fully connected network is followed by a softmax layer, which maps the input data to the probability of belonging to each considered class (Hu et al., 2015; Anantrasirichai et al., 2019; Zhang et al., 2018).
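The neuron of Eq. (8), a ReLU activation and a softmax output layer can be sketched in a toy forward pass (the weights below are arbitrary illustrative values, not trained ones):

```python
import numpy as np

def relu(x):
    """Rectified linear unit: keeps positive values, zeroes the rest."""
    return np.maximum(0.0, x)

def neuron(x, w, b):
    """Neuron output a = f(wx + b) of Eq. (8), with f = ReLU."""
    return relu(np.dot(w, x) + b)

def softmax(z):
    """Maps scores to class probabilities (final CNN layer)."""
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy forward pass with arbitrary (untrained) weights
x = np.array([0.2, -0.5, 1.0])
a = neuron(x, w=np.array([0.4, 0.3, 0.2]), b=0.1)
p = softmax(np.array([a, 1.0 - a]))
print(round(float(a), 2))  # 0.23
```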
3.5.2. Selection of training data
CNN is a supervised learning method and, as such, needs sample data (i.e., burned and unburned pixels) for training purposes. In this study, the training data extraction relied on hotspots and land cover information at each MGRS tile (100×100 km). Hence, a specific CNN model was built and trained for each fire-detection interval (t_i) and land cover class at each tile, which limited the large variations in climate regimes, vegetation classes or phenological cycles. The use of hotspots, well established for BA mapping (Belenguer-Plomer et al., 2019c; Roteta et al., 2019), was essential, especially when using the radar-derived metrics, to differentiate changes due to fires (Huang and Siegert, 2006). In addition, processing pixels according to their land cover class improved pattern characterisation, which resulted in a more accurate separation of burned and unburned areas when considering SAR, optical and both datasets (Belenguer-Plomer et al., 2018; Tanase et al., 2020). Therefore, CNN training and the subsequent mapping process were carried out class by class, with the number of CNN models built depending on the land cover classes present in each study area. For a land cover class k, the training pixels of the burned category were selected within a spatial buffer set to twice the thermal sensor's spatial resolution (Langner et al., 2007; Sitanggang et al., 2013). The unburned training pixels were those outside the hotspot buffer areas, as well as those from non-burnable (e.g., water) land cover classes according to the CCI land cover map reference.
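Training-sample selection could be sketched as below, interpreting the buffer as a radius of twice the thermal sensor's spatial resolution (375 m for VIIRS); function and parameter names are our assumptions:

```python
import numpy as np

def training_masks(hotspot_rc, shape, sensor_res=375.0, pixel=40.0):
    """Burned/unburned training candidates from hotspot pixel coordinates:
    burned samples fall within a buffer of twice the thermal sensor's
    resolution around each hotspot; unburned ones lie outside all buffers."""
    radius_px = 2.0 * sensor_res / pixel              # buffer radius in pixels
    rows, cols = np.indices(shape)
    burned = np.zeros(shape, dtype=bool)
    for r, c in hotspot_rc:
        burned |= (rows - r) ** 2 + (cols - c) ** 2 <= radius_px ** 2
    return burned, ~burned

burned, unburned = training_masks([(25, 25)], shape=(50, 50))
print(burned[25, 25], burned[0, 0])  # True False
```

In the algorithm itself, the unburned mask would additionally include non-burnable classes from the CCI land cover map.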
3.5.3. Assessment of optimum CNN configuration for BA mapping
The architecture of the CNNs was based on AlexNet (Krizhevsky et al., 2012), integrating hidden convolutional layers, the ReLU activation function, max-pooling, fully connected layers, dropout and softmax classification. According to Bashiri and Geranmayeh (2011), the parameters of a CNN model, such as the number of layers, neurons and filters, have to be adjusted ad hoc for each dataset. Hence, up to eight different CNN combinations for each input dataset were analysed to determine the optimal network for BA detection and mapping (Table 3).
Table 2
Confusion matrix scheme.

| Detection | Burned (reference) | Unburned (reference) | Row total |
|-----------|--------------------|----------------------|-----------|
| Burned | P11 | P12 | P1+ |
| Unburned | P21 | P22 | P2+ |
| Col. total | P+1 | P+2 | P |
Table 3
The eight configurations assessed for each input dataset (S - simple, C - complex).

| CNN model | Convolution dimension | Data normalisation |
|-----------|-----------------------|--------------------|
| S | 1D | z-score |
| S | 1D | [0, 1] |
| S | 2D | z-score |
| S | 2D | [0, 1] |
| C | 1D | z-score |
| C | 1D | [0, 1] |
| C | 2D | z-score |
| C | 2D | [0, 1] |
Four architectures were analysed after combining two CNN-groups that differed in terms of (i) the number of hidden layers and filters, and (ii) the image domain where the convolutional feature extraction was executed (i.e., spatial or spectral). The first group included two CNN models with a different number of hidden layers and filters. The first model used two hidden layers with 32 and 64 filters, respectively, whereas the second model had a third additional hidden layer where 128 filters were applied. Hereafter, the models with two and three hidden layers are referred to as simple (S) and complex (C), respectively. The second group involved two convolution-based filters for feature extraction. Given any pixel located at row i and column j of the input image X, the first filter implied a pixel-wise convolution over the spectral domain (1D). A three-pixel kernel was considered to extract features from the spectral information of the previously stacked optical images and radar channels (see Section 3.1 to Section 3.3). The second filter considered a 3×3 kernel around the centre pixel (spatial domain, 2D) to extract the features used for BA detection (Kussul et al., 2017; Xu et al., 2017; Zhang et al., 2019) (Fig. 4).
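The two feature-extraction modes can be illustrated with a toy numpy example; the filter weights and stack size below are hypothetical, chosen only to show the two sliding directions:

```python
import numpy as np

rng = np.random.default_rng(0)
cube = rng.random((8, 8, 6))            # rows x cols x stacked bands

# 1D (spectral): a three-tap kernel slides along the band axis of a
# single pixel (i, j), mixing adjacent channels of the stack
k1 = np.array([0.25, 0.5, 0.25])        # hypothetical filter weights
spectral_feats = np.convolve(cube[4, 4, :], k1, mode='valid')

# 2D (spatial): a 3x3 kernel covers the neighbourhood of the centre
# pixel within one band, mixing spatial context instead
k2 = np.full((3, 3), 1 / 9)             # hypothetical averaging filter
patch = cube[3:6, 3:6, 0]               # 3x3 window, first band
spatial_feat = float(np.sum(patch * k2))  # one output activation
```

In the actual networks these kernels are learned during training; the sketch only shows which axis each convolution traverses.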
Two normalisation methods were tested separately with each image
band being normalised (i) in the interval [0, 1] (Benedetti et al., 2018)
(Eq. 10) and (ii) applying the z-score normalisation (Zhong et al., 2017)
(Eq. 11).
interval_[0,1](x) = (x − min_b) / (max_b − min_b)   (10)

zscore(x) = (x − μ_b) / σ_b   (11)

where x is a given pixel of a band b of the image, min_b and max_b are the minimum and maximum of band b, and μ_b and σ_b are its mean and standard deviation, respectively. Table 3 shows the eight configurations for BA mapping performance assessment.
4. Results
4.1. Optimum CNN configuration
Depending on the MGRS tile, the optimum CNN configuration varied (Fig. 5). When Sentinel-1 (S-1) data were fed into the CNN, the dispersion of accuracy metrics between tiles at any CNN configuration was higher than when feeding Sentinel-2 (S-2) data or both Sentinel-1 and Sentinel-2 data (S-1 + S-2). The inter-tile accuracy dispersion of the radar-fed CNN was lower when carrying out the convolution-based feature extraction through the spatial domain of the image (2D), which decreased omission errors (36NXP, 20LQQ and 50JML) despite a slight increase in commission errors for some tiles (10UEC and 29TNE). Similar results were achieved when feeding the CNN model using Sentinel-2 data only. Conversely, when feeding both types of data (i.e., S-1 + S-2) into the CNN, the convolution dimension (i.e., 1D or 2D) did not influence the accuracy. In addition, the time required to train 2D models was lower than for 1D ones, particularly for complex (C) networks, regardless of the data normalisation type. Using the more complex (C) CNN models instead of the simplest (S) ones did not increase the accuracy, regardless of the type of data fed into the network. Similarly, training times were not influenced by the data normalisation method (z-score vs [0, 1]). However, a marginal enhancement of mapping accuracy was observed when using the z-score normalisation for the Sentinel-1 fed CNN, particularly in tile 50JML (i.e., Australian grasslands), where OE was reduced significantly (for 2D CNN). Conversely, when feeding Sentinel-2 or Sentinel-1 and Sentinel-2 data, the [0, 1] normalisation provided slightly more accurate BA detection rates.
By land cover class, the lowest BA mapping accuracy was observed over Grasslands, particularly when using Sentinel-1 data, due to high OE (Fig. 6). However, combining 2D convolution with z-score normalisation improved the DC by 59% over 1D convolution-based approaches with z-score (DC 0.35±0.24 vs 0.22±0.2, mean ± standard deviation). The same configuration (2D and z-score) also improved the accuracy over Crops, especially when compared to 1D with [0, 1] data normalisation (DC 0.37±0.14 vs 0.30±0.25), although to a lesser extent, while over Forests the improvement was marginal. Accuracy metrics were stable for Shrubs over all the configurations tested, although the 2D and z-score configuration provided less overall dispersion among the analysed tiles. In the Others class, the highest mapping accuracy based on Sentinel-1 data was achieved using the convolution in the spectral domain (1D).
Although the Sentinel-2 fed CNN achieved higher accuracy than the Sentinel-1 fed one, such an improvement was conditioned by land cover class and configuration. When using optical data, spectral-based feature extraction (1D) was the most appropriate, except for Crops, where the spatial-based (2D) one improved the results. Besides, marginal differences in BA accuracy were found between the two data normalisation types, with the z-score normalisation providing higher DC values over all land cover classes except Forests.
When both Sentinel-1 and Sentinel-2 data were fed to the CNN, the BA classification did not improve (except for Crops) compared to using Sentinel-2 data only, despite requiring more computation time in all configurations. Over cropping areas, SAR or optical data alone provided a low mapping accuracy (highest DCs achieved 0.37±0.14 and 0.42±0.05, respectively). However, the SAR-O combination improved the accuracy (DC 0.44±0.09) by reducing the OE. Such an improvement was maximum for the 2D convolution and z-score normalisation. For the remaining land cover classes, the SAR and optical combination did not improve the results when cloud cover was not an issue. Despite Sentinel-2 temporal compositing, gaps remained over areas frequently affected by clouds. As for the optimum CNN configuration, 1D convolution and [0, 1] normalisation improved the mapping accuracy (as for the Sentinel-1 based network). The highest mapping accuracy was observed over Forests regardless of the data normalisation method, convolution dimension and input remote sensing data (i.e., S-1, S-2, S-1 + S-2). The optimum CNN configuration for each land cover class is presented in Table 4 as a function of the input remote sensing data.
The softmax layer (i.e., the last layer of the CNN) predicted the probability that each pixel would have been burned or unburned.
Fig. 4. Feature extraction carried out in a convolution (Conv) through (a) the spectral-domain (1D) and (b) the spatial-domain (2D) of the input image. Relevant
parts of CNN such as ReLU, max-pooling, fully-connected network and softmax layers are also shown.
Although in our previous analysis pixels were classified as burned when such a probability was equal to or above 50%, such a fixed threshold, based on a statistical proxy instead of on data analysis, may not provide the optimum performance. Hence, we analysed the use of a variable probability threshold to improve the BA mapping accuracy, balancing CE and OE (Fig. 7). Such variation depended on the land cover class and the input data fed to the CNN (Table 5). Over Grasslands, Crops and Shrubs (i.e., the classes with the highest OE (Fig. 6)), accuracies improved when the softmax burned probability threshold was reduced (40 to 50%), although it depended on the input data. Conversely, for the Forests class, a more restrictive threshold improved the classification. The optimum threshold differed with the input data, from 65% when using Sentinel-2 data alone to 75% when using Sentinel-1 or integrating SAR and optical data. BA accuracy improved marginally for the Others class when varying the threshold up to a probability of 80% for Sentinel-1 and 70% for Sentinel-2. However, when integrating SAR and optical data, the improvement was considerable over the 55–75% interval, with the highest accuracy achieved for a softmax threshold of 70%. Such an improvement allowed maps based on SAR-O integration to reach higher accuracy than those derived from individual Sentinel-1 or Sentinel-2 datasets. Past the optimum threshold, mapping accuracy decreased considerably, especially when using Sentinel-2 data. This effect was observed for all land cover classes except Grasslands.
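The threshold search described above amounts to scanning candidate cut-offs on the softmax burned probability and keeping the one that maximises the DC; a minimal sketch (the threshold grid and function name are assumptions, not the paper's implementation):

```python
import numpy as np

def best_threshold(prob_burned, reference, grid=None):
    """Return the softmax burned-probability cut-off maximising the
    Dice coefficient against reference burned pixels."""
    if grid is None:
        grid = np.round(np.arange(0.30, 0.85, 0.05), 2)
    best_t, best_dc = 0.5, -1.0
    for t in grid:
        detected = prob_burned >= t
        p11 = np.sum(detected & reference)    # agreed burned
        p12 = np.sum(detected & ~reference)   # commission
        p21 = np.sum(~detected & reference)   # omission
        dc = 2 * p11 / max(2 * p11 + p12 + p21, 1)
        if dc > best_dc:
            best_t, best_dc = t, dc
    return best_t, best_dc
```

In the study this selection is done per land cover class and per input dataset, which yields the class-specific thresholds of Table 5.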
4.2. SAR-optical mapping strategy
Three different BA mapping strategies for combining SAR and optical datasets were analysed: (i) stacking radar and optical data (i.e., backscatter coefficient, optical surface reflectances and spectral indices) and feeding them to the CNN (Fig. 8, a); (ii) using the BA detected from the optical data and filling the cloud cover-induced gaps with pixels mapped from radar data (Fig. 8, b); and (iii) joining the BA detected independently from radar and optical datasets (Fig. 8, c). For the Forests class, the three mapping strategies provided similar results (i.e., DC values). However, joining individual Sentinel-1 and Sentinel-2 maps may provide an advantage by reducing burned pixels missed due to clouds or shadows, which is not possible when using optical temporal composites alone. For Shrubs, the observed DC values were similar for all mapping strategies, with radar-filled optical-based BA maps showing slightly higher DC values when compared to the remaining two
Fig. 5. Dice coefficient (DC), commission and omission errors (CE and OE) and seconds needed to train the models, by training tile, considering different CNN configurations and input data (Sentinel-1 - S-1, Sentinel-2 - S-2 and both datasets - S-1 + S-2).
strategies. Over Grasslands, the radar-filled optical-based BA maps provided the most accurate results. Over the two remaining land cover classes (i.e., Others and Crops), feeding radar-optical stacked data into the CNN improved the accuracy. In particular, over the Others class, the radar-optical stacks reduced the CE by 20%.
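Strategies (ii) and (iii) reduce to simple per-pixel mask operations once the individual maps exist; a toy sketch with hypothetical four-pixel maps (strategy (i), not shown, simply concatenates the SAR and optical channels before the CNN):

```python
import numpy as np

# hypothetical per-pixel results: True = burned
optical_ba = np.array([True,  False, True,  False])
optical_ok = np.array([True,  True,  False, False])  # False = cloud gap
radar_ba   = np.array([False, True,  True,  True])

# strategy (ii): keep the optical map where valid, fill gaps with radar
gap_filled = np.where(optical_ok, optical_ba, radar_ba)

# strategy (iii): union of the two independent detections
joined = optical_ba | radar_ba
```

Note the trade-off visible even in this toy case: the union recovers every burned pixel either sensor saw, at the cost of inheriting the commission errors of both maps.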
4.3. Burned area mapping validation
The optimum CNN configuration and mapping strategy, according to the trends observed over the training tiles, were assessed over the test tiles (Table 6), with the mapping accuracy varying depending on the input data (i.e., S-1, S-2 and S-1 + S-2). Higher mapping errors (DC<0.6) were observed over grassland-dominated tiles in Africa and Australia (33NTG and 52LCH, respectively), regardless of the input data. Over the remaining tiles, DC values were above 0.7. Over two tiles (20LQP and 33NTG), the radar-based maps were more accurate than the optical-based ones (DC of 0.81 vs 0.71 and 0.50 vs 0.47, respectively), with the opposite being true for the remaining three tiles. However, the use of Sentinel-1 data (i.e., cloud cover independent) allowed for wall-to-wall mapping. In tile 52LCH, the optical-based maps did not provide information for 17.6% of the area (Fig. 9).
By land cover type, the highest accuracy was observed over forested areas when mapping BA through the SAR-O combination (DC 0.72), as opposed to using only SAR (DC 0.63) or optical (DC 0.66) information (Fig. 10). The most relevant improvement when combining Sentinel-1 and Sentinel-2 was found over the Others class, where the synergy of both sensors reduced OE and CE considerably. The lowest accuracy was achieved over the Crops class, mainly due to the high CE (near 0.8) observed for both sensor types. In addition, for the radar-based maps, BA accuracy
Fig. 6. Mean and standard error of Dice coefficient (DC), commission and omission errors (CE and OE) and seconds per pixel needed to train the models, by land cover class (O-others, F-forests, S-shrubs, G-grasslands and C-crops) of the training tiles, considering different CNN configurations and input datasets (Sentinel-1 - S-1, Sentinel-2 - S-2 and both datasets - S-1 + S-2).
Table 4
Optimum CNN configuration and Dice coefficient mean (±standard deviation) by land cover class (O-others, F-forests, S-shrubs, G-grasslands and C-crops) of the training tiles and input datasets (Sentinel-1 - S-1, Sentinel-2 - S-2 and both datasets - S-1 + S-2).

LC   S-1          DC (S-1)    S-2          DC (S-2)    S-1+S-2      DC (S-1+S-2)
O    1D z-score   0.46±0.31   1D z-score   0.50±0.31   1D [0, 1]
F    2D z-score   0.60±0.23   1D [0, 1]    0.64±0.21   1D [0, 1]
S    2D z-score   0.50±0.23   1D z-score   0.56±0.22   1D [0, 1]
G    2D z-score   0.35±0.24   1D z-score   0.38±0.20   all          0.31±0.23
C    2D z-score   0.37±0.15   2D z-score   0.43±0.19   2D z-score
over cropping areas was also negatively influenced by high OE, which did not occur when using optical datasets. The combination of Sentinel-1 and Sentinel-2 data generally improved or maintained the accuracy achieved from individual datasets, except for tile 20LQP, where the SAR-based maps were the most accurate. When combining the two sensor types, we observed a considerable reduction in OE coupled with a marginal increase in CE. The average OE reduction and CE increment over the five test tiles were 0.22±0.22 and 0.05±0.17 when compared to radar-based maps, and 0.09±0.08 and 0.05±0.05 when compared to optical-based maps, respectively. Apart from accuracy improvements, SAR-O data integration reduced gaps due to cloud cover to nil, a significant advantage of combining active and passive sensors.
5. Discussion
5.1. Optimum CNN parameters
Optimum CNN parameters were proposed based on the five training tiles and applied to the test tiles (Fig. 1). The training and test tiles are geographically distributed and exhibit considerable differences in land cover distribution, FRP and soil moisture that might affect BA mapping accuracy (Table 1). Nevertheless, no significant variations were observed between the BA mapping accuracies achieved over the training and test tiles. This may be explained by the use of local CNN training, which
Fig. 7. Variation of mapping accuracy measured through the mean and standard error of Dice coefcient (DC) as a function of changes in softmax probability by land
cover classes of training tiles and input datasets (Sentinel-1 - S-1, Sentinel-2 - S-2 and both datasets - S-1 +S-2).
Table 5
Most suitable burned thresholds (Bt) of the softmax classification probability layer when mapping burned area (BA) and the mean Dice coefficient (±standard deviation) by land cover class (O-others, F-forests, S-shrubs, G-grasslands and C-crops) of the training tiles and input datasets (Sentinel-1 - S-1, Sentinel-2 - S-2 and both datasets - S-1 + S-2).

LC   Bt (S-1)   DC (S-1)    Bt (S-2)   DC (S-2)    Bt (S-1+S-2)   DC (S-1+S-2)
O    0.75       0.47±0.32   0.70       0.52±0.35   0.70           0.55±0.36
F    0.75       0.65±0.17   0.65       0.68±0.20   0.75           0.65±0.15
S    0.55       0.50±0.24   0.50       0.56±0.22   0.45           0.53±0.19
G    0.50       0.35±0.24   0.45       0.41±0.20   0.40           0.31±0.25
C    0.45       0.37±0.13   0.50       0.43±0.19   0.50           0.44±0.11
Fig. 8. Mean and standard error of Dice coefficient (DC) and commission and omission errors (CE and OE) by land cover class (O-others, F-forests, S-shrubs, G-grasslands and C-crops) of the training tiles when combining Sentinel-1 and Sentinel-2 data applying three different approaches: (a) stacking SAR and optical images to feed the CNN; (b) filling the information gaps of Sentinel-2 based maps with pixels derived from Sentinel-1; and (c) joining all burned pixels detected using SAR and optical images separately.
provided a representative set of optimum parameters.
Our results show that the optimum data normalisation was based on the z-score when using either radar or optical data as input. The only exception was forested areas mapped from Sentinel-2 imagery, which aligns with findings from previous research (Zhong et al., 2017). Conversely, when using a combined SAR-O dataset, the [0, 1] normalisation was better suited for mapping applications, as also observed in previous studies that combined imagery from these sensors (Benedetti et al., 2018a). The [0, 1] normalisation provided more accurate BA detections when stacking SAR and optical datasets, except for Grasslands (no difference with z-score normalisation) and Crops. For Grasslands, the insensitivity to the normalisation method may be related to the low BA mapping accuracies. For Crops, the intrinsic within-class vegetation differences given by the variability of agricultural fields as well as the growing season may explain the need for a different normalisation type.
The optimum feature extraction was achieved via the spectral
domain (1D) when the optical or the SAR-O combination was used.
Table 6
Error metrics for burned area (BA) maps based on Sentinel-1 (S-1), Sentinel-2 (S-2) and the optimum combination of both datasets (S-1 + S-2) for each test tile.

MGRS   C   Reference period        Sat        Detection period        DC    OE    CE    %Nd
10SEH  NA  04/10/2017–05/11/2017   S-1        28/09/2017–03/11/2017   0.46  0.69  0.13  0.00
                                   S-2        07/10/2017–01/11/2017   0.70  0.12  0.41  2.26
                                   S-1 + S-2  28/09/2017–03/11/2017   0.70  0.10  0.43  0.00
20LQP  SA  20/07/2016–22/09/2016   S-1        03/07/2016–25/09/2016   0.81  0.08  0.27  0.00
                                   S-2        17/07/2016–25/09/2016   0.71  0.20  0.37  0.00
                                   S-1 + S-2  03/07/2016–25/09/2016   0.73  0.04  0.41  0.00
29TNG  Eu  05/10/2017–06/11/2017   S-1        28/09/2017–09/11/2017   0.64  0.44  0.25  0.00
                                   S-2        05/10/2017–09/11/2017   0.75  0.27  0.22  0.06
                                   S-1 + S-2  28/09/2017–09/11/2017   0.77  0.23  0.22  0.00
33NTG  Af  15/01/2016–16/02/2016   S-1        15/01/2016–20/02/2016   0.50  0.53  0.47  0.00
                                   S-2        18/01/2016–17/02/2016   0.47  0.65  0.31  0.39
                                   S-1 + S-2  15/01/2016–20/02/2016   0.56  0.47  0.42  0.00
52LCH  Au  05/04/2017–21/04/2017   S-1        26/03/2017–19/04/2017   0.36  0.75  0.34  0.00
                                   S-2        19/03/2017–08/04/2017   0.55  0.59  0.15  17.6
                                   S-1 + S-2  26/03/2017–19/04/2017   0.56  0.55  0.24  0.00

C - continent for each tile (Af-Africa, Au-Australia, Eu-Europe, NA-North America and SA-South America); Reference period - period for which the reference burned perimeters were derived using Landsat-8; Sat - input dataset considered; Detection period - first and last Sentinel-1 or Sentinel-2 images of the temporal series; DC - Dice coefficient; OE - omission error; CE - commission error; and %Nd - percentage of no-data pixels over the whole MGRS tile.
Fig. 9. Burned area (BA) maps based on Sentinel-1 (S-1), Sentinel-2 (S-2) and the optimum combination of both datasets (S-1 +S-2) for the test tiles. Errors of
omission and commission, as well as no data pixels due to reference or input datasets are also shown.
Conversely, the spatial domain (2D) provided more accurate results when using SAR data alone. Such a difference may be because optical reflectances allow mapping BA better than radar backscatter coefficient data (Belenguer-Plomer et al., 2019c). Hence, considering only the spectral reflectances of those wavelengths highly sensitive to fire effects results in an accurate classification of BA. However, when only the backscatter coefficient is available, considering the surrounding pixels improves the differentiation between burned and unburned areas, which explains the improved performance of the spatial feature extraction.
The optimum softmax threshold for distinguishing between burned and unburned pixels differed as a function of land cover class. The most considerable enhancement when varying the threshold from 50% was observed for the Others class, which was mapped more accurately when considering SAR-O with a 60% probability threshold. The optimum thresholds also varied as a function of the input data (SAR, optical or SAR-O combination) over each land cover class. For Crops, Grasslands and Shrubs, the optimum thresholds were less restrictive (i.e., close to 50%), while for the Forests and Others classes, the optimum ones were more restrictive (i.e., around 70%). Except for Shrubs, a higher threshold (for BA detection) seemed appropriate for the land cover classes mapped with higher accuracy (i.e., Forests and Others). These thresholds have been defined considering a reduced number of study areas, so further research is needed to confirm them. Nevertheless, the broad range of terrestrial ecoregions, land cover classes, fire radiative power as well as soil moisture and precipitation patterns observed over the training sites (Table 1) suggests their utility over a wide array of conditions and their transferability to other areas. The higher mapping accuracy may be related to the biomass level of each land cover class, as it influences the magnitude of pre- to post-fire changes in both the backscatter coefficient and the optical reflectance. In addition, Fire Radiative Power (FRP) depends on fuel availability (i.e., biomass), which implies that in land cover classes with a reduced amount of biomass, the capability to detect hotspots from thermal sensors is lower when compared to land cover classes with a higher quantity of biomass (Wooster et al., 2005). CNN models are land cover dependent and trained using information derived from hotspots. Hence, a reduced number of hotspots for a specific land cover class (e.g., due to low FRP or to low biomass levels) resulted in suboptimal training and, as such, increased uncertainty when compared to land cover classes with higher fuel availability and, consequently, more hotspots, which explains the different optimum thresholds for each land cover class.
Lastly, in terms of computing time, mapping BA over a vegetation class with considerable intrinsic heterogeneity (i.e., the Others class) increased the computing duration. However, the most significant time increment was found when using additional hidden layers, which did not translate into mapping accuracy improvements. Although including more hidden layers does not deteriorate the mapping accuracy, the considerable increase in computing time may hinder algorithm deployment at continental to global scales, the final objective of this research (Chuvieco et al., 2019).
5.2. SAR and optical data integration for BA mapping
The input data (SAR, optical or their joint use) providing the highest accuracy differed with the land cover class. For the Others and Crops classes, the joint use of active and passive data provided the most accurate results. As these land cover classes are more heterogeneous, the mapping process takes advantage of the different sensitivities of the two types of sensors through the CNN training, allowing for a more precise separation between burned and unburned areas overall. Notice that over the test tiles, the joint use of both sensor types did not improve results for the Crops class, which suggests that further research is needed to ascertain the optimum combination of active and passive datasets. A possible explanation is a reduced variability among the types of crops within the test tiles. Such reduced variability was suggested by the lower VH backscatter coefficient variability (i.e., standard deviation), related to the vegetation volumetric scattering process (Freeman and Durden, 1998), over the Crops in the test tiles when compared to the training ones (0.10 vs 0.15). Increased homogeneity over agricultural fields, induced by different crop types and/or growing seasons, may reduce the need for SAR-derived information for monitoring purposes (Van Tricht et al., 2018). Nevertheless, comparing SAR-O and optical-based results over the test tiles suggests only marginal DC differences over cropping
Fig. 10. Mean and standard error of Dice coefcient (DC), commission and omission errors (CE and OE) by land cover classes of test tiles as a function of the input
datasets used (Sentinel-1 - S-1, Sentinel-2 - S-2 and the optimum combination of both datasets - S-1 +S-2).
areas and demonstrates the reliability of the CNN-based predictions,
even when some of the input data are redundant.
For the Forests and Shrubs classes, the combination of BA mapping products based on individual SAR or optical data sources allowed for more accurate detections; however, such improvements were marginal, especially for Shrubs, when compared to the remaining data-integration strategies. The improvement resulted from a considerable OE reduction when joining the independently generated maps. In particular, OE was reduced for pixels located at the border of fire patches, which are more susceptible to misclassification due to residual pixel co-registration errors between maps and validation datasets (Mandanici and Bitelli, 2016). Hence, combining maps obtained from sensors with different viewing geometries (i.e., SAR and optical) reduced the geolocation error effect without meaningfully increasing the CE. Lastly, over Grasslands, the use of Sentinel-2 data for BA mapping and Sentinel-1 for cloud-induced gap-filling provided the most accurate results. Such findings align with previous research, which suggested a reduced utility of the C-band backscatter coefficient for monitoring fire effects in grasslands (Menges et al., 2004).
5.3. Algorithm independent validation
The joint use of Sentinel-1 and Sentinel-2 data slightly improved, or at least maintained, the BA accuracy achieved using a single input dataset (i.e., SAR or optical) in most test tiles while providing wall-to-wall mapping capabilities (i.e., all pixels were mapped), a feature particularly crucial in tile 52LCH, where cloud-induced gaps amounted to 17.6% of the area. Further, the joint use of active and passive datasets allowed combining the strengths of SAR (i.e., cloud cover independence) and optical data (i.e., better sensitivity to fire-induced changes in vegetation), as also suggested in previous studies (Verhegghen et al., 2016). As an exception, for tile 20LQP, the highest accuracy was obtained using the SAR data (DC 0.81). The OE increased by 0.2 when using Sentinel-2 data as an input and by 0.04 when jointly using the active and passive datasets. However, for the latter, the CE significantly increased when joining all burned pixels detected separately from SAR and optical datasets, due to the large commission errors of the Sentinel-2 based maps. The discrepant results in tile 20LQP were explained by fire location, as 83% of the fire patches burned forested areas and did not reflect the general trends, as discussed in Section 5.4. Overall, using SAR and optical data for BA mapping requires more computing power or increased processing time. However, such an effort may be worthwhile if end-users are provided with the most accurate BA products without information gaps, which is particularly beneficial at inter-tropical latitudes.
By land cover class, the higher mapping accuracies were observed for Forests, Shrubs and Grasslands, with DC values of 0.72, 0.65 and 0.57, respectively. A lower DC value (0.46) was observed for the Others class, whereas a rather low mapping accuracy was observed for Crops (DC 0.27) regardless of the input datasets. However, one should notice that most accuracy metrics were based on reference fire perimeters over short periods (i.e., one month or less), which may significantly affect the accuracy assessment. According to previous research, evaluating BA maps over short periods tends to underestimate mapping accuracy regardless of the input datasets (Padilla et al., 2018). Such effects were also found when assessing Sentinel-2 based BA maps, with DC values increasing from 0.34 to 0.77 from short to long temporal periods (Roteta et al., 2019).
In this study, most of the evaluated periods were short. However, two clearly defined groups of tiles were observed when analysing the BA mapping accuracy from Sentinel-2 data. For the first group, formed by tiles 10SEH, 20LQP and 29TNG, the fire activity was concentrated around dates timely covered by both the reference period (as set by Landsat-8 acquisition dates) and the detection period (set by the Sentinel-2 acquisition dates). Over these tiles, the DC values were similar (DC >0.7) and in line with those observed in previous studies (Roteta et al., 2019). For the second group, tiles 33NTG and 52LCH, many fires were active during dates not simultaneously covered by Landsat-8 and Sentinel-2 acquisitions. In fact, 8.8% (33NTG) and 39.4% (52LCH) of hotspots were recorded within the interval covered by the Landsat-8 imagery (16-day revisit period) but outside the interval covered by the Sentinel-2 imagery (5-day revisit period). Such a mismatch may explain the increased OE (0.65 and 0.59, respectively) and thus the lower accuracy, as the average DC was lower (by 0.21) when compared to the remaining tiles (DC 0.51 vs 0.72).
The accuracy observed for the Sentinel-1 based BA maps was similar to that observed in previous studies based on the same sensor (Belenguer-Plomer et al., 2019c). For the test tiles, the CNN-based maps achieved an average DC of 0.55±0.17, while the Reed-Xiaoli detector-based approach proposed by Belenguer-Plomer et al. (2019c) achieved 0.57±0.18. Although only marginal differences in terms of accuracy were found between the two approaches, the CNN-based algorithm was considerably faster (Belenguer-Plomer et al., 2019c). Regarding the combination of active/passive derived data, the reduced number of studies that took advantage of such a fusion when mapping BA precluded meaningful comparisons, as such studies were carried out over homogeneous areas with little variation in vegetation types and fire regimes (Verhegghen et al., 2016; Brown et al., 2018; Stroppiana et al.).
5.4. Main sources of error
BA mapping commission and omission errors depended, to a large degree, on the input data source. SAR and optical datasets were affected differently by factors including variations in soil moisture, slope orientation and post-fire vegetation response (Kurum, 2015; Belenguer-Plomer et al., 2019a). For tile 10SEH (North America), the main limiting factor when using SAR data was the steep topography, since fire patches were located on steeper slopes (13.46±7.7°) when compared to the remaining test tiles (7.15±6°). The steep topography may reduce the backscatter suitability for monitoring fires, which translates into increased OE (0.69) (Belenguer-Plomer et al., 2019c). Conversely, considerable CE (0.41) was observed for the optical-based maps, as during the automatic training low fire-severity pixels (i.e., with reduced pre- to post-fire variations) were included due to their distance to hotspots. However, the reference perimeters only included visibly burned pixels, since their generation was based on a manually supervised classification. The mean dNBR, a reliable indicator of fire severity (Key and Benson, 2004), in pixels affected by CE was 0.15±0.16, a value considerably higher than that of unburned pixels (0.01±0.7) and, at the same time, far from the values observed for the accurately mapped burned pixels (0.46±0.26). Hence, it is thought that had the reference perimeters included partially burned pixels as burned, the CE would have been lower.
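dNBR follows the standard pre- minus post-fire formulation of the Normalised Burn Ratio; a minimal sketch (the reflectance values and the NIR/SWIR band pairing, e.g. Sentinel-2 B8A/B12, are illustrative assumptions):

```python
def nbr(nir, swir):
    """Normalised Burn Ratio from NIR and SWIR reflectances."""
    return (nir - swir) / (nir + swir)

# hypothetical reflectances for one burned pixel
pre_nbr = nbr(0.45, 0.20)    # healthy vegetation: high NIR, low SWIR
post_nbr = nbr(0.15, 0.30)   # burned: NIR drops, SWIR rises
dnbr = pre_nbr - post_nbr    # larger dNBR indicates higher severity
```

On this toy pixel dNBR is about 0.72, in the range the text reports for accurately mapped burned pixels.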
Fire severity was also the main limiting factor in tiles 33NTG (Africa) and 52LCH (Australia). According to the MIRBI spectral index (Eq. (4)), found to be the most suitable index for assessing fire severity over grasslands (Lu et al., 2016), low fire severity was observed for pixels affected by OE (1.67±0.38 and 1.62±0.21, respectively). In contrast, moderate severities were noticed for accurately detected burned pixels (1.8±0.32 and 1.76±0.12, respectively). Although marginal differences were found when comparing accuracies from SAR-O and optical-based maps (DC 0.56 vs 0.55, respectively), when evaluating the accuracy of the latter, pixels covered by clouds (17.6%) were not included even though some of them were affected by fires. In fact, if these cloud-covered pixels are ignored when assessing the SAR-O BA map in tile 52LCH, the accuracy improves by up to 12.5% (DC 0.63). Furthermore, as indicated in Section 5.3, mismatched reference and detection periods may have increased the observed errors (particularly OE) in tiles 33NTG and 52LCH.
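For reference, MIRBI is commonly computed as a linear combination of the two SWIR reflectances (Trigg and Flasse, 2001); the coefficients below follow that reference rather than Eq. (4), which falls outside this excerpt:

```python
def mirbi(swir_short, swir_long):
    """Mid-Infrared Burn Index: higher values indicate burned surfaces.
    Coefficients as in Trigg and Flasse (2001)."""
    return 10.0 * swir_long - 9.8 * swir_short + 2.0

# hypothetical reflectances: burning raises the long-SWIR band
low_severity = mirbi(0.20, 0.15)   # ≈ 1.54, i.e. low fire severity
```

Values near 1.5–1.7 would correspond to the low-severity OE pixels discussed above, and values near 1.8 to the accurately detected ones.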
Hotspot availability may have also affected the observed mapping accuracy. For example, in tile 29TNG (Portugal), most areas affected by omission errors were located within a single fire scar with only one
hotspot detected by the thermal MODIS and VIIRS sensors. The reduced number of hotspots hindered the CNN training for SAR, optical and combined datasets. However, the absence of hotspots was an exception, since such limitations were observed neither within the remaining fire patches of the same area nor in the rest of the tiles.
Regarding the high CE observed in tile 20LQP (South America), particularly for the optical-based map (0.37), it was related to a similar post-fire increment in SWIR reflectance over both burned (+0.046) and unburned (+0.05) areas. The SWIR increment over unburned areas may be related to the drying of unburned vegetation during the post-fire period (Gao, 1996). Most pixels (77%) affected by CE were spatially concentrated along the largest fire perimeter, a fire that accounted for 93.3% of all burned pixels in this tile. According to the MODIS-based hotspot product (Giglio et al., 2016), FRP values up to 339.9 MW were observed for this fire, a 15-fold increase when compared to the values registered over the remaining fire patches (20.3 MW), which suggests that heat radiating from the very intense fire affected vegetation in the neighbouring areas. As CNN training was based on larger areas around hotspots, unburned fire-dried pixels were mixed within the burned training samples, which resulted in an incorrect learning process. Such errors may be easily rectified by relating the sampling areas around hotspots to the FRP (i.e., sampling burned training pixels within a smaller radius around the hotspots of intense fires). Soil moisture variations may
affect the BA mapping accuracy when considering SAR data (Imperatore
et al., 2017; Gimeno and San-Miguel-Ayanz, 2004; Ruecker and Siegert,
2000). However, in this study such an effect has not been observed as the
recorded variations of soil moisture between pre- and post-re images
occurred in the entire scene (i.e., a background change). When soil
moisture changes are concentrated in smaller regions, as a result of a
focused rainfall, misclassication may occur and translate into increased
CE (Belenguer-Plomer et al., 2019c). However, despite the reliability of
the SMAP product (Chan et al., 2018; Chen et al., 2018), its coarse
spatial resolution (i.e., 9 km) does not allow monitoring spatially
concentrated changes. Thus, soil moisture effects on SAR-based BA
mapping may have been underestimated. Further analysis considering a
more spatially detailed product of soil moisture is needed. However, to
date, the most spatially detailed soil moisture product, the Copernicus
Surface Soil Moisture (SSM) at 1 km based on Sentinel-1 data, is only
available over Europe (Bauer-Marschallinger et al., 2018) precluding a
more in-depth analysis over most of our study sites.
5.5. Further research and improvements
This research has advanced the current state-of-the-art in BA map-
ping using both radar and optical sensors of medium spatial resolution.
The unprecedented scenario created by (i) the free distribution of
Sentinel-1 and -2 data under the European Copernicus programme and
(ii) the recent advances in deep learning algorithms (e.g., CNN) has
allowed investigating novel BA detection and mapping techniques such
as the one proposed here. The presented algorithm has the potential to
reduce uncertainties in current BA products, estimated at 4 to 4.5
million km² globally (Giglio et al., 2018; Lizundia-Loiola et al., 2020).
However, in order to confirm the global relevance of these findings,
further research is needed to include additional study sites over all the
fire-prone biomes. To this end, the recently published Burned Area
Reference Database (BARD), based on 2769 images acquired by the
Landsat-7 and -8 and Sentinel-2 satellites (Franquesa et al., 2020),
would be hugely beneficial for validating the proposed algorithm.
As soil moisture changes the relative importance of the C-band VV and
VH polarisations when distinguishing between burned and unburned
areas (Van Zyl et al., 2011; Belenguer-Plomer et al., 2019a, 2019b), BA
mapping based on Sentinel-1 datasets should take into account more
reliable information on soil moisture as ancillary global products
become available at higher spatial resolutions. Current global products
(i.e., SMAP at 9 km or CCI soil moisture at 0.25°) are not accurate
enough for such purposes. In particular, future iterations may assign
differentiated weights to the VV and VH polarisations based on soil
moisture information, as the importance of VV for BA mapping increases
with soil moisture (Belenguer-Plomer et al., 2019a). Further improvements
may be achieved by stratifying the training pixels based on the fire
radiative power (related to fire intensity). Such an approach may reduce
the increased uncertainties observed over areas affected by low fire
intensities (i.e., low FRP), which result in reduced fire severity, an
important factor affecting BA accuracy (Tanase et al., 2014; Belenguer-
Plomer et al., 2019c). Such a stratification may improve CNN training
and thus reduce CE and OE for the approximately 15% of burned pixels
with no recorded hotspots in the close vicinity. Finally, BA mapping
within the proposed framework may greatly benefit from the concurrent
use of different SAR wavelengths, such as L-band (from the future NISAR
mission, launch planned in 2021) and P-band (from the future Biomass
mission, launch planned in 2022). Adding longer wavelengths may allow
discriminating surface fires in forested areas, which are difficult to detect
with optical data and shorter SAR wavelengths such as C-band.
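The FRP-based stratification of training samples suggested above could take the form of a sampling radius around each hotspot that shrinks as fire radiative power grows, so that very intense fires contribute burned training pixels only from a tighter core. The sketch below is purely illustrative; the radii (r_min, r_max) and the reference FRP are hypothetical tuning constants, not values from the paper:

```python
def sampling_radius(frp_mw, r_min=250.0, r_max=1000.0, frp_ref=20.0):
    """Illustrative rule: reduce the training-sampling radius (metres)
    around a hotspot as its fire radiative power (FRP, in MW) increases,
    since heat radiated by very intense fires may dry neighbouring
    unburned vegetation and contaminate the burned training samples.
    All three constants are hypothetical and would need calibration."""
    scale = frp_ref / max(frp_mw, frp_ref)  # 1 at or below the reference FRP
    return r_min + (r_max - r_min) * scale
```

Under this rule, a 339.9 MW hotspot (as observed in tile 20LQP) would be sampled within a much smaller radius than the ~20 MW hotspots of the remaining fire patches.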
6. Conclusions
This study provides insights into the optimum configuration, by land
cover class, of CNN algorithms fed by Sentinel-1 and/or Sentinel-2
datasets when detecting and mapping burned area. The analysis was
carried out over 10 study areas (1 M ha each) distributed within a broad
range of terrestrial ecoregions, with diverse land cover classes, affected
by different fire intensities and environmental conditions (i.e., soil
moisture and precipitation patterns). CNN models with two hidden
layers reduced the computing time with virtually no loss in mapping
accuracy when compared to deeper networks, regardless of the input
data (i.e., Sentinel-1, Sentinel-2 or both) or the observed land cover
class. Three factors were relevant when defining an optimum CNN
configuration: (i) the dimension in which the convolution-based feature
extraction was executed (i.e., spectral or spatial), (ii) the data
normalisation method (z-score or interval [0, 1]), and (iii) the
optimum threshold of the softmax output layer. In addition, the land
cover class was relevant when defining the most accurate SAR-O data
integration strategy.
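The two normalisation methods compared here, and the thresholding of the softmax output, can be sketched as follows. This is a minimal NumPy illustration of the concepts, not the authors' code, and the 0.5 threshold is only a placeholder for the per-configuration optimum reported in the paper:

```python
import numpy as np

def z_score(x):
    """Standardise a band to zero mean and unit variance."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

def min_max(x):
    """Rescale a band to the [0, 1] interval."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def is_burned(softmax_burned_prob, threshold=0.5):
    """Label a pixel burned when the softmax 'burned' probability
    exceeds a tunable threshold (0.5 is a placeholder value)."""
    return softmax_burned_prob > threshold
```

Which normalisation performs better depends, as shown in the paper, on the land cover class and on the data type (optical or radar).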
The optimum CNN parameters were used to map BA over five inde-
pendent test areas, not used for algorithm optimisation, with accuracies
similar to those achieved over the training tiles. This consistent
behaviour, despite using geographically distributed sites, was possible
due to a local model training approach supported by thermal anomalies.
Error analysis over the test tiles suggested a strong relationship between
mapping accuracy and the land cover classes, as observed in previous
studies. The highest and lowest accuracies were found over Forests and
Grasslands, respectively. When individual datasets were fed into the
CNN (i.e., Sentinel-1 or Sentinel-2), the observed mapping accuracies
were similar to those found in the literature. However, the proposed
CNN approach was considerably more versatile than the existing BA
mapping algorithms. In addition, this study provided insights into the
optimum SAR-O data integration, which allows (i) improving BA
mapping accuracy when compared to using a single sensor type and
(ii) wall-to-wall mapping, as the cloud-related gaps affecting BA
products from optical datasets were eliminated. Despite these strengths,
CNN-based BA mapping accuracy was limited by different sources of
error, including steep topography, low FRP, absence of hotspots and
presence of fire-unrelated land changes. Future research should consider
more study areas from representative fire-prone biomes to confirm the
relevance of these findings.
Declaration of Competing Interest
The authors declare no conflict of interest.
Acknowledgements
This research has been financed by (i) the Spanish Ministry of
Universities through a Formación de Profesorado Universitario (FPU)
doctoral fellowship (FPU16/01645) and its associated mobility grant
(EST18/00497), and (ii) the European Space Agency (ESA) through the
Fire_cci (Climate Change Initiative) project (Contract 4000126706/19/

References
Agostinelli, F., Hoffman, M., Sadowski, P., Baldi, P., 2014. Learning Activation Functions
to Improve Deep Neural Networks. arXiv preprint arXiv:1412.6830.
Anantrasirichai, N., Biggs, J., Albino, F., Bull, D., 2019. A deep learning approach to
detecting volcano deformation from satellite imagery using synthetic datasets.
Remote Sens. Environ. 230, 111179.
Aponte, C., de Groot, W.J., Wotton, B.M., 2016. Forest fires and climate change: causes,
consequences and management options. Int. J. Wildland Fire 25, iii.
Ban, Y., Zhang, P., Nascetti, A., Bevington, A.R., Wulder, M.A., 2020. Near real-time
wildfire progression monitoring with Sentinel-1 SAR time series and deep learning.
Sci. Rep. 10, 1–15.
Bashiri, M., Geranmayeh, A.F., 2011. Tuning the parameters of an artificial neural
network using central composite design and genetic algorithm. Sci. Iran. 18,
Bauer-Marschallinger, B., Freeman, V., Cao, S., Paulik, C., Schauer, S., Stachl, T.,
Modanesi, S., Massari, C., Ciabatta, L., Brocca, L., et al., 2018. Toward global soil
moisture monitoring with Sentinel-1: harnessing assets and overcoming obstacles.
IEEE Trans. Geosci. Remote Sens. 57, 520–539.
Belenguer-Plomer, M.A., Tanase, M.A., Fernandez-Carrillo, A., Chuvieco, E., 2018.
Insights into burned areas detection from Sentinel-1 data and locally adaptive
algorithms. In: Active and Passive Microwave Remote Sensing for Environmental
Monitoring II, vol. 10788. International Society for Optics and Photonics,
p. 107880G.
Belenguer-Plomer, M.A., Chuvieco, E., Tanase, M.A., 2019a. Evaluation of backscatter
coefficient temporal indices for burned area mapping. In: Active and Passive
Microwave Remote Sensing for Environmental Monitoring III, vol. 11154.
International Society for Optics and Photonics, p. 111540D.
Belenguer-Plomer, M.A., Chuvieco, E., Tanase, M.A., 2019b. Temporal decorrelation of
C-band backscatter coefficient in Mediterranean burned areas. Remote Sens. 11,
Belenguer-Plomer, M.A., Tanase, M.A., Fernandez-Carrillo, A., Chuvieco, E., 2019c.
Burned area detection and mapping using Sentinel-1 backscatter coefficient and
thermal anomalies. Remote Sens. Environ. 233, 111345.
Belenguer-Plomer, M.A., Chuvieco, E., Tanase, M.A., 2020. Optimum Sentinel-1 pixel
spacing for burned area mapping. In: IGARSS 2020-2020 IEEE International
Geoscience and Remote Sensing Symposium. IEEE, pp. 4858–4861.
Benedetti, A., Picchiani, M., Del Frate, F., 2018a. Sentinel-1 and Sentinel-2 data fusion for
urban change detection. In: IGARSS 2018-2018 IEEE International Geoscience and
Remote Sensing Symposium. IEEE, pp. 1962–1965.
Benedetti, P., Ienco, D., Gaetano, R., Ose, K., Pensa, R.G., Dupuy, S., 2018b. M3 fusion: a
deep learning architecture for multiscale multimodal multitemporal satellite data
fusion. IEEE J. Select. Top. Appl. Earth Observ. Remote Sens. 11, 4939–4949.
Bojinski, S., Verstraete, M., Peterson, T.C., Richter, C., Simmons, A., Zemp, M., 2014. The
concept of essential climate variables in support of climate research, applications,
and policy. Bull. Am. Meteorol. Soc. 95, 1431–1443.
Bourgeau-Chavez, L., Kasischke, E., Brunzell, S., Mudd, J., Tukman, M., 2002. Mapping
fire scars in global boreal forests using imaging radar data. Int. J. Remote Sens. 23,
Bouvet, A., Mermoz, S., Ballère, M., Koleck, T., Le Toan, T., 2018. Use of the SAR
shadowing effect for deforestation detection with Sentinel-1 time series. Remote
Sens. 10, 1250.
Bowman, D.M., Balch, J.K., Artaxo, P., Bond, W.J., Carlson, J.M., Cochrane, M.A.,
D'Antonio, C.M., DeFries, R.S., Doyle, J.C., Harrison, S.P., et al., 2009. Fire in the
earth system. Science 324, 481–484.
Bowman, D., Williamson, G., Yebra, M., Lizundia-Loiola, J., Pettinari, M.L., Shah, S.,
Bradstock, R., Chuvieco, E., 2020. Wildfires: Australia Needs National Monitoring
Brown, A.R., Petropoulos, G.P., Ferentinos, K.P., 2018. Appraisal of the Sentinel-1 & 2
use in a large-scale wildfire assessment: a case study from Portugal's fires of 2017.
Appl. Geogr. 100, 78–89.
Chan, S., Bindlish, R., O'Neill, P., Jackson, T., Njoku, E., Dunbar, S., Chaubell, J.,
Piepmeier, J., Yueh, S., Entekhabi, D., et al., 2018. Development and assessment of
the SMAP enhanced passive soil moisture product. Remote Sens. Environ. 204,
Chen, Q., Zeng, J., Cui, C., Li, Z., Chen, K.-S., Bai, X., Xu, J., 2018. Soil moisture retrieval
from SMAP: a validation and error analysis study using ground-based observations
over the Little Washita watershed. IEEE Trans. Geosci. Remote Sens. 56, 1394–1408.
Chuvieco, E., Aguado, I., Yebra, M., Nieto, H., Salas, J., Martín, M.P., Vilar, L.,
Martínez, J., Martín, S., Ibarra, P., et al., 2010. Development of a framework for fire
risk assessment using remote sensing and geographic information system
technologies. Ecol. Model. 221, 46–58.
Chuvieco, E., Lizundia-Loiola, J., Pettinari, M.L., Ramo, R., Padilla, M., Mouillot, F.,
Laurent, P., Storm, T., Heil, A., Plummer, S., 2018. Generation and analysis of a new
global burned area product based on MODIS 250 m reflectance bands and thermal
anomalies. Earth Syst. Sci. Data Discuss. 512, 1–24.
Chuvieco, E., Mouillot, F., van der Werf, G.R., San Miguel, J., Tanasse, M., Koutsias, N.,
García, M., Yebra, M., Padilla, M., Gitas, I., et al., 2019. Historical background and
current developments for mapping burned area from satellite earth observation.
Remote Sens. Environ. 225, 45–64.
Di Gregorio, A., 2005. Land Cover Classification System: Classification Concepts and User
Manual. United Nations Food and Agriculture Organization.
Fernandez-Carrillo, A., Belenguer-Plomer, M., Chuvieco, E., Tanase, M., 2018. Effects of
sample size on burned areas accuracy estimates in the Amazon Basin. In: Earth
Resources and Environmental Remote Sensing/GIS Applications IX, vol. 10790.
International Society for Optics and Photonics, p. 107901S.
Flannigan, M.D., Amiro, B.D., Logan, K.A., Stocks, B., Wotton, B., 2006. Forest fires and
climate change in the 21st century. Mitig. Adapt. Strateg. Glob. Chang. 11, 847–859.
Flannigan, M.D., Krawchuk, M.A., de Groot, W.J., Wotton, B.M., Gowman, L.M., 2009.
Implications of changing climate for global wildland fire. Int. J. Wildland Fire 18,
Franquesa, M., Vanderhoof, M.K., Libonati, R., Rodrigues, J.A., Setzer, A.W.,
Stavrakoudis, D., Gitas, I.Z., Roteta, E., Padilla, M., Chuvieco, E., 2020. Development
of a standard database of reference sites for validating global burned area products.
Earth Syst. Sci. Data Discuss. 1–20.
Fraser, R., Li, Z., Cihlar, J., 2000. Hotspot and NDVI differencing synergy (HANDS): a
new technique for burned area mapping over boreal forest. Remote Sens. Environ.
74, 362–376.
Freeman, A., Durden, S.L., 1998. A three-component scattering model for polarimetric
SAR data. IEEE Trans. Geosci. Remote Sens. 36, 963–973.
French, N.H., Bourgeau-Chavez, L.L., Wang, Y., Kasischke, E.S., 1999. Initial
observations of Radarsat imagery at fire-disturbed sites in interior Alaska. Remote
Sens. Environ. 68, 89–94.
Gao, B.-C., 1996. NDWI – a normalized difference water index for remote sensing of
vegetation liquid water from space. Remote Sens. Environ. 58, 257–266.
García, M.L., Caselles, V., 1991. Mapping burns and natural reforestation using Thematic
Mapper data. Geocarto Int. 6, 31–37.
Giglio, L., Loboda, T., Roy, D.P., Quayle, B., Justice, C.O., 2009. An active-fire based
burned area mapping algorithm for the MODIS sensor. Remote Sens. Environ. 113,
Giglio, L., Schroeder, W., Justice, C.O., 2016. The collection 6 MODIS active fire
detection algorithm and fire products. Remote Sens. Environ. 178, 31–41.
Giglio, L., Boschetti, L., Roy, D.P., Humber, M.L., Justice, C.O., 2018. The collection 6
MODIS burned area mapping algorithm and product. Remote Sens. Environ. 217,
Gimeno, M., San-Miguel-Ayanz, J., 2004. Evaluation of RADARSAT-1 data for
identification of burnt areas in southern Europe. Remote Sens. Environ. 92, 370–375.
Hansen, M.C., Potapov, P.V., Moore, R., Hancher, M., Turubanova, S., Tyukavina, A.,
Thau, D., Stehman, S., Goetz, S., Loveland, T., et al., 2013. High-resolution global
maps of 21st-century forest cover change. Science 342, 850–853.
Hoffmann, W.A., Schroeder, W., Jackson, R.B., 2002. Positive feedbacks of fire, climate,
and vegetation and the conversion of tropical savanna. Geophys. Res. Lett. 29, 91.
Hollmann, R., Merchant, C.J., Saunders, R., Downy, C., Buchwitz, M., Cazenave, A.,
Chuvieco, E., Defourny, P., de Leeuw, G., Forsberg, R., et al., 2013. The ESA climate
change initiative: satellite data records for essential climate variables. Bull. Am.
Meteorol. Soc. 94, 1541–1552.
Hu, F., Xia, G.-S., Hu, J., Zhang, L., 2015. Transferring deep convolutional neural
networks for the scene classification of high-resolution remote sensing imagery.
Remote Sens. 7, 14680–14707.
Huang, S., Siegert, F., 2006. Backscatter change on fire scars in Siberian boreal forests in
ENVISAT ASAR wide-swath images. IEEE Geosci. Remote Sens. Lett. 3, 154–158.
Imperatore, P., Azar, R., Calò, F., Stroppiana, D., Brivio, P.A., Lanari, R., Pepe, A., 2017.
Effect of the vegetation fire on backscattering: an investigation based on Sentinel-1
observations. IEEE J. Select. Top. Appl. Earth Observ. Remote Sens. 10, 4478–4492.
Inglada, J., Christophe, E., 2009. The Orfeo Toolbox remote sensing image processing
software. In: Geoscience and Remote Sensing Symposium, 2009 IEEE International,
IGARSS 2009, vol. 4. IEEE, pp. IV-733.
Jin, Y., Roy, D.P., 2005. Fire-induced albedo change and its radiative forcing at the
surface in northern Australia. Geophys. Res. Lett. 32.
Kasischke, E.S., Bourgeau-Chavez, L.L., French, N.H., 1994. Observations of variations in
ERS-1 SAR image intensity associated with forest fires in Alaska. IEEE Trans. Geosci.
Remote Sens. 32, 206–210.
Kellenberger, B., Marcos, D., Tuia, D., 2018. Detecting mammals in UAV images: best
practices to address a substantially imbalanced dataset with deep learning. Remote
Sens. Environ. 216, 139–153.
Key, C., Benson, N., 2004. Ground measure of severity, the composite burn index; and
remote sensing of severity, the normalized burn ratio. In: FIREMON: Fire Effects
Monitoring and Inventory System, Gen. Tech. Rep. RMRS-GTR-164, chapter
Landscape Assessment (LA): Sampling and Analysis Methods. USDA Forest Service,
Rocky Mountain Research Station, Ogden, pp. 1–51.
Kloster, S., Mahowald, N., Randerson, J., Lawrence, P., 2012. The impacts of climate,
land use, and demography on fires during the 21st century simulated by CLM-CN.
Biogeosciences 9, 509–525.
Knorr, W., Jiang, L., Arneth, A., 2016. Climate, CO2 and human population impacts on
global wildfire emissions. Biogeosciences 13, 267–282.
Krawchuk, M.A., Moritz, M.A., Parisien, M.-A., Van Dorn, J., Hayhoe, K., 2009. Global
pyrogeography: the current and future distribution of wildfire. PLoS One 4, e5102.
Krizhevsky, A., Sutskever, I., Hinton, G.E., 2012. ImageNet classification with deep
convolutional neural networks. In: Advances in Neural Information Processing
Systems, pp. 1097–1105.
Kurum, M., 2015. C-band SAR backscatter evaluation of 2008 Gallipoli forest fire. IEEE
Geosci. Remote Sens. Lett. 12, 1091–1095.
Kussul, N., Lavreniuk, M., Skakun, S., Shelestov, A., 2017. Deep learning classification of
land cover and crop types using remote sensing data. IEEE Geosci. Remote Sens. Lett.
14, 778–782.
Langenfelds, R., Francey, R., Pak, B., Steele, L., Lloyd, J., Trudinger, C., Allison, C., 2002.
Interannual growth rate variations of atmospheric CO2 and its δ13C, H2, CH4, and
CO between 1992 and 1999 linked to biomass burning. Glob. Biogeochem. Cycles
16, 1048.
Langner, A., Miettinen, J., Siegert, F., 2007. Land cover change 2002–2005 in Borneo
and the role of fire derived from MODIS imagery. Glob. Chang. Biol. 13, 2329–2340.
Lavorel, S., Flannigan, M.D., Lambin, E.F., Scholes, M.C., 2007. Vulnerability of land
systems to fire: interactions among humans, climate, the atmosphere, and
ecosystems. Mitig. Adapt. Strateg. Glob. Chang. 12, 33–53.
LeCun, Y., Bengio, Y., Hinton, G., 2015. Deep learning. Nature 521, 436.
Liu, Z., Ballantyne, A.P., Cooper, L.A., 2019. Biophysical feedback of global forest fires
on surface temperature. Nat. Commun. 10, 1–9.
Lizundia-Loiola, J., Otón, G., Ramo, R., Chuvieco, E., 2020. A spatio-temporal active-fire
clustering approach for global burned area mapping at 250 m from MODIS data.
Remote Sens. Environ. 236, 111493.
Loboda, T., O'Neal, K., Csiszar, I., 2007. Regionally adaptable dNBR-based algorithm for
burned area mapping from MODIS data. Remote Sens. Environ. 109, 429–442.
Lu, B., He, Y., Tong, A., 2016. Evaluation of spectral indices for estimating burn severity
in semiarid grasslands. Int. J. Wildland Fire 25, 147–157.
Ma, L., Liu, Y., Zhang, X., Ye, Y., Yin, G., Johnson, B.A., 2019. Deep learning in remote
sensing applications: a meta-analysis and review. ISPRS J. Photogramm. Remote
Sens. 152, 166–177.
Maggiori, E., Tarabalka, Y., Charpiat, G., Alliez, P., 2016. Convolutional neural networks
for large-scale remote-sensing image classification. IEEE Trans. Geosci. Remote Sens.
55, 645–657.
Mandanici, E., Bitelli, G., 2016. Preliminary comparison of Sentinel-2 and Landsat 8
imagery for a combined use. Remote Sens. 8, 1014.
Melchiorre, A., Boschetti, L., 2018. Global analysis of burned area persistence time with
MODIS data. Remote Sens. 10, 750.
Menges, C., Bartolo, R., Bell, D., Hill, G.E., 2004. The effect of savanna fires on SAR
backscatter in northern Australia. Int. J. Remote Sens. 25, 4857–4871.
Mouillot, F., Schultz, M.G., Yue, C., Cadule, P., Tansey, K., Ciais, P., Chuvieco, E., 2014.
Ten years of global burned area products from spaceborne remote sensing – a
review: analysis of user needs and recommendations for future developments. Int. J.
Appl. Earth Obs. Geoinf. 26, 64–79.
Nair, V., Hinton, G.E., 2010. Rectified linear units improve restricted Boltzmann
machines. In: Proceedings of the 27th International Conference on Machine Learning
(ICML-10), pp. 807–814.
Olson, D.M., Dinerstein, E., Wikramanayake, E.D., Burgess, N.D., Powell, G.V.,
Underwood, E.C., D'Amico, J.A., Itoua, I., Strand, H.E., Morrison, J.C., et al., 2001.
Terrestrial ecoregions of the world: a new map of life on earth: a new global map of
terrestrial ecoregions provides an innovative tool for conserving biodiversity.
BioScience 51, 933–938.
Ottinger, M., Clauss, K., Kuenzer, C., 2017. Large-scale assessment of coastal aquaculture
ponds with Sentinel-1 time series data. Remote Sens. 9, 440.
Padilla, M., Stehman, S.V., Chuvieco, E., 2014. Validation of the 2008 MODIS-MCD45
global burned area product using stratified random sampling. Remote Sens. Environ.
144, 187–196.
Padilla, M., Stehman, S.V., Ramo, R., Corti, D., Hantson, S., Oliva, P., Alonso-Canas, I.,
Bradley, A.V., Tansey, K., Mota, B., et al., 2015. Comparing the accuracies of remote
sensing global burned area products using stratified random sampling and
estimation. Remote Sens. Environ. 160, 114–121.
Padilla, M., Olofsson, P., Stehman, S.V., Tansey, K., Chuvieco, E., 2017. Stratification and
sample allocation for reference burned area data. Remote Sens. Environ. 203,
Padilla, M., Wheeler, J., Tansey, K., 2018. D4.1.1 Product Validation Report (PVR). In:
ESA CCI ECV Fire Disturbance. ESA Climate Change Initiative – Fire_cci.
Pausas, J.G., Paula, S., 2012. Fuel shapes the fire–climate relationship: evidence from
Mediterranean ecosystems. Glob. Ecol. Biogeogr. 21, 1074–1082.
Pinto, M.M., Libonati, R., Trigo, R.M., Trigo, I.F., DaCamara, C.C., 2020. A deep learning
approach for mapping and dating burned areas using temporal sequences of satellite
images. ISPRS J. Photogramm. Remote Sens. 160, 260–274.
Plummer, S., Lecomte, P., Doherty, M., 2017. The ESA climate change initiative (CCI): a
European contribution to the generation of the global climate observing system.
Remote Sens. Environ. 203, 2–8.
Poulter, B., Cadule, P., Cheiney, A., Ciais, P., Hodson, E., Peylin, P., Plummer, S.,
Spessa, A., Saatchi, S., Yue, C., et al., 2015. Sensitivity of global terrestrial carbon
cycle dynamics to variability in satellite-observed burned area. Glob. Biogeochem.
Cycles 29, 207–222.
Quegan, S., Le Toan, T., Yu, J.J., Ribbes, F., Floury, N., 2000. Multitemporal ERS SAR
analysis applied to forest mapping. IEEE Trans. Geosci. Remote Sens. 38, 741–753.
Ramo, R., Roteta, E., Bistinas, I., Van Wees, D., Bastarrika, A., Chuvieco, E., Van der
Werf, G.R., 2021. African burned area and fire carbon emissions are strongly
impacted by small fires undetected by coarse resolution satellite data. Proc. Natl.
Acad. Sci. 118.
Roteta, E., Bastarrika, A., Padilla, M., Storm, T., Chuvieco, E., 2019. Development of a
Sentinel-2 burned area algorithm: generation of a small fire database for sub-
Saharan Africa. Remote Sens. Environ. 222, 1–17.
Rouse Jr., J., Haas, R., Schell, J., Deering, D., 1974. Monitoring vegetation systems in the
Great Plains with ERTS. In: NASA Goddard Space Flight Center 3d ERTS-1 Symp.,
vol. 1. NASA, pp. 309–317.
Roy, D.P., Boschetti, L., Justice, C.O., Ju, J., 2008. The collection 5 MODIS burned area
product – global evaluation by comparison with the MODIS active fire product.
Remote Sens. Environ. 112, 3690–3707.
Ruecker, G., Siegert, F., 2000. Burn scar mapping and fire damage assessment using ERS-2
SAR images in East Kalimantan, Indonesia. Int. Arch. Photogr. Remote Sens. 33,
Saha, S., Bovolo, F., Bruzzone, L., 2019. Unsupervised deep change vector analysis for
multiple-change detection in VHR images. IEEE Trans. Geosci. Remote Sens. 57,
Scarpa, G., Gargiulo, M., Mazza, A., Gaetano, R., 2018. A CNN-based fusion method for
feature extraction from Sentinel data. Remote Sens. 10, 236.
Schmidhuber, J., 2015. Deep learning in neural networks: an overview. Neural Netw. 61,
Schroeder, W., Oliva, P., Giglio, L., Csiszar, I.A., 2014. The new VIIRS 375 m active fire
detection data product: algorithm description and initial assessment. Remote Sens.
Environ. 143, 85–96.
Sharma, R., Hara, K., Tateishi, R., 2018. Developing forest cover composites through a
combination of Landsat-8 optical and Sentinel-1 SAR data for the visualization and
extraction of forested areas. J. Imaging 4, 105.
Sitanggang, I., Yaakob, R., Mustapha, N., Ainuddin, A., 2013. Predictive models for
hotspots occurrence using decision tree algorithms and logistic regression. J. Appl.
Sci. 13, 252–261.
Strigl, D., Kofler, K., Podlipnig, S., 2010. Performance and scalability of GPU-based
convolutional neural networks. In: 2010 18th Euromicro Conference on Parallel,
Distributed and Network-based Processing. IEEE, pp. 317–324.
Stroppiana, D., Azar, R., Calò, F., Pepe, A., Imperatore, P., Boschetti, M., Silva, J.,
Brivio, P.A., Lanari, R., 2015. Integration of optical and SAR data for burned area
mapping in Mediterranean regions. Remote Sens. 7, 1320–1345.
Tanase, M.A., Belenguer-Plomer, M.A., 2018. 03.D3 Intermediate validation results: SAR
pre-processing and burned area detection, version 1.0. In: ESA CCI ECV Fire
Disturbance. ESA Climate Change Initiative – Fire_cci.
Tanase, M.A., Santoro, M., Aponte, C., de la Riva, J., 2014. Polarimetric properties of
burned forest areas at C- and L-band. IEEE J. Select. Top. Appl. Earth Observ. Remote
Sens. 7, 267–276.
Tanase, M.A., Belenguer-Plomer, M.A., Roteta, E., Bastarrika, A., Wheeler, J., Fernández-
Carrillo, Á., Tansey, K., Wiedemann, W., Navratil, P., Lohberger, S., et al., 2020.
Burned area detection and mapping: intercomparison of Sentinel-1 and Sentinel-2
based algorithms over tropical Africa. Remote Sens. 12, 334.
Tavares, P.A., Beltrão, N.E.S., Guimarães, U.S., Teodoro, A.C., 2019. Integration of
Sentinel-1 and Sentinel-2 for classification and LULC mapping in the urban area of
Belém, eastern Brazilian Amazon. Sensors 19, 1140.
Trigg, S., Flasse, S., 2001. An evaluation of different bi-spectral spaces for discriminating
burned shrub-savannah. Int. J. Remote Sens. 22, 2641–2647.
Tucker, C.J., 1979. Red and photographic infrared linear combinations for monitoring
vegetation. Remote Sens. Environ. 8, 127–150.
Turco, M., Jerez, S., Augusto, S., Tarín-Carrasco, P., Ratola, N., Jiménez-Guerrero, P.,
Trigo, R.M., 2019. Climate drivers of the 2017 devastating fires in Portugal. Sci. Rep.
9, 1–8.
Van Der Werf, G.R., Randerson, J.T., Giglio, L., Van Leeuwen, T.T., Chen, Y., Rogers, B.
M., Mu, M., Van Marle, M.J., Morton, D.C., Collatz, G.J., et al., 2017. Global fire
emissions estimates during 1997–2016. Earth Syst. Sci. Data 9, 697–720.
Van Tricht, K., Gobin, A., Gilliams, S., Piccard, I., 2018. Synergistic use of radar Sentinel-
1 and optical Sentinel-2 imagery for crop mapping: a case study for Belgium. Remote
Sens. 10, 1642.
Van Zyl, J.J., Arii, M., Kim, Y., 2011. Model-based decomposition of polarimetric SAR
covariance matrices constrained for nonnegative eigenvalues. IEEE Trans. Geosci.
Remote Sens. 49, 3452–3459.
Verhegghen, A., Eva, H., Ceccherini, G., Achard, F., Gond, V., Gourlet-Fleury, S.,
Cerutti, P.O., 2016. The potential of Sentinel satellites for burnt area mapping and
monitoring in the Congo Basin forests. Remote Sens. 8, 986.
Ward, D., Kloster, S., Mahowald, N., Rogers, B., Randerson, J., Hess, P., 2012. The
changing radiative forcing of fires: global model estimates for past, present and
future. Atmos. Chem. Phys. 12, 10857–10886.
Williams, A.P., Abatzoglou, J.T., 2016. Recent advances and remaining uncertainties in
resolving past and future climate effects on global fire activity. Curr. Climat. Chang.
Rep. 2, 1–14.
Wooster, M.J., Roberts, G., Perry, G., Kaufman, Y., 2005. Retrieval of biomass
combustion rates and totals from fire radiative power observations: FRP derivation
and calibration relationships between biomass consumption and fire radiative
energy release. J. Geophys. Res.-Atmos. 110.
Xu, X., Li, W., Ran, Q., Du, Q., Gao, L., Zhang, B., 2017. Multisource remote sensing data
classification based on convolutional neural network. IEEE Trans. Geosci. Remote
Sens. 56, 937–949.
Zhang, C., Pan, X., Li, H., Gardiner, A., Sargent, I., Hare, J., Atkinson, P.M., 2018.
A hybrid MLP-CNN classifier for very fine resolution remotely sensed image
classification. ISPRS J. Photogramm. Remote Sens. 140, 133–144.
Zhang, C., Sargent, I., Pan, X., Li, H., Gardiner, A., Hare, J., Atkinson, P.M., 2019. Joint
deep learning for land cover and land use classification. Remote Sens. Environ. 221,
Zhong, Y., Fei, F., Liu, Y., Zhao, B., Jiao, H., Zhang, L., 2017. SatCNN: satellite image
dataset classification using agile convolutional neural networks. Remote Sens. Lett.
8, 136–145.
Zhong, L., Hu, L., Zhou, H., 2019. Deep learning based multi-temporal crop classification.
Remote Sens. Environ. 221, 430–443.
Zhu, X.X., Tuia, D., Mou, L., Xia, G.-S., Zhang, L., Xu, F., Fraundorfer, F., 2017. Deep
learning in remote sensing: a comprehensive review and list of resources. IEEE
Geosci. Remote Sens. Magaz. 5, 8–36.
M.A. Belenguer-Plomer et al.
... Tanase et al. (2018) utilized ALOS L-band data for forest windthrow detection and achieved high levels of accuracy (up to 90%) using image thresholding as the detection approach. SAR data have also been widely employed for change detection purposes, including the detection of forest fires (Abdikan et al., 2022;Ban et al., 2020;Belenguer-Plomer et al., 2021;Bovolo and Bruzzone, 2005;Hosseini and Lim, 2023). Various SAR satellites are currently operational, such as Sentinel-1, ALOS-2, SAOCOM, ICEYE, Capella, TerraSAR-X and Tandem-X, RADARSAT, and COSMO-SkyMed. ...
... For instance, SAR captures information on the three-dimensional structure of surface features, and is increasingly combined with optical imagery to improve land cover maps, particularly in cloudy tropical regions (259)(260)(261). ...
Full-text available
Semantic segmentation (classification) of Earth Observation imagery is a crucial task in remote sensing. This paper presents a comprehensive review of technical factors to consider when designing neural networks for this purpose. The review focuses on Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), and transformer models, discussing prominent design patterns for these ANN families and their implications for semantic segmentation. Common pre-processing techniques for ensuring optimal data preparation are also covered. These include methods for image normalization and chipping, as well as strategies for addressing data imbalance in training samples, and techniques for overcoming limited data, including augmentation techniques, transfer learning, and domain adaptation. By encompassing both the technical aspects of neural network design and the data-related considerations, this review provides researchers and practitioners with a comprehensive and up-to-date understanding of the factors involved in designing effective neural networks for semantic segmentation of Earth Observation imagery.
Satellite-based forest alert systems are an important tool for ecosystem monitoring, planning conservation, and increasing public awareness of forest cover change. Continuous monitoring in tropical regions, such as those experiencing pronounced monsoon seasons, can be complicated by spatially extensive and persistent cloud cover. One solution is to use Synthetic Aperture Radar (SAR) imagery acquired by the European Space Agency’s Sentinel-1A and B satellites. The Sentinel-1A and B satellites acquire C-band radar data that penetrates cloud cover and can be acquired during the day or night. One challenge associated with operational use of radar imagery is that the speckle associated with the backscatter values can complicate traditional pixel-based analysis approaches. A potential solution is to use deep learning semantic segmentation models that can capture predictive features that are more robust to pixel-level noise. In this analysis, we present a prototype SAR-based forest alert system that utilizes deep learning classifiers, deployed using the Google Earth Engine cloud computing platform, to identify forest cover change with near real-time classification over two Cambodian wildlife sanctuaries. By leveraging a pre-existing forest cover change dataset derived from multispectral Landsat imagery, we present a method for efficiently developing a SAR-based semantic segmentation dataset. In practice, the proposed framework achieved good performance comparable to an existing forest alert system while offering more flexibility and ease of development from an operational standpoint.
The Visible Infrared Imaging Radiometer Suite (VIIRS) onboard the Suomi National Polar-orbiting Partnership (S-NPP) satellite has been used for the early detection and daily monitoring of active wildfires. How to effectively segment the active fire pixels from VIIRS image time-series in a reliable manner remains a challenge because of the low precision associated with high recall using automatic methods. For active fire detection, multi-criteria thresholding is often applied to both low-resolution and mid-resolution Earth observation images. Deep learning approaches based on Convolutional Neural Networks are also well-studied on mid-resolution images. However, ConvNet-based approaches have poor performance on low-resolution images because of the coarse spatial features. On the other hand, the high temporal resolution of VIIRS images highlights the potential of using sequential models for active fire detection. Transformer networks, a recent deep learning architecture based on self-attention, offer hope as they have shown strong performance on image segmentation and sequential modelling tasks within computer vision. In this research, we propose a Transformer-based solution to segment active fire pixels from the VIIRS time-series. The solution feeds a time-series of tokenized pixels into a Transformer network to identify active fire pixels at each timestamp and achieves a significantly higher F1-Score than prior approaches for active fires within the study areas in California, New Mexico, and Oregon in the US, and in British Columbia and Alberta in Canada, as well as in Australia and Sweden.
Significance Fires burn an area comparable to Europe each year, emitting greenhouse gases and aerosols. We compared burned area (BA) based on 20-m resolution images with a BA derived from 500-m data. It represents an 80% increase in BA in sub-Saharan Africa, responsible for about 70% of global BA. This difference is predominately (87%) attributed to small fires (<100 ha), which account for 41% of total BA but only for 5% in coarse-resolution products. We found that African fires were responsible for emissions of 1.44 PgC, 31–101% higher than previous estimates and representing 14% of global CO2 emissions from fossil fuel burning. We conclude that small fires are critically important in characterizing the most important disturbance agent on a global scale.
Over the past two decades, several global burned area products have been produced and released to the public. However, the accuracy assessment of such products largely depends on the availability of reliable reference data that currently do not exist on a global scale or whose production requires substantial project resources. This lack of reference data for the validation of burned area products is addressed in this paper. We provide the Burned Area Reference Database (BARD), the first publicly available database created by compiling existing reference burned area (BA) datasets from different international projects. BARD contains a total of 2661 reference files derived from Landsat and Sentinel-2 imagery. All those files have been checked for internal quality and are freely provided by the authors. To ensure database consistency, all files were transformed to a common format and were properly documented by following metadata standards. The goal of generating this database was to give BA algorithm developers and product testers reference information that would help them to develop or validate new BA products. BARD is freely available at (Franquesa et al., 2020).
In recent years, the world witnessed many devastating wildfires that resulted in destructive human and environmental impacts across the globe. Emergency response and rapid mitigation call for effective approaches to near real-time wildfire monitoring. Capable of penetrating clouds and smoke, and imaging day and night, Synthetic Aperture Radar (SAR) can play a critical role in wildfire monitoring. In this communication, we investigated and demonstrated the potential of Sentinel-1 SAR time series with a deep learning framework for near real-time wildfire progression monitoring. The deep learning framework, based on a Convolutional Neural Network (CNN), is developed to detect burnt areas automatically using every new SAR image acquired during the wildfires and by exploiting all available pre-fire SAR time series to characterize the temporal backscatter variations. The results show that Sentinel-1 SAR backscatter can detect wildfires and capture their temporal progression as demonstrated for three large and impactful wildfires: the 2017 Elephant Hill Fire in British Columbia, Canada, the 2018 Camp Fire in California, USA, and the 2019 Chuckegg Creek Fire in northern Alberta, Canada. Compared to the traditional log-ratio operator, the CNN-based deep learning framework can better distinguish burnt areas with higher accuracy. These findings demonstrate that spaceborne SAR time series with deep learning can play a significant role in near real-time wildfire monitoring as data becomes available at daily and hourly intervals following the launch of the RADARSAT Constellation Mission in 2019 and of SAR CubeSat constellations.
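The "traditional log-ratio operator" used above as a baseline reduces, for dB-calibrated backscatter, to a simple image difference. A toy NumPy sketch with a hypothetical change threshold (the -3 dB value is illustrative only, not a value from the study):

```python
import numpy as np

def log_ratio(pre_db, post_db):
    """Log-ratio change image for backscatter given in dB.

    For dB data the log-ratio 10*log10(post/pre) reduces to a simple
    difference of the post- and pre-fire images.
    """
    return post_db - pre_db

# Toy 3x3 pre/post-fire C-band backscatter images in dB.
pre = np.array([[-8.0, -8.5, -9.0],
                [-8.2, -8.1, -8.9],
                [-8.4, -8.3, -8.7]])
post = pre.copy()
post[1, 1] = -13.1  # burned pixel: backscatter drop after fire

change = log_ratio(pre, post)
burned = change < -3.0  # hypothetical dB-drop threshold
```

Pixel-wise thresholding of this change image is exactly what speckle makes fragile, which motivates the CNN-based alternative described in the abstract.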
This study provides a comparative analysis of two Sentinel-1 and one Sentinel-2 burned area (BA) detection and mapping algorithms over 10 test sites (100 × 100 km) in tropical and sub-tropical Africa. Depending on the site, the burned area was mapped at different time points during the 2015–2016 fire seasons. The algorithms relied on diverse burned area (BA) mapping strategies regarding the data used (i.e., surface reflectance, backscatter coefficient, interferometric coherence) and the detection method. Algorithm performance was compared by evaluating the detected BA agreement with reference fire perimeters independently derived from medium resolution optical imagery (i.e., Landsat 8, Sentinel-2). The commission (CE) and omission errors (OE), as well as the Dice coefficient (DC) for burned pixels, were compared. The mean OE and CE were 33% and 31% for the optical-based Sentinel-2 time-series algorithm and increased to 66% and 36%, respectively, for the radar backscatter coefficient-based algorithm. For the coherence-based radar algorithm, OE and CE reached 72% and 57%, respectively. When considering all tiles, the optical-based algorithm provided a significant increase in agreement over the Synthetic Aperture Radar (SAR)-based algorithms that might have been boosted by the use of optical datasets when generating the reference fire perimeters. The analysis suggested that optical-based algorithms provide a significant increase in accuracy over the radar-based algorithms. However, in regions with persistent cloud cover, the radar sensors may provide a complementary data source for wall-to-wall BA detection.
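The agreement measures compared in this study (omission error, commission error, and the Dice coefficient) follow standard definitions based on the burned/unburned confusion counts; a short sketch:

```python
import numpy as np

def ba_metrics(reference, detected):
    """Dice coefficient, omission and commission errors for burned pixels.

    reference, detected: boolean arrays (True = burned).
    """
    tp = np.sum(reference & detected)   # burned in both maps
    fn = np.sum(reference & ~detected)  # missed burned pixels (omission)
    fp = np.sum(~reference & detected)  # false burned pixels (commission)
    dc = 2 * tp / (2 * tp + fn + fp)    # Dice coefficient
    oe = fn / (tp + fn)                 # fraction of reference BA missed
    ce = fp / (tp + fp)                 # fraction of detected BA unburned
    return dc, oe, ce

# Toy example: 4 reference burned pixels, one missed, one false alarm.
ref = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=bool)
det = np.array([1, 1, 1, 0, 1, 0, 0, 0], dtype=bool)
dc, oe, ce = ba_metrics(ref, det)
# tp=3, fn=1, fp=1 -> DC=0.75, OE=0.25, CE=0.25
```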
This paper presents the generation of a global burned area mapping algorithm using MODIS hotspots and near-infrared reflectance within ESA's Fire_cci project. The algorithm is based on a hybrid approach that combines MODIS highest resolution (250 m) near-infrared band and active fire information from thermal channels. The burned area is detected in two phases. In the first step, pixels with a high probability of being burned are selected in order to reduce commission errors. To do that, spatio-temporal active-fire clusters are created to determine adaptive thresholds. Finally, a contextual growing approach is applied from those pixels to the neighbouring area to fully detect the burned patch and reduce omission errors. The algorithm was used to obtain a time series of global burned area dataset (named FireCCI51), covering the 2001–2018 period. Validation based on 1200 sampled sites covering the period from 2003 to 2014 showed average omission and commission errors of 67.1% and 54.4%. When using longer validation periods, the errors were smaller (54.5% omission and 25.7% commission for the additional 1000 African sampled sites), which indicates that the product is negatively influenced by temporal reporting accuracy. The inter-comparison carried out with previous Fire_cci versions (FireCCI41 and FireCCI50), and NASA's standard burned area product (MCD64A1 c6) showed consistent spatial and temporal patterns. However, the new algorithm estimated an average BA of 4.63 Mkm², with a maximum of 5.19 Mkm² (2004) and a minimum of 3.94 Mkm² (in 2001), increasing current burned area estimations. In addition, the new product was more sensitive to smaller burned patches. This new product, called FireCCI51, is publicly available at: (last accessed September 2019).
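The two-phase seed-and-grow strategy described above can be illustrated generically. The toy flood-fill sketch below uses a hypothetical NIR-reflectance-drop input and arbitrary thresholds, not the FireCCI51 implementation:

```python
import numpy as np
from collections import deque

def seed_and_grow(nir_drop, seed_thr=0.15, grow_thr=0.05):
    """Two-phase burned-patch detection.

    Phase 1: a strict threshold on the post-fire NIR reflectance drop
             selects high-probability 'seed' pixels (low commission).
    Phase 2: a looser threshold is accepted only for pixels 4-connected
             to a seed, growing the patch (low omission).
    """
    seeds = nir_drop > seed_thr
    candidates = nir_drop > grow_thr
    burned = seeds.copy()
    queue = deque(zip(*np.nonzero(seeds)))  # BFS from every seed pixel
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < nir_drop.shape[0] and 0 <= cc < nir_drop.shape[1]
                    and candidates[rr, cc] and not burned[rr, cc]):
                burned[rr, cc] = True
                queue.append((rr, cc))
    return burned

# One strong seed (top-left) grows into its moderate neighbours; the
# isolated moderate pixel at (2, 2) is rejected as likely commission.
drop = np.array([[0.20, 0.08, 0.00],
                 [0.07, 0.06, 0.00],
                 [0.00, 0.00, 0.09]])
burned = seed_and_grow(drop)
```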
Burned area algorithms from radar images are often based on temporal differences between pre- and post-fire backscatter values. However, such differences may occur long past the fire event, an effect known as temporal decorrelation. Improvements in radar-based burned area monitoring depend on a better understanding of the temporal decorrelation effects as well as their sources. This paper analyses the temporal decorrelation of the Sentinel-1 C-band backscatter coefficient over burned areas in Mediterranean ecosystems. Several environmental variables that influence radar scattering, such as fire severity, post-fire vegetation recovery, water content, soil moisture, and local slope and aspect, were analyzed. The ensemble learning method random forests was employed to estimate the importance of these variables to the decorrelation process by land cover classes. Temporal decorrelation was observed for over 32% of the burned pixels located within the study area. Fire severity, vegetation water content, and soil moisture were the main drivers behind temporal decorrelation processes and are of the utmost importance for areas detected as burned immediately after fire events. When burned areas were detected long after fire (decorrelated areas), due to reduced backscatter coefficient variations between pre- to post-fire acquisitions, water content (soil and vegetation) was the main driver behind the backscatter coefficient changes. Therefore, for efficient synthetic aperture radar (SAR)-based monitoring of burned areas, detection and mapping algorithms need to account for the interaction between fire impact and soil and vegetation water content.
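Random forests variable importance ranking, as used in this analysis, can be reproduced on synthetic data. A scikit-learn sketch in which the label is constructed to depend mainly on fire severity; all variable names, weights, and thresholds are illustrative only:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500
# Synthetic stand-ins for drivers named in the abstract.
fire_severity = rng.uniform(0, 1, n)
soil_moisture = rng.uniform(0, 1, n)
slope = rng.uniform(0, 1, n)  # irrelevant to the label by construction
# Decorrelation label driven mainly by severity, partly by moisture.
decorrelated = (0.7 * fire_severity + 0.3 * soil_moisture
                + rng.normal(0, 0.05, n)) > 0.5

X = np.column_stack([fire_severity, soil_moisture, slope])
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X, decorrelated)
# Impurity-based importances: severity should rank above the noise variable.
importances = dict(zip(["fire_severity", "soil_moisture", "slope"],
                       rf.feature_importances_))
```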
A record 500,000 hectares burned in Portugal during the extreme wildfire season of 2017, with more than 120 human lives lost. Here we analyse the climatic factors responsible for the June–October burned area (BA) series in Portugal for the period 1980–2017. Superposed onto a substantially stationary trend in the BA data, strong oscillations on shorter time scales were detected. Here we show that they are significantly affected by the compound effect of summer (June-July-August) drought and high temperature conditions during the fire season. Drought conditions were calculated using the Standardized Precipitation Evapotranspiration Index (SPEI), the Standardized Precipitation Index (SPI) and the Standardized Soil Moisture Index (SSI). Then the extent to which the burned area has diverged from climate-expected trends was assessed. Our results indicate that in the absence of other drivers, climate change would have led to higher BA values. In addition, the 2017 extreme fire season is well captured by the model forced with climate drivers only, suggesting that the extreme fire season of 2017 could be a prelude to future conditions and similar events. Indeed, the expected further increase of drought and high temperature conditions in forthcoming decades points to a potential increase of fire risk in this region. The climate-fire model developed in this study could be useful to develop more skilled seasonal predictions capable of anticipating potentially hazardous conditions.
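SPI-family indices standardise an accumulated climate variable. The sketch below uses a plain z-score as a simplified stand-in for the standardisation step; the operational SPI/SPEI first fit a distribution (e.g. gamma) and map its CDF to a standard normal, which is omitted here:

```python
import numpy as np

def standardized_index(series):
    """Simplified standardized anomaly (z-score) of a climate series.

    Stand-in for SPI/SPEI-style indices: negative values indicate
    drier-than-average conditions, positive values wetter ones.
    """
    s = np.asarray(series, dtype=float)
    return (s - s.mean()) / s.std()

# Hypothetical summer precipitation totals (mm); year 4 is the driest.
precip = np.array([80.0, 95.0, 60.0, 30.0, 110.0, 75.0])
spi_like = standardized_index(precip)
```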
Over the past decades, methods for burned areas mapping and dating from remote sensing imagery have been the object of extensive research. The limitations of current methods, together with the heavy pre-processing of input data they require, make them difficult to improve or apply to different satellite sensors. Here, we explore a deep learning approach based on daily sequences of multi-spectral images, as a promising and flexible technique that can be applicable to observations with various spatial and spectral resolutions. We test the proposed model for five regions around the globe using input data from VIIRS 750 m bands resampled to a 0.01° spatial resolution grid. The derived burned areas are validated against higher resolution reference maps and compared with the MCD64A1 Collection 6 and FireCCI51 global burned area datasets. We show that the proposed methodology achieves competitive results in the task of burned areas mapping, despite using lower spatial resolution observations than the two global datasets. Furthermore, we improve the task of burned areas dating for the considered regions of study when compared with state-of-the-art products. We also show that our model can be used to map burned areas for low burned fraction levels and that it can operate in near-real-time, converging to the final solution in only a few days. The obtained results are a strong indication of the advantage of deep learning approaches for the problem of mapping and dating of burned areas and provide several routes for future research.
Fire has a vast influence on the climatic balance, and the Global Climate Observing System (GCOS) considers it an Essential Climate Variable (ECV). Remote sensing data are a powerful source of information for burned area detection and thus for estimating greenhouse gas (GHG) emissions from fires. Currently, most burned area products are based on optical images. However, cloud cover independent Synthetic Aperture Radar (SAR) datasets are increasingly exploited for burned area mapping. This study assessed indices based on the temporal backscatter coefficient to understand their suitability for burned area detection. The analysis was carried out using the random forests machine learning classifier, which provides a rank for each independent variable used as input. Depending on land cover type, soil moisture, and topographic conditions, remarkable differences were observed between the temporal backscatter-based indices.