Confusion matrix scheme.

Source publication
Article
Full-text available
In this paper, we present an in-depth analysis of the use of convolutional neural networks (CNN), a deep learning method widely applied in remote sensing studies in recent years, for burned area (BA) mapping, combining radar and optical datasets acquired by the sensors on board Sentinel-1 and Sentinel-2, respectively. Combining active and passive...

Context in source publication

Context 1
... matrices were used to validate the CNN-based BA maps (Table 2). The Dice coefficient (Eq. ...
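The equation reference is truncated in this snippet; for context, the standard Dice coefficient expressed in confusion-matrix terms (true positives TP, false positives FP, false negatives FN) is

```latex
\mathrm{Dice} = \frac{2\,\mathrm{TP}}{2\,\mathrm{TP} + \mathrm{FP} + \mathrm{FN}}
```

which, for binary burned/unburned classification, equals the F1-score.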

Similar publications

Article
Full-text available
Deforestation exacerbates climate change through greenhouse gas emissions, but other climatic alterations linked to the local biophysical changes remain poorly understood. Here, we assess the impact of tropical deforestation on fire weather risk – that is, the climate conditions conducive to wildfires – using high-resolution convection-permitting cl...

Citations

... For instance, SAR captures information on the three-dimensional structure of surface features, and is increasingly combined with optical imagery to improve land cover maps, particularly in cloudy tropical regions (259)(260)(261). ...
Preprint
Full-text available
Semantic segmentation (classification) of Earth Observation imagery is a crucial task in remote sensing. This paper presents a comprehensive review of technical factors to consider when designing neural networks for this purpose. The review focuses on Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), and transformer models, discussing prominent design patterns for these ANN families and their implications for semantic segmentation. Common pre-processing techniques for ensuring optimal data preparation are also covered. These include methods for image normalization and chipping, as well as strategies for addressing data imbalance in training samples, and techniques for overcoming limited data, including augmentation techniques, transfer learning, and domain adaptation. By encompassing both the technical aspects of neural network design and the data-related considerations, this review provides researchers and practitioners with a comprehensive and up-to-date understanding of the factors involved in designing effective neural networks for semantic segmentation of Earth Observation imagery.
... These maps were utilized for a wide variety of purposes, such as fire damage reporting, firefighting and evacuation planning, and wildland fire management. Many DL approaches have been adopted for the fire mapping task, as summarized in Table 3. Belenguer-Plomer et al. [88] investigated CNN performance in detecting and mapping burned areas using Sentinel-1 and/or Sentinel-2 data downloaded from the Copernicus Open Access Hub. The proposed CNN comprises two convolutional layers, each followed by a ReLU activation function and a max-pooling layer, then two fully connected layers and a sigmoid function to predict the probability of burned or unburned areas. ...
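As a rough illustration of the architecture described in this snippet, here is a minimal PyTorch sketch; the four-band input, filter counts, and 32-pixel patch size are illustrative assumptions, not values from Belenguer-Plomer et al. [88]:

```python
import torch
import torch.nn as nn

class BurnedAreaCNN(nn.Module):
    """Minimal sketch of the described CNN: two conv layers, each
    followed by ReLU and max pooling, then two fully connected
    layers and a sigmoid for the burned/unburned probability."""
    def __init__(self, in_channels=4, patch_size=32):  # assumed values
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),          # patch_size -> patch_size/2
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),          # patch_size/2 -> patch_size/4
        )
        flat = 64 * (patch_size // 4) ** 2
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(flat, 128),
            nn.ReLU(),
            nn.Linear(128, 1),
            nn.Sigmoid(),             # probability of "burned"
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Example: a batch of 8 four-band 32x32 patches -> (8, 1) probabilities
probs = BurnedAreaCNN()(torch.randn(8, 4, 32, 32))
```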
... Two scenarios were investigated: in the first scenario, pre- and post-fire datasets were collected from Sentinel-2; in the second scenario, pre-fire datasets were downloaded from Sentinel-2 and post-fire datasets were obtained from PRISMA. The numerical results showed that DSMNN-Net achieved an accuracy of 90.24% and 97.46% in the first and second scenarios, respectively, outperforming existing state-of-the-art methods such as the CNN proposed by Belenguer-Plomer et al. [88], random forest [111,112], and SVM [113]. ...
Article
Full-text available
Wildland fires are one of the most dangerous natural risks, causing significant economic damage and loss of lives worldwide. Every year, millions of hectares are lost, and experts warn that the frequency and severity of wildfires will increase in the coming years due to climate change. To mitigate these hazards, numerous deep learning models were developed to detect and map wildland fires, estimate their severity, and predict their spread. In this paper, we provide a comprehensive review of recent deep learning techniques for detecting, mapping, and predicting wildland fires using satellite remote sensing data. We begin by introducing remote sensing satellite systems and their use in wildfire monitoring. Next, we review the deep learning methods employed for these tasks, including fire detection and mapping, severity estimation, and spread prediction. We further present the popular datasets used in these studies. Finally, we address the challenges faced by these models to accurately predict wildfire behaviors, and suggest future directions for developing reliable and robust wildland fire models.
... Our results show that increasing the number of top variables from three to five in training datasets substantially improves the performance of AUNet and RAUNet for segmentation of fires (Table 2), but including more variables will likely not increase performance indefinitely [34]. Many studies used all of the main bands of Sentinel-2 [40,89], selected main bands of Sentinel-2 [34,90,91], or single derivatives (e.g., burned area indices) [44,92] for training their deep-learning-based approaches to detect forest fires. However, increasing the number of channels (i.e., variables in a dataset) increases the parameters and training time of the model. ...
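The channel-count trade-off noted in this snippet can be made concrete: a convolutional layer has out_channels × (in_channels × kH × kW + 1) parameters, so its size grows linearly with the number of input variables. A quick check in PyTorch (the 64-filter layer is illustrative, not a configuration from the cited studies):

```python
import torch.nn as nn

def conv_params(in_ch, out_ch=64, k=3):
    """Parameter count of one conv layer: weights + biases."""
    layer = nn.Conv2d(in_ch, out_ch, kernel_size=k)
    return sum(p.numel() for p in layer.parameters())

for in_ch in (3, 5, 13):  # e.g., top-3, top-5, or all 13 Sentinel-2 bands
    print(in_ch, conv_params(in_ch))
# 3 -> 1792, 5 -> 2944, 13 -> 7552: first-layer cost scales with channels
```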
Article
Full-text available
Southern Africa experiences a great number of wildfires, but the dependence on low-resolution products to detect and quantify fires means both that there is a time lag and that many small fire events are never identified. This is particularly relevant in miombo woodlands, where fires are frequent and predominantly small. We developed a cutting-edge deep-learning-based approach that uses freely available Sentinel-2 data for near-real-time, high-resolution fire detection in Mozambique. The importance of Sentinel-2 main bands and their derivatives was evaluated using TreeNet, and the top five variables were selected to create three training datasets. We designed a UNet architecture, including contraction and expansion paths and a bridge between them with several layers and functions. We then added attention gate units (AUNet) and residual blocks and attention gate units (RAUNet) to the UNet architecture. We trained the three models with the three datasets. The efficiency of all three models was high (intersection over union (IoU) > 0.85) and increased with more variables. This is the first time an RAUNet architecture has been used to detect fire events, and it performed better than the UNet and AUNet models, especially for detecting small fires. The RAUNet model with five variables had IoU = 0.9238 and overall accuracy = 0.985. We suggest that others test the RAUNet model with large datasets from different regions and other satellites so that it may be applied more broadly to improve the detection of wildfires.
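For readers unfamiliar with the attention gate units mentioned in this abstract, below is a minimal PyTorch sketch of a standard additive attention gate in the style of Attention U-Net; channel sizes are illustrative, and the gating signal is assumed to be already resampled to the skip connection's spatial size:

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate: the decoder's gating signal g
    re-weights the encoder skip connection x before concatenation."""
    def __init__(self, g_ch, x_ch, inter_ch):
        super().__init__()
        self.wg = nn.Conv2d(g_ch, inter_ch, kernel_size=1)
        self.wx = nn.Conv2d(x_ch, inter_ch, kernel_size=1)
        self.psi = nn.Sequential(
            nn.ReLU(),
            nn.Conv2d(inter_ch, 1, kernel_size=1),
            nn.Sigmoid(),  # attention coefficients in [0, 1]
        )

    def forward(self, g, x):
        alpha = self.psi(self.wg(g) + self.wx(x))  # (N, 1, H, W)
        return x * alpha                           # gated skip features

# Example: gate a 64-channel skip map with a 64-channel gating signal
gate = AttentionGate(g_ch=64, x_ch=64, inter_ch=32)
out = gate(torch.randn(1, 64, 64, 64), torch.randn(1, 64, 64, 64))
```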
... For example, Sentinel-2 data were applied for mapping burned areas in Portugal, southern France, and Greece using DL (Pinto et al., 2021). Belenguer-Plomer et al. (2021) combined Sentinel-1 SAR data and Sentinel-2 optical imagery with a CNN for mapping burned areas. Also in this context, Zhang et al. (2021) proposed a deep learning multi-source-based method to combine SAR (Sentinel-1) and multispectral (Sentinel-2) data, using PlanetScope pre- and post-fire normalized difference vegetation index (NDVI) data to generate the labeled dataset. ...
Article
Full-text available
Pantanal is the largest continuous wetland in the world, but its biodiversity is currently endangered by catastrophic wildfires that occurred in the last three years. The information available for the area only refers to the location and the extent of the burned areas based on medium and low-spatial-resolution imagery, ranging from 30 m up to 1 km. However, to improve measurements and assist in environmental actions, robust methods are required to provide a detailed mapping at a higher spatial scale of the burned areas, such as PlanetScope imagery with 3–5 m spatial resolution. Among state-of-the-art Deep Learning (DL) segmentation methods, Transformer-based networks in particular are one of the best emerging approaches to extract information from remote sensing imagery. Here we combine Transformer DL methods and high-resolution Planet imagery to map burned areas in the Brazilian Pantanal wetland. We first compared the performances of multiple DL-based networks, namely the SegFormer and DTP Transformer methods, with CNN-based networks like PSPNet, FCN, DeepLabV3+, OCRNet, and ISANet, applied to Planet imagery, considering RGB and near-infrared within a large dataset of 1282 image patches (512 × 512 pixels). We later verified the generalization capability of the model for segmenting burned areas in different areas located in the Brazilian Amazon, which is also known worldwide for its environmental relevance. As a result, the two Transformer-based methods, SegFormer (F1-score of 95.91%) and DTP (F1-score of 95.15%), provided the most accurate results in mapping burned forest areas in Pantanal. Results show that the combination of SegFormer and RGB+NIR imagery with pre-trained weights is the best option (F1-score of 96.52%) to distinguish burned from unburned areas. When applying the generated model to two Brazilian Amazon forest regions, we achieved F1-score averages of 95.88% for burned areas. We conclude that Transformer-based networks are fit to deal with burned areas in two of the most relevant environmental areas of the world using high-spatial-resolution imagery.
... A CNN is an artificial neural network that applies the convolution operation in data processing. It is a type of deep neural network with additional layers between the input and output layers [46]. The basic concept of a CNN is to use a convolutional layer to detect relationships between the features of objects and a pooling layer to group similar features. ...
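The convolution-then-pooling idea in this snippet can be seen directly on a small tensor; a minimal sketch with random weights standing in for learned filters:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 8, 8)            # one single-band 8x8 patch
kernel = torch.randn(4, 1, 3, 3)       # four 3x3 feature detectors

feat = F.conv2d(x, kernel, padding=1)  # convolution: (1, 4, 8, 8) feature maps
pooled = F.max_pool2d(feat, 2)         # pooling groups/downsamples: (1, 4, 4, 4)
print(feat.shape, pooled.shape)
```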
Article
Full-text available
Forest and land fires are disasters that greatly impact various sectors. Burned area identification is needed to control forest and land fires. Remote sensing is a common technology for rapid burned area identification. However, there are not many studies on combining optical and synthetic aperture radar (SAR) remote sensing data for burned area detection. In addition, SAR remote sensing data have the advantage of being usable in various weather conditions. This research aims to evaluate a burned area model using a hybrid of a convolutional neural network (CNN) as a feature extractor and a random forest (CNN-RF) as the classifier on Sentinel-1 and Sentinel-2 data. The experiment uses five test schemes: (1) using optical remote sensing data; (2) using SAR remote sensing data; (3) a combination of optical and SAR data with VH polarization only; (4) a combination of optical and SAR data with VV polarization only; and (5) a combination of optical and SAR data with dual VH and VV polarization. The research was also carried out with CNN, RF, and neural network (NN) classifiers. On the basis of the overall accuracy for part of the region of Pulang Pisau Regency and Kapuas Regency, Central Kalimantan, Indonesia, the CNN-RF method provided the best results among the tested schemes, with the highest overall accuracy reaching 97% using Satellite pour l’Observation de la Terre (SPOT) images as reference data. This shows the potential of the CNN-RF method to identify burned areas, particularly in increasing precision. The estimated burned area at the research site using the hybrid CNN-RF method is 48,824.59 hectares, and the accuracy is 90% compared with the MCD64A1 burned area product.
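A minimal sketch of the hybrid CNN-RF idea described above, with a CNN used as a frozen feature extractor and a random forest as the classifier; the layer sizes, six-band input, and toy data are assumptions for illustration, not the study's configuration:

```python
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

# Assumed CNN feature extractor: conv features flattened to vectors
extractor = nn.Sequential(
    nn.Conv2d(6, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
)

def features(patches):
    """Run patches (N, 6, 16, 16) through the frozen extractor."""
    with torch.no_grad():
        return extractor(patches).numpy()  # (N, 32 * 4 * 4)

# Toy stand-in data: optical+SAR patches and burned/unburned labels
X = torch.randn(100, 6, 16, 16)
y = torch.randint(0, 2, (100,)).numpy()

rf = RandomForestClassifier(n_estimators=100)
rf.fit(features(X), y)               # the RF classifies CNN features
pred = rf.predict(features(X[:5]))
```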
... However, due to the unpredictability of the available data shortly after a disaster and the diversity of building features, the generalization ability and emergency-response effectiveness of these methods are still far from what is required for real applications. Presently, convolutional neural networks (CNNs) have been successfully applied to computer vision fields (Ren et al., 2016), including remote sensing applications such as land-use mapping (Belenguer-Plomer et al., 2021; Huang et al., 2018), instance segmentation (Huang et al., 2022), and scene classification (Shawky et al., 2020). CNN models can autonomously extract high-level features from images in a data-driven paradigm (Zheng et al., 2021). ...
Article
The accurate extraction of building damage after destructive natural disasters is critical for disaster rescue and assessment. To achieve a rapid disaster response, training a model from scratch using enough ground-truth data collected in situ is not feasible. Often, in disaster situations, it is ineffective to directly apply an existing model due to the vast diversity among buildings worldwide, the limited number of label samples for training, and the different sources of remote sensing images between the pre- and post-disaster. To solve this problem, we present an incremental learning framework for the rapid identification of collapsed buildings triggered by sudden natural disasters. Specifically, end-to-end gradient boosting networks are improved into an incremental learning framework for an emergency response, where the historical natural disaster data are transferred into the same style of images that were captured shortly after a disaster event by using cycle-consistent generative adversarial networks. The proposed method is tested on two cases, i.e., the Haiti earthquake in January 2010 and the Nepal earthquake in April 2015, achieving Kappa accuracies of 0.70 and 0.68, respectively. The optimization of building damage extraction can be completed within 8 h after the disaster using the transferred data. The experimental results show that the proposed method is an effective way to evaluate the building damage triggered by natural disasters with different source remote sensing images. The code of this work and the data of the test cases are available at https://github.com/gjy-Ari/Incre-Trans.
... Remote sensing data are successfully used in many different fields, from land surface analysis to climate change detection and from ecosystem analysis to natural disaster assessment [1][2][3][4][5][6][7][8][9][10]. However, due to the operating conditions of satellite systems with passive sensors and to atmospheric conditions, remote sensing data often contain missing information [11]. ...
Article
In remote sensing studies, reconstructing missing data in satellite images is of great importance for increasing data usability and streamlining analysis. In this study, an Artificial Neural Network (ANN) model known as an autoencoder was used to address this problem. The aim of the study was to develop an ANN model that can successfully reconstruct satellite images containing a large proportion of missing data, which are therefore difficult to reconstruct with high accuracy using classical methods such as interpolation. The model was tested on daily (MYD11A1) land surface temperature data at 1 km resolution acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS) sensors. The study area covers a region in southern Turkey spanning the north of Antalya province and parts of the Burdur and Isparta provinces. The study, conducted on 306 images from the 2017–2020 period, showed that the model can complete data with 70% or more missing information with a Mean Absolute Error (MAE) of 1.79.
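A minimal sketch of the autoencoder-based gap filling described above: a convolutional autoencoder is trained to restore complete images from masked inputs. The architecture, mask rate, and toy data are illustrative assumptions, not the study's configuration:

```python
import torch
import torch.nn as nn

# Simple convolutional autoencoder for single-band images
autoencoder = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # encode
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),    # decode
    nn.ConvTranspose2d(16, 1, 2, stride=2),
)

opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()  # L1 matches the MAE-style evaluation

full = torch.randn(8, 1, 64, 64)              # toy complete LST images
mask = (torch.rand_like(full) > 0.7).float()  # ~70% of pixels missing
masked = full * mask                          # zeros stand in for gaps

recon = autoencoder(masked)
loss = loss_fn(recon, full)   # learn to restore the missing pixels
loss.backward()
opt.step()
```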
... Belenguer-Plomer et al. (2021) [29] utilize SAR and MSI images in a wall-to-wall mapping strategy to eliminate gaps caused by cloudy MSI imagery. They construct several CNNs to determine optimum model performance by land cover type. ...
... This ability would represent a crucial step forward in being able to track BA progression across varied terrain and geography, though research has not yet fully evaluated this capability. Belenguer-Plomer et al. (2021) [29] train land-cover-specific CNNs to detect BA, though they do not test their effectiveness when land covers are combined. This study aims to investigate both of the above gaps in the literature: the inclusion of additional data channels, and the effectiveness of transfer learning in burn area monitoring. ...
... The loss function used in this study for all models is Dice Loss. Dice Loss was chosen because it has been shown to handle class imbalance and outliers more smoothly than cross-entropy, and because it has been applied to BA estimation previously [10,29]. Further, it is based on this study's evaluation metric, the F1-score, so minimizing the Dice Loss maximizes model performance. ...
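For reference, a common differentiable formulation of the Dice Loss mentioned above; this is a generic sketch with a conventional smoothing constant, not code from the cited studies:

```python
import torch

def dice_loss(probs, target, eps=1.0):
    """Soft Dice loss: 1 - 2|X∩Y| / (|X| + |Y|), computed on
    predicted probabilities so it stays differentiable."""
    probs, target = probs.flatten(1), target.flatten(1)
    inter = (probs * target).sum(dim=1)
    union = probs.sum(dim=1) + target.sum(dim=1)
    return (1 - (2 * inter + eps) / (union + eps)).mean()

# Example: batch of 4 predicted masks vs. binary ground truth
loss = dice_loss(torch.rand(4, 1, 64, 64),
                 torch.randint(0, 2, (4, 1, 64, 64)).float())
```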
Article
Full-text available
The study presented here builds on previous synthetic aperture radar (SAR) burnt area estimation models and presents the first U-Net (a convolutional network architecture for fast and precise segmentation of images) combined with ResNet50 (Residual Networks used as a backbone for many computer vision tasks) encoder architecture used with SAR, Digital Elevation Model, and land cover data for burnt area mapping in near-real time. The Santa Cruz Mountains Lightning Complex (CZU) was one of the most destructive fires in state history. The results showed a maximum burnt area segmentation F1-Score of 0.671 in the CZU, which outperforms current models in the literature that estimate burnt area with SAR data for the specific event studied, which report an F1-Score of 0.667. The framework presented here has the potential to be applied on a near-real-time basis, which could allow land monitoring as the frequency of data capture improves.
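One common way to assemble the U-Net-plus-ResNet50-encoder combination this abstract describes is the segmentation_models_pytorch library; a minimal sketch, where the five-channel input standing in for the SAR, DEM, and land cover stack is an assumption:

```python
import torch
import segmentation_models_pytorch as smp

# U-Net decoder with a ResNet50 encoder backbone
model = smp.Unet(
    encoder_name="resnet50",
    encoder_weights=None,   # or "imagenet" to start from pretrained weights
    in_channels=5,          # assumed: SAR bands + DEM + land cover
    classes=1,              # burnt / not-burnt mask
)

logits = model(torch.randn(2, 5, 256, 256))  # -> (2, 1, 256, 256)
```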
... Sentinel-1 data can penetrate clouds and smoke (Torres et al., 2012), but many sensing and environmental factors affect the C-band backscatter (Beaudoin et al., 1990; Pulliainen et al., 1996) and, for example, the pre- to post-fire change in backscatter can be comparable to changes driven by moisture content and seasonal phenology in savannas (Mathieu et al., 2019). Belenguer-Plomer et al. (2021) applied two CNN-based architectures independently to Sentinel-1, Sentinel-2, and both datasets together, degraded to a common 40 m resolution, to map burned areas in 110 × 110 km tiles in different biomes. The results were validated with 30 m Landsat burned/unburned area maps and yielded F1 values of 0.35 to 0.64 that varied with land cover and the input data type. ...
Article
Full-text available
High spatial resolution commercial satellite data provide new opportunities for terrestrial monitoring. The recent availability of near-daily 3 m observations provided by the PlanetScope constellation enables mapping of small and spatially fragmented burns that are not detected at coarser spatial resolution. This study demonstrates, for the first time, the potential for automated PlanetScope 3 m burned area mapping. The PlanetScope sensors have no onboard calibration or short-wave infrared bands, and have variable overpass times, making them challenging to use for large area, automated, burned area mapping. To help overcome these issues, a U-Net deep learning algorithm was developed to classify burned areas from two-date Planetscope 3 m image pairs acquired at the same location. The deep learning approach, unlike conventional burned area mapping algorithms, is applied to image spatial subsets and not to single pixels and so incorporates spatial as well as spectral information. Deep learning requires large amounts of training data. Consequently, transfer learning was undertaken using pre-existing Landsat-8 derived burned area reference data to train the U-Net that was then refined with a smaller set of PlanetScope training data. Results across Africa considering 659 PlanetScope radiometrically normalized image pairs sensed one day apart in 2019 are presented. The U-Net was first trained with different numbers of randomly selected 256 × 256 30 m pixel patches extracted from 92 pre-existing Landsat-8 burned area reference data sets defined for 2014 and 2015. The U-Net trained with 300,000 Landsat patches provided about 13% 30 m burn omission and commission errors with respect to 65,000 independent 30 m evaluation patches. The U-Net was then refined by training on 5,000 256 × 256 3 m patches extracted from independently interpreted PlanetScope burned area reference data. Qualitatively, the refined U-Net was able to more precisely delineate 3 m burn boundaries, including the interiors of unburned areas, and better classify “faint” burned areas indicative of low combustion completeness and/or sparse burns. The refined U-Net 3 m classification accuracy was assessed with respect to 20 independently interpreted PlanetScope burned area reference data sets, composed of 339.4 million 3 m pixels, with low 12.29% commission and 12.09% omission errors. The dependency of the U-Net classification accuracy on the burned area proportion within 3 m pixel 256 × 256 patches was also examined, and patches <6.5% burned were less accurately classified. A regression analysis between the proportion of 30 m grid cells classified as burned against the proportion labelled as burned in the 3 m reference maps showed high agreement (r² = 0.91, slope = 0.93, intercept <0.001), indicating that the commission and omission errors largely compensate at 30 m resolution.
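The two-stage training described in this abstract, pre-training on abundant Landsat-derived reference patches and then refining on a smaller PlanetScope set, follows the usual fine-tuning pattern; a minimal sketch with a placeholder model and toy data (the real study used a U-Net and actual reference patches):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder segmentation model; the study used a U-Net
model = nn.Sequential(nn.Conv2d(4, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(8, 1, 3, padding=1))

def make_loader(n, size):  # toy stand-in for the real reference patches
    x = torch.randn(n, 4, size, size)
    y = torch.randint(0, 2, (n, 1, size, size)).float()
    return DataLoader(TensorDataset(x, y), batch_size=8)

def train(model, loader, lr, epochs):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for images, masks in loader:
            opt.zero_grad()
            loss_fn(model(images), masks).backward()
            opt.step()

# Stage 1: pre-train on the large Landsat-derived reference set
train(model, make_loader(256, 32), lr=1e-3, epochs=2)
# Stage 2: refine on the smaller PlanetScope set at a lower learning
# rate so the pretrained weights are only gently adjusted
train(model, make_loader(64, 32), lr=1e-4, epochs=2)
```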
... Fires devastated Europe during the summer of 2021, with hundreds of events burning across the Mediterranean, causing unprecedented damage to people, properties, and ecosystems. Remote sensing (RS) is widely recognized as a key source of data for monitoring wildfires [1], exploiting both optical/multi-spectral and microwave satellite sensors [2]. Optical/multi-spectral and microwave satellite observations can provide information on areas affected by fires as well as on fire severity, that is, the degree of damage to vegetation. ...