Björn Waske’s research while affiliated with Osnabrück University and other places


Publications (87)


Comparative analysis of UAV-based LiDAR and photogrammetric systems for the detection of terrain anomalies in a historical conflict landscape
  • Article

June 2025 · 33 Reads · 1 Citation

Science of Remote Sensing

Benjamin Kisliuk · [...]

Fig. 1. Spatial demonstration of the investigated sampling designs in binary flood detection. A total sample of 100 points is created using the strata flood (blue) and non-flood (background). The proportional allocation split is based on the area covered by each stratum. Gaps in the systematic sampling pattern exist due to the limited sample size and the simplified implementation.
Fig. 2. The study regions and corresponding CEMS data for simulated ground truth (left column) and flood classification via Otsu threshold (right column). The investigated spatial extent corresponds to the extent of the available CEMS data, ensuring that ground truth is available for all pixels. The background shows the underlying Sentinel-1 data. In the Otsu-based flood classification, blue marks the original threshold, while violet and brown colours represent negative and positive threshold shifts, respectively. More information on the spatial extent of the flood maps is provided in Table 5.
Fig. 3. Creation of synthetic errors by shifting the threshold of the Otsu-based classification. Comparing ground truth and flood classification with and without synthetic error yields error matrices. The example in Row 3 shows the class ratios and the distribution of TP (true positive), TN (true negative), FN (false negative), and FP (false positive) in the resulting error matrices for −5 %, ±0 %, and +5 % threshold shifts in the study region AUS. After bootstrapping, OA (Overall Accuracy), MCC (Matthews Correlation Coefficient), F1 (F1-Score), RECA (Recall), and PREC (Precision) are calculated.
Fig. 4. Sampling design evaluation via bootstrapping (1000 runs). Standard deviations are calculated from the resulting scores of each metric. For OA (Overall Accuracy), RECA (Recall), PREC (Precision), and F1 (F1-Score), the scale is 0 to 100 %; the normalized MCC (Matthews Correlation Coefficient), with a natural range from 0 to 1, is multiplied by 100 to match. Note that the y-axis scales differ between panels. More information and statistics on the bootstrapping can be found in Appendix Table A.2.
Fig. 5. Distribution of metric scores calculated through bootstrapping (1000 runs) based on the optimal sampling design per metric identified in Fig. 4. For OA (Overall Accuracy), RECA (Recall), PREC (Precision), and F1 (F1-Score), the scale is 0 to 100 %; the normalized MCC (Matthews Correlation Coefficient), with a natural range from 0 to 1, is multiplied by 100 to match. Note that the y-axis scales differ between panels.
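
Fig. 3 describes generating synthetic errors by shifting the Otsu threshold of the flood classification and comparing the result against the ground truth. A minimal sketch of that idea, assuming a single-band backscatter array and a binary reference mask; the additive shift definition, array names, and the simulated inputs are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np
from skimage.filters import threshold_otsu
from sklearn.metrics import confusion_matrix

def classify_with_shift(backscatter, shift_fraction=0.0):
    """Binary flood map from an Otsu threshold shifted by a fraction of the data range."""
    t = threshold_otsu(backscatter)
    t = t + shift_fraction * (backscatter.max() - backscatter.min())
    return backscatter < t                     # low backscatter -> open water / flood

# Illustrative inputs: SAR backscatter (dB) and a reference flood mask
rng = np.random.default_rng(42)
backscatter = rng.normal(-12.0, 4.0, size=(500, 500))
ground_truth = classify_with_shift(backscatter, 0.0)   # stand-in for the CEMS reference

for shift in (-0.05, 0.0, 0.05):
    flood_map = classify_with_shift(backscatter, shift)
    tn, fp, fn, tp = confusion_matrix(ground_truth.ravel(), flood_map.ravel(),
                                      labels=[False, True]).ravel()
    print(f"shift {shift:+.0%}: TP={tp} TN={tn} FP={fp} FN={fn}")
```

With a zero shift the flood map matches the reference exactly, so all disagreement in the ±5 % runs is the controlled, synthetic error.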


Towards robust validation strategies for EO flood maps
  • Article
  • Full-text available

December 2024 · 108 Reads · 1 Citation

Remote Sensing of Environment

Flood maps based on Earth Observation (EO) data inform critical decision-making in almost every stage of the disaster management cycle, directly impacting the ability of affected individuals and governments to receive aid as well as informing policies on future adaptation. However, flood map validation also presents a challenge in the form of class imbalance between flood and non-flood classes, which has rarely been investigated. There are currently no established best practices for addressing this issue, and the accuracy of these maps is often viewed as a mere formality, which leads to a lack of user trust in flood map products and a limitation in their operational use and uptake. This paper provides the first comprehensive assessment of the impact of current EO-based flood map validation practices. Using flood inundation maps derived from Sentinel-1 synthetic aperture radar data with synthetically generated controlled errors and Copernicus Emergency Management Service flood maps as the ground truth, binary metrics were statistically evaluated for the quantification of flood detection accuracy for events under varying flood conditions. In particular, class-specific metrics were found to be sensitive to the class imbalance, i.e. larger flood magnitudes result in higher metric scores, meaning such metrics are naturally biased towards overpredicting classifiers. Metric stability across error percentiles and flood magnitudes was assessed through standard deviation calculated by bootstrapping to quantify the impact of sample selection subjectivity, where stratified sampling schemes consistently exhibited the lowest standard deviation. Thoughtful sample and response design were critical, with probability-based random sampling and proportional or equal class allocation vital to producing robust accuracy estimates comparable across study sites, error classes, and flood magnitudes. Results suggest that popular evaluation metrics such as the F1-Score are in fact unsuitable for accurate characterization of map quality and are not comparable across different study sites or events. Overall accuracy and MCC are shown to be the most robust performance metrics when sampling designs are optimized, and bootstrapping is demonstrated to be a necessary tool for estimating variability in map accuracy observed due to the spatial sampling of validation points. Results presented herein pave the way for the development of global flood map validation guidelines, to support wider use of and trust in EO-derived flood risk and recovery products, eventually allowing us to unlock the full potential of EO for improved flood resilience.
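
The abstract describes assessing metric stability by repeatedly drawing validation samples and computing binary accuracy metrics over 1000 runs. A minimal sketch of that kind of bootstrap evaluation, assuming flattened reference and prediction arrays coded 0 (non-flood) / 1 (flood); the equal-allocation sample size, function names, and run count are illustrative, not the paper's exact protocol:

```python
import numpy as np
from sklearn.metrics import (accuracy_score, matthews_corrcoef,
                             f1_score, recall_score, precision_score)

def stratified_sample(reference, n_per_class, rng):
    """Draw an equal-allocation stratified random sample of pixel indices."""
    idx = []
    for cls in (0, 1):                           # 0 = non-flood, 1 = flood
        pool = np.flatnonzero(reference == cls)
        idx.append(rng.choice(pool, size=n_per_class, replace=False))
    return np.concatenate(idx)

def bootstrap_metrics(reference, flood_map, n_runs=1000, n_per_class=50, seed=0):
    """Mean and standard deviation of binary accuracy metrics over repeated samples."""
    rng = np.random.default_rng(seed)
    ref, pred = reference.ravel(), flood_map.ravel()
    scores = {m: [] for m in ("OA", "MCC", "F1", "RECA", "PREC")}
    for _ in range(n_runs):
        s = stratified_sample(ref, n_per_class, rng)
        y, p = ref[s], pred[s]
        scores["OA"].append(accuracy_score(y, p))
        scores["MCC"].append(matthews_corrcoef(y, p))
        scores["F1"].append(f1_score(y, p))
        scores["RECA"].append(recall_score(y, p))
        scores["PREC"].append(precision_score(y, p))
    return {m: (np.mean(v), np.std(v)) for m, v in scores.items()}
```

The per-metric standard deviation returned here is the quantity used to compare sampling designs: the smaller it is, the less the accuracy estimate depends on which validation pixels happened to be drawn.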


Multi-Stage Feature Fusion of Multispectral and SAR Satellite Images for Seasonal Crop-Type Mapping at Regional Scale Using an Adapted 3D U-Net Model

August 2024 · 61 Reads · 2 Citations

Earth observation missions such as Sentinel and Landsat support the large-scale identification of agricultural crops by providing free radar and multispectral satellite images. The extraction of representative image information as well as the combination of different image sources for improved feature selection still represent a major challenge in the field of remote sensing. In this paper, we propose a novel three-dimensional (3D) deep learning U-Net model to fuse multi-level image features from multispectral and synthetic aperture radar (SAR) time series data for seasonal crop-type mapping at a regional scale. For this purpose, we used a dual-stream U-Net with a 3D squeeze-and-excitation fusion module applied at multiple stages in the network to progressively extract and combine multispectral and SAR image features. Additionally, we introduced a distinctive method for generating patch-based multitemporal multispectral composites by selective image sampling within a 14-day window, prioritizing those with minimal cloud cover. The classification results showed that the proposed network provided the best overall accuracy (94.5%) compared to conventional two-dimensional (2D) and three-dimensional U-Net models (2D: 92.6% and 3D: 94.2%). Our network successfully learned multi-modal dependencies between the multispectral and SAR satellite images, leading to improved field mapping of spectrally similar and heterogeneous classes while mitigating the limitations imposed by persistent cloud coverage. Additionally, the feature representations extracted by the proposed network demonstrated their transferability to a new cropping season, providing a reliable mapping of spatio-temporal crop type patterns.
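
The abstract describes a dual-stream 3D U-Net in which multispectral and SAR features are combined at several stages by a 3D squeeze-and-excitation (SE) fusion module. A minimal PyTorch sketch of such a fusion block; the channel counts, reduction ratio, and 1×1×1 projection are illustrative assumptions rather than the paper's exact architecture:

```python
import torch
import torch.nn as nn

class SEFusion3D(nn.Module):
    """Fuse multispectral and SAR 3D feature maps with channel-wise SE recalibration."""
    def __init__(self, ms_channels, sar_channels, reduction=8):
        super().__init__()
        fused = ms_channels + sar_channels
        self.squeeze = nn.AdaptiveAvgPool3d(1)           # global (T, H, W) pooling
        self.excite = nn.Sequential(
            nn.Linear(fused, fused // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(fused // reduction, fused),
            nn.Sigmoid(),
        )
        self.project = nn.Conv3d(fused, ms_channels, kernel_size=1)

    def forward(self, ms_feat, sar_feat):
        # ms_feat, sar_feat: (B, C, T, H, W) feature maps from the two encoder streams
        x = torch.cat([ms_feat, sar_feat], dim=1)
        b, c = x.shape[:2]
        w = self.excite(self.squeeze(x).view(b, c)).view(b, c, 1, 1, 1)
        return self.project(x * w)                       # recalibrated, fused features

# Usage: fuse 64-channel optical and 32-channel SAR features at one encoder stage
fusion = SEFusion3D(ms_channels=64, sar_channels=32)
ms = torch.randn(2, 64, 6, 32, 32)     # (batch, channels, time, height, width)
sar = torch.randn(2, 32, 6, 32, 32)
out = fusion(ms, sar)                  # -> (2, 64, 6, 32, 32)
```

Applying such a block at multiple encoder depths is what lets the network weight optical and radar channels differently from stage to stage, which is the intuition behind the multi-stage fusion described above.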


An Integrated Approach for Landscape Element Detection and Characterization using Sentinel-2 and EnMAP Data

August 2024 · 9 Reads

Landscape elements such as hedges, tree rows, and green strips in agroecosystems provide important ecosystem services. However, detailed and up-to-date information on landscape elements is often limited. This study proposes a two-step approach for the identification and characterization of landscape elements using multitemporal Sentinel-2 and hyperspectral EnMAP data. The first step detects landscape elements from Sentinel-2 data with a TensorFlow U-Net model. The second step applies spectral unmixing to hyperspectral EnMAP data at the previously detected landscape elements. Overall, the proposed method proves useful. Results show that the U-Net model is capable of detecting landscape elements with a high degree of certainty (> 93 % OA and > 0.84 IoU on independent test data). Preliminary results of the spectral unmixing are promising, indicating that the research presented here represents an approach that can be built upon for future, large-scale applications.
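
The second step relies on spectral unmixing of hyperspectral pixels at the detected landscape elements. A minimal sketch of linear unmixing with a non-negativity constraint; the endmember library, band count, function names, and simulated spectrum are placeholders, not the study's actual setup:

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(spectrum, endmembers):
    """Non-negative least-squares abundance estimate for one pixel.

    spectrum:   (n_bands,) reflectance of the pixel
    endmembers: (n_bands, n_endmembers) library, e.g. woody vegetation,
                herbaceous cover, bare soil
    Returns abundances normalised to sum to one.
    """
    abundances, _ = nnls(endmembers, spectrum)
    total = abundances.sum()
    return abundances / total if total > 0 else abundances

# Illustrative example with 224 spectral bands and 3 endmembers
rng = np.random.default_rng(1)
endmembers = rng.random((224, 3))
true_fractions = np.array([0.6, 0.3, 0.1])
pixel = endmembers @ true_fractions + rng.normal(0, 0.01, 224)
print(unmix_pixel(pixel, endmembers))   # close to [0.6, 0.3, 0.1]
```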


Classifying Stand Compositions in Clover Grass Based on High-Resolution Multispectral UAV Images

July 2024 · 58 Reads · 1 Citation

In organic farming, clover is an important basis for green manure in crop rotation systems due to its nitrogen-fixing effect. However, clover is often sown in mixtures with grass to achieve a yield-increasing effect. In order to determine the quantity and distribution of clover and its influence on the subsequent crops, clover plants must be identified at the individual plant level and spatially differentiated from grass plants. In practice, this is usually done by visual estimation or extensive field sampling. High-resolution unmanned aerial vehicles (UAVs) offer a more efficient alternative. In the present study, clover and grass plants were classified based on spectral information from high-resolution UAV multispectral images and texture features using a random forest classifier. Three different timestamps were observed in order to depict the phenological development of clover and grass distributions. To reduce data redundancy and processing time, relevant texture features were selected based on a wrapper analysis and combined with the original bands. Including these texture features, a significant improvement in classification accuracy of up to 8% was achieved compared to a classification based on the original bands only. Depending on the phenological stage observed, this resulted in overall accuracies between 86% and 91%. Consequently, high-resolution UAV imagery allows for precise management recommendations for precision agriculture with site-specific fertilization measures.
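
The classification combines spectral bands with texture features in a random forest. A minimal sketch of computing GLCM texture measures per image window and stacking them with spectral means; the window size, chosen texture properties, and placeholder labels are assumptions, not the study's exact feature set:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def glcm_features(window, levels=32):
    """Contrast and homogeneity of one image window (values rescaled to 'levels')."""
    win = np.clip((window * (levels - 1)).astype(np.uint8), 0, levels - 1)
    glcm = graycomatrix(win, distances=[1], angles=[0], levels=levels,
                        symmetric=True, normed=True)
    return [graycoprops(glcm, "contrast")[0, 0],
            graycoprops(glcm, "homogeneity")[0, 0]]

# Illustrative: spectral means plus texture features per 16x16 window
rng = np.random.default_rng(0)
bands = rng.random((5, 256, 256))             # blue, green, red, red-edge, NIR
features, labels = [], []
for i in range(0, 256, 16):
    for j in range(0, 256, 16):
        patch = bands[:, i:i + 16, j:j + 16]
        spectral = patch.mean(axis=(1, 2)).tolist()
        texture = glcm_features(patch[4])     # texture from the NIR band
        features.append(spectral + texture)
        labels.append(rng.integers(0, 2))     # placeholder clover/grass labels

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(features, labels)
```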



Individual Tree Detection and Crown Delineation in the Harz National Park from 2009 to 2022 using Mask R-CNN and Aerial Imagery

July 2024 · 128 Reads · 1 Citation

ISPRS Open Journal of Photogrammetry and Remote Sensing

Forest diebacks pose a major threat to global ecosystems. Identifying and mapping both living and dead trees is crucial for understanding the causes and implementing effective management strategies. This study explores the efficacy of Mask R–CNN for automated forest dieback monitoring. The method detects individual trees, delineates their crowns, and classifies them as alive or dead. We evaluated the approach using aerial imagery and canopy height models in the Harz Mountains, Germany, a region severely affected by forest dieback. To assess the model's ability to track changes over time, we applied it to images from three separate flight campaigns (2009, 2016, and 2022). This evaluation considered variations in acquisition dates, cameras, post-processing techniques, and image tilting. Forest changes were analyzed based on the detected trees' number, spatial distribution, and height. A comprehensive accuracy assessment demonstrated the Mask R–CNN's robust performance, with precision scores ranging from 0.80 to 0.88 and F1-scores from 0.88 to 0.91. These results confirm the model's ability to generalize across diverse image acquisition conditions. While minor changes were observed between 2009 and 2016, the period between 2016 and 2022 witnessed substantial dieback, with a 64.57% loss of living trees. Notably, taller trees appeared to be particularly affected. This study highlights Mask R–CNN's potential as a valuable tool for automated forest dieback monitoring. It enables efficient detection, delineation, and classification of both living and dead trees, providing crucial data for informed forest management practices.
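
A minimal sketch of setting up a torchvision Mask R-CNN with heads for two foreground classes (living and dead trees); this is the generic torchvision fine-tuning recipe, not the study's exact backbone, class scheme, or training configuration:

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_tree_maskrcnn(num_classes=3):
    """Mask R-CNN with heads for background + living tree + dead tree."""
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

    # Replace the box classification head for the new class count
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

    # Replace the mask prediction head accordingly
    in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, 256, num_classes)
    return model

model = build_tree_maskrcnn()
model.eval()
# At inference, each prediction contains boxes, labels (1 = living, 2 = dead),
# per-instance crown masks, and confidence scores.
```

Running the same model over imagery from different flight campaigns is what enables the change analysis described above, provided the detections generalize across acquisition conditions.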


Farmland quality assessment using deep learning and UAVs

May 2024 · 56 Reads · 1 Citation

The transformation of agricultural landscapes due to the intensification of farming practices has placed a major threat on habitat and species diversity. Stopping the loss of biodiversity is addressed in various international treaties, and adequate strategies are required to assess the initial status of biodiversity and to measure the success of policies. While traditional methods of field-based biodiversity monitoring are time-consuming, costly, and highly surveyor-dependent, recent advances in remote sensing and machine learning enable systematic, more general, and cost-effective monitoring. Accordingly, this study investigated whether large-scale biodiversity monitoring campaigns such as the European Monitoring of Biodiversity in Agricultural Landscapes (EMBAL) could be supported by the use of Unmanned Aerial Vehicle (UAV) based remote sensing and Convolutional Neural Networks (CNNs). For this purpose, the Structural Nature Value (SNV) indicator was proposed, which allows an intuitive estimation of the species richness of landscape sections. Subsequently, a CNN was trained to directly predict the SNV. For the accuracy assessment, a survey was conducted in which participants were asked to assess the SNV for various landscape patches, both to obtain an independent test set and to evaluate the intuitiveness of the concept. The CNN achieved a mean weighted F1-score of 0.81 with an overall accuracy of 81 %. It was shown that the CNN can provide promising results not only to aid large-scale biodiversity monitoring campaigns, but also to enhance the quality of their results through the automated analysis of fine-resolution UAV imagery.
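
A minimal sketch of the kind of patch-level CNN classifier that could predict an SNV class per UAV image patch and be scored with a weighted F1; the five ordinal classes, the ResNet-18 backbone, and the random inputs are illustrative assumptions, not the study's setup:

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.metrics import f1_score

NUM_SNV_CLASSES = 5   # assumed ordinal classes from structure-poor to structure-rich

def build_snv_classifier():
    """Pretrained ResNet-18 with its final layer replaced for SNV class prediction."""
    net = models.resnet18(weights="DEFAULT")
    net.fc = nn.Linear(net.fc.in_features, NUM_SNV_CLASSES)
    return net

model = build_snv_classifier()
model.eval()

# Illustrative evaluation on a batch of UAV patches (3-band, 224x224)
patches = torch.randn(8, 3, 224, 224)
true_snv = torch.randint(0, NUM_SNV_CLASSES, (8,))
with torch.no_grad():
    pred_snv = model(patches).argmax(dim=1)
print("weighted F1:", f1_score(true_snv.numpy(), pred_snv.numpy(), average="weighted"))
```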



Citations (63)


... However, several limitations, challenges, and research directions can be drawn from the approach adopted in the present contribution. Weather-dependent performance variations [41] and the need for standardized protocols present ongoing challenges, and we conceptualized the issue from a dataset that needs to be extended to more seasons (work in progress). Our model currently doesn't account for foliage variations, drone velocity modulations, or temporal scanning parameters. ...

Reference:

Drone LiDAR Occlusion Analysis and Simulation from Retrieved Pathways to Improve Ground Mapping of Forested Environments
Comparative analysis of UAV-based LiDAR and photogrammetric systems for the detection of terrain anomalies in a historical conflict landscape
  • Citing Article
  • June 2025

Science of Remote Sensing

... Panchal et al. analyzed different winter crop growth stages in India's Vijapur Taluka using multi-temporal NDVI data to determine optimal periods for crop differentiation [44]. For regional-scale seasonal crop type identification using multispectral and SAR satellite imagery, Wittstruck et al. proposed a novel 3D deep learning U-Net model with squeeze-and-excitation fusion modules to progressively extract and combine image features, improving classification of spectrally similar and heterogeneous classes while mitigating cloud cover limitations [45]. Given Shandong Province's diverse topography and distinct seasonal crop patterns, extracting accurate agricultural land distribution information through remote sensing remains a significant challenge. ...

Multi-Stage Feature Fusion of Multispectral and SAR Satellite Images for Seasonal Crop-Type Mapping at Regional Scale Using an Adapted 3D U-Net Model

... The overall accuracies of the SVM, RF, and ANN were 91.4%, 90.0%, and 91.1%. Nahrstedt et al. [22] classified clover and grass plants based on spectral information (red, green, blue, red-edge, near-infrared) from high-resolution UAV multispectral imagery and texture features using a random forest classifier, with a final overall accuracy of more than 86%. These studies emphasize the potential of UAV multispectral imaging for crop disease monitoring. ...

Classifying Stand Compositions in Clover Grass Based on High-Resolution Multispectral UAV Images

... This facilitates identification of the crops grown, detection of land-cover change, and assessment of the level of expansion in agriculture versus degradation. This information helps policymakers and farmers better manage agricultural landscapes to maximize and optimize land use (Joshi et al., 2016) [48]. Monitoring weather and climate: through satellites, long-term trends in temperature, precipitation, and wind regimes, which directly impact agricultural productivity, can be tracked. ...

A Review of the Application of Optical and Radar Remote Sensing Data Fusion to Land Use Mapping and Monitoring

Remote Sensing

... The best performing model had a recall, precision, and F1 score of 0.797, 0.836, and 0.814, respectively. Lucas et al. [35] used the Mask R-CNN model for individual tree crown detection (ITCD) in forest aerial images from different years, with an average precision of over 0.8. Sani-Mohammed et al. [24] employed an adjusted Mask R-CNN deep learning method to identify and delineate standing dead trees in mixed dense forest aerial images, achieving an average recall, precision, and F1 score of 0.85, 0.88, and 0.87, respectively. ...

Individual Tree Detection and Crown Delineation in the Harz National Park from 2009 to 2022 using Mask R-CNN and Aerial Imagery

ISPRS Open Journal of Photogrammetry and Remote Sensing

... Their ability to extract representative features from satellite imagery enables the identification of critical to subtle changes in land cover, which can offer clues for uncovering illegal activities on the ground, such as mining and illicit harvesting. Furthermore, CNNs can be successfully extended to map other native forest regions, as well as to detect other objects-of-interest, ranging from farmland [67] to plant [68] and tree [69] species. However, despite their effectiveness and robustness, CNNs may encounter challenges that warrant attention. ...

Farmland quality assessment using deep learning and UAVs

... Methods/tools used: [1–7] UAV-based terrain analysis (UAV photogrammetry, DEM); [8–12] LiDAR technology in spatial information (aerial LiDAR, UAV LiDAR); [13–16] doline detection using LiDAR in karst landscapes (airborne LiDAR, DEM); [17, 18] doline detection (GIS, DEM, multi-layer depth maps); [19–21] UAV LiDAR in terrain feature analysis and road geometry (UAV LiDAR, DTM, deep learning); [22, 23] automated detection and mapping of sinkholes (high-resolution maps, digital data); [24–27] UAV for cost-effective high-resolution terrain analysis (UAV, RGB imagery) ...

Detecting Historical Terrain Anomalies with UAV-LiDAR Data Using Spline-Approximation and Support Vector Machines

IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing

... This approach to vegetation mapping requires highly specialised surveying and is very labour-intensive. More recently, remote sensing methods have proven successful at identifying and monitoring intertidal environments (Murray et al., 2022; Bunting et al., 2022), including saltmarsh vegetation (Campbell and Wang, 2019; Blount et al., 2022; Stückemann and Waske, 2022). While natural colour satellite imagery may help to delineate saltmarsh vegetation in coastal settings, recent advances in multispectral data availability have allowed saltmarsh identification using key vegetation indices, such as the Normalised Difference Vegetation Index (NDVI) (Tucker, 1979). ...

Mapping Lower Saxony’s salt marshes using temporal metrics of multi-sensor satellite data

International Journal of Applied Earth Observation and Geoinformation

... Despite the establishment of a dam policy in Ceará, the 21st century is also marked by the impacts of several meteorological droughts. For example, in 2012 a multi-year drought began, lasting for six years until 2017, marking it as the worst drought in terms of rainfall totals in the last hundred years (see Fig. 1 (b); Marengo et al. 2017a; de Lima and Magalhães 2018; Zhang et al. 2021; Pereira et al. 2023). However, the impacts caused by these droughts are different compared to previous centuries. ...

Mapping regional surface water volume variation in reservoirs in northeastern Brazil during 2009–2017 using high-resolution satellite images
  • Citing Article
  • May 2021

The Science of The Total Environment

... The delineation approach proposed in Chapter 2 is based on topographic wetland probability, spectral indices that reflect water and wetland vegetation, and image segmentation. Such approaches are frequently used in large-scale wetland delineation with optical, or optical and radar, remote sensing (Ludwig et al., 2019; Muro et al., 2020; Rapinel et al., 2019), as they work across various wetland types and minimize confusion with upland vegetation. ...

Multitemporal optical and radar metrics for wetland mapping at national level in Albania

Heliyon