Article

Object-based continuous monitoring of land disturbances from dense Landsat time series


Abstract

Accurate mapping of land disturbances, in terms of both timing and location, is a prerequisite for successful downstream disturbance characterization. This study introduces and tests a novel approach that integrates a spatial perspective into dense Landsat time series analysis, called "Object-Based COntinuous monitoring of Land Disturbance" (OB-COLD). The new algorithm builds on the recognition that pixels affected by the same disturbance event often exhibit similar spectral change within a short time; such concurrently changing pixels can be grouped into an analytic spatial unit, the "change object". OB-COLD first generates a change-magnitude snapshot every 60 days by applying per-pixel time series analysis to measure spectral change history. An object-based change analysis is then applied to each time-stamped snapshot: two levels of change objects are generated through over-segmentation and region merging, and the changing area is determined by examining three object-level properties derived at different scales: the average change magnitude, the pre-change cover type, and the object size. Lastly, OB-COLD reconstructs model coefficients for each temporal segment based on the temporal breaks indicated by the new snapshot-based change maps. We tested the new algorithm using 3000 randomly selected reference sample plots and eight Landsat ARD tiles across the continental United States. The accuracy assessment suggests that OB-COLD achieved 76.9% producer's accuracy, significantly higher than the per-pixel time series algorithm (i.e., COLD) (a 16.3-percentage-point increase) while maintaining a comparable user's accuracy (58.7% vs. 57.8%). The quantitative and qualitative evaluations both suggest that OB-COLD can substantially reduce omission errors, particularly for stress disturbances. Most commission errors (73%) are attributed to agricultural practices and climate variability. The proposed algorithm is computationally scalable to large spatiotemporal datasets on advanced cyberinfrastructure resources, holding great potential as the base detection algorithm for the next generation of land disturbance products.
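To make the object-level decision step concrete, the following Python fragment gives a minimal, illustrative sketch of how a per-pixel change-magnitude snapshot could be turned into confirmed change objects. It is not the authors' implementation: plain connected-component labelling stands in for OB-COLD's two-level over-segmentation and region merging, and the thresholds, cover codes, and function names are hypothetical.

```python
# Minimal sketch of an object-level change decision (illustrative, not OB-COLD itself).
import numpy as np
from scipy import ndimage

def flag_change_objects(change_magnitude, candidate_mask, pre_change_cover,
                        mag_threshold=5.0, min_size_by_cover=None):
    """Group candidate change pixels into objects and keep plausible ones.

    change_magnitude : 2-D float array from the per-pixel time series model.
    candidate_mask   : 2-D bool array of pixels flagged as potential change.
    pre_change_cover : 2-D array of non-negative integer land cover labels.
    """
    if min_size_by_cover is None:
        # Hypothetical minimum object sizes (pixels) per pre-change cover type.
        min_size_by_cover = {1: 2, 2: 5}   # e.g. 1 = forest, 2 = cropland

    labels, n_objects = ndimage.label(candidate_mask)
    confirmed = np.zeros_like(candidate_mask)
    for obj_id in range(1, n_objects + 1):
        obj = labels == obj_id
        mean_mag = change_magnitude[obj].mean()
        # Majority pre-change cover type within the object.
        cover = np.bincount(pre_change_cover[obj]).argmax()
        min_size = min_size_by_cover.get(cover, 3)
        # Keep the object only if it is large enough for its cover type
        # and its average change magnitude is high enough.
        if obj.sum() >= min_size and mean_mag >= mag_threshold:
            confirmed[obj] = True
    return confirmed
```

In OB-COLD the corresponding rules operate on two segmentation levels and cover-specific criteria; the sketch only conveys why averaging change magnitude over objects, rather than thresholding single pixels, suppresses isolated false alarms while recovering weakly changed pixels inside a disturbed patch.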



... Satellite image time series (SITS) has become an invaluable resource for studying dynamic changes on Earth's surface, offering an enhanced capability to track terrestrial features over time [92]. Although much of the research in SITS analysis has traditionally focused on pixel-level analysis [10], the growing clarity offered by advances in sensor resolution has highlighted the benefits of object-based analysis [93], [94]. Recent studies have started to delve into the approach of object-based SITS, which generally includes 1) similarity measurement methods, 2) model-based methods, and 3) ML methods. ...
... To address this, some researchers have turned to segmented object analysis, utilizing characteristics of objects to enhance SITS analysis outcomes. This approach faces the challenge of maintaining consistent object segmentation across spatial and temporal scales due to the inherently dynamic nature of landscape boundaries in SITS [93], [100]. Recent studies have made progress by extracting objects post-pixel-level change detection, somewhat alleviating the challenges associated with using segmented objects in dense SITS analysis [93]. ...
Article
Full-text available
Deep learning has gained significant attention in remote sensing, especially in pixel- or patch-level applications. Despite initial attempts to integrate deep learning into object-based image analysis (OBIA), its full potential remains largely unexplored. In this article, as OBIA usage becomes more widespread, we conducted a comprehensive review and expansion of its task subdomains, with or without the integration of deep learning. Furthermore, we have identified and summarized five prevailing strategies to address the challenge of deep learning's limitations in directly processing unstructured object data within OBIA, and this review also recommends some important future research directions. Our goal with these endeavors is to inspire more exploration in this fascinating yet overlooked area and facilitate the integration of deep learning into OBIA processing workflows.
... In a similar fashion, spatial filtering operations allowed Senf and Seidl (2020) and Ye et al. (2021) to reduce omission and commission errors associated with small patches in the disturbance maps. Recently, Ye et al. (2023) have integrated OBIA into a COLD-like algorithm for identifying change objects, i.e. regions of homogeneous change in space and time, at different spatial scales. ...
... Existing approaches (Hughes, Kaylor, and Hayes 2017) and OB-COLD (Ye, Zhu, and Cao 2023) typically perform temporal segmentation either after or before the spatial analysis. In addition to increasing computational and algorithmic complexity, sequential approaches require appropriate parametrization for each stage of the analysis, such as setting the optimal scale for spatial segmentation. ...
... We note that online algorithms, such as CCDC and COLD, need to reinitialize the time series model after a disturbance event, which may result in the omission of consecutive disturbances occurring in close succession. For this reason, backward fitting of the data has been proposed after a disturbance event for the OB-COLD algorithm (Ye, Zhu, and Cao 2023). ...
Article
Full-text available
Time series analysis of medium-resolution multispectral satellite imagery is critical to investigate forest disturbance dynamics at the landscape scale. In particular, the spatial, temporal, and radiometric consistency of Landsat time series data provides unprecedented insight into past disturbances that occurred over the last four decades. Several Landsat time series-based algorithms have been developed to automate the detection of forest disturbances. However, automated detection of non-stand-replacing disturbances based on Landsat time series remains a challenging task due to the difficulty of effectively separating them from spectral noise. Here, we present the High-dimensional detection of Landscape Dynamics (HILANDYN) algorithm, which exploits spatial and spectral information provided by Landsat time series to detect forest disturbance dynamics retrospectively. A novel and unsupervised procedure for changepoint detection in high-dimensional time series allows HILANDYN to perform the temporal segmentation of inter-annual time series into linear trends. The algorithm embeds a noise filter to remove spurious changepoints caused by residual spectral noise in the time series. We tested HILANDYN to detect disturbances that occurred in the forests of the European Alps over a period of 39 years, i.e. between 1984 and 2022, and evaluated its accuracy using a validation dataset of 3000 plots randomly located inside and outside the disturbed patches. We compared HILANDYN with the Bayesian Estimator of Abrupt change, Seasonality, and Trend (BEAST), which is a well-established and high-performing time series-based algorithm for changepoint detection. The quantitative results highlighted that the number of bands, i.e. original Landsat bands and spectral indices, included in the high-dimensional time series and the threshold controlling the significance of changepoints strongly influenced the user’s accuracy (UA). Conversely, changes in the combinations of bands primarily affected the producer’s accuracy (PA). HILANDYN achieved an F1 score of 0.801, which increased to 0.833 when we activated the noise filter, allowing the algorithm to balance UA (83.1%) and PA (83.5%). The qualitative results showed that disturbed forest patches detected by HILANDYN were characterized by a high spatio-temporal consistency, regardless of the disturbance severity. Furthermore, our algorithm was able to detect forest patches associated with secondary disturbances, such as salvage logging, that occur in close succession with respect to the primary event. The comparison with BEAST evidenced a similar sensitivity of the algorithms to non-stand-replacing events, as both achieved comparable PA. However, BEAST struggled to balance UA and PA when using a single parameter set, achieving a maximum F1 score of 0.717. Moreover, the computational efficiency of BEAST in processing high-dimensional time series was very limited due to its univariate nature based on the Bayesian approach. The adaptability of HILANDYN to detect a wide range of disturbance severities using a single parameter set and its computational efficiency in handling high-dimensional time series promotes its scalability to large study areas characterized by heterogeneous ecological conditions.
... The Planet Explorer provides free access to 3-m daily Planet images. As such, we were capable of determining the actual disturbance date for the reference sample pixel, which was more accurate than interpreting the first anomaly date from the time series (Shang et al., 2022; Ye et al., 2021a, 2023) or using the date right in-between the first anomaly and the last pre-disturbance observation date (Bullock et al., 2022; Reiche et al., 2018). While the occasional cloud/shadow contamination or missing datasets in PlanetScope still might cause an inaccurate interpretation of the exact disturbance date, such impacts were negligible because 1) PlanetScope images provided the densest optical images as daily observations; 2) the beginning dates for some disturbance events (such as Hurricane Ian and Mosquito Fire) were well recorded and ascertained so that the recorded disturbance dates could be directly adopted. ...
... continuous region affected by the change process), the Flood Fill algorithm was first applied to the two disturbance probability layers (Ye et al., 2023). Flood Fill is an algorithm to identify adjacent pixels as an object in an image based on their similarity to an initial seed pixel (Van der Walt et al., 2014). ...
... Flood Fill is an algorithm that identifies adjacent pixels as an object in an image based on their similarity to an initial seed pixel (Van der Walt et al., 2014). The Flood Fill algorithm outperformed other segmentation algorithms such as SLIC and watershed in our previous test (Ye et al., 2023), and hence was chosen for this study. For the supervised probability layer, we retained as disturbance patches the regions with an average disturbance probability >50%; for the unsupervised probability layer, we labeled patches as disturbed when their average probability was ≥12.5%, so that disturbances could be identified even when only one anomaly observation had been captured, consistent with the supervised-probability approach. ...
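The patch-averaging logic quoted above can be sketched in a few lines of Python using scikit-image's flood fill. The seed selection rule, tolerance value, and function name below are assumptions for illustration only; only the 50% and 12.5% patch-average cutoffs come from the quoted passage.

```python
# Minimal sketch: grow patches from a disturbance probability layer with flood fill
# and keep those whose average probability exceeds a cutoff (assumed implementation).
import numpy as np
from skimage.segmentation import flood

def probability_patches(prob, seed_cutoff=0.5, tolerance=0.2, mean_cutoff=0.5):
    """Grow patches from high-probability seeds and keep those whose
    average disturbance probability exceeds `mean_cutoff`."""
    visited = np.zeros(prob.shape, dtype=bool)
    kept = np.zeros(prob.shape, dtype=bool)
    # Seeds are pixels whose own probability exceeds the seed cutoff (an assumption).
    for seed in zip(*np.where(prob >= seed_cutoff)):
        if visited[seed]:
            continue
        patch = flood(prob, seed, tolerance=tolerance)   # scikit-image flood fill
        visited |= patch
        if prob[patch].mean() >= mean_cutoff:            # e.g. >50% for supervised,
            kept |= patch                                #      >=12.5% for unsupervised
    return kept

# Usage under the quoted thresholds:
#   disturbed_sup   = probability_patches(supervised_prob,   mean_cutoff=0.50)
#   disturbed_unsup = probability_patches(unsupervised_prob, mean_cutoff=0.125)
```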
Article
Near real-time (NRT) monitoring of land disturbances holds great importance for delivering emergency aid, mitigating negative social and ecological impacts, and distributing resources for disaster recovery. Many past NRT techniques were built upon examining the overall change magnitude of a spectral anomaly with a pre-defined threshold, namely the unsupervised approach. However, because they did not fully consider spectral change direction, change date, and pre-disturbance conditions, these techniques often suffered from low detection sensitivity and high commission errors, especially when only a few satellite observations were available at the early disturbance stage, eventually resulting in a longer lag to produce a reliable disturbance map. For this study, we developed a novel supervised machine learning approach guided by historical disturbance datasets to accelerate land disturbance monitoring. This new approach consisted of two phases. For the first phase, the supervised approach applied retrospective analysis on historical Harmonized Landsat Sentinel-2 (HLS) datasets from 2015 to 2021, combined with several open disturbance products. The disturbance model was constructed for each condition of consecutive anomaly number, with the aim of enhancing the specificity for delineating early-stage disturbance regions. Then, these stage-based models were applied for an NRT scenario to predict disturbance probabilities with 2022 HLS images incrementally on a weekly basis. To demonstrate the capability of this new approach, we developed an operational NRT system incorporating both the unsupervised and supervised approaches. Latency and accuracy were evaluated against 3000 samples that were randomly selected from the five most influential disturbance events of the United States in 2022, based on labels and disturbance dates interpreted from daily PlanetScope images. The evaluation showed that the supervised approach required 15 days (since the start of the disturbance event) to reach the plateau of its F1 curve (where most disturbance pixels are detected with high confidence), seven days earlier with roughly 0.2 F1 score improvement compared to the unsupervised approach (0.733 vs. 0.546 F1 score). Further analysis showed the improvement was mainly due to the substantial decrease in commission errors (17.7% vs. 44.4%). The latency component analysis illustrated that the supervised approach only took an average of 4.1 days to yield the first disturbance alert at its fastest daily updating speed, owing to its decreased sensitivity lag. This finding highlighted the importance of using past knowledge and machine learning to reduce detection delays for an NRT monitoring task.
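The "stage-based" idea described in this abstract, one supervised model per consecutive-anomaly count, can be illustrated with a short, hedged sketch. The feature extraction, training data layout, and class encoding below are assumptions; the sketch only shows why matching the model to the number of confirming observations can improve early-stage specificity.

```python
# Minimal sketch of stage-based disturbance models (assumed data layout, not the paper's code).
from sklearn.ensemble import RandomForestClassifier

def train_stage_models(features_by_stage, labels_by_stage, max_stage=6):
    """features_by_stage[k], labels_by_stage[k]: training samples for pixels that have
    accumulated exactly k consecutive anomaly observations (k = 1..max_stage).
    Labels are assumed binary: 1 = disturbance, 0 = no disturbance."""
    models = {}
    for k in range(1, max_stage + 1):
        rf = RandomForestClassifier(n_estimators=200, n_jobs=-1)
        models[k] = rf.fit(features_by_stage[k], labels_by_stage[k])
    return models

def disturbance_probability(models, k, feature_vector):
    """Predict disturbance probability with the model matching the current
    consecutive-anomaly count k (capped at the highest trained stage)."""
    k = min(k, max(models))
    # Column 1 is the probability of the disturbance class under the binary assumption.
    return models[k].predict_proba([feature_vector])[0, 1]
```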
... The application of remote sensing data in combination with forest resource inventory data is an important way to study the forest carbon sink and its response to global climate change [1][2][3][4][5][6][7]. Carbon storage estimation using remote sensing data usually relies on understanding absorption, reflection, and transmission of solar radiation, as well as other associated mechanisms within the vegetation canopy and atmosphere. ...
... Using satellite-derived information in combination with field-measured biomass, forest carbon estimation models based on image pixels can be used to reveal spatial and temporal variations of forest carbon storage [4]. Generally, forest carbon stock is estimated in "pixels"; however, in China, forest features such as woodland area, stand volume, forest growth, etc. are investigated in "sub-compartments" within the forest management inventory [2]. A "sub-compartment" is a combination of stands with similar biological characteristics and management features. ...
... An object-based approach has the capability to gather homogenous pixels into image objects to form a closed area through multiscale segmentation. This allows us to extract species information through the objects' spatial characteristics such as size, shape, and position [2,9,10]. Therefore, within the object-based approach, objects and sub-compartments have a certain degree of consistency that is more realistic with respect to the forest management inventory. ...
Article
Full-text available
Remote sensing is an important tool for the quantitative estimation of forest carbon stock. This study presents a multiscale, object-based method for the estimation of aboveground carbon stock in Moso bamboo forests. The method differs from conventional pixel-based approaches and is more suitable for Chinese forest management inventory. This research indicates that the construction of a SPOT-6 multiscale hierarchy with the 30 scale as the optimal segmentation scale achieves accurate information extraction for Moso bamboo forests. The producer’s and user’s accuracy are 88.89% and 86.96%, respectively. A random generalized linear model (RGLM), constructed using the multiscale hierarchy, can accurately estimate carbon storage of the bamboo forest in the study area, with a fitting and test accuracy (R2) of 0.74 and 0.64, respectively. In contrast, pixel-based methods using the RGLM model have a fitting and prediction accuracy of 0.24 and 0.01, respectively; thus, the object-based RGLM is a major improvement. The multiscale object hierarchy correctly analyzed the multiscale correlation and responses of bamboo forest elements to carbon storage. Objects at the 30 scale responded to the microstructure of the bamboo forest and had the strongest correlation between estimated carbon storage and measured values. Objects at the 60 scale did not directly inherit the forest information, so the response to the measured carbon storage of the bamboo forest was the smallest. Objects at the 90 scale serve as super-objects containing the forest feature information and have a significant correlation with the measured carbon storage. Therefore, in this study, a carbon storage estimation model was constructed based on the multiscale characteristics of the bamboo forest so as to analyze correlations and greatly improve the fitting and prediction accuracy of carbon storage.
... Despite efforts to develop faster near real-time methods [14], [15], [63], [69], disturbance detection methods are extremely computationally demanding and are typically implemented on high-performance computers (HPCs). They also require multiple post-hazard observations to confirm a disturbance. ...
... Pixel-wise approaches also lack the spatial context required for effective high-resolution image analysis [70]. Few disturbance detection studies have investigated generalised spatially aware methods [69], and none involve a framework that simultaneously flags multiple hazards at a regional level while mapping them at the pixel level. ...
Preprint
The increasing frequency of environmental hazards due to climate change underscores the urgent need for effective monitoring systems. Current approaches either rely on expensive labelled datasets, struggle with seasonal variations, or require multiple observations for confirmation (which delays detection). To address these challenges, this work presents SHAZAM - Self-Supervised Change Monitoring for Hazard Detection and Mapping. SHAZAM uses a lightweight conditional UNet to generate expected images of a region of interest (ROI) for any day of the year, allowing for the direct modelling of normal seasonal changes and the ability to distinguish potential hazards. A modified structural similarity measure compares the generated images with actual satellite observations to compute region-level anomaly scores and pixel-level hazard maps. Additionally, a theoretically grounded seasonal threshold eliminates the need for dataset-specific optimisation. Evaluated on four diverse datasets that contain bushfires (wildfires), burned regions, extreme and out-of-season snowfall, floods, droughts, algal blooms, and deforestation, SHAZAM achieved F1 score improvements of between 0.066 and 0.234 over existing methods. This was achieved primarily through more effective hazard detection (higher recall) while using only 473K parameters. SHAZAM demonstrated superior mapping capabilities through higher spatial resolution and improved ability to suppress background features while accentuating both immediate and gradual hazards. SHAZAM has been established as an effective and generalisable solution for hazard detection and mapping across different geographical regions and a diverse range of hazards. The Python code is available at: https://github.com/WiseGamgee/SHAZAM
... COLD utilizes harmonic regression to model spectral time series, enabling continuous monitoring with high temporal resolution. Key features include dynamic outlier removal, refined initialization, and updated model fitting with the Least Absolute Shrinkage and Selection Operator (LASSO) regression for improved stability and accuracy [53]. The algorithm detects disturbances using a probabilistic approach with parameters like consecutive anomaly observations and temporally adjusted Root Mean Square Error (RMSE). ...
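A minimal sketch of the harmonic-plus-trend model that COLD-style algorithms fit per pixel and per band, estimated here with LASSO as mentioned above, is given below. The design matrix, number of harmonics, regularization strength, and function names are illustrative assumptions rather than the published parameterization.

```python
# Minimal sketch of a COLD-style harmonic time series model fitted with LASSO (assumed settings).
import numpy as np
from sklearn.linear_model import Lasso

def harmonic_design(t_ordinal, n_harmonics=3, period=365.25):
    """Intercept, linear trend, and sine/cosine pairs of the annual cycle."""
    t = np.asarray(t_ordinal, dtype=float)
    cols = [np.ones_like(t), t]
    for k in range(1, n_harmonics + 1):
        w = 2.0 * np.pi * k / period
        cols += [np.cos(w * t), np.sin(w * t)]
    return np.column_stack(cols)

def fit_harmonic_model(t_ordinal, reflectance, alpha=1.0):
    """Fit one spectral band's time series; return the model and its RMSE."""
    X = harmonic_design(t_ordinal)
    model = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000).fit(X, reflectance)
    rmse = np.sqrt(np.mean((model.predict(X) - reflectance) ** 2))
    # New observations are compared against model predictions (scaled by RMSE)
    # to flag anomalies; consecutive anomalies trigger a change.
    return model, rmse
```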
... The selected six algorithms were all single-pixel time-series methods, which ignored the spatial information of surrounding pixels in monitoring forest disturbance. This might lead to incomplete detection of disturbance patches, as the varying change magnitudes across different pixels may not be adequately captured using a uniform forest disturbance criterion [53]. To address these limitations and enhance the completeness and reliability of detected forest disturbance patches, incorporating spatial context into disturbance monitoring is advisable [62][63][64]. ...
Article
Full-text available
Accurate long-term and high-resolution forest disturbance monitoring is pivotal for forest carbon modeling and forest management. Many algorithms have been developed for this purpose based on the Landsat time series, but their nationwide performance across different regions and disturbance types remains unexplored. Here, we conducted a comprehensive comparison and validation of six widely used forest disturbance-monitoring algorithms using 12,328 reference samples in China. The algorithms included three annual-scale (VCT, LandTrendr, mLandTrendr) and three daily-scale (BFAST, CCDC, COLD) algorithms. Results indicated that COLD achieved the highest accuracy, with F1 and F2 scores of 81.81% and 81.25%, respectively. Among annual-scale algorithms, mLandTrendr exhibited the best performance, with F1 and F2 scores of 73.04% and 72.71%, and even outperformed the daily-scale BFAST algorithm. Across China’s six regions, COLD consistently achieved the highest F1 and F2 scores, showcasing its robustness and adaptability. However, regional variations in accuracy were observed, with the northern region exhibiting the highest accuracy and the southwestern region the lowest. When considering different forest disturbance types, COLD achieved the highest accuracies for Fire, Harvest, and Other disturbances, while CCDC was most accurate for Forestation. These findings highlight the necessity of region-specific calibration and parameter optimization tailored to specific disturbance types to improve forest disturbance monitoring accuracy, and also provide a solid foundation for future studies on algorithm modifications and ensembles.
... Finally, to address the "salt and pepper" phenomenon resulting from the pixel-based approach, an object-based approach for monitoring was considered, as done by Yin et al. (2018), using multi-temporal data for image segmentation to create spectral-temporal homogeneous objects. However, for mining disturbances, in which landscape boundaries are constantly changing due to mining activities, this object-based change monitoring approach based on stable landscape boundaries does not seem to be suitable for this study (Ye et al., 2023). Future research needs to explore procedures for object-based change monitoring techniques in the presence of changing object boundaries to improve mining disturbance detection. ...
Article
Full-text available
The exploitation and utilization of coal resources have significantly contributed to global energy security. However, this mining activity has inflicted considerable damage on the ecological environment, particularly on the Tibetan Plateau, where the impact on ecosystems may be even more detrimental. The implementation of high-intensity mining activities leads to rapid changes in land cover/land use. Consequently, it is essential to accurately and effectively monitor mining disturbances. In this study, we propose an approach to capture surface mining disturbances using spatial–temporal rules and time series stacks of Landsat data. First, a time series of annual mining disturbance probability was generated based on Landsat temporal-spectral metrics and random forest. Second, the Landsat-based detection of Trends in Disturbance and Recovery (LandTrendr) algorithm was employed to segment the time series and detect breakpoints. Finally, mining disturbances were captured by further restricting the output of LandTrendr based on spatial–temporal rules of mining disturbances. This approach was applied and evaluated in the Muli mining area of the northeastern Tibetan Plateau, which experienced large-scale and rapid mining disturbances from 2004 to 2014, and identified a disturbed mining area of 43.62 km². The mining sites have been reclaimed after mining, and all reclamation work was done after 2016, with a total reclaimed area of 22.28 km². The validation results indicated that the overall accuracy of mining disturbance and reclamation mapping ranges from 0.7333 to 0.8667, and the F1 scores for mining disturbances and reclamation range from 0.7551 to 0.8723. This study provides a reliable framework for monitoring mining disturbances and reclamation in surface mines, promising to be useful in realizing disturbance monitoring in surface mines for a wide range of mineral types.
... wetland state, since many of the waterbodies were waterfowl-hunting ponds created in the 1980s (Duncan et al., 1999). Future studies could analyze changes in wetland state in greater detail by detecting progressive changes within the same habitat (e.g., eutrophication of grasslands) based on recent continuous change detection methods, such as object-based continuous monitoring of land disturbances (OB-COLD) (Ye et al., 2023), DECODE, or new deep learning methods (Ji et al., 2024; Liu et al., 2022; Sun et al., 2021). ...
Article
Full-text available
Accurate long-term monitoring of wetlands using satellite archives is crucial for effective conservation. While new methods based on temporal profile classification have been useful for long-term monitoring of wetlands, their advantages over traditional classification methods have not yet been demonstrated. This study aimed to compare continuous change detection (using the continuous change detection and classification (CCDC) algorithm) to traditional post-classification change detection for monitoring wetland changes between 1984 and 2022 in a temperate coastal marsh (Marais Poitevin, France) from the Landsat archive. The reference dataset was collected mainly from field observations and used to train and test a random forest classifier. The accuracy of the resulting change map was then assessed for both methods using validation points collected via visual interpretation of historical aerial photographs and Landsat temporal profiles. The change map derived from CCDC had much higher unbiased overall accuracy (0.86 ± 0.02) than that derived from post-classification change detection (0.51 ± 0.03). In addition, wetland loss was much higher than wetland gain (18 % and 2 % of the area, respectively) and was due mainly to conversion of grassland to cropland and urbanization. The study demonstrated that, unlike traditional post-classification change detection, continuous change detection provides maps of wetland changes sufficiently accurate for operational use by managers. The study also confirmed the ongoing impact of agricultural intensification and artificialization on wetland degradation in Europe.
... Many research articles have been published in the field of classification algorithms applied to time-series images, addressing various tasks, such as crop mapping, particularly rice cultivation [23], water surface detection [15], or land disturbance identification [24]. However, the quest for a superpixel algorithm that seamlessly integrates spatial time-series data with unsupervised learning techniques and exhibits practical feasibility at a large scale remains underdeveloped. ...
Article
Full-text available
The progressive evolution of the spatial and temporal resolutions of Earth observation satellites has brought multiple benefits to scientific research. The increasing volume of data with higher frequencies and spatial resolutions offers precise and timely information, making it an invaluable tool for environmental analysis and enhanced decision-making. However, this presents a formidable challenge for large-scale environmental analyses and socioeconomic applications based on spatial time series, often compelling researchers to resort to lower-resolution imagery, which can introduce uncertainty and impact results. In response to this, our key contribution is a novel machine learning approach for dense geospatial time series rooted in superpixel segmentation, which serves as a preliminary step in mitigating the high dimensionality of data in large-scale applications. This approach, while effectively reducing dimensionality, preserves valuable information to the maximum extent, thereby substantially enhancing data accuracy and subsequent environmental analyses. This method was empirically applied within the context of a comprehensive case study encompassing the 2002–2022 period with 8-day-frequency normalized difference vegetation index data at 250-m resolution in an area spanning 43,470 km². The efficacy of this methodology was assessed through a comparative analysis, comparing our results with those derived from 1000-m-resolution satellite data and an existing superpixel algorithm for time series data. An evaluation of the time-series deviations revealed that using coarser-resolution pixels introduced an error that exceeded that of the proposed algorithm by 25% and that the proposed methodology outperformed other algorithms by more than 9%. Notably, this methodological innovation concurrently facilitates the aggregation of pixels sharing similar land-cover classifications, thus mitigating subpixel heterogeneity within the dataset. Further, the proposed methodology, which is used as a preprocessing step, improves the clustering of pixels according to their time series and can enhance large-scale environmental analyses across a wide range of applications.
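The general preprocessing idea in this abstract, aggregating dense per-pixel time series into per-superpixel series, can be sketched as follows. SLIC is used here purely for illustration (the cited study proposes its own superpixel algorithm for time series), and the array shapes, parameters, and function name are assumptions.

```python
# Minimal sketch: aggregate an NDVI time series stack by superpixel (illustrative only).
import numpy as np
from skimage.segmentation import slic

def superpixel_time_series(ndvi_stack, n_segments=2000, compactness=0.1):
    """ndvi_stack: (T, H, W) array of NDVI images. Returns superpixel labels (H, W)
    and an (n_labels, T) array of mean NDVI time series per superpixel."""
    # Segment the temporal mean image so superpixels follow stable land-cover patterns.
    mean_img = ndvi_stack.mean(axis=0)
    labels = slic(mean_img, n_segments=n_segments, compactness=compactness,
                  channel_axis=None, start_label=0)
    n_labels = labels.max() + 1
    counts = np.bincount(labels.ravel(), minlength=n_labels)
    series = np.zeros((n_labels, ndvi_stack.shape[0]))
    for t in range(ndvi_stack.shape[0]):
        # Mean value of each superpixel at time t.
        sums = np.bincount(labels.ravel(), weights=ndvi_stack[t].ravel(),
                           minlength=n_labels)
        series[:, t] = sums / counts
    return labels, series
```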
... To reduce the omission error, the algorithm parameters can be adjusted with a lower change probability and fewer consecutive anomaly observations. Alternatively, exploring other variants of the COLD algorithm, such as OB-COLD (Ye et al., 2023), or modifying the COLD algorithm for specific types of disturbance using the artificial surface index (ASI) (Zhao and Zhu, 2022), could further improve the accuracy of construction change identification and classification. ...
Article
Full-text available
Highlights
• Construction changes are detected using the U-net model and satellite time series.
• A novel solution for using the U-net model on small and isolated change objects.
• This approach has the potential for monitoring global construction changes.
Abstract
Monitoring construction changes is essential for understanding the anthropogenic impacts on the environment. However, mapping construction changes at a medium scale (i.e., 30 m) using satellite time series and deep learning models presents challenges due to their large spectral variability during different phases of construction and the presence of small and isolated change targets. These challenges reduce the effectiveness of feature extraction from deep convolutional layers. To address these issues, we propose a novel Classify Areas with Potential and then Exclude the Stable pixels (hereafter called CAPES) method using a U-net model along with per-pixel-based time series model information derived from the COntinuous monitoring of Land Disturbance (COLD) algorithm (Zhu et al., 2020). Our major findings are as follows: (1) the U-net with time series model information performed best when combining time series model coefficients and RMSE values extracted before and after the change (average F1 score of 70.8%); (2) the CAPES approach substantially improves the accuracy by addressing the loss of spatial information for small and isolated construction change targets in deep convolutional layers; (3) the U-net with time series model information showed better performance than other pixel-based machine learning algorithms for monitoring construction change; (4) our model can be transferred to different time periods and geographic locations with similar performance as the baseline model after fine-tuning.
... Additionally, it demonstrated robustness in handling image misregistration and highlighted the potential impact of automation in decision-making for mapping natural disasters. Another study introduced a new algorithm called 'Object-Based Continuous Monitoring of Land Disturbance', which integrates a spatial perspective into the Landsat time series analysis to accurately map land disturbances (Ye et al., 2023). The algorithm uses an object-based change analysis to group pixels with similar spectral changes into 'change objects' and achieved higher accuracy in detecting disturbances compared to the per-pixel time series algorithm, making it a promising base detection algorithm for large-scale spatiotemporal datasets. ...
Article
Full-text available
The detection of changes in asbestos roofing is vital due to the significant health risks associated with asbestos exposure. Utilising remote sensing for this task is promising, yet it encounters challenges like misregistration and variations in illumination within temporal images. This study presents a novel object-based deep learning framework with a post-classification structure designed specifically to detect changes in asbestos roofing. The method employs a hybrid deep learning model that utilises DenseNet121 to extract spatial features and integrates with either LSTM (DenseNet-LSTM) or conv1D (DenseNet-Conv1D) to capture temporal features. Using VHR aerial imagery, image patch sequences were generated for each roofing object across seven temporal points. After training the model, labels were assigned to each image patch in the sequence through binary classification. In post-processing, sequence alignment based on Hamming distance was used to enhance label assignments by aligning them with a pre-determined set of valid label sequences. Finally, the temporal change map was generated by comparing label transitions across temporal points. The results highlighted the model's robust capability to address inconsistencies across temporal points without relying on preprocessing techniques to manage such temporal discrepancies. The DenseNet-LSTM and DenseNet-Conv1D models demonstrated comparable classification performance in asbestos roofing detection, achieving average overall accuracies of 95.8% and 96.0%, respectively. Furthermore, the model's generalisability was underscored by a test on an independent dataset, which produced up to 94% detection accuracy for the asbestos class. While the framework has limitations in detecting painted asbestos roofing and partial changes, it offers a reproducible approach for change detection tasks that target a specific roofing type, rather than detecting changes in all roofing types within remote sensing imagery.
... Spectral trajectory-based detection methods have been commonly used for mapping land disturbances. Recent studies have included the Landsat-based detection of trends in disturbance and recovery (LandTrendr) (Kennedy et al., 2010), continuous change detection and classification (CCDC) (Zhu and Woodcock, 2014), continuous subpixel monitoring (CSM) (Deng and Zhu, 2020), and continuous monitoring of land disturbance (COLD) (Ye et al., 2023). The LT-GEE algorithm (GEE-version Landsat-based detection of trends in disturbance and recovery) was used to detect land disturbances, which mainly referred to the conversion from vegetation to other impervious surfaces (Wang et al., 2020b). ...
Article
Full-text available
The manual extraction of land disturbances associated with oil exploration, which normally includes resource roads, mining facilities, and well pads, presents significant challenges in terms of cost and time. Accurate monitoring and mapping of land disturbances resulting from oil exploration plays a crucial role in conducting comprehensive environmental assessments and facilitating effective land reclamation initiatives. However, prevailing deep learning methodologies in the realm of oil and gas exploration primarily focus on oil spill detection, neglecting the critical aspect of land disturbances resulting from oil exploration, thus overlooking the impact on land. Furthermore, given that the well sites are scattered and relatively diminutive compared to other land covers, their detection poses substantial difficulties. This paper proposes an automatic error-correcting (AEC) algorithm to address deficiencies in ground truth data quality. This AEC method was integrated into the deep-learning framework for land disturbance extraction, specifically tailored for land disturbances analysis associated with oil exploration. The efficacy of our method was validated on a dataset collected in Alberta covering an area of oil sand mining sites. The application of the AEC algorithm significantly enhanced the accuracy of land disturbance analysis, thereby contributing to a more effective hydrocarbon exploration impact analysis and facilitating the timely planning by the Alberta government. The results demonstrate notable improvements in both average pixel accuracy (AA) and mean intersection over union (mIoU), ranging from 8.3% to 15.4% and 0.5% to 5.8%, respectively. These enhancements, which have profound implications for the precision of land disturbance detection, prove that the proposed AEC algorithm can serve a dual purpose: correcting errors in the dataset and efficiently detecting land disturbance features in the oil exploration area.
... We tested and compared two ways for generating disturbance probability: (1) the proposed method based on the models trained from historical datasets (called supervised) and (2) the traditional anomaly method based on an empirical change-magnitude threshold (called unsupervised). For the supervised probability, we input the associated predictor vector into the different disturbance models based on the current consecutive anomaly number at the pixel level (we used 0.9 chi-square probability for detecting anomalies, same as "anomaly generation" for Stage 1), and then predicted its random-forest disturbance probability. ... To generate spatially complete disturbance patches, the floodfill segmentation algorithm was first applied to the two disturbance probability layers (Ye et al., 2023). For the supervised probability layer, we retained as disturbance patches the regions with an average disturbance probability > 50%; for the unsupervised-probability layer, we labeled patches as disturbed when their average probability was >= 12.5%, so that disturbance identification was possible even when only one anomaly observation was captured, consistent with the supervised-probability approach. ...
Preprint
Near real-time (NRT) monitoring of land disturbances holds great importance for delivering emergency aid, mitigating negative social and ecological impacts, and distributing resources for disaster recovery. Many past NRT techniques were built upon examining the overall change magnitude of a spectral anomaly with a pre-defined threshold, namely the unsupervised approach. However, because they did not fully consider spectral change direction, change date, and pre-disturbance conditions, these techniques often suffered from low detection sensitivity and high commission errors, especially when only a few satellite observations were available at the early disturbance stage, which could eventually result in a longer lag to produce a reliable disturbance map. For this study, we developed a novel supervised machine learning approach guided by historical disturbance datasets for accelerating land disturbance monitoring. This new approach first applied retrospective analysis based on historical Harmonized Landsat Sentinel-2 (HLS) datasets from 2015 to 2021 and several open disturbance products, in which various multifaceted change-related predictors were extracted from satellite time series, followed by separate disturbance model construction for each consecutive anomaly number. Then, these models were applied for NRT prediction with a per-pixel disturbance probability with new observations (e.g., 2022 HLS images) ingested incrementally on a weekly basis. We developed this operational NRT system incorporating both unsupervised and supervised approaches. Latency and accuracy were evaluated against 3,000 samples randomly selected from the five most influential disturbance events of the United States in 2022 based on labels and disturbance dates interpreted from daily PlanetScope images. The evaluation showed that the supervised approach required 15 days (since the start of the disturbance event) to reach the plateau of its F1 curve (where disturbances are detected with high confidence), seven days earlier with roughly 0.2 F1 score improvement compared to the unsupervised approach (0.733 vs. 0.546 F1 score). Further analysis showed the improvement was mainly due to the substantial decrease of commission errors (17.7% vs. 44.4%). The latency component analysis indicated that the supervised approach only took an average of 4.1 days to yield the first disturbance alert at its fastest alerting speed when the NRT platform made a daily update. This finding highlighted the importance of past knowledge and machine learning for accelerating an NRT monitoring task.
... Image segmentation is the first step in performing object-oriented change detection. The multi-scale segmentation algorithm effectively suppresses salt-and-pepper noise [35,36]. The images are clustered from bottom to top at different scales according to spectral similarity and shape factor to form homogeneous image objects within the scale range. ...
Article
Full-text available
A full understanding of the patterns, trends, and strategies for long-term ecosystem changes helps decision-makers evaluate the effectiveness of ecological restoration projects. This study identified the ecological restoration approaches on planted forest, natural forest, and natural grassland protection during 2000–2022 based on a developed object-oriented continuous change detection and classification (OO-CCDC) method. Taking the Loess hilly region in the southern Ningxia Hui Autonomous Region, China as a case study, we assessed the ecological effects after protecting forest or grassland automatically and continuously by highlighting the location and change time of positive or negative effects. The results showed that the accuracy of ecological restoration approaches extraction was 90.73%, and the accuracies of the ecological restoration effects were 86.1% in time and 84.4% in space. A detailed evaluation from 2000 to 2022 demonstrated that positive effects peaked in 2013 (1262.69 km2), while the highest negative effects were observed in 2017 (54.54 km2). In total, 94.39% of the planted forests, 99.56% of the natural forest protection, and 62.36% of the grassland protection were in a stable pattern, and 35.37% of the natural grassland displayed positive effects, indicating a proactive role for forest management and ecological restoration in an ecologically fragile region. The negative effects accounted for a small proportion, only 2.41% of the planted forests concentrated in Pengyang County and 2.62% of the natural grassland protection mainly distributed around the farmland in the central-eastern part of the study area. By highlighting regions with positive effects as acceptable references and regions with negative effects as essential conservation objects, this study provides valuable insights for evaluating the effectiveness of the integrated ecological restoration pattern and determining the configuration of ecological restoration measures.
... The forest sub-compartment is the minimum object for information statistics and forest resource management, and it is also a basic unit of forest inventory. In addition, the natural characteristics within each sub-compartment, such as the condition of the stand, stand type, forest-age group, and species composition, were always similar but significantly different from those of adjacent sub-compartments [30]. Likewise, there were similar image features in the remote sensing imagery of a single sub-compartment, such as reflectance and texture features [31]. ...
Article
Full-text available
Stand age is a significant factor when investigating forest resource management. How to obtain age data at a sub-compartment level on a large regional scale conveniently and in real time has become an urgent scientific challenge in forestry research. In this study, we established two strategies for stand-age estimation at sub-compartment and pixel levels, specifically object-based and pixel-based approaches. First, the relationship between canopy height and stand age was established based on field measurement data, which was achieved at the Mao’er Mountain Experimental Forest Farm in 2020 and 2021. The stand age was estimated using the relationship between the canopy height, the stand age, and the canopy-height map, which was generated from multi-resource remote sensing data. The results showed that the validation accuracy of the object-based estimation results of the stand age and the canopy height was better than that of the pixel-based estimation results, with a root mean squared error (RMSE) increase of 40.17% and 33.47%, respectively. Then, the estimated stand age was divided into different age classes and compared with the forest inventory data (FID). As a comparison, the object-based estimation results had better consistency with the FID in the region of the broad-leaved forests and the coniferous forests. In addition, the pixel-based estimation results had better accuracy in the mixed forest regions. This study provided a reference for estimating stand age and met the requirements for stand-age data at the pixel and sub-compartment levels for studies involving different forestry applications.
... Note that this problem is essentially the same as deep learning-based semantic land object detection [115][116][117]. Several approaches have been proposed to solve the problem with a single image, e.g., a composite pattern of multiple bands [118], or as a time series analysis with the assumption of dynamically developed landscape structures [119]. Our solution provides an alternative method for land cover change detection based on image classification using scripts within the GRASS GIS framework. ...
Article
Full-text available
Automated classification of satellite images is a challenging task that enables the use of remote sensing data for environmental modeling of Earth’s landscapes. In this document, we implement a GRASS GIS-based framework for discriminating land cover types to identify changes in the endorheic basins of the ephemeral salt lakes Chott Melrhir and Chott Merouane, Algeria; we employ embedded algorithms for image processing. This study presents a dataset of the nine Landsat 8–9 OLI/TIRS satellite images obtained from the USGS for a 9-year period, from 2014 to 2022. The images were analyzed to detect changes in water levels in ephemeral lakes that experience temporal fluctuations; these lakes are dry most of the time and are fed with water during rainy periods. The unsupervised classification of images was performed using GRASS GIS algorithms through several modules: ‘i.cluster’ was used to generate image classes; ‘i.maxlik’ was used for classification using the maximal likelihood discriminant analysis, and auxiliary modules, such as ‘i.group’, ‘r.support’, ‘r.import’, etc., were used. This document includes technical descriptions of the scripts used for image processing with detailed comments on the functionalities of the GRASS GIS modules. The results include the identified variations in the ephemeral salt lakes within the Algerian part of the Sahara over a 9-year period (2014–2022), using a time series of Landsat OLI/TIRS multispectral images that were classified using GRASS GIS. The main strengths of the GRASS GIS framework are the high speed, accuracy, and effectiveness of the programming codes for image processing in environmental monitoring. The presented GitHub repository, which contains scripts used for the satellite image analysis, serves as a reference for the interpretation of remote sensing data for the environmental monitoring of arid and semi-arid areas of Africa.
Article
Full-text available
The discipline of land change science has been evolving rapidly in the past decades. Remote sensing played a major role in one of the essential components of land change science, which includes observation, monitoring, and characterization of land change. In this paper, we proposed a new framework of the multifaceted view of land change through the lens of remote sensing and recommended five facets of land change including change location, time, target, metric, and agent. We also evaluated the impacts of spatial, spectral, temporal, angular, and data-integration domains of the remotely sensed data on observing, monitoring, and characterization of different facets of land change, as well as discussed some of the current land change products. We recommend clarifying the specific land change facet being studied in remote sensing of land change, reporting multiple or all facets of land change in remote sensing products, shifting the focus from land cover change to specific change metric and agent, integrating social science data and multi-sensor datasets for a deeper and fuller understanding of land change, and recognizing limitations and weaknesses of remote sensing in land change studies.
Article
Full-text available
Monitoring nighttime light (NTL) change enables us to quantitatively analyze the dynamic patterns of human activity and socioeconomic features. NASA's Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB) atmospheric- and Lunar-BRDF-corrected Black Marble product (VNP46A2) provides daily global nighttime radiances with high temporal consistency. However, timely and continuous monitoring of NTL changes based on the dense daily DNB time series is still lacking. In this study, we proposed a novel Viewing Zenith Angle (VZA) stratified COntinuous monitoring of Land Disturbance (COLD) algorithm (VZA-COLD) to detect NTL change at 15 arc-second spatial resolution with daily updating capability based on NASA's Black Marble products. Specifically, we divided the clear observations into four VZA intervals (0–20°, 20°–40°, 40°–60°, and 0–60°) to mitigate the temporal variation of the NTL data caused by the combined angular effects of viewing geometry and the complex surface conditions (e.g., building heights, vegetation canopy covers, etc.). Single-term harmonic models were continuously estimated for new observations from each VZA interval, and by comparing the model predictions with the actual DNB observations, a unified set of NTL changes can be captured continuously among the different VZA intervals. The final NTL change maps were generated after excluding the consistent dark pixels. Results showed that the VZA-COLD algorithm reduced the DNB data temporal variations caused by disparities among different viewing angles and surface conditions, and successfully detected NTL changes for six globally distributed test sites with an overall accuracy of 99.71%, a user's accuracy of 87.18%, and a producer's accuracy of 68.88% for the NTL change category.
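The VZA-stratified modelling described in this abstract can be illustrated with a brief sketch: observations are split into viewing-zenith-angle bins and a single-term harmonic model is fitted per bin, so new radiances are compared with predictions from the same angular stratum. The bin edges follow the abstract; everything else (minimum observation count, fitting routine, function names) is an illustrative assumption.

```python
# Minimal sketch of per-VZA-interval single-term harmonic models (assumed implementation).
import numpy as np

VZA_BINS = [(0, 20), (20, 40), (40, 60), (0, 60)]   # degrees, as listed in the abstract

def fit_single_harmonic(t, y, period=365.25):
    """Fit intercept + one annual cosine/sine term by least squares; t in days."""
    w = 2.0 * np.pi / period
    X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def predict_single_harmonic(coef, t, period=365.25):
    w = 2.0 * np.pi / period
    return coef[0] + coef[1] * np.cos(w * t) + coef[2] * np.sin(w * t)

def stratified_models(t, radiance, vza):
    """Fit one harmonic model per VZA interval from clear DNB observations.
    Differences between predictions and new observations within the matching
    stratum would then drive the change test."""
    t, radiance, vza = map(np.asarray, (t, radiance, vza))
    models = {}
    for lo, hi in VZA_BINS:
        sel = (vza >= lo) & (vza <= hi)
        if sel.sum() >= 6:          # require a handful of observations per bin (assumption)
            models[(lo, hi)] = fit_single_harmonic(t[sel], radiance[sel])
    return models
```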
Article
Full-text available
Coastal tidal wetlands are highly altered ecosystems exposed to substantial risk due to widespread and frequent land-use change coupled with sea-level rise, leading to disrupted hydrologic and ecologic functions and ultimately, significant reduction in climate resiliency. Knowing where and when the changes have occurred, and the nature of those changes, is important for coastal communities and natural resource management. Large-scale mapping of coastal tidal wetland changes is extremely difficult due to their inherent dynamic nature. To bridge this gap, we developed an automated algorithm for DEtection and Characterization of cOastal tiDal wEtlands change (DECODE) using dense Landsat time series. DECODE consists of three elements, including spectral break detection, land cover classification and change characterization. DECODE assembles all available Landsat observations and introduces a water level regressor for each pixel to flag the spectral breaks and estimate harmonic time-series models for the divided temporal segments. Each temporal segment is classified (e.g., vegetated wetlands, open water, and others – including unvegetated areas and uplands) based on the phenological characteristics and the synthetic surface reflectance values calculated from the harmonic model coefficients, as well as a generic rule-based classification system. This harmonic model-based approach has the advantage of not needing the acquisition of satellite images at optimal conditions (i.e., low tide status) to avoid underestimating coastal vegetation caused by the tidal fluctuation. At the same time, DECODE can also characterize different kinds of changes including land cover change and condition change (i.e., land cover modification without conversion). We used DECODE to track status of coastal tidal wetlands in the northeastern United States from 1986 to 2020. The overall accuracy of land cover classification and change detection is approximately 95.8% and 99.8%, respectively. The vegetated wetlands and open water were mapped with user's accuracy of 94.6% and 99.0%, and producer's accuracy of 98.1% and 93.5%, respectively. The cover change and condition change were mapped with user's accuracy of 68.0% and 80.0%, and producer's accuracy of 80.5% and 97.1%, respectively. Approximately 3283 km² of the coastal landscape within our study area in the northeastern United States changed at least once (12% of the study area), and condition changes were the dominant change type (84.3%). Vegetated coastal tidal wetland decreased consistently (~2.6 km² per year) in the past 35 years, largely due to conversion to open water in the context of sea-level rise.
Article
Full-text available
Forest disturbances reduce the extent of natural habitats, biodiversity, and carbon sequestered in forests. With the implementation of the international framework Reducing Emissions from Deforestation and forest Degradation (REDD+), it is important to improve the accuracy in the estimation of the extent of forest disturbances. Time series analyses, such as Breaks for Additive Season and Trend (BFAST), have been frequently used to map tropical forest disturbances with promising results. Previous studies suggest that in addition to magnitude of change, disturbance accuracy could be enhanced by using other components of BFAST that describe additional aspects of the model, such as its goodness-of-fit, NDVI seasonal variation, temporal trend, historical length of observations and data quality, as well as by using separate thresholds for distinct forest types. The objective of this study is to determine if the BFAST algorithm can benefit from using these model components in a supervised scheme to improve the accuracy of forest disturbance detection. Random forest and support vector machine algorithms were trained and verified using 238 points in three different datasets: all-forest, tropical dry forest, and temperate forest. The results show that the highest accuracy was achieved by the support vector machine algorithm using the all-forest dataset. Although the increase in accuracy of the latter model vs. a magnitude threshold model is small, i.e., 0.14% for sample-based accuracy and 0.71% for area-weighted accuracy, the standard error of the estimated total disturbed forest area was 4352.59 ha smaller, while the annual disturbance rate was also smaller by 1262.2 ha year⁻¹. The implemented approach can be useful to obtain more precise estimates of forest disturbance, as well as its associated carbon emissions.
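A minimal sketch of the supervised scheme described above: a random forest and a support vector machine trained on features derived from a time-series model (change magnitude plus additional model components). The feature table and labels below are synthetic placeholders; only the general workflow (scikit-learn classifiers with cross-validation) is illustrated, not the study's data or settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 238  # the study used 238 verified points

# Placeholder feature table: change magnitude plus other model components named in the abstract
# (goodness-of-fit, seasonal amplitude, trend slope, history length, data quality).
X = rng.normal(size=(n, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, n) > 0).astype(int)  # 1 = disturbed (synthetic)

rf = RandomForestClassifier(n_estimators=500, random_state=0)
svm = SVC(kernel="rbf", C=1.0, gamma="scale")

print("RF  CV accuracy:", cross_val_score(rf, X, y, cv=5).mean().round(3))
print("SVM CV accuracy:", cross_val_score(svm, X, y, cv=5).mean().round(3))
```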
Article
Full-text available
Spatiotemporal quantification of surface water and flooding is essential given that floods are among the largest natural hazards. Effective disaster response management requires near real-time information on flood extent. Satellite remote sensing is the only way of monitoring these dynamics across vast areas and over time. Previous water and flood mapping efforts have relied on optical time series, despite cloud contamination. This reliance on optical data is due to the availability of systematically acquired and easily accessible optical data globally for over 40 years. Prior research used either MODIS or Landsat data, trading either high temporal density but lower spatial resolution or lower cadence but higher spatial resolution. Both MODIS and Landsat pose limitations as Landsat can miss ephemeral floods, whereas MODIS misses small floods and inaccurately delineates flood edges. Leveraging the combined 3–4 day temporal frequency of the existing Landsat-8 (L8) and two Sentinel-2 (S2) satellites, in this research we assessed whether the increased temporal frequency of the three sensors improves our ability to detect surface water and flooding extent compared to a single sensor (L8 alone). Our study area was Australia’s Murray-Darling Basin, one of the world’s largest dryland basins that experiences ephemeral floods. We applied machine learning to NASA’s Harmonized Landsat Sentinel-2 (HLS) Surface Reflectance Product, which combines L8 and S2 observations, to map surface water and flooding dynamics. Our overall accuracy, estimated from a stratified random sample, was 99%. Our user’s and producer’s accuracies for the water class were 80% (±3.6%, standard error) and 76% (±5.8%). We focused on 2019, one of the most recent years when all three HLS sensors operated at full capacity. Our results show that water area (permanent and flooding) identified with the HLS was greater than that identified by L8, and some short-lived flooding events were detected only by the HLS. Comparison with high resolution (3 m) PlanetScope data identified extensive mixed pixels at the 30 m HLS resolution, highlighting the need for improved spatial resolution in future work. The HLS has been able to detect floods in cases where one sensor (L8) alone could not, despite 2019 being one of the driest years in the area, with few flooding events. The dense optical time-series offered by the HLS data is thus critical for capturing temporally dynamic phenomena (i.e., ephemeral floods in drylands), highlighting the importance of harmonized data such as the HLS.
Article
Full-text available
The success and rate of forest regeneration has consequences for sustainable forest management, climate change mitigation, and biodiversity, among others. Systematically monitoring forest regeneration over large and often remote areas is challenging. Remotely sensed data and associated analytical approaches have demonstrated consistent and transparent options for spatially-explicit characterization of vegetation return following disturbance. Moreover, time series of satellite imagery enable the establishment of spatially meaningful recovery baselines that can provide a benchmark for identifying areas that are either under- or over-performing relative to those baselines. This information allows for the investigation and/or prioritization of areas requiring some form of management intervention, including guiding tree planting initiatives. In this research, we assess recovery following stand replacing disturbances for the 650 Mha forested ecosystems of Canada for the period 1985–2017, wherein ~51 Mha of Canada's forested ecosystems were impacted by wildfire, and ~ 21 Mha were impacted by harvesting. For quantification of forest recovery, we implement the Years to Recovery or Y2R metric using Landsat time series data based on the Normalized Burn Ratio (NBR) to relate the number of years required for a pixel to return to 80% of its pre-disturbance NBR value. By the end of the analyzed period, 76% of areas impacted by wildfire were considered spectrally recovered compared to 93% of harvested areas. On average, we found that harvest areas had more rapid spectral recovery (mean Y2R = 6.1 years) than wildfire (mean Y2R = 10.6 years) and importantly, that Y2R varied by ecozone, disturbance type, pre-disturbance land cover, and latitude. We used airborne laser scanning data to assess whether pixels that were considered spectrally recovered had attained United Nations Food and Agricultural Organization benchmarks of canopy height (>5 m) and cover (>10%) across four geographic regions representing different forest types. Overall, 87% and 97% of recovered pixels sampled in harvests and wildfires, respectively, had achieved at least one of the benchmarks, with benchmarks of height more readily achieved than benchmarks of cover. By analyzing spatial patterns of Y2R, we identified areas that had significant positive or negative spatial clustering in their rate of spectral recovery. Approximately 3.5–4% of areas disturbed by wildfire or harvest had significant positive spatial clustering, indicative of slower spectral recovery rates; these areas were also less likely to have attained benchmarks of height and cover. Conversely, we identified significant negative spatial clustering for 0.94% of areas recovering from harvest and 1.93% of areas recovering from wildfire, indicative of spectral recovery that was more rapid than the ecozonal baseline. Herein, we demonstrated that remote sensing can provide spatial intelligence on the nature of disturbance-recovery dynamics in forested ecosystems over large areas and moreover, can retrospectively quantify and characterize historic forest recovery trends within the past three decades that have implications for forest management, climate change mitigation, and restoration initiatives in the near term.
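The Y2R rule itself is simple enough to state in a few lines. The sketch below assumes an annual NBR series and a pre-disturbance baseline taken as the mean of the two years before disturbance; both details are assumptions of this illustration rather than values taken from the paper.

```python
import numpy as np

def years_to_recovery(years, nbr, dist_year, pre_window=2, threshold=0.8):
    """Years until annual NBR first returns to `threshold` (80%) of its pre-disturbance value.

    Averaging the two years before disturbance (`pre_window`) is an assumption of this sketch.
    Returns np.nan if the pixel has not recovered by the end of the series.
    """
    years = np.asarray(years)
    nbr = np.asarray(nbr, dtype=float)
    pre = nbr[(years >= dist_year - pre_window) & (years < dist_year)].mean()
    target = threshold * pre
    recovered = (years > dist_year) & (nbr >= target)
    if not recovered.any():
        return np.nan
    return int(years[recovered][0] - dist_year)

# Toy pixel: harvested in 2000, gradual spectral recovery afterwards.
yrs = np.arange(1995, 2018)
nbr = np.where(yrs < 2000, 0.62, 0.10 + 0.07 * np.clip(yrs - 2000, 0, None))
nbr = np.clip(nbr, None, 0.62)
print("Y2R:", years_to_recovery(yrs, nbr, dist_year=2000), "years")
```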
Article
Full-text available
The increasing availability of high-quality remote sensing data and advanced technologies has spurred land cover mapping to characterize land change from local to global scales. However, most land change datasets either span multiple decades at a local scale or cover limited time over a larger geographic extent. Here, we present a new land cover and land surface change dataset created by the Land Change Monitoring, Assessment, and Projection (LCMAP) program over the conterminous United States (CONUS). The LCMAP land cover change dataset consists of annual land cover and land cover change products over the period 1985–2017 at a 30 m resolution using Landsat and other ancillary data via the Continuous Change Detection and Classification (CCDC) algorithm. In this paper, we describe our novel approach to implement the CCDC algorithm to produce the LCMAP product suite composed of five land cover products and five products related to land surface change. The LCMAP land cover products were validated using a collection of ∼25000 reference samples collected independently across CONUS. The overall agreement for all years of the LCMAP primary land cover product reached 82.5 %. The LCMAP products are produced through the LCMAP Information Warehouse and Data Store (IW+DS) and shared Mesos cluster systems that can process, store, and deliver all datasets for public access. To our knowledge, this is the first set of published 30 m annual land change datasets that include land cover, land cover change, and spectral change spanning from the 1980s to the present for the United States. The LCMAP product suite provides useful information for land resource management and facilitates studies to improve the understanding of terrestrial ecosystems and the complex dynamics of the Earth system. The LCMAP system could be implemented to produce global land change products in the future. The LCMAP products introduced in this paper are freely available at 10.5066/P9W1TO6E (LCMAP, 2021).
Article
Full-text available
Forest disturbance and recovery detection is vital for assessing ecosystem resilience and services and, in turn, for establishing sustainable ecosystem development. Time series analyses of remote sensing data provide essential and effective methods in such research. Some studies have incorporated spatial structural characteristics to improve the spatial accuracy of detecting abrupt forest disturbances; however, few of them paid attention to the detection of recovery during ecosystem dynamics. To more comprehensively detect forest disturbance and recovery and explore the effectiveness of incorporating spatial structural metrics in dense Landsat temporal analysis, this study applied the LandTrendr algorithm using the normalized burn ratio (NBR) and the NBR-based spatial structural metrics time series. The spatial structural metrics (i.e., texture metrics) were calculated using the grey-level co-occurrence matrix (GLCM) based on the spatial neighborhood of NBR. The methodology was tested using all available Landsat images in a subtropical region in China from 1986 to 2018 on the Google Earth Engine platform. The temporal accuracy of the recovery detection was improved from approximately 20% to 63% after incorporating the GLCM-based texture metrics compared to that using the pixel-based NBR time series. Additionally, the change patterns of forest composition and structure (closed forest to shrub or closed forest to cropland) and changes in the edge pixels in landscape patches can be well depicted by incorporating spatial metrics in dense temporal analyses. Our results highlight that the spatial structural metrics can be integrated to develop more robust detection indicators for the monitoring of forest dynamics and to determine the characteristics that are meaningful to ecological assessment and management.
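A hedged sketch of the kind of spatial-structural feature involved: GLCM texture measures computed from a local NBR neighborhood with scikit-image (graycomatrix/graycoprops in skimage ≥ 0.19; older releases spell these greycomatrix/greycoprops). The window size, quantization, and chosen properties are assumptions, not the study's exact settings.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops   # skimage >= 0.19

def glcm_texture(nbr_window, levels=32):
    """Contrast and homogeneity of one NBR neighborhood (values assumed in [-1, 1])."""
    # Quantize NBR to the integer grey levels required by the co-occurrence matrix.
    q = np.clip(((nbr_window + 1.0) / 2.0 * (levels - 1)).astype(np.uint8), 0, levels - 1)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return (graycoprops(glcm, "contrast").mean(),
            graycoprops(glcm, "homogeneity").mean())

rng = np.random.default_rng(3)
intact = np.full((7, 7), 0.55) + rng.normal(0, 0.02, (7, 7))    # homogeneous closed forest
edge = intact.copy()
edge[:, 4:] = 0.05                                               # a disturbance boundary inside the window
print("intact contrast/homogeneity:", glcm_texture(intact))
print("edge   contrast/homogeneity:", glcm_texture(edge))
```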
Article
Full-text available
Optical earth observation satellite sensors often provide a coarse spatial resolution (CR) multispectral (MS) image together with a fine spatial resolution (FR) panchromatic (PAN) image. Pansharpening is a technique applied to such satellite sensor images to generate an FR MS image by injecting spatial detail taken from the FR PAN image while simultaneously preserving the spectral information of MS image. Pansharpening methods are mostly applied on a per-pixel basis and use the PAN image to extract spatial detail. However, many land cover objects in FR satellite sensor images are not illustrated as independent pixels, but as many spatially aggregated pixels that contain important semantic information. In this article, an object-based pansharpening approach, termed object-based area-to-point regression kriging (OATPRK), is proposed. OATPRK aims to fuse the MS and PAN images at the object-based scale and, thus, takes advantage of both the unified spectral information within the CR MS images and the spatial detail of the FR PAN image. OATPRK is composed of three stages: image segmentation, object-based regression, and residual downscaling. Three data sets acquired from IKONOS and Worldview-2 and 11 benchmark pansharpening algorithms were used to provide a comprehensive assessment of the proposed OATPRK approach. In both the synthetic and real experiments, OATPRK produced the most superior pan-sharpened results in terms of visual and quantitative assessment. OATPRK is a new conceptual method that advances the pixel-level geostatistical pansharpening approach to the object level and provides more accurate pan-sharpened MS images.
Article
Full-text available
Landsat time series (LTS) and associated change detection algorithms are useful for monitoring the effects of global change on Earth’s ecosystems. Because LTS algorithms can be easily applied across broad areas, they are commonly used to map changes in forest structure due to wildfire, insect attack, and other important drivers of tree mortality. But factors such as initial forest density, tree mortality agent, and disturbance severity (i.e., percent tree mortality) influence patterns of surface reflectance and may influence the accuracy of LTS algorithms. And while LTS algorithms are widely used in areas with a history of multiple disturbance events during the Landsat record, the effectiveness of LTS algorithms in these conditions is not well understood. We compared products from the LTS algorithm LandTrendr (Landsat-based Detection of Trends in Disturbance and Recovery) with a unique field dataset from a landscape heavily influenced by both wildfire and spruce beetles (Dendroctonus rufipennis) since c. 2000. We also compared LandTrendr to other common methods of mapping fire- and spruce beetle-affected areas. We found that LandTrendr more accurately detected wildfire than spruce beetle-induced tree mortality, and both mortality agents were more easily detected when they occurred at high severity. Surprisingly, prior spruce beetle outbreaks did not influence the detectability of subsequent wildfire. Compared to alternative disturbance mapping approaches, LandTrendr predicted a c. 40% lower area affected by wildfire or spruce beetle outbreaks. Our findings indicate that disturbance type- and severity-specific differences in omission error may have broad implications for disturbance mapping efforts that utilize Landsat data. Gradual, low-severity disturbances (e.g., background tree mortality and non-stand replacing disturbance) are pervasive in forest ecosystems, yet they can be difficult to detect using automated LTS algorithms. Whenever possible, methods to account for these biases should be incorporated in LTS-based mapping efforts, including the use of multispectral ensembles and ancillary spatial data to refine predictions. However, our findings also indicate that LTS algorithms appear to be robust in areas with multiple disturbance events, which is important because these areas will increase as new acquisitions extend the length of the Landsat record.
Article
Full-text available
Forest disturbances greatly affect the ecological functioning of natural forests. Timely information regarding extent, timing and magnitude of forest disturbance events is crucial for effective disturbance management strategies. Yet, we still lack accurate, near-real-time and high-performance remote sensing tools for monitoring abrupt and subtle forest disturbances. This study presents a new approach called ‘Stochastic Continuous Change Detection (S-CCD)’ using a dense Landsat data time series. S-CCD improves upon the ‘COntinuous monitoring of Land Disturbance (COLD)’ approach by incorporating a mathematical tool called the ‘state space model’, which treats trends and seasonality as stochastic processes, allowing for modeling temporal dynamics of satellite observations in a recursive way. The quantitative accuracy assessment is based on 3782 Landsat-based disturbance reference plots (30 m) from a probability sample distributed throughout the conterminous United States. Validation results show that the overall accuracy (best F1 score) of S-CCD is 0.793 with 20% omission error and 21% commission error, slightly higher than that of COLD (0.789). Two disturbance sites respectively associated with wildfire and insect disturbances are used for qualitative map-based analysis. Both quantitative and qualitative analyses suggest that S-CCD achieves fewer omission errors than COLD for detecting those disturbances with subtle/gradual spectral change. In addition, S-CCD facilitates better real-time monitoring, benefiting from its fully recursive manner and a shorter lag for confirming disturbance than COLD (126 days vs. 166 days for alerting 50% of disturbance events), and achieves up to a ~4.4-times computational speedup. This research addresses the need for near-real-time monitoring and large-scale mapping of forest health and offers a new approach for operationally performing change detection tasks from dense Landsat-based time series.
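To illustrate the state-space idea in general terms (not the authors' S-CCD implementation), the sketch below uses statsmodels' generic structural time-series model with a stochastic local linear trend and one annual harmonic, then checks new observations against the model forecast. The regular 16-day spacing, the parameter choices, and the 3-sigma rule are assumptions of this sketch.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n_per_year = 23                                  # roughly one clear observation per 16 days
t = np.arange(8 * n_per_year)                    # ~8 years of observations (index units)
period = 365.25 / 16                             # annual cycle expressed in 16-day steps
y = 0.5 + 0.1 * np.sin(2 * np.pi * t / period) + rng.normal(0, 0.02, t.size)
y[-20:] -= 0.25                                  # abrupt disturbance near the end of the series

history, monitor = y[:-20], y[-20:]

# Local linear trend + one annual harmonic, both treated as stochastic state components.
mod = sm.tsa.UnobservedComponents(history, level="local linear trend",
                                  freq_seasonal=[{"period": period, "harmonics": 1}])
res = mod.fit(disp=False)

# Forecast the monitoring window and compare model predictions with the new observations.
fc = res.get_forecast(steps=monitor.size)
z = (monitor - fc.predicted_mean) / fc.se_mean
print("observations beyond 3 sigma:", int((np.abs(z) > 3).sum()), "of", monitor.size)
```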
Article
Full-text available
Changes in forest disturbances can have strong impacts on forests, yet we lack consistent data on Europe’s forest disturbance regimes and their changes over time. Here we used satellite data to map three decades of forest disturbances across continental Europe, and analysed the patterns and trends in disturbance size, frequency and severity. Between 1986 and 2016, 17% of Europe’s forest area was disturbed by anthropogenic and/or natural causes. We identified 36 million individual disturbance patches with a mean patch size of 1.09 ha, which equals an annual average of 0.52 disturbance patches per km2 of forest area. The majority of disturbances were stand replacing. While trends in disturbance size were highly variable, disturbance frequency consistently increased and disturbance severity decreased. Here we present a continental-scale characterization of Europe’s forest disturbance regimes and their changes over time, providing spatial information that is critical for understanding the ongoing changes in Europe’s forests.
Article
Full-text available
Objective information is required to monitor and characterize disturbances and disturbance regimes as related to changing climate and anthropogenic pressures. Forest disturbances occur over a range of spatial and temporal scales, with varying extent, severity, and persistence. To date, most of our understanding of detecting forest disturbances using remotely sensed imagery has been based on detecting abrupt and rapid, stand replacing disturbances, such as those related to wildfire and forest harvesting. Conversely, more continuous, subtle, and gradual non-stand replacing (NSR) disturbances, such as those related to drought stress or insect infestation have been subject to less focus. This can be attributed to the variability of NSR in space and time, as well as detection difficulties due to their often-subtle alterations to forest canopies and structure. Time series remote sensing techniques offer new opportunities for the capture and characterization of NSR disturbances. In this research, we investigate NSR disturbances as detected using a Landsat time series-based disturbance detection algorithm, Composite2Change (C2C) applied to the 650 Mha forested ecosystems of Canada. We first examined three key characteristics of the spectral trajectory of NSR disturbances: pre-disturbance conditions, disturbance severity and disturbance persistence. From these characteristics, we defined seven typical NSR patterns (exemplars) for Canada’s forested ecosystems. Using annual information on predicted stand-level cover and biomass, we found that all NSR exemplars resulted in a loss of canopy cover, ranging from an average loss of between 5 and 31%. For instance, the most common exemplar was linked to a 15% reduction in canopy cover and a 13 Mg/ha reduction in biomass. While stands lost biomass, over the duration of the NSR, most stands increased slightly in stand height, likely due to growth of the remaining trees or release from competition from the remaining stems. Comparing exemplars across Canada with independent data on NSR disturbance events confirmed that NSR exemplars that had larger severities, and shorter disturbance persistence times, were more commonly associated with insect infestations (e.g. mountain pine beetle (Dendroctonus ponderosae) and spruce budworm (Choristoneura fumiferana)), whereas exemplars with a lower severity, and longer persistence were more commonly associated with drought events. Developing a nomenclature around remote sensing-based characteristics of NSR disturbances provides an approach to both map NSR disturbances and better understand their impact on forest attributes, as well as provide tools to assess if these disturbances are changing over space and time.
Article
Full-text available
Given long time series of satellite imagery, multiple disturbances can be detected for a particular location at different points in time. We assessed multiple disturbances for the 650 Mha of Canada's forested ecosystems using annual change information derived from Landsat time series imagery (1985-2015). Changes were typed by agent (fire, harvest, and non-stand replacing). Spectral change rate and time between successive disturbances were used to characterize disturbance-type combination differences. Short-term spectral recovery following the last disturbance was compared to a reference sample of pixels disturbed only once. Results indicated that of the 97.6 Mha disturbed, 13.5 Mha have had two or more disturbances, with low magnitude non-stand replacing disturbances involved in the majority of occurrences (77.2%). The total area disturbed represents 18.27% of forest ecosystems, with 2.53% having multiple disturbances and 0.54% having multiple stand-replacing disturbances. Systematic time series-based investigation of multiple disturbance events and agents provides insights on forest disturbance dynamics and recovery processes.
Preprint
Full-text available
Growing demands for temporally specific information on land surface change are fueling a new generation of maps and statistics that can contribute to understanding geographic and temporal patterns of change across large regions, provide input into a wide range of environmental modeling studies, clarify the drivers of change, and provide more timely information for land managers. To meet these needs, the U.S. Geological Survey has implemented a capability to monitor land surface change called the Land Change Monitoring, Assessment, and Projection (LCMAP) initiative. This paper describes the methodological foundations and lessons learned during development and testing of the LCMAP approach. Testing and evaluation of a suite of 10 annual land cover and land surface change data sets over six diverse study areas across the United States revealed good agreement with other published maps (overall agreement ranged from 73% to 87%) as well as several challenges that needed to be addressed to meet the goals of robust, repeatable, and geographically consistent monitoring results from the Continuous Change Detection and Classification (CCDC) algorithm. First, the high spatial and temporal variability of observational frequency led to differences in the number of changes identified, so CCDC was modified such that change detection is dependent on observational frequency. Second, the CCDC classification methodology was modified to improve its ability to characterize gradual land surface changes. Third, modifications were made to the classification element of CCDC to improve the representativeness of training data, which necessitated replacing the random forest algorithm with a boosted decision tree. Following these modifications, assessment of prototype Version 1 LCMAP results showed improvements in overall agreement (ranging from 85% to 90%).
Article
Full-text available
As evidence of changing disturbance regimes accumulates, there is potential for disturbance ecology to become the most valuable lens through which climate-related disturbance events are interpreted. In this paper, I revisit some of the central themes of disturbance ecology and argue that the knowledge established in the field of disturbance ecology continues to be relevant to ecosystem management, even with rapid changes to disturbance regimes and changing disturbance types in local ecosystems. Disturbance ecology has been tremendously successful over the past several decades at elucidating the interactions between disturbances, biodiversity, and ecosystems, and this knowledge can be leveraged in different contexts. Management in changing and uncertain conditions should be focused primarily on the long-term persistence of the native biodiversity that has evolved within the local disturbance regime and is likely to go extinct with rapid changes to disturbance intensity, frequency, and type. Where possible, conserving aspects of natural disturbance regimes will be vital to preserving functioning ecosystems and the native biodiversity that requires disturbance for its continued existence, though these situations may become more limited over time. Finally, scientists must actively propose management policies that incorporate knowledge of disturbance ecology. Successful policies regarding changing disturbance regimes for biodiversity will not merely be reactive, and will recognize that for natural ecosystems as for human society, not all desired outcomes are simultaneously possible.
Article
Full-text available
Imagery from medium resolution satellites, such as Landsat, has long been used to map forest disturbances in the tropics. However, the Landsat spatial resolution (30 m) has often been considered too coarse for reliably mapping small-scale selective logging. Imagery from the recently launched Sentinel-2 sensor, with a resampled 10 m spatial resolution, may improve the detection of forest disturbances. This study compared the performance of Landsat 8 and Sentinel-2 data for the detection of selective logging in an area located in the Brazilian Amazon. Logging impacts in seven areas, which had governmental authorization for harvesting timber, were mapped by calculating the difference of a self-referenced normalized burn ratio (∆rNBR) index over corresponding time periods (2016–2017) for imagery of both satellite sensors. A robust reference dataset was built using both high- and very-high-resolution imagery. It was used to define optimum ∆rNBR thresholds for forest disturbance maps, via a bootstrapping procedure, and for estimating accuracies and areas. A further assessment of our approach was also performed in three unlogged areas. Additionally, field data regarding logging infrastructure were collected in the seven study sites where logging occurred. Both satellites showed the same performance in terms of accuracy, with area-adjusted overall accuracies of 96.7% and 95.7% for Sentinel-2 and Landsat 8, respectively. However, Landsat 8 mapped 36.9% more area of selective logging compared to Sentinel-2 data. Logging infrastructure was better detected from Sentinel-2 (43.2%) than Landsat 8 (35.5%) data, confirming its potential for mapping small-scale logging. We assessed the area impacted by selective logging with a regular 300 m × 300 m grid over the pixel-based results, leading to 1143 ha and 1197 ha of disturbed forest on Sentinel-2 and Landsat 8 data, respectively. No substantial differences in terms of accuracy were found by adding three unlogged areas to the original seven study sites.
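The ∆rNBR workflow can be approximated in a few lines: compute NBR for two dates, self-reference each against the surrounding landscape (a moving-median here, standing in for the paper's self-referencing step), and difference the results. The window size and the 0.1 threshold are assumptions; the study derives its threshold by bootstrapping against reference data.

```python
import numpy as np
from scipy.ndimage import median_filter

def nbr(nir, swir2):
    """Normalized Burn Ratio."""
    return (nir - swir2) / (nir + swir2 + 1e-6)

def r_nbr(nbr_img, window=51):
    """Self-referenced NBR: subtract the median NBR of the surrounding landscape.

    The moving-median window is a simple stand-in for the paper's self-referencing
    step; the window size is an assumption.
    """
    return nbr_img - median_filter(nbr_img, size=window)

rng = np.random.default_rng(5)
shape = (200, 200)
nir_t1, swir_t1 = rng.normal(0.35, 0.02, shape), rng.normal(0.12, 0.02, shape)
nir_t2, swir_t2 = nir_t1.copy(), swir_t1.copy()
nir_t2[90:100, 90:100] -= 0.12                    # simulated canopy loss from selective logging
swir_t2[90:100, 90:100] += 0.08

delta_rnbr = r_nbr(nbr(nir_t1, swir_t1)) - r_nbr(nbr(nir_t2, swir_t2))
logged = delta_rnbr > 0.1                          # illustrative threshold, not the optimized one
print("flagged pixels:", int(logged.sum()))
```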
Article
Full-text available
Predicting future ecosystem dynamics depends critically on an improved understanding of how disturbances and climate change have driven long-term ecological changes in the past. Here we assembled a dataset of >100,000 tree species lists from the 19th century across a broad region (>130,000km²) in temperate eastern Canada, as well as recent forest inventories, to test the effects of changes in anthropogenic disturbance, temperature and moisture on forest dynamics. We evaluate changes in forest composition using four indices quantifying the affinities of co-occurring tree species with temperature, drought, light and disturbance. Land-use driven shifts favouring more disturbance-adapted tree species are far stronger than any effects ascribable to climate change, although the responses of species to disturbance are correlated with their expected responses to climate change. As such, anthropogenic and natural disturbances are expected to have large direct effects on forests and also indirect effects via altered responses to future climate change.
Article
Full-text available
We developed a new algorithm for COntinuous monitoring of Land Disturbance (COLD) using Landsat time series. COLD can detect many kinds of land disturbance continuously as new images are collected and provide historical land disturbance maps retrospectively. To better detect land disturbance, we tested different kinds of input data and explored many time series analysis techniques. We have several major observations as follows. First, time series of surface reflectance provides much better detection results than time series of Top-Of-Atmosphere (TOA) reflectance, and with some adjustments to the temporal density, time series from Landsat Analysis Ready Data (ARD) is better than that from the same Landsat scene. Second, the combined use of spectral bands is always better than using a single spectral band or index, and if all the essential spectral bands have been employed, the inclusion of other indices does not further improve the algorithm performance. Third, the remaining outliers in the time series can be removed based on their deviation from model-predicted values, using probability-based thresholds derived from normal or chi-squared distributions. Fourth, model initialization is pivotal for monitoring land disturbance, and a good initialization stability test can influence algorithm performance substantially. Fifth, time series model estimation with an eight-coefficient model, updated for every new observation and based on all available clear observations, achieves the best result. Sixth, a change probability of 0.99 (chi-squared distribution) with six consecutive anomaly observations and a mean included angle < 45° to confirm a change provides the best results, and the combined use of temporally-adjusted Root Mean Square Error (RMSE) and minimum RMSE is recommended. Finally, spectral changes (or "breaks") contributed by vegetation regrowth should be excluded from the land disturbance maps. The COLD algorithm was developed and calibrated based on all of these lessons learned. The accuracy assessment shows that COLD results were accurate for detecting land disturbance, with an omission error of 27% and a commission error of 28%.
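Two of the ingredients named above, the eight-coefficient time-series model and the chi-squared change probability applied to multi-band residuals, can be sketched as follows. The consecutive-anomaly rule is included in simplified form; the adjusted-RMSE handling and the included-angle test are omitted, and all data are synthetic, so this is an illustration of the ideas rather than the COLD algorithm itself.

```python
import numpy as np
from scipy.stats import chi2

def design_eight(t_days, period=365.25):
    """Eight-coefficient model: intercept, slope, and 1x/2x/3x annual harmonic pairs."""
    w = 2.0 * np.pi / period
    cols = [np.ones_like(t_days), t_days]
    for k in (1, 2, 3):
        cols += [np.cos(k * w * t_days), np.sin(k * w * t_days)]
    return np.column_stack(cols)

rng = np.random.default_rng(6)
t = np.sort(rng.uniform(0, 2000, 150))                  # clear-observation dates (days)
n_bands = 5                                             # e.g. the essential spectral bands
truth = 0.25 + 0.05 * np.sin(2 * np.pi * t / 365.25)
obs = truth[:, None] + rng.normal(0, 0.01, (t.size, n_bands))
obs[t > 1500] += 0.08                                   # disturbance late in the record

train = t <= 1500
X = design_eight(t)
coeffs = np.linalg.lstsq(X[train], obs[train], rcond=None)[0]    # one model per band
resid = obs - X @ coeffs
rmse = np.sqrt(np.mean(resid[train] ** 2, axis=0))

# Change score: sum of squared standardized residuals across bands, compared with the
# chi-squared threshold at 0.99 probability; six consecutive anomalies confirm a break.
score = np.sum((resid / rmse) ** 2, axis=1)
threshold = chi2.ppf(0.99, df=n_bands)
anomaly = score > threshold
consecutive = np.convolve(anomaly.astype(int), np.ones(6, dtype=int), mode="valid") == 6
print("break confirmed at day:", t[5:][consecutive][0] if consecutive.any() else None)
```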
Article
Full-text available
Data that have been processed to allow analysis with a minimum of additional user effort are often referred to as Analysis Ready Data (ARD). The ability to perform large scale Landsat analysis relies on the ability to access observations that are geometrically and radiometrically consistent, and have had non-target features (clouds) and poor quality observations flagged so that they can be excluded. The United States Geological Survey (USGS) has processed all of the Landsat 4 and 5 Thematic Mapper (TM), Landsat 7 Enhanced Thematic Mapper Plus (ETM+), Landsat 8 Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS) archive over the conterminous United States (CONUS), Alaska, and Hawaii, into Landsat ARD. The ARD are available to significantly reduce the burden of pre-processing on users of Landsat data. Provision of pre-prepared ARD is intended to make it easier for users to produce Landsat-based maps of land cover and land-cover change and other derived geophysical and biophysical products. The ARD are provided as tiled, georegistered, top of atmosphere and atmospherically corrected products defined in a common equal area projection, accompanied by spatially explicit quality assessment information, and appropriate metadata to enable further processing while retaining traceability of data provenance.
Article
Full-text available
Rapid urban population growth in Sub-Saharan Western Africa has important environmental, infrastructural and social impacts. Due to the low availability of reliable urbanization data, remote sensing techniques have become increasingly popular for monitoring land use change processes in that region. This study aims to quantify land cover for the Ouagadougou metropolitan area between 2002 and 2013 using a Landsat-TM/ETM+/OLI time series. We use a support vector regression approach and synthetically mixed training data. Working with bi-seasonal image stacks, we account for spectral variability between dry and rainy season and incorporate a new class – seasonal vegetation – that describes surfaces that are soil and vegetation during parts of the year. We produce fraction images of urban surfaces, soil, permanent vegetation and seasonal vegetation for each time step. Statistical evaluation shows that a temporally generalized, bi-seasonal model over all time steps performs equally or better than yearly or mono-seasonal models and provides reliable cover fractions. Urban fractions can be used to visualize pixel-based spatial-temporal patterns of urban densification and expansion. A simple rule set based on a seasonal vegetation to soil ratio is appropriate to delineate areas of unplanned and planned settlements and, thus, contributes to monitoring urban development on a neighborhood scale.
Article
Full-text available
Introduced insects and pathogens impact millions of acres of forested land in the United States each year, and large-scale monitoring efforts are essential for tracking the spread of outbreaks and quantifying the extent of damage. However, monitoring the impacts of defoliating insects presents a significant challenge due to the ephemeral nature of defoliation events. Using the 2016 gypsy moth (Lymantria dispar) outbreak in Southern New England as a case study, we present a new approach for near-real-time defoliation monitoring using synthetic images produced from Landsat time series. By comparing predicted and observed images, we assessed changes in vegetation condition multiple times over the course of an outbreak. Initial measures can be made as imagery becomes available, and season-integrated products provide a wall-to-wall assessment of potential defoliation at 30 m resolution. Qualitative and quantitative comparisons suggest our Landsat Time Series (LTS) products improve identification of defoliation events relative to existing products and provide a repeatable metric of change in condition. Our synthetic-image approach is an important step toward using the full temporal potential of the Landsat archive for operational monitoring of forest health over large extents, and provides an important new tool for understanding spatial and temporal dynamics of insect defoliators.
Article
Full-text available
The free and open access to all archived Landsat images in 2008 has completely changed the way of using Landsat data. Many novel change detection algorithms based on Landsat time series have been developed. We present a comprehensive review of four important aspects of change detection studies based on Landsat time series, including frequencies, preprocessing, algorithms, and applications. We observed the trend that the more recent the study, the higher the frequency of Landsat time series used. We reviewed a series of image preprocessing steps, including atmospheric correction, cloud and cloud shadow detection, and composite/fusion/metrics techniques. We divided all change detection algorithms into six categories, including thresholding, differencing, segmentation, trajectory classification, statistical boundary, and regression. Within each category, six major characteristics of different algorithms, such as frequency, change index, univariate/multivariate, online/offline, abrupt/gradual change, and sub-pixel/pixel/spatial, were analyzed. Moreover, some of the widely-used change detection algorithms were also discussed. Finally, we reviewed different change detection applications by dividing these applications into two categories, change target and change agent detection.
Article
Full-text available
Forest disturbances are sensitive to climate. However, our understanding of disturbance dynamics in response to climatic changes remains incomplete, particularly regarding large-scale patterns, interaction effects and dampening feedbacks. Here we provide a global synthesis of climate change effects on important abiotic (fire, drought, wind, snow and ice) and biotic (insects and pathogens) disturbance agents. Warmer and drier conditions particularly facilitate fire, drought and insect disturbances, while warmer and wetter conditions increase disturbances from wind and pathogens. Widespread interactions between agents are likely to amplify disturbances, while indirect climate effects such as vegetation changes can dampen long-term disturbance sensitivities to climate. Future changes in disturbance are likely to be most pronounced in coniferous forests and the boreal biome. We conclude that both ecosystems and society should be prepared for an increasingly disturbed future of forests.
Article
Full-text available
Disturbance is a critical ecological process in forested systems, and disturbance maps are important for understanding forest dynamics. Landsat data are a key remote sensing dataset for monitoring forest disturbance and there recently has been major growth in the development of disturbance mapping algorithms. Many of these algorithms take advantage of the high temporal data volume to mine subtle signals in Landsat time series, but as those signals become subtler, they are more likely to be mixed with noise in Landsat data. This study examines the similarity among seven different algorithms in their ability to map the full range of magnitudes of forest disturbance over six different Landsat scenes distributed across the conterminous US. The maps agreed very well in terms of the amount of undisturbed forest over time; however, for the ~30% of forest mapped as disturbed in a given year by at least one algorithm, there was little agreement about which pixels were affected. Algorithms that targeted higher-magnitude disturbances exhibited higher omission errors but lower commission errors than those targeting a broader range of disturbance magnitudes. These results suggest that a user of any given forest disturbance map should understand the map’s strengths and weaknesses (in terms of omission and commission error rates), with respect to the disturbance targets of interest.
Article
Full-text available
Speed and accuracy are important factors when dealing with time-constrained events for disaster, risk, and crisis-management support. Object-based image analysis can be a time-consuming task in extracting information from large images because most of the segmentation algorithms use the pixel-grid for the initial object representation. It would be more natural and efficient to work with perceptually meaningful entities that are derived from pixels using a low-level grouping process (superpixels). Firstly, we tested a new workflow for image segmentation of remote sensing data, starting the multiresolution segmentation (MRS, using ESP2 tool) from the superpixel level and aiming at reducing the amount of time needed to automatically partition relatively large datasets of very high resolution remote sensing data. Secondly, we examined whether a Random Forest classification based on an oversegmentation produced by a Simple Linear Iterative Clustering (SLIC) superpixel algorithm performs similarly to a traditional object-based classification in terms of accuracy. Tests were applied on QuickBird and WorldView-2 data with different extents, scene content complexities, and number of bands to assess how the computational time and classification accuracy are affected by these factors. The proposed segmentation approach is compared with the traditional one, starting the MRS from the pixel level, regarding geometric accuracy of the objects and the computational time. The computational time was reduced in all cases, the biggest improvement being from 5 h 35 min to 13 min, for a WorldView-2 scene with eight bands and an extent of 12.2 million pixels, while the geometric accuracy is kept similar or slightly better. SLIC superpixel-based classification had similar or better overall accuracy values when compared to MRS-based classification, but the results were obtained in a fast manner and avoiding the parameterization of the MRS. These two approaches have the potential to enhance the automation of big remote sensing data analysis and processing, especially when time is an important constraint.
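A compact sketch of the two-step idea: SLIC over-segmentation followed by a per-superpixel classification, here with a random forest on mean band values per segment. The image chip, the parameters, and the way labels are assigned are all placeholders (the channel_axis argument assumes scikit-image ≥ 0.19).

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
img = rng.normal(0.2, 0.02, (120, 120, 4)).astype(np.float32)   # 4-band image chip
img[:, 60:, :] += 0.15                                          # a second, brighter land-cover type

# Over-segmentation into superpixels (channel_axis=-1 treats the last axis as bands).
segments = slic(img, n_segments=400, compactness=10, channel_axis=-1, start_label=0)

# Object-level features: mean band value per superpixel.
n_seg = segments.max() + 1
feats = np.array([img[segments == s].mean(axis=0) for s in range(n_seg)])

# Toy labels from the known left/right split; in practice these come from training polygons.
centroids_x = np.array([np.nonzero(segments == s)[1].mean() for s in range(n_seg)])
labels = (centroids_x >= 60).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(feats, labels)
classified = clf.predict(feats)[segments]        # map object predictions back to the pixel grid
truth = (np.arange(120) >= 60)[None, :]
print("pixel agreement with the synthetic split:", (classified == truth).mean().round(3))
```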
Article
Full-text available
The U.S. Geological Survey's Land Change Monitoring, Assessment, and Projection (LCMAP) initiative is a new end-to-end capability to continuously track and characterize changes in land cover, use, and condition to better support research and applications relevant to resource management and environmental change. Among the LCMAP product suite are annual land cover maps that will be available to the public. This paper describes an approach to optimize the selection of training and auxiliary data for deriving the thematic land cover maps based on all available clear observations from Landsats 4–8. Training data were selected from map products of the U.S. Geological Survey's Land Cover Trends project. The Random Forest classifier was applied for different classification scenarios based on the Continuous Change Detection and Classification (CCDC) algorithm. We found that extracting training data proportionally to the occurrence of land cover classes was superior to an equal distribution of training data per class, and suggest using a total of 20,000 training pixels to classify an area about the size of a Landsat scene. The problem of unbalanced training data was alleviated by extracting a minimum of 600 training pixels and a maximum of 8000 training pixels per class. We additionally explored removing outliers contained within the training data based on their spectral and spatial criteria, but observed no significant improvement in classification results. We also tested the importance of different types of auxiliary data that were available for the conterminous United States, including: (a) five variables used by the National Land Cover Database, (b) three variables from the cloud screening "Function of mask" (Fmask) statistics, and (c) two variables from the change detection results of CCDC. We found that auxiliary variables such as a Digital Elevation Model and its derivatives (aspect, position index, and slope), potential wetland index, water probability, snow probability, and cloud probability improved the accuracy of land cover classification. Compared to the original strategy of the CCDC algorithm (500 pixels per class), the use of the optimal strategy improved the classification accuracies substantially (15-percentage point increase in overall accuracy and 4-percentage point increase in minimum accuracy).
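The allocation rule described above (a 20,000-pixel total drawn proportionally to class occurrence, with a 600-pixel floor and an 8,000-pixel ceiling per class) can be sketched as below; how the clipped surplus is redistributed among the remaining classes is an assumption of this sketch, not a detail taken from the paper.

```python
import numpy as np

def allocate_training(class_proportions, total=20000, floor=600, ceiling=8000):
    """Proportional allocation with a per-class floor and ceiling.

    The redistribution of the clipped surplus is an assumption of this sketch.
    """
    props = np.asarray(class_proportions, dtype=float)
    alloc = np.clip(props / props.sum() * total, floor, ceiling)
    # Rescale the unclipped classes so the total stays close to the target.
    free = (alloc > floor) & (alloc < ceiling)
    if free.any():
        alloc[free] *= (total - alloc[~free].sum()) / alloc[free].sum()
        alloc = np.clip(alloc, floor, ceiling)
    return alloc.round().astype(int)

# Hypothetical map proportions for eight land cover classes in one scene-sized area.
proportions = [0.42, 0.25, 0.15, 0.08, 0.05, 0.03, 0.015, 0.005]
print(allocate_training(proportions))   # rare classes are lifted to 600, the dominant class capped at 8000
```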
Article
The arctic and boreal biomes are changing as temperatures increase, including changes in the type, frequency, intensity, and seasonality of disturbances. However, our understanding of the frequency, extent, and causes of disturbance events remains incomplete. Disturbances such as fire, forest harvest, drought, wind, flooding, and insects and pathogens occur at different frequencies and severities, posing challenges to characterize and assess them under a single framework. We used the Continuous Change Detection and Classification (CCDC) algorithm on all available Landsat observations from 1984 to 2014 to detect land cover and land condition change. We mapped the following causes of disturbances annually across the study domain of NASA's Arctic Boreal Vulnerability Experiment (ABoVE): fire, logging, and pest damage. Differences between Landsat Tasseled Cap (TC) values pre- and post-disturbance were used in a random forest classifier to map causal agents. For forested ecosystems, we mapped causal agents including fire, insect, and logging. In areas that were not forest before disturbance, only the fire class was mapped. The result shows that multidimensional spectral-temporal change information is useful for mapping the causes of disturbance in arctic and boreal biomes. We employed two rounds of post-processing and used the information obtained from the comparison between the map and reference data to improve the final map. The user's and producer's accuracies of an aggregated disturbance map were 94.6% (± 2.37%) and 89.3% (± 21.78%) (95% confidence intervals in parenthesis). When evaluating the causal agents, insect damage was found the most challenging to map and validate. We estimated that 10.8% of the ABoVE core domain was disturbed between 1987 and 2012, with a margin of error of 0.5% at the 95% confidence level. Rates of disturbance due to logging remained constant over time, while fires were more episodic, and insect damage was highest between 2005 and 2010. Overall, fires affected 8.8% of the study area, while logging was 1.4% and insect damage 0.6%. Our maps indicate that pest damage became a significant issue after 2000, but it was more severe for forest ecosystems in Western Canada than in Alaska.
Article
Sudden-onset natural and man-made disasters represent a threat to the safety of human life and property. Rapid and accurate building damage assessment using bitemporal high spatial resolution (HSR) remote sensing images can quickly and safely provide us with spatial distribution information and statistics of the damage degree to assist with humanitarian assistance and disaster response. For building damage assessment, strong feature representation and semantic consistency are the keys to obtaining a high accuracy. However, the conventional object-based image analysis (OBIA) framework using a patch-based convolutional neural network (CNN) can guarantee semantic consistency, but with weak feature representation, while the Siamese fully convolutional network approach has strong feature representation capabilities but is semantically inconsistent. In this paper, we propose a deep object-based semantic change detection framework, called ChangeOS, for building damage assessment. To seamlessly integrate OBIA and deep learning, we adopt a deep object localization network to generate accurate building objects, in place of the superpixel segmentation commonly used in the conventional OBIA framework. Furthermore, the deep object localization network and deep damage classification network are integrated into a unified semantic change detection network for end-to-end building damage assessment. This also provides deep object features that can supply an object prior to the deep damage classification network for more consistent semantic feature representation. Object-based post-processing is adopted to further guarantee the semantic consistency of each object. The experimental results obtained on a global scale dataset including 19 natural disaster events and two local scale datasets including the Beirut port explosion event and the Bata military barracks explosion event show that ChangeOS is superior to the currently published methods in speed and accuracy, and has a superior generalization ability for man-made disasters.
Article
The U.S. Geological Survey Land Change Monitoring, Assessment and Projection (USGS LCMAP) has released a suite of annual land cover and land cover change products for the conterminous United States (CONUS). The accuracy of these products was assessed using an independently collected land cover reference sample dataset produced by analysts interpreting Landsat data, high-resolution aerial photographs, and other ancillary data. The reference sample of nearly 25,000 pixels and the accompanying 33-year time series of annual land cover reference labels allowed for a comprehensive assessment of accuracy of the LCMAP land cover and land cover change products. Overall accuracy (± standard error) for the per-pixel assessment across all years for the eight land cover classes was 82.5% (±0.2%). Overall accuracy was consistent year-to-year within a range of 1.5% but varied regionally with lower accuracy in the eastern United States. User's accuracy (UA) and producer's accuracy (PA) for CONUS ranged from the higher accuracies of Water (UA = 96%, PA = 93%) and Tree Cover (UA = 90%, PA = 83%) to the lower accuracies of Wetland (UA = 69%, PA = 74%) and Barren (UA = 43%, PA = 57%). For a binary change/no change classification, UA of change was 13% (±0.5%) and PA was 16% (±0.6%) for CONUS when agreement was defined as a match by the exact year of change. UA and PA improved to 28% and 34% when agreement was defined as the change being detected by the map and reference data within a ± 2-year window. Change accuracy was higher in the eastern United States compared to the western US. UA was 49% (±0.3) and PA was 54% (±0.3) for the footprint of change (defined as the area experiencing at least one land cover change from 1985 to 2017). For class-specific loss and gain when agreement was defined as an exact year match, UA and PA were generally below 30%, with Tree Cover loss being the most accurately mapped change (UA = 25%, PA = 31%). These accuracy results provide users with information to assess the suitability of LCMAP data and information to guide future research for improving LCMAP products, particularly focusing on the challenges of accurately mapping annual land cover change.
Article
In contrast to abrupt changes caused by land cover conversion, subtle changes driven by a shift in the condition, structure, or other biological attributes of land often lead to minimal and slower alterations of the terrestrial surface. Accurate mapping and monitoring of subtle change are crucial for an early warning of long-term gradual change that may eventually result in land cover conversion. Freely accessible moderate-resolution datasets such as the Landsat archive have great potential to characterize subtle change by capturing low-magnitude spectral changes in long-term observations. However, past studies have reported limited success in accurately extracting subtle changes from satellite-based time series analysis. In this study, we introduce a supervised framework named ‘PIDS’ to detect subtle forest disturbance from a comprehensive Landsat data archive by leveraging disturbance-based calibration sites. PIDS consists of four components: (1) Parameter optimization; (2) Index selection; (3) Dynamic stratified monitoring; and (4) Spatial consideration. PIDS was applied to map the early stage of bark beetle infestations (i.e., a lower per-pixel fraction of trees cover that show visual signs of infestation), which are a typical example of subtle change in conifer forests. Landsat Analysis Ready Data were used as the time series inputs for mapping mountain pine beetle and spruce beetle disturbance between 2001 and 2019 in Colorado, USA. PIDS-detection map assessment showed that the overall performance of PIDS (namely ‘F1 score’) was 0.86 for mountain pine beetle and 0.73 for spruce beetle, making a substantial improvement (> 0.3) compared to other approaches/products including COntinuous monitoring of Land Disturbance, LandTrendr, and the National Land Cover Database forest disturbance product. A sub-pixel analysis of tree canopy mortality percentage was performed by linking classified high-resolution (0.3- and 1-m) aerial imagery and 30-m PIDS-detection maps. Results show that PIDS typically detects mountain pine beetle infestation when ≥56% of a Landsat pixel is occupied by red-stage canopy mortality (one year after initial infestation), and spruce beetle infestation when ≥55% is occupied by gray-stage mortality (two years after initial infestation). This study addresses an important methodological goal pertinent to the utility of event-based reference samples for detecting subtle forest change, which could be potentially applied to other types of subtle land change.
Article
In their applications, both deep learning techniques and object-based image analysis (OBIA) have shown better performance separately than conventional methods on change detection tasks. However, efforts to investigate the effect of combining these two techniques for advancing change detection techniques are unexplored in current literature. This study proposes a novel change detection method implementing change feature extraction using convolutional neural networks under an OBIA framework. To demonstrate the effectiveness of our proposed method, we compare the proposed method against benchmark pixel-based counterparts on aerial images for the task of multi-class change detection. To thoroughly assess the performance of our proposed method, this study also for the first time compared three common feature fusion schemes for change detection architecture: concatenation, differencing, and Long Short-Term Memory (LSTM). The proposed method was also tested on simulated misregistered images to evaluate its robustness, a factor that plays an important role in compromising change detection accuracy but has not been investigated for supervised change detection methods in the literature. Finally, the proposed change detection method was also tested using very high resolution (VHR) satellite images for binary class change detection to map an impacted area caused by natural disaster and the result was evaluated using reference data from the Federal Emergency Management Agency (FEMA). With the experimental results from these two sets of experiments, we showed that (1) our proposed method achieved substantially higher accuracy and computational efficiency when compared to pixel-based methods, (2) three feature fusion schemes did not show a significant difference for overall accuracy, (3) our proposed method was robust in image misregistration in both testing and training data, (4) we demonstrate the potential impact of automation to decision making by deploying our method to map a large geographic area affected by a recent natural disaster.
Article
Characterizing landscape patterns and revealing their underlying processes are critical for studying climate change and environmental problems. Previous methods for mapping land cover changes largely focused on the classification of remote sensing images. Therefore, they could not provide information about the evolutionary process of land cover changes. In this paper, we developed a spatiotemporal structural graph (STSG) technique for a comprehensive analysis of land cover changes. First, a land cover neighborhood graph was generated for each snapshot to quantify the spatial relationship between adjacent land cover objects. Then, an object-based temporal tracking algorithm was designed to monitor the temporal changes between land cover objects over time. Finally, land cover evolutionary trajectories, pixel-level land cover change trajectories, and node-wise connectivity changes over time were characterized. We applied the proposed method to analyze land cover changes in Suffolk County, New York from 1996 to 2010. The results demonstrated that STSG can not only characterize and visualize detailed land cover changes spatially but also maintain the temporal sequence and relations of land cover objects in an integrated space-time environment. The proposed STSG provides a useful framework for analyzing land cover changes and can be adapted to characterize and quantify other spatiotemporal phenomena.
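A minimal sketch of the first STSG step as described: build a neighborhood graph for one labeled snapshot, with one node per land-cover object and an edge between each pair of spatially adjacent objects. Deriving adjacency from 4-connected pixel neighbors, and using the label values directly as object identifiers, are simplifying assumptions of this sketch.

```python
import numpy as np
import networkx as nx

def neighborhood_graph(label_img):
    """One node per land-cover object, one edge per pair of 4-connected adjacent objects."""
    g = nx.Graph()
    g.add_nodes_from(int(v) for v in np.unique(label_img))
    # Horizontally and vertically adjacent pixel pairs with differing labels define edges.
    pairs = np.concatenate([
        np.stack([label_img[:, :-1].ravel(), label_img[:, 1:].ravel()], axis=1),
        np.stack([label_img[:-1, :].ravel(), label_img[1:, :].ravel()], axis=1),
    ])
    g.add_edges_from((int(a), int(b)) for a, b in pairs if a != b)
    return g

# Toy snapshot with three objects (e.g. water = 0, wetland = 1, developed = 2); 0 and 2 never touch.
snapshot = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [1, 1, 1, 2],
    [1, 1, 2, 2],
])
g = neighborhood_graph(snapshot)
print(sorted(g.edges()))   # [(0, 1), (1, 2)]: objects 0 and 2 are only connected through object 1
```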
Conference Paper
This study investigates the possibilities of improving the classification of high spatial resolution images by using an object-based approach, superpixel segmentation, and compact texture unit descriptors. The proposed approach was implemented on Google Earth Engine (GEE), which provides a fast and easy-to-use platform with its freely available datasets and geospatial analysis tools for applications such as multi-class classification. In this study, Multispectral Instrument (MSI) images of Sentinel-2 were utilized to classify main land-cover and land-use types. The obtained results were validated using the Corine land cover inventory.
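A hedged Google Earth Engine (Python API) sketch of the general workflow described above, i.e., SNIC superpixel segmentation of a Sentinel-2 composite followed by supervised classification of the object-level features; the dataset ID, band list, parameter values, and the training-point asset are illustrative assumptions, and the compact texture unit descriptors are omitted.

```python
# Hedged GEE sketch: Sentinel-2 composite -> SNIC superpixels -> RF classification.
import ee
ee.Initialize()

bands = ["B2", "B3", "B4", "B8", "B11", "B12"]
s2 = (ee.ImageCollection("COPERNICUS/S2_SR")
      .filterDate("2020-06-01", "2020-09-01")
      .median()
      .select(bands))

# SNIC superpixel segmentation; per-cluster band means serve as object features.
snic = ee.Algorithms.Image.Segmentation.SNIC(
    image=s2, size=15, compactness=1, connectivity=8)
object_features = snic.select([b + "_mean" for b in bands])

# Hypothetical training asset with a 'class' property per point.
training_points = ee.FeatureCollection("users/example/landcover_samples")
samples = object_features.sampleRegions(
    collection=training_points, properties=["class"], scale=10)
classifier = ee.Classifier.smileRandomForest(100).train(
    samples, "class", object_features.bandNames())
classified = object_features.classify(classifier)
```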
Article
Natural disturbances are critical for supporting biodiversity in many ecosystems, but subsequent management actions can influence the quality of habitat that follows these events. Post-disturbance salvage logging has negative consequences for certain components of forest biodiversity, but populations of some early seral-adapted organisms may be maintained in salvage-logged areas. We investigated the response of an essential group of pollinators, wild bees, to recent post-wildfire salvage logging within managed mixed-conifer forest in the Pacific Northwest. We compared bee diversity (i.e., bee abundance, species richness, alpha diversity, and beta diversity) and habitat features (i.e., floral resources and available nesting substrates) in salvage-logged areas to unlogged sites, both of which experienced high-severity wildfire. Although we found no evidence for differences in bee abundance between salvage-logged and unlogged sites, interpolated estimates of species richness and both interpolated and extrapolated estimates of alpha diversity were greater in unlogged sites. Additionally, beta diversity was greater in unlogged sites when equal weight was given to rare species. Salvage logging did not select for specific functional groups of bees, as both logged and unlogged sites were dominated by generalist, social species that nest in the ground. However, habitat conditions for bees were influenced by salvage logging: flowering plant density was greater in salvage-logged sites during the latter half of the season, with fewer conifer snags and more woody debris than were available in unlogged sites. Our study indicates that salvage logging can support wild bee abundance, but that unlogged patches of severely burned forest support slightly greater bee diversity within fire-prone mixed-conifer landscapes.
Article
Change detection in heterogeneous remote sensing images is an important but challenging task because of the incommensurable appearances of the heterogeneous images. In order to solve the change detection problem in optical and synthetic aperture radar (SAR) images, this paper proposes an improved method that combines cooperative multitemporal segmentation and hierarchical compound classification (CMS-HCC) based on our previous work. Considering the large radiometric and geometric differences between heterogeneous images, first, a cooperative multitemporal segmentation method is introduced to generate multi-scale segmentation results. This method segments the two images together by associating the information from both, thus reducing the noise and errors caused by area transition and object misalignment and delineating the boundaries of detected objects more accurately. Then, a region-based multitemporal hierarchical Markov random field (RMH-MRF) model is defined to combine spatial, temporal, and multi-level information. With the RMH-MRF model, a hierarchical compound classification method is performed by identifying the optimal configuration of labels with a region-based marginal posterior mode estimation, further improving the change detection accuracy. Changes are determined wherever the labels assigned to a pair of parcels differ, yielding multi-scale change maps. Experimental validation is conducted on several pairs of optical and SAR images. It consists of two parts: a comparison of different multitemporal segmentation methods and a comparison of different change detection methods. The results show that the proposed method can effectively detect the changes in heterogeneous images, with a low false-positive rate and high accuracy.
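A simplified sketch of the two ideas underlying CMS-HCC: segmenting both dates together so that parcel boundaries are shared, and flagging change wherever the per-date parcel labels disagree. The hierarchical MRF-based compound classification is replaced here by a trivial per-parcel brightness rule, so this is only a structural illustration, not the authors' method.

```python
# Toy sketch: cooperative (joint) segmentation + per-parcel labels per date,
# with change flagged where the two dates' parcel labels differ.
import numpy as np
from skimage.segmentation import slic

optical = np.random.rand(64, 64, 3)       # stand-in optical image (t1)
sar = np.random.rand(64, 64, 1)           # stand-in SAR image (t2)

# Cooperative segmentation: stack both dates so one set of parcels fits both.
stacked = np.concatenate([optical, sar], axis=2)
parcels = slic(stacked, n_segments=200, compactness=0.1, channel_axis=2)

def parcel_labels(image, parcels, threshold):
    """Toy per-date classifier: label each parcel by its mean brightness."""
    labels = np.zeros(parcels.max() + 1, dtype=int)
    for pid in np.unique(parcels):
        labels[pid] = int(image[parcels == pid].mean() > threshold)
    return labels

labels_t1 = parcel_labels(optical, parcels, threshold=0.5)
labels_t2 = parcel_labels(sar, parcels, threshold=0.5)
change_map = labels_t1[parcels] != labels_t2[parcels]   # changed parcels
```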
Article
The U.S. Geological Survey Land Change Monitoring, Assessment and Projection (USGS LCMAP) initiative is working toward a comprehensive capability to characterize land cover and land cover change using dense Landsat time series data. A suite of products including annual land cover maps and annual land cover change maps will be produced using the Landsat 4-8 data record. LCMAP products will initially be created for the conterminous United States (CONUS) and then extended to include Alaska and Hawaii. A critical component of LCMAP is the collection of reference data using the TimeSync tool, a web-based interface for manually interpreting and recording land cover from Landsat data supplemented with fine resolution imagery and other ancillary data. These reference data will be used for area estimation and validation of the LCMAP annual land cover products. Nearly 12,000 LCMAP reference sample pixels have been interpreted and a simple random subsample of these pixels has been interpreted independently by a second analyst (hereafter referred to as “duplicate interpretations”). The annual land cover reference class labels for the 1984–2016 monitoring period obtained from these duplicate interpretations are used to address the following questions: 1) How consistent are the reference class labels among interpreters overall and per class? 2) Does consistency vary by geographic region? 3) Does consistency vary as interpreters gain experience over time? 4) Does interpreter consistency change with improving availability and quality of imagery from 1984 to 2016? Overall agreement between interpreters was 88%. Class-specific agreement ranged from 46% for Disturbed to 94% for Water, with more prevalent classes (Tree Cover, Grass/Shrub and Cropland) generally having greater agreement than rare classes (Developed, Barren and Wetland). Agreement between interpreters remained approximately the same over the 12-month period during which these interpretations were completed. Increasing availability of Landsat and Google Earth fine resolution data over the 1984 to 2016 monitoring period coincided with increased interpreter consistency for the post-2000 data record. The reference data interpretation and quality assurance protocols implemented for LCMAP demonstrate the technical and practical feasibility of using the Landsat archive and intensive human interpretation to produce national, annual reference land cover data over a 30-year period. Protocols to estimate and enhance interpreter consistency are critical elements to document and ensure the quality of these reference data.
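A small sketch of how the interpreter-consistency figures quoted above (overall and per-class agreement between duplicate interpretations) can be computed; the labels below are synthetic stand-ins for the LCMAP reference data.

```python
# Overall and per-class agreement between primary and duplicate interpretations.
import numpy as np
from sklearn.metrics import confusion_matrix

classes = ["Water", "Tree Cover", "Grass/Shrub", "Cropland", "Disturbed"]
rng = np.random.default_rng(1)
primary = rng.choice(classes, 1000)
# Duplicate interpreter agrees ~88% of the time, otherwise picks another label.
duplicate = np.where(rng.random(1000) < 0.88, primary, rng.choice(classes, 1000))

cm = confusion_matrix(primary, duplicate, labels=classes)
overall_agreement = np.trace(cm) / cm.sum()
per_class_agreement = {
    c: cm[i, i] / cm[i].sum() if cm[i].sum() else float("nan")
    for i, c in enumerate(classes)
}
print(f"overall agreement = {overall_agreement:.2%}")
print(per_class_agreement)
```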
Article
Agricultural land abandonment is a common land-use change, making it important to accurately map both where and when abandonment occurred in order to understand its environmental and social outcomes. However, it is challenging to distinguish agricultural abandonment from transitional classes such as fallow land at high spatial resolutions due to the complexity of the change process. To date, no robust approach exists to detect when agricultural land abandonment occurred based on 30-m Landsat images. Our goal here was to develop a new approach to detect the extent and the exact timing of agricultural land abandonment using spatial and temporal segments derived from Landsat time series. We tested our approach for one Landsat footprint in the Caucasus, covering parts of Russia and Georgia, where agricultural land abandonment is widespread. First, we generated agricultural land image objects from multi-date Landsat imagery using a multi-resolution segmentation approach. Second, for each object we estimated the probability that the land was in agricultural use in each year based on Landsat temporal-spectral metrics and a random forest model. Third, we applied temporal segmentation of the resulting agricultural land probability time series to identify change classes and detect when abandonment occurred. We found that our approach was able to accurately separate agricultural abandonment from active agricultural lands, fallow land, and re-cultivation. Our spatial and temporal segmentation approach captured the changes at the object level well (overall mapping accuracy = 97 ± 1%), and performed substantially better than pixel-level change detection (overall accuracy = 82 ± 3%). We found strong spatial and temporal variations in agricultural land abandonment rates in our study area, likely a consequence of regional wars after the collapse of the Soviet Union. In summary, the combination of spatial and temporal segmentation of time series is a robust method to track agricultural land abandonment and may be relevant for other land-use changes as well.
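A hedged sketch of the third step described above: given a per-object time series of annual probabilities that the land was actively farmed (synthetic here, produced by a random forest in the paper), the year of abandonment is taken as the first year of a sustained run of low probabilities; the threshold and run length are illustrative.

```python
# Find the abandonment year from an annual "actively farmed" probability series.
import numpy as np

years = np.arange(1988, 2011)
# Synthetic probability series: cultivated until ~1995, abandoned afterwards.
prob_active = (np.where(years < 1995, 0.9, 0.15)
               + np.random.default_rng(2).normal(0, 0.05, years.size))

def abandonment_year(years, prob_active, threshold=0.5, min_run=3):
    """First year starting a run of >= min_run years below the probability threshold."""
    below = prob_active < threshold
    for i in range(len(years) - min_run + 1):
        if below[i:i + min_run].all():
            return int(years[i])
    return None

print(abandonment_year(years, prob_active))   # expected: 1995
```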
Article
Monitoring and classifying forest disturbance using Landsat time series has improved greatly over the past decade, with many new algorithms taking advantage of the high-quality, cost free data in the archive. Much of the innovation has been focused on use of sophisticated workflows that consist of a logical sequence of processes and rules, multiple statistical functions, and parameter sets that must be calibrated to accurately classify disturbance. For many algorithms, calibration has been local to areas of interest and the algorithm's classification performance has been good under those circumstances. When applied elsewhere, however, algorithm performance has suffered. An alternative strategy for calibration may be to use the locally tested parameter values in conjunction with a statistical approach (e.g., Random Forests; RF) to align algorithm classification with a reference disturbance dataset, a process we call secondary classification. We tested that strategy here using RF with LandTrendr, an algorithm that runs on one spectral band or index. Disturbance detection using secondary classification was spectral band- or index-dependent, with each spectral dimension providing some unique detections and different error rates. Using secondary classification, we tested whether an integrated multispectral LandTrendr ensemble, with various combinations of the six basic Landsat reflectance bands and seven common spectral indices, improves algorithm performance. Results indicated a substantial reduction in errors relative to secondary classification based on single bands/indices, revealing the importance of a multispectral approach to forest disturbance detection. To explain the importance of specific bands and spectral indices in the multispectral ensemble, we developed a disturbance signal-to-noise metric that clearly highlighted the value of shortwave-infrared reflectance, especially when paired with near-infrared reflectance.
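A hedged scikit-learn sketch of the secondary-classification idea: a random forest is trained to align raw per-band or per-index change outputs (synthetic stand-ins for LandTrendr magnitudes) with reference disturbance labels, and the multispectral ensemble simply feeds all bands and indices to one forest.

```python
# Secondary classification: RF aligns per-band change outputs with reference data;
# the multispectral ensemble uses all bands/indices at once.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 2000
reference = rng.integers(0, 2, n)                      # disturbed / undisturbed
bands = ["SWIR1", "NIR", "NBR", "NDVI", "TCW"]
# Synthetic change magnitudes: each band is noisy but carries some signal.
X = np.column_stack([reference * rng.uniform(0.5, 1.5)
                     + rng.normal(0, 1.0, n) for _ in bands])

single_band = RandomForestClassifier(n_estimators=200, random_state=0)
ensemble = RandomForestClassifier(n_estimators=200, random_state=0)

acc_single = cross_val_score(single_band, X[:, :1], reference, cv=5).mean()
acc_ensemble = cross_val_score(ensemble, X, reference, cv=5).mean()
print(f"single band: {acc_single:.3f}  multispectral ensemble: {acc_ensemble:.3f}")
```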
Article
Efficient methodologies for mapping croplands are an essential condition for the implementation of sustainable agricultural practices and for monitoring crops periodically. The increasing spatial and temporal resolution of globally available satellite images, such as those provided by Sentinel-2, creates new possibilities for generating accurate datasets on available crop types in a ready-to-use vector data format. Existing solutions dedicated to cropland mapping, based on high-resolution remote sensing data, are mainly focused on pixel-based analysis of time series data. This paper evaluates how a time-weighted dynamic time warping (TWDTW) method that uses Sentinel-2 time series performs when applied to pixel-based and object-based classifications of various crop types in three different study areas (in Romania, Italy and the USA). The classification outputs were compared to those produced by Random Forest (RF) for both pixel- and object-based image analysis units. The sensitivity of these two methods to the training samples was also evaluated. Object-based TWDTW outperformed pixel-based TWDTW in all three study areas, with overall accuracies ranging between 78.05% and 96.19%; it also proved to be more efficient in terms of computational time. TWDTW achieved comparable classification results to RF in Romania and Italy, but RF achieved better results in the USA, where the classified crops present high intra-class spectral variability. Additionally, TWDTW proved to be less sensitive to the training samples. This is an important asset in areas where inputs for training samples are limited.
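A hedged NumPy sketch of time-weighted dynamic time warping: the usual DTW alignment cost between an observed series and a crop reference pattern is penalized by a logistic function of the time difference between matched dates; the alpha and beta values are illustrative, not those used in the paper.

```python
# Minimal TWDTW: DTW local cost plus a logistic time-difference penalty.
import numpy as np

def twdtw(query, ref, query_doy, ref_doy, alpha=0.1, beta=50.0):
    """Return the accumulated TWDTW cost between two 1-D series."""
    n, m = len(query), len(ref)
    # Local cost = spectral distance + logistic time weight.
    dt = np.abs(query_doy[:, None] - ref_doy[None, :])
    time_weight = 1.0 / (1.0 + np.exp(-alpha * (dt - beta)))
    cost = np.abs(query[:, None] - ref[None, :]) + time_weight

    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(acc[i - 1, j],
                                                 acc[i, j - 1],
                                                 acc[i - 1, j - 1])
    return acc[n, m]

doy = np.arange(0, 360, 10)
ndvi_pixel = 0.3 + 0.4 * np.exp(-((doy - 200) ** 2) / 2000.0)     # summer crop
ndvi_reference = 0.3 + 0.4 * np.exp(-((doy - 190) ** 2) / 2000.0)  # reference pattern
print(twdtw(ndvi_pixel, ndvi_reference, doy, doy))
```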
Article
The ever-increasing volume and accessibility of remote sensing data has spawned many alternative approaches for mapping important environmental features and processes. For example, there are several viable but highly varied strategies for using time series of Landsat imagery to detect changes in forest cover. Performance among algorithms varies across complex natural systems, and it is reasonable to ask if aggregating the strengths of an ensemble of classifiers might result in increased overall accuracy. Relatively simple rules have been used in the past to aggregate classifications among remotely sensed maps (e.g. using majority predictions), and in other fields, empirical models have been used to create situationally specific algorithm weights. The latter process, called "stacked generalization" (or "stacking"), typically uses a parametric model for the fusion of algorithm outputs. We tested the performance of several leading forest disturbance detection algorithms against ensembles of the outputs of those same algorithms based upon stacking using both parametric and Random Forests-based fusion rules. Stacking using a Random Forests model cut omission and commission error rates in half in many cases in relation to individual change detection algorithms, and cut error rates by one quarter compared to more conventional parametric stacking. Stacking also offers two auxiliary benefits: alignment of outputs to the precise definitions built into a particular set of empirical calibration data; and, outputs which may be adjusted such that map class totals match independent estimates of change in each year. In general, ensemble predictions improve when new inputs are added that are both informative and uncorrelated with existing ensemble components. As increased use of cloud-based computing makes ensemble mapping methods more accessible, the most useful new algorithms may be those that specialize in providing spectral, temporal, or thematic information not already available through members of existing ensembles.
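A minimal stacking sketch under stated assumptions: three hypothetical change detection algorithms with different error patterns each emit a binary disturbance call, and a random forest meta-model trained on those calls is compared against a simple majority vote; the error rates are invented so that the learned fusion visibly outperforms the rule-based aggregation.

```python
# Stacked generalization over binary calls from three hypothetical algorithms.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(4)
n = 5000
truth = rng.integers(0, 2, n)   # 1 = disturbed reference pixel

def noisy_call(truth, omission, commission, rng):
    """Flip disturbed pixels with prob=omission and stable pixels with prob=commission."""
    flip = np.where(truth == 1, rng.random(truth.size) < omission,
                    rng.random(truth.size) < commission)
    return np.where(flip, 1 - truth, truth)

alg_calls = np.column_stack([
    noisy_call(truth, 0.30, 0.30, rng),    # noisy but balanced detector
    noisy_call(truth, 0.40, 0.001, rng),   # conservative: rarely false-alarms
    noisy_call(truth, 0.30, 0.30, rng),    # another noisy balanced detector
])

X_train, X_test, y_train, y_test = train_test_split(alg_calls, truth, random_state=0)
stack = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

majority = (X_test.sum(axis=1) >= 2).astype(int)
print("majority vote:", accuracy_score(y_test, majority))
print("stacked RF   :", accuracy_score(y_test, stack.predict(X_test)))
```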
Article
Forest disturbances are sensitive to climate. However, our understanding of disturbance dynamics in response to climatic changes remains incomplete, particularly regarding large-scale patterns, interaction effects and dampening feedbacks. Here we provide a global synthesis of climate change effects on important abiotic (fire, drought, wind, snow and ice) and biotic (insects and pathogens) disturbance agents. Warmer and drier conditions particularly facilitate fire, drought and insect disturbances, while warmer and wetter conditions increase disturbances from wind and pathogens. Widespread interactions between agents are likely to amplify disturbances, while indirect climate effects such as vegetation changes can dampen long-term disturbance sensitivities to climate. Future changes in disturbance are likely to be most pronounced in coniferous forests and the boreal biome. We conclude that both ecosystems and society should be prepared for an increasingly disturbed future of forests.
Article
Spatially-explicit tree species distribution maps are increasingly valuable to forest managers and researchers in light of the effects of climate change and invasive pests on forest resources. Traditional forest classifications are limited to broad classes of forest types with variable accuracy. Advanced remote sensing techniques, such as spectral unmixing and object-based image analysis, offer novel forest mapping approaches by quantifying proportional species composition at the pixel level and utilizing ancillary environmental data for forest classifications. This is particularly useful in the Northeastern region of the United States where species composition is often mixed.
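A small sketch (illustrative, not the authors' workflow) of linear spectral unmixing, the technique mentioned above: per-pixel proportions of hypothetical species endmembers are solved with non-negative least squares and normalized to sum to one.

```python
# Linear spectral unmixing of one mixed pixel with non-negative least squares.
import numpy as np
from scipy.optimize import nnls

# Hypothetical endmember spectra (rows = bands, columns = species).
endmembers = np.array([[0.05, 0.08, 0.04],
                       [0.09, 0.12, 0.07],
                       [0.35, 0.28, 0.40],
                       [0.30, 0.22, 0.33],
                       [0.18, 0.15, 0.22],
                       [0.10, 0.09, 0.14]])

pixel = endmembers @ np.array([0.6, 0.3, 0.1]) + 0.005   # mixed pixel + noise
fractions, _ = nnls(endmembers, pixel)
fractions /= fractions.sum()                              # enforce sum-to-one
print(np.round(fractions, 2))                              # approx [0.6, 0.3, 0.1]
```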
Article
Insect disturbances are important agents of change in forest ecosystems around the globe, yet their spatial and temporal distribution and dynamics are not well understood. Remote sensing has gained much attention in mapping and understanding insect outbreak dynamics. Here, we review the current literature on the remote sensing of insect disturbances. We suggest grouping studies into three insect types: bark beetles, broadleaved defoliators, and coniferous defoliators. By so doing, we systematically compare the sensors and methods used for mapping insect disturbances within and across insect types. Results suggest that there are substantial differences between methods used for mapping bark beetles and defoliators, and between methods used for mapping broadleaved and coniferous defoliators. Following from this, we highlight approaches that are particularly suited for each insect type. Finally, we conclude by highlighting future research directions for remote sensing of insect disturbances. In particular, we suggest: 1) separating insect disturbances from other agents; 2) extending the spatial and temporal domain of analysis; 3) making use of dense time series; 4) operationalizing near-real-time monitoring of insect disturbances; 5) identifying insect disturbances in the context of coupled human-natural systems; and 6) improving reference data for assessing insect disturbances. Since the remote sensing of insect disturbances has recently gained much interest beyond the remote sensing community, the future developments identified here will help integrate remote sensing products into operational forest management. Furthermore, an improved spatiotemporal quantification of insect disturbances will support the inclusion of these processes in regional-to-global ecosystem models.