Source publication
Agricultural production and food security highly depend on crop growth and condition throughout the growing season. Timely and spatially explicit information on crop phenology can assist in informed decision making and agricultural land management. Remote sensing can be a powerful tool for agricultural assessment. Remotely sensed data is ideally su...
Similar publications
Many challenges prevail in cropland mapping over large areas, including dealing with massive volumes of datasets and computing capabilities. Accordingly, new opportunities have been opened at a breakneck pace with the launch of new satellites, the continuous improvements in data retrieval technology, and the upsurge of cloud computing solutions suc...
Citations
... To deal with this challenge, we followed the procedure introduced in Xu et al. (2020, 2021) for handling the missing data. Future work can investigate other methods, such as fusing Landsat time series with MODIS or Sentinel-2 (Gao et al., 2017; Schreier et al., 2021) or reconstructing images contaminated by clouds by employing image reconstruction methods such as generative adversarial networks (GAN) (Zhou et al., 2022). ...
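The excerpt does not detail the gap-handling procedure of Xu et al. (2020, 2021); as a rough illustration of the simplest alternative, the sketch below fills cloud-contaminated observations in a per-pixel index time series by linear interpolation. The function name fill_gaps_linear and all values are hypothetical.

```python
import numpy as np

def fill_gaps_linear(values, valid_mask):
    """Linearly interpolate cloud-contaminated observations in a 1-D time series.

    values: per-pixel index values (e.g., NDVI) ordered by acquisition date
    valid_mask: True where the observation is clear-sky, False where cloudy/missing
    """
    t = np.arange(len(values), dtype=float)
    if valid_mask.sum() < 2:          # not enough clear observations to interpolate
        return np.full_like(values, np.nan, dtype=float)
    return np.interp(t, t[valid_mask], values[valid_mask])

# Toy example: an NDVI series with two cloudy dates marked as missing
ndvi = np.array([0.21, 0.35, np.nan, 0.62, np.nan, 0.55, 0.30])
mask = ~np.isnan(ndvi)
print(fill_gaps_linear(ndvi, mask))
```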
Precise and timely information about crop types plays a crucial role in various agriculture-related applications. However, crop type mapping methods often face significant challenges in cross-regional and cross-time scenarios with high discrepancies between temporal-spectral characteristics of crops from different regions and years. Unsupervised domain adaptation (UDA) methods have been employed to mitigate the problem of domain shift between the source and target domains. Since these methods require source domain data during the adaptation phase, they demand significant computational resources and data storage, especially when large labeled crop mapping source datasets are available. This leads to increased energy consumption and financial costs. To address this limitation, we developed a source-free UDA method for cross-regional and cross-time crop mapping, capable of adapting the source-pretrained models to the target datasets without requiring the source datasets. The method mitigates the domain shift problem by leveraging mutual information loss. The diversity and discriminability terms in the loss function are balanced through a novel unsupervised weighting strategy based on mean confidence scores of the predicted categories. Our experiments on mapping corn, soybean, and the class Other from Landsat image time series in the U.S. demonstrated that the adapted models using different backbone networks outperformed their non-adapted counterparts. With CNN, Transformer, and LSTM backbone networks, our adaptation method increased the macro F1 scores by 12.9%, 7.1%, and 5.8% on average in cross-time tests and by 20.1%, 12.5%, and 8.8% on average in cross-regional tests, respectively. Additionally, in an experiment covering a large study area of 450 km
× 300 km, the adapted model with the CNN backbone network obtained a macro F1 score of 92.6%, outperforming its non-adapted counterpart with a macro F1 score of 89.2%. Our experiments on mapping the same classes using Sentinel-2 image time series in France demonstrated the effectiveness of our method across different countries and sensors. We also tested our method in more diverse agricultural areas in Denmark and France containing six classes. The results showed that the adapted models outperformed the non-adapted models. Moreover, in within-season experiments, the adapted models performed better than the non-adapted models in the vast majority of weeks. These results and their comparison to those obtained by the other investigated UDA methods demonstrated the efficiency of our proposed method for both end-of-season and within-season crop mapping tasks. Additionally, our study showed that the method is modular and flexible in employing various backbone networks. The code and data are available at
https://github.com/Sina-Mohammadi/SFUDA-CropMapping.
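The loss described above is an information-maximization objective; a minimal PyTorch sketch of such a loss is given below, assuming a standard entropy-based formulation. The paper's unsupervised weighting strategy based on mean per-class confidence scores is not reproduced here; it is stood in for by a plain diversity_weight scalar, and the batch of logits is synthetic.

```python
import torch
import torch.nn.functional as F

def mutual_information_loss(logits, diversity_weight=1.0):
    """Generic information-maximization loss used in source-free adaptation.

    Discriminability term: mean per-sample prediction entropy (to be minimized).
    Diversity term: entropy of the mean prediction over the batch (to be maximized).
    diversity_weight balances the two terms; the paper derives this weight from
    mean confidence scores of the predicted categories, which is not shown here.
    """
    probs = F.softmax(logits, dim=1)                       # (batch, classes)
    per_sample_entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1).mean()
    mean_probs = probs.mean(dim=0)                         # marginal class distribution
    marginal_entropy = -(mean_probs * torch.log(mean_probs + 1e-8)).sum()
    return per_sample_entropy - diversity_weight * marginal_entropy

# Toy usage with random target-domain logits from any backbone (CNN/LSTM/Transformer)
logits = torch.randn(32, 3)                                # 3 classes: corn, soybean, other
print(float(mutual_information_loss(logits)))
```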
... Many existing phenology detection approaches using thresholds, derivatives, or trend information offer only a broad overview of phenology (e.g., green-up) or focus on stages with distinct features (e.g., rape flowering) [4]. Thresholding identifies phenological events by predefined value limits on vegetation indices (e.g., [5][6][7][8][9][10]), while a derivative approach captures rapid changes or the shape of a time series that indicate certain phenological transitions (e.g., [11][12][13]). Trend analysis monitors features such as slope to assess crop development over time (e.g., [14][15][16]). ...
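As a minimal illustration of the threshold and derivative approaches mentioned above, the sketch below derives a green-up date from a synthetic NDVI curve; the threshold fraction and the sigmoid test series are illustrative assumptions, not values taken from the cited studies.

```python
import numpy as np

def greenup_by_threshold(doy, ndvi, fraction=0.5):
    """Threshold approach: first day-of-year where NDVI exceeds a fixed
    fraction of the seasonal amplitude (fraction value is illustrative)."""
    threshold = ndvi.min() + fraction * (ndvi.max() - ndvi.min())
    above = np.where(ndvi >= threshold)[0]
    return doy[above[0]] if above.size else None

def fastest_greenup_by_derivative(doy, ndvi):
    """Derivative approach: day-of-year with the steepest NDVI increase."""
    rates = np.gradient(ndvi, doy)
    return doy[int(np.argmax(rates))]

doy = np.arange(60, 240, 10)
ndvi = 0.2 + 0.6 / (1 + np.exp(-(doy - 140) / 12.0))   # synthetic sigmoid green-up
print(greenup_by_threshold(doy, ndvi), fastest_greenup_by_derivative(doy, ndvi))
```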
... The advent of harmonized analysis-ready data at daily or near-daily frequencies has been shown to facilitate phenological growth stage classification with high precision against a ground-measured growth scale useful for agronomic applications [18][19][20]. The combination of sensors to increase temporal frequency and spatial accuracy for detecting key phenological stages is especially promising, as shown by [6,11]. High-frequency data provide more continuous monitoring of within-season changes in crop growth that could be missed with less frequent observations. ...
Accurate identification of crop phenology timing is crucial for agriculture. While remote sensing tracks vegetation changes, linking these to ground-measured crop growth stages remains challenging. Existing methods offer broad overviews but fail to capture detailed phenological changes, which can be partially related to the temporal resolution of the remote sensing datasets used. The availability of higher-frequency observations, obtained by combining sensors and gap-filling, offers the possibility to capture more subtle changes in crop development, some of which can be relevant for management decisions. One such dataset is Planet Fusion, daily analysis-ready data obtained by integrating PlanetScope imagery with public satellite sensor sources such as Sentinel-2 and Landsat. This study introduces a novel method utilizing Dynamic Time Warping applied to Planet Fusion imagery for maize phenology detection, to evaluate its effectiveness across 70 micro-stages. Unlike singular template approaches, this method preserves critical data patterns, enhancing prediction accuracy and mitigating labeling issues. During the experiments, eight commonly employed spectral indices were investigated as inputs. The method achieves high prediction accuracy, with 90% of predictions falling within a 10-day error margin, evaluated based on over 3200 observations from 208 fields. To understand the potential advantage of Planet Fusion, a comparative analysis was performed using Harmonized Landsat Sentinel-2 data. Planet Fusion outperforms Harmonized Landsat Sentinel-2, with significant improvements observed in key phenological stages such as V4, R1, and late R5. Finally, this study showcases the method’s transferability across continents and years, although additional field data are required for further validation.
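A bare-bones version of the alignment step underlying such an approach is sketched below: classic dynamic time warping between an observed index series and a labeled reference season, after which stage labels attached to reference dates could be transferred along the warping path. The study's multi-template strategy, 70 micro-stages, and Planet Fusion inputs are not reproduced; all arrays are toy data.

```python
import numpy as np

def dtw_path(query, reference):
    """Classic dynamic time warping; returns the warping path as (i, j) pairs."""
    n, m = len(query), len(reference)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(query[i - 1] - reference[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrack from the end to recover the alignment
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

# A field's observed NDVI is aligned to a labeled reference season; phenology
# labels attached to reference dates can then be transferred via the path.
reference = np.array([0.2, 0.3, 0.5, 0.7, 0.8, 0.7, 0.5, 0.3])
observed = np.array([0.2, 0.25, 0.4, 0.65, 0.8, 0.75, 0.55, 0.35])
print(dtw_path(observed, reference)[:5])
```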
... According to the existing literature, two classification approaches exist: machine learning (ML) and deep learning (DL). ML algorithms such as random forest (RF), k-nearest neighbors (kNN), and support vector machines (SVM) have been applied in various plant studies (Prins and Niekerk 2020; Chabalala et al. 2022; Schreier et al. 2021). Although remarkable results were achieved in classifying certain crop types, difficulties in distinguishing between crop types were nevertheless reported. ...
Accurate and up-to-date crop-type maps are essential for efficient management and well-informed decision-making, allowing accurate planning and execution of agricultural operations in the horticultural sector. The assessment of crop-related traits, such as the spatiotemporal variability of phenology, can improve decision-making. The study aimed to extract phenological information from Sentinel-2 data to identify and distinguish between fruit trees and co-existing land use types on subtropical farms in Levubu, South Africa. However, the heterogeneity and complexity of the study area—composed of smallholder mixed cropping systems with overlapping spectra—constituted an obstacle to the application of optical pixel-based classification using machine learning (ML) classifiers. Given the socio-economic importance of fruit tree crops, the research sought to map the phenological dynamics of these crops using deep neural network (DNN) and optical Sentinel-2 data. The models were optimized to determine the best hyperparameters to achieve the best classification results. The classification results showed the maximum overall accuracies of 86.96%, 88.64%, 86.76%, and 87.25% for the April, May, June, and July images, respectively. The results demonstrate the potential of temporal phenological optical-based data in mapping fruit tree crops under different management systems. The availability of remotely sensed data with high spatial and spectral resolutions makes it possible to use deep learning models to support decision-making in agriculture. This creates new possibilities for deep learning to revolutionize and facilitate innovation within smart horticulture.
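The architecture and hyperparameters of the DNN used in the study are not given in this excerpt; the sketch below only illustrates the general workflow of hyperparameter optimization for a small neural network classifier on per-pixel spectral features, using scikit-learn's MLPClassifier and GridSearchCV with entirely synthetic data.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical per-pixel features: 10 Sentinel-2 bands for 1000 labelled samples,
# 4 classes standing in for fruit trees and co-existing land-use types.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = rng.integers(0, 4, size=1000)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

pipeline = make_pipeline(StandardScaler(), MLPClassifier(max_iter=500, random_state=0))
search = GridSearchCV(
    pipeline,
    param_grid={"mlpclassifier__hidden_layer_sizes": [(32,), (64, 32)],
                "mlpclassifier__alpha": [1e-4, 1e-3]},
    cv=3,
)
search.fit(X_train, y_train)
print(search.best_params_, search.score(X_test, y_test))
```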
... In our work, we employed the procedure introduced in Xu et al. (2020) to deal with missing data. To further reduce the classification uncertainties caused by the presence of gaps in time series, future studies can be dedicated to either fusing Landsat time series with other products, including MODIS or Sentinel-2 (Gao et al., 2017; Schreier et al., 2021), or to incorporating image reconstruction methods such as generative adversarial networks (GAN) (Zhou et al., 2022) to successfully reconstruct images contaminated by clouds. ...
... By contrast, free access to the Landsat, Sentinel, MODIS, and other satellite archives has revolutionized the use of satellite imagery, especially in conservation agriculture (Liu et al., 2020; Wulder et al., 2019). Many studies use free satellite imagery for land-use monitoring and change detection (Al-Juboury & Al-Rubaye, 2021; Chen & Wang, 2010; Chughtai et al., 2021; Fonji & Taff, 2014), crop identification and mapping (Belgiu & Csillik, 2018; Xun et al., 2021; Yan et al., 2021), phenology mapping using time series (Schreier et al., 2021; Zhao et al., 2021), and other applications. In addition to free satellite imagery, many private satellite companies, such as Planet Labs, provide high spatial and temporal resolution time series imagery for a fee (Huang & Roy, 2021). ...
... Gan et al. (2020) found that the green-up date of winter wheat extracted from the MODIS vegetation index, especially the Normalized Difference Phenology Index, had a significant correlation with the ground-observed green-up date from agrometeorological stations in the Huanghuai region of China [22]. Recently, Schreier et al. (2021) derived the phenometrics of the main crops in the Bila Tserkva district of Ukraine by fusing time series from Landsat 8 and Sentinel-2 with MODIS data and found deviations of only a few days from field-reported growth stages for most crops [23]. ...
Crop phenology is considered to be an important indicator reflecting the biophysical and physiological processes of crops facing climate change. Therefore, quantifying crop phenology change and its relationship with climate variables is of great significance for developing agricultural management and adaptation strategies to cope with global warming. Based on the Moderate Resolution Imaging Spectroradiometer (MODIS) Enhanced Vegetation Index (EVI) product, the winter wheat green-up date, heading date, jointing date, and maturity date were first retrieved by Savitzky–Golay (S-G) filtering and threshold methods, and then the variation of winter wheat phenology and its correlation with mean (Tmean), minimum (Tmin), and maximum (Tmax) temperature and precipitation (Pre) during 2003–2019 were comprehensively analyzed in Shandong Province, China. Results showed that the green-up date, jointing date, heading date, and maturity date generally ranged from 50–70 DOY, 75–95 DOY, 100–120 DOY, and 130–150 DOY, respectively. Winter wheat phenology presented a spatial pattern in which the South was earlier than the North and inland regions were earlier than coastal regions. For every 1° increase in latitude/longitude, the green-up date, jointing date, heading date, and maturity date were delayed by 3.93 days/0.43 days, 2.31 days/1.19 days, 2.80 days/1.14 days, and 2.12 days/1.09 days, respectively. The green-up date and jointing date were both advanced in the West and delayed in the Eastern coastal areas and the South, while the heading date and maturity date showed a widespread advance and a delayed tendency, respectively, in Shandong Province; however, the trends of winter wheat phenological changes were generally insignificant. In addition, the green-up date, jointing date, and heading date generally presented a significant negative correlation with mean/minimum temperature, while the maturity date was positively associated with the current-month maximum temperature, notably in the West of Shandong Province. Regarding precipitation, a generally insignificant relationship with winter wheat phenology was detected. Results in this study are anticipated to provide insight into the impact of climate change on winter wheat phenology and to provide a reference for the agricultural production and field management of winter wheat in Shandong Province, China.
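As a compact illustration of the retrieval step described above (Savitzky-Golay smoothing followed by a threshold rule), the sketch below extracts a green-up date from a synthetic EVI series; the amplitude fraction, window length, and test curve are illustrative assumptions rather than the paper's parameters.

```python
import numpy as np
from scipy.signal import savgol_filter

def greenup_from_evi(doy, evi, amplitude_fraction=0.2, window=7, polyorder=3):
    """Smooth an EVI time series with a Savitzky-Golay filter and return the
    first day-of-year where the smoothed curve exceeds a fraction of the
    seasonal amplitude (threshold value is illustrative)."""
    smoothed = savgol_filter(evi, window_length=window, polyorder=polyorder)
    threshold = smoothed.min() + amplitude_fraction * (smoothed.max() - smoothed.min())
    above = np.where(smoothed >= threshold)[0]
    return int(doy[above[0]]) if above.size else None

doy = np.arange(30, 170, 8)                               # 16-day-like sampling, DOY 30-170
evi = 0.15 + 0.45 / (1 + np.exp(-(doy - 75) / 10.0))      # synthetic winter-wheat green-up
print(greenup_from_evi(doy, evi))
```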
... To analyze the impacts of dates of input images on the fusion accuracy of variables with different temporal variation patterns, dense high spatial resolution images are required. Due to the limitation of revisit period and cloud cover [29,30], images acquired by a single satellite sensor such as Landsat are temporally sparse [31,32]. To obtain a sufficient number of high spatial resolution images, in this study, the HLS (Harmonized Landsat-8 Sentinel-2) product (version 1.4) was used as the fine resolution input images for spatio-temporal fusion (HLS data provided by NASA can be downloaded from https://hls.gsfc.nasa.gov/data/, ...
Dense time series of remote sensing images with high spatio-temporal resolution are critical for monitoring land surface dynamics in heterogeneous landscapes. Spatio-temporal fusion is an effective solution for obtaining such time series images. Many spatio-temporal fusion methods have been developed for producing high spatial resolution images at frequent intervals by blending fine spatial resolution images and coarse spatial resolution images. Previous studies have revealed that the accuracy of fused images depends not only on the fusion algorithm, but also on the input image pairs being used. However, the impact of input image dates on the fusion accuracy for time series with different temporal variation patterns remains unknown. In this paper, the impact of input image pairs on the fusion accuracy was evaluated for monotonic linear change (MLC), monotonic non-linear change (MNLC), and non-monotonic change (NMC) time periods, and optimal selection strategies of input image dates for the different situations are proposed. The 16-day composited NDVI time series (i.e., the Collection 6 MODIS NDVI product) were used to represent the temporal variation patterns of land surfaces in the study areas. To obtain sufficient observation dates to evaluate the impact of input image pairs on the spatio-temporal fusion accuracy, we utilized the Harmonized Landsat-8 Sentinel-2 (HLS) data. ESTARFM was selected as the spatio-temporal fusion method for this study. The results show that the impact of input image dates on the accuracy of spatio-temporal fusion varies with the temporal variation patterns of the time periods being fused. For the MLC period, the fusion accuracy at the prediction date (PD) is linearly correlated with the time interval between the change date (CD) of the input image and the PD, but the impact of the input image date on the fusion accuracy at the PD is not very significant. For the MNLC period, the fusion accuracy at the PD is non-linearly correlated with the time interval between the CD and the PD, and the impact of this time interval on the fusion accuracy is more significant for the MNLC than for the MLC periods. Given similar changes in the time interval between the CD and the PD, the increase in R2 of the fusion results for the MNLC periods is over ten times larger than that for the MLC periods. For the NMC period, a shorter time interval between the CD and the PD does not lead to higher fusion accuracies. On the contrary, it may lower the fusion accuracy. This study suggests that the temporal variation patterns of the data must be taken into account when selecting optimal dates of input images in the fusion model.
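For readers unfamiliar with blending algorithms, the sketch below shows only the basic temporal-difference idea that STARFM-family methods build on; ESTARFM itself additionally uses two fine/coarse input pairs, spectrally similar neighboring pixels, and conversion coefficients, none of which are implemented here.

```python
import numpy as np

def temporal_difference_fusion(fine_base, coarse_base, coarse_pred):
    """Minimal illustration of the idea behind blending algorithms: predict a
    fine-resolution image at the prediction date by adding the coarse-resolution
    temporal change to the fine base image. This is a simplification, not ESTARFM."""
    return fine_base + (coarse_pred - coarse_base)

# Toy 4x4 NDVI patches; the coarse images are assumed to be resampled to the fine grid.
fine_base = np.full((4, 4), 0.50)
coarse_base = np.full((4, 4), 0.48)
coarse_pred = np.full((4, 4), 0.60)          # vegetation greened up between the two dates
print(temporal_difference_fusion(fine_base, coarse_base, coarse_pred))
```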
... Crop type detection is a critical input for food security, land use monitoring, and water resource management, affecting climate change and biodiversity. Knowledge of crop types and their distribution across landscapes is essential for the sustainable management and development of agricultural lands (Schreier et al., 2020). The water consumption of various crop types is a challenging issue in water resource management that can be addressed by cropland mapping and monitoring. ...
Crop type detection is of great importance in water resource allocation and planning, particularly in the arid and semi-arid regions of Iran. Landsat-OLI 16-day images are invaluable sources for crop monitoring, including crop type detection, crop yield prediction, and crop pattern studies. Although many classification methods such as decision tree (DT), support vector machine (SVM), and maximum likelihood (ML) have been applied for crop type mapping, recent research often uses an object-based classification approach. In this study, an object-based image analysis (OBIA) classifier based on a rule-based decision tree (RBDT) and object-based nearest neighbor (OBNN) was used to delineate five common crop types (wheat and barley together in one class, rice, multiple crops (MC), alfalfa, and spring crops) in the city of Isfahan and nearby areas. The classification was applied in five scenarios using different vegetation indices, including the normalized difference vegetation index (NDVI), normalized difference water index (NDWI), green normalized difference vegetation index (GNDVI), and their combination. The properties and accuracy of all scenarios were assessed using both the class separation distance matrix and the confusion matrix. The overall accuracy of classification using only one vegetation index was lower than that of the other scenarios; it was lowest for GNDVI, at 37%, whereas combining indices yielded better accuracy. In the final map, based on the combination of NDVI, GNDVI, and NDWI, the overall accuracy and kappa reached 88% and 0.83, respectively. Comparing the individual accuracies of the different crops showed that the MC class had the lowest accuracy (66%) and the wheat-barley class had the highest (94.8%); the accuracies of the other crop types lie between 66% and 94.8%.
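The three indices used in the scenarios can be computed directly from band reflectances; a small sketch follows. Note that NDWI is defined differently across studies (Green/NIR versus NIR/SWIR); the NIR/SWIR form used here is an assumption, and the reflectance values are toy numbers.

```python
import numpy as np

def safe_ratio(a, b):
    """Normalized difference (a - b) / (a + b) with divide-by-zero protection."""
    return np.where((a + b) == 0, 0.0, (a - b) / (a + b))

def vegetation_indices(nir, red, green, swir):
    """NDVI, GNDVI, and an NDWI variant from Landsat-OLI style band reflectances."""
    return {
        "NDVI": safe_ratio(nir, red),
        "GNDVI": safe_ratio(nir, green),
        "NDWI": safe_ratio(nir, swir),   # NIR/SWIR form assumed; other studies use Green/NIR
    }

# Toy reflectance values for a vegetated pixel
print(vegetation_indices(nir=np.array([0.45]), red=np.array([0.08]),
                         green=np.array([0.10]), swir=np.array([0.20])))
```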
... This implies that apart from the crop traits presented here, not only other vegetation models (e.g., related to non-photosynthetic vegetation, Amin et al., 2021), but also those targeting other land cover types, such as models dedicated to the quantification of water variables (e.g., Ruescas et al., 2018) or soil properties (e.g., Vaudour et al., 2019) can be provided. Furthermore, the presented workflow can serve as a foundation for the computation of higher-level products, e.g., time series processing for the calculation of phenology metrics (e.g., Misra et al., 2020;Htitiou et al., 2020;Salinero-Delgado et al., 2021), fusion or assimilation of multiple products (e.g., Pipia et al., 2019;Schreier et al., 2021;Sadeh et al., 2021). At the same time, although this work focused on the processing of S2 TOA data, it must be emphasized that essentially the EBD-GPR retrieval models can be developed for any optical sensor data with the ALG-ARTMO software framework. ...
The unprecedented availability of optical satellite data in cloud-based computing platforms, such as Google Earth Engine (GEE), opens new possibilities to develop crop trait retrieval models from the local to the planetary scale. Hybrid retrieval models are of interest to run in these platforms as they combine the advantages of physically-based radiative transfer models (RTM) with the flexibility of machine learning regression algorithms. Previous research with GEE primarily relied on processing bottom-of-atmosphere (BOA) reflectance data, which requires atmospheric correction. In the present study, we implemented hybrid models directly into GEE for processing Sentinel-2 (S2) Level-1C (L1C) top-of-atmosphere (TOA) reflectance data into crop traits. To achieve this, a training dataset was generated using the leaf-canopy RTM PROSAIL in combination with the atmospheric model 6SV. Gaussian process regression (GPR) retrieval models were then established for eight essential crop traits, namely leaf chlorophyll content, leaf water content, leaf dry matter content, fractional vegetation cover, leaf area index (LAI), and upscaled leaf variables (i.e., canopy chlorophyll content, canopy water content and canopy dry matter content). An important pre-requisite for implementation into GEE is that the models are sufficiently light in order to facilitate efficient and fast processing. Successful reduction of the training dataset by 78% was achieved using the active learning technique Euclidean distance-based diversity (EBD). With the EBD-GPR models, highly accurate validation results of LAI and upscaled leaf variables were obtained against in situ field data from the validation study site Munich-North-Isar (MNI), with normalized root mean square errors (NRMSE) from 6% to 13%. Using an independent validation dataset of similar crop types (Italian Grosseto test site), the retrieval models showed moderate to good performances for canopy-level variables, with NRMSE ranging from 14% to 50%, but failed for the leaf-level estimates. Obtained maps over the MNI site were further compared against Sentinel-2 Level 2 Prototype Processor (SL2P) vegetation estimates generated from the ESA Sentinels' Application Platform (SNAP) Biophysical Processor, proving high consistency of both retrievals (R2 from 0.80 to 0.94). Finally, thanks to the seamless GEE processing capability, the TOA-based mapping was applied over the entirety of Germany at 20 m spatial resolution including information about prediction uncertainty. The obtained maps provided confidence of the developed EBD-GPR retrieval models for integration in the GEE framework and national scale mapping from S2-L1C imagery. In summary, the proposed retrieval workflow demonstrates the possibility of routine processing of S2 TOA data into crop traits maps at any place on Earth as required for operational agricultural applications.
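A simplified reading of the EBD-plus-GPR workflow is sketched below: greedy Euclidean-distance-based diversity selection to shrink a simulated training set, followed by Gaussian process regression. This is not the ARTMO/ALG implementation used by the authors; the synthetic "PROSAIL-like" data, the kernel choice, and the subset size are assumptions (only the ~78% reduction is mirrored numerically).

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def ebd_subset(X, n_keep, seed=0):
    """Greedy Euclidean-distance-based diversity selection: iteratively keep the
    sample farthest from those already selected (a simplified reading of EBD)."""
    rng = np.random.default_rng(seed)
    selected = [int(rng.integers(len(X)))]
    min_dist = np.linalg.norm(X - X[selected[0]], axis=1)
    while len(selected) < n_keep:
        candidate = int(np.argmax(min_dist))
        selected.append(candidate)
        min_dist = np.minimum(min_dist, np.linalg.norm(X - X[candidate], axis=1))
    return np.array(selected)

# Hypothetical RTM-simulated training set: 2000 spectra (10 bands) -> a LAI-like target
rng = np.random.default_rng(1)
X = rng.uniform(size=(2000, 10))
y = X[:, 0] * 5 + rng.normal(scale=0.1, size=2000)        # stand-in for PROSAIL outputs

keep = ebd_subset(X, n_keep=440)                           # ~78% reduction of the pool
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(X[keep], y[keep])
print(gpr.predict(X[:3]))
```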
... Convolutional neural networks are utilized under different methodological perspectives, both addressing transferability: instance segmentation (Ghorbanzadeh et al., 2020) and a graph-based approach (Gao et al., 2021). Advances in crop mapping are illustrated by innovative tools of image fusion for deriving phenometrics (Schreier et al., 2020) and object-based radiative transfer modelling (RTM) (Graf et al., 2020). The current challenges of fine-scale mapping of natural vegetation and land cover dynamics are shown by spectral mixture analysis for gravel mapping (Stančič et al., 2020), object-based weed classification using an open-source workflow (Lam et al., 2020), and vegetation indices for phenological mapping using online processing (Kamenova & Dimitrov, 2020). ...
The transforming world evokes changes in social, environmental, and economic dimensions, pushed by the digitalisation of many, if not all, aspects of our lives. Satellite Earth observation, while being “digital” from early on, has experienced a boost by digitalisation in recent years, with new trends of cloud processing, data cube infrastructure, computer vision, and machine learning, at unprecedented speeds. This Special Issue on “Digital | Earth | Observation” is dedicated to the fruitful interplay between the Digital Earth concept and Earth observation, embedded in the great technological trends in this field, and demonstrates how this potential can be used in various application contexts.