Crop-specic phenomapping by fusing Landsat and Sentinel data with MODIS
time series
Jonas Schreier
a,b
, Gohar Ghazaryan
a,b
and Olena Dubovyk
a,b
a
Center for Remote Sensing of Land Surfaces (ZFL), University of Bonn, Bonn, Germany;
b
Remote Sensing Research Group (RSRG),
Department of Geography, University of Bonn, Bonn, Germany
ABSTRACT
Agricultural production and food security highly depend on crop growth and condition throughout the growing season. Timely and spatially explicit information on crop phenology can assist in informed decision making and agricultural land management. Remote sensing can be a powerful tool for agricultural assessment. Remotely sensed data is ideally suited for both large-scale and field-level analyses due to the wide variability of datasets with diverse spatiotemporal resolution. To derive crop-specific phenometrics, we fused time series from Landsat 8 and Sentinel 2 with Moderate-resolution Imaging Spectroradiometer (MODIS) data. Using a linear regression approach, synthetic Landsat 8 and Sentinel 2 data were created based on MODIS imagery. This fusion-process resulted in synthetic imagery with radiometric characteristics of original Landsat 8 and Sentinel 2 data. We created four different time series using synthetic data as well as a mix of original and synthetic data. The extracted time series of phenometrics consisting of both synthetic and original data showed high detail in the final phenomaps which allowed intra-field level assessment of crops. In-situ field reports were used for validation. Our phenometrics showed only a few days of deviation for most crops and datasets. The proposed data integration method can be applied in areas where data from a single high-resolution source is scarce.
ARTICLE HISTORY
Received 27 November 2019
Revised 25 September 2020
Accepted 30 September 2020
KEYWORDS
Data-fusion; phenometrics;
high-resolution; crops
Introduction
Agricultural crop production and food security highly
depend on crop growth and condition. Timely and spa-
tially explicit information on growth stages is important
for agricultural land management. Remote sensing ana-
lysis opens up possibilities to conduct crop-assessment
across large areas. Remotely sensed data has long been
used for vegetation analyses (Kogan, 1987; R. Lee et al., 2002; Tian et al., 2016). Vegetation index (VI) time series are commonly used as the basis for deriving phenological metrics (R. Lee et al., 2002; Reed & Brown, 2005). The
use of VI time series to derive phenology development
patterns from space is well documented. However, most
studies rely on the use of coarse spatial resolution
data from the National Oceanic and Atmospheric
Administration Advanced Very High-Resolution
Radiometer (NOAA AVHRR) (R. Lee et al., 2002; Tian
et al., 2016) or moderate resolution sensors, such as
Moderate-resolution Imaging Spectroradiometer
(MODIS) (Karlsen et al., 2008; R. Lee et al., 2002).
SPOT data has also successfully been applied for pheno-
metric analyses using TIMESAT, for example, for winter
wheat in China (Lu et al., 2014). High spatial resolution
data, such as from Landsat 8 or Sentinel 2, has been
applied less frequently for phenology analyses (Eklundh
et al., 2012; Skakun et al., 2019).
Due to cloud-cover, high spatial resolution data
might be scarce for certain study areas and time-
frames. Data fusion has been applied by several studies
to overcome a lack of good-quality imagery. Feng Gao
et al. (2017) fused Landsat 5, 7, 8, and MODIS data to
map phenology at field scale for Iowa, the United
States of America (USA). Images were fused using
the spatial and temporal adaptive reflectance fusion
model (STARFM) approach (Gao et al., 2015). The
results of this study were reasonable; however, the
amount of cloud-contaminated Landsat imagery
impacted the analyses significantly. Q. Li et al. (2015)
fused MODIS with Landsat data using the enhanced
algorithm ESTARFM for crop classification. The
authors achieved a good degree of accuracy and
underline that a fusion of MODIS and Landsat is
practical (Q. Li et al., 2015). Complex approaches
like (E)STARFM are one option to achieve data fusion.
On the other hand, simpler methods, such as linear regression approaches, can yield satisfying fusion results with significantly reduced computing intensity, even with high-resolution data (M. H. Lee et al., 2017; Sankey &
Glenn, 2011). Siachalou et al. (2015) fused Landsat 7
data with very high-resolution Rapideye imagery for
vegetation monitoring and classification, which
resulted in satisfactory accuracy. The authors pro-
posed an additional usage of Sentinel 2 data for “a
more dense set of observations” (Siachalou et al.,
2015). Skakun et al. (2019) assessed winter wheat
yield in central Ukraine using a 30 m spatial resolu-
tion harmonized Landsat 8 and Sentinel 2 product as
input for yield models (Claverie et al., 2018; Skakun
et al., 2019). VIs as well as surface reflectance values
were used for this task. Results of this study under-
lined the potential of fused high-resolution data for
crop and phenology-related analyses.
High-resolution optical sensors potentially allow
for better differentiation of fields and visibility of patterns at the intra-field level, and are less prone to mixed pixels, which may impact data quality (ESA, 2015).
This offers new perspectives in terms of remotely
sensed phenology.
In this paper, we focus on the integration of high-
resolution Sentinel 2, Landsat 8, and MODIS imagery
to analyze crop-specific phenology for a case study in
Ukraine. The objectives of this study were: (i) genera-
tion and analysis of detailed and high-resolution phe-
nometrics based on multi-sensor satellite imagery, as
well as (ii) bridging data gaps using a data-fusion
approach involving Sentinel, Landsat, and MODIS
data.
Materials and methods
Study area
Bila Tserkva district in the Kyiv region, central Ukraine, was chosen as the test site (Figure 1); the area has been used as a test site in previous crop mapping studies (Kussul et al., 2012, 2015; Skakun et al., 2016).
Ukraine is one of Europe’s top agricultural produ-
cers. The state-wide major crops include wheat, sun-
flower, rapeseed, soybeans, sugar beet, and maize
(Food and Agriculture Organization, 2008; World
Data Center, n.d.). Bila Tserkva district, the focus area of this study, is covered by more than 60% cropland. We focused on the analysis of the main crops in this region, namely sunflower, maize, and soy.
Data and preprocessing
The 250 m resolution MODIS, 30 m resolution
Landsat 8 and high-resolution 10 m Sentinel 2A data
were used for analysis (Barsi et al., 2014; ESA, n.d.;
MODIS Web, n.d.). We acquired data for the year
2016. Due to cloud contamination, data from February and earlier could not be included.
Therefore, data from March to October was selected
for all sensors to preserve a homogenous period across
datasets. We acquired the 8-day MODIS surface reflectance product (MOD09Q1), which provides red and near-infrared (NIR) bands corrected for atmospheric and aerosol effects (NASA LP DAAC, 2014). These data were later used to calculate VIs at high temporal frequency (MODIS Web, n.d.; NASA LP DAAC, 2014).
We acquired atmospherically corrected Red and
NIR bands of Landsat 8 surface-reflectance products.
We used integrated fmask data for cloud-masking (Z.
Zhu et al., 2015; Z. Zhu & Woodcock, 2014). Three
overlapping scenes from two paths with different overpassing times covered the study area, resulting in a dense time series with a revisit interval of about a week (Figure 2).

Figure 1. The location of the study area in the Kyiv region, Ukraine.
Next, we acquired high-resolution Sentinel 2A ima-
gery. Whole Level-1C scenes were downloaded for the desired time frame (ESA, 2015; Müller-Wilm, 2016).
Only tile T35UQR was needed to cover our study area.
Revisiting times differ from region to region for
Sentinel 2A (Li & Roy, 2017). For the Bila Tserkva
district, acquisition intervals varied between three and
seven days. Sentinel 2B had not yet been launched during the 2016 study period.
The availability of the cloud-free observations var-
ied highly amongst sensors. MODIS provided high-
quality eight-day composites, mitigating issues caused
by cloud coverage (Parkinson et al., 2000). In contrast,
Landsat 8 and Sentinel 2A were heavily affected by
cloudy conditions during the crop growing season in
2016 (Figure 3). There were also scenes with low overall cloud cover but with most clouds located inside the study area, as well as images with many clouds located mostly outside of the study area.
Moreover, some images were taken during dense
haze conditions. Thus, we selected images for further
analyses not only based on the scenes’ cloud-cover
percentages but also based on manual assessment of
each available scene.
We conducted atmospheric correction using ESA’s
Sentinel Application Platform (SNAP) as well as the
Sen2Cor plug-in (Mather & Koch, 2011; Müller-Wilm,
2016). Sen2Cor (v2.2.1) was also used for cloud-masking.
The final selection of the 12 Landsat 8 and 14
Sentinel 2A images used for analysis is shown in
Figure 3. For both Landsat 8 and Sentinel 2A, images
for the early season (March) were mostly taken during
obstructive weather conditions and had to be omitted.
Cloudy conditions resulted in large data gaps for the
early growing stages from April to June for both sensors.
We used all preprocessed images to calculate the
Normalized Difference Vegetation Index (NDVI)
(Rouse et al., 1974). NDVI time series were used as
the baseline of the phenology analysis.
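As a minimal sketch of this step (band file names, nodata handling, and the output path are illustrative assumptions, not the exact products used here), NDVI can be computed per scene from the red and NIR surface-reflectance bands:

```python
import numpy as np
import rasterio

def compute_ndvi(red_path, nir_path, out_path):
    """Compute NDVI = (NIR - Red) / (NIR + Red) for one scene and write it to disk."""
    with rasterio.open(red_path) as red_src, rasterio.open(nir_path) as nir_src:
        red = red_src.read(1).astype("float32")
        nir = nir_src.read(1).astype("float32")
        profile = red_src.profile
    # A purely multiplicative reflectance scale factor (e.g. 0.0001) cancels in the
    # ratio, so scaled integer surface reflectance can be used directly.
    denom = nir + red
    ndvi = np.full(red.shape, np.nan, dtype="float32")
    np.divide(nir - red, denom, out=ndvi, where=denom != 0)
    profile.update(dtype="float32", count=1, nodata=np.nan)
    with rasterio.open(out_path, "w", **profile) as dst:
        dst.write(ndvi, 1)

# hypothetical band files for one Landsat 8 scene (band 4 = red, band 5 = NIR)
compute_ndvi("LC08_red_b4.tif", "LC08_nir_b5.tif", "LC08_ndvi.tif")
```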
For each crop selected for this study (maize, soy, sunflower), we selected fields for phenology analysis, considering the availability of field data for validation. For the identification of crop fields, we used the 2016 crop map provided by the Space
Research Institute (Kussul et al., 2012, 2015; Skakun
et al., 2016). Only fields inside the region covered by
all Sentinel 2A and Landsat 8 scenes were considered
for the analysis. In total, we chose 30 maize fields, 20
soy fields, and 18 sunflower fields.
In-situ phenology data per crop was provided by
the State Hydrometeorological Services of Ukraine.
The dataset included different growing-phases and
their dates on the district level for five different
crops for the year 2016. We used this dataset for
validation of the remotely sensed phenometrics. In
this dataset, there were some gaps. For maize, the
harvest data was missing for Bila Tserkva in 2016.
Therefore, the latest stage recorded was full matur-
ity, which was subsequently taken as an estimate for
the end of the season. Two phenology stages could
be linked to the start of the season for maize: emer-
gence and fifth leaf. We picked the fifth leaf stage as
a proxy for the start of the season as emerging
plants most likely were too small to be detected by
the sensors. Sunflower statistics contained harvest dates as well as the second leaf stage. However, these were only recorded for a district about 60 km east of the study area; consequently, these dates could only be used as an estimate. Soybean records contained fifth leaf and harvest dates for Bila Tserkva, which we used as validation data.

Figure 2. Three Landsat 8 scenes (in blue) cover the study area (in red).
Methodology
Figure 4 summarizes the workflow of the analysis.
Data fusion
To increase the temporal density of Landsat 8 and
Sentinel 2 images, we applied a data-fusion approach.
To create synthetic Sentinel 2 and Landsat 8 data, we
used a linear regression model based on temporally
dense MODIS imagery. This approach was chosen as it
is straightforward in implementation and it reportedly
yields good results (Sankey & Glenn, 2011). Due to
ground truth data only being available for the year
2016 as well as large data-gaps caused by clouds espe-
cially during the beginning of the growing season,
a fusion based on Sentinel-2 and Landsat 8 data
alone was not feasible. Additionally, only Sentinel-2A
data was available as Sentinel-2B was not in orbit by
2016, further reducing the amount of imagery (Li &
Roy, 2017).
Images from the three sensors used in this study
were characterized by slightly different radiometric
and spatial resolution (Barsi et al., 2014; ESA, n.d.;
MODIS Web, n.d.). All three analyzed types of imagery
possess sufficient spatial resolution for the analyses of
agricultural fields in the study area. For the image
fusion, an initial random point sampling was conducted
on a sample-pair of Landsat and MODIS images
acquired on the same day (21.08.2016). This first assessment showed the viability of NDVI as the regression variable, with an R2-value of 0.69. As a normalized index,
NDVI is less vulnerable to remaining traces of atmo-
spheric effects on band reflectances and radiometric
discrepancies (Zhu & Lei, 2018).
The implementation of the fusion was conducted as
follows. First, we paired each acquired MODIS scene
(8-day temporal interval) with its temporally closest
match of Sentinel 2 (or Landsat 8). Second, each
MODIS image was resampled to match Sentinel 2 (or
Landsat 8) resolution. Third, we used a 3500-point
random sampling of NDVI values to conduct a linear
regression between each image-pair of one MODIS
image and one Sentinel-2 (or Landsat 8) image.
Fourth, resulting regression models were then applied
to the spatially resampled MODIS scenes for every MODIS 8-day time step. This fusion routine resulted in two datasets: a synthetic Sentinel 2 time series and a synthetic Landsat 8 time series. Both are based on MODIS and as such have an interval of 8 days.

Figure 3. All available Landsat 8 and Sentinel 2 scenes from March to October 2016 for the study site (a). Final image selection after filtering unusable imagery for Landsat 8 (b) and Sentinel 2 (c). Dark bars represent mostly cloud-free conditions specifically within the study area. Brighter bars signify images with some cloud cover over the study area. Because of the size of the overall scenes, even at small scene cloud-cover values, the study area might be affected. Scene cloud cover [%] is indicated above the bars.
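A compact sketch of the four-step routine described above is given below, assuming per-date NDVI rasters that are already co-registered to a common projection; the helper names, file handling, and bilinear resampling choice are illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np
import rasterio
from rasterio.enums import Resampling
from scipy import stats

def resample_to(src_path, like_path):
    """Read a MODIS NDVI raster and resample it to the grid of a fine-resolution scene."""
    with rasterio.open(like_path) as like:
        out_shape = (like.height, like.width)
    with rasterio.open(src_path) as src:
        return src.read(1, out_shape=out_shape,
                        resampling=Resampling.bilinear).astype("float32")

def fit_pair(modis, fine, n_points=3500, seed=0):
    """Step 3: linear regression on a random sample of co-located NDVI values."""
    valid = np.isfinite(modis) & np.isfinite(fine)
    rows, cols = np.where(valid)
    rng = np.random.default_rng(seed)
    idx = rng.choice(rows.size, size=min(n_points, rows.size), replace=False)
    res = stats.linregress(modis[rows[idx], cols[idx]], fine[rows[idx], cols[idx]])
    return res.slope, res.intercept

def synthetic_series(modis_paths, modis_dates, fine_paths, fine_dates):
    """Steps 1, 2 and 4: pair each 8-day MODIS scene with its temporally closest
    fine-resolution scene, fit the per-pair model, and apply it to the resampled
    MODIS NDVI to predict a synthetic fine-resolution scene."""
    synthetic = []
    for m_path, m_date in zip(modis_paths, modis_dates):
        # Step 1: temporally closest Landsat 8 (or Sentinel 2) acquisition
        j = int(np.argmin([abs((d - m_date).days) for d in fine_dates]))
        with rasterio.open(fine_paths[j]) as f:
            fine = f.read(1).astype("float32")
        # Step 2: resample MODIS NDVI to the fine-resolution grid
        modis = resample_to(m_path, fine_paths[j])
        # Steps 3 and 4: fit the linear model and apply it to the MODIS scene
        slope, intercept = fit_pair(modis, fine)
        synthetic.append(slope * modis + intercept)
    return synthetic
```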
Phenology derivation
For satellite-based phenometrics calculation, the changes of NDVI during the growing season were observed. As a crop grows through its different stages, it covers more ground, causing a higher proportion of green biomass per pixel. This, in turn, is reflected by higher NDVI values (Todd & Hoffer, 1998) (Figure 5).
We analyzed four different time series (Figure 4), namely: (i) the synthetic Landsat 8 time series, (ii) the synthetic Sentinel 2 time series, (iii) the time series consisting of original Landsat 8 data that was temporally gap-filled using synthetic Landsat 8 data (hereafter denoted as "mixed Landsat time series"), and (iv) the time series consisting of original Sentinel 2A data that was gap-filled using synthetic Sentinel 2 data (hereafter denoted as "mixed Sentinel time series").

Figure 4. The workflow of the analysis.

Figure 5. Example of TIMESAT parameters for one season. The green curve shows the original NDVI time series, the red curve the TIMESAT-smoothed NDVI. Parameters: (a) start of the season, (b) peak time of the season, (c) end of the season. The growing season is marked in grey (modified after L. Eklundh & Jönsson, 2012).
TIMESAT software was used for the derivation of phenological parameters for each of the four time series (Eklundh & Jönsson, 2015). We calculated
the following phenometrics: start of the season (SOS),
peak time (peak), and end of the season (EOS, Figure
5). Derivation of SOS and EOS requires a threshold
value that can differ by surface, study area, vegetation
type, and available data. Based on empirical tests,
a threshold of 0.2 was used for this study. Gaussian
fitting was selected to create a smooth NDVI curve.
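TIMESAT itself performed the actual derivation; purely to illustrate the logic, and under the assumption that the 0.2 threshold is interpreted as a fraction of the seasonal amplitude (TIMESAT's seasonal-amplitude definition), a simplified extraction from one smoothed seasonal curve could look like this:

```python
import numpy as np

def phenometrics(doy, ndvi_smoothed, threshold=0.2):
    """Estimate SOS, peak time, and EOS from one smoothed seasonal NDVI curve.

    doy and ndvi_smoothed are 1-D arrays covering a single season; the threshold
    is treated as a fraction of the seasonal amplitude on each limb of the curve.
    """
    peak_idx = int(np.argmax(ndvi_smoothed))
    base_left = ndvi_smoothed[:peak_idx + 1].min()
    base_right = ndvi_smoothed[peak_idx:].min()
    level_left = base_left + threshold * (ndvi_smoothed[peak_idx] - base_left)
    level_right = base_right + threshold * (ndvi_smoothed[peak_idx] - base_right)
    # SOS: first time the rising limb exceeds the threshold level
    sos_idx = int(np.argmax(ndvi_smoothed[:peak_idx + 1] >= level_left))
    # EOS: last time the falling limb is still above the threshold level
    eos_idx = peak_idx + int(np.where(ndvi_smoothed[peak_idx:] >= level_right)[0].max())
    return doy[sos_idx], doy[peak_idx], doy[eos_idx]
```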
Optimal input-data for TIMESAT should be pro-
vided in equal temporal intervals (L. Eklundh &
Jönsson, 2012). However, we could not achieve perfectly homogeneous time steps for either mixed time series (Figure 4) due to the different scenes used for Landsat imagery and the variable revisit times of Sentinel acquisitions. We selected images manually,
keeping the temporal intervals as consistent as possi-
ble in addition to maintaining an adequate share of
original Sentinel (or Landsat) scenes. Overall, the
mixed Sentinel time series (iv) could be constructed
more densely due to a larger amount of Sentinel
images available. Both mixed time series (iii, iv) start
and end on the 29th of March and the 31st of October.
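The manual selection described above could be approximated programmatically by a simple rule that keeps an original Landsat or Sentinel scene whenever one falls close enough to an 8-day step and otherwise falls back to the synthetic scene; the data structures and the 4-day tolerance below are assumptions made for illustration only.

```python
def build_mixed_series(original, synthetic, max_gap_days=4):
    """Assemble a mixed time series on the 8-day synthetic grid.

    `original` and `synthetic` are dicts mapping acquisition date to an NDVI array
    (hypothetical structure). For each 8-day synthetic date, an original scene within
    `max_gap_days` is preferred; otherwise the synthetic scene fills the gap.
    """
    mixed = {}
    for syn_date, syn_scene in sorted(synthetic.items()):
        candidates = [d for d in original if abs((d - syn_date).days) <= max_gap_days]
        if candidates:
            best = min(candidates, key=lambda d: abs((d - syn_date).days))
            mixed[syn_date] = ("original", original[best])
        else:
            mixed[syn_date] = ("synthetic", syn_scene)
    return mixed
```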
The values for all four different NDVI and phe-
nology datasets (i, ii, iii, iv) were plotted for the
selected fields and summarized for each field.
NDVI boxplots were created for each crop using
monthly means. We created raster images of the
phenological parameters and extracted values for
each field. The resulting phenology was validated
with the in-situ field-reports. Results for different
crops and time series (i, ii, iii, iv) were compared to
one another.
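As one illustration of the per-field extraction step (the file names, the `field_id` attribute, and the choice of rasterio and geopandas are assumptions for this sketch, not the exact tooling of the study), mean phenometric values can be summarized per field polygon as follows:

```python
import numpy as np
import geopandas as gpd
import rasterio
import rasterio.mask

def field_means(raster_path, fields_path, id_column="field_id"):
    """Extract the mean of a float phenometric raster (e.g. SOS in DOY) per field polygon."""
    fields = gpd.read_file(fields_path)
    means = {}
    with rasterio.open(raster_path) as src:
        fields = fields.to_crs(src.crs)  # align polygon and raster coordinate systems
        for _, row in fields.iterrows():
            clipped, _ = rasterio.mask.mask(src, [row.geometry], crop=True,
                                            nodata=np.nan, filled=True)
            means[row[id_column]] = float(np.nanmean(clipped))
    return means

# hypothetical usage with an EOS raster and the selected field polygons
# eos_by_field = field_means("eos_mixed_sentinel.tif", "selected_fields.gpkg")
```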
Results and discussion
Phenology results for all four time series (Figure 6, i to
iv) were mostly precise and revealed similar patterns.
Figure 6 shows the seasonal development of NDVI
values for each crop in the selected fields. Mixed
time series (iii, iv) generally showed higher values
during the peak months of growth and a slight shift
of peak time. The first and last two months were
mostly consistent across datasets. Only mixed
Sentinel time series (iv) showed slightly lowered
NDVI in April. Soy and maize NDVI series were
similar across the datasets. August showed the highest
variation of values within each dataset, possibly caused
by harvest in some of the fields.
Figure 7 shows the phenometrics (SOS, peak, EOS)
derived from each time series (i to iv). For the mixed
time series (iii, iv), the peak time was calculated later
compared to the purely synthetic time series (i, ii).
SOS was estimated slightly later for the mixed time
series (iii, iv) compared to the other datasets. Peak time showed the highest spread: values stretched from the end of June to the end of July (Figure 7). EOS was distributed mostly uniformly for most fields, with a spread of approximately two weeks (Figure 8).

Figure 6. Monthly NDVI values for the selected fields across the synthetic Landsat 8 (i), synthetic Sentinel 2 (ii), mixed Landsat (iii), and mixed Sentinel (iv) time series. Dashed lines represent mean values.
Figure 8 shows phenometrics at the field level for both
mixed time series (iii, iv) and one synthetic set (i).
Phenomapping results based on the mixed time series
(iii, iv) retained their original resolution and detail.
Figure 7. Phenometrics derived from synthetic Landsat 8 (i), synthetic Sentinel 2 (ii), mixed Landsat (iii), and mixed Sentinel (iv) time series.

Figure 8. The field-level phenomaps revealing the high spatial level of detail of the mixed time series (iii, iv) results. EOS is shown.
Variations were observable on the field-level for the
mixed Sentinel dataset (iv). Visually, mixed Sentinel
time series showed the highest level of detail at the
intra-field level, revealing phenology patterns inside
a field. Mixed Landsat results (iii) retain clear field
boundaries and a good degree of intra-field detail.
The level of detail of the mixed time series (iii, iv)
phenomaps shows the potential of the proposed
method.
Figure 9 compares the calculated phenometrics
across all datasets and crops with the field data. The
best performance was achieved for maize followed by
sunflower. EOS values showed a smaller variability than
SOS values for most crops and datasets. This could be
related to a sharp NDVI decline at harvest-time in
contrast to a mostly smooth, steady rise of NDVI values
throughout the beginning stages of growth. Both mixed
time series (iii, iv) achieved good overall performance.
For soy, the mixed Sentinel time series (iv) per-
formed best with a deviation of 5 days for EOS and
2 days for SOS compared to the ground truth data. All
other datasets (i, ii, iii) showed higher deviations as
well as high spreads for the beginning of the season.
The end of the season was calculated within an error margin of less than two weeks for all dataset medians: synthetic Sentinel (ii) showed 13 days of error, synthetic Landsat (i) 10 days, and the mixed Landsat time series (iii) 5 days.
The phenometrics for maize were calculated more
accurately than soy overall. For the start of the season,
the best performing time series was mixed Sentinel (iv)
with a deviation of 10 days from the in-situ recordings.
However, all other datasets (i, ii, iii) performed simi-
larly well with errors of 13 and 14 days. For the end of
the season, the synthetic Landsat (i) matched the exact
date. Synthetic Sentinel (ii) missed the date by 3 days,
both mixed time series (iii, iv) by 12 days. The start of
the season for sunflower was determined very accurately by synthetic Sentinel (ii), synthetic Landsat (i), and mixed Landsat (iii), with 1 to 2 days of deviation and marginal spread among fields. Mixed
Sentinel (iv) was an outlier, showing about two
weeks of error. The end of the season was best esti-
mated by both mixed time series (iii, iv): mixed
Sentinel (iv) missed the in-situ date by 8 days, mixed
Landsat (iii) by 5 days. Both synthetic time series (i, ii)
deviated by about two weeks. The temporal accuracy
in general is comparable with other studies using
TIMESAT, especially for the start of the season (Lu
et al., 2014). The results for the start of the season were
more reliable than analyses which applied a modified
TIMESAT approach on a different vegetation cover
(grassland) as well as different types of sensors such as
Sentinel 1 (Stendardi et al., 2019).
It should be noted that we connected phenological
stages provided by the in-situ data to SOS and EOS as
optimally as possible. However, a small margin of
error might be attributed to those linkages.
Moreover, the slightly uneven intervals of the mixed
time series (iii, iv) might have affected the results to
a minor degree. Some of the variations can be
explained by the inter-field management differences
(e.g., timing of the harvest).
TIMESAT interprets every loaded dataset as evenly
distributed, causing limitations in terms of its usability
for mixed time series (L. Eklundh & Jönsson, 2012;
Eklundh & Jönsson, 2015). This circumstance also meant that some good-quality Landsat and Sentinel images had to be omitted to keep time intervals as homogeneous as possible for the mixed datasets (iii, iv). Leaving out these images might have resulted in a loss of overall quality of the achieved results. Using different vegetation indices, such as EVI, might also slightly affect the phenology output (L. Li et al., 2014; Tan et al., 2011). Further applications over larger areas may need an adaptive threshold selection for different crops (Huang et al., 2019) or methods that do not rely on thresholds for phenometrics estimation, such as using extreme values of the derivative of the seasonal cycle (Forkel et al., 2015).
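Purely as an illustration of such a threshold-free alternative (not applied in this study), the start and end of the season can be placed at the extremes of the first derivative of the smoothed seasonal curve:

```python
import numpy as np

def derivative_phenometrics(doy, ndvi_smoothed):
    """Threshold-free estimates: SOS at the fastest green-up, EOS at the fastest senescence."""
    rate = np.gradient(ndvi_smoothed, doy)                 # dNDVI/dt along the season
    peak_idx = int(np.argmax(ndvi_smoothed))
    sos_idx = int(np.argmax(rate[:peak_idx + 1]))          # steepest rise before the peak
    eos_idx = peak_idx + int(np.argmin(rate[peak_idx:]))   # steepest decline after the peak
    return doy[sos_idx], doy[peak_idx], doy[eos_idx]
```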
Figure 9. Calculated values for SOS and EOS for each time series across all selected fields. Values are given as day of year (DOY).
As for the acquired datasets and their preprocessing,
the performance of cloud-masks was satisfactory. Some
fragments of cloud-shadows remained, which resulted
in slight changes in the selected fields to avoid shadow-
contaminated pixels. Approaches using object-based
cloud detection might achieve good results for masking
shadows, as the clouds themselves have mostly been
masked well (Zhang et al., 2014). Overall, Landsat
fmask (Zhu & Woodcock, 2012) provided more reliable
results than Sentinel 2 Sen2Cor-flags.
Although the patterns are closely matched across the different time series, some degree
of differentiation can be observed. It is possible that
data gaps that are temporally close to critical stages,
such as harvesting, could cause these slight differences.
Gaps were especially prevalent during the early season
which might explain the minimally higher diversity of
SOS values. It can be observed that NDVI values were
overall more saturated for Sentinel 2 data and the
related time series. Another vegetation index, such as
EVI, might reveal more comparable levels without
oversaturating. Furthermore, the spectral response function for Sentinel 2A was revised at one point, which could also have affected the data and the results (ESA, 2018; Hagolle, 2018).
Despite being simple, the synthetic image creation
approach showed overall good results. Linear regression
model R2-values mostly ranged from 0.2 to 0.7. The lowest correlations were observed in early spring and for image pairs with large temporal differences in acquisition.
Landsat pairs with around a week or less of temporal
difference to their MODIS counterpart resulted in good
regression results with R2-values of 0.5 to 0.7 (Figure 10).
Larger time-gaps resulted in R2-values of 0.25 and
below. Other outliers could be a result of unforeseen
changes in the field. MODIS data can be affected by
mixed pixels. For Bila Tserkva district, this was not
a significant issue, as most fields are large enough to be
covered by multiple MODIS pixels: the average field size
is around 20 ha for the Kyiv region which Bila Tserkva
district is part of (State Statistics Service of Ukraine, n.d.).
It was out of the scope of this study to evaluate different
fusion approaches (Bannari et al., 1995).
Both the synthetic Landsat and Sentinel time
series (i, ii) show an overestimation of NDVI-
values in April despite the fusions’ correlation R2-
values being high for both datasets. Seasonal pat-
terns are evident in general. Some areas showed an
increased amount of outlier-values. However, these
were determined to be forests and built-up areas
and as such not relevant for the study. Crop-
specific seasonal signatures were visibly identifiable
in NDVI for both datasets (see Figure 6). With the
emergence of the new Harmonized Landsat and Sentinel-2 (HLS) product, these signatures may become observable even more clearly, and the derivation of phenometrics can be tested over larger areas (Claverie et al., 2018). For future years with available field data, this product or another type of fusion between Sentinel 2 and Landsat 8 data could be utilized to follow up on the study.

Figure 10. Selection of scatterplots showing the relation between Landsat 8 and MODIS NDVI for image pairs. Regression lines are shown in red.
Conclusion
We derived crop-specific phenometrics using multi-source remote sensing data and a data-fusion approach. Three of the study area's most important crops
were analyzed: maize, sunflower, and soy. Synthetic
Landsat 8 and Sentinel 2 VI data were created based on
MODIS to obtain dense time series.
The proposed method shows great potential and
good accuracy for field-level analysis of crop-
phenology. The high spatial resolution phenomaps
achieved in this study provided spatially-explicit infor-
mation at a field level on crop phenology variation that
could further support crop management activities and
has not been previously available for the study site.
Mixed time series containing synthetic as well as original data resulted in phenomaps with great spatial detail: phenology differences at the intra-field level could be identified. In-situ field reports were used as validation data for the phenometrics. All datasets showed good accuracy, deviating by only a few days from the field-reported growing stages for most
crops and datasets. The linear regression-based
fusion method achieved good results for the creation
of synthetic Landsat 8 and Sentinel 2 based on
MODIS imagery. Integrating radar data or testing
harmonized Landsat 8/Sentinel 2 product time series
from NASA could be a next step to overcome the
limitations of optical data caused by cloud cover and
achieve even denser time series. Further research
could also utilize and compare different phenology-
derivation approaches.
Acknowledgments
The crop-maps and on-field crop-reports were kindly pro-
vided by the Ukrainian Space Research Institute, the
National Academy of Sciences, SSA Ukraine, and the
Ukrainian State Hydrometeorological Services.
Landsat 8 and MODIS data is available from and courtesy
of the U.S. Geological Survey. Copernicus Sentinel 2 data:
courtesy of ESA.
Data Availability Statement
The raw data used for this study (Landsat-8 surface reflec-
tance, MODIS surface reflectance, Sentinel-2A imagery) can
be acquired via USGS (Landsat-8, MOD09Q1.006: https://
earthexplorer.usgs.gov/) and ESA respectively (Sentinel-2A:
https://scihub.copernicus.eu/dhus/#/home). The crop maps
and ground truth data on phenology are subject to permis-
sion of third parties (Ukrainian Space Research Institute,
National Academy of Sciences, SSA Ukraine, and Ukrainian
State Hydrometeorological Services).
Disclosure statement
No potential conflict of interest was reported by the authors.
Funding
Research support was provided by the German Federal
Ministry of Education and Research through its Global
Resource Water (GRoW) funding initiative (Project:
GlobeDrought, grant no. 02WGR1457A-F).
ORCID
Olena Dubovyk http://orcid.org/0000-0002-7338-3167
References
Bannari, A., Morin, D., Bonn, F., & Huete, A. R. (1995).
A review of vegetation indices. Remote Sensing Reviews,
13(1–2), 95–120. https://doi.org/10.1080/
02757259509532298
Barsi, J., Lee, K., Kvaran, G., Markham, B., & Pedelty, J.
(2014). The spectral response of the Landsat-8 opera-
tional land imager. Remote Sensing, 6(10), 10232–10251.
https://doi.org/10.3390/rs61010232
Claverie, M., Ju, J., Masek, J. G., Dungan, J. L.,
Vermote, E. F., Roger, J.-C., Skakun, S. V., & Justice, C.
(2018). The harmonized Landsat and Sentinel-2 surface
reflectance data set [Special issue]. Remote Sensing of
Environment, 219, 145–161. https://doi.org/10.1016/j.rse.
2018.09.002
Eklundh, L., & Jönsson, P. (2012). TIMESAT 3.2 with par-
allel processing-Software Manual. Lund University. http://
web.nateko.lu.se/TIMESAT/docs/TIMESAT32_soft
ware_manual.pdf
Eklundh, L., Ardö, J., Jönsson, P., & Sjöström, M. (2012).
High resolution mapping of vegetation dynamics from
Sentinel-2. Malmö Universtity. http://muep.mau.se/bit
stream/handle/2043/14624/Sentinel_paper_Eklundhetal.
pdf?sequence=2&isAllowed=y
Eklundh, L., & Jönsson, P. (2015). TIMESAT: A software
package for time-series processing and assessment of
vegetation dynamics. In: Kuenzer C., Dech S., Wagner
W. (Eds.) Remote Sensing Time Series. Remote Sensing
and Digital Image Processing, 22, 141–158. Springer
Cham. https://doi.org/10.1007/978-3-319-15967-6_7
ESA. (2015). Sentinel-2 User Handbook (2nd ed.). European
Space Agency Standard Document. https://sentinel.esa.
int/documents/247904/685211/Sentinel-2_User_
Handbook
ESA. (2018). Sentinel-2 mission status report 119. European
Commission. https://sentinel.esa.int/documents/247904/
3347201/Sentinel-2-Mission-Status-Report-119-16-Dec
-2017-05-Jan-2018
ESA. (n.d.). Radiometric - Resolutions - Sentinel-2 MSI - User
guides - Sentinel Online. European Commission. https://
sentinel.esa.int/web/sentinel/user-guides/sentinel-2-msi/
resolutions/radiometric
Food and Agriculture Organization. (2008). Country report
on the state of plant genetic resources for food and
agriculture. National Centre for Plant Genetic Resources
of Ukraine, Kharkiv. http://www.fao.org/docrep/013/
i1500e/Ukraine.pdf
Forkel, M., Migliavacca, M., Thonicke, K., Reichstein, M.,
Schaphoff, S., Weber, U., & Carvalhais, N. (2015).
Codominant water control on global interannual varia-
bility and trends in land surface phenology and
greenness. Global Change Biology, 21(9), 3414–3435.
https://doi.org/10.1111/gcb.12950
Gao, F., Hilker, T., Zhu, X., Anderson, M., Masek, J.,
Wang, P., & Yang, Y. (2015). Fusing Landsat and
MODIS data for vegetation monitoring. IEEE
Geoscience and Remote Sensing Magazine, 3(3), 47–60.
https://doi.org/10.1109/MGRS.2015.2434351
Gao, F., Anderson, M. C., Zhang, X., Yang, Z., Alfieri, J. G.,
Kustas, W. P., Mueller, R., Johnson, D. M., &
Prueger, J. H. (2017). Toward mapping crop progress at
field scales through fusion of Landsat and MODIS
imagery. Remote Sensing of Environment, 188, 9–25.
https://doi.org/10.1016/j.rse.2016.11.004
Hagolle, O. (2018). Revised spectral bands for Sentinel-2A.
Séries Temporelles. http://www.cesbio.ups-tlse.fr/multi
temp/?p=12618
Huang, X., Liu, J., Zhu, W., Atzberger, C., & Liu, Q. (2019).
The optimal threshold and vegetation index time series
for retrieving crop phenology based on a modified
dynamic threshold method. Remote Sensing, 11(23),
2725. https://doi.org/10.3390/rs11232725
Karlsen, S. R., Tolvanen, A., Kubin, E., Poikolainen, J.,
Høgda, K. A., Johansen, B., Danks, F. S., Aspholm, P.,
Wielgolaski, F. E., & Makarova, O. (2008). MODIS-NDVI
-based mapping of the length of the growing season in
northern Fennoscandia. International Journal of Applied
Earth Observation and Geoinformation, 10(3), 253–266.
https://doi.org/10.1016/j.jag.2007.10.005
Kogan, F. N. (1987). Vegetation index for areal analysis of
crop conditions. Proceedings of 18th Conference on
Agricultural and Forest Meteorology (pp. 103–106).
Kussul, N., Skakun, S., Shelestov, A., Kravchenko, O., &
Kussul, O. (2012). Crop classification in Ukraine using satel-
lite optical and SAR images. International Journal "Information Models and Analyses", 2(2), 118–122. ITHEA.
http://www.foibg.com/ijima/vol02/ijima02-02-p03.pdf
Kussul, N., Skakun, S., Shelestov, A., Lavreniuk, M.,
Yailymov, B., & Kussul, O. (2015). Regional scale crop
mapping using multi-temporal satellite imagery. The
International Archives of Photogrammetry, Remote
Sensing and Spatial Information Sciences, 40(7), 45.
https://doi.org/10.5194/isprsarchives-XL-7-W3-45-2015
Lee, M. H., Lee, S. B., Eo, Y. D., Kim, S. W., Woo, J.-H., &
Han, S. H. (2017). A comparative study on generating
simulated Landsat NDVI images using data fusion and
regression method—the case of the Korean Peninsula.
Environmental Monitoring and Assessment, 189(7), 333.
https://doi.org/10.1007/s10661-017-6034-z
Lee, R., Yu, F., Price, K. P., Ellis, J., & Shi, P. (2002).
Evaluating vegetation phenological patterns in Inner
Mongolia using NDVI time-series analysis.
International Journal of Remote Sensing, 23(12),
2505–2512. https://doi.org/10.1080/01431160110106087
Li, J., & Roy, D. P. (2017). A global analysis of Sentinel-2A,
Sentinel-2B and Landsat-8 data revisit intervals and
implications for terrestrial monitoring. Remote Sensing,
9(9), 902. https://doi.org/10.3390/rs9090902
Li, L., Friedl, M. A., Xin, Q., Gray, J., Pan, Y., & Frolking, S.
(2014). Mapping crop cycles in China using MODIS-EVI
time series. Remote Sensing, 6(3), 2473–2493. https://doi.
org/10.3390/rs6032473
Li, Q., Wang, C., Zhang, B., & Lu, L. (2015). Object-based
crop classification with Landsat-MODIS enhanced
time-series data. Remote Sensing, 7(12), 16091–16107.
https://doi.org/10.3390/rs71215820
Lu, L., Wang, C., Guo, H., & Li, Q. (2014). Detecting winter
wheat phenology with SPOT-VEGETATION data in the
North China Plain. Geocarto International, 29(3),
244–255. https://doi.org/10.1080/10106049.2012.760004
Mather, P. M., & Koch, M. (2011). Computer processing of
remotely-sensed images: An introduction. John Wiley &
Sons.
MODIS Web. (n.d.). Specifications. NASA. Retrieved
January 24, 2018, from https://modis.gsfc.nasa.gov/
about/specifications.php
Müller-Wilm, U. (2016). Sentinel-2 MSI—Level-2A proto-
type processor installation and user manual (Darmstadt,
Germany). Telespazio VEGA. http://step.esa.int/thirdpar
ties/sen2cor/2.2.1/S2PAD-VEGA-SUM-0001-2.2.pdf
NASA LP DAAC. (2014). Surface reectance 8-Day L3 global
250m. United States Geological Survey. https://lpdaac.
usgs.gov/dataset_discovery/modis/modis_products_
table/mod09q1
Parkinson, C. L., Greenstone, R., & Closs, J. (2000). EOS
data products Handbook (Vol. 2). NASA. https://ntrs.
nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20010069261.
pdf
Reed, B. C., & Brown, J. F. (2005). Trend analysis of
time-series phenology derived from satellite data. 3rd
International Workshop on the Analysis of Multi-
Temporal Remote Sensing Images 2005 (pp. 166–168).
Rouse, J., Jr, Haas, R. H., Schell, J. A., & Deering, D. W.
(1974). Monitoring vegetation systems in the Great Plains
with ERTS. NASA Special Publication, 351, 309.
Sankey, T., & Glenn, N. (2011). Landsat-5 TM and lidar
fusion for sub-pixel juniper tree cover estimates in
a western rangeland. Photogrammetric Engineering and
Remote Sensing, 77(12), 1241–1248. https://doi.org/10.
14358/PERS.77.12.1241
Siachalou, S., Mallinis, G., & Tsakiri-Strati, M. (2015).
A Hidden Markov models approach for crop classifica-
tion: Linking crop phenology to time series of
multi-sensor remote sensing data. Remote Sensing, 7(4),
3633–3650. https://doi.org/10.3390/rs70403633
Skakun, S., Kussul, N., Shelestov, A. Y., Lavreniuk, M., &
Kussul, O. (2016). Efficiency assessment of multitemporal
C-band Radarsat-2 intensity and Landsat-8 surface reflec-
tance satellite imagery for crop classification in Ukraine.
IEEE Journal of Selected Topics in Applied Earth
Observations and Remote Sensing, 9(8), 3712–3719.
https://doi.org/10.1109/JSTARS.2015.2454297
Skakun, S., Vermote, E., Franch, B., Roger, J.-C., Kussul, N.,
Ju, J., & Masek, J. (2019). Winter wheat yield assessment
from Landsat 8 and Sentinel-2 data: Incorporating sur-
face reflectance, through phenological fitting, into regres-
sion yield models. Remote Sensing, 11(15), 1768. https://
doi.org/10.3390/rs11151768
State Statistics Service of Ukraine. (n.d.). Statistical
Information. Government of Ukraine. Retrieved January
24, 2018, from http://www.ukrstat.gov.ua/
Stendardi, L., Karlsen, S. R., Niedrist, G., Gerdol, R.,
Zebisch, M., Rossi, M., & Notarnicola, C. (2019).
Exploiting time series of Sentinel-1 and Sentinel-2
imagery to detect Meadow phenology in mountain
regions. Remote Sensing, 11(5), 542. https://doi.org/10.
3390/rs11050542
Tan, B., Morisette, J. T., Wolfe, R. E., Gao, F., Ederer, G. A.,
Nightingale, J., & Pedelty, J. A. (2011). An enhanced
TIMESAT algorithm for estimating vegetation phenology
metrics from MODIS data. IEEE Journal of Selected Topics
in Applied Earth Observations and Remote Sensing, 4(2),
361–371. https://doi.org/10.1109/JSTARS.2010.2075916
Tian, F., Brandt, M., Liu, Y. Y., Verger, A., Tagesson, T.,
Diouf, A. A., Rasmussen, K., Mbow, C., Wang, Y., &
Fensholt, R. (2016). Remote sensing of vegetation dynamics
in drylands: Evaluating vegetation optical depth (VOD) using
AVHRR NDVI and in situ green biomass data over West
African Sahel. Remote Sensing of Environment, 177, 265–276.
https://doi.org/10.1016/j.rse.2016.02.056
Todd, S. W., & Hoffer, R. M. (1998). Responses of spectral
indices to variations in vegetation cover and soil
background. Photogrammetric Engineering and Remote
Sensing, 64(9), 915–922. https://www.asprs.org/wp-content/
uploads/pers/1998journal/sep/1998_sep_915-921.pdf
World Data Center. (n.d.). Ukraine: Agricultural overview.
World Data Center Ukraine. Retrieved January 28, 2018,
from http://wdc.org.ua/en/node/29
Zhang, Y., Guindon, B., & Li, X. (2014). A robust approach
for object-based detection and radiometric characteriza-
tion of cloud shadow using haze optimized
transformation. IEEE Transactions on Geoscience and
Remote Sensing, 52(9), 5540–5547. https://doi.org/10.
1109/TGRS.2013.2290237
Zhu, W., & Lei, H. (2018). Urban vegetation coverage mon-
itoring technology based on NDVI. 2018 7th
International Conference on Energy, Environment and
Sustainable Development (ICEESD 2018).
Zhu, Z., Wang, S., & Woodcock, C. E. (2015). Improvement
and expansion of the Fmask algorithm: Cloud, cloud
shadow, and snow detection for Landsats 4–7, 8, and
Sentinel 2 images. Remote Sensing of Environment, 159,
269–277. https://doi.org/10.1016/j.rse.2014.12.014
Zhu, Z., & Woodcock, C. E. (2012). Object-based cloud and
cloud shadow detection in Landsat imagery. Remote
Sensing of Environment, 118, 83–94. https://doi.org/10.
1016/j.rse.2011.10.028
Zhu, Z., & Woodcock, C. E. (2014). Automated cloud, cloud
shadow, and snow detection in multitemporal Landsat
data: An algorithm designed specifically for monitoring
land cover change. Remote Sensing of Environment, 152,
217–234. https://doi.org/10.1016/j.rse.2014.06.012
... Many existing phenology detection approaches using thresholds, derivatives or trend information offer only a broad overview of phenology (e.g., green up) or focus on stages with distinct features (e.g., rape flowering) [4]. Thresholding identifies phenological events by predefined value limits on vegetation indices (e.g., [5][6][7][8][9][10]), while a derivative approach captures rapid changes or the shape of a time series which indicate certain phenological transitions (e.g., [11][12][13]). Trend analysis monitors features like slope to assess crop development over time (e.g., [14][15][16]). ...
... The advent of harmonized analysis-ready data, at daily or near-daily frequencies, has been shown to facilitate phenological growth stage classification with high precision to a ground-measured growth scale useful for agronomic applications [18][19][20]. The combination of sensors to increase temporal frequency and spatial accuracy for detecting key phenological stages is especially promising, as shown by [6,11]. High-frequency data provide more continuous monitoring of changes in crop growth also during the season that could be missed with less frequent observations. ...
Article
Full-text available
Accurate identification of crop phenology timing is crucial for agriculture. While remote sensing tracks vegetation changes, linking these to ground-measured crop growth stages remains challenging. Existing methods offer broad overviews but fail to capture detailed phenological changes, which can be partially related to the temporal resolution of the remote sensing datasets used. The availability of higher-frequency observations, obtained by combining sensors and gap-filling, offers the possibility to capture more subtle changes in crop development, some of which can be relevant for management decisions. One such dataset is Planet Fusion, daily analysis-ready data obtained by integrating PlanetScope imagery with public satellite sensor sources such as Sentinel-2 and Landsat. This study introduces a novel method utilizing Dynamic Time Warping applied to Planet Fusion imagery for maize phenology detection, to evaluate its effectiveness across 70 micro-stages. Unlike singular template approaches, this method preserves critical data patterns, enhancing prediction accuracy and mitigating labeling issues. During the experiments, eight commonly employed spectral indices were investigated as inputs. The method achieves high prediction accuracy, with 90% of predictions falling within a 10-day error margin, evaluated based on over 3200 observations from 208 fields. To understand the potential advantage of Planet Fusion, a comparative analysis was performed using Harmonized Landsat Sentinel-2 data. Planet Fusion outperforms Harmonized Landsat Sentinel-2, with significant improvements observed in key phenological stages such as V4, R1, and late R5. Finally, this study showcases the method’s transferability across continents and years, although additional field data are required for further validation.
... Gan et al. (2020) found that the green-up date of winter wheat extracted by MODIS vegetation index, especially the Normalized Difference Phenology Index, had a significant correlation with the groundobserved green-up date from agrometeorological stations in the Huanghuai region of China [22]. Recently, Schreier et al. (2021) derived the phenometrics of the main crops in Bila Tserkva district of Ukraine by fusing time series from Landsat 8 and Sentinel 2 with MODIS data and discovered that there were only a few days deviation from on-field reported growing-stages for most crops [23]. ...
... Gan et al. (2020) found that the green-up date of winter wheat extracted by MODIS vegetation index, especially the Normalized Difference Phenology Index, had a significant correlation with the groundobserved green-up date from agrometeorological stations in the Huanghuai region of China [22]. Recently, Schreier et al. (2021) derived the phenometrics of the main crops in Bila Tserkva district of Ukraine by fusing time series from Landsat 8 and Sentinel 2 with MODIS data and discovered that there were only a few days deviation from on-field reported growing-stages for most crops [23]. ...
Article
Full-text available
Crop phenology is considered to be an important indicator reflecting the biophysical and physiological processes of crops facing climate change. Therefore, quantifying crop phenology change and its relationship with climate variables is of great significance for developing agricultural management and adaptation strategies to cope with global warming. Based on the Moderate Resolution Imaging Spectroradiometer (MODIS) Enhanced Vegetation Index (EVI) product, winter wheat green-up date, heading date, jointing date, and maturity date were first retrieved by Savitzky–Golay (S-G) filtering and threshold methods and then the variation of winter wheat phenology and its correlation with mean (Tmean), minimum (Tmin), and maximum (Tmax) temperature and precipitation (Pre) during 2003–2019 were comprehensively analyzed in Shandong Province, China. Results showed that green-up date, jointing date, heading date, and maturity date generally ranged from 50–70 DOY, 75–95 DOY, 100–120 DOY, and 130–150 DOY. Winter wheat phenology presented a spatial pattern of the South earlier than the North and the inland earlier than the coastal regions. For every 1° increase in latitude/longitude, green-up date, jointing date, heading date, and maturity date were respectively delayed by 3.93 days/0.43 days, 2.31 days/1.19 days, 2.80 days/1.14 days, and 2.12 days/1.09 days. Green-up date and jointing date were both advanced in the West and delayed in the Eastern coastal areas and the South, and heading date and maturity date respectively showed a widespread advance and a delayed tendency in Shandong Province, however, the trend of winter wheat phenological changes was generally insignificant. In addition, green-up date, jointing date, and heading date generally presented a significant negative correlation with mean/minimum temperature, while maturity date was positively associated with the current month maximum temperature, notably in the West of Shandong Province. Regarding precipitation, a generally insignificant relationship with winter wheat phenology was detected. Results in this study are anticipated to provide insight into the impact of climate change on winter wheat phenology and to supply reference for the agricultural production and field management of winter wheat in Shandong Province, China.
... To deal with this challenge, we followed the procedure introduced in Xu et al. (2020Xu et al. ( , 2021 to deal with the missing data. Future work can investigate other methods, such as fusing Landsat time series with MODIS or Sentinel-2 (Gao et al., 2017;Schreier et al., 2021) or reconstructing the images contaminated by clouds by employing image reconstruction methods such as generative adversarial network (GAN) (Zhou et al., 2022). ...
Article
Full-text available
Precise and timely information about crop types plays a crucial role in various agriculture-related applications. However, crop type mapping methods often face significant challenges in cross-regional and cross-time scenarios with high discrepancies between temporal-spectral characteristics of crops from different regions and years. Unsupervised domain adaptation (UDA) methods have been employed to mitigate the problem of domain shift between the source and target domains. Since these methods require source domain data during the adaptation phase, they demand significant computational resources and data storage, especially when large labeled crop mapping source datasets are available. This leads to increased energy consumption and financial costs. To address this limitation, we developed a source-free UDA method for cross-regional and cross-time crop mapping, capable of adapting the source-pretrained models to the target datasets without requiring the source datasets. The method mitigates the domain shift problem by leveraging mutual information loss. The diversity and discriminability terms in the loss function are balanced through a novel unsupervised weighting strategy based on mean confidence scores of the predicted categories. Our experiments on mapping corn, soybean, and the class Other from Landsat image time series in the U.S. demonstrated that the adapted models using different backbone networks outperformed their non-adapted counterparts. With CNN, Transformer, and LSTM backbone networks, our adaptation method increased the macro F1 scores by 12.9%, 7.1%, and 5.8% on average in cross-time tests and by 20.1%, 12.5%, and 8.8% on average in cross-regional tests, respectively. Additionally, in an experiment covering a large study area of 450 km 300 km, the adapted model with the CNN backbone network obtained a macro F1 score of 92.6%, outperforming its non-adapted counterpart with a macro F1 score of 89.2%. Our experiments on mapping the same classes using Sentinel-2 image times series in France demonstrated the effectiveness of our method across different countries and sensors. We also tested our method in more diverse agricultural areas in Denmark and France containing six classes. The results showed that the adapted models outperformed the non-adapted models. Moreover, in within-season experiments, the adapted models performed better than the non-adapted models in the vast majority of weeks. These results and their comparison to those obtained by the other investigated UDA methods demonstrated the efficiency of our proposed method for both end-of-season and within-season crop mapping tasks. Additionally, our study showed that the method is modular and flexible in employing various backbone networks. The code and data are available at https://github.com/Sina-Mohammadi/SFUDA-CropMapping.
... According to existing literature, two classification approaches exist, i.e., Machine Learning (ML) and Deep Learning (DL). Machine learning (ML) algorithms such as random forest (RF), k-nearest neighbors (kNN), and support vector machines (SVM) have been applied in various plant studies (Prins and Niekerk 2020;Chabalala et al. 2022;Schreier et al. 2021). Although remarkable results were achieved in classifying certain crop types, difficulties in distinguishing between crop types were nevertheless reported. ...
Article
Full-text available
Accurate and up-to-date crop-type maps are essential for efficient management and well-informed decision-making, allowing accurate planning and execution of agricultural operations in the horticultural sector. The assessment of crop-related traits, such as the spatiotemporal variability of phenology, can improve decision-making. The study aimed to extract phenological information from Sentinel-2 data to identify and distinguish between fruit trees and co-existing land use types on subtropical farms in Levubu, South Africa. However, the heterogeneity and complexity of the study area—composed of smallholder mixed cropping systems with overlapping spectra—constituted an obstacle to the application of optical pixel-based classification using machine learning (ML) classifiers. Given the socio-economic importance of fruit tree crops, the research sought to map the phenological dynamics of these crops using deep neural network (DNN) and optical Sentinel-2 data. The models were optimized to determine the best hyperparameters to achieve the best classification results. The classification results showed the maximum overall accuracies of 86.96%, 88.64%, 86.76%, and 87.25% for the April, May, June, and July images, respectively. The results demonstrate the potential of temporal phenological optical-based data in mapping fruit tree crops under different management systems. The availability of remotely sensed data with high spatial and spectral resolutions makes it possible to use deep learning models to support decision-making in agriculture. This creates new possibilities for deep learning to revolutionize and facilitate innovation within smart horticulture.
... By contrast, free access to Landsat, Sentinel, MODIS, and other satellite archives has revolutionized satellite images, especially in conservation agriculture (Liu et al., 2020;Wulder et al., 2019). Many studies use free satellite imageries for land-use monitoring and change detection (Al-Juboury & Al-Rubaye, 2021; Chen & Wang, 2010;Chughtai et al., 2021;Fonji & Taff, 2014), crop identification and mapping (Belgiu & Csillik, 2018;Xun et al., 2021;Yan et al., 2021), phenology mapping using time series Schreier et al., 2021;Zhao et al., 2021) and other applications. In addition to access to free satellite imagery, many private satellite companies, like Planet Lab, provide high spatial and temporal resolution time series imagery for a fee (Huang & Roy, 2021). ...
... In our work, we employed the S. Mohammadi et al. procedure introduced in Xu et al. (2020) to deal with missing data. To further reduce the classification uncertainties caused by the presence of gaps in time series, future studies can be dedicated to either fusing Landsat time series with other products, including MODIS or Sentinel-2 (Gao et al., 2017;Schreier et al., 2021) or to incorporating image reconstruction methods such as generative adversarial network (GAN) (Zhou et al., 2022) to successfully reconstruct the images contaminated by clouds. ...
... To analyze the impact of input image dates on the fusion accuracy for variables with different temporal variation patterns, dense high spatial resolution images are required. Due to the limitations of revisit period and cloud cover [29,30], images acquired by a single satellite sensor such as Landsat are temporally sparse [31,32]. To obtain a sufficient number of high spatial resolution images, in this study the HLS (Harmonized Landsat-8 Sentinel-2) product (version 1.4) was used as the fine-resolution input for spatio-temporal fusion (HLS data provided by NASA can be downloaded from https://hls.gsfc.nasa.gov/data/, ...
Article · Full-text available
Dense time series of remote sensing images with high spatio-temporal resolution are critical for monitoring land surface dynamics in heterogeneous landscapes. Spatio-temporal fusion is an effective solution for obtaining such time series. Many spatio-temporal fusion methods have been developed to produce high spatial resolution images at frequent intervals by blending fine and coarse spatial resolution images. Previous studies have revealed that the accuracy of fused images depends not only on the fusion algorithm, but also on the input image pairs being used. However, the impact of input image dates on the fusion accuracy for time series with different temporal variation patterns remains unknown. In this paper, the impact of input image pairs on the fusion accuracy was evaluated for monotonic linear change (MLC), monotonic non-linear change (MNLC), and non-monotonic change (NMC) time periods, and optimal selection strategies for input image dates in each situation are proposed. The 16-day composited NDVI time series (i.e., the Collection 6 MODIS NDVI product) were used to represent the temporal variation patterns of land surfaces in the study areas. To obtain sufficient observation dates to evaluate the impact of input image pairs on the spatio-temporal fusion accuracy, we utilized the Harmonized Landsat-8 Sentinel-2 (HLS) data. ESTARFM was selected as the spatio-temporal fusion method for this study. The results show that the impact of the input image date on the accuracy of spatio-temporal fusion varies with the temporal variation pattern of the time period being fused. For the MLC period, the fusion accuracy at the prediction date (PD) is linearly correlated with the time interval between the change date (CD) of the input image and the PD, but the impact of the input image date on the fusion accuracy at the PD is not very significant. For the MNLC period, the fusion accuracy at the PD is non-linearly correlated with the time interval between the CD and the PD; the impact of this time interval on the fusion accuracy is more significant for the MNLC than for the MLC periods. Given a similar change in the time interval between the CD and the PD, the increase in R² of the fusion results for the MNLC periods is over ten times larger than that for the MLC periods. For the NMC period, a shorter time interval between the CD and the PD does not lead to higher fusion accuracies; on the contrary, it may lower the fusion accuracy. This study suggests that the temporal variation patterns of the data must be taken into account when selecting optimal dates of input images for the fusion model.
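A minimal sketch of the accuracy metrics behind such an evaluation, comparing a fused image against a reference observation at the prediction date; the two arrays are hypothetical placeholders for co-registered NDVI rasters, and repeating the comparison for input pairs at increasing intervals between the CD and the PD would reproduce the accuracy-versus-interval analysis described above.

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
reference = rng.random((100, 100))                     # actual NDVI at the prediction date
fused = reference + rng.normal(0, 0.05, (100, 100))    # fused estimate from one input pair

r, _ = pearsonr(reference.ravel(), fused.ravel())
rmse = np.sqrt(np.mean((fused - reference) ** 2))
print(f"R2 = {r**2:.3f}, RMSE = {rmse:.3f}")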
Article · Full-text available
Early-season crop mapping provides decision makers with timely information on crop type and conditions that are crucial for agricultural management. Current satellite-based mapping solutions mainly rely on optical imagery, albeit limited by weather conditions. Very few exploit long time series of polarized Synthetic Aperture Radar (SAR) imagery. To address this gap, we assessed the performance of COSMO-SkyMed X-band dual-polarized (HH, VV) data in a test area in Ponte a Elsa (central Italy) from January to September 2020 and 2021. A deep learning convolutional neural network (CNN) classifier arranged with two different architectures (one- and three-dimensional) was trained and used to recognize ten classes. Validation was undertaken with in-situ measurements from regular field campaigns carried out during satellite overpasses over more than 100 plots each year. The three-dimensional classifier structure and the combination of HH+VV backscatter provide the best classification accuracy, especially during the first months of each year, i.e., 80% already in April 2020 and in May 2021. Overall accuracy above 90% is always marked from June using the three-dimensional classifier with HH, VV and HH+VV backscatter. These experiments showcase the value of the developed SAR-based early-season crop mapping approach. The influence of vegetation phenology, structure, density, biomass and turgor on the CNN classifier using X-band data requires further investigations, along with the relatively low producer accuracy marked by vineyard and uncultivated fields.
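As a rough illustration of the one-dimensional variant of such a classifier, the sketch below builds a small Conv1D network over dual-channel backscatter time series; the array shapes, channel layout, layer sizes, and class count are illustrative assumptions, not the architecture used in the study.

import numpy as np
import tensorflow as tf

n_samples, n_steps, n_channels, n_classes = 200, 30, 2, 10
X = np.random.rand(n_samples, n_steps, n_channels).astype("float32")   # e.g., HH and VV backscatter
y = np.random.randint(0, n_classes, size=n_samples)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_steps, n_channels)),
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),
    tf.keras.layers.MaxPooling1D(pool_size=2),
    tf.keras.layers.Conv1D(64, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=32, verbose=0)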
Article · Full-text available
Crop phenology is an important parameter for crop growth monitoring, yield prediction, and growth simulation. The dynamic threshold method is widely used to retrieve vegetation phenology from remotely sensed vegetation index time series. However, crop growth is not only driven by natural conditions, but also modified through field management activities. Complicated planting patterns, such as multiple cropping, make the vegetation index dynamics less symmetrical. These impacts are not considered in current approaches for crop phenology retrieval based on the dynamic threshold method. Thus, this paper aimed to (1) investigate the optimal thresholds for retrieving the start of the season (SOS) and the end of the season (EOS) of different crops, and (2) compare the performances of the Normalized Difference Vegetation Index (NDVI) and Enhanced Vegetation Index (EVI) in retrieving crop phenology with a modified version of the dynamic threshold method. The reference data included SOS and EOS ground observations for three major crop types (rice, wheat, and maize) in 2015 and 2016. Results show that (1) the modification of the original method ensures a 100% retrieval rate, which was not guaranteed using the original method. The modified dynamic threshold method is more suitable for retrieving crop SOS/EOS because it considers the asymmetry of crop vegetation index time series. (2) It is inappropriate to retrieve SOS and EOS with the same threshold for all crops, and the commonly used 20% or 50% thresholds are not optimal for all crops. (3) For single and late rice, the accuracies of the SOS estimations based on EVI are generally higher compared to those based on NDVI. However, for spring maize and summer maize, results based on NDVI give higher accuracies. In terms of EOS, for early rice and summer maize, estimates based on EVI result in higher accuracies, but, for late rice and winter wheat, results based on NDVI are closer to the ground records.
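To make the dynamic threshold idea concrete, the following sketch flags SOS (EOS) as the day of year at which a vegetation index first rises above (last falls below) a chosen fraction of its seasonal amplitude; the NDVI curve, compositing interval, and thresholds are synthetic illustrations, not values from the study.

import numpy as np

doy = np.arange(1, 366, 8)                               # 8-day composites
ndvi = 0.2 + 0.6 * np.exp(-((doy - 200) / 60.0) ** 2)    # synthetic single-season NDVI curve

def sos_eos(vi, doy, threshold):
    # Dynamic threshold: a fixed fraction of the seasonal amplitude above the minimum.
    level = vi.min() + threshold * (vi.max() - vi.min())
    above = np.where(vi >= level)[0]
    return doy[above[0]], doy[above[-1]]                  # (SOS, EOS)

print(sos_eos(ndvi, doy, threshold=0.2))                  # commonly used 20% threshold
print(sos_eos(ndvi, doy, threshold=0.5))                  # commonly used 50% threshold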
Article · Full-text available
A combination of Landsat 8 and Sentinel-2 offers a high frequency of observations (3–5 days) at moderate spatial resolution (10–30 m), which is essential for crop yield studies. Existing methods traditionally apply vegetation indices (VIs) that incorporate surface reflectances (SRs) in two or more spectral bands into a single variable, and rarely address the incorporation of SRs into empirical regression models of crop yield. In this work, we address these issues by normalizing satellite data (both VIs and SRs) derived from NASA’s Harmonized Landsat Sentinel-2 (HLS) product, through a phenological fitting. We apply a quadratic function to fit VIs or SRs against accumulated growing degree days (AGDDs), which affects the rate of crop development. The derived phenological metrics for VIs and SRs, namely peak, area under curve (AUC), and fitting coefficients from a quadratic function, were used to build empirical regression winter wheat models at a regional scale in Ukraine for three years, 2016–2018. The best results were achieved for the model with near infrared (NIR) and red spectral bands and derived AUC, constant, linear, and quadratic coefficients of the quadratic model. The best model yielded a root mean square error (RMSE) of 0.201 t/ha (5.4%) and coefficient of determination R² = 0.73 on cross-validation. Full text: https://www.mdpi.com/2072-4292/11/15/1768
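A minimal sketch of that phenological fitting step under stated assumptions: a quadratic function of accumulated growing degree days (AGDD) is fitted to NDVI, and the peak value, area under the curve (AUC), and fitting coefficients are derived as candidate yield-model predictors; the AGDD grid and NDVI samples are synthetic.

import numpy as np

rng = np.random.default_rng(0)
agdd = np.linspace(0, 1600, 40)
ndvi = -2.5e-7 * (agdd - 900.0) ** 2 + 0.75 + rng.normal(0, 0.02, agdd.size)

c2, c1, c0 = np.polyfit(agdd, ndvi, deg=2)     # quadratic, linear, constant coefficients
peak = c0 - c1 ** 2 / (4 * c2)                 # NDVI at the vertex of the fitted parabola
fitted = np.polyval([c2, c1, c0], agdd)
auc = np.sum(0.5 * (fitted[1:] + fitted[:-1]) * np.diff(agdd))   # trapezoidal AUC
print(f"peak = {peak:.3f}, AUC = {auc:.1f}, coefficients = ({c0:.3f}, {c1:.2e}, {c2:.2e})")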
Article · Full-text available
A synergic integration of Synthetic Aperture Radar (SAR) and optical time series offers an unprecedented opportunity in vegetation phenology monitoring for mountain agriculture management. In this paper, we performed a correlation analysis of radar signal to vegetation and soil conditions by using a time series of Sentinel-1 C-band dual-polarized (VV and VH) SAR images acquired in the South Tyrol region (Italy) from October 2014 to September 2016. Together with Sentinel-1 images, we exploited corresponding Sentinel-2 images and ground measurements. Results show that Sentinel-1 cross-polarized VH backscattering coefficients have a strong vegetation contribution and are well correlated with the Normalized Difference Vegetation Index (NDVI) values retrieved from optical sensors, thus allowing the extraction of meadow phenological phases. Particularly for the Start Of Season (SOS) at low altitudes, the mean difference in days between Sentinel-1 and ground sensors is compatible with the acquisition time of the SAR sensor. However, the results show a decrease in accuracy with increasing altitude. The same trend is observed for senescence. The main outcomes of our investigations in terms of inter-satellite comparison show that Sentinel-1 is less effective than Sentinel-2 in detecting the SOS. At the same time, Sentinel-1 is as robust as Sentinel-2 in defining mowing events. Our study shows that SAR-Optical data integration is a promising approach for phenology detection in mountain regions.
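A minimal sketch of the kind of correlation analysis described above, relating a Sentinel-1 VH backscatter series to an NDVI series sampled on matching dates; both series are synthetic placeholders for a single plot, not measurements from the study.

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
doy = np.arange(90, 300, 12)                                    # matching acquisition dates
ndvi = 0.3 + 0.5 * np.sin((doy - 90) / 210.0 * np.pi)           # seasonal NDVI for one meadow plot
vh_db = -22 + 8 * (ndvi - ndvi.min()) / np.ptp(ndvi) + rng.normal(0, 0.5, doy.size)  # VH backscatter (dB)

r, p = pearsonr(vh_db, ndvi)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")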
Article · Full-text available
Combination of different satellite data will provide increased opportunities for more frequent cloud-free surface observations due to variable cloud cover at the different satellite overpass times and dates. Satellite data from the polar-orbiting Landsat-8 (launched 2013), Sentinel-2A (launched 2015) and Sentinel-2B (launched 2017) sensors offer 10 m to 30 m multi-spectral global coverage. Together, they advance the virtual constellation paradigm for mid-resolution land imaging. In this study, a global analysis of Landsat-8, Sentinel-2A and Sentinel-2B metadata for 2016, obtained from the Committee on Earth Observation Satellites (CEOS) Visualization Environment (COVE) tool, is presented. A global equal-area projection grid defined every 0.05° is used, considering each sensor separately and all sensors combined. Histograms, maps and global summary statistics of the temporal revisit intervals (minimum, mean, and maximum) and the number of observations are reported. The temporal observation frequency improvements afforded by sensor combination are shown to be significant. In particular, considering Landsat-8, Sentinel-2A, and Sentinel-2B together will provide a global median average revisit interval of 2.9 days, and, over a year, a global median minimum revisit interval of 14 min (±1 min) and maximum revisit interval of 7.0 days.
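As a rough illustration of how such revisit statistics can be computed for a single grid cell, the sketch below derives minimum, median, and maximum revisit intervals from a short, hypothetical list of combined Landsat-8 and Sentinel-2 acquisition timestamps.

import numpy as np
import pandas as pd

# Hypothetical combined acquisition timestamps over one 0.05 degree grid cell.
acquisitions = pd.to_datetime([
    "2016-06-01 10:10", "2016-06-04 10:20", "2016-06-08 10:10",
    "2016-06-09 10:21", "2016-06-14 10:20", "2016-06-17 10:10",
]).sort_values()

intervals_days = np.diff(acquisitions.values).astype("timedelta64[m]").astype(float) / (60 * 24)
print(f"min = {intervals_days.min():.2f} d, median = {np.median(intervals_days):.2f} d, "
      f"max = {intervals_days.max():.2f} d")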
Article · Full-text available
Landsat optical images have enough spatial and spectral resolution to analyze vegetation growth characteristics. However, clouds and water vapor often degrade image quality, which limits the availability of usable images for time-series vegetation vitality measurement. To overcome this shortcoming, simulated images are used as an alternative. In this study, the weighted average method, the spatial and temporal adaptive reflectance fusion model (STARFM), and multilinear regression analysis were tested to produce simulated Landsat normalized difference vegetation index (NDVI) images of the Korean Peninsula. The test results showed that the weighted average method produced the images most similar to the actual images, provided that images were available within one month before and after the target date. The STARFM method gives good results when the input image date is close to the target date. Careful regional and seasonal consideration is required in selecting input images. During the summer season, due to clouds, it is very difficult to get images close enough to the target date. Multilinear regression analysis gives meaningful results even when the input image date is not so close to the target date. Average R² values for the weighted average method, STARFM, and multilinear regression analysis were 0.741, 0.70, and 0.61, respectively.
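A minimal sketch of the weighted average idea under stated assumptions: the NDVI image at a target date is approximated from two observations bracketing it, weighted by their temporal distance to the target; the arrays and dates are hypothetical placeholders for co-registered NDVI rasters.

import numpy as np

doy_before, doy_target, doy_after = 150, 165, 180
rng = np.random.default_rng(0)
ndvi_before = rng.random((100, 100))       # observation before the target date
ndvi_after = rng.random((100, 100))        # observation after the target date

# Weights inversely proportional to the temporal distance from the target date.
w_before = (doy_after - doy_target) / (doy_after - doy_before)
w_after = (doy_target - doy_before) / (doy_after - doy_before)
ndvi_simulated = w_before * ndvi_before + w_after * ndvi_after

A multilinear regression alternative would instead regress target-date NDVI on several earlier observations and apply the fitted coefficients pixel by pixel, which is one reason it can remain useful when no observation lies close to the target date.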
Article
The ability to regionally monitor crop progress and condition through the growing season benefits both crop management and yield estimation. In the United States, these metrics are reported weekly at state or district (multiple counties) levels by the U.S. Department of Agriculture (USDA) National Agricultural Statistics Service (NASS) using field observations provided by trained local reporters. However, the ground data collection process supporting this effort is time consuming and subjective. Furthermore, operational crop management and yield estimation efforts require information with more granularity than at the state or district level. This paper evaluates remote sensing approaches for mapping crop phenology using vegetation index time series generated by fusing Landsat and MODIS (Moderate Resolution Imaging Spectroradiometer) surface reflectance imagery to improve temporal sampling over that provided by Landsat alone. The case study focuses on an agricultural region in central Iowa from 2001 to 2014. Our objectives are 1) to assess Landsat-MODIS data fusion results over cropland; 2) to map crop phenology at 30 m resolution using fused surface reflectance data; and 3) to identify the relationships between remotely sensed crop phenology metrics and the crop progress stages reported by NASS. The results show that detailed spatial and temporal variability in vegetation development across this landscape can be identified using the fused Landsat-MODIS data. The mean difference (bias) in Normalized Difference Vegetation Index (NDVI) between actual Landsat observations and the fused Landsat-MODIS data, generated for Landsat overpass dates, is in the range of −0.011 to 0.028 for every year. The derived phenological metrics show distinct features for different crops and natural vegetation at field scales. Strong correlations are observed between remotely sensed phenological stages, based on NDVI curve inflection points, and the observed crop physiological growth stages from the NASS Crop Progress (CP) reports. The green-up dates detected from remote sensing data typically occurred during crop vegetative stages when 2–4 leaves were developed for both corn and soybeans, or about 1–3 weeks after the reported emergence dates when the plants were first visible to ground-based observers. Despite being a lagging indicator, remotely sensed green-up can be used effectively to backcast emergence, e.g., as input to spatially distributed crop models. The differences in green-up date between corn and soybean were 8–10 days, consistent with the offset in emergence dates reported by NASS at district level. The reported harvest dates were typically about 2–3 weeks after the dormancy stage was detected via remote sensing for corn and about 1–2 weeks for soybeans. This suggests that probable harvest times for individual fields may be predicted 1–3 weeks ahead using remote sensing data. The results suggest that crop phenology and certain growth stages at field scales (30 m spatial resolution) can be linked and mapped by integrating imagery from multiple remote sensing platforms.
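A minimal sketch of one way to flag green-up from such a fused NDVI series, using the day of year with the steepest NDVI increase as an inflection-point proxy; the logistic curve below is a synthetic stand-in for a fused 30 m pixel time series, not the detection rule used in the study.

import numpy as np

doy = np.arange(90, 330, 8)
ndvi = 0.2 + 0.6 / (1.0 + np.exp(-(doy - 170) / 12.0))   # synthetic logistic green-up curve

slope = np.gradient(ndvi, doy)            # first derivative of NDVI with respect to DOY
greenup_doy = doy[np.argmax(slope)]       # steepest rise on the green-up limb
print(f"estimated green-up around DOY {greenup_doy}")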
Chapter
Large volumes of data from satellite sensors with high time-resolution exist today, e.g. Advanced Very High Resolution Radiometer (AVHRR) and Moderate Resolution Imaging Spectroradiometer (MODIS), calling for efficient data processing methods. TIMESAT is a free software package for processing satellite time-series data in order to investigate problems related to global change and monitoring of vegetation resources. The assumptions behind TIMESAT are that the sensor data represent the seasonal vegetation signal in a meaningful way, and that the underlying vegetation variation is smooth. A number of processing steps are taken to transform the noisy signals into smooth seasonal curves, including fitting asymmetric Gaussian or double logistic functions, or smoothing the data using a modified Savitzky-Golay filter. TIMESAT can adapt to the upper envelope of the data, accounting for negatively biased noise, and can take missing data and quality flags into account. The software enables the extraction of seasonality parameters, like the beginning and end of the growing season, its length, integrated values, etc. TIMESAT has been used in a large number of applied studies for phenology parameter extraction, data smoothing, and general data quality improvement. To enable efficient analysis of future Earth Observation data sets, developments of TIMESAT are directed towards processing of high-spatial resolution data from e.g. Landsat and Sentinel-2, and use of spatio-temporal data processing methods.
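As a rough, hedged illustration of the upper-envelope adaptation idea mentioned above, the sketch below repeatedly applies a Savitzky-Golay filter and keeps observations that lie above the fit, so the smoothed curve is drawn towards the upper envelope of negatively biased NDVI data; the window length, polynomial order, iteration count, and noisy series are illustrative assumptions, not TIMESAT's actual settings.

import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
doy = np.arange(1, 366, 8)
clean = 0.2 + 0.6 * np.exp(-((doy - 200) / 55.0) ** 2)
ndvi = clean - np.abs(rng.normal(0, 0.08, doy.size))     # cloud-induced negative bias

adapted = ndvi.copy()
for _ in range(3):                                       # iterative upper-envelope adaptation
    fit = savgol_filter(adapted, window_length=9, polyorder=2)
    adapted = np.maximum(fit, ndvi)                      # retain observations above the fit
seasonal_curve = savgol_filter(adapted, window_length=9, polyorder=2)

Seasonality parameters such as the start and end of the growing season could then be extracted from the smoothed curve, for example with an amplitude threshold as in the dynamic threshold sketch above.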