On the blending of the Landsat and MODIS surface reflectance: predicting daily Landsat surface reflectance. IEEE Trans Geosci Remote Sens

Earth Resources Technology Inc, Jessup, MD
IEEE Transactions on Geoscience and Remote Sensing (Impact Factor: 3.51). 08/2006; 44(8):2207-2218. DOI: 10.1109/TGRS.2006.872081
Source: DBLP

ABSTRACT: The 16-day revisit cycle of Landsat has long limited its use for studying global biophysical processes, which evolve rapidly during the growing season. In cloudy areas of the Earth, the problem is compounded, and researchers are fortunate to get two to three clear images per year. At the same time, the coarse resolution of sensors such as the Advanced Very High Resolution Radiometer and the Moderate Resolution Imaging Spectroradiometer (MODIS) limits those sensors' ability to quantify biophysical processes in heterogeneous landscapes. In this paper, the authors present a new spatial and temporal adaptive reflectance fusion model (STARFM) algorithm to blend Landsat and MODIS surface reflectance. Using this approach, high-frequency temporal information from MODIS and high-resolution spatial information from Landsat can be blended for applications that require high resolution in both time and space. The MODIS daily 500-m surface reflectance and the 16-day repeat-cycle Landsat Enhanced Thematic Mapper Plus (ETM+) 30-m surface reflectance are used to produce a synthetic "daily" surface reflectance product at ETM+ spatial resolution. The authors present results with both simulated (model) data and actual Landsat/MODIS acquisitions. In general, STARFM accurately predicts surface reflectance at an effective resolution close to that of the ETM+. However, performance depends on the characteristic patch size of the landscape and degrades somewhat on extremely heterogeneous, fine-grained landscapes.
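The blending step described in the abstract can be sketched in a few lines. This is a rough illustration, not the published implementation: the function name, the equal-weight toy inputs, and the assumption that MODIS has already been resampled to the 30-m Landsat grid are all ours. Each neighbouring pixel contributes its Landsat reflectance from a reference date t_k plus the MODIS-observed change between t_k and the prediction date t_0, and the contributions are combined with normalised weights.

```python
import numpy as np

def starfm_predict(landsat_tk, modis_tk, modis_t0, weights):
    """Sketch of a STARFM-style prediction for one target pixel.

    landsat_tk, modis_tk : reflectances of neighbouring pixels at the
        reference date t_k (MODIS assumed resampled to the 30-m grid).
    modis_t0 : MODIS reflectances of the same neighbours at the
        prediction date t_0.
    weights  : combined per-neighbour weights; normalised here.
    Returns the predicted 30-m reflectance at t_0.
    """
    l = np.asarray(landsat_tk, dtype=float)
    m_tk = np.asarray(modis_tk, dtype=float)
    m_t0 = np.asarray(modis_t0, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalise weights to sum to 1
    # Each neighbour contributes its Landsat value plus the MODIS-observed
    # temporal change between t_k and t_0.
    return float(np.sum(w * (l + m_t0 - m_tk)))

# Toy example: three neighbours, equal weights, uniform MODIS change of +0.05.
pred = starfm_predict([0.20, 0.22, 0.21],
                      [0.25, 0.25, 0.25],
                      [0.30, 0.30, 0.30],
                      [1.0, 1.0, 1.0])
```

With a uniform MODIS change, the prediction reduces to the mean Landsat reflectance shifted by that change, which is the intuition behind the blend.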

Available from: Forrest G. Hall, Jan 31, 2015
    • "STARFM relies on the assumption that land cover does not change between the estimation- and reference-time periods. A series of weights (spatial, temporal, and distance weights) was introduced to increase the ability of the fusion model to detect land cover change (Gao et al., 2006). STARFM has been proven useful for detecting gradual changes but was shown to be less effective in detecting abrupt changes often caused by disturbances (Hilker et al., 2009). "
    ABSTRACT: Monitoring forest disturbances using remote-sensing data with high spatial and temporal resolution can reveal relationships between forest disturbances and forest ecological patterns and processes. In this study, we fused Landsat data at high spatial resolution (30 m) with 8-day MODIS data to produce high spatial and temporal resolution image time-series. The Spatial Temporal Adaptive Algorithm for mapping Reflectance Change (STAARCH) is a simple but effective fusion method. We adapted the STAARCH fusion method to produce a time-series of disturbances with high overall accuracy (89–92%) in mixed forests in southeast Oklahoma. The results demonstrated that in southeast Oklahoma, the forest area disturbed in 2011 was higher than in 2000. However, two notable drops were identified in 2001 and 2006. We speculate that these drops were related to economic recessions that reduced demand for wood products. The detected fluctuation in disturbed area calls for continued monitoring of spatial and temporal changes in this and other forest landscapes using high spatial and temporal resolution imagery to better recognize the economic and environmental factors, as well as the consequences of those changes.
    International Journal of Applied Earth Observation and Geoinformation 02/2016; 44:42-52. DOI:10.1016/j.jag.2015.07.001 · 3.47 Impact Factor
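The spatial, temporal, and distance weights mentioned in the excerpt above can be illustrated with a minimal sketch. The multiplicative combination and the specific functional forms below are our simplification, not the exact formulation of Gao et al. (2006): pixels that are spectrally closer to the MODIS observation, show less temporal change, and lie nearer the window centre receive larger weights.

```python
import numpy as np

def combined_weights(spec_diff, temp_diff, dist, window_half=25.0):
    """Illustrative STARFM-like weighting (simplified, not the published form).

    spec_diff : per-neighbour spectral difference |Landsat - MODIS| at t_k.
    temp_diff : per-neighbour temporal difference |MODIS(t_0) - MODIS(t_k)|.
    dist      : per-neighbour distance (in pixels) from the window centre.
    Returns weights normalised to sum to 1.
    """
    spec = np.asarray(spec_diff, dtype=float)
    temp = np.asarray(temp_diff, dtype=float)
    # Relative spatial distance: 1 at the centre, growing with distance.
    d = 1.0 + np.asarray(dist, dtype=float) / window_half
    # Combined "cost": small spectral/temporal difference and short
    # distance mean a low cost; epsilon avoids division by zero.
    c = spec * temp * d + 1e-12
    w = 1.0 / c
    return w / w.sum()

# Toy example: the first neighbour is spectrally closer and at the centre,
# so it should dominate the combined weight.
w = combined_weights([0.01, 0.05], [0.02, 0.02], [0.0, 10.0])
```

The inverse-cost form captures the qualitative behaviour described in the excerpt; the published algorithm additionally filters candidate pixels for spectral similarity before weighting.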
    • "Due to the data volume generated, applications that seek to combine these types of data have thus far been limited to local and regional studies (Hilker, Wulder, Coops, Seitz, et al., 2009; Schmidt et al., 2015). Nonetheless, the increase in the number of data fusion models over the past several years (Gao et al., 2006; Hilker, Wulder, Coops, Linke, et al., 2009; Gevaert & Garcia-Haro, 2015) demonstrates the interest in these types of observations for mapping vegetation parameters and improving Earth system modeling. "
    ABSTRACT: Free and open access to satellite imagery and value-added data products have revolutionized the role of remote sensing in Earth system science. Nonetheless, rapid changes in the global environment pose challenges to the science community that are increasingly difficult to address using data from single satellite sensors or platforms, due to the underlying limitations of data availability and the tradeoffs that govern the design and implementation of currently existing sensors. Virtual constellations of planned and existing satellite sensors may help to overcome this limitation by combining existing observations to mitigate the limitations of any one particular sensor. While multi-sensor applications are not new, the integration and harmonization of multi-sensor data are still challenging, requiring tremendous effort from the science and operational user communities. Defined by the Committee on Earth Observation Satellites (CEOS) as a "set of space and ground segment capabilities that operate in a coordinated manner to meet a combined and common set of Earth Observation requirements", virtual constellations can principally be used to combine sensors with similar spatial, spectral, temporal, and radiometric characteristics. We extend this definition to also include sensors that are principally incompatible because they are fundamentally different (for instance, active versus passive remote-sensing systems), but whose combination is necessary and beneficial to achieve a specific monitoring goal. In this case, constellations are more likely to build upon the complementarity of the resultant information products from these incompatible sensors rather than the raw physical measurements. In this communication, we explore the potential of, and possible limitations to be overcome regarding, virtual constellations for terrestrial science applications, discuss the potentials and limitations of various candidate sensors, and provide context on the integration of sensors. Thematically, we focus on land-cover and land-use change (LCLUC), with emphasis given to medium spatial resolution (i.e., pixels sided 10 to 100 m) sensors, specifically as a complement to those onboard the Landsat series of satellites. We conclude that virtual constellations have the potential to notably improve observation capacity and thereby Earth science and monitoring programs in general. Various national and international parties have made notable and valuable progress related to virtual constellations. There is, however, inertia inherent to Earth observation programs, largely related to their complexity, as well as national interests, observation aims, and high system costs. Herein we define and describe virtual constellations, outline the science and applications information needs to provide context, present the scientific support for a range of virtual constellation levels based upon applications readiness, and close with a discussion of issues and opportunities toward facilitating implementation of virtual constellations in their various forms.
    Remote Sensing of Environment 09/2015; 170:62-76. DOI:10.1016/j.rse.2015.09.001 · 6.39 Impact Factor
    • "data to generate synthetic Landsat-like imagery on a daily basis (Gao et al. 2006), represents a significant step in this direction. STARFM detects reflectance changes in the MODIS data and predicts Landsat reflectance through similar neighbouring pixels weighted by spectral, temporal, and spatial distances. "
    ABSTRACT: Remotely sensed surface parameters, such as vegetation index, leaf area index, surface temperature, and evapotranspiration, show diverse spatial scales and temporal dynamics. Generally, the spatial and temporal resolutions of remote-sensing data should match the characteristics of the surface parameters under observation. These requirements sometimes cannot be met by a single sensor due to the trade-off between spatial and temporal resolutions. Many spatial and temporal fusion (STF) methods have been proposed to derive the required data. However, the methodology suffers from disorderly development. To better inform future research, this study generalizes the existing methods from around 100 studies into spatial and temporal categories based on their physical assumptions related to spatial scales and temporal dynamics. Specifically, the assumptions concern the scale invariance of the temporal information and the temporal constancy of the spatial information. The spatial information can be contexture or spatial details. Experiments are conducted using Landsat data acquired on 13 dates in two study areas and simulated Moderate Resolution Imaging Spectroradiometer (MODIS) data. The results are presented to demonstrate the typical methods from each category. This study concludes the following. (1) Contexture methods depend heavily on how component maps (contexture) are defined; they are not recommended except when component maps can be estimated properly from observed images. (2) The spatial and temporal adaptive reflectance fusion model (STARFM) and enhanced STARFM (ESTARFM) methods belong to the temporal and spatial categories, respectively. Thus, STARFM and ESTARFM are better applied to temporal variance-dominated and spatial variance-dominated areas, respectively. (3) Non-linear methods, such as the sparse representation-based spatio-temporal reflectance fusion model, can successfully address land-cover changes in addition to phenological changes, thereby providing a promising option for STF problems in the future.
    International Journal of Remote Sensing 09/2015; 36(17):4411-4445. DOI:10.1080/01431161.2015.1083633 · 1.65 Impact Factor