Article

On the blending of the Landsat and MODIS surface reflectance: predicting daily Landsat surface reflectance

Earth Resources Technology Inc, Jessup, MD
IEEE Transactions on Geoscience and Remote Sensing (Impact Factor: 3.51). 08/2006; 44(8):2207-2218. DOI: 10.1109/TGRS.2006.872081
Source: DBLP

ABSTRACT: The 16-day revisit cycle of Landsat has long limited its use for studying global biophysical processes, which evolve rapidly during the growing season. In cloudy areas of the Earth, the problem is compounded, and researchers are fortunate to get two to three clear images per year. At the same time, the coarse resolution of sensors such as the Advanced Very High Resolution Radiometer and Moderate Resolution Imaging Spectroradiometer (MODIS) limits the sensors' ability to quantify biophysical processes in heterogeneous landscapes. In this paper, the authors present a new spatial and temporal adaptive reflectance fusion model (STARFM) algorithm to blend Landsat and MODIS surface reflectance. Using this approach, high-frequency temporal information from MODIS and high-resolution spatial information from Landsat can be blended for applications that require high resolution in both time and space. The MODIS daily 500-m surface reflectance and the 16-day repeat cycle Landsat Enhanced Thematic Mapper Plus (ETM+) 30-m surface reflectance are used to produce a synthetic "daily" surface reflectance product at ETM+ spatial resolution. The authors present results with both simulated (model) data and actual Landsat/MODIS acquisitions. In general, STARFM accurately predicts surface reflectance at an effective resolution close to that of the ETM+. However, the performance depends on the characteristic patch size of the landscape and degrades somewhat when used on extremely heterogeneous, fine-grained landscapes.
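
To make the blending concrete: for each fine-resolution pixel, STARFM predicts reflectance at the target date t_0 as a weighted combination, over a moving window of size w and n base dates t_k with paired acquisitions, of the Landsat observation plus the MODIS change between t_k and t_0. In simplified notation (after the paper's central equation; L and M denote Landsat and resampled MODIS surface reflectance):

    L(x_{w/2}, y_{w/2}, t_0) = \sum_{i=1}^{w} \sum_{j=1}^{w} \sum_{k=1}^{n} W_{ijk} \left[ M(x_i, y_j, t_0) + L(x_i, y_j, t_k) - M(x_i, y_j, t_k) \right]

Here (x_{w/2}, y_{w/2}) is the central pixel of the window, and the normalized weight W_{ijk} decreases with the Landsat-MODIS spectral difference at t_k, the MODIS temporal difference between t_k and t_0, and the spatial distance from the central pixel.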

    • "STARFM relies on the assumption that land cover does not change between the estimation-and reference-time periods. A series of weights (spatial, temporal, and distance weights) were introduced to increase the ability of the fusion model to detect land cover change (Gao et al., 2006). STARFM has been proven useful to detect gradual changes but was shown to be less effective in detecting abrupt changes often caused by disturbances (Hilker et al., 2009). "
    ABSTRACT: Monitoring forest disturbances using remote sensing data with high spatial and temporal resolution can reveal relationships between forest disturbances and forest ecological patterns and processes. In this study, we fused Landsat data at high spatial resolution (30 m) with 8-day MODIS data to produce a high spatial and temporal resolution image time-series. The Spatial Temporal Adaptive Algorithm for mapping Reflectance Change (STAARCH) is a simple but effective fusion method. We adapted the STAARCH fusion method to produce a time-series of disturbances with high overall accuracy (89–92%) in mixed forests in southeast Oklahoma. The results demonstrated that in southeast Oklahoma, the forest area disturbed in 2011 was larger than in 2000; however, two notable drops were identified in 2001 and 2006. We speculate that these drops were related to economic recessions that reduced demand for wood products. The detected fluctuation in disturbed area calls for continued monitoring of spatial and temporal changes in this and other forest landscapes using high spatial and temporal resolution imagery to better recognize the economic and environmental drivers, as well as the consequences, of those changes.
    International Journal of Applied Earth Observation and Geoinformation 02/2016; 44:42-52. DOI:10.1016/j.jag.2015.07.001 · 3.47 Impact Factor
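
    The weighting quoted in the excerpt above can be made concrete with a short sketch. The following Python fragment is illustrative only: the function and parameter names are assumptions, it handles a single band and a single Landsat/MODIS base pair, and it omits STARFM's candidate-pixel selection and quality screening.

        import numpy as np

        def starfm_predict(L_tk, M_tk, M_t0, win=31, A=15.0, eps=1e-6):
            # L_tk : Landsat reflectance at base date tk (2-D array on the 30-m grid)
            # M_tk : MODIS reflectance at tk, resampled to the same grid
            # M_t0 : MODIS reflectance at the prediction date t0, resampled likewise
            half = win // 2
            rows, cols = L_tk.shape
            pred = np.full(L_tk.shape, np.nan)
            # Relative-distance term: influence decays away from the window center
            dy, dx = np.mgrid[-half:half + 1, -half:half + 1]
            dist = 1.0 + np.hypot(dx, dy) / A
            for r in range(half, rows - half):
                for c in range(half, cols - half):
                    Lw = L_tk[r - half:r + half + 1, c - half:c + half + 1]
                    Mw = M_tk[r - half:r + half + 1, c - half:c + half + 1]
                    M0w = M_t0[r - half:r + half + 1, c - half:c + half + 1]
                    S = np.abs(Lw - Mw) + eps   # spectral difference (Landsat vs. MODIS at tk)
                    T = np.abs(Mw - M0w) + eps  # temporal difference (MODIS tk vs. t0)
                    C = S * T * dist            # combined index: small C = trustworthy neighbor
                    W = (1.0 / C) / np.sum(1.0 / C)  # normalized weights summing to 1
                    pred[r, c] = np.sum(W * (M0w + Lw - Mw))
            return pred

    Note how the temporal term T grows wherever MODIS reflectance changes sharply between the base and prediction dates: such pixels receive low weights, so the prediction leans on unchanged neighbors, which is consistent with the reduced sensitivity to abrupt disturbances noted in the excerpt.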
    • "When considering international satellite missions such as Sentinel, CBERS-2, and IRS, the rich source of medium-resolution remotely sensed data suggests that we may now move urban mapping from the local and regional, to the global scale. Despite the great potential for the combined use of existing and future medium-resolution imagery, many issues deserve to be studied further, including cross-sensor comparison and normalization (Schroeder et al. 2006; Wulder et al. 2008), multisensor fusion (Gao et al. 2006; Weng et al. 2014), and utilization of full suite of Landsat-like data for any location and date (Powell et al. 2007; Gao et al. 2012). Significant challenges remain for mapping urbanization over large areas, in terms of validation and systematically processing data from multiple times, various sources/instruments , and different seasons (Gao et al. 2012). "
    ABSTRACT: In the tropical and subtropical regions, remote sensing of the urban environment faces more challenges than in the temperate zones due to year-round cloudy and rainy conditions, complex hydrological systems that often display strong seasonal changes in water surface area, and the phenological, morphological, and species complexity of the vegetation. Optical data frequently show their weakness in these regions, which prompts researchers to use different sources of imagery from microwave remote sensing. Synthetic aperture radar (SAR), for instance, has been widely employed to provide complementary information to optical imagery because it works in all weather conditions, free from the influence of clouds and rain. …
    Remote Sensing of Impervious Surfaces in Tropical and Subtropical Areas, edited by Hongsheng Zhang, Hui Lin, Yuanzhi Zhang, and Qihao Weng, 09/2015: chapter Preface: pages xvii-xxi; CRC Press, ISBN: 978-1-4822-5486-0
    • "These approaches are especially suited when the input images exhibit significantly different spatial resolutions or temporal revisit times [50]. This assumption was used by the spatial and temporal adaptive reflectance fusion model (STARFM) [46], [51] for combining information from Landsat (30 m resolution) and MODIS (250 m to 1 km resolution, more frequent overpass), and by a full family of methods based on [52] for increasing the spatial resolution of MERIS (300 m) by mapping fractional abundances from Landsat through classification [53]–[55]. These methods are particularly interesting examples of fusion as multiple (spatial, spectral, temporal) information modes are jointly considered. "
    ABSTRACT: Earth observation through remote sensing images allows the accurate characterization and identification of materials on the surface from space and airborne platforms. Multiple and heterogeneous image sources can be available for the same geographical region: multispectral, hyperspectral, radar, multitemporal, and multiangular images can today be acquired over a given scene. These sources can be combined/fused to improve classification of the materials on the surface. Even if these types of systems are generally accurate, the field is about to face new challenges: the upcoming constellations of satellite sensors will acquire large amounts of images of different spatial, spectral, angular, and temporal resolutions. In this scenario, multimodal image fusion stands out as the appropriate framework to address these problems. In this paper, we provide a taxonomical view of the field and review the current methodologies for multimodal classification of remote sensing images. We also highlight the most recent advances, which exploit synergies with machine learning and signal processing: sparse methods, kernel-based fusion, Markov modeling, and manifold alignment. Then, we illustrate the different approaches in seven challenging remote sensing applications: 1) multiresolution fusion for multispectral image classification; 2) image downscaling as a form of multitemporal image fusion and multidimensional interpolation among sensors of different spatial, spectral, and temporal resolutions; 3) multiangular image classification; 4) multisensor image fusion exploiting physically-based feature extractions; 5) multitemporal image classification of land covers in incomplete, inconsistent, and vague image sources; 6) spatiospectral multisensor fusion of optical and radar images for change detection; and 7) cross-sensor adaptation of classifiers. The adoption of these techniques in operational settings will help to monitor our planet from space in the very near future.
    Proceedings of the IEEE 09/2015; 103(9):1560-1584. DOI:10.1109/JPROC.2015.2449668 · 5.47 Impact Factor
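
    The unmixing-based downscaling mentioned in the excerpt above can also be sketched briefly. In this family of methods, a fine-resolution classification supplies per-class area fractions inside each coarse pixel, and per-class reflectances are recovered by least squares within a local window. The fragment below is a minimal, hypothetical Python sketch: the function names are assumptions, and the sliding-window bookkeeping and the constraints or regularization used in practice are omitted.

        import numpy as np

        def unmix_window(coarse_refl, fractions):
            # coarse_refl : (n_coarse,) reflectance of the coarse pixels in a local window
            # fractions   : (n_coarse, n_classes) per-class area fractions inside each
            #               coarse pixel, derived from a fine-resolution classification
            # Linear mixing model: coarse_refl ~ fractions @ class_refl
            class_refl, *_ = np.linalg.lstsq(fractions, coarse_refl, rcond=None)
            return class_refl

        def downscale(coarse_refl, fractions, fine_labels):
            # fine_labels : (H, W) integer class map at the fine resolution;
            # each fine pixel inherits the reflectance estimated for its class
            class_refl = unmix_window(coarse_refl, fractions)
            return class_refl[fine_labels]

    The output is piecewise constant per class within each window, which is why such methods trade radiometric detail for spatial detail relative to weighting schemes like STARFM.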