Conference Paper

Who Does What Where? Advanced Earth Observation for Humanitarian Crisis Management


Abstract

This study investigated the performance of data fusion algorithms when applied to very high spatial resolution satellite images that encompass ongoing- and post-crisis scenes. The evaluation entailed twelve fusion algorithms. The candidate algorithms were applied to GeoEye-1 satellite images taken over three different geographical settings representing natural and anthropogenic crises that had occurred in the recent past: earthquake-damaged sites in Haiti, flood-impacted sites in Pakistan, and armed-conflicted areas in Sri Lanka. Fused images were assessed subjectively and objectively. Spectral quality metrics included correlation coefficient, peak signal-to-noise ratio index, mean structural similarity index, spectral angle mapper, and relative dimensionless global error in synthesis. The spatial integrity of fused images was assessed using Canny edge correspondence and high-pass correlation coefficient. Under each metric, fusion methods were ranked and the best competitors were identified. In this study, the Ehlers fusion, wavelet principal component analysis (WV-PCA) fusion, and the high-pass filter fusion algorithms reported the best values for the majority of spectral quality indices. Under spatial metrics, the University of New Brunswick and Gram-Schmidt fusion algorithms reported the optimum values. The color normalization sharpening and subtractive resolution merge algorithms exhibited the highest spectral distortions, whereas the WV-PCA algorithm showed the weakest spatial improvement. In conclusion, this study recommends the University of New Brunswick algorithm if visual image interpretation is involved, whereas the high-pass filter fusion is recommended if semi- or fully-automated feature extraction is involved, for pansharpening VHSR satellite images of ongoing- and post-crisis sites.
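
Two of the spectral metrics named above, the spectral angle mapper and ERGAS, are compact enough to sketch directly. The following is a minimal plain-NumPy illustration, not the authors' implementation; the resolution ratio of 4 is an assumption matching typical VHSR sensors (e.g. GeoEye-1 PAN vs. MS).

```python
import numpy as np

def spectral_angle_mapper(ms, fused):
    """Mean spectral angle (radians) between corresponding pixel vectors.

    ms, fused: arrays of shape (H, W, bands). Smaller is better; identical
    spectral directions give an angle of zero."""
    a = ms.reshape(-1, ms.shape[-1]).astype(float)
    b = fused.reshape(-1, fused.shape[-1]).astype(float)
    dot = np.sum(a * b, axis=1)
    denom = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
    cos = np.clip(dot / np.maximum(denom, 1e-12), -1.0, 1.0)
    return float(np.mean(np.arccos(cos)))

def ergas(ms, fused, ratio=4):
    """Relative dimensionless global error in synthesis (lower is better).

    ratio: PAN/MS spatial resolution ratio (an assumed value of 4 here)."""
    bands = ms.shape[-1]
    acc = 0.0
    for k in range(bands):
        rmse = np.sqrt(np.mean((fused[..., k].astype(float) - ms[..., k]) ** 2))
        mean_k = np.mean(ms[..., k].astype(float))
        acc += (rmse / mean_k) ** 2
    return 100.0 / ratio * np.sqrt(acc / bands)
```

In practice both metrics are computed against the original MS image (or a degraded-scale reference, depending on the evaluation protocol).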


... Natural hazards, man-made disasters, civil wars, and regional conflicts occur and recur, causing huge losses of human lives and property, prolonged social and economic disruption, and environmental degradation (Roy and Blaschke, 2010; Witharana et al., 2010; Witharana, 2012; Witharana and Civco, 2012). The size and complexity of these crises have escalated drastically over the past decades and will most likely amplify in the future due to the projected impacts of climate change coupled with adverse socio-economic and political conditions (e.g., paucity of natural resources, poverty, population growth, weak governance, and tension among different ethnic groups) (Voigt, 2007; UNEP, 2011). Although natural and anthropogenic crises are inescapable, practicing effective crisis management strategies could relieve the impact on the human and natural environments. ...
... Although these crises are inescapable, practicing effective crisis management strategies could relieve the impact on human lives. Remote sensing (RS) is an indispensable resource in crisis management (Cheema, 2007; Joyce et al., 2009; Dell`Acqua and Polli, 2011; Kaya et al., 2011; Witharana, 2012). Modern satellite sensors, such as IKONOS, QuickBird, GeoEye, WorldView, and Pléiades, provide very high spatial resolution (VHSR) multi-spectral imagery (at the sub-meter level) that can capture the fine details needed for crisis information (Witharana and Civco, 2012). ...
... Mean values were then calculated for all bands and for each subset in the study area. We encourage readers to refer to our previous work (Witharana, 2012) for more details on candidate fusion algorithms and quality metrics. The major steps involved in the fusion-evaluation workflow are depicted in Figure 5. ...
Article
Advanced Earth observation (EO) is increasingly recognized as an indispensable tool to support rigorous and robust decision making in crisis management. In crisis scenarios, EO data need to be streamed through time-critical workflows for delivering reliable and effective information to civil protection authorities. In this context, the overarching goal of this research is to closely examine the integral segments of routine EO-based rapid mapping workflows to understand prevailing shortfalls and to devise novel approaches to catalyze conditioned geoinformation delivery to cater to increasing user demand. This study envisions three interconnected objectives, which are primarily fuelled by very high spatial resolution (VHSR) imagery, data fusion, image segmentation, and the geographic object-based image analysis (GEOBIA) framework. Focal study areas encapsulate natural and anthropogenic crises having occurred in the recent past: the 2010 earthquake-damaged areas in Haiti, the 2010 flood-impacted sites in Pakistan, and armed-conflicted areas and internally displaced persons (IDP) camps in Sri Lanka. The first objective investigated how different data fusion algorithms perform when applied to VHSR satellite images that encompass ongoing- and post-crisis scenes. The evaluation entailed twelve fusion algorithms. The spatial and spectral fidelities were assessed subjectively and objectively using fourteen quality indices. The Ehlers, wavelet, and high-pass filtering (HPF) fusion algorithms had the best scores for the majority of spectral quality indices. The University of New Brunswick and Gram-Schmidt fusion algorithms had the best scores for spatial metrics. The HPF algorithm emerged as the overall best-performing fusion algorithm. The second objective aimed to unravel the synergies of data fusion and image segmentation in the context of EO-based rapid mapping workflows. We statistically compared the quality of image object candidates among twelve fused products and their original MS and PAN images.
We have shown that the GEOBIA framework has the ability to create meaningful image objects during the segmentation process by compensating for the fused image's spectral distortions with the high-frequency information content injected during fusion. We further questioned the necessity of the data fusion step in the rapid mapping context: bypassing time-intensive data fusion steps helps to accelerate EO-based rapid mapping workflows. The third objective explored the efficacy of supervised empirical discrepancy measures for optimizing the multiresolution segmentation (MRS) algorithm. I selected the Euclidean distance 2 (ED2) metric, a recently proposed supervised metric that measures dissimilarity between a reference polygon and an image object candidate, to investigate the validity and efficacy of empirical discrepancy measures for finding the optimal scale parameter setting of the MRS algorithm. The discriminative capacity of the ED2 metric across different scale groups was tested using non-parametric statistical methods. My results showed that the ED2 metric significantly discriminates the quality of image object candidates at smaller scale values but loses sensitivity at larger scale values. This questions the meaningfulness of the ED2 metric in the MRS algorithm's parameter optimization.
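
The ED2 metric mentioned above is commonly formulated as ED2 = sqrt(PSE² + NSR²), combining a geometric term (potential segmentation error) with an arithmetic term (number-of-segments ratio). The sketch below is a simplified, hedged reimplementation for a single reference object on rasterized masks; the correspondence rule used here (a segment corresponds when more than half its area falls inside the reference) is a simplifying assumption, not necessarily the rule used in the cited work.

```python
import numpy as np

def ed2(reference, segments):
    """Euclidean Distance 2 for one reference object: sqrt(PSE^2 + NSR^2).

    reference: boolean mask of the reference polygon.
    segments:  integer label image of candidate image objects (0 = background).
    A segment 'corresponds' to the reference when more than half of its area
    lies inside the reference (an illustrative assumption)."""
    ref_area = reference.sum()
    over = 0.0   # area of corresponding segments spilling outside the reference
    v = 0        # number of corresponding segments
    for lab in np.unique(segments[segments > 0]):
        seg = segments == lab
        inside = np.logical_and(seg, reference).sum()
        if inside > seg.sum() / 2:
            v += 1
            over += seg.sum() - inside
    pse = over / ref_area    # potential segmentation error
    nsr = abs(1 - v)         # number-of-segments ratio with m = 1 reference
    return float(np.hypot(pse, nsr))
```

A perfect one-to-one match yields ED2 = 0; over-segmentation inflates NSR, under-segmentation inflates PSE, which matches the metric's intent of penalizing both failure modes.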
... A fusion algorithm that preserves the spectral properties of the MS data and the spatial properties of the PAN data would be ideal, but there is always a compromise (Witharana, 2012; Witharana and Civco, 2012). Many studies report the problems and limitations associated with different fusion techniques (Chavez et al., 1991; Wald et al., 1997; Zhang, 2002). ...
... Spectral distortions, including spatial artifacts, affect both manual and automated classifications because any error in the synthesis of the spectral signatures at the highest spatial resolution incurs an error in the decision (Ranchin et al., 2003). Qualitative comparison of the fused image and the original MS and PAN images for colour preservation and spatial improvement is the simplest yet an effective way of benchmarking different fusion algorithms (Nikolakopoulos, 2008; Witharana, 2012; Witharana, 2013); however, visual inspection methods are subjective and largely depend on the experience of the interpreter (Klonus and Ehlers, 2007; Ehlers et al., 2010). ...
... Compared to spectral quality indicators, only a few metrics are available to evaluate the spatial fidelity of fused images (Witharana and Civco, 2012; Ehlers et al., 2010; Gangkofner et al., 2008; Ehlers, 2007; Yakhdani and Azizi, 2010). Witharana (2012) used high-pass correlation and edge detection using filters such as Canny, Sobel, and Prewitt. ...
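
The high-pass correlation idea referred to above is straightforward: filter both the PAN image and a fused band with a high-pass kernel, then correlate the results; a value near 1 means the fused product absorbed the PAN spatial detail. A minimal plain-NumPy sketch (a 3x3 Laplacian kernel is an illustrative choice, not necessarily the one used in the cited studies):

```python
import numpy as np

LAPLACIAN = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]], float)

def high_pass(img):
    """3x3 Laplacian high-pass filter with zero padding, plain NumPy."""
    h, w = img.shape
    p = np.pad(img.astype(float), 1)
    out = np.zeros((h, w), float)
    for dy in range(3):
        for dx in range(3):
            out += LAPLACIAN[dy, dx] * p[dy:dy + h, dx:dx + w]
    return out

def high_pass_cc(pan, fused_band):
    """Correlation between high-pass-filtered PAN and fused band:
    values near 1 indicate good spatial-detail transfer."""
    a = high_pass(pan).ravel()
    b = high_pass(fused_band).ravel()
    return float(np.corrcoef(a, b)[0, 1])
```

The Canny edge correspondence metric follows the same logic but compares binary edge maps instead of filter responses.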
Article
Full-text available
Fusing high-spatial-resolution panchromatic and high-spectral-resolution multispectral images with complementary characteristics provides a basis for complex land-use and land-cover type classifications. In this research, we have investigated how well different pan-sharpening algorithms perform when applied to single-sensor single-date and multi-sensor multi-date images that encompass Horton Plains National Park (HPNP) in Sri Lanka. The evaluation entailed six fusion algorithms: Brovey transform, Ehlers fusion algorithm, High-Pass Filter (HPF) fusion algorithm, Modified Intensity-Hue-Saturation (MIHS) fusion algorithm, Principal Component Analysis (PCA) fusion algorithm, and the wavelet-PCA fusion algorithm. These algorithms were applied to eight different aerial and satellite images taken over the HPNP during the last five decades. Fused images were assessed for spectral and spatial fidelity using fifteen quantitative quality indicators and visual inspection methods. From our multidimensional quality assessment, the HPF emerged as the best-performing algorithm for single-sensor single-date and multi-sensor multi-date data fusion. We also examined the effect of fusion in an object-based image analysis framework and demonstrated how the quality of image object candidates improves when high-frequency information is injected into low-resolution multispectral images. This study sheds new light on how archived remote sensing data can be transformed into useful products in support of long-term forest dieback monitoring in Horton Plains National Park.
... A fusion algorithm that preserves the spectral properties of the MS data and the spatial properties of the PAN data would be ideal, but there is always a compromise [28,29]. Many studies report the problems and limitations associated with different fusion techniques [30,31]. ...
... Spectral distortions, including spatial artifacts, affect both manual and automated classifications because any error in the synthesis of the spectral signatures at the highest spatial resolution incurs an error in the decision [23]. Qualitative comparison of the fused image and the original MS and PAN images for color preservation and spatial improvement is the simplest yet an effective way of benchmarking different fusion algorithms [28,33]; however, visual inspection methods are subjective and largely depend on the experience of the interpreter [24,34]. ...
... Wang et al. (2004) proposed another metric called the Mean Structural Similarity Index (MSSIM), which was developed based on the findings of Wang and Bovik (2002). Compared to spectral quality indicators, only a few metrics are available to evaluate the spatial fidelity of fused images [29,37]. Ehlers et al. [24], Gangkofner et al. [20], Klonus and Ehlers [34], Yakhdani and Azizi [27], and Witharana [28] used high-pass correlation and edge detection using filters such as Canny, Sobel, and Prewitt. ...
Article
Full-text available
Fusing high-spatial-resolution panchromatic and high-spectral-resolution multispectral images with complementary characteristics provides a basis for complex land-use and land-cover type classifications. In this research, we investigated how well different pan-sharpening algorithms perform when applied to single-sensor single-date and multi-sensor multi-date images that encompass the Horton Plains National Park (HPNP), a highly fragile eco-region in Sri Lanka that has been experiencing severe canopy depletion since the 1970s. Our aim was to deliver resolution-enhanced multi-temporal images from multiple earth observation (EO) data sources in support of long-term dieback monitoring in the HPNP. We selected six candidate fusion algorithms: Brovey transform, Ehlers fusion algorithm, high-pass filter (HPF) fusion algorithm, modified intensity-hue-saturation (MIHS) fusion algorithm, principal component analysis (PCA) fusion algorithm, and the wavelet-PCA fusion algorithm. These algorithms were applied to eight different aerial and satellite images taken over the HPNP during the last five decades. Fused images were assessed for spectral and spatial fidelity using fifteen quantitative quality indicators and visual inspection methods. Spectral quality metrics included correlation coefficient, root-mean-square error (RMSE), relative difference to mean, relative difference to standard deviation, spectral discrepancy, deviation index, peak signal-to-noise ratio index, entropy, mean structural similarity index, spectral angle mapper, and relative dimensionless global error in synthesis. The spatial integrity of fused images was assessed using Canny edge correspondence, high-pass correlation coefficient, RMSE of Sobel-filtered edge images, and Fast Fourier Transform correlation. The wavelet-PCA algorithm exhibited the worst spatial improvement, while the Ehlers, MIHS, and PCA fusion algorithms showed mediocre results.
With respect to our multidimensional quality assessment, the HPF emerged as the best-performing algorithm for single-sensor single-date and multi-sensor multi-date data fusion. We further examined the effect of fusion in the object-based image analysis framework. Our subjective analysis showed the improvement of image object candidates when the panchromatic image's high-frequency information is injected into low-resolution multispectral images.
... However, in most cases, it is difficult to collect such information in the field due to security and access reasons [8,9]. With the availability of very high spatial resolution satellite imagery, remote sensing has become one of the most powerful tools for population estimation for humanitarian operations over the last decade [8,10-14]. Refugee-dwelling footprints are useful in population estimation [15,16]. ...
Article
Full-text available
Refugee-dwelling footprints derived from satellite imagery are beneficial for humanitarian operations. Recently, deep learning approaches have attracted much attention in this domain. However, most refugees are hosted by low- and middle-income countries, where accurate label data are often unavailable. The Object-Based Image Analysis (OBIA) approach has been widely applied to this task for humanitarian operations over the last decade. However, the footprints were usually produced urgently and thus include delineation errors. Thus far, no research discusses whether these footprints generated by the OBIA approach (OBIA labels) can replace manually annotated labels (Manual labels) for this task. This research compares the performance of OBIA labels and Manual labels under multiple strategies by semantic segmentation. The results reveal that the OBIA labels can produce IoU values greater than 0.5, which yields applicable results for humanitarian operations. Most falsely predicted pixels stem from the boundaries of built-up structures, the occlusion of trees, and structures with complicated ontology. In addition, we found that using a small number of Manual labels to fine-tune models initially trained with OBIA labels can outperform models trained with purely Manual labels. These findings show the high value of OBIA labels for deep-learning-based refugee-dwelling extraction tasks in future humanitarian operations.
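
The IoU threshold of 0.5 quoted above is a standard segmentation quality measure: the overlap between predicted and reference masks divided by their union. A minimal sketch for binary dwelling masks, assuming rasterized labels:

```python
import numpy as np

def iou(pred, target):
    """Intersection-over-Union for binary masks. The cited study treats
    IoU > 0.5 as usable for humanitarian mapping."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    # Two empty masks agree perfectly by convention.
    return inter / union if union else 1.0
```

Because IoU penalizes both false positives and false negatives symmetrically, boundary errors on small structures (the dominant error source reported above) depress it quickly.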
... Despite limitations due to shadowing, angle of incidence, or lack of topographic information under vegetation cover (Mostafa and Abdelhafiz, 2017; Quartulli and Olaizola, 2013; Sowmya and Trinder, 2000), archaeologists have made use of radar and satellite images to reveal differences in texture, roughness, moisture content, topography, and geometry of features and surfaces related to human activities (e.g., Evans et al., 2007; Fowler, 2002; Holcomb, 2001; Holcomb and Shingiray, 2006; Moore et al., 2006; Ricketson et al., 2003). More recently, satellite images have been used not only in relation to anthropogenic features in the context of urban studies (see Ghanea et al., 2016, for a review), for cadastral boundary extraction (Crommelinck et al., 2016; Wassie et al., 2017), and for oil spill detection (Brekke and Solberg, 2005), but also to identify target features in ongoing and post-humanitarian crisis scenarios (Witharana, 2012; Witharana and Civco, 2012). In analyses of this type, image analysis techniques such as wavelets, fusion algorithms, and texture analyses can enhance and localize patterns related to human activities, and they can be used to extract features of interest, including not only buildings but also shelters and war-, flood-, or earthquake-damaged structures. ...
Article
Full-text available
Human societies have been reshaping the geomorphology of landscapes for thousands of years, producing anthropogenic geomorphic features ranging from earthworks and reservoirs to settlements, roads, canals, ditches and plough furrows that have distinct characteristics compared with landforms produced by natural processes. Physical geographers have long recognized the widespread importance of these features in altering landforms and geomorphic processes, including hydrologic flows and stores, to processes of soil erosion and deposition. In many of the same landscapes, archaeologists have also utilized anthropogenic geomorphic features to detect and analyse human societal activities, including symbolic formations, agricultural systems, settlement patterns and trade networks. This paper provides a general framework aimed at integrating geophysical and archaeological approaches to observing, identifying and interpreting the full range of anthropogenic geomorphic features based on their structure and functioning, both individually and as components of landscape-scale management strategies by different societies, or “sociocultural fingerprints”. We then couple this framework with new algorithms developed to detect anthropogenic geomorphic features using precisely detailed three-dimensional reconstructions of landscape surface structure derived from LiDAR and computer vision photogrammetry. Human societies are now transforming the geomorphology of landscapes at increasing rates and scales across the globe. To understand the causes and consequences of these transformations and contribute to building sustainable futures, the science of physical geography must advance towards empirical and theoretical frameworks that integrate the natural and sociocultural forces that are now the main shapers of Earth’s surface processes.
... The disaster management cycle consists of four overlapping stages, namely mitigation, preparedness, response, and recovery. The use of ICT is more important in the response phase than in the others, where the main concern is supporting the affected population [7] [8]. The proposed tool will be useful in the response and recovery phases of the disaster management cycle. ...
Article
Flooding is a major natural hazard that occurs recurrently in Sri Lanka. Allocating victims to camps and providing medical facilities are two main activities in the immediate response phase of a flood, and the use of manual methods delays this process. This project developed a geo-enabled application to support immediate response planning, mainly focusing on allocating victims to IDP camps, providing medical facilities, and supporting access while avoiding already blocked roads, based on the administrative divisions of the affected area. Capacities and facilities in camps and hospitals are matched against the needs of the victims. The tool identifies blocked roads and alternative routes to reach resource centers, camps, and hospitals, and provides navigation guidance. It can be used after a flood disaster, assuming basic demographic data and current flood-affected area data are available. The tool is developed as a plug-in for QGIS, a free and open-source desktop Geographic Information System software. It was verified with sample data for the 'Kaluthara' area and is intended to be integrated with the InaSAFE disaster response support tool at a later stage.
... Fusion results were introduced to a series of quality metrics along with detailed visual inspection procedures to evaluate the spectral and spatial fidelity of fused products compared to their original MS and PAN images. We encourage readers to refer to our previous works (Witharana, 2012; Witharana and Civco, 2012; Witharana et al., 2013) for more details on candidate fusion algorithms, quality metrics and their implementation, and the major steps involved in the fusion evaluation workflows. ...
Article
This paper is an exploratory study that aimed to discover the synergies of data fusion and image segmentation in the context of EO-based rapid mapping workflows. Our approach was built on the geographic object-based image analysis (GEOBIA) framework, focusing on multiscale, internally displaced persons' (IDP) camp information extraction from very high spatial resolution (VHSR) images. We applied twelve pansharpening algorithms to two subsets of a GeoEye-1 image scene that was taken over a former war-induced ephemeral settlement in Sri Lanka. A multidimensional assessment was employed to benchmark the pansharpening algorithms with respect to their spectral and spatial fidelity. The multiresolution segmentation (MRS) algorithm of the eCognition Developer software served as the key algorithm in the segmentation process. The first study site was used for comparing segmentation results produced from the twelve fused products at a series of scale, shape, and compactness settings of the MRS algorithm. The segmentation quality and optimum parameter settings of the MRS algorithm were estimated using empirical discrepancy measures. Non-parametric statistical tests were used to compare the quality of image object candidates derived from the twelve pansharpened products. A wall-to-wall classification was performed based on a support vector machine (SVM) classifier to classify image object candidates of the fused images. The second site simulated a more realistic crisis information extraction scenario in which domain expertise is crucial in segmentation and classification. We compared segmentation and classification results of the original (non-fused) images and the twelve fused images to understand the efficacy of data fusion. We have shown that GEOBIA has the ability to create meaningful image objects during the segmentation process by compensating for the fused image's spectral distortions with the high-frequency information content injected during fusion.
Our findings further questioned the necessity of the data fusion step in the rapid mapping context. Bypassing time-intensive data fusion helps to accelerate EO-based rapid mapping workflows. We, however, emphasize that data fusion is not limited to VHSR image data but extends over many different combinations of multi-date, multi-sensor EO data. Thus, further research is needed to understand the synergies of data fusion and image segmentation with respect to multi-date, multi-sensor fusion scenarios and to extrapolate our findings to other remote sensing application domains beyond EO-based crisis information retrieval.
Article
Full-text available
We compared global-scale pansharpening of full- or large-scene QuickBird satellite imagery with local-scale pansharpening of a subset of these scenes using modified intensity-hue-saturation (MIHS), principal component analysis (PCA), multiplicative (MP), and high-pass filter (HPF) methods. The spectral properties of all pansharpened images were evaluated using quantitative indices such as erreur relative globale adimensionnelle de synthèse (ERGAS), correlation coefficient (CC), relative difference to mean (RDM), and relative difference to standard deviation (RDS). This study found that local-scale pansharpening generally yielded lower ERGAS and higher CC values than the global-scale approach. In particular, local-scale HPF produced pansharpening results very close to the original multispectral image, with less than 0.18 percent RDM and 0.07 percent RDS. Local-scale fusion results with PCA and MP were similar to those of the global-scale approach. Local-scale pansharpening of very high spatial resolution imagery could enable rapid assessment of the magnitude and severity of humanitarian situations by producing a high-spatial-resolution color image with reduced processing time.
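
The RDM and RDS indices quoted above (under 0.18 and 0.07 percent for local-scale HPF) are simple first- and second-moment comparisons between a fused band and the original MS band. The following is a hedged reading of the metric names (exact definitions vary slightly across papers):

```python
import numpy as np

def rdm_rds(ms_band, fused_band):
    """Relative difference to mean (RDM) and to standard deviation (RDS),
    both in percent; 0 means the fused band's global statistics match the
    original MS band exactly."""
    ms_band = ms_band.astype(float)
    fused_band = fused_band.astype(float)
    rdm = 100.0 * abs(fused_band.mean() - ms_band.mean()) / ms_band.mean()
    rds = 100.0 * abs(fused_band.std() - ms_band.std()) / ms_band.std()
    return rdm, rds
```

Because both indices summarize whole-band statistics, they are cheap enough to run over full scenes, which suits the rapid-assessment use case described above.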
Article
Full-text available
Pansharpening is a pixel-level fusion technique used to increase the spatial resolution of a multispectral image using spatial information from the high-resolution panchromatic image, while preserving the spectral information in the multispectral image. Various pansharpening algorithms are available in the literature, and some have been incorporated in commercial remote sensing software packages such as ERDAS Imagine® and ENVI®. The demand for imagery with high spatial and spectral resolution in applications such as change analysis, environmental monitoring, cartography, and geology is increasing rapidly. Pansharpening is used extensively to generate images with high spatial and spectral resolution. The suitability of these images for various applications depends on the spectral and spatial quality of the pansharpened images. Hence, evaluating the spectral and spatial quality of pansharpened images using objective quality metrics is a necessity. In this work, quantitative metrics for evaluating the quality of pansharpened images are presented. A performance comparison, using intensity-hue-saturation (IHS)-based sharpening, Brovey sharpening, principal component analysis (PCA)-based sharpening, and a wavelet-based sharpening method, is made to better quantify their accuracies.
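
Of the algorithms compared above, the Brovey transform is the simplest to state: each MS band is rescaled by the ratio of PAN to the MS intensity. A minimal sketch assuming co-registered, equally sized arrays (real workflows first resample the MS image to the PAN grid):

```python
import numpy as np

def brovey(ms, pan):
    """Brovey transform pansharpening.

    ms:  (H, W, bands) multispectral image, resampled to PAN resolution.
    pan: (H, W) panchromatic image.
    Intensity is taken as the unweighted band mean, a common simplification;
    sensor-specific band weights are often used instead."""
    intensity = ms.mean(axis=-1)
    ratio = pan / np.maximum(intensity, 1e-12)  # guard against zero intensity
    return ms * ratio[..., None]
```

Because every band is multiplied by the same ratio, the fused bands' mean reproduces PAN exactly, which is precisely why Brovey scores well spatially but is notorious for the colour distortion that the spectral metrics in this literature penalize.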
Article
Full-text available
The main objective of this article is the quality assessment of pansharpening fusion methods. Pansharpening is a fusion technique that combines a panchromatic image of high spatial resolution with multispectral image data of lower spatial resolution to obtain a high-resolution multispectral image. During this process, the significant spectral characteristics of the multispectral data should be preserved. For images acquired at the same time by the same sensor, most algorithms for pansharpening provide very good results, i.e. they retain the high spatial resolution of the panchromatic image and the spectral information from the multispectral image (single-sensor, single-date fusion). For multi-date, multi-sensor fusion, however, these techniques can still create spatially enhanced data sets, but usually at the expense of spectral consistency. In this study, eight different methods are compared for image fusion to show their ability to fuse multitemporal and multi-sensor image data. A series of eight multitemporal multispectral remote sensing images is fused with a panchromatic Ikonos image and a TerraSAR-X radar image as a panchromatic substitute. The fused images are visually and quantitatively analysed for spectral characteristics preservation and spatial improvement. Not only is the Ehlers fusion shown to be superior to all other tested algorithms, it is also the only method that guarantees excellent colour preservation for all dates and sensors used in this study.
Article
Full-text available
Image fusion is a technique that is used to combine the spatial structure of a high-resolution panchromatic image with the spectral information of a low-resolution multispectral image to produce a high-resolution multispectral image. Currently, image fusion techniques via color or statistical transforms such as the Intensity-Hue-Saturation (IHS) and principal component (PC) methods are still widely used. These methods create multispectral images of higher spatial resolution but usually at the cost of color distortions in the fused images. This is especially true if the wavelength range of the panchromatic image does not correspond to that of the employed multispectral bands or for multitemporal/multisensoral fusion. To overcome the color distortion problem, a number of new fusion methods have been developed over the last years. One of these is the Ehlers fusion algorithm, which is based on an IHS transform coupled with adaptive filtering in the Fourier domain. This method preserves the spectral characteristics of the lower spatial resolution multispectral images for single-sensor, multi-sensor, and multi-temporal fusion. A comparison between this method and three sophisticated new fusion techniques that are available in commercial image processing software is presented in this paper using multitemporal multi-sensor fusion with SPOT multispectral and Ikonos panchromatic datasets as well as single-sensor single-date multispectral and panchromatic Quickbird data. The fused images are compared visually and with statistical methods that are objective, reproducible, and quantitative. It can be shown that the sophisticated methods such as Gram Schmidt fusion, CN spectral sharpening, and the modified IHS provide good results in color preservation for single sensor fusion. For multi-temporal multi-sensor fusion, however, these methods produce significant changes in spectral characteristics for the fused datasets. 
This is not the case for the Ehlers fusion algorithm, which shows no recognizable color distortion even for multi-temporal and multi-sensor datasets.
Article
Full-text available
Existing image fusion techniques such as the intensity–hue–saturation (IHS) transform and principal components analysis (PCA) methods may not be optimal for fusing the new generation commercial high-resolution satellite images such as Ikonos and QuickBird. One problem is color distortion in the fused image, which causes visual changes as well as spectral differences between the original and fused images. In this paper, a fast Fourier transform (FFT)-enhanced IHS method is developed for fusing new generation high-resolution satellite images. This method combines a standard IHS transform with FFT filtering of both the panchromatic image and the intensity component of the original multispectral image. Ikonos and QuickBird data are used to assess the FFT-enhanced IHS transform method. Experimental results indicate that the FFT-enhanced IHS transform method may improve upon the standard IHS transform and the PCA methods in preserving spectral and spatial information.
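
The core of the FFT-enhanced IHS method described above is a frequency-domain merge: keep the low frequencies of the MS intensity component and the high frequencies of PAN, then invert. The sketch below shows only that merge step with an ideal (hard-cutoff) circular mask; the published method uses tuned filter designs, so the mask shape and `cutoff` value here are illustrative assumptions.

```python
import numpy as np

def fft_merge(intensity, pan, cutoff=0.15):
    """Merge low frequencies of the MS intensity with high frequencies of PAN.

    intensity, pan: (H, W) arrays, co-registered and equally sized.
    cutoff: normalized radius of the ideal low-pass mask (assumed value).
    Because the two masks partition the spectrum, identical inputs are
    reconstructed exactly."""
    h, w = intensity.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    low = np.sqrt(fy ** 2 + fx ** 2) <= cutoff   # boolean low-pass mask
    fused = np.fft.fft2(intensity) * low + np.fft.fft2(pan) * ~low
    return np.real(np.fft.ifft2(fused))
```

In the full algorithm this merged component replaces the intensity channel of an IHS transform before the inverse transform back to RGB, which is what preserves the MS colour while importing PAN detail.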
Article
Full-text available
This paper presents a multi-scale solution based on mathematical morphology for extracting the building features from remotely sensed elevation and spectral data. Elevation data are used as the primary data to delineate the structural information and are firstly represented on a morphological scale-space. The behaviors of elevation clusters across the scale-space are the cues for feature extraction. As a result, a complex structure can be extracted as a multi-part object in which each part is represented on a scale depending on its size. The building footprint is represented by the boundary of the largest part. Other object attributes include the area, height or number of stories. The spectral data is used as an additional source to remove vegetation and possibly classify the building roof material. Finally, the results can be stored in a multi-scale database introduced in this paper. The proposed solution is demonstrated using the data derived from a Light Detection And Ranging (LiDAR) surveying flight over Tokyo, Japan. The results show a reasonable match with reference data and prove the capability of the proposed approach in accommodation of diverse building shapes. Higher density LiDAR is expected to produce better accuracy in extraction, and more spectral sources are necessary for further classification of building roof material. It is also recommended that parallel processing should be implemented to reduce the computation time.
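
The morphological scale-space idea above can be illustrated with grayscale openings at increasing window sizes: a structure narrower than the structuring element is flattened by the opening, so it appears in the top-hat residue at the scale matching its footprint. A plain-NumPy sketch with flat square structuring elements (the cited work's actual scale-space construction is more elaborate):

```python
import numpy as np

def gray_open(img, k):
    """Grayscale opening (erosion then dilation) with a k x k flat
    structuring element, via sliding-window min/max."""
    def slide(a, fn):
        p = np.pad(a, k // 2, mode='edge')
        stack = [p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
                 for dy in range(k) for dx in range(k)]
        return fn(np.stack(stack), axis=0)
    return slide(slide(img.astype(float), np.min), np.max)

def morph_scale_space(dsm, scales=(3, 5, 9)):
    """Top-hat residues of an elevation model at increasing scales:
    a building-sized bump survives openings smaller than its footprint
    and shows up in the residue once the window exceeds it."""
    return {k: dsm - gray_open(dsm, k) for k in scales}
```

A 3x3 bump, for instance, is untouched by a 3x3 opening but fully removed (hence fully present in the residue) at 5x5, which is the behaviour the multi-part building extraction relies on.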
Article
Full-text available
This article aims to explain the ARSIS concept. Given two image sets A and B, one with a high spatial resolution and the other with a low spatial resolution and different spectral bands, the ARSIS concept permits the synthesis of set B at the resolution of A, as close as possible to reality. It is based on the assumption that the missing information is linked to the high frequencies in sets A and B. It seeks a relationship between the high frequencies in the multispectral set B and the set A and models this relationship. The general problem of synthesis is presented first, and the general properties of the fused product are given. Then the ARSIS concept is discussed, and the general scheme for implementing a method belonging to this concept is presented. The article then aims to help practitioners and researchers better understand the concept through practical details about implementations. Two multiscale models are described, as well as two Inter-Band Structure Models (IBSM). They are applied to an Ikonos image as an illustration case. The fused products are assessed by means of a known protocol comprising a series of qualitative and quantitative tests and are found to be of satisfactory quality. This case illustrates the differences between the various models, their advantages, and their limits. Tracks for future improvements are discussed.
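The core ARSIS assumption, that the missing information lies in the high frequencies of set A, can be illustrated with a crude single-scale sketch: a box filter stands in for the multiscale model, and a per-band gain stands in for the inter-band structure model. All names and parameter choices here are illustrative assumptions, not the article's models.

```python
import numpy as np

def box_smooth(img, k=5):
    """Crude low-pass: k x k box filter (stand-in for a multiscale model)."""
    p = k // 2
    padded = np.pad(img, p, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for i in range(k):
        for j in range(k):
            out += padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out / (k * k)

def arsis_like_fusion(ms, pan, k=5):
    """Inject pan's high frequencies into each MS band, scaled by a
    per-band gain (a minimal stand-in for an inter-band structure model)."""
    detail = pan - box_smooth(pan, k)
    fused = np.empty_like(ms, dtype=float)
    for b in range(ms.shape[2]):
        band = ms[..., b]
        gain = band.std() / (pan.std() + 1e-12)  # simple IBSM stand-in
        fused[..., b] = band + gain * detail
    return fused
```

When pan carries no high-frequency detail, nothing is injected and the multispectral set is returned unchanged, consistent with the concept's stated properties of the fused product.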
Article
Full-text available
Remote sensing imagery needs to be converted into tangible information which can be utilised in conjunction with other data sets, often within widely used Geographic Information Systems (GIS). As long as pixel sizes remained typically coarser than, or at best similar in size to, the objects of interest, emphasis was placed on per-pixel analysis, or even sub-pixel analysis, for this conversion; with increasing spatial resolutions, alternative paths have been followed, aimed at deriving objects that are made up of several pixels. This paper gives an overview of the development of object based methods, which aim to delineate readily usable objects from imagery while at the same time combining image processing and GIS functionalities in order to utilise spectral and contextual information in an integrative way. The most common approach used for building objects is image segmentation, which dates back to the 1970s. Around the year 2000, GIS and image processing started to grow together rapidly through object based image analysis (OBIA - or GEOBIA for geospatial object based image analysis). In contrast to typical Landsat resolutions, high resolution images support several scales within the same image. Through a comprehensive literature review, several thousand abstracts have been screened, and more than 820 OBIA-related articles, comprising 145 journal papers, 84 book chapters and nearly 600 conference papers, are analysed in detail. It becomes evident that the first years of the OBIA/GEOBIA developments were characterised by the dominance of 'grey' literature, but that the number of peer-reviewed journal articles has increased sharply over the last four to five years. The pixel paradigm is beginning to show cracks, and the OBIA methods are making considerable progress towards a spatially explicit information extraction workflow, such as is required for spatial planning as well as for many monitoring programmes.
Article
Full-text available
In January 2006, the Data Fusion Committee of the IEEE Geoscience and Remote Sensing Society launched a public contest for pansharpening algorithms, which aimed to identify the ones that perform best. Seven research groups worldwide participated in the contest, testing eight algorithms following different philosophies [component substitution, multiresolution analysis (MRA), detail injection, etc.]. Several complete data sets from two different sensors, namely, QuickBird and simulated Pléiades, were delivered to all participants. The fusion results were collected and evaluated, both visually and objectively. Quantitative results of pansharpening were possible owing to the availability of reference originals obtained either by simulating the data collected from the satellite sensor by means of higher resolution data from an airborne platform, in the case of the Pléiades data, or by first degrading all the available data to a coarser resolution and saving the original as the reference, in the case of the QuickBird data. The evaluation results were presented during the special session on Data Fusion at the 2006 International Geoscience and Remote Sensing Symposium in Denver, and these are discussed in further detail in this paper. Two algorithms outperform all the others, the visual analysis being confirmed by the quantitative evaluation. These two methods share the same philosophy: they basically rely on MRA and employ adaptive models for the injection of high-pass details.
Article
Full-text available
This paper discusses the need for a concept and for harmonized terms of reference in data fusion. Previously published definitions are analyzed, and a new definition of data fusion, established within a European working group, is proposed. Several definitions and terms of reference are given which describe the information involved in any data fusion problem.
Article
The term “image fusion” covers multiple techniques used to combine the geometric detail of a high-resolution panchromatic image and the color information of a low-resolution multispectral image to produce a final image with the highest possible spatial information content while still preserving good spectral information quality. During the last twenty years, many methods such as Principal Component Analysis (PCA), the Multiplicative Transform, the Brovey Transform, and the IHS Transform have been developed, producing good-quality fused images. Despite the quite good visual results, many research papers have reported the limitations of these fusion techniques. The most significant problem is color distortion. Another common problem is that the fusion quality often depends upon the operator’s fusion experience and upon the data set being fused. In this study, we compare the efficiency of nine fusion techniques, specifically IHS, Modified IHS, PCA, Pansharp, Wavelet, LMM (Local Mean Matching), LMVM (Local Mean and Variance Matching), Brovey, and Multiplicative fusion, for the fusion of QuickBird data. The suitability of these fusion techniques for various applications depends on the spectral and spatial quality of the fused images. To quantitatively measure the quality of the fused images, we performed the following checks. First, we examined the qualitative visual result. Then we examined the correlation between the original multispectral and the fused images, along with all the statistical parameters of the histograms of the various frequency bands. Finally, we performed an unsupervised classification and compared the resulting images. All the fusion techniques improve the resolution and the visual result. The resampling method has practically no effect on the final visual result. The LMVM, LMM, Pansharp, and Wavelet merging techniques do not change the statistical parameters of the original images. The Modified IHS produces smaller changes to the statistical parameters than the classical IHS or PCA. After all the checks, the LMVM, LMM, Pansharp, and Modified IHS algorithms appear to offer the most advantages for fusing panchromatic and multispectral data.
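For reference, the Brovey transform discussed above is compact enough to state directly: each band is rescaled by the ratio of the pan band to the band average. A minimal sketch; the `eps` guard against division by zero is my addition.

```python
import numpy as np

def brovey(ms, pan, eps=1e-12):
    """Brovey transform: multiply each MS band by the pan/intensity ratio.
    The intensity is the per-pixel mean of the bands. Sharp results, but
    this band ratioing is a known source of the color distortion that
    comparative studies report for the method."""
    intensity = ms.mean(axis=2)
    ratio = pan / (intensity + eps)
    return ms * ratio[..., None]
```

Note the spectral behavior: wherever pan equals the band average, the ratio is 1 and the pixel is unchanged; elsewhere all bands are scaled by the same factor, which alters radiometry while preserving band ratios.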
Article
Pixel-level image fusion combines complementary image data, most commonly low spectral-high spatial resolution data with high spectral-low spatial resolution optical data. The presented study aims at refining and improving the High-Pass Filter Additive (HPFA) fusion method towards a tunable and versatile, yet standardized image fusion tool. HPFA is an image fusion method in the spatial domain, which inserts structural and textural details of the higher resolution image into the lower resolution image, whose spectral properties are thereby largely retained. Using various input image pairs, workable sets of HPFA parameters have been derived with regard to high-pass filter properties and injection weights. Improvements are the standardization of the HPFA parameters over a wide range of image resolution ratios and the controlled trade-off between resulting image sharpness and spectral properties. The results are evaluated visually and by spectral and spatial metrics in comparison with wavelet-based image fusion results as a benchmark.
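The HPFA scheme can be sketched in a few lines: low-pass the pan band, take the residual as the high-pass detail, and add it to each multispectral band with an injection weight. The box kernel, its size, and the weight value below are illustrative placeholders, not the tuned parameter sets the study derives.

```python
import numpy as np

def hpfa(ms, pan, k=5, weight=0.5):
    """High-Pass Filter Additive fusion sketch: subtract a k x k box-filtered
    version of pan to isolate structural/textural detail, then add the
    weighted detail to every MS band, largely retaining band spectra."""
    p = k // 2
    padded = np.pad(pan, p, mode='edge')
    low = np.zeros_like(pan, dtype=float)
    for i in range(k):
        for j in range(k):
            low += padded[i:i + pan.shape[0], j:j + pan.shape[1]]
    low /= k * k
    highpass = pan - low
    return ms + weight * highpass[..., None]
```

The two knobs match the trade-off the abstract describes: a larger kernel admits coarser detail into the result, and a larger weight sharpens the image at the cost of spectral fidelity.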
Article
During humanitarian crises, when population figures are often urgently required but very difficult to obtain, remote sensing is able to provide evidence of both present and past population numbers. This research, conducted on QuickBird time-series imagery of the Zam Zam internally displaced person (IDP) camp in Northern Darfur, investigates automated analysis of the camp's evolution between 2002 and 2008, including delineation of the camp's outlines and inner structure, rule-based extraction of two categories of dwelling units, and derivation of population estimates for the time of image capture. Reference figures for dwelling occupancy were obtained from estimates made by aid agencies. Although validation of such 'on-demand' census techniques is still continuing, the benefits of a fast, efficient and objective information source are obvious. Spatial, as well as thematic, accuracy was, in this instance, assessed against visual interpretation of eight 200 m × 200 m grid cells, and accuracy statistics were calculated. Total user's and producer's accuracy rates ranged from 71.6% up to 94.9%. While achieving promising results with respect to accuracy, transferability and usability, the remaining limitations of automated population estimation in dynamic crisis situations will provide a stimulus for future research.
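The user's and producer's accuracy rates reported above are the row- and column-normalized diagonals of a confusion matrix. A minimal sketch; the matrix orientation (rows = mapped class, columns = reference class) is an assumption of this illustration.

```python
import numpy as np

def users_producers_accuracy(cm):
    """cm[i, j]: count of cells mapped to class i that belong to class j
    in the reference data. User's accuracy (commission side) is computed
    per mapped class (rows); producer's accuracy (omission side) is
    computed per reference class (columns)."""
    cm = np.asarray(cm, dtype=float)
    diag = np.diag(cm)
    users = diag / cm.sum(axis=1)
    producers = diag / cm.sum(axis=0)
    return users, producers
```

For example, with 90 of 100 mapped dwellings confirmed by visual interpretation, user's accuracy for that class is 90%, regardless of how many reference dwellings were missed.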
Article
Various fusion methods have been developed for improving data spatial resolution. The methods most often encountered in the literature are the intensity-hue-saturation (IHS) transform, the Brovey transform, the principal components analysis (PCA) fusion method, the Gram-Schmidt fusion method, the local mean matching method, the local mean and variance matching method, the least-squares fusion method, the discrete wavelet fusion method (including Daubechies, Symlet, Coiflet, biorthogonal spline, reverse biorthogonal spline, and Meyer wavelets), the wavelet-PCA fusion method, and the hybrid IHS and wavelet fusion method. Using various evaluation indicators, such as two-dimensional correlation, relative difference of means, relative variation, deviation index, entropy difference, the peak signal-to-noise ratio index and the universal image quality index, as well as photo-interpretation methods and techniques, results of the above fusion methods were compared, and comments were made on the fusion methods and on the potential of the evaluation indicators. Among the data fusion methods and indicators, the local mean and variance matching method proved the most efficient, and the peak signal-to-noise ratio indicator proved the most appropriate for evaluating data fusion results.
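Of the indicators listed, the peak signal-to-noise ratio, which this comparison found the most appropriate, is straightforward to compute per band against a reference image. A sketch; the default 8-bit peak value is an assumption.

```python
import numpy as np

def psnr(reference, fused, peak=255.0):
    """Peak signal-to-noise ratio between a reference band and a fused band,
    in decibels. Higher values indicate lower spectral distortion; identical
    images give infinite PSNR."""
    mse = np.mean((reference.astype(float) - fused.astype(float)) ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10(peak ** 2 / mse)
```

As a quick calibration of the scale: a uniform error equal to the full 8-bit range yields 0 dB, while typical pansharpening results on degraded-data tests land well above that.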
Article
In this paper, multivariate regression is adopted to improve spectral quality, without diminishing spatial quality, in image fusion methods based on the well-established component substitution (CS) approach. A general scheme that is capable of modeling any CS image fusion method is presented and discussed. According to this scheme, a generalized intensity component is defined as the weighted average of the multispectral (MS) bands. The weights are obtained as regression coefficients between the MS bands and the spatially degraded panchromatic (Pan) image, with the aim of capturing the spectral responses of the sensors. Once it has been integrated into the Gram-Schmidt spectral-sharpening method, which is implemented in the Environment for Visualizing Images (ENVI) software, and into the generalized intensity-hue-saturation fusion method, the proposed preprocessing module allows the production of fused images of the same spatial sharpness but of increased spectral quality with respect to the standard implementations. In addition, quantitative scores carried out on spatially degraded data clearly confirm the superiority of the enhanced methods over their baselines.
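The generalized intensity described above, a weighted average of the MS bands with weights fit by regression against the degraded Pan image, can be sketched with an ordinary least-squares fit. This is an illustration of the weight-estimation step only, with names of my choosing, not the paper's full fusion pipeline.

```python
import numpy as np

def intensity_weights(ms, pan_degraded):
    """Least-squares weights so that the weighted sum of MS bands best
    matches the spatially degraded Pan image, approximating the relative
    spectral responses of the sensors."""
    h, w, b = ms.shape
    X = ms.reshape(-1, b)          # one row per pixel, one column per band
    y = pan_degraded.ravel()
    wts, *_ = np.linalg.lstsq(X, y, rcond=None)
    return wts

def generalized_intensity(ms, weights):
    """Weighted average of the MS bands: the CS scheme's intensity component."""
    return np.tensordot(ms, weights, axes=([2], [0]))
```

If the Pan response really is a fixed linear mix of the bands, the regression recovers those mixing coefficients exactly, which is the premise behind the spectral-quality gains the paper reports.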