Fusion of Reconstructed Multispectral Images
Valery Starovoitov
Institute of Computer Science
University of Bialystok
Bialystok, Poland
United Institute of Informatics Problems
Minsk, Belarus
valerys@newman.bas-net.by
Aliaksei Makarau
United Institute of Informatics Problems
Minsk, Belarus
makarau@newman.bas-net.by
Igor Zakharov and Dmitry Dovnar
Institute of Technology of Metals
Mogilev, Belarus
zakharov@ieee.org, dovnar@inbox.ru
Abstract—A new technique for fast fusion of multiresolution satellite images with minimal colour distortion is presented in this paper. The technique makes it possible to reconstruct multispectral images with a resolution higher than that of the panchromatic image. A combination of image super-resolution restoration and image fusion based on global regression was applied. Super-resolution image restoration is based on simultaneous processing of several multispectral images to reconstruct a panchromatic image with higher resolution. This method is quasi-optimal in the sense of the minimum squared error of image restoration.

Keywords—Image Fusion, Image Restoration, Super-resolution
I. INTRODUCTION
Image fusion, or pan-sharpening, is performed in order to increase the resolution of multispectral images by utilizing the information of the panchromatic image. The resolution of the sharpened images is limited by the resolution of the panchromatic image. For example, Landsat 7 ETM+ provides panchromatic images with 15 m resolution and multispectral images with 30 m resolution; therefore, the resolution of the sharpened multispectral images is two times higher than the original one. The Brovey transform [1] and IHS fusion [2] methods are point-type methods, fast and applicable to images of large size. PCA- and wavelet-based fusion methods [3] are of global and local operator type and computationally expensive. In satellites launched after 1999, the sensitivity range of the panchromatic sensor is usually extended to cover the near-infrared range. This is done to increase the resolution of the registered panchromatic image, but it makes previously developed fusion methods cause significant color distortion [4].
We develop a fundamentally new technique for multispectral image fusion that achieves a resolution higher than that of the panchromatic image while minimizing color composite distortion. Two tasks have to be solved in order to achieve the higher resolution: panchromatic image reconstruction (super-resolution) and multispectral image fusion.
To solve the first task, one can reconstruct a panchromatic image from color images by a demosaicing algorithm [5] or by super-resolution reconstruction [6]. Our solution is based on a method which is quasi-optimal with respect to the minimum squared error (MSE) of image restoration [7]. This method allows a theoretical quality evaluation and super-resolution restoration from several images. The solution of the second task may be based on an algorithm of multiresolution image fusion. We select a fusion algorithm based on linear regression. The algorithm is of point type (i.e. fast) and provides minimal color composite distortion.
The restored and the original panchromatic images were compared using the structural similarity index (SSI) from [8]. The quality of the enlarged images was evaluated by the Euclidean norm (L2) of the histogram difference between the original and the fused images, correlation, RMSE, and the square error of the difference image (SEDI).
II. IMAGE RESTORATION METHOD
The optimal solution of Fredholm's integral equation of the first kind may be used to increase image resolution [7]. Note that the images are registered by a focal plane array (FPA).
A. Model of Image Formation
Information about the optical properties of the original ideal image Z(ξ, η, λ) is transmitted through an optical system, which is characterized by a point-spread function (PSF) K(x, y, ξ, η, λ). The optical system projects an image f(x, y, λ) onto the FPA. This process can be described by a linear equation

f(x, y, λ) = ∫_{-S_1}^{S_1} ∫_{-S_2}^{S_2} Z(ξ, η, λ) K(x, y, ξ, η, λ) dξ dη,   (1)

where S_1, S_2 are the integration limits, (ξ, η) are the coordinates of a point in the plane of the image Z(ξ, η, λ), λ is the wavelength of light, and (x, y) are the coordinates of a point in the plane of the registered image f(x, y, λ). The images with different spectral bands and light wavelengths in (1) are labeled by f(x, y, λ_p, p), where p = 0, 1, ..., P and λ = λ_0, λ_1, ..., λ_P.
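The registration model (1) can be illustrated with a small discrete sketch, in which the integral becomes a sum of the scene against a shifted PSF. The Gaussian PSF and the 16x16 point-source scene below are hypothetical stand-ins for a real sensor, not parameters from the paper:

```python
import numpy as np

# Discrete analogue of the registration model (1): the registered image f
# is the ideal scene Z integrated against the PSF K. The Gaussian PSF and
# the point-source scene are hypothetical stand-ins for a real sensor.

def gaussian_psf(size=5, sigma=1.0):
    """Normalized Gaussian kernel standing in for the PSF K."""
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax ** 2) / (2.0 * sigma ** 2))
    k = np.outer(g, g)
    return k / k.sum()

def register_image(Z, psf):
    """f(x, y) = sum over (xi, eta) of Z(xi, eta) * K(x, y, xi, eta)."""
    pad = psf.shape[0] // 2
    Zp = np.pad(Z, pad, mode="edge")
    f = np.zeros_like(Z, dtype=float)
    for i in range(Z.shape[0]):
        for j in range(Z.shape[1]):
            f[i, j] = np.sum(Zp[i:i + psf.shape[0], j:j + psf.shape[1]] * psf)
    return f

Z = np.zeros((16, 16))
Z[8, 8] = 1.0                          # ideal point source
f = register_image(Z, gaussian_psf())  # blurred image seen by the FPA
```

A point source is spread out by the PSF while its total energy is preserved, which is the discrete counterpart of integrating Z against K in (1).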
The spectral photosensitivity function Sen(x, y, λ) of an FPA element can be non-uniform [9]. The fill factor of the FPA may be known from the manufacturer to describe this function. Registration of the ideal image Z(ξ, η, λ) can be described by Fredholm's integral equation. For the image f(i, j, p) registered by the FPA the equation is

f(i, j, p) = ∫_{-S_1}^{S_1} ∫_{-S_2}^{S_2} Z(ξ, η, λ_p) K_1(i, j, ξ, η, λ_p, p) dξ dη + γ(i, j, p) = F(i, j, p),   (2)

(The work was partly supported by INTAS project No. 06-1000024-9100.)
(IGARSS'2007, Barcelona, Spain.)
where

K_1(i, j, ξ, η, λ, p) = ∫∫_{A_{i,j}} Sen(x, y, λ) K(x, y, ξ, η, λ, p) dx dy,   (3)
F(i, j, p) is the signal registered by an FPA element (i, j) with area A_{i,j}, λ_p is the light wavelength for image p, and γ(i, j, p) is the error of the signal (additive noise). We can write

F(i, j, p) = ∫∫_{A_{i,j}} Sen(x, y, λ_p) f(x, y, λ_p, p) dx dy + γ(i, j, p).   (4)
Hence, the function K (the PSF) in (3) can be written as

K(x, y, ξ, η, λ, p) = { K_0(x, y, ξ, η, λ), p = 0;
                        K_1(x, y, ξ, η, λ), p = 1;
                        ...
                        K_P(x, y, ξ, η, λ), p = P. }   (5)
B. Solution of the Integral Equation
Solution of (1) is an ill-posed problem. According to the theory of regularization, small errors (noise) γ(i, j, p) can lead to a huge dispersion in the solution of the equation. It is possible to build an approximated solution with regularized properties. Let us consider an approximated stabilized solution. It is possible to find the average value over the set of squared errors of restoration. The solution can be represented as a decomposition over an arbitrary orthonormalized system of basis functions ψ_k(ξ, η, λ):
Z(ξ, η, λ) = Σ_{k=1} c_k ψ_k(ξ, η, λ),   |ξ| < S_1, |η| < S_2,   (6)

where c_k are the decomposition coefficients. The initial image Z and the noise are non-correlated.
In comparison with the Wiener method, the main advantage of the presented image restoration method is the calculation of the MSE for a small number of pixels in the initial image. This is very important for the reconstruction of an image registered by an FPA [10]. The values of the signal F(i, j, p) are used for the restoration Z*(ξ, η, λ) of the ideal continuous image Z. An algorithm described in [7] was adopted for this restoration. This algorithm is fast but, in comparison with blind image restoration methods (e.g. [11]), has a disadvantage: the PSF has to be known. The image restoration filter Q(i, j, ξ, η, p, β⃗) can be calculated by the equations in [10]. The filter is calculated using the stabilization parameters β⃗ = (β_1, β_2, ...) by the equation:
Q(i, j, ξ, η, p, β⃗) = Σ_{m=1}^{G} [Σ_{k=1}^{m} d_{mk}(β⃗) φ_k(i, j, p)] [Σ_{k=1}^{m} d_{mk}(β⃗) ψ_k(ξ, η)] / [β_m + (φ_m(β⃗), φ_m(β⃗))],   (7)
where (φ_k, φ_m) = Σ_{i=1}^{I} Σ_{j=1}^{J} φ_k(i, j) φ_m(i, j) are the scalar products of the images of the basis functions, φ_m(β⃗) = Σ_{k=1}^{m} d_{mk}(β⃗) φ_k, G is the number of functions in the object decomposition, I × J is the number of pixels, and d_{ln}(β⃗) are recurrently calculated coefficients:
d_{lm}(β⃗) = − d_{ll}(β⃗) Σ_{k=1}^{m} d_{mk}(β⃗)(φ_k, φ_l) / [β_m + Σ_{k=1}^{m} d_{mk}(β⃗)(φ_k, φ_m)];
l = 1, 2, ..., m−1;  m = 2, 3, ...;  d_{nn}(β⃗) = 1,  n = 1, 2, ... .   (8)
The summation is made over all sampling points of the image F(i, j, p) for the restoration of the original image:

Z*(ξ, η, λ) = Σ_{i,j,p} F(i, j, p) Q(i, j, ξ, η, p, β⃗).   (9)
Knowledge of the PSF (5) is required for restoration based on different spectral images.
Image restoration algorithm
Step 1. Enter the images F(i, j, p), p = 0, 1, ..., P. Enlarge the images by interpolation.
Step 2. Set the parameters of the FPA: the fill factor or Sen(x, y, λ) and the SNR; enter the values of the PSF K(x, y, ξ, η, λ_p, p) for every image.
Step 3. Calculate the filter Q(i, j, ξ, η, p, β⃗) and reconstruct the image Z* by (9).
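The three steps can be sketched in a one-dimensional toy setting. The measurements F are the ideal signal Z blurred by a known PSF, and restoration applies a precomputed linear filter Q to F, as in (9). Note that the quasi-optimal filter of [7] is replaced here by a Tikhonov-regularized pseudo-inverse, in which a single scalar beta stands in for the stabilization parameters β⃗; the PSF and signal are synthetic:

```python
import numpy as np

# 1-D toy version of Steps 1-3. Restoration is a linear filter applied to
# the measurements, as in Eq. (9). A Tikhonov-regularized pseudo-inverse
# replaces the paper's quasi-optimal filter; beta is the stabilizer.

def forward_matrix(n, psf):
    """Linear operator mapping the ideal signal to the registered one."""
    A = np.zeros((n, n))
    half = len(psf) // 2
    for i in range(n):
        for k, w in enumerate(psf):
            j = i + k - half
            if 0 <= j < n:
                A[i, j] = w
    return A

def restoration_filter(A, beta):
    """Q = (A^T A + beta*I)^(-1) A^T, a stabilized inverse of A."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + beta * np.eye(n), A.T)

n = 32
psf = np.array([0.25, 0.5, 0.25])        # hypothetical 1-D PSF
A = forward_matrix(n, psf)
Z = np.sin(2.0 * np.pi * np.arange(1, n + 1) / (n + 1))  # ideal signal
F = A @ Z                                 # registered measurements
Q = restoration_filter(A, beta=1e-3)
Z_star = Q @ F                            # Eq. (9): weighted sum of F
```

For a smooth signal and small beta the filter nearly inverts the blur; larger beta trades fidelity for stability against noise in F, which is the role played by the stabilization parameters in (7) and (8).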
III. MULTISPECTRAL IMAGE FUSION USING GLOBAL REGRESSION
Hill et al. proposed an algorithm for multiresolution image fusion [12]. The algorithm calculates a local regression in a sliding window of size n×n (n = 5 in [12]) between the degraded (Pan_deg) and scaled (Pan_low) panchromatic image and the spectral image. The local regression between the image Pan_low and the spectral image Mult_j is calculated by

Mult_j = A_j + B_j · Pan_low + E_j,   (10)

where A_j, B_j are the matrices of local regression parameters for the j-th spectral image, E_j is the matrix of residuals, and Pan_low is the degraded and scaled-down panchromatic image. The spectral image Mult_j and the matrix B_j are resized to the size of the panchromatic image by interpolation (denote the results as Mult_j^h and B_j^h). The spectral image with high resolution is calculated by

Mult_j^high = Mult_j^h + B_j^h · (Pan − Pan_deg),   (11)

where Mult_j^high is the high-resolution spectral image. This is a local-type algorithm and computationally expensive. Keeping the matrix B_j^h, whose size is equal to the size of the panchromatic image, is costly in memory.
The distribution of the local regression parameters B_j of Landsat 7 ETM+ images was analyzed. The median values of B_j for all spectral images and the coefficients of the global linear regression are very close. Since the median values are close to the coefficients of the global regression, the global regression may be applied for multiresolution image fusion.
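Fitting the single global pair (a_j, b_j) per band amounts to an ordinary least-squares fit over all pixels. The sketch below illustrates this on synthetic stand-ins for Pan_low and Mult_j, with np.polyfit standing in for the regression machinery:

```python
import numpy as np

# Global regression sketch: one (a_j, b_j) pair fitted over all pixels of
# the band, instead of per-window matrices A_j, B_j. Data are synthetic.

def global_regression(mult_band, pan_low):
    """Least-squares fit Mult_j ~ a_j + b_j * Pan_low over all pixels."""
    b, a = np.polyfit(pan_low.ravel(), mult_band.ravel(), 1)
    return a, b                       # intercept a_j, slope b_j

rng = np.random.default_rng(0)
pan_low = rng.uniform(0.0, 255.0, (100, 100))
# Synthetic band: linearly related to Pan_low plus noise.
mult_j = 0.5 * pan_low + 10.0 + rng.normal(0.0, 1.0, pan_low.shape)
a_j, b_j = global_regression(mult_j, pan_low)
```

Because only two scalars per band are kept, the memory cost of storing the full-size matrix B_j^h of the local method disappears.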
Algorithm for multiresolution image fusion based on global regression
Input data: Mult_j is the spectral image, Nir is the near-infrared image, Pan is the panchromatic image.
Step 1. For the Mult_j and Pan images, assign zero values to the pixels representing cloud, water and shadow areas. The mask can be calculated by the following formulas:

Mask(1..M, 1..N) = 1;  then Mask(B > T_B) = 0,  Mask(Nir < T_Nir) = 0,   (12)

where M, N is the size of the image, B is the spectral image of the blue color range, Mask is the mask with zero pixels representing water, shadow and cloud areas, and T_B and T_Nir are the thresholds for the B and Nir images. The thresholds T_B and T_Nir were determined experimentally (200 and 40 for Landsat 7 ETM+).
Step 2. Degrade the Pan image by a low-pass filter (3×3 average filtering, for example); denote the result as Pan_deg. Scale Pan_deg down to the size of the spectral image by interpolation (denote the result as Pan_low).
Step 3. Calculate the global regression coefficients:

Mult_j = a_j + b_j · Pan_low + E_j,   (13)

where a_j and b_j are the global regression parameters and E_j is the residuals matrix.
Step 4. Perform spatial scaling of the Mult_j image to the size of the panchromatic image (denote the result as Mult_j').
Step 5. Increase the resolution of the spectral image by

Mult_j'' = Mult_j' + b_j · (Pan − Pan_deg),   (14)

where Mult_j'' is the spectral image with increased resolution. Assign 0 or 255 to the pixels of the fused image whose values are less than 0 or more than 255, respectively.
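Steps 2-5 can be sketched end-to-end for one band as follows (the cloud/water masking of Step 1 is omitted). The 3×3 averaging, decimation, and nearest-neighbour scaling via np.kron are simplified stand-ins for the interpolation used in the paper, and the input images are synthetic:

```python
import numpy as np

# End-to-end sketch of Steps 2-5 for one band. Degradation, downscaling,
# global regression, upscaling, and the Eq. (14) detail injection.

def box_filter3(img):
    """3x3 average filter with edge padding (the Step 2 degradation)."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def fuse_band(mult_j, pan, scale=2):
    pan_deg = box_filter3(pan)                     # Step 2: degrade Pan
    pan_low = pan_deg[::scale, ::scale]            # Step 2: scale down
    b_j, a_j = np.polyfit(pan_low.ravel(), mult_j.ravel(), 1)  # Step 3
    mult_up = np.kron(mult_j, np.ones((scale, scale)))         # Step 4
    fused = mult_up + b_j * (pan - pan_deg)        # Step 5: Eq. (14)
    return np.clip(fused, 0.0, 255.0)              # clamp to [0, 255]

rng = np.random.default_rng(1)
pan = rng.uniform(0.0, 255.0, (64, 64))
mult_j = 0.5 * box_filter3(pan)[::2, ::2] + 20.0   # synthetic spectral band
fused = fuse_band(mult_j, pan)
```

Only the scalar b_j and the degraded panchromatic image are needed at fusion time, which is why the method is of point type and fast.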
Figure 1. Example of image restoration from multispectral images: a) a
fragment of red spectrum image; b) the fragment of the panchromatic
image; c) interpolated image of red spectrum; d) image reconstructed from
multispectral, near infrared and panchromatic images.
IV. RESULTS OF ALGORITHMS EVALUATION
A. Image Restoration Results
Fig. 1 (a) and (b) illustrate fragments of the red spectrum image and of the original panchromatic image. Fig. 1 (c) presents the interpolated image of the red spectrum. Fig. 1 (d) presents the image restored using the multispectral, near infrared and panchromatic images.
B. Quality Assessment of Image Restoration Results
The restored image was compared with the interpolated spectral and the interpolated panchromatic images by calculating the correlation, RMSE, square error of the difference image (SEDI) and structural similarity (see Table I). The restored image has a higher correlation with the interpolated Nir image than with the other spectral images. Utilization of the Nir image for Pan image restoration is therefore desirable.
TABLE I. COMPARISON OF INTERPOLATED SPECTRAL IMAGES WITH THE RESTORED IMAGE SHOWN IN FIG. 1(D)

Interpolated image | Correlation | RMSE  | SEDI  | SSI
Pan                | 0.988       | 18694 | 4.15  | 0.88
R                  | 0.143       | 42481 | 13.33 | 0.59
G                  | 0.231       | 52519 | 24.19 | 0.56
B                  | 0.390       | 40564 | 16.76 | 0.62
Nir                | 0.949       | 45185 | 11.01 | 0.60
C. Image Fusion Results
Results of the presented algorithm were compared with the Brovey, IHS and local regression fusion methods. Spectral images of size 400×400 pixels and restored panchromatic images of size 1600×1600 pixels were used. Fig. 2 presents (a) a fragment of a multispectral image, (b) the original panchromatic image, (c) the IHS fusion, and (d) the IHS fusion with the restored panchromatic image (the resolution is 4 times higher than the resolution of the multispectral image, and 2 times higher than the resolution of the panchromatic image).

Figure 2. Fusion by different methods: a) a fragment of an original multispectral image (Landsat 7 ETM+); b) the panchromatic image; c) the IHS fusion with the original panchromatic image; d) the IHS fusion with the reconstructed panchromatic image.
TABLE II. NUMERICAL EVALUATION OF THE FUSION METHODS. THE ORIGINAL MULTISPECTRAL IMAGE IS OF SIZE 400×400, THE FUSED IMAGE OF SIZE 1600×1600 PIXELS.

Method            | L2 histogram norm (R,G,B) | Correlation (R,G,B)    | RMSE (R,G,B)        | SEDI (R,G,B)             | Time, s
Brovey fusion     | 31045, 32443, 42114       | 0.5294, 0.3384, 0.0189 | 7654, 12144, 12247  | 9.9783, 10.0945, 14.0574 | 16.171
IHS fusion        | 162152, 162610, 164330    | 0.1101, 0.0889, 0.1409 | 21578, 25116, 29445 | 16.3232, 9.91, 7.8131    | 13.844
Local regression  | 3328, 2643, 4258          | 0.9858, 0.9851, 0.9738 | 1151, 704, 744      | 1.9249, 1.1786, 1.2391   | 77.703
Global regression | 3372, 2785, 4604          | 0.9858, 0.9851, 0.9744 | 1157, 704, 739      | 1.9376, 1.1855, 1.2348   | 15.578
D. Visual Analysis
All the discussed algorithms increase the spatial resolution, but the Brovey and IHS fusion methods heavily distort the color composite. Visually, there is no color distortion in the local and global regression fusion results, but the local regression fusion adds noise near edges and in the homogeneous areas of the resulting image.
E. Quantitative Analysis
The Euclidean norm (L2) of the histogram difference between the original and the fused images, correlation, RMSE, and the square error of the difference image (SEDI) were used in the quantitative analysis. The fused images were scaled down to the size of the original multispectral images by bilinear interpolation. The ideal value for all the measures is 0, except for correlation, for which it is 1. Table II presents the assessment of image fusion by the Brovey, IHS, local and global regression methods. The size of the original multispectral image is 400×400 pixels; the size of the fused images is 1600×1600 pixels. The best values are indicated in bold. Fusion based on the local and global regression outperforms the widely used Brovey and IHS fusion methods. The calculation time of the global regression fusion is less than that of the Brovey fusion and comparable to that of the IHS fusion. The experiments were carried out in Matlab R14.
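Three of the measures used above can be sketched as follows; SEDI and the structural similarity index of [8] are omitted for brevity, and the two arrays are synthetic stand-ins for the original image and the scaled-down fused image:

```python
import numpy as np

# Sketches of the L2 histogram norm, correlation, and RMSE measures used
# in the quantitative analysis. Inputs are synthetic stand-in images.

def l2_hist_norm(a, b, bins=256):
    """Euclidean norm of the difference of the two image histograms."""
    ha, _ = np.histogram(a, bins=bins, range=(0.0, 255.0))
    hb, _ = np.histogram(b, bins=bins, range=(0.0, 255.0))
    return float(np.linalg.norm(ha - hb))

def correlation(a, b):
    """Pearson correlation between the two images."""
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

def rmse(a, b):
    """Root mean squared error between the two images."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

orig = np.tile(4.0 * np.arange(64), (64, 1))   # 64x64 gradient, 0..252
fused_down = orig + 1.0                        # near-perfect fused result
```

A fused result that differs from the original by a constant offset keeps the correlation at 1 while the RMSE reports exactly that offset, which is why both measures are reported together.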
V. CONCLUSION
We presented a new technique for multispectral image fusion with minimized color composite distortion. The resolution of the fused image is higher than the resolution of the original panchromatic image. This was done in two stages: super-resolution of the panchromatic image using the original multispectral and panchromatic images, followed by multispectral image fusion with the reconstructed panchromatic image. This makes it possible to increase the resolution of the image 2-4 times beyond the original panchromatic image resolution without loss of acutance or color composite distortion.
REFERENCES
[1] Y. Zhang, “Problems in the fusion of commercial high resolution
satellite images as well as Landsat 7 images and initial solutions,” Joint
Int. Symposium on GeoSpatial Theory, Processing and Applications,
Ottawa, Canada, July 2002, vol. 34, part.4.
[2] T. Tu, S. Su, H. Shyu, and P. Huang “A new approach at IHS-like image
fusion methods,” Information Fusion, vol. 2, no. 3, pp. 177–186, 2001.
[3] Z. Wang, D. Ziou, C. Armenakis, D. Li, and Q. Li, “A comparative
analysis of image fusion methods,” IEEE Trans. Geoscience and Remote
Sensing, vol. 43, no. 6, pp. 1391–1402, 2005.
[4] T. Tu, Y. Lee, C. Chang, and P. Huang “Adjustable intensity-hue-
saturation and Brovey transform fusion technique for
IKONOS/Quickbird imagery,” Optical Engineering, vol. 44, no. 11, pp.
116201-1–116201-10, 2005.
[5] R. Kimmel, “Demosaicing: Image reconstruction from color CCD
samples,” IEEE Trans. Image Proc., vol. 8, no. 9, pp. 1221–1228, 1999.
[6] R. Molina, J. Mateos, and A. Katsaggelos “Super resolution
reconstruction of multispectral images,” VIRTUAL OBSERVATORY:
Plate Content Digitization, Archive Mining & Image Sequence
Processing, Heron Press, 2005.
[7] D. Dovnar and K. Predko, “The method for digital restoration of object
distorted by linear system,” Acta Polytechnica Scand. Applied Physics
Series, vol. 1, no 149, pp. 138-141, 1985.
[8] Z. Wang and A. Bovik “A universal image quality index,” IEEE Signal
Proc. Letters, vol. 9, no. 3, pp. 81-84, 2002.
[9] D. Kavaldiev and Z. Ninkov, “Influence of nonuniform charge–coupled
device pixel response on aperture photometry,” Optical Engineering.
vol. 40, no. 2, pp. 162–169, 2001.
[10] D. Dovnar and I. Zakharov “The orthogonalization method for error
compensation of Wiener filter for spatial discreditized images,” in Proc.
8th Int. Conf. on Pattern Recognition and Information Proc., Minsk,
Belarus, May 2005, pp. 173–176.
[11] R. Molina, J. Mateos, and A. Katsaggelos “Blind deconvolution using a
variational approach to parameter, image, and blur estimation,” IEEE
Trans. on Image Processing, vol. 15, no. 12, pp. 3715-3727, 2006.
[12] J. Hill, C. Diemer, O. Stover, and T. Udelhoven, “A local correlation
approach for the fusion of remote sensing data with different spatial
resolution in forestry applications,” in Proc. of Int. Archives of
Photogrammetry and Remote Sensing, Valladolid, Spain, June 1999,
vol. 32, part 7-4-3 W6, pp. 167–174.