Copyright © 2017 Parente C., Pepe M. This is an open access article distributed under the Creative Commons Attribution License, which permits
unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
International Journal of Engineering & Technology, 6 (3) (2017) 71-77
International Journal of Engineering & Technology
Website: www.sciencepubco.com/index.php/IJET
doi: 10.14419/ijet.v6i3.7702
Research paper
Influence of the weights in IHS and Brovey methods for
pan-sharpening WorldView-3 satellite images
Parente C. *, Pepe M.
Department of Sciences and Technologies, University of Naples “Parthenope” – Italy
*Corresponding author E-mail: parente@uniparthenope.it
Abstract
The purpose of this paper is to investigate the impact of the weights used in pan-sharpening methods applied to satellite images. Different datasets of weights have been considered and compared in the IHS and Brovey methods. The first dataset assigns the same weight to each band, while the second uses the weights obtained from the spectral radiance response; these two datasets are the most common in pan-sharpening applications. The third dataset results from a new method, which consists in computing the first-order moment of inertia of each band, taking into account the spectral response. To test the impact of the weights of the different datasets, WorldView-3 satellite images have been considered. In particular, two different scenes (the first of an urban landscape, the second of a rural landscape) have been investigated. The quality of the pan-sharpened images has been analysed by three different quality indexes: Root mean square error (RMSE), Relative average spectral error (RASE) and Erreur Relative Global Adimensionnelle de Synthèse (ERGAS).
Keywords: WorldView-3; Pan-Sharpening; Spectral Radiance Response; IHS; Brovey.
1. Introduction
Passive sensors record the intensity of the electromagnetic energy reflected from the Sun or emitted by the Earth; depending on the spectral range, optical sensors can acquire panchromatic (PAN) and multispectral (MS) bands [1]. Panchromatic data from remote sensing systems have a smaller pixel size and a wider acquisition band than multispectral data. When the width of the acquisition band is decreased, an unfavourable signal-to-noise ratio can only be avoided by widening the ground pixel size, so that a greater amount of reflected energy is intercepted. To overcome the physical limitations of the available sensors, an extensive number of data fusion methods has been proposed in the literature [2]. Pan-sharpening (a branch of data fusion) fuses the higher geometric resolution of the panchromatic images with the spectral resolution of the multispectral images [3]. In this way, each low resolution multispectral image (LRMI) is transformed into a high resolution multispectral image (HRMI).
In the literature, many pan-sharpening methods have been studied and developed, e.g. Brovey, Weighted Brovey, Gram-Schmidt, Intensity-Hue-Saturation (IHS), Fast IHS, Multiplicative, Principal Component Analysis (PCA), Simple Mean, high-pass filtering (HPF), Price, Generalized Laplacian pyramid (GLP) and Zhang [4, 5, 6]. The choice of a method, the image segmentation and the estimation of the sharpening or fusion performance are still open problems. Because the aim of this paper is to investigate how the quality of the image varies with the weights assigned to the bands, special focus is placed on the IHS and Brovey methods. In particular, depending on the adopted weight dataset, pan-sharpened images of different quality may be obtained. In addition, the weight dataset varies with the type of satellite image used. In this paper, different weights are applied to WorldView-3 (WV-3) satellite imagery.
2. Data and methods
Nowadays, the analysis of pan-sharpened images follows a more or less standardized workflow, although the details of the pipeline depend on the type of transformation employed.
According to Baronti et al. [7] the majority of image fusion meth-
ods can be divided into two main classes:
• Techniques that employ linear space-invariant digital filter-
ing of the panchromatic image to extract the spatial details;
• Techniques that yield the spatial details as pixel difference
between the panchromatic image and a nonzero-mean com-
ponent obtained from a spectral transformation of the MS
bands, without any spatial filtering of the former.
In this paper, which considers the IHS and Brovey pan-sharpening methods, the processing steps applied to the satellite images can be summarized as shown in Figure 1.
As shown in Figure 1, at the end of each method a quantitative analysis is conducted in order to measure the similarity between the images.
Fig. 1: Pipeline for Pan-Sharpening Process and Analysis.
2.1. WorldView-3 images
The latest generation of commercial satellite sensors provides images with very high geometric resolution (VHR). An example of a VHR commercial satellite is WV-3. Launched on August 13, 2014, it joined the other DigitalGlobe satellites in orbit, i.e. GeoEye-1 and WorldView-2, which supply panchromatic images with a cell size of 0.5 m and multispectral ones with a cell size of 2.0 m [8, 9, 10]. WV-3 collects data with a nominal ground sample distance of 0.31 m (panchromatic), 1.24 m (multispectral) and 3.7 m (SWIR) at nadir. However, the commercial images are resampled to 0.3 m (panchromatic), 1.2 m (multispectral) and 7.5 m (SWIR).
From the radiometric point of view, WV-3 acquires 11-bit data in 9 spectral bands covering the panchromatic and multispectral ranges. The PAN band covers the 0.45–0.90 μm wavelength interval. The MS images have eight wavelength bands: Coastal (B1: 400–450 nm), Blue (B2: 450–510 nm), Green (B3: 510–580 nm), Yellow (B4: 585–625 nm), Red (B5: 630–690 nm), Red Edge (B6: 705–745 nm), NIR1 (B7: 770–895 nm) and NIR2 (B8: 860–1040 nm). An additional shortwave infrared (SWIR) sensor acquires 14-bit data in eight bands covering the 1100 to 2500 nm spectral region (Figure 2).
Fig. 2: Electromagnetic Spectrum of WorldView-3 (Image Taken from the DigitalGlobe Website).
2.2. IHS pan-sharpening method
Introduced by Carper et al. [9], the IHS transform separates the spatial (Intensity) and spectral (Hue and Saturation) information of a standard RGB image [12]. As is known, the intensity is the brightness of the image, hue is the dominant or average wavelength of the light contributing to the colour (colour perception), and saturation represents the purity of the colour [13]. In the IHS method, the component Intensity can be defined as [14]:
$I = \sum_{k=1}^{n} w_k \cdot DN_k$     (1)

Where:
$w_k$: weight of the k-th band;
$DN_k$: Digital Number (DN) of the k-th band.
By adding to each initial (resampled) multispectral image the difference between the panchromatic value and the Intensity, it is possible to obtain the multispectral components with the same geometric resolution as the PAN data, using the equation:
$HRMI_k = LRMI_k + \delta$     (2)

Where:

$\delta = PAN - I$     (3)
If the weights assume the same value, this transformation is called
IHS, while if they assume different values, this processing is
called Fast-IHS [15, 16].
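To make the procedure concrete, a minimal Python/NumPy sketch of Eqs. (1)-(3) is given below. It assumes the MS bands have already been resampled to the PAN grid; the function name, the random test arrays and the normalization of the weights are illustrative choices, not the authors' actual ArcMap/Matlab implementation.

```python
import numpy as np

def ihs_pansharpen(ms, pan, weights=None):
    """IHS / Fast-IHS fusion following Eqs. (1)-(3).

    ms      : array (n_bands, rows, cols), MS bands resampled to the PAN grid
    pan     : array (rows, cols), panchromatic band
    weights : per-band weights w_k; equal weights (plain IHS) if None
    """
    n = ms.shape[0]
    w = np.full(n, 1.0 / n) if weights is None else np.asarray(weights, float)
    w = w / w.sum()                          # assumption: weights normalized to sum 1
    intensity = np.tensordot(w, ms, axes=1)  # Eq. (1): I = sum_k w_k * DN_k
    delta = pan - intensity                  # Eq. (3): delta = PAN - I
    return ms + delta                        # Eq. (2): HRMI_k = LRMI_k + delta

# toy data in place of a real WV-3 subset
ms = np.random.randint(0, 2048, (7, 64, 64)).astype(float)
pan = np.random.randint(0, 2048, (64, 64)).astype(float)
sw = [0.005, 0.142, 0.209, 0.144, 0.234, 0.157, 0.116]  # standard weights of Table 1
hrmi = ihs_pansharpen(ms, pan, sw)
```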
2.3. Brovey pan-sharpening method
The Brovey transformation (BT) allows obtaining a multispectral image of greater detail by exploiting the idea that the spatial details are modulated into the MS images when the MS images are multiplied by the ratio of the PAN band to a synthetic band [17]. This transformation was developed to increase the contrast in the low and high ends of the histogram of an image and to produce visually appealing images. The mathematical formulation of this method is [18]:
$F_k = \frac{MS_k}{Syn} \cdot PAN$     (4)

Where:
$F_k$: fused k-th band;
$MS_k$: MS k-th band (lower resolution);
$Syn$: synthetic band;
$PAN$: PAN band (higher resolution).
The synthetic band can be obtained in two ways. In the first case, it is obtained as the average of the multispectral bands that are included in the panchromatic one [3], [19]:

$Syn = \frac{1}{n}\sum_{k=1}^{n} MS_k$     (5)
In the second case, introducing (different) band weights, it is possible to obtain the following relation for the fused image:

$F_k = \frac{MS_k}{\sum_{j=1}^{n} w_j \cdot MS_j} \cdot PAN$     (6)
In this latter case, this transformation is called Weighted Brovey.
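A corresponding sketch of Eqs. (4)-(6) is shown below, under the same assumptions as the IHS example (MS bands resampled to the PAN grid, illustrative names); the small epsilon guarding against division by zero is an implementation detail not present in the paper.

```python
import numpy as np

def brovey_pansharpen(ms, pan, weights=None):
    """Brovey / Weighted Brovey fusion following Eqs. (4)-(6).

    ms      : array (n_bands, rows, cols), MS bands resampled to the PAN grid
    pan     : array (rows, cols), panchromatic band
    weights : band weights for the synthetic band; simple average (Eq. 5) if None
    """
    n = ms.shape[0]
    w = np.full(n, 1.0 / n) if weights is None else np.asarray(weights, float)
    w = w / w.sum()
    synthetic = np.tensordot(w, ms, axes=1)   # Eq. (5)/(6): synthetic band
    eps = 1e-12                               # avoid division by zero on dark pixels
    return ms / (synthetic + eps) * pan       # Eq. (4): F_k = MS_k / Syn * PAN
```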
2.4. Spectral radiance response and first moment of inertia of spectral radiance response
A method to obtain the weights is to analyse the spectral radiance response. It is defined as the ratio of the number of photoelectrons measured by the instrument system to the spectral radiance at a particular wavelength present at the entrance to the telescope aperture. It includes not only the raw detector quantum efficiency, but also the transmission losses due to the telescope optics and filters. The spectral radiance response of each band is normalized by dividing it by the maximum response value for that band, to arrive at a relative spectral radiance response [20]. According to Belfiore et al. [21], it is possible to calculate the minimum value of the intercepted radiance (IntRad) between the panchromatic and multispectral bands for every wavelength interval; the IntRad of each band is divided by the sum of the IntRad of all bands in order to obtain the "standard weights" for the pan-sharpening methods.
For the WorldView-3 images, the weights assume the values indicated in Table 1.
Table 1: Standard Weights for WorldView-3 Multispectral Images

Band     B1      B2      B3      B4      B5      B6      B7
Weight   w1      w2      w3      w4      w5      w6      w7
         0.005   0.142   0.209   0.144   0.234   0.157   0.116
As can be seen from Table 1, no weight is reported for the NIR2 band, because the NIR2 curve does not intersect the panchromatic curve.
A new method that takes into account the greater response of some bands compared to others has been developed. It consists in computing the relative first moment of inertia (also known as the first moment of area), Ms, of each band, i.e. the product of the area of the band contained in the panchromatic band and the height of its centroid on the relative spectral response:
$M_{s,k} = A_k \cdot d_k$     (7)

Where:
$A_k$: area of the single band (k) contained within the panchromatic boundary;
$d_k$: distance from the centroid of the band (k) area contained within the panchromatic boundary to the horizontal axis.
In this way, different weights are attributed taking into account the spectral response of all bands: the red component receives a greater weight, while the blue receives a lower one.
A graphical representation of these elements (the centroids and the distances from each centroid to the abscissa axis, measured on the spectral response curves of WV-3) is shown in Figure 3.
Fig. 3: Spectral Response of WorldView-3 Panchromatic and Multispec-
tral Sensors with Representation of the Centroid (Red Dots) of Each Band
Area Contained in Panchromatic Boundary and the Relative Height.
Therefore, by normalizing the values of the first moment of inertia, it is possible to obtain the following new weight dataset (Table 2).
Table 2: Inertial Weights for WorldView-3 Multispectral Images

Band     B1      B2      B3      B4      B5      B6      B7
Weight   w1      w2      w3      w4      w5      w6      w7
         0.005   0.104   0.198   0.151   0.251   0.178   0.113
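The derivation of the two weight datasets can be sketched as follows. The toy response curves are placeholders (they are not DigitalGlobe's published relative spectral radiance responses, so the output will not reproduce Tables 1 and 2), and the centroid-height formula is one possible reading of Eq. (7).

```python
import numpy as np

def band_weights(wavelengths, pan_resp, ms_resps):
    """Standard (IntRad) and inertial (first-moment) weights from relative
    spectral response curves, following Sect. 2.4."""
    int_rad, moments = [], []
    for resp in ms_resps:
        overlap = np.minimum(pan_resp, resp)      # band response inside the PAN boundary
        area = np.trapz(overlap, wavelengths)     # IntRad: intercepted radiance A_k
        # height of the centroid of the overlapped area above the wavelength axis
        d = 0.5 * np.trapz(overlap**2, wavelengths) / area if area > 0 else 0.0
        int_rad.append(area)
        moments.append(area * d)                  # Eq. (7): Ms_k = A_k * d_k
    sw = np.array(int_rad) / np.sum(int_rad)      # standard weights (cf. Table 1)
    iw = np.array(moments) / np.sum(moments)      # inertial weights (cf. Table 2)
    return sw, iw

# placeholder curves: two Gaussian-like MS bands inside a broad PAN band
wl = np.linspace(400, 1000, 601)
pan = np.where((wl > 450) & (wl < 900), 1.0, 0.0)
blue = np.exp(-0.5 * ((wl - 480) / 25) ** 2)
red = np.exp(-0.5 * ((wl - 660) / 25) ** 2)
sw, iw = band_weights(wl, pan, [blue, red])
```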
2.5. Quality indexes
Quality indexes have been defined in order to compare the pan-sharpened images with the initial ones. In this paper, three indexes were considered: RMSE, RASE and ERGAS.
The Root mean square error (RMSE) index is computed for each band k using the formula [22]:
$RMSE_k = \sqrt{\mu_k^2 + \sigma_k^2}$     (8)

Where:
$\mu_k$: difference between the mean values of the input LRMI and of the output one (HRMI);
$\sigma_k$: standard deviation of the difference image between LRMI and HRMI.
The Relative average spectral error (RASE) index characterizes the average performance of a method in the considered spectral bands [18]. This index is calculated over all the multispectral images by the following formula [23]:
$RASE = \frac{100}{M}\sqrt{\frac{1}{n}\sum_{k=1}^{n} RMSE_k^2}$     (9)
Where M is the mean value of Digital Numbers (DNs) of the n
input images (MS).
The ERGAS (Erreur Relative Global Adimensionnelle de
Synthèse), also indicated as a dimensionless Global Relative Error
in Synthesis [24] is another index to evaluate the quality of the
pan-sharpening. Introduced by Wald [25], it is calculated using the
following formula:
$ERGAS = 100\,\frac{h}{l}\sqrt{\frac{1}{n}\sum_{k=1}^{n}\frac{RMSE_k^2}{M_k^2}}$     (10)

Where:
$h$: spatial resolution of the PAN image;
$l$: spatial resolution of the MS image;
$n$: number of bands of the HRMI;
$M_k$: mean radiance value of the k-th band of the MS image.
Good image quality after pan-sharpening is characterized by low values of the RMSE, RASE and ERGAS indexes; in an ideal transformation, these indexes would be close to zero [26].
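The three indexes can be computed with a few lines of NumPy, as sketched below. The comparison protocol (bringing the pan-sharpened bands back to the MS grid before computing the statistics) and the names are assumptions, since the paper does not detail this step; the ratio h/l is 0.3/1.2 = 0.25 for the WV-3 PAN/MS pair.

```python
import numpy as np

def quality_indexes(lrmi, hrmi, ratio):
    """RMSE per band, RASE and ERGAS following Eqs. (8)-(10).

    lrmi  : array (n_bands, rows, cols), original MS bands (reference)
    hrmi  : array (n_bands, rows, cols), pan-sharpened bands resampled back
            to the MS grid so the two stacks are comparable (assumed protocol)
    ratio : spatial resolution ratio h/l between PAN and MS (0.25 for WV-3)
    """
    diff = lrmi - hrmi
    bias = diff.mean(axis=(1, 2))                        # difference of the mean values
    sigma = diff.std(axis=(1, 2))                        # std of the difference image
    rmse = np.sqrt(bias**2 + sigma**2)                   # Eq. (8)
    m_all = lrmi.mean()                                  # mean DN of the n input MS images
    rase = 100.0 / m_all * np.sqrt(np.mean(rmse**2))     # Eq. (9)
    m_k = lrmi.mean(axis=(1, 2))                         # mean radiance of the k-th band
    ergas = 100.0 * ratio * np.sqrt(np.mean((rmse / m_k)**2))  # Eq. (10)
    return rmse, rase, ergas
```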
3. Application of the pan-sharpening methods
In order to verify the quality of the pan-sharpening results obtained with the IHS and Brovey methods, the following weight datasets have been investigated:
• Standard weights (SW) (table 1);
• Inertial weights (IW) (table 2);
• Equal weights (EW) (0.142 for each of seven bands).
Two different study areas have been taken into account; the characteristic parameters of the satellite images are reported in Table 3.
Table 3: Characteristic Parameters of the WorldView-3 Images

                          Baden-Württemberg (Germany)   Tripoli (Libya)
Dimension MS (pixel)      1479 x 2608                   703 x 997
Dimension PAN (pixel)     5913 x 10429                  2812 x 3986
Acquisition date          2015/06/06                    2016/03/08
Acquisition time          10:35:25                      10:12:13
The choice of two different scenarios (urban and rural landscapes) allows generalizing as much as possible the impact that the weight dataset has on the quality of the pan-sharpened image (Figure 4). The satellite images were kindly supplied by DigitalGlobe as product samples available for download.
The quality indexes (RMSE, RASE and ERGAS) obtained for the two investigated landscapes are summarized in Tables 4 and 5.
Fig. 4: Multispectral LRMS Scene of WorldView-3 in RGB (5-3-2 Composite Bands): (Left Image) Urban Area (Libya) - Projection: Transverse Mercator; WGS 1984 UTM Zone 33N; (Right Image) Rural Area (Germany) - Projection: Transverse Mercator; WGS 1984 UTM Zone 32N.
Table 4: Quality Indexes Obtained from the Urban Scenario (Libya)

                 Standard Weights            Inertial Weights            Equal Weights
Method   Band    RMSE      RASE     ERGAS    RMSE      RASE     ERGAS    RMSE      RASE     ERGAS
IHS      1       188.525   41.719   9.189    195.975   42.373   9.263    214.168   46.837   9.626
         2       192.735                     194.991                     215.954
         3       192.920                     195.165                     216.015
         4       192.612                     194.919                     215.928
         5       192.834                     195.092                     215.981
         6       192.420                     194.765                     215.861
         7       192.794                     195.061                     215.966
BROVEY   1       137.022   41.071   8.710    144.970   42.220   8.872    137.292   48.785   9.428
         2       180.846                     188.281                     208.268
         3       235.819                     242.445                     284.521
         4       191.022                     194.905                     227.426
         5       178.127                     182.102                     219.542
         6       177.170                     180.353                     213.061
         7       209.122                     213.803                     254.631

(RASE and ERGAS are global indexes computed over all bands, so a single value is reported for each method and weight dataset.)
The values of the quality indexes have been obtained with the raster calculator of ArcMap© and a suitable algorithm developed in Matlab© software.
A visual comparison of the pan-sharpened data obtained with the different methods and weights is shown in Figures 5 and 6. In order to display more detail, a subset of the two WV-3 images has been chosen. In these images, it is possible to note that, in all the investigated scenarios, the pan-sharpening techniques and the adopted weight datasets have improved the resolution of the fused images.
Table 5: Quality Indexes Obtained from the Rural Scenario (Germany)

                 Standard Weights           Inertial Weights           Equal Weights
Method   Band    RMSE     RASE    ERGAS     RMSE     RASE    ERGAS     RMSE     RASE     ERGAS
IHS      1       16.707   5.571   2.082     16.937   5.648   2.106     24.680   8.210    3.401
         2       16.747                     16.974                     24.693
         3       16.783                     17.013                     24.707
         4       16.760                     16.994                     24.703
         5       16.782                     17.013                     24.701
         6       16.760                     16.994                     24.700
         7       16.784                     17.016                     24.705
BROVEY   1       12.636   7.149   1.639     13.018   7.283   1.665     16.554   10.661   2.280
         2       10.869                     11.141                     13.977
         3       13.717                     13.966                     17.779
         4       12.849                     13.013                     16.139
         5       11.383                     11.489                     13.816
         6       20.637                     20.938                     29.800
         7       45.291                     46.171                     71.245
Fig. 5: Particular of Urban Images (Bands: 5, 3, 2): (A) LRMS; (B) IHS SW; (C) IHS IW; (D) IHS EW; (E) Brovey SW; (F) Brovey IW; (G) Brovey EW.
Fig. 6: Particular of Rural Images (Bands: 5, 3, 2): (A) LRMS; (B) IHS SW; (C) IHS IW; (D) IHS EW; (E) Brovey SW; (F) Brovey IW; (G) Brovey EW.
4. Conclusions
In this paper, the use of different weights and pan-sharpening methods has led to HRMIs of different quality in each investigated scenario. In general, the analysis of the quality indexes in the different scenarios has shown the highest values of the RMSE, RASE and ERGAS indexes for the urban landscape.
Concerning the pan-sharpened images obtained with the inertial weight dataset, the quality indexes show that this dataset performs better than the equal weights in each scenario, although it was not the best in the two tested scenarios. In fact, the standard weight dataset supplies better results, but the inertial weight dataset is comparable with it in terms of performance. Even if, according to Snehmani et al. [27], no single pan-sharpening method can be considered the best, introducing different weights for each band of the WV-3 images in the IHS and Brovey methods leads to better results.
Therefore, the good quality and high geometric resolution (30 cm pixel) of the multispectral pan-sharpened images allow a better representation of the territory. In addition, the multispectral pan-sharpened images can be used as a basic information layer for civil engineering applications, such as the design of roads, railways, aqueducts, etc.
Acknowledgement
This research was part of the “Change detection techniques ap-
plied to very high-resolution satellite images," a research project
supported by the University of Naples “Parthenope." We would
like to thank Prof. Raffaele Santamaria, the Director of the De-
partment of Sciences and Technologies, for his scientific support
of our research activities.
References
[1] M. A. Gomarasca, Basics of geomatics, Springer Science & Busi-
ness Media, (2009). https://doi.org/10.1007/978-1-4020-9014-1.
[2] B. Aiazzi, S. Baronti, F. Lotti & M. Selva, "A comparison between global and context-adaptive pansharpening of multispectral images", IEEE Geoscience and Remote Sensing Letters, (2009), Vol. 6, Iss. 2, pp. 302-306. https://doi.org/10.1109/LGRS.2008.2012003.
[3] C. Parente & R. Santamaria, "Increasing geometric resolution of data supplied by Quickbird multispectral sensors", Sensors & Transducers, (2013), 156(9), 111.
[4] P. S. Chavez, Jr., S. C. Sides &J. A. Anderson, “Comparison of three
different methods to merge multiresolution and multispectral data:
Landsat TM and SPOT panchromatic,” Photogramm. Eng. Remote
Sens., (1991), vol. 57, no. 3, pp. 295–303.
[5] J. Amro, M. Mateos, R. Vega, A. Molina, K. Katsaggelos, “A survey
of classical methods and new trends in pansharpening of multispec-
tral images”, EURASIP Journal on Advances in Signal Processing,
(2011), Springer Open Journal.
[6] Vivone, G., Alparone, L., Chanussot, J., Dalla Mura, M., Garzelli,
A., Licciardi, G. A., ... & Wald, L., “A critical comparison among
pansharpening algorithms”, IEEE Transactions on Geoscience and
Remote Sensing, (2015), 53(5), pp. 2565-2586
https://doi.org/10.1109/TGRS.2014.2361734.
[7] S. Baronti, B. Aiazzi, M. Selva, A. Garzelli & L. Alparone, "A theoretical analysis of the effects of aliasing and misregistration on pansharpened imagery", IEEE Journal of Selected Topics in Signal Processing, (2011), 5(3), pp. 446-453. https://doi.org/10.1109/JSTSP.2011.2104938.
[8] Digital Globe, “The Benefits of the Eight Spectral Bands of
WorldView-2”, available online:
http://www.digitalglobe.com/downloads/WorldView-2_8
Band_Applications_Whitepaper.pdf, last visit: 04.21.2017.
[9] W.J. Carper, T.M. Lillesand & R.W. Kiefer, "The use of intensity-hue-saturation transformations for merging SPOT panchromatic and multispectral image data", Photogramm. Eng. Rem. S., (1990), 56, pp. 459-467.
[10] P. Maglione, C. Parente, R. Santamaria & A. Vallario, "Modelli tematici 3D della copertura del suolo a partire da DTM e immagini telerilevate ad alta risoluzione WorldView-2 [3D thematic models of land cover from DTM and high-resolution remote sensing images WorldView-2]", Rendiconti Online Società Geologica Italiana, 30, pp. 33-40.
[11] T. Updike & C. Comp, "Radiometric use of WorldView-2 imagery", Technical Note, Digital Globe, pp. 1-16.
[12] M. Ehlers, S. Klonus, P. Johan Åstrand & P. Rosso, "Multi-sensor image fusion for pansharpening in remote sensing", International Journal of Image and Data Fusion, (2010), 1(1), pp. 25-45. https://doi.org/10.1080/19479830903561985.
[13] M. Pepe, S. Ackermann, L. Fregonese & C. Achille, "New perspectives of Point Clouds color management – The development of tool in Matlab for applications in Cultural Heritage", Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., (2017), XLII-2/W3, pp. 567-571. https://doi.org/10.5194/isprs-archives-XLII-2-W3-567-2017.
[14] T. M. Tu, P. S. Huang, C. L. Hung and C. P. Chang, “A fast intensi-
ty-hue-saturation fusion technique with spectral adjustment for
IKONOS imagery”, Geoscience Remote Sensing IEEE, (2004), vol.
1(4), pp. 309-312. https://doi.org/10.1109/LGRS.2004.834804.
[15] M. B. Giannini, P. Maglione & C. Parente, "Application of IHS Pan-Sharpening techniques to IKONOS images", Proceedings of IEEE GOLD Conference 2010, (2010).
[16] S. Rahmani, M. Strait, D. Merkurjev, M. Moeller &T. Wittman,
“An adaptive IHS pan-sharpening method”, IEEE Geoscience and
Remote Sensing Letters, (2010), 7(4), 746-750.
https://doi.org/10.1109/LGRS.2010.2046715.
[17] J. Zhang, “Multi-source remote sensing data fusion: status and
trends”, International Journal of Image and Data Fusion, (2010),
1(1), 5-24. https://doi.org/10.1080/19479830903561035.
[18] G. Meinel & M. Neubert, "A comparison of segmentation programs for high resolution remote sensing data", International Archives of Photogrammetry and Remote Sensing, (2014), 35(Part B), pp. 1097-1105.
[19] P. Maglione, C. Parente & A. Vallario, "Pan-sharpening Worldview-2: IHS, Brovey and Zhang methods in comparison", Int. J. Eng. Technol, (2016), 8, pp. 673-679.
[20] Digital Globe, "Radiometric Use of WorldView-3 Imagery", Technical Note, (2016), available online: https://dg-cms-uploads-production.s3.amazonaws.com/uploads/document/file/207/Radiometric_Use_of_WorldView-3_v2.pdf, last visit: 04.20.2017.
[21] O. R. Belfiore, C. Meneghini, C. Parente & R. Santamaria, “Appli-
cation of different Pan-sharpening methods on WorldView-3 imag-
es”, ARPN-JEAS, (2016), 11, pp. 490-496.
[22] S. Chen, R. Zhang, H. Su, J. Tian, J. Xia, “Scaling-up transfor-
mation of multisensor images with multiple resolutions”, Sensors,
(2009), Issue 9, pp: 1370-1381. https://doi.org/10.3390/s90301370.
[23] G. P. Hegde, N. Hegde & V. D. I. Muralikrishna, "Measurement of quality preservation of pan-sharpened image", International Journal of Engineering Research and Development, (2012), 2(10), pp. 12-17.
[24] M. Lillo-Saavedra & C. Gonzalo, "Spectral or spatial quality for fused satellite imagery? A trade-off solution using the wavelet à trous algorithm", Int. J. Remote Sens., (2006), vol. 27, no. 7, pp. 1453-1464. https://doi.org/10.1080/01431160500462188.
[25] L. Wald, “Quality of high resolution synthesized images: Is there a
simple criterion?”, Proceedings of the International Conference
Fusion of Earth Data, January 26-28, 2000, Nice, France, Vol. 1,
pp. 99-105.
[26] J. A. Sobrino, Recent advances in quantitative remote sensing,
Universitat de València, (2002).
[27] Gore A. Snehmani, A. Ganju, S. Kumar & P. K. Srivastava, "A comparative analysis of pansharpening techniques on QuickBird and WorldView-3 images", Geocarto International, (2016), pp. 1-17. https://doi.org/10.1080/10106049.2016.1206627.