
Multifocus color image fusion based on

quaternion curvelet transform

Liqiang Guo,∗ Ming Dai, and Ming Zhu

Key Laboratory of Airborne Optical Imaging and Measurement,

Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences,

Changchun, Jilin Province 130033, China

∗math circuit@qq.com

Abstract: Multifocus color image fusion is an active research area in image processing, and many fusion algorithms have been developed. However, the existing techniques can hardly deal with the problem of image blur. This study presents a novel fusion approach that integrates the quaternion with the traditional curvelet transform to overcome this disadvantage. The proposed method uses a multiresolution analysis procedure based on the quaternion curvelet transform. Experimental results show that the proposed method is promising, and it significantly improves the fusion quality compared to the existing fusion methods.

© 2012 Optical Society of America

OCIS codes: (100.0100) Image processing; (350.2660) Fusion.

References and links

1. X. Li, M. He, and M. Roux, “Multifocus image fusion based on redundant wavelet transform,” IET Image Pro-

cess. 4(4), 283–293 (2010).

2. S. Li, J. T. Kwok, and Y. Wang, “Multifocus image fusion using artificial neural networks,” Pattern Recogn.

Lett. 23(8), 985–997 (2002).

3. Z. Wang, Y. Ma, and J. Gu, “Multi-focus image fusion using PCNN,” Pattern Recogn. 43(6), 2003–2016 (2010).

4. Q. Zhang and B. Guo, “Multifocus image fusion using the nonsubsampled contourlet transform,” Signal Process.

89(7), 1334–1346 (2009).

5. W. Huang and Z. L. Jing, “Multifocus image fusion using pulse coupled neural network,” Pattern Recogn. Lett.

28(9), 1123–1132 (2007).

6. N. Wang, Y. Ma, and J. Gu, “Multi-focus image fusion algorithm based on shearlets,” Chin. Opt. Lett. 9(4),

041001 (2011).

7. N. Ma, L. Luo, Z. Zhou, and M. Liang, “A multifocus image fusion in nonsubsampled contourlet domain with variational fusion strategy,” Proc. SPIE 8004, 800411 (2011).

8. W. Yajie and X. Xinhe, “A multifocus image fusion new method based on multidecision,” Proc. SPIE 6357,

63570G (2006).

9. R. Nava, B. E. Ramírez, and G. Cristóbal, “A novel multi-focus image fusion algorithm based on feature extraction and wavelets,” Proc. SPIE 7000, 700028 (2008).

10. I. De and B. Chanda, “A simple and efficient algorithm for multifocus image fusion using morphological

wavelets,” Signal Process. 86(5), 924–936 (2006).

11. S. Li and B. Yang, “Multifocus image fusion using region segmentation and spatial frequency,” Image Vis.

Comput. 26(7), 971–979 (2008).

12. H. Li, Y. Chai, H. Yin, and G. Liu, “Multifocus image fusion and denoising scheme based on homogeneity

similarity,” Opt. Commun. 285(2), 91–100 (2012).

13. Y. Chai, H. Li, and Z. Li, “Multifocus image fusion scheme using focused region detection and multiresolution,”

Opt. Commun. 284(19), 4376–4389 (2011).

14. S. Gabarda and G. Cristóbal, “Multifocus image fusion through pseudo-Wigner distribution,” Opt. Eng. 44(4), 047001 (2005).

15. P. L. Lin and P. Y. Huang, “Fusion methods based on dynamic-segmented morphological wavelet or cut and paste

for multifocus images,” Signal Process. 88(6), 1511–1527 (2008).

#170574 - $15.00 USD

(C) 2012 OSA

Received 13 Jun 2012; revised 18 Jul 2012; accepted 24 Jul 2012; published 1 Aug 2012

10 September 2012 / Vol. 20, No. 19 / OPTICS EXPRESS 18846


16. Y. Chai, H. F. Li, and M. Y. Guo, “Multifocus image fusion scheme based on features of multiscale products and

PCNN in lifting stationary wavelet domain,” Opt. Commun. 284(5), 1146–1158 (2011).

17. R. Redondo, F. Šroubek, S. Fischer, and G. Cristóbal, “Multifocus image fusion using the log-Gabor transform and a Multisize Windows technique,” Inform. Fusion 10(2), 163–171 (2009).

18. F. Luo, B. Lu, and C. Miao, “Multifocus image fusion with trace-based structure tensor,” Proc. SPIE 8200,

82001G (2011).

19. A. Baradarani, Q. M. J. Wu, M. Ahmadi, and P. Mendapara, “Tunable halfband-pair wavelet filter banks and

application to multifocus image fusion,” Pattern Recogn. 45(2), 657–671 (2012).

20. Y. Chai, H. Li, and X. Zhang, “Multifocus image fusion based on features contrast of multiscale products in

nonsubsampled contourlet transform domain,” Optik 123(7), 569–581 (2012).

21. H. Zhao, Q. Li, and H. Feng,“Multi-focus color image fusion in the HSI space using the sum-modified-laplacian

and the coarse edge map,” Image Vis. Comput. 26(9), 1285–1295 (2008).

22. W. Huang and Z. Jing, “Evaluation of focus measures in multi-focus image fusion,” Pattern Recogn. Lett. 28(9),

493–500 (2007).

23. R. Maruthi, “Spatial Domain Method for Fusing Multi-Focus Images using Measure of Fuzziness,” Int. J. Com-

put. Appl. 20(7), 48–57 (2011).

24. H. Shi and M. Fang, “Multi-focus Color Image Fusion Based on SWT and IHS,” in Proceedings of IEEE Con-

ference on Fuzzy Systems and Knowledge Discovery (IEEE 2007), 461–465.

25. Y. Chen, L. Wang, Z. Sun, Y. Jiang, and G. Zhai, “Fusion of color microscopic images based on bidimensional

empirical mode decomposition,” Opt. Express 18(21), 21757–21769 (2010).

26. H. Li, B. S. Manjunath, and S. K. Mitra, “Multisensor image fusion using the wavelet transform,” Graphical

Models & Image Process. 57(3), 235–245 (1995).

27. Z. Zhang and R. S. Blum, “A categorization of multiscale-decomposition-based image fusion schemes with a

performance study for a digital camera application,” Proc. IEEE. 87(8), 1315–1326 (1999).

28. K. Amolins, Y. Zhang, and P. Dare, “Wavelet based image fusion techniques–An introduction, review and comparison,” Photogramm. Eng. Remote Sens. 62(1), 249–263 (2007).

29. S. J. Sangwine, “Fourier transforms of colour images using quaternion, or hypercomplex numbers,” Electron.

Lett. 32(1), 1979–1980 (1996).

30. S. C. Pei and C. M. Cheng, “Color image processing by using binary quaternion-moment-preserving thresholding technique,” IEEE Trans. Image Process. 8(5), 614–628 (1999).

31. T. A. Ell and S. J. Sangwine, “Hypercomplex Fourier transforms of color images,” IEEE Trans. Image Process.

16(1), 22–35 (2007).

32. D. S. Alexiadis and G. D. Sergiadis, “Estimation of motions in color image sequences using hypercomplex Fourier transforms,” IEEE Trans. Image Process. 18(1), 168–186 (2009).

33. S. J. Sangwine, T. A. Ell, and N. L. Bihan, “Fundamental representations and algebraic properties of biquater-

nions or complexified quaternions,” Adv. Appl. Clifford Algebras 21(3), 607–636 (2011).

34. L. Q. Guo and M. Zhu, “Quaternion Fourier-Mellin moments for color images,” Pattern Recogn. 44(2), 187–195

(2011).

35. B. J. Chen, H. Z. Shu, H. Zhang, G. Chen, C. Toumoulin, J. L. Dillenseger, and L. M. Luo, “Quaternion Zernike

moments and their invariants for color image analysis and object recognition,” Signal Process. 92(2), 308–318

(2012).

36. S. Sangwine and N. L. Bihan, “Quaternion toolbox for Matlab,” http://qtfm.sourceforge.net

37. E. J. Candès and D. L. Donoho, “Continuous curvelet transform I. Resolution of the wavefront set,” Appl. Comput. Harmon. Anal. 19(2), 162–197 (2005).

38. E. J. Candès and D. L. Donoho, “Continuous curvelet transform II. Discretization and frames,” Appl. Comput. Harmon. Anal. 19(2), 198–222 (2005).

39. E. J. Candès, L. Demanet, D. L. Donoho, and L. Ying, “The curvelet transform website,” http://www.curvelet.org

40. E. J. Candès, L. Demanet, D. L. Donoho, and L. Ying, “Fast discrete curvelet transforms,” Multiscale Model. Simul. 5(3), 861–899 (2006).

41. Y. Yuan, J. Zhang, B. Chang, and Y. Han, “Objective quality evaluation of visible and infrared color fusion

image,” Opt. Eng. 50(3), 033202 (2011).

42. M. Douze, “Blur image data,” http://lear.inrialpes.fr/people/vandeweijer/data.html

43. Helicon Soft, “Helicon Focus Sample images,” http://www.heliconsoft.com/focus samples.html

1. Introduction

Image fusion is commonly described as the task of combining information from multiple images of the same scene. The fused image is suitable for human and machine perception or further processing, such as segmentation and object recognition. Our investigation focuses mainly on multifocus color image fusion, which is an important branch of image fusion.


It is difficult to get an image with all objects in focus, which is mainly caused by the limited depth of field of the camera lens. A multifocus image fusion algorithm can solve this problem and create an image with most of the objects in focus. However, most multifocus image fusion methods [1–20] focus on grayscale images; only a few studies [21–25] address color images.

Most of the existing multifocus color image fusion algorithms belong to the spatial domain methods [21–23], that is, the computation of the focus measure and the fusion process are carried out in the spatial domain. Shi et al. proposed a method operating in both the spatial and frequency domains [24], namely the focus measure is computed in the frequency domain and the fusion process is carried out in the spatial domain. Besides, Chen et al. proposed the bidimensional empirical mode decomposition (BEMD) algorithm for the fusion of color microscopic images [25]. Fusion algorithms based on the discrete wavelet transform (DWT) for grayscale images were reported in [26–28]. For color image fusion, one can apply the DWT based method to each color channel separately and then combine the three fused color channels.

The generic schematic diagram of the spatial domain fusion methods is given in Fig. 1. These methods consist of the following steps. First, decompose the two source images into smaller blocks. Second, compute the focus measure for each block. Then, compare the focus measures of two corresponding blocks, and select the blocks with the bigger focus measure as the corresponding blocks of the fused image. In the second step, the energy of image gradient (EOG), energy of Laplacian (EOL), spatial frequency (SF) or sum-modified-Laplacian (SML) can serve as the focus measure. The experimental results in [22] show that the SML with an optimized block size performs best among these focus measures. The index of fuzziness can also serve as a focus measure [23].
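As a concrete illustration, the block-based scheme just described can be sketched as follows. This is a minimal sketch rather than code from any of the cited papers; it uses EOG as the focus measure and a fixed block size, both of which are illustrative choices:

```python
import numpy as np

def eog(block):
    # Energy of image gradient: sum of squared finite differences.
    gx = np.diff(block, axis=1)
    gy = np.diff(block, axis=0)
    return np.sum(gx**2) + np.sum(gy**2)

def block_fusion(a, b, bs=8):
    """Fuse two same-size grayscale images block by block, keeping
    the block with the larger focus measure in each position."""
    fused = np.empty_like(a)
    h, w = a.shape
    for i in range(0, h, bs):
        for j in range(0, w, bs):
            pa = a[i:i+bs, j:j+bs]
            pb = b[i:i+bs, j:j+bs]
            fused[i:i+bs, j:j+bs] = pa if eog(pa) >= eog(pb) else pb
    return fused
```

Because whole blocks are copied, any block straddling the focus border drags its blurred pixels into the fused result, which is exactly the blur problem discussed below.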

[Fig. 1 diagram: source images A and B are partitioned into blocks, the focus measure of each block is calculated, and the blocks with the maximum measure are chosen to form the fused image.]

Fig. 1. Spatial domain based multifocus color image fusion scheme.

The spatial domain fusion methods have the virtue of low computational complexity. However, these methods suffer from image blur, which is mainly caused by the block decomposition of the source images. In the decomposition process, there must exist some blocks containing both distinct and blurred regions. No matter what fusion rule is adopted, the blurred regions will unavoidably appear in the fused image. This situation becomes worse when the border between the distinct and blurred regions in the source images is not a straight line (as in the source multifocus color images of the experiment section).

The IHS integrated with SWT (stationary wavelet transform) based fusion algorithm is carried out in both the spatial and frequency domains [24]. The block diagram of this fusion algorithm is given in Fig. 2.

The detailed procedure is as follows. First, perform the IHS transform on the source images to get the I components. Second, perform the stationary wavelet transform on the I components to get the multiresolution coefficients. Then, compute the focus measure as the summation of


[Fig. 2 diagram: source images A and B undergo the IHS transform; the SWT of the I components IA and IB yields multiresolution representations, from which a decision map F is built and refined by consistency verification to produce the fused image.]

Fig. 2. Spatial and frequency domains based multifocus color image fusion scheme.

weighted coefficients for the corresponding pixel of each source image. Find the maximum focus measure of the corresponding pixels to get a decision map. This map decides whether each pixel of the fused image comes from source image A or B. Finally, the fused color image is obtained through a consistency verification procedure in the spatial domain.
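The pixel-wise selection step can be sketched as below. The focus-measure computation itself (sums of weighted SWT coefficients) and the consistency verification are omitted; `fuse_by_decision_map` and its array layout are our own illustrative assumptions, not code from [24]:

```python
import numpy as np

def fuse_by_decision_map(src_a, src_b, fm_a, fm_b):
    """Pixel-wise fusion: fm_a and fm_b are per-pixel focus measures
    (H x W); src_a and src_b are color images (H x W x 3).  Each fused
    pixel is copied from whichever source is sharper at that location."""
    decision = fm_a >= fm_b                       # True -> take pixel from A
    fused = np.where(decision[..., None], src_a, src_b)
    return fused, decision
```

A consistency verification pass (e.g. a majority filter on the decision map) would typically follow to remove isolated misclassified pixels.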

The IHS and SWT based method processes the source images in a pixel-by-pixel manner, which is different from the region based spatial domain methods [21–23]. This method gives a new way for multifocus color image fusion. However, as the experimental results show, it cannot completely overcome the image blur problem. This is mainly caused by the spatial domain fusion process, that is, the pixels of the fused color image are directly obtained from the source images.

As an alternative approach, the BEMD based method was proposed for color microscopic images [25]. The source color microscopic images are transformed into the YIQ color space. Then the bidimensional empirical mode decomposition is performed on each Y component to get the residue and IMF components. The local significance principle fusion rule is applied to fuse the IMF components, and the principal component analysis rule is applied to fuse the residue, I and Q components separately. Then, the fused Y component is recovered by the inverse BEMD. In the end, the final fused color image is obtained by the inverse YIQ transform.

The BEMD based method can deal with multifocus microscopic images successfully. However, as the experimental results show, this method cannot give ideal fusion results for outdoor scene multifocus color images.

In summary, the existing multifocus color image fusion algorithms face the problem of image blur. In addition, except for the BEMD based algorithm, the above multifocus color image fusion algorithms share a common drawback: they cannot realize the fusion of multiple color images.

The contribution of this paper is to design a multifocus color image fusion algorithm that overcomes the problems mentioned above. The anisotropic scaling principle of the curvelets and the quaternion based multiresolution representation procedure can eliminate the blurred regions in the fused color image.

Based on the above motivations, we combine the quaternion with the curvelet transform and define the quaternion curvelet transform (QCT) for color images. A novel color image fusion algorithm based on the quaternion curvelet transform is proposed. Through the quaternion curvelet based multiresolution analysis, the proposed method avoids the drawback of image blur and performs better than the existing methods.

The paper is structured as follows. In section 2 we first give a brief introduction of the quaternion. Then, we propose the quaternion curvelet transform for color images. Next, section 3 introduces the color image fusion algorithm based on the quaternion curvelet transform. In addition, we introduce the objective assessment metrics in section 4. The performance of the proposed fusion method is evaluated via several experiments in section 5, where the comparison with the existing methods is also discussed. Finally, conclusions are drawn in section 6.

2. Preliminaries

2.1. Quaternion

In recent years, the quaternion has been used increasingly in the color image processing domain [29–36]. The quaternion algebra H, a type of hypercomplex number system, was formally introduced by Hamilton in 1843. It is an associative, noncommutative four-dimensional algebra

H = {q = q_r + q_i i + q_j j + q_k k | q_r, q_i, q_j, q_k ∈ R}   (1)

where i, j, k are complex operators obeying the following rules

i² = j² = k² = −1   (2)

ij = −ji = k,  jk = −kj = i,  ki = −ik = j   (3)

A quaternion can be regarded as the composition of a scalar part and a vector part: q = S(q) + V(q), where S(q) = q_r and V(q) = q_i i + q_j j + q_k k. If a quaternion q has a zero scalar part (q_r = 0), then q is called a pure quaternion, and if q has a unit norm (∥q∥ = 1), then q is called a unit pure quaternion.

The norm of q is defined as

∥q∥ = √(q q̄) = √(q_r² + q_i² + q_j² + q_k²)   (4)

In [29], Sangwine proposed to encode the three channel components of an RGB image in the three imaginary parts of a pure quaternion as follows

f(m,n) = f_R(m,n) i + f_G(m,n) j + f_B(m,n) k   (5)

where f_R(m,n), f_G(m,n) and f_B(m,n) are the red, green and blue components of the pixel, respectively. The advantage of the quaternion-type representation is that a color image can be treated in a holistic manner.
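A minimal sketch of the quaternion operations above: the Hamilton product of Eqs. (2)–(3), the norm of Eq. (4), and the pure-quaternion image encoding of Eq. (5). The function names and the 4-channel array layout are our own illustrative choices:

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of two quaternions given as (r, i, j, k) 4-vectors."""
    ar, ai, aj, ak = a
    br, bi, bj, bk = b
    return np.array([
        ar*br - ai*bi - aj*bj - ak*bk,   # scalar part
        ar*bi + ai*br + aj*bk - ak*bj,   # i part
        ar*bj - ai*bk + aj*br + ak*bi,   # j part
        ar*bk + ai*bj - aj*bi + ak*br,   # k part
    ])

def qnorm(q):
    # Eq. (4): Euclidean norm of the four components.
    return np.sqrt(np.sum(np.asarray(q, float)**2))

def encode_rgb(img):
    """Encode an HxWx3 RGB image as a pure-quaternion field per Eq. (5):
    zero scalar part, with R, G, B on the i, j, k axes."""
    h, w, _ = img.shape
    out = np.zeros((h, w, 4))
    out[..., 1:] = img
    return out
```

For example, `qmul` reproduces the rule ij = k, and `encode_rgb` returns an array whose scalar channel is identically zero.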

2.2. Quaternion curvelet transform

One of the primary tasks in computer vision is to obtain a directional representation of the

features from an image. In general, those features are the irregular or anisotropic lines and

edges. Therefore, a directional multiscale representation method is desirable in this field.

Wavelet transforms and multiresolution ideas permeate many fields, especially the signal and image processing domain. However, the two-dimensional (2-D) discrete wavelet transform (DWT) cannot represent anisotropic features well in 2-D space, mainly because the 2-D DWT does not provide good direction selectivity.

During the past decades, multiscale geometric transforms have started a new stage of image processing. One of them is the curvelet transform, introduced by Candès and Donoho [37–40]. The curvelet has the properties of anisotropy and directionality, which make it an optimal basis for the representation of smooth curves in an image, such as edges and region boundaries. A more detailed description of the curvelet transform can be found in [37–40].

In this paper, we generalize the traditional curvelet transform from real and complex numbers to the quaternion algebra and define the quaternion curvelet transform (QCT) for color images. The construction of the quaternion valued curvelets φ_{j,k,l}(x) is similar to the traditional one, given by


φ_{j,k,l}(x) = φ_j( R_{θ_l} ( x − x_k^{(j,l)} ) )   (6)

where φ_{j,k,l}(x) is obtained by the translation, rotation and parabolic scaling of the mother curvelet φ(x), and x = (x_1, x_2).

In Eq. (6), the parabolic scaling is given by

φ_j(x_1, x_2) = φ(2^{2j} x_1, 2^j x_2),  j = 0, 1, 2, …   (7)

R_{θ_l}(·) represents the rotation transform by an angle θ_l = 2π · 2^{−j} · l, with l = 0, 1, 2, … at scale j, given by

[x′_1, x′_2]ᵀ = [cos θ_l, sin θ_l; −sin θ_l, cos θ_l] [x_1, x_2]ᵀ   (8)

x_k^{(j,l)} represents the amount of translation at scale j and direction l, given by

x_k^{(j,l)} = R_{θ_l}^{−1}(k_1 2^{−j}, k_2 2^{−j/2})   (9)

where R_{θ_l}^{−1} represents the inverse of the rotation matrix in Eq. (8), and (k_1, k_2) ∈ Z².

The set of quaternion valued curvelets {φ_{j,k,l}(x)} is a tight frame system. A quaternion valued square integrable function f ∈ L²(R², H) can be expanded as a quaternion curvelet series, given by

f(x) = Σ_j Σ_l Σ_k ⟨ f(x), φ_{j,k,l}(x) ⟩ φ_{j,k,l}(x)   (10)

where

⟨ f(x), φ_{j,k,l}(x) ⟩ = ∫_{R²} f(x) φ̄_{j,k,l}(x) dx   (11)

is defined as the quaternion curvelet transform.
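The geometry of Eqs. (8)–(9) can be sketched numerically as follows. `curvelet_grid` is an illustrative helper of our own (not part of any toolbox), and it uses the fact that a 2-D rotation matrix satisfies R^{−1} = Rᵀ:

```python
import numpy as np

def rotation(theta):
    # Rotation matrix of Eq. (8).
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s], [-s, c]])

def curvelet_grid(j, l, k1, k2):
    """Angle theta_l and translation point x_k^{(j,l)} of Eq. (9) for
    scale j, direction l and translation indices (k1, k2)."""
    theta = 2.0 * np.pi * 2.0**(-j) * l
    # R^{-1} = R^T for a rotation matrix.
    xk = rotation(theta).T @ np.array([k1 * 2.0**(-j), k2 * 2.0**(-j / 2)])
    return theta, xk
```

The anisotropic spacing 2^{−j} versus 2^{−j/2} in the two coordinates is the parabolic scaling that lets curvelets follow curved edges efficiently.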

We now give a detailed procedure for the implementation of the quaternion curvelet transform. Figure 3 is the generic schematic diagram of the forward and inverse QCT.

[Fig. 3 diagram: forward path — quaternion valued color image → qfft → color image in QFT domain → windowing and wrapping → iqfft → quaternion valued curvelet coefficients; the inverse path applies qfft, windowing and unwrapping, and iqfft to reconstruct the color image.]

Fig. 3. Block diagram of the forward and inverse quaternion curvelet transform.

The architecture of the digital implementation of the QCT is similar to that of the traditional curvelet transform [40], where the coefficients are obtained via the frequency domain. As we can see from Fig. 3, the QCT is performed in the following four steps:


Step 1. Represent the color image in quaternion form and select a unit pure quaternion μ for the quaternion Fourier transform (QFT) [31]. In this paper we specify μ = (√3/3) i + (√3/3) j + (√3/3) k, and use the “qfft” and “iqfft” functions in the Quaternion Toolbox [36], respectively, for fast computation of the forward and inverse QFT.

Step 2. Apply the 2-D quaternion Fourier transform to obtain the quaternion based frequency samples.

Step 3. Interpolate the quaternion Fourier coefficients for each scale j and angle θ_l. Multiply the interpolated object with the parabolic window, then wrap the result around the origin. This step is similar to the realization of the traditional curvelet transform, and the detailed windowing and wrapping procedures can be found in [40].

Step 4. Apply the inverse 2-D quaternion Fourier transform on each windowed scale and angle to obtain the final QCT coefficients.

Performing the above steps in reverse, we obtain the inverse quaternion curvelet transform (IQCT). More specifically, we first perform the 2-D quaternion Fourier transform on the QCT coefficients to obtain the frequency domain samples. Then we multiply the samples by the corresponding wrapped quaternion curvelet, and unwrap the multiplication results to get the reconstructed color image in the QFT domain. Finally, we perform the inverse 2-D quaternion Fourier transform to get the recovered color image.
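The toolbox's “qfft” is a fast implementation; to illustrate the definition behind Steps 1–2 and the exact invertibility the IQCT relies on, a direct-summation one-dimensional left-sided QFT with the axis μ = (i + j + k)/√3 can be sketched as follows. This is an O(N²) teaching sketch of our own, not the toolbox code:

```python
import numpy as np

def qmul(a, b):
    # Hamilton product; quaternions stored as length-4 arrays (r, i, j, k).
    ar, ai, aj, ak = a
    br, bi, bj, bk = b
    return np.array([ar*br - ai*bi - aj*bj - ak*bk,
                     ar*bi + ai*br + aj*bk - ak*bj,
                     ar*bj - ai*bk + aj*br + ak*bi,
                     ar*bk + ai*bj - aj*bi + ak*br])

MU = np.array([0.0, 1.0, 1.0, 1.0]) / np.sqrt(3.0)  # unit pure quaternion axis

def qexp(theta):
    # exp(MU * theta) = cos(theta) + MU * sin(theta)
    q = MU * np.sin(theta)
    q[0] = np.cos(theta)
    return q

def qft(signal, sign=-1.0):
    """Direct left-sided 1-D QFT: F[u] = sum_n exp(sign*MU*2*pi*u*n/N) q[n]."""
    n = len(signal)
    out = []
    for u in range(n):
        acc = np.zeros(4)
        for m in range(n):
            acc += qmul(qexp(sign * 2.0 * np.pi * u * m / n), signal[m])
        out.append(acc)
    return out

def iqft(spectrum):
    # Inverse: same sum with the opposite sign, scaled by 1/N.
    n = len(spectrum)
    return [f / n for f in qft(spectrum, sign=+1.0)]
```

Since all exponentials share the single axis μ, they multiply like complex exponentials, so `iqft(qft(x))` recovers `x` exactly, mirroring the perfect reconstruction of the forward/inverse QCT pipeline.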

The QCT can process not only real and complex signals, but also quaternion ones. If we separate a color image into three scalar images and compute the curvelet transforms of these images separately, the potential color information will be corrupted. In this work, we instead compute a single, holistic curvelet transform which treats a color image as a vector field. The advantage of our method is that the color information is preserved to the greatest extent.


3. QCT based multifocus color image fusion algorithm

This section mainly discusses the proposed QCT based multifocus color image fusion algo-

rithm. The overview schematic diagram of the proposed fusion algorithm is shown in Fig. 4.

[Fig. 4 diagram: each of the n quaternion valued source images is decomposed by the QCT into fine levels and a coarse level; the fusion rule produces fused fine levels and a fused coarse level, and the IQCT yields the fused image.]

Fig. 4. Block diagram of QCT based multifocus color image fusion scheme.

As we can see from Fig. 4, the detailed fusion procedure is as follows. First, a multiscale representation is constructed for each source color image by applying the quaternion curvelet transform. Each color image is thus described by a set of quaternion-valued curvelet


coefficients, including the coarse and fine levels. Then, the image fusion rules are applied to each of these levels independently to construct a combined multiresolution representation. In the end, the fused color image is obtained by performing the inverse quaternion curvelet transform.

The architecture in Fig. 4 is similar to that of the DWT based fusion algorithms in [26–28]. However, the proposed fusion algorithm is not a simple substitution of the DWT with the QCT; rather, it changes the manner of processing. More specifically, the DWT cannot process a color image directly, but can only process the R, G and B color channels separately, whereas the QCT processes the color image in a holistic manner. This holistic processing preserves the color information to the greatest extent.

The fusion rules are the most important step in the whole process. Owing to their different physical meanings, the coefficients of the coarse and fine levels are treated by different rules in the fusion process. The fine level coefficients represent the detailed information of an image. A larger norm of a fine level coefficient corresponds to sharp intensity changes in an image, such as edges or region boundaries. Therefore, the fusion of the fine level coefficients is based on the maximum selection rule, that is

C¹_F(i,j) = { C¹_A(i,j),  if ∥C¹_A(i,j)∥ ≥ ∥C¹_B(i,j)∥
            { C¹_B(i,j),  otherwise   (12)

where C¹_A(i,j), C¹_B(i,j) and C¹_F(i,j) are the fine level QCT coefficients of the two source color images A, B and the final fused image, respectively.

The coarse level coefficients are the approximate representation of an image and inherit its properties, such as the mean intensity. In order to stress the details in the multifocus color image and enhance the image contrast, we use the minimum selection rule to fuse this level, that is

C²_F(i,j) = { C²_A(i,j),  if ∥C²_A(i,j)∥ ≤ ∥C²_B(i,j)∥
            { C²_B(i,j),  otherwise   (13)

where C²_A(i,j), C²_B(i,j) and C²_F(i,j) represent the coarse level QCT coefficients of the two source color images A, B and the final fused image, respectively.

We can generalize the above maximum (minimum) selection rule to the fusion of multiple color images, that is, select the coefficients with the biggest (smallest) norm among the n source color images as the corresponding coefficients of the fused color image.
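The selection rules of Eqs. (12) and (13) reduce to a per-coefficient norm comparison. Assuming the coefficients are stored as arrays whose last axis holds the four quaternion channels (our layout, not one prescribed by the paper), a sketch is:

```python
import numpy as np

def fuse_fine(ca, cb):
    """Eq. (12): keep the fine-level coefficient with the LARGER norm.
    ca, cb have shape (..., 4) holding the quaternion channels."""
    take_a = np.linalg.norm(ca, axis=-1) >= np.linalg.norm(cb, axis=-1)
    return np.where(take_a[..., None], ca, cb)

def fuse_coarse(ca, cb):
    """Eq. (13): keep the coarse-level coefficient with the SMALLER norm."""
    take_a = np.linalg.norm(ca, axis=-1) <= np.linalg.norm(cb, axis=-1)
    return np.where(take_a[..., None], ca, cb)
```

The n-image generalization mentioned above amounts to replacing the pairwise comparison with an argmax (argmin) over the stacked norms of all n coefficient arrays.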

4. Performance evaluation metrics

The assessment of the fused color image quality is a necessary procedure. The performance of image fusion algorithms is usually assessed using both subjective and objective measures. The subjective measure relies on human comprehension and gives a qualitative analysis of the fused color image, while the objective measure can overcome the influence of mentality, human vision and knowledge, and gives a quantitative analysis of the effectiveness of color image fusion.

For the subjective measure, we mainly check by visual observation whether blur occurs in the fused color image. However, an objective metric for the quality assessment of fused color images is still a challenging issue. As far as we know, there is no generally accepted objective evaluation measure. This problem partially lies in the difficulty of defining an ideal fused image. In this paper, we mainly use the subjective measure, with the objective measure as a supplement, for the quality assessment of the fused color image.

In this paper, the objective evaluation metric is computed in the CIE L∗a∗b∗ (CIE 1976) color space. The reason for selecting this color space is that L∗a∗b∗ is designed to


approximate human vision. In other words, the L∗a∗b∗ color space is perceptually uniform, and its L∗ component closely matches the human perception of lightness. Here, we give a brief introduction of the objective evaluation metric used in this paper.

As we know, a blurred image exhibits low contrast, so we select the image contrast metric (ICM) proposed by Yuan et al. [41] to evaluate image blur. The ICM is defined based on both gray and color histogram characteristics. In the computation of the ICM, we first convert the fused color image to a grayscale one, and use the following formula to compute the gray contrast metric C_g

C_g = α_I Σ_{k=0}^{N_I−1} (I_k / N_I) P(I_k)   (14)

where P(I_k) is the gray histogram, and α_I is the dynamic range metric of the gray histogram.

We employ the L∗ channel of the CIE L∗a∗b∗ color space to evaluate the color contrast C_c, given by

C_c = α_L Σ_{k=0}^{N_L−1} (L_k / N_L) P(L_k)   (15)

where P(L_k) is the histogram of the L∗ channel in the CIE L∗a∗b∗ color space, and α_L is the dynamic range metric of the color histogram.

Based on Eq. (14) and Eq. (15), the ICM is computed by

ICM = (0.5 × C_g² + 0.5 × C_c²)^{1/2}   (16)

A large ICM means better contrast performance and less blur in the fused color image, and thus indicates a better fusion result.
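Eqs. (14)–(16) can be sketched as below. The dynamic range metrics α_I and α_L are defined in [41], so they are taken here as given parameters, and the grayscale and L∗ channels are assumed to be precomputed:

```python
import numpy as np

def contrast(channel, alpha, nbins=256):
    """Weighted-histogram contrast of Eqs. (14)-(15):
    C = alpha * sum_k (k / nbins) * P(k), with P the normalized histogram.
    alpha (the dynamic range metric of [41]) is passed in, not computed."""
    hist, _ = np.histogram(channel, bins=nbins, range=(0, nbins))
    p = hist / hist.sum()
    k = np.arange(nbins)
    return alpha * np.sum(k / nbins * p)

def icm(gray, lstar, alpha_i=1.0, alpha_l=1.0):
    # Eq. (16): quadratic mean of the gray and color contrasts.
    cg = contrast(gray, alpha_i)
    cc = contrast(lstar, alpha_l)
    return np.sqrt(0.5 * cg**2 + 0.5 * cc**2)
```

With both channels concentrated at high gray levels the metric approaches its maximum, matching the intuition that a sharp, high-contrast fusion scores higher.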

5. Experimental results and comparisons

In this section, several experiments are carried out on naturally obtained multifocus color im-

ages to evaluate and compare the performance of the proposed QCT based method with the

typical multifocus color image fusion schemes mentioned in section 1. The fusion results are

shown for each experiment, and the assessment of the fused color image quality using both

subjective and objective measures is also presented.

In references [21–24], the authors only use artificially obtained multifocus color images to evaluate the performance of their algorithms. The border between the distinct and blurred regions in this type of multifocus color image is a straight line. However, this is not sufficient to illustrate the efficiency of an algorithm. In this paper, three groups of source color images were obtained by defocusing the camera lens. The border between the distinct and blurred regions of naturally obtained multifocus color images, such as Fig. 5, is a set of curves.

In the first two experiments, we demonstrate the effectiveness of the proposed QCT based algorithm for blur elimination in outdoor scene color images. The comparison of the IHS combined with SWT based method, the fuzzy based method, the SML based method, the BEMD based method and the DWT based method with the proposed method is also presented. The fusion of multiple multifocus color images using the proposed method is carried out in the last experiment.

In the first experiment, the source multifocus color images, as shown in Fig. 5, are captured from an outdoor scene and can be downloaded from the website [42]. The right corner part of Fig. 5(a) is distinct, whereas the rest is blurred. On the contrary, as shown in Fig. 5(b), the right corner part is blurred, whereas the rest is distinct.

Figure 6 shows the results from the existing fusion methods and the proposed method, where

(a), (b), (c), (d), (e) and (f) are the results from the IHS combined with SWT based method,


the fuzzy based method, the SML based method, the BEMD based method, the DWT based

method and the proposed method, respectively. Figure 7 shows subimages of the corresponding fused images to display the detailed information for subjective evaluation.


Fig. 5. The source multifocus color images.


Fig. 6. Fusion results of different methods. (a) IHS and SWT based fusion result; (b) Fuzzy based fusion result; (c) SML based fusion result; (d) BEMD based fusion result; (e) DWT based fusion result; (f) proposed QCT based fusion result.

Now we give a subjective assessment of the different fusion methods. The border between the distinct and blurred regions of the source color images is a set of curves. The corresponding parts in the fused images, as we can see from Fig. 7(a) and Fig. 7(c), exhibit a blur effect. Saw-tooth edges occur in Fig. 7(c), which is mainly caused by the block decomposition procedure. We can also find that the blur effect occurs over the whole image in Fig. 7(b). As for the BEMD based method, Fig. 7(d) exhibits a mild blur effect. The DWT based method, as

#170574 - $15.00 USD

(C) 2012 OSA

Received 13 Jun 2012; revised 18 Jul 2012; accepted 24 Jul 2012; published 1 Aug 2012

10 September 2012 / Vol. 20, No. 19 / OPTICS EXPRESS 18855

Page 11

(a) (b)(c)

(d) (e)(f)

Fig. 7. Subimages taken from Fig. 6.

based fusion result; (c) SML based fusion result; (d) BEMD based fusion result; (e) DWT

based fusion results; (f) proposed QCT based fusion result.

(a) IHS and SWT based fusion result; (b) Fuzzy

shown in Fig. 7(e), still face the blur problem. So, the previous multifocus color image fusion

algorithms can not excluded the blurred regions from the source image. In comparison, We can

hardly find the blur by visual observation for the fused image obtained by the proposed method.

Fig. 8. Objective evaluation (ICM measure, "evaluation of image blur") of the images in Fig. 6. ICM values: IHS+SWT 0.27587; fuzzy 0.24387; SML 0.27656; BEMD 0.29437; DWT 0.30289; QCT 0.32282.

Figure 8 shows the corresponding objective evaluation results. It clearly indicates that the ICM value of the proposed method is the maximum. Therefore, we can say that the proposed QCT based method performs better than the other five methods through objective evaluation.
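As a minimal illustration of the objective comparison, the per-method ICM scores printed in Fig. 8 can be ranked programmatically; the dictionary keys below are shorthand labels for the six methods:

```python
# ICM ("evaluation of image blur") scores for the first experiment,
# as printed in Fig. 8; a higher value indicates less residual blur.
scores = {
    "ihs+swt": 0.27587,
    "fuzzy": 0.24387,
    "sml": 0.27656,
    "bemd": 0.29437,
    "dwt": 0.30289,
    "qct": 0.32282,
}

# The best-performing method is the one with the maximum ICM score.
best = max(scores, key=scores.get)
print(best)  # the proposed QCT based method
```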

In the second experiment, the source multifocus color images, as shown in Fig. 9, are also selected from the website [42]. The foreground of Fig. 9(a) is distinct, whereas the background is blurred. Conversely, as shown in Fig. 9(b), the foreground is blurred, whereas the background is distinct.


The fusion results from the existing methods and the proposed fusion method are shown in Fig. 10, and Fig. 11 shows subimages of the corresponding fused images.

Fig. 9. The source multifocus color images.

Fig. 10. Fusion results of different methods. (a) IHS and SWT based fusion result; (b) fuzzy based fusion result; (c) SML based fusion result; (d) BEMD based fusion result; (e) DWT based fusion result; (f) proposed QCT based fusion result.

We can see that the background in Fig. 11(b) is blurred. For the other methods, as shown in Figs. 11(a), 11(c), 11(d) and 11(e), the corresponding fused color images are blurred to some extent. In comparison, we can hardly find any blurring effect by visual observation in the fused image obtained from the proposed method.

Figure 12 shows the objective evaluation results of the different fusion methods. It clearly indicates that the proposed QCT based method performs better than the other five methods through objective evaluation.


Fig. 11. Subimages taken from Fig. 10. (a) IHS and SWT based fusion result; (b) fuzzy based fusion result; (c) SML based fusion result; (d) BEMD based fusion result; (e) DWT based fusion result; (f) proposed QCT based fusion result.

Fig. 12. Objective evaluation (ICM measure, "evaluation of image blur") of the images in Fig. 10. ICM values: IHS+SWT 0.22202; fuzzy 0.17129; SML 0.22348; BEMD 0.23799; DWT 0.23953; QCT 0.24334.

The purpose of the last experiment is to demonstrate the effectiveness of the proposed QCT based method in fusing multiple multifocus color images. As far as we know, except for the BEMD based method and the DWT based method, the existing multifocus color image fusion algorithms cannot fuse multiple images. The proposed method can do this easily.

The source multifocus microscopic color images are shown in Fig. 13 and can be downloaded from the website [43]. Figure 14 shows the results of the different methods, where (a), (b) and (c) are the results from the BEMD based method, the DWT based method and the proposed method, respectively.

We can see from Fig. 14(b) that the blurring effect occurs over the whole image for the DWT based method. Comparing the BEMD based method and the proposed method, as shown in Figs. 14(a) and 14(c), the fused image obtained by the proposed method is much more distinct than the BEMD based fusion result. The subjective analysis is also validated by the objective evaluation results shown in Fig. 14(d). To sum up, the proposed method can be a useful tool in multiple multifocus color image fusion tasks.

Fig. 13. Source multifocus color images for multiple image fusion.

Fig. 14. Multiple image fusion results. (a) BEMD based fusion result; (b) DWT based fusion result; (c) QCT based fusion result; (d) objective evaluation of (a), (b) and (c). ICM values: BEMD 0.37459; DWT 0.29111; QCT 0.40541.

According to the stage at which the information is combined, color image fusion algorithms can be classified as pixel-level or region-level. The IHS combined with SWT based method, the BEMD based method, the DWT based method and the proposed method are pixel-level. The SML based method and the fuzzy based method are region-level. The above experimental results show that the pixel-level methods perform better than the region-level ones.
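To make the pixel-level vs. region-level distinction concrete, the following is a toy region-level scheme in the spirit of the block-based methods above. It is only a sketch: local variance is used as a stand-in focus measure (the actual SML based method uses the sum-modified-Laplacian), and the function name and block size are illustrative choices:

```python
import numpy as np

def region_level_fuse(img_a, img_b, block=8):
    """Toy region-level fusion of two grayscale images of equal size:
    split the images into blocks and copy each block from whichever
    source image has the higher focus measure (here: variance)."""
    fused = np.empty_like(img_a)
    h, w = img_a.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            ba = img_a[i:i + block, j:j + block]
            bb = img_b[i:i + block, j:j + block]
            # Higher variance is taken to mean the block is in focus.
            fused[i:i + block, j:j + block] = ba if ba.var() >= bb.var() else bb
    return fused
```

Because the decision is made per block rather than per pixel, a block that straddles the focused/defocused border is copied wholesale, which is one way the saw-tooth edges noted for the block-based result in Fig. 7(c) can arise.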

The above experimental results show that the proposed QCT based algorithm performs best, whether fusing the outdoor scene images or the microscopic images. The previous fusion algorithms cannot exclude the blurred regions of the source images well. The proposed fusion method appears best among all the results by visual analysis, and this is also validated by the objective evaluation results in the above discussion. In summary, we can conclude that the proposed QCT based multifocus color image fusion algorithm is successful in eliminating blur.

6. Conclusion

This paper first reviews the typical multifocus color image fusion algorithms, and then proposes a novel fusion approach using quaternion based curvelet multiresolution analysis. In the proposed method, each source color image is described by a set of quaternion-valued coefficients. Different image fusion rules are applied to the coarse and fine level coefficients to construct the final quaternion based multiresolution representation. The fused color image is obtained by the inverse quaternion curvelet transform.
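The coefficient-domain fusion step described above can be sketched as follows. This is an illustrative stand-in, not the paper's exact rule set: it assumes an averaging rule at the coarse level and a choose-max-magnitude rule at the fine levels, with plain NumPy arrays standing in for the quaternion-valued curvelet coefficients:

```python
import numpy as np

def fuse_coefficients(coarse_a, coarse_b, fine_a, fine_b):
    """Fuse two multiresolution decompositions coefficient-wise.

    coarse_*: low-frequency approximation arrays.
    fine_*: lists of high-frequency detail arrays (one per subband).
    Assumed rules: average the coarse coefficients; at each fine-level
    position keep the coefficient with the larger magnitude, since
    larger detail coefficients indicate the better-focused source.
    """
    fused_coarse = 0.5 * (coarse_a + coarse_b)
    fused_fine = []
    for da, db in zip(fine_a, fine_b):
        keep_a = np.abs(da) >= np.abs(db)
        fused_fine.append(np.where(keep_a, da, db))
    return fused_coarse, fused_fine
```

The fused image would then be recovered by applying the inverse transform to the fused coefficients; for quaternion-valued coefficients the same rules would be applied with the quaternion modulus in place of `np.abs`.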


Comparisons between the competing multifocus color image fusion methods and the proposed method are carried out by subjective and objective analysis. The experimental results indicate that the proposed method significantly mitigates the image blur problem and outperforms the previous methods. In addition, the proposed method can easily be generalized to the fusion of multiple color images. Therefore, the proposed QCT based algorithm can be a useful tool in multifocus color image fusion tasks. Our future work will focus on the use of the quaternion curvelet transform in the color image processing domain.

Acknowledgments

We thank the reviewers for their helpful comments and suggestions that improved the quality of this manuscript. Thanks are also extended to the editors for their work. This work was supported by the Major State Basic Research Development Program of China (973 Program, No. 2009CB72400102A) and the National Natural Science Foundation of China (No. 61203242).
