
Multifocus color image fusion based on

quaternion curvelet transform

Liqiang Guo,∗ Ming Dai, and Ming Zhu

Key Laboratory of Airborne Optical Imaging and Measurement,

Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences,

Changchun, Jilin Province 130033, China

∗math circuit@qq.com

Abstract: Multifocus color image fusion is an active research area in image processing, and many fusion algorithms have been developed. However, the existing techniques can hardly deal with the problem of image blur. This study presents a novel fusion approach that integrates the quaternion with the traditional curvelet transform to overcome this disadvantage. The proposed method uses a multiresolution analysis procedure based on the quaternion curvelet transform. Experimental results show that the proposed method is promising, and it significantly improves fusion quality compared with the existing fusion methods.

© 2012 Optical Society of America

OCIS codes: (100.0100) Image processing; (350.2660) Fusion.

References and links

1. X. Li, M. He, and M. Roux, “Multifocus image fusion based on redundant wavelet transform,” IET Image Pro-

cess. 4(4), 283–293 (2010).

2. S. Li, J. T. Kwok, and Y. Wang, “Multifocus image fusion using artificial neural networks,” Pattern Recogn.

Lett. 23(8), 985–997 (2002).

3. Z. Wang, Y. Ma, and J. Gu, “Multi-focus image fusion using PCNN,” Pattern Recogn. 43(6), 2003–2016 (2010).

4. Q. Zhang and B. Guo, “Multifocus image fusion using the nonsubsampled contourlet transform,” Signal Process.

89(7), 1334–1346 (2009).

5. W. Huang and Z. L. Jing, “Multifocus image fusion using pulse coupled neural network,” Pattern Recogn. Lett.

28(9), 1123–1132 (2007).

6. N. Wang, Y. Ma, and J. Gu, “Multi-focus image fusion algorithm based on shearlets,” Chin. Opt. Lett. 9(4),

041001 (2011).

7. N. Ma, L. Luo, Z. Zhou, and M. Liang, “A multifocus image fusion in nonsubsampled contourlet domain with

variational fusion strategy,” Proc. SPIE 8004, 800411 (2011).

8. W. Yajie and X. Xinhe, “A multifocus image fusion new method based on multidecision,” Proc. SPIE 6357,

63570G (2006).

9. R. Nava, B. E. Ramírez, and G. Cristóbal, “A novel multi-focus image fusion algorithm based on feature extrac-

tion and wavelets,” Proc. SPIE 7000, 700028 (2008).

10. I. De and B. Chanda, “A simple and efficient algorithm for multifocus image fusion using morphological

wavelets,” Signal Process. 86(5), 924–936 (2006).

11. S. Li and B. Yang, “Multifocus image fusion using region segmentation and spatial frequency,” Image Vis.

Comput. 26(7), 971–979 (2008).

12. H. Li, Y. Chai, H. Yin, and G. Liu, “Multifocus image fusion and denoising scheme based on homogeneity

similarity,” Opt. Commun. 285(2), 91–100 (2012).

13. Y. Chai, H. Li, and Z. Li, “Multifocus image fusion scheme using focused region detection and multiresolution,”

Opt. Commun. 284(19), 4376–4389 (2011).

14. S. Gabarda and G. Cristóbal, “Multifocus image fusion through pseudo-Wigner distribution,” Opt. Eng. 44(4),

047001 (2005).

15. P. L. Lin and P. Y. Huang, “Fusion methods based on dynamic-segmented morphological wavelet or cut and paste

for multifocus images,” Signal Process. 88(6), 1511–1527 (2008).

#170574 - $15.00 USD

(C) 2012 OSA

Received 13 Jun 2012; revised 18 Jul 2012; accepted 24 Jul 2012; published 1 Aug 2012

10 September 2012 / Vol. 20, No. 19 / OPTICS EXPRESS 18846


16. Y. Chai, H. F. Li, and M. Y. Guo, “Multifocus image fusion scheme based on features of multiscale products and

PCNN in lifting stationary wavelet domain,” Opt. Commun. 284(5), 1146–1158 (2011).

17. R. Redondo, F. Šroubek, S. Fischer, and G. Cristóbal, “Multifocus image fusion using the log-Gabor transform

and a Multisize Windows technique,” Inform. Fusion 10(2), 163–171 (2009).

18. F. Luo, B. Lu, and C. Miao, “Multifocus image fusion with trace-based structure tensor,” Proc. SPIE 8200,

82001G (2011).

19. A. Baradarani, Q. M. J. Wu, M. Ahmadi, and P. Mendapara, “Tunable halfband-pair wavelet filter banks and

application to multifocus image fusion,” Pattern Recogn. 45(2), 657–671 (2012).

20. Y. Chai, H. Li, and X. Zhang, “Multifocus image fusion based on features contrast of multiscale products in

nonsubsampled contourlet transform domain,” Optik 123(7), 569–581 (2012).

21. H. Zhao, Q. Li, and H. Feng, “Multi-focus color image fusion in the HSI space using the sum-modified-laplacian

and the coarse edge map,” Image Vis. Comput. 26(9), 1285–1295 (2008).

22. W. Huang and Z. Jing, “Evaluation of focus measures in multi-focus image fusion,” Pattern Recogn. Lett. 28(9),

493–500 (2007).

23. R. Maruthi, “Spatial Domain Method for Fusing Multi-Focus Images using Measure of Fuzziness,” Int. J. Com-

put. Appl. 20(7), 48–57 (2011).

24. H. Shi and M. Fang, “Multi-focus Color Image Fusion Based on SWT and IHS,” in Proceedings of IEEE Con-

ference on Fuzzy Systems and Knowledge Discovery (IEEE 2007), 461–465.

25. Y. Chen, L. Wang, Z. Sun, Y. Jiang, and G. Zhai, “Fusion of color microscopic images based on bidimensional

empirical mode decomposition,” Opt. Express 18(21), 21757–21769 (2010).

26. H. Li, B. S. Manjunath, and S. K. Mitra, “Multisensor image fusion using the wavelet transform,” Graphical

Models & Image Process. 57(3), 235–245 (1995).

27. Z. Zhang and R. S. Blum, “A categorization of multiscale-decomposition-based image fusion schemes with a

performance study for a digital camera application,” Proc. IEEE. 87(8), 1315–1326 (1999).

28. K. Amolins, Y. Zhang, and P. Dare, “Wavelet based image fusion techniques–An introduction, review and com-

parison,” Photogramm. Eng. Remote Sens. 62(1), 249–263 (2007).

29. S. J. Sangwine, “Fourier transforms of colour images using quaternion, or hypercomplex numbers,” Electron.

Lett. 32(1), 1979–1980 (1996).

30. S. C. Pei and C. M. Cheng, “Color image processing by using binary quaternion-moment-preserving thresholding

technique,” IEEE Trans. Image Process. 8(5), 614–628 (1999).

31. T. A. Ell and S. J. Sangwine, “Hypercomplex Fourier transforms of color images,” IEEE Trans. Image Process.

16(1), 22–35 (2007).

32. D. S. Alexiadis and G. D. Sergiadis, “Estimation of motions in color image sequences using hypercomplex

Fourier transforms,” IEEE Trans. Sig. Process. 18(1), 168–186 (2009).

33. S. J. Sangwine, T. A. Ell, and N. L. Bihan, “Fundamental representations and algebraic properties of biquater-

nions or complexified quaternions,” Adv. Appl. Clifford Algebras 21(3), 607–636 (2011).

34. L. Q. Guo and M. Zhu, “Quaternion Fourier-Mellin moments for color images,” Pattern Recogn. 44(2), 187–195

(2011).

35. B. J. Chen, H. Z. Shu, H. Zhang, G. Chen, C. Toumoulin, J. L. Dillenseger, and L. M. Luo, “Quaternion Zernike

moments and their invariants for color image analysis and object recognition,” Signal Process. 92(2), 308–318

(2012).

36. S. Sangwine and N. L. Bihan, “Quaternion toolbox for Matlab,” http://qtfm.sourceforge.net

37. E. J. Candès and D. L. Donoho, “Continuous curvelet transform I. Resolution of the wavefront set,” Appl. Com-

put. Harmon. Anal. 19(2), 162–197 (2005).

38. E. J. Candès and D. L. Donoho, “Continuous curvelet transform II. Discretization and frames,” Appl. Comput.

Harmon. Anal. 19(2), 198–222 (2005).

39. E. J. Candès, L. Demanet, D. L. Donoho, and L. Ying, “The curvelet transform website,” http://www.curvelet.org

40. E. J. Candès, L. Demanet, D. L. Donoho, and L. Ying, “Fast discrete curvelet transforms,” Multiscale Model.

Simul. 5(3), 861–899 (2006).

41. Y. Yuan, J. Zhang, B. Chang, and Y. Han, “Objective quality evaluation of visible and infrared color fusion

image,” Opt. Eng. 50(3), 033202 (2011).

42. M. Douze, “Blur image data,” http://lear.inrialpes.fr/people/vandeweijer/data.html

43. Helicon Soft, “Helicon Focus Sample images,” http://www.heliconsoft.com/focus samples.html

1. Introduction

Image fusion is commonly described as the task of combining information from multiple images of the same scene. The fused image is suitable for human and machine perception or further processing, such as segmentation and object recognition. Our investigation focuses mainly on multifocus color image fusion, which is an important branch of image fusion.


It is difficult to get an image with all objects in focus, mainly because of the limited depth of field of camera lenses. Multifocus image fusion algorithms can solve this problem and create an image with most of the objects in focus. However, most multifocus image fusion methods [1–20] focus on grayscale images; only a few studies [21–25] address color images.

Most of the existing multifocus color image fusion algorithms belong to the spatial domain method [21–23], that is, the computation of the focus measure and the fusion process are carried out in the spatial domain. Shi et al. proposed a method operating in both the spatial and frequency domains [24]: the focus measure is computed in the frequency domain and the fusion process is carried out in the spatial domain. Besides, Chen et al. proposed the bidimensional empirical mode decomposition (BEMD) algorithm for the fusion of color microscopic images [25]. Fusion algorithms based on the discrete wavelet transform (DWT) for grayscale images were reported in [26–28]. For color image fusion, we can apply the DWT based method to each color channel separately and then combine the three fused color channels.
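The channel-wise DWT fusion idea can be sketched as follows. This is a minimal illustration of our own, not the code of [26–28]: it uses a single-level Haar transform, averages the approximation coefficients, and applies a max-absolute-value rule to the detail coefficients.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar transform: returns (LL, LH, HL, HH)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse of haar_dwt2 (perfect reconstruction)."""
    h, w = ll.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2] = ll + lh; a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh; d[:, 1::2] = hl - hh
    out = np.empty((2 * h, 2 * w))
    out[0::2, :] = a + d; out[1::2, :] = a - d
    return out

def fuse_channel(c1, c2):
    """Fuse one color channel: average LL, max-abs rule on details."""
    b1, b2 = haar_dwt2(c1), haar_dwt2(c2)
    ll = (b1[0] + b2[0]) / 2.0
    details = [np.where(np.abs(x) >= np.abs(y), x, y)
               for x, y in zip(b1[1:], b2[1:])]
    return haar_idwt2(ll, *details)

def fuse_rgb(img1, img2):
    """Apply the single-channel fusion to R, G, B independently."""
    return np.dstack([fuse_channel(img1[..., c], img2[..., c])
                      for c in range(3)])
```

In practice more decomposition levels and longer wavelet filters are used; the sketch only shows the per-channel structure of the approach.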

The generic schematic diagram of the spatial domain fusion methods is given in Fig. 1. These methods consist of the following steps. First, decompose the two source images into smaller blocks. Second, compute the focus measure for each block. Then, compare the focus measures of the two corresponding blocks, and select the block with the larger focus measure as the corresponding block of the fused image. In the second step, we can use the energy of image gradient (EOG), energy of Laplacian (EOL), spatial frequency (SF), or sum-modified-Laplacian (SML) as the focus measure. The experimental results in [22] show that SML with an optimized block size performs best among these focus measures. Besides, the index of fuzziness can also be used as a focus measure [23].
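The block-selection procedure above can be sketched as follows; this is our own minimal illustration (function names and the fixed block size are assumptions, not from [21–23]), using SML as the focus measure.

```python
import numpy as np

def sml(block):
    """Sum-modified-Laplacian focus measure of a grayscale block:
    sum over the block of |2f - left - right| + |2f - up - down|."""
    f = block.astype(float)
    ml = (np.abs(2 * f[1:-1, 1:-1] - f[:-2, 1:-1] - f[2:, 1:-1]) +
          np.abs(2 * f[1:-1, 1:-1] - f[1:-1, :-2] - f[1:-1, 2:]))
    return ml.sum()

def block_fuse(img1, img2, bs=16):
    """Choose, block by block, the source block with the larger focus measure."""
    h, w = img1.shape[:2]
    out = img1.copy()
    for r in range(0, h - bs + 1, bs):
        for c in range(0, w - bs + 1, bs):
            # compute the focus measure on the luminance of each block
            g1 = img1[r:r+bs, c:c+bs].mean(axis=-1)
            g2 = img2[r:r+bs, c:c+bs].mean(axis=-1)
            if sml(g2) > sml(g1):
                out[r:r+bs, c:c+bs] = img2[r:r+bs, c:c+bs]
    return out
```

The sketch also makes the blur problem discussed below visible: whichever block is chosen, a block straddling the focus boundary carries some blurred pixels into the output.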

[Figure: source images A and B are partitioned into blocks; a focus measure is calculated for each block, and the blocks with the maximum focus measure are chosen to form the fused image.]

Fig. 1. Spatial domain based multifocus color image fusion scheme.

The spatial domain fusion methods have the virtue of low computational complexity. However, these methods suffer from image blur, which is mainly caused by the block decomposition of the source images. In the decomposition process, some blocks inevitably contain both distinct and blurred regions. No matter what fusion rule is adopted, such blurred regions will unavoidably appear in the fused image. The situation becomes worse when the border between the distinct and blurred regions in the source images is not a straight line (as in the source multifocus color images in the experiment section).

An IHS transform integrated with SWT (stationary wavelet transform) based fusion algorithm operating in both the spatial and frequency domains was proposed in [24]. The block diagram of this fusion algorithm is given in Fig. 2.

The detailed procedure is as follows. First, perform the IHS transform on the source images to get the I components. Second, perform the stationary wavelet transform on the I components to get the multiresolution coefficients. Then, compute the focus measure as the summation of

weighted coefficients for the corresponding pixels of each source image. Find the maximum focus measure of the corresponding pixels to obtain a decision map. This map decides whether each pixel of the fused image comes from source image A or source image B. Finally, the fused color image is obtained through a consistency verification procedure in the spatial domain.

[Figure: source images A and B pass through the IHS transform to obtain the I components (IA, IB), which are decomposed by SWT into multiresolution representations of the “I” component; a decision map followed by consistency verification produces the fused image.]

Fig. 2. Spatial and frequency domains based multifocus color image fusion scheme.

The IHS and SWT based method processes the source images in a pixel-by-pixel manner, which is different from the region based spatial domain methods [21–23]. This method gives us a new way to fuse multifocus color images. However, as the experimental results show, it cannot overcome the image blur problem completely. This is mainly caused by the spatial domain fusion process, that is, the pixels of the fused color image are taken directly from the source images.

As an alternative approach, a BEMD based method was proposed for color microscopic images [25]. First, the source color microscopic images are transformed into the YIQ color space. Second, the bidimensional empirical mode decomposition is performed on each Y component to obtain the residue and IMF components. The local significance principle fusion rule is applied to fuse the IMF components, and the principal component analysis rule is applied to fuse the residue, I, and Q components separately. Third, the fused Y component is recovered by the inverse BEMD. Finally, the fused color image is obtained by the inverse YIQ transform.

The BEMD based method deals with multifocus microscopic images successfully. However, as the experimental results show, it does not give ideal fusion results for outdoor scene multifocus color images.

In summary, the existing multifocus color image fusion algorithms face the problem of image blur. In addition, except for the BEMD based algorithm, the above multifocus color image fusion algorithms share a common drawback: they cannot realize the fusion of multiple color images.

The contribution of this paper is a multifocus color image fusion algorithm that overcomes the problems mentioned above. The anisotropic scaling principle of the curvelets and the quaternion based multiresolution representation procedure can eliminate the blurred regions in the fused color image.

Based on the above motivations, we combine the quaternion with the curvelet and define the quaternion curvelet transform (QCT) for color images. A novel color image fusion algorithm based on the quaternion curvelet transform is proposed. Through the quaternion curvelet based multiresolution analysis, the proposed method avoids the drawback of image blur and performs better than the existing methods.

The paper is structured as follows. In section 2 we first give a brief introduction to quaternions. Then, we propose the quaternion curvelet transform for color images. Next, section 3 introduces the color image fusion algorithm based on the quaternion curvelet transform. In addition, we introduce the objective assessment metrics in section 4. The performance of the proposed fusion method is evaluated via experiments in section 5, where the comparison with the existing methods is also discussed. Finally, conclusions are drawn in section 6.

2. Preliminaries

2.1. Quaternion

In recent years, quaternions have been used more and more in the color image processing domain [29–36]. The quaternion algebra H, a type of hypercomplex number system, was formally introduced by Hamilton in 1843. It is an associative, noncommutative four-dimensional algebra

H = {q = q_r + q_i i + q_j j + q_k k | q_r, q_i, q_j, q_k ∈ R}    (1)

where i, j, k are imaginary operators obeying the following rules

i^2 = j^2 = k^2 = −1    (2)

ij = −ji = k,  jk = −kj = i,  ki = −ik = j    (3)

A quaternion can be regarded as the composition of a scalar part and a vector part: q = S(q) + V(q), where S(q) = q_r and V(q) = q_i i + q_j j + q_k k. If a quaternion q has a zero scalar part (q_r = 0), then q is called a pure quaternion, and if q has a unit norm (‖q‖ = 1), then q is called a unit pure quaternion.

The norm of q is defined as

‖q‖ = √(q q̄) = √(q_r^2 + q_i^2 + q_j^2 + q_k^2)    (4)

In [29], Sangwine proposed to encode the three channel components of an RGB image on the three imaginary parts of a pure quaternion as follows

f(m,n) = f_R(m,n) i + f_G(m,n) j + f_B(m,n) k    (5)

where f_R(m,n), f_G(m,n) and f_B(m,n) are the red, green and blue components of the pixel, respectively. The advantage of the quaternion-type representation is that a color image can be treated in a holistic manner.
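As a concrete check of rules (2)–(5), a minimal quaternion sketch follows. This is our own illustration (the paper's experiments use the Matlab toolbox of [36]); the class and function names are assumptions.

```python
import numpy as np

class Quat:
    """Quaternion q = r + x*i + y*j + z*k over the reals."""
    def __init__(self, r=0.0, x=0.0, y=0.0, z=0.0):
        self.r, self.x, self.y, self.z = float(r), float(x), float(y), float(z)

    def __mul__(self, o):
        # Hamilton product, expanded from i^2 = j^2 = k^2 = -1,
        # ij = k, jk = i, ki = j (Eqs. (2) and (3))
        return Quat(
            self.r*o.r - self.x*o.x - self.y*o.y - self.z*o.z,
            self.r*o.x + self.x*o.r + self.y*o.z - self.z*o.y,
            self.r*o.y - self.x*o.z + self.y*o.r + self.z*o.x,
            self.r*o.z + self.x*o.y - self.y*o.x + self.z*o.r)

    def norm(self):
        # Eq. (4): sqrt(q_r^2 + q_i^2 + q_j^2 + q_k^2)
        return np.sqrt(self.r**2 + self.x**2 + self.y**2 + self.z**2)

    def as_tuple(self):
        return (self.r, self.x, self.y, self.z)

def encode_pixel(r, g, b):
    """Eq. (5): a color pixel as the pure quaternion R*i + G*j + B*k."""
    return Quat(0.0, r, g, b)
```

Multiplying the basis elements reproduces Eq. (3) directly, e.g. `Quat(0,1,0,0) * Quat(0,0,1,0)` yields k, while reversing the factors yields −k, illustrating the noncommutativity.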

2.2. Quaternion curvelet transform

One of the primary tasks in computer vision is to obtain a directional representation of the features in an image. In general, these features are irregular or anisotropic lines and edges. Therefore, a directional multiscale representation method is desirable in this field.

The wavelet transform and multiresolution ideas permeate many fields, especially signal and image processing. However, the two-dimensional (2-D) discrete wavelet transform (DWT) cannot represent anisotropic features well in 2-D space, mainly because the 2-D DWT does not provide good direction selectivity.

During the past decades, multiscale geometric transforms have opened a new stage of image processing. One of them is the curvelet transform, introduced by Candès and Donoho [37–40]. The curvelet has the properties of anisotropy and directionality, which make it an optimal basis for representing the smooth curves in an image, such as edges and region boundaries. A more detailed description of the curvelet transform can be found in [37–40].

In this paper, we generalize the traditional curvelet transform from real and complex numbers to the quaternion algebra and define the quaternion curvelet transform (QCT) for color images. The construction of the quaternion valued curvelets φ_{j,k,l}(x) is similar to the traditional one, given by
