Article

Multifocus color image fusion based on quaternion curvelet transform

Optics Express (Impact Factor: 3.49). 08/2012; 20(17):18846-60. DOI: 10.1364/OE.20.018846
Source: PubMed

ABSTRACT

Multifocus color image fusion is an active research area in image processing, and many fusion algorithms have been developed. However, the existing techniques can hardly cope with the problem of image blur. This study presents a novel fusion approach that integrates the quaternion representation with the traditional curvelet transform to overcome this disadvantage. The proposed method uses a multiresolution analysis procedure based on the quaternion curvelet transform. Experimental results show that the proposed method is promising and significantly improves fusion quality compared with existing fusion methods.
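
The quaternion curvelet transform is not available in common Python libraries, so the sketch below only illustrates the generic multiresolution fusion pipeline the abstract describes: decompose each source image, merge coefficients band by band, and reconstruct. PyWavelets' ordinary discrete wavelet transform stands in for the quaternion curvelet decomposition, and the max-absolute-coefficient rule is a conventional assumption rather than the rule used in the paper; the function name fuse_multiresolution and all parameters are illustrative.

```python
# Minimal sketch of a multiresolution fusion pipeline.
# A standard DWT (PyWavelets) stands in for the quaternion curvelet
# transform, which is not available in common Python libraries; the
# fusion rule below is a conventional choice, not the paper's rule.
import numpy as np
import pywt

def fuse_multiresolution(img_a, img_b, wavelet="db4", levels=3):
    """Fuse two grayscale images of equal shape in a multiresolution domain."""
    ca = pywt.wavedec2(img_a.astype(float), wavelet, level=levels)
    cb = pywt.wavedec2(img_b.astype(float), wavelet, level=levels)

    fused = [(ca[0] + cb[0]) / 2.0]            # average the approximation band
    for da, db in zip(ca[1:], cb[1:]):         # detail bands: (cH, cV, cD) tuples
        fused.append(tuple(
            np.where(np.abs(a) >= np.abs(b), a, b)   # keep the stronger detail coefficient
            for a, b in zip(da, db)
        ))
    return pywt.waverec2(fused, wavelet)
```

For color images, the quaternion formulation treats the R, G, and B channels jointly; with the stand-in transform above, each channel would have to be processed separately, which is precisely the limitation the quaternion approach is designed to avoid.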

Full-text preview

Available from: opticsinfobase.org
  • Source
    • "However, multiscale transform methods are usually accompanied with complex computations , which will make these methods become inefficient. Multiscale geometrical analysis methods such as curvelet filter[11], contourlet filter[12], weighted least squares[13], guided filter[14], and bilateral filter[15,16]have been successfully applied on several image fusion methods for their ability of capturing intrinsic geometrical structure of images[10,14,17]. Moreover, the bilateral filter is a spatialdomain filter which can preserve significant edges while smoothing images. "
    ABSTRACT: The goal of image fusion is to obtain a fused image that contains the most significant information from all input images, which were captured by different sensors from the same scene. In particular, the fusion process should improve the contrast and preserve the integrity of significant features from the input images. In this paper, we propose a region-based image fusion method to fuse spatially registered visible and infrared images while improving the contrast and preserving the significant features of the input images. First, the proposed method decomposes the input images into base layers and detail layers using a bilateral filter. Second, the base layers of the input images are segmented into regions. Third, a region-based decision map is proposed to represent the importance of every region; it is obtained by calculating the weight of each region according to the gray-level difference between that region and its neighboring regions in the base layers. Finally, the detail layers and the base layers are fused separately by different fusion rules based on the same decision map to generate the final fused image. Experimental results qualitatively and quantitatively demonstrate that the proposed method improves the contrast of fused images and preserves more features of the input images than several previous image fusion methods. (An illustrative sketch of the base/detail decomposition step follows this entry.)
    Full-text · Article · Oct 2015 · Mathematical Problems in Engineering
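
As a rough illustration of the base/detail decomposition described in the entry above, the following sketch splits each input with OpenCV's bilateral filter and recombines the layers. The cited paper's region segmentation and gray-level-difference decision map are not reproduced; the per-pixel detail-energy weight, the function name fuse_base_detail, and all filter parameters are assumptions made only for this example.

```python
# Sketch of base/detail layer fusion with an edge-preserving filter.
# The region-based decision map of the cited paper is replaced by a
# simple per-pixel weight derived from local detail energy.
import cv2
import numpy as np

def fuse_base_detail(vis, ir, d=9, sigma_color=75, sigma_space=75):
    vis = vis.astype(np.float32)
    ir = ir.astype(np.float32)

    base_v = cv2.bilateralFilter(vis, d, sigma_color, sigma_space)   # edge-preserving base layer
    base_i = cv2.bilateralFilter(ir,  d, sigma_color, sigma_space)
    det_v, det_i = vis - base_v, ir - base_i                         # detail layers

    # crude saliency weights: smoothed local energy of the detail layers
    w_v = cv2.GaussianBlur(det_v * det_v, (11, 11), 0)
    w_i = cv2.GaussianBlur(det_i * det_i, (11, 11), 0)
    w = w_v / (w_v + w_i + 1e-12)

    base = 0.5 * (base_v + base_i)           # simple average of the base layers
    detail = w * det_v + (1.0 - w) * det_i   # weighted combination of the detail layers
    return np.clip(base + detail, 0, 255).astype(np.uint8)
```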
  • ABSTRACT: Due to the nature of the optics involved, the depth of field of imaging systems is usually limited relative to the field of view. As a result, only part of the scene in an image is in focus. Fusing images captured at different focus levels is a promising approach to extending the depth of field. This paper proposes a novel multifocus image fusion approach based on clarity-enhanced image segmentation and regional sparse representation. On the one hand, by using a clarity-enhanced image that contains both intensity and clarity information, the proposed method reduces the risk of partitioning in-focus and out-of-focus pixels into the same region. On the other hand, because sparse coefficients are selected region by region, the proposed method is more robust to the distortions and misplacements that usually result from pixel-based coefficient selection. In short, the proposed method combines the merits of regional image fusion and sparse-representation-based image fusion. The experimental results demonstrate that the proposed method outperforms six recently proposed multifocus image fusion methods. (A simplified sketch of region-wise selection follows this entry.)
    No preview · Article · Feb 2013 · Optics Express
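
The clarity-enhanced segmentation and regional sparse representation of the cited method are beyond a short example, but the regional (rather than per-pixel) selection idea can be sketched with a fixed block grid and a variance-of-Laplacian clarity score. Everything here, including the block size, the clarity measure, and the function name fuse_blockwise, is an illustrative assumption rather than the paper's algorithm.

```python
# Sketch of region-wise selection for multifocus fusion: pick each block
# from whichever source image is locally sharper. A fixed grid and a
# variance-of-Laplacian score stand in for the cited paper's segmentation
# and regional sparse coding.
import cv2
import numpy as np

def fuse_blockwise(img_a, img_b, block=32):
    clar_a = cv2.Laplacian(img_a.astype(np.float32), cv2.CV_32F)
    clar_b = cv2.Laplacian(img_b.astype(np.float32), cv2.CV_32F)
    out = img_a.copy()
    h, w = img_a.shape[:2]
    for y in range(0, h, block):
        for x in range(0, w, block):
            sl = (slice(y, min(y + block, h)), slice(x, min(x + block, w)))
            if clar_b[sl].var() > clar_a[sl].var():   # take the sharper block
                out[sl] = img_b[sl]
    return out
```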
  • ABSTRACT: Region-level methods have become popular in recent years for multifocus image fusion because they fuse images in the most direct way. However, the fusion results are often not ideal due to the difficulty of segmenting the focused regions. In this paper, we propose a novel region-level multifocus image fusion method that can locate the boundary of the focused region accurately. As a novel tool for image analysis, the phases of the quaternion wavelet transform (QWT) are capable of representing the texture information in an image. We first use the local variance of the phases to detect whether each pixel is in focus or out of focus. We then segment the focus-detection result with the normalized cut to remove detection errors, and an initial fusion result is obtained by copying pixels from the source images according to the focus-detection result. Next, we compare the initial fusion result with a spatial-frequency-weighted fusion result using structural similarity to locate the boundary of the focused region accurately. Finally, the fusion result is obtained using spatial frequency as the fusion weight along the boundary of the focused region. Several experiments verify the feasibility of the fusion framework, and the proposed algorithm is shown to be superior to the reference methods. (A simplified sketch of the spatial-frequency weighting follows this entry.)
    No preview · Article · Oct 2013 · Signal Processing
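
The entry above uses spatial frequency as the fusion weight along the focus-region boundary. The sketch below shows one common way to compute a local spatial-frequency map (root mean square of horizontal and vertical first differences over a window) and turn it into per-pixel blending weights; the QWT-phase focus detection and normalized-cut segmentation are not reproduced, and the window size and function names are assumptions.

```python
# Sketch: local spatial frequency as a per-pixel fusion weight.
# SF is the root mean square of horizontal and vertical first differences
# computed over a sliding window.
import numpy as np
from scipy.ndimage import uniform_filter

def local_spatial_frequency(img, win=9):
    img = img.astype(float)
    dr = np.zeros_like(img); dr[:, 1:] = np.diff(img, axis=1)   # row (horizontal) differences
    dc = np.zeros_like(img); dc[1:, :] = np.diff(img, axis=0)   # column (vertical) differences
    rf2 = uniform_filter(dr ** 2, size=win)                     # local mean of squared differences
    cf2 = uniform_filter(dc ** 2, size=win)
    return np.sqrt(rf2 + cf2)

def fuse_by_spatial_frequency(img_a, img_b, win=9):
    sf_a = local_spatial_frequency(img_a, win)
    sf_b = local_spatial_frequency(img_b, win)
    w = sf_a / (sf_a + sf_b + 1e-12)                            # weight toward the sharper source
    return w * img_a.astype(float) + (1.0 - w) * img_b.astype(float)
```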