- "However, multiscale transform methods are usually accompanied by complex computations, which render them inefficient. Multiscale geometrical analysis methods such as the curvelet filter, contourlet filter, weighted least squares, guided filter, and bilateral filter [15,16] have been successfully applied in several image fusion methods for their ability to capture the intrinsic geometrical structure of images [10,14,17]. Moreover, the bilateral filter is a spatial-domain filter which can preserve significant edges while smoothing images."
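The edge-preserving smoothing mentioned in the snippet can be illustrated with a minimal bilateral filter: each output pixel is a weighted average whose weights combine spatial closeness and intensity similarity, so large intensity jumps (edges) contribute little and survive the smoothing. This is a generic sketch for grayscale arrays in [0, 1], not the implementation from any of the cited papers; the parameter names are illustrative.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Minimal bilateral filter for a 2-D float array.

    Weights = spatial Gaussian (pixel distance) * range Gaussian
    (intensity difference to the center pixel), so smoothing stops
    at strong edges.
    """
    h, w = img.shape
    pad = np.pad(img.astype(float), radius, mode="reflect")
    out = np.zeros((h, w), dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))  # fixed spatial kernel
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # range kernel: similarity to the center pixel's intensity
            rng = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            wgt = spatial * rng
            out[i, j] = np.sum(wgt * patch) / np.sum(wgt)
    return out
```

On a constant image the range kernel is uniform and the filter returns the input unchanged; near an edge, pixels on the far side get near-zero range weights, which is what preserves the edge.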
ABSTRACT: The goal of image fusion is to obtain a fused image that contains the most significant information from all input images, which were captured by different sensors from the same scene. In particular, the fusion process should improve the contrast and keep the integrity of significant features from the input images. In this paper, we propose a region-based image fusion method to fuse spatially registered visible and infrared images while improving the contrast and preserving the significant features of the input images. First, the proposed method decomposes the input images into base layers and detail layers using a bilateral filter. Second, the base layers of the input images are segmented into regions. Third, a region-based decision map is proposed to represent the importance of every region; the decision map is obtained by calculating the weights of regions according to the gray-level difference between each region and its neighboring regions in the base layers. Finally, the detail layers and the base layers are fused separately by different fusion rules based on the same decision map to generate the final fused image. Experimental results qualitatively and quantitatively demonstrate that the proposed method can improve the contrast of fused images and preserve more features of the input images than several previous image fusion methods.
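The first step above, two-scale decomposition, can be sketched as follows. The base layer is a smoothed copy of the input and the detail layer is the residual, so the two layers reconstruct the input exactly. A box filter stands in here for the paper's bilateral filter to keep the sketch short; the split/reconstruct structure is the same either way.

```python
import numpy as np

def decompose(img, radius=2):
    """Two-scale decomposition: base = smoothed image, detail = residual.

    A box filter is used as a stand-in for the edge-preserving filter;
    by construction base + detail == img.
    """
    img = img.astype(float)
    pad = np.pad(img, radius, mode="reflect")
    k = 2 * radius + 1
    h, w = img.shape
    base = np.zeros((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            base[i, j] = pad[i:i + k, j:j + k].mean()  # local average
    return base, img - base
```

The fused image is then assembled by combining the base layers and detail layers of the inputs with separate rules (weighted by the decision map in the paper) and summing the two fused layers.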
ABSTRACT: Due to the nature of the optics involved, the depth of field in imaging systems is usually limited within the field of view. As a result, we get an image with only part of the scene in focus. To extend the depth of field, fusing images taken at different focus levels is a promising approach. This paper proposes a novel multifocus image fusion approach based on clarity-enhanced image segmentation and regional sparse representation. On the one hand, by using a clarity-enhanced image that contains both intensity and clarity information, the proposed method decreases the risk of partitioning in-focus and out-of-focus pixels into the same region. On the other hand, due to the regional selection of sparse coefficients, the proposed method strengthens its robustness to the distortions and misplacement that usually result from pixel-based coefficient selection. In short, the proposed method combines the merits of regional image fusion and sparse representation based image fusion. The experimental results demonstrate that the proposed method outperforms six recently proposed multifocus image fusion methods.
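The idea of a "clarity-enhanced" image can be sketched as intensity augmented with a local clarity cue, so that a segmenter sees in-focus and out-of-focus pixels as different even when their intensities match. The abstract does not specify the clarity measure; gradient magnitude is used below purely as an assumed, illustrative choice, and `alpha` is a hypothetical blending weight.

```python
import numpy as np

def clarity_enhanced(img, alpha=1.0):
    """Sketch: intensity + alpha * clarity, where clarity is approximated
    by gradient magnitude (an assumption, not the paper's exact measure).

    Sharp (in-focus) areas carry large gradients and are boosted;
    blurred areas are left nearly unchanged.
    """
    img = img.astype(float)
    gy, gx = np.gradient(img)          # finite-difference gradients
    clarity = np.hypot(gx, gy)         # per-pixel clarity cue
    return img + alpha * clarity
```

Segmenting this enhanced image instead of the raw intensity makes it less likely that a region straddles the in-focus/out-of-focus boundary, which is the risk the abstract highlights.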
ABSTRACT: Region-level methods have become popular in recent years for multifocus image fusion, as they are the most direct way to fuse. However, their fusion results are often not ideal due to the difficulty of focus-region segmentation. In this paper, we propose a novel region-level multifocus image fusion method that can locate the boundary of the focus region accurately. As a novel tool of image analysis, the phases of the quaternion wavelet transform (QWT) are capable of representing the texture information in an image. We first use the local variance of the phases to detect focus or defocus for every pixel. Then, we segment the focus detection result with the normalized cut to remove detection errors, and an initial fusion result is acquired by copying from the source images according to the focus detection results. Next, we compare the initial fusion result with a spatial-frequency-weighted fusion result, using structural similarity to accurately locate the boundary of the focus region. Finally, the fusion result is obtained using spatial frequency as the fusion weight along the boundary of the focus region. Furthermore, we conduct several experiments to verify the feasibility of the fusion framework. The proposed algorithm is demonstrated to be superior to the reference methods.
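The spatial frequency used as the fusion weight in the final step is a standard block sharpness measure: the root-mean-square of horizontal (row-frequency) and vertical (column-frequency) intensity differences, combined as SF = sqrt(RF^2 + CF^2). A minimal sketch of that metric (not the QWT-phase detector, which needs a quaternion wavelet implementation):

```python
import numpy as np

def spatial_frequency(block):
    """Spatial frequency SF = sqrt(RF^2 + CF^2) of an image block.

    RF/CF are RMS differences between horizontally/vertically adjacent
    pixels; higher SF means more texture, i.e. the block is more likely
    in focus.
    """
    b = block.astype(float)
    rf = np.sqrt(np.mean(np.diff(b, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(b, axis=0) ** 2))  # column frequency
    return float(np.hypot(rf, cf))
```

Along the focus-region boundary, the corresponding blocks from the two source images can then be blended with weights proportional to their spatial frequencies, favoring the sharper source.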