Figure - available from: Applied Optics
(a) Original image of peppers. (b) The degraded version.


Source publication
Article
Full-text available
This paper presents a region-based technique for fusion of a multifocus color image sequence in the LUV color space. First, mean shift segmentation was applied on the weighted average image of the image sequence to obtain the fusion reference areas. Second, for each segmented area, the well-known modified Laplacian (LAP2) was used as a focus measur...
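The modified Laplacian (LAP2, often called the sum-modified-Laplacian, SML) focus measure named in the abstract can be sketched in a few lines of NumPy. This is a generic illustration of the measure, not the authors' exact implementation; the step size and the per-region scoring helper are assumptions.

```python
import numpy as np

def modified_laplacian(gray: np.ndarray, step: int = 1) -> np.ndarray:
    """Per-pixel modified Laplacian:
    |2I(x,y) - I(x-s,y) - I(x+s,y)| + |2I(x,y) - I(x,y-s) - I(x,y+s)|."""
    g = gray.astype(np.float64)
    s = step
    ml = np.zeros_like(g)
    ml[s:-s, s:-s] = (
        np.abs(2 * g[s:-s, s:-s] - g[:-2 * s, s:-s] - g[2 * s:, s:-s])
        + np.abs(2 * g[s:-s, s:-s] - g[s:-s, :-2 * s] - g[s:-s, 2 * s:])
    )
    return ml

def sml_region_score(gray: np.ndarray, mask: np.ndarray) -> float:
    """Sum the modified Laplacian over one segmented region (boolean mask);
    for each region, the source image with the highest score is taken as in focus."""
    return float(modified_laplacian(gray)[mask].sum())
```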

Similar publications

Article
Full-text available
The dynamic vision sensor (DVS) asynchronously measures per-pixel brightness changes and outputs an asynchronous, discrete stream of spatiotemporal event information that encodes the time, location, and sign of each change. The dynamic vision sensor has outstanding properties compared to the sensors of traditional cameras, with very high...
Article
Full-text available
Camera devices are being deployed everywhere: cities, enterprises, and more and more smart homes are using them. Fine-grained identification of devices brings an in-depth understanding of their characteristics, and identifying the device type helps keep the device secure. But existing device identification methods have difficu...
Preprint
Full-text available
Digital images are growing in size day by day, containing ever more information to represent the as-is state of the original, owing to the availability of high-resolution digital cameras, smartphones, and medical test images. Therefore, we need techniques to convert these images to a smaller size without losing much informat...
Article
Full-text available
This article is devoted to searching for high-level explainable features that can remain explainable for a wide class of objects or phenomena and become an integral part of explainable AI (XAI). The present study involved a 25-day experiment on early diagnosis of wheat stress using drought stress as an example. The state of the plants was periodica...

Citations

... In this way, they obtain a successful fused image. In addition to this work, Hao et al. [16] implement a multi-focus image fusion method based on mean shift segmentation, and they use the SML metric for activity measurement. This method also works on colour images. ...
... MI is calculated by Eqs. (16) and (17) [53]. ...
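Eqs. (16) and (17) are not reproduced in this excerpt; as a hedged sketch, the standard fusion mutual-information metric MI = I(F;A) + I(F;B) can be estimated from joint greyscale histograms as below (the 256-bin histogram is an assumption).

```python
import numpy as np

def mutual_information(x: np.ndarray, y: np.ndarray, bins: int = 256) -> float:
    """Estimate I(X;Y) from a joint histogram of two greyscale images."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal over x
    py = pxy.sum(axis=0, keepdims=True)   # marginal over y
    nz = pxy > 0                          # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def fusion_mi(fused, src_a, src_b) -> float:
    """Common fusion quality metric: MI(F,A) + MI(F,B); higher is better."""
    return mutual_information(fused, src_a) + mutual_information(fused, src_b)
```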
... used for the first time in this study, only visual and quantitative results are given, and no comparison is made. Figures 8-24 show the visual impact of the proposed method on the new dataset. The effects of fusing both two source images and more than two source images are given in the figures. ...
Article
Full-text available
Multi-focus image fusion merges multiple source images of the same scene with different focus values to obtain a single image that is more informative. A novel approach is proposed to create this single image in this paper. The method’s primary stages include creating initial decision maps, applying morphological operations, and obtaining the fused image with the created fusion rule. Initial decision maps consist of label values represented as focused or non-focused. While determining these values, the first decision is made by feeding the image patches obtained from each source image to the modified CNN architecture. If the modified CNN architecture is unstable in determining label values, a new improvement mechanism designed based on focus measurements is applied for unstable regions where each image patch is labelled as non-focused. Then, the initial decision maps obtained for each source image are improved by morphological operations. Finally, the dynamic decision mechanism (DDM) fusion rule, designed considering the label values in the decision maps, is applied to minimize the disinformation resulting from classification errors in the fused image. At the end of all these steps, the final fused image is obtained. Also, in the article, a rich dataset containing two or more than two source images for each scene is created based on the COCO dataset. As a result, the method’s success is measured with the help of objective and subjective metrics. When the visual and quantitative results are examined, it is proven that the proposed method successfully creates a perfect fused image.
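The morphological clean-up of the initial decision maps described above can be illustrated with a generic OpenCV sketch; the kernel size and the opening-then-closing order are assumptions, not the paper's exact settings.

```python
import cv2
import numpy as np

def refine_decision_map(initial: np.ndarray, ksize: int = 9) -> np.ndarray:
    """Remove isolated misclassified patches from a binary (0/1) decision map
    using a morphological opening followed by a closing."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (ksize, ksize))
    m = (initial > 0).astype(np.uint8)
    m = cv2.morphologyEx(m, cv2.MORPH_OPEN, kernel)   # drop small false "focused" blobs
    m = cv2.morphologyEx(m, cv2.MORPH_CLOSE, kernel)  # fill small holes in focused regions
    return m

def fuse_two(src_a: np.ndarray, src_b: np.ndarray, dmap: np.ndarray) -> np.ndarray:
    """Take pixels from source A where the refined map marks them as focused."""
    m = dmap.astype(bool)
    out = src_b.copy()
    out[m] = src_a[m]
    return out
```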
... The translation stage is then returned to the z-position with the highest sharpness. This procedure is based on methods used previously to detect focused areas in an out-of-focus image [44], and while highly reliable, typically takes 10-20 s to complete, limited by capturing and image processing time. ...
Article
Full-text available
We present the OpenFlexure Microscope software stack which provides computer control of our open source motorised microscope. Our diverse community of users needs both graphical and script-based interfaces. We split the control code into client and server applications interfaced via a web API conforming to the W3C Web of Things standard. A graphical interface is viewed either in a web browser or in our cross-platform Electron application, and gives basic interactive control including common operations such as Z stack acquisition and tiled scanning. Automated control is possible from Python and Matlab, or any language that supports HTTP requests. Network control makes the software stack more robust, allows multiple microscopes to be controlled by one computer, and facilitates sharing of equipment. Graphical and script-based clients can run simultaneously, making it easier to monitor ongoing experiments. We have included an extension mechanism to add functionality, for example controlling additional hardware components or adding automation routines. Using a Web of Things approach has resulted in a user-friendly and extremely versatile software control solution for the OpenFlexure Microscope, and we believe this approach could be generalized in the future to make automated experiments involving several instruments much easier to implement.
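Because the server exposes a web API, script-based control reduces to plain HTTP requests. The sketch below only illustrates that pattern; the host name, port, and endpoint paths are hypothetical placeholders, not the documented OpenFlexure routes.

```python
import requests

BASE = "http://microscope.local:5000/api/v2"  # hypothetical host and API prefix

# Read a property from the Web of Things interface (endpoint path assumed).
position = requests.get(f"{BASE}/instrument/state/stage/position", timeout=5).json()
print("stage at:", position)

# Invoke an action, e.g. a relative stage move (payload shape assumed).
requests.post(f"{BASE}/actions/stage/move",
              json={"z": 100, "relative": True}, timeout=5)
```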
... A modified 2D Laplacian is commonly used for edge detection and to assess image sharpness [11]. The form used in OpenFlexure software converts a captured image to greyscale before convolving the image with the Laplacian kernel [12]

    [ 0  1  0 ]
    [ 1 -4  1 ]
    [ 0  1  0 ] ...
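That greyscale-then-Laplacian sharpness measure can be written compactly with OpenCV. Summing the squared response is one common reduction; the exact reduction used in the OpenFlexure code is not shown in the excerpt, so treat this as a sketch.

```python
import cv2
import numpy as np

def laplacian_sharpness(image_bgr: np.ndarray) -> float:
    """Convert to greyscale, convolve with the 3x3 Laplacian kernel,
    and return the mean squared response (large when edges are in focus)."""
    grey = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    kernel = np.array([[0, 1, 0],
                       [1, -4, 1],
                       [0, 1, 0]], dtype=np.float64)
    response = cv2.filter2D(grey.astype(np.float64), -1, kernel)
    return float(np.mean(response ** 2))
```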
Article
Full-text available
The OpenFlexure Microscope is a 3D printed, low-cost microscope capable of automated image acquisition through the use of a motorised translation stage and a Raspberry Pi imaging system. This automation has applications in research and healthcare, including in supporting the diagnosis of malaria in low resource settings. The Plasmodium parasites which cause malaria require high magnification imaging, which has a shallow depth of field, necessitating the development of an accurate and precise autofocus procedure. We present methods of identifying the focal plane of the microscope, and procedures for reliably acquiring a stack of focused images on a system affected by backlash and drift. We also present and assess a method to verify the success of autofocus during the scan. The speed, reliability and precision of each method is evaluated, and the limitations discussed in terms of the end users' requirements.
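A minimal sketch of the coarse autofocus idea follows: sweep z in one direction, score each plane, then return to the sharpest plane approaching from the same side so mechanical backlash is taken up consistently. The stage and camera objects here are hypothetical stand-ins, not the OpenFlexure API, and the sketch reuses the laplacian_sharpness scorer shown above.

```python
def coarse_autofocus(stage, camera, z_range, overshoot=300):
    """Sweep the z-axis, score each plane, then revisit the sharpest one.
    `stage.move_to(z)` and `camera.grab()` are assumed interfaces."""
    scores = {}
    for z in z_range:                     # sweep in one direction only
        stage.move_to(z)
        scores[z] = laplacian_sharpness(camera.grab())
    best_z = max(scores, key=scores.get)
    stage.move_to(best_z - overshoot)     # approach from below to cancel backlash
    stage.move_to(best_z)
    return best_z
```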
... In addition to assessing the performance of a sharpness metric, its suitability for various samples and likely causes of failure must be understood. As the OpenFlexure Microscope has the option for down-sampled (832 × 624 pixel) capture ... A modified 2D Laplacian is commonly used for edge detection and to assess image sharpness [11]. The form used in OpenFlexure software converts a captured image to greyscale before convolving the image with a Laplacian kernel ...
Preprint
Full-text available
The OpenFlexure Microscope is a 3D printed, low-cost microscope capable of automated image acquisition through the use of a motorised translation stage and a Raspberry Pi imaging system. This automation has applications in research and healthcare, including in supporting the diagnosis of malaria in low resource settings. The Plasmodium parasites which cause malaria require high magnification imaging, which has a shallow depth of field, necessitating the development of an accurate and precise autofocus procedure. We present methods of identifying the focal plane of the microscope, and procedures for reliably acquiring a stack of focused images on a system affected by backlash and drift. We also present and assess a method to verify the success of autofocus during the scan. The speed, reliability and precision of each method is evaluated, and the limitations discussed in terms of the end users' requirements.
... This method outperformed many existing state-of-the-art techniques. Further, Hao et al. presented a region-based MFIF method for colored images in LUV color space (Hao et al. 2015). In this method mean shift segmentation technique was used for obtaining image sequences. ...
... This dataset consists of source images as well as their corresponding reference images. The latter dataset is the "Lytro" image dataset consisting of 20 image pairs of size 520 × 520 pixels that are collected from "Lytro" picture gallery (Hao et al. 2015;Huang et al. 2018). This data set consists of only the source image pairs. ...
Article
Full-text available
Multi-Focus Image Fusion (MFIF) is a method that combines two or more source images to obtain a single image which is focused, has improved quality, and contains more information than the source images. Due to the limited depth-of-field of the imaging system, extracting all the useful information from a single image is challenging. Thus two or more defocused source images are fused together to obtain a composite image. This paper provides a comprehensive overview of existing MFIF methods. A new classification scheme is developed for categorizing the existing MFIF methods. These methods are classified into four major categories: spatial domain, transform domain, deep learning, and their hybrids, and are discussed along with their drawbacks and challenges. In addition to this, both types of evaluation metric, i.e. "with reference" and "without reference", are also discussed. Then, a comparative analysis of nine image fusion methods is performed based on 30 pairs of publicly available images. Finally, various challenges that remain unaddressed and future work are also discussed.
... The translation stage is then returned to the z-position with the highest sharpness. This procedure is based on methods used previously to detect focused areas in an out-of-focus image [29], and while highly reliable, typically takes 10-20 seconds to complete, limited by capturing and image processing time. ...
Preprint
Full-text available
Optical microscopy is an essential tool for scientific analysis and clinical applications. The OpenFlexure Microscope, an open-source, 3D-printed, and fully-automated laboratory microscope, has greatly increased accessibility to modern optical microscopy. However, its diverse global user base poses unique software challenges around device control, automation, and data management. Here we present the OpenFlexure Microscope software stack, based on a modern Web of Things architecture providing a robust, extensible, and open-source device control and data management interface. By utilising internet protocol networking, we have demonstrated microscope management on-device, over a local network, and fully remotely. Functionality is made available in both graphical client applications and scripting libraries, providing interfaces for both new microscope users and experts. The OpenFlexure Microscope software has been deployed around the world for educational, scientific, and clinical applications, demonstrating a robust modern framework for laboratory instrument control powered by modern web technologies.
... Li and Yang [141] proposed a multi-focus image fusion method with normalized cut based segmentation and SF-based focus measure. Hao et al. [142] proposed a fusion method for multi-focus color images based on the mean shift segmentation approach and adopted SML as activity measure. Yang and Guo [143] applied the watershed algorithm to segment source images into a number of super-pixels for multi-focus image fusion. ...
... In most cases, the weight map is also known as a decision map, because multi-focus image fusion can be regarded as a classification issue in which the focus property (i.e., focused or defocused) of each pixel is determined. For some fusion methods, the weight map is employed straightforwardly [119,122,123,126-130]. The surveyed block- and region-based strategies can be summarized as follows (activity measures → fusion rules):

- Fixed block division: EOL based [121,122]; EOG based [128-130]; variance based [128,129]; visibility based [127-129]; edge feature based [127-130]; PCNN based [121-123]; phase congruency (PC) based [124]; DWT based [128-130]; DCT based [125,126,128-130] → maximum selection [121-126]; threshold-based adaptive rule [119]; neural network based decision [127]; random forest based decision [128]; majority voting based decision [129,130]
- Optimized block size by an evolutionary computation algorithm: SF based [132]; visibility based [131,133,135]; SML based [132,134,135]; variance based [132]; EOL based [135] → maximum selection [131-135]
- Quad-tree based adaptive block division: morphology based [136]; SML based [137] → maximum selection [136,137]
- Multiple blocks jointly considered: SF based [138]; SML based [139]; depth map based [139]; SR based [91] → maximum selection [91,139]; hidden Markov model (HMM) based optimization [138]
- Image segmentation: salience based [140]; visibility based [140]; SF based [141,143]; SML based [142,145]; SR based [144]; mean square error (MSE) between the original image and its approximation version based [146] → weighted average [140]; maximum selection [141-146]; threshold-based adaptive rule [146]

These strategies are applied to obtain the fused image. However, in order to achieve more accurate weights or classification results, more methods attempt to refine the weight or decision map obtained above by adding a consistency verification step, in which various image filtering techniques are the most frequently used. ...
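The consistency-verification step mentioned at the end of this excerpt is often as simple as a majority filter over the binary decision map. The SciPy sketch below shows that generic idea; the window size is an assumption.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def consistency_verification(decision: np.ndarray, size: int = 15) -> np.ndarray:
    """Majority filter: each pixel takes the dominant label in its
    size x size neighbourhood, removing isolated misclassifications."""
    local_mean = uniform_filter(decision.astype(np.float64), size=size)
    return (local_mean > 0.5).astype(np.uint8)
```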
Article
Multi-focus image fusion is an effective technique to extend the depth-of-field of optical lenses by creating an all-in-focus image from a set of partially focused images of the same scene. In the last few years, great progress has been achieved in this field along with the rapid development of image representation theories and approaches such as multi-scale geometric analysis, sparse representation, deep learning, etc. This survey paper first presents a comprehensive overview of existing multi-focus image fusion methods. To keep up with the latest development in this field, a new taxonomy is introduced to classify existing methods into four main categories: transform domain methods, spatial domain methods, methods combining transform domain and spatial domain, and deep learning methods. For each category, representative fusion methods are introduced and summarized. Then, a comparative study for 18 representative fusion methods is conducted based on 30 pairs of commonly-used multi-focus images and 8 popular objective fusion metrics. All the relevant resources including source images, objective metrics and fusion results are released online, aiming to provide a benchmark for the future study of multi-focus image fusion. Finally, several major challenges remained in the current research of this field are discussed and some future prospects are put forward.
... In the block-based fusion methods, source images are divided into many blocks, and the focus measurement method is used to select sharper blocks to reconstruct a fused image. To improve the segmentation accuracy between focus and defocus regions, many fusion methods based on region segmentation were proposed [17][18][19][20][21][22], such as super-pixel [18], mean shift segmentation [19], guided filtering (GFF) [20], fast filtering (FFIF) [21], and boundary finding (BFM) [22]. ...
Article
Full-text available
Multi-focus image fusion consists in the integration of the focus regions of multiple source images into a single image. At present, there are still some common problems in image fusion methods, such as block artifacts, artificial edges, halo effects, and contrast reduction. To address these problems, a novel, to the best of our knowledge, multi-focus image fusion method using energy of Laplacian and a deep neural network (DNN) is proposed in this paper. The DNN is composed of multiple denoising autoencoders and a classifier. The Laplacian energy operator can effectively extract the focus information of source images, and the trained DNN model can establish a valid mapping relationship between source images and a focus map according to the extracted focus information. First, the Laplacian energy operator is used to perform focus measurement for two source images to obtain the corresponding focus information maps. Then, the sliding window technology is used to sequentially obtain the windows from the corresponding focus information map, and all of the windows are fed back to the trained DNN model to obtain a focus map. After binary segmentation and small region filtering, a final decision map with good consistency is obtained. Finally, according to the weights provided by the final decision map, multiple source images are fused to obtain a final fusion image. Experimental results demonstrate that the proposed fusion method is superior to other existing ones in terms of subjective visual effects and objective quantitative evaluation.
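The energy-of-Laplacian focus information that feeds the classifier can be sketched generically; the 16-pixel window and stride below are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import laplace

def eol_map(gray: np.ndarray) -> np.ndarray:
    """Per-pixel energy of Laplacian: the squared Laplacian response."""
    return laplace(gray.astype(np.float64)) ** 2

def windowed_focus_scores(gray: np.ndarray, win: int = 16, stride: int = 16) -> np.ndarray:
    """Slide a window over the EOL map and sum the energy per window,
    yielding the per-patch focus information a classifier would consume."""
    e = eol_map(gray)
    h, w = e.shape
    return np.array([[e[y:y + win, x:x + win].sum()
                      for x in range(0, w - win + 1, stride)]
                     for y in range(0, h - win + 1, stride)])
```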
... An image with a large depth of field and full clarity is formed, extending the depth of field of the microscope objective, by accurately finding the image index and position with the greatest focus value and extracting the sharpest pixels. As a multi-scale decomposition method for image fusion, the pyramid transform [28], [29] can highlight the feature information of the image and achieve a better fusion effect. Moreover, the salient features at different scales and resolutions are much more in line with human visual characteristics. ...
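A compact sketch of pyramid-transform fusion with OpenCV: build Laplacian pyramids of two aligned sources, keep the larger-magnitude detail coefficient at each level, and average the coarse residual. Three levels and the max-magnitude rule are assumptions for illustration, and 8-bit input is assumed on reconstruction.

```python
import cv2
import numpy as np

def laplacian_pyramid(img: np.ndarray, levels: int = 3):
    """Differences of Gaussian pyramid levels; last entry is the coarse residual."""
    g, pyr = img.astype(np.float64), []
    for _ in range(levels):
        down = cv2.pyrDown(g)
        up = cv2.pyrUp(down, dstsize=(g.shape[1], g.shape[0]))
        pyr.append(g - up)
        g = down
    pyr.append(g)
    return pyr

def fuse_pyramids(pa, pb):
    """Detail levels: keep the coefficient with larger magnitude; average the residual."""
    fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(pa[:-1], pb[:-1])]
    fused.append((pa[-1] + pb[-1]) / 2)
    return fused

def reconstruct(pyr):
    """Collapse the pyramid back into an image (assumes 8-bit sources)."""
    img = pyr[-1]
    for lap in reversed(pyr[:-1]):
        img = cv2.pyrUp(img, dstsize=(lap.shape[1], lap.shape[0])) + lap
    return np.clip(img, 0, 255).astype(np.uint8)
```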
Article
Full-text available
A large depth of field for microscopic measurement can be achieved by focus-stacking techniques despite the small depth of field of the objective lens. It is implemented by fusing sequences of images, each with a short depth of field. However, non-linear movement of the objective imaging system or of the measured object, caused by straightness error of the moving stage, misaligns the image sequences through transversal translation, rotation, and tilting. All of these interferences, as well as variations in image brightness, must be corrected by image registration before fusing the image sequences. In this paper, a fast automatic registration method based on the scale-invariant feature transform (SIFT) is proposed. It first segments the focal regions of the image sequence through fast edge detection, then extracts image features only within the small segmented focal areas. This greatly reduces the computational cost of feature extraction and of the subsequent correction and alignment steps. In the process, the random sample consensus (RANSAC) algorithm is used to remove mismatched features. The Laplacian pyramid method is adopted for large-depth-of-field image fusion. Experimental results show that the proposed method is more efficient than the traditional SIFT algorithm, improving registration efficiency by about 60%. This method facilitates high-precision, real-time monocular three-dimensional focus stacking.
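The SIFT-plus-RANSAC alignment step generalizes to a few OpenCV calls. This sketch follows the generic recipe rather than the paper's accelerated, edge-segmented variant; the 0.75 ratio-test threshold and 3.0-pixel RANSAC tolerance are assumptions.

```python
import cv2
import numpy as np

def register_pair(ref_gray: np.ndarray, mov_gray: np.ndarray) -> np.ndarray:
    """Warp `mov_gray` onto `ref_gray` via SIFT features, ratio-test
    matching, and a RANSAC-estimated homography."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(ref_gray, None)
    k2, d2 = sift.detectAndCompute(mov_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(d2, d1, k=2)
            if m.distance < 0.75 * n.distance]   # Lowe's ratio test
    src = np.float32([k2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # RANSAC rejects mismatches
    h, w = ref_gray.shape
    return cv2.warpPerspective(mov_gray, H, (w, h))
```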