| Camera-in-the-loop holography.

Source publication
Article
Deep learning has been developing rapidly, and many holographic applications of it have been investigated. These studies have shown that deep learning can outperform conventional physically based calculations that rely on light-wave simulation and signal processing. This review focuses on computational holography, including computer-generated holograms, h...

Contexts in source publication

Context 1
... the quality of the reproduced images of holographic displays will be degraded by the following factors: misalignment of optical components (beam splitters and lenses), the SLM cover glass, aberrations of the optical components, uneven illumination of the SLM by the light source, and the quantized, non-linear light modulation of the SLM, as shown in Figure 5. ...
Context 2
... while some studies have manually corrected aberrations to approach the ideal propagation model P_ideal, camera-in-the-loop holography (Peng et al., 2020) has been proposed to correct these image-quality-degrading factors automatically. Figure 5 shows an outline of camera-in-the-loop holography. Camera-in-the-loop holography differs from the GS algorithms, Wirtinger holography, and gradient descent methods because it uses the actually reproduced images in the optimization loop. ...
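As a rough illustration of this idea, the following Python sketch optimizes an SLM phase pattern by stochastic gradient descent while inserting camera captures into the loss, loosely in the spirit of Peng et al. (2020). The functions asm_propagate, display_on_slm, and capture_camera_amplitude are hypothetical placeholders for a differentiable propagation model, the SLM driver, and the camera interface; this is a sketch of the technique, not the authors' implementation.

```python
# Hypothetical camera-in-the-loop (CITL) phase optimization sketch.
# asm_propagate, display_on_slm, and capture_camera_amplitude are placeholders.
import torch

def citl_optimize(target_amp, init_phase, asm_propagate, display_on_slm,
                  capture_camera_amplitude, steps=500, lr=0.01):
    """Optimize an SLM phase pattern using captured camera images in the loss."""
    phase = init_phase.detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([phase], lr=lr)
    for _ in range(steps):
        # Simulated reconstruction through a differentiable propagation model.
        amp_sim = asm_propagate(torch.exp(1j * phase)).abs()
        # Physical reconstruction: show the phase on the SLM, grab a camera frame.
        with torch.no_grad():
            display_on_slm(phase)
            amp_cap = capture_camera_amplitude()
        # Straight-through substitution: the loss is evaluated on the captured
        # amplitude, while gradients flow through the simulated one.
        amp = amp_cap + (amp_sim - amp_sim.detach())
        loss = torch.nn.functional.mse_loss(amp, target_amp)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return phase.detach()
```

The straight-through substitution is what distinguishes this loop from purely simulated optimization: the loss reflects the physically reproduced image, yet gradients can still be computed through the differentiable model.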

Citations

... As a result, the computational speed for synthesizing holograms is two orders of magnitude faster using deep learning, compared to a conventional iterative approach [41]. Nowadays, deep learning approaches are recognized as foundational techniques in computer-generated holography [42][43][44][45]. ...
Article
Imaging is a longstanding research topic in optics and photonics and is an important tool for a wide range of scientific and engineering fields. Computational imaging is a powerful framework for designing innovative imaging systems by incorporating signal processing into optics. Conventional approaches involve individually designed optical and signal processing systems, which unnecessarily increases costs. Computational imaging, on the other hand, enhances the imaging performance of optical systems, visualizes invisible targets, and minimizes optical hardware. Digital holography and computer-generated holography are the roots of this field. Recent advances in information science, such as deep learning, and increasing computational power have rapidly driven computational imaging and have resulted in the reinvention of these imaging technologies. In this paper, I survey recent research topics in computational imaging, where optical randomness is key. Imaging through scattering media, non-interferometric quantitative phase imaging, and real-time computer-generated holography are representative examples. These recent optical sensing and control technologies will serve as the foundations of next-generation imaging systems in various fields, such as biomedicine, security, and astronomy.
... Holographic displays [1,2] are ideal 3D display techniques that calculate the propagation and interference of light waves on a computer to create computer-generated holograms (CGHs) [3,4] and reproduce 3D images using an optical system. The calculation methods for CGHs include point cloud methods [5,6], polygon methods [7][8][9], light-field and layered methods [10][11][12][13][14], and deep-learning-based methods [15][16][17][18]. Particularly, owing to the simplicity of the calculation, the point-cloud method can be used for any type of 3D data, such as objects represented by polygons or layers. ...
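Since the point-cloud method is mentioned above, a minimal NumPy sketch of the idea follows: the hologram is formed by superposing a spherical wave from every object point and, in this example, keeping only the phase. The grid size, pixel pitch, wavelength, and phase-only encoding are illustrative assumptions, not values from the cited works.

```python
# Minimal point-cloud CGH sketch: spherical-wave superposition on the hologram
# plane, followed by phase-only encoding. All parameters are illustrative.
import numpy as np

def point_cloud_cgh(points, amps, nx=512, ny=512, pitch=8e-6, wavelength=532e-9):
    """Accumulate spherical waves from object points on the hologram plane."""
    k = 2 * np.pi / wavelength
    x = (np.arange(nx) - nx / 2) * pitch
    y = (np.arange(ny) - ny / 2) * pitch
    X, Y = np.meshgrid(x, y)
    field = np.zeros((ny, nx), dtype=np.complex128)
    for (xj, yj, zj), aj in zip(points, amps):
        r = np.sqrt((X - xj) ** 2 + (Y - yj) ** 2 + zj ** 2)
        field += aj * np.exp(1j * k * r) / r
    # A phase-only hologram keeps only the argument of the complex field.
    return np.angle(field)
```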
Article
The main disadvantage of computer-generated holograms (CGHs) calculated with the direct integral method is the high computational cost, which grows with the number of object points and the hologram size. This can be addressed by the phase-added stereogram (PAS), a fast calculation method for CGHs. PAS divides the hologram into small blocks and calculates the point-spread functions (PSFs) of the object points in the Fourier domain of each block. The PSF can be approximated using sparse spectra, which accelerate calculations. However, this approximation degrades the image quality. In this study, we improved the image quality of the PAS using deep learning while maintaining high computational speed.
... Our analysis can be a novel study method for drug release dynamics. Because the holographic-type behaviors of drug release processes can be assimilated with deep learning methods [88], our model could bridge the discrepancy between the conception and the real implementation of holographic imaging methods in the study of drug release dynamics [89,90]. In this context, theoretical simulations employing our model can be correlated with data-driven approaches, which make use of holographic image reconstruction methods for monitoring drug release efficiency [91,92]. ...
... Such a relation can describe drug release dynamics of Fickian and non-Fickian types in accordance with standard drug release models [31,[88][89][90][91][92]95]. ...
Article
A unitary model of drug release dynamics is proposed, assuming that the polymer–drug system can be assimilated into a multifractal mathematical object. We then describe drug release dynamics in terms that imply, via Scale Relativity Theory, the functionality of continuous and non-differentiable curves (fractal or multifractal curves), possibly leading to holographic-like behaviors. At such a conjuncture, the Schrödinger and Madelung multifractal scenarios become compatible: in the Schrödinger multifractal scenario, various modes of drug release can be “mimicked” (via period doubling, damped oscillations, modulated and “chaotic” regimes), while the Madelung multifractal scenario involves multifractal diffusion laws (Fickian and non-Fickian diffusions). In conclusion, we propose a unitary model for describing release dynamics in polymer–drug systems, in which the polymer–drug dynamics can be described by employing Scale Relativity Theory in both the monofractal and multifractal cases.
... While deconvolution methods are valuable, the reconstructed images often suffer from reconstruction errors and artifacts. Although deep learning methods are now being developed to improve the performance of holography systems, especially in the reconstruction stage, challenges remain, including the need for a large data set and time-consuming training processes [11,12]. Computational imaging methods have been developed to tune the ARP independently of the LRP, but they require special optical configurations (SLMs) and therefore cannot be used with existing commercial microscopes [13][14][15][16][17]. ...
Article
Axial resolution is one of the most important characteristics of a microscope. In all microscopes, a high axial resolution is desired in order to discriminate information efficiently along the longitudinal direction. However, when studying thick samples that do not contain laterally overlapping information, a low axial resolution is desirable, as information from multiple planes can be recorded simultaneously from a single camera shot instead of plane-by-plane mechanical refocusing. In this study, we increased the focal depth of an infrared microscope non-invasively by introducing a binary axicon fabricated on a barium fluoride substrate close to the sample. Preliminary results of imaging the thick and sparse silk fibers showed an improved focal depth with a slight decrease in lateral resolution and an increase in background noise.
... [33] Deep learning refers to the application of neural networks to learn and simulate the physical processes involved in hologram generation, enabling complex and efficient holographic imaging. [34] The LUT method, in particular, is easy to implement and has attracted wide attention. Proposed in 1997, the LUT method greatly improves the calculation speed, yet it consumes excessive memory, which lowers the efficiency of reading out the data. ...
Article
Computer-generated holography has been widely applied, and as research in this field deepens, the demand for memory and computational power in small AR and VR devices continues to increase. This paper introduces a hologram generation method, the symmetrically high-compressed look-up table method, which can reduce memory usage by 50%. In the offline computation, only half of the basic horizontal and vertical modulation factors are stored, halving the memory requirements without affecting the online speed. Its potential extends to various holographic applications, including the production of optical diffraction elements.
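The horizontal/vertical factorization behind such look-up-table methods can be sketched as follows: under the Fresnel approximation, each object point's 2D fringe pattern separates into the product of a 1D horizontal and a 1D vertical modulation factor, which can be precomputed per depth plane. The sketch below assumes point positions on the pixel grid and uses a circular shift for brevity; the symmetric 50% compression described in the paper is not shown.

```python
# Split look-up-table style CGH sketch (illustrative, not the paper's method):
# 1D Fresnel factors are precomputed per depth and combined by outer product.
import numpy as np

def fresnel_factor_1d(n, pitch, z, wavelength):
    """Precompute the 1D Fresnel modulation factor for one depth plane."""
    k = 2 * np.pi / wavelength
    u = (np.arange(n) - n / 2) * pitch
    return np.exp(1j * k * u ** 2 / (2 * z))

def slut_cgh(points, amps, nx, ny, pitch, wavelength, depth_planes):
    """points: (ix, iy, z) with ix, iy in pixel units; z one of depth_planes."""
    tables = {z: (fresnel_factor_1d(nx, pitch, z, wavelength),
                  fresnel_factor_1d(ny, pitch, z, wavelength))
              for z in depth_planes}
    field = np.zeros((ny, nx), dtype=np.complex128)
    for (ix, iy, z), a in zip(points, amps):
        h, v = tables[z]
        # Shift the precomputed 1D tables to the point position (circular shift
        # used for brevity) and accumulate their outer product.
        field += a * np.outer(np.roll(v, iy), np.roll(h, ix))
    return np.angle(field)
```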
... In computational holography, which utilizes the principles of interference and diffraction to calculate and reconstruct the amplitude and phase of light fields [1], holograms (i.e., recorded interference patterns in conventional holography) are computed digitally from predefined images by solving an inverse design problem instead of using optical systems [2], [3]. The Gerchberg-Saxton (GS) algorithm is widely used for phase retrieval in computational holography owing to its simplicity and fast convergence, although it usually suffers from limited image quality, stagnation in local optima, and inconsistency in its output [4]-[6]. ...
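For reference, a textbook Gerchberg-Saxton iteration for a phase-only Fourier hologram looks roughly like the NumPy sketch below; a single FFT is assumed as the propagation model, and the iteration count and random initialization are arbitrary choices.

```python
# Textbook Gerchberg-Saxton sketch for a phase-only Fourier hologram.
# A single FFT models far-field propagation; parameters are illustrative.
import numpy as np

def gerchberg_saxton(target_amp, iterations=50, seed=0):
    """Retrieve an SLM phase whose Fourier transform approximates target_amp."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, target_amp.shape)
    for _ in range(iterations):
        # SLM plane: phase-only constraint (unit amplitude).
        slm_field = np.exp(1j * phase)
        # Propagate to the image plane (far field via FFT).
        img_field = np.fft.fftshift(np.fft.fft2(slm_field))
        # Image plane: keep the phase, impose the target amplitude.
        img_field = target_amp * np.exp(1j * np.angle(img_field))
        # Back-propagate and keep only the phase for the next iteration.
        slm_field = np.fft.ifft2(np.fft.ifftshift(img_field))
        phase = np.angle(slm_field)
    return phase
```

Each iteration enforces unit amplitude at the SLM plane and the target amplitude at the image plane; the stagnation in local optima mentioned above arises from this alternating projection.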
Article
Metasurface holography has aroused immense interest for producing holographic images with high quality, freedom from higher-order diffraction, and large viewing angles by using a planar artificial sheet consisting of subwavelength nanostructures. Despite remarkable progress, dynamically tunable metasurface holography in the visible band has rarely been reported due to limited available tuning methods. In this work, we propose and numerically demonstrate a thermally tunable vanadium dioxide (VO2) nanofin based binary-phase metasurface, which generates holographic information in the visible that varies with temperature. The insulator-to-metal phase transition in VO2 nanofins allows two independent binary-phase holograms generated by machine learning to be encoded in the respective phases of VO2 and switched under thermal regulation. By elaborately designing the dimensions and compensated phase of the VO2 nanofins, high-quality images are reconstructed at the corresponding temperatures under appropriate chiral illumination. In contrast, much poorer images are produced under inappropriate chiral illumination. We further demonstrate the advantage of applying the VO2 phase-compensated metasurface in high-security digital encryption, where two desired character combinations are read out with appropriate excitations and temperatures, whereas one identical fraudulent message is received with inappropriate excitations. Our design approach offers a new and efficient method to realize tunable metasurfaces, which is promising for dynamic display, information encryption, optical anti-counterfeiting, etc.
... It may arguably become one of the main catalysts in designing complex optical configurations in the near future [40]. In the field of computational holography, for example, mature machine learning methods are already state-of-the-art [41,42]. Deep learning architectures for freeform design have also been presented in past research, but so far mainly within the domain of imaging optics, in order to find starting points close to a final solution [43][44][45][46]. ...
... By including the (frozen) raytracing model in the training phase of the freeform prediction architecture, the emphasis of model 2 is on replicating the ground truth irradiance distribution, as opposed to replicating the ground truth freeform surface, which is not considered in the training of the second model. This approach is somewhat related to the unsupervised learning strategy that was adopted in computational holography instead of supervised learning with an extensive set of random phase masks paired with their simulated amplitude patterns [42]. A main difference is that computational holography can rely on an analytic forward/backward operator for the complex field at the image plane. ...
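The shared idea, training a predictor against a frozen differentiable forward model so that the loss is computed on the reproduced output rather than on ground-truth designs, can be sketched as below. Here, predictor and frozen_forward are hypothetical PyTorch modules standing in for the freeform network and the raytracing surrogate (or, in the holography case, the analytic propagation operator); this is a schematic, not the authors' training code.

```python
# Schematic sketch of training a predictor against a frozen differentiable
# forward model, so the loss compares reproduced outputs with the targets
# rather than comparing predicted designs with ground-truth designs.
import torch

def train_with_frozen_forward(predictor, frozen_forward, target_loader,
                              epochs=10, lr=1e-4):
    for p in frozen_forward.parameters():        # freeze the forward model
        p.requires_grad_(False)
    opt = torch.optim.Adam(predictor.parameters(), lr=lr)
    for _ in range(epochs):
        for target in target_loader:             # e.g. target irradiance maps
            design = predictor(target)           # predict a freeform topology
            simulated = frozen_forward(design)   # differentiable surrogate of raytracing
            loss = torch.nn.functional.mse_loss(simulated, target)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return predictor
```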
Article
Despite significant advances in the field of freeform optical design, there still remain various unsolved problems. One of these is the design of smooth, shallow freeform topologies, consisting of multiple convex, concave and saddle shaped regions, in order to generate a prescribed illumination pattern. Such freeform topologies are relevant in the context of glare-free illumination and thin, refractive beam shaping elements. Machine learning techniques already proved to be extremely valuable in solving complex inverse problems in optics and photonics, but their application to freeform optical design is mostly limited to imaging optics. This paper presents a rapid, standalone framework for the prediction of freeform surface topologies that generate a prescribed irradiance distribution, from a predefined light source. The framework employs a 2D convolutional neural network to model the relationship between the prescribed target irradiance and required freeform topology. This network is trained on the loss between the obtained irradiance and input irradiance, using a second network that replaces Monte-Carlo raytracing from source to target. This semi-supervised learning approach proves to be superior compared to a supervised learning approach using ground truth freeform topology/irradiance pairs; a fact that is connected to the observation that multiple freeform topologies can yield similar irradiance patterns. The resulting network is able to rapidly predict smooth freeform topologies that generate arbitrary irradiance patterns, and could serve as an inspiration for applying machine learning to other open problems in freeform illumination design.
... Since plankton are relatively small compared to the depth of field of the camera, only a small range of diffraction distances is useful. However, to find the in-focus distance, the above numerical reconstruction needs to be repeated over a wide range of diffraction depths [18], and auto-focusing algorithms [19] are then applied to find the clearest image. The key point of an autofocusing algorithm is to measure the sharpness of each reconstructed image using one of various metrics, such as the magnitude differential [20], variance, entropy [21], image eigenvalues [22], etc. [23][24][25]. ...
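A compact NumPy sketch of this metric-based autofocusing loop is shown below: the hologram is numerically refocused with the angular spectrum method over a range of candidate distances, and the distance that maximizes a simple sharpness metric (variance of the gradient magnitude here) is kept. The wavelength, pixel pitch, and choice of metric are illustrative assumptions.

```python
# Metric-based autofocusing sketch: propagate over candidate distances with the
# angular spectrum method and pick the sharpest reconstruction.
import numpy as np

def angular_spectrum(field, z, wavelength, pitch):
    """Propagate a complex field by distance z with the angular spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0))  # drop evanescent part
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

def sharpness(img):
    """Variance of the gradient magnitude as a simple focus metric."""
    gy, gx = np.gradient(img)
    return np.var(np.hypot(gx, gy))

def autofocus(hologram, z_range, wavelength=532e-9, pitch=3.45e-6):
    scores = [sharpness(np.abs(angular_spectrum(hologram, z, wavelength, pitch)))
              for z in z_range]
    return z_range[int(np.argmax(scores))]
```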
Article
Plankton is the base of the ocean ecosystem and is very sensitive to changes in its environment. Thus, monitoring the status of plankton in situ is of great importance for environmental study. Holography is one of the most effective methods for recording the living status of plankton underwater. However, the reconstruction of holograms, conventionally achieved by numerical calculation, is costly in both computation and memory. Moreover, plankton holograms are heavily contaminated by noise, and the useful information is sparsely distributed. To obtain high-speed visual image reconstruction from plankton holograms with good performance, this paper proposes an efficient, low-redundancy, multi-object reconstruction network for plankton holograms, called Holo-Net. The Holo-Net includes a plankton detection unit and a reconstruction unit. It first detects the plankton region and then maps it to a visual image. A plankton hologram dataset was produced to verify the efficiency of the proposed method. Experiments show that the Holo-Net achieves a PSNR and SSIM of up to 20.61 and 0.65, respectively. More importantly, the Holo-Net is at least 100 times faster than the numerical method. We believe this work will facilitate the development of a compact in-situ plankton holographic monitoring system and help research on the marine biosystem.
... Histologically stained tissue biopsies are scanned routinely for medical diagnosis using brightfield microscopy. Holography, a quantitative label-free microscopy technique, does not require tissue staining; it can provide rich data suitable for automated deep learning analysis [1][2][3][4][5][6] and grants access to new metrics such as local dry mass [7][8][9]. This is done by acquiring the quantitative phase profile of the sample, which corresponds to the refractive index of the sample at all points in the image. ...
Article
Dynamic holographic profiling of thick samples is limited due to the reduced field of view (FOV) of off-axis holography. We present an improved six-pack holography system for the simultaneous acquisition of six complex wavefronts in a single camera exposure from two fields of view (FOVs) and three wavelengths, for quantitative phase unwrapping of thick and extended transparent objects. By dynamically generating three synthetic wavelength quantitative phase maps for each of the two FOVs, with the longest wavelength being 6207 nm, hierarchical phase unwrapping can be used to reduce noise while maintaining the improvements in the 2π phase ambiguity due to the longer synthetic wavelength. The system was tested on a 7 μm tall PDMS microchannel and is shown to produce quantitative phase maps with 96% accuracy, while the hierarchical unwrapping reduces noise by 93%. A monolayer of live onion epidermal tissue was also successfully scanned, demonstrating the potential of the system to dynamically decrease scanning time of optically thick and extended samples.
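The synthetic-wavelength step used here can be illustrated with a short sketch: combining two wrapped phase maps acquired at wavelengths λ1 and λ2 yields a coarse phase map at the synthetic wavelength Λ = λ1·λ2 / |λ1 − λ2|, which extends the unambiguous 2π range before the finer maps are unwrapped hierarchically. The function below is a generic two-wavelength calculation, not the six-pack system's processing pipeline, and the inputs are assumed to be co-registered wrapped phase maps.

```python
# Generic two-wavelength synthetic-wavelength phase sketch (illustrative).
import numpy as np

def synthetic_phase(phi1, phi2, lam1, lam2):
    """Combine two wrapped phase maps into a coarse synthetic-wavelength map."""
    lam_synth = lam1 * lam2 / abs(lam1 - lam2)       # synthetic wavelength
    phi_synth = np.mod(phi1 - phi2, 2 * np.pi)       # wrapped synthetic phase
    opd = phi_synth * lam_synth / (2 * np.pi)        # coarse optical path difference
    return lam_synth, opd
```

The coarse optical path difference can then guide the unwrapping of the finer single-wavelength phase maps, which is the hierarchical step described in the abstract.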
... Artificial intelligence for optics has unparalleled advantages, especially in the field of holography. For example, deep learning can address challenging inverse problems in holographic imaging, where the objective is to recover the original scene or object properties from observed images or measurements, and it can enhance the resolution of optical imaging systems beyond their traditional diffraction limit [24][25][26][27][28][29][30], etc. Research at the intersection of optics and deep learning aims to solve many tasks with one model, and one important problem is how to guarantee alignment between the distribution of any given downstream data and tasks and that of the pretrained model. This means that the same model and weights can only be applied to a specific environment. ...