Conference Paper

Robust reconstruction of arbitrarily deformed bokeh from ordinary multiple differently focused images

National Institute of Informatics, Research Organization of Information and Systems, Tokyo, Japan
DOI: 10.1109/ICIP.2010.5650900 · Conference: 2010 17th IEEE International Conference on Image Processing (ICIP)
Source: IEEE Xplore

ABSTRACT This paper presents a method for generating severely deformed bokeh in images reconstructed from ordinary multiple differently focused images that contain only simple bokeh, such as Gaussian blur. We previously proposed scene refocusing with various iris shapes by applying a three-dimensional filter to the multi-focus images. However, that method implicitly assumed that the iris can be expressed mathematically and possesses some symmetry, such as a horizontally opened iris. In this paper, the captured multi-focus images are first robustly decomposed into components, each of which passes through its own corresponding pinhole on the lens, by using dimension reduction and a two-dimensional filter. Then, by appropriately composing these components, we reconstruct arbitrarily deformed bokeh introduced by any user-defined iris. Experiments show that our novel method can generate even severely deformed bokeh that lacks simple symmetry.
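To make the composition step concrete, here is a minimal numpy sketch, assuming the multi-focus stack has already been decomposed into per-pinhole component images; the array layout, the helper name recompose_with_iris, and the crescent-shaped example iris are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def recompose_with_iris(components, iris_mask):
        # components: (U, V, H, W) array, one image per pinhole position on
        # the lens plane (assumed already recovered from the multi-focus
        # stack via dimension reduction and 2D filtering).
        # iris_mask: (U, V) user-defined aperture transmittance; it may be
        # arbitrarily asymmetric (1 = open, 0 = blocked).
        weights = iris_mask / iris_mask.sum()  # normalize total exposure
        # A weighted sum over the lens plane reproduces the bokeh that the
        # user-defined iris would have produced at capture time.
        return np.tensordot(weights, components, axes=([0, 1], [0, 1]))

    # Example: a crescent-shaped (strongly asymmetric) iris on a 9x9 grid.
    U = V = 9
    u, v = np.meshgrid(np.linspace(-1, 1, U), np.linspace(-1, 1, V), indexing="ij")
    iris = ((u**2 + v**2 < 1.0) & ((u - 0.4)**2 + v**2 > 0.5)).astype(float)

    components = np.random.rand(U, V, 64, 64)          # stand-in pinhole images
    refocused = recompose_with_iris(components, iris)  # (64, 64) result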

  • Citation context: "Recently, researchers in computer vision utilize multi-focus images in a similar way, where they are especially involved in its applications for view interpolation [28]. Our preliminary results have shown that, unlike 3D filters [36]–[38] for scene refocusing, even deformed bokeh can be robustly reconstructed without aliasing artifacts if we decompose 3D multi-focus images into a 4D light field by adopting dimension reduction based on the Fourier slice theorem [41], [42]. However, such an approach needs dimension reduction in appropriate directions repeatedly to obtain a scene-dependent 2D image for each shifted pinhole on the lens plane, and it prevents us from achieving sufficient efficiency of dense light field synthesis and rendering, especially for practical applications such as flexible scene refocusing because, generally, we must reconstruct a large number of shifted pinhole images to synthesize the corresponding light field completely."
    ABSTRACT: Researchers in computational photography, microscopic imaging, and related fields aim at scene refocusing beyond extended depth of field so that users can observe objects effectively. Ordinary all-in-focus image reconstruction from a sequence of multi-focus images achieves extended depth of field, where the reconstructed image corresponds to capture through a pinhole at the center of the lens. In this paper, we propose a novel method for reconstructing all-in-focus images through shifted pinholes on the lens, based on 3D frequency analysis of multi-focus images. Such shifted pinhole images are obtained by a linear combination of the multi-focus images with scene-independent 2D filters in the frequency domain. The proposed method enables us to efficiently synthesize a dense 4D light field on the lens plane for image-based rendering, in particular robust scene refocusing with arbitrary bokeh. Our novel method, using simple linear filters, not only reconstructs all-in-focus images even for shifted pinholes more robustly than conventional methods that depend on scene/focus estimation, but also achieves scene refocusing without the resolution limitations of recent approaches that use special devices such as lens arrays in computational photography.
    IEEE Transactions on Image Processing 11/2013; 22(11):4407-21. DOI: 10.1109/TIP.2013.2273668
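As a rough illustration of that linear, scene-independent filtering, the sketch below combines the FFTs of the focal slices with per-slice 2D frequency-domain filters. Note that the plain phase ramps and the disparity_per_slice parameter used here are assumptions that model only the parallax shift of each focal plane toward the shifted pinhole's viewpoint; the filters actually derived from the paper's 3D frequency analysis also handle the defocus component.

    import numpy as np

    def shifted_pinhole_image(stack, pinhole_uv, disparity_per_slice):
        # stack: (K, H, W) multi-focus stack; pinhole_uv: (du, dv) offset of
        # the virtual pinhole on the lens plane; disparity_per_slice: length-K
        # array of image shift (pixels per unit lens offset) per focal setting.
        K, H, W = stack.shape
        fy = np.fft.fftfreq(H)[:, None]   # vertical spatial frequencies
        fx = np.fft.fftfreq(W)[None, :]   # horizontal spatial frequencies
        du, dv = pinhole_uv
        out = np.zeros((H, W), dtype=complex)
        for k in range(K):
            # Phase ramp = 2D filter realizing a sub-pixel translation of
            # slice k toward the shifted pinhole's viewpoint (parallax only).
            ramp = np.exp(-2j * np.pi * (fy * disparity_per_slice[k] * dv
                                         + fx * disparity_per_slice[k] * du))
            out += ramp * np.fft.fft2(stack[k])
        return np.real(np.fft.ifft2(out / K))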
  • ABSTRACT: Light-Field enables us to observe scenes from free viewpoints. However, it generally consists of enormous 4-D data that are not suitable for storage or transmission without effective compression. A 4-D Light-Field is highly redundant because it essentially contains only 3-D scene information. Although robust 3-D scene estimation such as depth recovery from a Light-Field is not easy, a method of reconstructing the Light-Field directly from 3-D information composed of multi-focus images, without any scene estimation, has been successfully derived. Based on that method, Light-Field compression via synthesized multi-focus images as an effective representation of 3-D scenes was previously proposed. In this paper, we study efficient compression of multi-focus images synthesized from a dense Light-Field by using DWT instead of DCT-based compression, in order to suppress degradation such as block noise. The quality of the reconstructed Light-Field is evaluated by PSNR and SSIM to analyze the characteristics of the residuals. Experimental results reveal that our method is far superior to Light-Field compression using disparity compensation at low bit-rates.
    Visual Communications and Image Processing (VCIP), 2012 IEEE; 01/2012
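A minimal sketch of the DWT route follows, using PyWavelets (my choice of library; the paper does not name one) and a keep-the-largest-coefficients threshold as a stand-in for a full codec with quantization and entropy coding. The psnr helper matches one of the evaluation metrics named above; SSIM is omitted for brevity.

    import numpy as np
    import pywt

    def dwt_compress(img, wavelet="haar", level=3, keep=0.05):
        # Keep only the largest `keep` fraction of detail coefficients and
        # zero the rest; unlike DCT on fixed blocks, the global wavelet
        # transform introduces no block-boundary (block noise) artifacts.
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        details = np.concatenate([d.ravel() for band in coeffs[1:] for d in band])
        thresh = np.quantile(np.abs(details), 1.0 - keep)
        kept = [coeffs[0]] + [
            tuple(np.where(np.abs(d) >= thresh, d, 0.0) for d in band)
            for band in coeffs[1:]
        ]
        rec = pywt.waverec2(kept, wavelet)
        return rec[:img.shape[0], :img.shape[1]]  # crop wavelet padding

    def psnr(ref, rec, peak=255.0):
        mse = np.mean((ref.astype(float) - rec.astype(float)) ** 2)
        return 10.0 * np.log10(peak**2 / mse)

    # Usage: rec = dwt_compress(focal_slice); print(psnr(focal_slice, rec))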
  • ABSTRACT: Light-Field enables us to observe scenes from free viewpoints. However, it generally consists of enormous 4-D data that are not suitable for storage or transmission without effective compression. A 4-D Light-Field is highly redundant because it essentially contains only 3-D scene information. Although robust 3-D scene estimation such as depth recovery from a Light-Field is not easy, we successfully derived a method of reconstructing the Light-Field directly from 3-D information composed of multi-focus images without any scene estimation. On the other hand, it is easy to synthesize multi-focus images from a Light-Field. In this paper, based on that method, we propose novel Light-Field compression via synthesized multi-focus images as an effective representation of 3-D scenes. Multi-focus images are easily compressed because they contain mostly low-frequency components. We show experimental results using synthetic and real images; the reconstruction quality of the method remains robust even at very low bit-rates.
    Image Processing (ICIP), 2012 19th IEEE International Conference on; 01/2012
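The forward direction mentioned above, synthesizing multi-focus images from a Light-Field, can be sketched as classic shift-and-add refocusing; the version below makes simplifying assumptions (integer-pixel shifts via np.roll, a centered reference pinhole, and a hypothetical slope parameter selecting the focal depth) and is not the authors' implementation.

    import numpy as np

    def refocus(light_field, slope):
        # light_field: (U, V, H, W) array of sub-aperture views L[u, v, y, x].
        # Shift each view in proportion to its lens-plane offset and average;
        # `slope` (pixels of shift per unit of u/v) selects the focal depth.
        U, V, H, W = light_field.shape
        acc = np.zeros((H, W))
        for u in range(U):
            for v in range(V):
                dy = int(round(slope * (u - U // 2)))
                dx = int(round(slope * (v - V // 2)))
                acc += np.roll(light_field[u, v], (dy, dx), axis=(0, 1))
        return acc / (U * V)

    # A multi-focus stack is this refocusing applied at several slopes:
    # stack = [refocus(L, s) for s in np.linspace(-2.0, 2.0, 9)]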