Conference Paper

Robust reconstruction of arbitrarily deformed bokeh from ordinary multiple differently focused images

Nat. Inst. of Inf., Res. Organ. of Inf. & Syst., Tokyo, Japan
DOI: 10.1109/ICIP.2010.5650900 Conference: 2010 17th IEEE International Conference on Image Processing (ICIP)
Source: IEEE Xplore

ABSTRACT This paper presents a method of generating seriously deformed bokeh on images reconstructed from ordinary multiple differently focused images that contain only simple bokeh such as Gaussian blurs. We previously proposed scene re-focusing with various iris shapes by applying a three-dimensional filter to the multi-focus images. However, that method implicitly assumed that the iris can be expressed mathematically and has some symmetry, such as a horizontally open iris. In this paper, the captured multi-focus images are first robustly decomposed into components, each of which passes through its own corresponding pin-hole on the lens, by using dimension reduction and a two-dimensional filter. Then, by appropriately composing these components, the method reconstructs arbitrarily deformed bokeh introduced by any user-defined iris. Experiments show that our novel method can generate even seriously deformed bokeh that has no simple symmetry.
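The composition step described above — recombining per-pinhole components under a user-defined iris — can be sketched as a weighted sum of sub-aperture images. This is a hedged illustration that assumes the decomposition into pinhole components has already been performed; the names `subaperture` and `iris_mask` are hypothetical, not from the paper.

```python
import numpy as np

def synthesize_bokeh(subaperture, iris_mask):
    """Recombine per-pinhole components under a user-defined iris.

    subaperture: (U, V, H, W) array, one image per pin-hole on the lens.
    iris_mask:   (U, V) user-defined aperture transmittance; any shape,
                 no symmetry required.
    Returns the recomposed (H, W) image.
    """
    w = iris_mask / iris_mask.sum()  # normalize so total light is preserved
    # Contract the (U, V) pinhole axes against the weights.
    return np.tensordot(w, subaperture, axes=([0, 1], [0, 1]))

# Toy example: a 5x5 pinhole grid and an arbitrary, asymmetric iris.
U = V = 5
rng = np.random.default_rng(0)
subs = rng.random((U, V, 8, 8))
iris = np.zeros((U, V))
iris[0, 2] = iris[1, 1] = iris[1, 3] = iris[2, 0] = iris[2, 4] = 1.0
img = synthesize_bokeh(subs, iris)
print(img.shape)  # (8, 8)
```

Because the weights are arbitrary per-pinhole gains, the synthesized iris needs no symmetry at all, which is the point of decomposing the stack first.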

  • ABSTRACT: This paper describes a method of image generation based on a transformation that integrates sequences of multiple differently focused images. First, we assume that a scene is defocused according to a geometrical blurring model. Then we combine the spatial frequencies of the scene and the sequence with a 3-D convolution filter that expresses how the scene is defocused in the sequence. The filter can be represented as a linear combination of ray-sets through each point of the lens. Based on this relation, in the 3-D frequency domain we extract each ray-set from the filter as certain frequency components and merge them to reconstruct various filters that can generate images with different viewpoints and blurs.
    2006 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2006); 06/2006 · 4.63 Impact Factor
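The geometrical blurring model in the abstract above can be made concrete with a small sketch. This is an assumed stand-in, not the authors' 3-D filter: a fronto-parallel scene at a single depth, disk PSFs whose radius grows with the focus error, and FFT-based circular convolution; the scale `k` mapping focus error to disk radius is hypothetical.

```python
import numpy as np

def disk_psf(radius, size):
    """Normalized disk PSF on a size x size grid (radius 0 -> a delta)."""
    y, x = np.mgrid[-(size // 2):size - size // 2, -(size // 2):size - size // 2]
    m = (x ** 2 + y ** 2) <= max(radius, 0.5) ** 2
    return m / m.sum()

def defocus_stack(scene, depth_z, focus_settings, k=1.0):
    """Blur a square scene at depth_z for each focus setting f.

    The disk radius is k * |f - depth_z| (the geometric defocus model);
    each slice is computed by FFT-based circular convolution.
    """
    F = np.fft.fft2(scene)
    stack = []
    for f in focus_settings:
        psf = disk_psf(k * abs(f - depth_z), scene.shape[0])
        otf = np.fft.fft2(np.fft.ifftshift(psf))  # move PSF center to origin
        stack.append(np.real(np.fft.ifft2(F * otf)))
    return np.array(stack)
```

Stacking the slices over `focus_settings` produces exactly the kind of multi-focus sequence that the 3-D filter in the abstract operates on; the in-focus slice (f equal to the scene depth) reduces to a delta PSF and reproduces the scene.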
  • ABSTRACT: This paper describes a method of scene re-focusing with various iris shapes by integrating a sequence of multiple differently focused images. First, we introduce a formula that combines the sequence and spatial information of a scene with a convolution of a 3-D blur. The blur expresses how the scene is defocused in the sequence. Based on the formula, in the 3-D frequency domain we can design filters that merge certain frequency components of the sequence to generate images that would be acquired with various iris shapes. Some experiments of scene re-focusing using synthetic and real images indicate that we can not only arbitrarily suppress blurs of the sequence but also generate images with asymmetrical blurs like motion blurs.
    Proceedings of the International Conference on Image Processing, ICIP 2006, October 8-11, Atlanta, Georgia, USA; 01/2006
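One way to illustrate the filter-design idea in the abstract above is a single-image blur-retargeting filter: divide out the source blur's frequency response and multiply in that of a user-defined iris. This Wiener-style sketch is an assumption standing in for the authors' 3-D frequency-domain filters; `eps` is a hypothetical regularizer that tames frequencies where the source response is near zero.

```python
import numpy as np

def retarget_blur(blurred, psf_src, psf_tgt, eps=1e-3):
    """Convert an image blurred by psf_src toward one blurred by psf_tgt.

    Both PSFs are centered arrays with the same shape as the image.
    The filter G is a regularized ratio of the two optical transfer
    functions, applied in the frequency domain.
    """
    Hs = np.fft.fft2(np.fft.ifftshift(psf_src))
    Ht = np.fft.fft2(np.fft.ifftshift(psf_tgt))
    G = Ht * np.conj(Hs) / (np.abs(Hs) ** 2 + eps)  # regularized OTF ratio
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))
```

Since `psf_tgt` can be any user-drawn mask, the retargeted blur can be asymmetric, e.g. a streak-shaped PSF mimicking motion blur, in the spirit of the experiments described above.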
  • ABSTRACT: We introduce a novel approach to shape from defocus, i.e., the problem of inferring the three-dimensional (3D) geometry of a scene from a collection of defocused images. Typically, in shape from defocus, the task of extracting geometry also requires deblurring the given images. A common approach to bypass this task relies on approximating the scene locally by a plane parallel to the image (the so-called equifocal assumption). We show that this approximation is indeed not necessary, as one can estimate 3D geometry while avoiding deblurring without strong assumptions on the scene. Solving the problem of shape from defocus requires modeling how light interacts with the optics before reaching the imaging surface. This interaction is described by the so-called point spread function (PSF). When the form of the PSF is known, we propose an optimal method to infer 3D geometry from defocused images that involves computing orthogonal operators which are regularized via functional singular value decomposition. When the form of the PSF is unknown, we propose a simple and efficient method that first learns a set of projection operators from blurred images and then uses these operators to estimate the 3D geometry of the scene from novel blurred images. Our experiments on both real and synthetic images show that the performance of the algorithm is relatively insensitive to the form of the PSF. Our general approach is to minimize the Euclidean norm of the difference between the estimated images and the observed images. The method is geometric in that we reduce the minimization to performing projections onto linear subspaces, by using inner product structures on both infinite- and finite-dimensional Hilbert spaces. Both proposed algorithms involve only simple matrix-vector multiplications which can be implemented in real-time.
    IEEE Transactions on Pattern Analysis and Machine Intelligence 04/2005; 27(3):406-17. · 4.80 Impact Factor
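The projection idea in the abstract above can be illustrated with a 1-D toy: stack two differently blurred observations of a scene and, for each candidate depth, measure the orthogonal residual of the stacked vector against the range of the corresponding stacked blur operator; the correct candidate leaves (nearly) zero residual, and no deblurring is performed. The Gaussian blur matrices and the depth-to-blur pairing below are illustrative assumptions, not the paper's learned operators.

```python
import numpy as np

def blur_matrix(n, sigma):
    """Row-normalized 1-D Gaussian blur operator (n x n)."""
    i = np.arange(n)
    H = np.exp(-((i[:, None] - i[None, :]) ** 2) / (2.0 * sigma ** 2))
    return H / H.sum(axis=1, keepdims=True)

def estimate_depth(obs_stack, candidates, n):
    """Pick the candidate blur pair whose stacked operator's range best
    contains obs_stack, i.e. with the smallest orthogonal residual."""
    residuals = []
    for s1, s2 in candidates:
        A = np.vstack([blur_matrix(n, s1), blur_matrix(n, s2)])
        Q, _ = np.linalg.qr(A)                 # orthonormal basis of range(A)
        r = obs_stack - Q @ (Q.T @ obs_stack)  # component outside range(A)
        residuals.append(np.linalg.norm(r))
    return int(np.argmin(residuals))

# Toy scene observed under the (unknown to the estimator) blur pair (1.0, 2.0).
n = 32
x = np.random.default_rng(1).random(n)
obs = np.concatenate([blur_matrix(n, 1.0) @ x, blur_matrix(n, 2.0) @ x])
cands = [(0.5, 2.5), (1.0, 2.0), (2.0, 1.0)]
print(estimate_depth(obs, cands, n))  # -> 1 (the true blur pair)
```

Note that only matrix-vector products and one projection per candidate are needed, matching the abstract's claim that the scene itself is never reconstructed.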