Conference Paper

Image Inpainting Considering Brightness Change and Spatial Locality of Textures and Its Evaluation.

DOI: 10.1007/978-3-540-92957-4_24 Conference: Advances in Image and Video Technology, Third Pacific Rim Symposium, PSIVT 2009, Tokyo, Japan, January 13-16, 2009. Proceedings
Source: DBLP

ABSTRACT Image inpainting techniques have been widely investigated as a means of removing undesired objects from an image. Conventionally, missing parts of an image are completed by optimizing an objective function based on pattern similarity. However, unnatural textures are easily generated due to two factors: (1) the samples available in the image are quite limited, and (2) pattern similarity is a necessary condition but not a sufficient one for reproducing natural textures. In this paper, we propose a new energy function that extends pattern similarity by accounting for brightness changes of sample textures (addressing (1)) and by introducing spatial locality as an additional constraint (addressing (2)). The effectiveness of the proposed method is demonstrated through qualitative and quantitative evaluation. Furthermore, the evaluation methods used in much of the inpainting literature are discussed.
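The two proposed extensions can be illustrated with a toy energy term. The sketch below is not the paper's formulation; the gain model, the distance penalty, and the weight `w_spatial` are illustrative assumptions showing how brightness compensation and spatial locality might enter a patch-matching cost.

```python
import numpy as np

def patch_energy(target, sample, sample_pos, target_pos, w_spatial=0.01):
    """Toy matching energy between a target patch and a candidate sample.

    Combines (1) SSD after compensating a global brightness gain between
    the patches, so samples differing only in brightness still match,
    and (2) a spatial-locality penalty preferring samples taken near the
    missing region. Names and weights are illustrative, not the paper's.
    """
    t = target.astype(float).ravel()
    s = sample.astype(float).ravel()
    # Brightness compensation: scale the sample so its mean intensity
    # matches the target's (guard against division by zero).
    gain = t.mean() / max(s.mean(), 1e-8)
    ssd = np.sum((t - gain * s) ** 2) / t.size
    # Spatial locality: penalize samples drawn far from the target patch.
    dist = np.linalg.norm(np.asarray(sample_pos, float)
                          - np.asarray(target_pos, float))
    return ssd + w_spatial * dist
```

With this cost, a sample patch that is a uniformly brightened copy of the target incurs zero appearance cost and is ranked purely by its distance to the hole.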

  • ABSTRACT: In this work we propose a new image inpainting technique that combines texture synthesis, anisotropic diffusion, a transport equation, and a new sampling mechanism designed to alleviate the computational burden of the inpainting process. Given an image to be inpainted, anisotropic diffusion is initially applied to generate a cartoon image. A block-based inpainting approach is then applied so as to combine the cartoon image with a transport-equation-based measure that dictates the order in which pixels are filled. A sampling region is then defined dynamically so as to support the propagation of edges toward image structures while avoiding unnecessary searches during the completion process. Finally, a cartoon-based metric is computed to measure the likeness between target and candidate blocks. Experimental results and comparisons against existing techniques attest to the good performance and flexibility of our technique when dealing with real and synthetic images.
    Pattern Recognition Letters 01/2014; 36:36–45.
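The block-based fill described above follows a greedy priority-driven loop common to exemplar-based methods. The skeleton below is a generic illustration, not that paper's algorithm: `priority_fn` and `find_best_patch` are hypothetical placeholders standing in for its transport-equation priority and cartoon-based patch metric, and the border clipping is deliberately simplistic.

```python
import numpy as np

def fill_by_priority(image, mask, priority_fn, find_best_patch, patch=9):
    """Skeleton of a greedy block-based fill loop.

    Repeatedly selects the missing pixel with the highest fill
    priority, copies the best-matching source patch over the hole
    pixels around it, and marks them filled. `priority_fn` and
    `find_best_patch` are placeholders for method-specific pieces.
    """
    img = image.copy()
    m = mask.copy()  # True where pixels are missing
    half = patch // 2
    while m.any():
        ys, xs = np.nonzero(m)
        # Pick the missing pixel with the highest fill priority.
        scores = [priority_fn(img, m, y, x) for y, x in zip(ys, xs)]
        best = int(np.argmax(scores))
        y, x = ys[best], xs[best]
        src = find_best_patch(img, m, y, x, half)
        # Clip the patch window to the image bounds (simplistic: the
        # source patch is not re-centered after clipping).
        y0, y1 = max(y - half, 0), min(y + half + 1, img.shape[0])
        x0, x1 = max(x - half, 0), min(x + half + 1, img.shape[1])
        hole = m[y0:y1, x0:x1]
        img[y0:y1, x0:x1][hole] = src[: y1 - y0, : x1 - x0][hole]
        m[y0:y1, x0:x1] = False  # mark the whole window as filled
    return img
```

Passing a constant priority and a constant-valued patch finder reduces this to a plain flood fill, which is a convenient way to sanity-check the loop.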
  • ABSTRACT: We present a data-driven method to predict the quality of an image completion method. Our method is based on the state-of-the-art non-parametric framework of Wexler et al. [2007]. It uses automatically derived search-space constraints for patch source regions, which lead to improved texture synthesis and semantically more plausible results. These constraints also facilitate performance prediction by allowing us to correlate output quality with features of the possible regions used for synthesis. We use our algorithm to first crop and then complete stitched panoramas. Our predictive ability is used to find an optimal crop shape before the completion is computed, potentially saving significant amounts of computation. Our optimized crop includes as much of the original panorama as possible while avoiding regions that can be less successfully filled in. Our predictor can also be applied to hole filling in the interior of images. In addition to extensive comparative results, we ran several user studies validating our predictive feature, the quality of our results relative to those of other state-of-the-art algorithms, and our automatic cropping algorithm.
    ACM Transactions on Graphics (TOG). 11/2012; 31(6).
  • ABSTRACT: In the field of augmented reality (AR), geometric and photometric registration is routinely achieved in real time. However, real-time geometric registration often leads to misalignment (e.g., jitter and drift) due to error in camera pose estimation. Due to limited resources on mobile devices, it is also difficult to implement state-of-the-art techniques for photometric registration on mobile AR systems. In order to solve these problems, we developed a mobile AR system in a significantly different way from conventional systems. In this system, captured omnidirectional images and virtual objects are registered geometrically and photometrically in an offline rendering process. The appropriate part of the pre-rendered omnidirectional AR image is shown to the user through a mobile device, with online registration between the real world and the pre-captured image. In order to investigate the validity of our new framework for mobile AR, we conducted experiments using the prototype system on a real site at Todai-ji Temple, a famous world cultural heritage site in Japan.
    SIGGRAPH Asia 2013 Symposium on Mobile Graphics and Interactive Applications; 11/2013
