An Image Inpainting Technique Based on the Fast Marching Method

Article (PDF available) · January 2004
DOI: 10.1080/10867651.2004.10487596
Vol. 9, No. 1: 25–36
An Image Inpainting Technique Based on
the Fast Marching Method
Alexandru Telea
Eindhoven University of Technology
Abstract. Digital inpainting provides a means for reconstruction of small damaged portions of an image. Although the inpainting basics are straightforward, most inpainting techniques published in the literature are complex to understand and implement. We present here a new algorithm for digital inpainting based on the fast marching method for level set applications. Our algorithm is very simple to implement, fast, and produces nearly identical results to more complex, and usually slower, known methods. Source code is available online.
1. Introduction
Digital inpainting, the technique of reconstructing small damaged portions of an image, has received considerable attention in recent years. Digital inpainting serves a wide range of applications, such as removing text and logos from still images or videos, reconstructing scans of deteriorated images by removing scratches or stains, or creating artistic effects. Most inpainting methods work as follows. First, the image regions to be inpainted are selected, usually manually. Next, color information is propagated inward from the region boundaries, i.e., the known image information is used to fill in the missing areas. In order to produce a perceptually plausible reconstruction, an inpainting technique should attempt to continue the isophotes (lines of equal gray value) as smoothly as possible inside the reconstructed region. In other words, the missing region should be inpainted so that the inpainted gray value and gradient extrapolate the gray value and gradient outside this region.
© A K Peters, Ltd.
25 1086-7651/04 $0.50 per page
26 journal of graphics tools
Several inpainting methods are based on the above ideas. In [Bertalmio 00, Bertalmio 01], the image smoothness information, estimated by the image Laplacian, is propagated along the isophote directions, estimated by the image gradient rotated 90 degrees. The Total Variational (TV) model [Chan and Shen 00a] uses an Euler-Lagrange equation coupled with anisotropic diffusion to maintain the isophotes' directions. The Curvature-Driven Diffusion (CDD) model [Chan and Shen 00b] enhances the TV method to drive diffusion along the isophotes' directions and thus allows inpainting of thicker regions. All above methods essentially solve a Partial Differential Equation (PDE) that describes the color propagation inside the missing region, subject to various heuristics that attempt to preserve the isophotes' directions. Preserving the isophotes, however desirable, is never perfectly attained in practice. The main problem is that both isophote estimation and information propagation are subject to numerical diffusion. Diffusion is desirable, as it stabilizes the PDEs to be solved, but it inevitably leads to a certain amount of blurring of the inpainted area.
A second type of method [Oliveira 01] repeatedly convolves a simple 3 × 3 filter over the missing regions to diffuse known image information to the missing pixels.
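The diffusion-by-convolution idea can be sketched as follows. This is an illustrative Python/NumPy sketch, not Oliveira et al.'s actual kernel or code: here we simply replace each unknown pixel by the average of its four neighbors each iteration, keeping known pixels fixed.

```python
import numpy as np

def convolve_inpaint(img, mask, n_iter=500):
    """Illustrative diffusion inpainting: repeatedly average each unknown
    pixel (mask == True) with its 4-neighbours, keeping known pixels fixed.
    Assumption: a plain 4-neighbour kernel, not Oliveira et al.'s kernel."""
    out = img.astype(float).copy()
    out[mask] = out[~mask].mean()          # crude initial fill of the hole
    for _ in range(n_iter):
        p = np.pad(out, 1, mode='edge')    # replicate borders
        avg = (p[:-2, 1:-1] + p[2:, 1:-1] +
               p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
        out[mask] = avg[mask]              # update only the missing pixels
    return out
```

On smooth images this converges to the harmonic fill determined by the hole boundary, which illustrates why such schemes blur sharp isophotes crossing thick holes.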
However impressive, the above methods have several drawbacks that preclude their use in practice. The PDE-based methods require implementing nontrivial iterative numerical methods and techniques, such as anisotropic diffusion and multiresolution schemes [Bertalmio 00]. Little or no information is given on practical implementation details such as various thresholds or discretization methods, although some steps are mentioned as numerically unstable. Moreover, such methods are quite slow, e.g., a few minutes for the relatively small inpainting region shown in Figure 1. In contrast, the convolution-based method described in [Oliveira 01] is fast and simple to implement. However, this method has no provisions for preserving the isophotes' directions. High-gradient image areas must be selected manually before inpainting and treated separately so as not to be blurred.
We propose a new inpainting algorithm based on propagating an image smoothness estimator along the image gradient, similar to [Bertalmio 00]. We estimate the image smoothness as a weighted average over a known image neighborhood of the pixel to inpaint. We treat the missing regions as level sets and use the fast marching method (FMM) described in [Sethian 96] to propagate the image information. Our approach has several advantages:

• it is very simple to implement (the complete pseudocode is given here);

• it is considerably faster than other inpainting methods: processing an 800 × 600 image (Figure 1) takes under three seconds on an 800 MHz PC;

• it produces very similar results as compared to the other methods;
Figure 1. An 800 × 600 image inpainted in less than three seconds.
• it can easily be customized to use different local inpainting strategies.
In Section 2, we describe our method. Section 3 presents several results, details our method's advantages and limitations in comparison to other methods, and discusses possible enhancements. Source code of a sample method implementation is available online at the address listed at the end of the paper.
2. Our Method
This section describes our inpainting method. First, we introduce the mathematical model on which we base our inpainting (Section 2.1). Next, we describe how the missing regions are inpainted using the FMM (Section 2.2). Finally, we detail the implementation of inpainting one point on the missing region's boundary (Section 2.3).
2.1. Mathematical Model
To explain our method, consider Figure 2, in which one must inpaint the point p situated on the boundary ∂Ω of the region to inpaint Ω. Take a small neighborhood Bε(p) of size ε of the known image around p (Figure 2(a)). As described in [Bertalmio 00, Oliveira 01, Chan and Shen 00a], the inpainting of p should be determined by the values of the known image points close to p, i.e., in Bε(p). We first consider gray value images, color images being a natural extension (see Section 2.4). For ε small enough, we consider a first-order approximation Iq(p) of the image in point p, given the image I(q) and gradient ∇I(q) values of point q (Figure 2(b)):

Iq(p) = I(q) + ∇I(q) · (p − q).   (1)
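Equation (1) is the usual first-order Taylor extrapolation from a known point q toward p. A minimal numeric sketch (the image and coordinates below are our hypothetical example, not from the paper):

```python
import numpy as np

# Eq. (1): I_q(p) = I(q) + grad I(q) . (p - q)
def first_order_estimate(I_q, grad_q, p, q):
    return I_q + float(np.dot(grad_q, np.asarray(p, float) - np.asarray(q, float)))

# For a linear image I(x, y) = 2x + 3y the estimate is exact:
q, p = (1.0, 1.0), (2.0, 3.0)
est = first_order_estimate(2*1 + 3*1, np.array([2.0, 3.0]), p, q)
# est == 2*2 + 3*3 == 13
```

For a non-linear image the estimate is only accurate for small ||p − q||, which is why ε must stay small.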
[Figure 2 labels: region to be inpainted Ω; boundary ∂Ω; known image; normal N; point p; known neighborhood Bε(p) of p; point q and ∇I.]
Figure 2. The inpainting principle.
Next, we inpaint point p as a function of all points q in Bε(p) by summing the estimates of all points q, weighted by a normalized weighting function w(p, q):

I(p) = [ Σ_{q ∈ Bε(p)} w(p, q) (I(q) + ∇I(q) · (p − q)) ] / [ Σ_{q ∈ Bε(p)} w(p, q) ].   (2)

The weighting function w(p, q), detailed in Section 2.3, is designed such that the inpainting of p propagates the gray value as well as the sharp details of the image over Bε(p).
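Equation (2) is a normalized weighted sum of the first-order estimates of Equation (1). A minimal sketch (the neighbour representation as tuples and the uniform weight below are our illustrative choices):

```python
import numpy as np

def inpaint_estimate(p, neighbours, weight):
    """Eq. (2): normalized weighted sum of first-order estimates.
    neighbours: list of (q_position, I(q), grad I(q)); weight: w(p, q)."""
    p = np.asarray(p, float)
    num = den = 0.0
    for q_pos, I_q, grad_q in neighbours:
        q = np.asarray(q_pos, float)
        w = weight(p, q)
        num += w * (I_q + float(np.dot(grad_q, p - q)))
        den += w
    return num / den
```

With exact gradients of a linear image, every neighbour's estimate agrees, so the weighted average reproduces the true value regardless of w.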
2.2. Adding Inpainting to the FMM
Section 2.1 explained how to inpaint a point on the unknown region's boundary as a function of known image pixels only. To inpaint the whole Ω, we iteratively apply Equation 2 to all the discrete pixels of ∂Ω, in increasing distance from ∂Ω's initial position ∂Ωi, and advance the boundary inside Ω until the whole region has been inpainted (see pseudocode in Figure 3). Inpainting points in increasing distance order from ∂Ωi ensures that areas closest to known image points are filled in first, thus mimicking manual inpainting techniques [Bertalmio 00, Bertalmio 01].

Implementing the above requires a method that propagates ∂Ω into Ω by advancing the pixels of ∂Ω in order of their distance to the initial boundary ∂Ωi. For this, we use the fast marching method. In brief, the FMM is an algorithm that solves the Eikonal equation:

|∇T| = 1 on Ω, with T = 0 on ∂Ω.   (3)

The solution T of Equation 3 is the distance map of the Ω pixels to the boundary ∂Ω. The level sets, or isolines, of T are exactly the successive
∂Ωi = boundary of region to inpaint
∂Ω = ∂Ωi
while (∂Ω not empty)
{
    p = pixel of ∂Ω closest to ∂Ωi
    inpaint p using Eqn. 2
    advance ∂Ω into Ω
}
Figure 3. Inpainting algorithm.
boundaries of the shrinking Ω that we need for inpainting. The normal N to ∂Ω, also needed for inpainting, is exactly ∇T. The FMM guarantees that pixels of ∂Ω are always processed in increasing order of their distance-to-boundary T [Sethian 99], i.e., that we always inpaint the closest pixels to the known image area first.
We prefer the FMM over other Distance Transform (DT) methods that compute the distance map T to a boundary (e.g., [Borgefors 84, Borgefors 86, Meijster et al. 00]). The FMM's main advantage is that it explicitly maintains the narrow band that separates the known from the unknown image area and specifies which is the next pixel to inpaint. Other DT methods compute the distance map T but do not maintain an explicit narrow band. Adding a narrow band structure to these methods would complicate their implementation, whereas the FMM provides this structure by default.
To explain our use of the FMM in detail, and since the FMM is not straightforward to implement from the reference literature [Sethian 96, Sethian 99], we provide next its complete pseudocode. The FMM maintains a so-called narrow band of pixels, which is exactly our inpainting boundary ∂Ω. For every image pixel, we store its value T, its image gray value I (both represented as floating-point values), and a flag f that may have three values:

• BAND: the pixel belongs to the narrow band. Its T value undergoes update.

• KNOWN: the pixel is outside ∂Ω, in the known image area. Its T and I values are known.

• INSIDE: the pixel is inside ∂Ω, in the region to inpaint. Its T and I values are not yet known.
The FMM has an initialization and a propagation phase, as follows. First, we set T to zero on and outside the boundary of the region to inpaint and to some large value (in practice 10^6) inside, and initialize f over the whole image as explained above. All BAND points are inserted in a heap
while (NarrowBand not empty)
{
    extract P(i,j) = head(NarrowBand);          /* STEP 1 */
    f(i,j) = KNOWN;
    for (k,l) in (i-1,j),(i,j-1),(i+1,j),(i,j+1)
        if (f(k,l) != KNOWN)
        {
            if (f(k,l) == INSIDE)
            {
                f(k,l) = BAND;                  /* STEP 2 */
                inpaint(k,l);                   /* STEP 3 */
            }
            T(k,l) = min(solve(k-1,l,k,l-1),    /* STEP 4 */
                         solve(k+1,l,k,l-1),
                         solve(k-1,l,k,l+1),
                         solve(k+1,l,k,l+1));
            insert (k,l) in NarrowBand;         /* STEP 5 */
        }
}

float solve(int i1,int j1,int i2,int j2)
{
    float sol = 1.0e6;
    if (f(i1,j1) == KNOWN)
        if (f(i2,j2) == KNOWN)
        {
            float r = sqrt(2 - (T(i1,j1)-T(i2,j2))*(T(i1,j1)-T(i2,j2)));
            float s = (T(i1,j1)+T(i2,j2)-r)/2;
            if (s >= T(i1,j1) && s >= T(i2,j2)) sol = s;
            else
            { s += r; if (s >= T(i1,j1) && s >= T(i2,j2)) sol = s; }
        }
        else sol = 1 + T(i1,j1);
    else if (f(i2,j2) == KNOWN) sol = 1 + T(i2,j2);
    return sol;
}

Figure 4. Fast marching method used for inpainting.
NarrowBand sorted in ascending order of their T values. Next, we propagate the T, f, and I values using the code shown in Figure 4. Step 1 extracts the BAND point with the smallest T. Step 2 marches the boundary inward by adding new points to it. Step 3 performs the inpainting (see Section 2.3). Step 4 propagates the value T of point (i, j) to its neighbors (k, l) by solving the finite difference discretization of Equation 3 given by

max(D^{−x}T, −D^{+x}T, 0)^2 + max(D^{−y}T, −D^{+y}T, 0)^2 = 1,   (4)

where D^{−x}T(i, j) = T(i, j) − T(i−1, j) and D^{+x}T(i, j) = T(i+1, j) − T(i, j), and similarly for y. Following the upwind idea of Sethian [Sethian 96], we solve Equation 4 for (k, l)'s four quadrants and retain the smallest solution. Finally, Step 5 (re)inserts (k, l) with its new T in the heap.
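The distance-propagation part of the loop above can be sketched in Python. This is our illustration, not the paper's C++ code: a heapq with lazy deletion of stale entries stands in for the sorted NarrowBand heap, solve() uses the standard regularized form of the upwind update (equivalent to the two-root form above where that form is defined), and the inpainting call of Step 3 is omitted so only the distance map T is produced.

```python
import heapq
import numpy as np

KNOWN, BAND, INSIDE = 0, 1, 2
INF = 1.0e6

def _known(f, i, j):
    h, w = f.shape
    return 0 <= i < h and 0 <= j < w and f[i, j] == KNOWN

def solve(T, f, i1, j1, i2, j2):
    # Upwind update of Eq. (4) from one horizontal and one vertical neighbour.
    k1, k2 = _known(f, i1, j1), _known(f, i2, j2)
    if k1 and k2:
        t1, t2 = T[i1, j1], T[i2, j2]
        d = abs(t1 - t2)
        if d >= 1.0:                      # gradient too steep: one-sided update
            return 1.0 + min(t1, t2)
        return (t1 + t2 + np.sqrt(2.0 - d * d)) / 2.0
    if k1:
        return 1.0 + T[i1, j1]
    if k2:
        return 1.0 + T[i2, j2]
    return INF

def fmm_distance(mask):
    """Distance T of every hole pixel (mask == True) to the hole boundary,
    propagated in increasing order; inpainting of popped pixels omitted."""
    h, w = mask.shape
    T = np.where(mask, INF, 0.0)
    f = np.where(mask, INSIDE, KNOWN)
    heap = []
    # Initial narrow band: known pixels bordering the hole, at T = 0.
    for i in range(h):
        for j in range(w):
            if not mask[i, j] and any(
                    0 <= k < h and 0 <= l < w and mask[k, l]
                    for k, l in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))):
                heapq.heappush(heap, (0.0, i, j))
    while heap:
        t, i, j = heapq.heappop(heap)              # STEP 1
        if f[i, j] == KNOWN and mask[i, j]:
            continue                               # stale duplicate entry
        f[i, j] = KNOWN
        for k, l in ((i-1, j), (i+1, j), (i, j-1), (i, j+1)):
            if 0 <= k < h and 0 <= l < w and mask[k, l] and f[k, l] != KNOWN:
                T[k, l] = min(solve(T, f, k-1, l, k, l-1),   # STEP 4
                              solve(T, f, k+1, l, k, l-1),
                              solve(T, f, k-1, l, k, l+1),
                              solve(T, f, k+1, l, k, l+1))
                f[k, l] = BAND                     # STEP 2
                heapq.heappush(heap, (T[k, l], k, l))  # STEP 5
    return T
```

On a 3 × 3 hole this yields T ≈ √2/2 at the hole corners, ≈ 0.97 at the edge midpoints, and ≈ 1.67 at the center, monotonically increasing toward the hole interior as the FMM guarantees.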
2.3. Inpainting One Point
We consider now how to inpaint a newly discovered point (k, l) as a function of the KNOWN points around it, following the idea described in Section 2.1 (Step 3 in Figure 4, detailed in Figure 5). We iterate over the KNOWN points in the neighborhood Bε of the current point (i, j) and compute I(i, j) following Equation 2. The image gradient ∇I (gradI in the code) is estimated by central differences. As stated in Section 2.1, the design of the weighting function w(p, q) is crucial to propagate the sharp image details and the smooth zones as such into the inpainted zone. We design w(p, q) = dir(p, q) · dst(p, q) · lev(p, q)
void inpaint(int i,int j)
{
    for (all (k,l) in Bε(i,j) such that f(k,l) != OUTSIDE)
    {
        r = vector from (i,j) to (k,l);
        dir = r * gradT(i,j) / length(r);
        dst = 1 / (length(r) * length(r));
        lev = 1 / (1 + fabs(T(k,l) - T(i,j)));
        w = dir * dst * lev;
        if (f(k+1,l) != OUTSIDE && f(k-1,l) != OUTSIDE &&
            f(k,l+1) != OUTSIDE && f(k,l-1) != OUTSIDE)
            gradI = (I(k+1,l) - I(k-1,l), I(k,l+1) - I(k,l-1));
        Ia += w * (I(k,l) + gradI * r);
        s += w;
    }
    I(i,j) = Ia / s;
}

Figure 5. Inpainting one point.
32 journal of graphics tools
as a product of three factors:

dir(p, q) = ((p − q) / ||p − q||) · N(p)
dst(p, q) = d0² / ||p − q||²
lev(p, q) = T0 / (1 + |T(p) − T(q)|).

The directional component dir(p, q) ensures that the contribution of the pixels close to the normal direction N = ∇T (gradT in the code), i.e., close to the FMM's information propagation direction, is higher than for those farther from N. The geometric distance component dst(p, q) decreases the contribution of the pixels geometrically farther from p. The level set distance component lev(p, q) ensures that pixels close to the contour through p contribute more than farther pixels. Both dst and lev are relative with respect to the reference distances d0 and T0. In practice, we set d0 and T0 to the interpixel distance, i.e., to 1. Overall, the above factors model the manual inpainting heuristics [Bertalmio 00] that describe how to paint a point by strokes bringing color from a small region around it.
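The three factors can be sketched as one small function (our illustration; passing the distance map T as a dictionary and the normal N(p) as an explicit argument are simplifications for the example):

```python
import numpy as np

def weight(p, q, N_p, T, d0=1.0, T0=1.0):
    """Illustrative w(p,q) = dir(p,q) * dst(p,q) * lev(p,q) of Section 2.3.
    p, q: integer pixel coordinates (tuples); N_p: unit normal at p;
    T: distance map, passed here as a dict {(i, j): T value}."""
    r = np.asarray(p, float) - np.asarray(q, float)   # r = p - q
    d = np.linalg.norm(r)
    dir_ = float(np.dot(r / d, N_p))   # higher for q along the normal N(p)
    dst = d0 * d0 / (d * d)            # geometric distance falloff
    lev = T0 / (1.0 + abs(T[p] - T[q]))  # prefer q on nearby level sets of T
    return dir_ * dst * lev
```

For example, a point q two pixels behind p along the normal, one level set away, gets dir = 1, dst = 1/4, and lev = 1/2, i.e., w = 0.125.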
For ε up to about six pixels, i.e., when inpainting thin regions, dst and lev have a weak effect. For thicker regions to inpaint, such as Figure 8(d), where we used an ε of 12 pixels, using dst and lev provides better results than using dir alone. The above is clearly visible in Figure 6, on a test image taken from [Bertalmio 00], where the missing ring-shaped region is more than 30 pixels thick. Figure 6(c) shows, on an image detail, the effect of dir alone. The results are somewhat less blurry when dir and dst (Figure 6(d)) or dir and lev (Figure 6(e)) are used together. The inpainting is the best visually when all three components are used (Figure 6(f)).
Figure 6. Thick region to inpaint (a) and result (b). Effect of weighting functions: direction (c), direction and geometric distance (d), direction and level set distance (e), direction, geometric, and level set distance (f).
2.4. Implementation Details
Several implementation details are important. First, we compute the bound-
ary normal N=Tby numerical derivation of the eld Tcomputed by the
FMM. Derivating Ton the y as it is computed by the FMM is unstable,
since we are not guaranteed that a large enough neighborhood around the
current point contains only KNOWN points. We rst run the FMM outside
the initial inpainting boundary and obtain the distance eld Tout .Since
we use only those points closer to than ε,weruntheFMMoutside
only until we reach T>ε. This restricts the FMM computations to a band of
thickness εaround , thus speeding up the process. Next, we run the FMM
inside and obtain Tin.Theeld Tover the whole image is given by
T(p)=FTin (p)ifp
Tout (p)ifp/.(5)
Next, we smooth Tby a 3 ×3tentlter, and then compute Tby central
dierences.
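The smoothing-plus-derivative step can be sketched as follows (Python/NumPy, our illustration; the paper does not spell out the kernel, so we assume the standard 3 × 3 tent filter, i.e., the outer product of [1, 2, 1]/4 with itself):

```python
import numpy as np

def smooth_and_gradient(T):
    """3x3 tent (triangle) filter followed by central differences,
    giving a stable estimate of the normal N = grad T (Section 2.4)."""
    h, w = T.shape
    kernel = np.array([[1., 2., 1.],
                       [2., 4., 2.],
                       [1., 2., 1.]]) / 16.0
    p = np.pad(T, 1, mode='edge')          # replicate borders
    Ts = sum(kernel[a, b] * p[a:a + h, b:b + w]
             for a in range(3) for b in range(3))
    # np.gradient uses central differences in the interior.
    gy, gx = np.gradient(Ts)
    return Ts, gx, gy
```

On a linear field the tent filter is exact away from the borders and the central differences recover the constant gradient, which is the behavior needed for a stable N.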
The value of ε giving the size of Bε usually ranges from three to ten pixels. This corresponds with the "thickness" of the regions to inpaint, which is usually less than 15 pixels. Higher values blur the sharp details to be reconstructed by inpainting, although they are useful when inpainting thicker regions.
The test f(k,l)!=OUTSIDE in Figure 5 that restricts Bε to the known image points can be changed to f(k,l)==KNOWN. The results are visually identical, as Bε contains very few BAND pixels. However, one would use the second test if the initial ∂Ω corresponds to unknown image pixels.
The NarrowBand sorted heap (Section 2.2) is straightforwardly implemented
using the C++ STL multimap container [Musser and Saini 96]. Finally, for
color (RGB) images, we apply the presented method separately for each color
channel.
3. Discussion
We have compared our inpainting method with the methods presented by Bertalmio et al. in [Bertalmio 00] and Oliveira et al. in [Oliveira 01], further denoted by BSCB and OBMC, by running it on the same input images (see Figures 7 and 8(a)–(c)). For BSCB, we used the implementation publicly available at [Yung and Shankar ??], whereas we reimplemented OBMC ourselves. Our method produced visually nearly identical results to BSCB. Compare, for example, the inpaintings in Figure 7(c), (d) (Figure 7(e), (f) in detail) and Figure 8(b), (c) (Figure 8(g), (h) in detail). In contrast, the
Figure 7. Lincoln "cracked photo" inpainting: a) original, b) inpainting mask, c) BSCB method, d) our method, e) BSCB (detail), f) our method (detail).
results of OBMC (shown in [Oliveira 01]) were visibly more blurry for regions thicker than six pixels. The runtime for our method was in all cases much shorter than for BSCB. Our C++ implementation took less than 3 seconds on an 800 MHz PC for an 800 × 600 color image with about 15% pixels to inpaint (Figure 1). On the same input, the original BSCB method, published in [Bertalmio 00], is reported to take less than 5 minutes on a 300 MHz PC. The BSCB implementation we used [Yung and Shankar ??], which is mentioned to be unoptimized by its authors, took between 2.5 and 3 minutes on the 800 MHz PC, depending on its various parameter settings. In contrast, OBMC takes in all cases about the same time as our method. The above matches the fact that both OBMC and our method are linear in the inpainted region's size.
The main limitation of our method (applicable to the BSCB and OBMC
methods mentioned here) is the blurring produced when inpainting regions
thicker than 10—15 pixels, especially visible when sharp isophotes intersect
the region’s boundary almost tangentially. See, for example, the inpainting
in Figure 8(d), (e) (in detail in Figure 8(i), (j)). The above is caused by
the linear and local character of our method. Techniques using an explicit
nonlinear and/or global image model, such as the TV [Chan and Shen 00a]
and CDD [Chan and Shen 00b] methods, achieve better results, at the cost of
considerably more complex implementations.
Overall, the presented inpainting method is simple to implement (our complete C++ code is about 500 lines), fast, and easy to customize for different inpainting strategies. We plan to extend the method by developing new inpainting functions that are better able to preserve the isophotes' directions. One such way is to integrate anisotropic diffusion, e.g., following [Bertalmio 00], in the FMM boundary evolution in order to reduce the blurring for inpainting
Figure 8. Inpainting examples. Damaged photo (a), inpainting by method BSCB (b), our method (c), and close-ups (f, g, h). Damaged photo (d), distance-weighted inpainting (e), and close-ups (i, j).
thick regions. A second extension would be to modulate the evolution speed of
the FMM, now equal to 1, by the image anisotropy, i.e., let inpainting “work
more” on the high detail areas than on the smooth regions.
Acknowledgments. We are indebted to Professor J. J. van Wijk from the Department of Mathematics and Computer Science of the Eindhoven University of Technology for his numerous suggestions for improving this paper.
References

[Bertalmio 00] M. Bertalmio, G. Sapiro, V. Caselles, and C. Ballester. "Image Inpainting." In Proceedings of SIGGRAPH 2000, Computer Graphics Proceedings, Annual Conference Series, edited by Kurt Akeley, pp. 417–424. Reading, MA: Addison-Wesley, 2000.

[Bertalmio 01] M. Bertalmio, A. L. Bertozzi, and G. Sapiro. "Navier-Stokes, Fluid Dynamics, and Image and Video Inpainting." In Proc. ICCV 2001, pp. 1335–1362. IEEE CS Press, 2001.

[Borgefors 84] G. Borgefors. "Distance Transformations in Arbitrary Images." Comp. Vision, Graphics, and Image Proc. 27:3 (1984), 321–345.

[Borgefors 86] G. Borgefors. "Distance Transformations in Digital Images." Comp. Vision, Graphics, and Image Proc. 34:3 (1986), 344–371.

[Oliveira 01] M. Oliveira, B. Bowen, R. McKenna, and Y.-S. Chang. "Fast Digital Image Inpainting." In Proc. VIIP 2001, pp. 261–266, 2001.

[Chan and Shen 00a] T. Chan and J. Shen. "Mathematical Models for Local Deterministic Inpaintings." Technical Report CAM 00-11, Image Processing Research Group, UCLA, 2000.

[Chan and Shen 00b] T. Chan and J. Shen. "Non-Texture Inpainting by Curvature-Driven Diffusions (CDD)." Technical Report CAM 00-35, Image Processing Research Group, UCLA, 2000.

[Meijster et al. 00] A. Meijster, J. Roerdink, and W. Hesselink. "A General Algorithm for Computing Distance Transforms in Linear Time." In Math. Morph. and Its Appls. to Image and Signal Proc., pp. 331–340. Kluwer, 2000.

[Sethian 99] J. A. Sethian. Level Set Methods and Fast Marching Methods, Second edition. Cambridge, UK: Cambridge Univ. Press, 1999.

[Sethian 96] J. A. Sethian. "A Fast Marching Level Set Method for Monotonically Advancing Fronts." Proc. Nat. Acad. Sci. 93:4 (1996), 1591–1595.

[Yung and Shankar ??] W. Yung and A. J. Shankar. "Image Inpainting Implementation Software." Available from World Wide Web (http://www.bantha.org/aj/inpainting), [YEAR].

[Musser and Saini 96] D. R. Musser and A. Saini. STL Tutorial and Reference Guide: C++ Programming with the Standard Template Library. Addison-Wesley Professional Computing Series. Reading, MA: Addison-Wesley, 1996.
Web Information
Source code of a sample C++ implementation of the inpainting method described
here is available at http://www.acm.org/jgt/papers/Telea03.html.
Alexandru Telea, Department of Mathematics and Computer Science, Eindhoven
University of Technology, Den Dolech 2, Eindhoven 5600 MB, The Netherlands
(alext@win.tue.nl, http://www.win.tue.nl/alext)
Received October 24, 2002; accepted in revised form May 21, 2003.
  • ... Therefore, it will produce blurred results under diffusion during the inpainting. Oh et al. [15] used a histogram matching strategy before blending, such that the side views have similar color conditions with the blended virtual view; additionally, they used the fast matching method (FMM) [18] to achieve hill inpainting. However, this method cannot entirely remove color discontinuities between the unoccluded and disocclusion regions. ...
    Article
    Full-text available
    The recent emergence of three-dimensional (3D) movies and 3D television (TV) indicates an increasing interest in 3D content. Stereoscopic displays have enabled visual experiences to be enhanced, allowing the world to be viewed in 3D. Virtual view synthesis is the key technology to present 3D content, and depth image-based rendering (DIBR) is a classic virtual view synthesis method. With a texture image and its corresponding depth map, a virtual view can be generated using the DIBR technique. The depth and camera parameters are used to project the entire pixel in the image to the 3D world coordinate system. The results in the world coordinates are then reprojected into the virtual view, based on 3D warping. However, these projections will result in cracks (holes). Hence, we herein propose a new method of DIBR for free viewpoint videos to solve the hole problem due to these projection processes. First, the depth map is preprocessed to reduce the number of holes, which does not produce large-scale geometric distortions; subsequently, improved 3D warping projection is performed collectively to create the virtual view. A median filter is used to filter the hole regions in the virtual view, followed by 3D inverse warping blending to remove the holes. Next, brightness adjustment and adaptive image blending are performed. Finally, the synthesized virtual view is obtained using the inpainting method. Experimental results verify that our proposed method can produce a pleasant visibility of the synthetized virtual view, maintain a high peak signal-to-noise ratio (PSNR) value, and efficiently decrease execution time compared with state-of-the-art methods.
  • ... Depth image inpainting with Fast Marching Method (FMM) [10] is a classical and simple inpainted depth map that has inspired several more techniques. FMM method uses only depth information of the known area in a selected region to predict missing depth value. ...
    Article
    Full-text available
    Image inpainting is a process of reconstructing the missing pixels by inferencing information from the known part of an image. This paper aims to increase the precision of depth inpainting by proposing four models based on kriging techniques. Our four kriging models are based on two different semivariance models (exponential model and spherical model) and two different color-similarity functions. The inpainting algorithm is designed to extract both color and depth information of RGB-D images. For efficiency assessment, we look at Root Mean Square Error (RMSE), Structural Similarity Index Measure (SSIM), and Peak Signal to Noise Ratio (PSNR) of the reconstructed images. We then make a performance comparison with the previous six conventional methods. Finally, we show that our implemented models outperform the conventional ones. The accuracy of our four kriging models is competitive, and the PSNR values lie between 30.81 to 45.46 dB.
  • ... For the real depth images, we only tackle the problem introduced by lost pixels, which we found to be the main cause of image distortion on the real 3D sensor used. A mask is constructed to mark all the lost pixels in the real depth images, and then their value is predicted using Telea's inpainting technique [19]. The same procedure is applied over the corrupted simulated depth maps. ...
    Conference Paper
    Full-text available
    In this paper, we propose an end-to-end approach to endow indoor service robots with the ability to avoid collisions using Deep Reinforcement Learning (DRL). The proposed method allows a controller to derive continuous velocity commands for an omnidirectional mobile robot using depth images, laser measurements, and odometry based speed estimations. The controller is parameterized by a deep neural network, and trained using DDPG. To improve the limited perceptual range of most indoor robots, a method to exploit range measurements through sensor integration and feature extraction is developed. Additionally, to alleviate the reality gap problem due to training in simulations, a simple processing pipeline for depth images is proposed. As a case study we consider indoor collision avoidance using the Pepper robot. Through simulated testing we show that our approach is able to learn a proficient collision avoidance policy from scratch. Furthermore, we show empirically the generalization capabilities of the trained policy by testing it in challenging real-world environments. Videos showing the behavior of agents trained using the proposed method can be found at https://youtu.be/ypC39m4BlSk.
  • ... where inp is a fast marching based method for inpainting objects [43], by replacing the pixel values for the auxiliary nuclei labeled in M aux with them for the unlabeled background. Fig. 4 illustrates the visual effectiveness of our proposed nuclei inpainting mechanism. ...
    Preprint
    Unsupervised domain adaptation (UDA) for nuclei instance segmentation is important for digital pathology, as it alleviates the burden of labor-intensive annotation and domain shift across datasets. In this work, we propose a Cycle Consistency Panoptic Domain Adaptive Mask R-CNN (CyC-PDAM) architecture for unsupervised nuclei segmentation in histopathology images, by learning from fluorescence microscopy images. More specifically, we first propose a nuclei inpainting mechanism to remove the auxiliary generated objects in the synthesized images. Secondly, a semantic branch with a domain discriminator is designed to achieve panoptic-level domain adaptation. Thirdly, in order to avoid the influence of the source-biased features, we propose a task re-weighting mechanism to dynamically add trade-off weights for the task-specific loss functions. Experimental results on three datasets indicate that our proposed method outperforms state-of-the-art UDA methods significantly, and demonstrates a similar performance as fully supervised methods.
  • ... In their method, before iteratively searching patches in the image to minimize the energy function, Kawai et al. randomly set the initial values of missing regions. Instead of this, in the first part, we used the result inpainted using the method of Telea [32] as the initial image. This reduced the computation time of iterations and improved the inpainting quality. ...
    Article
    Full-text available
    The conventional warping method only considers translations of pixels to generate stereo images. In this paper, we propose a model that can generate stereo images from a single image, considering both translation as well as rotation of objects in the image. We modified the appearance flow network to make it more general and suitable for our model. We also used a reference image to improve the inpainting method. The quality of images resulting from our model is better than that of images generated using conventional warping. Our model also better retained the structure of objects in the input image. In addition, our model does not limit the size of the input image. Most importantly, because our model considers the rotation of objects, the resulting images appear more stereoscopic when viewed with a device.
  • ... Many image inpainting methods can be classified into two main categories [7], [8]: structure-based and texturebased. Structure-based inpainting methods [9]- [15] calculate the gradient field or second derivative field of the image and then diffuse information by isophotes from the known regions to the unknown regions point-by-point. Most of the structure-based inpainting methods [9]- [14] are also known as partial differential equations (PDE) based methods or total variational (TV) based methods. ...
    Article
    Full-text available
    The texture edge continuity of a finger vein image is very important for the accuracy of feature extraction. However, the traditional inpainting methods which, without accurate texture constraints, are easy to cause the vein texture of the inpainted image to be blurred and break. A finger vein image inpainting method with Gabor texture constraints is proposed. The proposed method effectively protects the texture edge continuity of the inpainted image. Firstly, using the proposed vertical phase difference coding method, the Gabor texture feature matrix of the finger vein image, which can accurately describe the texture information, can be extracted from the Gabor filtering responses. Then, according to the local texture continuity of the finger vein image, the known pixels, which have different texture orientations with the center pixel in the patch, are filtered out using the Gabor texture constraining mechanism during the inpainting process. The proposed method eliminates irrelevant information interference in the inpainting process and has a more precise texture propagation. Simulation experiments of artificially synthetic images and acquired images show that the finger vein images inpainted by the proposed method have better texture continuity and higher image quality than the traditional methods which do not have accurate texture constraints. The proposed method improves the recognition performance of the finger vein identification system with the acquired damaged images.
  • ... Intel RealSense, Microsoft Kinect) inpainting algorithms have received a lot of attention in recent years. We implemented a custom inpainting approach based on [18]. ...
    Preprint
    Universal grasping of a diverse range of previously unseen objects from heaps is a grand challenge in e-commerce order fulfillment, manufacturing, and home service robotics. Recently, deep-learning-based grasping approaches have demonstrated results that make them increasingly interesting for industrial deployments. This paper explores the problem from an automation-systems point of view. We develop a robotic grasping system using Dex-Net, which is fully integrated at the controller level. Two neural networks are deployed on a novel industrial AI hardware acceleration module close to a PLC, with a power footprint of less than 10 W for the overall system. The software is tightly integrated with the hardware, allowing for fast and efficient data processing and real-time communication. The success rate of grasping an object from a bin is up to 95 percent, with more than 350 picks per hour, if object and receptive bins are in close proximity. The system was presented at the Hannover Fair 2019 (world's largest industrial trade fair) and other events, where it performed over 5,000 grasps per event.
  • ... • Telea [40] is another image inpainting algorithm based on the fast marching method. ...
    Preprint
    With the advent of NASA's lunar reconnaissance orbiter (LRO), a large number of high-resolution digital elevation maps (DEMs) have been constructed by using narrow-angle cameras (NACs) to characterize the Moon's surface. However, NAC DEMs commonly contain no-data gaps (voids), which make the maps less reliable. To resolve the issue, this paper provides a deep-learning-based framework for the probabilistic reconstruction of no-data gaps in NAC DEMs. The framework is built upon a state-of-the-art stochastic process model, attentive neural processes (ANP), and predicts the conditional distribution of elevation at the target coordinates (latitude and longitude) conditioned on the observed elevation data in nearby regions. Furthermore, this paper proposes sparse attentive neural processes (SANPs), which not only reduce the linear computational complexity of the ANP, O(N), to constant complexity, O(K), but also enhance the reconstruction performance by preventing overfitting and over-smoothing problems. The proposed method is evaluated on the Apollo 17 landing site (20.0°N and 30.4°E), demonstrating that the suggested approach successfully reconstructs no-data gaps with uncertainty analysis while preserving the high resolution of the original NAC DEMs.
  • Chapter
    Full-text available
    A new general algorithm for computing distance transforms of digital images is presented. The algorithm consists of two phases. Both phases consist of two scans, a forward and a backward scan. The first phase scans the image column-wise, while the second phase scans the image row-wise. Since the computation per row (column) is independent of the computation of other rows (columns), the algorithm can be easily parallelized on shared memory computers. The algorithm can be used for the computation of the exact Euclidean, Manhattan (L1 norm), and chessboard distance (L∞ norm) transforms.
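The two-phase, four-scan structure described in this abstract can be illustrated for the Manhattan (L1) case, where each phase reduces to a forward and a backward running minimum over independent columns and rows. This is a minimal sketch of that scan structure, not the chapter's exact algorithm (which also covers the exact Euclidean and chessboard transforms):

```python
def distance_transform_l1(image):
    """L1 distance transform: image is a 2D list where 0 marks feature
    (background) pixels and 1 marks object pixels whose distance we want."""
    rows, cols = len(image), len(image[0])
    INF = rows + cols  # larger than any attainable L1 distance

    # Phase 1: column-wise, a forward (top-down) and a backward (bottom-up)
    # scan compute the vertical distance to the nearest feature pixel.
    g = [[0] * cols for _ in range(rows)]
    for x in range(cols):
        g[0][x] = 0 if image[0][x] == 0 else INF
        for y in range(1, rows):
            g[y][x] = 0 if image[y][x] == 0 else min(INF, g[y - 1][x] + 1)
        for y in range(rows - 2, -1, -1):
            g[y][x] = min(g[y][x], g[y + 1][x] + 1)

    # Phase 2: row-wise, a forward and a backward scan fold in the
    # horizontal offsets: dt(x,y) = min over x' of |x - x'| + g(x',y).
    dt = [[0] * cols for _ in range(rows)]
    for y in range(rows):
        dt[y][0] = g[y][0]
        for x in range(1, cols):
            dt[y][x] = min(g[y][x], dt[y][x - 1] + 1)
        for x in range(cols - 2, -1, -1):
            dt[y][x] = min(dt[y][x], dt[y][x + 1] + 1)
    return dt
```

Because every column (and later every row) is processed independently, the two loops parallelize directly on shared-memory machines, which is the property the chapter emphasizes.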
  • Article
    Inpainting is an image interpolation problem, often involving interpolation over large missing domains. In this paper, guided by the connectivity principle of human visual perception, we introduce a nonlinear PDE inpainting model based upon curvature-driven diffusions for nontexture images. This third-order PDE model improves the second-order total variation inpainting model introduced earlier by Chan and Shen (SIAM J. Appl. Math., in press, 2001). Computational schemes and digital examples are given.
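As a hedged sketch of the model family this abstract refers to (notation assumed here, not taken from the paper): the second-order TV model diffuses the image with conductivity $1/|\nabla u|$, while the third-order curvature-driven diffusion (CDD) model additionally weights the conductivity by the isophote curvature $\kappa$:

```latex
% TV inpainting (second order): diffusion with conductivity 1/|\nabla u|
\partial_t u = \nabla \cdot \left( \frac{1}{|\nabla u|}\,\nabla u \right)

% CDD inpainting (third order): conductivity also weighted by the
% curvature of the isophotes, so diffusion strengthens where they bend
\partial_t u = \nabla \cdot \left( \frac{g(|\kappa|)}{|\nabla u|}\,\nabla u \right),
\qquad
\kappa = \nabla \cdot \frac{\nabla u}{|\nabla u|}
```

Here $g$ is typically an increasing function with $g(0) = 0$, so sharply bent isophotes diffuse strongly and are straightened out, which is how the model enforces the connectivity principle mentioned in the abstract.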
  • Conference Paper
    Full-text available
    We present a very simple inpainting algorithm for reconstruction of small missing and damaged portions of images that is two to three orders of magnitude faster than current methods while producing comparable results.
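The abstract gives no algorithmic detail; purely as an illustrative sketch (not the cited paper's actual method), the general flavour of fast diffusion-style inpainting can be captured by repeatedly replacing each damaged pixel with the average of its neighbours while leaving known pixels untouched:

```python
def fast_inpaint(image, mask, iterations=100):
    """Iteratively fill masked (damaged) pixels of a 2D grayscale image
    with the average of their 4-neighbours; known pixels never change."""
    rows, cols = len(image), len(image[0])
    out = [row[:] for row in image]
    for _ in range(iterations):
        nxt = [row[:] for row in out]
        for y in range(rows):
            for x in range(cols):
                if mask[y][x]:  # damaged pixel: average current neighbours
                    acc, n = 0.0, 0
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        yy, xx = y + dy, x + dx
                        if 0 <= yy < rows and 0 <= xx < cols:
                            acc += out[yy][xx]
                            n += 1
                    nxt[y][x] = acc / n
        out = nxt
    return out
```

Each pass costs one sweep over the image with a tiny stencil, which is why schemes of this kind run orders of magnitude faster than solving a full PDE to convergence.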
  • Conference Paper
    Full-text available
    Inpainting, the technique of modifying an image in an undetectable form, is as ancient as art itself. The goals and applications of inpainting are numerous, from the restoration of damaged paintings and photographs to the removal or replacement of selected objects. In this paper, we introduce a novel algorithm for digital inpainting of still images that attempts to replicate the basic techniques used by professional restorers. After the user selects the regions to be restored, the algorithm automatically fills in these regions with information surrounding them. The fill-in is done in such a way that isophote lines arriving at the regions' boundaries are completed inside. In contrast with previous approaches, the technique introduced here does not require the user to specify where the novel information comes from. This is done automatically (and quickly), thereby allowing numerous regions containing completely different structures and surrounding backgrounds to be filled in simultaneously. In addition, no limitations are imposed on the topology of the region to be inpainted. Applications of this technique include the restoration of old photographs and damaged film; removal of superimposed text like dates, subtitles, or publicity; and the removal of entire objects from the image, like microphones or wires, in special effects.
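The isophote-completion idea this abstract describes is commonly written as a transport equation (notation assumed here, not quoted from the paper): the image smoothness, measured by the Laplacian $\Delta I$, is propagated along the isophote direction $\nabla^{\perp} I$, the direction perpendicular to the gradient:

```latex
\frac{\partial I}{\partial t} = \nabla(\Delta I) \cdot \nabla^{\perp} I
```

At steady state $\nabla(\Delta I) \cdot \nabla^{\perp} I = 0$, i.e. smoothness is constant along isophotes, which is precisely what completes isophote lines arriving at the region boundary into its interior.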
  • The influence of using distinct optimization criteria for determining the coefficients of a distance transform is studied. The criteria studied are (1) minimizing the maximum of the absolute value of the difference between the distance transform and Euclidean distance, and (2) minimizing the root-mean-square difference between the distance transform and Euclidean distance. By allowing an overall scaling factor to have other than integer values, other integer approximations of the distance transform's coefficients result as optimal. Emphasis is given to isotropy, or invariance with respect to rotation, and to the use of unbiased distance estimates.
  • Article
    In this new edition of the successful book Level Set Methods, Professor Sethian incorporates the most recent advances in Fast Marching Methods, many of which appear here for the first time. Continuing the expository style of the first edition, this introductory volume presents cutting edge algorithms in these groundbreaking techniques and provides the reader with a wealth of application areas for further study. Fresh applications to computer-aided design and optimal control are explored and studies of computer vision, fluid mechanics, geometry, and semiconductor manufacture have been revised and updated. The text includes over thirty new chapters. It will be an invaluable reference for researchers and students.