A New Algorithm for Completing Fragmented
Boundaries in Images
Dalil Benchebra, Nader Anani, Prasad Ponnapalli, and Abdel-Razzak Natsheh
Manchester Metropolitan University,
Department of Engineering and Technology,
Manchester, M1 5GD
D.email@example.com, P.Ponnapalli@mmu.ac.uk, firstname.lastname@example.org, email@example.com
Abstract—In any image processing system, the process of
detection and identification of individual objects within an
image is often impeded by the presence of gaps in the
boundaries of different objects. Various algorithms have
been described in the literature for closing such gaps and
hence completing fragmented boundaries, making the
detection of different objects within an image easier. In this
paper a new algorithm, the Growing Circle Algorithm, is
presented for completing broken boundaries. The algorithm
was initially developed for completing the eyes’ boundaries
in medical images of the human face. In particular, the
algorithm has been successfully implemented in a medical
image processing system for diagnosis of paranasal sinuses
conditions. In addition, the algorithm has also been
successfully used for estimating and completing boundaries
of varying geometry, complexity and degree of
fragmentation.
I. INTRODUCTION
Image segmentation is a key process in any image
processing system. Given a digital image which is merely
a set of pixels representing some objects and boundaries,
the aim of the segmentation process is to simplify and/or
change the representation of the image into something
more meaningful and easier to analyze [1]. Practical
applications of image segmentation include interpreting
satellite pictures, computer vision, face and fingerprint
recognition and medical imaging [2,3]. The literature
presents numerous algorithms for image segmentation.
These include global thresholding [4,5],
classification [6], seeded region growing [7] and region
competition [8]. Unfortunately, it is not uncommon that
the application of a segmentation method to an image
results in objects with fragmented or weak boundaries.
The gaps in such boundaries must be closed before any
object detection and identification can be attempted.
The problem of incomplete boundaries is probably
more pronounced in medical images due to the similarity
and nature of the objects within them [9]. A number of
techniques have been presented in the
literature for boundary completion: edge detection
methods [10,11], active contours [12,4,13], and watershed
transformation [14,15]. These methods vary in complexity
and the initialization they require.
In this paper a new simple algorithm is presented, the
Growing Circle Algorithm, henceforth referred to as the
GCA, for completing fragmented boundaries of objects
within digital images. The GCA has been successfully
applied and tested using various shapes including regular
and irregular polygons with complete and with gapped
outlines. The algorithm is described in Section II, whilst
the testing and results of the algorithm are presented in
Section III. Finally, Section IV concludes the work.
II. THE GCA ALGORITHM
The GCA algorithm consists of three distinct and
consecutive stages: the approximation stage, the
refinement stage, and the shape completion stage, which
are described below.
A. The Approximation Stage
The aim of this stage is to create the circle of maximum
radius that can be inscribed within the shape whose
boundary is to be completed; this circle is henceforth
referred to as the Inscribed Circle. This is achieved as
follows:
1. Choose a seed point inside the object. This point
defines the initial, minimum-size circle, whose diameter is
less than or equal to the resolution of the image, normally
one pixel.
2. Grow the circle. The size of the circle is increased
(grown) by one unit, typically one pixel.
3. Collision detection. When the grown circle collides
with (i.e. has a common pixel with) an edge of the
object, the centre of the next circle to be drawn is
translated by one unit in the direction of the vector −r,
where r is the displacement vector from the centre of the
current circle to the point of collision. In the case of
multiple points of collision, say n, the centre of the next
circle is translated by one unit in the direction of −r̄,
where r̄ is the vector representing the average of the n
displacement vectors from the centre of the current circle
to the points of collision.
4. If, after translation, no collision takes place, the
algorithm resumes from step 2. Otherwise, the circle
becomes the Inscribed Circle, i.e. the circle with the
maximum radius that can be inscribed within the
boundary of the object. This completes the
approximation stage of the GCA.
The application of the above stage to a regular nonagon
is illustrated in Figure 1, which shows that for convex
polygons the approximation algorithm works well with no
or little further refinement required.
However, as can be seen from Figure 2, the
approximation stage is not sufficient to obtain the
boundary of a triangle and further refinement is required.
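The approximation stage above can be expressed as a short program. The following is a minimal illustrative sketch, not the authors' implementation: the boolean edge mask, the seed format, and the `max_iter` safety guard are assumptions introduced here, while the growth by one unit, the translation along −r̄, and the stopping test follow steps 1-4.

```python
import numpy as np

def inscribed_circle(edge_mask, seed, max_iter=10000):
    """Approximation stage of the GCA (illustrative sketch).

    edge_mask : 2-D boolean array, True at boundary (edge) pixels.
    seed      : (row, col) seed point inside the object (step 1).
    Returns (centre, radius) of the approximate Inscribed Circle.
    """
    rows, cols = np.indices(edge_mask.shape)
    centre = np.array(seed, dtype=float)
    radius = 1.0                                  # step 1: minimal starting circle

    for _ in range(max_iter):
        # Edge pixels lying on or inside the current circle.
        dist = np.hypot(rows - centre[0], cols - centre[1])
        hits = edge_mask & (dist <= radius)

        if not hits.any():
            radius += 1.0                         # step 2: grow by one unit
            continue

        # Step 3: average displacement r_bar from the centre to the
        # collision points; translate one unit along -r_bar.
        hit_pts = np.argwhere(hits)
        r_bar = hit_pts.mean(axis=0) - centre
        norm = np.hypot(*r_bar)
        if norm < 1e-9:
            break                                 # collisions cancel out: done
        trial = centre - r_bar / norm

        # Step 4: if the translated circle still collides, stop.
        dist_t = np.hypot(rows - trial[0], cols - trial[1])
        if (edge_mask & (dist_t <= radius)).any():
            break
        centre = trial

    return centre, radius
```

Run on a synthetic circular boundary, the sketch drives the centre toward the true centre while the radius grows toward the boundary radius, as the nonagon example in Figure 1 would suggest.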
978-1-86135-370-2/10 ©2010 CSNDSP
B. The Refinement Stage
This is a recursive stage which involves growing circles
whose centers are on the circumference of the Inscribed
Circle obtained from the approximation stage. The circles
are grown until they collide with parts of the fragmented
boundary to be approximated. The algorithm of this stage
may be summarized as follows:
1. Initialize the centers of the refinement circles to be
drawn on the circumference of the Inscribed Circle
obtained from the approximation stage, Figure 3.
2. Grow all the refinement circles by increasing their
diameters by one pixel at a time until a collision occurs
with the boundary to be approximated, Figure 3, or until
the diameter of each refinement circle is equal to the
radius of the circle it originated from multiplied by a
factor k. All collision points obtained are parts of the
boundary to be approximated.
3. If there are refinement circles whose radii are
larger than one pixel, resume the algorithm from step 1,
with the centers of the new refinement circles drawn on
the circumference of the refinement circles obtained at
this stage.
Figure 1. For a nonagon, the inscribed circle closely
approximates the boundary.
Figure 2. For a triangle, the inscribed circle is far from
approximating the boundary.
Figure 3. The refinement circles.
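The recursive refinement procedure can be sketched in the same vein. The number of circles seeded on each circumference (`n_points`) and the value of k are illustrative choices made here, since the text fixes neither; the diameter limit of k times the parent radius, the collection of collision points, and the recursion on non-colliding circles follow steps 1-3.

```python
import numpy as np

def refine(edge_mask, centre, radius, k=0.5, n_points=16, collisions=None):
    """Refinement stage of the GCA (illustrative sketch, k < 1).

    Circles seeded on the circumference of the parent circle are grown
    one pixel at a time until they touch an edge pixel or their diameter
    reaches k times the parent radius; circles that never collide are
    refined recursively.  Returns the list of collision points found.
    """
    if collisions is None:
        collisions = []
    rows, cols = np.indices(edge_mask.shape)
    angles = np.linspace(0.0, 2 * np.pi, n_points, endpoint=False)

    for a in angles:
        c = (centre[0] + radius * np.cos(a), centre[1] + radius * np.sin(a))
        r_max = 0.5 * k * radius          # radius limit: diameter = k * parent radius
        r = 1.0
        hit = False
        while r <= r_max:
            dist = np.hypot(rows - c[0], cols - c[1])
            hits = edge_mask & (dist <= r)
            if hits.any():                # step 2: record collision points
                collisions.extend(map(tuple, np.argwhere(hits)))
                hit = True
                break
            r += 1.0
        if not hit and r_max > 1.0:       # step 3: recurse on non-colliding circles
            refine(edge_mask, c, r_max, k, n_points, collisions)
    return collisions
```

Since k < 1, the child radii shrink geometrically, which is what guarantees the convergence discussed next.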
The convergence of this algorithm is guaranteed if the
factor k is chosen to be less than 1, the subsequent
refinement circles at each refinement depth becoming
smaller and smaller, converging to a radius less than or
equal to one pixel.
C. Boundary Approximation
The boundary approximation can be achieved using
different methods. Three methods are considered here.
The first method is the simplest and the least expensive
in terms of computation time. It consists of connecting all
the collision points together using straight lines.
The second and third methods require an initial stage to
locate and identify any gaps in the approximate boundary
obtained from the previous stages. Once the gaps are
found, the two methods differ only in the way they close
them.
The second method completes the gaps using some of
the points on the refinement circles as explained below.
For a given refinement circle that did not make a collision,
the point on its circumference that is the furthest from the
centre of the Inscribed Circle is considered to be part of
the boundary to be approximated.
The third method consists of fitting an interpolating
curve to the collision points obtained at either side of a
gap.
III. TESTING AND RESULTS
In order to validate the GCA algorithm and assess its
performance, it was tested with boundaries of different
geometries involving two sets of data objects. The first
set, shown in Figure 4(a-b), consists of a number of
different shapes with closed boundaries which was used as
a reference set.
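The first completion method described above, joining the collision points with straight lines, admits a very short sketch. The angular ordering of the points about the Inscribed Circle's centre is an assumption introduced here: the text does not specify how the points are ordered before being connected.

```python
import numpy as np

def close_boundary(points, centre):
    """First boundary-completion method (sketch): join collision points
    with straight segments.  Points are ordered by polar angle about the
    inscribed-circle centre (an assumed ordering).  Returns the closed
    polyline as integer (row, col) pixels via linear interpolation;
    shared vertices appear once per adjoining segment."""
    pts = sorted(points, key=lambda p: np.arctan2(p[0] - centre[0],
                                                  p[1] - centre[1]))
    boundary = []
    for (r0, c0), (r1, c1) in zip(pts, pts[1:] + pts[:1]):   # wrap around
        n = max(abs(r1 - r0), abs(c1 - c0), 1)               # samples per segment
        for t in np.linspace(0.0, 1.0, n + 1):
            boundary.append((int(round(r0 + t * (r1 - r0))),
                             int(round(c0 + t * (c1 - c0)))))
    return boundary
```

For four collision points at the tips of a diamond, the sketch returns the full diamond outline, including the interpolated midpoints between tips.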
When the GCA was used to re-generate these
boundaries, its performance in terms of the number of
iterations needed to arrive at an outline varied depending
on the geometry of the object’s boundary. A circular
boundary for example, required only two iterations whilst
a triangular outline required many more.
The second set of data, shown in Figure 5(a-b), consists
of shapes with different boundaries into which gaps were
randomly introduced. The GCA performed very well in
tracing and closing these gaps.
The performance of the GCA algorithm has also been
assessed and tested using medical images of the sinuses.
An example is shown in Figure 6(a) which is a CT scan of
a human face. Figure 6(b) shows the CT-image after
segmentation was applied. Clearly, from Figure 6(b), the
boundaries of the eyes have substantial gaps which must
be closed prior to any further processing of the image.
The GCA algorithm was used to estimate the
boundaries of the eyes, and the result is shown in
Figure 6(c) which indicates that the boundaries have been
successfully identified and closed.
Figure 4(a-b). A standard set of closed outlines (left) and the
GCA generated outlines (right).
Figure 5(a-b). A standard set of gapped outlines (left) and the
GCA generated closed outlines (right).
IV. CONCLUSIONS
The paper presented a simple algorithm for estimating
the boundaries of objects within images. The ultimate
objective of the algorithm is to close gaps in boundaries
within images so as to allow their accurate detection. The
algorithm was extensively tested by applying it to a
variety of closed and gapped boundaries of different
geometries. In addition, the algorithm was tested using
real medical images. The algorithm was found to work
very well, particularly for tracing and closing convex
boundaries and for shapes with rounded and/or curved
edges.
REFERENCES
[1] L. G. Shapiro and G. Stockman, Computer Vision, New Jersey: Prentice-Hall, 2001.
[2] A. Natsheh, P. V. S. Ponnapalli, D. Benchebra, N. Anani and A. Al-Kholy, "Automated tool for diagnosis of sinus analysis CT-scans," Proceedings of AI-2007, the Twenty-Seventh SGAI International Conference on Innovative Techniques and Applications of Artificial Intelligence, Cambridge, UK, 2007.
[3] D. L. Pham, C. Xu and J. L. Prince, "Current methods in medical image segmentation," Annual Review of Biomedical Engineering, vol. 2, pp. 315-337, 2000.
[4] P. M. Weeks, M. W. Vannier and W. G. Stevens, "Three-dimensional imaging of the wrist," Journal of Hand Surgery, vol. 10A, no. 1, pp. 32-39, 1984.
[5] S. E. James, R. Richards and D. A. McGrouther, "Three-dimensional CT imaging of the wrist," Journal of Hand Surgery, vol. 17B, no. 5, pp. 504-506, 1992.
[6] R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis, John Wiley & Sons, 1973.
[7] R. Adams and L. Bischof, "Seeded region growing," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 16, no. 6, pp. 641-647, 1994.
[8] S. C. Zhu and A. L. Yuille, "Region competition: unifying snakes, region growing, and Bayes/MDL for multiband image segmentation," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 18, no. 9, pp. 884-900, 1996.
[9] A. R. Natsheh, P. Ponnapalli, N. A. Anani, D. Benchebra and A. El-Kholy, "Neural networks-based tool for diagnosis of paranasal sinuses conditions," submitted to the 7th IEEE, IET International Symposium on Communication Systems, Networks and Digital Signal Processing (CSNDSP 2010), Newcastle upon Tyne, UK, 21-23 July 2010.
[10] J. Canny, "A computational approach to edge detection," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679-698, 1986.
[11] X. M. Pardo, M. J. Carreira, A. Mosquera and D. Cabello, "A snake for CT image segmentation integrating region and edge information," Image and Vision Computing, vol. 19, pp. 461-475, 2001.
[12] M. Kass, A. Witkin and D. Terzopoulos, "Snakes: Active contour models," International Journal of Computer Vision, vol. 1, no. 4, pp. 321-331, 1988.
[13] T. B. Sebastian, H. Tek, J. J. Crisco and B. B. Kimia, "Segmentation of carpal bones from CT images using skeletally coupled deformable models," Medical Image Analysis, vol. 7, pp. 21-45, 2003.
[14] V. Grau, A. U. J. Mewes, M. Alcaniz, R. Kikinis and S. K. Warfield, "Improved watershed transform for medical image segmentation using prior information," IEEE Trans. Medical Imaging, vol. 23, no. 4, pp. 447-458, Apr. 2004.
[15] K. Haris, S. Efstratiadis and A. Katsaggelos, "Hybrid image segmentation using watersheds and fast region merging," IEEE Trans. Image Processing, vol. 7, no. 12, pp. 1684-1699, Dec. 1998.
Figure 6. (a) CT-image of a human face, (b) the
image after applying a segmentation method and (c)
the eyes' boundaries as estimated by the GCA.
SIP-11868 CSNDSP 2010