This paper proposes a new sequential thinning algorithm that uses a flag map and a bitmap simultaneously to decide whether a boundary pixel can be deleted, and incorporates smoothing templates to smooth the final skeleton. Three performance measures are proposed for an objective evaluation of this novel algorithm against a set of well-established techniques. Extensive comparison and analysis of the results are presented for discussion.
"There are number of performance measures on the basis of which we can measure various skeletonization algorithms. Some of them are described below: 1. Connectivity Measurement CM : It is used to measure the connectivity in the skeletons that are produced as outputs. This is given by: Where CN is defined as current neighborhood function and is defined as follows: "
"he binary images by using two pulse coupled Neural networks and claimed that in future the applications of their algorithm will be studied. They used filling technique to fill the region for the obtaining of the inner and outer image. The firing step obtains the thinned image which is decided and the final thinned image or the process is repeated.(Zhou et al. (1995) presented a novel thinning algorithm based on single pass. Bitmap and flag map are used concurrently for the decision of pixel to be deleted. Problems in existing algorithms and their solutions are presented and a smoothing template is also proposed for the smoothing of thinned image.Chiu and Tseng (1997) presents a handwritten Chinese "
ABSTRACT: Optical Character Recognition (OCR) converts images of text into editable text, so that text written in image form becomes available for editing. The thinning technique can be applied in the preprocessing stage, or after segmentation of words and characters from a text image, when features are extracted to differentiate the characters. Thinning reduces the thickness of strokes and finds a one-pixel skeleton of the character image. In this paper we present, step by step, an iterative and interactive thinning algorithm for Sindhi script. Our thinning algorithm removes pixels while keeping the connectivity and pattern of the image intact. The process can be stopped and checked with a pixel-based editor for the connectivity patterns. This algorithm can be used with both segmentation-based and segmentation-free Sindhi OCR. The algorithm, along with an application, is tested on Sindhi line text and individual characters, and the results are presented. The algorithm and the application can also be applied to other language scripts. The presented work is part of research done on Sindhi OCR.
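The abstract does not reproduce its deletion rules, but the classic Zhang-Suen scheme illustrates the kind of iterative, connectivity-preserving peeling such algorithms perform. The following is a minimal sketch of that well-known scheme, shown only for illustration; it is not the Sindhi-specific algorithm described above:

```python
def zhang_suen_thin(img):
    """Iteratively peel boundary pixels while preserving connectivity.

    `img` is a list of lists of 0/1 values with a zero border; pixels
    are removed in two alternating sub-iterations until nothing changes,
    leaving an (approximately) one-pixel-wide skeleton.
    """
    h, w = len(img), len(img[0])
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for y in range(1, h - 1):
                for x in range(1, w - 1):
                    if img[y][x] != 1:
                        continue
                    # 8-neighborhood, clockwise from north.
                    p = [img[y-1][x], img[y-1][x+1], img[y][x+1],
                         img[y+1][x+1], img[y+1][x], img[y+1][x-1],
                         img[y][x-1], img[y-1][x-1]]
                    b = sum(p)                        # neighbor count
                    a = sum(1 for i in range(8)       # 0->1 transitions
                            if p[i] == 0 and p[(i + 1) % 8] == 1)
                    if step == 0:
                        ok = p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0
                    else:
                        ok = p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0
                    if 2 <= b <= 6 and a == 1 and ok:
                        to_delete.append((y, x))
            for y, x in to_delete:   # batch deletion per sub-iteration
                img[y][x] = 0
                changed = True
    return img
```

Applied to a solid 3-pixel-thick bar, the scheme strips the top and bottom rows and leaves a thin trace along the middle row.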
"re the line segments and the reflex vertices , see Veltkamp et al. (2004). Ogniewicz et al. (1992 also presented skeletonization algorithms through Voronoi poly- gons. Talbot (1992) discussed the skeletonization through the grassfire algorithm (all connected locations of the meet points of propagated firefronts) which is based on Euclidean metrics. Zhou et al. (1995) implemented a refined thinning algorithm to derive skeletons from binary im- ages. Ivanov et. al. (2000) focused on the problem of the medial axis transform through integer-based programming. Haunert (2008) and Gold et. al. (2003) specifically described the application of skeletons within map generalization to collapse elongated feature"
ABSTRACT: This paper presents techniques for the automated generalization of categorical data using ArcObjects (ArcGIS) as the programming development framework. The dataset is a gapless polygonal land-use mosaic of the region of Valencia, derived from multispectral airborne image classification and combined with the municipal cadastre, so that the level of detail is equivalent to a scale of 1:1,000. The generalization task is to transform this dataset to a scale of 1:25,000 – the same target scale as the manually generated national land-cover geodatabase SIOSE (Information System of Land-Use in Spain). Raster- and vector-based algorithms such as Euclidean distance maps and Delaunay triangulation have been applied in order to collapse narrow road polygons. The generalization workflow implements topological, geometric and semantic constraints, according to specific characteristics of the SIOSE dataset. An evaluation is performed to measure metric and semantic change between the original and generalized datasets, and includes a comparison of the results with SIOSE. The process considers cartographic requirements to enhance legibility and to avoid graphic conflicts. Programming with ArcObjects provides powerful methods for manipulating objects and for fulfilling the purpose of automated generalization within a GIS environment.
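To make the "Euclidean distance map" idea concrete: the distance transform assigns each interior cell its distance to the nearest boundary, so narrow features (e.g. road polygons) are exactly those whose maximum interior distance stays small. The sketch below is a deliberately brute-force, grid-based illustration of the concept; the paper's actual workflow uses ArcObjects and vector geometry, and the function name here is an assumption:

```python
import math

def euclidean_distance_map(grid):
    """Brute-force Euclidean distance transform on a binary grid.

    For every cell of `grid` (1 = inside the feature, 0 = outside),
    compute the distance to the nearest 0-cell.  A feature whose
    maximum interior distance is below a width threshold can be
    flagged as "narrow" and collapsed to a centerline.
    """
    h, w = len(grid), len(grid[0])
    zeros = [(y, x) for y in range(h) for x in range(w) if grid[y][x] == 0]
    dist = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if grid[y][x] == 1:
                dist[y][x] = min(math.hypot(y - zy, x - zx)
                                 for zy, zx in zeros)
    return dist
```

This runs in O(cells × boundary cells) and is only meant to show the principle; production pipelines use optimized distance-transform operators (raster) or medial-axis/triangulation methods (vector) instead.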