Article · PDF available

A modified fast parallel algorithm for thinning digital patterns

Abstract

A modified version of the fast parallel thinning algorithm proposed by Zhang and Suen is presented in this paper. It preserves the original's merits, such as contour-noise immunity and good performance in thinning crossing lines, and overcomes its demerits, such as serious shrinking and line-connectivity problems.
The following manuscript was published in
Yung-Sheng Chen and Wen-Hsing Hsu, "A modified fast parallel algorithm for thinning digital patterns," Pattern Recognition Letters, Vol. 7, No. 2, pp. 99–106, 1988.
https://www.sciencedirect.com/science/article/pii/0167865588901249
... Thinning is used to remove selected foreground pixels from binary images and is used for skeletonization. Zhang-Suen's algorithm [7] is used to convert the thick-pixel image into a single-pixel-thick image. ...
... The inbuilt morphological operator function bwmorph() produced unnecessary protrusions, and removing these protrusions required additional pre-processing of the thinned image. Instead of the bwmorph morphological operator function, the thinning algorithm proposed by Zhang and Suen [7] was implemented and produced satisfactory results. The Zhang-Suen thinning algorithm [7] is a 2-pass algorithm. Two sets of checks are performed in each iteration to remove pixels from the image. ...
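The two-pass scheme described in these excerpts can be sketched as follows. This is the classic Zhang-Suen formulation (not the modified version presented in this paper), with the function name and the straightforward per-pixel loop chosen here purely for illustration:

```python
import numpy as np

def zhang_suen_thin(img):
    """Thin a binary image (1 = foreground) with Zhang-Suen's 2-pass scheme."""
    img = img.astype(np.uint8).copy()
    changed = True
    while changed:
        changed = False
        for step in (0, 1):          # the two subiterations per pass
            to_delete = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if img[y, x] == 0:
                        continue
                    # 8 neighbours P2..P9, clockwise starting above the pixel
                    p = [img[y-1, x], img[y-1, x+1], img[y, x+1], img[y+1, x+1],
                         img[y+1, x], img[y+1, x-1], img[y, x-1], img[y-1, x-1]]
                    b = sum(p)       # B(P1): number of non-zero neighbours
                    # A(P1): number of 0 -> 1 transitions in the cyclic sequence
                    a = sum((p[k] == 0) and (p[(k + 1) % 8] == 1) for k in range(8))
                    if step == 0:    # first set of checks
                        cond = (p[0] * p[2] * p[4] == 0) and (p[2] * p[4] * p[6] == 0)
                    else:            # second set of checks
                        cond = (p[0] * p[2] * p[6] == 0) and (p[0] * p[4] * p[6] == 0)
                    if 2 <= b <= 6 and a == 1 and cond:
                        to_delete.append((y, x))
            for y, x in to_delete:   # delete in parallel, after scanning
                img[y, x] = 0
            changed = changed or bool(to_delete)
    return img
```

Flagged pixels are deleted only after each subiteration's full scan, which is what makes the scheme parallel; the shrinking and connectivity issues this paper addresses stem from these deletion conditions.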
Preprint
Full-text available
A contour map contains contour lines that are significant for building a Digital Elevation Model (DEM). During digitization and pre-processing of contour maps, contour lines intersect with each other or break apart, resulting in broken contour segments. These broken segments pose a serious risk when building a DEM, leading to a faulty model. In this project, a simple yet efficient mechanism is used to match and reconnect the endpoints of the broken segments accurately and efficiently. Endpoints are matched using minimum Euclidean distance and gradient direction, while the cubic Hermite spline interpolation technique reconnects them by estimating values with a mathematical function that minimizes overall surface curvature, resulting in a smooth curve. The purpose of this work is to reconnect the broken contour lines generated during digitization of the contour map, to help build the most appropriate digital elevation model for the corresponding contour map.
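The endpoint matching and cubic Hermite reconnection described in this abstract might be sketched as follows; `match_endpoint` and `hermite_bridge` are hypothetical names, and the gradient-direction check is omitted here for brevity:

```python
import numpy as np

def match_endpoint(p, candidates):
    """Pick the candidate endpoint closest to p in Euclidean distance."""
    d = np.linalg.norm(candidates - p, axis=1)
    return candidates[np.argmin(d)]

def hermite_bridge(p0, p1, t0, t1, n=20):
    """Cubic Hermite curve from endpoint p0 to endpoint p1 with tangent
    directions t0, t1 (all 2-D arrays); returns n sample points."""
    s = np.linspace(0.0, 1.0, n)[:, None]
    h00 = 2 * s**3 - 3 * s**2 + 1        # Hermite basis functions
    h10 = s**3 - 2 * s**2 + s
    h01 = -2 * s**3 + 3 * s**2
    h11 = s**3 - s**2
    return h00 * p0 + h10 * t0 + h01 * p1 + h11 * t1
```

The basis functions guarantee that the curve passes through both endpoints with the prescribed tangents, so the bridge joins each broken segment smoothly.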
... Hence, this means that contours have to be extracted from the energy map to obtain shapes. We test two methods for contour extraction of the energy map, Canny and the skeletonization method [43]. The parameters of LACS have been adapted to the length of the contours: q = {q_0 = 1, q_1 = 1, q_2 = 1}, s_min = −100, s_max = 100, N_B = 200, and = 100 px. ...
Article
Full-text available
We present a new recognition framework for ancient coins struck from the same die. It is called Low-complexity Arrays of Patch Signatures. To overcome the problem of illumination conditions we use multi-light energy maps which are a light-independent, 2.5D representation of the coin. The coin recognition is based on a local texture analysis of the energy maps. Descriptors of patches, tailored to coin images via the properties provided by the energy map, are matched against a database using a system of associative arrays. The system of associative arrays used for the matching is a generalization of the Low-complexity Arrays of Contour Signatures. Hence, the matching is very efficient and nearly at constant time. Due to the lack of available data, we present two new data sets of artificial and real ancient coins respectively. Theoretical insights for the framework are discussed and various experiments demonstrate the promising efficiency of our method.
... 4. The skeletonize operator [27] is used to simplify the irregular contours. Then the burr contours, contours with no area, and contours encircled by other contours are removed. ...
Article
Full-text available
The development of urbanization has brought convenience to people, but it has also brought a lot of harmful construction solid waste. Machine-vision detection algorithms are the crucial technology for finely sorting solid waste, being faster and more stable than traditional methods. However, accurate identification relies on large datasets, while datasets from field working conditions are scarce and the cost of manually annotating them is high. To rapidly and automatically generate datasets for stacked construction waste, an acquisition and detection platform was built to automatically collect different groups of RGB-D images for instance labeling. Then, based on distribution-points generation theory and a data augmentation algorithm, a rapid-generation method for synthetic construction solid waste datasets was proposed. Additionally, two automatic annotation methods for real stacked construction solid waste datasets, based on semi-supervised self-training and RGB-D fusion edge detection, were proposed; datasets captured under real-world conditions yield better model-training results. Finally, two different working conditions were designed to validate these methods. Under the simple working condition, the generated dataset achieved an F1-score of 95.98, higher than the 94.81 of the manually labeled dataset. Under the complicated working condition, the F1-score obtained by the rapid-generation method reached 97.74, whereas the F1-score of the manually labeled dataset was only 85.97, which demonstrates the effectiveness of the proposed approaches.
Article
Purpose Crack detection of pavement is a critical task in the periodic survey. Efficient, effective and consistent tracking of road conditions by identifying and locating cracks helps promptly informed managers establish an appropriate road maintenance and repair strategy, but it remains a significant challenge. This research proposes practical solutions for automatic crack detection from images with high productivity and cost-effectiveness, thereby improving pavement performance. Design/methodology/approach This research applies a novel deep learning method named TransUnet for crack detection; it is structured on a Transformer combined with convolutional neural networks as the encoder, leveraging a global self-attention mechanism to better extract features for automatic identification. Afterward, the detected cracks are quantified through five morphological indicators: length, mean width, maximum width, area and ratio. These analyses provide valuable information for engineers to assess pavement condition efficiently. Findings In the training process, TransUnet is fed a crack dataset generated by data augmentation with a resolution of 224 × 224 pixels. Subsequently, a test set containing 80 new images is used for the crack detection task with the best selected TransUnet (learning rate 0.01, batch size 1), achieving an accuracy of 0.8927, a precision of 0.8813, a recall of 0.8904, an F1-measure and Dice of 0.8813, and a Mean Intersection over Union of 0.8082. Comparisons with several state-of-the-art methods indicate that the developed approach outperforms them with greater efficiency and higher reliability.
Originality/value The developed approach combines TransUnet with an integrated quantification algorithm for crack detection and quantification, performs well on the comparisons and evaluation metrics, and can potentially serve as the basis for an automated, cost-effective pavement condition assessment scheme.
Chapter
This chapter presents the methods for robotic positioning of zebrafish larvae. A robust three-dimensional localization algorithm of the pipette tip is proposed to serve as a basis of the following techniques. A larva separation and aspiration scheme is proposed to extract a group of randomly positioned larvae one-by-one, and the algorithm to keep the larva at a fixed position within a pipette is also proposed. In addition, a zebrafish larva transportation system driven by the magnetic field is designed. The feasibility of the proposed methods is verified by simulation and experimental results.
Article
Few studies have focused on translation-, scale-, and rotation-invariant recognition of handwritten Chinese characters, since writing variation is large and invariant features are hard to find. In this study, an adequate normalization process is first used to normalize characters such that they are invariant to translation and scale; then, five rotation-invariant features are extracted for invariant recognition. We propose a feature-extraction approach which especially considers the skeleton-distortion problem to effectively extract the desired invariant features. The first four invariant features are used for preclassification to reduce the matching time. The last feature, ring data, is used to construct ring-data vectors and weighted ring-data matrices to individually characterize character samples and characters for invariant recognition. A character set was constructed from 200 handwritten Chinese characters with several different samples of each character in arbitrary orientation. Several experiments were conducted using the character set to evaluate the performance of the proposed preclassification and ring-data matching algorithms. The experimental results show that the proposed approaches work well for invariant handwritten Chinese character recognition and are superior to the moment-invariant matching approach.
Article
Representation and matching of feature patterns are two important steps in image analysis and computer vision applications. In the existing approaches, the features for representation and matching are usually described in a 'deterministic' manner instead of in a 'statistical' form. In this study, a model-based approach to representation and matching of feature patterns involving statistical properties for robot operation monitoring is proposed. Here the features treated include line segments and curve segments. To speed up the matching process, a modified distance-weighted correlation map is constructed in the learning stage so that an exhaustive correspondence search between an input feature pattern T and its corresponding stored feature pattern S is avoided. This greatly reduces the computational effort of matching. Some experimental results show the feasibility of the proposed approach.
Article
An algorithm was developed on a digital photoelastic system for extracting stress intensity factor from the experimentally obtained photoelastic fringe pattern. The case of a strip containing an eccentric crack was studied in detail. Excellent agreement between the present experimental and available analytical results demonstrates the applicability of the algorithm on investigating the fracture problems.
Article
A new algorithm for template-matching parallel thinning is proposed in this article. This algorithm improves on the fast fully parallel thinning algorithm by producing higher-quality images. This article describes the details of the algorithm and its hardware realization. Experimental results demonstrate the effectiveness of the algorithm. © 1999 John Wiley & Sons, Inc. Int J Imaging Syst Technol 10, 318–322, 1999
Article
A computer program, the Line Drawing Editor (LDE), that allows convenient modification of computer representations of simple line drawings has been designed and implemented. What distinguishes the LDE from the multitude of interactive graphics programs that accomplish the same function is the use of a computer-controlled TV camera for input and program control rather than conventional interactive input devices such as light pens or data tablets. A graphical editing language allows the LDE user to indicate, by marking the drawing in appropriate fashion, the changes that are to be made. The drawing, which was generated on a hard-copy output device, with the added edit information, is placed in front of the camera. In successive phases the LDE computes a two-dimensional homogeneous transformation relating the viewed drawing and its computer description, locates the lines added to the drawing, parses the added lines according to the edit language, and finally uses this information to update the drawing description. The edit functions available are move, delete, copy, addition of lines, and addition of drawing symbols from a library.
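The first phase above, relating the viewed drawing to its stored description, rests on a 2-D homogeneous transformation. As an illustrative assumption (the original system's estimation method is not described here), restricting the transform to an affine fit estimated by least squares from matched point pairs might look like:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine transform mapping src points onto dst.
    Returns a 3x3 homogeneous matrix H with [x', y', 1]^T ~ H [x, y, 1]^T."""
    n = len(src)
    A = np.hstack([src, np.ones((n, 1))])      # n x 3 homogeneous coordinates
    # Solve A @ M ~ dst for the 3x2 parameter block M, one column per output axis
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    H = np.eye(3)
    H[:2, :] = M.T
    return H

def apply_h(H, pts):
    """Apply a homogeneous transform H to an n x 2 point array."""
    ph = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]              # divide out the homogeneous scale
```

With such a matrix in hand, lines found in the camera view can be mapped back into the coordinate frame of the stored drawing description before parsing the edit marks.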
Article
In this report three thinning algorithms are developed: one each for use with rectangular, hexagonal, and triangular arrays. The approach to the development of each algorithm is the same. Pictorial results produced by each of the algorithms are presented and the relative performances of the algorithms are compared. It is found that the algorithm operating with the triangular array is the most sensitive to image irregularities and noise, yet it will yield a thinned image with an overall reduced number of points. It is concluded that the algorithm operating in conjunction with the hexagonal array has features which strike a balance between those of the other two arrays.
Article
In this paper we describe a method for recognizing and extracting the road network in peri-urban areas using SPOT panchromatic images. A particular combination of image representation-description algorithms is proposed, which recognizes and extracts road features that are not clearly defined in remotely sensed images and are often confused with other features. The method consists of five algorithms (thresholding, morphological filtering, thinning, linking, and gap filling) that are used sequentially. The only human intervention required is the definition of a threshold. The proposed approach produces a raster road-network representation that is highly complete and locationally accurate. Some experimental results are given in this paper.
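The sequential pipeline described above could be sketched as below, assuming SciPy's `ndimage` morphology as a stand-in for the paper's operators; the thinning and linking stages are omitted here (a Zhang-Suen-style pass, as in the thinning literature discussed throughout this page, could serve as the thinning step), and `road_mask` is a hypothetical name:

```python
import numpy as np
from scipy import ndimage as ndi

def road_mask(gray, threshold, gap=3):
    """Sketch of the sequential pipeline: thresholding, morphological
    clean-up, and gap filling by binary closing."""
    binary = gray > threshold                   # 1) thresholding: the only manual input
    opened = ndi.binary_opening(binary)         # 2) morphological removal of small noise
    # 5) gap filling: closing bridges short breaks in the road candidates
    closed = ndi.binary_closing(opened, structure=np.ones((gap, gap)))
    return closed
```

Thinning the resulting mask to single-pixel width, then linking nearby endpoints, would complete the five-stage sequence.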