Text Image Spotting Using Local Crowdedness and Hausdorff Distance

Conference Paper · November 2006
DOI: 10.1007/11931584_36 · Source: DBLP
Conference: Digital Libraries: Achievements, Challenges and Opportunities, 9th International Conference on Asian Digital Libraries, ICADL 2006, Kyoto, Japan, November 27-30, 2006, Proceedings
Abstract
This paper investigates the Hausdorff distance, commonly used to measure image similarity, to see whether it is also effective for document image retrieval. We propose a method that combines a local crowdedness algorithm with a modified Hausdorff distance capable of detecting partial text images within a document image. We found that the proposed method achieves reliable text-spotting performance on postal envelopes.
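
As a point of reference, the modified Hausdorff distance commonly used for this kind of point-set image matching (the Dubuisson-Jain formulation) can be sketched as follows. This is a minimal sketch only: the paper's specific modification for partial text spotting and its local crowdedness feature are not reproduced here, and the toy patches are purely illustrative.

```python
# Minimal sketch of the (Dubuisson-Jain) modified Hausdorff distance between
# two point sets, e.g. foreground-pixel coordinates of binarized text images.
# NOTE: this is the standard formulation, not the paper's exact variant.
import numpy as np

def directed_mhd(A: np.ndarray, B: np.ndarray) -> float:
    """Average distance from each point in A to its nearest point in B."""
    # Pairwise Euclidean distances between the two point sets, shape (|A|, |B|).
    diff = A[:, None, :].astype(float) - B[None, :, :].astype(float)
    dists = np.sqrt((diff ** 2).sum(axis=-1))
    return float(dists.min(axis=1).mean())

def modified_hausdorff(A: np.ndarray, B: np.ndarray) -> float:
    """Symmetric modified Hausdorff distance: max of the two directed terms."""
    return max(directed_mhd(A, B), directed_mhd(B, A))

if __name__ == "__main__":
    # Toy binary patches standing in for a query text image and a candidate region.
    query_patch = np.array([[0, 1, 1],
                            [0, 1, 0],
                            [0, 1, 0]])
    candidate_patch = np.array([[1, 1, 0],
                                [1, 0, 0],
                                [1, 0, 0]])
    A = np.argwhere(query_patch > 0)      # foreground pixel coordinates
    B = np.argwhere(candidate_patch > 0)
    print(modified_hausdorff(A, B))       # smaller value = more similar
```

For partial matching (locating a small query text image inside a larger document image), one would typically evaluate only the directed term from the query point set to the candidate region, so that extra points in the region do not penalize the match; how the paper modifies the distance for this purpose is described in the full text.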
  • ABSTRACT: In a multimedia world, one would like electronic access to all kinds of information. But a lot of important information still only exists on paper and it is a challenge to efficiently access or navigate this information even if it is scanned in. The previously proposed "word spotting" idea is an approach for accessing and navigating a collection of handwritten documents available as images using an index automatically generated by matching words as pictures. The most difficult task in solving this problem is the matching of word images. The quality of the aged documents and the variations in handwriting make this a challenging problem. Here we present a number of word matching techniques along with new normalization methods that are crucial for their success. Efficient pruning techniques, which quickly reduce the set of possible matches for a given word, are also discussed. Our results show that the best of the discussed matching algorithms achieves an average precision of 73% for documents of reasonable quality.
    Article · May 2002 · IEEE Transactions on Knowledge and Data Engineering
  • ABSTRACT: In a typical content-based image retrieval (CBIR) system, query results are a set of images sorted by feature similarities with respect to the query. However, images with high feature similarities to the query may be very different from the query in terms of semantics. This is known as the semantic gap. We introduce a novel image retrieval scheme, CLUster-based rEtrieval of images by unsupervised learning (CLUE), which tackles the semantic gap problem based on a hypothesis: semantically similar images tend to be clustered in some feature space. CLUE attempts to capture semantic concepts by learning the way that images of the same semantics are similar and retrieving image clusters instead of a set of ordered images. Clustering in CLUE is dynamic. In particular, clusters formed depend on which images are retrieved in response to the query. Therefore, the clusters give the algorithm as well as the users semantically relevant clues as to where to navigate. CLUE is a general approach that can be combined with any real-valued symmetric similarity measure (metric or nonmetric). Thus it may be embedded in many current CBIR systems. Experimental results based on a database of about 60,000 images from COREL demonstrate improved performance.
    Article · Oct 2003 · IEEE Transactions on Knowledge and Data Engineering
  • ABSTRACT: With the rising popularity and importance of document images as an information source, information retrieval in document image databases has become a growing and challenging problem. In this paper, we propose an approach with the capability of matching partial word images to address two issues in document image retrieval: word spotting and similarity measurement between documents. First, each word image is represented by a primitive string. Then, an inexact string matching technique is utilized to measure the similarity between the two primitive strings generated from two word images. Based on the similarity, we can estimate how a word image is relevant to the other and, thereby, decide whether one is a portion of the other. To deal with various character fonts, we use a primitive string which is tolerant to serif and font differences to represent a word image. Using this technique of inexact string matching, our method is able to successfully handle the problem of heavily touching characters. Experimental results on a variety of document image databases confirm the feasibility, validity, and efficiency of our proposed approach in document image retrieval.
    Article · Dec 2004
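
The last reference above matches word images by converting them to primitive strings and comparing those strings inexactly. A minimal sketch of such inexact matching, using a plain weighted edit distance, is given below; the primitive strings and the cost values are illustrative assumptions, not the cited paper's actual scheme.

```python
# Minimal sketch of inexact string matching via weighted edit distance.
# The primitive alphabet and costs are assumptions for illustration only.
def inexact_match_cost(p: str, q: str, sub_cost: float = 1.0,
                       gap_cost: float = 0.7) -> float:
    """Weighted edit distance between two primitive strings.

    A low cost relative to the shorter string's length suggests that one
    word image may be a portion of the other.
    """
    n, m = len(p), len(q)
    # dp[i][j] = minimum cost of aligning p[:i] with q[:j].
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap_cost
    for j in range(1, m + 1):
        dp[0][j] = j * gap_cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = dp[i - 1][j - 1] + (0.0 if p[i - 1] == q[j - 1] else sub_cost)
            dp[i][j] = min(match,
                           dp[i - 1][j] + gap_cost,   # delete a primitive from p
                           dp[i][j - 1] + gap_cost)   # insert a primitive from q
    return dp[n][m]

if __name__ == "__main__":
    # Hypothetical primitive strings extracted from two word images.
    print(inexact_match_cost("ascdesc", "ascdescdot"))
```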