Conference Paper

Discrimination of Old Document Images Using Their Style

L3i Labs., Univ. of La Rochelle, La Rochelle, France
DOI: 10.1109/ICDAR.2011.86 Conference: 2011 International Conference on Document Analysis and Recognition, ICDAR 2011, Beijing, China, September 18-21, 2011
Source: IEEE Xplore


Building on the principle described by Pareti et al. in [1], [2], and by Chouaib et al. in [3], this paper proposes to combine Zipf's law with a bag-of-patterns representation to implement a document indexing scheme. In contrast to these two approaches, we retain the most important patterns according to the TF-IDF criterion, and the pattern selection is local. This paper presents the different stages of our indexing process, as well as their application to historical documents. Results on complex images are given, illustrated and discussed.
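The TF-IDF pattern selection described above, combined with the per-image threshold mentioned in the citing excerpt below (keep patterns whose TF-IDF exceeds t% of the image's maximum), can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; the function name, the pattern encoding, and the data layout are all assumptions.

```python
from collections import Counter
from math import log

def tfidf_pattern_selection(images, t=0.1):
    """Rank patterns per image by TF-IDF and keep the most important ones.

    `images` maps an image id to a list of pattern codes (e.g. codes of
    fixed-size pixel masks, as in a Zipf-law analysis). Patterns whose
    TF-IDF is at least t * (max TF-IDF of that image) are retained, so
    the selection is local to each image. All names here are
    illustrative, not taken from the paper.
    """
    n_images = len(images)
    # Document frequency: in how many images does each pattern occur?
    df = Counter()
    for patterns in images.values():
        df.update(set(patterns))

    selected = {}
    for img_id, patterns in images.items():
        counts = Counter(patterns)
        total = len(patterns)
        # Term frequency weighted by inverse document frequency
        tfidf = {p: (c / total) * log(n_images / df[p])
                 for p, c in counts.items()}
        cutoff = t * max(tfidf.values())
        selected[img_id] = {p for p, w in tfidf.items() if w >= cutoff}
    return selected
```

Patterns occurring in every image get an IDF of zero and are discarded whenever any pattern is discriminative, which matches the intent of keeping only style-specific patterns.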



Available from: Mickaël Coustaty
  • Source
    • "Another interest of examining lettrines has recently been revealed by historian experts, which consists in the computer-aided discrimination of lettrines according to their style for lettrine indexing. For instance, Coustaty and Ogier [6] combined the use of Zipf's law with a bag of patterns. By extracting a set of the most relevant patterns (with pre-defined size masks) from lettrine images and ordering them according to the TF-IDF frequency criterion, they proposed a supervised method to retrieve lettrines according to their ornamental background style."
    ABSTRACT: This article tackles some important issues relating to the analysis of a particular case of complex ancient graphic images, called "lettrines", "drop caps" or "ornamental letters". Our contribution focuses on proposing generic solutions for lettrine recognition and classification. Firstly, we propose a bottom-up segmentation method, based on texture, ensuring the separation of the letter from the elements of the background in an ornamental letter. Secondly, a structural representation is proposed for characterizing a lettrine. This structural representation is based on automatically filtering relevant information by extracting representative homogeneous regions from a lettrine to generate a graph-based signature. The proposed signature provides a rich and holistic description of the lettrine style by integrating varying low-level features (e.g. texture). Then, to categorize and classify lettrines with similar style, structure (i.e. ornamental background) and content (i.e. letter), a graph-matching paradigm has been carried out to compare and classify the resulting graph-based signatures. Finally, to demonstrate the robustness of the proposed solutions and provide additional insights into their accuracy, an experimental evaluation has been conducted using a relevant set of lettrine images. In addition, we compare the results achieved with those obtained using state-of-the-art methods to illustrate the effectiveness of the proposed solutions.
    Full-text · Conference Paper · Aug 2015
  • Source
    • "As a consequence, we can use them as global document features and thus describe image styles. 4) From Zipf law to Image description: Starting from the results obtained in [2], we keep patterns which lie in the left-hand portion. To do this, we retain patterns whose TF-IDF is higher than t% of the maximum TF-IDF value for each image."
    ABSTRACT: This paper deals with cultural heritage preservation and ancient document indexing. In the management of historical documents, ancient images are described using semantic information, often manually annotated by historians. In this paper, we propose an approach to interactively propagate the historians' knowledge to a database of drop caps images populated with their manual annotations. Based on a novel document indexing scheme which combines the use of the Zipf law and the use of bag of patterns, our approach extends the Bag of Words model to represent the knowledge by visual features through relevance feedback. Then annotation propagation is automatically performed to propagate knowledge to the drop caps image database. In this article, our approach is presented together with preliminary experimental results and an illustrative example.
    Full-text · Conference Paper · Aug 2013
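The Zipf-law pattern analysis that underlies both the paper and the citing works can be sketched as follows: slide a small mask over the image, count how often each pattern occurs, and rank the patterns by frequency (under Zipf's law, frequency falls roughly as the inverse of rank). This is a minimal illustrative sketch; the function name, the mask size, and the two-level quantization are assumptions, not details from the papers.

```python
from collections import Counter

def zipf_rank_frequency(pixels, mask=3):
    """Count mask x mask pixel patterns in a grayscale image and return
    them sorted by decreasing frequency (a Zipf-style rank/frequency list).

    `pixels` is a 2D list of gray values in 0..255. Each window is
    quantized to 2 gray levels so the number of distinct patterns stays
    manageable; this choice is illustrative only.
    """
    h = len(pixels)
    w = len(pixels[0])
    counts = Counter()
    for y in range(h - mask + 1):
        for x in range(w - mask + 1):
            # Encode the window as a tuple of quantized pixel values
            pattern = tuple(pixels[y + dy][x + dx] // 128
                            for dy in range(mask)
                            for dx in range(mask))
            counts[pattern] += 1
    # most_common() yields (pattern, frequency) pairs, highest first
    return counts.most_common()
```

The resulting rank/frequency list is what the TF-IDF selection step then prunes, keeping only the patterns that best discriminate one image style from another.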