Chapter

Top-k Ranked Document Search in General Text Databases

DOI: 10.1007/978-3-642-15781-3_17

ABSTRACT: Text search engines return a set of k documents ranked by similarity to a query. Typically, documents and queries are drawn from natural language text, which can readily be partitioned into words, allowing optimizations of data structures and algorithms for ranking. However, in many new search domains (DNA, multimedia, OCR texts, Far East languages) there is often no obvious definition of words, and traditional indexing approaches are not so easily adapted, or break down entirely. We present two new algorithms for ranking documents against a query without making any assumptions about the structure of the underlying text. We build on existing theoretical techniques, which we have implemented and compared empirically with the new approaches introduced in this paper. Our best approach is significantly faster than existing methods in RAM, and is even three times faster than a state-of-the-art inverted file implementation for English text when word queries are issued.
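
The chapter's contribution is in the index structures themselves; as a point of reference only, the task can be stated as a brute-force computation. Below is a minimal Python sketch, assuming the relevance score is simply the number of occurrences of the query pattern in each document (one of the frequency-based measures such indexes typically support); the names and the scoring choice are illustrative, not taken from the chapter.

    # Brute-force reference for top-k ranked document search over raw strings:
    # score each document by the number of occurrences of an arbitrary
    # substring pattern (no word boundaries assumed) and keep the k best.
    import heapq

    def count_occurrences(text: str, pattern: str) -> int:
        """Count (possibly overlapping) occurrences of pattern in text."""
        count, start = 0, 0
        while True:
            pos = text.find(pattern, start)
            if pos == -1:
                return count
            count += 1
            start = pos + 1

    def top_k_documents(docs: list[str], pattern: str, k: int) -> list[tuple[int, int]]:
        """Return up to k (doc_id, score) pairs with the highest scores."""
        scored = ((count_occurrences(d, pattern), i) for i, d in enumerate(docs))
        best = heapq.nlargest(k, scored)    # keeps only the k best scores
        return [(i, s) for s, i in best if s > 0]

    # Works equally well for DNA-like strings with no word structure.
    docs = ["ACGTACGTGA", "TTTACGTT", "GGGGGG"]
    print(top_k_documents(docs, "ACG", k=2))    # -> [(0, 2), (1, 1)]

The chapter's algorithms avoid this full scan of the collection; the sketch only fixes the input and output of the problem being solved.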

  • ABSTRACT: Let $\mathcal{D}$ be a collection of $D$ documents, which are strings over an alphabet of size $\sigma$, of total length $n$. We describe a data structure that uses linear space and reports the $k$ most relevant documents that contain a query pattern $P$, which is a string of length $p$, in time $O(p/\log_\sigma n+k)$, which is optimal in the RAM model in the general case where $\lg D = \Theta(\log n)$, and involves a novel RAM-optimal suffix tree search. Our construction supports an ample set of important relevance measures... [clip] When $\lg D = o(\log n)$, we show how to reduce the space of the data structure from $O(n\log n)$ to $O(n(\log\sigma+\log D+\log\log n))$ bits... [clip] We also consider the dynamic scenario, where documents can be inserted and deleted from the collection. We obtain linear space and query time $O(p(\log\log n)^2/\log_\sigma n+\log n + k\log\log k)$, whereas insertions and deletions require $O(\log^{1+\epsilon} n)$ time per symbol, for any constant $\epsilon>0$. Finally, we consider an extended static scenario where an extra parameter $par(P,d)$ is defined, and the query must retrieve only documents $d$ such that $par(P,d)\in [\tau_1,\tau_2]$, where this range is specified at query time. We solve these queries using linear space and $O(p/\log_\sigma n + \log^{1+\epsilon} n + k\log^\epsilon n)$ time, for any constant $\epsilon>0$. Our technique is to translate these top-$k$ problems into multidimensional geometric search problems. As an additional bonus, we describe some improvements to those problems.
    07/2013;
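
    To make the abstract's query bound concrete, here is a small worked instance with illustrative numbers (not taken from the abstract): a DNA collection with alphabet size σ = 4 and total length n = 2^30 symbols, queried with a pattern of length p = 60.

    % Worked instance of the O(p / log_sigma n + k) bound; numbers are illustrative.
    \[
        \log_\sigma n = \log_4 2^{30} = 15,
    \]
    \[
        O\!\left(\frac{p}{\log_\sigma n} + k\right)
          = O\!\left(\frac{60}{15} + k\right)
          = O(4 + k),
    \]
    % versus the O(p + k) = O(60 + k) of a character-by-character suffix-tree descent.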
  • ABSTRACT: Engineering efficient implementations of compact and succinct structures is a time-consuming and challenging task, since there is no standard library of easy-to-use, highly optimized, and composable components. One consequence is that measuring the practical impact of new theoretical proposals is a difficult task, since older baseline implementations may not rely on the same basic components, and reimplementing from scratch can be very time-consuming. In this paper we present a framework for experimentation with succinct data structures, providing a large set of configurable components, together with tests, benchmarks, and tools to analyze resource requirements. We demonstrate the functionality of the framework by recomposing succinct solutions for document retrieval.
    11/2013;
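
    As a toy illustration of the kind of composable component such frameworks package, the sketch below implements a bit vector with sampled rank counts and applies it to one small document-retrieval subtask: mapping a position in the concatenated collection back to its document identifier. This is a from-scratch Python sketch written for this page, not code from the framework described above; real succinct libraries store the bits compressed and answer rank in constant time.

    # Bit vector with block-sampled rank: B[i] = 1 iff position i starts a document
    # in the concatenated text T = d_0 d_1 ... d_{D-1}. Then rank1(i) - 1 is the
    # identifier of the document containing position i.
    class RankBitVector:
        BLOCK = 64

        def __init__(self, bits):
            self.bits = list(bits)
            self.block_ranks = [0]                 # ones seen before each full block
            ones = 0
            for i, b in enumerate(self.bits):
                ones += b
                if (i + 1) % self.BLOCK == 0:
                    self.block_ranks.append(ones)

        def rank1(self, i):
            """Number of 1 bits in bits[0..i], inclusive."""
            block = (i + 1) // self.BLOCK
            return self.block_ranks[block] + sum(self.bits[block * self.BLOCK : i + 1])

    docs = ["ACGT", "TTACG", "GG"]
    text = "".join(docs)
    starts, offset = set(), 0
    for d in docs:
        starts.add(offset)
        offset += len(d)
    boundaries = RankBitVector(1 if i in starts else 0 for i in range(len(text)))
    print(boundaries.rank1(6) - 1)    # position 6 of "ACGTTTACGGG" lies in document 1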
  • ABSTRACT: Formulating and processing phrases and other term dependencies to improve query effectiveness is an important problem in information retrieval. However, accessing word-sequence statistics using inverted indexes requires unreasonable processing time or substantial space overhead. Establishing a balance between these competing space and time trade-offs can dramatically improve system performance. In this article, we present and analyze a new index structure designed to improve query efficiency in dependency retrieval models. By adapting a class of (ε, δ)-approximation algorithms originally proposed for sketch summarization in networking applications, we show how to accurately estimate statistics important in term-dependency models with low, probabilistically bounded error rates. The space requirements for the vocabulary of the index are only logarithmically linked to the size of the vocabulary. Empirically, we show that the sketch index can reduce the space requirements of the vocabulary component of an index of n-grams consisting of between 1 and 4 words extracted from the GOV2 collection to less than 0.01% of the space requirements of the vocabulary of a full index. We also show that larger n-gram queries can be processed considerably more efficiently than in current alternatives, such as positional and next-word indexes.
    ACM Transactions on Information Systems (TOIS). 01/2014; 32(1).
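
    A canonical member of the (ε, δ)-approximation class referred to above is the Count-Min sketch from the streaming and networking literature. The minimal Python sketch below (illustrative only; the article's index layout may differ) shows how n-gram frequencies can be estimated in space that depends only on ε and δ rather than on the number of distinct n-grams.

    # Count-Min sketch: estimates a count within an additive error of eps * N
    # (N = total insertions) with probability at least 1 - delta, using
    # depth x width counters instead of one counter per distinct n-gram.
    import math

    class CountMinSketch:
        def __init__(self, eps=0.01, delta=0.05):
            self.width = math.ceil(math.e / eps)            # counters per row
            self.depth = math.ceil(math.log(1.0 / delta))   # independent rows
            self.rows = [[0] * self.width for _ in range(self.depth)]

        def _cells(self, item):
            # Real implementations use pairwise-independent hash families;
            # Python's built-in hash is good enough for a demonstration.
            for r in range(self.depth):
                yield r, hash((r, item)) % self.width

        def add(self, item, count=1):
            for r, c in self._cells(item):
                self.rows[r][c] += count

        def estimate(self, item):
            # Never underestimates; overestimates by at most eps * N w.h.p.
            return min(self.rows[r][c] for r, c in self._cells(item))

    cms = CountMinSketch()
    for ngram in ["top k", "top k", "suffix tree", "top k"]:
        cms.add(ngram)
    print(cms.estimate("top k"))    # -> 3 (possibly slightly more under collisions)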
