Top-k Ranked Document Search in General Text Databases

DOI: 10.1007/978-3-642-15781-3_17

ABSTRACT Text search engines return a set of k documents ranked by similarity to a query. Typically, documents and queries are drawn from natural language text, which can
readily be partitioned into words, allowing data structures and algorithms for ranking to be optimized around a word vocabulary. However, in many
newer search domains (DNA, multimedia, OCR texts, Far East languages) there is often no obvious definition of a word, and traditional
indexing approaches adapt poorly or break down entirely. We present two new algorithms for ranking documents
against a query without making any assumptions about the structure of the underlying text. We build on existing theoretical techniques,
which we have implemented and compared empirically with the new approaches introduced in this paper. Our best approach is significantly
faster than existing methods in RAM, and is even three times faster than a state-of-the-art inverted file implementation for
English text when word queries are issued.
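As a baseline for the problem the abstract states, here is a minimal brute-force sketch (not the paper's algorithm, and with none of its succinct data structures): rank documents by the raw count of pattern occurrences, treating each document as an unsegmented string with no word boundaries assumed. The function name `top_k_docs` is illustrative, not from the paper.

```python
from collections import Counter

def top_k_docs(docs, pattern, k):
    """Rank documents by how often `pattern` occurs as a substring.

    Brute force, O(total text length * pattern length) time; the point is
    the problem definition, which assumes nothing about word boundaries.
    """
    freq = Counter()
    for doc_id, text in enumerate(docs):
        count = 0
        start = 0
        while True:
            pos = text.find(pattern, start)
            if pos < 0:
                break
            count += 1
            start = pos + 1  # advance by one so overlapping matches count
        if count:
            freq[doc_id] = count
    return freq.most_common(k)

docs = ["abracadabra", "banana", "abcabc"]
print(top_k_docs(docs, "ab", 2))  # docs 0 and 2, two occurrences each
```

The succinct-structure approaches the paper studies replace this linear scan with suffix-based indexes, but the ranking criterion (pattern frequency per document) is the same.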

  • ABSTRACT: Supporting top-k document retrieval queries on general text databases, that is, finding the k documents in which a given pattern occurs most frequently, has become a topic of interest with practical applications. While the problem has been solved in optimal time and linear space, the actual space usage is a serious concern. In this paper we study various reduced-space structures that support top-k retrieval and propose new alternatives. Our experimental results show that our novel algorithms and data structures dominate almost all of the space/time tradeoff map.
  • ABSTRACT: The inverted index supports efficient full-text searches on natural language text collections. It requires some extra space over the compressed text, which can be traded for search speed. It is usually fast for single-word searches, yet phrase searches require more expensive intersections. In this article we introduce a different kind of index. It replaces the text, using essentially the same space required by the compressed text alone (compression ratio around 35%). Within this space it supports not only decompression of arbitrary passages, but also efficient word and phrase searches. Searches are orders of magnitude faster than those over inverted indexes when looking for phrases, and still faster on single-word searches when little space is available. Our new indexes are particularly fast at counting the occurrences of words or phrases, which is useful for computing the relevance of words or phrases. We adapt self-indexes that have succeeded in indexing arbitrary strings within compressed space to deal with large alphabets. Natural language texts are then regarded as sequences of words, not characters, yielding word-based self-indexes. We design an architecture that separates the searchable sequence from its presentation aspects; this permits applying case folding, stemming, stopword removal, etc., as is usual on inverted indexes.
    ACM Transactions on Information Systems - TOIS. 01/2012;
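The word-based self-index idea above can be sketched in a few lines: map the text to a sequence of word identifiers, after which a phrase query becomes a single exact-match search over that sequence rather than an intersection of posting lists. This is an illustrative toy (a linear scan stands in for the compressed self-index; `build_word_sequence` and `count_phrase` are hypothetical names, not from the article):

```python
import re

def build_word_sequence(text):
    # Tokenize and map each distinct word to an integer id,
    # turning the text into a sequence over a large "word alphabet".
    words = re.findall(r"\w+", text.lower())
    vocab = {}
    seq = []
    for w in words:
        seq.append(vocab.setdefault(w, len(vocab)))
    return seq, vocab

def count_phrase(seq, vocab, phrase):
    # A phrase query is an exact match over the id sequence, so
    # counting its occurrences needs no posting-list intersection.
    ids = [vocab.get(w) for w in phrase.lower().split()]
    if None in ids:
        return 0  # some query word never occurs in the text
    m = len(ids)
    return sum(1 for i in range(len(seq) - m + 1) if seq[i:i + m] == ids)

seq, vocab = build_word_sequence("to be or not to be is to be")
print(count_phrase(seq, vocab, "to be"))  # three occurrences
```

In the real index the id sequence would be stored in compressed, self-indexed form, but the key design choice, searching a word-level sequence instead of intersecting per-word posting lists, is the same.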
  • ABSTRACT: Let $\mathcal{D}$ = $\{d_1, d_2, d_3, ..., d_D\}$ be a given set of $D$ (string) documents of total length $n$. The top-$k$ document retrieval problem is to index $\mathcal{D}$ such that when a pattern $P$ of length $p$ and a parameter $k$ come as a query, the index returns the $k$ documents most relevant to the pattern $P$. Hon et al. [HSV09] gave the first linear-space framework to solve this problem in $O(p + k\log k)$ time. This was improved by Navarro and Nekrich [NN12] to $O(p + k)$. These results are powerful enough to support arbitrary relevance functions such as frequency, proximity, PageRank, etc. In many applications such as desktop or email search, the data resides on disk and hence disk-bound indexes are needed. Despite continued progress on this problem in its theoretical, practical and compression aspects, non-trivial bounds in the external memory model have so far been elusive. Internal-memory (RAM) solutions to this problem decompose it into $O(p)$ subproblems and thus incur an additive $O(p)$ factor. In external memory, these approaches lead to $O(p)$ I/Os instead of the optimal $O(p/B)$ I/O term, where $B$ is the block size. We re-interpret the problem independently of $p$, as interval stabbing with priority over a tree-shaped structure. This leads us to a linear-space index in external memory supporting top-$k$ queries (with unsorted outputs) in near-optimal $O(p/B + \log_B n + \log^{(h)} n + k/B)$ I/Os for any constant $h$ (where $\log^{(1)} n = \log n$ and $\log^{(h)} n = \log(\log^{(h-1)} n)$). We then obtain an $O(n\log^* n)$-space index with optimal $O(p/B + \log_B n + k/B)$ I/Os.
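The O(k log k) term in Hon et al.'s bound reflects a final selection step over gathered candidate documents. A hedged sketch of that selection step alone, assuming the candidates (score, document) have already been produced; the candidate-gathering machinery, which is the hard part of the framework, is omitted:

```python
import heapq

def top_k_from_candidates(candidates, k):
    """Select the k highest-scoring documents from (score, doc) pairs.

    Illustrative only: a size-k heap ranks n candidates in O(n log k)
    time, which is where a k log k style term enters once the number
    of candidates is bounded in terms of k.
    """
    return heapq.nlargest(k, candidates)

cands = [(5, "d1"), (2, "d2"), (9, "d3"), (7, "d4")]
print(top_k_from_candidates(cands, 2))  # [(9, 'd3'), (7, 'd4')]
```

The external-memory result described above avoids materializing per-subproblem candidates altogether, which is what removes the additive O(p) cost.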
