Article

Tight and simple Web graph compression

Computing Research Repository (CoRR), 06/2010
Source: arXiv

ABSTRACT: Analysing Web graphs has applications in determining page ranks, fighting Web
spam, detecting communities and mirror sites, and more. This study is, however,
hampered by the necessity of storing a major part of huge graphs in external
memory, which prevents efficient random access to edge (hyperlink) lists. A number
of compression-based algorithms have thus been presented to represent Web graphs
succinctly while still providing random access. Those techniques are usually based
on differential encoding of the adjacency lists, finding repeating nodes or node
regions in successive lists, more general grammar-based transformations, or
2-dimensional representations of the binary matrix of the graph. In this paper we
present two Web graph compression algorithms. The first can be seen as an
engineering refinement of the Boldi and Vigna (2004) method: we extend the notion
of similarity between link lists and use a more compact encoding of residuals. The
algorithm works on blocks of varying size (in the number of input lists) and
sacrifices access time for a better compression ratio, achieving a more succinct
graph representation than other algorithms reported in the literature. The second
algorithm works on blocks of fixed size (again in the number of input lists), and
its key mechanism is merging the block into a single ordered list. This method
achieves much more attractive space-time tradeoffs.
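
The techniques the abstract refers to (differential encoding of adjacency lists, similarity between link lists, residual encoding) can be illustrated with a minimal sketch. The Python below only shows these general ideas in the spirit of the Boldi-Vigna scheme; it is not the encoding used in the paper, the function names (gap_encode, encode_with_reference, etc.) are made up for this example, and in practice the gaps, copy flags and residuals would be fed to a variable-length integer coder rather than kept as plain Python lists.

    # Minimal sketch (not the paper's exact encoding): differential ("gap")
    # coding of one sorted adjacency list, and a reference-based encoding in
    # which a list is stored as copy flags against a similar earlier list
    # plus its leftover "residual" links. All names are illustrative.

    def gap_encode(adj):
        """Turn a sorted adjacency list into gaps: [5, 7, 12] -> [5, 2, 5]."""
        gaps, prev = [], 0
        for v in adj:
            gaps.append(v - prev)
            prev = v
        return gaps

    def gap_decode(gaps):
        """Inverse of gap_encode."""
        adj, acc = [], 0
        for g in gaps:
            acc += g
            adj.append(acc)
        return adj

    def encode_with_reference(adj, ref):
        """Store adj as (copy flags over ref, gap-coded residuals).

        copy_flags[i] == 1 means ref[i] is also a neighbour in adj;
        residuals are the neighbours of adj not present in ref.
        """
        adj_set, ref_set = set(adj), set(ref)
        copy_flags = [1 if v in adj_set else 0 for v in ref]
        residuals = [v for v in adj if v not in ref_set]
        return copy_flags, gap_encode(residuals)

    def decode_with_reference(copy_flags, residual_gaps, ref):
        """Rebuild the sorted adjacency list from its reference-based encoding."""
        copied = [v for v, f in zip(ref, copy_flags) if f]
        return sorted(copied + gap_decode(residual_gaps))

    # Example: the second list shares most of its links with the first one.
    ref = [2, 5, 7, 12, 40]
    adj = [2, 5, 7, 13, 40]
    flags, res = encode_with_reference(adj, ref)
    assert decode_with_reference(flags, res, ref) == adj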

Related publications:

  • ABSTRACT: Analysing Web graphs is hampered by the necessity of storing a major part of huge graphs in external memory, which prevents efficient random access to edge (hyperlink) lists. A number of compression-based algorithms have thus been presented to represent Web graphs succinctly while still providing random access. Our algorithm belongs to this category: it works on contiguous blocks of adjacency lists, and its key mechanism is merging the block into a single ordered list (a minimal sketch of this merging idea appears after this list). This method achieves compression ratios much better than most methods known from the literature at rather competitive access times.
    Keywords: graph compression, random access
    08/2011: pages 385-392
  • ABSTRACT: Analyzing Web graphs has applications in determining page ranks, fighting Web spam, detecting communities and mirror sites, and more. This study is, however, hampered by the necessity of storing a major part of huge graphs in external memory, which prevents efficient random access to edge (hyperlink) lists. A number of compression-based algorithms have thus been presented to represent Web graphs succinctly while still providing random access. Those techniques are usually based on differential encoding of the adjacency lists, finding repeating nodes or node regions in successive lists, more general grammar-based transformations, or 2-dimensional representations of the binary matrix of the graph. In this paper we present three Web graph compression algorithms. The first can be seen as an engineering refinement of the Boldi and Vigna (2004) [8] method: we extend the notion of similarity between link lists and use a more compact encoding of residuals. The algorithm works on blocks of varying size (in the number of input lists) and sacrifices access time for a better compression ratio, achieving a more succinct graph representation than other algorithms reported in the literature. The second algorithm works on blocks of fixed size (in the number of input lists); its key mechanism is merging the block into a single ordered list. This method achieves much more attractive space-time tradeoffs. Finally, we present an algorithm for bidirectional neighbor query support, which offers compression ratios better than those known from the literature.
    Discrete Applied Mathematics 01/2014; 163:298–306
  • ABSTRACT: In recent years, studying the content of the World Wide Web has become a very important yet rather difficult task. There is a need for a compression technique that allows a Web graph representation to be kept in main memory while maintaining random access times competitive with accessing an uncompressed Web graph on a hard drive. Techniques that accomplish this already exist, but there is still room for improvement, and this thesis attempts to prove it. It compares two state-of-the-art methods (BV and k2-partitioned) with two previously implemented algorithms, LM and 2D (rewritten in the C++ programming language to maximize speed and resource-management efficiency), and introduces a new variant of the latter, called 2D stripes. The thesis also serves as a proof of concept. The final considerations show the positive and negative aspects of all presented methods, demonstrate the feasibility of the new variant, and indicate future directions for development.
    04/2013
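
The key mechanism shared by the second algorithm of the main abstract and the 08/2011 chapter above is merging a block of consecutive adjacency lists into a single ordered list. The sketch below shows one plausible reading of that idea, assuming a per-list membership bit vector over the merged list; it is not the authors' actual format, and the names merge_block and extract_list are hypothetical. In a real encoder the merged list would itself be gap-coded and the bit vectors compressed.

    # Minimal sketch (one plausible reading, not the authors' exact format):
    # merge a fixed-size block of adjacency lists into a single ordered list
    # of distinct targets, and keep one membership bit vector per original
    # list. Everything is left as plain Python lists for clarity.

    def merge_block(block):
        """block: list of sorted adjacency lists (one per node in the block).

        Returns (merged, flags) where merged is the sorted union of all
        targets and flags[i][j] == 1 iff merged[j] occurs in block[i].
        """
        merged = sorted(set().union(*block)) if block else []
        flags = []
        for adj in block:
            adj_set = set(adj)
            flags.append([1 if v in adj_set else 0 for v in merged])
        return merged, flags

    def extract_list(merged, flags, i):
        """Random access inside a decoded block: rebuild the i-th adjacency list."""
        return [v for v, f in zip(merged, flags[i]) if f]

    # Example block of 3 adjacency lists sharing many targets.
    block = [[2, 5, 7, 40],
             [2, 5, 13, 40],
             [5, 7, 40]]
    merged, flags = merge_block(block)
    assert merged == [2, 5, 7, 13, 40]
    assert extract_list(merged, flags, 1) == [2, 5, 13, 40]

The space saving in such a scheme comes from storing each shared target once per block instead of once per list; the per-list bit vectors are cheap when the lists within a block overlap heavily, which is exactly the locality Web graphs are known for.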
