Algorithms and Hardware for Data Compression in Point Rendering Applications

Department of Electronic and Computer Engineering, University of Santiago de Compostela, Spain
Source: DBLP


The high storage requirements of point rendering applications make data compression techniques attractive. Point rendering has been proposed only recently and, consequently, no compression strategies have yet been developed for it. In this paper we present compression algorithms for two specific data distributions widely used in point rendering: a naive distribution with no specific ordering of the points, and a layer distribution, which is suitable for incremental algorithms. In the latter case the points are sorted into layers and the connectivity among them is encoded. The algorithms we propose achieve high compression rates (5.0 bits/point for the naive distribution and 7.7 bits/point for the layer distribution). Additionally, we present a hardware implementation of the decompression stage for both algorithms; both are implemented in a single hardware unit with a control signal to select between them.
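The abstract gives no implementation details, but compression schemes for point data of this kind typically rely on differential (delta) coding: points that are close in traversal order have similar coordinates, so storing differences instead of absolute values yields small residuals that entropy-code well. The following is a minimal sketch of that general idea, not the paper's actual algorithm; all function names are illustrative.

```python
# Illustrative sketch of differential (delta) coding of quantized
# point coordinates. Not the codec described in the paper.

def delta_encode(values):
    """Replace each value by its difference from the previous one."""
    out = []
    prev = 0
    for v in values:
        out.append(v - prev)
        prev = v
    return out

def delta_decode(deltas):
    """Invert delta_encode: a running sum restores the original values."""
    out = []
    acc = 0
    for d in deltas:
        acc += d
        out.append(acc)
    return out

coords = [100, 102, 101, 105, 110]   # quantized x-coordinates of ordered points
deltas = delta_encode(coords)        # [100, 2, -1, 4, 5] -- small residuals
assert delta_decode(deltas) == coords
```

The small residuals produced this way need fewer bits per point than the raw coordinates, which is the lever behind compression rates such as the 5.0 and 7.7 bits/point reported above.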



Available from: Margarita Amor, Mar 26, 2014
    • "It is necessary to develop an efficient compression algorithm for point data. Mallón et al. [11] proposed a differential coding approach to encode regularly sampled point data, such as surfel images. Fleishman et al. [12] projected points onto local polynomial surfaces to construct multilevel scalar displacement maps and encoded the displacements progressively. "
    ABSTRACT: We propose a lossless compression algorithm for three-dimensional point data in graphics applications. In typical point representation, each point is treated as a sphere and its geometrical and normal data are stored in the hierarchical structure of bounding spheres. The proposed algorithm sorts child spheres according to their positions to achieve a higher coding gain for geometrical data. Also, the proposed algorithm compactly encodes normal data by exploiting high correlation between parent and child normals. Simulation results show that the proposed algorithm saves up to 60% of storage space.
    IEEE Transactions on Multimedia, 01/2006. DOI:10.1109/TMM.2005.858410
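The child-sphere sorting idea in the cited abstract can be sketched as follows: ordering sibling sphere centers by position makes consecutive centers close together, so the residuals left by differential coding shrink. This is a hedged illustration under that assumption, not the cited paper's actual codec; all names are made up for the example.

```python
# Illustrative sketch: sorting child-sphere centers before differential
# coding reduces residual magnitudes. Not the algorithm of the cited paper.

def sort_children(centers):
    """Sort child-sphere centers lexicographically by (x, y, z)."""
    return sorted(centers)

def residuals(centers):
    """Per-axis differences between consecutive centers."""
    out = []
    prev = centers[0]
    for c in centers[1:]:
        out.append(tuple(a - b for a, b in zip(c, prev)))
        prev = c
    return out

def magnitude(res):
    """Sum of absolute residual components (a rough proxy for coding cost)."""
    return sum(abs(v) for r in res for v in r)

unsorted = [(9, 1, 0), (0, 2, 1), (8, 0, 0), (1, 1, 1)]
before = magnitude(residuals(unsorted))
after = magnitude(residuals(sort_children(unsorted)))
assert after < before   # sorting shrinks the residuals to be encoded
```

Smaller residuals translate directly into the "higher coding gain for geometrical data" that the cited abstract attributes to sorting.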