Conference Paper

Lossless Compression of Hexahedral Meshes

Lawrence Livermore National Laboratory, Livermore, CA
DOI: 10.1109/DCC.2008.12 Conference: Data Compression Conference, 2008. DCC 2008
Source: DBLP

ABSTRACT Many science and engineering applications use high-resolution unstructured hexahedral meshes to model solid 3D shapes for finite element simulations. These simulations frequently dump the mesh and associated fields to disk for subsequent analysis, which involves the transfer of huge volumes of data. To reduce requirements on disk space and bandwidth, we propose efficient schemes for lossless online compression of hexahedral mesh geometry and connectivity. Our approach is to use hash-based value predictors to transform the mesh connectivity list into a more compact byte-aligned stream of symbols that can then be efficiently compressed using conventional text compressors such as gzip. Our scheme is memory efficient, fast, and simple to implement, and yields 1-3 orders of magnitude reduction on a set of benchmark meshes. For geometry and field coding, we derive a set of local spectral predictors optimized for each possible configuration of previously encoded and thus available vertices within a hexahedron. Combined with lossless floating-point residual coding, this approach improves considerably upon prior predictive geometry coding schemes.
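The hash-based value prediction described in the abstract can be sketched as follows. This is an illustrative toy, not the paper's actual scheme: the context hash, table size, and byte-aligned symbol layout (one "hit" byte per correctly predicted index, an escape byte plus a 4-byte literal otherwise) are all assumptions chosen for clarity. The resulting byte stream is then handed to a conventional entropy coder, here zlib standing in for gzip.

```python
import zlib

def compress_connectivity(indices, table_bits=16):
    """Predict each vertex index from a hash of the two preceding indices.

    A correct prediction emits a single hit byte (0x01); a miss emits an
    escape byte (0x00) followed by the index as a 4-byte little-endian
    literal. The byte-aligned stream is then compressed with zlib.
    """
    table = [0] * (1 << table_bits)
    mask = (1 << table_bits) - 1
    out = bytearray()
    prev1 = prev2 = 0
    for v in indices:
        h = (prev1 * 2654435761 ^ prev2 * 40503) & mask  # toy context hash
        if table[h] == v:
            out.append(0x01)                  # prediction hit: one byte
        else:
            out.append(0x00)                  # miss: escape + literal
            out += v.to_bytes(4, "little")
        table[h] = v                          # update the predictor table
        prev1, prev2 = v, prev1
    return zlib.compress(bytes(out))

def decompress_connectivity(blob, count, table_bits=16):
    """Mirror of compress_connectivity: replays the same predictor."""
    data = zlib.decompress(blob)
    table = [0] * (1 << table_bits)
    mask = (1 << table_bits) - 1
    out = []
    prev1 = prev2 = 0
    pos = 0
    for _ in range(count):
        h = (prev1 * 2654435761 ^ prev2 * 40503) & mask
        tag = data[pos]
        pos += 1
        if tag == 0x01:
            v = table[h]                      # reuse the predicted value
        else:
            v = int.from_bytes(data[pos:pos + 4], "little")
            pos += 4
        table[h] = v
        out.append(v)
        prev1, prev2 = v, prev1
    return out
```

Because regular hexahedral connectivity is highly repetitive, the predictor hits often, so most indices cost one byte before the final zlib pass; the decompressor reconstructs the stream exactly by replaying the same predictor state.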

  • ABSTRACT: We describe a method for streaming compression of hexahedral meshes. Given an interleaved stream of vertices and hexahedra our coder incrementally compresses the mesh in the presented order. Our coder is extremely memory efficient when the input stream also documents when vertices are referenced for the last time (i.e. when it contains topological finalization tags). Our coder then continuously releases and reuses data structures that no longer contribute to compressing the remainder of the stream. This means in practice that our coder has only a small fraction of the whole mesh in memory at any time. We can therefore compress very large meshes—even meshes that do not fit in memory. Compared to traditional, non-streaming approaches that load the entire mesh and globally reorder it during compression, our algorithm trades a less compact compressed representation for significant gains in speed, memory, and I/O efficiency. For example, on the 456k hexahedra “blade” mesh, our coder is twice as fast and uses 88 times less memory (only 3.1 MB) with the compressed file increasing about 3% in size. We also present the first scheme for predictive compression of properties associated with hexahedral cells.
    The Visual Computer 26:1113-1122, 2010.
  • ABSTRACT: This project investigated layout and compression techniques for large, unstructured simulation data to reduce bandwidth requirements and latency in simulation I/O and subsequent post-processing, e.g. data analysis and visualization. The main goal was to eliminate the data-transfer bottleneck - for example, from disk to memory and from central processing unit to graphics processing unit - through coherent data access and by trading underutilized compute power for effective bandwidth and storage. This was accomplished by (1) designing algorithms that both enforce and exploit compactness and locality in unstructured data, and (2) adapting offline computations to a novel stream processing framework that supports pipelining and low-latency sequential access to compressed data. This report summarizes the techniques developed and results achieved, and includes references to publications that elaborate on the technical details of these methods.
  • ABSTRACT: In this article, we present an efficient connectivity compression algorithm for triangular meshes. It is a face-based, single-resolution, lossless connectivity compression method that improves on Edgebreaker. For mesh traversal, we use an adaptive traversal strategy that minimizes the number of Split operations, which are costly for the compression ratio. For entropy coding, a variable code mode is designed for each operator in the operator sequence produced by traversal, yielding a binary string that is then encoded with adaptive arithmetic coding. The final compression ratio is obtained once all operators in the sequence are encoded. Compared to the best previous face-based encoding methods, our method significantly improves the compression ratio.
    2010.
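The memory behavior enabled by the finalization tags in the first related abstract can be sketched as follows. All record names and tuple layouts here are illustrative assumptions, not the paper's API: the point is only that a streaming coder can evict a vertex the moment a finalization tag says it will never be referenced again, so its working set tracks the number of live vertices rather than the whole mesh.

```python
def stream_peak_footprint(stream):
    """Return the peak number of simultaneously live vertices.

    `stream` is an interleaved sequence of records (illustrative format):
      ('v', id, position) - a new vertex enters the working set
      ('h', ids)          - a hexahedron referencing live vertex ids
      ('f', id)           - finalization: vertex `id` is never used again
    """
    live = {}
    peak = 0
    for rec in stream:
        tag = rec[0]
        if tag == 'v':
            _, vid, pos = rec
            live[vid] = pos        # vertex must stay resident for now
        elif tag == 'h':
            pass                   # cell coded against live vertices only
        elif tag == 'f':
            del live[rec[1]]       # finalized: release storage immediately
        peak = max(peak, len(live))
    return peak
```

On a well-laid-out stream the peak footprint is a small fraction of the vertex count, which is why the coder above can handle meshes that do not fit in memory.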
