Jens-Rainer Ohm’s research while affiliated with RWTH Aachen University and other places


Publications (198)


Block diagram of the proposed encoding process. The genotype matrix $\mathcal{G}$ is processed by a series of transformations: splitting, binarization, and optionally sorting. At the end of the process, entropy coding is applied
Example for $\mathcal{G} = \begin{array}{ll} 0\mid 2 & 2/0 \\ 1/0 & 0\mid 0 \end{array}$. Bit plane binarization yields two bit planes representing the most significant bit and the least significant bit of $\mathcal{A}$. Row binarization generates only three binary rows because the first row of $\mathcal{A}$ requires two bits and the second row of $\mathcal{A}$ requires only one bit (a code sketch after these captions reproduces this example)
Average compression ratio (averaged over all chromosomes) achieved by each GVC configuration. The colors indicate the employed binarization: orange—bit plane binarization (concatenated), blue—bit plane binarization, green—row binarization. The patterns indicate the employed sorting: no pattern—no sorting, vertical lines—column sorting, horizontal lines—row sorting, both horizontal and vertical lines—sorting in both directions
Compression ratio achieved by each GVC configuration with different block sizes. For all configurations with the sorting enabled, increasing the block size increases the compression ratio
Comparison of GVC to the state-of-the-art methods GTC [4], GTRAC [3], and GTShark [5] with respect to compressed size
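As a reading aid for the binarization example in the caption above, the following minimal Python sketch reproduces the two schemes on the allele matrix obtained from the example genotype matrix. It is an illustrative reconstruction, not the GVC implementation; the allele matrix `A` and the bit-ordering convention are assumptions made only for this example.

```python
# Illustrative sketch of the two binarization schemes from the figure caption.
# `A` is assumed to be the allele matrix obtained after splitting the example
# genotype matrix G = [[0|2, 2/0], [1/0, 0|0]] into allele values.
import numpy as np

A = np.array([[0, 2, 2, 0],
              [1, 0, 0, 0]], dtype=np.uint8)

def bit_plane_binarization(A):
    """Decompose A into as many bit planes as the largest value needs."""
    n_bits = max(1, int(A.max()).bit_length())
    # First plane is the most significant bit, last plane the least significant.
    return [(A >> b) & 1 for b in reversed(range(n_bits))]

def row_binarization(A):
    """Binarize each row with only as many bit planes as that row needs."""
    rows = []
    for row in A:
        n_bits = max(1, int(row.max()).bit_length())
        rows.extend((row >> b) & 1 for b in reversed(range(n_bits)))
    return rows

planes = bit_plane_binarization(A)   # 2 planes: values up to 2 need 2 bits
rows = row_binarization(A)           # 3 binary rows: 2 for row 1, 1 for row 2
print(len(planes), len(rows))        # -> 2 3
```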


GVC: efficient random access compression for gene sequence variations
  • Article
  • Full-text available

March 2023 · 29 Reads · 2 Citations

BMC Bioinformatics

Yeremia Gunawan Adhisantoso · Jan Voges · [...] · Jörn Ostermann

Background: In recent years, advances in high-throughput sequencing technologies have enabled the use of genomic information in many fields, such as precision medicine, oncology, and food quality control. The amount of genomic data being generated is growing rapidly and is expected to soon surpass the amount of video data. The majority of sequencing experiments, such as genome-wide association studies, have the goal of identifying variations in the gene sequence to better understand phenotypic variations. We present a novel approach for compressing gene sequence variations with random access capability: the Genomic Variant Codec (GVC). We use techniques such as binarization, joint row- and column-wise sorting of blocks of variations, as well as the image compression standard JBIG for efficient entropy coding. Results: Our results show that GVC provides the best trade-off between compression and random access compared to the state of the art: it reduces the genotype information size from 758 GiB down to 890 MiB on the publicly available 1000 Genomes Project (phase 3) data, which is 21% less than the state of the art in random-access capable methods. Conclusions: By providing the best results in terms of combined random access and compression, GVC facilitates the efficient storage of large collections of gene sequence variations. In particular, the random access capability of GVC enables seamless remote data access and application integration. The software is open source and available at https://github.com/sXperfect/gvc/.
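The abstract mentions joint row- and column-wise sorting of binary blocks before JBIG entropy coding. As a rough, hedged illustration of why sorting helps a context-based bi-level coder, the sketch below greedily reorders rows and columns of a binary block so that similar vectors become adjacent; the greedy Hamming-distance ordering is an assumption made for illustration and is not the sorting criterion used by GVC.

```python
# Hedged sketch: greedy reordering of a binary block so that similar rows and
# columns end up next to each other, which tends to help a context-modeling
# bi-level coder such as JBIG. The Hamming-distance-based greedy chaining is
# an illustrative assumption, not GVC's actual sorting algorithm.
import numpy as np

def greedy_order(block, axis):
    """Return a permutation that chains nearest rows (axis=0) or columns (axis=1)."""
    vectors = block if axis == 0 else block.T
    remaining = set(range(1, len(vectors)))
    order = [0]
    while remaining:
        last = vectors[order[-1]]
        # Pick the remaining vector closest (in Hamming distance) to the last one.
        nxt = min(remaining, key=lambda i: int(np.count_nonzero(vectors[i] != last)))
        order.append(nxt)
        remaining.remove(nxt)
    return np.array(order)

def sort_block(block):
    """Apply row sorting followed by column sorting; both permutations must be stored."""
    row_perm = greedy_order(block, axis=0)
    block = block[row_perm]
    col_perm = greedy_order(block, axis=1)
    return block[:, col_perm], row_perm, col_perm

rng = np.random.default_rng(0)
block = rng.integers(0, 2, size=(8, 16), dtype=np.uint8)
sorted_block, row_perm, col_perm = sort_block(block)
```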


GVC: Efficient Random Access Compression for Gene Sequence Variations

November 2022 · 35 Reads

Background: In recent years, advances in high-throughput sequencing technologies have enabled the use of genomic information in many fields, such as precision medicine, oncology, and food quality control. The amount of genomic data being generated is growing rapidly and is expected to soon surpass the amount of video data. The majority of sequencing experiments, such as genome-wide association studies, have the goal of identifying gene sequence variations to better understand phenotypic variations. We present a novel approach for the compression of gene sequence variations with random access capability: the Genomic Variant Codec (GVC). We use techniques such as binarization, joint row- and column-wise sorting of blocks of variations, as well as the image compression standard JBIG for efficient entropy coding. Results: Our results show that GVC provides the best trade-off between compression and random access compared to the state of the art: it reduces the genotype information size from 758 GiB down to 890 MiB on the publicly available 1000 Genomes Project (phase 3) data, which is 21% less than the state of the art in random-access capable methods. Conclusions: By providing the best results in terms of combined random access and compression, GVC facilitates the efficient storage of large collections of gene sequence variations. In particular, the random access capability of GVC enables seamless remote data access and application integration. The software is open source and available at https://github.com/sXperfect/gvc/.




Overview of the Versatile Video Coding (VVC) Standard and its Applications

October 2021 · 1,835 Reads · 1,430 Citations

IEEE Transactions on Circuits and Systems for Video Technology

Versatile Video Coding (VVC) was finalized in July 2020 as the most recent international video coding standard. It was developed by the Joint Video Experts Team (JVET) of the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG) to serve an ever-growing need for improved video compression as well as to support a wider variety of today’s media content and emerging applications. This paper provides an overview of the novel technical features for new applications and the core compression technologies that achieve significant bit rate reductions, in the neighborhood of 50% relative to its predecessor, the High Efficiency Video Coding (HEVC) standard, and 75% relative to the currently most-used format, the Advanced Video Coding (AVC) standard, for equal video quality. It is explained how these new features in VVC provide greater versatility for applications. Highlighted applications include video with resolutions beyond standard- and high-definition, video with high dynamic range and wide color gamut, adaptive streaming with resolution changes, computer-generated and screen-captured video, ultralow-delay streaming, 360° immersive video, and multilayer coding, e.g., for scalability. Furthermore, early implementations are presented to show that the new VVC standard is implementable and ready for real-world deployment.


Guest Editorial Introduction to the Special Section on the VVC Standard

October 2021 · 49 Reads · 5 Citations

IEEE Transactions on Circuits and Systems for Video Technology

In this Special Section of the IEEE Transactions on Circuits and Systems for Video Technology, it is our honor to introduce the Versatile Video Coding (VVC) standard, the latest of the historic partnership collaborations between the International Telecommunication Union Telecommunication Standardization Sector (ITU-T), the International Organization for Standardization (ISO), and the International Electrotechnical Commission (IEC) in the field of video coding standardization.


Optimized feature space learning for generating efficient binary codes for image retrieval

October 2021 · 4 Reads · 8 Citations

Signal Processing Image Communication

In this paper, a novel approach for learning a low-dimensional optimized feature space for image retrieval with minimum intra-class variance and maximum inter-class variance is proposed. The classical approach of Linear Discriminant Analysis (LDA) is generally used for generating an optimized low-dimensional feature space for single-labeled images. Since image retrieval involves images with multiple objects, LDA cannot be directly used for dimensionality reduction and feature space optimization. This problem is addressed by utilizing the relationship between LDA and Canonical Correlation Analysis (CCA) eigenvalues to generate an optimized feature space for both single-labeled and multi-labeled images. A CCA-based network architecture which correlates the low-dimensional feature vectors with the image label vectors is proposed. We design a novel loss function such that the correlation coefficients of CCA are maximized. Our experiments show that the neural network can be trained to reach the theoretical lower bound of the loss, which corresponds to the negative sum of the correlation coefficients. Once the optimized feature space is generated, feature vectors are binarized with the Iterative Quantization (ITQ) approach. Finally, we propose an ensemble network to generate binary codes of the desired bit length for retrieval. The measured mean average precision shows that the proposed approach outperforms other single-labeled and multi-labeled image retrieval benchmarks at the same bit lengths in a considerable number of cases.
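The loss described above drives the network toward a theoretical lower bound given by the negative sum of the CCA correlation coefficients. The following sketch computes that quantity for a batch of feature vectors and label vectors; it is a plain NumPy illustration of the objective, not the paper's network or training code, and the regularization constant `eps` is an added assumption for numerical stability.

```python
# Hedged sketch: negative sum of canonical correlations between two views,
# i.e. the quantity whose lower bound -min(p, q) is mentioned in the abstract.
import numpy as np

def _inv_sqrt(S):
    """Inverse matrix square root of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

def cca_loss(X, Y, eps=1e-4):
    """Negative sum of canonical correlations between features X (n x p) and labels Y (n x q)."""
    n = X.shape[0]
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Sxx = (Xc.T @ Xc) / (n - 1) + eps * np.eye(X.shape[1])  # regularized covariance
    Syy = (Yc.T @ Yc) / (n - 1) + eps * np.eye(Y.shape[1])
    Sxy = (Xc.T @ Yc) / (n - 1)
    # Singular values of the whitened cross-covariance are the canonical correlations.
    T = _inv_sqrt(Sxx) @ Sxy @ _inv_sqrt(Syy)
    corr = np.linalg.svd(T, compute_uv=False)
    # Each correlation is at most 1, so the loss is bounded below by -min(p, q).
    return -float(np.sum(corr))

# Example: 64 samples, 32-dimensional features, 10-dimensional label vectors.
rng = np.random.default_rng(0)
loss = cca_loss(rng.standard_normal((64, 32)), rng.standard_normal((64, 10)))
```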



Deep Hashing with Hash Center Update for Efficient Image Retrieval

June 2021 · 71 Reads

In this paper, we propose an approach for learning binary hash codes for image retrieval. Canonical Correlation Analysis (CCA) is used to design two loss functions for training a neural network such that the correlation between the two views of CCA is maximized. The first loss maximizes the correlation between the hash centers and the learned hash codes. The second loss maximizes the correlation between the class labels and the classification scores. A novel weighted mean and thresholding based hash center update scheme is proposed for adapting the hash centers in each epoch. The training loss reaches the theoretical lower bound of the proposed loss functions, showing that the correlation coefficients are maximized during training and substantiating the formation of an efficient feature space for image retrieval. The measured mean average precision shows that the proposed approach outperforms other state-of-the-art approaches on both single-labeled and multi-labeled image datasets.
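The hash center update described above combines a weighted mean with thresholding. The sketch below shows one plausible reading of such an update: a convex blend of the previous binary center with the mean code of each class, followed by sign thresholding. The blend weight `alpha` and the exact rule are assumptions made for illustration, not the paper's formula.

```python
# Hedged sketch of a "weighted mean + thresholding" hash center update.
# The blend of the old center with the per-class mean code, and the weight
# alpha, are illustrative assumptions rather than the published scheme.
import numpy as np

def update_hash_centers(centers, codes, labels, alpha=0.7):
    """centers: (C, K) in {-1, +1}; codes: (N, K) real-valued network outputs;
    labels: (N,) integer class ids; alpha weights the previous center."""
    new_centers = centers.astype(np.float64).copy()
    for c in range(centers.shape[0]):
        members = codes[labels == c]
        if len(members) == 0:
            continue  # keep the old center if the class is absent this epoch
        new_centers[c] = alpha * centers[c] + (1.0 - alpha) * members.mean(axis=0)
    # Thresholding: map the blended centers back to binary {-1, +1} codes.
    return np.where(new_centers >= 0.0, 1.0, -1.0)
```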



Citations (57)


... Because Genozip-compressed files must be decompressed before they can be further used, we prefer using CRAM 3.1 files. Moreover, the random access to sequences is slow for Genozip, and specialized algorithms have been developed for efficient random access to sequences [22]. However, when a hundred thousand CRAM 3.1 files need to be stored, the extra 14% file size reduction of Genozip-compressed CRAM may be worth the compression effort because approximately 200 terabytes of storage could be saved. ...

Reference:

A benchmark study of compression software for human short-read sequence data
GVC: efficient random access compression for gene sequence variations

BMC Bioinformatics

... This approach effectively extracts detailed image information and improves retrieval accuracy. In addition to these approaches, Jose et al. [16] proposed a hash center update strategy that enhances image retrieval performance by optimizing hash center distances. Similarly, Li et al. [17] leveraged pre-trained deep neural networks to improve hashing efficiency, which effectively boosts the retrieval process. ...

Deep Hashing with Hash Center Update for Efficient Image Retrieval
  • Citing Conference Paper
  • May 2022

... ISO/IEC 23090-3, and MPEG-I Part 3, is a video compression standard finalized on the sixth of July 2020, by the Joint Video Experts Team (JVET) and the MPEG working group of ISO/IEC JTC 1/SC 29. The verification test of VVC’s compression capabilities has demonstrated its major advantage over previous standards for a variety of video content types, including applications requiring ultra-high resolution and high dynamic range, screen content sharing, and 360° immersive VR/AR/XR [3]. The VVC has several improvements over its predecessors [4], although it also uses the traditional hybrid prediction/transform approach. A number of new tools and features have been incorporated into VVC, allowing it to achieve up to 50% [5,6] bitrate reduction over high efficiency video coding (HEVC). ...

Update on the emerging versatile video coding (VVC) standard and its applications
  • Citing Conference Paper
  • March 2022

... To enhance diversity in the dataset and, consequently, reduce the computational time required to train the ANNs, redundant structures were filtered out. In this study, pairwise Euclidean distances between each individual in the dataset were calculated, and similar individuals were subsequently removed [24,25]. Flowchart of a GA instance used in the generation of the dataset. ...

Optimized feature space learning for generating efficient binary codes for image retrieval
  • Citing Article
  • October 2021

Signal Processing Image Communication

... In these scenarios, the efficiency and quality of video compression are critical to ensuring seamless user experiences, especially under bandwidth constraints. HEVC [16] and VVC [1], renowned for their high compression efficiency, have been widely adopted as conventional and advanced approaches for video compression. In recent years, learning-based codecs [2, 8-10, 17] have demonstrated superior performance compared to conventional approaches. ...

Overview of the Versatile Video Coding (VVC) Standard and its Applications

IEEE Transactions on Circuits and Systems for Video Technology

... Thus, in order to address the growing bandwidth demand, a few years ago (July 2020) the Joint Video Experts Team (JVET) introduced the first version of the Versatile Video Coding (VVC) [6], [7] to efficiently encode a wider range of video formats, including screen content [8], UHD 4k/8k video, and omnidirectional video [9]. As a result, the VVC achieves a bitrate reduction of 30% to 60% [10], [11] in comparison with its predecessor standard, High Efficiency Video Coding (HEVC) [12]. ...

Guest Editorial Introduction to the Special Section on the VVC Standard
  • Citing Article
  • October 2021

IEEE Transactions on Circuits and Systems for Video Technology

... Moving object detection under a dynamic background [16,17] mainly includes optical flow method [18,19] and global motion compensation method [20,21]. The global motion compensation method first estimates the global motion and then analyzes and calculates the motion parameters for motion compensation. ...

3D Geometry-Based Global Motion Compensation For VVC
  • Citing Conference Paper
  • September 2021

... Since 2017, the Moving Picture Experts Group (MPEG) of the International Standards Organization (ISO) has been intensively investigating and promoting potential technologies for high-efficiency point cloud compression (PCC), leading to the proposal of two PCC specifications: geometry-based PCC (G-PCC) and video-based PCC (V-PCC) [10-13]. In V-PCC, a 3D point cloud sampled at a specific time instance is first projected onto a set of perpendicular 2D planes, and then a sequence of 2D planes consecutively spanning over a period of time is compressed using standard-compliant video codecs, such as the prevalent HEVC [14] or VVC [15]. On the other hand, G-PCC directly compresses the 3D point cloud by separating its geometry and attribute components; the well-known octree model is often used to represent geometry coordinates, and the region-adaptive hierarchical transform (RAHT) [16] and hierarchical prediction as lifting transform (Predlift) are selective options to compress the attribute information lossily. ...

Developments in International Video Coding Standardization After AVC, With an Overview of Versatile Video Coding (VVC)

Proceedings of the IEEE

... VVC employs a hybrid coding technology framework, which has evolved from a single fixed division to a more diverse and adaptable structure for encoding and decoding high-resolution images. The coding tree unit (CTU) partition process utilizes a quad-tree with a nested multi-type tree (QTMT) to achieve better compression efficiency [7,8]. The VVC standard introduces a new nested multi-type tree (MTT) block partitioning structure that supports binary tree (BT) and ternary tree (TT) splits in both the vertical and horizontal directions. ...

General Video Coding Technology in Responses to the Joint Call for Proposals on Video Compression With Capability Beyond HEVC

IEEE Transactions on Circuits and Systems for Video Technology

... To compare the proposed DDRO-NR’s performance with earlier baseline schemes, we have used evaluations of dRDO [6], JM (Joint Model) [35], delay-power-rate-distortion (dPRD) [7], delay-constrained video streaming rate control algorithm (DA) [36], and cross-layer optimized rate control algorithm (CA) [37]. In Figure 11, the different frame indexes with encoding bit rates for the annual dance sequence and foreman sequence are analyzed. ...

The Joint Exploration Model (JEM) for Video Compression with Capability beyond HEVC
  • Citing Article
  • October 2019

IEEE Transactions on Circuits and Systems for Video Technology