Article

Adaptive Sequential Prediction of Multidimensional Signals With Applications to Lossless Image Coding

Dept. of Electr. & Comput. Eng., McMaster Univ., Hamilton, ON, Canada
IEEE Transactions on Image Processing (Impact Factor: 3.63). 02/2011; 20(1):36-42. DOI: 10.1109/TIP.2010.2061860
Source: IEEE Xplore

ABSTRACT

We investigate the problem of designing adaptive sequential linear predictors for the class of piecewise autoregressive multidimensional signals, and adopt a minimum description length (MDL) approach to determine the order of the predictor and the support on which the predictor operates. The design objective is to strike a balance between the bias and the variance of the prediction errors in the MDL criterion. The predictor design problem is particularly interesting and challenging for multidimensional signals (e.g., images and video) because of the increased degrees of freedom in choosing the predictor support. Our main result is a new technique for sequentializing a multidimensional signal into a sequence of nested contexts of increasing order, which facilitates the MDL search for the order and the support shape of the predictor; the sequentialization is made adaptive on a sample-by-sample basis. The proposed MDL-based adaptive predictor is applied to lossless image coding, and its compression performance is empirically shown to be the best among all results published to date.
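To make the bias-variance trade-off concrete, the following sketch (Python with NumPy) picks a predictor order by a two-part MDL-style cost computed over nested causal contexts. The offsets, the training window, the fallback, and the exact cost expression are illustrative assumptions, not the paper's adaptive context sequentialization or its precise criterion.

```python
import numpy as np

# Causal neighbor offsets (row, col); a context of order k uses the first k offsets.
# The ordering here is fixed and illustrative; the paper adapts it per sample.
CAUSAL_OFFSETS = [(0, -1), (-1, 0), (-1, -1), (-1, 1),
                  (0, -2), (-2, 0), (-1, -2), (-2, -1)]

def mdl_predict(img, r, c, win=6, max_order=8):
    """Predict interior pixel (r, c) with an order chosen by a two-part MDL-style cost.

    For each order k, least-squares coefficients are trained on causal pixels in a
    local window; the cost (N/2)*log2(RSS/N) + (k/2)*log2(N) trades residual fit
    (bias) against the cost of describing the k coefficients (variance).
    """
    H, W = img.shape
    rows, targets = [], []
    for i in range(max(0, r - win), r + 1):
        for j in range(max(0, c - win), min(W, c + win + 1)):
            if i == r and j >= c:                      # keep the training set strictly causal
                break
            ctx = [img[i + di, j + dj] for di, dj in CAUSAL_OFFSETS
                   if 0 <= i + di < H and 0 <= j + dj < W]
            if len(ctx) == len(CAUSAL_OFFSETS):
                rows.append(ctx)
                targets.append(img[i, j])
    X, y = np.asarray(rows, float), np.asarray(targets, float)
    if y.size <= max_order:                            # too few training samples: fall back
        return float(img[r, c - 1])

    cur_ctx = np.array([img[r + di, c + dj] for di, dj in CAUSAL_OFFSETS], float)
    best_cost, best_pred = np.inf, float(img[r, c - 1])
    n = y.size
    for k in range(1, max_order + 1):
        a, *_ = np.linalg.lstsq(X[:, :k], y, rcond=None)
        rss = float(np.sum((y - X[:, :k] @ a) ** 2)) + 1e-12
        cost = 0.5 * n * np.log2(rss / n) + 0.5 * k * np.log2(n)   # fit cost + model cost
        if cost < best_cost:
            best_cost, best_pred = cost, float(cur_ctx[:k] @ a)
    return best_pred
```

Enlarging k always lowers the residual term (less bias) but inflates the parameter-cost term (more variance), which is the balance the abstract refers to.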

    • "For one-part coding, normalized maximum likelihood (NML) distribution can be estimated to find the optimal class of models[31]. MDL is prevalent in a variety of signal processing applications, e.g., wireless sensor array processing[32], autoregressive models[33], sparse coding[34], lossless image coding[35], and etc. The interlaced correlations of heterogeneous data can be further exploited for high-performance compression. "
    ABSTRACT: This paper proposes generalized context modeling (GCM) for heterogeneous data compression. The proposed model extends the suffix of predicted subsequences in classic context modeling to arbitrary combinations of symbols in multiple directions. To address the selection of contexts, GCM constructs a model graph whose nodes are a combinatorial structuring of finite-order combinations of predicted symbols. The estimated probability used for prediction is obtained by weighting over a class of context models that contain all occurrences of nodes in the model graph. Moreover, separable context modeling in each direction is adopted for efficient prediction. To find the optimal class of context models for prediction, the normalized maximum likelihood (NML) function is developed to estimate their structures and parameters, especially for heterogeneous data of large size. The model class is further refined by context pruning to exclude redundant models. Such model selection is optimal in the sense of the minimum description length (MDL) principle, and its divergence is proven to be consistent with the actual distribution. It is also shown that the upper bounds on model redundancy for GCM do not depend on the size of the data. GCM is validated on a wide range of applications, e.g., the Calgary corpus, executable files, and genomic data. Experimental results show that it outperforms most reported state-of-the-art context modeling algorithms.
    Full-text · Article · Nov 2015 · IEEE Transactions on Signal Processing
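For intuition about "weighting over a class of context models," the following minimal Python sketch mixes Krichevsky-Trofimov estimates from plain suffix contexts of increasing order and reweights the models by their past predictive performance. The class name, the binary alphabet, and the weighting rule are assumptions for illustration; the GCM model graph, multi-directional contexts, and NML-based pruning of the cited paper are not reproduced here.

```python
import numpy as np
from collections import defaultdict

class MixedContextModel:
    """Sketch: weight predictions from suffix-context models of orders 0..max_order."""

    def __init__(self, max_order=4):
        self.max_order = max_order
        # Per-order symbol counts, KT-initialized at 1/2 for each of the two symbols.
        self.counts = [defaultdict(lambda: np.ones(2) * 0.5) for _ in range(max_order + 1)]
        self.weights = np.ones(max_order + 1) / (max_order + 1)

    def predict(self, history):
        """Return per-model probabilities of symbol 1 and their weighted mixture."""
        probs = np.empty(self.max_order + 1)
        for m in range(self.max_order + 1):
            ctx = tuple(history[-m:]) if m else ()
            c = self.counts[m][ctx]
            probs[m] = c[1] / c.sum()          # KT estimate of P(next = 1 | context)
        return probs, float(np.dot(self.weights, probs))

    def update(self, history, symbol):
        """Update counts and reweight models by how well each predicted the symbol."""
        probs, _ = self.predict(history)
        for m in range(self.max_order + 1):
            ctx = tuple(history[-m:]) if m else ()
            self.counts[m][ctx][symbol] += 1
        like = probs if symbol == 1 else 1.0 - probs
        self.weights *= like
        self.weights /= self.weights.sum()
```

Feeding a binary sequence symbol by symbol through predict and update yields a mixture probability that could drive an arithmetic coder.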
    • ". To achieve the minimum coding length, the piecewise AR model is the best choice, e.g. model selection-based image compression [50]. Precisely, the total description length of I with the kth-order AR model can be expressed by "
    ABSTRACT: In this paper we propose a new no-reference (NR) image quality assessment (IQA) metric using the recently revealed free-energy-based brain theory and classical human visual system (HVS)-inspired features. The features used can be divided into three groups. The first involves features inspired by the free energy principle and the structural degradation model. The free energy theory also reveals that the HVS always tries to infer the meaningful part of the visual stimuli. Based on this finding, we first predict the image that the HVS perceives from a distorted image using the free energy theory; the second group of features then consists of HVS-inspired features (such as structural information and gradient magnitude) computed from the distorted and predicted images. The third group of features quantifies the possible loss of "naturalness" in the distorted image by fitting a generalized Gaussian distribution to mean-subtracted contrast-normalized coefficients. After feature extraction, our algorithm uses a support vector machine based regression module to derive the overall quality score. Experiments on the LIVE, TID2008, CSIQ, IVC, and Toyama databases confirm the effectiveness of the proposed NR IQA metric compared with the state of the art.
    Full-text · Article · Jan 2015 · IEEE Transactions on Multimedia
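The quoted description length is cut off in the snippet above. Purely as an assumed, generic illustration, one common two-part MDL form for coding an image I with a kth-order AR model is

$$
L_k(I) \;=\; \frac{N}{2}\log_2\frac{\lVert I - \hat{I}_k\rVert^2}{N} \;+\; \frac{k}{2}\log_2 N,
\qquad \hat{k} \;=\; \arg\min_k L_k(I),
$$

where N is the number of coded pixels and \hat{I}_k is the kth-order AR prediction of I. The first term is the code length of the prediction residuals and the second is the cost of transmitting the k AR parameters, so minimizing over k balances fit against model complexity; the cited paper's exact expression may differ.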
    • "In this paper, we use a 2D linear autoregressive (AR) model to simulate the generative model G for its high recognition and description capability [35]. The AR model is defined as x i = χ k (x i )α + e i (5) where x i is a pixel at location i, α = (a 1 , a 2 , ...a k ) T defines the model parameters, and χ k (x n ) is the k member neighborhood vector of x i . "
    ABSTRACT: Image denoising has been studied extensively for a long time because it is a commonplace yet important subject. In the past, evaluating the performance of different image denoising methods always resorted to PSNR, until the emergence of SSIM, a landmark image quality assessment (IQA) metric. Since then, a vast number of IQA methods based on various kinds of models have been introduced. Unfortunately, those IQA metrics come with various deficiencies, such as requiring the original images, which makes them far from ideal. To address this problem, in this paper we propose an effective blind image quality assessment metric for noise (dubbed BIQAN) that approximates human visual perception of noise. BIQAN is built from three important components, namely the free-energy-based brain principle, image gradient extraction, and texture masking. We compare the proposed BIQAN with a large number of existing IQA metrics on three of the largest and most popular image quality databases (LIVE, TID2013, CSIQ). Experimental results show that BIQAN achieves very encouraging performance, outperforming the competitors stated above.
    Full-text · Conference Paper · Jun 2014
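To make the quoted 2D AR model concrete, the following Python sketch estimates the coefficient vector α by least squares over the whole image and returns the AR-predicted image. The 4-member causal neighborhood and the single global fit are assumptions for illustration; the cited work's neighborhood size and any locally adaptive refitting are not reproduced.

```python
import numpy as np

def fit_ar_predicted_image(img, offsets=((0, -1), (-1, -1), (-1, 0), (-1, 1))):
    """Fit x_i = chi_k(x_i) alpha + e_i by least squares and return the predicted image.

    `offsets` is an assumed 4-member causal neighborhood (row, col); the image is
    assumed larger than 2x2 so interior pixels exist. Border pixels keep their
    original values in the returned prediction.
    """
    H, W = img.shape
    rows, targets, locs = [], [], []
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            rows.append([img[i + di, j + dj] for di, dj in offsets])
            targets.append(img[i, j])
            locs.append((i, j))
    X = np.asarray(rows, dtype=float)
    y = np.asarray(targets, dtype=float)
    alpha, *_ = np.linalg.lstsq(X, y, rcond=None)    # least-squares AR coefficients

    pred = img.astype(float).copy()                  # borders keep original values
    pred_vals = X @ alpha
    for (i, j), v in zip(locs, pred_vals):
        pred[i, j] = v
    return pred, alpha
```

The residual img - pred then plays the role of e_i in the quoted model.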