Adaptive Sequential Prediction of Multidimensional Signals With Applications to Lossless Image Coding

Dept. of Electr. & Comput. Eng., McMaster Univ., Hamilton, ON, Canada
IEEE Transactions on Image Processing (Impact Factor: 3.63). 02/2011; 20(1):36-42. DOI: 10.1109/TIP.2010.2061860
Source: IEEE Xplore


We investigate the problem of designing adaptive sequential linear predictors for the class of piecewise autoregressive multidimensional signals, and adopt a minimum description length (MDL) approach to determine the order of the predictor and the support on which the predictor operates. The design objective is to strike a balance between the bias and the variance of the prediction errors via the MDL criterion. The predictor design problem is particularly interesting and challenging for multidimensional signals (e.g., images and videos) because of the increased degrees of freedom in choosing the predictor support. Our main result is a new technique for sequentializing a multidimensional signal into a sequence of nested contexts of increasing order, which facilitates the MDL search for the order and the support shape of the predictor; the sequentialization is made adaptive on a sample-by-sample basis. The proposed MDL-based adaptive predictor is applied to lossless image coding, and its performance is empirically shown to be the best among all results published to date.
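To make the MDL criterion concrete, here is a minimal sketch of two-part MDL order selection for a 1-D linear predictor. It illustrates the general principle only, not the paper's multidimensional algorithm; the function names and windowing choices are assumptions.

```python
import numpy as np

def mdl_cost(residuals: np.ndarray, k: int) -> float:
    """Two-part MDL code length: residual coding cost plus a
    (k/2) * log2(n) penalty for transmitting the k coefficients."""
    n = len(residuals)
    rss = float(residuals @ residuals) + 1e-12   # guard against log(0)
    return 0.5 * n * np.log2(rss / n) + 0.5 * k * np.log2(n)

def select_order(signal: np.ndarray, max_order: int) -> int:
    """Pick the linear-predictor order that minimizes the MDL criterion."""
    best_k, best_cost = 1, np.inf
    for k in range(1, max_order + 1):
        # Causal regression matrix: row t holds the k samples preceding t.
        X = np.array([signal[t - k:t][::-1] for t in range(k, len(signal))])
        y = signal[k:]
        coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
        cost = mdl_cost(y - X @ coeffs, k)
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k
```

The (k/2) log2(n) term is what enforces the bias-variance balance described above: a higher order is selected only when the drop in residual cost pays for the extra parameters.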

    • ". To achieve the minimum coding length, the piecewise AR model is the best choice, e.g. model selection-based image compression [50]. Precisely, the total description length of I with the kth-order AR model can be expressed by "
    ABSTRACT: In this paper we propose a new no-reference (NR) image quality assessment (IQA) metric using the recently revealed free-energy-based brain theory and classical human visual system (HVS)-inspired features. The features used can be divided into three groups. The first group comprises features inspired by the free-energy principle and the structural degradation model. The free-energy theory further reveals that the HVS always tries to infer the meaningful part of the visual stimuli. Based on this finding, we first predict the image that the HVS perceives from a distorted image, and the second group of features is then composed of HVS-inspired features (such as structural information and gradient magnitude) computed from the distorted and predicted images. The third group of features quantifies possible losses of "naturalness" in the distorted image by fitting a generalized Gaussian distribution to mean-subtracted contrast-normalized coefficients. After feature extraction, our algorithm uses a support vector machine based regression module to derive the overall quality score. Experiments on the LIVE, TID2008, CSIQ, IVC, and Toyama databases confirm the effectiveness of the proposed NR IQA metric compared with the state of the art.
    IEEE Transactions on Multimedia 01/2015; 17(1):50-63. DOI:10.1109/TMM.2014.2373812 · 2.30 Impact Factor
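The total description length referred to in the excerpt above is conventionally a two-part code; one standard form (the notation here is an assumption for illustration, not copied from the cited paper) is:

```latex
% Two-part MDL description length of image I under a kth-order AR model:
L(I, k) \;\approx\;
  \underbrace{\tfrac{n}{2}\,\log_2 \hat{\sigma}_k^{2}}_{\text{cost of the residuals}}
  \;+\;
  \underbrace{\tfrac{k}{2}\,\log_2 n}_{\text{cost of the $k$ AR parameters}}
```

where n is the number of coded samples and \(\hat{\sigma}_k^2\) is the residual variance of the fitted model; the MDL-optimal order is the k that minimizes L(I, k).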
    • "In this paper, we use a 2D linear autoregressive (AR) model to simulate the generative model G for its high recognition and description capability [35]. The AR model is defined as x i = χ k (x i )α + e i (5) where x i is a pixel at location i, α = (a 1 , a 2 , ...a k ) T defines the model parameters, and χ k (x n ) is the k member neighborhood vector of x i . "
    ABSTRACT: Image denoising has been researched intensively for a long time, as it is a commonplace yet important problem. In the past, evaluating the performance of different image denoising methods always resorted to PSNR, until the emergence of SSIM, a landmark image quality assessment (IQA) metric. Since then, a large number of IQA methods built on various models have been introduced. Unfortunately, those IQA metrics come with deficiencies, such as requiring the original image, that keep them far from ideal. To address this problem, in this paper we propose an effective blind image quality assessment for noise (dubbed BIQAN) that approximates human visual perception of noise. BIQAN is built on three components: the free-energy-based brain principle, image gradient extraction, and texture masking. We compare the proposed BIQAN with a large number of existing IQA metrics on three of the largest and most popular image quality databases (LIVE, TID2013, CSIQ). Experimental results show that BIQAN achieves very encouraging performance, outperforming the competitors stated above.
    2014 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB); 06/2014
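The quoted 2-D AR model $x_i = \chi_k(x_i)\,\alpha + e_i$ can be exercised with ordinary least squares over a causal training window. The sketch below is a generic illustration; the neighborhood shape, window size, and fallback rule are assumptions, not the cited paper's choices.

```python
import numpy as np

# Causal neighborhood offsets (row, col) relative to the current pixel:
# west, north-west, north, north-east -- a common low-order support.
OFFSETS = [(0, -1), (-1, -1), (-1, 0), (-1, 1)]

def neighborhood(img: np.ndarray, r: int, c: int) -> np.ndarray:
    """chi_k(x_i): the k causal neighbors of pixel (r, c)."""
    return np.array([img[r + dr, c + dc] for dr, dc in OFFSETS], dtype=float)

def ar_predict(img: np.ndarray, r: int, c: int, win: int = 8) -> float:
    """Fit alpha by least squares over a causal training window, then
    predict x_i = chi_k(x_i) @ alpha.  Assumes 1 <= r and 1 <= c <= cols-2."""
    X, y = [], []
    for tr in range(max(1, r - win), r + 1):
        for tc in range(max(1, c - win), min(img.shape[1] - 1, c + win + 1)):
            if tr == r and tc >= c:       # use only previously scanned pixels
                break
            X.append(neighborhood(img, tr, tc))
            y.append(float(img[tr, tc]))
    if len(X) < len(OFFSETS):             # too little training data:
        return float(img[r, c - 1])       # fall back to the west neighbor
    alpha, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
    return float(neighborhood(img, r, c) @ alpha)

img = np.arange(100, dtype=float).reshape(10, 10)
print(ar_predict(img, 5, 5))  # a linear ramp is predicted almost exactly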
    • "In other research, fuzzy logic-based methods have been used as low complexity alternatives to what would otherwise be complex probabilistic methods [16], [17]. Recent research has deviated from probabilistic methods, with significant contributions being made in the design of adaptive sequential linear predictors for lossless image coding [1]. Other research has focuses on high complexity segmentation based [18], [19] and neural network based [20], [21] methods. "
    ABSTRACT: Adaptive Predictor Combination (APC) is a framework for combining multiple predictors for lossless image compression and is often at the core of state-of-the-art algorithms. In this paper, a Bayesian parameter estimation scheme is proposed for APC. Extensive experiments using natural, medical, and remote sensing images of 8 to 16 bits/pixel confirm that the predictive performance is consistently better than that of APC for any combination of fixed predictors, with only a marginal increase in computational complexity. The predictive performance improves with every additional fixed predictor, a property not found in the other predictor combination schemes studied in this paper. Analysis and simulation show that the performance of the proposed algorithm is not sensitive to the choice of hyper-parameters of the prior distributions. Furthermore, the proposed prediction scheme provides a theoretical justification for the 'error correction' stage that is often included as part of a prediction process.
    IEEE Transactions on Image Processing 10/2013; 22(12). DOI:10.1109/TIP.2013.2284067 · 3.63 Impact Factor
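As a rough illustration of the predictor-combination idea that APC builds on, fixed predictors can be blended with weights driven by their recent causal performance. This is a generic error-weighted mixture, not the Bayesian estimation scheme the abstract proposes:

```python
import numpy as np

def combine_predictions(preds: np.ndarray, past_sq_errors: np.ndarray,
                        eps: float = 1e-6) -> float:
    """Blend the outputs of several fixed predictors for the current pixel.

    preds          -- shape (m,): each fixed predictor's estimate
    past_sq_errors -- shape (m,): each predictor's accumulated squared
                      error over a causal window
    Weights are inversely proportional to past error, so locally accurate
    predictors dominate the blend.
    """
    w = 1.0 / (past_sq_errors + eps)
    w /= w.sum()
    return float(w @ preds)

# Toy example with three fixed predictors (west, north, average):
preds = np.array([101.0, 98.0, 99.5])
errs = np.array([4.0, 25.0, 9.0])        # north has been unreliable here
print(combine_predictions(preds, errs))  # leans toward the west predictor
```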