Article

Adaptive Sequential Prediction of Multidimensional Signals With Applications to Lossless Image Coding

Dept. of Electr. & Comput. Eng., McMaster Univ., Hamilton, ON, Canada
IEEE Transactions on Image Processing, 02/2011; 20(1):36-42. DOI: 10.1109/TIP.2010.2061860
Source: IEEE Xplore

ABSTRACT: We investigate the problem of designing adaptive sequential linear predictors for the class of piecewise autoregressive multidimensional signals, and adopt a minimum description length (MDL) approach to determine the order of the predictor and the support on which the predictor operates. The design objective is to strike a balance between the bias and the variance of the prediction errors in the MDL criterion. The predictor design problem is particularly interesting and challenging for multidimensional signals (e.g., images and videos) because of the increased degrees of freedom in choosing the predictor support. Our main result is a new technique of sequentializing a multidimensional signal into a sequence of nested contexts of increasing order to facilitate the MDL search for the order and the support shape of the predictor; the sequentialization is made adaptive on a sample-by-sample basis. The proposed MDL-based adaptive predictor is applied to lossless image coding, and its performance is empirically established to be the best among all results published to date.
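
The MDL selection at the heart of the abstract can be illustrated with a short sketch: for each candidate order k, a linear predictor is fit by least squares on a causal training window, and a two-part MDL score (Gaussian residual code length plus half log n per coefficient) picks the order, which in turn fixes the support shape through a distance-ordered list of causal neighbours. This is a minimal sketch under those assumptions, for interior pixels only; the offset ordering, window size, and border fallback are illustrative and are not the paper's exact sample-by-sample sequentialization.

```python
import numpy as np

# Causal neighbour offsets (dy, dx) ordered roughly by distance from the
# current pixel; truncating this list at k gives a nested context of order k.
OFFSETS = [(0, -1), (-1, 0), (-1, -1), (-1, 1), (0, -2),
           (-2, 0), (-2, -1), (-1, -2), (-2, 1), (-1, 2)]

def mdl_predict(img, i, j, win=8, max_order=10):
    """Predict img[i, j] with a linear predictor whose order (and hence
    support shape) is chosen by a two-part MDL score."""
    h, w = img.shape
    # Collect training pairs from a causal window of already-coded pixels.
    feats, targets = [], []
    for r in range(max(i - win, 2), i + 1):
        for c in range(max(j - win, 2), min(j + win, w - 2)):
            if r == i and c >= j:                 # only already-coded samples
                break
            feats.append([img[r + dr, c + dc] for dr, dc in OFFSETS])
            targets.append(img[r, c])
    X_full = np.asarray(feats, dtype=float)
    y = np.asarray(targets, dtype=float)
    n = len(y)
    if n <= max_order:                            # near the border: fall back
        return float(img[i, j - 1]), 1

    best_score, best_pred, best_k = np.inf, float(img[i, j - 1]), 1
    for k in range(1, max_order + 1):
        X = X_full[:, :k]
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(np.sum((y - X @ coef) ** 2)) + 1e-12
        # Two-part MDL: residual code length + 0.5*log(n) bits per coefficient.
        score = 0.5 * n * np.log(rss / n) + 0.5 * k * np.log(n)
        if score < best_score:
            ctx = np.array([img[i + dr, j + dc] for dr, dc in OFFSETS[:k]], float)
            best_score, best_pred, best_k = score, float(ctx @ coef), k
    return best_pred, best_k
```

Because only causal samples are used, a decoder can repeat the same fit and selection and hence needs no side information about the chosen order.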

  • ABSTRACT: Least-squares (LS)-based adaptation cannot fully exploit the high-dimensional correlations in image signals, because a linear prediction model operating in the input space of supports is ill suited to capturing higher-order statistics. This paper proposes Gaussian process regression for prediction in lossless image coding. By incorporating kernel functions, the prediction support is projected into a high-dimensional feature space to fit the anisotropic and nonlinear image statistics. Instead of conditioning directly on the support, Gaussian process regression is leveraged to make the prediction in the feature space. The model parameters are optimized by measuring similarities over the training set, evaluated by a combined kernel function that is translation and rotation invariant among the supports mapped into the feature space. Experimental results show that the proposed predictor outperforms most of the benchmark predictors reported. (A Gaussian-process prediction sketch in this spirit appears after this list.)
    2014 Data Compression Conference (DCC); 03/2014
  • ABSTRACT: Adaptive predictors have long been used for lossless predictive coding of images. Most existing lossless predictive coding techniques focus mainly on the suitability of the prediction model for the training set, under the underlying assumption of local consistency, which may not hold on object boundaries and can cause large prediction errors. In this paper, we propose a novel approach based on the assumption that local consistency and patch redundancy exist simultaneously in natural images. We derive a family of linear models and design a new algorithm to automatically select a suitable model for prediction. From the Bayesian perspective, the model with the maximum posterior probability is considered the best. Two types of model evidence are included in our algorithm. One is the traditional Training Evidence, which represents a model's suitability for the current pixel under the assumption of local consistency. The other is Target Evidence, which is proposed to express the preference among models from the perspective of patch redundancy. It is shown that the fusion of Training Evidence and Target Evidence jointly exploits the benefits of local consistency and patch redundancy. As a result, the proposed predictor is better suited to natural images with textures and object boundaries. Comprehensive experiments demonstrate that the proposed predictor achieves higher efficiency than state-of-the-art lossless predictors. (An evidence-fusion sketch in this spirit appears after this list.)
    IEEE Transactions on Image Processing, 10/2014; 23(12). DOI: 10.1109/TIP.2014.2365698
  • ABSTRACT: Image denoising has been researched extensively, as it is a commonplace yet important subject. In the past, evaluating the performance of different image denoising methods typically resorted to PSNR, until the emergence of SSIM, a landmark image quality assessment (IQA) metric. Since then, a large number of IQA methods based on various kinds of models have been introduced. Unfortunately, these metrics come with various deficiencies, such as requiring the original images, which leaves them far from ideal. To address this problem, in this paper we propose an effective blind image quality assessment for noise (dubbed BIQAN) to approximate human visual perception of noise. BIQAN is built from three main components: the free-energy-based brain principle, image gradient extraction, and texture masking. We compare the proposed BIQAN against a large number of existing IQA metrics on the three largest and most popular image quality databases (LIVE, TID2013, CSIQ). Experimental results show that BIQAN achieves very encouraging performance, outperforming the competitors stated above. (A loosely related gradient/masking sketch appears after this list.)
    2014 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB); 06/2014
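
The kernel-based prediction described in the DCC 2014 abstract above can be illustrated with a small Gaussian-process-style sketch: the causal support of the current pixel is compared, through a kernel, with supports collected from a causal training window, and the prediction is the GP posterior mean over those training targets. This is a generic sketch assuming a plain RBF kernel; the paper's combined translation- and rotation-invariant kernel and its hyperparameter optimization are not reproduced here.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=10.0):
    """Plain RBF kernel between the rows of A and the rows of B."""
    d2 = (np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :]
          - 2.0 * A @ B.T)
    return np.exp(-d2 / (2.0 * length_scale**2))

def gp_predict(train_supports, train_targets, query_support, noise=1.0):
    """GP regression prediction of a pixel from its causal support.
    train_supports: (n, d) supports gathered from a causal window
    train_targets:  (n,)   the pixels those supports predicted
    query_support:  (d,)   support of the pixel being coded."""
    K = rbf_kernel(train_supports, train_supports)
    k_star = rbf_kernel(query_support[None, :], train_supports)[0]
    # Posterior mean: k_*^T (K + noise*I)^{-1} y
    alpha = np.linalg.solve(K + noise * np.eye(len(K)), train_targets)
    return float(k_star @ alpha)
```

In a codec, train_supports and train_targets would come from the same kind of causal window used for LS adaptation, so the decoder can rebuild the identical predictor from already-decoded samples.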
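
The second abstract's model selection can likewise be sketched: each linear model in a candidate family is scored by evidence computed on a causal training set (local consistency) and on pairs taken from patches similar to the current target patch (patch redundancy), and the two evidences are fused into a posterior used to pick the model. The Gaussian evidence form, the uniform prior, and the simple multiplicative fusion below are illustrative assumptions, not the paper's exact formulation of Training Evidence and Target Evidence.

```python
import numpy as np

def gaussian_evidence(model, supports, targets, sigma=4.0):
    """Evidence of a linear model (coefficient vector) on (support, target)
    pairs under an i.i.d. Gaussian residual assumption."""
    resid = targets - supports @ model
    return float(np.exp(-0.5 * np.sum(resid**2) / sigma**2))

def select_model(models, train_X, train_y, target_X, target_y):
    """Pick the model maximising the fused (Training x Target) evidence.
    train_*:  causal training pairs near the current pixel (local consistency)
    target_*: pairs from patches similar to the current patch (redundancy)."""
    best_model, best_post = None, -np.inf
    for m in models:                        # uniform prior over the family
        training_ev = gaussian_evidence(m, train_X, train_y)
        target_ev = gaussian_evidence(m, target_X, target_y)
        posterior = training_ev * target_ev
        if posterior > best_post:
            best_model, best_post = m, posterior
    return best_model
```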
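
For the BIQAN abstract, only two of the three named components (gradient extraction and texture masking) are concrete enough to sketch; the free-energy-based term is omitted. The snippet below is therefore a generic no-reference noisiness score, mean gradient magnitude down-weighted by a local-variance masking map, and should not be read as the BIQAN algorithm itself.

```python
import numpy as np
from scipy.ndimage import convolve

def noise_score(img, eps=1e-6):
    """Generic blind noisiness score: gradient magnitude attenuated where
    local variance (texture) would perceptually mask the noise."""
    img = img.astype(float)
    gy, gx = np.gradient(img)
    grad_mag = np.hypot(gx, gy)
    # Local variance over a 3x3 window as a crude texture-masking map.
    k = np.ones((3, 3)) / 9.0
    mean = convolve(img, k, mode="reflect")
    var = convolve(img**2, k, mode="reflect") - mean**2
    masking = 1.0 / (1.0 + var)             # strong texture -> small weight
    return float(np.sum(grad_mag * masking) / (np.sum(masking) + eps))
```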