Blocking artifact detection and reduction in compressed data

Inf. Process. Lab., Aristotle Univ. of Thessaloniki
IEEE Transactions on Circuits and Systems for Video Technology (Impact Factor: 2.62). 11/2002; 12(10):877-890. DOI: 10.1109/TCSVT.2002.804880
Source: DBLP


A novel frequency-domain technique for image blocking artifact detection and reduction is presented. The algorithm first detects the regions of the image which present visible blocking artifacts. This detection is performed in the frequency domain and uses the estimated relative quantization error calculated when the discrete cosine transform (DCT) coefficients are modeled by a Laplacian probability function. Then, for each block affected by blocking artifacts, its DC and AC coefficients are recalculated for artifact reduction. To achieve this, a closed-form representation of the optimal correction of the DCT coefficients is produced by minimizing a novel enhanced form of the mean squared difference of slope for every frequency separately. This correction of each DCT coefficient depends on the eight neighboring coefficients in the subband-like representation of the DCT transform and is constrained by the quantization upper and lower bound. Experimental results illustrating the performance of the proposed method are presented and evaluated.
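
Both stages the abstract describes admit compact illustrations. Below is a minimal Python sketch, under simplifying assumptions, of two of the building blocks involved: a maximum-likelihood fit of the Laplacian model to the coefficients of one DCT frequency, and one common formulation of the mean squared difference of slopes (MSDS) across a vertical block boundary. The paper's relative-quantization-error detection criterion and its enhanced, per-frequency MSDS are more elaborate than this sketch.

```python
import numpy as np

def fit_laplacian(coeffs):
    """ML estimate of the Laplacian rate parameter for one DCT frequency:
    p(x) = (lam / 2) * exp(-lam * |x|)  =>  lam_hat = 1 / mean(|x|)."""
    return 1.0 / np.mean(np.abs(coeffs))

def msds_vertical_boundary(left, right):
    """Mean squared difference of slopes across the boundary between two
    horizontally adjacent 8x8 blocks (left's last column meets right's first).

    The slope across the boundary is compared with the average of the two
    slopes just inside the blocks; a large value indicates a discontinuity
    that is likely a blocking artifact."""
    across = right[:, 0] - left[:, -1]
    inside = 0.5 * ((left[:, -1] - left[:, -2]) + (right[:, 1] - right[:, 0]))
    return float(np.mean((across - inside) ** 2))

# Toy usage: Laplacian fit on synthetic coefficients, MSDS on a pure step edge.
rng = np.random.default_rng(1)
print(fit_laplacian(rng.laplace(0.0, 2.0, 5000)))   # ~0.5 for scale 2.0
left = np.full((8, 8), 100.0)
right = np.full((8, 8), 110.0)
print(msds_vertical_boundary(left, right))          # 100.0: step, no inner slope
```

In the paper itself, the minimization runs in the transform domain, one frequency at a time, with each corrected coefficient constrained to its quantization interval so the correction never contradicts the transmitted data.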

    • "However, these methods usually investigate the JPEG images coded at low bit rates and aim to reduce the blocking artifacts between the block boundaries and ringing effect around the content edges due to the lossy JPEG compression. Some of them, e.g., [17]–[22], may be extended to identify JPEG images, however, the performances are very poor based on our experiments as shown in Section III-A. We also note that there are several reported methods, e.g., [6], [7], and [26]–[30], for other forensics/steganlysis issues which tried to identify the double JPEG compressed images and/or further estimate the primary quantization table. "
    ABSTRACT: JPEG is one of the most extensively used image formats. Understanding the inherent characteristics of JPEG may play a useful role in digital image forensics. In this paper, we introduce JPEG error analysis to the study of image forensics. The main errors of JPEG include quantization, rounding, and truncation errors. Through theoretically analyzing the effects of these errors on single and double JPEG compression, we have developed three novel schemes for image forensics including identifying whether a bitmap image has previously been JPEG compressed, estimating the quantization steps of a JPEG image, and detecting the quantization table of a JPEG image. Extensive experimental results show that our new methods significantly outperform existing techniques especially for the images of small sizes. We also show that the new method can reliably detect JPEG image blocks which are as small as 8 × 8 pixels and compressed with quality factors as high as 98. This performance is important for analyzing and locating small tampered regions within a composite image.
    IEEE Transactions on Information Forensics and Security 10/2010; 5(3):480-491. DOI: 10.1109/TIFS.2010.2051426 · 2.41 Impact Factor
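
To make the quantization-step estimation idea concrete, here is a small Python sketch under a simplifying assumption: after a single JPEG compression, the dequantized coefficients of a given frequency lie (up to rounding noise) on integer multiples of the original step, so a candidate step can be scored by how much coefficient mass its multiples capture. The scoring rule below is a hypothetical illustration; the cited paper derives its estimators from a full analysis of JPEG quantization, rounding, and truncation errors.

```python
import numpy as np

def estimate_quant_step(coeffs, max_q=64, tol=0.95):
    """Estimate the quantization step used for one DCT frequency.

    coeffs: dequantized DCT coefficients gathered over many 8x8 blocks.
    Every divisor of the true step also captures all coefficients, so among
    candidates scoring close to the best we return the largest step.
    """
    c = np.rint(coeffs[np.abs(coeffs) > 0.5]).astype(int)  # drop near-zero terms
    if c.size == 0:
        return 1  # nothing to estimate from
    scores = {q: np.mean(c % q == 0) for q in range(1, max_q + 1)}
    best = max(scores.values())
    return max(q for q, s in scores.items() if s >= tol * best)

# Toy usage: coefficients quantized with step 6, plus small rounding noise.
rng = np.random.default_rng(0)
levels = rng.integers(-8, 9, size=2000)
coeffs = levels * 6 + rng.normal(0.0, 0.2, size=2000)
print(estimate_quant_step(coeffs))  # 6
```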
    • "They have given a different solution for minimizing the MSDS. Trianta fyllidis et al. [11] have proposed another method of minimizing MSDS, which involves diagonal neighboring pixels in addition to horizontal and vertical neighboring pixels. Then, for each block affected by blocking artifacts, DC and AC coefficients are recalculated for artifact reduction. "
    ABSTRACT: The reconstructed images from JPEG compression produce noticeable image degradation near the block boundaries in the case of highly compressed images, because each block is transformed and quantized independently. The blocking effects are classified into three types of noise: staircase noise, grid noise, and corner outliers, of which the corner outlier receives the main attention in this paper. A post-processing algorithm is proposed to reduce the blocking artifacts of JPEG-decompressed images. The proposed post-processing algorithm, which consists of three stages, reduces the blocking artifacts efficiently. A comparative study between the proposed algorithm and other post-processing algorithms is made based on various performance indices.
    International Journal of Computer Applications 07/2010; 4(2). DOI:10.5120/804-1144
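
As a rough illustration of the corner-outlier idea this abstract emphasizes, the sketch below visits each interior block corner of a grayscale image, where four 8x8 blocks meet, and pulls a corner pixel toward its three corner neighbors when it deviates strongly from them. The threshold and the update rule are illustrative assumptions, not the three-stage algorithm of the cited paper.

```python
import numpy as np

def fix_corner_outliers(img, block=8, thresh=32):
    """Detect and soften corner outliers at 8x8 block corners (2-D grayscale).

    At every interior block corner, four pixels (one per block) meet. A pixel
    whose value deviates from the mean of the other three by more than
    `thresh` is treated as a corner outlier and pulled toward that mean.
    """
    out = img.astype(np.float64).copy()
    h, w = out.shape
    for y in range(block, h, block):
        for x in range(block, w, block):
            corners = [(y - 1, x - 1), (y - 1, x), (y, x - 1), (y, x)]
            vals = np.array([out[p] for p in corners])
            for i, p in enumerate(corners):
                others = np.delete(vals, i).mean()
                if abs(vals[i] - others) > thresh:
                    out[p] = 0.5 * (vals[i] + others)  # pull toward neighbors
    return out

# Toy usage: one bright outlier where four blocks meet.
img = np.full((16, 16), 128.0)
img[7, 7] = 200.0
fixed = fix_corner_outliers(img)
print(img[7, 7], fixed[7, 7])  # 200.0 -> 164.0
```

A more robust variant would compare each corner pixel against the median of the other three, so a single outlier cannot skew the reference value; the mean is kept here to keep the arithmetic transparent.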
    • "This operation is not performed in the open-loop transcoder. In order to overcome block artifacts in the transcoded sequence, techniques for deblocking in the transform domain could be applied, as in [24] and [25]. "
    ABSTRACT: In this paper, efficient solutions for requantization transcoding in H.264/AVC are presented. By requantizing residual coefficients in the bitstream, different error components can appear in the transcoded video stream. Firstly, a requantization error is present due to successive quantization in encoder and transcoder. In addition to the requantization error, the loss of information caused by coarser quantization will propagate due to dependencies in the bitstream. Because of the use of intra prediction and motion-compensated prediction in H.264/AVC, both spatial and temporal drift propagation arise in transcoded H.264/AVC video streams. The spatial drift in intra-predicted blocks results from mismatches in the surrounding prediction pixels as a consequence of requantization. In this paper, both spatial and temporal drift components are analyzed. As is shown, spatial drift has a determining impact on the visual quality of transcoded video streams in H.264/AVC. In particular, this type of drift results in serious distortion and disturbing artifacts in the transcoded video stream. In order to avoid the spatially propagating distortion, we introduce transcoding architectures based on spatial compensation techniques. By combining the individual temporal and spatial compensation approaches and applying different techniques based on the picture and/or macroblock type, overall architectures are obtained that provide a trade-off between computational complexity and rate-distortion performance. The complexity of the presented architectures is significantly reduced when compared to cascaded decoder–encoder solutions, which are typically used for H.264/AVC transcoding. The reduction in complexity is particularly large for the solution which uses spatial compensation only. When compared to traditional solutions without spatial compensation, both visual and objective quality results are highly improved.
    Signal Processing: Image Communication 04/2010; 25(4):235-254. DOI: 10.1016/j.image.2010.01.006 · 1.46 Impact Factor
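
The drift discussion becomes tangible with a toy model of open-loop requantization: dequantize the incoming residual levels with the original step and requantize with a coarser one. The uniform round-to-nearest quantizers below are a simplification; H.264/AVC's actual integer scaling, quantization matrices, and dead-zone offsets are more involved. The error computed at the end is precisely the component that propagates as spatial and temporal drift when prediction is not compensated.

```python
import numpy as np

def requantize_open_loop(levels, q1, q2):
    """Open-loop requantization of residual coefficients.

    levels: quantization levels from the incoming bitstream (step q1).
    Returns coarser levels for step q2. No compensation is performed, so the
    introduced error drifts through intra and motion-compensated prediction.
    """
    coeffs = levels * q1                          # inverse quantization
    return np.rint(coeffs / q2).astype(int)       # coarser forward quantization

# Toy usage: requantize a few residual levels from step 4 to step 10.
levels = np.array([12, -7, 3, 1, 0])
new_levels = requantize_open_loop(levels, q1=4, q2=10)
requant_error = levels * 4 - new_levels * 10      # error fed into prediction loop
print(new_levels, requant_error)                  # [ 5 -3  1  0  0] [-2  2  2  4  0]
```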