Conference Paper

Video quality assessment by decoupling additive impairments and detail losses

Dept. of Electron. Eng., Chinese Univ. of Hong Kong, Hong Kong, China
DOI: 10.1109/QoMEX.2011.6065719 Conference: Third International Workshop on Quality of Multimedia Experience, QoMEX 2011, Mechelen, Belgium, September 7-9, 2011
Source: DBLP


In this paper, a review of existing methods for extending image quality metrics to video quality metrics is given. Three processing steps are found to be commonly involved: temporal channel decomposition, temporal masking, and error pooling. These steps are used to extend our previously proposed image quality metric, which evaluates additive impairments and detail losses separately, into a video quality metric. The resulting algorithm is tested on the LIVE subjective video database and shows good performance in matching subjective ratings.
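The error-pooling step can be illustrated with a minimal sketch: per-frame errors from a frame-level metric are combined into a sequence-level score by Minkowski pooling. PSNR is used here purely as a stand-in frame metric, not the paper's decoupled metric, and the function names and pooling exponent are illustrative assumptions.

```python
import numpy as np

def frame_mse(ref, dist):
    """Mean squared error between one reference frame and one test frame."""
    return np.mean((ref.astype(np.float64) - dist.astype(np.float64)) ** 2)

def video_psnr(ref_frames, dist_frames, peak=255.0, p=1.0):
    """Extend frame-level PSNR to video via Minkowski error pooling.

    p = 1 reduces to plain averaging of per-frame MSE; a larger p
    weights the worst frames more heavily in the sequence score.
    """
    errors = np.array([frame_mse(r, d)
                       for r, d in zip(ref_frames, dist_frames)])
    pooled = np.mean(errors ** p) ** (1.0 / p)
    if pooled == 0:
        return float("inf")  # identical sequences
    return 10.0 * np.log10(peak ** 2 / pooled)
```

With two identical all-zero reference frames and all-one distorted frames, each frame's MSE is 1, so the pooled score equals the single-frame PSNR for that error level.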

  • Source
    • "Each database provides a DMOS value per sequence, obtained through subjective evaluations conducted by the respective authors. Here, the proposed quality index is compared to the following popular video quality indices: i) Peak signal to noise ratio (PSNR) as a benchmark, ii) video quality model (VQM) (Pinson and Wolf, 2004), iii) weighted structural similarity index (wSSIM) (Wang and Li, 2007), iv) motion-based video integrity evaluation index (MOVIE) (Seshadrinathan and Bovik, 2010) and v) video quality assessment by decoupling detail losses and additive impairments (VQAD) (Li et al., 2011). Performances are computed as explained in Section 2.3. "
    ABSTRACT: While subjective assessment is recognized as the most reliable means of quantifying video quality, objective assessment has proven to be a desirable alternative. Existing video quality indices achieve reasonable prediction of human quality scores: they predict quality degradation due to spatial distortions well, but are less accurate for degradation due to temporal distortions. In this paper, we propose a perception-based quality index whose novelty is the direct use of motion information to extract temporal distortions and to model human visual attention. Temporal distortions are computed from optical flow and common vector metrics. Results of psychovisual experiments are used to model human visual attention. Results show that the proposed index is competitive with state-of-the-art quality indices. Additionally, the proposed index is much faster than other indices that also include a temporal distortion measure.
    Proceedings of the 9th International Conference on Computer Vision Theory and Applications, Lisbon, Portugal; 01/2014
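The idea of computing temporal distortions from optical flow with common vector metrics can be sketched as follows, assuming the flow fields for the reference and test videos have already been estimated. The function name, the array layout, and the choice of mean endpoint error are illustrative assumptions, not the cited index's actual formulation.

```python
import numpy as np

def temporal_distortion(flow_ref, flow_dist):
    """Mean endpoint error between reference and distorted optical-flow
    fields, each of shape (H, W, 2) holding per-pixel (dx, dy) vectors."""
    diff = flow_ref.astype(np.float64) - flow_dist.astype(np.float64)
    epe = np.sqrt(np.sum(diff ** 2, axis=-1))  # per-pixel endpoint error
    return float(np.mean(epe))
```

For example, if the distorted video's flow is offset from the reference flow by a constant (3, 4) vector at every pixel, the endpoint error is 5 everywhere and the distortion score is 5.0.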
  • Source
    • "In the following steps, we simulate the HVS processing, more specifically, how the human beings perceive the spatial distortions, using several major HVS characteristics, such as contrast sensitivity function (CSF), visual masking, information pooling, and so on. Compared with our preliminary work [40] in which motion information of the video was not considered and with most previous studies, the contribution of the proposed VQM comes from the extensive use of motion information to simulate the HVS processing, that is, motion vectors are derived in the wavelet domain, and employed in the eye-movement model, the spatiovelocity CSF, the motion-based temporal masking, and so on. We also simulate cognitive human behavior, which originally was proposed to be used in continuous quality evaluations [37], [38], and have proved its effectiveness in sequence-level quality prediction. "
    ABSTRACT: Video quality assessment plays a fundamental role in video processing and communication applications. In this paper, we study the use of motion information and temporal human visual system (HVS) characteristics for objective video quality assessment. In our previous work, two types of spatial distortions, i.e., detail losses and additive impairments, are decoupled and evaluated separately for spatial quality assessment. The detail losses refer to the loss of useful visual information that will affect the content visibility, and the additive impairments represent the redundant visual information in the test image, such as the blocking or ringing artifacts caused by data compression and so on. In this paper, a novel full-reference video quality metric is developed, which conceptually comprises the following processing steps: 1) decoupling detail losses and additive impairments within each frame for spatial distortion measure; 2) analyzing the video motion and using the HVS characteristics to simulate the human perception of the spatial distortions; and 3) taking into account cognitive human behaviors to integrate frame-level quality scores into sequence-level quality score. Distinguished from most studies in the literature, the proposed method comprehensively investigates the use of motion information in the simulation of HVS processing, e.g., to model the eye movement, to predict the spatio-temporal HVS contrast sensitivity, to implement the temporal masking effect, and so on. Furthermore, we also prove the effectiveness of decoupling detail losses and additive impairments for video quality assessment. The proposed method is tested on two subjective quality video databases, LIVE and IVP, and demonstrates the state-of-the-art performance in matching subjective ratings.
    IEEE Transactions on Circuits and Systems for Video Technology 07/2012; 22(7):1100-1112. DOI:10.1109/TCSVT.2012.2190473 · 2.62 Impact Factor
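One common way to model the cognitive behavior mentioned in step 3 above is asymmetric temporal pooling: viewers react quickly to quality drops and recover their opinion slowly afterwards. The recursion below is a generic sketch under that assumption (smoothing constants and function name are illustrative), not the paper's exact formulation.

```python
import numpy as np

def sequence_score(frame_scores, alpha_drop=0.8, alpha_rise=0.2):
    """Integrate frame-level quality scores into a sequence-level score.

    An exponential tracker follows the frame scores, adapting fast on
    quality drops (alpha_drop) and slowly on recoveries (alpha_rise),
    so brief drops depress the final score more than brief improvements
    raise it. The pooled score is the mean of the tracked values.
    """
    s = float(frame_scores[0])
    tracked = [s]
    for q in frame_scores[1:]:
        a = alpha_drop if q < s else alpha_rise
        s = a * q + (1.0 - a) * s
        tracked.append(s)
    return float(np.mean(tracked))
```

A sequence with a single quality dip, e.g. frame scores [1.0, 0.0, 1.0], pools below its plain mean, reflecting the lasting impact of the drop.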
  • Source
    ABSTRACT: Research on visual quality assessment has been active during the last decade. In this work, we provide an in-depth review of recent developments in the field. As compared with existing survey papers, our current work has several unique contributions. First, besides image quality databases and metrics, we put equal emphasis on video quality databases and metrics as this is a less investigated area. Second, we discuss the application of visual quality evaluation to perceptual coding as an example for applications. Third, we benchmark the performance of state-of-the-art visual quality metrics with experiments. Finally, future trends in visual quality assessment are discussed.
    APSIPA Transactions on Signal and Information Processing 01/2013; 2. DOI:10.1017/ATSIP.2013.5

