Conference Paper

Video quality assessment by decoupling additive impairments and detail losses.

Dept. of Electron. Eng., Chinese Univ. of Hong Kong, Hong Kong, China
DOI: 10.1109/QoMEX.2011.6065719
Conference: Third International Workshop on Quality of Multimedia Experience (QoMEX 2011), Mechelen, Belgium, September 7-9, 2011
Source: DBLP

ABSTRACT In this paper, we review existing methods for extending image quality metrics to video quality metrics. Three processing steps are typically involved: temporal channel decomposition, temporal masking, and error pooling. These steps are used to extend our previously proposed image quality metric, which separately evaluates additive impairments and detail losses, into a video quality metric. The resulting algorithm is tested on the LIVE subjective video database and shows good performance in matching subjective ratings.
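
The three steps named in the abstract are generic building blocks for turning a frame-level metric into a video metric. Below is a minimal illustrative sketch of two of them, temporal masking and error pooling; it uses PSNR as a stand-in for the paper's image metric, and all function names and weighting choices are assumptions for illustration, not the authors' actual algorithm.

```python
import numpy as np

def image_quality(ref_frame, dist_frame):
    # Stand-in for the frame-level metric; the paper's metric, which separately
    # evaluates additive impairments and detail losses, is NOT reproduced here.
    err = ref_frame.astype(np.float64) - dist_frame.astype(np.float64)
    mse = np.mean(err ** 2)
    return 10.0 * np.log10(255.0 ** 2 / (mse + 1e-12))  # PSNR as a simple proxy

def video_quality(ref_frames, dist_frames):
    # Illustrative extension of a frame-level metric to video:
    # per-frame scores are weighted by a crude temporal-masking factor
    # (strong inter-frame change lowers a frame's weight) and then pooled.
    scores, weights = [], []
    prev = None
    for ref, dist in zip(ref_frames, dist_frames):
        scores.append(image_quality(ref, dist))
        motion = 0.0 if prev is None else np.mean(np.abs(ref.astype(np.float64) - prev))
        weights.append(1.0 / (1.0 + motion))  # assumed temporal-masking weight
        prev = ref.astype(np.float64)
    weights = np.asarray(weights) / np.sum(weights)
    return float(np.sum(weights * np.asarray(scores)))  # weighted-average error pooling
```

A full pipeline would typically also apply temporal channel decomposition, splitting the frame sequence into sustained and transient temporal channels and pooling the per-channel scores; that step is omitted here for brevity.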

  • ABSTRACT: Research on visual quality assessment has been active during the last decade. In this work, we provide an in-depth review of recent developments in the field. Compared with existing survey papers, our work makes several unique contributions. First, besides image quality databases and metrics, we place equal emphasis on video quality databases and metrics, as this is a less investigated area. Second, we discuss the application of visual quality evaluation to perceptual coding as an example application. Third, we benchmark the performance of state-of-the-art visual quality metrics with experiments. Finally, future trends in visual quality assessment are discussed.
    APSIPA Transactions on Signal and Information Processing, vol. 2, 2013.
  • ABSTRACT: While subjective assessment is recognized as the most reliable means of quantifying video quality, objective assessment has proven to be a desirable alternative. Existing video quality indices predict human quality scores reasonably well; they capture quality degradation due to spatial distortions well, but degradation due to temporal distortions less so. In this paper, we propose a perception-based quality index whose novelty is the direct use of motion information both to extract temporal distortions and to model human visual attention. Temporal distortions are computed from optical flow and common vector metrics, and results of psychovisual experiments are used to model human visual attention. Results show that the proposed index is competitive with current state-of-the-art quality indices. Additionally, the proposed index is much faster than other indices that also include a temporal distortion measure.
    Proceedings of the 9th International Conference on Computer Vision Theory and Applications, Lisbon, Portugal, 2014.
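
As a concrete illustration of the optical-flow idea described in the entry above, the sketch below compares the dense motion fields of the reference and distorted sequences and averages the magnitude of their vector difference over time. It uses OpenCV's Farneback flow; the function names, parameter values, and choice of vector metric are assumptions for illustration, not the cited authors' index.

```python
import cv2
import numpy as np

def flow_field(prev_gray, next_gray):
    # Dense optical flow (Farneback) between two consecutive grayscale frames.
    return cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

def temporal_distortion(ref_frames, dist_frames):
    # One simple "vector metric" over motion: the mean magnitude of the
    # difference between reference and distorted flow fields, averaged over time.
    diffs = []
    for r0, r1, d0, d1 in zip(ref_frames, ref_frames[1:],
                              dist_frames, dist_frames[1:]):
        f_ref = flow_field(cv2.cvtColor(r0, cv2.COLOR_BGR2GRAY),
                           cv2.cvtColor(r1, cv2.COLOR_BGR2GRAY))
        f_dst = flow_field(cv2.cvtColor(d0, cv2.COLOR_BGR2GRAY),
                           cv2.cvtColor(d1, cv2.COLOR_BGR2GRAY))
        diffs.append(np.mean(np.linalg.norm(f_ref - f_dst, axis=2)))
    return float(np.mean(diffs))
```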
