ABSTRACT: User satisfaction is a key factor in the success of novel multimedia services. Yet, several challenges remain before service providers and network operators can control and maximize the quality (QoS, QoE) of delivered video streams. In this paper, we focus on three of them. First, objectively measuring video quality requires appropriate quality metrics and methods for assessing them in real time. Second, the recent Scalable Video Coding (SVC) format opens opportunities for adapting video to the available (network) resources, yet the appropriate configuration of video encoding and real-time streaming adaptation remain largely unaddressed research areas. Third, while bandwidth reservation mechanisms exist in access/core networks, service providers lack a means of guaranteeing QoS in increasingly complex home networks (which they do not fully control). In this paper we offer a broad view on these interrelated issues by presenting developments originating in a Flemish research project (including proof-of-concept demonstrations). From a developmental perspective, we propose an architecture combining a real-time video quality monitoring platform, on-the-fly adaptation (optimizing the video quality), and QoS reservation in a heterogeneous home network based on UPnP QoS v3. From a research perspective, we propose a new subjective test procedure that revealed user preference for temporal scalability over quality scalability. In addition, an extensive study on optimizing HD SVC encoding in IPTV scenarios with fluctuating bandwidth showed that under certain bandwidth constraints (prohibiting sufficient fidelity) spatial scalability is a better option than quality scalability.
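The on-the-fly adaptation described above amounts to selecting which SVC layers to forward given the currently available bandwidth. The following is a minimal sketch of such a layer-selection step; the layer names, bitrates, and greedy policy are illustrative assumptions, not taken from the paper:

```python
# Hypothetical sketch of SVC layer selection under a bandwidth budget.
# Layer names and bitrates below are illustrative, not from the paper.

def select_layers(layers, budget_kbps):
    """Greedily keep layers (base layer first, then enhancements in
    dependency order) while the cumulative bitrate fits the budget."""
    selected = []
    total = 0
    for name, bitrate in layers:
        if total + bitrate > budget_kbps:
            break  # layers depending on this one would be unusable anyway
        selected.append(name)
        total += bitrate
    return selected, total

# Example: base + spatial + quality enhancement layers (illustrative rates)
layers = [("base", 1500), ("spatial_720p", 2500), ("quality_hi", 2000)]
print(select_layers(layers, 4500))  # -> (['base', 'spatial_720p'], 4000)
```

Because enhancement layers are only decodable when the layers they depend on are present, dropping layers from the top down is the natural adaptation primitive for fluctuating-bandwidth scenarios.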
ABSTRACT: Ensuring and maintaining adequate Quality of Experience for end users is a key objective for video service providers, not only to increase customer satisfaction but also as a service differentiator. However, when streaming High Definition video over IP-based networks, network impairments such as packet loss can severely degrade the perceived visual quality. Several standards organizations have established a minimum set of performance objectives that should be met to obtain satisfactory quality. Video service providers should therefore continuously monitor the network and the quality of the received video streams in order to detect visual degradations. Objective video quality metrics enable automatic measurement of perceived quality. Unfortunately, the most reliable metrics require access to both the original and the received video streams, which makes them unsuitable for real-time monitoring. In this article, we present a novel no-reference bitstream-based visual quality impairment detector which enables real-time detection of visual degradations caused by network impairments. Using only information extracted from the encoded bitstream, network impairments are classified as visible or invisible to the end user. Our results show that impairment visibility can be classified with high accuracy, which enables real-time validation of the existing performance objectives.
IEEE Transactions on Broadcasting 01/2012; 58(2):187-199.
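A no-reference bitstream-based detector of the kind described above maps features parsed from the encoded stream (never the original pixels) to a visible/invisible decision. The toy rule below only illustrates that idea; the feature names, weights, and threshold are invented for the sketch and do not reflect the article's actual model:

```python
# Illustrative sketch (not the article's actual model): classifying a
# network impairment as visible or invisible using only features that
# can be parsed from the encoded bitstream. All values are hypothetical.

def impairment_visible(features):
    """Toy rule-based classifier over bitstream-level features
    (no access to the original or the decoded pixels)."""
    score = 0.0
    # Losses in reference pictures propagate via prediction, so weigh
    # them more heavily than losses in non-reference pictures.
    score += 3.0 if features["frame_type"] in ("I", "P") else 1.0
    score += features["lost_slices"] * 0.5
    score += features["avg_motion"] * 2.0  # high motion defeats concealment
    return score > 3.5

sample = {"frame_type": "B", "lost_slices": 1, "avg_motion": 0.2}
print(impairment_visible(sample))  # -> False (score 1.0 + 0.5 + 0.4 = 1.9)
```

The key property is that every input is available at a monitoring point in the network, which is what makes real-time deployment feasible where full-reference metrics are not.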
ABSTRACT: Thanks to the availability of 3D-capable televisions and Blu-ray players, 3D content has become accessible in the home. Recently, an extension of the H.264/AVC video coding standard has been defined for encoding 3D video content. This extension, called Multiview Video Coding (MVC), allows inter-view prediction, resulting in better compression efficiency. However, due to these inter-view dependencies, impairments in one view caused by e.g. packet loss can lead to degradations in other views. Research has already been conducted towards estimating packet loss visibility in H.264/AVC encoded sequences. In this paper, we investigate the possibility of using an existing decision tree-based classifier for estimating impairment visibility in 3D MVC encoded sequences. Our results show that, in the case of losing entire pictures, packet loss visibility in 3D MVC encoded sequences can be estimated with high accuracy by taking into account only a limited number of parameters.
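A decision tree over a handful of parameters, as used above, can be written out as nested conditions. The sketch below is a hypothetical hand-written tree in that spirit; the features, split points, and outcomes are illustrative assumptions, not the paper's trained classifier:

```python
# Hypothetical hand-written decision tree, illustrating how a small
# number of parameters could classify the loss of an entire picture in
# an MVC stream. Features and split logic are invented for this sketch.

def loss_visible(frame_type, in_base_view, num_dependent_views):
    """Classify a whole-picture loss as visible (True) or invisible (False)."""
    if in_base_view:
        # Base-view losses propagate to every dependent view through
        # inter-view prediction, so treat them as visible.
        return True
    if frame_type in ("I", "P"):
        # A reference picture in a dependent view: visible when other
        # views predict from it.
        return num_dependent_views > 0
    # Non-reference B pictures are usually concealed well.
    return False

print(loss_visible("B", False, 0))  # -> False
print(loss_visible("P", False, 2))  # -> True
```

The inter-view dependency structure is exactly why a classifier trained for single-view H.264/AVC needs re-evaluation for MVC: the same loss event can have very different reach depending on which view it hits.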
ABSTRACT: In order to ensure adequate quality for end users at all times, video service providers are increasingly interested in monitoring their video streams. Objective video quality metrics provide a means of measuring (audio)visual quality in an automated manner. Unfortunately, most existing metrics cannot be used for real-time monitoring due to their dependence on the original video sequence. In this paper we present a new objective video quality metric which classifies packet loss as visible or invisible based on information extracted solely from the captured encoded H.264/AVC video bit stream. Our results show that the visibility of packet loss can be predicted with high accuracy, without the need for deep packet inspection. This enables service providers to monitor quality in real time.
Quality of Multimedia Experience (QoMEX), 2010 Second International Workshop on; 07/2010
ABSTRACT: With the advent of online video services such as IPTV, video on demand (VoD) and peer-to-peer (P2P) video streaming, content providers are gaining more and more interest in measuring and monitoring video quality as perceived by end users, also known as quality of experience (QoE). Objective video quality metrics provide a means of measuring visual quality degradations, but in order to measure QoE, these metrics should incorporate all quality-affecting parameters such as encoding bitrate, network impairments and error concealment techniques. Consequently, constructing or validating a proper objective video quality metric requires extensive video evaluation tests. In this paper we present a scalable video testing platform that simplifies the management and execution of such video quality evaluation tests. Results indicate that using our testing platform drastically reduces overall experiment duration.
Computer and Information Technology, 2008. ICCIT 2008. 11th International Conference on; 01/2009
ABSTRACT: xStreamer aims to be a flexible and modular open source streamer. The selection of current open source streamers supporting both video and audio is limited, with VLC Media Player, Darwin Streaming Server and Helix DNA Server being the foremost solutions. xStreamer distinguishes itself by providing modularity that goes beyond the mere modular programming offered by the current open source solutions, manifesting itself in how the user controls and configures the streamer.
Proceedings of the 17th International Conference on Multimedia 2009, Vancouver, British Columbia, Canada, October 19-24, 2009; 01/2009
ABSTRACT: Lip synchronization is considered a key parameter during interactive communication. In video conferencing and television broadcasting, the differential delay between audio and video should remain below certain thresholds, as recommended by several standardization bodies. However, further research has shown that these thresholds can be relaxed, depending on the targeted application and use case. In this article, we investigate the influence of lip sync on the ability to perform real-time language interpretation during video conferencing. We are also interested in determining proper lip sync visibility thresholds applicable to this use case. Therefore, we conducted a subjective experiment with expert interpreters, who were required to perform a simultaneous translation, and with non-experts. Our results show that significant differences are obtained when conducting subjective experiments with expert interpreters. As interpreters are primarily focused on performing the simultaneous translation, lip sync detectability thresholds are higher than existing recommended thresholds. As such, primary focus and the targeted application and use case are important factors to consider when selecting proper lip sync acceptability thresholds.
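The threshold check implied above is a simple asymmetric comparison on the audio-video skew. The sketch below uses approximate detectability limits in the spirit of ITU-R BT.1359 (roughly 45 ms audio leading, 125 ms audio lagging); treat the exact values as illustrative assumptions, and note that the article's point is precisely that such limits may be relaxed for some use cases:

```python
# Illustrative lip sync check against asymmetric skew thresholds.
# Limits approximate commonly cited detectability values (ITU-R BT.1359);
# the article argues use-case-specific limits may be more relaxed.

def lipsync_ok(av_skew_ms, lead_limit_ms=45, lag_limit_ms=125):
    """av_skew_ms > 0: audio leads video; av_skew_ms < 0: audio lags."""
    if av_skew_ms >= 0:
        return av_skew_ms <= lead_limit_ms
    return -av_skew_ms <= lag_limit_ms

print(lipsync_ok(30))    # -> True  (audio leads by 30 ms)
print(lipsync_ok(-200))  # -> False (audio lags by 200 ms)
```

The asymmetry matters: viewers tolerate audio lagging video noticeably more than audio leading it, which is why the two limits differ.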