This paper proposes a novel video quality assessment method aimed at next-generation (5G) mobile networks following the small cell deployment architecture. The proposed method is based on a novel use of the Structural Similarity (SSIM) index as a reduced-reference metric and is suitable for implementation as a Virtual Network Function (VNF) within an IT infrastructure located close to the small cell. It enables in-service monitoring of the delivered video quality, a very useful tool for mobile network operators to monitor their customers' satisfaction. An advantage of the proposed method is that the complex and power-consuming process of video quality assessment is performed at the edge of the network rather than at the User Equipment (UE) itself, thus significantly reducing the UE's power consumption. An LTE experimental testbed was used for the implementation and performance evaluation of the proposed method. The experimental results show that the proposed method is able to monitor the video quality when the LTE network is degraded.
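For reference, the reduced-reference monitoring described above builds on the standard SSIM index of Wang et al. (a well-known definition, not a detail specific to this paper). For two aligned image windows x and y of the original and received frames,

\mathrm{SSIM}(x, y) = \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)},

where \mu_x, \mu_y are local means, \sigma_x^2, \sigma_y^2 local variances, \sigma_{xy} the covariance, and C_1 = (K_1 L)^2, C_2 = (K_2 L)^2 stabilize the division (L is the pixel dynamic range; K_1 = 0.01 and K_2 = 0.03 are the usual defaults).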
Ultra-High-Definition (UHD) video applications such as streaming are envisioned as a main driver for the emerging Fifth Generation (5G) mobile networks being developed worldwide. This paper addresses a major technical challenge: meeting UHD users' growing expectations of continuous high-quality video delivery in 5G hotspots where congestion commonly occurs. A novel 5G-UHD framework is proposed to achieve adaptive video streaming in this demanding scenario and to pave the way for self-optimisation-oriented 5G UHD streaming. The architectural design and the video stream optimisation mechanism are described, and the system is prototyped on a realistic virtualised 5G testbed. Empirical experiments validate the design of the framework and yield a set of insightful performance evaluation results.
Internet video streaming applications have been demanding more bandwidth and higher video quality, especially with the advent of Virtual Reality (VR) and Augmented Reality (AR) applications. While adaptive streaming protocols such as MPEG-DASH (Dynamic Adaptive Streaming over HTTP) allow video quality to be adapted flexibly, e.g., degraded when mobile network conditions deteriorate, this is not an option if the application itself requires guaranteed 4K quality at all times. On the other hand, conventional end-to-end TCP has struggled to support 4K video delivery across long-distance Internet paths containing both fixed and mobile network segments with heterogeneous characteristics. In this paper, we present a novel and practically feasible system architecture named MVP (Mobile edge Virtualization with adaptive Prefetching), which enables content providers to embed their content intelligence as a virtual network function (VNF) at the edge of the mobile network operator's (MNO) infrastructure. Based on this architecture, we present a context-aware adaptive video prefetching scheme in order to achieve QoE-assured 4K video on demand (VoD) delivery across the global Internet. Through experiments based on a real LTE-A network infrastructure, we demonstrate that our proposed scheme is able to achieve QoE-assured 4K VoD streaming, especially when the video source is located remotely in the public Internet, a case in which none of the state-of-the-art solutions can support such an objective at global Internet scale.
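As a hedged illustration of the prefetching idea (the names, thresholds and throughput-estimation step below are assumptions made for the sketch, not the MVP implementation), an edge VNF could decide how many upcoming segments to pull from the remote origin based on the estimated origin-to-edge throughput and the client's buffer level:

# Hypothetical sketch of a context-aware prefetch decision at the mobile edge.
# All names and thresholds are illustrative; they do not reproduce the MVP code.
from dataclasses import dataclass

@dataclass
class StreamContext:
    segment_duration_s: float     # duration of one DASH segment
    segment_bitrate_bps: float    # bitrate of the 4K representation
    buffer_level_s: float         # media currently buffered at the client
    origin_throughput_bps: float  # estimated throughput between edge and remote origin

def segments_to_prefetch(ctx: StreamContext,
                         target_buffer_s: float = 30.0,
                         max_prefetch: int = 8) -> int:
    """Return how many upcoming segments the edge cache should prefetch."""
    # Playback time still needed to reach the target buffer level.
    deficit_s = max(0.0, target_buffer_s - ctx.buffer_level_s)
    # If the long-haul path cannot sustain real-time 4K delivery, fetch ahead.
    sustainable = ctx.origin_throughput_bps >= ctx.segment_bitrate_bps
    if sustainable and deficit_s == 0.0:
        return 0
    needed = int(round(deficit_s / ctx.segment_duration_s)) + (0 if sustainable else 2)
    return min(max(needed, 1), max_prefetch)

if __name__ == "__main__":
    ctx = StreamContext(segment_duration_s=2.0,
                        segment_bitrate_bps=25e6,    # ~25 Mbit/s 4K stream
                        buffer_level_s=8.0,
                        origin_throughput_bps=18e6)  # congested long-haul path
    print(segments_to_prefetch(ctx))                 # prefetch up to the cap of 8 segments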
The High Efficiency Video Coding standard provides excellent coding performance but is also very complex. In particular, the intra mode decision is very time-consuming due to the large number of available prediction modes and the flexible block partitioning scheme. In this paper, a highly parallel intra prediction algorithm for heterogeneous CPU + graphics processing unit (GPU) platforms is proposed, which accelerates the encoder dramatically. It is targeted at high-quality high-definition (HD) and ultra-HD applications and utilizes prediction based on original samples (POS), where the reference samples are generated from original pixels. This makes it possible to perform intra mode prediction for all prediction blocks of a video frame concurrently. In addition, parallel-friendly cost functions are proposed which enable parallel rate-distortion optimization with no synchronization overhead. A detailed statistical analysis of both POS and the proposed GPU intra method is provided, and the coding performance of the presented prototype is evaluated on a large amount of experimental data. It is shown that the complexity of the intra mode selection on the CPU is reduced by up to 78.03%. This translates to significant encoding time reductions of up to 64.52% for a single-threaded encoder and up to 94.82% in combination with wavefront parallel processing. In high bitrate ranges, average rate increases of only 2.11%-4.26% and 0.80%-2.34% are observed for the proposed high-speed and high-quality configurations, respectively. Furthermore, GPU intra is shown to be extremely efficient in lossless coding scenarios, where up to 53.37% of the encoding time is saved with an average bitrate increase of only 0.55% across all test cases.
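A minimal numpy sketch of why prediction from original samples removes inter-block dependencies follows; it is illustrative only (DC-only prediction, fixed 8x8 blocks and a plain SAD cost are simplifications, not the paper's GPU kernels). Because every block's reference samples come from the original frame rather than from reconstructed neighbours, all block costs can be computed independently and thus in parallel:

# Illustrative sketch of prediction from original samples (POS): reference
# samples are taken from the ORIGINAL frame, so blocks have no mutual
# dependencies. Only DC prediction with a SAD cost is shown here.
import numpy as np

def dc_pos_costs(frame: np.ndarray, block: int = 8) -> np.ndarray:
    """Return one SAD cost per block for DC prediction from original neighbours."""
    h, w = frame.shape
    costs = np.zeros((h // block, w // block))
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            cur = frame[y:y + block, x:x + block].astype(np.int64)
            # Reference samples from the original frame; unavailable borders
            # fall back to the mid-range value 128 (8-bit content).
            left = frame[y:y + block, x - 1].astype(np.int64) if x > 0 else np.full(block, 128)
            top = frame[y - 1, x:x + block].astype(np.int64) if y > 0 else np.full(block, 128)
            dc = int((left.sum() + top.sum() + block) // (2 * block))
            costs[by, bx] = np.abs(cur - dc).sum()  # independent per-block cost
    return costs

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    print(dc_pos_costs(frame).shape)  # (8, 8): one cost per block, computable in parallel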
This paper provides an overview of Scalable High Efficiency Video Coding (SHVC), the scalable extension of the High Efficiency Video Coding (HEVC) standard, published in the second version of HEVC. In addition to the temporal scalability already provided by the first version of HEVC, SHVC further provides spatial, signal-to-noise ratio, bit-depth, and color-gamut scalability functionalities, as well as combinations of any of these. The SHVC architecture design enables SHVC implementations to be built from multiple repurposed single-layer HEVC codec cores, with the addition of interlayer reference picture processing modules. The general multilayer high-level syntax design common to all multilayer HEVC extensions, including SHVC, MV-HEVC, and 3D-HEVC, is described. The interlayer reference picture processing modules, including texture and motion resampling and color mapping, are also described. Performance comparisons are provided for SHVC versus simulcast HEVC and versus the Scalable Video Coding (SVC) extension of H.264/Advanced Video Coding (AVC).
Future Fifth-Generation (5G) networks are expected to be underpinned by Software-Defined Networking (SDN) and to predominantly carry multimedia traffic. Ambitious 5G Key Performance Indicators (KPIs) call for reliable, low-latency and high-density networks capable of supporting Ultra-High-Definition (UHD) video. Video streaming and multimedia traffic engineering over SDN networks is an important emerging research area in which innovative solutions will be required to meet the challenging 5G KPIs. This paper proposes a holistic SDN control-plane approach to the multimedia transmission engineering problem, employing state-of-the-art scalable video encoding based on the latest H.265 video standard combined with a contextually aware SDN controller. The controller has knowledge of both the network and the scalable characteristics of the multimedia streams, and makes decisions aimed at meeting 5G KPIs by reducing latency and maintaining the reliability and integrity of the network whilst meeting users' Quality of Experience (QoE) expectations.
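As an illustration only (the layer list and the greedy capacity check below are assumptions made for the sketch, not the controller's published algorithm), such a contextually aware controller could admit scalable H.265 layers onto a path while the cumulative layer rate still fits the residual link capacity, dropping the topmost enhancement layers first under congestion:

# Illustrative sketch: admit scalable H.265 layers of a stream onto a path
# while the residual capacity allows it; drop the highest enhancement layers first.
def admit_layers(layer_rates_bps, residual_capacity_bps):
    """Return how many layers (base layer first) fit the residual capacity."""
    admitted, used = 0, 0.0
    for rate in layer_rates_bps:            # base layer, then enhancement layers
        if used + rate > residual_capacity_bps:
            break
        used += rate
        admitted += 1
    return admitted

if __name__ == "__main__":
    layers = [2e6, 4e6, 8e6]                # base + two enhancement layers (bit/s)
    print(admit_layers(layers, 7e6))        # -> 2: the top enhancement layer is dropped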
The compression capability of several generations of video coding standards is compared by means of peak signal-to-noise ratio (PSNR) and subjective testing results. A unified approach is applied to the analysis of designs including H.262/MPEG-2 Video, H.263, MPEG-4 Visual, H.264/MPEG-4 Advanced Video Coding (AVC), and High Efficiency Video Coding (HEVC). The results of subjective tests for WVGA and HD sequences indicate that HEVC encoders can achieve subjective reproduction quality equivalent to that of encoders conforming to H.264/MPEG-4 AVC while using approximately 50% less bit rate on average. The HEVC design is shown to be especially effective for low bit rates, high-resolution video content, and low-delay communication applications. The measured subjective improvement somewhat exceeds the improvement measured by the PSNR metric.
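For completeness, the PSNR metric used in such comparisons is the standard definition (not specific to this study). For B-bit video samples x_i and their reconstructions \hat{x}_i,

\mathrm{PSNR} = 10 \log_{10} \frac{(2^B - 1)^2}{\mathrm{MSE}}, \qquad \mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N} (x_i - \hat{x}_i)^2,

so for 8-bit content the peak value is 255; sequence-level PSNR is typically reported as an average over per-frame (or per-colour-component) values.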
Many versions of Unix provide facilities for user-level packet capture, making possible the use of general-purpose workstations for network monitoring. Because network monitors run as user-level processes, packets must be copied across the kernel/user-space protection boundary. This copying can be minimized by deploying a kernel agent called a packet filter, which discards unwanted packets as early as possible. The original Unix packet filter was designed around a stack-based filter evaluator...
H.264/AVC is the newest video coding standard of the ITU-T Video Coding Experts Group and the ISO/IEC Moving Picture Experts Group. The main goals of the H.264/AVC standardization effort have been enhanced compression performance and provision of a "network-friendly" video representation addressing "conversational" (video telephony) and "non-conversational" (storage, broadcast, or streaming) applications. H.264/AVC has achieved a significant improvement in rate-distortion efficiency relative to existing standards. This article provides an overview of the technical features of H.264/AVC, describes profiles and applications for the standard, and outlines the history of the standardization process.
M.-A. Kourtis, H. Koumaras, G. Xilouris, and F. Liberal, "An NFV-based video quality assessment method over 5G small cell networks," in Proc. IEEE Int. Conf. on Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing, Oct. 2015, pp. 1657-1662.