Table 1 - uploaded by Simone Mangiante
VR network requirements (bandwidth and latency)

Source publication
Conference Paper
Full-text available
VR/AR is rapidly progressing towards enterprise and end customers with the promise of bringing immersive experience to numerous applications. Soon it will target smartphones from the cloud, and 360° video delivery will place unprecedented ultra-low-latency and ultra-high-throughput requirements on mobile networks. Latest developments in NFV and M...

Contexts in source publication

Context 1
... 8K quality and above is necessary for VR, as a 4K VR 360° video offers only 10 pixels per degree, equivalent to 240p on a TV screen. We believe that the VR 360° video experience may evolve through the stages listed in Table 1, requiring a network throughput of 400 Mbps and above, more than 100 times higher than the throughput currently supporting HD video services. ...
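The pixels-per-degree figure quoted here can be sanity-checked with a quick calculation (a sketch; the 3840-pixel horizontal width assumed for a 4K equirectangular frame is not stated in the snippet):

```python
# Angular resolution (pixels per degree, PPD) of a monoscopic 360° video:
# the frame's horizontal pixels are spread over the full 360° of yaw.
def pixels_per_degree(horizontal_pixels: int, horizontal_fov_deg: float = 360.0) -> float:
    return horizontal_pixels / horizontal_fov_deg

ppd_4k = pixels_per_degree(3840)  # ~10.7 PPD, matching the "10 pixels per degree" claim
ppd_8k = pixels_per_degree(7680)  # ~21.3 PPD, still well below retina-level (~60 PPD)
print(round(ppd_4k, 1), round(ppd_8k, 1))
```

This is why 4K content that looks sharp on a flat screen appears coarse inside a headset: only the pixels inside the viewer's field of view are ever seen at once.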
Context 2
... a bandwidth/compute/latency tradeoff should be considered in a new media network model (Figure 2), an improvement over the traditional store-and-forward model, for delivery optimization and adaptive services for different users. Table 1 summarizes our vision regarding bandwidth and latency requirements for network VR/AR applications, related to different VR experience stages [10]. The network bandwidth requirement is estimated as 1.5 times the bit rate. ...
Context 3
... this paper we presented a FOV rendering application at the edge of a mobile network enabling improved VR 360° live video delivery to mobile devices. Considering our decomposition of future online VR latency requirements in Table 1, many development opportunities exist in each network VR processing stage. More research and validation over different types of access networks, with different user equipment and multiple device connectivity, will be conducted. ...

Similar publications

Article
Full-text available
Wireless networks are growing in capabilities due to technological innovations and the unprecedented growth of telecommunications. This is why, of late, video streaming applications exploit the power of mobile applications to render their services. However, mobile networks are known to suffer from issues such as channel fading, interference, and delay...

Citations

... In recent years, with the development of wireless networks and the popularity of smart mobile devices, mobile applications such as augmented reality (AR), virtual reality (VR), and facial recognition payment have grown exponentially [1,2]. These applications tend to be computation intensive and require low latency, but the battery capacities, computation resources, and storage capacities of mobile user equipment (UE) are very limited. ...
... The computing task on TD is divided into three parts, which are computed locally, on the edge cloud, and on a D2D RD, respectively. x_ij ∈ {0, 1}, ∀i ∈ U, ∀j ∈ K\{k_0} is the user association between TD i and RD j; x_ij = 1 indicates that TD i offloads part of its computing task to D2D RD j, and otherwise x_ij = 0. Since a TD selects at most one D2D RD for computational offloading, there is the constraint ∑_j x_ij ≤ 1. Let α_i, β_i ∈ [0, 1], i ∈ U denote the proportions of the computing task on TD i that are offloaded to the edge cloud and the D2D RD, respectively. Since the locally computed ratio should be non-negative, α_i and β_i should satisfy the constraint 0 ≤ α_i + β_i ≤ 1. ...
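The decision variables described in this snippet can be checked with a small feasibility test (a sketch under the snippet's own constraints; the function and variable names are illustrative, not from the cited paper):

```python
# Feasibility check for the partial-offloading decision described above:
#  - x[i][j] in {0, 1}: TD i offloads part of its task to D2D RD j
#  - each TD selects at most one D2D RD: sum over j of x[i][j] <= 1
#  - alpha[i], beta[i]: fractions offloaded to the edge cloud and the D2D RD
#  - the local fraction 1 - alpha[i] - beta[i] must be non-negative
def is_feasible(x, alpha, beta):
    for i, row in enumerate(x):
        if any(v not in (0, 1) for v in row):
            return False
        if sum(row) > 1:  # at most one D2D relay per TD
            return False
        if not (alpha[i] >= 0.0 and beta[i] >= 0.0 and alpha[i] + beta[i] <= 1.0):
            return False
    return True

# Two TDs, two candidate RDs: TD 0 offloads to RD 1; TD 1 uses only edge + local.
x = [[0, 1], [0, 0]]
print(is_feasible(x, alpha=[0.3, 0.5], beta=[0.4, 0.0]))  # True
```

Each TD's task thus splits into a local share, an edge share α_i, and a D2D share β_i, and any candidate solution violating either constraint is simply rejected.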
... In Figure 3, we show the number of supported (or unexecuted) TDs versus the total number of TDs in the system. The computing capacity of the MEC server is 50 Mcycles/s, and the number of D2D RDs is 1/2 of the number of TDs. Since the scenarios we study mainly concern computationally intensive tasks, none of the tasks are computed locally. ...
Article
Full-text available
Mobile edge computing (MEC) and device-to-device (D2D) communication can alleviate the resource constraints of mobile devices and reduce communication latency. In this paper, we construct a D2D-MEC framework and study the multi-user cooperative partial offloading and computing resource allocation. We maximize the number of devices under the maximum delay constraints of the application and the limited computing resources. In the considered system, each user can offload its tasks to an edge server and a nearby D2D device. We first formulate the optimization problem as an NP-hard problem and then decouple it into two subproblems. The convex optimization method is used to solve the first subproblem, and the second subproblem is defined as a Markov decision process (MDP). A deep reinforcement learning algorithm based on a deep Q network (DQN) is developed to maximize the amount of tasks that the system can compute. Extensive simulation results demonstrate the effectiveness and superiority of the proposed scheme.
... mmWave meets MEC for Transmission Efficiency in Wireless VR: Furthermore, several studies have indicated that introducing FOV into 360° video will reduce up to 80% of the bandwidth requirements compared to delivering the full 360° video, hence lowering the overall necessary transmission data rate [177], [178]. For example, the authors in [179] analyze the tradeoff between homogeneous and heterogeneous FOVs for a MEC-based mobile VR delivery model regarding computation and caching tasks. ...
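The scale of that saving can be illustrated with a crude area argument on an equirectangular frame (a sketch only; the 100°×100° viewport size is an assumption, and real tiling schemes also stream guard margins and low-quality background tiles):

```python
# Fraction of a 360°x180° equirectangular frame covered by a rectangular viewport,
# treating pixel area as proportional to angular area (a simplification).
def viewport_fraction(fov_h_deg: float, fov_v_deg: float) -> float:
    return (fov_h_deg * fov_v_deg) / (360.0 * 180.0)

# ~85% of the frame's pixels lie outside a 100°x100° viewport,
# in the same ballpark as the "up to 80%" reduction cited above.
saving = 1.0 - viewport_fraction(100, 100)
print(round(saving, 2))
```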
Preprint
Full-text available
Since Facebook was renamed Meta, attention, debate, and exploration about what the Metaverse is, how it works, and the possible ways to exploit it have intensified. It is anticipated that the Metaverse will be a continuum of rapidly emerging technologies, use cases, capabilities, and experiences that will make up the next evolution of the Internet. Several researchers have already surveyed the literature on artificial intelligence (AI) and wireless communications in realizing the Metaverse. However, due to the rapid emergence of technologies, there is a need for a comprehensive and in-depth review of the role of AI, 6G, and the nexus of both in realizing the immersive experiences of the Metaverse. Therefore, in this survey, we first introduce the background and ongoing progress in augmented reality (AR), virtual reality (VR), mixed reality (MR) and spatial computing, followed by the technical aspects of AI and 6G. Then, we survey the role of AI in the Metaverse by reviewing the state-of-the-art in deep learning, computer vision, and edge AI. Next, we investigate the promising services of B5G/6G towards the Metaverse, followed by identifying the role of AI in 6G networks and 6G networks for AI in support of Metaverse applications. Finally, we list the existing and potential applications, use cases, and projects to highlight the importance of progress in the Metaverse. Moreover, in order to provide potential research directions to researchers, we list the challenges, research gaps, and lessons learned identified from the literature review of the aforementioned technologies.
... Specifically, such a deployment is expected to require a multi-gigabit link capable of delivering video content at extremely low latency. When streaming content per-frame, one video frame must be delivered fully within 7 ms to maintain an optimal Quality of Experience (QoE) [5]. Furthermore, the network must be highly reliable, as even modest packet loss is detrimental to the QoE. ...
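The interplay between the per-frame deadline and the link rate is easy to quantify (a sketch; the 5 Gbps example rate is hypothetical and protocol overhead and queueing are ignored):

```python
# Largest video frame (in megabytes) deliverable within a per-frame deadline
# at a given sustained link rate.
def max_frame_mb(link_gbps: float, deadline_ms: float) -> float:
    bits = link_gbps * 1e9 * (deadline_ms / 1e3)
    return bits / 8 / 1e6  # bits -> bytes -> megabytes

# At a hypothetical 5 Gbps mmWave link, a 7 ms budget allows ~4.4 MB per frame.
print(round(max_frame_mb(5, 7), 2))
```

Any stall longer than the deadline, e.g. from a blocked mmWave beam, drops the frame entirely, which is why reliability matters as much as raw rate here.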
Conference Paper
Full-text available
Achieving extremely high-quality and truly immersive interactive Virtual Reality (VR) is expected to require a wireless link to the cloud, providing multi-gigabit throughput and extremely low latency. A prime candidate for fulfilling these requirements is millimeter-wave (mmWave) communications, operating in the 30 to 300 GHz bands, rather than the traditional sub-6 GHz. Evaluations with first-generation mmWave Wi-Fi hardware, based on the IEEE 802.11ad standard, have so far largely remained limited to lower-layer metrics. In this work, we present the first experimental analysis of the capabilities of mmWave for streaming VR content, using a novel testbed capable of repeatably creating blockage through mobility. Using this testbed, we show that (a) motion may briefly interrupt transmission, (b) a broken line of sight may degrade throughput unpredictably, and (c) TCP-based streaming frameworks need careful tuning to behave well over mmWave.
... Caching at the mobile network's edge is designed to optimize the bandwidth and latency required by VR 360-degree video streaming: Mangiante, Klas [64] demonstrated such a mobile network edge solution. Matsuzono, Asaeda [65] offer L4C2, an in-network caching technique for low-latency, low-loss streaming that improves real-time quality for low-delay-tolerance content. ...
Article
Full-text available
Recently, the usage of 360-degree videos has prevailed in various sectors such as education, real estate, medicine, entertainment and more. The development of the virtual world "Metaverse" has demanded a Virtual Reality (VR) environment with high immersion and a smooth user experience. However, various challenges are faced in providing real-time streaming due to the nature of high-resolution 360-degree videos, such as high bandwidth requirements, high computing power and low delay tolerance. To overcome these challenges, streaming methods such as Dynamic Adaptive Streaming over HTTP (DASH), Tiling, Viewport-Adaptive and Machine Learning (ML) are discussed. Moreover, the superiority of the development of 5G and 6G networks, Mobile Edge Computing (MEC) and Caching, and Information-Centric Network (ICN) approaches in optimizing 360-degree video streaming is elaborated. All of these methods strive to improve the Quality of Experience (QoE) and Quality of Service (QoS) of VR services. Next, the challenges faced in QoE modeling and the existing objective and subjective QoE assessment methods for 360-degree video are presented. Lastly, potential future research that utilizes and substantially improves the existing methods is discussed. With the efforts of various research studies and industries and the gradual development of the network in recent years, a deep-fake virtual world, the "Metaverse", with high immersion and conducive to daily working, learning and socializing, is around the corner.
... Unlike conventional High-Definition (HD) video streaming, 360-degree video streaming also requires that the service provider can quickly respond to changes in the users' Field-of-Views (FoVs). Moreover, users are usually interested in a certain FoV, i.e., a fraction of the entire 360-degree frame, instead of roaming the whole 360-degree video frame [4], [5]. For this, the 360-degree video needs to be pre-rendered at the transmitter, e.g., at a Base Station (BS). This FoV pre-rendering process at the transmitter can not only guarantee quick responses to changes in users' demands but also achieve spectral efficiency through reductions in the 360-degree video frames being transmitted [4]-[6]. This FoV pre-rendering process is, however, a computationally expensive task at the transmitter [3]. ...
... However, clustering users based on their locations or wireless channels might not be efficient in the context of 360-degree video streaming [17]. For example, clustering users based on FoVs can be beneficial for latency and spectral efficiency because the transmitter can transmit only a set of tiles of the 360-degree video frames [4], [5]. Moreover, allocating computing resources to users in RSMA is another major challenge that has not been well investigated. ...
Preprint
Rate Splitting Multiple Access (RSMA) has emerged as an effective interference management scheme for applications that require high data rates. Although RSMA has shown advantages in rate enhancement and spectral efficiency, it is not yet ready for latency-sensitive applications such as virtual reality streaming, which is an essential building block of future 6G networks. Unlike conventional High-Definition streaming applications, streaming virtual reality not only imposes stringent latency requirements but also demands computation capability at the transmitter to quickly respond to dynamic users' demands. Thus, conventional RSMA approaches usually fail to address the challenges caused by computational demands at the transmitter, let alone the dynamic nature of virtual reality streaming applications. To overcome the aforementioned challenges, we first formulate the RSMA-assisted virtual reality streaming problem as a joint communication and computation optimization problem. A novel multicast approach is then proposed to cluster users into different groups based on a Field-of-View metric and transmit multicast streams in a hierarchical manner. After that, we propose a deep reinforcement learning approach to obtain the solution to the optimization problem. Extensive simulations show that our framework can achieve the millisecond-latency requirement, which is much lower than other baseline schemes.
... The Intel WiGig wireless adapter can support 8 Gbps data rates, which is sufficient to provide reliable wireless connections. However, this is only entry-level VR with relatively low PPD, refresh rate, and bits of color [28]. Ultimate XR: Ultimate (or Extreme) XR, which is stage 3 in Fig. 5, requires a 360°×180° full view with a 120 Hz refresh rate, 64 PPD, and 12 bits of color [11,28]. Although a refresh rate higher than 120 Hz can improve video quality, most users may not be able to distinguish the difference [11,28]. Thus, the required data rate without compression is 2.3 Tbps. ...
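The 2.3 Tbps figure can be reproduced from the parameters quoted above (a sketch; the stereoscopic factor of 2 and the 3 color channels per pixel are assumptions, chosen because they make the arithmetic agree with the quoted total):

```python
# Uncompressed data rate for "ultimate XR": a 360°x180° full view at 64 PPD,
# 120 Hz refresh, 12 bits per color channel, 3 channels, rendered for two eyes.
def xr_raw_rate_tbps(fov_h=360, fov_v=180, ppd=64, hz=120,
                     bits_per_channel=12, channels=3, eyes=2):
    pixels = (fov_h * ppd) * (fov_v * ppd)  # 23040 x 11520 per eye
    return pixels * bits_per_channel * channels * hz * eyes / 1e12

print(round(xr_raw_rate_tbps(), 2))  # ~2.29, matching the quoted 2.3 Tbps
```

Dropping the stereo factor halves the rate to roughly 1.15 Tbps, which shows how sensitive these headline numbers are to such modeling assumptions.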
Article
Full-text available
Extended Reality (XR) is an umbrella term that includes Augmented Reality (AR), Mixed Reality (MR), and Virtual Reality (VR). XR has a tremendous market size and will profoundly transform our lives by changing the way we interact with the physical world. However, existing XR devices are mainly tethered by cables which limit users' mobility and Quality-of-Experience (QoE). Wireless XR leverages existing and future wireless technologies, such as 5G, 6G, and Wi-Fi, to remove cables that are tethered to the head-mounted devices. Such changes can free users and enable a plethora of applications. High-quality ultimate XR requires an uncompressed data rate up to 2.3 Tbps with an end-to-end latency lower than 10 ms. Although 5G has significantly improved data rates and reduced latency, it still cannot meet such high requirements. This paper provides a roadmap towards wireless ultimate XR. The basics, existing products, and use cases of AR, MR, and VR are reviewed, upon which technical requirements and bottlenecks of realizing ultimate XR using wireless technologies are identified. Challenges of utilizing 6G wireless systems and the next generation Wi-Fi systems and future research directions are provided.
... Emerging network services and applications in the fifth-generation (5G) and beyond (e.g., Augmented Reality (AR) / virtual reality (VR) (Mangiante et al., 2017), mobile social media (Taleb et al., 2016), Internet of Things (IoT) (Borgia et al., 2016), Industrial Internet, and Internet of Vehicles (IoV) (Zhang et al., 2017a)) have put forward higher requirements for the data transmission and processing capacity of computer and communication networks than before. Most of such applications have stringent Quality-of-Service (QoS) requirements facing the explosive growth of network traffic. ...
Article
Computation offloading is one of the key technologies in Mobile Edge Computing (MEC), which makes up for the deficiencies of mobile devices in terms of storage resources, computing capacity, and energy efficiency. On one hand, computation offloading of task requests not only relieves the communication pressure on the core networks but also reduces the delay caused by long-distance data transmission. On the other hand, emerging applications in 5/6G also rely on computation offloading technology for efficient service provisioning to users. At present, industry and academia have conducted a great deal of research on computation offloading methods in MEC networks, with a diversity of meaningful techniques and approaches. In this paper, we present a comprehensive survey of computation offloading in MEC networks, including applications, offloading objectives, and offloading approaches. In particular, we discuss key issues in various offloading objectives, including delay minimization, energy consumption minimization, revenue maximization, and system utility maximization. The approaches to achieving these objectives mainly include mathematical solvers, heuristic algorithms, Lyapunov optimization, game theory, and Markov Decision Process (MDP) and Reinforcement Learning (RL). We compare the approaches by characterizing their pros and cons as well as their target applications. Finally, from the four aspects of subtask dependency and online task requests, server selection, real-time environment perception, and security, we analyze the current challenges and future directions of computation offloading in MEC networks.
... It is cost-efficient when it comes to capturing a point of view of real environments, and it generally does not interfere with the execution of the event. In fact, researchers have been working on improving the streaming of VR360 video content over the internet, adopting different strategies for selective transmission of data [16] and assessing the impact of transmission artifacts, such as stalling and bitrate reduction, on the quality of experience of VR360 video [2]. The relative ease of video capture also makes the VR360 option interesting for social platforms. ...
Article
Full-text available
In this paper, we investigate three forms of virtual reality (VR) content production and consumption. Namely, pre-rendered 360 stereoscopic video, full real-time rendered 3D scenes, and the combination of a real-time rendered 3D environment with a pre-rendered video billboard used to present the central elements of the scene. We discuss the advantages and disadvantages of these content formats and describe the production of a piece of VR cinematic content for the three formats. The cinematic segment presented the interaction between two actors, which the VR user could watch from the virtual room next-door, separated from the action by a one-way mirror. To compare the three content formats, we carried out an experiment with 24 participants. In the experiment, we evaluated the quality of experience, including presence, simulation sickness and the participants’ assessment of content quality, for each of the three versions of the cinematic segment. We found that, in the context of our cinematic segment, combining video and 3D content produced the best experience. We discuss our results, including their limitations and the potential applications.
... Regarding the Metaverse architecture, there is no consensus, e.g., a seven-layer system [20] and a three-layer architecture [9]. However, based on the Metaverse's functionality, its architecture should overall include four aspects [19]: infrastructure (the fundamental resources to support the platform, such as communication [21], computation, blockchain [22], and other decentralization techniques), interface (immersive technologies, such as AR, VR [11], [23], XR [24], and next-generation human-brain interconnection to enrich humans' subjective senses in virtual life), the cross-world ecosystem (the services that enable frequent and large-volume data transmission between the Metaverse and the physical world, to enable a convergence between the two worlds [25]), and finally the in-world ecosystem (activities that happen only within the virtual worlds, e.g., transactions of non-fungible tokens (NFTs) [26], playing games to earn crypto (GameFi) [27], and decentralized finance (DeFi) [28]). See [8], [29], [30] for a more detailed analysis of the architectures and challenges faced by the Metaverse. ...
Preprint
Full-text available
Metaverse has recently attracted much attention from both academia and industry. Virtual services, ranging from virtual driver training to online route optimization for smart goods delivery, are emerging in the Metaverse. To make the human experience of virtual life real, digital twins (DTs), namely digital replications of physical objects in life, are the key enablers. However, the status of DTs is not always reliable because their physical counterparts can be moving objects or subject to changes as time passes. As such, it is necessary to synchronize DTs with their physical objects to make DT status reliable for virtual businesses in the Metaverse. In this paper, we propose a dynamic hierarchical framework in which a group of IoT devices assists virtual service providers (VSPs) in synchronizing DTs: the devices sense and collect physical objects' status information collectively in return for incentives. Based on the collected sync data and the value decay rate of the DTs, the VSPs can determine a sync intensity to maximize their payoffs. We adopt a dynamic hierarchical framework in which the lower-level evolutionary game captures the VSP selection by the population of IoT devices, and the upper-level (Stackelberg) differential game captures the VSP payoffs affected by the sync strategy, UAV selection shares, and the DT value status. We theoretically and experimentally prove that the equilibrium of the lower-level game exists and is evolutionarily robust, and provide a sensitivity analysis w.r.t. various system parameters. Experiments show that the dynamic Stackelberg differential game gives higher accumulated payoffs compared to the static Stackelberg game and the simultaneous differential game.
... Following the trend of cloud VR, some researchers go one step further to consider VR transmission in fog-computing-enabled cellular networks. In [4], a Field of View (FoV) rendering scheme deployed on fog computing infrastructure is proposed for VR video delivery, and the test results reveal that the traffic on the core and radio access links can be reduced by over 80%. In [5], the authors implement a VR solution also utilizing rendering with fog computing, where a margin around the FoV is streamed back as well to achieve better adaptation to different network latency conditions. ...