Conference Paper

A Comprehensive Review of Generative AI Applications in 6G


... The paper [5] provides a comprehensive review of the applications of Generative AI within the context of 6G networks, emphasizing its role in addressing challenges related to network complexity, data traffic, and personalized services. It systematically analyzes existing literature and potential uses of Generative AI to optimize network functions, enhance security, and offer personalized media content. ...
Article
Full-text available
Enterprise Resource Planning (ERP) systems digitize all business processes within companies in order to enhance automation and optimize efficiency. These solutions integrate data and processes across multiple functions such as sales, marketing, finance, supply chain, manufacturing, services, procurement, and human resources, serving as a central repository of information for numerous organizations. ERP systems typically encompass tens of thousands of business processes and manage data across thousands of tables, creating significant opportunities for the integration of Generative Artificial Intelligence (AI) to increase process automation and optimization. Nonetheless, embedding Generative AI into ERP solutions is a complex task due to the intricate nature of these systems, which consist of hundreds of millions of lines of code and cater to a wide array of industry-specific and regional requirements. Consequently, the key research question addressed in this paper is: How can Generative AI business applications be systematically developed and operated in ERP systems? This article aims to answer this question by conducting a use case analysis, deriving business requirements, designing and implementing a solution framework, and evaluating its effectiveness through real-world ERP use cases.
Conference Paper
Full-text available
The emerging concept of delivering Network-as-a-Service (NaaS) foresees the deployment and reconfiguration of next-generation networks, such as 6G, in a dynamic and elastic manner, tailored to the respective stakeholder's intention. Taking this into account, the efficient management and orchestration of both telecommunication and computational resources across the network domains, i.e. access, transport, and core, presents a considerable challenge, even for network experts. To tackle this complexity, this paper explores the implementation of an intent-based management framework. The framework receives a high-level description of the desired network capabilities along with supplementary files, e.g. deployment descriptors, and translates them into configuration files consumable by the network itself. To achieve this, the paper establishes a translation pipeline that leverages emerging multimodal generative artificial intelligence (GenAI) models, specifically Large Language Models (LLMs), and open, industry-ready standard templates. The adoption of these two emerging technologies makes the interpretation of the user's intent highly dynamic, while ensuring that its outcome is compatible with any orchestrator or next-generation Operations Support System (Next-gen OSS) that adheres to those standards.
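The translation pipeline described above lends itself to a compact sketch. The snippet below is a minimal, hypothetical illustration of the intent-to-configuration idea: a pluggable `llm_complete` callable stands in for whatever GenAI backend is used, and the JSON schema is an invented stand-in for the open standard templates the paper relies on.

```python
# Hypothetical intent-to-configuration translation pipeline. The
# template fields are illustrative, not the paper's actual schema.
import json

TEMPLATE = {  # simplified stand-in for an industry descriptor template
    "service_name": None,
    "latency_ms": None,
    "bandwidth_mbps": None,
    "domains": [],  # e.g. access / transport / core
}

def build_prompt(intent: str) -> str:
    return (
        "Translate the network intent below into JSON matching this "
        f"schema: {json.dumps(TEMPLATE)}\nIntent: {intent}\nJSON:"
    )

def translate_intent(intent: str, llm_complete) -> dict:
    """llm_complete: callable str -> str, any text-generation backend."""
    raw = llm_complete(build_prompt(intent))
    config = json.loads(raw)
    missing = set(TEMPLATE) - set(config)  # validate against the template
    if missing:
        raise ValueError(f"intent translation incomplete: {missing}")
    return config

# Canned backend standing in for a real model:
fake_llm = lambda prompt: json.dumps({
    "service_name": "uplink-video",
    "latency_ms": 20,
    "bandwidth_mbps": 200,
    "domains": ["access", "core"],
})
print(translate_intent("Low-latency uplink video for 200 users", fake_llm))
```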
Article
Full-text available
The Metaverse, positioned as the next frontier of the internet, has the ambition to forge a virtual shared realm characterized by immersion, hyper spatiotemporal dynamics, and self-sustainability. Recent technological strides in AI, Extended Reality (XR), 6G, and blockchain propel the Metaverse closer to realization, gradually transforming it from science fiction into an imminent reality. Nevertheless, the extensive deployment of the Metaverse faces substantial obstacles, primarily stemming from its potential to infringe on privacy and its susceptibility to security breaches, whether inherent in its underlying technologies or arising from the evolving digital landscape. Metaverse security provisioning is poised to confront various foundational challenges owing to its distinctive attributes, encompassing immersive realism, hyper spatiotemporality, sustainability, and heterogeneity. This paper undertakes a comprehensive study of the security and privacy challenges facing the Metaverse, leveraging Machine Learning (ML) models for this purpose. In particular, our focus centers on an innovative distributed Metaverse architecture characterized by interactions across 3D worlds. Subsequently, we conduct a thorough review of the existing cutting-edge measures designed for Metaverse systems while also delving into the discourse surrounding security and privacy threats. As we contemplate the future of Metaverse systems, we outline directions for open research pursuits in this evolving landscape.
Article
Full-text available
Recently, the development of the Metaverse has become a frontier spotlight, as an important demonstration of the integrated innovation of advanced technologies on the Internet. Moreover, artificial intelligence (AI) and 6G communications will be widely used in our daily lives. However, enabling effective interactions with the representations of multimodal data among users via 6G communications is the main challenge in the Metaverse. In this work, we introduce an intelligent cross-modal graph semantic communication approach based on generative AI and 3-dimensional (3D) point clouds to improve the diversity of multimodal representations in the Metaverse. Using a graph neural network, multimodal data can be recorded by key semantic features related to the real scenarios. Then, we compress the semantic features using a graph transformer encoder at the transmitter, which can extract the semantic representations through cross-modal attention mechanisms. Next, we leverage a graph semantic validation mechanism to guarantee the exactness of the overall data at the receiver. Furthermore, we adopt generative AI to regenerate multimodal data in virtual scenarios. Simultaneously, a novel 3D generative reconstruction network is constructed from the 3D point clouds, which can transfer the data from images to 3D models, and we fuse the multimodal data into the 3D models to increase realism in virtual scenarios. Finally, the experimental results demonstrate that cross-modal graph semantic communication, assisted by generative AI, has substantial potential for enhancing user interactions in 6G communications and the Metaverse.
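As a rough intuition for the compress-then-validate pipeline, the NumPy sketch below replaces the graph transformer encoder with a random linear projection and the semantic validation mechanism with a reconstruction-error check; everything here is illustrative rather than the authors' model.

```python
# Illustrative sketch of compress-then-validate semantic transmission.
# A random projection stands in for the learned encoder; a trained
# encoder would concentrate semantic energy in the code and drive the
# reconstruction error far lower than this baseline.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 64))           # 50 graph nodes, 64-dim features
W_enc = rng.normal(size=(64, 8)) / 8.0  # stand-in for a learned encoder

code = X @ W_enc              # compact semantic features sent over the air
X_hat = code @ np.linalg.pinv(W_enc)    # receiver-side reconstruction

# "Graph semantic validation": accept the frame only if the relative
# reconstruction error stays below a tolerance.
rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
print(f"relative reconstruction error: {rel_err:.3f}")
assert rel_err < 1.0, "semantic validation failed"
```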
Article
Full-text available
The exponential growth of the fifth-generation (5G) network gives businesses and universities a chance to turn their attention to the next-generation network. It is widely acknowledged that many IoT devices require more than 5G to send various types of data in real time. In addition to 5G, several research centres are currently concentrating on 6G, which is expected to produce networks with high quality of service (QoS) and energy efficiency. Future application requirements will necessitate a significant upgrade in mobile network architecture. 6G technologies offer larger networks with lower latency and faster data transmission than 5G networks. This review presents a comprehensive overview of 6G networks, including the novel architectural changes within 6G networks, recent research insights from diverse institutions, applications within the realm of 6G networks, and the key features associated with them. We also explore various technologies of 6G networks, encompassing terahertz communication, visible light connectivity, blockchain, and symbiotic broadcasting, all of which contribute to the establishment of robust and socially integrated network structures. In this survey, we focus on 6G network slicing and provide a detailed exploration of security and privacy concerns regarding the potential 6G technologies at the levels of physical infrastructure, connecting protocols, and service provisions, alongside an evaluation of current security strategies.
Article
Full-text available
Envisioned to be the next-generation Internet, the metaverse has been attracting enormous attention from both academia and industry. The metaverse can be viewed as a 3D immersive virtual world, where people use Augmented/Virtual Reality (AR/VR) devices to access it and interact with others through digital avatars. While early versions of the metaverse exist in several Massively Multiplayer Online (MMO) games, the full-fledged metaverse is expected to be more complex and enabled by various advanced technologies. Blockchain is one of the crucial technologies that could revolutionize the metaverse into a decentralized and democratic virtual society with its own economic and governance system. Realizing the importance of blockchain for the metaverse, our goal in this paper is to provide a comprehensive survey that clarifies the role of blockchain in the metaverse, including an in-depth analysis of digital asset management. To this end, we discuss how blockchain can enable the metaverse from different perspectives, ranging from user applications to virtual services and the blockchain-enabled economic system. Furthermore, we describe how blockchain can shape the metaverse from the system perspective, including various solutions for the decentralized governance system and data management. The potential of blockchain for the security and privacy aspects of the metaverse infrastructure is also examined, and a full flow of blockchain-based digital asset management for the metaverse is investigated. Finally, we discuss a wide range of open challenges of the blockchain-empowered metaverse.
Article
Full-text available
The Open Radio Access Network (O-RAN) alliance was recently launched to devise a new RAN architecture featuring open, software-driven, virtual, and intelligent radio access. The O-RAN architecture is based on (1) disaggregated RAN functions that run as Virtual Network Functions (VNFs) and Physical Network Functions (PNFs); and (2) the notion of a RAN controller that centrally runs RAN applications such as mobility management, user scheduling, radio resource allocation, etc. The RAN controller is in charge of enforcing the application decisions via open interfaces with the RAN functions. One important feature introduced by O-RAN is the heavy usage of Machine Learning (ML) techniques, particularly Deep Learning (DL), to foster innovation and ease the deployment of intelligent RAN applications that are able to fulfill the Quality of Service (QoS) requirements of the envisioned 5G and beyond network services. In this work, we first give an overview of the evolution of RAN architectures toward 5G and beyond, namely C-RAN, vRAN, and O-RAN. We also compare them from various perspectives, such as edge support, virtualization, control and management, energy consumption, and AI support. Then, we review existing DL-based solutions addressing the RAN part and show how they can be integrated into, or mapped to, the O-RAN architecture, since these works were not initially adapted to it. In addition, we present two case studies of DL technique deployment in O-RAN. Furthermore, we describe how the main steps of deployed DL models in O-RAN can be automated to ensure their stable performance, introducing the ML system operations (MLOps) concept in O-RAN. Finally, we identify key technical challenges, open issues, and future research directions related to the Artificial Intelligence (AI)-enabled O-RAN architecture.
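The MLOps point is worth making concrete. Below is a small, self-contained sketch of one ingredient such automation might include: a population-stability-style drift check on a deployed model's input feature that flags retraining. The feature, threshold, and data are invented for illustration, and O-RAN interfaces are not modeled.

```python
# Toy MLOps drift monitor: compare the live input distribution of a
# deployed RAN model against its training distribution and trigger
# retraining when the drift score exceeds a cutoff.
import numpy as np

def population_stability(ref: np.ndarray, live: np.ndarray, bins=10) -> float:
    """Simple PSI-style drift score between two feature samples."""
    edges = np.histogram_bin_edges(ref, bins=bins)
    p, _ = np.histogram(ref, bins=edges)
    q, _ = np.histogram(live, bins=edges)
    p = np.clip(p / p.sum(), 1e-6, None)
    q = np.clip(q / q.sum(), 1e-6, None)
    return float(np.sum((p - q) * np.log(p / q)))

rng = np.random.default_rng(1)
training_load = rng.normal(0.5, 0.1, 10_000)  # per-cell load at training time
live_load = rng.normal(0.7, 0.15, 1_000)      # shifted live traffic

psi = population_stability(training_load, live_load)
print(f"drift score: {psi:.2f}")
if psi > 0.2:  # common rule-of-thumb cutoff, an assumption here
    print("drift detected -> trigger retraining pipeline")
```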
Article
Full-text available
The inherent limitations of cellular networks continue to be revealed as their deployment progresses. These drawbacks motivate the next generation, 6G, to properly integrate important rate-hungry applications such as extended reality, wireless brain-computer interactions, autonomous vehicles, etc. Also, to support significant applications, 6G will handle large amounts of data transmission in smart cities with much lower latency. It combines many state-of-the-art trends and technologies to provide higher data rates for ultra-reliable and low-latency communications. By outlining the system requirements, potential trends, technologies, services, applications, and research progress, this paper comprehensively conceptualizes the 6G cellular system. Open research issues and the research groups currently active in the field are summarised to provide readers with a technology road-map and the potential challenges to consider in their 6G research.
Article
Full-text available
The demand for wireless connectivity has grown exponentially over the last few decades. Fifth-generation (5G) communications, with far more features than fourth-generation communications, will soon be deployed worldwide. A new paradigm of wireless communication, the sixth-generation (6G) system, with the full support of artificial intelligence, is expected to be implemented between 2027 and 2030. Beyond 5G, some fundamental issues that need to be addressed are higher system capacity, higher data rates, lower latency, stronger security, and improved quality of service (QoS) compared to the 5G system. This paper presents the vision of future 6G wireless communication and its network architecture. It describes emerging technologies such as artificial intelligence, terahertz communications, wireless optical technology, free-space optical networks, blockchain, three-dimensional networking, quantum communications, unmanned aerial vehicles, cell-free communications, integration of wireless information and energy transfer, integrated sensing and communication, integrated access-backhaul networks, dynamic network slicing, holographic beamforming, backscatter communication, intelligent reflecting surfaces, proactive caching, and big data analytics that can assist 6G architecture development in guaranteeing QoS. In addition, expected applications with their 6G communication requirements and possible technologies are presented. We also describe potential challenges and research directions for achieving this goal.
Article
Digital twin, which enables emulation, evaluation, and optimization of physical entities through synchronized digital replicas, has gained increasing attention as a promising technology for intricate wireless networks. For 6G, numerous innovative wireless technologies and network architectures have posed new challenges in establishing wireless network digital twins. To tackle these challenges, artificial intelligence (AI), particularly the flourishing generative AI, emerges as a potential solution. In this article, we discuss emerging prerequisites for wireless network digital twins, considering the complicated network architecture, tremendous network scale, extensive coverage, and diversified application scenarios in the 6G era. We further explore the applications of generative AI, such as transformer and diffusion models, to empower the 6G digital twin from multiple perspectives, including physical-digital modeling, synchronization, and slicing capability. Subsequently, we propose a hierarchical generative AI-enabled wireless network digital twin at both the message-level and policy-level, and provide a typical use case with numerical results to validate effectiveness and efficiency. Finally, open research issues for wireless network digital twins in the 6G era are discussed.
Article
As the next-generation wireless communication system, sixth-generation (6G) technologies are emerging, enabling various mobile edge networks that can revolutionize wireless communication and connectivity. By integrating generative artificial intelligence (GAI) with mobile edge networks, generative mobile edge networks possess immense potential to enhance the intelligence and efficiency of wireless communication networks. In this article, we propose the concept of generative mobile edge networks and overview widely adopted GAI technologies and their applications in mobile edge networks. We then discuss the potential challenges faced by generative mobile edge networks in resource-constrained scenarios. To address these challenges, we develop a universal resource-efficient generative incentive mechanism framework, in which we design resource-efficient methods for network overhead reduction, formulate appropriate incentive mechanisms for the resource allocation problem, and utilize generative diffusion models (GDMs) to find the optimal incentive mechanism solutions. Furthermore, we conduct a case study on resource-constrained mobile edge networks, employing model partitioning for efficient AI task offloading, and proposing a GDM-based Stackelberg model to motivate edge devices to contribute computing resources for mobile edge intelligence. Finally, we propose several open directions that could contribute to the future popularity of generative mobile edge networks.
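To make the incentive-mechanism idea tangible, the toy below solves a one-leader, multi-follower Stackelberg pricing game by grid search, where the paper instead uses generative diffusion models as the solver. The utility forms and constants are assumptions for the sketch.

```python
# Toy Stackelberg incentive game: a server (leader) posts a unit
# reward, each edge device (follower) best-responds with a computing
# contribution, and the leader searches for the reward maximizing its
# own utility.
import numpy as np

costs = np.array([0.2, 0.35, 0.5])  # per-unit compute cost of each device

def follower_contribution(price: float) -> np.ndarray:
    # Utility u_i = price * x_i - cost_i * x_i**2  ->  x_i* = price / (2 c_i)
    return np.maximum(price / (2 * costs), 0.0)

def leader_utility(price: float, value_per_unit: float = 1.0) -> float:
    x = follower_contribution(price)
    return value_per_unit * x.sum() - price * x.sum()

prices = np.linspace(0.01, 1.0, 200)
best = max(prices, key=leader_utility)
print(f"optimal reward ~ {best:.2f}, "
      f"contributions = {np.round(follower_contribution(best), 2)}")
```

With the quadratic follower cost assumed above, the leader's optimum lands near half the value per unit; a diffusion-model solver becomes attractive when the utility landscape is high-dimensional and non-convex rather than this one-dimensional toy.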
Article
As the deployment of 5G technology matures, anticipation for 6G is growing, promising faster and more reliable wireless connections via cutting-edge radio technologies. Pivotal to these radio technologies is the effective management of large-scale antenna arrays, which aims to construct valid spatial streams to maximize system throughput. Traditional management methods predominantly rely on user feedback to adapt to dynamic wireless channels. However, a more promising approach lies in the prediction of spatial channel state information (spatial-CSI), a channel characterization consisting of all robust line-of-sight (LoS) and non-line-of-sight (NLoS) paths between the transmitter (Tx) and receiver (Rx), with the three-dimensional (3D) trajectory, attenuation, phase shift, delay, and polarization of each path. Recent advances in hardware and neural networks make it possible to predict such spatial-CSI using precise environmental information, and further open the possibility of holographic communication, which implies complete control over every aspect of the radio waves. This paper presents a preliminary exploration of using generative artificial intelligence (AI) to accurately model the environment, particularly for radio simulations, and to identify valid paths within it for real-time spatial-CSI prediction. Our validation project demonstrates promising results, highlighting the potential of this approach in driving forward the evolution of 6G wireless communication technologies.
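The spatial-CSI characterization has a natural data-structure reading. The sketch below defines a per-path record (attenuation, phase, delay, LoS flag) and composes a narrowband channel response by summing the paths; the field names are our own, and the paths themselves are what the paper's generative models would predict.

```python
# Per-path spatial-CSI record and the standard narrowband composition
# h(f) = sum_k a_k * exp(j*(phi_k - 2*pi*f*tau_k)).
import cmath
from dataclasses import dataclass

@dataclass
class Path:
    attenuation: float  # linear amplitude gain
    phase: float        # radians
    delay: float        # seconds
    los: bool           # line-of-sight flag

def channel_response(paths: list[Path], freq_hz: float) -> complex:
    return sum(
        p.attenuation * cmath.exp(1j * (p.phase - 2 * cmath.pi * freq_hz * p.delay))
        for p in paths
    )

paths = [
    Path(attenuation=1.0, phase=0.0, delay=50e-9, los=True),
    Path(attenuation=0.3, phase=1.2, delay=120e-9, los=False),
]
h = channel_response(paths, freq_hz=3.5e9)
print(f"|h| = {abs(h):.3f}, arg(h) = {cmath.phase(h):.3f} rad")
```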
Article
The sixth generation mobile network (6G) is evolving to provide ubiquitous connections, multidimensional perception, native intelligence, global coverage, etc., which poses intense demands on network design to tackle the highly dynamic context and diverse service requirements. Digital Twin (DT) is envisioned as an efficient method for designing 6G that migrates the behaviors of physical nodes to the virtual space. However, in the highly dynamic 6G network, there still exist challenges in achieving accuracy and flexibility when constructing a DT. In this article, we propose a Generative Artificial Intelligence (GAI)-driven mobile network digital twin paradigm, where GAI is utilized as a key enabler to generate DT data. Specifically, GAI is capable of implicitly learning the complex distribution of network data, allowing it to sample from the distribution and obtain high-fidelity data. In addition, the construction of a DT is closely related to various types of data, such as environmental, user, and service data. GAI can utilize these data as conditions to control the generation process under different scenarios, thereby enhancing flexibility. In practice, we develop a network digital twin prototype system to accurately model the behaviors of mobile network elements (i.e., mobile users, base stations, and wireless environments) and to evaluate network performance. Evaluation results demonstrate that the proposed prototype system can generate high-fidelity DT data and provide practical network optimization solutions.
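A minimal caricature of condition-controlled DT data generation follows: fit a per-scenario distribution of a network metric and sample from it on demand. A real system would use a conditional generative model; the Gaussian fit per condition and the synthetic data below are stand-ins.

```python
# Condition-controlled synthetic data for a digital twin: the requested
# scenario selects which fitted distribution the sampler draws from.
import numpy as np

rng = np.random.default_rng(2)
observed = {  # throughput samples (Mbps) logged under two scenarios
    "urban": rng.normal(120, 15, 500),
    "rural": rng.normal(40, 8, 500),
}

models = {k: (v.mean(), v.std()) for k, v in observed.items()}

def generate(condition: str, n: int) -> np.ndarray:
    mu, sigma = models[condition]  # the condition controls the sampler
    return rng.normal(mu, sigma, n)

twin_data = generate("urban", 5)
print(np.round(twin_data, 1))
```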
Article
The rapid expansion of AI-generated content (AIGC) reflects the iteration from assistive AI towards generative AI (GAI). Meanwhile, 6G networks will also evolve from the Internet-of-Everything to the Internet-of-Intelligence. However, the two seem an odd couple, owing to the tension between their data and resource demands. To achieve a better-coordinated interplay between GAI and 6G, this article proposes GAI-native Networks (GainNet), a GAI-oriented collaborative cloud-edge-end intelligence framework. By deeply integrating GAI with 6G network design, GainNet realizes a positive closed-loop knowledge flow and sustainably evolving GAI model optimization. On this basis, the GAI-oriented generic Resource Orchestration Mechanism with Integrated Sensing, Communication, and Computing (GaiRom-ISCC) is proposed to guarantee the efficient operation of GainNet. Two simple case studies demonstrate the effectiveness and robustness of the proposed schemes. Finally, we envision the key challenges and future directions concerning the interplay between GAI models and 6G networks.
Article
In the era of 6G, featuring compelling visions of digital twins and metaverses, Extended Reality (XR) has emerged as a vital conduit connecting the digital and physical realms, garnering widespread interest. Ensuring a fully immersive wireless XR experience stands as a paramount technical necessity, demanding the liberation of XR from the confines of wired connections. In this paper, we first introduce the technologies applied in the wireless XR domain, delve into their benefits and limitations, and highlight the ongoing challenges. We then propose a novel deployment framework for a broad XR pipeline, termed “GeSa-XRF”, inspired by the core philosophy of Semantic Communication (SemCom) which shifts the concern from “how” to transmit to “what” to transmit. Particularly, the framework comprises three stages: data collection, data analysis, and data delivery. In each stage, we integrate semantic awareness to achieve streamlined transmission and employ Generative Artificial Intelligence (GAI) to achieve collaborative refinements. For the data collection of multi-modal data with differentiated data volumes and heterogeneous latency requirements, we propose a novel SemCom paradigm based on multi-modal fusion and separation and a GAI-based robust superposition scheme. To perform a comprehensive data analysis, we employ multi-task learning to perform the prediction of field of view and personalized attention and discuss the possible preprocessing approaches assisted by GAI. Lastly, for the data delivery stage, we present a semantic-aware multicast-based delivery strategy aimed at reducing pixel level redundant transmissions and introduce the GAI collaborative refinement approach. The performance gain of the proposed GeSa-XRF is preliminarily demonstrated through a case study.
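The delivery-stage claim that semantic-aware multicast cuts pixel-level redundant transmissions can be illustrated with a tile-counting toy: tiles appearing in several users' predicted fields of view are sent once to the group rather than once per user. The user sets and tile IDs below are invented for illustration.

```python
# Tile-level comparison of per-user unicast versus semantic-aware
# multicast for XR users with overlapping predicted fields of view.
views = {  # predicted field-of-view tiles per user
    "u1": {1, 2, 3, 4},
    "u2": {3, 4, 5},
    "u3": {4, 5, 6},
}

unicast_cost = sum(len(t) for t in views.values())  # each tile per user
multicast_cost = len(set().union(*views.values()))  # each shared tile once

print(f"unicast tile transmissions:   {unicast_cost}")
print(f"multicast tile transmissions: {multicast_cost}")
```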
Article
This paper introduces a media service model that exploits artificial intelligence (AI) video generators at the receive end. The proposal deviates from the traditional multimedia ecosystem, which relies entirely on in-house production, by shifting part of the content creation onto the receiver. We bring a semantic process into the framework, allowing the distribution network to provide service elements that prompt the content generator rather than distributing encoded data of fully finished programs. The service elements include fine-tailored text descriptions, lightweight image data of some objects, or application programming interfaces, comprehensively referred to as semantic sources, and the user terminal translates the received semantic data into video frames. Empowered by the random nature of generative AI, users can accordingly experience super-personalized services. The proposed idea incorporates situations in which the user receives different service providers' element packages, either in a sequence over time or as multiple packages at the same time. Given the promised in-context coherence and content integrity, the combinatory dynamics will amplify service diversity, allowing users to continually chance upon new experiences. This work particularly targets short-form videos and advertisements, where users would quickly tire of seeing the same frame sequence every time. In those use cases, the content provider's role will be recast as scripting semantic sources rather than acting as a thorough producer. Overall, this work explores a new form of media ecosystem facilitated by receiver-embedded generative models, featuring both random content dynamics and enhanced delivery efficiency simultaneously.
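The "semantic source" packaging admits a simple schematic. The sketch below defines a hypothetical element package (prompt, lightweight assets, optional API hooks) and a receiver-side `play` helper that hands it to any local generator; the schema and the `render` callable are our assumptions, not the paper's interface.

```python
# Hypothetical semantic-source package and receiver-side expansion.
from dataclasses import dataclass, field

@dataclass
class SemanticPackage:
    prompt: str                                      # fine-tailored text
    assets: list[str] = field(default_factory=list)  # lightweight images
    apis: list[str] = field(default_factory=list)    # optional API hooks

def play(pkg: SemanticPackage, render) -> list:
    """render: callable (prompt, assets) -> list of frames (any backend)."""
    return render(pkg.prompt, pkg.assets)

# Canned generator standing in for an on-device video model:
stub_render = lambda prompt, assets: [f"frame[{i}] of '{prompt}'" for i in range(3)]
ad = SemanticPackage(prompt="15s sneaker ad, rainy street, neon palette",
                     assets=["logo.png"])
print(play(ad, stub_render))
```

Because the generator runs at the terminal, the same package can yield a different frame sequence on every playback, which is exactly the content-dynamics property the abstract highlights.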
Article
The emerging sixth generation (6G) is the integration of heterogeneous wireless networks, which can seamlessly support networking anywhere and anytime. However, 6G must offer a high quality of trust to meet mobile user expectations. Artificial intelligence (AI) is considered one of the most important components of 6G. AI-based trust management is a promising paradigm for providing trusted and reliable services. In this article, a generative-adversarial-learning-enabled trust management method is presented for 6G wireless networks. Some typical AI-based trust management schemes are first reviewed, and then a potential heterogeneous and intelligent 6G architecture is introduced. Next, the integration of AI and trust management is developed to optimize intelligence and security. Finally, the presented AI-based trust management method is applied to secure clustering to achieve reliable and real-time communications. Simulation results have demonstrated its excellent performance in guaranteeing network security and service quality.
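To ground the secure-clustering application, here is a toy trust-aware head-selection routine: each node receives a composite trust score and the highest-trust member of each cluster becomes its head. The scoring rule is a hand-rolled stand-in for the article's generative-adversarial trust model, and the observations are synthetic.

```python
# Trust-aware cluster-head selection: score node behavior, then pick
# the most trusted node per cluster as its head.
import numpy as np

rng = np.random.default_rng(3)
n = 12
forward_rate = rng.uniform(0.5, 1.0, n)  # observed packet-forwarding ratio
anomaly = rng.uniform(0.0, 0.5, n)       # anomaly score (higher = worse)
trust = forward_rate * (1 - anomaly)     # simple composite trust score

clusters = np.array([i % 3 for i in range(n)])  # 3 fixed clusters
for c in range(3):
    members = np.where(clusters == c)[0]
    head = members[np.argmax(trust[members])]
    print(f"cluster {c}: head=node{head} (trust={trust[head]:.2f})")
```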
Article
The open radio access network (O-RAN) describes an industry-driven open architecture and interfaces for building next-generation RANs with artificial intelligence (AI) controllers. We circulated a survey among researchers, developers, and practitioners to gather their perspectives on O-RAN as a framework for 6G wireless research and development (R&D). The majority responded in favor of O-RAN and identified R&D areas of interest to them. Motivated by these responses, this paper identifies the limitations of the current O-RAN specifications and the technologies for overcoming them. We recognize end-to-end security, deterministic latency, physical-layer real-time control, and testing of AI-based RAN control applications as the critical features to enable, and we discuss R&D opportunities for extending the architectural capabilities of O-RAN as a platform for 6G wireless.
Article
A key enabler for the intelligent information society of 2030, 6G networks are expected to provide performance superior to 5G and satisfy emerging services and applications. In this article, we present our vision of what 6G will be and describe usage scenarios and requirements for multi-terabyte per second (Tb/s) and intelligent 6G networks. We present a large-dimensional and autonomous network architecture that integrates space, air, ground, and underwater networks to provide ubiquitous and unlimited wireless connectivity. We also discuss artificial intelligence (AI) and machine learning [1], [2] for autonomous networks and innovative air-interface design. Finally, we identify several promising technologies for the 6G ecosystem, including terahertz (THz) communications, very-large-scale antenna arrays [i.e., supermassive (SM) multiple-input, multiple-output (MIMO)], large intelligent surfaces (LISs) and holographic beamforming (HBF), orbital angular momentum (OAM) multiplexing, laser and visible-light communications (VLC), blockchain-based spectrum sharing, quantum communications and computing, molecular communications, and the Internet of Nano-Things.