October 2024 · 14 Reads · 1 Citation
July 2024 · 50 Reads
Hierarchical federated learning (HFL) designs introduce intermediate aggregator nodes between clients and the global federated learning server in order to reduce communication costs and distribute server load. One side effect is that machine learning model replication at scale comes "for free" as part of the HFL process: model replicas are hosted at the client end, intermediate nodes, and the global server level and are readily available for serving inference requests. This creates opportunities for efficient model serving but simultaneously couples the training and serving processes and calls for their joint orchestration. This is particularly important for continual learning, where serving a model while (re)training it periodically, upon specific triggers, or continuously, takes place over shared infrastructure spanning the computing continuum. Consequently, training and inference workloads can interfere with detrimental effects on performance. To address this issue, we propose an inference load-aware HFL orchestration scheme, which makes informed decisions on HFL configuration, considering knowledge about inference workloads and the respective processing capacity. Applying our scheme to a continual learning use case in the transportation domain, we demonstrate that by optimizing aggregator node placement and device-aggregator association, significant inference latency savings can be achieved while communication costs are drastically reduced compared to flat centralized federated learning.
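The device-aggregator association idea from this abstract can be sketched as a small greedy heuristic. All names (`assign_devices`, the latency and capacity dictionaries) and the greedy strategy itself are illustrative assumptions; the paper formulates a richer joint orchestration problem, not reproduced here.

```python
# Hypothetical sketch of inference load-aware device-aggregator association,
# assuming per-link latencies and per-aggregator serving capacities are known.
def assign_devices(devices, aggregators, latency, capacity, load):
    """Greedily assign each device to the lowest-latency aggregator
    that still has enough inference-serving capacity left."""
    assignment = {}
    remaining = dict(capacity)  # requests/s each aggregator can still absorb
    for dev in devices:
        feasible = [a for a in aggregators if remaining[a] >= load[dev]]
        if not feasible:
            raise ValueError(f"no aggregator can serve device {dev}")
        best = min(feasible, key=lambda a: latency[(dev, a)])
        assignment[dev] = best
        remaining[best] -= load[dev]
    return assignment
```

A greedy pass like this ignores global optimality but makes the coupling explicit: the same nodes that aggregate model updates also absorb inference load, so capacity spent on one is unavailable to the other.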
July 2024 · 8 Reads
May 2024 · 35 Reads
While various service orchestration aspects within Computing Continuum (CC) systems have been extensively addressed, including service placement, replication, and scheduling, an open challenge lies in ensuring uninterrupted data delivery from IoT devices to running service instances in this dynamic environment, while adhering to specific Quality of Service (QoS) requirements and balancing the load on service instances. To address this challenge, we introduce QEdgeProxy, an adaptive and QoS-aware load balancing framework specifically designed for routing client requests to appropriate IoT service instances in the CC. QEdgeProxy integrates naturally within Kubernetes, adapts to changes in dynamic environments, and manages to seamlessly deliver data to IoT service instances while consistently meeting QoS requirements and effectively distributing load across them. This is verified by extensive experiments over a realistic K3s cluster with instance failures and network variability, where QEdgeProxy outperforms both Kubernetes built-in mechanisms and a state-of-the-art solution, while introducing minimal computational overhead.
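The routing decision described above can be sketched in a few lines: filter instances by the QoS bound, then balance load among the survivors. This is only a toy rendering of the idea; QEdgeProxy's actual Kubernetes-integrated logic is not reproduced, and the field names below are assumptions.

```python
# Illustrative QoS-aware instance selection in the spirit of QEdgeProxy.
def select_instance(instances, qos_latency_ms):
    """Route a request to the least-loaded instance whose observed
    latency satisfies the QoS bound; degrade to the fastest otherwise."""
    eligible = [i for i in instances if i["latency_ms"] <= qos_latency_ms]
    if eligible:
        return min(eligible, key=lambda i: i["load"])
    # No instance meets the bound: fall back to the lowest-latency one.
    return min(instances, key=lambda i: i["latency_ms"])
```

The two-step filter-then-balance structure is what distinguishes QoS-aware routing from plain round-robin: load is only spread across instances that can actually meet the requirement.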
January 2024 · 16 Reads
January 2024 · 149 Reads · 18 Citations · IEEE Access
Sixth-generation (6G) networks anticipate intelligently supporting a wide range of smart services and innovative applications. Such a context urges a heavy usage of Machine Learning (ML) techniques, particularly Deep Learning (DL), to foster innovation and ease the deployment of intelligent network functions/operations, which are able to fulfill the various requirements of the envisioned 6G services. The revolution of 6G networks is driven by massive data availability, moving from centralized and big data towards small and distributed data. This trend has motivated the adoption of distributed and collaborative ML/DL techniques. Specifically, collaborative ML/DL consists of deploying a set of distributed agents that collaboratively train learning models without sharing their data, thus improving data privacy and reducing the time/communication overhead. This work provides a comprehensive study on how collaborative learning can be effectively deployed over 6G wireless networks. In particular, our study focuses on Split Federated Learning (SFL), a technique that recently emerged, promising better performance compared with existing collaborative learning approaches. We first provide an overview of three emerging collaborative learning paradigms, including federated learning, split learning, and split federated learning, as well as of 6G networks along with their main vision and timeline of key developments. We then highlight the need for split federated learning towards the upcoming 6G networks in every aspect, including 6G technologies (e.g., intelligent physical layer, intelligent edge computing, zero-touch network management, intelligent resource management) and 6G use cases (e.g., smart grid 2.0, Industry 5.0, connected and autonomous systems). Furthermore, we review existing datasets along with frameworks that can help in implementing SFL for 6G networks. We finally identify key technical challenges, open issues, and future research directions related to SFL-enabled 6G networks.
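The core mechanic of split learning that the survey builds on can be shown numerically: the client computes the front of the model, only cut-layer activations and their gradients cross the split, and each side updates its own parameters. The single-linear-layer model, sizes, and learning rate below are invented for illustration and are not from the survey.

```python
import numpy as np

# Minimal numeric sketch of one split-learning step: raw data never
# leaves the client; only "smashed" activations and gradients are exchanged.
rng = np.random.default_rng(0)
Wc = rng.normal(size=(4, 3)) * 0.1   # client-side weights
Ws = rng.normal(size=(3, 1)) * 0.1   # server-side weights
x = rng.normal(size=(8, 4))          # private client batch
y = rng.normal(size=(8, 1))
lr = 0.1

# Client forward: activations at the cut layer, sent to the server.
a = x @ Wc
# Server forward + MSE loss, then gradient w.r.t. the activations.
pred = a @ Ws
loss_before = float(((pred - y) ** 2).mean())
grad_pred = 2 * (pred - y) / len(x)
grad_Ws = a.T @ grad_pred
grad_a = grad_pred @ Ws.T            # sent back to the client

# Both sides update their own parameters locally.
Ws -= lr * grad_Ws
Wc -= lr * (x.T @ grad_a)
loss_after = float(((x @ Wc @ Ws - y) ** 2).mean())
```

In split *federated* learning, many clients run this step in parallel against a shared server-side model, and the client-side halves are then federated-averaged, which is where SFL's claimed speedup over sequential split learning comes from.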
September 2023 · 247 Reads
January 2023 · 158 Reads · 25 Citations · IEEE Transactions on Cloud Computing
Fog computing enables the execution of IoT applications on compute nodes which reside both in the cloud and at the edge of the network. To achieve this, most fog computing systems route the IoT data on a path which starts at the data source and goes through various edge and cloud nodes. Each node on this path may accept the data if there are available resources to process this data locally. Otherwise, the data is forwarded to the next node on the path. Notably, when the data is forwarded (rather than accepted), the communication latency increases by the delay to reach the next node. To avoid this, we propose a routing mechanism which maintains a history of all nodes that have accepted data of each context in the past. By processing this history, our mechanism sends the data directly to the closest node that tends to accept data of the same context. This reduces forwarding by intermediate nodes on the path and can lower the communication latency. We evaluate this approach using both prototype- and simulation-based experiments, which show reduced communication latency (by up to 23%) and a lower number of hops traveled (by up to 73%) compared to a state-of-the-art method.
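The history mechanism in this abstract can be sketched as a small routing class: remember which node accepted data of each context, and try that node first next time instead of walking the whole path. The class and its names are illustrative assumptions; the paper's mechanism processes the full acceptance history rather than only the last acceptor.

```python
# Toy sketch of history-based fog routing: shortcut to a remembered
# acceptor, fall back to hop-by-hop forwarding on a miss.
class HistoryRouter:
    def __init__(self, path):
        self.path = path              # ordered nodes from edge to cloud
        self.history = {}             # context -> node that accepted it

    def route(self, context, has_capacity):
        """Return the node to send `context` data to; `has_capacity`
        reports whether a node can currently process the data."""
        hinted = self.history.get(context)
        if hinted is not None and has_capacity(hinted):
            return hinted             # direct hit, no intermediate hops
        # Fallback: walk the path hop by hop, as in plain fog routing.
        for node in self.path:
            if has_capacity(node):
                self.history[context] = node
                return node
        return None                   # no node accepted the data
```

The latency saving comes entirely from the first branch: once a context's acceptor is known, the data skips every forwarding hop in between.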
May 2022 · 132 Reads · 37 Citations
January 2022 · 92 Reads
We present an architecture for the provision of video Content Delivery Network (CDN) functionality as a service over a multi-domain cloud. We introduce the concept of a CDN slice, that is, a CDN service instance which is created upon a content provider's request, is autonomously managed, and spans multiple potentially heterogeneous edge cloud infrastructures. Our design is tailored to a 5G mobile network context, building on its inherent programmability, management flexibility, and the availability of cloud resources at the mobile edge level, thus close to end users. We exploit Network Functions Virtualization (NFV) and Multi-access Edge Computing (MEC) technologies, proposing a system which is aligned with the recent NFV and MEC standards. To deliver a Quality-of-Experience (QoE) optimized video service, we derive empirical models of video QoE as a function of service workload, which, coupled with multi-level service monitoring, drive our slice resource allocation and elastic management mechanisms. These management schemes feature autonomic compute resource scaling, and on-the-fly transcoding to adapt video bit-rate to the current network conditions. Their effectiveness is demonstrated via testbed experiments.
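The QoE-driven elastic management loop described above can be sketched as a controller that consults an empirical QoE model to size the slice. The model shape, its coefficients, and the thresholds below are invented for illustration; the paper derives its models empirically from measured service workload.

```python
# Hedged sketch of QoE-driven scaling: keep predicted QoE above a target
# by adjusting the number of (e.g., transcoding) instances in the slice.
def qoe_model(requests_per_instance):
    """Assumed empirical fit: QoE (1-5 MOS scale) degrades with load."""
    return max(1.0, 5.0 - 0.02 * requests_per_instance)

def scale_decision(total_requests, instances, target_qoe=4.0):
    """Return the instance count needed to keep predicted QoE on target."""
    n = instances
    # Scale out while the model predicts QoE below target.
    while qoe_model(total_requests / n) < target_qoe and n < 10 * instances:
        n += 1
    # Scale in while one fewer instance would still meet the target.
    while n > 1 and qoe_model(total_requests / (n - 1)) >= target_qoe:
        n -= 1
    return n
```

Driving scaling from a QoE model rather than raw CPU load is the design point: the controller acts on predicted user experience, with monitoring feeding the model's workload input.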
... i. Communication overheads can be efficiently reduced by using compression techniques, reducing the waiting time by using asynchronous communication, better bandwidth, designing an effective model with the option to reduce or drop updates, and by increasing the overall model performance (Hafi et al. 2024). ii. ...
January 2024 · IEEE Access
... In addition to web applications, WASM is increasingly adopted as a standalone technology in various other domains. Its light weight and portability make it suitable for server-side applications, desktop applications, and embedded systems [23]. In these settings, WASM modules operate independently, offering the performance and security benefits typical of native applications. ...
May 2022
... To draw valuable insights and make informed decisions, efficient data management strategies such as data storage, processing, and real-time analytics must be employed. This includes using edge computing [65] for minimized latencies and enhanced efficiency in smart VHF setups, and employing big data analytics to enhance acquiring, storing, and analyzing data from vertical farming activities [66]. ...
November 2021 · IEEE Internet Computing
... This group emphasizes enhancing network reliability, improving mobility management, ensuring seamless transitions within the network, and maintaining overall network performance and stability. Authors in [52] introduced a machine learning approach to deduce the stability of User Equipment (UE) channel conditions, providing insights into network performance. Estimating the positioning accuracy of the UE for location-based services was discussed in [53], while [54] suggested Distributed and Multi-Task Learning at the Edge for collaborative model training. ...
July 2021 · IEEE Transactions on Network and Service Management
... Cloud-based and edge-of-network computer nodes work together [8] in the fog to run IoT applications. Most fog computing solutions accomplish this by sending IoT data from its origin to numerous destinations in the cloud and on the network's periphery. ...
January 2023 · IEEE Transactions on Cloud Computing
... Ye et al. [40] proposed an adaptive runtime verification method based on multi-agent systems. Tsigkanos et al. [41] proposed a service-oriented software architecture and technical framework to support the runtime verification of decentralized edge-dense systems. Hu et al. [42] proposed a runtime verification method based on the Robotic Operating System (ROS). ...
April 2021 · IEEE Transactions on Services Computing
... Long-range (LoRa) is one of the most popular low-power wide-area-network (LPWAN) protocols due to its easy deployment and flexible management as well as its open protocol stack. As a physical layer technology, LoRa adopts chirp spread spectrum (CSS) techniques to propagate narrowband signals over a specific channel bandwidth [1]. The signal could therefore travel further while consuming less power, enabling the connection of thousands of devices with long battery lives. ...
January 2021 · IEEE Internet Computing
... There are a variety of service providers in such a large-scale, multi-domain network, where different management strategies and operational policies are applied in each network domain [8,9]. To facilitate network management, some companies segment their networks into autonomous, interconnected domains. ...
September 2020 · IEEE Transactions on Mobile Computing
... However, OpenAirInterfaces with network slicing can be used instead of LTE for improved user quality of experience (QoE) in remote location task transfer [18]. The MEC platform can utilize location-based services for computational offloading to decrease latency for mobile users [19]. If emphasis is given to the caliber of network connectivity and resource usage, the overall TC time may be reduced by a substantial amount. ...
June 2020
... Navarro [8] proposed to incorporate the protocol stack of eNodeB into the LoRaWAN gateway, enabling the LoRaWAN gateway to access 4G/5G core network directly. Ksentini and Frangoudis [30] utilized the European Telecommunications Standards Institute (ETSI) multi-access edge computing server (MEC) [31] as the cloud for deploying LoRaWAN servers and related applications. To improve the performance of LoRaWAN-5G integrated networks, Torroglosa et al. [32] proposed a roaming method for the end devices with dual connectivity of LoRaWAN and 5G. ...
June 2020 · IEEE Communications Standards Magazine