Eryk Dutkiewicz’s research while affiliated with University of Technology Sydney and other places

What is this page?


This page lists works of an author who doesn't have a ResearchGate profile or hasn't added the works to their profile yet. It is automatically generated from public (personal) data to further our legitimate goal of comprehensive and accurate scientific recordkeeping. If you are this author and want this page removed, please let us know.

Publications (509)


Energy-Based Proportional Fairness in Cooperative Edge Computing
  • Article

December 2024 · 6 Reads · IEEE Transactions on Mobile Computing

Nam H. Chu · [...] · Eryk Dutkiewicz

By executing offloaded tasks from mobile users, edge computing augments mobile devices with computing/communications resources from edge nodes (ENs), thus enabling new services/applications (e.g., real-time gaming and virtual/augmented reality). However, although ENs are more resourceful than mobile devices, allocating their computing/communications resources to a favorable set of users (e.g., those closer to the ENs) may block other devices from service. This is the case for most existing task offloading and resource allocation approaches, which only aim to maximize the network social welfare or minimize the total energy consumption and do not consider the computing/battery status of each mobile device. This work develops an energy-based proportionally fair task offloading and resource allocation framework for a multi-layer cooperative edge computing network that serves all user equipments (UEs) while considering both their service requirements and their individual energy/battery levels. The resulting optimization involves both binary (offloading decisions) and continuous (resource allocation) variables. To tackle this NP-hard mixed-integer optimization problem, we leverage the fact that the relaxed problem is convex and propose a distributed algorithm, namely the dynamic branch-and-bound Benders decomposition (DBBD). DBBD decomposes the original problem into a master problem (MP) for the offloading decisions and multiple subproblems (SPs) for resource allocation. To quickly eliminate inefficient offloading solutions, the MP is integrated with powerful Benders cuts that exploit the ENs' resource constraints. We then develop a dynamic branch-and-bound algorithm (DBB) to efficiently solve the MP while considering the load balance among ENs. The SPs either admit closed-form solutions or can be solved in parallel at the ENs, thus reducing the complexity. The numerical results show that DBBD returns the optimal solution for maximizing the proportional fairness among UEs and achieves higher fairness indexes, i.e., Jain's index and min-max ratio, than existing approaches that minimize the total consumed energy.
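
Proportional fairness is conventionally obtained by maximizing the sum of the logarithms of the users' utilities, and the fairness metrics quoted above are standard. As a minimal illustration (not the paper's code; the rate values are made up), Jain's index and the min-max ratio can be computed as follows:

```python
# Standard fairness metrics for a vector of per-user allocations x_1, ..., x_n.
def jains_index(x):
    """(sum x)^2 / (n * sum x^2): equals 1 for a perfectly fair allocation,
    and approaches 1/n when a single user receives everything."""
    n = len(x)
    return sum(x) ** 2 / (n * sum(v * v for v in x))

def min_max_ratio(x):
    """min(x) / max(x): also equals 1 when all users receive the same amount."""
    return min(x) / max(x)

rates = [2.0, 1.5, 1.8, 0.9]  # hypothetical per-UE allocations
print(jains_index(rates), min_max_ratio(rates))
```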


Active-RIS Enhances the Multi-User Rate of Multi-Carrier Communications

November 2024 · 13 Reads · IEEE Transactions on Vehicular Technology

This paper explores a multi-user multi-carrier system leveraging an active reconfigurable intelligent surface (RIS), where the joint design of the RIS's programmable reflecting elements and the subcarrier-wise beamformers at the base station is investigated. To overcome the limitation of the conventional design, which aims solely at sum-rate maximization and thus yields zero rates for some users across all subcarriers, failing to boost all users' rates, we propose two alternative designs: one maximizing the geometric mean of the users' rates (GM-rate maximization) and the other maximizing the soft minimum of the users' rates (soft max-min rate optimization). However, both pose challenges as large-scale nonconvex problems, rendering convex-solver computational approaches impractical. To tackle this, we develop iterative computational procedures based on closed-form expressions of scalable complexity. Extensive simulations demonstrate the substantial benefits of these novel designs in significantly enhancing multi-user rates. Notably, under the same power budget, the active-RIS-assisted multi-carrier system achieves approximately twice the minimum user-rate or sum rate compared to RIS-less or passive-RIS-assisted counterparts.
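
For reference, the GM-rate objective above can be written out from its standard definition (the exact constraints of the paper's formulation are omitted here). Maximizing the geometric mean is equivalent to maximizing the sum of log-rates, which explains why no user's rate can collapse to zero: a single zero rate drives the whole objective to zero.

\[
\max\;\Big(\prod_{k=1}^{K} r_k\Big)^{1/K}
\quad\Longleftrightarrow\quad
\max\;\frac{1}{K}\sum_{k=1}^{K}\ln r_k,
\]

where \(r_k\) denotes the achievable rate of user \(k\).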


Timeliness of Information in 5G Non-Terrestrial Networks: A Survey

November 2024 · 53 Reads · 2 Citations · IEEE Internet of Things Journal

This paper explores the significance of the timeliness of information in the context of fifth generation (5G) non-terrestrial networks (NTN). As 5G technology continues to evolve, its integration with non-terrestrial components such as satellites, high-altitude platforms, and unmanned aerial vehicles brings about new possibilities and challenges for ensuring the timely delivery of information. In this paper, we delve into the network structure of NTNs and emphasize the significance of timeliness in various applications, including 5G massive Internet of Things and enhanced Mobile Broadband. We conduct an in-depth review of the design technologies and methodologies that enhance the timeliness of information in these applications. These include network architecture design, resource allocation, protocol design, modulation design, trajectory planning, reconfigurable intelligent surfaces design, energy harvesting scheduling design, offloading strategy design, and caching strategy design. By exploring these technical aspects and solutions, we aim to provide valuable insights into ensuring timely information delivery in 5G NTN. Furthermore, we propose potential future research directions to further improve the timeliness of information in NTNs. Recognizing the importance of timeliness and addressing the related challenges will unlock the full potential of 5G NTN, enabling the successful deployment and operation of a wide range of applications and services that depend on real-time data exchange.
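
In this literature, timeliness is most commonly quantified by the Age of Information (AoI), defined independently of this survey as the time elapsed since the generation of the freshest received update:

\[
\Delta(t) = t - u(t), \qquad
\bar{\Delta} = \lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}\Delta(t)\,dt,
\]

where \(u(t)\) is the generation time of the most recent update received by time \(t\) and \(\bar{\Delta}\) is the long-run average AoI.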


Fig. 1: Overview of a point cloud compression pipeline. Decompression proceeds in the reverse order.
Fig. 2: (a) Our entropy estimation approach with the proposed Convolutional Variational Autoencoder (CVAE), (b) detailed architecture of the proposed CVAE, and (c) our bits-back coding approach.
Point Cloud Compression with Bits-back Coding
  • Preprint
  • File available

October 2024 · 7 Reads

This paper introduces a novel lossless compression method for the geometric attributes of point cloud data based on bits-back coding. Our method uses a deep learning-based probabilistic model to estimate the Shannon entropy of the point cloud information, i.e., the geometric attributes of the 3D points. Once the entropy of the point cloud dataset is estimated with a convolutional variational autoencoder (CVAE), we use the learned CVAE model to compress the geometric attributes of the point clouds with the bits-back coding technique. The novelty of our method lies in utilizing the learned latent variable model of the CVAE to compress the point cloud data: by using bits-back coding, we can capture the potential correlation between data points, such as similar spatial features like shapes and scattering regions, in the lower-dimensional latent space to further reduce the compression ratio. The main insight is that we can achieve a compression ratio competitive with conventional deep learning-based approaches while significantly reducing the overhead of storing and/or communicating the compression codec, making our approach more applicable in practical scenarios. In comprehensive evaluations, we found that this overhead is very small compared to the savings obtained when compressing large point cloud datasets. Experimental results show that our proposed approach achieves a compression ratio of 1.56 bits per point on average, significantly lower than baseline approaches such as Google's Draco at 1.83 bits per point.
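
The bits-back argument underlying this method can be summarized by a standard identity from the bits-back coding literature (stated here generically, not quoted from the paper): encoding a point x via a latent code z sampled from the approximate posterior q(z|x) costs

\[
\mathbb{E}_{q(z\mid x)}\!\big[-\log_2 p(x\mid z) - \log_2 p(z) + \log_2 q(z\mid x)\big]
= -\mathrm{ELBO}(x)\ \text{bits}
\]

net of the bits recovered when the decoder re-derives z, so the expected code length approaches the negative evidence lower bound, which is exactly what the CVAE is trained to minimize.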


Countering Eavesdroppers With Meta-Learning-Based Cooperative Ambient Backscatter Communications

October 2024 · 4 Reads · IEEE Transactions on Wireless Communications

This article introduces a novel lightweight framework using ambient backscattering communications to counter eavesdroppers. In particular, our framework divides an original message into two parts. The first part, i.e., the active-transmit message, is transmitted by the transmitter using conventional RF signals. Simultaneously, the second part, i.e., the backscatter message, is transmitted by an ambient backscatter tag that backscatters upon the active signals emitted by the transmitter. Notably, the backscatter tag does not generate its own signal, making it difficult for an eavesdropper to detect the backscattered signals unless they have prior knowledge of the system. Here, we assume that without decoding/knowing the backscatter message, the eavesdropper is unable to decode the original message. Even in scenarios where the eavesdropper can capture both messages, reconstructing the original message is a complex task without understanding the intricacies of the message-splitting mechanism. A challenge in our proposed framework is to effectively decode the backscattered signals at the receiver, often accomplished using the maximum likelihood (MLK) approach. However, such a method may require a complex mathematical model together with perfect channel state information (CSI). To address this issue, we develop a novel deep meta-learning-based signal detector that can not only effectively decode the weak backscattered signals without requiring perfect CSI but also quickly adapt to a new wireless environment with very little knowledge. Simulation results show that our proposed learning approach, without requiring perfect CSI or a complex mathematical model, can achieve a bit error ratio close to that of the MLK-based approach. They also clearly show the efficiency of the proposed approach in dealing with eavesdropping attacks and the lack of training data for deep learning models in practical scenarios.
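
To make the "quick adaptation with very little knowledge" concrete, the sketch below shows the few-shot adaptation step that meta-learning detectors generally rely on. It is a hypothetical stand-in, not the authors' architecture: the network shape, the I/Q feature layout, and the pilot data are all assumptions.

```python
import copy
import torch
import torch.nn as nn

# Hypothetical detector: maps an I/Q sample (2 features) to one of 2 backscatter symbols.
detector = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))

def adapt(meta_model, pilot_x, pilot_y, steps=5, lr=1e-2):
    """Few-shot adaptation: fine-tune a copy of the meta-trained detector on a
    handful of labeled pilot symbols from the new wireless environment."""
    model = copy.deepcopy(meta_model)  # keep the meta-initialization intact
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(pilot_x), pilot_y).backward()
        opt.step()
    return model

# A few pilot symbols observed after a channel change (random stand-ins here).
pilots = torch.randn(8, 2)
labels = torch.randint(0, 2, (8,))
adapted_detector = adapt(detector, pilots, labels)
```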


A New Class of Analog Precoding for Multi-Antenna Multi-User Communications Over High-Frequency Bands

September 2024 · 11 Reads · IEEE Transactions on Wireless Communications

A network relying on a large antenna-array-aided base station is designed for delivering multiple information streams to multi-antenna users over high-frequency bands such as the millimeter-wave and sub-Terahertz bands. The state-of-the-art analog precoder (AP) dissipates excessive circuit power due to its reliance on a large number of phase shifters. To mitigate the power consumption, we propose a novel AP relying on a controlled number of phase shifters. Within this new AP framework, we design a hybrid precoder (HP) for maximizing the users’ minimum throughput, which poses a computationally challenging problem of large-scale, nonsmooth mixed discrete-continuous log-determinant optimization. To tackle this challenge, we develop an algorithm which iterates through solving convex problems to generate a sequence of HPs that converges to the max-min solution. We also introduce a new framework of smooth optimization termed soft max-min throughput optimization. Additionally, we develop another algorithm, which iterates by evaluating closed-form expressions to generate a sequence of HPs that converges to the soft max-min solution. Simulation results reveal that the HP soft max-min solution approaches the Pareto-optimal solution constructed for simultaneously optimizing both the minimum throughput and sum-throughput. Explicitly, it achieves a minimum throughput similar to directly maximizing the users’ minimum throughput and it also attains a sum-throughput similar to directly maximizing the sum-throughput.
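
The "soft max-min" idea is to replace the nonsmooth minimum with a smooth surrogate so that closed-form iterations become possible. One common choice (shown generically; the paper's exact surrogate may differ) is the log-sum-exp lower bound

\[
\min_{1\le k\le K} r_k \;\ge\; -\frac{1}{\alpha}\ln\sum_{k=1}^{K} e^{-\alpha r_k},
\]

which tightens to the true minimum as the smoothing parameter \(\alpha > 0\) grows.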


Fig. 1: Our proposed collaborative cyberattack detection model. The detection modules are first trained on their local data and then used to detect attacks in the incoming traffic of blockchain networks before it is forwarded to the mining nodes.
Fig. 2: The architecture of a DBN. This architecture includes multiple GRBM and RBM layers for classifying blockchain network traffic.
Fig. 3: Experiment setup in our laboratory. This experiment includes three Ethereum nodes and three servers in a network.
Real-time Cyberattack Detection with Collaborative Learning for Blockchain Networks

July 2024 · 32 Reads

With the ever-increasing popularity of blockchain applications, securing blockchain networks plays a critical role in these cyber systems. In this paper, we first study cyberattacks (e.g., flooding of transactions and brute password attacks) in blockchain networks and then propose an efficient collaborative cyberattack detection model to protect blockchain networks. Specifically, we deploy a blockchain network in our laboratory to build a new dataset including both normal and attack traffic data. The main aim of this dataset is to provide actual attack data, generated from different nodes in the blockchain network, that can be used to train and test blockchain attack detection models. We then propose a real-time collaborative learning model that enables nodes in the network to share learning knowledge without disclosing their private data, thereby significantly enhancing detection performance across the whole network. Extensive simulation and real-time experimental results show that our proposed detection model can detect attacks in the blockchain network with an accuracy of up to 97%.


Collaborative Learning for Cyberattack Detection in Blockchain Networks

July 2024 · 77 Reads · 6 Citations · IEEE Transactions on Systems, Man, and Cybernetics: Systems

This article aims to study intrusion attacks and then develop a novel cyberattack detection framework to detect cyberattacks at the network layer (e.g., brute password and flooding-of-transactions attacks) of blockchain networks. Specifically, we first design and implement a blockchain network in our laboratory. This blockchain network serves two purposes: to generate real traffic data (including both normal data and attack data) for our learning models, and to enable real-time experiments that evaluate the performance of our proposed intrusion detection framework. To the best of our knowledge, this is the first dataset synthesized in a laboratory for cyberattacks in a blockchain network. We then propose a novel collaborative learning model that allows efficient deployment in the blockchain network to detect attacks. The main idea of the proposed learning model is to enable blockchain nodes to actively collect data, learn the knowledge from these data using a Deep Belief Network, and then share the learned knowledge with other blockchain nodes in the network. In this way, we not only leverage the knowledge from all the nodes in the network but also avoid gathering all raw data for training at a centralized node, as conventional centralized learning solutions do. Such a framework also avoids the risk of exposing the privacy of local data as well as excessive network overhead/congestion. Both intensive simulations and real-time experiments clearly show that our proposed intrusion detection framework can achieve an accuracy of up to 98.6% in detecting attacks.
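
The knowledge-sharing step can be pictured as exchanging trained model parameters rather than raw traffic data. Below is a deliberately minimal sketch (not the paper's implementation; the Deep Belief Network is stood in for by a plain parameter vector) of nodes aggregating each other's learned weights by averaging:

```python
import numpy as np

# Illustrative only: nodes share learned model parameters, never raw traffic
# data, and each node aggregates the received parameters by simple averaging.
def share_and_aggregate(all_params):
    """Average the parameter vectors received from the participating nodes."""
    return np.mean(np.stack(all_params), axis=0)

# Stand-in weights for three blockchain nodes' locally trained detectors.
node_params = [np.random.randn(100) for _ in range(3)]
shared = share_and_aggregate(node_params)  # broadcast back to every node
```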



Encrypted Data Caching and Learning Framework for Robust Federated Learning-Based Mobile Edge Computing

June 2024 · 14 Reads · 3 Citations · IEEE/ACM Transactions on Networking

Federated Learning (FL) plays a pivotal role in enabling artificial intelligence (AI)-based mobile applications in mobile edge computing (MEC). However, due to the resource heterogeneity among participating mobile users (MUs), delayed updates from slow MUs may deteriorate the learning speed of the MEC-based FL system, commonly referred to as the straggling problem. To tackle the problem, this work proposes a novel privacy-preserving FL framework that utilizes homomorphic encryption (HE) based solutions to enable MUs, particularly resource-constrained MUs, to securely offload part of their training tasks to the cloud server (CS) and mobile edge nodes (MENs). Our framework first develops an efficient method for packing batches of training data into HE ciphertexts to reduce the complexity of HE-encrypted training at the MENs/CS. On that basis, the mobile service provider (MSP) can incentivize straggling MUs to encrypt part of their local datasets that are uploaded to certain MENs or the CS for caching and remote training. However, caching a large amount of encrypted data at the MENs and CS for FL may not only overburden those nodes but also incur a prohibitive cost of remote training, which ultimately reduces the MSP’s overall profit. To optimize the portion of MUs’ data to be encrypted, cached, and trained at the MENs/CS, we formulate an MSP’s profit maximization problem, considering all MUs’ and MENs’ resource capabilities and data handling costs (including encryption, caching, and training) as well as the MSP’s incentive budget. We then show that the problem is convex and can be efficiently solved using an interior point method. Extensive simulations on a real-world human activity recognition dataset show that our proposed framework can achieve much higher model accuracy (improving up to 24.29%) and faster convergence rate (by 2.86 times) than those of the conventional FedAvg approach when the straggling probability varies between 20% and 80%. Moreover, the proposed framework can improve the MSP’s profit up to 2.84 times compared with other baseline FL approaches without MEN-assisted training.
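
To illustrate the kind of HE packing such a framework relies on, the snippet below uses the open-source TenSEAL library's CKKS scheme to pack a batch of features into a single ciphertext and evaluate an encrypted dot product. The parameters and data are illustrative defaults, not the paper's configuration:

```python
import tenseal as ts

# Illustrative CKKS context (common default parameters, not the paper's).
ctx = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
ctx.global_scale = 2 ** 40
ctx.generate_galois_keys()  # needed for rotations inside dot products

features = [0.5, 1.2, -0.7, 2.0]   # one packed batch of training features
weights = [0.1, -0.3, 0.8, 0.05]   # plaintext model weights

enc_features = ts.ckks_vector(ctx, features)  # encrypt and pack the batch
enc_score = enc_features.dot(weights)         # linear layer output, still encrypted
print(enc_score.decrypt())                    # approximately the plain dot product
```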


Citations (47)


... DL models are good at detecting cyberattacks. DL models also identify new attack types [12]. ...

Reference:

A Deep Transfer Learning Framework for Robust IoT Attack Detection
Collaborative Learning for Cyberattack Detection in Blockchain Networks

IEEE Transactions on Systems, Man, and Cybernetics: Systems

... BCI has huge potential for applications, including gaming, entertainment, neuroprosthetics, and assistive technologies. The integration of BCI with the metaverse can transform the way users communicate with virtual environments [33]. BCI enables natural and spontaneous interaction between the user's thoughts and the virtual world, leading to mesmerizing experiences. ...

Toward BCI-Enabled Metaverse: A Joint Learning and Resource Allocation Approach
  • Citing Conference Paper
  • December 2023

... homomorphic encryption of the weight matrix W (3x3), which is suitable for deep learning [11]. The CKKS scheme provides basic HE algorithms as follows [12]: ...

Encrypted Data Caching and Learning Framework for Robust Federated Learning-Based Mobile Edge Computing
  • Citing Article
  • June 2024

IEEE/ACM Transactions on Networking

... The work in [19] proposes a novel framework called MetaSlicing, designed to manage and allocate various resources effectively for Metaverse applications. Recognizing that Metaverse applications often share common functions, the framework first groups applications into clusters known as MetaInstances. ...

MetaSlicing: A Novel Resource Allocation Framework for Metaverse
  • Citing Article
  • May 2024

IEEE Transactions on Mobile Computing

... In this paper, we consider maximization of the minimum weighted rate/EE, since the achievable rate/EE region can be calculated by solving this problem and employing the rate/EE profile technique [66]-[70]. Additionally, the minimum rate/EE can also be viewed as a metric for fairness among the users [71]. ...

Max-Min Rate Optimization of Low-Complexity Hybrid Multi-User Beamforming Maintaining Rate-Fairness

IEEE Transactions on Wireless Communications

... on the Human Intellect, Book II, Chapter I). According to the Treccani dictionary, talento (masculine noun, from talento¹) means ingenuity, predisposition, capacity, and the relevant intellectual qualities, insofar as they are natural and suited to particular activities [1]. Sometimes talent is confused with genius. ...

Potential Applications and Benefits of Metaverse
  • Citing Chapter
  • October 2023

... This mmWave band presents clear advantages owing to the rapid advances in its sophisticated circuit design [3]. In conjunction with the existing mmWave bands [4]-[6], reconfigurable intelligent surfaces (RIS) have been proposed for enhancing the performance of future wireless systems [7]-[9]. Briefly, a RIS consists of a metasurface having programmable reflecting elements (PREs) that passively manipulate the incident waves, directing them towards desired destinations, unlike traditional signal relaying methods [10]. ...

RIS-Aided Multiple-Input Multiple-Output Broadcast Channel Capacity

IEEE Transactions on Communications

... Optimization method: In the RIS literature, which is mainly based on PIN diodes, there has been extensive research on phase shift designs considering different goals. For example, [32], [33], [34] studied the phase shift design to enhance rate fairness, while [35], [36], [37] focused on maximizing the sum-rate. In addition, other studies have explored designs aimed at maximizing energy efficiency [38], [39], [40] and minimizing transmit power [41], [42], [43]. ...

Rate-Fairness-Aware Low Resolution RIS-Aided Multi-User OFDM Beamforming
  • Citing Article
  • January 2023

IEEE Transactions on Vehicular Technology

... This DL approach outperforms the previously proposed approaches based on convolutional neural networks. Another DL approach employed for cooperative spectrum sensing is deep reinforcement learning [59]-[62]. These deep reinforcement learning approaches improve the robustness of the spectrum sensing system and allow it to make more accurate decisions in dynamic environments. ...

Multi-Agent DRL-Based RIS-Assisted Spectrum Sensing in Cognitive Satellite-Terrestrial Networks
  • Citing Article
  • December 2023

IEEE Wireless Communications Letters