Daniel E. Lucani
Aalborg University · Department of Electronic Systems

Doctor of Engineering

About

Publications: 217
Reads: 19,886
Citations: 3,133
Additional affiliations
August 2012 - present: Aalborg University, Associate Professor
April 2010 - July 2012: University of Porto, Assistant Professor
September 2006 - April 2010: Massachusetts Institute of Technology, Research Assistant

Publications (217)
Preprint
Full-text available
Cloud Service Providers (CSPs) offer a vast amount of storage space at competitive prices to cope with the growing demand for digital data storage. Dual deduplication is a recent framework designed to improve data compression on the CSP while keeping clients' data private from the CSP. To achieve this, clients perform lightweight information-theore...
Preprint
Full-text available
We consider the problem of sharing sensitive or valuable files across users while partially relying on a common, untrusted third-party, e.g., a Cloud Storage Provider (CSP). Although users can rely on a secure peer-to-peer (P2P) channel for file sharing, this introduces potential delay on the data transfer and requires the sender to remain active a...
Article
Full-text available
Random Linear Network Coding (RLNC) is an erasure network coding technique used to improve communication and content distribution. However, RLNC is not efficient for data streaming applications, e.g., video streaming, where the data packets must be delivered in order and decoded within a tight deadline. Although some approaches have been proposed,...
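The in-order delivery problem stems from how RLNC decodes: nothing may be recoverable until enough linearly independent combinations have arrived. A minimal, self-contained sketch of RLNC over GF(2), where a linear combination is a bitwise XOR (function names are illustrative, not from the paper):

```python
import random

def rlnc_encode(packets, rng):
    """One coded packet: a random GF(2) combination (XOR) of the sources."""
    coeffs = [rng.randint(0, 1) for _ in packets]
    coded = bytes(len(packets[0]))
    for c, p in zip(coeffs, packets):
        if c:
            coded = bytes(a ^ b for a, b in zip(coded, p))
    return coeffs, coded

def rlnc_decode(received, n):
    """Gauss-Jordan elimination over GF(2); returns the n source packets,
    or None while the received combinations are not yet full rank."""
    rows = [(list(c), bytearray(p)) for c, p in received]
    for col in range(n):
        pivot = next((r for r in range(col, len(rows)) if rows[r][0][col]), None)
        if pivot is None:
            return None
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:
                rows[r] = ([a ^ b for a, b in zip(rows[r][0], rows[col][0])],
                           bytearray(a ^ b for a, b in zip(rows[r][1], rows[col][1])))
    return [bytes(rows[i][1]) for i in range(n)]

rng = random.Random(42)
packets = [b"pkt0", b"pkt1", b"pkt2"]
received, decoded = [], None
while decoded is None:  # collect coded packets until the system is solvable
    received.append(rlnc_encode(packets, rng))
    decoded = rlnc_decode(received, len(packets))
assert decoded == packets
```

Note that the decoder may return nothing for several packet arrivals and then release the whole generation at once, which is exactly the bursty delivery pattern that conflicts with tight streaming deadlines.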
Preprint
High frame-corruption is widely observed in Long Range Wide Area Networks (LoRaWAN) due to the coexistence with other networks in ISM bands and an Aloha-like MAC layer. LoRa's Forward Error Correction (FEC) mechanism is often insufficient to retrieve corrupted data. In fact, real-life measurements show that at least one-fourth of received transmiss...
Conference Paper
Mobile edge computing pushes computationally-intensive services closer to the user to provide reduced delay due to physical proximity. This has led many to consider deploying deep learning models on the edge – commonly known as edge intelligence (EI). EI services can have many model implementations that provide different QoS. For instance, one mode...
Article
We introduce Titchy, a compression method for time-series data generated by the Internet of Things. Our proposed method is flexible and has several advantages when applied in the IoT ecosystem: (a) it is able to compress even when only a small amount of memory can be allocated to it; (b) it compresses data in tiny chunks, so it introduces very litt...
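Titchy's actual pipeline is not reproduced here; as a generic illustration of why chunk-wise compression of slowly varying sensor data can work with tiny memory and latency budgets, a delta-encoding sketch (hypothetical names, not the paper's algorithm):

```python
def delta_encode(samples):
    """Store the first sample, then successive differences.
    Slowly varying sensor data yields many small deltas, which a
    byte-oriented entropy coder can then pack tightly."""
    deltas = [samples[0]]
    for prev, cur in zip(samples, samples[1:]):
        deltas.append(cur - prev)
    return deltas

def delta_decode(deltas):
    samples = [deltas[0]]
    for d in deltas[1:]:
        samples.append(samples[-1] + d)
    return samples

readings = [1000, 1002, 1001, 1005, 1005, 1004]
assert delta_encode(readings) == [1000, 2, -1, 4, 0, -1]
assert delta_decode(delta_encode(readings)) == readings
```

Because each delta depends only on the previous sample, such a scheme can compress arbitrarily small chunks with constant memory, matching properties (a) and (b) claimed above.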
Preprint
Full-text available
Smart electricity meters typically upload readings a few times a day. Utility providers aim to increase the upload frequency in order to access consumption information in near real time, but the legacy compressors fail to provide sufficient savings on the low-bandwidth, high-cost data connection. We propose a new compression method and data format...
Preprint
Full-text available
[Peer-reviewed and accepted for publication at The 1st International Workshop on Big Data and Machine Learning for Networking, held under the IEEE ICCCN 2021 conference.] Mobile edge computing pushes computationally-intensive services closer to the user to provide reduced delay due to physical proximity. This has led many to consider deploying deep...
Article
Timely delivery of sensor data is crucial for a wide array of Internet of Things (IoT) applications. Due to the large space- and time-correlation of sensor data, there is a high potential for compression. However, conventional wisdom dictates that compression is at odds with information freshness and timely delivery of data. The reason is that suff...
Article
Revolving codes (ReC) are a new family of network codes that reduce signaling overhead and maintain high decoding probability, which is key in applications with small payloads, e.g., IoT, Industry 4.0. However, they have only been studied using simulations. We present i) the first exact mathematical model for the total overhead and decoding probabi...
Preprint
Full-text available
Network appliances continue to offer novel opportunities to offload processing from computing nodes directly into the data plane. One popular concern of network operators and their customers is to move data increasingly faster. A common technique to increase data throughput is to compress it before its transmission. However, this requires compressi...
Article
Full-text available
In the above article [1], in Section II-B.1)a, titled “Outer Encoding,” the first sentence of the second paragraph should be corrected to consistently use $\Omega_{j}$ to denote the coded packets, i.e., this sentence should state: “For systematic outer encoding, the outer encoding vectors for coded packets $\Omega_{j},\,\,j = 1, 2, \ldots,...
Preprint
Full-text available
With the increasing demand for computationally intensive services like deep learning tasks, emerging distributed computing platforms such as edge computing (EC) systems are becoming more popular. Edge computing systems have shown promising results in terms of latency reduction compared to the traditional cloud systems. However, their limited proces...
Conference Paper
Full-text available
Given the large and sustained growth in the number of smart meters for different applications, e.g., electricity, water or heat, effective data compression has become increasingly important. Although smart meters tend to encrypt payloads using state-of-the-art solutions, the packet length variability introduced by compression of the data can be exp...
Article
The reliability of communication channels is one of the most challenging tasks in wireless communication. Network coding (NC) has emerged as a compelling solution to improve throughput performance and reliability by sending coded packets and accepting valid ones at the receiver. Discarding corrupted coded packets in NC often meant discarding potent...
Chapter
Cloud Storage Providers (CSPs) offer solutions to relieve users from locally storing vast amounts of data, including personal and sensitive ones. While users may desire to retain some privacy on the data they outsource, CSPs are interested in reducing the total storage space by employing compression techniques such as deduplication. We propose a ne...
Preprint
This paper proposes Yggdrasil, a protocol for privacy-aware dual data deduplication in multi client settings. Yggdrasil is designed to reduce the cloud storage space while safeguarding the privacy of the client's outsourced data. Yggdrasil combines three innovative tools to achieve this goal. First, generalized deduplication, an emerging technique...
Preprint
Vehicles generate a large amount of data from their internal sensors. This data is not only useful for a vehicle's proper operation, but it provides car manufacturers with the ability to optimize performance of individual vehicles and companies with fleets of vehicles (e.g., trucks, taxis, tractors) to optimize their operations to reduce fuel costs...
Preprint
Full-text available
With the advent of the Internet of Things (IoT), the ever growing number of connected devices observed in recent years and foreseen for the next decade suggests that more and more data will have to be transmitted over a network, before being processed and stored in data centers. Generalized deduplication (GD) is a novel technique to effectively red...
Article
Full-text available
Fulcrum coding combines a high-field outer Random Linear Network Coding (RLNC) that generates outer coding expansion packets with a small-field inner RLNC that combines the source packets and the outer coding expansion packets. This two-layer Fulcrum coding allows flexible decoding in receivers with heterogeneous computational capabilities. However...
Article
Full-text available
One of the by-products of Sparse Network Coding (SNC) is the ability to perform partial decoding, i.e., decoding some original packets prior to collecting all needed coded packets to decode the entire coded data. Due to this ability, SNC has been recently used as a technique for reducing the Average Decoding Delay (ADD) per packet in real-time mult...
Article
Full-text available
Cloud computing considerably reduces the costs of deploying applications through on-demand, automated and fine-granular allocation of resources. Even in private settings, cloud computing platforms enable agile and self-service management, which means that physical resources are shared more efficiently...
Conference Paper
Full-text available
Cloud and distributed storage applications require processing of large fragments of data. This poses memory, delay, and processing speed challenges for systems using erasure codes to reduce the cost of storage and/or increase the reliability of the system. To address these, this paper proposes and deploys designs that exploit current multi-threadin...
Article
This letter characterizes the optimal policy for minimizing the bandwidth use and the number of required connections between nodes for repairing losses in scenarios that use the Internet of Things (IoT) devices for distributed storage. In particular, we consider the problem of a system that will not receive replacement nodes to compensate for node...
Article
Random linear network coding (RLNC) can enhance the reliability of multimedia transmissions over lossy communication channels. However, RLNC has been designed for equal size packets, while many efficient multimedia compression schemes, such as variable bitrate (VBR) video compression, produce unequal packet sizes. Padding the unequal packet sizes w...
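The cost of padding is easy to quantify: if every packet is zero-padded to the size of the largest one, the wasted fraction grows with size variability. A toy calculation with illustrative numbers:

```python
def zero_padding_overhead(sizes):
    """Fraction of transmitted bytes that are padding when all packets
    are zero-padded to the size of the largest one."""
    padded_total = max(sizes) * len(sizes)
    return (padded_total - sum(sizes)) / padded_total

# VBR-like packet sizes: one large I-frame, many small P-frames.
sizes = [1500, 200, 180, 220, 190]
assert zero_padding_overhead(sizes) > 0.6  # most transmitted bytes are padding
assert zero_padding_overhead([1500, 1500]) == 0.0
```

With one dominant packet, well over half of the coded payload can be zero bytes, which is the inefficiency this line of work targets.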
Preprint
Full-text available
We study a generalization of deduplication, which enables lossless deduplication of highly similar data and show that standard deduplication with fixed chunk length is a special case. We provide bounds on the expected length of coded sequences for generalized deduplication and show that the coding has asymptotic near-entropy cost under the proposed...
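Generalized deduplication extends exact-match dedup by splitting each chunk into a "base" shared by near-identical chunks plus a small "deviation". A toy sketch in which the base is simply the chunk minus its last byte (the transforms analyzed in the paper are more sophisticated; all names here are hypothetical):

```python
def gd_store(chunks):
    """Toy generalized deduplication: base = chunk minus its last byte,
    deviation = the last byte. Near-identical chunks share one base."""
    bases = {}   # base -> id
    stored = []  # (base_id, deviation) per chunk
    for chunk in chunks:
        base, dev = chunk[:-1], chunk[-1:]
        bid = bases.setdefault(base, len(bases))
        stored.append((bid, dev))
    return bases, stored

def gd_restore(bases, stored):
    by_id = {v: k for k, v in bases.items()}
    return [by_id[bid] + dev for bid, dev in stored]

chunks = [b"sensor-A-17", b"sensor-A-18", b"sensor-B-17"]
bases, stored = gd_store(chunks)
assert gd_restore(bases, stored) == chunks
assert len(bases) == 2  # the two A-chunks share a single base
```

Standard deduplication is recovered as the special case where the deviation is empty, i.e., only byte-identical chunks are matched.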
Article
Full-text available
We introduce Fulcrum, a network coding framework that achieves three seemingly conflicting objectives: (i) to reduce the coding coefficient overhead down to nearly n bits per packet in a generation of n packets; (ii) to conduct the network coding using only GF(2) operations at intermediate nodes if necessary, dramatically reducing computing complex...
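Objective (i) can be checked with back-of-the-envelope arithmetic: a dense coding vector costs n·log2(q) bits, so keeping the inner code in GF(2) shrinks the per-packet coefficient overhead by the symbol width (a sketch of the arithmetic, not Fulcrum's actual header format):

```python
import math

def coeff_overhead_bits(n, q):
    """Bits to carry a dense coding vector for a generation of n packets
    over GF(q): n coefficients of log2(q) bits each."""
    return n * int(math.log2(q))

assert coeff_overhead_bits(64, 2) == 64     # GF(2) inner code: n bits per packet
assert coeff_overhead_bits(64, 256) == 512  # GF(2^8): 8x the overhead
```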
Article
This paper presents and characterizes the performance of CORE, a protocol that brings together the efficiency in spectrum usage of inter–session network coding schemes and the robustness against packet losses of intra–session network coding. We provide in-depth mathematical analysis of the gains of CORE followed by protocol design and implementatio...
Conference Paper
In this paper, we present a performance study of the impact of generation and symbol sizes on latency for encoding with Random Linear Network Coding (RLNC). This analysis is important for low latency applications of RLNC as well as data storage applications that use large blocks of data, where the encoding process can be parallelized based on syste...
Article
This letter characterizes the optimal policies for bandwidth use and storage for the problem of distributed storage in Internet of Things (IoT) scenarios, where lost nodes cannot be replaced by new nodes as is typically assumed in Data Center and Cloud scenarios. We develop an information flow model that captures the overall process of data transmi...
Article
Full-text available
Network coding approaches typically consider an unrestricted recoding of coded packets in the relay nodes to increase performance. However, this can expose the system to pollution attacks that cannot be detected during transmission, until the receivers attempt to recover the data. To prevent these attacks while allowing for the benefits of coding i...
Preprint
Network coding approaches typically consider an unrestricted recoding of coded packets in the relay nodes for increased performance. However, this can expose the system to pollution attacks that cannot be detected during transmission, until the receivers attempt to recover the data. To prevent these attacks while allowing for the benefits of coding in...
Conference Paper
Random network coding is a method that achieves multicast capacity asymptotically for general networks [1, 7]. In this approach, vertices in the network randomly and linearly combine incoming information in a distributed manner before forwarding it through their outgoing edges. To ensure success, the involved finite field needs to be large enough [...
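The field-size condition can be made concrete: n random combinations over GF(q) form a full-rank (decodable) set with probability equal to that of an n x n uniformly random matrix over GF(q) being invertible, i.e., the product of (1 - q^-i) for i = 1..n. This tends to roughly 0.29 for q = 2 but stays above 0.99 for q = 256. A quick numerical check:

```python
def p_full_rank(q, n):
    """Probability that an n x n uniformly random matrix over GF(q)
    is invertible: prod over i = 1..n of (1 - q**-i)."""
    p = 1.0
    for i in range(1, n + 1):
        p *= 1.0 - q ** -i
    return p

assert p_full_rank(2, 1) == 0.5            # a single random bit must be 1
assert 0.28 < p_full_rank(2, 100) < 0.29   # GF(2): noticeable failure rate
assert p_full_rank(256, 100) > 0.99        # GF(2^8): near-certain success
```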
Conference Paper
Full-text available
The zero-padding overhead created when performing Random Linear Network Coding (RLNC) on unequal-sized packets can curb its promising benefits since it can be as high as the data to convey. The concept of macro-symbol coding was introduced recently in order to reduce the zero-padding overhead that RLNC has brought. Macro-symbols are subsets of the...
Article
Although network coding has shown the potential to revolutionize networking and storage, its deployment has faced a number of challenges. Usual proposals involve two approaches. First, deploying a new protocol (e.g., Multipath Coded TCP), or retrofitting another one (e.g., TCP/NC) to deliver benefits to any application in a computer. How...
Article
Full-text available
High capacity storage systems distribute files across several storage devices (nodes) and apply an erasure code to meet availability and reliability requirements. Since devices can lose network connectivity or fail permanently, a dynamic repair mechanism must be put in place. In such cases a new recovery node gets connected to a given subset of the...
Article
Random Linear Network Coding (RLNC) has been shown to offer an efficient communication scheme, leveraging a remarkable robustness against packet losses. However, it suffers from a high computational complexity, and some novel approaches, which follow the same idea, have been recently proposed. One of such solutions is Sparse Network Coding (SNC), w...
Conference Paper
Full-text available
Perpetual codes provide a sparse, but structured coding for fast encoding and decoding. In this work, we illustrate that perpetual codes introduce linear dependent packet transmissions in the presence of an erasure channel. We demonstrate that the number of linear dependent packet transmissions is highly dependent on a parameter called the width (\...