March 2024 · 2 Reads · 1 Citation
Performance Evaluation
April 2023 · 82 Reads · 4 Citations
IEEE Transactions on Parallel and Distributed Systems
Models of parallel processing systems typically assume that one has l workers and jobs are split into an equal number of k=l tasks. Splitting jobs into smaller tasks, i.e. using "tiny tasks", can yield performance and stability improvements because it reduces the variance in the amount of work assigned to each worker, but as k increases, the overhead involved in scheduling and managing the tasks begins to overtake the performance benefit. We perform extensive experiments on the effects of task granularity on an Apache Spark cluster and, based on these, develop a four-parameter model for task and job overhead that, in simulation, produces sojourn time distributions that match those of the real system. We also present analytical results which illustrate how using tiny tasks improves the stability region of split-merge systems, and analytical bounds on the sojourn and waiting time distributions of both split-merge and single-queue fork-join systems with tiny tasks. Finally, we combine the overhead model with the analytical models to produce an analytical approximation to the sojourn and waiting time distributions of tiny-tasks systems that include overhead. We also perform analogous tiny-tasks experiments on a hybrid multi-processor shared-memory system based on MPI and OpenMP, which has no load balancing between nodes. Though no longer strict analytical bounds, our analytical approximations with overhead match both the Spark and MPI/OpenMP experimental results very well.
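The trade-off the abstract describes, variance reduction from finer tasks versus growing scheduling overhead, can be sketched in a few lines of simulation. The exponential task sizes, the fixed per-task overhead, and the greedy longest-task-first placement below are illustrative assumptions, not the paper's four-parameter overhead model.

```python
import random

def job_makespan(k, l, mean_work=1.0, overhead=0.005):
    """One job split into k tasks on l workers. Task sizes are exponential
    with the job's total mean work fixed at mean_work; each task also pays
    a fixed scheduling/management overhead. Tasks are assigned greedily,
    longest first, to the least-loaded worker. Returns the makespan."""
    tasks = [random.expovariate(k / mean_work) + overhead for _ in range(k)]
    loads = [0.0] * l
    for t in sorted(tasks, reverse=True):
        i = min(range(l), key=loads.__getitem__)
        loads[i] += t
    return max(loads)

random.seed(1)
l = 8
for k in (8, 32, 128, 512, 2048):
    samples = [job_makespan(k, l) for _ in range(2000)]
    print(f"k = {k:4d}: mean makespan = {sum(samples) / len(samples):.3f}")
```

With these assumed numbers the mean makespan typically first drops as k grows beyond l = 8, because the maximum over the workers' loads concentrates, and then climbs again once the accumulated per-task overhead dominates, which is the qualitative shape the abstract describes.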
March 2023 · 20 Reads
This paper contributes tail bounds of the age-of-information of a general class of parallel systems and explores their potential. Parallel systems arise in relevant cases, such as in multi-band mobile networks, multi-technology wireless access, or multi-path protocols, just to name a few. Typically, control over each communication channel is limited and random service outages and congestion cause buffering that impairs the age-of-information. The parallel use of independent channels promises a remedy, since outages on one channel may be compensated for by another. Surprisingly, for the well-known case of M|M|1 queues we find the opposite: pooling capacity in one channel performs better than a parallel system with the same total capacity. A generalization is not possible since there are no solutions for other types of parallel queues at hand. In this work, we prove a dual representation of age-of-information in min-plus algebra that connects to queueing models known from the theory of effective bandwidth/capacity and the stochastic network calculus. Exploiting these methods, we derive tail bounds of the age-of-information of parallel G|G|1 queues. In addition to parallel classical queues, we investigate Markov channels where, depending on the memory of the channel, we show the true advantage of parallel systems. We continue to investigate this new finding and provide insight into when capacity should be pooled in one channel or when independent parallel channels perform better. We complement our analysis with simulation results and evaluate different update policies, scheduling policies, and the use of heterogeneous channels, which is most relevant for the latest multi-band networks.
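The surprising M|M|1 comparison can be checked with a short event-driven simulation. This is a sketch under assumed parameters (total update rate 1, total service capacity 2, the update stream split evenly across the two parallel channels), not the paper's min-plus analysis.

```python
import random

def mm1_deliveries(lam, mu, horizon, rng):
    """FCFS M|M|1 sample path: list of (delivery time, generation time)."""
    t, d, out = 0.0, 0.0, []
    while True:
        t += rng.expovariate(lam)              # next update is generated
        if t > horizon:
            return out
        d = max(t, d) + rng.expovariate(mu)    # FCFS departure recursion
        out.append((d, t))

def mean_aoi(deliveries, horizon):
    """Time-average age of the sawtooth process driven by the deliveries."""
    area, last_t, last_g = 0.0, 0.0, 0.0       # assume a fresh update at t = 0
    for d, g in sorted(deliveries):
        if d > horizon:
            break
        if g > last_g:                         # only fresher updates reduce age
            area += ((d - last_g) ** 2 - (last_t - last_g) ** 2) / 2
            last_t, last_g = d, g
    area += ((horizon - last_g) ** 2 - (last_t - last_g) ** 2) / 2
    return area / horizon

rng, H = random.Random(7), 200_000
pooled = mean_aoi(mm1_deliveries(1.0, 2.0, H, rng), H)
parallel = mean_aoi(mm1_deliveries(0.5, 1.0, H, rng)
                    + mm1_deliveries(0.5, 1.0, H, rng), H)
print(f"pooled M|M|1 (rate 2):       mean AoI = {pooled:.2f}")
print(f"two parallel M|M|1 (rate 1): mean AoI = {parallel:.2f}")
```

For these parameters the pooled queue should land near the known M/M/1 mean-AoI value of 1.75, while the parallel pair with the same total capacity comes out worse, reproducing the quoted finding.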
January 2023 · 12 Reads · 7 Citations
IEEE Journal on Selected Areas in Information Theory
This paper contributes tail bounds of the age-of-information of a general class of parallel systems and explores their potential. Parallel systems arise in relevant cases, such as in multi-band mobile networks, multi-technology wireless access, or multi-path protocols, just to name a few. Typically, control over each communication channel is limited and random service outages and congestion cause buffering that impairs the age-of-information. The parallel use of independent channels promises a remedy, since outages on one channel may be compensated for by another. Surprisingly, for the well-known case of M|M|1 queues we find the opposite: pooling capacity in one channel performs better than a parallel system with the same total capacity. A generalization is not possible since there are no solutions for other types of parallel queues at hand. In this work, we prove a dual representation of age-of-information in min-plus algebra that connects to queueing models known from the theory of effective bandwidth/capacity and the stochastic network calculus. Exploiting these methods, we derive tail bounds of the age-of-information of G|G|1 queues. Tail bounds of the age-of-information of independent parallel queues follow readily. In addition to parallel classical queues, we investigate Markov channels where, depending on the memory of the channel, we show the true advantage of parallel systems. We continue to investigate this new finding and provide insight into when capacity should be pooled in one channel or when independent parallel channels perform better. We complement our analysis with simulation results and evaluate different update policies, scheduling policies, and the use of heterogeneous channels, which is most relevant for the latest multi-band networks.
January 2023 · 1 Read
June 2022 · 20 Reads
Age-of-information is a metric that quantifies the freshness of information obtained by sampling a remote sensor. In signal-agnostic sampling, sensor updates are triggered at certain times without being conditioned on the actual sensor signal. Optimal update policies have been researched and it is accepted that periodic updates achieve smaller age-of-information than random updates. We contribute a study of a signal-aware policy, where updates are triggered by a random sensor event. By definition, this implies random updates and, as a consequence, inferior age-of-information. Considering a notion of deviation-of-information as a signal-aware metric, our results show, however, that event-triggered systems can perform as well as time-triggered systems while causing lower mean network utilization.
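A toy discrete-time experiment conveys the comparison between the two triggering policies. The random-walk signal, the zero network delay, and the threshold of 3.0 below are assumptions made for illustration; the receiver's mean absolute error merely stands in for the paper's deviation-of-information metric.

```python
import random

def simulate(threshold=None, period=None, steps=200_000, seed=3):
    """Random-walk sensor; the receiver holds the last received sample.
    Updates are either event-triggered (deviation exceeds threshold) or
    time-triggered (every `period` steps). Returns the receiver's mean
    absolute deviation and the update rate."""
    rng = random.Random(seed)
    x = sent = 0.0
    dev_sum, updates = 0.0, 0
    for t in range(1, steps + 1):
        x += rng.gauss(0.0, 1.0)
        if (threshold is not None and abs(x - sent) > threshold) or \
           (period is not None and t % period == 0):
            sent = x
            updates += 1
        dev_sum += abs(x - sent)
    return dev_sum / steps, updates / steps

ev_dev, ev_rate = simulate(threshold=3.0)
pd_dev, pd_rate = simulate(period=round(1 / ev_rate))  # match the update rate
print(f"event-triggered: mean deviation {ev_dev:.2f}, rate {ev_rate:.3f}")
print(f"time-triggered:  mean deviation {pd_dev:.2f}, rate {pd_rate:.3f}")
```

At a matched update rate the event-triggered policy keeps the deviation pinned near the threshold, whereas the periodic policy lets it drift between samples.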
May 2022 · 3 Reads · 7 Citations
May 2022 · 10 Reads · 4 Citations
February 2022 · 12 Reads
Models of parallel processing systems typically assume that one has l workers and jobs are split into an equal number of k=l tasks. Splitting jobs into smaller tasks, i.e. using "tiny tasks", can yield performance and stability improvements because it reduces the variance in the amount of work assigned to each worker, but as k increases, the overhead involved in scheduling and managing the tasks begins to overtake the performance benefit. We perform extensive experiments on the effects of task granularity on an Apache Spark cluster and, based on these, develop a four-parameter model for task and job overhead that, in simulation, produces sojourn time distributions that match those of the real system. We also present analytical results which illustrate how using tiny tasks improves the stability region of split-merge systems, and analytical bounds on the sojourn and waiting time distributions of both split-merge and single-queue fork-join systems with tiny tasks. Finally, we combine the overhead model with the analytical models to produce an analytical approximation to the sojourn and waiting time distributions of tiny-tasks systems that include overhead. Though no longer strict analytical bounds, these approximations match the Spark experimental results very well in both the split-merge and fork-join cases.
December 2021 · 13 Reads
We consider networked sources that generate update messages with a defined rate and we investigate the age of that information at the receiver. Typical applications are in cyber-physical systems that depend on timely sensor updates. We phrase the age of information in the min-plus algebra of the network calculus. This facilitates a variety of models including wireless channels and schedulers with random cross-traffic, as well as sources with periodic and random updates, respectively. We show how the age of information depends on the network service where, e.g., outages of a wireless channel cause delays. Further, our analytical expressions show two regimes depending on the update rate, where the age of information is either dominated by congestive delays or by idle waiting. We find that the optimal update rate strikes a balance between these two effects.
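The two regimes can be made concrete with the classic closed form for the mean age of information of an M/M/1 FCFS queue, Δ = (1/μ)(1 + 1/ρ + ρ²/(1 − ρ)) with utilization ρ = λ/μ, a standard benchmark (Kaul, Yates and Gruteser, 2012) rather than the min-plus bounds derived in the paper: the 1/ρ term captures idle waiting at low update rates and the ρ²/(1 − ρ) term captures congestive delay at high rates.

```python
def mean_aoi(rho, mu=1.0):
    """Mean AoI of an M/M/1 FCFS queue; rho = lambda/mu < 1."""
    return (1 + 1 / rho + rho ** 2 / (1 - rho)) / mu

# Sweep the utilization: AoI blows up at both ends, minimal in between.
for rho in (0.05, 0.2, 0.4, 0.53, 0.7, 0.9, 0.99):
    print(f"rho = {rho:4.2f}: mean AoI = {mean_aoi(rho):7.2f}")

rho_star = min((r / 1000 for r in range(1, 1000)), key=mean_aoi)
print(f"optimal utilization rho* ~ {rho_star:.3f}")
```

The sweep bottoms out near ρ ≈ 0.53, the balance point between idle waiting and congestive delay that the abstract refers to.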
... [12] studies the age-delay trade-off in the G/G/∞ queue. [13] observes that a single M/M/1 queue has better age performance than independent parallel M/M/1 queues with the same total capacity. [14] analyzes age in a network of finitely many identical, memoryless parallel servers, where each server is an LCFS queue with preemption in service. ...
January 2023
IEEE Journal on Selected Areas in Information Theory
... Jian et al. (2021) introduced a model to predict service duration based on users' historical data. For in-depth insights into service splitting, refer to the works of Zhang et al. (2020) and Bora et al. (2023). Each subservice on a separate physical node accounts for a portion of the total demand and operates autonomously (Xu et al., 2021). ...
April 2023
IEEE Transactions on Parallel and Distributed Systems
... IV we derive statistical bounds of the age of G|G|1 systems. Compared to [12], we take advantage of statistical independence of arrivals and service and use Doob's Martingale inequality to tighten the bounds. ...
May 2022
... For broader classes of systems, a variety of approximation techniques have been used [13]-[18]. More recently, several researchers have used stochastic network calculus to derive performance bounds [1], [19]-[22]. Many examples of the fork-join pattern being used in practice are given in [23]. ...
May 2022
... Using a finer granularity, i.e. taking k > l ("tiny tasks"), can actually have a substantial positive impact on system performance. This has been noted by practitioners [2]-[4], but so far only [1], of which this paper is an extension, provides analytical results relating task granularity to parallel system performance. ...
July 2020
... The LSTM shows better performance than the other two approaches by achieving only a 3% error rate. Khangura et al. [43] estimated the available bandwidth using a shallow neural network. They used vectors of packet dispersion as input features, which are characteristic of the available bandwidth. ...
May 2019
Computer Communications
... Other network analysis issues can simply be addressed using this approach. Khangura et al. [72] trained a neural network on packet dispersion vectors, which are characteristic of the available bandwidth. To choose the next detection rate, an iterative neural network was suggested rather than a binary search approach. ...
May 2018
... While proper scheduling can already be beneficial for latencysensitive applications, the introduction of a certain level of redundancy can help even further. Previous studies have proposed several approaches that transmit multiple copies of packets over different interfaces [12,23,28]. These redundancy approaches share the limitation that a lost packet can only be recovered by its exact copy. ...
October 2018
...
Reference · Year · Gen. · Evaluation Method · Metrics
Mouawad et al. [172] · 2021 · 4G · Analysis/Simulation · PDR
Bartoletti et al. [173] · 2021 · 4G · Analysis/Simulation · Range, PRR
Makinaci et al. [174] · 2021 · 4G · Analysis/Simulation · PRR
Kim et al. [175] · 2022 · 4G · Analysis/Simulation · Latency, Range, PRR
Sabeeh et al. [176] · 2023 · 4G · Analysis/Simulation · PRR, PCP
Guo et al. [177] · 2023 · 4G · Analysis/Simulation · Throughput, PLR
Nguyen et al. [178] · 2023 · 4G · Analysis/Simulation · Signal Strength, PER, PDR
Parvini et al. [179] · 2023 · 4G · Analysis/Simulation · AoI
Chen et al. [180] · 2023 · 4G · Analysis/Simulation · Latency
Akselrod et al. [181] · 2017 · 4G · Implementation · Throughput, Signal Strength, SINR
Lauridsen et al. [182] · 2017 · 4G · Implementation · Latency, Signal Strength
Walelgne et al. [183] · 2018 · 4G · Implementation · Throughput
Neumeier et al. [184] · 2019 · 4G · Implementation · Latency, Throughput
Burke et al. [185] · 2020 · 4G · Implementation · Latency
Gaber et al. [186] · 2020 · 4G · Implementation · Latency, Throughput
Niebisch et al. [187] · 2020 · 4G · Implementation · PER
Toril et al. [188] · 2021 · 4G · Implementation · Signal Strength
Aissioui et al. [189] · 2018 · 5G · Analysis/Simulation · Latency
Campolo et al. [190] · 2019 · 5G · Analysis/Simulation · Latency, PRR
Chekired et al. [191] · 2019 · 5G · Analysis/Simulation · Latency
Wang et al. [192] · 2019 · 5G · Analysis/Simulation · PRR, PLR
Deinlein et al. [193] · 2020 · 5G · Analysis/Simulation · Latency, PDV
Xiaoqin et al. [194] · 2020 · 5G · Analysis/Simulation · Throughput
Lucas-Estañ et al. [195] · 2020 · 5G · Analysis/Simulation · Latency, Capacity, PDR
Huang et al. [196] · 2020 · 5G · Analysis/Simulation · Throughput, Capacity
Ali et al. [197] · 2021 · 5G · Analysis/Simulation · PDV, Throughput, PRR
Yoon et al. [198] · 2021 · 5G · Analysis/Simulation · Range, PRR
Ali et al. [199] · 2021 · 5G · Analysis/Simulation · PDV, Capacity
Saad et al. [200] · 2022 · 5G · Analysis/Simulation · PDR
Khan et al. [201] · 2022 · 5G · Analysis/Simulation · Latency, Throughput, PLR, SINR
Wu et al. [202] · 2023 · 5G · Analysis/Simulation · Throughput, PRR
Ogawa et al. [203] · 2018 · 5G · Implementation · Latency, Throughput
Kutila et al. [204] · 2020 · 5G · Implementation · Latency, Throughput, Range
Szalay et al. [205] · 2020 · 5G · Implementation · Latency, Throughput, PER
Kutila et al. [206] · 2021 · 5G · Implementation · Latency, PDR
Daengsi et al. [207] · 2021 · 5G · Implementation · Latency, Throughput, PLR
Pan et al. [208] · 2021 · 5G · Implementation · Latency, Throughput
Martin-Sacristan et al. [209] · 2020 · 4G/5G · Analysis/Simulation · PRR, Latency
Saad et al. [210] · 2021 · 4G/5G · Analysis/Simulation · PDR
Shin et al. [211] · 2023 · 4G/5G · Analysis/Simulation · PRR
... communications require the frequent transmission of very small packets and as such incur problematic overhead. While architecture plays a significant role in the performance of data flows, empirical tests in [181] show that SINR and RSSI have the biggest effect on throughput performance for this interface due to the MCS adaptation and scheduling algorithms implemented by the base station. ...
September 2017
... It is worth noting that packet dispersion techniques are especially affected by the presence of cross-traffic, which can either decrease (first packet delayed more than the second) or increase (second packet delayed more than the first) the measured delta, thus affecting the accuracy of bandwidth estimation. It follows that a number of strategies have been developed to filter out cross-traffic error, such as sending trains of packets with different sizes and using statistical or machine learning-based models (including linear regression, Kalman filters [12], neural networks [11], and measurement repetition) to filter out bad samples. ...
June 2017