Markus Fidler’s research while affiliated with Leibniz Universität Hannover and other places

Publications (103)


Age- and deviation-of-information of hybrid time- and event-triggered systems: What matters more, determinism or resource conservation?
  • Article

March 2024 · 2 Reads · 1 Citation · Performance Evaluation · Markus Fidler

Fig. 6. Schematic of the Spark model [41].
Fig. 11. The stability regions of split-merge and fork-join simulated with and without task and job overhead, with l = 50 parallel workers.
Fig. 13. Comparison of the sojourn time bounds of the single-queue fork-join and split-merge models with l = 50 servers and k tiny tasks. As a reference, the sojourn time bound of a system with ideal partition, where a job is partitioned into l equisized tasks, is shown. Jobs have exponential inter-arrival times with parameter λ = 0.5 s⁻¹ and are composed of k exponential tiny tasks with parameter µ = k/l. The bounds are exceeded with probability at most ε = 10⁻⁶.
The Tiny-Tasks Granularity Trade-Off: Balancing Overhead vs. Performance in Parallel Systems
  • Article
  • Full-text available

April 2023 · 82 Reads · 4 Citations · IEEE Transactions on Parallel and Distributed Systems

Models of parallel processing systems typically assume that one has l workers and jobs are split into an equal number of k = l tasks. Splitting jobs into k > l smaller tasks, i.e. using “tiny tasks”, can yield performance and stability improvements because it reduces the variance in the amount of work assigned to each worker, but as k increases, the overhead involved in scheduling and managing the tasks begins to overtake the performance benefit. We perform extensive experiments on the effects of task granularity on an Apache Spark cluster, and based on these, develop a four-parameter model for task and job overhead that, in simulation, produces sojourn time distributions that match those of the real system. We also present analytical results which illustrate how using tiny tasks improves the stability region of split-merge systems, and analytical bounds on the sojourn and waiting time distributions of both split-merge and single-queue fork-join systems with tiny tasks. Finally, we combine the overhead model with the analytical models to produce an analytical approximation to the sojourn and waiting time distributions of systems with tiny tasks which include overhead. We also perform analogous tiny-tasks experiments on a hybrid multi-processor shared memory system based on MPI and OpenMP which has no load-balancing between nodes. Though no longer strict analytical bounds, our analytical approximations with overhead match both the Spark and MPI/OpenMP experimental results very well.
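To make the granularity effect concrete, here is a minimal Python sketch of a split-merge queue with tiny tasks. It rests on simplifying assumptions of mine (Poisson job arrivals, exponential task sizes with mean l/k, greedy assignment of tasks to idle workers, no scheduling overhead); the function names and parameter values are illustrative and not taken from the paper.

    import heapq
    import random
    import statistics

    def job_makespan(k, l, rng):
        # Makespan of k i.i.d. exponential tasks (mean l/k each) that idle workers
        # pick up greedily; this is the job's service time in a split-merge system.
        finish = [0.0] * l                      # next-free times of the l workers
        heapq.heapify(finish)
        for _ in range(k):
            t = heapq.heappop(finish)           # earliest idle worker takes a task
            heapq.heappush(finish, t + rng.expovariate(k / l))
        return max(finish)

    def split_merge_sojourns(lam, k, l, n_jobs, seed=1):
        # FIFO split-merge: job n starts service only after job n-1 has departed.
        rng = random.Random(seed)
        arrival, departure, sojourns = 0.0, 0.0, []
        for _ in range(n_jobs):
            arrival += rng.expovariate(lam)     # Poisson job arrivals
            departure = max(arrival, departure) + job_makespan(k, l, rng)
            sojourns.append(departure - arrival)
        return sojourns

    l, lam = 50, 0.15                           # 50 workers; load chosen so that
                                                # even the coarse k = l case is stable
    for k in (l, 4 * l, 20 * l):                # coarse tasks vs. tiny tasks
        s = split_merge_sojourns(lam, k, l, n_jobs=10000)
        print(f"k = {k:5d}: mean sojourn {statistics.mean(s):6.2f}, "
              f"99th percentile {sorted(s)[int(0.99 * len(s))]:6.2f}")

In this toy model, increasing k at fixed load shrinks both the mean and the tail of the sojourn time, since a job's makespan approaches that of an ideal l-way partition.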


Statistical Age-of-Information Bounds for Parallel Systems: When Do Independent Channels Make a Difference?

March 2023 · 20 Reads

This paper contributes tail bounds of the age-of-information of a general class of parallel systems and explores their potential. Parallel systems arise in relevant cases, such as in multi-band mobile networks, multi-technology wireless access, or multi-path protocols, just to name a few. Typically, control over each communication channel is limited and random service outages and congestion cause buffering that impairs the age-of-information. The parallel use of independent channels promises a remedy, since outages on one channel may be compensated for by another. Surprisingly, for the well-known case of M|M|1 queues we find the opposite: pooling capacity in one channel performs better than a parallel system with the same total capacity. A generalization is not possible since there are no solutions for other types of parallel queues at hand. In this work, we prove a dual representation of age-of-information in min-plus algebra that connects to queueing models known from the theory of effective bandwidth/capacity and the stochastic network calculus. Exploiting these methods, we derive tail bounds of the age-of-information of parallel G|G|1 queues. In addition to parallel classical queues, we investigate Markov channels where, depending on the memory of the channel, we show the true advantage of parallel systems. We continue to investigate this new finding and provide insight into when capacity should be pooled in one channel or when independent parallel channels perform better. We complement our analysis with simulation results and evaluate different update policies, scheduling policies, and the use of heterogeneous channels that is most relevant for the latest multi-band networks.
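The M|M|1 comparison can be reproduced with a few lines of simulation. The sketch below is an assumption-laden toy of mine (FIFO queues, Poisson sampling, random splitting of updates across the two paths, and a time-average rather than a tail metric), not the paper's analysis; it only illustrates how the age at the monitor is computed from generation and delivery time stamps.

    import random

    def mm1_departures(arrivals, mu, rng):
        # FIFO M|M|1: departure(n) = max(arrival(n), departure(n-1)) + service(n).
        dep, out = 0.0, []
        for a in arrivals:
            dep = max(a, dep) + rng.expovariate(mu)
            out.append(dep)
        return out

    def time_average_age(pairs, horizon):
        # Time-average age at the monitor for (generation time, delivery time) pairs.
        pairs = sorted(pairs, key=lambda p: p[1])       # order by delivery time
        area, last, fresh = 0.0, 0.0, 0.0               # fresh = newest delivered T_A
        for gen, dlv in pairs:
            if dlv > horizon:
                break
            area += ((dlv - fresh) ** 2 - (last - fresh) ** 2) / 2
            last = dlv
            fresh = max(fresh, gen)                     # stale deliveries do not help
        area += ((horizon - fresh) ** 2 - (last - fresh) ** 2) / 2
        return area / horizon

    rng = random.Random(2)
    lam, horizon, arrivals, t = 0.5, 2e5, [], 0.0
    while t < horizon:                                  # Poisson sampling times
        t += rng.expovariate(lam)
        arrivals.append(t)

    single = list(zip(arrivals, mm1_departures(arrivals, 2.0, rng)))   # pooled rate 2

    a1, a2 = [], []
    for a in arrivals:                                  # random split onto two paths
        (a1 if rng.random() < 0.5 else a2).append(a)
    parallel = (list(zip(a1, mm1_departures(a1, 1.0, rng))) +
                list(zip(a2, mm1_departures(a2, 1.0, rng))))

    print("single M|M|1, rate 2      :", round(time_average_age(single, horizon), 2))
    print("two parallel M|M|1, rate 1:", round(time_average_age(parallel, horizon), 2))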


Fig. 1. Parallel system. A sensor is sampled at times T_A(n), where n ≥ 1 is the sample index. The samples are transmitted as packets via a network, where the arrival stream is split and transmitted in parallel using k different network paths, denoted as queueing subsystems S_1, S_2, ..., S_k. The resulting departure time-stamps T_D(n) are not necessarily in order, since packets may overtake each other on different paths. A variant [23] uses several independent sampling processes (from one or more sensors) that are transmitted via different network paths. In this case splitting does not apply.
Fig. 2. Example trajectories of the age of a single on-off channel with capacity 2 (top, black line) compared to two independent parallel on-off channels each with capacity 1 (bottom, red line). Gray, crossed slots mark channel outages.
Fig. 3. Single versus parallel M|M|1 queues. We show the CCDF at ε = 10⁻⁶. Lines are analytical results and the markers simulation results. While the parallel system improves the age compared to the single system with mean service rate r = 1, it is outperformed by the single system with rate r = 2.
Fig. 5. Single M|D|1 queue with service rate r = 2 versus two parallel M|D|1 queues each with rate r = 1. Bounds with ε = 10⁻⁶ (age: solid lines, delay: dashed lines) compared to simulation results (dotted lines with markers).
Fig. 6. Tail decay of the M|M|1 queue. Bounds compared to exact results. For update interval w ≈ 1 the age is minimal.
Statistical Age-of-Information Bounds for Parallel Systems: When Do Independent Channels Make a Difference?

January 2023 · 12 Reads · 7 Citations · IEEE Journal on Selected Areas in Information Theory

This paper contributes tail bounds of the age-of-information of a general class of parallel systems and explores their potential. Parallel systems arise in relevant cases, such as in multi-band mobile networks, multi-technology wireless access, or multi-path protocols, just to name a few. Typically, control over each communication channel is limited and random service outages and congestion cause buffering that impairs the age-of-information. The parallel use of independent channels promises a remedy, since outages on one channel may be compensated for by another. Surprisingly, for the well-known case of M|M|1 queues we find the opposite: pooling capacity in one channel performs better than a parallel system with the same total capacity. A generalization is not possible since there are no solutions for other types of parallel queues at hand. In this work, we prove a dual representation of age-of-information in min-plus algebra that connects to queueing models known from the theory of effective bandwidth/capacity and the stochastic network calculus. Exploiting these methods, we derive tail bounds of the age-of-information of G|G|1 queues. Tail bounds of the age-of-information of independent parallel queues follow readily. In addition to parallel classical queues, we investigate Markov channels where, depending on the memory of the channel, we show the true advantage of parallel systems. We continue to investigate this new finding and provide insight into when capacity should be pooled in one channel or when independent parallel channels perform better. We complement our analysis with simulation results and evaluate different update policies, scheduling policies, and the use of heterogeneous channels that is most relevant for the latest multi-band networks.
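For the Markov-channel case, a toy discrete-time simulation can illustrate why channel memory matters. The model below (Gilbert-Elliott on-off channels, periodic sampling, round-robin assignment of updates, one-slot transmissions) is a simplification of mine rather than the paper's setup; with long off-bursts, a single channel of capacity 2 tends to suffer large age excursions that two independent channels of capacity 1 can mask.

    import random

    def simulate(n_channels, capacity, stay_on, stay_off, w, slots, seed=3):
        # Periodic updates (one every w slots) are assigned round-robin to per-channel
        # FIFO queues; a channel in the ON state of a two-state Markov (Gilbert-Elliott)
        # model delivers up to `capacity` queued updates per slot.
        rng = random.Random(seed)
        on = [True] * n_channels
        queues = [[] for _ in range(n_channels)]
        fresh, ages, nxt = 0, [], 0
        for t in range(slots):
            if t % w == 0:                       # sample the sensor
                queues[nxt].append(t)            # store the generation time
                nxt = (nxt + 1) % n_channels
            for i in range(n_channels):
                stay = stay_on if on[i] else stay_off
                if rng.random() > stay:          # Markov state transition
                    on[i] = not on[i]
                if on[i]:                        # deliver the oldest queued updates
                    for gen in queues[i][:capacity]:
                        fresh = max(fresh, gen)
                    del queues[i][:capacity]
            ages.append(t - fresh)
        ages.sort()
        return sum(ages) / len(ages), ages[int(0.999 * len(ages))]

    # Bursty channels: ON/OFF runs of 50 slots on average, stationary ON probability 0.5.
    for label, n, c in (("single channel, capacity 2   ", 1, 2),
                        ("two parallel channels, cap. 1", 2, 1)):
        mean_age, tail = simulate(n, c, stay_on=0.98, stay_off=0.98, w=4, slots=400000)
        print(f"{label}: mean age {mean_age:6.1f} slots, 99.9th pct {tail:4d} slots")

Shrinking the burst length (e.g. stay_on = stay_off = 0.5) moves the comparison back towards the memoryless case, where pooling the capacity is the better choice.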



Fig. 1. System model. At time A(n) the nth sample of the sensor signal C(t) arrives at a network of queues, with service times S_i(n). At time D(n) the sample departs from the network to a monitor, conveying the signal C(A(n)).
Age- and Deviation-of-Information of Time-Triggered and Event-Triggered Systems

June 2022 · 20 Reads

Age-of-information is a metric that quantifies the freshness of information obtained by sampling a remote sensor. In signal-agnostic sampling, sensor updates are triggered at certain times without being conditioned on the actual sensor signal. Optimal update policies have been researched and it is accepted that periodic updates achieve a smaller age-of-information than random updates. We contribute a study of a signal-aware policy, where updates are triggered by a random sensor event. By definition, this implies random updates and, as a consequence, inferior age-of-information. Considering a notion of deviation-of-information as a signal-aware metric, our results show, however, that event-triggered systems can perform as well as time-triggered systems while causing lower mean network utilization.
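A toy experiment helps to see how the two metrics and the network utilization differ. In the sketch below the sensor signal is a Gaussian random walk, the network is a fixed delay, and the deviation-of-information is taken as the absolute difference between the current signal and the last delivered value; all of these are illustrative assumptions of mine, not the paper's model.

    import random

    def simulate(policy, slots=200000, w=10, delta=3.0, delay=2, seed=4):
        # Time-triggered: sample every w slots. Event-triggered: sample when the
        # signal has moved by more than delta since the last transmitted value.
        # Each update reaches the monitor after a fixed network delay (in slots).
        rng = random.Random(seed)
        x, known, known_gen = 0.0, 0.0, 0       # signal; value and generation time at monitor
        in_flight = []                          # (delivery slot, sampled value, generation slot)
        last_sent, sent = 0.0, 0
        age_sum = dev_sum = 0.0
        for t in range(1, slots + 1):
            x += rng.gauss(0.0, 1.0)            # sensor signal: Gaussian random walk
            trigger = (t % w == 0) if policy == "time" else (abs(x - last_sent) > delta)
            if trigger:
                in_flight.append((t + delay, x, t))
                last_sent, sent = x, sent + 1
            while in_flight and in_flight[0][0] <= t:
                _, known, known_gen = in_flight.pop(0)
            age_sum += t - known_gen            # age-of-information
            dev_sum += abs(x - known)           # deviation-of-information (illustrative)
        return age_sum / slots, dev_sum / slots, sent / slots

    for policy in ("time", "event"):
        age, dev, util = simulate(policy)
        print(f"{policy:5s}-triggered: mean age {age:5.1f} slots, "
              f"mean deviation {dev:5.2f}, updates per slot {util:.3f}")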




The Tiny-Tasks Granularity Trade-Off: Balancing overhead vs. performance in parallel systems

February 2022 · 12 Reads

Models of parallel processing systems typically assume that one has l workers and jobs are split into an equal number of k = l tasks. Splitting jobs into k > l smaller tasks, i.e. using “tiny tasks”, can yield performance and stability improvements because it reduces the variance in the amount of work assigned to each worker, but as k increases, the overhead involved in scheduling and managing the tasks begins to overtake the performance benefit. We perform extensive experiments on the effects of task granularity on an Apache Spark cluster, and based on these, develop a four-parameter model for task and job overhead that, in simulation, produces sojourn time distributions that match those of the real system. We also present analytical results which illustrate how using tiny tasks improves the stability region of split-merge systems, and analytical bounds on the sojourn and waiting time distributions of both split-merge and single-queue fork-join systems with tiny tasks. Finally, we combine the overhead model with the analytical models to produce an analytical approximation to the sojourn and waiting time distributions of systems with tiny tasks which include overhead. Though no longer strict analytical bounds, these approximations match the Spark experimental results very well in both the split-merge and fork-join cases.
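The stability-region effect can be sketched with a back-of-the-envelope bound (my own simplification, not the paper's result): in a split-merge system a job occupies all l workers, so stability requires the arrival rate to stay below the reciprocal of the mean makespan, and bounding the makespan of greedy list scheduling by total work over l plus the largest task gives a conservative stable rate that grows with the granularity k.

    from math import fsum

    def harmonic(n):
        return fsum(1.0 / i for i in range(1, n + 1))

    l = 50                                       # parallel workers
    print("    k   E[makespan] upper bound   guaranteed stable arrival rate")
    for k in (l, 2 * l, 10 * l, 100 * l):
        # Each of the k tasks is exponential with mean l/k, so the largest task has
        # mean (l/k) * H_k and a job's total work has mean l. Greedy list scheduling
        # on l workers gives makespan <= total/l + largest task, hence
        # E[makespan] <= 1 + (l/k) * H_k, and split-merge is stable for all
        # arrival rates lambda < 1 / E[makespan].
        bound = 1.0 + (l / k) * harmonic(k)
        print(f"{k:5d}   {bound:10.3f}                {1.0 / bound:10.3f}")

The true stability limit is larger than this conservative figure; the point is only that it approaches the ideal-partition limit as k grows.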


Fig. 1. Progression of age of information ∆(t) over time t. T_A(n) and T_D(n) are the arrival and departure time stamps of status update n.
Fig. 2. Cumulative arrivals A(t) (packetized, left-continuous) and departures D(t) (dashed line: fluid model, solid line: packetized model) of a system, including examples of age of information ∆(t) and virtual delay V(t).
Fig. 3. Statistical age of information bound ∆_ε and virtual delay bound V_ε with probability ε for a Markov channel and message generation interval w.
A Min-plus Model of Age-of-Information with Worst-case and Statistical Bounds

December 2021 · 13 Reads

We consider networked sources that generate update messages with a defined rate and we investigate the age of that information at the receiver. Typical applications are in cyber-physical systems that depend on timely sensor updates. We phrase the age of information in the min-plus algebra of the network calculus. This facilitates a variety of models including wireless channels and schedulers with random cross-traffic, as well as sources with periodic and random updates, respectively. We show how the age of information depends on the network service where, e.g., outages of a wireless channel cause delays. Further, our analytical expressions show two regimes depending on the update rate, where the age of information is either dominated by congestive delays or by idle waiting. We find that the optimal update rate strikes a balance between these two effects.
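The two regimes can be illustrated with a minimal discrete-time toy. The i.i.d. outage channel and periodic source below are my simplifications, not the paper's min-plus analysis: sweeping the update interval shows the age first falling as congestive delays vanish and then rising again as idle waiting dominates.

    import random

    def mean_age(w, p_on, slots=300000, seed=5):
        # Discrete-time FIFO link with i.i.d. outages: in each slot the link is usable
        # with probability p_on and then delivers one queued update. A fresh update is
        # generated every w slots; returns the time-average age at the receiver.
        rng = random.Random(seed)
        queue, fresh, total = [], 0, 0
        for t in range(1, slots + 1):
            if t % w == 0:
                queue.append(t)                  # remember the generation time
            on = rng.random() < p_on             # i.i.d. channel outages
            if on and queue:
                fresh = max(fresh, queue.pop(0)) # FIFO delivery
            total += t - fresh
        return total / slots

    # Small w: the queue builds up and congestive delays dominate the age.
    # Large w: the receiver mostly waits idly for the next update.
    for w in (3, 4, 5, 8, 13, 21, 34):
        print(f"update interval w = {w:2d}: mean age {mean_age(w, p_on=0.4):6.1f} slots")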


Citations (78)


... [12] studies the age-delay trade-off in the G/G/∞ queue. [13] observes that a single M/M/1 queue has better age performance than independent parallel M/M/1 queues with the same total capacity. [14] analyzes the age in a network of parallel, finite, identical, and memoryless servers, where each server is an LCFS queue with preemption in service. ...

Reference:

Timely and Energy-Efficient Multi-Step Update Processing
Statistical Age-of-Information Bounds for Parallel Systems: When Do Independent Channels Make a Difference?

IEEE Journal on Selected Areas in Information Theory

... Jian et al. (2021) introduced a model to predict service duration based on users' historical data. For in-depth insights into service splitting, refer to the works of Zhang et al. (2020) and Bora et al. (2023). Each subservice on a separate physical node accounts for a portion of the total demand and operates autonomously (Xu et al., 2021). ...

The Tiny-Tasks Granularity Trade-Off Balancing Overhead vs. Performance in Parallel Systems

IEEE Transactions on Parallel and Distributed Systems

... For broader classes of systems, a variety of approximation techniques have been used [13]- [18]. More recently several researchers have used stochastic network calculus to derive performance bounds [1], [19]- [22]. Many examples of the fork-join pattern being used in practice are given in [23]. ...

Performance and Scaling of Parallel Systems with Blocking Start and/or Departure Barriers
  • Citing Conference Paper
  • May 2022

... Using a finer granularity, taking k > l, so-called "tiny tasks", actually can have a great and positive impact on system performance. This has been noted by practitioners [2]- [4], but so far only [1], which this paper is an extension of, provides analytical results relating task granularity to parallel system performance. ...

Tiny Tasks – A Remedy for Synchronization Constraints in Multi-Server Systems
  • Citing Conference Paper
  • July 2020

... The LSTM shows better performance than the other two approaches by achieving only a 3% error rate. Khangura et al. [43] estimated the available bandwidth using a shallow neural network. They used vectors of packet dispersion as input features, which are characteristic of the available bandwidth. ...

Machine learning for measurement-based bandwidth estimation
  • Citing Article
  • May 2019

Computer Communications

... Other network analysis issues can simply be addressed using this approach. Khangura et al. [72] trained the packet dispersion vector as a characteristic of the available bandwidth using a neural network. To choose the next detection rate, an iterative neural network rather than a binary search approach was suggested. ...

Neural Networks for Measurement-based Bandwidth Estimation
  • Citing Conference Paper
  • May 2018

... While proper scheduling can already be beneficial for latencysensitive applications, the introduction of a certain level of redundancy can help even further. Previous studies have proposed several approaches that transmit multiple copies of packets over different interfaces [12,23,28]. These redundancy approaches share the limitation that a lost packet can only be recovered by its exact copy. ...

Multi-Headed MPTCP Schedulers to Control Latency in Long-Fat / Short-Skinny Heterogeneous Networks

... [Flattened survey table from the citing paper: roughly forty V2X evaluation studies listed by authors, year, cellular generation (4G, 5G, or 4G/5G), evaluation method (analysis/simulation or implementation), and reported metrics; among them Akselrod et al. [181], 2017, 4G, implementation, with throughput, signal strength, and SINR as metrics.] ... communications require the frequent transmission of very small packets and as such incur problematic overhead. While architecture plays a significant role in the performance of data flows, empirical tests in [181] show that SINR and RSSI have the biggest effect on throughput performance for this interface due to the MCS adaptation and scheduling algorithms implemented by the base station. ...

4G LTE on the Road - What Impacts Download Speeds Most?
  • Citing Conference Paper
  • September 2017

... It is worth noting that packet dispersion techniques are especially affected by the presence of cross-traffic that can either decrease (first packet delayed more than the second) or increase (second packet delayed more than the first) the measured delta, thus affecting the accuracy of bandwidth estimation. It follows that a number of strategies have been developed to filter cross-traffic error such as sending trains of packets with different sizes and using statistical or machine learning-based models (including linear regression, Kalman filters [12], neural networks [11], and measurement repetition) to filter out bad samples. ...

Available bandwidth estimation from passive TCP measurements using the probe gap model
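
For context, the probe gap model named in the reference above estimates available bandwidth from the expansion of the spacing between probe packets. The sketch below implements the classic formula under the usual single-tight-link, fluid cross-traffic assumptions; the referenced work applies the idea to passive TCP measurements, which this illustration does not attempt.

    def probe_gap_estimate(gaps_in, gaps_out, capacity):
        # Classic probe gap model: each probe pair enters the tight link with spacing
        # g_in and leaves with spacing g_out; the expansion carries cross traffic at
        # rate C * (g_out - g_in) / g_in, so the available bandwidth is C minus that
        # rate, averaged over all pairs and clamped to [0, C] to filter noisy samples.
        estimates = []
        for g_in, g_out in zip(gaps_in, gaps_out):
            cross = capacity * max(g_out - g_in, 0.0) / g_in
            estimates.append(min(max(capacity - cross, 0.0), capacity))
        return sum(estimates) / len(estimates)

    # Example: 1 Gbit/s tight link, 100 us input gaps, output gaps widened by cross traffic.
    C = 1e9                                     # bit/s
    g_in = [100e-6] * 4                         # s
    g_out = [130e-6, 145e-6, 128e-6, 150e-6]    # s
    print(f"available bandwidth ~ {probe_gap_estimate(g_in, g_out, C) / 1e6:.0f} Mbit/s")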