Conference Paper

Minimizing the Null Message Exchange in Conservative Distributed Simulation

... Conservative protocols fundamentally maintain causality in event execution by strictly disallowing the processing of events out of time-stamp order [4]. Some recent research on conservative algorithms in DES can be found in [11][12][13][14]. An effort to combine conservative and optimistic synchronization algorithms on a common layered architecture framework is proposed in [15]. ...
Article
This paper presents a new logical process (LP) simulation model for distributed simulation systems in which the Null Message Algorithm (NMA) is used as the underlying time management algorithm (TMA) to provide synchronization among LPs. To extend the proposed simulation model to n LPs, this paper provides a detailed overview of the internal architecture of each LP and its coordination with the other LPs through sub-system components and models such as the communication interface and the simulation executive. The proposed LP simulation model architecture describes the proper sequence of coordination that must take place among LPs through the different subsystem components and models to achieve synchronization. To execute the proposed LP simulation model for different sets of parameters, a queuing network model is used. Experiments are performed to verify the accuracy of the proposed simulation model against pre-derived mathematical equations. Our numerical and simulation results can be used to observe the exchange of null messages and the overhead indices.
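To make the LP coordination described above concrete, here is a minimal sketch of a conservative (CMB-style) logical process that only processes events up to the smallest timestamp seen on its input channels and then advertises a null message of clock + lookahead to its neighbours. The class, attribute, and method names are illustrative assumptions and do not correspond to the paper's actual sub-system components.

```python
import heapq
from itertools import count

class LP:
    """Minimal CMB-style logical process: it may only process events whose
    timestamps do not exceed the smallest clock value seen on its inputs."""

    _tie = count()  # tie-breaker so heap entries never compare payloads

    def __init__(self, name, lookahead):
        self.name = name
        self.lookahead = lookahead
        self.clock = 0.0
        self.pending = []           # heap of (timestamp, tie, payload)
        self.channel_clock = {}     # latest timestamp seen per input channel
        self.out_links = []         # downstream LPs

    def receive(self, sender, timestamp, payload=None):
        self.channel_clock[sender] = max(self.channel_clock.get(sender, 0.0),
                                         timestamp)
        if payload is not None:
            heapq.heappush(self.pending, (timestamp, next(self._tie), payload))

    def step(self):
        # Safe horizon: the smallest channel clock bounds what may be processed.
        horizon = min(self.channel_clock.values(), default=float("inf"))
        while self.pending and self.pending[0][0] <= horizon:
            ts, _, _payload = heapq.heappop(self.pending)
            self.clock = ts           # ... handle the event here ...
        # Null message: a promise that nothing earlier than clock + lookahead
        # will be sent, letting blocked neighbours advance. This is exactly
        # the traffic the paper tries to minimize.
        for nbr in self.out_links:
            nbr.receive(self.name, self.clock + self.lookahead)
```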
... One of the main problems associated with distributed simulation is the synchronization of a distributed execution. If not properly handled, synchronization problems may degrade the performance of a distributed simulation environment [5]. This situation becomes more severe when the synchronization algorithm has to support a detailed logistics simulation that processes a huge amount of data in a distributed environment [6]. ...
Conference Paper
Mattern’s GVT algorithm is a time management algorithm that helps achieve synchronization in parallel and distributed systems. The algorithm uses a ring structure to establish two cuts, C1 and C2, to calculate the GVT. The latency of computing the GVT is critical in parallel/distributed systems, and it is extremely high when calculated with this algorithm. However, using synchronous barriers with Mattern's algorithm can improve the GVT computation process by minimizing the GVT latency. In this paper, we incorporate the butterfly barrier to employ the two cuts C1 and C2 and obtain the resulting GVT at an affordable latency. Our analysis shows that the proposed GVT computation algorithm significantly improves overall performance in terms of memory saving and latency.
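As a rough illustration of why the two cuts matter: a safe GVT estimate must take the minimum over both the local clocks reported when the second cut C2 passes each LP and the timestamps of messages sent after C1 that were still in transit at C2. The helper below is only a sketch of that final reduction, assuming both sets have already been collected; it is not Mattern's ring protocol itself.

```python
def gvt_from_two_cuts(local_minima, in_transit_timestamps):
    """Conservative GVT estimate from a two-cut snapshot.

    local_minima           -- minimum unprocessed event timestamp reported by
                              each LP when cut C2 reaches it
    in_transit_timestamps  -- timestamps of messages sent after cut C1 that
                              were still in transit when C2 was established
    """
    return min(list(local_minima) + list(in_transit_timestamps),
               default=float("inf"))
```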
... Another notable work, mentioned in [2], is the research by Syed S. Rizvi, K. M. Elleithy, and Aasia Riasat, in which they propose a mathematical model that can be used to approximate the optimal values of critical parameters such as the frequency of transmission, Lookahead (L) values, and the variance of null message elimination. ...
Conference Paper
In this paper we investigate the Chandy-Misra-Bryant null message algorithm and propose a grouping technique to improve its performance. This technique, together with status retrieval, which the paper explains in detail, can improve performance compared to the traditional conservative Chandy-Misra-Bryant algorithm. The null message algorithm is an efficient conservative algorithm that uses null messages to provide synchronization between logical processes (LPs) in a parallel discrete event simulation (PDES) system. Performance can degrade if a large number of null messages must be generated by LPs to avoid deadlock. The main objective of this research work is to propose a new grouping technique that reduces the null messages exchanged between logical processes. Since the performance of the null message algorithm depends mainly on the Lookahead (L) values, our proposed technique can also be used to determine an optimum value of the Lookahead.
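One hypothetical way to picture the grouping idea: instead of every LP sending its own null message across the group boundary, the group advertises a single, conservative timestamp. The helper below assumes LP objects expose clock and lookahead attributes and is only an illustration, not the paper's actual grouping or status-retrieval scheme.

```python
def group_null_timestamp(group_lps):
    """Single null message sent on behalf of a whole group of LPs: its
    timestamp is the most pessimistic clock + lookahead inside the group,
    so one message replaces the per-LP null messages that would otherwise
    cross the group boundary."""
    return min(lp.clock + lp.lookahead for lp in group_lps)
```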
Article
Recent advances in computing architectures and networking are bringing parallel computing systems to the masses, thus increasing the number of potential users of these kinds of systems. In particular, two important technological evolutions are happening at the ends of the computing spectrum: at the “small” scale, processors now include an increasing number of independent execution units (cores), to the point that a single CPU can be considered a parallel shared-memory computer; at the “large” scale, the Cloud Computing paradigm allows applications to scale by offering resources from a large pool on a pay-as-you-go model. Multi-core processors and Clouds both require applications to be suitably modified to take advantage of the features they provide. Despite lying at the extremes of the computing architecture spectrum, with multi-core processors at the small scale and Clouds at the large scale, they share an important common trait: both are specific forms of parallel/distributed architectures. As such, they present to developers well-known problems of synchronization, communication, workload distribution, and so on. Is parallel and distributed simulation ready for these challenges? In this paper, we analyze the state of the art of parallel and distributed simulation techniques and assess their applicability to multi-core architectures or Clouds. It turns out that most of the current approaches exhibit limitations in terms of usability and adaptivity which may hinder their application to these new computing architectures. We propose an adaptive simulation mechanism, based on the multi-agent system paradigm, to partially address some of those limitations. While it is unlikely that a single approach will work well in both settings above, we argue that the proposed adaptive mechanism has useful features which make it attractive both in a multi-core processor and in a Cloud system. These features include the ability to reduce communication costs by migrating simulation components, and the support for adding (or removing) nodes to the execution architecture at runtime. We also show that, with the help of an additional support layer, parallel and distributed simulations can be executed on top of unreliable resources.
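The migration feature mentioned above can be illustrated with a small placement sketch: given the message traffic between simulation components and their current assignment to nodes, a greedy pass looks for the single move that cuts the most cross-node traffic. All names and the greedy policy are assumptions for illustration; the paper's multi-agent mechanism is not reproduced here.

```python
from itertools import product

def remote_traffic(placement, traffic):
    """Traffic crossing node boundaries for a placement {component: node}
    and symmetric traffic rates {(comp_a, comp_b): msgs_per_sec}."""
    return sum(rate for (a, b), rate in traffic.items()
               if placement[a] != placement[b])

def best_single_migration(placement, traffic, nodes):
    """Greedy sketch: try moving each component to each node and return the
    move (component, node) with the largest reduction in cross-node traffic."""
    base = remote_traffic(placement, traffic)
    best_gain, best_move = 0.0, None
    for comp, node in product(placement, nodes):
        if placement[comp] == node:
            continue
        trial = dict(placement, **{comp: node})
        gain = base - remote_traffic(trial, traffic)
        if gain > best_gain:
            best_gain, best_move = gain, (comp, node)
    return best_move, best_gain
```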
Conference Paper
Multiuser detectors suffer from relatively high computational complexity, which prevents widespread use of this technique. In addition, one of the main characteristics of multichannel communications that can severely degrade performance is inconsistent values of processing gain (PG), which result in high multiple access interference (MAI). However, if we could lower the complexity of multiuser detectors and produce better values of PG, most CDMA systems would likely take advantage of this technique in terms of increased system capacity and a better data rate. This paper presents a deterministic formalization of the PG for a wireless multi-channel DS-CDMA system. The proposed deterministic formalization demonstrates how a reduced BER can be used to achieve reasonable values of PG by which unwanted signals can be suppressed relative to the desired signal at the receiving end. The performance measure adopted in this paper is the achievable bit rate for a fixed probability of error (10^-7).
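For orientation, the relationship between processing gain, the number of active users, and BER is often approximated with the standard Gaussian approximation for asynchronous DS-CDMA, in which MAI from the other K-1 equal-power users is treated as noise and BER ≈ Q(sqrt(3·PG/(K-1))). The sketch below uses that textbook approximation, not the paper's deterministic formalization, to find the smallest PG meeting the 10^-7 error target.

```python
from math import erfc, sqrt

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * erfc(x / sqrt(2.0))

def ber_sga(pg, num_users):
    """BER under the standard Gaussian approximation: MAI from (K-1)
    equal-power users over processing gain PG is treated as Gaussian noise."""
    assert num_users >= 2, "MAI model needs at least two users"
    return q_func(sqrt(3.0 * pg / (num_users - 1)))

def min_pg_for_target(num_users, target_ber=1e-7):
    """Smallest integer processing gain that meets the target error rate."""
    pg = 1
    while ber_sga(pg, num_users) > target_ber:
        pg += 1
    return pg

# e.g. min_pg_for_target(10) -> PG needed for 9 interferers at BER 1e-7
```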
Conference Paper
The null message algorithm is an important conservative time management protocol in parallel discrete event simulation systems; it provides synchronization between the distributed computers and is capable of both avoiding and resolving deadlock. However, the excessive generation of null messages prevents the widespread use of this algorithm. This excess results from improper use of critical parameters such as the frequency of transmission and the Lookahead values. If the generation of null messages could be minimized, most parallel discrete event simulation systems would likely take advantage of this algorithm to gain increased system throughput and minimal transmission delays. In this paper, a new mathematical model for optimizing the performance of parallel and distributed simulation systems is proposed. The proposed mathematical model utilizes optimization techniques such as the variance of null message elimination to improve the performance of parallel and distributed simulation systems. For the simulation results, we consider both uniform and non-uniform distributions of Lookahead values across the multiple output lines of an LP. Our experimental verification demonstrates that an optimal NMA offers better scalability in parallel discrete event simulation systems if it is used with a proper selection of critical parameters.
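The basic relationship this optimization exploits can be stated simply: on an otherwise idle channel, each null message only lets the receiver advance by one lookahead, so the null-message count over a simulated interval falls roughly as 1/L, and the overhead index is the fraction of total traffic that is null. The two helpers below sketch that relationship under these simplifying assumptions; they are not the paper's mathematical model.

```python
from math import ceil

def null_messages_to_cover(interval, lookahead):
    """Null messages one LP must send on an idle channel so its neighbour can
    advance simulated time by `interval`; each null message promises progress
    of only one lookahead, so the count falls as 1/L."""
    return ceil(interval / lookahead)

def null_message_ratio(null_count, event_count):
    """Overhead index: fraction of all traffic that is pure synchronization."""
    total = null_count + event_count
    return null_count / total if total else 0.0

# e.g. covering 1,000 time units: L = 5 needs 200 null messages, L = 50 needs 20
```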
Conference Paper
The null message algorithm (NMA) is an efficient conservative time management algorithm that uses null messages to provide synchronization between the logical processes (LPs) in a parallel discrete event simulation (PDES) system. However, the performance of a PDES system can be severely degraded if a large number of null messages must be generated by LPs to avoid deadlock. In this paper, we present a mathematical model based on the quantitative criteria specified in (Rizvi et al., 2006) to optimize the performance of the NMA by reducing the null message traffic. Moreover, the proposed mathematical model can be used to approximate the optimal values of critical parameters such as the frequency of transmission, Lookahead (L) values, and the variance of null message elimination. In addition, the performance analysis of the proposed mathematical model incorporates both uniform and non-uniform distributions of L values across the multiple output lines of an LP. Our simulation and numerical analysis suggest that an optimal NMA offers better scalability in a PDES system if it is used with a proper selection of critical parameters.
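To see what the uniform versus non-uniform distribution of L means in practice, the small sketch below lists the null-message timestamps an LP would advertise on each of its output lines; with non-uniform lookahead each line advances by its own margin. The values are illustrative only.

```python
import random

def advertised_null_times(clock, lookaheads):
    """Null-message timestamps an LP advertises on its output lines:
    one entry per line, each equal to the LP's clock plus that line's L."""
    return [clock + la for la in lookaheads]

uniform = advertised_null_times(100.0, [5.0] * 4)              # all lines share one L
non_uniform = advertised_null_times(
    100.0, [random.uniform(2.0, 8.0) for _ in range(4)])       # per-line L values
```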
Conference Paper
Global virtual time (GVT) is used in parallel discrete event simulations to reclaim memory, commit output, detect termination, and handle errors. Mattern [1] has proposed a GVT approximation with a distributed termination detection algorithm. This algorithm works well and gives optimal performance in terms of accurate GVT computation, but at the expense of a slower execution rate. This slower execution rate results in a high GVT latency. Due to the high GVT latency, the processors involved in communication remain idle during that period of time. As a result, the overall throughput of a discrete event parallel simulation system degrades significantly. Thus, the high GVT latency prevents the widespread use of this algorithm in discrete event parallel simulation systems. However, if the latency of GVT computation could be improved, most discrete event parallel simulation systems would likely take advantage of this technique in terms of accurate GVT computation. In this paper, we examine the potential use of tree and butterfly barriers with Mattern's ring-based GVT structure. Simulation results demonstrate that the use of tree barriers with Mattern's GVT structure can significantly improve the latency and thus increase the overall throughput of the parallel simulation system. The performance measures adopted in this paper are the achievable latency for a fixed number of processors and the number of message transmissions during the GVT computation.
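The latency argument for tree barriers can be illustrated with a toy reduction: combining per-processor local minima pairwise takes about log2(P) rounds, whereas passing a token around a ring takes P steps. The sketch below only counts those combining rounds and is not the paper's barrier implementation.

```python
def tree_reduce_min(local_minima):
    """Pairwise (tree-style) reduction of per-processor minima.

    Returns the global minimum and the number of combining rounds, which grows
    as ceil(log2(P)) rather than the P steps a ring traversal would need."""
    level, rounds = list(local_minima), 0
    while len(level) > 1:
        level = [min(level[i:i + 2]) for i in range(0, len(level), 2)]
        rounds += 1
    return level[0], rounds

# e.g. tree_reduce_min([7.0, 3.5, 9.2, 4.1, 6.0]) -> (3.5, 3)
```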