Article

The effect of time delays on the stability of load balancing algorithms for parallel computations

Electr. & Comput. Eng. Dept., Univ. of Tennessee, Knoxville, TN, USA
IEEE Transactions on Control Systems Technology (Impact Factor: 2). 12/2005; DOI: 10.1109/TCST.2005.854339
Source: IEEE Xplore

ABSTRACT: A deterministic dynamic nonlinear time-delay system is developed to model load balancing in a cluster of computer nodes used for parallel computations. The model is shown to be self-consistent in that the queue lengths cannot go negative and the total number of tasks in all the queues and the network is conserved (i.e., load balancing can neither create nor lose tasks). Further, it is shown that under the proposed load balancing algorithms the system is stable in the sense of Lyapunov. Experimental results are presented and compared with the predictions of the analytical model. In particular, simulations of the models are compared with an experimental implementation of the load balancing algorithm on a distributed computing network.
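As a rough illustration of the self-consistency properties stated above (non-negative queues and conservation of tasks), the following discrete-time Python sketch simulates delayed load balancing among a few nodes. It is not the paper's continuous-time nonlinear time-delay model; the node count, gain, delay, and initial backlogs are arbitrary assumptions chosen only to exercise the two properties.

    # Minimal discrete-time sketch of delayed load balancing (illustrative only;
    # the paper analyzes a continuous-time nonlinear time-delay model).
    # Node count, delay, gain, and initial backlogs are arbitrary assumptions.

    n = 3              # number of nodes (assumed)
    delay = 4          # network transfer delay, in time steps (assumed)
    gain = 0.5         # fraction of the local excess shipped per step (assumed)
    steps = 200

    queues = [120.0, 10.0, 5.0]      # initial queue lengths (tasks)
    in_transit = []                  # entries: (arrival_step, destination, amount)
    total0 = sum(queues)             # no arrivals/departures, so this must be conserved

    for t in range(steps):
        # Deliver tasks whose network delay has elapsed.
        arrived = [x for x in in_transit if x[0] == t]
        in_transit = [x for x in in_transit if x[0] != t]
        for _, dest, amount in arrived:
            queues[dest] += amount

        # Each node compares its queue with the network average (the paper uses
        # delayed estimates of the other queues) and ships part of its excess.
        avg = sum(queues) / n
        for i in range(n):
            excess = queues[i] - avg
            if excess <= 0:
                continue
            send = min(gain * excess, queues[i])     # never drive a queue negative
            targets = [j for j in range(n) if j != i and queues[j] < avg]
            if not targets:
                continue
            queues[i] -= send
            for j in targets:
                in_transit.append((t + delay, j, send / len(targets)))

        # Self-consistency checks: queued plus in-transit tasks are conserved,
        # and no queue ever goes negative.
        total = sum(queues) + sum(a for _, _, a in in_transit)
        assert abs(total - total0) < 1e-9 and min(queues) >= 0.0

    print("final queue lengths:", [round(q, 2) for q in queues])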

  • ABSTRACT: The designs of heterogeneous multi-core multiprocessor real-time systems are evolving toward higher energy efficiency at the cost of increased heat density. This adversely affects the reliability and performance of real-time systems. Moreover, partitioning periodic real-time tasks based on their worst-case execution times can lead to significant energy wastage. In this paper, we investigate adaptive energy-efficient task partitioning for heterogeneous multi-core multiprocessor real-time systems. We use a power model that incorporates the impact of a processor's temperature and voltage on its static power consumption. Two different thermal models are used to estimate the peak temperature of a processor. We develop two feedback-based optimization and control approaches for adaptively partitioning real-time tasks according to their actual utilizations. Simulation results show that the proposed approaches are effective in minimizing energy consumption and reducing the number of task migrations.
    2012 International Conference on High Performance Computing and Simulation (HPCS); 01/2012
  • ABSTRACT: An analytical framework for the study of a generic distribution problem is introduced in which a group of agents with different capabilities intend to maximize total utility by dividing themselves into various subgroups without any form of global information-sharing or centralized decision-making. The marginal utility of belonging to a particular subgroup rests on the well-known concept in economic theory of the law of diminishing returns. For a class of discrete event systems, we identify a set of conditions that define local information and cooperation requirements, and prove that if the proposed conditions are satisfied, a stable agent distribution representing a Pareto optimum is achieved even under random but bounded decision and transition delays.
    American Control Conference (ACC), 2013; 01/2013
  • ABSTRACT: Models of discrete event systems combine ideas from control theory and computer science to represent the evolution of distributed processes. We formalize a notion of the invalidation of models presumed to describe dynamics on networks, and introduce an algorithm to evaluate a class of event-driven processes that evolve close to an invariant and stable state. The algorithm returns the value true if, according to the proposed notion of invalidation, the evolution of empirical observations is inconsistent with the stability properties of the model. To illustrate the approach, we represent a generic decision-making process in which the marginal utility of allocating agents to particular nodes rests on the well-known concept in economic theory of the law of diminishing returns.
    American Control Conference (ACC), 2013; 01/2013
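Both ACC 2013 abstracts above hinge on the law of diminishing returns as the mechanism that drives agents toward a stable distribution. The following Python sketch is only a rough illustration under assumed concave subgroup utilities; it omits those papers' decision and transition delays and their local-information and cooperation conditions, and the weights, utility form, and agent count are invented for the example.

    # Illustrative sketch (not the ACC 2013 frameworks above): agents repeatedly
    # move to the subgroup with the highest marginal utility until no single
    # move increases total utility. Concave utilities model diminishing returns.
    # Weights, utility form, and agent count are arbitrary assumptions.

    import math

    weights = [3.0, 2.0, 1.0]        # per-subgroup scale factors (assumed)
    num_agents = 12

    def group_utility(k, m):
        # Concave: each additional agent in subgroup k contributes less than the last.
        return weights[k] * math.sqrt(m)

    def marginal_utility(k, m):
        # Gain in total utility from adding one agent to a subgroup holding m agents.
        return group_utility(k, m + 1) - group_utility(k, m)

    # Start with every agent in subgroup 0 and let agents move one at a time.
    counts = [num_agents, 0, 0]
    changed = True
    while changed:
        changed = False
        for src in range(len(counts)):
            if counts[src] == 0:
                continue
            # Utility lost by leaving src versus gained by joining the best alternative.
            loss = group_utility(src, counts[src]) - group_utility(src, counts[src] - 1)
            best = max((k for k in range(len(counts)) if k != src),
                       key=lambda k: marginal_utility(k, counts[k]))
            if marginal_utility(best, counts[best]) > loss + 1e-12:
                counts[src] -= 1
                counts[best] += 1
                changed = True

    # At termination no agent can raise total utility by moving on its own.
    print("stable distribution over subgroups:", counts)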
