Conference Paper

Dynamic multi-phase scheduling for heterogeneous clusters

Dept. of Electrical & Computer Engineering, National Technical University of Athens, Greece
DOI: 10.1109/IPDPS.2006.1639308 Conference: 20th International Parallel and Distributed Processing Symposium (IPDPS 2006), Proceedings, 25-29 April 2006, Rhodes Island, Greece
Source: DBLP


Distributed computing systems are a viable and less expensive alternative to parallel computers. However, concurrent programming methods for distributed systems have not been studied as extensively as those for parallel computers. Among the main research issues are scheduling and load balancing in such systems, which may consist of heterogeneous computers. In the past, a variety of dynamic scheduling schemes suitable for parallel loops (with independent iterations) on heterogeneous computer clusters have been devised and studied. However, no study of dynamic schemes for loops with iteration dependencies has been reported so far. In this work we study the problem of scheduling loops with iteration dependencies on heterogeneous (dedicated and non-dedicated) clusters. The presence of iteration dependencies adds an extra degree of difficulty and makes the development of such schemes quite a challenge. We extend three well-known dynamic schemes (CSS, TSS and DTSS) by introducing synchronization points at certain intervals so that processors compute in a pipelined fashion. We call the resulting scheme dynamic multi-phase scheduling (DMPS) and apply it to loops with iteration dependencies. We implemented the new scheme on a network of heterogeneous computers and studied its performance. Through extensive testing on two real-life applications (the heat equation and the Floyd-Steinberg algorithm), we show that the proposed method is efficient for parallelizing nested loops with dependencies on heterogeneous systems.
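The chunk-based self-scheduling idea behind TSS and DTSS can be illustrated with a short sketch. This is an illustrative simplification, not the paper's implementation: function and parameter names are invented here, and the handling of leftover iterations is simplified. TSS hands out chunks whose sizes decrease linearly from a large first chunk to a small last one; DTSS additionally scales each chunk by the requesting processor's relative power.

```python
def tss_chunks(n, p):
    """Yield TSS-style chunk sizes for n iterations on p processors.

    Chunk sizes decrease linearly from first = n // (2p) down to 1.
    Simplification: once the linear schedule is exhausted, any leftover
    iterations are handed out as unit chunks (the original TSS instead
    adjusts the chunk count up front).
    """
    first = max(n // (2 * p), 1)
    last = 1
    # Number of chunks and the linear decrement between consecutive chunks.
    num = max((2 * n) // (first + last), 1)
    delta = (first - last) / max(num - 1, 1)
    remaining = n
    size = float(first)
    while remaining > 0:
        chunk = min(max(int(round(size)), 1), remaining)
        yield chunk
        remaining -= chunk
        size -= delta

def dtss_chunk(base_chunk, power, total_power, nprocs):
    """Scale a base chunk by the requester's relative power (the DTSS idea).

    A processor whose power equals the cluster average gets the base chunk;
    faster processors get proportionally larger chunks.
    """
    weight = power * nprocs / total_power
    return max(int(round(base_chunk * weight)), 1)

chunks = list(tss_chunks(1000, 4))
print(chunks[:5], "... total:", sum(chunks))
```

In DMPS these chunks would additionally be cut by synchronization points, so that a processor working on one chunk can signal its neighbor when the data needed by a dependent chunk is ready, yielding the pipelined execution described above.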



Available from: Florina Monica Ciorba
  • "A version of the FSS algorithm is proposed and considered for implementation in virtual machine scheduling in a cross-cloud environment [10]. Some results [11] also focus on loops with dependencies. Recent research results [12] [13] have been reported for designing loop self-scheduling methods for grids."
    ABSTRACT: Cloud computing infrastructure offers computing resources as a homogeneous collection of virtual machine instances with different hardware configurations, which is transparent to end users. In fact, the computational powers of these virtual machine instances differ, so they behave as a heterogeneous environment. Thus, scheduling and load balancing for high-performance computations become challenging on such systems. In this paper, we propose a hierarchical distributed scheduling scheme suitable for parallel loops with independent iterations on a cloud computing system. We also evaluate various performance aspects associated with our distributed scheduling scheme.
    Parallel and Distributed Processing Symposium Workshops & PhD Forum (IPDPSW), 2013 IEEE 27th International, Cambridge, MA; 05/2013
  • "In this algorithm, chunk sizes are weighted by the relative power of the server processors and by the number of processes in their run-queue at the moment of requesting work from the master. Ciorba et al. extended the DTSS algorithm in [7], proposing DTSS+SP to handle loops with dependencies on heterogeneous clusters via synchronization points (SPs). In [9], Chronopoulos et al. proposed a Hierarchical DTSS (HDTSS)."
    ABSTRACT: Self-scheduling algorithms are useful for achieving load balance in heterogeneous computational systems. Therefore, they can be applied in computational Grids. Here, we introduce two families of self-scheduling algorithms. The first considers an explicit form for the chunk distribution function. The second focuses on the variation rate of the chunk distribution function. From the first family, we propose a Quadratic Self-Scheduling (QSS) algorithm. From the second, two new algorithms, Exponential Self-Scheduling (ESS) and Root Self-Scheduling (RSS), are introduced. QSS, ESS and RSS are tested in an Internet-based Grid of computers involving resources from Spain and Mexico. QSS and ESS outperform previous self-scheduling algorithms. QSS is found to be slightly more efficient than ESS. RSS shows poor performance, a fact traced back to the curvature of the chunk distribution function.
    Future Generation Computer Systems 06/2009; 25(6):617–626. DOI:10.1016/j.future.2008.12.003 · 2.79 Impact Factor
  • "An important class of dynamic algorithms that has been developed for the parallelization of nested loops, and that can provide coarse-grain parallelism, is that of self-scheduling algorithms ([15] and references therein). These algorithms were initially devised for loops without dependencies, but their use was extended to loops with dependencies with the introduction of the dynamic multi-phase scheduling algorithm (DMPS) [9]. The problem of parallelizing nested loops using dynamic scheduling algorithms on reconfigurable hardware platforms is still an open research issue."
    ABSTRACT: Dynamic scheduling algorithms have been successfully used for parallel computations of nested loops in traditional parallel computers and clusters. In this paper we propose a new architecture, implementing coarse-grain dynamic loop scheduling, suitable for reconfigurable hardware platforms. We use an analytical model and a case study to evaluate the performance of the proposed architecture. This approach makes efficient use of memory and processing elements and thus gives better results than previous approaches.
    Industrial Embedded Systems, 2008. SIES 2008. International Symposium on; 07/2008