Strategic Oscillation for Exploitation and Exploration
of ACS Algorithm for Job Scheduling in Static Grid
Computing
Mustafa Muwafak Alobaedy
School of Computing
College of Arts and Sciences
Universiti Utara Malaysia, 06010 Sintok, Kedah
new.technology@hotmail.com
Ku Ruhana Ku-Mahamud
School of Computing
College of Arts and Sciences
Universiti Utara Malaysia, 06010 Sintok, Kedah
ruhana@uum.edu.my
Abstract—Exploitation and exploration mechanisms are the main components of metaheuristic algorithms. These mechanisms are implemented explicitly in the ant colony system algorithm. The rate between the exploitation and exploration mechanisms is controlled using a parameter set by the users of the algorithm. However, the rate remains unchanged during the algorithm iterations, which biases the algorithm toward either exploitation or exploration. Hence, this study proposes a strategic oscillation rate to control the exploitation and exploration in ant colony system. The proposed algorithm was evaluated with job scheduling problem benchmarks on grid computing. Experimental results show that the proposed algorithm outperforms other metaheuristic algorithms in terms of makespan and flowtime. The strategic oscillation has improved the exploration and exploitation in ant colony system.
Keywords—strategic oscillation; ant colony system; job scheduling; grid computing
I. INTRODUCTION
The popularity of grid systems started in the late 1990s
when Foster developed a grid system called Globus Toolkit
[1]. Grid systems can be classified into several types, such as grid computing, data grid, enterprise grid, sensor grid, campus grid, global grid, PC grid, and utility grid [2]–[4]. Grid computing provides a powerful processing capability that is not achievable with an individual computer. Grid computing is defined as “geographically distributed computers,
computing is defined as “geographically distributed computers,
linked through the internet in a Grid-like manner, which are
used to create virtual supercomputers of vast amount of
computing capacity able to solve complex problem from e-
Science in less time than known before” [5]. Another definition
provided by [6] is “a hardware and software infrastructure that
provides transparent, dependable, pervasive and consistent
access to large-scale distributed resources owned and shared by
multiple administrative organizations in order to deliver
support for a wide range of applications with the desired
qualities of service. These applications can perform either as
high throughput computing, on-demand computing, data
intensive computing, or collaborative computing”. From these
definitions, grid computing can be described as a collection of computing resources distributed geographically across different locations. These resources form a pool of processing power that can be used to solve various complex problems in several fields, such as science, commerce, and education. The resources in grid computing can be heterogeneous in terms of hardware and software.
Grid computing has been successfully implemented to
solve various real-life problems. For instance, finding Protein
binding sites using DNA@Home volunteer computing project
[7], grid computing for disaster mitigation implemented by
Universiti Sains Malaysia [8], C-Grid based on Integrated
Rule-Oriented Data System for health care community [9], and
ANSYS® Commercial Suite on the EGI Grid Platform [10].
The main components of a grid are the infrastructure fabric, the middleware, and the applications [11]. The middleware layer is considered the brain of the grid and provides many services, such as job scheduling, job enactment, monitoring, and meta-scheduling [11]. These services are handled by the Resource Management System (RMS), which is responsible for mapping user-submitted tasks to available and suitable resources [12]. The scheduling algorithm has the major influence on the performance of a grid computing system [5]. A scheduling algorithm can be implemented using a simple approach, such as first come first serve or a greedy algorithm. However, a grid computing system with a large number of resources and tasks needs a more sophisticated algorithm in order to achieve good quality of service. Therefore, more intelligent algorithms are required for the scheduling component of the RMS.
The job scheduling problem in grid computing is NP-complete [13]. Due to the complex nature of such problems, heuristic and metaheuristic algorithms are preferred in real applications [14]. Metaheuristic algorithms have the ability to produce optimal or near-optimal solutions within reasonable time and resources. One branch of metaheuristic algorithms is nature-inspired algorithms, such as Ant Colony Optimization (ACO), Simulated Annealing (SA), Particle Swarm Optimization (PSO), and Artificial Bee Colony (ABC) [15]. Among them, the ACO algorithm shows excellent performance in various domains, such as routing, scheduling, classification, and optimization [16].
ACO is a framework algorithm for several variants, such as Ant System (AS), Elitist Ant System (EAS), Rank-Based Ant System (ASrank), Ant Colony System (ACS), and Max-Min Ant System (MMAS) [16]. The ACS algorithm is considered one of the best among them [17]. The ACS algorithm is based on two mechanisms, namely exploitation and exploration [16]. During the algorithm iterations, the selection between the two mechanisms is controlled via a fixed parameter q0 (0 ≤ q0 ≤ 1). Therefore, an exploitation or exploration rate that is set small never changes during the iterations; in other words, the rate of either mechanism will not increase or decrease during the execution. Therefore, this study focuses on a dynamic rate, specifically adopting the strategic oscillation of the rate between exploitation and exploration proposed in the Tabu Search (TS) algorithm by Glover and Laguna [18]. This strategy makes the ACS algorithm behave differently, but strategically, in each cycle.
This paper is organized as follows. Section II reviews the evolution of ant colony optimization. The strategic oscillation in ant colony system is presented in Section III. Section IV describes the problem formulation, while Section V presents the experiments and results. Finally, the conclusion is provided in Section VI.
II. ANT COLONY OPTIMIZATION
The ant colony optimization algorithm was proposed in the 1990s by Dorigo [16] as a metaheuristic algorithm. The first version of ACO is known as ant system [19]. AS consists of two main phases, namely solution construction and pheromone update. The solution construction phase is based on a probabilistic action choice rule, known as the random proportional rule. In the pheromone update phase, AS uses an evaporation concept and a pheromone deposit method. Compared with other ACO variants, the performance of the AS algorithm decreases dramatically when the problem instance size increases [16].
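For reference, the random proportional rule of AS, as formulated in [16], selects the next solution component j for ant k located at node i with probability

    p_{ij}^{k} = \frac{[\tau_{ij}]^{\alpha}\,[\eta_{ij}]^{\beta}}{\sum_{l \in N_{i}^{k}} [\tau_{il}]^{\alpha}\,[\eta_{il}]^{\beta}}, \qquad j \in N_{i}^{k},

where \tau_{ij} is the pheromone value on arc (i, j), \eta_{ij} is the heuristic value of that arc, \alpha and \beta weight their relative influence, and N_{i}^{k} is the feasible neighbourhood of ant k at node i.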
Additional reinforcement of the arcs belonging to the best solution was introduced in the elitist ant system (EAS) algorithm to improve AS [20]. The elitist strategy enables the ants to find better-quality solutions in a lower number of iterations, and the EAS algorithm shows better performance than the AS algorithm. Another improvement over the AS algorithm is the rank-based ant system introduced in [21]. The ASrank algorithm applies a ranking concept to the amount of pheromone deposited on the arcs; only the best-so-far ant and the best-ranked ants are allowed to deposit pheromone. Compared to AS and EAS, ASrank performs significantly better than AS and slightly better than EAS.
Another variant of the ACO algorithm is the max-min ant system, which is a direct improvement over the AS algorithm [22]. MMAS provides four improvements. Firstly, it uses a stronger exploitation mechanism. Secondly, MMAS restricts the pheromone trail values to an interval [\tau_{min}, \tau_{max}], which helps to avoid premature stagnation (all ants converging early to one suboptimal solution) of the search process. Thirdly, the initial pheromone value is set to the upper pheromone limit, with a small pheromone evaporation rate, to increase the exploration mechanism. Finally, in MMAS the pheromone values are reinitialized whenever the algorithm is unable to find an improved solution for a certain number of iterations. For the pheromone update, only one of two ants is allowed to add pheromone, either the best-so-far ant or the iteration-best ant.
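As a brief formal sketch, following the standard MMAS formulation in [22], the update performed by the selected best ant can be written as

    \tau_{ij} \leftarrow (1 - \rho)\,\tau_{ij} + \Delta\tau_{ij}^{best}, \qquad \Delta\tau_{ij}^{best} = 1/C^{best} \text{ if arc } (i, j) \text{ belongs to the best tour, and } 0 \text{ otherwise},

where \rho is the evaporation rate and C^{best} is the cost of the best tour; any trail value falling outside [\tau_{min}, \tau_{max}] is then clamped back to the nearest bound.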
One more important improvement over the AS algorithm is the ant colony system proposed in [23]. The ACS algorithm improves the AS algorithm in three main aspects. First, ACS implements a stronger action choice rule than AS. Second, pheromone is added only to the arcs belonging to the global-best solution. Third, each time an ant moves on an arc, it evaporates some pheromone from that arc. The three main phases of the ACS algorithm are the ants' solution construction, the global pheromone trail update, and the local pheromone trail update. In the global pheromone update, only one ant (the best-so-far ant) is allowed to add pheromone, after all ants have finished constructing their tours. In the local pheromone update, all the ants in the ACS algorithm apply the local pheromone update rule immediately after moving on an arc during the solution construction, using the evaporation concept. In the ACS algorithm, the tuning between exploitation and exploration is controlled by a parameter fixed by the user. Therefore, the rate of exploration and exploitation never changes during the algorithm execution. Thus, if the exploitation rate is high, the algorithm behaves more like a greedy approach; in contrast, if the exploration rate is high, the algorithm behaves more like a random approach. Hence, this study proposes a strategic oscillation rate for the exploitation and exploration mechanisms in the ACS algorithm.
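For completeness, the ACS action choice rule governed by this parameter, denoted q0 in [16], [23], works as follows: an ant k located at node i draws a uniform random number q in [0, 1]; if q \le q_0 it exploits by choosing

    j = \arg\max_{l \in N_{i}^{k}} \{ \tau_{il}\,[\eta_{il}]^{\beta} \},

and otherwise it explores using the random proportional rule of AS. The corresponding local and global pheromone updates are

    \tau_{ij} \leftarrow (1 - \xi)\,\tau_{ij} + \xi\,\tau_{0} \quad \text{and} \quad \tau_{ij} \leftarrow (1 - \rho)\,\tau_{ij} + \rho\,\Delta\tau_{ij}^{bs},

where the global update is applied only to the arcs of the best-so-far tour and \Delta\tau_{ij}^{bs} = 1/C^{bs} is the reciprocal of its cost.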
III. STRATEGIC OSCILLATION IN ANT COLONY SYSTEM
The strategic oscillation concept was proposed by Glover and Laguna in the tabu search algorithm [18]. The authors state that “strategic oscillation provides a means to achieve an effective interplay between intensification and diversification over the intermediate to long term” [18]. The idea behind this concept is simply to oscillate between exploration and exploitation in a strategic process. Fig. 1 shows the process of strategic oscillation in the TS algorithm [18].
In the ant colony system algorithm, the exploration and exploitation mechanisms are explicit and controlled via a fixed parameter q0 (0 ≤ q0 ≤ 1) [16]. Therefore, applying the concept of strategic oscillation in the ACS algorithm is direct and straightforward.
Fig. 1. Strategic oscillation process [18].
The proposed algorithm, called Strategic Oscillation Ant Colony System (SOACS), starts with the maximum oscillation rate (behaving like a greedy approach). During the iterations, the algorithm gradually moves towards exploration by reducing the oscillation rate using the step_size parameter (γ). Once the oscillation rate reaches the minimum (behaving like a random approach), it starts to move toward exploitation again by increasing the oscillation rate with the step_size parameter γ. Fig. 2 presents the pseudocode of the strategic oscillation in the ACS algorithm. The SOACS algorithm starts with equal pheromone values on all edges. Therefore, the algorithm starts with the maximum oscillation rate to exploit the heuristic information rather than the pheromone information. Starting with maximum exploitation produces a good starting solution, which is at least equal to the solution produced by the nearest-neighbour approach. During the algorithm iterations, the oscillation rate changes by the γ parameter. The value of γ is recommended to be very small in order to move smoothly between the exploitation and exploration mechanisms. The resulting oscillation rate over 1000 iterations is shown in Fig. 3.
The strategic oscillation could be implemented in the ACS algorithm either at the iteration level or at the ant level. This study implements the strategic oscillation at the iteration level; in other words, the oscillation rate changes after all the ants finish one iteration. Fig. 4 presents the complete pseudocode of the SOACS algorithm.
Fig. 2. Strategic oscillation pseudocode for ACS algorithm:

    Initialize the oscillation rate q0;
    Initialize the step_size parameter γ;
    Initialize switch variable (switch = false);
    If (q0 >= 1)                  // Maximum exploitation rate
        switch = true;            // Switch to exploration
    Else if (q0 <= 0)             // Maximum exploration rate
        switch = false;           // Switch to exploitation
    If (switch = true)            // If exploration is true
        q0 = q0 - γ;              // Decrease the rate
    Else
        q0 = q0 + γ;              // Increase the rate

Fig. 3. Strategic oscillation rate.

Fig. 4. SOACS algorithm pseudocode:

    Procedure SOACS
        Initialize the number of ants m;
        Initialize parameters and pheromone trails;
        Initialize q = random [0, 1];
        // --- strategic oscillation (proposed) ---
        Initialize the oscillation rate q0;
        Initialize the step_size parameter γ;
        Initialize switch variable (switch = false);
        // ----------------------------------------
        While (Termination condition not met) Do
            For k = 1 to m Do
                Construct new solution:
                    q = random [0, 1];
                    If (q <= q0)
                        Exploitation;
                    Else
                        Exploration;
                Apply local pheromone update;
            End For;
            Apply pheromone evaporation;
            Apply global pheromone update;
            // --- strategic oscillation (proposed) ---
            If (q0 >= 1)
                switch = true;
            Else if (q0 <= 0)
                switch = false;
            If (switch = true)
                q0 = q0 - γ;
            Else
                q0 = q0 + γ;
            // ----------------------------------------
            Update best-so-far solution;
        End while;
    End Procedure
In Fig. 4, the statements enclosed by the “strategic oscillation (proposed)” markers represent the proposed strategic oscillation code in ant colony system. Moving this part of the code to the position immediately after the statement “Apply local pheromone update” in Fig. 4 would make the strategic oscillation operate at the ant level of the ACS algorithm. The proposed SOACS algorithm could be applied to various combinatorial problems, similar to the ACS algorithm. However, this study applies the proposed SOACS algorithm to the job scheduling problem on a static grid computing system.
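As a minimal, self-contained illustration of the iteration-level schedule plotted in Fig. 3, the following C# sketch reproduces the oscillation of q0. The variable names are hypothetical, and the step size here is an illustrative value chosen so that a full sweep fits within 1000 iterations; the experiments in Section V use γ = 0.00004 from Table I.

    using System;

    class OscillationScheduleDemo
    {
        static void Main()
        {
            const double gamma = 0.002;       // illustrative step size (not the Table I value)
            double q0 = 1.0;                  // oscillation rate, starts at maximum exploitation
            bool movingToExploration = false; // the "switch" variable of Fig. 2

            for (int iteration = 0; iteration < 1000; iteration++)
            {
                if (q0 >= 1.0) movingToExploration = true;        // reached maximum exploitation
                else if (q0 <= 0.0) movingToExploration = false;  // reached maximum exploration

                q0 += movingToExploration ? -gamma : gamma;       // iteration-level update (Fig. 2)

                if (iteration % 100 == 0)
                    Console.WriteLine($"iteration {iteration,4}: q0 = {q0:F3}");
            }
        }
    }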
IV. PROBLEM FORMULATION
The job scheduling problem on a computational grid is known to be a multi-objective problem. There are various criteria in grid computing that need to be optimized, for instance makespan, flowtime, load balancing, utilization, matching proximity, turnaround time, total weighted completion time, and average weighted response time [24]. This study considers two criteria, namely makespan and flowtime, with priority given to makespan as the main optimization objective.
The general productivity of a grid computing system is measured by the makespan. The best scheduling algorithm is the one that achieves a small makespan value, which means that the algorithm is able to map tasks to machines in a good and efficient way. Therefore, the main objective in this study is to minimize the makespan. Makespan is defined as the time when the last task finishes execution, formally:

    makespan = \min_{S \in Sched} \max_{j \in Jobs} F_j,

where Sched is the set of all possible schedules, Jobs is the set of all jobs to be scheduled, and F_j denotes the time when task j finalizes [24]. The second criterion implemented in this study is flowtime, which refers to the response time to the user submissions of task executions. Flowtime is defined as the sum of the finalization times of all tasks, formally:

    flowtime = \min_{S \in Sched} \sum_{j \in Jobs} F_j.

These criteria can conflict with each other, since limited resources can become the bottleneck of the system [24].
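As a concrete illustration of how these two criteria are computed, the following C# sketch (hypothetical names; it simply evaluates a given task-to-machine assignment against an ETC matrix and is not the authors' simulator) returns the makespan and flowtime of a schedule:

    using System;

    static class ScheduleMetricsDemo
    {
        // etc[t, m] is the expected time to compute task t on machine m;
        // schedule[t] is the machine assigned to task t.
        static (double Makespan, double Flowtime) Evaluate(double[,] etc, int[] schedule)
        {
            int machines = etc.GetLength(1);
            var ready = new double[machines];   // running completion time of each machine
            double flowtime = 0.0;

            for (int t = 0; t < schedule.Length; t++)
            {
                int m = schedule[t];
                ready[m] += etc[t, m];          // finalization time F_t of task t
                flowtime += ready[m];           // flowtime = sum of finalization times
            }

            double makespan = 0.0;
            foreach (double r in ready)
                makespan = Math.Max(makespan, r); // makespan = latest machine completion time

            return (makespan, flowtime);
        }

        static void Main()
        {
            // Three tasks, two machines (illustrative values only).
            double[,] etc = { { 4.0, 6.0 }, { 3.0, 2.0 }, { 5.0, 7.0 } };
            int[] schedule = { 0, 1, 0 };       // tasks 0 and 2 on machine 0, task 1 on machine 1

            var (makespan, flowtime) = Evaluate(etc, schedule);
            Console.WriteLine($"makespan = {makespan}, flowtime = {flowtime}");
        }
    }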
In order to evaluate the proposed SOACS algorithm, a suitable benchmark is required to reflect the robustness of the algorithm. The benchmark model should have the features needed to reflect the environment characteristics, such as resource and job heterogeneity. A benchmark for static grid computing based on a successful model known as Expected Time to Compute (ETC), proposed in [25], has been adopted. This model is widely accepted by researchers for evaluating algorithms on the job scheduling problem [26], [27]. The benchmark model defines a matrix known as the ETC matrix. Each entry ETC[t][m] contains the expected time to compute task t on machine m. Therefore, the ETC matrix has t × m entries, where t represents the number of tasks and m represents the number of machines. The ETC matrix is further characterized using three metrics, namely task heterogeneity, machine heterogeneity, and consistency. Task heterogeneity measures the variance in execution time among tasks, while machine heterogeneity measures the variance in speed among machines. The heterogeneity of tasks and machines is represented with two levels, “high” and “low”. In addition, the ETC matrix captures other possible features of real heterogeneous computing systems using three consistency classes, namely consistent, inconsistent, and semi-consistent. The ETC matrix is considered consistent whenever a machine that executes one task faster than another machine also executes all other tasks faster than that machine. The ETC matrix is considered inconsistent when a machine executes some tasks faster than another machine and some tasks slower. Finally, a semi-consistent ETC matrix is an inconsistent matrix that contains a consistent submatrix of a specific size. Combining all these metrics generates 12 distinct types of ETC matrix [25].
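For illustration only, a matrix in the spirit of the ETC model can be produced with a simple range-based generator; the sketch below assumes illustrative heterogeneity ranges and is not the exact generator used in [25]. Sorting each row yields a consistent matrix, while leaving rows unsorted yields an inconsistent one.

    using System;

    static class EtcSketch
    {
        // taskRange and machineRange control task/machine heterogeneity ("high" vs. "low").
        static double[,] Generate(int tasks, int machines, double taskRange,
                                  double machineRange, bool consistent, Random rng)
        {
            var etc = new double[tasks, machines];
            for (int t = 0; t < tasks; t++)
            {
                double workload = 1.0 + rng.NextDouble() * taskRange;   // baseline cost of task t
                var row = new double[machines];
                for (int m = 0; m < machines; m++)
                    row[m] = workload * (1.0 + rng.NextDouble() * machineRange);

                if (consistent) Array.Sort(row);   // every task sees the same machine ordering
                for (int m = 0; m < machines; m++) etc[t, m] = row[m];
            }
            return etc;
        }

        static void Main()
        {
            var etc = Generate(tasks: 4, machines: 3, taskRange: 3000.0,
                               machineRange: 100.0, consistent: true, rng: new Random(1));
            Console.WriteLine($"Generated a {etc.GetLength(0)} x {etc.GetLength(1)} consistent ETC matrix.");
        }
    }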
V. EXPERIMENT AND RESULTS
The experiments were conducted on an Intel® Core(TM) i7-3612QM CPU @ 2.10 GHz with 8 GB RAM. A simulation was developed in the C# language. The proposed SOACS algorithm was evaluated against the genetic algorithm (GA), ant system, and ant colony system reported in [28]. The SOACS parameter values are given in Table I. The new parameter, γ, is the step size used to move from exploitation to exploration and vice versa. The values of the number of ants, evaporation rate, and beta are adopted from the original ACS algorithm as recommended in [16]. The proposed SOACS algorithm was executed 10 times to obtain the best and average values. Each run is limited to 90 seconds; such a time restriction is an important requirement to mimic a real grid computing environment [29], [30].
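To mirror this 90-second restriction, the per-run termination condition can be expressed as a wall-clock check, as in the following C# sketch (hypothetical structure, not the authors' code):

    using System;
    using System.Diagnostics;

    class TimeLimitedRun
    {
        static void Main()
        {
            var limit = TimeSpan.FromSeconds(90);   // per-run budget used in the experiments
            var timer = Stopwatch.StartNew();
            int iteration = 0;

            while (timer.Elapsed < limit)
            {
                iteration++;   // placeholder: one SOACS iteration (construction + pheromone updates) would go here
            }

            Console.WriteLine($"Completed {iteration} iterations in {timer.Elapsed.TotalSeconds:F1} s.");
        }
    }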
Experimental results are organized in tables. The first column of each table gives the instance name, using an abbreviation code of the form x_yyzz:
x represents the type of consistency; c means consistent, i means inconsistent, and s means semi-consistent.
yy represents the heterogeneity of the tasks; hi means high and lo means low.
zz represents the heterogeneity of the machines; hi means high and lo means low.
For example, c_hilo means a consistent environment with high heterogeneity in tasks and low heterogeneity in machines. The results show that the proposed SOACS algorithm outperforms the other algorithms in terms of best makespan values on all 12 instances, as illustrated in Table II. Similar performance is shown by the proposed SOACS algorithm in terms of average makespan values, as shown in Table III.
TABLE I. SOACS PARAMETER VALUES

    Parameter            Value
    Run time             90 seconds
    Beta (β)             8
    Evaporation rate     0.6
    No. of ants          10
    Step size (γ)        0.00004
TABLE II. BEST MAKESPAN

    Instance    AS            ACS           SOACS
    c_hihi      11210553.9    10794610.8    10525341.5
    c_hilo      184701.3      179762.4      177747.2
    c_lohi      367182.8      346838.4      346627.3
    c_lolo      6224.8        6051.8        6031.9
    i_hihi      3946883.2     4066163.7     3919048.2
    i_hilo      90968.3       93829.0       87510.4
    i_lohi      133825.4      137176.5      129994.6
    i_lolo      3141.0        3209.0        3020.8
    s_hihi      5991234.3     6119602.0     5741578.0
    s_hilo      118988.3      120539.1      115123.5
    s_lohi      176800.4      178584.8      166583.1
    s_lolo      4296.3        4350.4        4131.0
TABLE III. AVERAGE MAKESPAN

    Instance    GA            AS            ACS           SOACS
    c_hihi      11266455.7    11492186.4    10947366.9    10747849.5
    c_hilo      183264.9      186640.1      181434.4      179875.2
    c_lohi      375322.2      373766.6      353670.8      350654.3
    c_lolo      6152.5        6281.5        6120.0        6062.4
    i_hihi      4029108.7     4021032.5     4261681.8     3984413.4
    i_hilo      91682.3       92311.6       94832.7       88536.5
    i_lohi      135625.0      136721.9      144178.5      133200.5
    i_lolo      3051.0        3198.6        3280.0        3045.5
    s_hihi      6317823.2     6114694.0     6322969.8     5940008.6
    s_hilo      120664.4      121995.8      122440.4      117386.7
    s_lohi      181734.6      178990.5      181737.4      170489.2
    s_lolo      4249.9        4369.1        4399.4        4186.7
For best and average flowtime values, Tables IV and V show that the proposed SOACS algorithm achieved the best results on 10 instances, followed by the AS algorithm on two instances.
Due to the different scales of the instance results, the geometric mean is used to normalize the makespan and flowtime values in order to present the proposed SOACS algorithm visually [31]. Figs. 5-8 show the geometric means of the best makespan, average makespan, best flowtime, and average flowtime, respectively. These figures show that the proposed SOACS algorithm significantly outperforms all other algorithms on the makespan and flowtime criteria. Optimizing these two criteria at the same time is not an easy task; therefore, the proposed SOACS algorithm is considered a promising algorithm for job scheduling in grid computing.
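A minimal sketch of this normalization, the geometric mean of strictly positive values across the 12 instances, is shown below; the sample values are the SOACS best makespans of the four consistent instances from Table II.

    using System;
    using System.Linq;

    static class GeometricMeanDemo
    {
        // Geometric mean of strictly positive values, computed in log space for stability.
        static double GeometricMean(double[] values) =>
            Math.Exp(values.Select(v => Math.Log(v)).Average());

        static void Main()
        {
            double[] makespans = { 10525341.5, 177747.2, 346627.3, 6031.9 };
            Console.WriteLine($"geometric mean = {GeometricMean(makespans):F1}");
        }
    }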
TABLE IV. BEST FLOWTIME

    Instance    GA             AS             ACS            SOACS
    c_hihi      175890174.2    170869481.0    167168928.0    164883402.9
    c_hilo      2885387.6      2839818.7      2839974.6      2803190.5
    c_lohi      5862262.0      5600439.3      5481314.1      5468218.9
    c_lolo      97154.5        95877.0        95871.5        94910.8
    i_hihi      63759167.6     60169758.2     64092691.0     61094410.9
    i_hilo      1461297.4      1403670.4      1451182.0      1378783.4
    i_lohi      2141505.9      2032456.4      2150374.0      2038837.0
    i_lolo      48547.9        48773.5        50707.6        47652.4
    s_hihi      98814397.0     90312215.7     95998535.0     89287719.8
    s_hilo      1909954.1      1832927.6      1893970.7      1816923.6
    s_lohi      2867157.9      2682621.5      2800124.8      2635073.7
    s_lolo      67508.1        65545.5        68232.0        64960.5
TABLE V. AVERAGE FLOWTIME

    Instance    GA             AS             ACS            SOACS
    c_hihi      176638718.7    174513587.9    171594188.4    168542040.5
    c_hilo      2893345.6      2866863.1      2865314.2      2831223.1
    c_lohi      5867869.1      5712409.2      5587489.2      5533250.4
    c_lolo      97298.9        96857.6        96697.1        95703.2
    i_hihi      64261850.8     61409716.3     66654183.7     62585921.8
    i_hilo      1461683.7      1422434.6      1489277.2      1396360.0
    i_lohi      2163840.8      2068376.5      2256605.3      2093506.3
    i_lolo      48579.5        49416.3        51606.3        48107.0
    s_hihi      99887497.7     92951306.3     98799209.7     92752032.6
    s_hilo      1915659.2      1867344.1      1934073.4      1852245.3
    s_lohi      2871564.9      2738879.1      2869869.2      2691070.5
    s_lolo      67548.4        67048.3        69185.3        65793.0
Fig. 5. Geometric mean for best makespan values
Fig. 6. Geometric mean for average makespan values
Fig. 7. Geometric mean for best flowtime values
Fig. 8. Geometric mean for average flowtime values
VI. CONCLUSION
Assigning tasks to suitable resources is a very critical
process which influences the performance of grid system. This
study has enhanced the exploitation and exploration
mechanisms in ant colony system. The enhancement is based
on implementing the strategic oscillation concept which is
adopted from tabu search algorithm. The proposed SOACS
algorithm was evaluated against other metaheuristics algorithm
using expected to compute model for job scheduling. Results
show that the proposed algorithm outperforms all other
algorithms in terms of makespan and flowtime values. Future
work could focus on the strategic oscillation concept which can
be implemented the ant level which gives different rate for
each ant. In addition, future work can be on the implementation
of the proposed SOACS algorithm to solve other combinatorial
problems such as routing and job shop scheduling.
ACKNOWLEDGMENT
The authors wish to thank the Ministry of Higher Education
Malaysia for funding this study under the Fundamental
Research Grant Scheme, S/O codes 12819 and 11980, and
RIMC, Universiti Utara Malaysia, Kedah, for the
administration of this study.
REFERENCES
[1] I. Foster and C. Kesselman, “Globus: a Metacomputing Infrastructure Toolkit,” Int. J. High Perform. Comput. Appl., vol. 11, no. 2, pp. 115–128, Jun. 1997.
[2] J. Kolodziej, Evolutionary Hierarchical Multi-Criteria Metaheuristics for Scheduling in Large-Scale Grid Systems. New York: Springer, 2012.
[3] O. Babafemi, M. Sanjay, and M. Adigun, “Towards Developing Grid-Based Portals for E-Commerce on-Demand Services on a Utility Computing Platform,” J. Procedia, vol. 4, no. 1, pp. 81–87, Jan. 2013.
[4] N. Z. C. Fulop, “A Desktop Grid Computing Approach for Scientific Computing and Visualization,” (Doctoral dissertation), 2008.
[5] F. Xhafa and A. Abraham, “Computational Models and Heuristic Methods for Grid Scheduling Problems,” J. Futur. Gener. Comput. Syst., vol. 26, no. 4, pp. 608–621, Apr. 2010.
[6] F. Magoules, T.-M.-H. Nguyen, and L. Yu, Grid Resource Management: Toward Virtual and Services Compliant Grid Computing. Boca Raton: CRC Press, 2009.
[7] T. Desell, L. A. Newberg, M. Magdon-Ismail, B. K. Szymanski, and W. Thompson, “Finding Protein Binding Sites Using Volunteer Computing Grids,” in Proceedings of the 2nd International Congress on Computer Applications and Computational Science, F. L. Gaol and Q. V. Nguyen, Eds. Berlin Heidelberg: Springer, 2012, pp. 385–393.
[8] H. Koh, S. Teh, T. Majid, and H. Aziz, “Grid Computing for Disaster Mitigation,” in Data Driven e-Science, S. C. Lin and E. Yen, Eds. New York: Springer, 2011, pp. 445–456.
[9] N. Sukhija and A. K. Datta, “C-Grid: Enabling iRODS-based Grid Technology for Community Health Research,” in Information Technology in Bio- and Medical Informatics, M. Bursa, S. Khuri, and M. E. Renda, Eds. Berlin Heidelberg: Springer, 2013, pp. 17–31.
[10] A. Costantini, “Implementation of the ANSYS® Commercial Suite on the EGI Grid Platform,” in Computational Science and Its Applications, vol. 7971, B. Murgante, S. Misra, M. Carlini, C. Torre, H.-Q. Nguyen, D. Taniar, B. Apduhan, and O. Gervasi, Eds. Berlin Heidelberg: Springer, 2013, pp. 84–95.
[11] M. Siddiqui and T. Fahringer, “Model,” in Grid Resource Management, vol. 5951, Berlin Heidelberg: Springer, 2010, pp. 17–44.
[12] M. Siddiqui and T. Fahringer, “Grid Resource Management and Brokerage System,” in Grid Resource Management, vol. 5951, Berlin Heidelberg: Springer, 2010, pp. 47–78.
[13] M. B. Qureshi, M. M. Dehnavi, N. Min-Allah, M. S. Qureshi, H. Hussain, I. Rentifis, N. Tziritas, T. Loukopoulos, S. U. Khan, C.-Z. Xu, and A. Y. Zomaya, “Survey on Grid Resource Allocation Mechanisms,” J. Grid Comput., vol. 12, no. 2, pp. 399–441, Apr. 2014.
[14] G. Zapfel, R. Braune, and M. Bogl, Metaheuristic Search Concepts: A Tutorial with Applications to Production and Logistics. Heidelberg: Springer, 2010.
[15] X.-S. Yang, Nature-Inspired Optimization Algorithms. Amsterdam: Elsevier, 2014.
[16] M. Dorigo and T. Stutzle, Ant Colony Optimization. Cambridge, Mass.: MIT Press, 2004.
[17] M. Gendreau and J.-Y. Potvin, Handbook of Metaheuristics. New York: Springer, 2010.
[18] F. Glover and M. Laguna, Tabu Search. Boston: Kluwer Academic, 1997.
[19] A. Colorni, M. Dorigo, and V. Maniezzo, “Distributed Optimization by Ant Colonies,” in Proceedings of the European Conference on Artificial Life, 1991, pp. 134–142.
[20] M. Dorigo, V. Maniezzo, and A. Colorni, “Ant System: Optimization by a Colony of Cooperating Agents,” J. IEEE Trans. Syst. Man, Cybern. B, Cybern., vol. 26, no. 1, pp. 29–41, Jan. 1996.
[21] B. Bullnheimer, R. F. Hartl, and C. Strauss, “A New Rank-Based Version of the Ant System: A Computational Study,” Cent. Eur. J. Oper. Res. Econ., vol. 7, no. 1, pp. 25–38, 1999.
[22] T. Stutzle and H. H. Hoos, “MAX-MIN Ant System,” J. Futur. Gener. Comput. Syst., vol. 16, no. 8, pp. 889–914, 2000.
[23] M. Dorigo and L. M. Gambardella, “Ant Colony System: a Cooperative Learning Approach to the Traveling Salesman Problem,” J. IEEE Trans. Evol. Comput., vol. 1, no. 1, pp. 53–66, 1997.
[24] F. Xhafa and A. Abraham, “Meta-heuristics for Grid Scheduling Problems,” in Metaheuristics for Scheduling in Distributed Computing Environments, F. Xhafa and A. Abraham, Eds. Heidelberg: Springer, 2008, pp. 1–37.
[25] T. D. Braun et al., “A Comparison of Eleven Static Heuristics for Mapping a Class of Independent Tasks onto Heterogeneous Distributed Computing Systems,” J. Parallel Distrib. Comput., vol. 61, no. 6, pp. 810–837, 2001.
[26] J. Kołodziej and F. Xhafa, “Enhancing the Genetic-Based Scheduling in Computational Grids by a Structured Hierarchical Population,” J. Futur. Gener. Comput. Syst., vol. 27, no. 8, pp. 1035–1046, 2011.
[27] G. Ritchie and J. Levine, “A Hybrid Ant Algorithm for Scheduling Independent Jobs in Heterogeneous Computing Environments,” in Proceedings of the 23rd Workshop of the UK Planning and Scheduling Special Interest Group, 2004, pp. 1–7.
[28] M. M. Alobaedy and K. R. Ku-Mahamud, “Scheduling Jobs in Computational Grid using Hybrid ACS and GA Approach,” in Proceedings of the International Conference on Computing, Communications and IT Applications, 2014, pp. 223–228.
[29] J. Carretero, F. Xhafa, and A. Abraham, “Genetic Algorithm Based Schedulers for Grid Computing Systems,” Int. J. Innov. Comput. Inf. Control, vol. 3, no. 6, pp. 1–19, 2007.
[30] F. Xhafa and B. Duran, “Parallel Memetic Algorithms for Independent Job Scheduling in Computational Grids,” in Recent Advances in Evolutionary Computation for Combinatorial Optimization, C. Cotta and J. van Hemert, Eds. Heidelberg: Springer, 2008, pp. 219–239.
[31] H. Izakian, A. Abraham, and V. Snášel, “Performance Comparison of Six Efficient Pure Heuristics for Scheduling Meta-Tasks on Heterogeneous Distributed Environments,” J. Neural Netw. World, vol. 6, no. 09, pp. 695–711, 2009.