## No full-text available

To read the full text of this research, you can request a copy directly from the authors.


... Many researchers have applied the IWD algorithm to workflow scheduling in the cloud. Kalra et al. [33] modified the probability function of the IWD algorithm with a focus on minimizing the makespan. The algorithm performs better on large workflows. ...

... The workflow tasks are assigned to the cloud VMs level by level according to the best paths discovered. The algorithm fares better when scheduled on heterogeneous VMs. ... [The remainder of this excerpt is residue from a survey table pairing studies [28]–[44] (Choudhary et al. 2018 through Yuan et al. 2020) with algorithms such as GSA, IWD, HS, and SA.]

Task scheduling is a critical issue in distributed computing environments such as cloud and fog. The objective is to distribute tasks optimally among the available resources. Several research initiatives that use metaheuristic techniques to find near-optimal solutions to task scheduling problems are under way. This study presents a comprehensive taxonomic review and analysis of recent metaheuristic scheduling techniques in cloud and fog environments, using exhaustive evaluation criteria. A taxonomy of metaheuristic scheduling algorithms is presented. In addition, we consider an extensive list of scheduling objectives along with their associated metrics. A rigorous evaluation of the existing literature is performed and its limitations are highlighted. We also focus on hybrid algorithms, as they tend to improve scheduling performance. We believe this work will encourage researchers to conduct further research that addresses the limitations of existing studies.

... A water-drop-based scheduling algorithm was proposed for cloud computing environments to reduce the makespan when mapping scientific workflows. The results showed that the proposed algorithm outperforms the SGA and PSO algorithms in terms of makespan [29]. A novel water-drop-based localization algorithm was also presented for wireless sensor networks. ...

The land of Mosul city comprises soil, cultivated land, stony and pastoral land, water, and ploughed agricultural land. We classified multispectral images captured by the Thematic Mapper (TM) sensor carried on the Landsat satellite. An integrated approach based on the intelligent water drops (IWD) algorithm is used to identify natural terrain features. In this research, IWD is proposed to find the best results for multispectral image classification; the aim is accurate and fast results, assessed by comparing the IWD algorithm with the K-means algorithm. The IWD algorithm is implemented in the MATLAB R2017b environment to demonstrate the effectiveness of the proposed methodology, which is applied to satellite images of Mosul city in Iraq. Comparing IWD with K-means, we found a clear runtime advantage for the IWD algorithm: 1.4122 versus 18.9475 for K-means. Furthermore, the classification accuracy of the IWD algorithm is 95%, while that of K-means is 83.3%. Based on this analysis, we conclude that IWD is a robust and promising approach for detecting remote sensing image changes and for multispectral image classification.

... The IWD algorithm has been applied to a variety of problems, including: classification of spam email [15], workflow scheduling in a cloud environment [5], natural terrain feature identification [6], the capacitated vehicle routing problem [18], the multi-echelon supply chain optimization problem [7], multi-objective job shop scheduling [10], the optimal reactive power dispatch problem [8], and the robot path planning problem [12]. ...

Cloud computing provides many advantageous services to its users, such as online resource accessibility, better cost management, dynamic resource pooling, and efficient virtual machine (VM) allocation. Alongside these services, cloud computing also introduces challenging issues: security risks to the privacy of cloud users' information, and the need to select the minimum number of VMs that can execute the task load while improving resource utilization. The objective of this work is to achieve a higher resource utilization rate while safeguarding the security of cloud data. We extend the nature-inspired Intelligent Water Drops (IWD) algorithm and propose a VM allocation algorithm that optimizes task execution in a secure cloud environment. The proposed work is implemented on the CloudSim simulation toolkit and, to validate the algorithm, compared against other well-known VM allocation policies in cloud computing. The experimental simulation results show that the proposed VM allocation policy performs better than existing VM allocation approaches.

In this paper, we propose an improved Intelligent Water Drop (IWD) algorithm. The original IWD algorithm was inspired by observing the dynamic flow of water in river systems and the actions of water drops, which act as agents searching for an optimal solution. We modify the original IWD algorithm and propose an improved variant of it, which we apply to a real-life waste collection problem. Our algorithms show promising results.
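The core IWD mechanics that such variants build on can be sketched as follows. This is an illustrative reading of the standard soil-based edge probability and velocity/soil updates; the parameter values, names, and exact formulas are common defaults from the IWD literature, not the specific modification proposed in any one of the papers listed here.

```python
# Illustrative parameter values commonly used in the IWD literature.
A_V, B_V, C_V = 1.0, 0.01, 1.0   # velocity-update constants
A_S, B_S, C_S = 1.0, 0.01, 1.0   # soil-update constants
RHO_N = 0.9                      # local soil-update factor
EPS = 1e-4

def edge_probability(soil, current, candidates):
    """P(next = j): edges carrying less soil are more likely to be chosen."""
    min_soil = min(soil[(current, k)] for k in candidates)
    def g(j):
        s = soil[(current, j)]
        return s - min_soil if min_soil < 0 else s
    f = {j: 1.0 / (EPS + g(j)) for j in candidates}
    total = sum(f.values())
    return {j: f[j] / total for j in candidates}

def move_drop(soil, velocity, drop_soil, current, nxt, hud):
    """One move: update drop velocity, the edge's soil, and the carried soil.
    `hud` is the heuristic undesirability of the edge (e.g. its length)."""
    velocity += A_V / (B_V + C_V * soil[(current, nxt)] ** 2)
    travel_time = hud / max(velocity, EPS)
    delta = A_S / (B_S + C_S * travel_time ** 2)
    soil[(current, nxt)] = (1 - RHO_N) * soil[(current, nxt)] - RHO_N * delta
    return velocity, drop_soil + delta
```

Edges repeatedly traversed by good drops lose soil, which raises their selection probability for later drops; this feedback loop is what the improved variants tune.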

Workflow scheduling is a key component of optimal workflow enactment. It is a well-known NP-hard problem and is even more challenging in heterogeneous computing environments. The increasing complexity of workflow applications is pushing researchers to explore hybrid approaches to the workflow scheduling problem. The performance of genetic algorithms can be enhanced by modifying the genetic operators and incorporating an efficient heuristic; both features are built into the proposed Hybrid Genetic Algorithm (HGA). A solution obtained from a heuristic is seeded into the initial population, providing a direction toward an optimal-makespan solution. The modified twofold genetic operators search rigorously and converge the algorithm to the best solution in less time, which proves to be the strength of the HGA in optimizing the fundamental scheduling objective, makespan. The proposed algorithm also optimizes load balancing during execution to utilize resources maximally. The performance of the proposed algorithm is analyzed using synthesized datasets and real-world application workflows. The HGA is evaluated by comparing its results with renowned, state-of-the-art algorithms. The experimental results validate that the HGA outperforms these approaches and provides quality schedules with shorter makespans.
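The seeding idea described above can be sketched as follows. This is a minimal illustration assuming permutation-encoded schedules; the function name and the heuristic solution are placeholders, not the HGA's exact operators.

```python
import random

def seeded_population(num_tasks, pop_size, heuristic_schedule, seed=0):
    """Initial GA population: one heuristic-derived schedule plus random
    permutations, so the search starts near a known-good region."""
    rng = random.Random(seed)
    population = [list(heuristic_schedule)]       # the seeded solution
    while len(population) < pop_size:
        chrom = list(range(num_tasks))
        rng.shuffle(chrom)                        # random permutation
        population.append(chrom)
    return population
```

Because the seed solution survives selection if nothing better appears, the GA's result is lower-bounded by the heuristic's quality.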

Cost minimization with due dates in cloud workflow scheduling is an intractable problem. Taking into account the pay-per-use and resource virtualization characteristics of cloud computing, this paper presents a QoS-based hybrid particle swarm optimization (GHPSO) to schedule applications onto cloud resources. In GHPSO, the crossover and mutation operators of the genetic algorithm are embedded into particle swarm optimization (PSO) so that it can operate on discrete problems. In addition, a variability index that changes with the number of iterations is proposed to ensure that the population retains high global search ability during the early stage of evolution without premature convergence. A hill-climbing algorithm is also introduced into the PSO to improve local search ability and maintain population diversity. The simulation results show that GHPSO achieves better performance than the standard particle swarm algorithm in minimizing costs within a given execution time.
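The idea of an iteration-dependent variability index that shifts the balance from global exploration to local refinement can be sketched as a decaying mutation rate applied to permutation-encoded particles. This is an illustrative reading with hypothetical names and schedule, not GHPSO's exact operator.

```python
import random

def mutate_particle(position, iteration, max_iter, seed=1):
    """Swap-mutate a permutation-encoded particle with a probability that
    decays over iterations: broad exploration early, fine search late."""
    rng = random.Random(seed)
    rate = 0.5 * (1.0 - iteration / max_iter)  # illustrative decay schedule
    pos = list(position)
    if rng.random() < rate:
        i, j = rng.sample(range(len(pos)), 2)
        pos[i], pos[j] = pos[j], pos[i]
    return pos
```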

Cloud computing is a booming area that has recently emerged as a commercial reality in the information technology domain. It represents a supplement, consumption, and delivery model for IT services that are provided over the internet on a pay-per-use basis. The scheduling of cloud services to consumers by service providers influences the cost benefit of this computing paradigm; in such a scenario, tasks should be scheduled efficiently so that execution cost and time are reduced. In this paper, we propose a metaheuristic-based scheduling approach that minimizes both execution time and execution cost. An improved genetic algorithm is developed by merging two existing scheduling algorithms, taking into consideration the computational complexity of tasks and the computing capacity of processing elements. Experimental results show that, under heavy loads, the proposed algorithm exhibits good performance.

Simulation is one of the most popular evaluation methods in scientific workflow studies. However, existing workflow simulators fail to provide a framework that takes into consideration heterogeneous system overheads and failures. They also lack support for widely used workflow optimization techniques such as task clustering. In this paper, we introduce WorkflowSim, which extends the existing CloudSim simulator with a higher layer of workflow management. We also show that ignoring system overheads and failures when simulating scientific workflows can cause significant inaccuracies in the predicted workflow runtime. To further validate its value in promoting other research work, we introduce two promising research areas for which WorkflowSim provides a unique and effective evaluation platform.
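Task clustering, one of the optimization techniques mentioned above, can be sketched as grouping same-level workflow tasks into fewer clustered jobs. This is a generic sketch of horizontal clustering under assumed inputs, not WorkflowSim's API.

```python
def horizontal_clustering(levels, clusters_per_level):
    """Merge the tasks of each workflow level into at most
    `clusters_per_level` clustered jobs (round-robin assignment), so the
    scheduler pays per-job overhead fewer times."""
    clustered = []
    for tasks in levels:
        k = min(clusters_per_level, len(tasks))
        groups = [[] for _ in range(k)]
        for i, task in enumerate(tasks):
            groups[i % k].append(task)
        clustered.append(groups)
    return clustered
```

With overheads modeled per job rather than per task, halving the number of jobs per level roughly halves that portion of the simulated runtime, which is why ignoring overheads skews predictions.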

In heterogeneous distributed computing systems such as cloud computing, mapping tasks to resources is a major issue that strongly affects system performance. Owing to factors such as heterogeneity, dynamic behavior, and the dependencies among requests, task scheduling is known to be an NP-complete problem. In this paper, we propose a hybrid heuristic method (HSGA), based on a genetic algorithm, to find a suitable schedule for a workflow graph quickly while optimizing makespan, load balancing on resources, and speedup ratio. First, the HSGA algorithm prioritizes the tasks of a complex graph according to their impact on the others, based on the graph topology; this technique is effective in reducing the application completion time. It then merges the Best-Fit and Round Robin methods to build a good initial population quickly, and applies suitable operators such as mutation to guide the algorithm toward an optimized solution. The algorithm evaluates solutions using parameters relevant to the cloud environment. Finally, the proposed algorithm produces better results as the number of tasks in the application graph increases, in contrast with the other algorithms studied.

Purpose
The purpose of this paper is to test the capability of a new population‐based optimization algorithm for solving an NP‐hard problem, called “Multiple Knapsack Problem”, or MKP.
Design/methodology/approach
Here, the intelligent water drops (IWD) algorithm, which is a population‐based optimization algorithm, is modified to include a suitable local heuristic for the MKP. Then, the proposed algorithm is used to solve the MKP.
Findings
The proposed IWD algorithm for the MKP is tested on standard problems, and the results demonstrate that the proposed IWD-MKP algorithm is reliable and promising in finding optimal or near-optimal solutions. It is also proved that the IWD algorithm has the property of convergence in value.
Originality/value
This paper introduces the new optimization algorithm, IWD, applied for the first time to the MKP, and shows that the IWD is applicable to this NP-hard problem. This research paves the way to modifying the IWD for other optimization problems. Moreover, it opens the way to obtaining possibly better results by modifying the proposed IWD-MKP algorithm.
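The kind of local heuristic referred to in the design section above can be sketched as a profit-density ordering, a classic greedy guide for knapsack-type problems. This is an illustrative heuristic under assumed inputs, not necessarily the exact one used in the paper.

```python
def profit_density_order(profits, weights):
    """Rank items by profit per unit of summed resource consumption,
    highest first. `weights[i]` lists item i's weight in each knapsack
    constraint."""
    density = [(p / sum(w), i)
               for i, (p, w) in enumerate(zip(profits, weights))]
    return [i for _, i in sorted(density, reverse=True)]
```

An IWD-style constructor can then bias edge desirability toward items that rank high in this ordering while the soil feedback handles the global search.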

Workflow scheduling is one of the key issues in the management of workflow execution. Scheduling is a process that maps and manages the execution of inter-dependent tasks on distributed resources. It involves allocating suitable resources to workflow tasks so that execution can be completed in a way that satisfies the objective functions specified by users. Proper scheduling can have a significant impact on the performance of the system. In this chapter, we investigate existing workflow scheduling algorithms developed and deployed by various Grid projects.

In this paper, we propose a new problem-solving algorithm called "intelligent water drops", or the IWD algorithm, which is based on the processes that occur in natural river systems, the actions and reactions that take place between the water drops in a river, and the changes in the environment through which the river flows. It is observed that a river often chooses an optimal path, given the conditions of its surroundings, to reach its ultimate goal, which is often a lake or a sea. These ideas are embedded into the proposed algorithm and applied to the traveling salesman problem (TSP). The IWD algorithm is tested on artificial and standard TSP instances, and the experimental results demonstrate that it is a very promising problem-solving algorithm that deserves further research to improve it and to adapt it to other engineering problems.

Cloud computing environments facilitate applications by providing virtualized resources that can be provisioned dynamically. However, users are charged on a pay-per-use basis. User applications may incur large data retrieval and execution costs when they are scheduled taking into account only the `execution time'. In addition to optimizing execution time, the cost arising from data transfers between resources as well as execution costs must also be taken into account. In this paper, we present a particle swarm optimization (PSO) based heuristic to schedule applications to cloud resources that takes into account both computation cost and data transmission cost. We experiment with a workflow application by varying its computation and communication costs. We compare the cost savings when using PSO and existing `Best Resource Selection' (BRS) algorithm. Our results show that PSO can achieve: (a) as much as 3 times cost savings as compared to BRS, and (b) good distribution of workload onto resources.
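The fitness that such a PSO minimizes can be sketched as a total-cost function over a task-to-resource mapping, combining the two cost terms described above. This is a minimal illustration with hypothetical cost tables, not the paper's exact formulation.

```python
def schedule_cost(mapping, exec_cost, edges, transfer_cost):
    """Total cost of a mapping: each task's execution cost on its chosen
    resource, plus transfer cost for every data dependency whose endpoint
    tasks run on different resources."""
    cost = sum(exec_cost[task][res] for task, res in mapping.items())
    for (src, dst), data_size in edges.items():
        r1, r2 = mapping[src], mapping[dst]
        if r1 != r2:                       # co-located tasks transfer for free
            cost += data_size * transfer_cost[r1][r2]
    return cost
```

Each PSO particle encodes one `mapping`; the swarm then searches for the mapping with the lowest total cost rather than the shortest execution time alone.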

A natural river often finds good paths among the many possible paths on its way from source to destination. These near-optimal or optimal paths are obtained through the actions and reactions that occur among the water drops, and between the water drops and the riverbed. The intelligent water drops (IWD) algorithm is a new swarm-based optimisation algorithm inspired by observing natural water drops flowing in rivers. In this paper, the IWD algorithm is applied to the n-queen puzzle with a simple local heuristic. The travelling salesman problem (TSP) is also solved with a modified IWD algorithm. Moreover, the IWD algorithm is tested on several multiple knapsack problems (MKPs), for which near-optimal or optimal solutions are obtained.

Grid technologies have progressed towards a service-oriented paradigm that enables a new way of service provisioning based on utility computing models capable of supporting diverse computing services. This facilitates scientific applications in taking advantage of computing resources distributed worldwide to enhance capability and performance. Many scientific applications in areas such as bioinformatics and astronomy require workflow processing, in which tasks are executed according to their control or data dependencies. Scheduling such interdependent tasks in utility Grid environments needs to consider users' QoS requirements. In this paper, we present a genetic algorithm approach to scheduling optimization problems in workflow applications subject to two QoS constraints, deadline and budget.

Multiprocessor task scheduling is an important and computationally difficult problem. Multiprocessors have emerged as a powerful computing means for running real-time applications, especially where a uniprocessor system would not be sufficient to execute all the tasks. Such a computing environment requires an efficient algorithm to determine when, and on which processor, a given task should execute. A task can be partitioned into a group of subtasks and represented as a directed acyclic graph (DAG), so the problem can be stated as finding a schedule for a DAG to be executed on a parallel multiprocessor system. The problem of mapping meta-tasks to machines is known to be NP-complete and can therefore be solved only with heuristic approaches. The execution time requirements of the applications' tasks are assumed to be stochastic. In the multiprocessor scheduling problem, a given program is to be scheduled on a given multiprocessor system such that the program's execution time is minimized: the last job must be completed as early as possible. The genetic algorithm (GA) is one of the widely used techniques for constrained optimization, and its performance can be improved by introducing knowledge about the scheduling problem in the form of heuristics. In this paper, ties between tasks with the same execution or completion time and the same precedence in a homogeneous parallel system are resolved using the bottom-level (b-level) or top-level (t-level) concept. This combined approach, named heuristics-based genetic algorithm (HGA), builds on the MET (minimum execution time) and Min-Min heuristics with b-level or t-level precedence resolution, and is compared with a pure genetic algorithm, the Min-Min heuristic, the MET heuristic, and a First Come First Serve (FCFS) approach. Experimental results show that the heuristics-based genetic algorithm produces much better results in terms of solution quality.
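The b-level used above for precedence resolution can be computed recursively over the DAG. A minimal sketch, assuming `dag` maps each node to its successors, `weights` holds node execution costs, and `comm` holds edge communication costs:

```python
def b_level(dag, weights, comm):
    """Bottom level of each node: its own weight plus the costliest path
    (edge communication + successor b-level) down to an exit node."""
    memo = {}
    def bl(node):
        if node not in memo:
            succs = dag.get(node, [])
            down = max((comm[(node, s)] + bl(s) for s in succs), default=0)
            memo[node] = weights[node] + down
        return memo[node]
    for node in weights:
        bl(node)
    return memo
```

The t-level is computed symmetrically over predecessors; ranking tied tasks by either value gives the tie-breaking rule the HGA relies on.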

Workflows have recently emerged as a paradigm for representing and managing complex distributed scientific computations and therefore accelerate the pace of scientific progress. A recent workshop on the Challenges of Scientific Workflows, sponsored by the National Science Foundation and held on May 1-2, 2006, brought together domain scientists, computer scientists, and social scientists to discuss requirements of future scientific applications and the challenges that they present to current workflow technologies. This paper reports on the discussions and recommendations of the workshop, the full report can be found at http://www.isi.edu/nsf-workflows06.


Cloud computing is an emerging technology that offers high performance and allows users to pay as they go. It is also a heterogeneous system that holds large amounts of application data. When scheduling data-intensive or computation-intensive applications, optimizing the transfer and processing time is crucial to the application. In this paper, in order to minimize processing cost, we formulate a model for task scheduling and propose a particle swarm optimization (PSO) algorithm based on the small position value rule. Comparing this PSO algorithm with a PSO algorithm embedded with crossover and mutation and with local search, the experimental results show that it not only converges faster but also runs faster than the other two algorithms on large-scale problems. The experimental results indicate that the PSO algorithm is well suited to cloud computing.

Cloud computing is the latest distributed computing paradigm and it offers tremendous opportunities to solve large-scale scientific problems. However, it presents various challenges that need to be addressed in order to be efficiently utilized for workflow applications. Although the workflow scheduling problem has been widely studied, there are very few initiatives tailored for cloud environments. Furthermore, the existing works fail to either meet the user's quality of service (QoS) requirements or to incorporate some basic principles of cloud computing such as the elasticity and heterogeneity of the computing resources. This paper proposes a resource provisioning and scheduling strategy for scientific workflows on Infrastructure as a Service (IaaS) clouds. We present an algorithm based on the meta-heuristic optimization technique, particle swarm optimization (PSO), which aims to minimize the overall workflow execution cost while meeting deadline constraints. Our heuristic is evaluated using CloudSim and various well-known scientific workflows of different sizes. The results show that our approach performs better than the current state-of-the-art algorithms.

In this article, the Intelligent Water Drops (IWD) algorithm is adapted for feature selection with Rough Sets (RS). Specifically, IWD is used to search for a subset of features using RS dependency as the evaluation function. The resulting system, called IWDRSFS (Intelligent Water Drops for Rough Set Feature Selection), is evaluated on six benchmark data sets. The performance of IWDRSFS is analysed and compared with that of other methods in the literature. The outcomes indicate that IWDRSFS provides competitive and comparable results. In summary, this study shows that IWD is a useful method for undertaking feature selection problems with RS.
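The RS dependency measure used as the evaluation function can be sketched as the fraction of objects whose condition-attribute equivalence class is consistent on the decision attribute. This is the textbook formulation of the dependency degree, with hypothetical argument names, not IWDRSFS's exact code.

```python
from collections import defaultdict

def dependency_degree(universe, cond_attrs, decision_attr, table):
    """gamma(C, D) = |POS_C(D)| / |U|: the share of objects whose
    C-equivalence class maps to a single decision value."""
    classes = defaultdict(list)
    for obj in universe:
        key = tuple(table[obj][a] for a in cond_attrs)
        classes[key].append(obj)              # group by condition values
    positive = sum(len(objs) for objs in classes.values()
                   if len({table[o][decision_attr] for o in objs}) == 1)
    return positive / len(universe)
```

A feature subset scoring gamma equal to that of the full attribute set is a candidate reduct, which is what the IWD search looks for.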

Task scheduling and resource allocation are two of the most important issues in grid computing. In a grid computing system, the workflow management system receives inter-dependent tasks from users and allocates each task to an appropriate resource. The assignment is based on user constraints such as budget and deadline. Thus, the workflow management system has a significant effect on system performance and efficient resource use. In general, optimal task scheduling is an NP-complete problem, so heuristic and meta-heuristic methods are employed to obtain a solution which is close to optimal. In this paper, workflow management based on a multi-objective Genetic Algorithm (GA) is proposed to improve grid computing performance. Since task runtime is an important parameter in grid computing, the proposed method treats a workflow as a collection of levels, which eliminates the need to re-check workflow dependencies after a new population is produced. As a result, both scheduling time and solution quality are improved. Results are presented which show that the proposed method has better performance compared to similar techniques.

Multi-objective job shop scheduling (MOJSS) problems can be found in various application areas. The efficient solution of MOJSS problems has received continuous attention. In this research, a new meta-heuristic algorithm, namely the Intelligent Water Drops (IWD) algorithm is customized for solving the MOJSS problem. The optimization objective of MOJSS in this research is to find the best compromising solutions (Pareto non-dominance set) considering multiple criteria, namely makespan, tardiness and mean flow time of the schedules. MOJSS-IWD, which is a modified version of the original IWD algorithm, is proposed to solve the MOJSS problem. A scoring function which gives each schedule a score based on its multiple criteria values is embedded into the MOJSS-IWD’s local search process. Experimental evaluation shows that the customized IWD algorithm can identify the Pareto non-dominance schedules efficiently.
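Selecting the Pareto non-dominance set over (makespan, tardiness, mean flow time) can be sketched as a dominance filter, with all criteria minimized. This is a generic sketch of non-dominance, not the paper's scoring function.

```python
def pareto_front(schedules):
    """Keep only the schedules no other schedule dominates; each schedule
    is a tuple of criteria values, all of which are minimized."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [s for s in schedules
            if not any(dominates(other, s) for other in schedules if other != s)]
```

A scoring function like the one MOJSS-IWD embeds can then rank members of this front to guide the local search.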

Researchers working on the planning, scheduling, and execution of scientific workflows need access to a wide variety of scientific workflows to evaluate the performance of their implementations. This paper provides a characterization of workflows from six diverse scientific applications, including astronomy, bioinformatics, earthquake science, and gravitational-wave physics. The characterization is based on novel workflow profiling tools that provide detailed information about the various computational tasks that are present in the workflow, including I/O, memory, and computational characteristics. Although the workflows are diverse, there is evidence that each workflow has a job type that consumes the most runtime. The study also uncovered an inefficiency in a workflow component implementation, where the component was re-reading the same data multiple times.

Job-shop scheduling is a typical NP-hard problem which has drawn continuous attention from researchers. In this paper, the Intelligent Water Drops (IWD) algorithm, a recent meta-heuristic, is customised for solving job-shop scheduling problems. Five schemes are proposed to improve the original IWD algorithm, and the improved algorithm is named the Enhanced IWD (EIWD) algorithm. The optimisation objective is the makespan of the schedule. Experimental results show that the EIWD algorithm is able to find better solutions for the standard benchmark instances than the existing algorithms. This paper makes a contribution in two respects. First, to the best of the authors' knowledge, this research is the first to apply the IWD algorithm to the job-shop scheduling problem; this work can inspire further studies applying the IWD algorithm to other scheduling problems, such as open-shop and flow-shop scheduling. Second, this research further improves the original IWD algorithm by employing five schemes that increase the diversity of the solution space as well as the solution quality.

Grid applications built on the Open Grid Services Architecture (OGSA) are a promising next-generation computation technique. One of the most important and challenging problems in grid applications is workflow scheduling that achieves the users' QoS (quality of service) requirements while minimizing cost. This paper proposes an ant colony optimization (ACO) algorithm to tackle this problem. Several new features are introduced. First, we define two kinds of pheromone and three kinds of heuristic information to guide the search direction of ants for this bi-criteria problem; each ant uses one of these heuristic and pheromone types in each iteration, based on probabilities controlled by two parameters that are adaptively adjusted as the algorithm proceeds. Second, we use the information of partial solutions to modify the bias of ants so that inferior choices are ignored. The experimental results on three workflow applications under different deadline constraints show that the performance of our algorithm is very promising, as it outperforms the Deadline-MDP algorithm in most cases.

A cloud workflow system is a type of platform service which facilitates the automation of distributed applications based on the novel cloud infrastructure. Compared with the grid environment, data transfer is a big overhead for cloud workflows due to the market-oriented business model of cloud environments. In this paper, a Revised Discrete Particle Swarm Optimization (RDPSO) is proposed to schedule applications among cloud services, taking both data transmission cost and computation cost into account. Experiments are conducted with a set of workflow applications by varying their data communication and computation costs according to a cloud price model. Makespan, cost optimization ratio, and cost savings are compared across RDPSO, standard PSO, and the BRS (Best Resource Selection) algorithm. Experimental results show that the proposed RDPSO algorithm can achieve much greater cost savings and better performance on makespan and cost optimization.

This paper examines issues related to the execution of scientific applications, and in particular computational workflows, on Cloud-based infrastructure. The paper describes the layering of application-level schedulers on top of the Cloud resources that enables grid-based applications to run on the Cloud. Finally, the paper examines issues of Cloud data management that supports workflow execution. We show how various ways of handling data have impact on the cost of the overall computations.

The grid workflow scheduling problem has been a research focus in grid computing in recent years. Various deterministic and meta-heuristic scheduling approaches have been proposed to solve this NP-complete problem. These existing algorithms, however, are not suitable for a class of workflows, namely time-varying workflows, whose topologies change over time. In this paper, we propose an ant colony optimization (ACO) approach to tackle this kind of scheduling problem. The algorithm evaluates the overall performance of a schedule by tracing the sequence of its topologies over a period. Moreover, integrated pheromone information is designed to balance the workflow's cost and makespan. In the case study, a 9-task grid workflow with four topologies is used to test our approach. Experimental results demonstrate the effectiveness and robustness of the proposed algorithm.

Service-oriented grid environments enable a new way of service provisioning based on utility computing models, where users consume services based on their QoS (Quality of Service) requirements. In such "pay-per-use" Grids, workflow execution cost must be considered during scheduling based on users' QoS constraints. In this paper, we propose a knowledge-based ant colony optimization algorithm (KBACO) for grid workflow scheduling with consideration of two QoS constraints, deadline and budget. The objective of this algorithm is to find a solution that minimizes execution cost while meeting the deadline imposed by users' QoS requirements. Based on the characteristics of workflow scheduling, we define pheromone in terms of cost and design a heuristic in terms of the latest start time of tasks in workflow applications. Moreover, a knowledge matrix is defined for the ACO approach to integrate the ACO model with the knowledge model. Experimental results show that our algorithm achieves solutions effectively and efficiently.
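The latest-start-time quantity such a heuristic is built on can be computed backwards from the deadline over the workflow DAG. A minimal sketch with illustrative names, assuming `dag` maps nodes to successors, `weights` holds task runtimes, and `comm` holds edge communication costs:

```python
def latest_start_times(dag, weights, comm, deadline):
    """Latest start time of each task such that every successor chain can
    still finish by the deadline."""
    memo = {}
    def lst(node):
        if node not in memo:
            succs = dag.get(node, [])
            finish = (deadline if not succs
                      else min(lst(s) - comm[(node, s)] for s in succs))
            memo[node] = finish - weights[node]
        return memo[node]
    for node in weights:
        lst(node)
    return memo
```

Tasks with small latest start times have little slack, so an ACO heuristic can prioritize them when constructing schedules under a deadline constraint.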

Grid computing is increasingly considered a promising next-generation computational platform that supports wide-area parallel and distributed computing. In grid environments, applications are usually regarded as workflows. The problem of scheduling workflows subject to certain quality of service (QoS) requirements is challenging, and it significantly influences the performance of grids. A number of algorithms for grid workflow scheduling exist, but most of them can only tackle problems with a single QoS parameter or with small-scale workflows. In this context, this paper proposes an ant colony optimization (ACO) algorithm to schedule large-scale workflows with various QoS parameters. This algorithm enables users to specify their QoS preferences as well as define the minimum QoS thresholds for a given application. The objective is to find a solution that meets all QoS constraints and optimizes the user-preferred QoS parameter. Based on the characteristics of workflow scheduling, we design seven new heuristics for the ACO approach and propose an adaptive scheme that allows artificial ants to select heuristics based on pheromone values. Experiments on ten workflow applications with up to 120 tasks demonstrate the effectiveness of the proposed algorithm.

A unified view of metaheuristics. This book provides a complete background on metaheuristics and shows readers how to design and implement efficient algorithms to solve complex optimization problems across a diverse range of applications, from networking and bioinformatics to engineering design, routing, and scheduling. It presents the main design questions for all families of metaheuristics and clearly illustrates how to implement the algorithms under a software framework to reuse both the design and code. Throughout the book, the key search components of metaheuristics are considered as a toolbox for:

- Designing efficient metaheuristics (e.g. local search, tabu search, simulated annealing, evolutionary algorithms, particle swarm optimization, scatter search, ant colonies, bee colonies, artificial immune systems) for optimization problems.
- Designing efficient metaheuristics for multi-objective optimization problems.
- Designing hybrid, parallel, and distributed metaheuristics.
- Implementing metaheuristics on sequential and parallel machines.

Using many case studies and treating design and implementation independently, this book gives readers the skills necessary to solve large-scale optimization problems quickly and efficiently. It is a valuable reference for practicing engineers and researchers from diverse areas dealing with optimization or machine learning, and for graduate students in computer science, operations research, control, engineering, business and management, and applied mathematics.

Intelligent water drops algorithm: a new optimization method for solving the vehicle routing problem

- shah-hosseini