Journal of Ambient Intelligence and Humanized Computing (2023) 14:4313–4327
https://doi.org/10.1007/s12652-023-04541-9
ORIGINAL RESEARCH
An improved particle swarm optimization algorithm for task scheduling in cloud computing
Poria Pirozmand¹ · Hoda Jalalinejad² · Ali Asghar Rahmani Hosseinabadi³ · Seyedsaeid Mirkamali⁴ · Yingqiu Li¹
Received: 30 April 2022 / Accepted: 19 January 2023 / Published online: 15 February 2023
© The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2023
Abstract
In the context of cloud computing, the task scheduling problem has an immediate effect on service quality. Task scheduling is the process of assigning work to available resources based on requirements. The objective of this NP-hard problem is to identify the ideal schedule for resource allocation so that more tasks can be completed in less time. Several algorithms have been proposed to date to solve the task scheduling problem. This paper proposes an Improved Particle Swarm Optimization (IPSO) algorithm to address this issue. A multi-adaptive learning strategy is employed to shorten the execution time of the original Particle Swarm Optimization (PSO) algorithm for task scheduling in the cloud computing environment. In its initial population phase, the proposed Multi-Adaptive Learning for Particle Swarm Optimization (MALPSO) defines two sorts of particles: ordinary particles and locally best particles. This phase increases the population's diversity and reduces the likelihood of becoming trapped in a local optimum. This study compares the proposed approach to various algorithms based on four criteria: makespan, load balancing, stability, and efficiency. Additionally, we evaluate the proposed technique on the CEC 2017 benchmark. Compared with the state of the art, the proposed method solves the problem in less time and obtains the best result for most of the criteria.
Keywords: Cloud computing · Task scheduling · Metaheuristic · Optimization · Improved particle swarm optimization
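Editor's illustration: the objective described in the abstract, assigning tasks to virtual machines so that the schedule finishes as early as possible, can be made concrete with a minimal sketch. This is not the authors' MALPSO implementation; the encoding (one VM index per task), the task lengths, and the VM speeds below are assumptions chosen only for illustration.

import random

# Assumed example data: task lengths (e.g. millions of instructions)
# and VM speeds (e.g. MIPS). Real workloads would come from the scheduler.
task_lengths = [400, 250, 900, 120, 600]
vm_speeds = [100, 250]

def makespan(assignment):
    """Finish time of the busiest VM under a task -> VM assignment."""
    vm_busy = [0.0] * len(vm_speeds)
    for task, vm in enumerate(assignment):
        vm_busy[vm] += task_lengths[task] / vm_speeds[vm]
    return max(vm_busy)

# A random particle in the common PSO encoding: dimension i holds
# the VM assigned to task i. Lower makespan = fitter particle.
particle = [random.randrange(len(vm_speeds)) for _ in task_lengths]
print(particle, makespan(particle))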
1 Introduction
Cloud computing is an evolving technology in the field of distributed computing and parallel processing. The popularity of cloud computing is increasing due to its unique features such as diverse services, security, resilience, and scalability (Mansouri and Javidi 2020). Cloud service providers offer services such as software, storage space, and network services to their customers. To provide such services, cloud providers must make the best use of all cloud resources, and this is done effectively by task scheduling algorithms. Important goals of task scheduling include increasing performance and Quality of Service (QoS) while reducing costs (Bansal and Malik 2020).
Cloud computing is a model for ubiquitous, convenient, on-demand access to a shared pool of configurable computing resources (e.g., networks, data centers, storage space, applications, and services) through the network; these resources can be provisioned and released quickly with minimal administrative effort or service provider interaction. The main purpose of cloud computing is to provide cloud services to users.
* Ali Asghar Rahmani Hosseinabadi
ark838@uregina.ca; a.r.hosseinabadi1987@gmail.com
Poria Pirozmand
poria@hbu.edu.cn
Hoda Jalalinejad
hoda.jalalinezhad@gmail.com
Seyedsaeid Mirkamali
s.mirkamali@pnu.ac.ir
Yingqiu Li
liyingqiu@neusoft.edu.cn
1 Hebei Key Laboratory of Machine Learning and Computational Intelligence, Hebei University, Baoding 071002, China
2 Department of Mathematics and Computer Science, Bandar Abbas Branch, Islamic Azad University, Bandar Abbas, Iran
3 Department of Computer Science, University of Regina, Regina, Canada
4 Department of Computer Engineering and IT, Payame Noor University (PNU), Tehran, Iran
... These applications highlight the critical role of efficient two-machine flow shop scheduling in improving efficiency and performance across multiple domains. To address this type of problem, many approaches have been developed in recent years, such as Particle Swarm Optimization (PSO) [32], the Pathfinder Algorithm (PFA) [30], and the Gravitational Search Algorithm (GSA) [33]. ...
... We use Equations (30) and (31) to obtain the uniform interval and center point (average) for the delivery time [6,23]. Equation (32) calculates the central point (average) delivery time (d), considering the task processing times (P) and the end time of all tasks (M) [23]: ...
... Particle Swarm Optimization (PSO) is an effective method for determining the optimal solution of a nonlinear system. The PSO method [43] was developed based on the natural behavior of schools of fish and flocks of birds that travel in groups within an n-dimensional region. In each iteration, each particle moves toward the best solution found so far. ...
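The canonical PSO update summarized in this excerpt can be written as a short sketch (standard textbook form with inertia weight w and acceleration coefficients c1 and c2; this is not the MALPSO variant proposed in the paper above):

import random

def pso_step(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One canonical PSO update for a single particle.

    pos/vel: current position and velocity; pbest: this particle's
    best-known position; gbest: the swarm's best-known position.
    """
    new_pos, new_vel = [], []
    for x, v, p, g in zip(pos, vel, pbest, gbest):
        r1, r2 = random.random(), random.random()
        # Velocity blends inertia, attraction to the personal best,
        # and attraction to the global best.
        v_next = w * v + c1 * r1 * (p - x) + c2 * r2 * (g - x)
        new_vel.append(v_next)
        new_pos.append(x + v_next)
    return new_pos, new_vel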
Article
Full-text available
Power electronic converters integrating Wide-Bandgap (WBG) semiconductor devices, based on Silicon Carbide (SiC) and Gallium Nitride (GaN), demonstrate superior efficiency compared to conventional silicon-based counterparts. This work investigates the performance of a novel WBG SiC MOSFET switch-based DC-DC boost converter in a solar-fed power system. A fractional-order PID (FOPID) controller, with gain parameters optimized by the particle swarm optimization (PSO) algorithm, is employed to control the converters. The transfer characteristics, output characteristics, and transient characteristics of the WBG switch are validated through MATLAB simulation using an available model. The capability of the proposed WBG-based FOPID-controlled DC-DC converter to maintain stability and robustness under varying irradiance as well as load transients is assessed through comprehensive MATLAB simulations. A performance comparison of the proposed DC-DC converter using Proportional Integral (PI), Proportional Integral Derivative (PID), and FOPID controllers, with both WBG and traditional MOSFET switches, was carried out. The results validate the superiority of WBG switches over conventional switches, as well as the effect of the fractional parameters on the system response. The proposed approach ensures high-efficiency performance in medium-voltage applications such as electric vehicle charging, making it a promising solution for advanced power electronics applications.
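For context, the FOPID controller mentioned above generalizes the classical PID law with fractional integration and differentiation orders; in its standard transfer-function form (the paper's tuned gain values are not reproduced here), the PSO algorithm searches over the five parameters Kp, Ki, Kd, λ, and μ:

    C(s) = Kp + Ki / s^λ + Kd · s^μ,   typically with 0 < λ, μ < 2.

Setting λ = μ = 1 recovers the ordinary PID controller, which is why FOPID tuning is a strict superset of PID tuning.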
... Pirozmand et al. [20] identified an ideal timetable for task scheduling to reduce the makespan and improve system efficiency. They proposed Multi-Adaptive Learning for Particle Swarm Optimization (MALPSO), which divides the particles into ordinary particles and locally best particles to preserve population diversity and reduce the time needed to reach the optimum. ...
Preprint
Full-text available
Assigning tasks efficiently in cloud computing is a challenging problem and is considered an NP-hard problem. Many researchers have used metaheuristic algorithms to solve it, but these often struggle to handle dynamic workloads and explore all possible options effectively. Therefore, this paper presents a new hybrid method that combines two popular algorithms, Grey Wolf Optimizer (GWO) and Particle Swarm Optimization (PSO). GWO offers strong global search capabilities (exploration), while PSO enhances local refinement (exploitation). The hybrid approach, called HybridPSOGWO, is compared with other existing methods like MPSOSA, RL-GWO, CCGP, and HybridPSOMinMin, using key performance indicators such as makespan, throughput, and load balancing. We tested our approach using both a simulation tool (CloudSim Plus) and real-world data. The results show that HybridPSOGWO outperforms other methods, with up to 15% improvement in makespan and 10% better throughput, while also distributing tasks more evenly across virtual machines. Our implementation achieves consistent convergence within a few iterations, highlighting its potential for efficient and adaptive cloud scheduling.
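The three indicators named in this preprint are commonly computed from per-VM finish times; a hedged sketch under those common definitions (the preprint may define them differently):

def metrics(vm_finish_times, n_tasks):
    """Common scheduling KPIs; the cited preprint may use variants."""
    makespan = max(vm_finish_times)                      # longest VM finish time
    throughput = n_tasks / makespan                      # tasks completed per unit time
    avg = sum(vm_finish_times) / len(vm_finish_times)
    imbalance = (makespan - min(vm_finish_times)) / avg  # load-balance degree
    return makespan, throughput, imbalance

# Example: three VMs finishing at different times, 30 tasks total.
print(metrics([12.0, 9.5, 11.0], n_tasks=30))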
... Zhang et al. [34] introduced a Phasmatodea Population Evolution algorithm (APPE) that aims to reduce costs in heterogeneous environments, whereas Gong et al. [35] achieved better makespan and utilization via an enhanced marine predator algorithm (EMPA) integrated with WOA operators. Additional efforts, such as MALPSO [36] (a multi-adaptive variant of PSO) and EWOA [37] (an improved WOA combining adaptive crossover with Lévy flights), yielded significant gains in terms of execution time, cost, and energy efficiency. Most recently, Chandrashekar et al. [38] proposed HWACO, which pairs Ant Colony Optimization (ACO) and weighted optimization to reduce cost and makespan. ...
Article
Full-text available
Efficient task scheduling in Cloud Computing remains an NP-hard challenge due to combinatorial search spaces and resource heterogeneity, often leading to premature convergence in existing metaheuristics. This paper proposes FL-Jaya, an enhanced Jaya algorithm that addresses these limitations through two key innovations: (1) a Fitness-Distance Balance (FDB) mechanism, which preserves population diversity by selecting solutions that optimally trade off fitness quality and spatial distribution, and (2) a Lévy Flight (LF) operator, enabling stochastic long jumps to escape local optima. By unifying FDB and LF into a single update rule, FL-Jaya dynamically balances exploration and exploitation, overcoming stagnation in large-scale scheduling. Experiments on artificial (100–1000 tasks) and real-world Google Cloud Jobs datasets demonstrate FL-Jaya’s superiority over six algorithms—Jaya, Particle Swarm Optimization, Coati Optimization Algorithm, Whale Optimization Algorithm, Bald Eagle Search, and Snake Optimizer. FL-Jaya achieves 38.98% lower makespan and 44.63% higher average resource utilization (ARU) than standard Jaya on artificial workloads, with real-world results showing 35.34% makespan reduction and 44.63% ARU improvement. These gains stem from FL-Jaya’s ability to maintain solution diversity while navigating complex search spaces, outperforming peers in convergence speed and scalability. The algorithm’s parameter-light design and consistent performance underscore its practicality for heterogeneous cloud environments.
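The Lévy Flight operator described above is most often implemented with Mantegna's algorithm; a sketch under that common choice (the exponent beta = 1.5 and the unit step scale are conventional assumptions, not necessarily FL-Jaya's exact settings):

import math
import random

def levy_step(beta=1.5):
    """One Mantegna-style Levy-distributed step (common choice beta=1.5).

    Occasional long jumps from the heavy-tailed distribution let a
    search escape local optima, as the FL-Jaya abstract describes.
    """
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta
                  * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma_u)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)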
Article
Cloud and Fog computing have emerged as pivotal paradigms in the field of distributed computing, offering flexible and scalable resources for various applications. Efficient job scheduling is a critical factor in optimizing resource utilization and enhancing the performance of these systems. Job scheduling is a complex challenge in Cloud and Fog computing due to their dynamic and heterogeneous nature. The need to balance resource allocation, minimize latency, and enhance energy efficiency poses significant research questions. To address these issues, this article systematically reviews the existing literature to identify trends, challenges, and recent advancements in job scheduling strategies. The objectives of this work were to assess the current landscape of job scheduling techniques in Cloud and Fog computing; to analyze the key challenges and trends in job scheduling research; and to highlight recent advancements and innovations in this domain, thereby providing insights for future research directions in these computing environments. We conducted an advanced search and a comprehensive systematic review of peer-reviewed articles (n = 48) published in 2023 in the Scopus and IEEE databases, following the PRISMA framework. Our search and selection criteria ensured the inclusion of relevant studies, and a rigorous analysis was performed to extract key findings and identify emerging trends. By summarizing the state of the art, this review offers valuable insights for researchers and practitioners in the field, guiding future research efforts to address the evolving demands of these dynamic computing paradigms.
Article
Path optimization of cold chain logistics (CCL) is an important subject for researchers to explore. This paper mainly studies a cloud-computing-based optimization algorithm for CCL distribution vehicle scheduling. By analyzing the characteristics of CCL, an optimization model of the delivery-vehicle path is established, accounting for service within the customer's required time window and for delivery vehicles that arrive early. Next, the classification of multi-source input data for dynamic vehicle-route optimization modeling is analyzed, and cloud computing resource integration technology is used to comprehensively process the multi-source data. Experimental data show that, taking divisible demand into account, the total circulation fee is 7356.92 yuan, the fixed fee is 2717 yuan, and the transportation fee is 3245 yuan; 19 refrigerated trucks are used, with a vehicle loading rate of 95.2%. The findings indicate that employing cloud computing techniques to enhance the routing of cold chain logistics vehicles is effective and has notable theoretical and practical value.
Article
Full-text available
The Internet of Things (IoT) is an essential part of Information and Communications Technology (ICT) for sustainable smart cities because of its capacity to support sustainability across multiple disciplines. To attain the required quality of IoT communication systems and enable sustainable progress in smart cities, it is necessary to avoid faults through constant, dynamic monitoring of network behavior. This research work proposes predicting the performance of IoT communication systems in smart cities using a Finite Element Interpolated Neural Network (IoT-CS-FEINN-SC). Here, the input data is gathered from IoT devices that include various kinds of sensors, such as visibility, humidity, temperature, pressure, and wind speed. The Signed Cumulative Distribution Transform (SCDT) is employed to extract Received Signal Strength (RSS) features as minimum, maximum, and mean. Afterwards, the extracted features are fed to the FEINN to predict IoT communication system performance in smart cities. The Secretary Bird Optimization Algorithm (SBOA) is proposed to tune the weight parameters of the FEINN method so that it predicts the performance of IoT communication systems precisely. The IoT-CS-FEINN-SC technique achieves 20.36%, 28.42%, and 15.27% better accuracy compared with the existing techniques: a cloud-assisted IoT intelligent transportation and traffic control scheme in smart cities (IoT-TCS-SC), optimized RNN-based performance prediction of IoT and WSN-oriented smart city applications using an improved honey badger algorithm (RNN-IoT-WSN), and smart cities: the role of IoT and ML in realizing data-centric smart environments (IoT-ANN-DSE), respectively.
Article
Particle Swarm Optimization (PSO) remains straightforward and has many scientific and engineering applications. Most real-world optimization problems are nonlinear and discrete with local constraints. The PSO algorithm encounters issues such as inefficient solutions and early convergence. It works best with well-tuned attribute weights, improving case retrieval accuracy. Using case-based reasoning to optimize pressure vessel models improves PSO performance, resulting in predictions closer to true values that fulfill real-world engineering requirements. A Fault-Tolerant Formation Control (FTFC) technique is designed for a group of Wheeled Mobile Robots (WMRs) to protect against serious actuator defects. At the outset of the study, the WMRs are arranged in an orderly formation. When severe actuator faults impede certain robots, the functioning WMRs adjust their formation to reduce the consequences of the malfunction. An optimal assignment technique assigns new duties to each functioning robot, after which evolutionary algorithms and PSO design pathways to the reconfigured positions. The CPTD approach uses a piecewise linear approximation to overcome obstacles in optimization problems with continuous switch inputs. This method combines CPTD with a Genetic Algorithm and PSO (GAPSO), resulting in an effective strategy for dynamic formation reconfiguration and path optimization. This holistic method reduces the time required to achieve the configuration while respecting the physical restrictions of the WMRs and avoiding collisions. Finally, real-world tests verify the proposed algorithm's efficacy compared to existing optimization methods. The proposed GAPSO algorithm achieves an average relative error reduction of 2%; accuracy improves by 96%, maximum performance reaches 95%, the F1 score improves by 95%, and the training error cure rate improves by 94%.
Article
Full-text available
Cloud computing is becoming a very popular form of distributed computing, in which digital resources are shared via the Internet. The user is provided with an overview of many available resources. Cloud providers want to get the most out of their resources, and users are inclined to pay less for better performance. Task scheduling is one of the most important aspects of cloud computing. In order to achieve high performance from cloud computing systems, tasks need to be scheduled for processing by appropriate computing resources. The large search space makes this an NP-hard problem, and randomized search methods are required to solve it. Many solutions based on several algorithms have been proposed to solve this problem to date. This paper presents a hybrid algorithm called GSAGA to solve the Task Scheduling Problem (TSP) in cloud computing. Although it has a high ability to search the problem space, the Genetic Algorithm (GA) performs poorly in terms of stability and local search. It is therefore possible to create a stable algorithm by combining the general search capabilities of the GA with the Gravitational Search Algorithm (GSA). Our experimental results indicate that the proposed algorithm can solve the problem with higher efficiency compared with the state of the art.
Article
Full-text available
Nowadays, technology reaches all areas of human life and provides convenient, low-cost communication platforms. Advertising and profiteering organizations exploit this large audience and low-cost platform to send their messages in the form of spam. In addition to creating problems for users, spam consumes time and bandwidth and threatens the productivity, reliability, and security of the network. Various approaches have been proposed to combat spam. The most dynamic and best methods of spam filtering are machine learning and deep learning, which perform high-speed filtering and classification of spam. In this paper, we present a new way to discover spam on various social networks by scaling up a Support Vector Machine (SVM) based on a combination of the Genetic Algorithm (GA) and the Gravitational Emulation Local Search Algorithm (GELS) to select the most effective features of spam. The experimental results show that the proposed method achieves higher accuracy than the other algorithms and is able to compete with them.
Article
Full-text available
The purpose is to promote the intelligent fusion of the Ant Colony Optimization Algorithm (ACOA) and cloud computing for resource allocation and task scheduling. First, an analysis is conducted of the problems in resource allocation and task scheduling via cloud computing and of the limitations of ACOA. Second, the ACOA is optimized to meet the expected time and expected cost; the result is denoted Q-ACOA. Besides, the settings of the pheromone heuristic factor α and the expected heuristic factor β are determined. Finally, Q-ACOA is compared with the Round-Robin (RR) scheduling algorithm, the Min-Min (MM) algorithm, and the Time, Cost, and Load Balance-Enhanced Ant Colony Optimization (TCLB-EACO) algorithm. The evaluation indicators adopted for task scheduling in cloud computing include task completion time, total data migration time, task completion cost, and the satisfaction of participating users. Results demonstrate that the values of α and β have a comparatively large influence on the algorithm's iteration count and task completion time. Ultimately, α is set to 3 and β to 4.5. Compared with the other algorithms, Q-ACOA shows the best performance on several evaluation indicators under multiple tasks. When the number of tasks exceeds 500, Q-ACOA has definite advantages in task completion time. Moreover, its average data migration time is 2.5% less than that of the TCLB-EACO algorithm and 2.7% less than that of the MM algorithm. The overall cost consumption of Q-ACOA is lower than that of the other algorithms, providing users a good experience. The above results can provide a data reference for the improvement of resource allocation and task scheduling based on cloud computing in the future.
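For reference, the factors α and β discussed above weight pheromone strength against heuristic desirability in the standard ACO transition rule; a sketch of that textbook rule only (not Q-ACOA's full mechanism), using the tuned values α = 3 and β = 4.5 from the abstract:

import random

def choose_resource(pheromone, heuristic, alpha=3.0, beta=4.5):
    """Standard ACO roulette-wheel selection over candidate resources.

    pheromone[i]: learned desirability of resource i;
    heuristic[i]: problem-specific desirability (e.g. 1 / expected time).
    """
    weights = [(t ** alpha) * (h ** beta)
               for t, h in zip(pheromone, heuristic)]
    total = sum(weights)
    r, acc = random.random() * total, 0.0
    for i, w in enumerate(weights):
        acc += w
        if acc >= r:
            return i
    return len(weights) - 1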
Conference Paper
Full-text available
Mobile devices are used by numerous applications that continuously need growing computing power. Due to the limited resources available for complex computing, offloading, a service offered to mobile devices, is commonly used in cloud computing. In Mobile Cloud Computing (MCC), offloading decides where to execute tasks so as to maximize the benefits efficiently. Hence, we represent offloading as a Task Scheduling Problem (TSP). The latter is a Multi-Objective Optimization (MOO) problem whose goal is to find the best schedule for processing mobile source tasks while minimizing both the average processor energy consumption and the average task processing time. Owing to the combinatorial nature of the problem, the TSP in MCC is known to be NP-hard. To overcome this difficulty in practice, we adopt metaheuristic search techniques, as they offer a good trade-off between solution quality and scalability. More precisely, we introduce a new optimization approach, which we call the Multi-objective Discrete Water Cycle Algorithm (MDWCA), to schedule tasks from mobile source nodes onto processor resources in a hybrid MCC architecture including the public cloud, cloudlets, and mobile devices. To evaluate the performance of our proposed approach, we conducted several comparative experiments on many generated TSP instances in MCC. The simulation results show that MDWCA outperforms state-of-the-art optimization algorithms on several quality metrics.
Article
Full-text available
The widespread usage of cloud computing in different fields causes many challenges, such as resource scheduling, load balancing, power consumption, and security. To achieve high performance from cloud resources, an effective scheduling algorithm is necessary to distribute jobs among the available resources in such a way that the system remains balanced and user tasks are responded to quickly. This paper tackles the multi-objective scheduling problem and presents a modified Harris Hawks Optimizer (HHO), called the Elite Learning Harris Hawks Optimizer (ELHHO), for the multi-objective scheduling problem. The modifications use a scientific intelligent method called elite opposition-based learning to enhance the quality of the exploration phase of the standard HHO algorithm. Further, the minimum completion time algorithm is used in an initial phase to obtain a deterministic initial solution, rather than a random solution on each run, to avoid local optimality and satisfy quality of service in terms of minimizing schedule length and execution cost and maximizing resource utilization. The proposed ELHHO is implemented in the CloudSim toolkit and evaluated on real data sets. The obtained results indicate that the presented ELHHO approach achieves better results than the other algorithms. Further, it enhances the performance of the conventional HHO.
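Elite opposition-based learning, used above to improve HHO's exploration phase, typically mirrors a candidate across the interval spanned by the elite group; a sketch of that common formulation (not necessarily ELHHO's exact operator):

import random

def elite_opposite(x, elite_lb, elite_ub):
    """Opposite point of candidate x within the elite population's bounds.

    elite_lb[j]/elite_ub[j]: per-dimension min/max over the elite group.
    The random coefficient k makes the opposition dynamic, a common choice.
    """
    k = random.random()
    return [k * (lb + ub) - xi
            for xi, lb, ub in zip(x, elite_lb, elite_ub)]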
Article
In virtualized cloud computing systems, energy reduction is a serious concern since it can offer many major advantages, such as reducing running costs, increasing system efficiency, and protecting the environment. An energy-efficient task scheduling strategy is a viable way to meet these goals. Unfortunately, mapping cloud resources to user requests so as to achieve good performance while minimizing the energy consumption of cloud resources within a user-defined deadline is a huge challenge. This paper proposes an Energy and Performance-Efficient Task Scheduling algorithm (EPETS) for a heterogeneous virtualized cloud to resolve the issue of energy consumption. The proposed algorithm has two stages: initial scheduling helps to reduce execution time and satisfy task deadlines without considering energy consumption, and second-stage task reassignment scheduling finds the best execution location within the deadline limit with less energy consumption. Moreover, to strike a reasonable balance between task scheduling and energy saving, we suggest an energy-efficient task priority system. The simulation results show that, compared to the existing energy-efficient scheduling methods RC-GA, AMTS, and E-PAGA, the proposed solution significantly reduces energy consumption and improves performance by 5%-20% while satisfying the deadline constraint.
Article
Cloud computing has been a booming technology in recent years. It is used to share resources over the internet. Though the cloud has many advantages and is used worldwide, it also has some disadvantages and issues. The major problems in cloud computing are scheduling and resource allocation: allocating resources and tasks is one of the most critical challenges, and there are no fully adequate methods or techniques to improve task scheduling and resource allocation. Previous methods used Virtual Machine (VM) instances for scheduling; their major drawback is that VM instances take a long time to start up and consume all available resources to perform the task. In this paper, we propose a hybrid solution that combines fuzzy C-means clustering with Black Widow Optimization for task scheduling and Fish Swarm Optimization for efficient resource allocation, reducing cost, energy consumption, and resource usage.
Article
Task scheduling in the cloud is perceived as a difficult multi-objective optimization problem. It refers to the assignment of user tasks to the available cloud virtual machines. This problem can be solved effectively by combining two or more approaches to improve task execution and increase the use of resources. In this article, a third-generation multi-objective optimization method, the Non-dominated Sorting Genetic Algorithm III (NSGA-III), was used for the first time to our knowledge to schedule a set of user tasks on a set of available virtual machines (VMs) in the cloud, based on a new multi-objective fitness function that minimizes the runtime (TE), the power consumption (CE), and the cost (Cout). Furthermore, the performance of NSGA-III was compared with that of its previous version, NSGA-II, and NSGA-III outperformed NSGA-II. The experimental results of the proposed method are encouraging and demonstrate its effectiveness in solving such problems.
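NSGA-II and NSGA-III rank candidate schedules by Pareto dominance rather than by a single weighted score; a minimal dominance test over the three objectives named above (runtime, power consumption, cost; all minimized):

def dominates(a, b):
    """True if schedule a Pareto-dominates schedule b.

    a and b are objective tuples (runtime, power, cost); a dominates b
    when it is no worse in every objective and strictly better in one.
    """
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

# Example: the first schedule is faster and cheaper at equal power.
print(dominates((10.0, 5.0, 3.0), (12.0, 5.0, 4.0)))  # True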
Article
When solving large-scale task scheduling problems in cloud computing, the ant colony algorithm converges slowly and easily falls into local optima. To address these shortcomings, this paper designs an adaptive task scheduling algorithm for cloud computing based on the ant colony algorithm. On the basis of the polymorphic ant colony algorithm, a pheromone adaptive update and adjustment mechanism is added to improve the convergence speed of the algorithm and effectively avoid local optima. The improved algorithm aims to find an allocation plan with shorter execution time, lower cost, and a balanced load rate based on the tasks submitted by users. The traditional ant colony algorithm is compared with the improved adaptive ant colony algorithm on a cloud computing platform. Experimental data show that the improved adaptive ant colony algorithm can quickly find the optimal solution to the cloud computing resource scheduling problem, shorten the task completion time, reduce the execution cost, and maintain the load balance of the entire cloud system center. The algorithm performs better when solving large-scale task scheduling problems.
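The adaptive pheromone mechanism described above builds on the standard evaporation-plus-deposit update; a sketch of that baseline rule (the paper's adaptive adjustment of the evaporation rate ρ is not reproduced here):

def update_pheromone(tau, delta, rho=0.5):
    """Standard ACO pheromone update: evaporate by rho, then deposit.

    tau[i]: current pheromone on option i; delta[i]: pheromone deposited
    by ants that used option i in this iteration.
    """
    return [(1 - rho) * t + d for t, d in zip(tau, delta)]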
Article
Cloud computing is a computing technology that offers dynamically scalable and flexible computing resources. Task scheduling in the cloud system is a major problem that needs to be tackled to enhance system performance and cloud customer satisfaction. The task scheduling scheme directly affects both the execution time and the execution cost of the system. To overcome the above-stated issue, a novel hybrid Whale Optimization Algorithm (WOA)-based MBA algorithm is proposed for solving multi-objective task scheduling problems in cloud computing environments. In the hybrid WOA-based MBA algorithm, the multi-objective behavior decreases the makespan by maximizing resource utilization. The output of the Random Double Adaptive Whale Optimization Algorithm (RDWOA) is enhanced by utilizing the mutation operator of the Bees Algorithm. The performance evaluation is conducted on the CloudSim toolkit platform and compared with other algorithms for measures such as completion time and computational cost. The results are analyzed for performance measures such as makespan, execution time, resource utilization, and computational cost, and the analysis shows that the proposed algorithm performs better than algorithms such as IWC, MALO, BA-ABC, and MGGS. The proposed HWOA-based MBA algorithm converges faster than the other approaches for large search spaces, which makes it appropriate for large scheduling problems. The experimental results reveal that the HWOA-based MBA algorithm effectively minimizes both the task completion time and the execution time.