Conference Paper

Shortest Job First Load Balancing Algorithm for Efficient Resource Management in Cloud


Abstract

Energy is among the most valuable resources in the world and needs to be consumed in an optimized manner. To make intelligent decisions about energy consumption, the Smart Grid (SG) has been introduced. One of the key components of the SG is communication, and a Cloud-Fog based environment is the most popular communication architecture nowadays. Keeping the focus on this point, this article proposes an integration of a Cloud-Fog based environment with Micro Grids (MGs) for effective resource management. For experimentation, the world is divided into 6 regions based on the division of continents. Each region contains 6 clusters and 3 fogs connected to them, along with MGs and a centralized cloud. The Cloud Analyst simulator is used to test the proposed scenario. To cater for the huge load on the fogs, a new load balancing technique, Shortest Load First (SLF), is introduced in the simulator. The load balancing technique is used to manage the requests on the fogs, whereas the dynamic service proximity policy is used to connect clusters with fogs.
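The abstract does not spell out the SLF policy in detail; as a minimal sketch, a shortest-load-first dispatcher can simply send each incoming request to the virtual machine that currently carries the least outstanding work. The class name, load metric, and VM identifiers below are illustrative assumptions, not the authors' implementation.

```python
import heapq

class ShortestLoadFirstBalancer:
    """Illustrative shortest-load-first dispatcher: every request is routed
    to the VM with the smallest amount of pending work (an assumption about
    how SLF could be realized, not the paper's code)."""

    def __init__(self, vm_ids):
        # heap of (current_load, vm_id); load counted in pending work units
        self.heap = [(0, vm_id) for vm_id in vm_ids]
        heapq.heapify(self.heap)

    def assign(self, request_length):
        load, vm_id = heapq.heappop(self.heap)            # least-loaded VM
        heapq.heappush(self.heap, (load + request_length, vm_id))
        return vm_id

# Example: six requests from a cluster dispatched to three fog VMs
balancer = ShortestLoadFirstBalancer(["vm-0", "vm-1", "vm-2"])
for length in [40, 10, 25, 5, 60, 15]:
    print(length, "->", balancer.assign(length))
```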


... The selection involves various conventional procedures without including any swarm intelligence computations. Different load balancing methodologies have been proposed in recent years, each based on different computing perspectives and techniques, e.g., using a centralized load balancing approach for virtual machines [13], a scheduling strategy for load balancing of virtual machine (VM) resources based on genetic algorithms [14], a mapping policy based on multi-resource load balancing for virtual machines [15], various distributed algorithms for VMs [16], a weighted least-connection method [17], and two-phase scheduling algorithms [18]. In addition, several load balancing techniques have been presented for different cloud applications, for instance, a service-based job set for large-scale storage [19], a data center management architecture [15], and a heterogeneous cloud. ...
... Shortest Job First (SJF) [17] scheduling is a priority-based and non-preemptive scheduling method. In non-preemptive methods, once a process has been allocated to a processor, the processor cannot be taken by another process until the running process completes its execution. ...
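As a concrete illustration of non-preemptive SJF, the sketch below always runs the ready process with the smallest burst time to completion before picking the next one; the process list and burst times are made-up values.

```python
def sjf_non_preemptive(processes):
    """Non-preemptive SJF: among the processes that have already arrived,
    run the one with the smallest burst time until it finishes."""
    remaining = sorted(processes, key=lambda p: p["arrival"])
    clock, schedule = 0, []
    while remaining:
        ready = [p for p in remaining if p["arrival"] <= clock] or [remaining[0]]
        job = min(ready, key=lambda p: p["burst"])
        remaining.remove(job)
        clock = max(clock, job["arrival"]) + job["burst"]
        schedule.append((job["pid"], clock))              # (process, completion time)
    return schedule

jobs = [{"pid": "P1", "arrival": 0, "burst": 7},
        {"pid": "P2", "arrival": 1, "burst": 3},
        {"pid": "P3", "arrival": 2, "burst": 1}]
print(sjf_non_preemptive(jobs))                           # P1 first, then P3, then P2
```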
Chapter
A cloud computing environment can also be viewed as an Internet-based computing process in which work is not restricted to a single location. Multiple data centers (DCs) are available to serve the requests coming from different user bases (UBs). A data center is capable of handling multiple instructions simultaneously, but instructions are submitted to the DCs randomly, so a particular DC may become overloaded. Hence, load balancing plays a vital role in cloud computing in maintaining the performance of the computing environment. In this research article, we have implemented the throttled, round-robin, and shortest job first load-balancing algorithms. We have also proposed one more algorithm, called M-throttled, which achieves higher performance than the others. We have taken different parameters, such as overall response time and DC processing time, for comparison. These are simulated using the closest data center policy in the CloudSim environment.
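The difference between the round-robin and throttled policies compared in the chapter can be seen in a short sketch: round-robin rotates through the VMs regardless of their state, while throttled keeps an availability table and only returns an idle VM. The names and the simplified availability handling are assumptions for illustration, not the chapter's code.

```python
from itertools import cycle

def round_robin_allocator(vm_ids):
    """Round-robin: hand out VMs in a fixed rotation, ignoring their state."""
    rotation = cycle(vm_ids)
    return lambda request: next(rotation)

def throttled_allocator(vm_ids):
    """Throttled: return the first idle VM, or None when all are busy
    (the request would then wait in a queue)."""
    available = {vm: True for vm in vm_ids}

    def allocate(request):
        for vm, idle in available.items():
            if idle:
                available[vm] = False
                return vm
        return None

    def release(vm):
        available[vm] = True

    return allocate, release

rr = round_robin_allocator(["vm-0", "vm-1"])
print([rr(r) for r in range(4)])                       # vm-0, vm-1, vm-0, vm-1
alloc, release = throttled_allocator(["vm-0", "vm-1"])
print(alloc("req-1"), alloc("req-2"), alloc("req-3"))  # vm-0 vm-1 None
```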
... This mapping clearly shows that VM3 is completely utilized but VM2, VM4, and VM1 are under-utilized. So there is a need for a proper task scheduling algorithm that can utilize the resources of all VMs in a load-balanced way [16,17]. ...
Article
Full-text available
According to the literature, many task scheduling approaches, such as GA and ACO, have been proposed and have improved the performance of cloud data centers with respect to various scheduling parameters. The task scheduling problem is NP-hard, the key reason being that the number of solutions/combinations grows exponentially with the problem size, e.g., the number of tasks and the number of computing resources. Thus, it is always challenging to achieve completely optimal scheduling of user tasks. In this research, we propose an adaptive load-balanced task scheduling (ALTS) approach for cloud computing. The proposed task scheduling algorithm maps all incoming tasks to the available VMs in a load-balanced way to reduce the makespan, maximize resource utilization, and adaptively minimize SLA violations. The performance of the proposed task scheduling algorithm is evaluated and compared with the state-of-the-art ACO, GA, and GAACO task scheduling approaches with respect to average resource utilization (ARUR), makespan, and SLA violation. The proposed approach reveals significant improvements in makespan, SLA violation, and resource utilization over the compared approaches.
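The ALTS algorithm itself is not reproduced in the abstract; the sketch below only illustrates the general idea of a load-balanced mapping that greedily assigns each task to the VM on which it would finish earliest and then reports the makespan. Task lengths (in MI) and VM speeds (in MIPS) are assumed example values.

```python
def map_tasks_load_balanced(task_lengths, vm_speeds):
    """Greedy load-balanced mapping: each task goes to the VM with the
    smallest projected finish time; returns the mapping and the makespan."""
    finish = [0.0] * len(vm_speeds)
    mapping = []
    for length in sorted(task_lengths, reverse=True):     # longest tasks first
        vm = min(range(len(vm_speeds)),
                 key=lambda v: finish[v] + length / vm_speeds[v])
        finish[vm] += length / vm_speeds[vm]
        mapping.append((length, vm))
    return mapping, max(finish)

mapping, makespan = map_tasks_load_balanced([400, 250, 900, 120, 600], [100, 250])
print(mapping, round(makespan, 2))
```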
Article
Full-text available
As the size of cloud data centers increases, the number of virtual machines (VMs) grows rapidly. Application requests are served by VMs located in physical machines (PMs). The rapid growth of Internet services has created an imbalance of network resources: some hosts have high bandwidth usage and can cause network congestion, which affects overall network performance. Load balancing is an important feature of cloud computing that needs to be optimized. Therefore, this research proposes a 3-tier architecture consisting of a Cloud layer, a Fog layer, and a Consumer layer. The Cloud serves the whole world, while the Fog analyzes services at the local edge of the network; the Fog stores data temporarily before transmitting it to the cloud. In the consumer layer, the world is classified into 6 regions on the basis of the 6 continents. Consider Area 0 as North America, for which two fogs and two building clusters are considered. Microgrids (MGs) are used to supply energy to consumers. In this research, a real-time VM migration algorithm for balancing the fog load is proposed. Load balancing algorithms focus on effective resource utilization, maximum throughput, and optimal response time. Compared to the closest data center (CDC) policy, the real-time VM migration algorithm achieves 18% better cost results and optimized response time (ORT). Real-time VM migration with ORT improves response time by 11% compared to dynamically reconfigure with load (DRL). Real-time VM migration always seeks the best solution to minimize cost and improve processing time.
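The migration algorithm is not detailed in the abstract; one simple way to picture a load-balancing migration step is to move VMs off any fog host whose total utilization exceeds a threshold onto the currently least-loaded host. The threshold, the host dictionary, and the choice of which VM to move are assumptions made only for illustration.

```python
def migrate_overloaded(hosts, upper=0.8):
    """Move VMs from hosts above the utilization threshold to the least-loaded
    host (an illustrative policy, not the paper's algorithm)."""
    migrations = []
    for name, vms in hosts.items():
        while sum(vms) > upper:
            target = min(hosts, key=lambda h: sum(hosts[h]))
            if target == name:
                break                                   # nowhere better to go
            vm = min(vms)                               # migrate the smallest VM
            vms.remove(vm)
            hosts[target].append(vm)
            migrations.append((vm, name, target))
    return migrations

# Each fog host holds a list of VM CPU shares (fractions of host capacity)
fog = {"fog-1": [0.5, 0.4, 0.3], "fog-2": [0.2], "fog-3": [0.1, 0.1]}
print(migrate_overloaded(fog))
print(fog)
```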
Conference Paper
Cloud computing is the general name for services that enable the use of information technology resources or services by users over the Internet on demand. Independent and static task scheduling is an important problem in cloud computing and deals with the optimal mapping of tasks to resources when task lengths are predetermined and tasks can run independently of each other. In this study, the performance of the FCFS, SJF, Min-Min, and Max-Min heuristics and the ABC and PSO metaheuristics was measured on this problem. It was observed that the Min-Min, Max-Min, and ABC algorithms are more successful than the others according to the maximum completion time criterion. Considering ease of implementation and fast running time, the Min-Min and Max-Min heuristics are sufficient for solving this problem, and the metaheuristic approaches do not contribute much.
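For reference, the Min-Min heuristic mentioned there repeatedly schedules the not-yet-assigned task whose best completion time is smallest; the execution-time matrix below is an assumed example, not data from the study.

```python
def min_min(exec_time):
    """Min-Min scheduling: exec_time[t][m] is the runtime of task t on machine m.
    Repeatedly pick the (task, machine) pair with the smallest completion time."""
    n_tasks, n_machines = len(exec_time), len(exec_time[0])
    ready = [0.0] * n_machines                 # time each machine becomes free
    unassigned, schedule = set(range(n_tasks)), []
    while unassigned:
        best = None                            # (completion, task, machine)
        for t in unassigned:
            for m in range(n_machines):
                completion = ready[m] + exec_time[t][m]
                if best is None or completion < best[0]:
                    best = (completion, t, m)
        completion, t, m = best
        ready[m] = completion
        unassigned.remove(t)
        schedule.append((t, m, completion))
    return schedule, max(ready)                # assignments and makespan

times = [[14, 16], [5, 9], [20, 11], [7, 3]]   # assumed task-on-machine runtimes
print(min_min(times))
```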
Article
Cloud computing is a recently looming-evoked paradigm, the aim of which is to provide on-demand, pay-as-you-go, internet-based access to shared computing resources (hardware and software) in a metered, self-service, dynamically scalable fashion. A related hot topic at the moment is task scheduling, which is well known for delivering critical cloud service performance. However, the dilemmas of resources being underutilized (underloaded) and overutilized (overloaded) may arise as a result of improper scheduling, which in turn leads to either wastage of cloud resources or degradation in service performance, respectively. Thus, the idea of incorporating meta-heuristic algorithms into task scheduling emerged in order to efficiently distribute complex and diverse incoming tasks (cloudlets) across available limited resources, within a reasonable time. Meta-heuristic techniques have proven very capable of solving scheduling problems, which is fulfilled herein from a cloud perspective by first providing a brief on traditional and heuristic scheduling methods before diving deeply into the most popular meta-heuristics for cloud task scheduling followed by a detailed systematic review featuring a novel taxonomy of those techniques, along with their advantages and limitations. More specifically, in this study, the basic concepts of cloud task scheduling are addressed smoothly, as well as diverse swarm, evolutionary, physical, emerging, and hybrid meta-heuristic scheduling techniques are categorized as per the nature of the scheduling problem (i.e., single- or multi-objective), the primary objective of scheduling, task-resource mapping scheme, and scheduling constraint. Armed with these methods, some of the most recent relevant literature are surveyed, and insights into the identification of existing challenges are presented, along with a trail to potential solutions. Furthermore, guidelines to future research directions drawn from recently emerging trends are outlined, which should definitely contribute to assisting current researchers and practitioners as well as pave the way for newbies excited about cloud task scheduling to pursue their own glory in the field.
Thesis
Cloud computing offers various services. Numerous cloud data centers are used to provide these services to users across the world. A cloud data center houses physical machines (PMs). Millions of virtual machines (VMs) are used to minimize the utilization rate of PMs. The dramatic growth of Internet services results in unbalanced network resources. Resource management is an important factor in the performance of a cloud, and various techniques are used to manage cloud resources efficiently. VM consolidation is an intelligent and efficient strategy to balance the load of cloud data centers, and VM placement is an important subproblem of VM consolidation that needs to be resolved. The basic objective of VM placement is to minimize the utilization rate of PMs, which saves energy and cost. In this thesis, an enhanced Levy-based particle swarm optimization algorithm with bin packing (PSOLBP) is proposed for solving the VM placement problem, and the best-fit strategy is used. Simulations are performed to authenticate the adaptivity of the proposed algorithm. Three algorithms are implemented in Matlab. The given algorithm is compared with simple particle swarm optimization (PSO) and a hybrid of Levy flight and particle swarm optimization (LFPSO). The proposed algorithm efficiently minimizes the number of running PMs. Further, an enhanced Levy-based multi-objective gray wolf optimization (LMOGWO) algorithm is proposed to solve the VM placement problem efficiently. An archive is used to store and retrieve the true Pareto front, a grid mechanism is used to improve the non-dominated VMs in the archive, and a mechanism for archive maintenance is also used. The proposed algorithm mimics the leadership and hunting behavior of gray wolves (GWs) in a multi-objective search space. It is tested on nine well-known bi-objective and tri-objective benchmark functions to verify the compatibility of the work done. LMOGWO is then compared with simple multi-objective gray wolf optimization (MOGWO) and multi-objective particle swarm optimization (MOPSO). Two scenarios are considered in the simulations to check the adaptivity of the proposed algorithm. The proposed LMOGWO outperformed MOGWO and MOPSO on University of Florida 1 (UF1), UF5, UF7, and UF8 for Scenario 1; however, MOGWO and MOPSO performed better than LMOGWO on UF2. For Scenario 2, LMOGWO outperformed the other two algorithms on UF5, UF8, and UF9, while MOGWO performed well on UF2 and UF4, and the results of MOPSO are also better than the proposed algorithm on UF4. Moreover, the PM utilization rate (%) is minimized by 30% with LMOGWO, 11% with MOGWO, and 10% with MOPSO. VM consolidation is an NP-hard problem; nevertheless, the proposed algorithms outperform the compared approaches.
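Separated from the swarm-based search, the best-fit packing step used in the thesis can be sketched on its own: each VM is placed on the already-open physical machine with the least remaining capacity that still fits it, and a new PM is opened only when necessary. The capacities and VM demands below are illustrative integers, not the thesis's data.

```python
def best_fit_placement(vm_demands, pm_capacity):
    """Best-fit decreasing placement of VMs onto PMs of equal capacity."""
    residual = []                                        # remaining capacity per PM
    placement = []
    for demand in sorted(vm_demands, reverse=True):
        candidates = [i for i, r in enumerate(residual) if r >= demand]
        if candidates:
            pm = min(candidates, key=lambda i: residual[i])   # tightest fit
            residual[pm] -= demand
        else:
            residual.append(pm_capacity - demand)             # power on a new PM
            pm = len(residual) - 1
        placement.append((demand, pm))
    return placement, len(residual)                           # mapping, PMs used

# VM demands and PM capacity in abstract CPU units
print(best_fit_placement([6, 3, 5, 2, 4], pm_capacity=10))
```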
Article
Full-text available
Demand side management (DSM) is one of the most challenging areas in smart grids, which provides multiple opportunities for residents to minimize electricity cost. In this work, we propose a DSM scheme for electricity expenses and peak to average ratio (PAR) reduction using two well-known heuristic approaches: the cuckoo search algorithm (CSA) and strawberry algorithm (SA). In our proposed scheme, a smart home decides to buy or sell electricity from/to the commercial grid for minimizing electricity costs and PAR with earning maximization. It makes a decision on the basis of electricity prices, demand and generation from its own microgrid. The microgrid consists of a wind turbine and solar panel. Electricity generation from the solar panel and wind turbine is intermittent in nature. Therefore, an energy storage system (ESS) is also considered for stable and reliable power system operation. We test our proposed scheme on a set of different case studies. The simulation results affirm our proposed scheme in terms of electricity cost and PAR reduction with profit maximization. Furthermore, a comparative analysis is also performed to show the legitimacy and productiveness of CSA and SA.
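The two objectives pursued in this line of work, electricity cost and peak-to-average ratio (PAR), are straightforward to compute from an hourly load profile; the load and price vectors below are made-up examples rather than the paper's data.

```python
def electricity_cost(load_kwh, price_per_kwh):
    """Total bill: hourly consumption multiplied by the hourly tariff, summed."""
    return sum(l * p for l, p in zip(load_kwh, price_per_kwh))

def peak_to_average_ratio(load_kwh):
    """PAR = peak hourly load divided by the average hourly load."""
    return max(load_kwh) / (sum(load_kwh) / len(load_kwh))

# Made-up 6-hour load profile (kWh) and real-time prices ($/kWh)
load = [1.2, 0.8, 3.5, 2.0, 1.0, 0.5]
price = [0.10, 0.08, 0.25, 0.18, 0.12, 0.07]
print(round(electricity_cost(load, price), 3), round(peak_to_average_ratio(load), 2))
```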
Article
Full-text available
Over the past few years, the use of cloud computing technology has become popular. For cloud computing service providers, reducing the number of physical machines that provide resources for virtual services is one of the most effective ways to reduce energy consumption, which in turn enhances the performance of data centres. However, using a minimum of physical machines to allocate resources for virtual services can result in system overload and break the SLA of a service. Consequently, it is necessary to provide resources for virtual services in a way that not only satisfies the constraint of reducing energy consumption but also ensures load balancing across the whole system. In this study, we present the multi-objective resource allocation problem for virtual services, which aims both to reduce energy consumption and to balance the load of physical machines. The MORA-ACS algorithm is proposed to solve the problem using the Ant Colony System method. Experimental results show that, in the CloudSim environment, the MORA-ACS algorithm balances the load and reduces energy consumption better than the Round Robin algorithm.
Article
Full-text available
Integrated generation systems are increasingly considered suitable to supply remote areas, less developed countries, and small isolated communities with power. The energy management investigated in this paper concerns a smart grid encompassing a photovoltaic park. We propose a novel cloud-distributed solution to determine the best energy dispatch, i.e., where energy is going to be used and whether to change the operating points of some consumption devices. Neural networks are used to predict both energy production and consumption, making it possible to strategically set the activation time of loading devices and to minimize energy flow changes. Moreover, cloud computing resources make it possible to perform fast and distributed computation on the large amount of data gauging power production and consumption.
Article
Full-text available
This paper presents a cooperative game theoretic approach to tackle the cost allocation problem for a virtual power plant (VPP) which consists of multiple demand-side resource aggregators (DRAs) participating in the short-term two settlement electricity market. Given that the considered game is balanced, we propose to employ the cooperative game theory's core cost allocation concept to efficiently allocate the bidding cost to the DRAs. Since the non-empty core contains many potential solutions, we develop a bi-objective optimization framework to determine the core cost allocation solution that can achieve an efficient tradeoff between stability and fairness. To solve this problem, we jointly employ the ε-constraint and row constraint generation methods to construct the Pareto front, based on which we can specify a desired operation point with reasonable computation effort. Numerical studies show that our proposed design can efficiently exploit the non-empty core to find a cost allocation for the participants, achieve the desirable tradeoff between stability and fairness, and address the practical DRAs' large-scale cooperation design.
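In generic notation (not the paper's exact formulation), the ε-constraint method optimizes one objective while bounding the other, and sweeping ε over the attainable range of the bounded objective traces out the Pareto front:

```latex
\min_{x \in \mathcal{X}} \; f_1(x)
\quad \text{subject to} \quad f_2(x) \le \epsilon
```

Each value of ε for which the constrained problem is feasible yields one Pareto-optimal trade-off point between the two objectives.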
Article
Full-text available
Cloud computing is an incredible technology that enables a new vision for the IT industry. Nowadays, it has become a strong alternative for large as well as small-scale organizations, which use only the resources that are actually required on a pay-per-use basis. As cloud computing grows continuously and clients from different parts of the world demand various services and better outcomes, load balancing has become a challenge for cloud providers. To accurately manage the available resources of different cloud providers, resources have to be properly selected according to the properties of the task. Many algorithms have been proposed to provide efficient mechanisms for assigning clients' requests to available cloud nodes, aiming to enhance the overall performance of the cloud, provide greater user satisfaction, and deliver efficient services. This paper first gives an introduction to cloud computing and load balancing, followed by a detailed survey of the different load balancing policies in Cloud Analyst, their advantages and drawbacks with available solutions, and how to add a new policy or customize an existing load balancing policy.
Conference Paper
Full-text available
In this paper, we propose a reliable, energy-efficient, and high-throughput routing protocol for Wireless Body Area Networks (WBANs). In Forwarding Data Energy Efficiently with Load Balancing in Wireless Body Area Networks (FEEL), a forwarder node is incorporated which reduces the transmission distance between sender and receiver to save the energy of the other nodes. Nodes consume energy in an efficient manner, resulting in a longer stability period. Nodes measuring electrocardiography (ECG) and glucose levels send their data directly to the sink in order to have minimum delay. Simulation results show that the FEEL protocol achieves an improved stability period and throughput; as a result, it helps in the continuous monitoring of patients in WBANs.
Book
With this work we decided to help not only our readers but also ourselves, as professionals actively involved in the networking field, to understand the trends that have developed in distributed systems and networks over the last two decades. Important architectural transformations of distributed systems are examined, and examples of new architectural solutions are discussed. © Springer Fachmedien Wiesbaden GmbH 2017. All rights are reserved.
Article
The smart grid is considered to be the next-generation power system because of its reliability, efficiency, and cost-effectiveness. In recent years, smart grid technology has attracted a lot of attention from both academia and industry. Advances in smart grid technologies are enabling more data to be collected and analyzed in real time for many kinds of smart grid applications. As the amount of data increases, the traditional smart grid data management system cannot provide sufficient storage and processing capacities. To address these challenges, cloud computing is being introduced into the power system, and a cloud-based smart grid data management system has been proposed to better support smart grid applications. In this cloud-based system, the data is stored and analyzed by the remote cloud server according to the requirements of smart grid applications. However, the loss of physical control over the smart grid data makes ensuring the integrity of the data a significant challenge. Many provable data possession schemes have been proposed in the past few years; however, most of them suffer from serious security weaknesses or poor performance. We present an efficient certificateless provable data possession (CL-PDP) scheme for cloud-based smart grid applications. Security analysis shows that the proposed scheme is provably secure in a robust security model and can satisfy several security requirements. Performance analysis demonstrates that the proposed scheme results in lower computation costs compared to two recently proposed CL-PDP schemes.
Conference Paper
In recent years, research on the usage of renewable energy sources (RES), especially photovoltaic (PV) arrays, has grown. This paper is based on a home energy management system (HEMS). We propose a grid-connected microgrid to fulfill the load demand of a residential area. We consider fifteen homes with six appliances per home; these appliances are taken as the base load. For bill calculation, a real-time pricing (RTP) tariff is used. Ant colony optimization (ACO) is used for the scheduling of appliances. To fulfill the load demand, a wind turbine (WT), PV, micro turbine (MT), fuel cell (FC), and diesel generator (DG) are used. Energy storage devices are used with the generators to store excess energy. We also propose a penalty and incentive (PI) mechanism to reduce the overall cost. The objectives of the paper are cost and peak-to-average ratio (PAR) reduction. The simulation results show better performance with our optimization technique than without any technique.
Article
Smart Grid (SG) technology represents an unprecedented opportunity to transform the energy industry into a new era of reliability, availability, and efficiency that will contribute to our economic and environmental health. On the other hand, the emergence of Electric Vehicles (EVs) promises to yield multiple benefits to both the power and transportation industry sectors, but it is also likely to affect SG reliability by consuming massive amounts of energy. Nevertheless, the plug-in of EVs at public supply stations must be controlled and scheduled in order to reduce the peak load. This paper considers the problem of plugging in EVs at public supply stations (EVPSS). A new communication architecture for the smart grid and cloud services is introduced. Scheduling algorithms are proposed in order to assign priority levels and optimize the waiting time to plug in at each EVPSS. To the best of our knowledge, this is one of the first papers investigating the aforementioned issues using a new network architecture for the smart grid based on cloud computing. We evaluate our approach via extensive simulations and compare it with two other recently proposed works, based on a real energy supply scenario in Toronto. Simulation results demonstrate the effectiveness of the proposed approach when considering real EV charging-discharging loads during peak-hour periods.
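A priority-driven plug-in schedule of the kind described can be pictured with a small queue model in which higher-priority vehicles are plugged in first and each charging point becomes free again after the vehicle's charging duration; the priorities, durations, and number of plugs below are invented for illustration and do not reproduce the paper's algorithm.

```python
import heapq

def schedule_evs(requests, n_plugs):
    """Serve EV charging requests by priority (lower number = higher priority);
    each plug is reusable once the previous vehicle finishes charging."""
    plugs = [0.0] * n_plugs                          # time each plug is next free
    heapq.heapify(plugs)
    schedule = []
    for priority, ev, duration in sorted(requests):  # highest priority first
        start = heapq.heappop(plugs)                 # earliest available plug
        heapq.heappush(plugs, start + duration)
        schedule.append((ev, start, start + duration))
    return schedule

# (priority, vehicle id, charging time in hours) - invented values
reqs = [(2, "EV-A", 1.5), (1, "EV-B", 2.0), (3, "EV-C", 0.5), (1, "EV-D", 1.0)]
print(schedule_evs(reqs, n_plugs=2))
```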
Article
Fog computing is a new computing architecture composed of a set of near-user edge devices called fog nodes, which collaborate in order to perform computational services such as running applications, storing a significant amount of data, and transmitting messages. Fog computing extends cloud computing by deploying digital resources at the premises of mobile users. In this new paradigm, management and operating functions, such as job scheduling, aim at providing high-performance, cost-effective services requested by mobile users and executed by fog nodes. We propose a new bio-inspired optimization approach called the Bees Life Algorithm (BLA) to address the job scheduling problem in the fog computing environment. Our proposed approach is based on the optimized distribution of a set of tasks among all the fog computing nodes. The objective is to find an optimal tradeoff between CPU execution time and the allocated memory required by fog computing services established by mobile users. Our empirical performance evaluation results demonstrate that the proposal outperforms traditional particle swarm optimization and the genetic algorithm in terms of CPU execution time and allocated memory.
Article
Energy is one of the most valuable resources of the modern era and needs to be consumed in an optimized manner through the intelligent usage of various smart devices, which are major sources of energy consumption nowadays. With the popularity of low-voltage DC appliances such as LEDs, computers, and laptops, there arises a need to design new solutions for self-sustainable smart energy buildings containing these appliances. These smart buildings constitute the next-generation smart cities. Keeping the focus on these points, this article proposes a cloud-assisted DC nanogrid for self-sustainable smart buildings in next-generation smart cities. As there may be a large number of such smart buildings in different smart cities in the near future, a huge amount of data with respect to the demand and generation of electricity is expected to be generated from all such buildings. This data would be of heterogeneous types, as it would be generated from different types of appliances in these smart buildings. To handle this situation, we have used a cloud-based infrastructure to make intelligent decisions with respect to the energy usage of various appliances. This results in an uninterrupted DC power supply to all low-voltage DC appliances with minimal dependence on the grid. Hence, the extra burden on the main grid in peak hours is reduced, as buildings in smart cities would be self-sustainable with respect to their energy demands. In the proposed solution, a collection of smart buildings in a smart city is taken for experimental study, controlled by different data centers managed by different utilities. These data centers are used to generate regular alerts on the excessive usage of energy by the end users' appliances. All such data centers across different smart cities are connected to the cloud-based infrastructure, which is the overall manager for making all decisions about energy automation in smart cities. The efficacy of the proposed scheme is evaluated with respect to various performance evaluation metrics, such as satisfaction ratio, delay incurred, overhead generated, and demand-supply gap. With respect to these metrics, the performance of the proposed scheme is found to be good for implementation in a real-world scenario.