Conference Paper

Min-Min Scheduling Algorithm for Efficient Resource Distribution using Cloud and Fog in Smart Buildings


Abstract

Load balancing helps minimize the consumption of resources, and the cloud and fog concept is used to manage these resources. As the cloud is a centralized network, it holds the information of all customers; fog is used to reduce the load on the cloud. Cloud storage is permanent, whereas fog storage is temporary. Smart Grid (SG) technology presents an opportunity to improve reliability, efficiency, and sustainability. In this paper, an effective fog- and cloud-based environment for energy management of resources is proposed. It handles the data of clusters of buildings at the user end, where each cluster contains multiple apartment buildings. Six fogs are considered for six different regions, and six clusters are considered in this scenario, with one fog per cluster. MicroGrids (MG) are available near the buildings and are accessible by the fogs. Multiple load balancing algorithms are used to manage the load; the algorithm proposed in this scenario is the Min-Min algorithm, a simple algorithm that manages resources efficiently. In this algorithm, the completion time of each task is calculated and resources are allocated first to the tasks with the minimum execution time. Results are compared with the Round Robin (RR) algorithm, which is also used for load balancing. Simulation results show that applying the proposed algorithm reduces cost compared to RR.
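To make the scheduling rule concrete, the following is a minimal, illustrative Python sketch of the Min-Min heuristic as described in the abstract. It is not the authors' implementation; the task lengths (instructions) and VM speeds (instructions per second) are assumed placeholder units.

```python
# Illustrative Min-Min sketch (not the paper's code): at every step, pick the
# task whose best-case completion time is smallest and commit it to that VM.

def min_min_schedule(tasks, vms):
    """Map tasks to VMs using the Min-Min rule.

    tasks: dict task_id -> length (assumed unit: instructions)
    vms:   dict vm_id   -> speed  (assumed unit: instructions/second)
    Returns dict task_id -> (vm_id, finish_time).
    """
    ready = {vm: 0.0 for vm in vms}          # earliest free time per VM
    unscheduled = dict(tasks)
    schedule = {}
    while unscheduled:
        best = None                          # (finish_time, task, vm)
        for t, length in unscheduled.items():
            # Best (minimum) completion time of task t over all VMs.
            ct, vm = min((ready[v] + length / s, v) for v, s in vms.items())
            if best is None or ct < best[0]:
                best = (ct, t, vm)
        finish, t, vm = best
        ready[vm] = finish                   # VM is busy until this task ends
        schedule[t] = (vm, finish)
        del unscheduled[t]
    return schedule

if __name__ == "__main__":
    tasks = {"t1": 400, "t2": 100, "t3": 250}
    vms = {"vm1": 100.0, "vm2": 50.0}
    for t, (vm, fin) in min_min_schedule(tasks, vms).items():
        print(t, "->", vm, f"finishes at {fin:.2f}s")
```

Because the smallest best-case task is always committed first, large tasks can be deferred repeatedly; several of the citing excerpts below point to exactly this weakness.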


... The purpose of resource scheduling is to discover better resources for customers in order to realize planning goals such as lower processing delay, enhanced resource utilization, and Quality of Service (QoS). To solve the various NP-hard resource/task scheduling problems, many optimization algorithms have been proposed for traditional systems [5][6][7][8][9]. ...
... A particle swarm optimization algorithm is used along with a fuzzy clustering technique. Saniah Rehman et al. [7] proposed resource distribution using the Min-Min algorithm for cloud and fog in smart buildings. The authors used Smart Grid concepts through which the energy required for six clusters of 20 buildings each was calculated and the desired VM was assigned to each cluster for performing its task. ...
... Rehman et al. [7], 2019 [comparison-table row from the citing paper; adjacent cell: Cuckoo optimization algorithm] ...
Chapter
In the cloud computing paradigm, data owners have to put their data in the cloud. Due to the long distance between devices and the cloud, problems of delay, bandwidth, and jitter arise. Fog computing was introduced at the edge of the network to overcome these cloud problems. During the transfer of data between Internet of Things (IoT) devices and a fog node, scheduling of resources and tasks is necessary to enrich quality of service (QoS) parameters. Various optimization and scheduling algorithms have been implemented in fog environments. Still, the fog environment faces problems of efficiency, latency, cost, computation time, and total execution time. Earlier, PSO (particle swarm optimization) and ACO (ant colony optimization) techniques provided solutions to such NP-hard problems. Beyond these optimization techniques, various other algorithms have been proposed, such as Dolphin Partner optimization, Grey Wolf, Moth-Flame, Firefly, Crow, etc. Priority-queue and round-robin scheduling algorithms have been applied to the problem as well. In this paper, implementations of PSO and ACO on the cloud and on the fog are compared using the iFogSim toolkit. The results for the QoS parameters makespan and cost show that fog computing enhances QoS over cloud computing.
... To this end, task scheduling heuristics have been widely adopted for load-balanced provisioning of tasks in the clouds. The Min-Min scheduling algorithm was introduced to give priority to the tasks of smaller size in the cloud computing environment [7]. The larger tasks, however, have to wait for a longer time before executing on the available resources, thus leading to inefficient resource utilization. ...
... Whereas the larger tasks have to wait for a longer time before executing on the available resources, thus leading to inefficient resource utilization. In [6], Chen et al. extended the existing Min-Min [7] approach by introducing two efficient load balancing heuristics (namely Load-Balanced Improved Min-Min (LBIMM) and User Priority-aware Load-Balanced Improved Min-Min (PA-LBIMM)). The PA-LBIMM considers user priority while keeping the scheduling load-balanced, and thus provides improved results in terms of the required SLA level and resource utilization. ...
... Min-Min [7]: This approach employs the principles of MCT for task-to-VM mapping. The Min-Min heuristic computes the Expected Execution Time (EET) for all input tasks on all the VMs, one by one. ...
Conference Paper
Recently, cloud computing has emerged as a primary enabling technology to provide compute, storage, platform, and analytics services to end-users and organizations on a pay-as-you-use basis. In essence, the cloud provides agility, availability, scalability, and resiliency. However, an increased number of users leads to issues such as scheduling of requests, demands, and workload efficiency over the available cloud resources. Similarly, since the inception of cloud computing, task scheduling has been reckoned an essential ingredient in the commercial value of this technology. Task scheduling is considered an NP-hard problem in cloud computing, and different solutions exist in the literature to address this issue. In this paper, we investigate and empirically compare some of the recent state-of-the-art scheduling mechanisms in cloud computing with respect to Makespan (the time difference between the start and finish of a sequence of jobs or tasks) and throughput (the number of tasks successfully executed per unit time). We then extend the comparison by evaluating the considered approaches with respect to Average Resource Utilization Ratio (ARUR). We also recommend and identify factors that can improve resource utilization and maximize revenue generation for cloud service providers.
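For reference, the two metrics as defined in this abstract can be written as follows, where S_j and F_j are the start and finish times of task j and N is the number of tasks completed (a standard formulation; the paper's exact notation is not available):

```latex
\text{makespan} = \max_{j} F_j - \min_{j} S_j,
\qquad
\text{throughput} = \frac{N}{\text{makespan}}
```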
... In recent years, researchers have focused on introducing Cloud scheduling heuristics for load-balanced provisioning of tasks with the aim of producing efficient resource utilization. The Min-Min [11] scheduling algorithm was introduced to give priority to the tasks of smaller size. However, the larger tasks had to wait for longer times before executing on the available resources, thus leading to inefficient resource utilization. ...
... However, the larger tasks had to wait for longer times before executing on the available resources, thus leading to inefficient resource utilization. In [10], Chen et al. extended the existing Min-Min [11] approach by proposing two efficient load balancing heuristics (namely Load-Balanced Improved Min-Min (LBIMM) and User Priority-aware Load-Balanced Improved Min-Min (PA-LBIMM)). The PA-LBIMM considers the users' priority while keeping the scheduling load-balanced, and thus provides improved results in terms of achieving the required SLA level and resource utilization. ...
... The Min-Min [11] task scheduling heuristic uses the basic principles of MCT for task-to-VM mapping. The Min-Min scheduling heuristic calculates the Expected Execution Time (EET) of every task on each of the VMs. ...
Article
Full-text available
Recently, Cloud computing has emerged as one of the widely used platforms to provide compute, storage and analytics services to end-users and organizations on a pay-as-you-use basis, with high agility, availability, scalability, and resiliency. This enables individuals and organizations to have access to a large pool of high-processing resources without the need for establishing a high-performance computing (HPC) platform. Over the past few years, task scheduling in Cloud computing has been reckoned an eminent recourse for researchers. However, task scheduling is considered an NP-hard problem. In this research work, we investigate and empirically compare some of the most prominent state-of-the-art scheduling heuristics in terms of Makespan, Average Resource Utilization Ratio (ARUR), Throughput, and Energy consumption. The comparison is then extended by evaluating the approaches in terms of individual VM-level load imbalance. After extensive simulation, the comparative analysis has revealed that the Task Aware Scheduling Algorithm (TASA) and Proactive Simulation-based Scheduling and Load Balancing (PSSLB) outperformed the rest of the approaches and seem to be the optimal choice in view of the trade-off between the complexities involved and the performance achieved concerning Makespan, Throughput, resource utilization, and Energy consumption.
... Additionally, Rehman et al. [82] proposed a cloud- and fog-based environment for effective resource distribution. The presented model had a three-layer architecture consisting of the end-user layer, which contained clusters of buildings; the fog layer, an intermediate layer that provided services to the clusters; and the cloud as a service provider. ...
... Therefore, the presented system performed in a way that response time (RT) increased while the cost was optimized. [Comparison-table residue from the citing survey, cross-referencing [75]-[84] and [88].] ...
... These metrics are measured in most of the resource-management-based papers; hence, we bring them together here side by side. [Checkmark-table residue comparing Yasmeen et al. [77], Fatima et al. [78], Javaid et al. [79], Fatima et al. [80], Abbas et al. [81], Rehman et al. [82], Fatima et al. [83], and Gill et al. [84].] ...
Article
Full-text available
Smart homes are equipped residences for clients aiming at supplying suitable services via intelligent technologies. Through smart homes, household appliances as the Internet of Things (IoT) devices can easily be handled and monitored from a far distance by remote controls. With the day-to-day popularity of smart homes, it is anticipated that the number of connections rises faster. With this remarkable rise in connections, some issues such as substantial data volumes, security weaknesses, and response time disorders are predicted. In order to solve these obstacles and suggest an auspicious solution, fog computing as an eminently distributed architecture has been proposed to administer the massive, security-crucial, and delay-sensitive data, which are produced by communications of the IoT devices in smart homes. Indeed, fog computing bridges space between various IoT appliances and cloud-side servers and brings the supply side (cloud layer) to the demand side (user device layer). By utilizing fog computing architecture in smart homes, the issues of traditional architectures can be solved. This paper proposes a Systematic Literature Review (SLR) method for fog-based smart homes (published between 2014 and May 2019). A practical taxonomy based on the contents of the present research studies is represented as resource-management-based and service-management-based approaches. This paper also demonstrates an abreast comparison of the aforementioned solutions and assesses them under the same evaluation factors. Applied tools, evaluation types, algorithm types, and the pros and cons of each reviewed paper are observed as well. Furthermore, future directions and open challenges are discussed.
... This issue negatively affects the user experience and increases the response time for smaller tasks in the waiting queue. The Min-Min [24] scheduling heuristic selects the smallest task from the task list and assigns it to the VM that can execute it in the minimum time. Min-Min based scheduling algorithms have complex implementations and high overhead as compared to FIFO. ...
... The first aspect focuses on improving individual scheduling objectives and comprises three different algorithms. These algorithms include (algorithm; type; strengths; limitations):
1) Min-Min [24]; static; favors smaller tasks, lower makespan; penalizes larger tasks, resource under-utilization and load imbalance.
2) LBIMM [25]; static; supports task priority; cannot update VM status at run time.
3) Max-Min [26]; static; maps the largest jobs onto the fastest VMs, favors larger tasks; execution delay for smaller tasks [17], cannot update VM status at run time.
4) Dy-MaxMin [19]; dynamic; updates VM status after each interval, real-time load balancing. ...
Article
Full-text available
For the last few years, Cloud computing has been considered an attractive high-performance computing platform for individuals as well as organizations. Cloud service providers (CSPs) are setting up data centers with high-performance computing resources to accommodate the needs of Cloud users. The users are mainly interested in the response time, whereas the Cloud service providers are more concerned about revenue generation. Concerning these requirements, task scheduling for users' applications in Cloud computing has attained focus from the research community. Various task scheduling heuristics have been proposed and are available in the literature. However, the task scheduling problem is NP-hard in nature, and thus finding an optimal schedule is always challenging. In this research, a resource-aware dynamic task scheduling approach (DRALBA) is proposed and implemented. The simulation experiments have been performed on the CloudSim simulation tool considering three renowned datasets, namely HCSP, GoCJ, and a Synthetic workload. The obtained results of the proposed approach are then compared against the RALBA, Dynamic MaxMin, DLBA, and PSSELB scheduling approaches concerning average resource utilization (ARUR), Makespan, Throughput, and average response time (ART). The DRALBA approach has revealed significant improvements in terms of attained ARUR, throughput, and Makespan. This fact is endorsed by the average resource utilization results: 98% for the HCSP dataset; 75% for the Synthetic workload (improving ARUR by 72.00%, 77.33%, 78.67%, and 13.33% as compared to RALBA, Dynamic MaxMin, DLBA and PSSELB, respectively); and 77% for GoCJ (the second-best attained ARUR).
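ARUR, used here and in several of the works above, is commonly formulated as the mean utilization of the m VMs over the schedule, with B_k the total busy time of VM k (a common formulation in this literature; the paper's exact definition may differ slightly):

```latex
\mathrm{ARUR} = \frac{1}{m}\sum_{k=1}^{m}\frac{B_k}{\text{makespan}}
```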
... Saniah Rehman et al. [57] provided resource allocation using the Min-Min algorithm for fog and cloud in IoT-enabled homes. The researchers presented Smart Grid concepts through which the energy requirements, calculated for six groups of 20 buildings, and the required number of virtual machines were allocated to each group to perform its task. ...
Article
Full-text available
The present-day Internet of Things (IoT) is a trending technology playing a vital role in building up a quality human life. With the advancement in IoT devices, Fog computing has turned out to be a solution for handling IoT applications efficiently. Many IoT applications run in Fog environments with central Fog nodes, and by servers in the Cloud. Because of the heterogeneous and distributed environment of the Fog system, the management of an increasing number of IoT applications within the available resources for optimal QoS (Quality of Service) is a necessity. In this work, a review of resource scheduling techniques for optimal QoS has been done. The provided taxonomy for approaches to QoS management in the Fog environment is classified into Resource Scheduling, Energy Efficiency, and Security. The issues and approaches of four resource scheduling techniques (task scheduling, resource allocation, task offloading, and application placement) are discussed in detail. A comparative analysis of these four techniques on performance metrics, advantages and disadvantages, and implementation tools is also presented.
... Consequently, the applications off-load the data source from the central DCs to nano DCs. This can efficiently save energy (Zhang et al., 2016), using energy management-as-a-service (Rehman et al., 2018). ...
... Round Robin: fair task scheduling, reduced waiting time; takes a longer processing time for big tasks, high variation in time-slice length, unpredictable loads on the server. Min-Min [43]: the completion time of each task is calculated and, initially, resources are assigned to the tasks that have the minimum execution time. ...
Article
Full-text available
This research paper proposes a novel approach named priority-based load balancing (PLB) for the cloud computing environment. PLB provides resilient and adaptive task scheduling using multi-queues. Numerous strategies have already been proposed in past research to prioritize tasks and map all the tasks to the different resources available on the cloud. There is still a hindrance in performance due to the negligible attention paid to unused resources and tasks having low priority, eventually leading to a starvation problem. To this end, the PLB algorithm has been partitioned into four sub-procedures, namely (i) starvation-free task allocation, (ii) inserting tasks into the dispatcher, (iii) reordering tasks inside the queues and, eventually, (iv) mapping tasks onto the Virtual Machines (VMs) while calculating the cost incurred for all the corresponding VMs. The sole motivation of this research work is to optimize the performance parameters by allocating all the jobs to all the available resources in the workflow model. It also consolidates job categorization in the priority-based multi-queues, while filtering tasks from all the queues to overcome the deprivation of low-priority tasks. In this paper, a test-bed setup has been deployed using CloudSim 3 and the TCS WAN emulator for experimentation and results evaluation. The experimental setup covers different aspects such as performance measures, average response time, and makespan time in order to ascertain the efficiency, resource utilization ratio, and bandwidth of the workflow model. The obtained results are further compared with five different approaches, including First Come First Serve, Round Robin, Min-Min, Max-Min, and ACO, and it was observed that the proposed strategy yielded more efficiency and accuracy in most of the cases. The experimental results have been further validated and demonstrated in order to justify the claims of the proposed approach, which is able to handle different-priority tasks and resource allocation in a stable and optimum manner.
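The starvation-free, multi-queue idea described above can be illustrated with a generic priority dispatcher that ages waiting tasks. This is a sketch of the general technique only; the class name and the ageing rule are assumptions, not the authors' PLB sub-procedures.

```python
import heapq

class AgingDispatcher:
    """Priority dispatch with ageing so low-priority tasks never starve."""

    def __init__(self):
        self.heap = []   # entries: (effective_priority, seq, task); lower runs first
        self.seq = 0     # tie-breaker preserving submission order

    def submit(self, task, priority):
        heapq.heappush(self.heap, (priority, self.seq, task))
        self.seq += 1

    def next_task(self):
        if not self.heap:
            return None
        # Age every waiting entry: effective priority improves over time, so
        # even the lowest-priority task eventually reaches the front.
        self.heap = [(p - 1, s, t) for p, s, t in self.heap]
        heapq.heapify(self.heap)
        return heapq.heappop(self.heap)[2]
```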
... We also compared the results obtained using lp_solve with arbitrarily selected and proportional assignments, as well as setups generated using the min-min algorithm [30]. The comparison of real calculation times can be seen in Tables 4 and 5. ...
Article
Full-text available
In the paper we investigate a practical approach to the application of integer linear programming for optimization of data assignment to compute units in a multi-level heterogeneous environment with various compute devices, including CPUs, GPUs and Intel Xeon Phis. The model considers an application that processes a large number of data chunks in parallel on various compute units and takes into account computations, communication including bandwidths and latencies, partitioning, merging, initialization, and the overhead for computational kernel launch and cleanup. We show that theoretical results from our model are close to real results, as differences do not exceed 5% for larger data sizes, and up to 16.7% for smaller data sizes. For an exemplary workload based on solving systems of equations of various sizes with various compute-to-communication ratios, we demonstrate that using an integer linear programming solver (lp_solve) with timeouts allows obtaining significantly better total (solver+application) run times than runs without timeouts, also significantly better than arbitrarily chosen ones. We show that OpenCL 1.2's device fission allows obtaining better performance in heterogeneous CPU+GPU environments compared to the GPU-only and the default CPU+GPU configuration, where a whole device is assigned for computations leaving no resources for GPU management.
... Less time-consuming tasks are allocated to the resources first. The processing of a task depends upon its execution time, i.e., the task having the minimum execution time is allocated first, whereas the tasks having the maximum execution time stand by until the processor becomes free [90]. Like the Min-Min algorithm, Max-Min also works by finding the minimum execution times, with the only exception that it deals first with the tasks that take the maximum time to execute. ...
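Relative to the Min-Min sketch given earlier, only the selection rule changes in Max-Min. A hedged fragment (using the same assumed `ready` and `vms` structures as before) is:

```python
def pick_next_max_min(unscheduled, vms, ready):
    """Return (completion_time, task, vm) for the next Max-Min assignment."""
    candidates = []
    for t, length in unscheduled.items():
        # Best (minimum-completion-time) VM for this task, exactly as in Min-Min.
        ct, vm = min((ready[v] + length / s, v) for v, s in vms.items())
        candidates.append((ct, t, vm))
    # Max-Min difference: schedule the task whose BEST completion time is
    # largest, so the longest tasks are placed first.
    return max(candidates)
```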
Article
Full-text available
The Internet of Things has been growing, due to which the number of user requests on the fog computing layer has also increased. Fog works in a real-time environment, and requests from connected devices need to be processed immediately. With the increase in user requests on the fog layer, virtual machines (VMs) at the fog layer become overloaded. A load balancing mechanism can distribute the load among all the VMs in equal proportion. It has become a necessity in the fog layer to equally and equitably distribute all the workload among the existing VMs in the segment. Until now, many load balancing techniques have been proposed for fog computing. An empirical study of existing methods in load balancing has been conducted, and a taxonomy has been presented in hierarchical form. Besides, the article contains a year-wise comprehensive review and summary of research articles published in the area of load balancing from 2013 to 2020. Furthermore, the article also contains our proposed fog computing architecture to resolve the load balancing problem. It also covers current issues and challenges that can be resolved in future research works. The paper concludes by providing future directions.
... As for offline scheduling, which is also called batch-mode scheduling, resources are allocated in response to incoming application requests at predefined moments, which is very useful for rapid calculation of the processing time when there is a large number of incoming tasks. Min-Min [69], Max-Min [70], etc., are a few examples of batch-mode scheduling algorithms. • Preemptive and non-preemptive scheduling: With regard to preemptive scheduling, the tasks currently being executed can be interrupted and consequently migrated to other free resources. ...
Article
Cloud computing is a recently emerged paradigm, the aim of which is to provide on-demand, pay-as-you-go, internet-based access to shared computing resources (hardware and software) in a metered, self-service, dynamically scalable fashion. A related hot topic at the moment is task scheduling, which is well known for delivering critical cloud service performance. However, the dilemmas of resources being underutilized (underloaded) and overutilized (overloaded) may arise as a result of improper scheduling, which in turn leads to either wastage of cloud resources or degradation in service performance, respectively. Thus, the idea of incorporating meta-heuristic algorithms into task scheduling emerged in order to efficiently distribute complex and diverse incoming tasks (cloudlets) across available limited resources within a reasonable time. Meta-heuristic techniques have proven very capable of solving scheduling problems, which is fulfilled herein from a cloud perspective by first providing a brief overview of traditional and heuristic scheduling methods before diving deeply into the most popular meta-heuristics for cloud task scheduling, followed by a detailed systematic review featuring a novel taxonomy of those techniques, along with their advantages and limitations. More specifically, in this study, the basic concepts of cloud task scheduling are addressed smoothly, and diverse swarm, evolutionary, physical, emerging, and hybrid meta-heuristic scheduling techniques are categorized as per the nature of the scheduling problem (i.e., single- or multi-objective), the primary objective of scheduling, the task-resource mapping scheme, and the scheduling constraint. Armed with these methods, some of the most recent relevant literature is surveyed, and insights into the identification of existing challenges are presented, along with a trail to potential solutions. Furthermore, guidelines to future research directions drawn from recently emerging trends are outlined, which should contribute to assisting current researchers and practitioners as well as pave the way for newcomers excited about cloud task scheduling to pursue their own glory in the field.
... This scheduling algorithm performs better for smaller datasets, while it yields a high makespan and degraded throughput for larger tasks. The Min-Min [22] scheduling heuristic reduces the makespan as compared to other task scheduling heuristics. However, the main issue with the Min-Min algorithm is poor resource utilization, which is one of the critical requirements of CSPs. ...
Article
Full-text available
In recent years, the growth rate of Cloud computing technology has been increasing exponentially, mainly for its extraordinary services with expanding computation power, the possibility of massive storage, and all other services with maintained quality of service (QoS). Task allocation is one of the best solutions to improve different performance parameters in the cloud, but when multiple heterogeneous clouds come into the picture, the allocation problem becomes more challenging. This research work proposes a resource-based task allocation algorithm. The same is implemented and analyzed to understand the improved performance of the heterogeneous multi-cloud network. The proposed task allocation algorithm (Energy-aware Task Allocation in Multi-Cloud Networks (ETAMCN)) minimizes the overall energy consumption and also reduces the makespan. The results show that the makespan approximately overlaps for different tasks and does not show a significant difference. However, the average energy consumption improvement through ETAMCN is approximately 14%, 6.3%, and 2.8% as opposed to the random allocation algorithm, the Cloud Z-Score Normalization (CZSN) algorithm, and the multi-objective scheduling algorithm with Fuzzy resource utilization (FR-MOS), respectively. An observation of the average SLA violation of ETAMCN for different scenarios is also performed.
... Round-Robin (RR) is a simple but well-known algorithm and one of the most common algorithms for resource allocation [26]. RR has been used by other papers in the literature for comparison [27], [28], [29], [30]. In addition to RR, Minimum Response Time (Minimum Completion Time) [31], which assigns each task to the resource with the minimum completion time, is used for comparison. ...
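For orientation, here are minimal sketches of the two baselines named in this excerpt (assumed data shapes, not the paper's code):

```python
from itertools import cycle

def round_robin(tasks, vm_ids):
    """Cycle through the VMs, handing each task to the next VM in turn."""
    rotation = cycle(vm_ids)
    return {t: next(rotation) for t in tasks}

def minimum_completion_time(tasks, vms):
    """Send each arriving task to the VM that would finish it earliest."""
    ready = {v: 0.0 for v in vms}            # earliest free time per VM
    mapping = {}
    for t, length in tasks.items():          # tasks handled in arrival order
        finish, vm = min((ready[v] + length / s, v) for v, s in vms.items())
        ready[vm] = finish
        mapping[t] = vm
    return mapping
```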
Conference Paper
Abstract—Cloud computing provides computing and storage resources over the Internet to provide services for different industries. However, delay-sensitive applications like smart health and city applications now require computation over large amounts of data transferred to centralized cloud data centers, which leads to a drop in performance of such systems. The new paradigms of fog and edge computing provide new solutions by bringing resources closer to the user, and provide low latency and energy efficiency compared to cloud services. It is important to find the optimal placement of services and resources in the three-tier IoT to achieve improved cost and resource efficiency, higher QoS, and a higher level of security and privacy. In this paper, we propose a cost-aware genetic-based (CAG) task scheduling algorithm for fog-cloud environments, which improves cost efficiency in real-time applications with hard deadlines. The iFogSim simulator, an extended version of CloudSim, is used to deploy and test the performance of the proposed method in terms of latency, network congestion, and cost. The performance results show that the proposed algorithm provides better efficiency in terms of cost and throughput compared to the Round-Robin and Minimum Response Time algorithms.
... The model has been simulated using MATLAB. Rehman et al. [19] proposed the Min-Min algorithm for efficient resource distribution and load balancing. The results are then simulated and compared with the Round Robin algorithm. ...
... Furthermore, Xu et al. [30] discussed a framework that classifies applications based on deadline, and assists service migration and load distribution. The application management policy of Rehman et al. [21] optimizes energy usage of instances while executing the applications. Taneja et al. [26] also developed a policy that prioritizes application placement on robust Fog nodes to enhance resource utilization. ...
Conference Paper
Full-text available
Fog computing overcomes the limitations of executing Internet of Things (IoT) applications in remote Cloud datacentres by extending the computation facilities closer to data sources. Since most of the Fog nodes are resource constrained, accommodation of every IoT application within Fog environments is very challenging. Hence, we need to efficiently identify which set of applications should be deployed in Fog. It becomes even more complicated when the application characteristics in terms of urgency, size and flow of inputs are considered simultaneously. The necessity of time-optimized execution further intensifies the application management problem. In this work, we propose a policy for Fog environments that distributes application management tasks across the gateway and the infrastructure level. It classifies and places applications according to their Edge affinity. Edge affinity of an application denotes the relative intensity of different attributes coherent with its characteristics such as user-defined deadline, amount of data per input and sensing frequency of IoT devices, which are required to be addressed within Fog environments to meet its Quality of Service (QoS). The proposed policy also minimizes the service delivery time of applications in Fog infrastructure. Its performance is compared with existing application management policies in both iFogSim-simulated and FogBus-based real environments. The experiment results show that our policy outperforms others in combined QoS enhancement, network relaxation and resource utilization.
Article
Full-text available
The tremendous increase in daily internet users leads to an explosion of on-demand requests over the cloud. This places a burden of diverse and complicated applications on the cloud environment and its services. The assignment of resources to the associated tasks varies based on the functioning of the resources available across the cloud, which establishes the importance of task scheduling in cloud computing. Inadequate scheduling techniques lead to resource overuse and underuse (imbalance), resulting in service degradation (in the event of overuse) or cloud resource waste (in the case of underuse or underutilization). The primary idea is to eliminate the imbalance problem by employing an appropriate scheduling algorithm that can efficiently allocate jobs (of varying and complicated types) among cloud resources. The parameters which impact the activity mentioned above are resource utilization, reliability, makespan time, cost, energy consumption, availability, response time, and other critical performance indicator metrics. In order to create a productive cloud scheduling method, these metrics need to be optimized. Many state-of-the-art cloud task scheduling algorithms based on heuristic, meta-heuristic, and hybrid designs have been presented and discussed in the literature as part of this study. This study presents a comprehensive assessment and classification of various scheduling systems and their benefits and drawbacks. Our detailed and comprehensive survey will serve as a stepping stone for new cloud computing researchers and aid in pursuing research in this direction.
Article
Full-text available
In a distributed computing system, resources are limited and need to be utilized effectively. The Fog computing paradigm, with suitable allocations, is an effective way of improving QoS. Thus, different resource scheduling and optimization algorithms exist. However, there is still scope to improve bandwidth, latency, energy consumption, and total communication cost in the Fog environment. In this work, an investigation is done to show the significance of task management in such a resource-constrained environment. Various heuristic and meta-heuristic algorithms are evaluated using simulations to show task placement and its impact, using five different Montage datasets from the WorkflowSim toolkit for the Fog computing environment. QoS parameters like cost, makespan, and energy consumption are then computed for various state-of-the-art techniques like Min-Max, PSO, GA, ACO, and BLA. This shows the behaviour of these techniques under different tasks and allocation environment configurations. The evaluated result parameters are collected and presented in the results section. This work shows the effectiveness of heuristic and meta-heuristic techniques in managing tasks and their allocations in the Fog environment.
Article
Full-text available
For traditional systems, various researchers have suggested different resource scheduling and optimization algorithms. However, there is still scope to reduce bandwidth, latency, energy consumption, and total communication cost in the Fog environment. In this work, various performance challenges experienced in the Fog environment based on 6G networks are discussed, and the role of optimization techniques in overcoming these challenges is explored. The work focuses on a comparison of the PSO, GA, and Round-Robin algorithms on the parameters of cost, makespan, average execution time, and energy consumption for resource management in the Fog environment. This study also shows which technique among the group-behaviour, social-behaviour, and preemptive types is better for achieving QoS for resource management in the Fog environment for the 6G network. We also discuss various resource scheduling problems that may be faced in the future, and what types of improvements can be considered in terms of IoT devices and 6G networks.
Chapter
Full-text available
Image classification is a classic problem in areas pertaining to Computer Vision, Image Processing, and Machine Learning. This paper compares various Deep Learning architectures, implemented and tested in combination with Dense Neural Networks, in order to select the architecture that gives the best image classification accuracy. This comparative study helps to improve classification accuracy on both training and testing databases. For training and testing, 3000 training images and 1000 test images were used. The results of the Deep Learning-based classification of images on the Google Colab platform showed how accurately classification was performed by the various deep learning architectures.
Research
Full-text available
Cloud computing provides computing resources on demand, and the concept is pay-per-use. Cloud computing mainly focuses on optimal resource utilization at lower cost. Nowadays, cloud computing technology is utilized by most IT companies and business organizations. This increases the number of cloud users as well as computing resources, which creates challenges for cloud service providers in maintaining optimum utilization of computing resources. Task scheduling methods play an important role in cloud computing. A scheduling mechanism helps in allocating a virtual machine to a user task and in maintaining the balance between machine capacity and total task load. Different task scheduling methods have been suggested by cloud researchers. In this research work, we present a hybrid ACHBDF (Ant Colony, Honey Bee with Dynamic Feedback) load balancing method for optimum resource utilization in cloud computing. The proposed ACHBDF method uses the combined strategy of two dynamic scheduling methods with a dynamic time-step feedback method. ACHBDF utilizes the qualities of the ant colony method and the honey bee method in efficient task scheduling. Here, the feedback strategy helps to check the system load after each event in a dynamic feedback table, which helps in migrating tasks more efficiently in less time. An experimental analysis between existing ant colony optimization, the honey bee method, and the proposed ACHBDF clearly shows that ACHBDF outperforms the existing methods.
Conference Paper
Full-text available
Traditional electricity generation based on fossil fuel consumption threatens humanity with global warming, climate change, and increased carbon emissions. Renewable resources such as wind or solar power are the solution to these problems. The smart grid is the only choice for integrating green power resources into the energy distribution system, controlling power usage, and balancing the energy load. Smart grids employ smart meters, which are responsible for two-way flows of electricity information to monitor and manage electricity consumption. In a large smart grid, smart meters produce a tremendous amount of data that is hard to process, analyze, and store even with cloud computing. Fog computing is an environment that offers a place for collecting, computing, and storing smart meter data before transmitting it to the cloud. This environment acts as a bridge between the smart grid and the cloud. It is geographically distributed and extends cloud computing via additional capabilities, including reduced latency, increased privacy, and locality for smart grids. This study overviews fog computing in smart grids by analyzing its capabilities and issues. It presents the state of the art in the area, defines a fog-computing-based smart grid, and gives a use-case scenario for the proposed model.
Article
Full-text available
A cloud computing environment offers a simplified, centralized platform of resources for use when needed, at low cost. One of the key functionalities of this type of computing is to allocate the resources on individual demand. However, with the expanding requirements of cloud users, the need for efficient resource allocation is also emerging. The main role of the service provider is to effectively distribute and share the resources, which would otherwise result in resource wastage. In addition to the user getting the appropriate service according to the request, the cost of the respective resource is also optimized. In order to surmount the mentioned shortcomings and perform optimized resource allocation, this research proposes a new Agent-based Automated Service Composition (A2SC) algorithm comprising request processing and automated service composition phases; it is not only responsible for searching comprehensive services but also considers reducing the cost of virtual machines which are consumed by on-demand services only.
Article
Full-text available
Wide area measurement system (WAMS) usually contains three dependent infrastructures called management, measurement, and communication. For optimal operation of a power system, it is necessary to design these infrastructures suitably. In this paper, measurement and communication infrastructures in a wide area network are designed independently from a management viewpoint, considering an adequate level of system observability. In the first step, optimal placement of measurement devices is determined using an integer linear programming (ILP) solution methodology while taking into account zero-injection bus effects. In the next step, new dynamic multiobjective shortest path (MOSP) programming is presented for the optimal design of communication infrastructure. The best architecture design is introduced in terms of optical fiber power ground wire (OPGW) coverage for the suggested central control bus and the number of phasor measurement units (PMUs). The applicability of the proposed model is finally examined on several IEEE standard test systems. The simulation results show better performance of the proposed method compared with other conventional methods. The numerical results reveal that applying the proposed method could not only reduce the OPGW coverage cost, the number of PMUs, and the number of communication links but could also improve the system technical indexes such as latency as subsidiary results of the optimization process.
Conference Paper
Smart Grid (SG) is a modernized electric grid that enhances the reliability, efficiency, sustainability, and economics of electricity services. Moreover, it plays a vital role in modern energy infrastructure. The SG core challenges are how to efficiently utilize different kinds of front-end smart devices, such as smart meters and power assets, and in what manner to process the enormous volume of data received from these devices. Further, cloud and fog computing are technologies that provide on-demand computational resources. They are a good solution to overcome these hurdles, as they have numerous good characteristics such as cost saving, energy saving, scalability, flexibility, and agility. In this paper, a cloud-fog based model is proposed for resource management in SG. The key idea of our model is to work out a hierarchical structure of cloud-fog computing to provide different types of computing services for resource management in SG. In addition, for load balancing, three algorithms are used: throttled, round robin, and particle swarm optimization. A comparative discussion of these algorithms is presented in this paper.
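Of the three balancers named in this abstract, throttled is easy to sketch. The following is a minimal illustration of its usual description (an availability index scanned for the first free VM), not the paper's implementation:

```python
class ThrottledBalancer:
    """Throttled load balancing: one request per VM, via an availability index."""

    def __init__(self, vm_ids):
        self.available = {v: True for v in vm_ids}

    def allocate(self):
        # Scan the index table; return the first free VM, or None to signal
        # the caller to queue the request until a VM is released.
        for vm, free in self.available.items():
            if free:
                self.available[vm] = False
                return vm
        return None

    def release(self, vm):
        # Mark the VM free; the controller may then re-dispatch a queued request.
        self.available[vm] = True
```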
Conference Paper
The integration of Smart Grid (SG) with cloud computing promises to develop an improved energy management system for utilities and consumers. New applications and services are being developed which create a large amount of data to be processed on the cloud. Fog computing, as an extension of cloud computing, helps to mitigate the load on cloud data centers. In this paper, a three-layered model based on a cloud and fog framework is proposed to reduce the load of consumers and the power generation system. The end-user layer contains clusters of buildings which are connected to the fog server layer. The fog layer is an intermediate layer which connects the end-user layer to the cloud layer. Three load balancing algorithms, Round Robin (RR), throttled, and the proposed Particle Swarm Optimization with Simulated Annealing (PSOSA), are used for resource allocation. The service broker policy considered in this paper is optimized response time. The findings demonstrate that PSOSA performs better than RR and throttled in reducing the response time, processing time, and cost of virtual machines, microgrids, and data transfer.
Article
Smart Grid (SG) technology represents an unprecedented opportunity to move the energy industry into a new era of reliability, availability, and efficiency that will contribute to our economic and environmental health. On the other hand, the emergence of Electric Vehicles (EVs) promises to yield multiple benefits to both the power and transportation industry sectors, but it is also likely to affect SG reliability by consuming massive amounts of energy. Nevertheless, the plug-in of EVs at public supply stations must be controlled and scheduled in order to reduce the peak load. This paper considers the problem of plugging in EVs at public supply stations (EVPSS). A new communication architecture for smart grid and cloud services is introduced. Scheduling algorithms are proposed in order to attribute priority levels and optimize the waiting time to plug in at each EVPSS. To the best of our knowledge, this is one of the first papers investigating the aforementioned issues using a new network architecture for the smart grid based on cloud computing. We evaluate our approach via extensive simulations and compare it with two other recently proposed works, based on a real energy supply scenario in Toronto. Simulation results demonstrate the effectiveness of the proposed approach when considering real EV charging-discharging loads at peak-hour periods.
Article
By locally solving an optimization problem and broadcasting an update message over the underlying communication infrastructure, demand response programs based on the distributed optimization model encourage all users to participate in the program. However, some challenging issues present themselves, such as the assumption of an ideal communication network, especially when utilizing wireless communication, and the effects of communication channel properties, like the bit error rate, on the overall performance of the demand response program. To address these issues, this paper first defines a Cloud-based Demand Response (CDR) model, which is implemented as a two-tier cloud computing platform. Then a communication model is proposed to evaluate the communication performance of both the CDR and DDR (Distributed Demand Response) models. The present study shows that when users are finely clustered, the channel bit error rate is high, and the User Datagram Protocol (UDP) is leveraged to broadcast the update messages, the optimal solution becomes unachievable. In contrast to UDP, the Transmission Control Protocol (TCP) demands higher bandwidth and increases the delay in the convergence time. Finally, the current work presents a cost-effectiveness analysis which confirms that achieving higher demand response performance incurs a higher communication cost.
Article
With the increasing importance of images in people's daily life, content-based image retrieval (CBIR) has been widely studied. Compared with text documents, images consume much more storage space. Hence, its maintenance is considered to be a typical example for cloud storage outsourcing. For privacy-preserving purposes, sensitive images, such as medical and personal images, need to be encrypted before outsourcing, which makes the CBIR technologies in plaintext domain to be unusable. In this paper, we propose a scheme that supports CBIR over encrypted images without leaking the sensitive information to the cloud server. First, feature vectors are extracted to represent the corresponding images. After that, the pre-filter tables are constructed by locality-sensitive hashing to increase search efficiency. Moreover, the feature vectors are protected by the secure kNN algorithm, and image pixels are encrypted by a standard stream cipher. In addition, considering the case that the authorized query users may illegally copy and distribute the retrieved images to someone unauthorized, we propose a watermark-based protocol to deter such illegal distributions. In our watermark-based protocol, a unique watermark is directly embedded into the encrypted images by the cloud server before images are sent to the query user. Hence, when image copy is found, the unlawful query user who distributed the image can be traced by the watermark extraction. The security analysis and the experiments show the security and efficiency of the proposed scheme.
Article
With the rapid increase of monitoring devices and controllable facilities in the demand side of electricity networks, more solid information and communication technology (ICT) resources are required to support the development of demand side management (DSM). Different from traditional computation in power systems which customizes ICT resources for mapping applications separately, DSM especially asks for scalability and economic efficiency, because there are more and more stakeholders participating in the computation process. This paper proposes a novel cost-oriented optimization model for a cloud-based ICT infrastructure to allocate cloud computing resources in a flexible and cost-efficient way. Uncertain factors including imprecise computation load prediction and unavailability of computing instances can also be considered in the proposed model. A modified priority list algorithm is specially developed in order to efficiently solve the proposed optimization model and compared with the mature simulating annealing based algorithm. Comprehensive numerical studies are fulfilled to demonstrate the effectiveness of the proposed cost-oriented model on reducing the operation cost of cloud platform in DSM.
Article
In cloud computing, searchable encryption over outsourced data is a hot research field. However, most existing works on encrypted search over outsourced cloud data follow the model of 'one size fits all' and ignore personalized search intention. Moreover, most of them support only exact keyword search, which greatly affects data usability and user experience. So how to design a searchable encryption scheme that supports personalized search and improves the user search experience remains a very challenging task. In this paper, for the first time, we study and solve the problem of personalized multi-keyword ranked search over encrypted data (PRSE) while preserving privacy in cloud computing. With the help of the semantic ontology WordNet, we build a user interest model for each individual user by analyzing the user's search history, and adopt a scoring mechanism to express user interest smartly. To address the limitations of the 'one size fits all' model and exact keyword search, we propose two PRSE schemes for different search intentions. Extensive experiments on a real-world dataset validate our analysis and show that our proposed solution is very efficient and effective.
Article
By introducing microgrids, energy management is required to control the power generation and consumption for residential, industrial, and commercial domains, e.g., in residential microgrids and homes. Energy management may also help us reach zero net energy (ZNE) for the residential domain. Improvements in technology, cost, and feature size have enabled devices everywhere to be connected and interactive, in what is called the Internet of Things (IoT). The increasing complexity and data, due to the growing number of devices like sensors and actuators, require powerful computing resources, which may be provided by cloud computing. However, scalability has become a potential issue in cloud computing. In this paper, fog computing is introduced as a novel platform for energy management. The scalability, adaptability, and open-source software/hardware featured in the proposed platform enable the user to implement energy management with customized control-as-services, while minimizing the implementation cost and time-to-market. To demonstrate energy management-as-a-service over the fog computing platform in different domains, two prototypes of home energy management (HEM) and microgrid-level energy management have been implemented and experimented with.
Article
The smartphone is a typical cyberphysical system (CPS). It must be low energy consuming and highly reliable to deal with the simple but frequent interactions with the cloud, which constitute the cloud-integrated CPS. Dynamic voltage scaling (DVS) has emerged as a critical technique to leverage power management by lowering the supply voltage and frequency of processors. In this paper, based on the DVS technique, we propose a novel Energy-aware Dynamic Task Scheduling (EDTS) algorithm to minimize the total energy consumption for smartphones, while satisfying stringent time constraints and the probability constraint for applications. Experimental results indicate that the EDTS algorithm can significantly reduce energy consumption for CPS, as compared to the critical path scheduling method and the parallelism-based scheduling algorithm.
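The leverage DVS provides comes from the standard CMOS dynamic-power model (general background, not a formula quoted from this paper): lowering the supply voltage V_dd and frequency f cuts power roughly quadratically in voltage,

```latex
P_{\mathrm{dyn}} \approx \alpha\, C\, V_{dd}^{2}\, f,
\qquad
E = \int P\,\mathrm{d}t
```

where alpha is the switching activity and C the switched capacitance, so running slower but longer can still lower the total energy E within a deadline.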
Article
Due to the increasing popularity of cloud computing, more and more data owners are motivated to outsource their data to cloud servers for great convenience and reduced cost in data management. However, sensitive data should be encrypted before outsourcing for privacy requirements, which obsoletes data utilization like keyword-based document retrieval. In this paper, we present a secure multi-keyword ranked search scheme over encrypted cloud data, which simultaneously supports dynamic update operations like deletion and insertion of documents. Specifically, the vector space model and the widely-used TF x IDF model are combined in the index construction and query generation. We construct a special tree-based index structure and propose a "Greedy Depth-first Search" algorithm to provide efficient multi-keyword ranked search. The secure kNN algorithm is utilized to encrypt the index and query vectors, and meanwhile ensure accurate relevance score calculation between encrypted index and query vectors. In order to resist statistical attacks, phantom terms are added to the index vector for blinding search results. Due to the use of our special tree-based index structure, the proposed scheme can achieve sub-linear search time and deal with the deletion and insertion of documents flexibly. Extensive experiments are conducted to demonstrate the efficiency of the proposed scheme.
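The TF x IDF weighting mentioned here is the standard one (general background, not the paper's exact scoring): each keyword t in document d receives the weight

```latex
w_{t,d} = \mathrm{TF}_{t,d}\,\times\,\log\frac{N}{\mathrm{DF}_{t}}
```

with N the number of documents and DF_t the number of documents containing t; relevance is then the inner product of the query and document weight vectors.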
Conference Paper
Load balancing is a major concern in the cloud computing environment. The cloud comprises many hardware and software resources, and managing these plays an important role in executing a client's request. Nowadays, clients from different parts of the world are demanding various services at a rapid rate. In this situation, the load balancing algorithms built should be very efficient in allocating requests and also in ensuring the usage of resources in an intelligent way, so that underutilization of resources does not occur in the cloud environment. In the present work, a novel VM-assign load balancing algorithm is proposed which allocates incoming requests to all available virtual machines in an efficient manner. Further, its performance is analyzed using the CloudSim simulator and compared with the existing Active-VM load balancing algorithm. Simulation results demonstrate that the proposed algorithm distributes the load on all available virtual machines without under/over-utilization.
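A minimal sketch of the load-spreading idea described here, assuming the balancer tracks active request counts per VM (an assumption; the paper's VM-assign bookkeeping is not shown):

```python
def assign_request(active_counts):
    """Pick the VM currently serving the fewest requests (least-loaded first)."""
    vm = min(active_counts, key=active_counts.get)
    active_counts[vm] += 1          # the request is now active on this VM
    return vm

# Example: {"vm1": 2, "vm2": 0, "vm3": 1} -> "vm2" is chosen.
```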
Article
With the significant advances in Information and Communications Technology (ICT) over the last half century, there is an increasingly perceived vision that computing will one day be the 5th utility (after water, electricity, gas, and telephony). This computing utility, like all other four existing utilities, will provide the basic level of computing service that is considered essential to meet the everyday needs of the general community. To deliver this vision, a number of computing paradigms have been proposed, of which the latest one is known as Cloud computing. Hence, in this paper, we define Cloud computing and provide the architecture for creating Clouds with market-oriented resource allocation by leveraging technologies such as Virtual Machines (VMs). We also provide insights on market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain Service Level Agreement (SLA)-oriented resource allocation. In addition, we reveal our early thoughts on interconnecting Clouds for dynamically creating global Cloud exchanges and markets. Then, we present some representative Cloud platforms, especially those developed in industries, along with our current work towards realizing market-oriented resource allocation of Clouds as realized in Aneka enterprise Cloud technology. Furthermore, we highlight the difference between High Performance Computing (HPC) workload and Internet-based services workload. We also describe a meta-negotiation infrastructure to establish global Cloud exchanges and markets, and illustrate a case study of harnessing ‘Storage Clouds’ for high performance content delivery. Finally, we conclude with the need for convergence of competing IT paradigms to deliver our 21st century vision.