Special Issue (April)
Online - 2455-3891
Print - 0974-2441
© 2017 The Authors. Published by Innovare Academic Sciences Pvt Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/) DOI: http://dx.doi.org/10.22159/ajpcr.2017.v10s1.20519
Advances in Smart Computing and Bioinformatics
A REVIEW OF EXISTING CLOUD AUTOMATION TOOLS
PRASSANNA J*, ANJALI R PAWAR, NEELANARAYANAN V
School of Computing Science and Engineering, VIT University, Chennai, Tamil Nadu, India. Email: prassanna.j@vit.ac.in
Received: 23 January 2017, Revised and Accepted: 03 March 2017
ABSTRACT
Many enterprises run distributed applications on their on-premise servers. However, if the load on those servers changes unexpectedly, scaling the resources becomes tedious and requires skilled human power to manage such situations, which may increase capital expenditure. Hence, many companies have started to migrate their on-premise applications to the cloud. This migration of applications to the cloud is one of the major challenges. Setting up and managing the growing, complex infrastructure after migrating these applications to the cloud is a time-consuming and tedious process which can result in downtime. Hence, we need to automate this environment. To achieve an architecture for distributed systems which supports security, repeatability, reliability, and scalability, we require cloud automation tools. This paper summarizes tools such as Terraform and CloudFormation for infrastructure automation and Docker and Habitat for application automation.
Keywords: Cloud computing, Infrastructure automation, Application automation.
INTRODUCTION
Compute, storage, and network are the basic computing resources. The use of virtualization has reduced the time required to deploy computing resources from weeks to a few minutes. However, this is not enough for building a cloud-based infrastructure. Hence, the deployment and maintenance of those resources should be automated and managed in such a way that the resources can be used effectively, provisioned rapidly, and released with minimal management effort.
To reduce capital expenditure on computational resources, enterprises are moving their workloads to the cloud with the promise that they will get access to flexible and elastic compute resources at minimal cost. However, manually managing this growing infrastructure along with the deployed applications is one of the major challenges. This leads to the concept of cloud automation. There are various cloud automation tools which do all this work for us, ensuring that all tasks regarding the deployment and allocation of computational resources are done efficiently.
LITERATURE SURVEY
Juve and Deelman [1] note that Infrastructure-as-a-Service clouds provide the ability to provision VMs on demand, but they do not provide functionality for managing those resources once they are provisioned. Hence, to use such clouds effectively, tools are needed that help users easily deploy applications in the cloud. The authors developed a system to create, configure, and manage VM deployments in the cloud.
Due to the variety of operating systems and applications, it becomes very difficult to deploy a large number of virtual machines in a short period. Zhang et al. [2] propose an automatic deployment mechanism based on the cloud computing platform OpenStack. The system is responsible for automatic deployment at the operating-system level as well as the application level. They also developed an interactive dashboard which helps users deploy their systems and applications without professional knowledge of the cloud.
Callanan et al. [3] present the architecture of an environment migration framework for automating the migration, creation, and configuration of existing infrastructure in the cloud. They discuss some challenges faced while migrating applications to the cloud, chiefly security, along with legal and compliance issues.
Compute, storage, and network are the primary computing resources. The provisioning time for deploying these resources is remarkably reduced by virtualization technology. However, data center virtualization alone is still not sufficient to construct a cloud-based infrastructure. Relying only on data center virtualization may generate virtual resource sprawl. In addition, a cloud-based infrastructure cannot be constructed from virtual infrastructure alone; the physical infrastructure also needs to be automated. Hence, we need software to automate and manage a cloud-based infrastructure (virtual as well as physical resources). The different modules for the management and automation of cloud-based infrastructure, and their integration, are discussed in the cloud management and automation paper [4].
CLOUD AUTOMATION TOOLS
This section summarizes the study of existing infrastructure automation tools, Terraform and CloudFormation, and application automation tools, Habitat and Docker.
Terraform
Terraform takes the concept of managing infrastructure as code. It is one of the best tools for creating, configuring, managing, and versioning infrastructure effectively and safely. It can be used to codify the knowledge of building and scaling a service into a configuration. It uses text files to describe the infrastructure and supports various cloud service providers. If the configuration changes, Terraform determines the changes and creates execution plans accordingly [5].
The key features of Terraform are:
• Infrastructure as code: Terraform is used for managing cloud infrastructure as code. The configuration files describing the infrastructure can be easily shared and reused for other environments.
• Execution plans: Terraform generates an execution plan which states what it will do to reach the desired state. This execution plan describes what will happen when we call apply. Then, it executes that plan to build that infrastructure.
• Resource graph: Terraform determines the dependencies between resources and builds a graph of all our resources. This helps it create and modify non-dependent resources in parallel and gives insight into the dependencies within our infrastructure.
• Change automation: If complex changes are required, they can be applied easily with less human involvement. With the help of the execution plan and resource graph, we easily get to know what Terraform will change, and in what order, avoiding possible human errors.
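To make the idea concrete, the following is a minimal sketch of a Terraform configuration describing a single AWS instance; the region, AMI ID, and resource names are placeholders chosen for this example, not values from the paper.

    # main.tf: describe infrastructure in a plain text file
    provider "aws" {
      region = "us-east-1"   # placeholder region
    }

    resource "aws_instance" "web" {
      ami           = "ami-0abcdef1234567890"   # placeholder AMI ID
      instance_type = "t2.micro"

      tags = {
        Name = "example-web-server"
      }
    }

Running terraform plan against this file prints the execution plan described above, and terraform apply executes that plan to build the instance.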
Fig. 1: Sample workflow in Terraform

Fig. 1 shows the workflow in Terraform. Developers can use the terraform plan command to calculate what changes would be performed by the terraform apply command; hence, they can update the configuration according to the plan. If terraform apply succeeds, it deploys the changes to the Amazon Web Services (AWS) cloud. If you want a backup of the tfstate file, you can store the tfstate in S3. This helps other team members obtain the tfstate file and make changes.
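Storing the state remotely can be expressed directly in the configuration. Below is a minimal sketch of Terraform's S3 backend block; the bucket and key names are hypothetical placeholders for this example.

    # Keep tfstate in S3 so the whole team reads and writes the same state
    # (bucket, key, and region are hypothetical placeholders)
    terraform {
      backend "s3" {
        bucket = "example-terraform-state"
        key    = "prod/terraform.tfstate"
        region = "us-east-1"
      }
    }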
CloudFormation
CloudFormation is a service provided by AWS which helps us set up our AWS resources. It lets us spend more time focusing on the applications running in AWS rather than on managing the underlying resources. We can create a template file which states the resources we want, and AWS CloudFormation does the provisioning and configuring of those resources for us. It handles the creation and configuration of AWS resources and figures out what depends on what [6].
We can create and provision AWS infrastructure deployments predictably and repeatedly with the help of AWS CloudFormation. It allows us to use services such as Amazon Elastic Compute Cloud, Amazon Elastic Block Store, Auto Scaling, and Elastic Load Balancing to build highly scalable and reliable applications in the cloud while paying less attention to the underlying infrastructure [6].
The key features of AWS CloudFormation are [7]:
• Simplified infrastructure management: To build a highly scalable and reliable application, we might require an auto scaling group, an elastic load balancer, and other services, all of which must be configured to work together. Doing this manually adds complexity and time before we get our application up and running. Instead, we can modify an existing AWS CloudFormation template which describes all our resources; CloudFormation provisions all the required resources for us, and the collection of resources is easily managed as a single unit.
• Quick infrastructure replication: To make an application more reliable, we might replicate it in multiple regions so that it remains available at all times. AWS CloudFormation enables us to reuse a template to set up the required resources consistently and repeatedly in multiple regions.
• Change control and tracking: AWS CloudFormation uses templates to describe the infrastructure resources. These are text files, so we can easily track and control changes to our infrastructure. We can use a version control system along with the templates, so we get to know what changes we made, and if at any point we want to roll back our infrastructure to its original settings, we can use a previous version of the template.
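As a concrete illustration, the following is a minimal sketch of such a template in YAML; the logical resource name, AMI ID, and instance type are placeholders for this example.

    # template.yaml: one EC2 instance described as a CloudFormation template
    AWSTemplateFormatVersion: "2010-09-09"
    Description: Example template that provisions a single EC2 instance
    Resources:
      WebServer:
        Type: AWS::EC2::Instance
        Properties:
          ImageId: ami-0abcdef1234567890   # placeholder AMI ID
          InstanceType: t2.micro

The same template can then be launched consistently in several regions, for example with aws cloudformation create-stack --stack-name web --template-body file://template.yaml --region eu-west-1, which is how the replication feature above is typically exercised.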
Docker
Docker is an open-source tool designed for creating, deploying, and running applications easily using containers. Docker containers allow developers to wrap an application together with all its required libraries and other dependencies into a single package. This guarantees that the software will always run the same regardless of its environment. Docker gives developers the flexibility to build, ship, and run any application, anywhere [8].
Docker is somewhat similar to a virtual machine. However, instead of creating a whole virtual operating system, Docker enables applications to use the same underlying kernel. This significantly boosts performance and reduces the size of the application. Docker is designed to benefit both developers and system administrators: developers can focus on writing code without worrying about the underlying infrastructure. As Docker is open source, anyone can contribute and extend it to fulfill their own needs, and developers can start from one of the thousands of programs already designed to run in a Docker container. Because of its low operational expense and small footprint, operations staff gain the flexibility to reduce the number of systems needed [10,11].
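The single-package idea is expressed in a Dockerfile. The sketch below assumes a small Python web application with hypothetical files requirements.txt and app.py, and bundles the code together with its dependencies into one image.

    # Dockerfile: wrap the application and its dependencies in one package
    FROM python:3.9-slim

    WORKDIR /app

    # Install the application's library dependencies inside the image
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt

    # Add the application code and define how the container starts
    COPY app.py .
    CMD ["python", "app.py"]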
The key features of Docker are:
• Any application: Docker can build any application in any language using any stack, and dockerized applications run anywhere. Hence, developers are free to choose how they build, ship, and run their applications.
• Any infrastructure: Docker allows enterprises to make the best business decision by choosing any infrastructure, such as cloud, virtual machines, or bare-metal servers. Docker can run applications anywhere, on any infrastructure.
• Lightweight: All containers running on the same machine share the same operating system kernel, so they start quickly and make more efficient use of resources.
• Security: Containers provide an additional layer of protection for the application by isolating applications from one another and from the underlying infrastructure.
Docker follows a client-server architecture model. The Docker client is the basic interface to Docker. It takes configuration and command flags from the user and interacts with the Docker daemon. A single client can communicate with multiple unrelated daemons as well as with a remote Docker daemon. The Docker daemon runs on the host machine and does the work of building, running, and distributing your Docker containers.
To run multicontainer applications, Docker provides a tool called Docker Compose. With Docker Compose, we write a compose file which describes the configuration of the application's services. Using Compose, we can define multiple isolated environments on a single host. It can be used in various ways, such as in development environments, automated testing environments, or single-host deployments [9].
The use of Compose is a three-step process [9], with a sample compose file sketched after the list:
1. Define your application's environment with a Dockerfile so it can be reproduced anywhere.
2. Define the services that make up your application in a compose file so that they run together in an isolated environment.
3. Finally, run the docker-compose up command and Compose will start and run the entire application.
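A minimal sketch of such a compose file is shown below; it assumes the Dockerfile above and a hypothetical web service backed by Redis, with illustrative service names and port numbers.

    # docker-compose.yml: two services that run together in one environment
    version: "3"
    services:
      web:
        build: .          # build the web image from the local Dockerfile
        ports:
          - "5000:5000"   # publish the web service on the host
        depends_on:
          - redis
      redis:
        image: redis      # use the stock Redis image from Docker Hub

Running docker-compose up from the directory containing this file starts both containers together.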
If you want to use Compose in a production environment, you have to make slight changes to the configuration, then rebuild the image and recreate the application containers. This helps us deploy an application on a single server. If you want to run your applications on multiple hosts, you can use Compose against Swarm instances [12].
Docker Swarm creates a pool of Docker hosts and turns it into a single virtual host. Hence, any tool which already interacts with a Docker daemon can use Swarm to scale to multiple hosts. It supports tools such as Docker Compose, Docker Machine, Dokku, and Jenkins [13].
Fig. 2: Simple Docker architecture [9]

Fig. 2 shows how the commands work and what changes they perform. When the user runs docker build, the daemon builds the image. When the docker pull command is given, the Docker daemon pulls the image from the Docker registry to the Docker host. The docker run command uses an image and creates a container from it. The docker push command is used to push an image from the host to Docker Hub.
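On the command line, the operations from Fig. 2 look as follows; the image and repository names are placeholders for this example.

    docker build -t myapp .         # build an image from the local Dockerfile
    docker pull redis               # pull an image from the registry to the host
    docker run -d myapp             # create and start a container from the image
    docker tag myapp myuser/myapp   # name the image for a Docker Hub repository
    docker push myuser/myapp        # push the image from the host to Docker Hub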
Habitat
Habitat is an open-source project which provides a new approach to automation. It focuses on the application rather than the infrastructure it runs on. Using Habitat, the applications we build, deploy, and manage behave consistently in any runtime. It packages our application and its automation together, enabling us to ship the application, along with the automation needed to manage it, to any platform [14].
Habitat helps us spend less time on the environment and more time building features. It puts the application first: it packages application code, runtime dependencies, start-up scripts, and configuration together [15].
The key features of Habitat are [16]:
• Run anywhere: With the help of Habitat, an application can run in any environment, whether it is a container, bare metal, or PaaS.
• Modernize legacy applications: Legacy applications become independent of the environment for which they were designed once they are packaged with Habitat; they can easily adopt modern environments such as the cloud and containers.
• Simplify container management: Habitat reduces the complexity of managing containers in a production environment. It solves the challenges developers face when moving container-based applications from development environments into production, as it automates the application itself rather than the infrastructure around it.
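A Habitat package is described by a plan. The following is a minimal sketch of a plan.sh, assuming a hypothetical package named sample-app under an origin called example; the metadata variables and the do_install callback are the standard elements a Habitat plan uses.

    # plan.sh: the application's package metadata and build steps together
    pkg_name=sample-app
    pkg_origin=example
    pkg_version=0.1.0
    pkg_deps=(core/python)   # runtime dependency resolved by Habitat

    # Copy the application into the package's installation directory
    do_install() {
      cp -r "$PLAN_CONTEXT/app" "$pkg_prefix/"
    }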
CONCLUSIONS AND FUTURE WORK
This study of cloud automation tools shows the importance of automation tools in achieving an architecture for distributed systems. Future work includes deploying and managing infrastructure and applications using these cloud automation tools and, on top of that, analyzing their impacts on the security, repeatability, reliability, and scalability of the deployed distributed system.
REFERENCES
1. Juve G, Deelman E. Automating Application Deployment in Infrastructure Clouds. IEEE Third International Conference on Cloud Computing Technology and Science (CloudCom). Athens: IEEE; 2011. p. 658-65.
2. Zhang R, Shang Y, Zhang S. An Automatic Deployment Mechanism on Cloud Computing Platform. IEEE 6th International Conference on Cloud Computing Technology and Science (CloudCom). Singapore: IEEE; 2014. p. 511-8.
3. Callanan S, O’Shea D, O’Regan E. Automated Environment Migration to the Cloud. 27th Irish Signals and Systems Conference (ISSC). Londonderry: ISSC; 2016. p. 1-6.
4. Wibowo E. Cloud Management and Automation. 2013 Joint International Conference on Rural Information and Communication Technology and Electric-Vehicle Technology (rICT and ICeV-T). Bandung: rICT and ICeV-T; 2013. p. 1-4.
5. Terraform. Available from: https://www.terraform.io/.
6. AWS CloudFormation. Available from: https://www.aws.amazon.com/cloudformation/.
7. AWS CloudFormation User Guide. Available from: http://www.docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html.
8. What is Docker? Available from: https://www.opensource.com/resources/what-docker.
9. Docker Compose. Available from: https://www.docs.docker.com/compose/overview/.
10. Docker. Available from: https://www.docker.com/.
11. Docker vs. VMs. Available from: https://www.devops.com/2014/11/24/docker-vs-vms/.
12. Docker Compose in production. Available from: https://www.docs.docker.com/compose/production/.
13. Docker Swarm. Available from: https://www.docs.docker.com/swarm/overview/.
14. Habitat. Available from: https://www.habitat.sh/.
15. Introducing Habitat. Available from: https://www.blog.chef.io/2016/06/14/introducing-habitat/.
16. Chef Launches Habitat, a New Open Source Project to Automate Applications. Available from: https://www.blog.chef.io/2016/06/14/chef-launches-habitat-new-open-source-project-to-automate-applications.