International Journal of Cloud Applications and Computing, 3(2), 47-60, April-June 2013 47
Copyright © 2013, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
ABSTRACT
Virtual machine allocation is one of the challenges in cloud computing environments, especially for private cloud design. In this environment, each virtual machine is mapped onto a physical host in accordance with the available resources on the host machine. Specifically, quantifying the performance of scheduling and allocation policies on a Cloud infrastructure, for different application and service models under varying performance metrics and system requirements, is an extremely challenging and difficult problem to resolve. In this paper, the authors present a Virtual Computing Laboratory framework model using the concept of a private cloud, by extending the open source IaaS solution Eucalyptus. A rule-based mapping algorithm for Virtual Machines (VMs), formulated on the principles of set theory, is also presented. The algorithmic design is projected towards being able to automatically adapt the mapping between VMs and physical hosts' resources. The paper similarly presents a theoretical study and derivations of some performance evaluation metrics for the chosen mapping policies; these include the context switching, waiting time, turnaround time, and response time for the proposed mapping algorithm.
Virtual Machine Allocation in
Cloud Computing Environment
Absalom E. Ezugwu, Department of Computer Science, Faculty of Science, Federal University Lafia, Lafia, Nasarawa State, Nigeria
Seyed M. Buhari, Department of Computer Science, Faculty of Science, Universiti Brunei
Darussalam, Gadong, Brunei
Sahalu B. Junaidu, Department of Mathematics, Faculty of Science, Ahmadu Bello University,
Zaria, Kaduna State, Nigeria
Keywords: Manager-Worker Process, Private Cloud, Rule-Based Mapping, Virtual Machine (VM) Allocation, Virtualization
1. INTRODUCTION
Cloud computing presents to an end cloud user the means to outsource on-site services, computational facilities, or data storage to an off-site, location-transparent centralized facility or "Cloud" (Ioannis & Karatza, 2010). A "Cloud" implies a set of machines and web services that implement cloud computing. These machines ideally comprise a pool of distributed physical compute resources, including processors, memory, network bandwidth, and storage, which are potentially distributed across a network of servers that spans geographical boundaries.
DOI: 10.4018/ijcac.2013040105
Resources associated with cloud computing
are often organized into dynamic logical entities that are outsourced and leased on demand. One of the major characteristics of cloud computing is elasticity, which means that cloud resources can grow or shrink in real time (Sarathy et al., 2010). This elasticity is made possible today by virtualization technology.
Over the past few years, virtualization technology has become a common phrase among IT professionals. The main concept behind this technology is to enable the abstraction, or decoupling, of the application payload from the underlying distributed physical host resources (Buyyaa et al., 2009; Popek & Goldberg, 1974). This simply means that the physical resources can be presented as either logical or virtual resources, depending on individual choices. Furthermore, implementing virtualization technology helps cloud resource providers reduce costs through improved machine utilization, reduced administration time, and lower infrastructure costs. By introducing a suitable management mechanism on top of this virtualization functionality (as we propose in this paper), the provisioning of logical resources can be made dynamic; that is, a logical resource can be made bigger or smaller in accordance with cloud user demand (the elastic property of the cloud). To enable a true cloud computing system, each computing resource element should be capable of being dynamically provisioned and managed in real time, following the concept of dynamic provisioning as it applies to cloud computing. This abstraction forms the basis of the proposed conceptual framework presented in Section 3.
To implement virtualization, cloud developers often adopt an open source software framework for cloud computing that implements what is commonly referred to as Infrastructure as a Service (IaaS), built on top of a hypervisor (Nurmi et al., 2009; Chisnall, 2009). A hypervisor, also called a virtual machine monitor (VMM), is one of many hardware virtualization techniques that allow multiple operating systems, termed guests, to run concurrently on a host machine. Different infrastructures are available for implementing virtualization, each with its own virtual infrastructure management software (Sotomayor et al., 2009).
This paper proposes a novel simulation framework for cloud developers. There are two high-level components in the proposed architecture: the Manager process and the Worker process. The Manager is assigned the role of cluster controller, while the Worker is assigned the role of node controller in the new system. The focus of the work is to develop a framework model that dynamically maps VMs onto physical hosts, depending on the resource requirements of the VMs and the resources available on the physical hosts. The cloud resources considered include machines, network, storage, operating systems, application development environments, and application programs.
The remainder of the paper is organized as follows. In Section 2, a survey of related work on cloud and Eucalyptus cloud environments is presented. The proposed system architecture and model are described in Section 3, and in Section 4 an in-depth description of VM allocation and the rule-based mapping algorithm, based on set-theoretic concepts, is presented. Theoretical derivations of performance evaluation metrics for the proposed system are given in Section 5. Finally, Section 6 offers concluding remarks on the work.
2. RELATED WORK
2.1 Cloud Platform Environment
Cloud computing as defined in Buyya et al.
(2008) is a type of parallel and distributed
computing system consisting of a collection of
interconnected and virtualized computers that
are dynamically provisioned and presented as
one or more unified computing resources based
on service-level agreements established through
negotiation between the service provider and
consumers. Currently there are several cloud environments which provide effective, efficient and reliable services to cloud users, among which are Amazon, Google App Engine, Apple MobileMe, and the Microsoft clouds (Amazon, 2009; Chu et al., 2007).
2.2 Eucalyptus Cloud Environment
Eucalyptus is a Java-based cloud management tool which consists of five high-level components: the cloud controller, cluster controller, node controller, storage controller, and Walrus. Each high-level system component has its own Web interface and is implemented as a standalone Web service (Nurmi et al., 2009; Yoshihisa and Garth, 2010).
The cloud controller is responsible for exposing and managing the underlying virtualized resources (machines/servers, network, and storage) via user-facing APIs. Currently, this system component exports a well-defined, industry-standard API (Amazon EC2) as well as a Web-based user interface (AEC, 2011). The storage controller provides block-level network storage that can be dynamically attached to VMs. The current implementation of the storage controller supports the Amazon Elastic Block Storage (EBS) semantics (Kleineweber et al., 2011). Walrus is a storage service that supports third-party interfaces, providing a mechanism for storing and accessing VM images and user data. The cluster controller gathers the required information and schedules VM execution on specific node controllers, and also manages the virtual instance networks that run inside the cloud environment. The node controller manages the execution, inspection, and termination of VM instances on the particular host where it runs; it is responsible for the VM instance. When a user places a request for a VM image through the cloud controller, the node controller starts that particular VM image and makes an instance of it available on the network, which the cloud end user then accesses. Figure 1 depicts the cloud model of Eucalyptus.
The cloud controller is the entry point into the cloud computing platform for users and administrators. It queries node managers for information about resources, makes high-level scheduling decisions, and implements them by making requests to the cluster controllers. The cluster controllers, in turn, use the node controllers to implement the cloud controller's requests. Users interact with the cloud controller via its user interface by means of Simple Object Access Protocol (SOAP) or Representational State Transfer (REST) messages. The cloud controller forwards user requests to the cluster controller. A cloud controller can have multiple cluster controllers, and one particular cluster controller can have multiple node controllers at the same time.

Figure 1. Eucalyptus architecture

In Upatissa and Atukorale (2012) an abstract architecture of a modified private cloud model is presented, with a focus on enhancing the effectiveness of managing virtual machine images in a Eucalyptus-based cloud environment.
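The delegation chain just described, from one cloud controller down through cluster controllers to node controllers, can be sketched minimally as follows. The class and method names here are illustrative only, not the Eucalyptus API:

```python
class NodeController:
    """Manages VM instances on one host and reports its resources."""
    def __init__(self, name, free_cores):
        self.name, self.free_cores = name, free_cores

    def describe_resources(self):
        return {"node": self.name, "free_cores": self.free_cores}


class ClusterController:
    """Fans a cloud-controller request out to its node controllers."""
    def __init__(self, nodes):
        self.nodes = nodes

    def query_nodes(self):
        return [n.describe_resources() for n in self.nodes]


class CloudController:
    """Entry point: one cloud controller over multiple cluster controllers."""
    def __init__(self, clusters):
        self.clusters = clusters

    def total_free_cores(self):
        return sum(r["free_cores"]
                   for c in self.clusters for r in c.query_nodes())


# Two clusters of two nodes each (values assumed).
cloud = CloudController([
    ClusterController([NodeController("n1", 4), NodeController("n2", 4)]),
    ClusterController([NodeController("n3", 8), NodeController("n4", 8)]),
])
total = cloud.total_free_cores()  # 4 + 4 + 8 + 8 = 24
```

The point of the hierarchy is that the cloud controller never talks to nodes directly; it aggregates whatever its cluster controllers report.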
In the context of Cloud computing, and as assumed in this paper, any hardware or software entity, such as high-performance systems, storage systems, laboratory device servers, or applications, that is shared between users of a Cloud is called a resource. However, for the rest of this paper, and unless otherwise stated, the term resource means hardware such as computational nodes in the network or storage systems. Resources are also laboratory devices or laboratory device servers, and hence these terms are used interchangeably. The network-enabled capabilities of the resources that can be invoked by users, applications, or other resources are called services.
3. SYSTEM ARCHITECTURE:
VIRTUAL COMPUTING
LABORATORY MODEL
As mentioned earlier, virtualization is the key technology underlying most private cloud implementations, and it enables multiple virtual machines to run on a single physical node. The private cloud, in essence, is more than just virtualization: it is a programmatic interface, driven by an API and managed by a cloud controller, that enables automated provisioning of virtual machines, networking resources, storage, and other infrastructure services. Our main concern in this article is to find a suitable alternative by which the provisioning of virtual machines, together with their mapping onto physical hosts, can be managed more effectively and efficiently in a typical private cloud computing environment.
To support the run-time allocation of virtual machines to physical hosts, we construct a manager/worker-style parallel paradigm akin to the work presented in Malgaonkar et al. (2011) for the proposed virtual computing laboratory. Similarly, we extend the Eucalyptus private cloud architecture, in a manner similar to the work discussed in Upatissa and Atukorale (2012) and Nurmi et al. (2009), by introducing two high-level management components: the manager process and the worker processes. In the proposed model, one virtual machine, called the manager, is responsible for keeping track of assigned and unassigned query information about resources. A worker queries and controls the system software on its node in response to queries and control requests from its manager. The advantage of allocating a single task at a time to each worker is that it balances workload. Keeping workload balanced is essential for high efficiency, and we therefore choose the manager/worker paradigm as the basis for our cloud design. Details of this paradigm are explained in Sections 3.1 and 3.2.
3.1 Manager Process
The manager process is the gateway into the
cloud management platform. Its function is to
query any machine that has network connectiv-
ity to both the nodes running worker processes
and to the machine running the cloud controller
for information about resources. Subsequently,
the manager process is assigned the task of i)
making scheduling decisions (such as sched-
uling of incoming instances to run on specific
nodes) ii) controlling the instances of virtual
network overlay, and iii) gathering information
about a set of nodes from the worker process.
Many of the manager’s request operations to
the worker processes take the following format:
describeInstances, describeResources, and et-
cetera. When the manager process receives a set
of operation instances to perform, it would first
request for information regarding available and
suitable resource through it describeResources
function from the worker processes. The worker
process then searches for this information
comprising of lists of resource characteristics
(processor, memory, storage and bandwidth) and
sends report back to the manager process. With
this information the manager process computes
the number of simultaneous instances of the specific type that can be executed on the list of available nodes, and sends this value to the cloud controller for delegation and allocation to the various booked nodes. This step also applies to the remaining query functions.
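As a rough sketch of this capacity computation: the report fields and numeric values below are assumed for illustration, not the actual describeResources schema.

```python
# Hypothetical resource reports from two worker processes (field names assumed).
worker_reports = [
    {"processors": 8, "memory_gb": 32, "storage_gb": 500, "bandwidth_mbps": 1000},
    {"processors": 4, "memory_gb": 16, "storage_gb": 250, "bandwidth_mbps": 1000},
]

# Per-instance requirement of the requested VM type (values assumed).
vm_type = {"processors": 2, "memory_gb": 4, "storage_gb": 20, "bandwidth_mbps": 100}

def simultaneous_instances(report, vm):
    """Instances of `vm` one node can host, limited by its scarcest resource."""
    return min(report[k] // vm[k] for k in vm)

# The manager sums per-node capacities and forwards the total to the cloud
# controller for delegation to the booked nodes.
total = sum(simultaneous_instances(r, vm_type) for r in worker_reports)  # 4 + 2 = 6
```

The `min` over attributes captures the fact that the scarcest resource on a node, not the average one, bounds how many instances it can accept.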
3.2. Worker Process
The worker process manages all information
regarding VM instance per host respectively. A
Worker process executes on every node that is
designated for hosting VM instances. A Worker
queries and controls the system software on its
node in response to queries and control requests
from the manager process. The worker processes
execute queries from the manager process such
that discoveries of the physical nodes resource
profiles are acquired. These profiles informa-
tion entail the number of processors, the size of
memory, the available disk space, and as well
as to learn about the state of VM instances on
the node. The information thus collected is
propagated back to the Manager for further
processing and delegation to the cloud control-
ler (see Figure 2).
The cluster, or resource pool, additionally consists of some local storage, which can be either true local storage that is physically attached to the node, or storage accessed via a shared pool over a storage area network (Fibre Channel or similar mechanisms).
The architecture shown in Figure 3 can be expanded to include multiple clusters comprising managers and workers, which adds capacity to the solution as well as redundancy that can be used to increase the overall availability of the infrastructure.
4. PRELIMINARIES: SET
THEORY AND MODELLING
OF VM MAPPING
The proposed model considered in this paper is based on a formal model described in Kleineweber et al. (2011), which discusses rule-based techniques for the mapping of virtual machines and virtual network links. However, we concentrate on the aspect of virtual machine mapping that associates the resource requirements of the virtual machines with their availability on the physical hosts. Before venturing into the modelling of VM mapping, we recall the following preliminary ideas from set theory.
Definition 1: A set is a collection of distinct
elements or objects of some kind with a
common property that, given an object and
a set, it is possible to decide if the object
belongs to the set.
Figure 2. In a manager/worker style, a manager process sends queries and control requests to the worker processes
1. A set can be described by $A = \{a, b, c, d\}$; the elements of a set are often denoted by lower-case letters (e.g., $a, b, c, \ldots$).
2. $x \in A$ means that an element $x$ belongs to a set $A$ ($x \notin A$ means that $x$ does not belong to $A$). Similarly, $x \in X$ or $y \in Y$ means '$x$ belongs to set $X$' or '$y$ belongs to set $Y$.'
Example 1: Let $X = \{x_1, x_2, x_3\} = \{\text{memory}, \text{OS}, \text{ram}\}$, where $x_1 = \text{memory}$, and so on. Then 'ram' $\in X$ and 'Book' $\notin X$.
Example 2: Let $X = \{x_1, \ldots, x_{10}\}$. Then $x_2 \in X$ and $x_{100} \notin X$.
The notions of the union of two sets and
the intersection of two sets are well-known and
therefore not defined here.
Definition 2 (Function): A function from a set $X$ to a set $Y$ is a rule which assigns to each element $x \in X$ a unique element $y \in Y$, denoted by $f: X \to Y$ or $y = f(x)$, where $x \in X$ and $y = f(x) \in Y$.
We develop a model of virtual machine mapping by defining the set of virtual machines as $V = \{v_0, v_1, \ldots, v_m\}$, where $m$ denotes the number of virtual machines to be mapped to the real hosts. Similarly, we represent the set of real hosts as $R = \{r_0, r_1, \ldots, r_n\}$, where $R$ represents the computational nodes provided at the data centre and $n$ is the number of physical hosts available. The subscripts indicate an instance of either a family of virtual machines or physical hosts. The use of this expression is further justified by the definition of the Cartesian product given below:
Figure 3. The proposed private cloud architecture
Definition 3 (Cartesian product): An n-tuple $(a_1, \ldots, a_n)$ can be defined by explicitly listing its elements $a_1, \ldots, a_n$. The generalised Cartesian product of $n$ sets $A_1, \ldots, A_n$ is then defined as the set of all n-tuples:

$\prod_{i=1}^{n} A_i = \{(a_1, \ldots, a_n) \mid a_1 \in A_1, \ldots, a_n \in A_n\}$ (1)
Consider a virtual machine $X$; for the n-ary Cartesian product $\prod_{i=1}^{n} X_i$ there is a family of $n$ projections $\{\pi_1, \ldots, \pi_n\}$ such that for all $(x_1, \ldots, x_n) \in \prod_{i=1}^{n} X_i$ and for all $k \in \{1, \ldots, n\}$ the equation $\pi_k((x_1, \ldots, x_n)) = x_k$ holds.
Each physical host has particular attributes, or a profile, attached to it. These profiles are significant, since resources are often requested along with requirement criteria, as is the case in a distributed resource environment. A host machine $r_i$ can have the following profile: processors, memory, network bandwidth, and storage.
We associate these attributes with $r_i$, that is, $r_i\{\text{attributes}\} = r_i\{\text{processors, memory, network bandwidth, storage}\}$.
Attributes may therefore be represented as a set of component values of the physical host $r_i$; we denote the elements of this set as: $p$ = processors, $m$ = memory, $n$ = network bandwidth, and $s$ = storage available on the host machine $r_i$. Therefore, $r_i\{\text{attributes}\} = \{p, m, n, s\}$.
By the same rule, we may apply this notion to the set of virtual machines. Let $p'$ = processors, $m'$ = memory, $n'$ = network bandwidth, and $s'$ = storage required by the virtual machine $v_i$. Therefore, $v_i\{\text{attributes}\} = \{p', m', n', s'\}$.
We define a function which maps a physical host to any virtual machine using a round-robin technique, based on the number of available resources on the hosts, as follows:

$f: V \to R \cup \{e\}$, (2)

where $e$ denotes a VM without corresponding attributes, and

$f(v) = r$ if $r$ has the attribute resource name for the virtual machine $v$; $f(v) = e$ if $v$ does not have a corresponding attribute resource name, (3)

for $v \in V$ and $r \in R$.
Definition 4: We say that a set of virtual machines $V$ is compatible with a set of physical hosts $R$ if there exists a mapping $f: V \times R \to \{0, 1\}$ such that for some $V_k \subset V$ there exists at least one element $r \in R$ with $f((V_k, r)) = 1$, where $V_k$ contains some elements $v_i$, $i = 1, 2, \ldots, n$, of $V$. Otherwise $V$ is said to be incompatible if $f((V_k, r)) = 0$; this means that there exists no $r \in R$ such that $(V_k, r)$ holds.
If we denote by $v_{1.1}$ an instance of a virtual machine $v_1$ with resource-requirement configuration index 1, and by $r_1$ a physical host with available resource configuration index 1, then we can have a compatibility mapping of virtual machine onto physical host as shown in Figure 4. In this case $V_1 = \{v_{1.i}\}$, where $i = 1, 2, 3, \ldots, n$.
4.1. Setting Initial Conditions
It is easy to observe that at time $t = 0$ there exists no initial mapping of the VMs, although $V = \{v_1, v_2, \ldots, v_n\}$ exists; therefore the function $f = 0$, that is, $f((V_k, r)) = 0$ and $Z = \emptyset$, where $Z$ is the set of already-mapped VMs.
The following conditions hold for the proposed mapping of VMs to physical hosts. The function $f: V \to R$ will map a virtual machine $v_i$ to a physical host $r_i$ only if

$f((V_k, r)) = 1$ for some $V_k \subset V$, $r \in R$, and (4)

$\sum_{i=1}^{n} p_i' \le p, \quad \forall v_i \in V$ (5)

$\sum_{i=1}^{n} m_i' \le m, \quad \forall v_i \in V$ (6)

$\sum_{i=1}^{n} n_i' \le n, \quad \forall v_i \in V$ (7)

$\sum_{i=1}^{n} s_i' \le s, \quad \forall v_i \in V$ (8)

These equations enforce that the necessary resources (processor, memory, network bandwidth, and storage) are available on the physical host to which the guest VMs are to be assigned: the amount of each resource required by all the guest VMs mapped to a host does not exceed the amount of that resource on the host.
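Conditions (4) through (8) amount to a per-attribute feasibility check, which can be sketched directly. The attribute keys follow Section 4; the numeric capacities and requirements are assumed:

```python
# Host capacities {p, m, n, s} and the requirements of a candidate VM group V_k.
host = {"p": 8, "m": 32, "n": 1000, "s": 500}
vm_group = [
    {"p": 2, "m": 4, "n": 100, "s": 20},
    {"p": 2, "m": 8, "n": 100, "s": 50},
]

def f(vms, host):
    """Compatibility mapping f((V_k, r)) -> {0, 1}: returns 1 iff the summed
    requirements of V_k fit within every attribute of host r (Eqs. 5-8)."""
    for key in host:
        if sum(v[key] for v in vms) > host[key]:
            return 0
    return 1

compatible = f(vm_group, host)  # 1: 4 <= 8, 12 <= 32, 200 <= 1000, 70 <= 500
```

A single over-committed attribute is enough to make the group incompatible, which matches the conjunction of the four inequalities above.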
In this paper, we assume that communication between the guest virtual machines running on the same host and the host machine itself is contention-free in terms of resource allocation, i.e. each guest VM runs in a specified and designated address space. Similarly, each VM is mapped exactly once onto a physical host machine. The pseudo-code for this purpose is presented in Algorithm 1.
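Algorithm 1 itself appears as a figure in the original; the following is our own minimal sketch of a round-robin mapping loop consistent with the constraints above, not the authors' exact listing:

```python
from itertools import cycle

def map_vms_round_robin(vms, hosts):
    """Map each VM exactly once: visit hosts in round-robin order and place the
    VM on the first host whose remaining capacity covers every attribute."""
    remaining = [dict(h) for h in hosts]      # mutable copies of host capacities
    placement = {}
    order = cycle(range(len(remaining)))
    for i, vm in enumerate(vms):
        for _ in range(len(remaining)):       # try each host at most once per VM
            j = next(order)
            if all(vm[k] <= remaining[j][k] for k in vm):
                for k in vm:                  # reserve the resources
                    remaining[j][k] -= vm[k]
                placement[i] = j
                break
        else:
            placement[i] = None               # no compatible host found
    return placement
```

A VM that fits nowhere maps to `None`, playing the role of $e$ in Equation (2).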
Round-Robin: A round-robin algorithm distributes the load equally to each server, regardless of the current number of connections or the response time (Mohanty et al., 2011). Round-robin is suitable when the servers in the cluster have equal processing capabilities; otherwise, some servers may receive more requests than they can process while others use only part of their resources. In the algorithm above, a dynamic time-quantum concept is suggested and used to improve the average waiting time and average turnaround time, and to decrease the number of context switches. An abstracted view of the round-robin algorithm is provided in Algorithm 2.
5. PERFORMANCE
EVALUATION
As previously discussed, the sharing of cloud resources is based on demand, and the dynamic utilization of these resources occurs under different conditions. This section therefore presents studies of the key evaluation metrics for the cloud model presented in Section 3. We use these parameters to project the efficiency of the proposed model. This study thus evaluates the proposed model from a performance-efficiency perspective, and not from a cost perspective: assuming the cloud is affordable, the performance of the system becomes the issue to reckon with.
The performance metrics summarized in Table 1 have been used to evaluate the system performance. Considering that the proposed system deploys the round-robin scheduling algorithm to map virtual machines to physical hosts, the performance metrics concentrate on determining the context switching, waiting time, turnaround time, and response time.
The waiting time $wt[t_i]$ of a mapping task $t_i$ refers to the time that elapses between the dispatching of a VM and the start of its mapping onto a physical host; simply put, it is the amount of time a VM awaiting mapping has spent in the ready queue. The average waiting time (AVGWT) over the mapping tasks $t_i$ is defined as follows:

$AVGWT = \frac{1}{n} \sum_{i=1}^{n} wt[t_i]$ (9)
Algorithm 1. Mapping of guest VMs to physical host

However, the mapping of the different virtual machines by the manager onto the physical hosts depends on the resource requirements of the virtual machines and their availability on the host system. We include these two additional factors by assigning a weight and delay factor to the system. The number of times each mapping task exploits the allotted time quantum is weighted by its resource requirement $rr[t_i]$ and availability factor $af[t_i]$. Therefore the average weighted waiting time (AVGWWT) is defined as follows:

$AVGWWT = \frac{\sum_{i=1}^{n} (rr[t_i] + af[t_i]) \cdot wt[t_i]}{\sum_{i=1}^{n} (rr[t_i] + af[t_i])}$ (10)
The turnaround time of a mapping task $t_i$ is the time difference between the arrival of a mapping request and the successful completion of the mapping task. The average turnaround time (AVGTT) and the average weighted turnaround time (AVGWTT) are given as:

$AVGTT = \frac{1}{n} \sum_{i=1}^{n} tt[t_i]$ (11)

$AVGWTT = \frac{\sum_{i=1}^{n} (rr[t_i] + af[t_i]) \cdot tt[t_i]}{\sum_{i=1}^{n} (rr[t_i] + af[t_i])}$ (12)
The response time of a mapping task $t_i$ is the time from when a request is submitted until the virtual machine is mapped to the physical host and the first response is produced. The average response time (AVGRT) and average weighted response time (AVGWRT) are defined accordingly as follows:

$AVGRT = \frac{1}{n} \sum_{i=1}^{n} rt[t_i]$ (13)

$AVGWRT = \frac{\sum_{i=1}^{n} (rr[t_i] + af[t_i]) \cdot rt[t_i]}{\sum_{i=1}^{n} (rr[t_i] + af[t_i])}$ (14)
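Equations (9) through (14) are plain and weighted means over the mapping tasks; a quick numeric sketch, with the task data assumed:

```python
def avg(xs):
    """Plain mean, as in Eqs. (9), (11), (13)."""
    return sum(xs) / len(xs)

def weighted_avg(values, rr, af):
    """Weighted mean with weights rr[i] + af[i], as in Eqs. (10), (12), (14)."""
    weights = [r + a for r, a in zip(rr, af)]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

wt = [4.0, 6.0, 8.0]   # waiting times of three mapping tasks (assumed)
rr = [1.0, 2.0, 1.0]   # resource-requirement factors (assumed)
af = [1.0, 1.0, 0.0]   # availability factors (assumed)

avgwt = avg(wt)                    # Eq. (9): 6.0
avgwwt = weighted_avg(wt, rr, af)  # Eq. (10): (2*4 + 3*6 + 1*8) / 6
```

The same `weighted_avg` applies unchanged to the turnaround times $tt[t_i]$ and response times $rt[t_i]$, since (10), (12) and (14) share one weighting scheme.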
The performance of the system depends on the length of the time quantum assigned. Choosing a short quantum is considered a good choice because it allows many mapping processes (in our case, the mappings of virtual machines to physical hosts) to circulate through the waiting queue quickly, giving each process a brief chance to run. In this way, highly interactive tasks that usually do not use up their quantum do not have to wait long before they are processed again by
the system. The advantage of this approach is improved interactive performance of the entire system. However, a short quantum also has a cost, because the system must perform a context switch whenever a process is pre-empted. This is essentially overhead: any time the system spends doing something other than executing submitted mapping requests is lost to useful work. A short quantum implies many such context switches per unit time, which takes the system away from performing useful work.

Algorithm 2. Round Robin
The overall performance efficiency $PE$ of the system is therefore determined by the value of the assigned time quantum $Q$, the useful time $T$ taken to execute a mapping task $t_i$ before the occurrence of a context switch, the context-switch time $T_{cs}$ required by task $t_i$, and the total time taken to complete the whole execution (i.e. $T + T_{cs}$). This is computed as follows:

$PE = \frac{T}{T + T_{cs}}$ if $Q = \infty$; $\quad PE = \frac{T}{T + T_{cs}}$ if $Q > T$; $\quad PE = \frac{Q}{Q + T_{cs}}$ if $T_{cs} < Q < T$ (15)

If $Q = T_{cs}$, then $PE = 50\%$, and as $Q \to 0$, $PE \to 0$, based on the third formula in (15). The general conception in this case is that an increase in $Q$ increases efficiency but degrades average response time. A case scenario of this conception is presented below.
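A direct transcription of the three cases of Equation (15), as reconstructed here, is:

```python
import math

def pe(Q, T, Tcs):
    """Performance efficiency per Eq. (15): long quanta amortize the
    context-switch cost; short quanta are dominated by it."""
    if math.isinf(Q) or Q > T:
        return T / (T + Tcs)
    if Tcs < Q < T:
        return Q / (Q + Tcs)
    if Q == Tcs:
        return 0.5                # boundary case noted in the text
    raise ValueError("Q outside the ranges covered by Eq. (15)")
```

With $T = 100$ and $T_{cs} = 5$, `pe(float("inf"), 100, 5)` and `pe(10, 100, 5)` illustrate the trade-off: roughly 0.95 against roughly 0.67.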
Case: Suppose that there are ten VMs ready to be mapped, $Q = 100$ msec, and $T_{cs} = 5$ msec. VM 0 (at the head of the ready queue) gets to run immediately. VM 1 can run only after VM 0's quantum expires (100 msec) and the context switch takes place (5 msec), so it starts to run at 105 msec. Likewise, VM 2 can run only after another 105 msec. We can compute the amount of time that each VM will be delayed and compare the delays between a small quantum (4 msec) and a long quantum (100 msec), and similarly between a quantum of 10 msec and a long quantum of 100 msec. Figure 5 illustrates the plots of task number against delay time.

Table 1. Description of symbols used in the Round Robin performance metrics
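The delays in this case follow a simple arithmetic progression; a short sketch reproducing the numbers in the text:

```python
def start_delay(k, Q, Tcs):
    """Delay before VM k (0-indexed in the ready queue) first runs:
    each predecessor consumes a full quantum plus one context switch."""
    return k * (Q + Tcs)

# Q = 100 msec, Tcs = 5 msec, as in the case above.
delays_long = [start_delay(k, 100, 5) for k in range(10)]  # 0, 105, 210, ...
delays_short = [start_delay(k, 4, 5) for k in range(10)]   # short quantum of 4 msec
```

With the short quantum, even the last VM in the queue first runs within 81 msec, while the long quantum delays it by 945 msec: the interactivity gain that Figure 5 plots.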
Context Switching Issues: Apart from the normal mapping task, the system has to perform a check-and-balance procedure to confirm that a particular physical host has the resources that meet a VM's requirements before a map is allowed. This check may differ across VMs and thus introduces a time shift in their respective execution stages. Context switching is therefore a natural fit, since the proposed system is modeled on a distributed, time-sharing operational scenario.
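The check-before-map test described above can be sketched as follows (a minimal illustration; the function and resource names are our own, not the paper's implementation):

```python
def can_map(vm_req, host_free):
    """A VM may be mapped only if the host can satisfy every
    resource the VM requests (CPU cores, RAM, storage, ...)."""
    return all(host_free.get(res, 0) >= amount
               for res, amount in vm_req.items())

# Example: a VM asking for 2 cores and 4 GB fits a host with 4 cores
# and 8 GB free, but a VM asking for 16 GB of RAM does not.
```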
A context switch occurs when the system transfers control of the scheduling process from an executing task to another task that is ready to run. The system automatically saves the state of the current task, including the resource requirement profiles and the other profiles that describe this state, based on the request submitted by the VM. It then loads the saved state of the new task for execution.
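The save-then-load sequence can be sketched in a few lines of Python (illustrative only; the TaskState fields and function names are our own assumptions, not the paper's implementation):

```python
from dataclasses import dataclass, field

@dataclass
class TaskState:
    vm_id: int
    resource_profile: dict = field(default_factory=dict)  # profiles submitted by the VM
    progress: int = 0                                     # how far the mapping task has advanced

def context_switch(current, ready_queue, saved_states):
    """Save the executing task's state, then load the next ready task."""
    saved_states[current.vm_id] = current   # save state of the current task
    next_id = ready_queue.pop(0)            # next task ready to run
    ready_queue.append(current.vm_id)       # current task rejoins the ready queue
    return saved_states.pop(next_id)        # load its previously saved state

# Usage: three VMs with saved states; VM0 is running, VM1 and VM2 are ready.
saved = {i: TaskState(i) for i in range(3)}
running = saved.pop(0)
ready = [1, 2]
nxt = context_switch(running, ready, saved)
```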
To characterize the overhead associated with a context switch, we adopt the empirical study presented in (AWS, 2009). That work uses a test bench that creates two threads, P1 and P2, and generates a number of context switches, as detailed in (AWS, 2009; Chu et al., 2007). Even though their work differs from the one presented in this paper, the concept and rationale for context switching remain the same. Figures 6 and 7 depict an extended version of the work presented in (AWS, 2009). In step 1, only two context switches are generated, and in step n, 2n context switches are generated. In the figures, Tcs represents the time of a context switch, Si,j the j-th section of process Pi, and Ti,j the execution time of section Si,j.
The respective total execution times for the benchmark in step 1 and step n are denoted by Tstep1 and Tstepn and are computed as follows:

Tstep1 = Texec1,1 + Tcs + Texec2,1 + Tcs + Texec1,2          (16)

Tstepn = Texec1,1 + Tcs + Texec2,1 + Tcs + ... + Texec1,m
       = Σ_{1≤i≤m} T1,i + Σ_{1≤j≤n} T2,j + (2n) × Tcs        (17)
where m and n represent the number of sections of P1 and P2, respectively. The context switching time Tcs and the context switch slowdown overhead Scs are calculated as follows:

Tcs = (Tstepn − Tstep1) / (n − 1)          (18)

Scs = (Sstepn − Sstep1) / (n − 1)          (19)
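Assuming the per-step difference form of Equation 18, the per-switch cost can be estimated from the two benchmark totals as in this small sketch (function name ours):

```python
def estimate_tcs(tstep1, tstepn, n):
    """Estimate the context-switch cost from the step-1 and step-n
    benchmark totals, averaged over the n - 1 added steps (Equation 18)."""
    return (tstepn - tstep1) / (n - 1)
```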
Figure 5. Task number vs. delay for (a) Q = 4 and Q = 100, (b) Q = 10 and Q = 100
6. CONCLUSION
In this paper, we described the basic concepts of private cloud design and the virtual machine allocation problem. The goal of the design is to model an easy-to-deploy private cloud and to define the virtual machine allocation problem in terms of set-theoretic concepts. A simple but efficient rule-based mapping algorithm has also been suggested and presented; with this algorithm, cloud users can execute their virtual machines efficiently on a limited number of physical machines. Hence, this approach leads to efficient utilization of the available resources, facilitating maximum computing with minimal physical data center infrastructure.
REFERENCES
AEC. (n.d.). Amazon elastic compute cloud (Amazon
EC2). Retrieved from http://aws.amazon.com/ec2/
AWS: Amazon Web Services LLC. (2009). Amazon Elastic Compute Cloud (EC2). Retrieved from http://aws.amazon.com/ec2/
Buyya, R., Yeo, C. S., & Venugopal, S. (2008). Market-oriented cloud computing: Vision, hype, and reality for delivering IT services as computing utilities. In Proceedings of the 10th IEEE International Conference on High Performance Computing and Communications.
Buyya, R., Yeo, C. S., Venugopal, S., Broberg, J., & Brandic, I. (2009). Cloud computing and emerging IT platforms: Vision, hype, and reality for delivering computing as the 5th utility. Future Generation Computer Systems, 25(6), 599–616. doi:10.1016/j.future.2008.12.001
Calheiros, R. N., Buyya, R., & De Rose, C. A. F. (2009). A heuristic for mapping virtual machines and links in emulation testbeds. In Proceedings of the 9th International Conference on Parallel Processing (ICPP) (pp. 518–525). Washington, DC: IEEE Computer Society.
Chisnall, D. (2009). The definitive guide to the Xen hypervisor. Prentice Hall Press.
Figure 6. Context switch benchmark (step 1)
Figure 7. Context switch benchmark (step n)
Copyright © 2013, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
60 International Journal of Cloud Applications and Computing, 3(2), 47-60, April-June 2013
Chu, X., Nadiminti, K., Jin, C., Venugopal, S., &
Buyya, R. (2007). Next-generation enterprise grid
platform for e-science and e-business applications.
In Proceedings of the 3rd IEEE International Confer-
ence on e-Science and Grid Computing.
Kleineweber, C., Keller, A., Niehörster, O., & Brinkmann, A. (2011). Rule-based mapping of virtual machines in clouds. In Proceedings of the 2011 19th International Euromicro Conference on Parallel, Distributed and Network-Based Processing (pp. 527-534).
Malgaonkar, P., Koul, R., Thorat, P., & Zawar, M.
(2011). Mapping of virtual machines in private
cloud. International Journal of Computer Trends
and Technology, 2(2).
Mohanty, R., Das, M., Lakshmi, P. M., & Sudhashree. (2011). Design and performance evaluation of a new proposed fittest job first dynamic round robin (FJFDRR) scheduling algorithm. International Journal of Computer Information Systems, 2(2).
Moschakis, I. A., & Karatza, H. D. (2010). Evaluation of gang scheduling performance and cost in a cloud computing system. Journal of Supercomputing, 1-18. doi:10.1007/s11227-010-0481-4
Nurmi, D., Wolski, R., Grzegorczyk, C., Obertelli, G.,
Soman, S., Youseff, L., & Zagorodnov, D. (2009). The
eucalyptus open-source cloud-computing system.
In Proceedings of the 9th IEEE/ACM International
Symposium on Cluster Computing and the Grid
(CCGRID’09) (pp. 124-131). IEEE.
Popek, G. J., & Goldberg, R. P. (1974). Formal requirements for virtualizable third generation architectures. Communications of the ACM, 17(7), 412–421. doi:10.1145/361011.361073
Sarathy, V., Narayan, P., & Mikkilineni, R. (2010). Next generation cloud computing architecture: Enabling real-time dynamism for shared distributed physical infrastructure. Retrieved from www.kawaobjects.com/resources/PID1258479.pdf
Smith, J. E., & Nair, R. (2005). Virtual machines: Versatile platforms for systems and processes. Morgan Kaufmann.
Sotomayor, B., Montero, R. S., Llorente, I. M., &
Foster, I. (2009). Virtual infrastructure management
in private and hybrid clouds. Internet Computing,
IEEE, 13(5), 14–22. doi:10.1109/MIC.2009.119.
Sun Microsystems, Inc. (2009). Introduction to cloud
computing architecture. White Paper (1st ed.).
Upatissa, D. T., & Atukorale, A. (2012). Low cost
virtual lab environment to the university by using
cloud environment. In Proceedings of the Interna-
tional Conference on Computer Engineering and
Technology (ICCET 2012) (IPCSIT vol.40). Singa-
pore: IACSIT Press.
Abe, Y., & Gibson, G. (2010). pWalrus: Towards better integration of parallel file systems into cloud storage. In Proceedings of the Workshop on Interfaces and Abstractions for Scientific Data Storage (IASDS10), co-located with IEEE International Conference on Cluster Computing 2010 (Cluster10), Heraklion, Greece.
ENDNOTES
1 http://aws.amazon.com/ec2/
2 https://developers.google.com/appengine/
3 http://www.apple.com/icloud/
4 Each VM includes its own kernel, operating
system, supporting libraries and applications.
... According to the results obtained, the method can reduce energy usage more than other schemes. Absalom E. Ezugwu et al. (2013) developed comprehensive research and notations of some quality evaluation measures for the selected translation rules, including context switching, processing period, processing times, and reaction time for the suggested estimation problem. ...
... Among the most important responsibilities in private cloud computing is resource scheduling. It entails recognizing and responding to each and every client request [15]. Needs are addressed while the cloud supplier's specified objectives are reached. ...
Article
Full-text available
The level of difficulty that can be envisioned in a cloud data center will not grow with convention. As a result, all hosts should have a standard and pervasive collection of memory and communication characteristics in order to lower ownership costs and operate virtual machine instances. This solution includes fundamental foundations and integrated component basics that will allow an IT or federal agency to embrace cloud computing domestically via private virtual cloud data centers. These private cloud data centers would later be developed to purchase and develop IT services on the outside. They are well aware of the obstacles to cloud computing’s acceptance, including concerns about credibility, privacy, interoperability, and marketplaces. In addition, this procedure describes critical standards and collaborations to address these issues. Ultimately, it offers a coherent response to deploying safe data centers using cloud computing services from both a technological and an IT strategic standpoint. To foster creativity, invention, learning, and enterprise, a private data center and cloud computing must be established to combine the activities of different research teams. In the framework of energy-efficient distribution of resources in private cloud data center architecture, we focus on system structure investigations. On the other hand, we want to equip private cloud providers with the current design and performance analysis for energy-efficient resource allocation. The methodology should be adaptable enough to support a wide range of computing systems, as well as on-demand and extensive resource providing approaches, cloud environment scheduling, and bridging the gap between private cloud users and a complete image of offers.
... Hence, this approach will lead to an efficient utilization of resources available to facilitate maximum computing with minimum physical data centers infrastructures. [1] Awada Uchechukwu et.al [2014] has delivered formulations and solutions for Green Cloud Environments (GCE) to minimize its environmental impact and energy consumption under new models by using static and dynamic portions of cloud components. The proposed methodology captured cloud computing data centers and presented a generic model for them. ...
... Zi = {zi1, zi2…zis}, vi = {vi1, vi2, vis}, i=1, 2, s Where s is the size of swarm. The best previous experience of the i th VM is represented as: mi = {mi1, mi2… mis} Another memory variable "mg", which is the best candidate solutions encountered by all VMs are then manipulated according to the following equations: vi(t+1)=W*vid(t)+c1*rand(mi-zi(t))+c2*rand(mg-zi(t)) ----------(1) zi(t+1) = zi(t)+vi(t+1) -------(2)  Where W-an inertia weight,  c1 and c2 -two positive constants  rand -uniformly generated random number The Eq. (1) shows that in calculating the next velocity for a VM, the previous velocity of the VM, the best location in the neighborhood about the VMs, the global best location to the next velocity. VMs velocities in each dimension can arrive to a maximum velocity v max, which is defined to the range of the cloud in each dimension. ...
Article
Full-text available
A cloud computing uses an important virtualization technology to provide cloud computing resources likes CPU, memory and storage to the users in terms of virtual machines. Virtual machine allocation and optimization of energy utilization in the cloud is one of the challenging problems, specifically for the private cloud environment. This work has been developed to allocate virtual machines to the host and minimize the energy consumption in a cloud computing environment. The proposed method of this research used an efficient energy saving algorithm which utilizes the concept of the allocation process. The proposed model of virtual machine allocation phases to accomplish the resultant energy saving approach detected during the execution process. At the initial stage, finding the best value operation is applied to all the virtual machines and selecting the physical machine based on the resources by using the PSO algorithm. If the selecting physical machine is not matching with virtual machine resources, then it is migrated to the matching virtual machine host. The PSO algorithm shows various parameters, i.e. power consumption, response time and VM migrations compare with the existing method. The result demonstrates that the PSO algorithm gives better results which help to obtain an efficient approach to energy saving model for reduce VM migration and power consumption than the existing algorithm.
... Virtualization and VM-consolidation are two of the most significant approaches utilize for enhancing resource usage and alleviate the issue of over-provisioning. Numerous physical resources may be employed as virtual resources by splitting a single PM into multiple VMs through virtualization [7]. Virtualization also has the benefit of increasing resource efficiency, streamlining server administration, and reducing the overall infrastructure costs of the DC. ...
Article
Full-text available
Computing operations such as databases, networks, hardware, programs, analytics, and so on are all part of the cloud computing service. As such, it may serve as an option to in-house hardware and software. However, dynamic consolidation of VMs is required to improve power consumption, load balance, the frequency of migrations , Quality of Service (QoS), and the rate at which SLA violations are addressed, all of which contribute to better resource use. VM technology has quickly become a pillar of data centers and cluster systems because to its utility in partitioning, consolidating, and moving workloads. VM placement is another thing that affects the quality of consolidation. It is important to design a system that improves energy efficiency by allocating resources to applications in a smart way while still meeting QoS requirements for applications. Moreover, the security of information that is being processed during the migration process is also one of the important tasks. Hence, in this paper, review and analysis based on parameters and metrics for the VM migration under the influence of energy optimization, consolidation, and security have been investigated in a wider manner. Moreover, some open issues that are still prevalent in the current field have also been highlighted.
... VMs enable us to pack isolation and better utilization of hardware in big DCs. They are widely used in IaaS environment [121,122] as a base where users can install their own operating system (OS) and require software tools and applications; ...
Thesis
Full-text available
Cloud computing is facing some serious latency issues due to huge volumes of data that need to be transferred from the place where data is generated to the cloud. For some types of applications, this is not acceptable. One of the possible solutions to this problem is the idea to bring cloud services closer to the edge of the network, where data originates. This idea is called edge computing, and it is advertised that it dramatically reduces the network latency as a bridge that links the users and the clouds, and as such, it makes the foundation for future interconnected applications. Edge computing is a relatively new area of research and still faces many challenges like geo-organization and a clear separation of concerns, but also remote configuration, well defined native applications model, and limited node capacity. Because of these issues, edge computing is hard to be offered as a service for future real-time user-centric applications. This thesis presents the dynamic organization of geo-distributed edge nodes into micro data-centers and forming micro-clouds to cover any arbitrary area and expand capacity, availability, and reliability. We use a cloud organization as an influence with adaptations for a different environment with a clear separation of concerns, and native applications model that can leverage the newly formed system. We argue that the presented model can be integrated into existing solutions or used as a base for the development of future systems. Furthermore, we give a clear separation of concerns for the proposed model. With the separation of concerns setup, edge-native applications model, and a unified node organization, we are moving towards the idea of edge computing as a service, like any other utility in cloud computing. The first chapter of this thesis, gives motivation and problem are that this thesis is trying to resolve. It also presents research questions, hypotheses and goals based on these questions. 
The second chapter gives an introduction to the area of distributed systems, narrowing it down only the parts that are important for further understanding of the other chapters and the rest of the thesis in general. The third chapter shows related work from different areas that are connected or that influenced this thesis. This chapter also shows what the current state of the art in industry and academia is, and describes the position of this thesis compared to the related research as well. The fourth chapter proposes a model that is influenced by cloud computing architectural organizations but adapted for a different environment. We present how we can separate the geographic area into micro data-centers that are zonally organized to serve the local population, and form them dynamically. This chapter also gives formal models for all protocols used for the creation of such a system with separation of concerns, applications models, and presents limitations of this thesis. The fifth presents an implemented framework that is based on the model described in chapter three. We describe the architecture, and in detail every operation a framework can do, with all existing limitations. The sixth chapter presents the usability of the proposed model, with possible applications that could be implemented based on the model. We also present one example of COVID-19 area traffic control in the city of Milan, Italy. The seventh and the last chapter concludes this thesis and presents future work that should be done. Key words: distributed systems, cloud computing, multi cloud, microservices, software as a service, edge computing, micro clouds, big data, infrastructure as code.
... In the second level, the problem lies in the hosting of new VMs in the hosts of the different datacenters according to their loads. The hosting of virtual machines has become a difficult issue in the resource allocation systems because each virtual machine is associated to a physical host according its available resources [6]. In order to solve the problem in both levels; we suggest smart solutions that depend on two techniques: The Multi Agent System (MAS) and CSP. ...
Article
Full-text available
In the present paper, we aim at solving two problems; the first problem occurring in the transformation of the IoT devices (sensors, actuators, …) to cloud service. Therefore, we work on maintaining a smooth and efficient data transmission for the cloud and support customer applications like: data sharing, storage and processing. The second problem has two dimensions. In the first dimension, the problem is arisen in the submission of cloudlets (customer requested jobs) to Virtual Machines (VMs) in the hosts. To solve this problem, we propose scheduling algorithm for resource allocation according to the lowest cost and load. In the second dimension, the problem lies in the hosting of new VMs in the hosts. To overcome this problem, we need take into account the loads when housing new VMs in different datacenters. In this work, we suggest a resource allocation approach for services oriented IoT applications. The architecture of this approach is based on two technics: Multi Agent System (MAS) and Distributed Constraint Satisfaction Problems (DCSP). The MAS manages the physical resources, making decision and the communication between datacenters, while DCSP used to simplify the policy of the resources provisioning in Datacenters. Variables and constraints are distributed among multiple agents in different layers. The experimental results show that the efficiency of our approach is manifested in: Average System Load, Cost augmentation Rate and Available Mips.
... Hence, this approach will lead to an efficient utilization of resources available to facilitate maximum computing with minimum physical data centers infrastructures. [3] AwadaUchechukwu et.al [2014] has delivered formulations and solutions for Green Cloud Environments (GCE) to minimize its environmental impact and energy consumption under new models by using static and dynamic portions of cloud components. The proposed methodology captured cloud computing data centers and presented a generic model for them. ...
Article
Full-text available
Cloud computing is an ocean of resources which are shared among multiple users for processing over the internet by cloud services like software, infrastructure and platform oriented. Based on the user requests, various processes such as allocation, computation, execution are performed in the cloud environment. An allocation is the most challenging process in the cloud environment. Virtualization technology is the main technology provided by the cloud that is used for that processes. Virtual machines are used for allocating the resources according to the user request. Many algorithms and techniques are used for virtual machine allocation in the cloud environment. In this paper we provide an overview of the fundamental theories and emerging techniques for cloud based virtual machine allocation process as well as several extended work in these areas. This writing provides a research on the cloud based virtual machine allocation techniques that are frequently used in the early work in cloud environment.
... It is private cloud, established using Eucalyptus software. Eucalyptus cloud consists of mainly 3 components [23] namely cloud controller, cluster controller, node controller. Cloud controller is responsible to manage entire cloud resources like network, storage and servers. ...
Article
Full-text available
Cloud computing is becoming a prominent service model of computing platforms offering resources to all categories of users on-demand. On the other side, cloud environment is vulnerable to many criminal activities too. Investigating the cloud crimes is the need of the hour. Anti-forensic attack in cloud is an attack which specifically aims to scuttle the cloud forensic process. Though many researchers proposed various cloud forensic approaches, detecting cloud anti-forensic attack still remains a challenge as it hinders every step of forensic process. In this paper, we propose a three stage system for the detection of cloud anti-forensic attack with a well defined sequence of tasks in which the process of identifying the suspicious packets plays the major part. Every packet affected with any kind of cloud attack is labeled as suspicious packet and such packets are marked to traceback anti-forensic attack. The main focus of this paper is to deploy such a mechanism to identify the suspicious packets in cloud environment. To categorize the type of attack that affected the packet, both signature analysis and anomaly detection at cloud layers are applied in our proposed approach. The proposed anomaly detection approach is tested on NSL-KDD dataset. The experimental results show that the accuracy of the proposed approach is high compared to the existing approaches.
Chapter
Cloud computing is an era idea wherein customers use far-off servers to keep statistics and applications. Cloud computing sources are demand-pushed and are used inside the shape of digital machines (VMs) to facilitate complicated tasks. Deploying a digital system is the method of mapping a digital system to a bodily system. This is an energetic study subject matter, and numerous techniques have been followed to cope with this difficulty inside the literature. Virtual system migration takes a sure quantity of time, consumes plenty of sources, influences the conduct of different digital machines at the server, and degrades machine performance. If you've got got a massive variety of digital system migrations on your cloud computing machine, you may now no longer be capable of meet your provider stage contracts. Therefore, the maximum trustworthy manner to lessen statistics middle electricity intake is to optimize the preliminary placement of digital machines. In the deployment method, many researchers use ant colony optimization (ACO) to save you immoderate electricity intake discounts. This is because of its effective comments mechanism and allotted retrieval method. This article information the contemporary techniques for digital system positioning and integration that let you use ACOs to enhance the electrical performance of your cloud statistics centers. The assessment among the techniques supplied here exhibits the value, limitations, and guidelines for improving different techniques alongside the manner.KeywordsCloud computingVirtual machine placementACOEnergy efficiencyDistributed search method
Article
Cloud computing has been progressively popular in the arenas of research and business in the recent years. Virtualization is a resource management approach used in today's cloud computing environment. Virtual Machine (VM) migration algorithms allow for more dynamic resource allocation, as well as improvement in computing power and communication capability in cloud data centers. This necessitates an intelligent optimization approach to VM allocation design for an improved performance of application. In this article, a multi‐objective optimal design approach is proposed to tackle the tasks of VM allocation. Multi‐Objective Optimization (MOO) is a strategy adopted by several methods to handle tasks and workflow scheduling issues that deal with numerous opposing goals. In the cloud computing context, effective task scheduling is critical for achieving cost effective implementation as well as resource utilization. To address the optimal solution, this article proposes an entropy‐based multi objective mayfly algorithm is assessed using a convergence pattern in MOO. The model is tested by implementing in a cloud simulator and results prove that the recommended model has an improved performance with regard to factors such as time and utilization rate.
Chapter
Full-text available
Distributing application requests across applications located in different datacenters with in cloud equally must be provided by cloud load balancing. In this paper, we compare different provisioning policies within cloud for virtual machines and workloads, where we are focusing on how to distribute the processing power between virtual machines and how to distribute workload among virtual machines. Cloudsim is the simulation plate form used to test the different distributions scenarios to check the performance on makespan, average turnaround time, bandwidth utilization and CPU utilization. Result showed the difference in performance between the three tested provisioning schemes, where the space-shared gives better readings for the selected performance metrics.
Conference Paper
Full-text available
Distributed system emulators provide a paramount plat- form for testing of network protocols and distributed appli- cations in clusters and networks of workstations. However, to allow testers to benefit from these systems, it is necessary an efficient and automatic mapping of hundreds, or even thousands, of virtual nodes to physical hosts—and the map- ping of the virtual links between guests to physical paths in the physical environment. In this paper we present a heuris- tic to map both virtual machines to hosts and virtual links between virtual machines to paths in the real system. We define the problem we are addressing, present the solution for it and evaluate it in different usage scenarios.
Conference Paper
Full-text available
Cloud computing is fundamentally altering the expectations for how and when computing, storage and networking resources should be allocated, managed and consumed. End-users are increasingly sensitive to the latency of services they consume. Service Developers want the Service Providers to ensure or provide the capability to dynamically allocate and manage resources in response to changing demand patterns in real-time. Ultimately, Service Providers are under pressure to architect their infrastructure to enable real-time end-to-end visibility and dynamic resource management with fine grained control to reduce total cost of ownership while also improving agility. The current approaches to enabling real-time, dynamic infrastructure are inadequate, expensive and not scalable to support consumer mass-market requirements. Over time, the server-centric infrastructure management systems have evolved to become a complex tangle of layered systems designed to automate systems administration functions that are knowledge and labor intensive. This expensive and non-real time paradigm is ill suited for a world where customers are demanding communication, collaboration and commerce at the speed of light. Thanks to hardware assisted virtualization, and the resulting decoupling of infrastructure and application management, it is now possible to provide dynamic visibility and control of services management to meet the rapidly growing demand for cloud-based services. What is needed is a rethinking of the underlying operating system and management infrastructure to accommodate the ongoing transformation of the data center from the traditional server-centric architecture model to a cloud or network-centric model. This paper proposes and describes a reference model for a network-centric datacenter infrastructure management stack that borrows and applies key concepts that have enabled dynamism, scalability, reliability and security in the telecom industry, to the computing industry. 
Finally- - , the paper will describe a proof-of-concept system that was implemented to demonstrate how dynamic resource management can be implemented to enable real-time service assurance for network centric datacenter architecture.2010
Conference Paper
Full-text available
Cloud computing systems fundamentally provide ac- cess to large pools of data and computational resources through a variety of interfaces similar in spirit to exist- ing grid and HPC resource management and program- ming systems. These types of systems offer a new pro- gramming target for scalable application developers and have gained popularity over the past few years. However, most cloud computing systems in operation today are pro- prietary, rely upon infrastructure that is invisible to the research community, or are not explicitly designed to be instrumented and modified by systems researchers. In this work, we present EUCALYPTUS - an open- source software framework for cloud computing that im- plements what is commonly referred to as Infrastructure as a Service (IaaS); systems that give users the ability to run and control entire virtual machine instances deployed across a variety physical resources. We outline the ba- sic principles of the EUCALYPTUS design, detail impor- tant operational aspects of the system, and discuss archi- tectural trade-offs that we have made in order to allow Eucalyptus to be portable, modular and simple to use on infrastructure commonly found within academic settings. Finally, we provide evidence that EUCALYPTUS enables users familiar with existing Grid and HPC systems to ex- plore new cloud computing functionality while maintain- ing access to existing, familiar application development software and Grid middle-ware.
Article
“The Xen hypervisor has become an incredibly strategic resource for the industry, as the focal point of innovation in cross-platform virtualization technology. David's book will play a key role in helping the Xen community and ecosystem to grow.” - Simon Crosby, CTO, XenSource. An Under-the-Hood Guide to the Power of Xen Hypervisor Internals. The Definitive Guide to the Xen Hypervisor is a comprehensive handbook on the inner workings of XenSource's powerful open source paravirtualization solution. From architecture to kernel internals, author David Chisnall exposes key code components and shows you how the technology works, providing the essential information you need to fully harness and exploit the Xen hypervisor to develop cost-effective, high-performance Linux and Windows virtual environments. Granted exclusive access to the XenSource team, Chisnall lays down a solid framework with overviews of virtualization and the design philosophy behind the Xen hypervisor. Next, Chisnall takes you on an in-depth exploration of the hypervisor's architecture, interfaces, device support, management tools, and internals, including key information for developers who want to optimize applications for virtual environments.
He reveals the power and pitfalls of Xen in real-world examples and includes hands-on exercises, so you gain valuable experience as you learn. This insightful resource gives you a detailed picture of how all the pieces of the Xen hypervisor fit and work together, setting you on the path to building and implementing a streamlined, cost-efficient virtual enterprise. Coverage includes: understanding the Xen virtual architecture; using shared info pages, grant tables, and the memory management subsystem; interpreting Xen's abstract device interfaces; configuring and managing device support, including event channels, monitoring with XenStore, supporting core devices, and adding new device types; navigating the inner workings of the Xen API and userspace tools; coordinating virtual machines with the Scheduler Interface and API, and adding a new scheduler; securing near-native speed on guest machines using HVM; and planning for future needs, including porting, power management, new devices, and unusual architectures.
Article
Cloud Computing refers to the notion of outsourcing on-site available services, computational facilities, or data storage to an off-site, location-transparent centralized facility or “Cloud.” Gang Scheduling is an efficient job scheduling algorithm for time sharing, already applied in parallel and distributed systems. This paper studies the performance of a distributed Cloud Computing model, based on the Amazon Elastic Compute Cloud (EC2) architecture, that implements a Gang Scheduling scheme. Our model utilizes the concept of Virtual Machines (or VMs) which act as the computational units of the system. Initially, the system includes no VMs, but depending on the computational needs of the jobs being serviced new VMs can be leased and later released dynamically. A simulation of the aforementioned model is used to study, analyze, and evaluate both the performance and the overall cost of two major gang scheduling algorithms. Results reveal that Gang Scheduling can be effectively applied in a Cloud Computing environment both performance-wise and cost-wise. Keywords: Cloud computing, Gang scheduling, HPC, Virtual machines
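The core idea of the abstract above - a job's tasks must all start together, and VMs are leased on demand when the idle pool is too small - can be sketched as follows. This is an illustrative simplification, not the paper's simulation model; the names (`Job`, `schedule_gang`) and the FIFO policy are assumptions for the example.

```python
# Minimal sketch of gang scheduling on an elastic VM pool (hypothetical
# names, not the EC2-based model from the cited paper).
from collections import deque

class Job:
    def __init__(self, name, tasks):
        self.name = name
        self.tasks = tasks  # number of tasks that must run simultaneously

def schedule_gang(jobs, idle_vms=0, max_vms=8):
    """FIFO gang scheduler: returns (dispatch order, extra VMs leased)."""
    queue = deque(jobs)
    leased = 0
    order = []
    while queue:
        job = queue.popleft()
        if job.tasks > max_vms:      # the gang can never fit; reject it
            continue
        if job.tasks > idle_vms:     # lease just enough extra VMs on demand
            leased += job.tasks - idle_vms
            idle_vms = job.tasks
        order.append(job.name)       # all tasks of the job start together
        # after the job completes, its VMs return to the idle pool
    return order, leased
```

The dynamic leasing step mirrors the abstract's point that the system starts with no VMs and grows only as the serviced jobs require.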
Article
One of the many definitions of "cloud" is that of an infrastructure-as-a-service (IaaS) system, in which IT infrastructure is deployed in a provider's data center as virtual machines. With IaaS clouds' growing popularity, tools and technologies are emerging that can transform an organization's existing infrastructure into a private or hybrid cloud. OpenNebula is an open source, virtual infrastructure manager that deploys virtualized services on both a local pool of resources and external IaaS clouds. Haizea, a resource lease manager, can act as a scheduling back end for OpenNebula, providing features not found in other cloud software or virtualization-based data center management software.
Conference Paper
Amazon S3-style storage is an attractive option for clouds that provides data access over HTTP/HTTPS. At the same time, parallel file systems are an essential component in privately owned clusters that enable highly scalable data-intensive computing. In this work, we take advantage of both of those storage options, and propose pWalrus, a storage service layer that integrates parallel file systems effectively into cloud storage. Essentially, it exposes the mapping between S3 objects and backing files stored in an underlying parallel file system, and allows users to selectively use the S3 interface and direct access to the files. We describe the architecture of pWalrus, and present preliminary results showing its potential to exploit the performance and scalability of parallel file systems.
Article
With the significant advances in Information and Communications Technology (ICT) over the last half century, there is an increasingly perceived vision that computing will one day be the 5th utility (after water, electricity, gas, and telephony). This computing utility, like all other four existing utilities, will provide the basic level of computing service that is considered essential to meet the everyday needs of the general community. To deliver this vision, a number of computing paradigms have been proposed, of which the latest one is known as Cloud computing. Hence, in this paper, we define Cloud computing and provide the architecture for creating Clouds with market-oriented resource allocation by leveraging technologies such as Virtual Machines (VMs). We also provide insights on market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain Service Level Agreement (SLA)-oriented resource allocation. In addition, we reveal our early thoughts on interconnecting Clouds for dynamically creating global Cloud exchanges and markets. Then, we present some representative Cloud platforms, especially those developed in industries, along with our current work towards realizing market-oriented resource allocation of Clouds as realized in Aneka enterprise Cloud technology. Furthermore, we highlight the difference between High Performance Computing (HPC) workload and Internet-based services workload. We also describe a meta-negotiation infrastructure to establish global Cloud exchanges and markets, and illustrate a case study of harnessing ‘Storage Clouds’ for high performance content delivery. Finally, we conclude with the need for convergence of competing IT paradigms to deliver our 21st century vision.
Conference Paper
Infrastructure as a Service providers use virtualization to abstract their hardware and to create a dynamic data center. Virtualization enables the consolidation of virtual machines as well as the migration of them to other hosts during runtime. Each provider has its own strategy to efficiently operate a data center. We present a rule based mapping algorithm for VMs, which is able to automatically adapt the mapping between VMs and physical hosts. It offers an interface where policies can be defined and combined in a generic way. The algorithm performs the initial mapping at request time as well as a remapping during runtime. It deals with policy and infrastructure changes. We extended the open source IaaS solution Eucalyptus and we evaluated it with typical policies: maximizing the compute performance and VM locality to achieve a high performance and minimizing energy consumption. The evaluation was done on state-of-the-art servers in our own data center and by simulations using a workload of the Parallel Workload Archive. The results show that our algorithm performs well in dynamic data center environments.
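The policy interface described above - rules defined and combined generically to drive VM placement - can be illustrated with a small sketch in which each policy is a scoring function over candidate hosts and the VM goes to the highest-scoring host with enough free capacity. All names here (`Host`, `place_vm`, the `consolidate` policy) are hypothetical; this is not the Eucalyptus extension from the cited paper.

```python
# Illustrative rule-based VM-to-host mapper: policies are composable
# scoring functions, combined by summation (an assumed combination rule).

class Host:
    def __init__(self, name, cores, free_cores, powered_on=True):
        self.name = name
        self.cores = cores
        self.free_cores = free_cores
        self.powered_on = powered_on

def consolidate(host, vm_cores):
    """Energy-style policy: prefer hosts already powered on and loaded,
    so idle hosts can stay (or be switched) off."""
    return (1 if host.powered_on else 0) + (host.cores - host.free_cores)

def place_vm(hosts, vm_cores, policies):
    """Map one VM onto the feasible host with the best combined score."""
    candidates = [h for h in hosts if h.free_cores >= vm_cores]
    if not candidates:
        return None  # no feasible host; a real system might power one on
    best = max(candidates,
               key=lambda h: sum(p(h, vm_cores) for p in policies))
    best.free_cores -= vm_cores
    return best.name
```

Because policies are plain functions, new rules (locality, compute performance) can be added or recombined without touching the placement loop, which is the generic-interface property the abstract emphasizes.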
Conference Paper
Virtual machine systems have been implemented on a limited number of third generation computer systems, e.g. CP-67 on the IBM 360/67. From previous empirical studies, it is known that certain third generation computer systems, e.g. the DEC PDP-10, cannot support a virtual machine system. In this paper, a model of a third-generation-like computer system is developed. Formal techniques are used to derive precise sufficient conditions to test whether such an architecture can support virtual machines.