International Journal of Cloud Applications and Computing, 3(2), 47-60, April-June 2013 47
Copyright © 2013, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
Virtual Machine Allocation in Cloud Computing Environment

Absalom E. Ezugwu, Department of Computer Science, Faculty of Science, Federal University Lafia, Lafia, Nasarawa State, Nigeria
Seyed M. Buhari, Department of Computer Science, Faculty of Science, Universiti Brunei Darussalam, Gadong, Brunei
Sahalu B. Junaidu, Department of Mathematics, Faculty of Science, Ahmadu Bello University, Zaria, Kaduna State, Nigeria

ABSTRACT
The virtual machine allocation problem is one of the challenges in cloud computing environments, especially for private cloud design. In this environment, each virtual machine is mapped onto a physical host in accordance with the available resources on the host machine. Specifically, quantifying the performance of scheduling and allocation policies on a cloud infrastructure for different application and service models, under varying performance metrics and system requirements, is an extremely challenging and difficult problem to resolve. In this paper, the authors present a Virtual Computing Laboratory framework model using the concept of a private cloud, built by extending the open-source IaaS solution Eucalyptus. A rule-based mapping algorithm for virtual machines (VMs), formulated on set-theoretic principles, is also presented. The algorithmic design is projected towards being able to automatically adapt the mapping between VMs and physical hosts' resources. The paper similarly presents a theoretical study and derivations of some performance evaluation metrics for the chosen mapping policies; these include determining the context switching, waiting time, turnaround time, and response time for the proposed mapping algorithm.

Keywords: Manager-Worker Process, Private Cloud, Rule-Based Mapping, Virtual Machine (VM) Allocation, Virtualization
1. INTRODUCTION
Cloud computing presents to an end cloud user the means to outsource on-site services, computational facilities, or data storage to an off-site, location-transparent, centralized facility or "Cloud" (Ioannis & Karatza, 2010). A "Cloud" implies a set of machines and web services that implement cloud computing. These machines ideally comprise a pool of distributed physical compute resources, including processors, memory, network bandwidth, and storage, potentially distributed across a network of servers that cuts across geographical boundaries.
Resources associated with cloud computing
DOI: 10.4018/ijcac.2013040105
are often organized into dynamic logical entities that are outsourced and leased on demand. One of the major characteristics of cloud computing is elasticity, which means that cloud resources can grow or shrink in real time (Sarathy et al., 2010). This transformation in cloud computing is made possible today by virtualization technology.
Over the past few years, virtualization technology has become a more common phrase among IT professionals. The main
concept behind this technology is to enable the
abstraction or decoupling of application pay-
load from the underlying distributed physical
host resource (Buyyaa et al., 2009; Popek &
Goldberg, 1974). This simply means that the
physical resources can be presented in the form
of either logical or virtual resources depending
on individual choices. Furthermore, implementing virtualization technology helps cloud resource providers reduce costs through improved machine utilization, reduced administration time, and lower infrastructure costs. By introducing
a suitable management mechanism on top of
this virtualization functionality (as we have
proposed in this paper), the provisioning of the
logical resources can be made dynamic; that is, the logical resource can be made either bigger or smaller in accordance with cloud user demand (the elastic property of the cloud). To
enable a truly cloud computing system, each
computing resource element should be capable
of being dynamically provisioned and managed
in real-time based on the concept of dynamic
provisioning as it applies to cloud computing.
This abstraction actually forms the basis of the proposed conceptual framework presented in Section 3.
To implement the concept of virtualization, cloud developers often adopt open-source software frameworks for cloud computing that implement what is commonly referred to as Infrastructure as a Service (IaaS), built around a hypervisor (Nurmi et al., 2009; Chisnall, 2009). A hypervisor, also called a virtual machine manager (VMM), is one of many hardware virtualization techniques that allow multiple operating systems, termed guests, to run concurrently on a host machine. However, different infrastructures are available for implementing virtualization, each with its own virtual infrastructure management software (Sotomayor et al., 2009).
This paper proposes a novel simulation framework for cloud developers. There are two high-level components in the proposed architecture: the Manager process and the Worker process. The Manager is assigned the role of cluster controller, while the Worker is assigned the role of node controller in the
new system. The focus of the work, however, is to develop a framework model that allows the dynamic mapping of VMs onto physical hosts, depending on the resource requirements of the VMs and resource availability on the physical hosts. The cloud resources considered include machines, network, storage, operating systems, application development environments, and application programs.
The remainder of the paper is organized as follows. In Section 2, a survey of related work on cloud computing and the Eucalyptus cloud environment is presented. The proposed system architecture and model are described in Section 3, and in Section 4, an in-depth description of VM allocation and the rule-based mapping algorithm based on set-theoretic concepts is presented. Theoretical derivations of performance evaluation metrics for the proposed system are given in Section 5. Finally, Section 6 offers concluding remarks on the work.
2. RELATED WORK
2.1 Cloud Platform Environment
Cloud computing as defined in Buyya et al.
(2008) is a type of parallel and distributed
computing system consisting of a collection of
interconnected and virtualized computers that
are dynamically provisioned and presented as
one or more unified computing resources based
on service-level agreements established through
negotiation between the service provider and
consumers. Currently there are very good cloud environments which provide effective, efficient, and reliable services to cloud users, among which are Amazon, Google App Engine, Apple MobileMe, and the Microsoft clouds (Amazon, 2009; Chu et al., 2007).
2.2 Eucalyptus Cloud Environment
Eucalyptus is a Java-based cloud management tool which consists of five high-level components. These components are the cloud controller, cluster controller, node controller, storage controller, and Walrus. Each high-level system component has its own Web interface and is implemented as a standalone Web service (Nurmi et al., 2009; Yoshihisa and Garth, 2010).
The cloud controller is responsible for exposing and managing the underlying virtualized resources (machines/servers, network, and storage) via user-facing APIs. Currently, this system component exports a well-defined, industry-standard API (Amazon EC2) and a Web-based user interface (AEC, 2011). The storage controller provides block-level network storage that can be dynamically attached to VMs. The current
implementation of the storage controller sup-
ports the Amazon Elastic Block Storage (EBS)
semantics (Kleineweber et al., 2011). Walrus
is a storage service that supports third party
interfaces, providing a mechanism for storing
and accessing VM images and user data. The
cluster controller gathers the required information and schedules VM execution on specific node controllers, as well as manages the virtual instance network that runs inside the cloud environment. The node controller manages the execution, inspection, and termination of VM instances on the particular host where it runs and is responsible for those VM instances. When a user places a request for a VM image through the cloud controller, the node controller starts and runs that particular VM image and makes an instance of the VM available on the network, which is then accessed by the cloud end user. Figure 1 depicts the cloud model of Eucalyptus.
The cloud controller is the entry point into
the cloud computing platform environment for
users and administrators. It queries node man-
agers for information about resources, makes
high-level scheduling decisions, and implements them by making requests to the cluster controller.
The cluster controller then queries the node controllers to implement the cloud controller's requests. Users interact with the cloud controller via the user interface by means of Simple Object Access Protocol (SOAP) or Representational State Transfer (REST) messages. The cloud controller passes the user requests to the cluster controller. A cloud controller can have multiple cluster controllers, and one particular cluster controller can have multiple node controllers
Figure 1. Eucalyptus architecture
at the same time. In Upatissa and Atukorale (2012), an abstract architecture of a modified private cloud model is presented, with a focus on enhancing the effectiveness of managing virtual machine images in a Eucalyptus-based cloud environment.
In the context of Cloud computing and as
assumed in this paper, any hardware or soft-
ware entity such as high-performance systems,
storage systems, laboratory device servers or
applications that are shared between users of
a Cloud is called a resource. However, for the
rest of this paper and unless otherwise stated,
the term resource means hardware such as
computational nodes in the network or storage
systems. Resources are also laboratory devices
or laboratory device servers and hence, these
terms are used interchangeably. The network-
enabled capabilities of the resources that can be
invoked by users, applications or other resources
are called services.
3. SYSTEM ARCHITECTURE:
VIRTUAL COMPUTING
LABORATORY MODEL
As mentioned earlier, virtualization is the key technology underlying most private cloud implementations, enabling multiple virtual machines to run on a single physical node. The private cloud, in essence, is more than just virtualization. It is a programmatic interface, driven by an API and managed by a cloud controller, that enables automated provisioning of virtual machines, networking resources, storage, and other infrastructure services. Our main concern in this article is to find an alternative yet suitable means by which the provisioning of virtual machines, together with their mapping onto physical hosts, can be managed and made even more effective and efficient in a typical private cloud computing environment.
To support the run-time allocation of virtual
machines to physical hosts, we will construct
a manager/worker-style parallel paradigm
akin to the work presented in Malgaonkar et
al. (2011) for the proposed virtual computing
laboratory. Similarly, we also intend to extend the Eucalyptus private cloud architecture, similar to the work discussed in Upatissa and Atukorale (2012) and Nurmi et al. (2009), by introducing two high-level management components (i.e., the manager process and the worker processes). In the
proposed model, one virtual machine called
the manager is responsible for keeping track
of assigned and unassigned query information
about resource. The worker queries and controls
the system software on its node in response to
queries and control requests from its manager.
The advantage of allocating a single task at a time to each worker is that it balances the workload. Keeping the workload balanced is essential for high efficiency, and we therefore choose the manager/worker paradigm as the basis for our cloud design. Details of this paradigm are further explained in Sections 3.1 and 3.2.
3.1 Manager Process
The manager process is the gateway into the
cloud management platform. Its function is to
query any machine that has network connectiv-
ity to both the nodes running worker processes
and to the machine running the cloud controller
for information about resources. Subsequently,
the manager process is assigned the task of i)
making scheduling decisions (such as sched-
uling of incoming instances to run on specific
nodes) ii) controlling the instances of virtual
network overlay, and iii) gathering information
about a set of nodes from the worker process.
Many of the manager's request operations to the worker processes take the following form: describeInstances, describeResources, and so on. When the manager process receives a set of operation instances to perform, it first requests information regarding available and suitable resources through its describeResources function from the worker processes. The worker process then searches for this information, comprising lists of resource characteristics (processor, memory, storage, and bandwidth), and sends a report back to the manager process. With this information, the manager process computes
the number of simultaneous instances of the
specific type that can be executed on the lists
of available nodes and sends this value to the
cloud controller for delegation and allocation
to the various booked nodes. This step also applies to the rest of the query functions.
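The manager-side computation described above can be sketched in a few lines. This is a minimal illustration, not the Eucalyptus implementation: the function name, the profile fields, and the sample numbers are all assumptions, standing in for the lists of resource characteristics reported back by the worker processes.

```python
# Hypothetical sketch: given the resource profiles reported by workers
# through a describeResources-style query, count how many simultaneous
# instances of a given VM type can be executed across the nodes.

def simultaneous_instances(node_profiles, vm_type):
    """Count how many instances of vm_type fit across all reported nodes."""
    total = 0
    for node in node_profiles:
        # An instance fits as many times as its scarcest resource allows.
        fits = min(node[res] // vm_type[res] for res in vm_type)
        total += fits
    return total

# Two illustrative worker reports and a small VM type.
nodes = [
    {"processors": 8, "memory": 16384, "storage": 500, "bandwidth": 1000},
    {"processors": 4, "memory": 8192,  "storage": 250, "bandwidth": 1000},
]
small_vm = {"processors": 2, "memory": 4096, "storage": 50, "bandwidth": 100}

print(simultaneous_instances(nodes, small_vm))  # node 1 fits 4, node 2 fits 2 -> 6
```

The value returned here is what the manager would pass on to the cloud controller for delegation to the booked nodes.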
3.2. Worker Process
The worker process manages all information
regarding VM instance per host respectively. A
Worker process executes on every node that is
designated for hosting VM instances. A Worker
queries and controls the system software on its
node in response to queries and control requests
from the manager process. The worker processes execute queries from the manager process to discover the physical nodes' resource profiles. This profile information entails the number of processors, the size of memory, and the available disk space, as well as the state of VM instances on the node. The information thus collected is
propagated back to the Manager for further
processing and delegation to the cloud control-
ler (see Figure 2).
In addition, the cluster or resource pool also consists of some local storage, which can be either true local storage physically attached to the node, or storage accessed via a shared pool over a storage area network (Fibre Channel or similar mechanisms).
The architecture shown in Figure 3 can be expanded to include multiple clusters comprising managers and workers, adding both capacity to the solution and redundancy that can be used to increase the overall availability of the infrastructure.
4. PRELIMINARIES: SET
THEORY AND MODELLING
OF VM MAPPING
The proposed model considered in this paper is based on a formal model described in Kleineweber et al. (2011), which discusses rule-based techniques for the mapping of virtual machines and virtual network links. However, we concentrate on the aspect of virtual machine mapping that associates the resource requirements of the virtual machines with their availability on the physical hosts. Before venturing into the modelling of VM mapping, we recall the following preliminary ideas from set theory.
Definition 1: A set is a collection of distinct
elements or objects of some kind with a
common property that, given an object and
a set, it is possible to decide if the object
belongs to the set.
Figure 2. In a manager/worker style, a manager process sends requests to the worker processes
1. A set can be described by A = {a, b, c, d}; the elements of a set are often denoted by lowercase letters (e.g., a, b, c, …).
2. x ∈ A if an element x belongs to a set A (x ∉ A if an element x does not belong to a set A). Similarly, x ∈ X or y ∈ Y means 'x belongs to set X' or 'y belongs to set Y.'
Example 1: Let X = {x1, x2, x3} = {memory, OS, ram}, where x1 = memory, and so on. Then 'ram' ∈ X and 'Book' ∉ X.
Example 2: Let X = {x1, …, x10}. Then x2 ∈ X and x100 ∉ X.
The notions of the union of two sets and
the intersection of two sets are well-known and
therefore not defined here.
Definition 2 (Function): A function from a set X to a set Y is a rule which assigns to each element x ∈ X a unique element y ∈ Y, denoted by f: X → Y, or y = f(x), where x ∈ X and y = f(x) ∈ Y.
We develop a model of virtual machine mapping by defining the set of virtual machines as V = {v0, v1, …, vm}, where m denotes the number of virtual machines to be mapped to the real hosts. Similarly, we represent the set of real hosts as R = {r0, r1, …, rn}, where R represents the computational nodes provided at the data centre and n is the number of physical hosts available. The subscripts indicate an instance of either a family of virtual machines or physical hosts. The use of this expression is further justified based on the definition of the Cartesian product given below:
Figure 3. The proposed private cloud architecture
Definition 3 (Cartesian product): An n-tuple (a1, …, an) can be defined by explicitly listing its elements a1, …, an. The generalised Cartesian product of n sets A1, …, An is then defined as the set of all n-tuples:

∏(i=1..n) Ai = {(a1, …, an) | a1 ∈ A1, …, an ∈ An}  (1)
Consider a virtual machine X. For the n-ary Cartesian product ∏(i=1..n) Xi there is a family of n projections {π1, …, πn} such that for all (x1, …, xn) ∈ ∏(i=1..n) Xi and for all k ∈ {1, …, n}, the equation πk((x1, …, xn)) = xk holds.
Each of the physical hosts has particular attributes or a profile attached to it. These profiles are very significant, since resources are often requested along with some requirement criteria, as may be the case in a distributed resource environment. A host machine ri can have the following profile: processors, memory, network bandwidth, and storage.
Associate to vi the attributes, that is, vi{attributes} = vi{processors, memory, network bandwidth, storage}.

Therefore, attributes may be represented as sets of component values of the physical host ri. We may denote the elements of this set as follows:

p = processors,
m = memory,
n = network bandwidth,
s = storage available on the host machine ri.

Therefore, ri{attributes} = {p, m, n, s}.
By the same rule, we may apply this notion to the set of virtual machines. Let p′ = processors, m′ = memory, n′ = network bandwidth, and s′ = storage required by the virtual machine vi. Therefore, vi{attributes} = {p′, m′, n′, s′}.
Define a function which maps a physical host to any virtual machine, using a round-robin technique based on the number of available resources on the hosts, as follows:

f: V → R ∪ {e},  (2)

where e is a set of VMs without corresponding attributes and

f(v) = r, if r is the attribute resource name for virtual machine v;
f(v) = e, if v does not have a corresponding attribute resource name,  (3)

for v ∈ V and r ∈ R.
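A minimal sketch of the mapping function f: V → R ∪ {e} of Equations 2–3 follows, assuming each VM and host is tagged with a set of attribute resource names. The sentinel value, the dictionary layout, and the first-match policy are illustrative assumptions, not part of the formal model.

```python
# Sketch of f: V -> R ∪ {e}. A VM maps to a host whose attribute set
# covers the VM's attribute names; otherwise it maps to the sentinel e.

E = "e"  # sentinel: VM has no corresponding attribute resource name

def f(vm, hosts):
    """Return the first host whose attribute names cover the VM's, else e."""
    for host in hosts:
        if vm["attributes"] <= host["attributes"]:  # subset test on names
            return host["name"]
    return E

hosts = [{"name": "r1", "attributes": {"p", "m", "n", "s"}}]
v1 = {"name": "v1", "attributes": {"p", "m"}}
v2 = {"name": "v2", "attributes": {"gpu"}}  # no matching attribute name

print(f(v1, hosts))  # -> r1
print(f(v2, hosts))  # -> e
```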
Definition 4: We say that a set of virtual machines V is compatible with a physical host set R if there exists a mapping f: V × R → {0, 1} such that for some Vk ⊂ V there exists at least one element r ∈ R with f((Vk, r)) = 1, where Vk contains some elements vi, i = 1, 2, …, n, of V. Otherwise, V is said to be incompatible if f((Vk, r)) = 0; this means that there is no r ∈ R such that (Vk, r) holds.
If we denote an instance of a virtual machine v1 with resource requirement configuration index 1 by v1.1, and a physical host r1 with available resource configuration index 1, then we can have a compatibility mapping of virtual machines onto physical hosts as shown in Figure 4. In this case V1 = {v1.i}, where i = 1, 2, 3, …, n.
4.1. Setting Initial Conditions
It is easy to observe that at time t = 0 there exists no initial mapping of the VMs, although V = {v1, v2, …, vn} exists; therefore the function f = 0, or f((Vk, r)) → Z = ∅, where Z is the set of already-mapped VMs.
The following conditions hold for the proposed mapping of VMs to a physical host. The function f: V → R will map a virtual machine vi to a physical host ri only if

f((Vk, r)) = 1, for some Vk ⊂ V, r ∈ R, and  (4)
pi ≥ Σ(i=1..n) p′i, ∀ vi ∈ V  (5)

mi ≥ Σ(i=1..n) m′i, ∀ vi ∈ V  (6)

ni ≥ Σ(i=1..n) n′i, ∀ vi ∈ V  (7)

si ≥ Σ(i=1..n) s′i, ∀ vi ∈ V  (8)
These equations enforce that the necessary resources (processor, memory, network bandwidth, and storage) are available on the physical host to which the guest VMs are to be assigned: the amount of resources required by all the guest VMs mapped to a host must not exceed the resources available on that host.
In this paper, we assume that the communication between the guest virtual machines running on the same host and the host machine itself is contention-free in terms of resource allocation, i.e., each guest VM runs in a specified and designated address space. Similarly, each VM is mapped exactly once onto a physical host machine. The pseudo-code for this purpose is presented in Algorithm Listing 1.
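Since Algorithm Listing 1 survives in this copy only as a caption, the following is a hedged reconstruction of the mapping it describes: VMs are assigned round-robin over the hosts, and a VM is placed on a host only when the capacity conditions of Equations 5–8 still hold after placement. The field names, sample capacities, and the skip-to-next-host policy are assumptions, not the authors' exact pseudo-code.

```python
RESOURCES = ("p", "m", "n", "s")  # processors, memory, bandwidth, storage

def map_vms(vms, hosts):
    """Map each VM at most once onto a host with sufficient free resources."""
    free = [dict(h) for h in hosts]   # remaining capacity per host
    mapping = {}                      # VM index -> host index
    k = 0                             # round-robin cursor
    for i, vm in enumerate(vms):
        for attempt in range(len(free)):
            j = (k + attempt) % len(free)
            # Equations 5-8: every resource must still fit on host j.
            if all(free[j][r] >= vm[r] for r in RESOURCES):
                for r in RESOURCES:
                    free[j][r] -= vm[r]   # reserve the VM's share
                mapping[i] = j
                k = j + 1                 # next VM starts at the next host
                break
    return mapping

hosts = [{"p": 4, "m": 8, "n": 10, "s": 100},
         {"p": 4, "m": 8, "n": 10, "s": 100}]
vms = [{"p": 2, "m": 4, "n": 5, "s": 50},
       {"p": 2, "m": 4, "n": 5, "s": 50},
       {"p": 4, "m": 8, "n": 10, "s": 100}]

print(map_vms(vms, hosts))  # -> {0: 0, 1: 1}: the third VM no longer fits
```

A VM that fits on no host is simply left unmapped, matching the incompatible case f((Vk, r)) = 0 of Definition 4.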
• Round-Robin: A round-robin algorithm
distributes the load equally to each server,
regardless of the current number of con-
nections or the response time (Mohanty et
al., 2011). Round-robin is suitable when the
servers in the cluster have equal processing
capabilities; otherwise, some servers may
receive more requests than they can process
while others are using only part of their
resources. In the above algorithm, a dynamic time quantum concept is suggested and used to improve the average waiting time and average turnaround time and to decrease the number of context switches. An abstracted view of the round-robin algorithm is provided in Algorithm Listing 2.
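Algorithm Listing 2 is likewise reduced to a caption in this copy, so the following is only a sketch of an abstracted round-robin scheduler with a dynamic time quantum. Recomputing the quantum each round as the mean of the remaining task times is one common dynamic-quantum choice, assumed here rather than taken from the original listing.

```python
from collections import deque

def round_robin(burst_times):
    """Return (per-task waiting times, context switches) under dynamic-quantum RR."""
    remaining = list(burst_times)
    finish = [0.0] * len(burst_times)
    queue = deque(range(len(burst_times)))
    clock, switches = 0.0, 0
    while queue:
        active = list(queue)
        # Dynamic quantum: mean of the remaining times of queued tasks.
        quantum = sum(remaining[i] for i in active) / len(active)
        for _ in range(len(active)):
            i = queue.popleft()
            run = min(quantum, remaining[i])
            clock += run
            remaining[i] -= run
            if remaining[i] > 1e-9:
                queue.append(i)   # pre-empted: costs one context switch
                switches += 1
            else:
                finish[i] = clock
    # All tasks arrive at t = 0, so waiting = finish - burst.
    waiting = [finish[i] - burst_times[i] for i in range(len(burst_times))]
    return waiting, switches

print(round_robin([3, 5, 7]))  # -> ([0.0, 3.0, 8.0], 1)
```

With the mean-based quantum, every task at or below the average completes in its first slice, which is exactly how this family of schemes reduces context switches relative to a fixed short quantum.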
5. PERFORMANCE
EVALUATION
As previously discussed, the sharing of cloud
resources is based on demand and the dynamic
utilization of these resources comes under differ-
ent conditions. Therefore, this section presents
some empirical studies of the key evaluation
Figure 4. Compatibility of VMs to physical host mapping functions
metrics for the cloud models presented in Section 3. We anticipate using these parameters to project the efficiency of the proposed model. This study therefore evaluates the proposed model from the performance-efficiency view, and not from a cost perspective. If the cloud is assumed to be affordable, then the performance of such a system is the issue to reconcile with.
The performance metrics summarized in Table 1 have been used to evaluate the system performance. Considering that the proposed system deploys the round-robin scheduling algorithm to map virtual machines to physical hosts, the major concentration of the performance metrics is on determining the context switching, waiting time, turnaround time, and response time.
The waiting time wt[ti] of a mapping task ti refers to the time that elapses between the dispatching of VMs and the moment their mapping scheduling with the physical host begins. Simply put, it is the amount of time a VM to be mapped has been waiting in the ready queue. The average waiting time (AVGWT) of mapping tasks ti is defined as follows:
AVGWT = (1/n) Σ(i=1..n) wt[ti]  (9)
Algorithm 1. Mapping of guest VMs to physical host

However, the mapping of different virtual machines by the manager onto the physical hosts depends on the resource requirements of the virtual machines and their availability on the host system. We include these two additional factors by assigning a weight and delay factor to the system. The number of times each mapping task exploits the allotted time quantum is weighted by the resource requirement rr[ti] and the availability factor af[ti]. Therefore, the average weighted waiting time (AVGWWT) is defined as follows:
AVGWWT = Σ(i=1..n) (rr[ti] + af[ti]) · wt[ti] / Σ(i=1..n) (rr[ti] + af[ti])  (10)
Turnaround time of a mapping task ti is
referred to as the time difference between the
arrival of a mapping request and the successful
completion of the mapping task. The average
turnaround time (AVGTT) and the average
weighted turnaround time are given as:
AVGTT = (1/n) Σ(i=1..n) tt[ti]  (11)
AVGWTT = Σ(i=1..n) (rr[ti] + af[ti]) · tt[ti] / Σ(i=1..n) (rr[ti] + af[ti])  (12)
Response time of a mapping task ti is the
time frame from when a request is submitted
until the time when the virtual machine is
mapped to the physical host and first response
is produced. The average response time
(AVGRT) and average weighted response time
(AVGWRT) are defined accordingly as follows:
AVGRT = (1/n) Σ(i=1..n) rt[ti]  (13)
AVGWRT = Σ(i=1..n) (rr[ti] + af[ti]) · rt[ti] / Σ(i=1..n) (rr[ti] + af[ti])  (14)
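The weighted averages of Equations 10, 12 and 14 all share one form: each per-task time is weighted by (rr[ti] + af[ti]). A small sketch, with made-up sample numbers for illustration only:

```python
def weighted_average(times, rr, af):
    """Weighted mean of per-task times, weights rr[i] + af[i] (Eqs. 10/12/14)."""
    weights = [r + a for r, a in zip(rr, af)]
    return sum(w * t for w, t in zip(weights, times)) / sum(weights)

wt = [4.0, 10.0, 6.0]   # waiting times wt[ti]
rr = [2, 1, 3]          # resource requirement factors rr[ti]
af = [1, 1, 1]          # availability factors af[ti]

avgwt = sum(wt) / len(wt)               # Eq. 9: plain average
avgwwt = weighted_average(wt, rr, af)   # Eq. 10: weighted average

print(avgwt)   # -> 6.666..., the unweighted mean
print(avgwwt)  # weights 3, 2, 4 -> (12 + 20 + 24) / 9 = 6.222...
```

Passing the turnaround times tt[ti] or response times rt[ti] through the same function yields AVGWTT and AVGWRT respectively.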
The performance of the system depends
on the length of a time quantum assigned. The
idea of choosing a short quantum is considered
a good choice because it would allow many
mapping processes (as in our case the mapping
of virtual machines to physical hosts) to circu-
late through the waiting queue quickly, thereby
allowing each process a brief chance to run. In
this way, highly interactive tasks that usually do
not use up their quantum will not have to wait
for so long before they get processed again by
the system. The advantage of this approach is
the improvement on the interactive performance
of the entire system. However, selecting a short quantum is also bad in some ways, because the system must perform a context switch whenever a process gets pre-empted. This is considered an overhead: any time the system spends doing something other than executing submitted requests and performing mapping tasks is essentially overhead. A short quantum implies many such context switches per unit time, which takes the system away from performing useful work.

Algorithm 2. Round Robin
The overall performance efficiency PE of the system is therefore determined based on the value of the assigned time quantum Q, the useful time T taken to execute a mapping task ti before the occurrence of a context switch, the context switch time Tcs required by task ti, and the total time taken to complete the whole execution (i.e., T + Tcs). This is computed as follows:
PE = T / (T + Tcs), if Q = ∞;
PE = T / (T + Tcs), if Q > T;
PE = 1 / (1 + Tcs/Q), if 0 < Q < T.  (15)
However, if Q = Tcs, then PE = 50%, and if Q ≈ 0, then PE ≈ 0, based on the third formula in Equation 15. The general conception in this case is that an increase in Q increases efficiency but degrades average response time. A case scenario of this conception is presented below.
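A quick numeric check of Equation 15, under the stated reading of its three branches (Q infinite, Q larger than T, and 0 < Q < T); the sample values of T and Tcs are illustrative:

```python
def pe(T, Tcs, Q):
    """Performance efficiency per Equation 15."""
    if Q == float("inf") or Q > T:
        return T / (T + Tcs)
    return 1.0 / (1.0 + Tcs / Q)

T, Tcs = 100.0, 5.0
print(pe(T, Tcs, float("inf")))  # -> 0.952..., i.e. T / (T + Tcs)
print(pe(T, Tcs, Tcs))           # -> 0.5, matching the Q = Tcs remark
print(pe(T, Tcs, 1e-9))          # -> ~0, matching the Q ≈ 0 remark
```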
• Case: Suppose that there are ten VMs ready to be mapped, Q = 100 msec, and Tcs = 5 msec. VM 0 (at the head of the ready queue) gets to run immediately. VM 1 can run only after VM 0's quantum expires (100 msec) and the context switch takes place (5 msec), so it starts to run at 105 msec. Likewise, VM 2 can run only after another 105 msec. We can compute the amount of time that each VM will be delayed and compare the delays between a small quantum (4 msec) and a long quantum (100 msec), and similarly between a quantum of 10 msec and a long quantum of 100 msec. Figure 5 illustrates the plots of task number against delay time.

Table 1. Description of symbols used in the Round Robin performance metrics
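The delays in the case above follow directly: under round-robin, VM k first runs k · (Q + Tcs) msec after the head of the queue. This sketch reproduces the 0/105/210 msec starts quoted in the text for Q = 100 and Tcs = 5, and compares them with the short quantum of 4 msec, in the spirit of Figure 5.

```python
def start_delays(n_vms, quantum, tcs):
    """Start delay (msec) of each VM position in the ready queue."""
    return [k * (quantum + tcs) for k in range(n_vms)]

long_q = start_delays(10, 100, 5)
short_q = start_delays(10, 4, 5)

print(long_q[:3])   # -> [0, 105, 210], as in the text
print(short_q[:3])  # -> [0, 9, 18]: each VM gets its first slice much sooner
```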
• Context Switching Issues: Apart from the normal mapping task, the system has to perform check-and-balance procedures to confirm whether a particular physical host has the desired resources that meet the VM's requirements before a mapping is allowed. This task might therefore differ across VMs and thus warrants a time shift in their respective execution stages. This process of context switching is ideal, since the proposed system is modeled on a distributed, time-sharing operational scenario.
A context switch occurs when the system transfers control of the scheduling process from an executing task to another that is ready to run. The system automatically saves the state of the current task, including the resource requirement profiles and other profiles that describe this state based on the requests submitted by the VMs. It then loads the saved state of the new task for execution.
To characterize the overhead associated with context switching, we adopt an empirical study of the work presented in (AWS, 2009). The work involves a test bench that creates two threads, P1 and P2, and generates a number of context switches, as detailed in (AWS, 2009; Chu et al., 2007). Even though their work differs from the one presented in this paper, the concept and rationale for context switching remain the same. Figures 6 and 7 depict an extended version of the work presented in (AWS, 2009). In fact, in step 1, only
two context switches are generated and in step n,
n context switches are generated. In the figures,
Tcs represents the time of the context switch,
Si,j the j-th section of the process Pi and Ti,j is
the execution time of the section Si,j.
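A rough analogue of the two-thread test-bench can be written in Python: two threads alternate via a pair of events, forcing a known number of switches, and the per-switch cost is estimated from the difference between a long and a short run. This is our own illustration under stated assumptions, not the cited benchmark's code:

```python
# Two threads P1 and P2 ping-pong via events, generating roughly
# 2n context switches per run of ping_pong(n).

import threading
import time

def ping_pong(n):
    ev1, ev2 = threading.Event(), threading.Event()

    def p1():
        for _ in range(n):
            ev1.wait(); ev1.clear()   # wait for P2's signal
            ev2.set()                 # hand control back to P2

    def p2():
        for _ in range(n):
            ev1.set()                 # hand control to P1
            ev2.wait(); ev2.clear()   # wait for P1's reply

    t1 = threading.Thread(target=p1)
    t2 = threading.Thread(target=p2)
    start = time.perf_counter()
    t1.start(); t2.start()
    t1.join(); t2.join()
    return time.perf_counter() - start

# Per-switch cost in the spirit of Eq. (18): the extra time of the
# longer run divided by the extra number of switches.
n_small, n_large = 1000, 11000
t_small, t_large = ping_pong(n_small), ping_pong(n_large)
t_cs = (t_large - t_small) / (2 * (n_large - n_small))
print(f"approximate switch cost: {t_cs * 1e6:.1f} microseconds")
```

Note that in Python these are thread hand-offs mediated by the interpreter, so the measured cost is an upper bound on the kernel-level context switch time.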
The respective total execution times for the
benchmark in step 1 and step n are denoted
by $T_{step_1}$ and $T_{step_n}$ and are computed as
follows:

$$T_{step_1} = T_{1,1} + T_{cs} + T_{2,1} + T_{cs} + T_{1,2} \quad (16)$$

$$T_{step_n} = T_{1,1} + T_{cs} + T_{2,1} + T_{cs} + T_{1,2} + \dots = \sum_{1 \le i \le m} T_{1,i} + \sum_{1 \le j \le n} T_{2,j} + (2n) \times T_{cs} \quad (17)$$

where $m$ and $n$ represent the number of
sections of P1 and P2, respectively. The
context switching time $T_{cs}$ and the context
switch slowdown overhead $S_{cs}$ are calculated
as follows:

$$T_{cs} = \frac{T_{step_n} - T_{step_1}}{n - 1} \quad (18)$$

$$S_{cs} = \frac{S_{step_n} - S_{step_1}}{n - 1} \quad (19)$$
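Eq. (18) can be applied directly once the two total benchmark times are measured. The timings below are made-up values chosen only to make the arithmetic concrete:

```python
# Worked application of Eq. (18): the per-step context-switch cost is
# the difference between the step-n and step-1 totals over (n - 1).

def context_switch_time(t_step_n, t_step_1, n):
    """T_cs = (Tstep_n - Tstep_1) / (n - 1), per Eq. (18)."""
    return (t_step_n - t_step_1) / (n - 1)

# Assume step 1 took 10.0 ms in total and step 100 took 14.95 ms.
t_cs = context_switch_time(14.95, 10.0, 100)
print(round(t_cs, 3))   # 0.05 ms per additional step
```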
Figure 5. Tasks number vs. delay for (a) Q = 4 and Q = 100, (b) Q = 10 and Q = 100
6. CONCLUSION
In this paper, we described and presented the
basic concepts of private cloud design and
the virtual machine allocation problem. The
goal of the design is to model an easy-to-deploy
private cloud and to define the virtual machine
allocation problem in terms of set-theoretic
concepts. A simple but efficient rule-based
mapping algorithm has also been presented;
with this algorithm, cloud users can execute
their virtual machines efficiently with a limited
number of physical machines. Hence, this
approach leads to efficient utilization of the
available resources, facilitating maximum
computing with minimal physical data center
infrastructure.
REFERENCES
AEC. (n.d.). Amazon elastic compute cloud (Amazon
EC2). Retrieved from http://aws.amazon.com/ec2/
AWS, Amazon Web Services LLC. (2009). Amazon
elastic compute cloud (EC2). Retrieved from http://
aws.amazon.com/ec2/
Buyya, R., Yeo, S. C., & Venugopal, S. (2008).
Market-oriented cloud computing: Vision, hype, and
reality for delivering IT services as computing utili-
ties. In Proceedings of the 10th IEEE International
Conference on High Performance Computing and
Communications.
Buyya, R., Yeo, C. S., Venugopal, S., Broberg,
J., & Brandic, I. (2009). Cloud computing and
emerging IT platforms: Vision, hype, and reality
for delivering computing as the 5th utility. Future
Generation Computer Systems, 25(6), 599–616.
doi:10.1016/j.future.2008.12.001.
Calheiros, N. R., Buyya, R., & De Rose, F. A. C.
(2009). A heuristic for mapping virtual machines and
links in emulation testbeds. In Proceedings of the
9th International Conference on Parallel Process-
ing (ICPP) (pp. 518–525). Washington, DC: IEEE
Computer Society.
Chisnall, D. (2009). The definitive guide to the Xen
hypervisor. Prentice Hall Press.
Figure 6. Context switch benchmark (step 1)
Figure 7. Context switch benchmark (step n)
Chu, X., Nadiminti, K., Jin, C., Venugopal, S., &
Buyya, R. (2007). Next-generation enterprise grid
platform for e-science and e-business applications.
In Proceedings of the 3rd IEEE International Confer-
ence on e-Science and Grid Computing.
Kleineweber, C., Keller, A., Niehörster, O., &
Brinkmann, A. (2011). Rule-based mapping of
virtual machines in clouds. In Proceedings of the
2011 19th International Euromicro Conference on
Parallel, Distributed and Network-Based Processing
(pp. 527-534).
Malgaonkar, P., Koul, R., Thorat, P., & Zawar, M.
(2011). Mapping of virtual machines in private
cloud. International Journal of Computer Trends
and Technology, 2(2).
Mohanty, R., Das, M., & Lakshmi, P. M., Sudhashree.
(2011). Design and performance evaluation of a new
proposed fittest job first dynamic round robin (FJF-
DRR) scheduling algorithm. International Journal
of Computer Information Systems, 2(2).
Moschakis, I. A., & Karatza, H. D. (2010). Evalu-
ation of gang scheduling performance and cost in a
cloud computing system. Journal of Supercomput-
ing, 1-18. doi:10.1007/s11227-010-0481-4.
Nurmi, D., Wolski, R., Grzegorczyk, C., Obertelli,
G., Soman, S., Youseff, L., & Zagorodnov, D. (2009).
The Eucalyptus open-source cloud-computing
system. In Proceedings of the 9th IEEE/ACM
International Symposium on Cluster Computing and
the Grid (CCGRID '09) (pp. 124-131). Washington,
DC: IEEE Computer Society.
Popek, G. J., & Goldberg, R. P. (1974). Formal
requirements for virtualizable third generation
architectures. Communications of the ACM, 17(7),
412–421. doi:10.1145/361011.361073.
Sarathy, V., Narayan, P., & Mikkilineni, R. (2010).
Next generation cloud computing architecture:
Enabling real-time dynamism for shared distributed
physical infrastructure. Retrieved from www.kawaobjects.com/resources/PID1258479.pdf
Smith, J. E., & Nair, R. (2005). Virtual machines:
Versatile platforms for systems and processes. Mor-
gan Kaufmann.
Sotomayor, B., Montero, R. S., Llorente, I. M., &
Foster, I. (2009). Virtual infrastructure management
in private and hybrid clouds. Internet Computing,
IEEE, 13(5), 14–22. doi:10.1109/MIC.2009.119.
Sun Microsystems, Inc. (2009). Introduction to cloud
computing architecture. White Paper (1st ed.).
Upatissa, D. T., & Atukorale, A. (2012). Low cost
virtual lab environment to the university by using
cloud environment. In Proceedings of the Interna-
tional Conference on Computer Engineering and
Technology (ICCET 2012) (IPCSIT vol.40). Singa-
pore: IACSIT Press.
Abe, Y., & Gibson, G. (2010). pWalrus: Towards
better integration of parallel file systems into cloud
storage. In Proceedings of the Workshop on Inter-
faces and Abstractions for Scientific Data Storage
(IASDS10), Co-Located with IEEE International
Conference on Cluster Computing 2010 (Cluster10),
Heraklion, Greece.
ENDNOTES
1 http://aws.amazon.com/ec2/
2 https://developers.google.com/appengine/
3 http://www.apple.com/icloud/
4 Each VM includes its own kernel, operating
system, supporting libraries and applications.