Virtual Machine Placement for Elastic Infrastructures
in Overbooked Cloud Computing Datacenters Under Uncertainty
Fabio López-Pires a,b, Benjamín Barán b, Leonardo Benítez b, Saúl Zalimben b, Augusto Amarilla b
a Information Technology and Communications Center, Itaipu Technological Park, Hernandarias, Paraguay
b Polytechnic School, National University of Asunción, San Lorenzo, Paraguay
Abstract
Infrastructure as a Service (IaaS) providers must support requests for virtual resources in highly dynamic cloud computing envi-
ronments. Due to the randomness of customer requests, Virtual Machine Placement (VMP) problems should be formulated under
uncertainty. This work presents a novel two-phase optimization scheme for the resolution of VMP problems for cloud computing
under uncertainty of several relevant parameters, combining advantages of online and offline formulations in dynamic environ-
ments considering service elasticity and overbooking of physical resources. In this context, a formulation of a VMP problem is
presented, considering the optimization of the following four objective functions: (i) power consumption, (ii) economical revenue,
(iii) resource utilization and (iv) reconfiguration time. The proposed two-phase optimization scheme includes novel methods to de-
cide when to trigger a placement reconfiguration through migration of virtual machines (VMs) between physical machines (PMs)
and what to do with VMs requested during the placement recalculation time. An experimental evaluation against state-of-the-art
alternative approaches for VMP problems was performed considering 400 scenarios. Experimental results indicate that the pro-
posed methods outperform other evaluated alternatives, improving the quality of solutions in a scenario-based uncertainty model
considering the following evaluation criteria: (i) average, (ii) maximum and (iii) minimum objective function costs.
Keywords:
Virtual Machine Placement, Cloud Computing, Overbooking, Elasticity, Uncertainty, Incremental VMP, VMP Reconfiguration
1. Introduction
Achieving efficient resource management in cloud compu-
ting datacenters presents several research challenges, including
relevant topics in resource allocation [1]. This work focuses
on one of the most studied problems for resource allocation in
cloud computing datacenters: the process of selecting which re-
quested virtual machines (VMs) should be hosted at each avail-
able physical machine (PM) of a cloud computing infrastruc-
ture, commonly known as Virtual Machine Placement (VMP).
This work proposes a complex Infrastructure as a Service (IaaS)
environment for VMP problems, considering both service elas-
ticity [2] and overbooking of physical resources [3].
To the best of the authors’ knowledge, there is no published
work simultaneously taking into account elasticity and over-
booking, directly related to the most relevant dynamic param-
eters in the literature on uncertain VMP problem considering
multi-objective optimization. In order to model this complex
IaaS environment for VMP problems, cloud services (i.e. inter-
related VMs) are considered instead of isolated VMs [4].
It is worth remembering that VMP is a NP-Hard combina-
torial optimization problem [5]. From an IaaS provider per-
spective, the VMP problem is mostly formulated as an online
problem and must be solved with short time constraints [6].
Corresponding author
Email address: fabio.lopez@pti.org.py (Fabio López-Pires)
Online decisions made along the operation of a dynamic
cloud computing infrastructure negatively affect the quality of
obtained solutions in VMP problems when compared to of-
fline decisions [7]. In this context, offline algorithms present
a substantial advantage over online alternatives. Unfortunately,
offline formulations are not appropriate for highly dynamic en-
vironments for real-world IaaS providers, where cloud services
are requested dynamically according to current demand.
This work presents a two-phase optimization scheme, de-
composing the VMP problem into two different sub-problems,
combining advantages of online and offline VMP formulations
considering a complex IaaS environment. The presented op-
timization scheme for the VMP problem introduces novel me-
thods to decide when to trigger placement reconfigurations with
migration of VMs between PMs (defined as VMPr Triggering)
and what to do with cloud services requested during placement
recalculation times (defined as VMPr Recovering).
For IaaS customers, cloud computing resources often appear
to be unlimited and can be provisioned in any quantity at any
required time [8]. Consequently, this work considers a basic
federated-cloud deployment architecture for the VMP problem.
It is important to consider that more than 60 different objec-
tive functions have been proposed for VMP problems [6]. In
this context, the number of considered objective functions may
rapidly increase once a complete understanding of the VMP
problem is accomplished for practical problems, where seve-
ral different parameters should ideally be taken into account.
Consequently, a renewed formulation of the VMP problem is
presented, considering the optimization of the following four
objective functions: (i) power consumption, (ii) economical re-
venue, (iii) resource utilization and (iv) reconfiguration time.
Due to the randomness of customer requests, VMP problems
should be formulated under uncertainty [9]. This work presents
a scenario-based uncertainty approach for modeling uncertain
parameters, considering a two-phase optimization scheme for
VMP problems in the proposed complex IaaS environments.
An experimental evaluation against state-of-the-art alterna-
tive approaches for VMP problems was performed considering
80 different workloads in 5 different CPU load scenarios, total-
izing 400 experimental scenarios. Experimental results indicate
that the proposed VMPr Triggering and Recovering methods of
the presented two-phase optimization scheme outperform other
evaluated alternatives, improving the quality of solutions.
In summary, the main contributions of this paper are:
A first proposal for a complex IaaS environment for VMP
problems considering service elasticity, including both
vertical and horizontal scaling of cloud services, as well as
overbooking of physical resources, including server (CPU
and RAM) as well as networking resources [4].
A two-phase optimization scheme for VMP problems,
combining advantages of both online and offline VMP for-
mulations in the proposed IaaS environment, introducing a
prediction-based VMPr Triggering method to decide when
to trigger a placement reconfiguration (Research Question
1) as well as an update-based VMPr Recovering method to
decide what to do with VMs requested during placement
recalculation times (Research Question 2).
A first scenario-based uncertainty approach for modeling
the following relevant uncertain parameters of the pro-
posed complex IaaS environment: (i) virtual resources ca-
pacities (vertical elasticity), (ii) number of VMs that com-
pose cloud services (horizontal elasticity), (iii) utilization
of CPU and RAM memory virtual resources (relevant for
overbooking) and (iv) utilization of networking virtual re-
sources (also relevant for overbooking).
A first formulation of a VMP problem considering the
above mentioned contributions, for the optimization of the
following four objective functions: (i) power consump-
tion, (ii) economical revenue, (iii) resource utilization, as
well as (iv) placement reconfiguration time.
An experimental evaluation of the presented two-phase op-
timization scheme against state-of-the-art alternatives for
VMP problems, considering 400 different scenarios.
The remainder of this paper is structured in the following
way: preliminary concepts and research challenges addressed
in this work are introduced in Section 2, while related works
and motivation of this work are summarized in Section 3. Sec-
tion 4 presents the proposed uncertain VMP problem formula-
tion considering four objectives, while Section 5 presents de-
tails on the design and implementation of evaluated alternatives
to solve the proposed renewed formulation of the VMP pro-
blem. Experimental results are summarized in Section 6. Fi-
nally, conclusions and future work are left to Section 7.
2. Preliminary Concepts and Research Challenges
The following sub-sections introduce relevant concepts re-
lated to the considered IaaS environments for VMP problems,
a brief motivation for decomposing the VMP problem into two
different sub-problems in a two-phase optimization scheme as
well as uncertainty issues related to resource allocation in cloud
computing. Additionally, the main challenges and research
questions addressed in this work are also briefly introduced.
2.1. IaaS Environments for VMP Problems
In real-world environments, IaaS providers dynamically re-
ceive requests for the placement of cloud services with dier-
ent characteristics according to different dynamic parameters.
In this context, preliminary results of the authors identified that
the most relevant dynamic parameters in the VMP literature are
[4]: (i) resource capacities of VMs (associated to vertical elas-
ticity) [10], (ii) number of VMs of a cloud service (associated
to horizontal elasticity) [11] and (iii) utilization of resources of
VMs (relevant for overbooking) [12]. Considering the above
mentioned dynamic parameters, environments for IaaS formu-
lations of provider-oriented VMP problems could be classified
by one or more of the following classification criteria: (i) ser-
vice elasticity and (ii) overbooking of physical resources [4]. A
cloud service may represent virtual infrastructures for basic ser-
vices such as Domain Name Service (DNS), web applications
or even elastic applications such as MapReduce programs [4].
An elastic cloud service could request additional resources
to scale-up or scale-out the applications’ resources to be able to
support current demand, where IaaS providers should be able to
satisfy these requirements accordingly. From an IaaS provider
perspective, elastic cloud services are usually considered more
important than non-elastic ones. Different IaaS environments
could be formulated considering one of the following service
elasticity alternatives: no elasticity, horizontal elasticity, verti-
cal elasticity or both horizontal and vertical elasticity [4].
Additionally, resources of VMs are dynamically used, giving
space to re-utilization of idle resources that were already re-
served. In this context, IaaS environments identified in [4] may
also consider one of the following overbooking alternatives: no
overbooking, server resources overbooking, network resources
overbooking or both server and network overbooking.
This work formulates a VMP problem taking into account the
most complex IaaS environment identified in [4], that considers
both types of service elasticity and both types of overbooking
of physical resources. To the best of the authors’ knowledge,
there is no published work considering this complex dynamic
environment [4]. IaaS providers efficiently solving VMP pro-
blems in this complex dynamic environment could represent a
considerable advance on this research area and consequently
cloud computing datacenters will be able to adapt according to
trends of requirements with sufficient flexibility and efficiency.
2.2. Two-Phase Optimization Schemes for VMP problems
The VMP could be formulated as both online and offline
optimization problems [6]. A VMP problem formulation is
considered to be online when solution techniques (e.g. heuris-
tics) dynamically make decisions on-the-fly [13]. On the other
hand, if solution techniques solve a VMP problem considering
a static environment where VM requests do not change over
time, the VMP problem formulation is considered to be offline
[14]. Considering the on-demand model of cloud computing
with dynamic resource provisioning and dynamic workloads of
cloud applications [8], the resolution of VMP problems should
be performed as fast as possible in order to be able to support
these dynamic requirements. In this context, the VMP problem
for basic IaaS environments was mostly studied in the specia-
lized literature considering online formulations, knowing that
VM requests change according to current demand [6].
It is important to consider that online decisions made along
the operation of a cloud computing infrastructure negatively af-
fect the quality of obtained solutions of VMP problems when
compared to offline decisions [7]. Clearly, offline algorithms
present a substantial advantage over online alternatives, when
considering the quality of obtained solutions. This advantage is
reported in the literature for the following two main reasons: (i)
an offline algorithm solves a VMP problem considering a static
environment where VM requests do not change over time and
(ii) it considers migration of VMs between PMs, reconfiguring
the placement when convenient.
To improve the quality of solutions obtained by online algo-
rithms, the VMP problem could be formulated as a two-phase
optimization problem, combining advantages of online and of-
fline formulations for IaaS environments [7]. In this context,
VMP problems could be decomposed into two different sub-
problems: (i) incremental VMP (iVMP) and (ii) VMP reconfi-
guration (VMPr). This two-phase optimization scheme com-
bines both online (iVMP) and offline (VMPr) algorithms for
solving each considered VMP sub-problem (see Figure 1).
The iVMP sub-problem is considered for dynamic arriving
requests, where VMs could be created, modified and removed
at runtime. Consequently, this sub-problem should be formu-
lated as an online problem and solved with short time con-
straints, where existing heuristics could be reasonably appro-
priate. Additionally, the VMPr sub-problem is considered for
improving the quality of solutions obtained in the iVMP phase,
reconfiguring the placement through VM migration. This sub-
problem could be formulated offline, where alternative solution
techniques could be more suitable (e.g. meta-heuristics).
The VMPr phase is triggered according to a given VMPr Tri-
ggering method. Once the VMPr is triggered, the placement
of VMs at discrete time t is recalculated during β discrete time
slots (i.e. recalculation time). In Figure 1, β = 2, from t = 2
to t = 4. It is important to notice that the recalculated place-
ment is potentially obsolete, considering the offline nature of
the VMPr phase. In fact, while the VMPr is making its calcu-
lation, the iVMP still may receive and serve arriving requests,
making obsolete the VMPr calculated solution; therefore, the
recalculated placement must be recovered accordingly using a
VMPr Recovering method, before complete reconfiguration is
performed. The recovering process as well as the migration of
VMs are performed in γ discrete time slots (i.e. reconfiguration
time), where γ may vary according to the maximum amount of
RAM to be migrated. In Figure 1, γ = 1, from t = 4 to t = 5.
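The control flow of this two-phase scheme can be sketched in a few lines of Python. The sketch below is illustrative only: the helpers ivmp_step, vmpr_solve, should_trigger and recover are hypothetical placeholders for the methods discussed in this work, and the metric stored in history is a placeholder.

```python
def two_phase_vmp(requests_per_slot, t_max, ivmp_step, vmpr_solve,
                  should_trigger, recover, beta=2):
    """Illustrative control loop of the iVMP + VMPr scheme.

    ivmp_step(placement, requests) -> placement  # online phase, runs every slot
    vmpr_solve(placement)          -> placement  # offline recalculation (beta slots)
    should_trigger(history)        -> bool       # VMPr Triggering method
    recover(recalculated, log)     -> placement  # VMPr Recovering method
    """
    placement = {}                       # current placement x(t): VM id -> PM id
    history = []                         # per-slot information used for triggering
    recalculated, log, finish_at = None, [], None

    for t in range(t_max):
        requests = requests_per_slot.get(t, [])
        placement = ivmp_step(placement, requests)      # iVMP serves every request online

        if recalculated is None and should_trigger(history):
            recalculated = vmpr_solve(dict(placement))  # VMPr starts at time t
            log, finish_at = [], t + beta               # recalculation lasts beta slots
        elif recalculated is not None:
            log.extend(requests)                        # requests arriving during the VMPr
            if t >= finish_at:
                # VMPr Recovering: update the (potentially obsolete) recalculated
                # placement with the logged requests; migrations then take gamma slots.
                placement = recover(recalculated, log)
                recalculated, log, finish_at = None, [], None

        history.append(len(placement))                  # placeholder triggering metric
    return placement
```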
Based on the literature review to be summarized in Section
3, the considered iVMP + VMPr optimization scheme has been
briefly studied in the VMP literature. Consequently, several
challenges for IaaS environments remain unaddressed or could
be improved, considering that only basic methods have been
proposed. This work identifies two main research questions re-
lated to the considered two-phase optimization scheme, speci-
fically for VMPr Triggering and VMPr Recovering methods:
Figure 1: Two-phase optimization scheme for VMP problems considered in this work, presenting a basic example with a placement recalculation time of β = 2
(from t = 2 to t = 4) and a placement reconfiguration time of γ = 1 (from t = 4 to t = 5).
Table 1: Summary of IaaS environments and VMPr methods already studied in related works. N/A indicates a Not Applicable criterion.
Reference Overbooking Type Elasticity Type VMPr Triggering VMPr Recovering
[15] CPU Not Considered Periodically Cancellation
[16] Not Considered Not Considered Periodically Not Considered
[17] Not Considered Not Considered Periodically Not Considered
[18] Not Considered Not Considered Periodically Not Considered
[19] CPU and RAM Not Considered Periodically Not Considered
[20] Not Considered Not Considered Periodically Not Considered
[21] Not Considered Not Considered Continuously Not Considered
[22] CPU Not Considered Threshold-based N/A
[23] CPU, RAM and Network Not Considered Threshold-based N/A
[24] CPU Horizontal Threshold-based N/A
This work CPU, RAM and Network Vertical and Horizontal Prediction-based Update-based
Research Question 1 (RQ1): when or under which cir-
cumstances the VMPr phase should be triggered? (VMPr
Triggering method).
Research Question 2 (RQ2): what should be done with
cloud service requests arriving during recalculation time
in the VMPr phase? (VMPr Recovering method).
This work proposes novel VMPr Triggering and VMPr Re-
covering methods to decide when to trigger a placement recon-
figuration with migration of VMs between PMs and what to do
with cloud services requested during recalculation times. The
proposed methods were evaluated against existing state-of-the-
art approaches (see Table 1), considering 400 scenarios.
2.3. Uncertainty in Cloud Computing
Extensive research of uncertainty issues could be found in
several fields such as: computational biology and decision ma-
king in economics, just to cite a few. Particularly, studies of
uncertainty for cloud computing are limited and uncertainty in
resource allocation has not been adequately addressed, repre-
senting several relevant research challenges [25].
According to Tchernykh et al. [25], uncertainties in cloud
computing could be grouped into: (i) parametric and (ii) system
uncertainties. Parametric uncertainties may represent incom-
plete knowledge and variation of parameters, as presented in
the considered uncertain VMP problem. The analysis of these
uncertainties may measure the effect of random parameters on
model outputs. On the other hand, system uncertainties may
represent incomplete understanding of the processes that con-
trol service provisioning (e.g. when the conceptual model of
the system used for service provisioning does not include all
the relevant processes), which is not the case of this work.
Research challenges in the context of this work include de-
signing novel resource management strategies to handle uncer-
tainty in an effective way. IaaS providers must support requests
for virtual resources in highly dynamic environments. Due to
the randomness of customer requests, algorithms for solving
VMP problems should be evaluated under uncertainty, consi-
dering several relevant uncertain parameters. This work ana-
lyzes the following uncertain parameters: (i) virtual resource
capacities (vertical elasticity), (ii) number of VMs that compose
cloud services (horizontal elasticity), (iii) utilization of CPU
and RAM memory virtual resources (relevant for overbooking)
and (iv) utilization of networking virtual resources (also rele-
vant for overbooking).
In this work, uncertainty is modeled through a finite set of
well-defined scenarios S[26] (explained in Section 6.1). When
parameters are uncertain, it is important to find solutions that
are acceptable for each considered scenario. Consequently, the
main objective is not to find an absolute optimum for a unique
still unknown scenario but rather solutions that behave well
enough under modeled uncertainties. For this purpose, several
criteria can be applied to select among solutions such as: (i)
average, (ii) maximum and (iii) minimum objective costs.
3. Related Works and Motivation
Chaisiri et al. studied in [27, 28] broker-oriented VMP pro-
blems under future demand and price uncertainty. To the best
of the authors’ knowledge, there is no published work conside-
ring uncertainty of parameters for provider-oriented VMP pro-
blem formulations. Consequently, the following related works
mainly focus on describing considered IaaS environments that
proposed the utilization of two-phase optimization schemes for
the VMP problem, as well as already proposed VMPr Trigge-
ring and VMPr Recovering methods, when applicable. A sum-
mary of considered related works is presented in Table 1.
Calcavecchia et al. studied in [15] a practical model of
cloud service placement for a stream (or workload) of requests
where inter-related VMs are created and destroyed, considering
CPU overbooking and static reservation of VMs resources. The
mentioned cloud service placement model is composed of two
phases: (i) continuous deployment (or iVMP) and (ii) ongoing
optimization (or VMPr). The continuous deployment is per-
formed by a Best-Fit Decreasing (BFD) heuristic while a Back-
ward Speculative Placement (BSP) is performed in the ongoing
optimization phase. To improve a current placement, the ongo-
ing optimization is periodically triggered for the duration of the
workload and canceled whenever a new request is received.
To ensure quick responses to VMP requests while improving
energy efficiency, Yue et al. proposed in [16] a two-phase opti-
mization strategy, where VMs are deployed at runtime and con-
solidated periodically. The placement of VMs is performed us-
ing an Improved Multidimensional Space Partition Model (IM-
SPM). Along with the IMSPM, a Modified Energy Efficient Al-
gorithm with balanced resource utilization (MEAGLE) is used
to deploy VMs as the first phase of the optimization (or iVMP).
To perform the consolidation (or VMPr), a Live Migration Al-
gorithm Based on a Basic Set (LMABBS) is presented. The
arrival of VMs during the VMPr is not considered in this work
and consequently, neither VMPr Recovering method nor over-
booking or elasticity are considered.
Considering the computational complexity of the VMP (NP-
Hard problem), a decentralized decision is proposed by Feller et
al. in [17], based on a Peer-to-Peer (P2P) communication model
among PMs. To accomplish a consolidation of PMs to min-
imize power consumption, the mentioned work explored me-
thods to allocate and migrate VMs in the minimum number of
PMs. An incremental allocation is performed using a First-Fit
Decreasing (FFD) heuristic to allocate requested VMs at run-
time (for iVMP). A Virtual Machine Consolidation (VMC) pro-
cedure using a cost-aware Ant Colony Optimization (ACO) is
executed periodically to consolidate VMs (for VMPr).
Li et al. proposed in [18] a hybrid approach for the VMP pro-
blem. As a first phase, incoming VM requests are grouped into
a queue of requests to gather information about their require-
ments. Once the queue is full, an offline approach is conside-
red for the VMP, taking into account collected information. A
Migration-Based Virtual Machine Placement (MBVMP) is pro-
posed (for VMPr), considering the migration time when plan-
ning allocation of VMs and migration of already allocated re-
quests to consolidate VMs and release resources. The VMPr is
executed every time the queue is full.
In [19], Farahnakian et al. proposed a Self-Adaptive Re-
source Management System (SARMS) considering an Adap-
tive Utilization Threshold (AUT) mechanism to classify PMs as
overloaded or underloaded. To allocate incoming VM requests
(during iVMP) it uses a Best-Fit Decreasing (BFD) heuristic.
The proposed SARMS triggers a VMP optimization algorithm
(or VMPr) periodically. The VMPr has two steps, migration of
VMs from overloaded PMs and consolidation of VMs to release
resources of underloaded PMs, switching them to sleep mode.
Since the utilization of resources is considered to define if a
PM is capable of allocating a VM request, overbooking is sup-
ported for CPU and RAM memory resources. Requested VM
resources do not change during the cloud service life-cycle.
According to Zheng et al. in [20], the VMP problem
can be divided into two sub-problems: incremental placement
(VMiP) and placement consolidation (VMcP). A Best-Fit (BF)
heuristic is considered for the incremental placement phase (or
iVMP). Additionally, a VMP Biogeography-Based Optimiza-
tion (VMPBBO) is proposed to optimize resources wastage and
power consumption in the consolidation process (or VMPr).
The VMPr is triggered periodically and no mention is made to
VM request arrival during consolidation process. Finally, nei-
ther overbooking nor elasticity are considered.
Svärd et al. studied in [21] a resource management system
for continuous datacenter consolidation, based on a combina-
tion of management actions like suspend / resume of PMs and
VMs as well as live migration of VMs. The behavior of the
proposed solution follows a set of prioritized events: (i) VM ar-
rival, (ii) VM exit and (iii) PM crash. An incremental placement
of VMs (or iVMP) considers a Best-Fit (BF) heuristic to find
an appropriate PM to host requested VMs. The VMPr could be
executed in two cases: (i) after total allocation of a list of VM
requests at each discrete time or (ii) when the continuous iVMP
does not find any PM to host a VM request. The arrival of VM
requests during the VMPr process is not considered; neither
overbooking nor elasticity are studied.
Beloglazov et al. identified in [22] two stages for the VMP
problem: (i) initial admission of VMs and (ii) optimization of
the current placement. For the admission of VMs (or iVMP) a
Modified Best-Fit Decreasing (MBFD) algorithm is considered,
using the CPU utilization of VMs to sort a list of VM requests
and allocate each VM into a PM that provides the minimum in-
crement in power consumption. Additionally, the optimization
of the current placement (or VMPr) is triggered whenever an
overloaded or underloaded PM is detected, according to well-
defined CPU utilization thresholds. In this case, the VMPr runs
distributively for each overloaded or underloaded PM to mi-
grate VMs from overloaded PMs until each PM is appropriately
loaded, consolidating VMs from underloaded PMs to decrease
the number of running PMs to the minimum possible number.
It is important to consider that this threshold-based triggering
represents a decentralized decision process, relaxing the com-
putational complexity of the VMP problem. Consequently, it
is not necessary to consider the arrival of VM requests during
the reconfiguration because no offline centralized decision is
performed. Considering the VMPr, a selection process is per-
formed to determine which VMs should be migrated (all in
case of underloaded PMs). Selected VMs are allocated by the
MBFD algorithm into PMs considering CPU overbooking.
Shi et al. proposed in [23] an online VMP formulation with a
two-phase algorithm called Two-Phase Online Virtual Machine
Placement (TPOVMP). Multiple resources of PMs and VM re-
quests are represented as vectors. The first phase of the place-
ment algorithm (or iVMP), called PM Type Selection, assigns
a PM type to the VM request based on a Cosine Similarity (CS)
between their vector representations. Then, the VMs are cate-
gorized by requested resources per PM type. The second phase
allocates categorized VMs into PMs of the assigned type. In the
second phase, reconfiguration of the VMs (or VMPr) is trig-
gered by well-defined utilization thresholds of PM resources.
The VMPr does not consider handling VM requests that arrive
during a reconfiguration process. Overbooking of all resources
is considered, but no type of elasticity is taken into account.
Tighe et al. proposed in [24] an approach to jointly con-
sider auto-scaling and dynamic VM allocation in cloud envi-
ronments. Cloud environments are modeled as workloads of
cloud service requests where CPU overbooking and horizontal
elasticity are considered. A proposed auto-scaling algorithm
considers the following parameters to trigger elasticity mana-
gement actions: (i) CPU utilization of PMs, (ii) requested /
utilized resources of VMs and (iii) SLA metrics. The auto-
scaling algorithm proposed in [24] is included as part of a
Dynamic VM Allocation (DVMA) algorithm. First, the pro-
posed DVMA considers a Best-Fit Decreasing (BFD) algorithm
to select an appropriate PM to host an incoming VM request
(iVMP). Whenever an overloaded or underloaded PM is de-
tected, a VMPr algorithm is triggered, similarly to [22].
In summary, most of the related works that consider IaaS
environments with overbooking are limited to CPU resources.
Only [23] considered overbooking for all available resources, as
proposed in this work. Additionally, studied IaaS environments
with elasticity are limited to horizontal elasticity [24], while
this work considers both vertical and horizontal elasticity.
According to the studied articles (see Table 1), existing
VMPr Triggering methods may be classified as: (i) periodi-
cal and (ii) threshold-based. Periodically triggering the VMPr
could present disadvantages when defining a fixed reconfigu-
ration period (e.g. every 10 minutes) because reconfigurations
may be required before the established time or in certain cases
the reconfiguration may not be necessary. For threshold-based
approaches, thresholds are defined in terms of utilization of re-
sources (e.g. CPU) without a complete knowledge of global
optimization objectives. This work proposes a prediction-based
approach for a novel VMPr Triggering method, statistically
analyzing the objective function costs and proactively detect-
ing requirements for triggering the VMPr (see Section 5.3).
Additionally, most of the studied works do not consider any
VMPr Recovering method, when applicable. Only Calcavec-
chia et al. studied in [15] a very basic approach, canceling
the VMPr whenever a new request is received. Consequently,
the VMPr is only performed in periods with no requests, which
could be unrealistic, especially for highly loaded IaaS envi-
ronments. On the other hand, this work proposes a novel VMPr
Recovering method based on updating the potentially obsolete
placement recalculated in the VMPr phase with the required
cloud services created, modified and removed during the recal-
culation time (see Section 5.4).
4. Uncertain VMP Formulation
This section presents a formulation of the VMP problem un-
der uncertainty considering a two-phase scheme for the opti-
mization of the following objective functions: (i) power con-
sumption, (ii) economical revenue, (iii) resource utilization and
(iv) placement reconfiguration time. According to the taxon-
omy presented in [6], this work focuses on a provider-oriented
VMP for federated-cloud deployments, considering a combina-
tion of two types of formulations: (i) online (i.e. iVMP) and (ii)
offline (i.e. VMPr).
An online problem formulation is considered when inputs of
the problem change over time and the algorithm does not have
the entire input set available from the start (e.g. online heuris-
tics) [13]. On the other hand, if inputs of the problem do not
change over time, the formulation is considered to be offline
(e.g. Memetic Algorithms (MAs) proposed in [29] and [30]).
As previously discussed in Section 2.2, the VMP problem
could be formulated as a two-phase optimization problem, com-
bining advantages of online and offline formulations for IaaS
environments. In this context, VMP problems could be decom-
posed into two different sub-problems: (i) incremental VMP
(iVMP) and (ii) VMP reconfiguration (VMPr).
The VMP problem proposed in this work takes into account
a complex IaaS environment that considers service elasticity,
including both vertical and horizontal scaling of cloud services,
as well as overbooking of physical resources, including both
server and networking resources, as identified in [4].
The following sub-sections summarize the complex IaaS en-
vironment for VMP problems considered in this work, as well
as formal definitions of both iVMP and VMPr sub-problems.
4.1. Complex IaaS Environment
Real-world IaaS environments include several different types
of both physical and virtual resources. Consequently, the VMP
problem should be formulated as a multi-dimensional VMP
problem, such as studied in [31, 32, 33].
The proposed formulation of the VMP problem models a
complex IaaS environment, composed of available PMs and
VMs requested at each discrete time t, considering the follo-
wing information as input data for the proposed VMP problem:
a set of n available PMs and their specifications (1);
a set of m(t) VMs requested, at each discrete time t, and
their specifications (2);
information about the utilization of resources of each ac-
tive VM at each discrete time t (3);
the current placement at each discrete time t (i.e. x(t)) (4).
The proposed iVMP and VMPr sub-problems consider dif-
ferent sub-sets of the above mentioned input data, as presented
later in Sections 4.2.1 and 4.3.1.
The set of PMs owned by the IaaS provider is represented as
a matrix H ∈ R^{n×(r+2)}, as presented in (1). Each PM H_i is rep-
resented by r different physical resources. This work considers
r = 3 physical resources (Pr_1 to Pr_3): CPU [EC2 Compute
Unit (ECU)], RAM [GB] and network capacity [Mbps]. The
maximum power consumption [W] is also considered. It is im-
portant to mention that the proposed notation is general enough
to include more characteristics associated to physical resources,
such as Solid State Drive (SSD), Graphical Processing Unit (GPU)
or storage, just to cite a few. Finally, considering that an IaaS
provider could own more than one cloud datacenter, the PM nota-
tion also includes a datacenter identifier c_i, i.e.

H =
\begin{pmatrix}
Pr_{1,1} & \dots & Pr_{r,1} & pmax_1 & c_1 \\
\vdots & \ddots & \vdots & \vdots & \vdots \\
Pr_{1,n} & \dots & Pr_{r,n} & pmax_n & c_n
\end{pmatrix} \qquad (1)

where:
Pr_{k,i}: Physical resource k on H_i, where 1 ≤ k ≤ r;
pmax_i: Maximum power consumption of H_i in [W];
c_i: Datacenter identifier of H_i, where 1 ≤ c_i ≤ c_max;
n: Total number of PMs.
Clearly, the set of PMs H could be modeled as a function of
time t, considering PM crashes [21], maintenance or even de-
ployment of new hardware. The mentioned modeling approach
for PMs is out of the scope of this work and its particular con-
siderations are left as future work (see Section 7).
In peak demand situations where the IaaS provider cannot
provide requested resources, a basic cloud federation to which
the over-demand can be assigned is considered. Formulations with so-
phisticated federation approaches are also left as future work.
In the complex environment considered in this work, the IaaS
provider dynamically receives requests of cloud services for
placement (i.e. a set of inter-related VMs) at each discrete time
t. A cloud service S_b is composed of a set of VMs, where each
VM may be located for execution in different cloud computing
datacenters according to customer preferences or requirements
(e.g. legal issues or high-availability, just to cite a few).
The set of VMs requested by customers at each discrete time
t is represented as a matrix V(t) ∈ R^{m(t)×(r+2)}, as presented in
(2). In this work, each VM V_j requires r = 3 different vir-
tual resources (Vr_{1,j}(t) to Vr_{3,j}(t)): CPU [ECU], RAM memory
[GB] and network capacity [Mbps]. Additionally, a cloud ser-
vice identifier b_j is considered, as well as an economical reve-
nue R_j [$] associated to each VM V_j. As mentioned before, the
proposed notation could represent any other set of r resources.
The requested VMs try to lease the requested virtual re-
sources for an unknown period of discrete time.
V(t) =
\begin{pmatrix}
Vr_{1,1}(t) & \dots & Vr_{r,1}(t) & b_1 & R_1(t) \\
\vdots & \ddots & \vdots & \vdots & \vdots \\
Vr_{1,m(t)}(t) & \dots & Vr_{r,m(t)}(t) & b_{m(t)} & R_{m(t)}(t)
\end{pmatrix} \qquad (2)

where:
Vr_{k,j}(t): Virtual resource k on V_j, where 1 ≤ k ≤ r;
b_j: Service identifier of V_j;
R_j(t): Economical revenue for allocating V_j in [$] at instant t;
m(t): Number of VMs at each discrete time t, where 1 ≤ m(t) ≤ m_max;
m_max: Maximum number of VMs.
Once a VM V_j is powered-off by a customer, its virtual re-
sources are released, so the IaaS provider can reuse them. For
simplicity, in what follows the index j is not reused.
In order to model a dynamic VMP environment taking into
account both vertical and horizontal elasticity of cloud services,
the set of requested VMs V(t) may include the following types
of requests for cloud service placement at each time t:
cloud service creation: where new cloud services S_b,
composed of one or more VMs V_j, are created. Conse-
quently, the number of VMs at each discrete time t (i.e.
m(t)) is a function of time;
scale-up / scale-down of VM resources: where one or
more VMs V_j of a cloud service S_b increase (scale-up) or
decrease (scale-down) their capacities of virtual resources
with respect to current demand (vertical elasticity). In or-
der to model these considerations, virtual resource capac-
ities of a VM V_j (i.e. Vr_{1,j}(t) to Vr_{3,j}(t)) are a function of
time, as well as the associated economical revenue (R_j(t));
cloud service scale-out / scale-in: where a cloud service
S_b increases (scale-out) or decreases (scale-in) the number
of associated VMs according to current demand (horizon-
tal elasticity). Consequently, the number of VMs V_j in a
cloud service S_b at each discrete time t, denoted as m_{Sb}(t),
is a function of time;
cloud service destruction: where virtual resources of
cloud services S_b, composed of one or more VMs V_j, are
released.
In most situations, virtual resources requested by cloud ser-
vices are dynamically used, giving space to re-utilization of idle
resources that were already reserved. Information about the uti-
lization of virtual resources at each discrete time t is required
in order to model a dynamic VMP environment where IaaS
providers consider overbooking of both server and networking
physical resources.
Resource utilization of each VM V_j at each discrete time t is
represented as a matrix U(t) ∈ R^{m(t)×r}, as presented in (3):

U(t) =
\begin{pmatrix}
Ur_{1,1}(t) & \dots & Ur_{r,1}(t) \\
\vdots & \ddots & \vdots \\
Ur_{1,m(t)}(t) & \dots & Ur_{r,m(t)}(t)
\end{pmatrix} \qquad (3)

where:
Ur_{k,j}(t): Utilization ratio of Vr_k(t) in V_j at each discrete time t.
The current placement of VMs into PMs (x(t)) represents
VMs requested in the previous discrete time t−1 and assigned to
PMs; consequently, the dimension of x(t) is based on the num-
ber of VMs m(t−1). Formally, the placement at each discrete
time t is represented as a matrix x(t) ∈ {0,1}^{m(t−1)×n}, as defined
in (4):

x(t) =
\begin{pmatrix}
x_{1,1}(t) & x_{1,2}(t) & \dots & x_{1,n}(t) \\
\vdots & \vdots & \ddots & \vdots \\
x_{m(t-1),1}(t) & x_{m(t-1),2}(t) & \dots & x_{m(t-1),n}(t)
\end{pmatrix} \qquad (4)

where:
x_{j,i}(t) ∈ {0,1}: indicates if V_j is allocated (x_{j,i}(t) = 1) or not
(x_{j,i}(t) = 0) for execution on a PM H_i at a
discrete time t (i.e., x_{j,i}(t) : V_j → H_i).
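For concreteness, the input data (1)-(4) could be held as plain NumPy arrays, as in the following illustrative sketch; the numeric values are arbitrary examples and are not part of the formulation.

```python
import numpy as np

n, r = 3, 3        # n PMs; r = 3 resources: CPU [ECU], RAM [GB], network [Mbps]

# H in R^{n x (r+2)}: physical resources, maximum power pmax_i [W], datacenter id c_i (1)
H = np.array([[8.0,  16.0, 1000.0, 250.0, 1],
              [8.0,  16.0, 1000.0, 250.0, 1],
              [16.0, 32.0, 2000.0, 400.0, 2]])

# V(t) in R^{m(t) x (r+2)}: virtual resources, service id b_j, revenue R_j(t) [$] (2)
V_t = np.array([[2.0, 4.0, 100.0, 1, 0.5],
                [4.0, 8.0, 200.0, 1, 1.2]])

# U(t) in R^{m(t) x r}: utilization ratio of each virtual resource (3)
U_t = np.array([[0.6, 0.3, 0.1],
                [0.9, 0.7, 0.4]])

# x(t) in {0,1}^{m(t-1) x n}: current placement of previously requested VMs (4)
x_t = np.array([[1, 0, 0],
                [0, 1, 0]])
```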
4.2. Incremental VMP (iVMP)
In online algorithms for solving the proposed VMP problem,
placement decisions are performed at each discrete time t. The
formulation of the proposed iVMP (online) problem is based on
[7] and could be formally enunciated as:
Given a complex IaaS environment composed of a set of PMs
(H), a set of active VMs already requested before time t (V(t)),
and the current placement of VMs into PMs (i.e. x(t)), it is
sought an incremental placement of V(t) into H for the discrete
time t+1 (x(t+1)) without migrations, satisfying the problem
constraints and optimizing the considered objective functions.
4.2.1. Input Data for iVMP
The proposed formulation of the iVMP problem receives the
following information as input data:
a set of n available PMs and their specifications (1);
a dynamic set of m(t) requested VMs (already allocated
VMs plus new requests) and their specifications (2);
information about the utilization of resources of each ac-
tive VM at each discrete time t (3);
the current placement at each discrete time t (i.e. x(t)) (4).
4.2.2. Output Data for iVMP
The result of the iVMP phase at each discrete time t is an
incremental placement ∆x(t) for the next time instant in such a
way that x(t+1) = x(t) + ∆x(t). Clearly, the placement at t+1 is
represented as a matrix x(t+1) ∈ {0,1}^{m(t)×n}, as defined in (5):

x(t+1) =
\begin{pmatrix}
x_{1,1}(t+1) & x_{1,2}(t+1) & \dots & x_{1,n}(t+1) \\
\vdots & \vdots & \ddots & \vdots \\
x_{m(t),1}(t+1) & x_{m(t),2}(t+1) & \dots & x_{m(t),n}(t+1)
\end{pmatrix} \qquad (5)
Formally, the placement for the next time instant x(t+1) is
a function of the current placement x(t) and the active VMs at
discrete time t, i.e.:
x(t+1) = f[x(t), V(t)] \qquad (6)
4.3. VMP Reconfiguration (VMPr)
An offline algorithm solves a VMP problem considering a
static environment where VM requests do not change over time
and considers migration of VMs between PMs. The formu-
lation of the proposed VMPr (offline) problem is based on
[29, 30] and could be enunciated as:
Given a current placement of VMs into PMs (x(t)), it is sought
a placement reconfiguration through migration of VMs between
PMs for the discrete time t (i.e. x′(t)), satisfying the constraints
and optimizing the considered objective functions.
4.3.1. Input Data for VMPr
The proposed formulation of the VMPr problem receives the
following information as input data:
a set of n available PMs and their specifications (1);
information about the utilization of resources of each ac-
tive VM at discrete time t (3);
the current placement at discrete time t (i.e. x(t)) (4).
4.3.2. Output Data for VMPr
The result of the VMPr problem is a placement reconfigura-
tion through migration of VMs between PMs for the discrete
time t (i.e. x′(t)), represented by:
a placement reconfiguration of x(t), i.e. x′(t) (4);
4.4. Constraints
4.4.1. Constraint 1: Unique Placement of VMs
A VM V_j should be allocated to run on a single PM H_i or
alternatively located in another federated IaaS provider. Conse-
quently, this placement constraint is expressed as:

\sum_{i=1}^{n} x_{j,i}(t) \leq 1 \qquad (7)

for all j ∈ {1, ..., m(t)}, i.e. for every VM V_j,

where:
x_{j,i}(t) ∈ {0,1}: Indicates if V_j is allocated (x_{j,i}(t) = 1) or
not (x_{j,i}(t) = 0) for execution on a PM H_i
(i.e., x_{j,i}(t) : V_j → H_i) at a discrete time t;
n: Total number of PMs;
m(t): Number of VMs at each discrete time t, where 1 ≤ m(t) ≤ m_max.
It should be mentioned that from an IaaS provider perspec-
tive, elastic cloud services are usually considered more impor-
tant than non-elastic ones. Consequently, resources of elastic
cloud services are most of the time allocated with higher prior-
ity than non-elastic ones, which is usually reflected in the con-
tracts between an IaaS provider and each customer.
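A direct check of constraint (7) over a placement matrix could be written as the following minimal NumPy sketch (function and variable names are illustrative); a row summing to zero corresponds to a VM served by the cloud federation.

```python
import numpy as np

def satisfies_unique_placement(x_t: np.ndarray) -> bool:
    """Constraint (7): each VM V_j runs on at most one local PM H_i;
    a row summing to 0 means V_j is served by the cloud federation."""
    return bool((x_t.sum(axis=1) <= 1).all())

# VM 0 on PM 0, VM 1 on PM 2, VM 2 delegated to a federated provider.
x_t = np.array([[1, 0, 0],
                [0, 0, 1],
                [0, 0, 0]])
print(satisfies_unique_placement(x_t))   # True
```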
4.4.2. Constraints 2-4: Overbooked Resources of PMs
A PM H_i must have sufficient available resources to meet
the dynamic requirements of all VMs V_j that are allocated to
run on H_i. It is important to remember that resources of VMs
are dynamically used, giving space to re-utilization of idle re-
sources that were already reserved. Re-utilization of idle re-
sources could represent higher risk of unsatisfied demand in
case utilization of resources increases in a short period of time.
Therefore, providers need to reserve a percentage of idle re-
sources as a protection (defined by a protection factor λ_k) in
case overbooking is used. These constraints are formulated as:

\sum_{j=1}^{m(t)} x_{j,i}(t) \left[ Vr_{k,j}(t) \times Ur_{k,j}(t) + \lambda_k \, Vr_{k,j}(t) \left( 1 - Ur_{k,j}(t) \right) \right] \leq Pr_{k,i} \qquad (8)

for every time slot t, i ∈ {1, ..., n} and k ∈ {1, ..., r}, i.e. for
each PM H_i and for each of the r considered physical resources,

where:
m(t): Number of VMs at each discrete time t, where 1 ≤ m(t) ≤ m_max;
Vr_{k,j}(t): Virtual resource k on V_j, where 1 ≤ k ≤ r;
Ur_{k,j}(t): Utilization ratio of Vr_k(t) in V_j at each discrete time t;
λ_k: Protection factor for Vr_{k,j}, with λ_k ∈ [0,1]. Note that λ_k = 0 means full overbooking while λ_k = 1 means no overbooking;
x_{j,i}(t) ∈ {0,1}: Indicates if V_j is allocated (x_{j,i}(t) = 1) or not (x_{j,i}(t) = 0) for execution on a PM H_i (i.e., x_{j,i}(t) : V_j → H_i) at a discrete time t;
Pr_{k,i}: Physical resource k on H_i, where 1 ≤ k ≤ r.
Physical resources are considered as resources available for
VMs, after hypervisor reservation.
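Assuming the array layout sketched in Section 4.1, a feasibility check based on constraints (8) could look as follows; this is an illustrative sketch only, and the protection factors in lam are arbitrary example values.

```python
import numpy as np

def satisfies_overbooking(x_t, V_t, U_t, H, lam):
    """Constraints (8): per-PM, per-resource capacity check with overbooking.

    x_t: {0,1}^{m x n} placement; V_t: m x r requested capacities Vr_{k,j}(t);
    U_t: m x r utilization ratios Ur_{k,j}(t); H: n x (r+2) PM matrix (1);
    lam: protection factors lambda_k in [0,1] (1 = no overbooking).
    """
    m, r = V_t.shape
    # effective demand: used part plus the protected fraction of idle reserved capacity
    demand = V_t * U_t + lam * V_t * (1.0 - U_t)     # m x r
    load = x_t.T @ demand                            # n x r, aggregated per PM
    return bool((load <= H[:, :r]).all())

lam = np.array([0.5, 0.5, 0.5])   # example protection factors for CPU, RAM and network
```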
4.5. Objective Functions
As previously described, more than 60 different objective
functions for VMP problems were already identified in [6, 34].
Considering the large number of existing objective functions,
identified objective functions with similar characteristics and
goals could be classified into 5 objective function groups [6]:
(G1) energy consumption, (G2) network traffic, (G3) economi-
cal costs, (G4) resource utilization and (G5) performance.
This work considers the optimization of four objective func-
tions, directly related to the most relevant objective function
groups (G1-G4), detailed in the following sub-sections. It is
important to consider that by no means, the authors claim that
the considered objective functions represent the best way to
model VMP problems. This formulation only illustrates a rea-
sonable formulation of a VMP problem in order to be able to
study the main contributions of this work, mainly considering
that the evaluated algorithms and the proposed VMPr methods
may work with any set of considered objective functions.
Although in general some objective functions can be min-
imized while maximizing other objective functions, in this
work each of the considered objective functions is formulated
in a single optimization context (i.e. only minimization).
4.5.1. Power Consumption Minimization
Based on Beloglazov et al. [22], this work models the power
consumption of PMs considering a linear relationship with the
CPU utilization of PMs, without taking into account PMs at
alternative datacenters of the cloud federation. The power con-
sumption minimization can be represented by the sum of the
power consumption of each PM Hithat composes the complex
IaaS environment (see Section 4.1), as defined in (9).
f_1(x,t) = \sum_{i=1}^{n} \left[ (pmax_i - pmin_i) \times Ur_{1,i}(t) + pmin_i \right] \times Y_i(t) \qquad (9)

where:
x: Evaluated solution of the problem;
f_1(x,t): Total power consumption of PMs at instant t;
pmax_i: Maximum power consumption of a PM H_i;
pmin_i: Minimum power consumption of a PM H_i. As suggested in [22], pmin_i = pmax_i × 0.6;
Ur_{1,i}(t): Utilization ratio of resource 1 (in this case CPU) by H_i at instant t;
Y_i(t) ∈ {0,1}: Indicates if H_i is turned on (Y_i(t) = 1) or not (Y_i(t) = 0) at instant t.
The proposed formulation already considers dynamically
turning PMs on and off, although taking into account the time
needed for this procedure as well as additional power consump-
tion is still out of the scope of this work and is left as future
work.
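A minimal sketch of objective (9), assuming per-PM arrays for CPU utilization, on/off state and maximum power, is given below for illustration; the default pmin_i = 0.6 × pmax_i follows the text above.

```python
import numpy as np

def f1_power(Ur_cpu, Y, pmax, pmin=None):
    """Objective (9): total power consumption of the local PMs at instant t.

    Ur_cpu : CPU utilization ratio Ur_{1,i}(t) of each PM,
    Y      : 1 if the PM is turned on, 0 otherwise,
    pmax   : maximum power consumption pmax_i [W]; pmin_i defaults to 0.6 * pmax_i.
    """
    pmin = 0.6 * pmax if pmin is None else pmin
    return float(np.sum(((pmax - pmin) * Ur_cpu + pmin) * Y))

# Example: two PMs turned on, one turned off.
print(f1_power(Ur_cpu=np.array([0.5, 0.8, 0.0]),
               Y=np.array([1, 1, 0]),
               pmax=np.array([250.0, 250.0, 400.0])))
```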
4.5.2. Economical Revenue Maximization
For IaaS customers, cloud computing resources often ap-
pear to be unlimited and can be provisioned in any quantity
at any required time t[8]. Consequently, this work considers
a basic federated-cloud deployment architecture, where a main
provider may support requested resources that are not able to be
provided (e.g. a workload peak) by transparently leasing low-
price resources from alternative datacenters owned by federated
providers [35]. This leasing costs should be minimized in order
to maximize economical revenue objective function.
Equation (10) represents the mentioned leasing costs, defined
as the sum of the total costs of leasing each VM Vjthat is ef-
fectively allocated for execution on any PM of an alternative
datacenter of the cloud federation. A provider must offer its
idle resources to the cloud federation at lower prices than of-
fered to customers in the actual cloud market for the federation
to make sense. The pricing scheme may depend on the partic-
ular agreement between providers of the cloud federation [35].
For simplicity, this work considers that the main provider may
lease requested resources (that it is not able to provide) from the
cloud federation at 70% (X̂_j = 0.7) of its market price (R_j(t)).
This Leasing Cost (LC(t)) may be formulated as:
LC(t) = \sum_{j=1}^{m(t)} \left( R_j(t) \times X_j(t) \times \hat{X}_j \right) \qquad (10)

where:
LC(t): Total leasing costs at instant t;
R_j(t): Economical revenue for attending V_j in [$] at instant t;
X_j(t) ∈ {0,1}: Indicates if V_j is allocated for execution on a PM (X_j(t) = 1) or not (X_j(t) = 0) at instant t;
X̂_j: Indicates if V_j is allocated on the main provider (X̂_j = 0) or on an alternative datacenter of the cloud federation (X̂_j = 0.7);
m(t): Number of VMs at each discrete time t, where 1 ≤ m(t) ≤ m_max.
It is important to note that X̂_j is not necessarily a function of
time. The decision of locating a VM V_j on a federated provider
is considered only in the placement process, with no possible
migrations between different IaaS providers.
Additionally, overbooked resources may result in unsatisfied
demand of resources at some periods of time, causing Quality
of Service (QoS) degradation, and consequently Service Level
Agreement (SLA) violations with economical penalties. These
economical penalties should be minimized for an economical
revenue maximization. Based on the workload independent
QoS metric presented in [22], formalized in SLAs, this work
proposes (11) to represent total economical penalties for SLA
violations, defined as the sum of the total proportional penalties
costs for unsatisfied demand of resources.
EP(t) = \sum_{j=1}^{m(t)} \sum_{k=1}^{r} \left( Rr_{k,j}(t) \times r_{k,j}(t) \times X_j(t) \times \phi_k \right) \qquad (11)

where:
EP(t): Total economical penalties at instant t;
r: Number of considered resources. In this paper r = 3: CPU, RAM memory and network capacity;
Rr_{k,j}(t): Economical revenue for attending Vr_{k,j}(t);
r_{k,j}(t): Ratio of unsatisfied resource k at instant t, where r_{k,j}(t) = 1 means no unsatisfied resource, while r_{k,j}(t) = 0 means resource k is 100% unsatisfied;
X_j(t) ∈ {0,1}: Indicates if V_j is allocated for execution on a PM (X_j(t) = 1) or not (X_j(t) = 0) at instant t;
φ_k: Penalty factor for resource k, where φ_k ≥ 1;
m(t): Number of VMs at each discrete time t, where 1 ≤ m(t) ≤ m_max.
In this work, the maximization of the total economical reve-
nue that an IaaS provider receives is achieved by minimizing
the total costs of leasing resources from alternative datacenters
of the cloud federation as well as the total economical penalties
for SLA violations, as presented in (12), i.e.
f_2(x,t) = LC(t) + EP(t) \qquad (12)

where:
f_2(x,t): Total economical expenditure of the main IaaS provider
at instant t.
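For illustration, objective (12) could be computed as in the following sketch, which simply combines (10) and (11); the array names and shapes are assumptions consistent with the notation above.

```python
import numpy as np

def f2_revenue(R, X, X_hat, Rr, r_ratio, phi):
    """Objective (12): leasing costs LC(t) in (10) plus economical penalties EP(t) in (11).

    R: revenues R_j(t) per VM; X: 1 if V_j is allocated at instant t, else 0;
    X_hat: 0 for the main provider, 0.7 for a federated datacenter;
    Rr: m x r revenues Rr_{k,j}(t); r_ratio: m x r ratios r_{k,j}(t) of (11);
    phi: penalty factors phi_k >= 1 per resource.
    """
    LC = np.sum(R * X * X_hat)                             # leasing costs (10)
    EP = np.sum(Rr * r_ratio * X[:, None] * phi[None, :])  # SLA penalties (11)
    return float(LC + EP)
```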
4.5.3. Resources Utilization Maximization
An efficient utilization of resources is a relevant resource ma-
nagement challenge to be addressed by IaaS providers. This
work proposes a maximization of the resource utilization by
minimizing the average ratio of wasted resources on each PM
Hi(i.e. resources that are not allocated to any VM Vj). This
objective function is presented in (13).
f_3(x,t) = \frac{ \sum_{i=1}^{n} \left[ 1 - \frac{ \sum_{k=1}^{r} Ur_{k,i}(t) }{ r } \right] \times Y_i(t) }{ \sum_{i=1}^{n} Y_i(t) } \qquad (13)
where:
f_3(x,t): Average ratio of wasted resources at instant t;
Ur_{k,i}(t): Utilization ratio of resource k of PM H_i at instant t;
r: Number of considered resources. In this paper r = 3:
CPU, RAM memory and network capacity.
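A minimal sketch of objective (13), assuming a per-PM utilization matrix and on/off indicators, is shown below for illustration.

```python
import numpy as np

def f3_wasted(Ur, Y):
    """Objective (13): average ratio of wasted resources over the PMs turned on.

    Ur: n x r utilization ratios Ur_{k,i}(t) per PM; Y: on/off indicators Y_i(t).
    """
    r = Ur.shape[1]
    wasted = (1.0 - Ur.sum(axis=1) / r) * Y
    return float(wasted.sum() / Y.sum())
```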
4.5.4. Reconfiguration Time Minimization
Performance degradation may occur when migrating VMs
between PMs [13]. Logically, it is desirable that the time of
placement reconfiguration by VM migration is kept to a mini-
mum possible. As explained in [13], the time that a VM takes to
be migrated from one PM to another could be estimated as the
ratio between the total amount of RAM memory to be migrated
and the capacity of the network channel.
Inspired by [13], once a placement reconfiguration is ac-
cepted in the VMPr phase, all VM migrations are assumed to
be performed in parallel through a management network exclu-
sively used for these actions, increasing CPU utilization by 10%
in VMs being migrated. Consequently, the minimization of the
(maximum) reconfiguration time could be achieved by mini-
mizing the maximum amount of memory to be migrated from
one PM H_i to another H_{i′} (i ≠ i′).
Equation (14) is proposed to minimize the maximum amount
of RAM memory that must be moved between PMs at instant t.

f_4(x,t) = \max \left( MT_{i,i'} \right), \quad \forall \, i, i' \in \{1, \dots, n\} \qquad (14)

where:
f_4(x,t): Network traffic overhead for VM migrations at
instant t;
MT_{i,i'}: Total amount of RAM memory to be migrated from
PM H_i to H_{i'}.
It should be noted that there are several possible approaches
to estimate the migration overhead, as presented in [36].
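Under the simple estimation described above (RAM to be moved between pairs of PMs), objective (14) could be computed as in the following illustrative sketch; the helper name and array layout are assumptions.

```python
import numpy as np

def f4_reconfiguration(x_before, x_after, ram):
    """Objective (14): maximum RAM [GB] to be migrated between any pair of PMs.

    x_before, x_after: {0,1}^{m x n} placements before/after the VMPr phase;
    ram: RAM capacity Vr_{2,j}(t) of each VM.
    """
    m, n = x_before.shape
    MT = np.zeros((n, n))                       # MT_{i,i'}: RAM moved from H_i to H_{i'}
    for j in range(m):
        if x_before[j].any() and x_after[j].any():
            src, dst = int(x_before[j].argmax()), int(x_after[j].argmax())
            if src != dst:
                MT[src, dst] += ram[j]
    return float(MT.max())
```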
The following sub-section summarizes the main considera-
tions taken into account to combine the four presented objec-
tive functions into a single objective function to be minimized
with the aim of having a single figure of merit (or optimization
metric).
4.6. Normalization and Scalarization Methods
In dynamic VMP environments with placement reconfigura-
tion such as the one studied in this work, pure multi-objective opti-
mization [37] presents specific challenges such as automatically
selecting one of the non-dominated solutions from a Pareto set
to effectively perform the reconfiguration. In consequence, a
previous work by the authors [29] experimentally evaluated the
following five selection strategies: (S1) random, (S2) preferred
solution, (S3) minimum distance to origin, (S4) lexicographic
order (provider preference) and (S5) lexicographic order (ser-
vice preference), indicating that S3 (minimum distance to ori-
gin) was the best evaluated strategy for the considered problem.
As a consequence of experimental results obtained in [29] for
VMP problems optimizing multiple objective functions, even in
a many-objective optimization context for cloud computing dat-
acenters, S3 (minimum distance to origin) could be used as a
scalarization method and instead of calculating a whole Pareto
set approximation, it is suggested to combine all considered ob-
jective functions into a single objective function, therefore solv-
ing the studied problem considering a Multi-Objective problem
solved as Mono-Objective (MAM) approach [6].
Consequently, each of the considered objective functions must
be formulated in a single optimization context (in this case,
minimization) and each objective function cost must be nor-
malized to be comparable and combinable as a single objective.
This work normalizes each objective function cost by calcu-
lating \hat{f}_i(x,t) ∈ R, where 0 ≤ \hat{f}_i(x,t) ≤ 1, for each original
objective function f_i(x,t).

\hat{f}_i(x,t) = \frac{ f_i(x,t) - f_i(x,t)_{min} }{ f_i(x,t)_{max} - f_i(x,t)_{min} } \qquad (15)

where:
\hat{f}_i(x,t): Normalized cost of objective function f_i(x,t) at instant t;
f_i(x,t): Cost of original objective function f_i(x,t);
f_i(x,t)_{min}: Minimum possible cost for f_i(x,t);
f_i(x,t)_{max}: Maximum possible cost for f_i(x,t).
Finally, the presented normalized objective functions are
combined into a single objective considering a minimum Eu-
clidean distance to the origin, expressed as:
F(x,t) = \sqrt{ \sum_{i=1}^{q} \hat{f}_i(x,t)^2 } \qquad (16)

where:
F(x,t): Single objective function combining each \hat{f}_i(x,t) at instant t;
\hat{f}_i(x,t): Normalized cost of objective function f_i(x,t) at instant t;
q: Number of objective functions. In this work q = 4.
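For illustration, the normalization (15) and scalarization (16) steps could be computed as follows; the minimum and maximum costs passed as arguments are assumed to be known, and the example values are arbitrary.

```python
import numpy as np

def scalarize(costs, f_min, f_max):
    """Normalization (15) followed by the Euclidean scalarization (16).

    costs, f_min, f_max: length-q arrays with the current, minimum and
    maximum possible cost of each objective function f_i(x,t).
    """
    f_hat = (costs - f_min) / (f_max - f_min)   # normalized costs in [0, 1]
    return float(np.sqrt(np.sum(f_hat ** 2)))   # F(x,t): distance to the origin

# Example with q = 4 objectives (arbitrary values).
print(scalarize(np.array([430.0, 1.4, 0.35, 8.0]),
                np.array([0.0, 0.0, 0.0, 0.0]),
                np.array([900.0, 10.0, 1.0, 64.0])))
```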
4.7. Scenario-based Uncertainty Modeling
In this work, uncertainty is modeled through a finite set of
well-defined scenarios S[26], where the following uncertain
parameters are considered: (i) virtual resources capacities (ver-
tical elasticity), (ii) number of VMs that compose cloud ser-
vices (horizontal elasticity), (iii) utilization of CPU and RAM
memory virtual resources and (iv) utilization of networking vir-
tual resources (both relevant for overbooking).
For each scenario s ∈ S, a temporal average value of the
objective function F(x,t) presented in (16) is calculated as:

f_s(x,t) = \frac{ \sum_{t=1}^{t_{max}} F(x,t) }{ t_{max} } \qquad (17)

where:
f_s(x,t): Temporal average of the combined objective function
over all discrete time instants t in scenario s ∈ S;
t_max: Duration of a scenario in discrete time instants.
As previously described, when parameters are uncertain, it is
important to find solutions that are acceptable for any (or most)
considered scenario s ∈ S. This work considers minimization
of the following criteria to select among solutions from different
evaluated alternatives: (i) average [26], (ii) maximum [26]
and (iii) minimum objective function costs:

F_1 = \overline{F(x,t)} = \frac{ \sum_{s=1}^{|S|} f_s(x,t) }{ |S| } \qquad (18)

F_2 = \max_{s \in S} \left( f_s(x,t) \right) \qquad (19)

F_3 = \min_{s \in S} \left( f_s(x,t) \right) \qquad (20)

where:
F_1: Average f_s(x,t) over all scenarios s ∈ S [26];
F_2: Maximum f_s(x,t) considering all scenarios s ∈ S [26];
F_3: Minimum f_s(x,t) considering all scenarios s ∈ S.
Although $F_1$ and $F_2$ are the most studied criteria in the specialized literature [26], this work considers $F_3$ as an additional criterion just to demonstrate that the experimental conclusions do not change when minimum costs are also considered.
In order to separately evaluate each normalized objective function $\hat{f}_i(x,t)$, the following evaluation criteria are also defined analogously to $F_1$, $F_2$ and $F_3$: $F_1^i$, $F_2^i$ and $F_3^i$.
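As a worked illustration of the criteria in (18) to (20), the following Java sketch computes $F_1$, $F_2$ and $F_3$ from a set of per-scenario temporal averages; the values used are hypothetical and the class name is an assumption of this example.

```java
import java.util.Arrays;

// Minimal sketch of the scenario-based evaluation criteria (18)-(20).
public final class ScenarioCriteria {

    public static void main(String[] args) {
        // Hypothetical temporal averages f_s(x,t) for five scenarios s in S.
        double[] fs = {0.71, 0.68, 0.90, 0.63, 0.75};

        double f1 = Arrays.stream(fs).average().orElse(Double.NaN); // (18) average cost
        double f2 = Arrays.stream(fs).max().orElse(Double.NaN);     // (19) maximum cost
        double f3 = Arrays.stream(fs).min().orElse(Double.NaN);     // (20) minimum cost

        System.out.printf("F1 = %.3f, F2 = %.3f, F3 = %.3f%n", f1, f2, f3);
    }
}
```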
5. Evaluated Algorithms
Taking into account that this work presents a novel uncer-
tain VMP formulation considering a complex IaaS environment
(see Section 4), there are no published alternatives to which we
can compare the proposed algorithm. Therefore, the main goal
of the experimental evaluation to be presented in Section 6 is to
validate that the proposed VMPr Triggering and VMPr Recovering methods improve the quality of solutions against adapted state-of-the-art alternatives that originally consider the proposed complex IaaS environment only partially.
This work evaluates four algorithms, presented in Table 2. First, Algorithm 0 (A0) is evaluated considering only the online iVMP phase, without taking into account reconfiguration of VMs. Algorithm 1 (A1) is inspired by [15], considering a centralized decision approach, while Algorithm 2 (A2) is inspired by [22], following a distributed decision approach. Additionally, Algorithm 3 (A3) considers a centralized decision approach implementing the proposed prediction-based VMPr Triggering and update-based VMPr Recovering methods. In this context, A1 and A2 consider the original VMPr Triggering and VMPr Recovering methods proposed in each corresponding research work [15, 22]. The following sub-sections detail additional relevant aspects of the four algorithms evaluated in this work.
5.1. Incremental VMP (iVMP) Algorithm
In experimental results previously obtained by the authors in [7], the First-Fit Decreasing (FFD) heuristic outperformed the other evaluated heuristics on average; consequently, it was the only heuristic considered in this work for the iVMP problem in the four evaluated algorithms (A0 to A3), as summarized in Table 2. This way, the paper focuses on its main contribution: the VMPr phase. Further studies on alternative heuristics for the iVMP phase are left as future work.
Table 2: Summary of evaluated algorithms as well as their corresponding VMPr Triggering and Recovering methods. N/A indicates a Not Applicable criterion.

Algorithm                  | Decision Approach | iVMP | VMPr | VMPr Triggering  | VMPr Recovering
A0 - inspired by [38]      | N/A               | FFD  | N/A  | N/A              | N/A
A1 - inspired by [15]      | Centralized       | FFD  | MA   | Periodical       | Cancellation
A2 - inspired by [22]      | Distributed       | FFD  | MMT  | Threshold-based  | N/A
A3 - proposed in this work | Centralized       | FFD  | MA   | Prediction-based | Update-based
In the First-Fit (FF) heuristic, requested VMs $V_j(t)$ are allocated on the first PM $H_i$ with available resources (see Section 4.4.2). Interested readers can refer to [38] for details on FF algorithms for VMP problems. The considered FFD heuristic operates similarly to the FF heuristic, with the main difference that FFD sorts the list of requested VMs $V_j(t)$ in decreasing order by revenue $R_j(t)$ (see details in Algorithm 1).
Taking into account the particularities of the proposed complex IaaS environment, the FFD heuristic presents some modifications with respect to the one presented in [7], mainly considering the cloud service request types previously described in Section 4.1. In fact, Algorithm 1 shows that cloud service destruction, scale-down of VM resources and cloud service scale-in are processed first, in order to release resources for immediate re-utilization (steps 1-3 of Algorithm 1). At step 4, requests from $V(t)$ are sorted in decreasing order by a given criterion, here revenue $R_j(t)$ (of course, other criteria such as CPU may be considered [7]), and scale-up of VM resources and cloud service scale-out are processed first (steps 5-6), in order to consider elastic cloud services more important than non-elastic ones. Next, the unprocessed requests from $V_j(t)$ include only cloud service creations, which are allocated in decreasing order (steps 7-18). Here, a $V_j$ is allocated on the first $H_i$ with available resources (see (8)) after considering the previously sorted $V(t)$. If no $H_i$ has sufficient resources to host $V_j$, it is allocated in another federated provider. Finally, the placement $x(t+1)$ is updated and returned (steps 19-20). A compact sketch of this allocation logic is presented below.
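The following Java sketch illustrates the core of the revenue-ordered first-fit allocation of steps 4 and 7-18 of Algorithm 1; the types, field names and the simplified capacity check are assumptions of this illustration and do not correspond to the authors' framework.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Minimal sketch of the revenue-ordered First-Fit Decreasing allocation (Algorithm 1, steps 4, 7-18).
public final class FirstFitDecreasing {

    record VirtualMachine(String id, double cpu, double ram, double net, double revenue) {}

    static final class PhysicalMachine {
        final String id;
        double freeCpu, freeRam, freeNet; // remaining capacity (overbooking already applied)

        PhysicalMachine(String id, double cpu, double ram, double net) {
            this.id = id; this.freeCpu = cpu; this.freeRam = ram; this.freeNet = net;
        }

        boolean canHost(VirtualMachine vm) {
            return vm.cpu() <= freeCpu && vm.ram() <= freeRam && vm.net() <= freeNet;
        }

        void allocate(VirtualMachine vm) {
            freeCpu -= vm.cpu(); freeRam -= vm.ram(); freeNet -= vm.net();
        }
    }

    /** Allocates creation requests in decreasing order of revenue; returns the VMs sent to a federated provider. */
    static List<VirtualMachine> place(List<VirtualMachine> requests, List<PhysicalMachine> hosts) {
        List<VirtualMachine> sorted = new ArrayList<>(requests);
        sorted.sort(Comparator.comparingDouble((VirtualMachine v) -> v.revenue()).reversed());

        List<VirtualMachine> federated = new ArrayList<>();
        for (VirtualMachine vm : sorted) {
            PhysicalMachine target = null;
            for (PhysicalMachine pm : hosts) {
                if (pm.canHost(vm)) { target = pm; break; } // first PM with enough resources
            }
            if (target != null) {
                target.allocate(vm);
            } else {
                federated.add(vm); // no local PM available: allocate in another federated provider
            }
        }
        return federated;
    }
}
```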
5.2. VMP Reconfiguration (VMPr) Algorithms
Previous research work by the authors focused on developing VMPr algorithms considering centralized decisions, such as the offline MAs presented in [14, 29, 30]. In this work, the considered VMPr algorithm for centralized decision approaches (A1 and A3) is based on the one presented in [29] and works in the following way (see details in Algorithm 2):
At step 1, a set $Pop_0$ of candidate solutions is randomly generated. These candidate solutions are repaired at step 2 to ensure that $Pop_0$ contains only feasible solutions, satisfying the constraints defined in Section 4.4. Then, the algorithm tries to improve the candidate solutions at step 3 using local search. With the obtained solutions, elitism is applied and the first best solution $x'(t)$ is selected from $Pop''_0 \cup x(t)$ at step 4 using the objective function defined in (16). After an initialization in step 5, evolution begins (steps 6-12). The evolutionary process basically follows a similar behavior: solutions are selected from the union of the evolutionary set of solutions (or population) $Pop_u$ and the best known solution $x'(t)$ (step 7), crossover and mutation operators are applied as usual (step 8), and eventually solutions are repaired, as there may be infeasible solutions (step 9). Improvements of the solutions of the evolutionary population $Pop_u$ may be generated at step 10 using local search (local optimization operators). At step 11, the best known solution $x'(t)$ is updated (if applicable), while at step 12 the generation (or iteration) counter is updated. The evolutionary process is repeated until the algorithm meets a stopping criterion (such as a maximum number of discrete time instants or iterations), returning
Algorithm 1: First-Fit Decreasing (FFD) for iVMP.
Data: H, V(t), U(t), x(t) (see notation in Section 4.1)
Result: Incremental Placement x(t+1)
 1: process cloud service destructions from V(t);            /* removed cloud services */
 2: process scale-down of VM resources from V(t);            /* vertical elasticity */
 3: process cloud service scale-in from V(t);                /* horizontal elasticity */
 4: sort VMs by revenue R_j(t) in decreasing order;          /* sort VMs by revenue */
 5: process scale-up of VM resources from V(t);              /* vertical elasticity */
 6: process cloud service scale-out from V(t);               /* horizontal elasticity */
 7: foreach unprocessed V_j in V(t) do                       /* created cloud services */
 8:   while V_j is not allocated do
 9:     foreach H_i in H do
10:       if H_i has enough resources to host V_j then
11:         allocate V_j into H_i and break loop;
12:       end
13:     end
14:     if V_j is still not allocated then
15:       allocate V_j in another federated provider;
16:     end
17:   end
18: end
19: update x(t+1) with processed requests;
20: return x(t+1)
the best known solution $x'(t)$ for a placement reconfiguration. More details on the MA may be found in [29].
Additionally, a distributed decision approach is also considered in the experimental evaluation performed in this work (A2). For this purpose, the most representative related work was con-
Algorithm 2: Memetic Algorithm (MA) for VMPr.
Data: H, U(t), x(t) (see notation in Section 4.1)
Result: Recalculated Placement x'(t)
 1: initialize set of candidate solutions Pop_0;
 2: Pop'_0 = repair infeasible solutions of Pop_0;
 3: Pop''_0 = apply local search to solutions of Pop'_0;
 4: x'(t) = select best solution from Pop''_0 ∪ x(t) considering (16);
 5: u = 0; Pop_u = Pop''_0;
 6: while stopping criterion is not satisfied do
 7:   Pop_u = selection of solutions from Pop_u ∪ x'(t);
 8:   Pop'_u = crossover and mutation on solutions of Pop_u;
 9:   Pop''_u = repair infeasible solutions of Pop'_u;
10:   Pop'''_u = apply local search to solutions of Pop''_u;
11:   x'(t) = select best solution from Pop'''_u considering (16);
12:   increment number of generations u;
13: end
14: return x'(t)
Algorithm 3: Minimum Migration Time (MMT) for VMPr, running at PM H_i.
Data: H, U(t), x(t), H_i (see notation in Section 4.1)
Result: Recalculated Placement x'(t)
 1: if H_i is overloaded then                      /* H_i has exceeded the upper threshold */
 2:   sort VMs V_j allocated into H_i in increasing order by RAM;
 3:   while H_i is overloaded do
 4:     schedule migration of V_j from H_i to H_i' using FFD;
 5:   end
 6: end
 7: if H_i is underloaded then                     /* H_i does not reach the lower threshold */
 8:   schedule migration of all V_j from H_i to H_i' ≠ H_i, if possible;
 9: end
10: update x'(t) considering scheduled migrations;
11: return x'(t)
sidered [22]: the Minimum Migration Time (MMT) algorithm. The considered MMT algorithm, based on [22], is presented in Algorithm 3 and works as follows:
Each time the MMT algorithm is triggered after detecting an overloaded PM $H_i$ (step 1), all VMs $V_j$ currently allocated in the considered $H_i$ are sorted in increasing order by RAM size (step 2). This sorting is performed in order to first schedule the migration of the $V_j$ with the minimum associated migration time, taking into account that the migration time of a $V_j$ is directly proportional to its RAM size $V_{r2,j}(t)$ at the considered time instant $t$ [22]. While $H_i$ is still considered to be overloaded, each VM $V_j$ is scheduled to be migrated to another PM $H_i'$ with available resources (see constraint (8)) using an FFD heuristic (steps 3-5). On the other hand, each time the MMT algorithm is triggered after detecting an underloaded PM $H_i$ (step 7), all VMs $V_j$ currently allocated in the considered $H_i$ are scheduled to be migrated to another PM $H_i'$ with available resources (see constraint (8)), also using an FFD heuristic (step 8). This full migration is performed in order to be able to shut down (or switch to energy-saving mode) the considered $H_i$. Finally, the placement is updated considering the scheduled migrations (step 10), returning a recalculated placement for reconfiguration (step 11).
The main difference between the MA (see Algorithm 2) and the MMT algorithm (see Algorithm 3) is the considered decision approach: the MA performs a centralized decision that globally reconfigures the placement of VMs, while the MMT algorithm performs a distributed decision, partially reconfiguring the VMs allocated in only one isolated PM at a time.
5.3. Evaluated VMPr Triggering Methods
In this work, a VMPr Triggering method defines when the
VMPr phase should be triggered in a two-phase optimization
scheme for VMP problems (Research Question 1). Consi-
dering VMPr Triggering methods studied in Section 3, this
work evaluated the two main approaches: (i) periodical and (ii)
threshold-based. Additionally, this work proposes a prediction-
based approach for a novel VMPr Triggering method, statisti-
cally analyzing the objective function costs and proactively de-
tecting requirements for triggering the VMPr phase. The follo-
wing sub-sections describe the VMPr Triggering methods eva-
luated in this work as part of a two-phase optimization scheme
for VMP problems in the proposed complex IaaS environment.
5.3.1. Periodical Triggering
As described in Section 3, several studied works considered periodically triggering the VMPr phase (see Table 1). This work considers the VMPr Triggering method described in [15], triggering the VMPr phase every 10 discrete time instants.
Periodically triggering the VMPr phase can present disadvantages when a fixed reconfiguration period is defined (e.g. every 10 time instants). For example, a reconfiguration could be required before the established time, in which case optimization opportunities could be wasted or economical penalties could even impact the cloud datacenter operation. On the other hand, in certain cases the reconfiguration may not be necessary and triggering the VMPr could result in profitless reconfigurations.
5.3.2. Threshold-based Triggering
Another widely studied VMPr Triggering method considers a threshold-based approach (see Table 1), where thresholds are defined in terms of utilization of PM resources (e.g. CPU). Thresholds indicate when a PM $H_i$ is considered to be underloaded or overloaded and, consequently, when a VMPr should be triggered. This work considers a threshold-based VMPr Triggering method based on [22], fixing the utilization thresholds for underloaded and overloaded PM detection, for all considered resources, to 10% and 90%, respectively.
The above described threshold-based VMPr Triggering method makes isolated reconfiguration decisions at each PM without complete knowledge of the global optimization objectives, giving place to a distributed decision approach, as in the algorithm A2 implemented in this work.
5.3.3. Proposed Prediction-based Triggering
Considering the main identified issues related to the stu-
died VMPr Triggering methods, this work proposes a novel
prediction-based VMPr Triggering method, statistically analy-
zing the global objective function F(x,t) that is optimized (see
(16)) and proactively detecting situations where a VMPr trigge-
ring is potentially required for a placement reconfiguration.
The proposed prediction-based VMPr Triggering method
considers Double Exponential Smoothing (DES) [39] as a sta-
tistical technique for predicting values of the objective function
F(x,t), as formulated next in (21) to (23):
$$S_t = \alpha \, Z_t + (1 - \alpha)(S_{t-1} + b_{t-1}) \qquad (21)$$

$$b_t = \tau \, (S_t - S_{t-1}) + (1 - \tau) \, b_{t-1} \qquad (22)$$

$$Z_{t+1} = S_t + b_t \qquad (23)$$

where:
$\alpha$: Smoothing factor, where $0 \leq \alpha \leq 1$;
$\tau$: Trend factor, where $0 \leq \tau \leq 1$;
$Z_t$: Known value of $F(x,t)$ at discrete time $t$;
$S_t$: Expected (smoothed) value of $F(x,t)$ at discrete time $t$;
$b_t$: Trend of $F(x,t)$ at discrete time $t$;
$Z_{t+1}$: Value of $F(x,t+1)$ predicted at discrete time $t$.
At each discrete time $t$, the proposed VMPr Triggering method predicts the next $N$ values of $F(x,t)$ and effectively triggers the VMPr phase in case $F(x,t)$ is predicted to consistently increase, considering that $F(x,t)$ is being minimized, as shown in the following basic example.
Basic Example
For a better understanding of how the proposed VMPr Triggering method works, Table 3 presents an example where the calculated values correspond to discrete time instant $t=15$, considering parameters $\alpha = \tau = 0.5$. The next $N=3$ values of $F(x,t)$ were calculated based on the known values of previous discrete time instants. In this basic example, the VMPr is triggered at $t=15$ considering that the 3 predicted values of $F(x,t)$ at $t=16$ to $t=18$ tend to consistently increase (i.e. $0.66 < 0.72 < 0.74$). Otherwise, the VMPr phase is not triggered.
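The following Java sketch outlines how such a prediction-based trigger could be implemented with DES; the initialization of the level and trend, the multi-step forecasting scheme and the strict-increase test are assumptions of this sketch and may differ in detail from the numbers reported in Table 3.

```java
// Minimal sketch of the prediction-based VMPr Triggering method: Double Exponential
// Smoothing of F(x,t) as in (21)-(23), triggering when the next N predictions increase.
public final class PredictionBasedTriggering {

    /** Predicts the next n values of the series using DES with smoothing factor alpha and trend factor tau. */
    static double[] predictNext(double[] observed, int n, double alpha, double tau) {
        // Assumed initialization: level = first observation, trend = first difference.
        double s = observed[0];
        double b = observed[1] - observed[0];
        for (int t = 1; t < observed.length; t++) {
            double previousS = s;
            s = alpha * observed[t] + (1 - alpha) * (previousS + b); // (21)
            b = tau * (s - previousS) + (1 - tau) * b;               // (22)
        }
        double[] predictions = new double[n];
        for (int k = 1; k <= n; k++) {
            predictions[k - 1] = s + k * b; // (23), extended k steps ahead
        }
        return predictions;
    }

    /** Triggers the VMPr phase only if the predicted values of F(x,t) consistently increase. */
    static boolean shouldTriggerVMPr(double[] observedF, int n, double alpha, double tau) {
        double[] predicted = predictNext(observedF, n, alpha, tau);
        double last = observedF[observedF.length - 1];
        for (double p : predicted) {
            if (p <= last) {
                return false; // not a consistent increase: do not trigger
            }
            last = p;
        }
        return true;
    }

    public static void main(String[] args) {
        // Known values of F(x,t) up to the current discrete time instant (hypothetical).
        double[] f = {0.45, 0.50, 0.55, 0.60, 0.65};
        System.out.println("Trigger VMPr: " + shouldTriggerVMPr(f, 3, 0.5, 0.5));
    }
}
```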
5.4. Evaluated VMPr Recovering Methods
It is important to consider that the placement reconfiguration calculated in the VMPr phase is potentially obsolete, given the offline nature of the VMPr problem formulation. In this work, a VMPr Recovering method defines what should be done with cloud service requests arriving during the VMPr recalculation time $\beta$ (Research Question 2). The iVMP may receive cloud service requests during the $\beta$ discrete time instants in which the VMPr calculates an improved placement (see Figure 1). Consequently, the calculated new placement must be recovered according to the considered VMPr Recovering method before the reconfiguration is performed. This issue is mainly associated with centralized decision approaches, such as the ones presented in [15, 29].
Most of the studied related works do not consider this issue in two-phase optimization schemes for VMP problems, and only a very basic method is proposed in the specialized literature [15] (see Table 1). Consequently, this work proposes an update-based approach as a novel VMPr Recovering method, applying operations to update the potentially obsolete placement calculated in the VMPr phase. The following sub-sections describe the VMPr Recovering methods evaluated in this work as part of a two-phase optimization scheme for VMP problems in the proposed complex IaaS environment.
Table 3: Basic example of how the proposed VMPr Triggering method predicts values of F(x,t) for t=16, t=17 and t=18 based on previous discrete time instants, using (23) with α = τ = 0.5. Predicted values are highlighted.

t  | Z_t  | S_t  | b_t  | Z_{t+1} | Comment
11 | 0.45 | --   | --   | --      | Previous values at t=15
12 | 0.50 | 0.45 | 0.05 | --      |
13 | 0.55 | 0.53 | 0.06 | --      |
14 | 0.60 | 0.59 | 0.07 | --      |
15 | 0.65 | 0.65 | 0.06 | 0.66    | 1st prediction at t=15
16 | 0.66 | 0.69 | 0.05 | 0.72    | 2nd prediction at t=15
17 | 0.72 | 0.73 | 0.04 | 0.74    | 3rd prediction at t=15
Trigger VMPr at t=15 given that Z_16 < Z_17 < Z_18 with N=3
5.4.1. Canceling Reconfiguration
Calcavecchia et al. studied in [15] a very basic VMPr Recovering method, canceling the VMPr whenever a new request is received. In this case, the VMPr is only performed in periods with no requests, which could be considered impractical for IaaS providers, taking into account the highly dynamic environment of cloud computing markets and, particularly, the complex IaaS environment proposed as part of this work.
5.4.2. Proposed Update-based Recovering
Considering the identified opportunity to improve the existing VMPr Recovering method [15], this work proposes a novel VMPr Recovering method based on updating the placement reconfiguration calculated in the VMPr phase according to the changes that happened during the placement recalculation time, applying operations to update the potentially obsolete placement, as summarized in Algorithm 4.
The proposed update-based VMPr Recovering method receives the placement reconfiguration calculated in the VMPr phase (corresponding to the discrete time $t - \beta$) and the current placement $x(t)$ as input data (see Algorithm 4).
Considering that any VM $V_j$ could be destroyed, or a cloud service could be scaled-in (horizontal elasticity), during the $\beta$ discrete time instants in which the placement reconfiguration was calculated, these destroyed VMs are removed from $x'(t-\beta)$ (step 1). Next, any resource of a VM $V_j$ could be adjusted due to a scale-up or scale-down (vertical elasticity); consequently, these resource adjustments are performed in $x'(t-\beta)$ (step 2). Additionally, new VMs $V_j$ could be created, or a cloud service could be scaled-out (horizontal elasticity), during the calculation of $x'(t-\beta)$. In the example of Figure 1, cloud service $S_2$ is created ($+V_2$) and, additionally, a scale-out of the mentioned cloud service is performed ($+V_3$) during the recalculation time $\beta$. These VMs are added to $x'(t-\beta)$ using an FFD heuristic (step 3), the same heuristic used in the iVMP phase. Finally, if the partially recalculated placement $x'(t-\beta)$ is better than the current placement $x(t)$, then $x'(t-\beta)$ is accepted (step 5) and the corresponding management actions are performed (i.e. mainly migration of VMs between PMs). In case $x'(t-\beta)$ is not better than the current placement $x(t)$, no change is performed and the VMPr phase finishes without any further consequence.
It is important to mention that once a calculated placement reconfiguration is accepted in the VMPr phase, all VM migrations are assumed to be performed during a reconfiguration time of $\gamma$ discrete time instants. The duration of the reconfiguration time is directly related to $f_4(x,t)$, presented in (14).
Algorithm 4: Update-based VMPr Recovering.
Data: x(t), x'(t−β) (see notation in Section 4.1)
Result: Recovered Placement x'(t)
 1: remove VMs V_j from x'(t−β) that are no longer running in x(t);
 2: adjust resources from x'(t−β) that changed in x(t);
 3: add VMs V_j from x(t) that were not considered in x'(t−β);
 4: if x'(t−β) is better than x(t) then
 5:   return x'(t−β);
 6: else return x(t);
Additionally, new VMs $V_j$ could be created, or a cloud service could be scaled-out (horizontal elasticity), during the reconfiguration time. In that case, the iVMP phase attends these requests, with the only consideration that the CPU utilization of VMs being migrated is increased by 10%, as previously described in Section 4.5.4. A compact sketch of the update-based recovering logic is presented below.
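The following Java sketch illustrates the update-based recovering steps of Algorithm 4 on a simplified placement representation; the data structures, the omission of per-VM resource adjustments and the dummy evaluation function are assumptions of this illustration, not the authors' implementation.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Minimal sketch of the update-based VMPr Recovering method (Algorithm 4).
public final class UpdateBasedRecovering {

    /** A placement maps each running VM id to the id of the PM hosting it, with its combined cost F. */
    record Placement(Map<String, String> vmToPm, double combinedCost) {}

    static Placement recover(Placement current, Placement recalculated,
                             Set<String> destroyedVms, Map<String, String> createdVms) {
        Map<String, String> updated = new HashMap<>(recalculated.vmToPm());

        // Step 1: remove VMs that are no longer running in the current placement x(t).
        updated.keySet().removeAll(destroyedVms);

        // Step 2 (adjusting changed resources) is omitted because this sketch does not
        // model per-VM capacities; a full implementation would update them here.

        // Step 3: add VMs created (or scaled-out) during the recalculation time, keeping
        // the hosts chosen for them by the iVMP phase (an FFD allocation in this work).
        updated.putAll(createdVms);

        // Steps 4-6: accept the recalculated placement only if it improves the combined cost.
        double updatedCost = evaluate(updated);
        return updatedCost < current.combinedCost()
                ? new Placement(updated, updatedCost)
                : current;
    }

    /** Placeholder for the combined objective function F(x,t) of (16), assumed to be provided elsewhere. */
    static double evaluate(Map<String, String> vmToPm) {
        return vmToPm.size(); // dummy cost, for this sketch only
    }
}
```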
5.5. Computational Complexity of Evaluated Algorithms
A detailed study of the computational complexity of the eva-
luated algorithms is out of the scope of this work, considering
that evaluated algorithms are not the main presented contribu-
tion. It is important to consider that the proposed VMPr Tri-
ggering and VMPr Recovering methods may be applied to any
VMPr algorithm. To have a general idea, a brief discussion
about the computational complexity of the evaluated iVMP and
VMPr algorithms (see Table 2) is presented as follows.
As described in [22], the complexity of the FFD algorithm (see Algorithm 1) considered for the iVMP phase is $O(n \times m(t))$, where $n$ is the number of available PMs and $m(t)$ is the number of VMs that have to be allocated. This online heuristic has been shown to use no more than $\frac{11}{9} \times OPT + 1$ PMs, where $OPT$ represents the number of PMs in an optimal solution [22].
Additionally, the complexity of the MMT algorithm (see Algorithm 3) considered for the VMPr phase in A2 is $O(2 \times n)$, where $n$ represents the number of available PMs.
Finally, the complexity of the MA considered for the VMPr phase in A1 and A3 can be described per generation, considering its evolutionary nature. In this context, the considered MA is based on the Non-Dominated Sorting Genetic Algorithm II (NSGA-II) [40], whose complexity is $O((m(t) \times Pop_{size})^2)$, where $m(t)$ represents the number of requested VMs and $Pop_{size}$ represents the number of individuals of the evolutionary population. It is important to consider that the quality of the obtained solutions is in a trade-off with the computation time (number of generations). Interested readers can refer to [41] for experiments comparing the quality of solutions obtained by MAs for VMP problems against optimal solutions.
6. Experimental Evaluation
The following sub-sections summarize the experimental environment as well as the main findings identified in the experiments performed as part of the simulations to validate the two-phase optimization scheme for VMP problems. The quality of solutions obtained by the evaluated algorithms in a scenario-based uncertainty model with 400 different scenarios was compared mainly considering the following evaluation criteria: (i) average, (ii) maximum and (iii) minimum objective function costs, formally defined in (18) to (20).
6.1. Experimental Environment
The four evaluated algorithms (see Table 2) previously presented in Section 5 were implemented in the Java programming language. The source code is available online¹, as well as all the considered input data and experimental results. Experiments were performed on a GNU/Linux operating system with an Intel(R) Xeon(R) E5530 CPU at 2.40 GHz and 16 GB of RAM. The following parameters of the proposed uncertain VMP formulation were considered for the experimental evaluation presented in this work (see details in Section 4):
Number of considered resources: r = 3;
Recalculation time for A1 and A3: β = 2;
Recalculation time for A2: β = 1;
Protection factor for each resource k: λ_k = 0.5;
Penalty factor for each resource k: φ_k = 1.

¹ http://github.com/DynamicVMP/dynamic-vmp-framework/releases
As input data, the available PMs (see (1)) include 4 different types of PMs, as presented in Table 4. Considering the available PM types, 5 IaaS datacenters with different numbers of PMs were considered (DC1 to DC5), as summarized in Table 5.
Additionally, 80 different workload traces¹ of requested cloud services (V(t)) and their specifications (see (2)) were considered as input data, as well as their utilization of resources U(t) at each discrete time t (see (3)). Requested VMs were considered according to instance types offered by Amazon Elastic Compute Cloud (EC2), as summarized in Table 6.
It is important to remember that in this work, the following
parameters are considered to be uncertain: (i) virtual resources
capacities (vertical elasticity), (ii) number of VMs that com-
pose cloud services (horizontal elasticity), (iii) utilization of
CPU and RAM memory virtual resources and (iv) utilization of
networking virtual resources (both relevant for overbooking).
Consequently, two different Probability Distribution Functions (PDFs) were considered to represent the behavior of each parameter (i.e. Uniform and Poisson). Workload traces of cloud service requests were generated using a Cloud Workload Trace Generator (CWTG) for provider-oriented VMP problems [42], and are available online² for research purposes.
Considering the parameters described in Table 7, Table 8 presents a basic example of a workload trace considered in this work. The duration of the presented workload trace is five discrete time instants, considering one IaaS datacenter and one requested cloud service. Additionally, VMs are created according to a Uniform PDF on different discrete time instants between t=0 and t=4.
Following the input parameters presented in Table 7, the virtual resource capacities ($V_{r1,j}(t)$ to $V_{r3,j}(t)$) of VMs are selected considering the IDs associated to each instance type (from 0 to 10), presented in Table 6. It is important to notice that at each

² http://github.com/DynamicVMP/workload-trace-generator
Table 4: Types of PMs considered in simulations. For notation see Section 4.

PM Type  | P_{r1,i} [ECU] | P_{r2,i} [GB] | P_{r3,i} [Mbps] | pmax_i [W] | c_i
H.small  | 32             | 128           | 1000            | 800        | 1
H.medium | 64             | 256           | 1000            | 1000       | 1
H.large  | 256            | 512           | 1000            | 3000       | 1
H.xlarge | 512            | 1024          | 20000           | 5000       | 1
Table 5: Number of PMs per type on each considered IaaS cloud datacenter.

PM Type  | DC1 | DC2 | DC3 | DC4 | DC5
H.small  | 50  | 30  | 20  | 15  | 10
H.medium | 50  | 30  | 20  | 10  | 10
H.large  | 50  | 30  | 15  | 10  | 9
H.xlarge | 30  | 10  | 8   | 10  | 8
Table 6: Virtual Machine (VM) types from Amazon EC2 considered for cloud service requests in simulations. For notation see Section 4.

ID | Instance Type | V_{r1,j} [ECU] | V_{r2,j} [GB] | V_{r3,j} [Mbps] | R_j [$]
0  | m4.large      | 6.5            | 8             | 450             | 0.12
1  | c4.large      | 8              | 3.75          | 500             | 0.105
2  | m4.xlarge     | 13             | 16            | 750             | 0.239
3  | c4.xlarge     | 16             | 7.5           | 750             | 0.209
4  | m4.2xlarge    | 26             | 32            | 1000            | 0.479
5  | c4.2xlarge    | 31             | 15            | 1000            | 0.419
6  | m4.4xlarge    | 53.5           | 64            | 2000            | 0.958
7  | c4.4xlarge    | 62             | 30            | 2000            | 0.838
8  | m4.10xlarge   | 124.5          | 160           | 4000            | 2.394
9  | c4.8xlarge    | 132            | 60            | 4000            | 1.675
10 | x1.32xlarge   | 349            | 1952          | 10000           | 13.338
Table 7: Most relevant inputs for the example workload trace presented in Table 8.

Parameter                                           | Input Data
Workload trace duration [t]                         | 5
Number of IaaS datacenters                          | 1
Number of cloud services                            | 1
VMs creation time [t]                               | Uniform(0, 3)
Virtual resources capacities                        | Uniform(0, 10)
Number of VMs per cloud service                     | Uniform(1, 10)
Utilization of CPU and RAM memory virtual resources | Poisson(0.7)
Utilization of networking virtual resources         | Poisson(0.7)
discrete time $t$, these resources may change (vertical elasticity). Similarly to the virtual resource capacities of VMs, the number of VMs per cloud service ($m_{S_b}(t)$) may change uniformly between 1 and 10 at each discrete time $t$ (horizontal elasticity). Additionally, the utilization of server as well as networking virtual resources ($U_{r1,j}(t)$ to $U_{r3,j}(t)$) is defined according to a Poisson PDF with an expected value of 0.7 (i.e. 70%).
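As a rough illustration of how such uncertain workload parameters could be sampled, the following Java sketch draws instance-type IDs and VM counts from Uniform distributions and a utilization value from a Poisson distribution; the CWTG tool itself is not reproduced here, and the scaling of the Poisson sample into a utilization fraction is an assumption of this sketch.

```java
import java.util.Random;

// Minimal sketch of sampling the uncertain workload parameters of Table 7.
public final class WorkloadSampling {

    private static final Random RANDOM = new Random(42); // fixed seed for reproducibility

    /** Uniformly samples an integer in [min, max], e.g. an instance-type ID in [0, 10]. */
    static int uniform(int min, int max) {
        return min + RANDOM.nextInt(max - min + 1);
    }

    /** Samples a Poisson-distributed value with the given mean (Knuth's method). */
    static int poisson(double mean) {
        double limit = Math.exp(-mean);
        int k = 0;
        double p = 1.0;
        do {
            k++;
            p *= RANDOM.nextDouble();
        } while (p > limit);
        return k - 1;
    }

    public static void main(String[] args) {
        int instanceTypeId = uniform(0, 10); // virtual resource capacities, as a Table 6 ID
        int vmsPerService = uniform(1, 10);  // horizontal elasticity
        // Assumed scaling: a Poisson sample with mean 7, divided by 10 and capped at 1.0,
        // yields a utilization fraction with an expected value of roughly 0.7 (70%).
        double cpuUtilization = Math.min(1.0, poisson(7.0) / 10.0);
        System.out.printf("type=%d, VMs=%d, CPU utilization=%.2f%n",
                instanceTypeId, vmsPerService, cpuUtilization);
    }
}
```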
In the workload trace example presented in Table 8, cloud service $S_1$ is composed of two VMs at $t=0$: $V_1$ and $V_2$. The virtual resource capacities of both VMs represent a c4.large instance type (see Table 6). Considering the high CPU utilization of both VMs (i.e. $U_{r1,1}(t) = 0.8$ and $U_{r1,2}(t) = 0.9$), a scale-up of VM resources is performed for the next time instant $t=1$.
As can be seen at $t=1$, the VMs associated to $S_1$ scaled to an instance type with more virtual resources: m4.xlarge (vertical elasticity). At $t=1$, the high CPU utilization persists, representing a possible alarm for scaling-out the cloud service (horizontal elasticity), as can be observed at $t=2$, where $S_1$ is composed of 3 VMs: $V_1$, $V_2$ and $V_3$. Next, a low utilization of CPU resources can be seen at $t=3$, representing a possible alarm for scaling-in the cloud service (horizontal elasticity). Finally, the virtual resource capacities of each VM that composes cloud service $S_1$ are scaled-down to a c4.large instance type (vertical elasticity), as can be observed at $t=4$.
It is worth noting that server and network resource utilization ($U_{r1,j}(t)$ to $U_{r3,j}(t)$) dynamically changes from $t=0$ to $t=4$, representing relevant information for CSPs to apply safe overbooking of physical resources (see example in Table 8). It is important to consider that the algorithm that decides when to auto-scale a cloud service is out of the scope of this work.
Considering the scenario-based uncertainty modeling approach presented in this work, each evaluated scenario $s \in S$ is composed of an IaaS datacenter and a workload trace of requested cloud services, totaling 400 different evaluated scenarios (i.e. 80 workload traces × 5 IaaS datacenters).
For simplicity, only one cloud computing datacenter is con-
sidered in this work, although in the proposed formulation it
is possible to consider more cloud computing datacenters. Ex-
perimenting with several geo-distributed datacenters is left as
a future work, considering that additional objective functions
should be taken into account for this type of deployment (e.g.
response time according to customers geographical location).
Experiments are summarized in what follows: ten runs of the algorithms A1 and A3 were performed for the 400 considered scenarios, taking into account the randomness of the MA considered for solving the VMPr phase when using A1 or A3. Average obtained results are presented in Table 9. Additionally, the same table also shows the results of one run of the deterministic A0 and A2 algorithms, performed with the same 400 scenarios.
The following sub-section summarizes the main findings obtained in the experimental evaluation of the four implemented algorithms for the proposed uncertain formulation of a two-phase optimization scheme for VMP problems previously presented in Section 4.
6.2. Experimental Results
The main goal of the presented experimental evaluation is to explore alternatives to answer the following research questions:
when or under what circumstances should the VMPr phase be triggered? (RQ1);
what should be done with cloud service requests arriving during VMPr recalculation times? (RQ2).
Table 9 presents values of the considered evaluation criteria, i.e. $F_1$, $F_2$ and $F_3$ costs (see (18) to (20)), summarizing the results obtained in the performed simulations. The mentioned evaluation criteria are presented separately for each of the five considered IaaS cloud datacenters. It is worth noting that the considered IaaS cloud datacenters represent datacenters of different sizes (see Table 5) and, consequently, the considered workload traces represent different loads of requested CPU resources (e.g. Low (~30%), Medium (~60%), High (~90%), Full (~98%) and Saturated (~120%) workloads). The main idea of evaluating different loads of requested CPU resources is inspired by [43].
Table 8: Example of workload trace for a complex IaaS environment. For notation see Section 4.

t | S_b | V_j | V_{r1,j}(t) [ECU] | V_{r2,j}(t) [GB] | V_{r3,j}(t) [Mbps] | R_j(t) [$] | U_{r1,j}(t) | U_{r2,j}(t) | U_{r3,j}(t) | Comment
0 | S1  | V1  | 8  | 3.75 | 500 | 0.105 | 0.8 | 0.6 | 0.1 | S1 requests 2 VMs: V1 and V2 (c4.large)
0 | S1  | V2  | 8  | 3.75 | 500 | 0.105 | 0.9 | 0.7 | 0.1 |
1 | S1  | V1  | 13 | 16   | 750 | 0.239 | 0.7 | 0.5 | 0.3 | S1 scales-up V1 and V2 (to m4.xlarge)
1 | S1  | V2  | 13 | 16   | 750 | 0.239 | 0.7 | 0.5 | 0.3 |
2 | S1  | V1  | 13 | 16   | 750 | 0.239 | 0.6 | 0.3 | 0.2 | S1 scales-out adding V3 (m4.xlarge)
2 | S1  | V2  | 13 | 16   | 750 | 0.239 | 0.6 | 0.3 | 0.2 |
2 | S1  | V3  | 13 | 16   | 750 | 0.239 | 0.6 | 0.3 | 0.2 |
3 | S1  | V1  | 13 | 16   | 750 | 0.239 | 0.2 | 0.3 | 0.1 | S1 scales-in releasing V3 (m4.xlarge)
3 | S1  | V2  | 13 | 16   | 750 | 0.239 | 0.3 | 0.4 | 0.2 |
4 | S1  | V1  | 8  | 3.75 | 500 | 0.105 | 0.5 | 0.6 | 0.1 | S1 scales-down V1 and V2 (to c4.large)
4 | S1  | V2  | 8  | 3.75 | 500 | 0.105 | 0.5 | 0.6 | 0.1 |
Table 9: Summary of evaluation criteria in experimental results for evaluated algorithms.

Criterion | Algorithm | DC1   | DC2   | DC3   | DC4   | DC5   | Ranking
F1        | A0        | 0.691 | 0.758 | 0.855 | 0.901 | 0.934 | 3rd
F1        | A1        | 0.691 | 0.758 | 0.855 | 0.901 | 0.934 | 3rd
F1        | A2        | 0.684 | 0.750 | 0.847 | 0.898 | 0.931 | 2nd
F1        | A3        | 0.636 | 0.701 | 0.819 | 0.799 | 0.839 | 1st
F2        | A0        | 0.773 | 0.876 | 0.917 | 0.962 | 0.998 | 3rd
F2        | A1        | 0.773 | 0.876 | 0.917 | 0.962 | 0.998 | 3rd
F2        | A2        | 0.763 | 0.835 | 0.918 | 0.959 | 0.995 | 2nd
F2        | A3        | 0.738 | 0.764 | 0.876 | 0.860 | 0.897 | 1st
F3        | A0        | 0.603 | 0.653 | 0.750 | 0.806 | 0.840 | 3rd
F3        | A1        | 0.603 | 0.653 | 0.750 | 0.806 | 0.840 | 3rd
F3        | A2        | 0.593 | 0.652 | 0.741 | 0.797 | 0.827 | 2nd
F3        | A3        | 0.534 | 0.593 | 0.677 | 0.673 | 0.708 | 1st
Table 10: Summary of evaluation criteria in experimental results for evaluated algorithms considering f1(x,t): power consumption.

Criterion | Algorithm | DC1   | DC2   | DC3   | DC4   | DC5   | Ranking
F^1_1     | A0        | 0.166 | 0.305 | 0.460 | 0.554 | 0.587 | 3rd
F^1_1     | A1        | 0.166 | 0.305 | 0.460 | 0.554 | 0.587 | 3rd
F^1_1     | A2        | 0.164 | 0.301 | 0.454 | 0.553 | 0.587 | 2nd
F^1_1     | A3        | 0.153 | 0.248 | 0.348 | 0.429 | 0.452 | 1st
F^1_2     | A0        | 0.239 | 0.474 | 0.564 | 0.623 | 0.655 | 3rd
F^1_2     | A1        | 0.239 | 0.474 | 0.564 | 0.623 | 0.655 | 3rd
F^1_2     | A2        | 0.233 | 0.444 | 0.562 | 0.614 | 0.648 | 2nd
F^1_2     | A3        | 0.212 | 0.292 | 0.428 | 0.479 | 0.533 | 1st
F^1_3     | A0        | 0.116 | 0.236 | 0.386 | 0.412 | 0.476 | 2nd
F^1_3     | A1        | 0.116 | 0.236 | 0.386 | 0.412 | 0.476 | 2nd
F^1_3     | A2        | 0.117 | 0.242 | 0.385 | 0.423 | 0.454 | 3rd
F^1_3     | A3        | 0.105 | 0.219 | 0.306 | 0.291 | 0.349 | 1st
[Figure 2: Temporal average cost: average values of f_s(x,t) in DC1 to DC5 per each scenario s ∈ S. Legend: A0 (inspired by [38]), A1 (inspired by [15]), A2 (inspired by [22]), A3 (proposed in this work).]
Based on the information presented in Table 9, the Main Findings (MFs) of the experimental evaluation performed in this work are summarized as follows:
MF1: Algorithm A3, which considered the proposed VMPr Triggering and VMPr Recovering methods, outperformed all other evaluated algorithms in every experiment, taking into account the considered evaluation criteria ($F_1$ to $F_3$).
In summary, A3 obtained better results (minimum cost) for the three considered evaluation criteria, as presented in Table 9. When considering average objective function costs ($F_1$) as the evaluation criterion, A3 obtained between 3.4% and 12.4% better results than A2, as well as between 4.4% and 12.9% better results than A0 and A1. Additionally, when considering maximum objective function costs ($F_2$) as the evaluation criterion, the proposed A3 obtained between 3.3% and 14.1% better results than A2, which performed as the second best algorithm in this case. When compared to A0 and A1, the proposed A3 algorithm obtained between 4.7% and 15.4% better results. Finally, A3 obtained between 9.9% and 11.4% better results than A2 when considering minimum objective function costs ($F_3$) as the evaluation criterion. The A3 algorithm also obtained between 10.1% and 14.7% better results than A0 and A1.
To better understand the experimental evaluation summarized in Table 9, Figure 2 illustrates the temporal average cost of the single combined objective function for all scenarios $s \in S$, denoted as $f_s(x,t)$ in (17).
MF2: The proposed A3 outperformed the other evaluated algorithms in the considered scenarios, when considering average values of the single combined objective function on each scenario $s \in S$.
As presented in Figure 2, A3 outperformed the other 3 algorithms in all of the considered scenarios. A3 was the best algorithm in 100% of the 400 carefully designed and evaluated scenarios with different loads of requested CPU resources.
Summarizing, according to the performed experimental evaluation, the algorithm that considered the proposed prediction-based VMPr Triggering and update-based VMPr Recovering methods (A3) is the clear alternative for solving the uncertain VMP problem in a two-phase optimization scheme, considering the simulation results presented in this section.
Additionally, to study the benefits of the proposed approach for the individual objectives, Tables 10 to 12 present values of the considered evaluation criteria for each individual normalized objective function $\hat{f}_i(x,t)$, i.e. $F_1^i$, $F_2^i$ and $F_3^i$ costs, summarizing the results obtained in the performed simulations. The mentioned evaluation criteria are also presented separately for each of the five considered IaaS cloud datacenters.
Based on the information presented in Tables 10 to 12, additional Main Findings (MFs) are summarized as follows:
MF3: Algorithm A3, which considered the proposed VMPr Triggering and VMPr Recovering methods, outperformed all other evaluated algorithms, taking into account the considered evaluation criteria associated to power consumption ($F_1^1$ to $F_3^1$).
In summary, A3 obtained the minimum power consumption taking into account the three considered evaluation criteria, as presented in Table 10.
When considering average objective function costs ($F_1^1$) as the evaluation criterion, A3 obtained between 7.5% and 30.4% better results than A2, as well as between 8.7% and 32.1% better results than A0 and A1. Additionally, when considering maximum objective function costs ($F_2^1$) as the evaluation criterion, the proposed A3 obtained between 9.9% and 41.4% better results than A2, which performed as the second best algorithm in this case. When compared to A0 and A1, the proposed A3 algorithm obtained between 12.7% and 45.1% better results. Finally, A3 obtained between 8.0% and 52.1% better results than A2 when considering minimum objective function costs ($F_3^1$) as the evaluation criterion. The A3 algorithm also obtained between 10.8% and 62.5% better results than A0 and A1.
It is important to remember that power consumption management is a widely studied issue in the provider-oriented VMP literature, with high impact on operational costs and carbon dioxide emissions of cloud datacenter operations [44].
Table 11: Summary of evaluation criteria in experimental results for evaluated algorithms considering f2(x,t): economical revenue.

Criterion | Algorithm | DC1   | DC2   | DC3   | DC4   | DC5   | Ranking
F^2_1     | A0        | 0.155 | 0.188 | 0.226 | 0.196 | 0.240 | 1st
F^2_1     | A1        | 0.155 | 0.188 | 0.226 | 0.196 | 0.240 | 1st
F^2_1     | A2        | 0.155 | 0.189 | 0.228 | 0.197 | 0.242 | 2nd
F^2_1     | A3        | 0.155 | 0.189 | 0.228 | 0.197 | 0.242 | 2nd
F^2_2     | A0        | 0.278 | 0.335 | 0.376 | 0.348 | 0.396 | 1st
F^2_2     | A1        | 0.278 | 0.335 | 0.376 | 0.348 | 0.396 | 1st
F^2_2     | A2        | 0.278 | 0.335 | 0.376 | 0.348 | 0.397 | 2nd
F^2_2     | A3        | 0.278 | 0.335 | 0.376 | 0.348 | 0.397 | 2nd
F^2_3     | A0        | 0.005 | 0.005 | 0.004 | 0.003 | 0.003 | 3rd
F^2_3     | A1        | 0.005 | 0.005 | 0.004 | 0.003 | 0.003 | 3rd
F^2_3     | A2        | 0.005 | 0.004 | 0.003 | 0.003 | 0.003 | 2nd
F^2_3     | A3        | 0.005 | 0.003 | 0.003 | 0.003 | 0.002 | 1st
Table 12: Summary of evaluation criteria in experimental results for evaluated algorithms considering f3(x,t): resource utilization.

Criterion | Algorithm | DC1   | DC2   | DC3   | DC4   | DC5   | Ranking
F^3_1     | A0        | 0.636 | 0.642 | 0.650 | 0.644 | 0.645 | 3rd
F^3_1     | A1        | 0.636 | 0.642 | 0.650 | 0.644 | 0.645 | 3rd
F^3_1     | A2        | 0.629 | 0.635 | 0.643 | 0.640 | 0.641 | 2nd
F^3_1     | A3        | 0.604 | 0.609 | 0.623 | 0.563 | 0.570 | 1st
F^3_2     | A0        | 0.728 | 0.724 | 0.707 | 0.699 | 0.702 | 3rd
F^3_2     | A1        | 0.728 | 0.724 | 0.707 | 0.699 | 0.702 | 3rd
F^3_2     | A2        | 0.718 | 0.716 | 0.707 | 0.697 | 0.701 | 2nd
F^3_2     | A3        | 0.700 | 0.697 | 0.689 | 0.672 | 0.672 | 1st
F^3_3     | A0        | 0.589 | 0.597 | 0.606 | 0.606 | 0.609 | 3rd
F^3_3     | A1        | 0.589 | 0.597 | 0.606 | 0.606 | 0.609 | 3rd
F^3_3     | A2        | 0.578 | 0.592 | 0.603 | 0.602 | 0.605 | 2nd
F^3_3     | A3        | 0.578 | 0.548 | 0.585 | 0.500 | 0.508 | 1st
MF4: Taking into account the considered evaluation criteria associated to economical revenue ($F_1^2$ to $F_3^2$), all four evaluated algorithms performed almost equally well.
As can be seen in Table 11, all four evaluated algorithms performed equally when considering a low CPU load (DC1) for all evaluated criteria ($F_1^2$ to $F_3^2$).
In most of the other evaluated CPU load scenarios, A0 and A1 outperformed the other algorithms by a very small difference. This could be mainly caused by the migration overhead introduced by live migration in the VMPr phase, considering that A0 and A1 did not execute the VMPr phase in any evaluated scenario.
MF5: Algorithm A3, which considered the proposed VMPr Triggering and VMPr Recovering methods, outperformed all other evaluated algorithms, taking into account the considered evaluation criteria associated to resource utilization ($F_1^3$ to $F_3^3$).
When considering average objective function costs ($F_1^3$) as the evaluation criterion, A3 obtained between 3.2% and 13.6% better results than A2, as well as between 4.3% and 14.4% better results than A0 and A1. Additionally, when considering maximum objective function costs ($F_2^3$) as the evaluation criterion, the proposed A3 obtained between 2.5% and 20.4% better results than A2, which performed as the second best algorithm in this case. When compared to A0 and A1, the proposed A3 algorithm obtained between 2.5% and 21.3% better results. Finally, A3 obtained up to 4.4% better results than A2 when considering minimum objective function costs ($F_3^3$) as the evaluation criterion. The A3 algorithm also obtained between 1.8% and 4.5% better results than A0 and A1.
In summary, A3 obtained the minimum resource wastage, taking into account the three considered evaluation criteria, as presented in Table 12.
7. Conclusions and Future Work
This work presented a complex IaaS environment for VMP
problems considering service elasticity, including both vertical
and horizontal scaling of cloud services, as well as overbooking
of physical resources, including both server (CPU and RAM)
and networking resources (see Section 4.1).
The proposed complex IaaS environment for VMP problems was studied in a two-phase optimization scheme, combining advantages of both online and offline VMP formulations, where a novel prediction-based VMPr Triggering method was proposed to decide when or under what circumstances to trigger a placement reconfiguration (Research Question 1), as well as a novel update-based VMPr Recovering method to decide what to do with VMs requested during VMPr recalculation times (Research Question 2), as described in Sections 5.3.3 and 5.4.2.
A revised formulation of an uncertain VMP problem considering the above mentioned contributions was also proposed, for the optimization of the following four objective functions: (i) power consumption, (ii) economical revenue, (iii) resource utilization and (iv) placement reconfiguration time (see Section 4.5), in a context of a Multi-Objective problem solved as Mono-Objective (MAM) [29].
Additionally, a first scenario-based uncertainty approach for modeling relevant uncertain parameters, considering the proposed complex IaaS environment in the two-phase optimization scheme for VMP problems, was presented. The considered uncertain parameters were: (i) virtual resource capacities (vertical elasticity), (ii) number of VMs that compose cloud services (horizontal elasticity), (iii) utilization of CPU and RAM memory virtual resources and (iv) utilization of networking virtual resources (both relevant for overbooking). Each considered uncertain parameter was modeled considering two different PDFs: (i) Uniform and (ii) Poisson.
Trying to answer Research Questions 1 and 2, the proposed
VMPr Triggering and VMPr Recovering methods were exper-
imentally evaluated against other alternatives identified in the
specialized literature (see Table 1), considering 400 scenarios.
Experimental results in simulations suggested that the best
algorithm for solving the proposed uncertain VMP problem in
a two-phase optimization scheme is the one considering the pro-
posed prediction-based VMPr Triggering (answer to Research
Question 1) and update-based VMPr Recovering methods (an-
swer to Research Question 2) used by the A3 algorithm.
Several future works were also identified, mainly considering
the novelty of the contributions of this work. First, a formula-
tion of a VMP problem considering a dynamic set of PMs H(t),
to consider PM crashes, maintenance or even deployment of
new generation hardware is proposed as a future work.
Although modeling power consumption through a linear relationship with CPU utilization is a widely accepted approach in the specialized literature, considering the impact of other resources, such as RAM and networking, is proposed as future work.
Considering VMP formulations with more sophisticated
cloud federation approaches is also left as a future work, taking
into account the basic cloud federation approach considered in
this work. Additionally, an experimental evaluation of alterna-
tive algorithms for both iVMP and VMPr phase is proposed as
a future work, in order to explore performance issues with the
proposed VMPr Triggering and VMPr Recovering methods.
Novel VMPr Triggering and VMPr Recovering methods could still be proposed to improve the considered two-phase optimization scheme. A more detailed experimental evaluation of different parameters of the proposed VMP formulation should also be considered, evaluating different protection factors $\lambda_k$ (see Section 4.4.2), penalty factors $\phi_k$ (see Section 4.5.2) or even different scalarization methods (see Section 4.6).
The authors of this work also recognize the importance of jointly considering auto-scaling algorithms with the proposed two-phase optimization scheme for VMP problems, mainly for elastic cloud services such as those considered in this work.
Experimenting with geo-distributed datacenters is also left as
a future work, taking into account that simulations presented in
this work considered only one cloud computing datacenter.
Finally, the authors are already working on implementing the evaluated algorithms in IaaS middlewares (e.g. OpenStack, http://www.openstack.org) to evaluate the proposed methods in real-world cloud computing datacenters supporting real workloads of cloud applications.
8. References
[1] S. S. Manvi, G. K. Shyam, Resource management for infrastructure as
a service (iaas) in cloud computing: A survey, Journal of Network and
Computer Applications 41 (2014) 424–440.
[2] D. Breitgand, A. Epstein, Sla-aware placement of multi-virtual machine
elastic services in compute clouds, in: Integrated Network Management
(IM), 2011 IFIP/IEEE International Symposium on, IEEE, 2011, pp. 161–
168.
[3] L. Tomás, J. Tordsson, An autonomic approach to risk-aware data center overbooking, IEEE Transactions on Cloud Computing 2 (3) (2014) 292–305.
[4] J. Ortigoza, F. López-Pires, B. Barán, A taxonomy on dynamic environments for provider-oriented virtual machine placement, in: 2016 IEEE International Conference on Cloud Engineering (IC2E), 2016, pp. 214–215. doi:10.1109/IC2E.2016.18.
[5] B. Speitkamp, M. Bichler, A mathematical programming approach for
server consolidation problems in virtualized data centers, Services Com-
puting, IEEE Transactions on 3 (4) (2010) 266–278.
[6] F. López-Pires, B. Barán, A virtual machine placement taxonomy, in: Cluster, Cloud and Grid Computing (CCGrid), 2015 15th IEEE/ACM International Symposium on, IEEE Computer Society, 2015, pp. 159–168. doi:10.1109/CCGrid.2015.15.
[7] F. López-Pires, B. Barán, A. Amarilla, L. Benítez, R. Ferreira, S. Zalimben, An experimental comparison of algorithms for virtual machine placement considering many objectives, in: 9th Latin America Networking Conference (LANC), 2016, pp. 75–79.
[8] P. Mell, T. Grance, The nist definition of cloud computing, National In-
stitute of Standards and Technology 53 (6) (2009) 50.
[9] Z. Á. Mann, Allocation of virtual machines in cloud data centers - A survey of problem models and optimization algorithms, ACM Computing Surveys (CSUR) 48 (1) (2015) 11.
[10] K. Li, J. Wu, A. Blaisse, Elasticity-aware virtual machine placement for
cloud datacenters, in: Cloud Networking (CloudNet), 2013 IEEE 2nd In-
ternational Conference on, IEEE, 2013, pp. 99–107.
[11] W. Wang, H. Chen, X. Chen, An availability-aware virtual machine place-
ment approach for dynamic scaling of cloud applications, in: Ubiqui-
tous Intelligence & Computing and 9th International Conference on Au-
tonomic & Trusted Computing (UIC/ATC), 2012 9th International Con-
ference on, IEEE, 2012, pp. 509–516.
[12] A. Anand, J. Lakshmi, S. Nandy, Virtual machine placement optimiza-
tion supporting performance SLAs, in: Cloud Computing Technology
and Science (CloudCom), 2013 IEEE 5th International Conference on,
Vol. 1, IEEE, 2013, pp. 298–305.
[13] A. Beloglazov, R. Buyya, Optimal online deterministic algorithms and adaptive heuristics for energy and performance efficient dynamic consolidation of virtual machines in cloud data centers, Concurrency and Computation: Practice and Experience 24 (13) (2012) 1397–1420.
[14] F. López-Pires, B. Barán, Multi-objective virtual machine placement with service level agreement: A memetic algorithm approach, in: Proceedings of the 2013 IEEE/ACM 6th International Conference on Utility and Cloud Computing, IEEE Computer Society, 2013, pp. 203–210.
[15] N. M. Calcavecchia, O. Biran, E. Hadad, Y. Moatti, Vm placement strate-
gies for cloud scenarios, in: Cloud Computing (CLOUD), 2012 IEEE 5th
International Conference on, IEEE, 2012, pp. 852–859.
[16] W. Yue, Q. Chen, Dynamic placement of virtual machines with both de-
terministic and stochastic demands for green cloud computing, Mathe-
matical Problems in Engineering 2014.
[17] E. Feller, C. Morin, A. Esnault, A case for fully decentralized dynamic vm
consolidation in clouds, in: Cloud Computing Technology and Science
(CloudCom), 2012 IEEE 4th International Conference on, IEEE, 2012,
pp. 26–33.
[18] X.-F. Liu, Z.-H. Zhan, K.-J. Du, W.-N. Chen, Energy aware virtual ma-
chine placement scheduling in cloud computing based on ant colony op-
timization approach, in: Proceedings of the 2014 conference on Genetic
and evolutionary computation, ACM, 2014, pp. 41–48.
[19] F. Farahnakian, R. Bahsoon, P. Liljeberg, T. Pahikkala, Self-adaptive resource management system in IaaS clouds, in: 9th International Conference on Cloud Computing (IEEE CLOUD), IEEE, 2016, pp. 553–560.
[20] Q. Zheng, R. Li, X. Li, N. Shah, J. Zhang, F. Tian, K.-M. Chao,
J. Li, Virtual machine consolidated placement based on multi-objective
biogeography-based optimization, Future Generation Computer Systems
54 (2016) 95–122.
[21] P. Svärd, W. Li, E. Wadbro, J. Tordsson, E. Elmroth, et al., Continuous datacenter consolidation, in: 2015 IEEE 7th International Conference on Cloud Computing Technology and Science (CloudCom), IEEE, 2015, pp. 387–396.
[22] A. Beloglazov, J. Abawajy, R. Buyya, Energy-aware resource allocation heuristics for efficient management of data centers for cloud computing, Future Generation Computer Systems 28 (5) (2012) 755–768.
[23] J. Shi, F. Dong, J. Zhang, J. Luo, D. Ding, Two-phase online virtual ma-
chine placement in heterogeneous cloud data center, in: Systems, Man,
and Cybernetics (SMC), 2015 IEEE International Conference on, IEEE,
2015, pp. 1369–1374.
[24] M. Tighe, M. Bauer, Integrating cloud application autoscaling with dy-
namic vm allocation, in: 2014 IEEE Network Operations and Manage-
ment Symposium (NOMS), IEEE, 2014, pp. 1–9.
[25] A. Tchernykh, U. Schwiegelsohn, V. Alexandrov, E.-g. Talbi, Towards
understanding uncertainty in cloud computing resource provisioning, Pro-
cedia Computer Science 51 (2015) 1772–1781.
[26] M. A. Aloulou, F. Della Croce, Complexity of single machine scheduling
problems under scenario-based uncertainty, Operations Research Letters
36 (3) (2008) 338–342.
[27] S. Chaisiri, B.-S. Lee, D. Niyato, Optimal virtual machine placement
across multiple cloud providers, in: Services Computing Conference,
2009. APSCC 2009. IEEE Asia-Pacific, IEEE, 2009, pp. 103–110.
[28] S. Chaisiri, B.-S. Lee, D. Niyato, Optimization of resource provisioning
cost in cloud computing, IEEE Transactions on Services Computing 5 (2)
(2012) 164–177.
[29] D. Ihara, F. López-Pires, B. Barán, Many-objective virtual machine placement for dynamic environments, in: 2015 IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC), IEEE, 2015, pp. 75–79.
[30] F. López-Pires, B. Barán, A many-objective optimization framework for virtualized datacenters, in: Proceedings of the 2015 5th International Conference on Cloud Computing and Service Science, 2015, pp. 439–450. doi:10.5220/0005434604390450.
[31] O. Biran, A. Corradi, M. Fanelli, L. Foschini, A. Nus, D. Raz, E. Sil-
vera, A stable network-aware vm placement for cloud systems, in: Clus-
ter, Cloud and Grid Computing (CCGrid), 2012 12th IEEE/ACM Interna-
tional Symposium on, IEEE, 2012, pp. 498–506.
[32] M. Mishra, A. Sahoo, On theory of vm placement: Anomalies in existing
methodologies and their mitigation using a novel vector based approach,
in: Cloud Computing (CLOUD), 2011 IEEE International Conference on,
IEEE, 2011, pp. 275–282.
[33] L. Shi, J. Furlong, R. Wang, Empirical evaluation of vector bin packing algorithms for energy efficient data centers, in: Computers and Communications (ISCC), 2013 IEEE Symposium on, IEEE, 2013, pp. 000009–000015.
[34] F. López-Pires, B. Barán, Virtual machine placement literature review, http://arxiv.org/abs/1506.01509.
[35] M. Gahlawat, P. Sharma, Survey of virtual machine placement in federated clouds, in: Advance Computing Conference (IACC), 2014 IEEE International, 2014, pp. 735–738. doi:10.1109/IAdCC.2014.6779415.
[36] P. Svärd, B. Hudzia, S. Walsh, J. Tordsson, E. Elmroth, Principles and performance characteristics of algorithms for live vm migration, ACM SIGOPS Operating Systems Review 49 (1) (2015) 142–155.
[37] C. C. Coello, G. B. Lamont, D. A. Van Veldhuizen, Evolutionary algo-
rithms for solving multi-objective problems, Springer, 2007.
[38] S. Fang, R. Kanagavelu, B.-S. Lee, C. H. Foh, K. M. M. Aung, Power-efficient virtual machine placement and migration in data centers, in: Green Computing and Communications (GreenCom), 2013 IEEE and Internet of Things (iThings/CPSCom), IEEE International Conference on and IEEE Cyber, Physical and Social Computing, IEEE, 2013, pp. 1408–1413.
[39] J. Huang, C. Li, J. Yu, Resource prediction based on double exponential
smoothing in cloud computing, in: 2012 2nd International Conference on
Consumer Electronics, Communications and Networks (CECNet), 2012,
pp. 2056–2060. doi:10.1109/CECNet.2012.6201461.
[40] K. Deb, A. Pratap, S. Agarwal, T. Meyarivan, A fast and elitist multi-
objective genetic algorithm: NSGA-II, Evolutionary Computation, IEEE
Transactions on 6 (2) (2002) 182–197.
[41] F. López-Pires, B. Barán, Multi-objective virtual machine placement with service level agreement: A memetic algorithm approach, in: Proceedings of the 2013 IEEE/ACM 6th International Conference on Utility and Cloud Computing, IEEE Computer Society, 2013, pp. 203–210.
[42] J. Ortigoza, F. López-Pires, B. Barán, Workload generation for virtual machine placement in cloud computing environments, in: 2016 XLII Latin American Computing Conference (CLEI), 2016, pp. 1–9. doi:10.1109/CLEI.2016.7833348.
[43] D. P. Pinto-Roa, C. A. Brizuela, B. Barán, Multi-objective routing and wavelength converter allocation under uncertain traffic, Optical Switching and Networking 16 (2015) 1–20.
[44] F. López-Pires, B. Barán, Cloud computing resource allocation taxonomies, International Journal of Cloud Computing, to appear.
Fabio López-Pires received a degree in Informatics Engineering (2010), a M.Sc. in Networks and Data Communications (2014) and a D.Sc. in Computer Science (2017) from the National University of Asunción in Paraguay. Currently, he works as Head of the Distributed Systems and Parallel Computing Group at the Itaipu Technological Park. His research interests mainly focus on Cloud Computing, Evolutionary Algorithms and Multi-Objective Optimization.

Prof. Benjamín Barán received a degree in Electronic Engineering (1983) from the National University of Asunción in Paraguay, a M.Sc. in Electrical and Computer Engineering (1987) at Northeastern University in the U.S.A. and a Ph.D. degree in Computer Science (1993) at the Federal University of Rio de Janeiro in Brazil. With more than 3 decades of teaching and research experience at several universities, he received the Paraguayan Science Award in 1996 and the Pan-American Prize of Scientific Computing in 2012, among a dozen international awards. He was President of the Latin American Center on Informatics Studies (CLEI) and Research Coordinator at the National Computing Center (CNC) of the National University of Asunción. Dr. Barán is President of CBA S.A. His research interests focus on Cloud Computing, Evolutionary Computation, Multi-Objective Optimization and Optical Networks.

Leonardo Benítez is a student of Informatics Engineering at the National University of Asunción in Paraguay. Currently, he is working as an independent Software Developer. His research interests focus on Multi-Objective Optimization, Cloud Computing, Software Engineering and Web Technologies.

Saúl Zalimben received a degree in Informatics Engineering (2017) from the National University of Asunción in Paraguay, working as an independent Software Developer. His research interests focus on Multi-Objective Optimization, Cloud Computing, Datacenters and Software Engineering.

Augusto Amarilla is a student of Informatics Engineering at the National University of Asunción in Paraguay, currently doing research as part of his final project to obtain an Informatics Engineering degree. He also works as a full-stack developer and team leader at Software Natura. His research interests focus on Cloud Computing, Evolutionary Algorithms and Multi-Objective Optimization.
... But, authors have not considered the elasticity in their work. Another two-phase approached for VMP proposed by (López-Pires et al., 2018) has considered complex IaaS environment including elasticity but did not consider the application performance while placing the additional VM of autoscaling requests. ...
Article
Due to pay-as-you-go style adopted by cloud datacenters (DC), modern day applications having intercommunicating tasks depend on DC for their computing power. Due to unpredictability of rate at which data arrives for immediate processing, application performance depends on autoscaling service of DC. Normal VM placement schemes place these tasks arbitrarily onto different physical machines (PM) leading to unwanted network traffic resulting in poor application performance and increases the DC operating cost. This paper formulates autoscaling and intercommunication aware task placements (AIATP) as an optimization problem, with additional constraints and proposes solution, which uses the placement knowledge of prior tasks of individual applications. When compared with well-known algorithms, CloudsimPlus-based simulation demonstrates that AIATP reduces the resource fragmentation (30%) and increases the resource utilization (18%) leading to minimal number of active PMs. AIATP places 90% tasks of an application together and thus reduces the number of VM migration (39%) while balancing the PMs.
... The intrusion detection system detects whether hackers are maliciously attacking and destroying computer or network resources to carry out corresponding malicious processing [13]. Intrusion prevention detection uses bypass technology to monitor and detect network data packets 24 h a day, which not only does not affect the performance of the computer network, but also can accurately determine whether the detected data packet contains abnormal attack behaviors, and use the corresponding technology to respond to the administrator [14]. It can detect abnormal behaviors of electronic systems and networks, interrupt the source of intrusion in time, protect the scene, and notify network administrators in various ways to ensure system security [15]. ...
Article
Full-text available
With the continuous development of network technology and the continuous expansion of network scale, the security of the network has suffered more threats, and the attacks facing them have become more and more extensive. The frequent occurrence of network security incidents has caused huge losses. Facing an increasingly severe situation, it is necessary to adopt various network security technologies to solve the problem. Intrusion detection technology can detect internal and external network attacks, respond before the intrusion occurs, and send out alarm information for timely and effective processing. This article mainly introduces the research of cloud computing intrusion detection technology based on BP neural network (BP-NN), and intends to provide ideas and directions for the development of cloud computing intrusion detection technology based on BP-NN. This paper proposes research methods of cloud computing intrusion detection technology based on BP-NN, including BP-NN algorithm, neural network cloud computing intrusion detection technology and artificial bee colony optimization algorithm, which are used to conduct cloud computing intrusion detection technology experiment based on BP-NN; Proposed an artificial bee colony optimization neural network algorithm; designed a cloud computing intrusion detection system based on BP-NN. Experimental result shows that the average detection rate of the ABC-BP network algorithm is 92.67 %, which can effectively distinguish normal data from abnormal data.
... Fard et al. consider the lower and upper bounds of the processing time for executing workflow applications on the cloud [48]. The work of Fabio et al. consider service elasticity, which includes scaling of cloud computing services and overbooking [49]. Roland et al. propose a realistic cloud workflow simulation with noisy parameters [50]. ...
Article
The rapid growth of the cloud industry has increased challenges in the proper governance of the cloud infrastructure. Many intelligent systems have been developing, considering uncertainties in the cloud. Intelligent approaches with the consideration of uncertainties bring optimal management with higher profitability. Uncertainties of different levels and different types exist in various domains of cloud computing. This survey aims to discuss all types of uncertainties and their effect on different components of cloud computing. The article first presents the concept of uncertainty and its quantification. A vast number of uncertain events influence the cloud, as it is connected with the entire world through the internet. Five major uncertain parameters are identified, which are directly affected by numerous uncertain events and affect the performance of the cloud. Notable events affecting major uncertain parameters are also described. Besides, we present notable uncertainty-aware research works in cloud computing. A hype curve on uncertainty-aware approaches in the cloud is also presented to visualize current conditions and future possibilities. We expect the inauguration of numerous uncertainty-aware intelligent systems in cloud management over time. This article may provide a deeper understanding of managing cloud resources with uncertainties efficiently to future cloud researchers.
Article
Full-text available
The rapid growths in demand for computing resources and shift to Cloud Computing (CC) paradigm have necessitated the establishment of Virtualized Data Centers (DCs) known as Server Farms. In these centers, computing resources are virtualized, which consequently consume enormous electrical energy. This theoretical review presents background information and assess topic in a state of flux relating to energy-saving criteria in DCs using Ant Colony System (ACS) algorithms. The purpose is to avail its readership both early and mid-career researchers in computer discipline; critical literature review, existing research gaps and comprehensive bibliography as reference materials. Also, a literature review on other energy reduction using greedy approaches is included. Furthermore, sources of this review, research tools used by previous authors, adopted energy-saving strategies, taxonomy of ACS variants and reputable World's publishers were analyzed. These are expected to pave way for the laying foundation in future research works.
Article
Cloud data centers do not completely use their resources, resulting in resource underutilization. Cloud computing companies primarily leverage virtualization technologies to supply cost‐effective service provision. In order to optimize cloud performance, virtual machines (VMs) must be placed among physical machines (PMs). When it comes to concentrating on the issues in the cloud computing environment, effective VM placement (VMP) is one of the primary difficulties that might cost suppliers money. VMP may be applied in a variety of ways in cloud computing. In terms of lowering related processing overhead, consolidating the cloud environment to become a highly on‐demand method, balancing the load among PMs, power usage, and refining performance, VMP techniques still require improvement in the computing environment. This study aims to provide a comprehensive overview of VMP approaches. This article provides an up‐to‐date survey of the most related VMP literature to highlight study possibilities in cloud settings utilizing nature‐inspired metaheuristic algorithms. The findings suggest that placing VMs in the most efficient place saves power usage substantially. The key problem is to minimize data center energy usage without compromising performance or breaking service level agreements. Finally, we will discuss and look at what further may be accomplished in this line of science.
Article
With the rapid development of virtualization techniques, cloud data centers allow for cost-effective, flexible, and customizable deployments of applications on virtualized infrastructure. Virtual machine (VM) placement aims to assign each virtual machine to a server in the cloud environment. VM Placement is of paramount importance to the design of cloud data centers. Typically, VM placement involves complex relations and multiple design factors as well as local policies that govern the assignment decisions. It also involves different constituents including cloud administrators and customers that might have disparate preferences while opting for a placement solution. Thus, it is often valuable to return not only an optimized solution to the VM placement problem but also a solution that reflects the given preferences of the constituents. In this article, we provide a detailed review on the role of preferences in the recent literature on VM placement. We examine different preference representations found in the literature, explain their existing usage, and explain the adopted solving approaches. We further discuss key challenges and identify possible research opportunities to better incorporate preferences within the context of VM placement.
Conference Paper
Full-text available
Cloud computing datacenters provide millions of virtual machines (VMs) in actual cloud markets. Nowadays, efficient location of these VMs into available physical machines (PMs) represents a research challenge, considering the large number of existing formulations and optimization criteria. Several techniques have been studied for the Virtual Machine Placement (VMP) problem. However, each article performs experiments with different datasets, making difficult the comparison between different formulations and solution techniques. Considering the absence of a highly recognized and accepted benchmark to study the VMP problem, this work proposes and implements a Workload Generator to enable the generation of different instances of the VMP problem for cloud computing environments, based on different configurable parameters. Additionally, this work also provides a set of pre-generated instances of the VMP that facilitates the comparison of different solution techniques of the VMP problem for the most diverse dynamic environments identified in the state-of-the-art.
Conference Paper
Full-text available
Cloud computing datacenters provide thousands to millions of virtual machines (VMs) on-demand in highly dynamic environments, requiring quick placement of requested VMs into available physical machines (PMs). Due to the randomness of customer requests, the Virtual Machine Placement (VMP) should be formulated as an online optimization problem. This work presents a formulation of a VMP problem considering the optimization of the following objective functions: (1) power consumption, (2) economical revenue, (3) quality of service and (4) resource utilization. To analyze alternatives to solve the formulated problem, an experimental comparison of fi�ve diff�erent online deterministic heuristics against an offl�ine memetic algorithm with migration of VMs was performed, considering several experimental workloads. Simulations indicate that First-Fit Decreasing algorithm (A4) outperforms other evaluated heuristics on average. Experimental results prove that an offl�ine memetic algorithm improves the quality of the solutions with migrations of VMs at the expense of placement recon�gurations.
Conference Paper
Full-text available
Cloud computing datacenters provide millions of virtual machines in actual cloud markets. In this context, Virtual Machine Placement (VMP) is one of the most challenging problems in cloud infrastructure management, considering the large number of possible optimization criteria and different formulations that could be studied. Considering the on-demand model of cloud computing, the VMP problem should be solved dynamically to efficiently attend typical workload of modern applications. This work proposes a taxonomy in order to understand possible challenges for Cloud Service Providers (CSPs) in dynamic environments, based on the most relevant dynamic parameters studied so far in the VMP literature. Based on the proposed taxonomy, several unexplored environments have been identified. To further study those research opportunities, sample workload traces for each particular environment are required; therefore, basic examples illustrate a preliminary work on dynamic workload trace generation.
Conference Paper
Full-text available
This paper presents for the first time a formulation of the Virtual Machine Placement as a Many-Objective problem (MaVMP), considering the simultaneous optimization of the following five objective functions for dynamic environments: (1) power consumption, (2) inter-VM network traffic, (3) economical revenue, (4) number of VM migrations and (5) network traffic overhead for VM migrations. To solve the formulated MaVMP problem, a novel Memetic Algorithm is proposed. As a potentially large number of feasible solutions at any time is one of the challenges of MaVMP, five selection strategies are evaluated in order to automatically select one solution at each time. The proposed algorithm with the considered selection strategies were evaluated in two different scenarios.
Article
Full-text available
Data centers in public, private, and hybrid cloud settings make it possible to provision virtual machines (VMs) with unprecedented flexibility. However, purchasing, operating, and maintaining the underlying physical resources incurs significant monetary costs and environmental impact. Therefore, cloud providers must optimize the use of physical resources by a careful allocation of VMs to hosts, continuously balancing between the conflicting requirements on performance and operational costs. In recent years, several algorithms have been proposed for this important optimization problem. Unfortunately, the proposed approaches are hardly comparable because of subtle differences in the used problem models. This article surveys the used problem formulations and optimization algorithms, highlighting their strengths and limitations, and pointing out areas that need further research.
Conference Paper
In this paper, we present the methodologies used in existing literature for Virtual Machine (VM) placement, load balancing and server consolidation in a data center environment. While the methodologies may seem fine on the surface, certain drawbacks and anomalies can be uncovered when they are analyzed deeper. We point out those anomalies and drawbacks in the existing literature and explain what are the root causes of such anomalies. Then we propose a novel methodology based on vector arithmetic which not only addresses those anomalies but also leads to some interesting theories and algorithms to tackle the above mentioned three functionalities required in managing resources of data centers. We believe that with a strong mathematical base, our methodology has the potential to become the foundation of future models and algorithms in this research area. There are few research work reported in the literature for VM placement. Those methods might look fine at a glance, but a deeper scrutiny can expose various anomalies and drawbacks which might affect the performance of the system. Most of them devise a metric, which is a function of resource utilizations of individual resource types. They use this metric for placement and migration of VMs as well as for load balancing and consolidation of servers. In this paper, we present various methodologies used in the literature for VM placement, server load balancing and server consolidation and point out the drawbacks and anomalies in those methodologies and discuss the root cause of such anomalies. Then we propose a novel methodology based on vector arithmetic which not only addresses those anomalies but also leads to some inter- esting theories and algorithms to tackle the above mentioned three functionalities required in managing resources of data centers. We believe that with a strong mathematical base, our methodology has the potential to become the foundation of future models and algorithms in this research area.