A New Framework for Defining Realistic SLAs:
An Evidence-Based Approach
Minsu Cho1,2, Minseok Song2, Carlos Müller3, Pablo Fernandez3,
Adela del-Río-Ortega3, Manuel Resinas3, and Antonio Ruiz-Cortés3
1Ulsan National Institute of Science and Technology, Ulsan, Korea
mcho@unist.ac.kr
2Pohang University of Science and Technology, Pohang, Korea
mssong@postech.ac.kr
3University of Seville, Seville, Spain
{cmuller,pablofm,adeladelrio,resinas,aruiz}@us.es
Abstract. In a changing and competitive business world, business processes are at the heart of modern organizations. In some cases, service level agreements (SLAs) are used to regulate how these business processes are provided. This is usually the case when the business process is outsourced and some guarantees about how the outsourcing service is provided are required. Although some work has been done concerning the structure of SLAs for business processes, the definition of service level objectives (SLOs) remains a manual task performed by experts based on their previous knowledge and intuition. Therefore, an evidence-based approach that curtails human involvement is required for the definition of realistic yet challenging SLOs. This is the purpose of this paper, in which performance-focused process mining, goal programming optimization techniques, and simulation techniques are leveraged to implement an evidence-based framework for the definition of SLAs. Furthermore, the applicability of the proposed framework has been evaluated in a case study carried out in a hospital scenario.
Keywords: Service level agreement · Process mining · Process performance indicators · Optimization · Goal programming · Simulation
1 Introduction
In a changing and competitive business world, business processes are at the heart of modern organizations [1]. In some cases, service level agreements (SLAs) are used to regulate how these business processes are provided. This is usually the case when the business process is outsourced, and some guarantees about how the outsourcing service is provided are required [2].

This work was partially supported by the European Commission (FEDER), the European Union Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 645751 (RISE BPM), the Spanish and Andalusian R&D&I programmes (grants TIN2015-70560-R, P12-TIC-1867), and the National Research Foundation of Korea (No. NRF-2014K1A3A7A030737007).

© Springer International Publishing AG 2017. J. Carmona et al. (Eds.): BPM Forum 2017, LNBIP 297, pp. 19–35, 2017. DOI: 10.1007/978-3-319-65015-9_2
Although some work has been done concerning the structure of SLAs for
business processes [2], the problem of defining the actual service level objectives
(SLOs), which are essential factors of an SLA denoting requirements on the ser-
vice performance, in a specific business process is largely unaddressed. This issue
involves first choosing the process performance indicators (PPIs) that should be
considered in the SLA and, second, defining their desired target. This target
must be challenging, but achievable to ensure a good process performance. A
consequence of this lack of methodology for defining SLOs is that, in the current
state of practice, the definition of SLOs is usually carried out by experts based on
their previous knowledge and intuition, and sometimes following a trial and error
model. This is far from desirable since, according to [3], the definition of objectives requires a theoretical and practical base and should meet certain requirements: it should be based on measurement data rather than on experts' opinion; it should respect the statistical properties of the measure, such as its scale and distribution, and be resilient against outlier values; and it should be repeatable, transparent, and easy to carry out.
To overcome this problem, in this paper, we propose a framework that
includes a series of steps for defining SLAs with a systematic evidence-driven
approach. The proposed method covers understanding the current behavior of business processes, defining SLOs, deriving optimized SLOs together with improvement actions, and evaluating the expected effects with a simulation. Specifically,
in this paper, we present a proposal to implement the first three steps. This pro-
posal supports a broad range of PPIs and employs performance-focused process
mining, optimization techniques for multi-objective programming, and simula-
tion techniques. The contributions of our research are as follows: (i) connecting
the realms of process mining and SLAs; and (ii) proposing a new systematic
approach to defining SLAs based on evidence. The applicability of our approach
has been demonstrated with an experimental evaluation based on a hospital
scenario.
The remainder of this paper is organized as follows. Section 2 introduces our evidence-based framework for the definition of SLAs. Then, in Sect. 3, we describe the first three steps in a formal way: (i) how to derive PPIs and current SLOs (CSLOs) in Sect. 3.1; and (ii) how to optimize SLOs in Sect. 3.2. Section 4 describes the effectiveness of our approach with the experimental evaluations. The related work is summarized in Sect. 5, and finally, Sect. 6 concludes the work and describes future directions.
2 An SLA Definition Framework Using
an Evidence-Based Approach
In this section, we introduce the SLA definition framework using an evidence-based approach. As depicted in Fig. 1, the framework consists of six steps: (1) process mining analysis, (2) SLOs inference, (3) SLOs optimization, (4) simulation analysis, (5) evaluation, and (6) SLA definition. Initially, we conduct a process mining analysis to calculate PPIs using event logs extracted from information systems. On the basis of the PPIs resulting from the process mining analysis, current SLOs (CSLOs) are inferred. These CSLOs represent the current behavior of the PPIs. After that, the CSLOs are used in the SLOs optimization step, which generates desired service level objectives (DSLOs) by applying optimization techniques. The next step is to build a simulation model and conduct a simulation analysis for a scenario based on the optimization. Then, the evaluation step analyzes the deviation between the PPIs obtained from the simulation and the DSLOs. Depending on the evaluation result, the process can revert to either Step 2 or Step 3, or proceed to Step 6: the SLOs inference step is revisited when the deviation between the PPIs and DSLOs is high, whereas the SLOs optimization step is re-executed when the deviation is low. Finally, new SLAs are derived from the DSLOs that successfully pass the evaluation step.
Fig. 1. Overview of the proposed framework
As already mentioned, in this paper we focus on Steps 1, 2, and 3 as a first approach towards supporting the whole SLA definition framework using an evidence-based approach. In order to exemplify these steps from a user interaction perspective, Fig. 2 depicts a sequence of four mockups describing the expected interaction flow of a manager using the system in a given scenario. Specifically, the first mockup corresponds to the selection of particular PPIs over the outcomes of the process mining analysis (Step 1). Once the user selects the subset of PPIs to be optimized, the second mockup presents the essential step where current SLOs are identified for each PPI (Step 2); based on the business goals, at this point, the manager can specify a desired SLO and check the appropriate potential actions to achieve the expected SLO, together with an estimated impact of those actions (as a starting point, the system calculates an estimation based on the current data, which can then be tuned by the manager). Next, in the third mockup, a global set of constraints can be established, typically including costs of the improvements. Finally, in the fourth mockup, the result of the optimization (Step 3) is shown, describing the proposed improvement actions along with the desired SLOs and the expected metrics according to the global constraints.
Fig. 2. User interaction flow
3 A Proposal for Obtaining DSLOs
In this section, we detail a proposal for the first three steps of the framework,
which is the focus of this paper. The proposal supports a broad range of PPIs
and uses a goal programming approach as the optimization technique for the
SLOs optimization step.
3.1 Process Mining Analysis and SLOs Inference
Among the different perspectives involved in process mining [4], our first step focuses on the performance perspective and tries to infer the performance of the current process from its past executions stored in an event log. In this step, a set of pre-defined PPIs (i.e., a PPI catalog) is applied, and the PPIs are computed from the event log. After that, SLOs are inferred based on the calculated PPIs. In contrast to the manual approach currently followed to define SLOs, this paper proposes an evidence-based approach as an alternative. Therefore, in the SLOs inference step, managers only need to decide which PPIs to target, since not all PPIs capture the key performance of a process. In other words, a few principal PPIs are selected and inferred as CSLOs, i.e., targets to be improved. Figure 3 provides the steps for process mining analysis and SLOs inference.

Fig. 3. Process mining analysis & inferring SLOs
We now give a detailed explanation with formal definitions for each part. Event logs, which are the inputs of process mining, are a collection of cases, where a case is a sequence of events (describing a trace). In other words, each event belongs to a single case. Events have four properties: activity, originator, event type, and timestamp. Thus, an event can be expressed as the values assigned to these four properties. These are defined as follows.

Definition 1 (Events, Cases, and Event Log). Let A, O, ET, and T be the universes of activities, originators, event types, and timestamps, respectively. Let E = A × O × ET × T be the universe of events. Note that events are characterized by properties. For each event e and each property p, p(e) denotes the value of property p for event e (e.g., act(e), type(e), and time(e) are the activity, the event type, and the timestamp of event e, respectively). If no value of property p is assigned to event e, we use ⊥. Let C = E* be the set of possible event sequences (i.e., cases). An event log L ∈ B(C) is a multi-set over C, where B(C) denotes the set of all possible multi-sets over C.
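To make Definition 1 concrete, the following is a minimal sketch of one possible encoding (in Python; the class and type names are ours, not part of the paper): an event carries the four properties, a case is a sequence of events, and a log is a collection of cases.

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class Event:
    """An element of E = A x O x ET x T (Definition 1)."""
    activity: str      # act(e)
    originator: str    # the originator of e
    event_type: str    # type(e), e.g., "Start" or "Complete"
    timestamp: str     # time(e); "HH:MM" suffices for the running example

Case = List[Event]     # a case is a sequence of events (a trace)
EventLog = List[Case]  # a multi-set of cases, represented here as a plain list

# The first two events of case 1 in Table 1:
case1: Case = [Event("A", "Paul", "Start", "09:00"),
               Event("A", "Paul", "Complete", "10:00")]
log: EventLog = [case1]
print(log[0][0].activity, log[0][0].event_type)  # -> A Start
```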
A simple example log is provided in Table 1. The table contains 24 events for four cases, and each line corresponds to a trace represented as a sequence of events. For example, the trace of case 1 refers to a process instance where A was started by Paul at 09:00 and completed at 10:00, B was started by Mike at 10:20 and completed at 12:00, and D was started by Allen at 13:00 and completed at 13:30. Also, event IDs are determined by the order of cases and the timestamps of events (i.e., E1 = A^{Paul,09:00}_{Start}, E2 = A^{Paul,10:00}_{Complete}, ..., and E24 = D^{Allen,17:30}_{Complete}). This log will be used as a running example.
Table 1. Running example log

Case 1: ⟨A^{Paul,09:00}_{Start}, A^{Paul,10:00}_{Complete}, B^{Mike,10:20}_{Start}, B^{Mike,12:00}_{Complete}, D^{Allen,13:00}_{Start}, D^{Allen,13:30}_{Complete}⟩
Case 2: ⟨A^{Paul,10:30}_{Start}, A^{Paul,11:00}_{Complete}, C^{Chris,12:10}_{Start}, C^{Chris,13:00}_{Complete}, D^{Allen,14:00}_{Start}, D^{Allen,15:00}_{Complete}⟩
Case 3: ⟨A^{Paul,12:00}_{Start}, A^{Paul,12:30}_{Complete}, B^{Mike,14:00}_{Start}, B^{Mike,15:00}_{Complete}, D^{Allen,15:30}_{Start}, D^{Allen,16:30}_{Complete}⟩
Case 4: ⟨A^{Paul,13:00}_{Start}, A^{Paul,14:00}_{Complete}, C^{Chris,14:30}_{Start}, C^{Chris,15:30}_{Complete}, D^{Allen,16:00}_{Start}, D^{Allen,17:30}_{Complete}⟩
Based on the event log, we identify the events to be used for calculating PPIs through two elements: entity types and entity identifiers. The entity types include activity and originator. The entity identifiers denote the possible values that belong to an entity type. For example, in Table 1, A, B, C, and D are the entity identifiers of the entity type activity. Based on these two elements and the log (i.e., the event log, the entity type, and the entity identifier), the required events are filtered and extracted through the ψ function. After that, measures such as count, working time, and waiting time are computed over the extracted events. A PPI, P_n(M(E′)), is defined as the n-th percentile (P_n) of the measure values computed for the filtered events. The PPI is defined as follows.
Definition 2 (Process Performance Indicators). Let T and V be the universe of entity types and the universe of possible values, respectively. For each entity type t ∈ T, V_t denotes the set of possible values, i.e., the set of entity identifiers of type t. Let ψ: L × T × V_t → E′ be a function that finds the set of events E′ ⊆ E from an event log for a given entity type and entity identifier. M is a measure such as count, working time, waiting time, or duration. P_n(M(E′)) is the process performance indicator obtained from an event log for a given entity type, entity identifier, and measure, where P_n is the n-th percentile function. Note that P25, P50, and P75 are the 1st quartile, the median, and the 3rd quartile, respectively.
For example, from Table 1 we obtain ψ(L, Activity, A) = {E1, E2, E7, E8, E13, E14, E19, E20} and ψ(L, Originator, Allen) = {E5, E6, E11, E12, E17, E18, E23, E24}. As an example of a PPI, the median of working time for ψ(L, Activity, A) is calculated as 30 min from {60, 30, 30, 30}, which can also be stated as: the median working time of activity A is 30 min.
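The computation of ψ and of a percentile-based PPI can be sketched as follows, assuming a plain tuple encoding of the Table 1 log (the encoding and helper names are ours, not prescribed by the paper). The example reproduces the median working time of originator Allen, 60 min, which is used as a PPI below.

```python
from statistics import median

# Hypothetical tuple encoding of the Table 1 log:
# (case_id, activity, originator, event_type, "HH:MM").
LOG = [
    (1, "A", "Paul", "Start", "09:00"), (1, "A", "Paul", "Complete", "10:00"),
    (1, "B", "Mike", "Start", "10:20"), (1, "B", "Mike", "Complete", "12:00"),
    (1, "D", "Allen", "Start", "13:00"), (1, "D", "Allen", "Complete", "13:30"),
    (2, "A", "Paul", "Start", "10:30"), (2, "A", "Paul", "Complete", "11:00"),
    (2, "C", "Chris", "Start", "12:10"), (2, "C", "Chris", "Complete", "13:00"),
    (2, "D", "Allen", "Start", "14:00"), (2, "D", "Allen", "Complete", "15:00"),
    (3, "A", "Paul", "Start", "12:00"), (3, "A", "Paul", "Complete", "12:30"),
    (3, "B", "Mike", "Start", "14:00"), (3, "B", "Mike", "Complete", "15:00"),
    (3, "D", "Allen", "Start", "15:30"), (3, "D", "Allen", "Complete", "16:30"),
    (4, "A", "Paul", "Start", "13:00"), (4, "A", "Paul", "Complete", "14:00"),
    (4, "C", "Chris", "Start", "14:30"), (4, "C", "Chris", "Complete", "15:30"),
    (4, "D", "Allen", "Start", "16:00"), (4, "D", "Allen", "Complete", "17:30"),
]

def minutes(hhmm):
    h, m = map(int, hhmm.split(":"))
    return 60 * h + m

def psi(log, entity_type, identifier):
    """The psi function: filter events by entity type and entity identifier."""
    idx = {"activity": 1, "originator": 2}[entity_type]
    return [e for e in log if e[idx] == identifier]

def working_times(events):
    """Pair Start/Complete events per (case, activity) and return durations."""
    open_starts, durations = {}, []
    for case, act, _orig, etype, ts in events:
        if etype == "Start":
            open_starts[(case, act)] = minutes(ts)
        else:
            durations.append(minutes(ts) - open_starts.pop((case, act)))
    return durations

# Median working time of originator Allen -> 60.0 minutes
print(median(working_times(psi(LOG, "originator", "Allen"))))
```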
The next step is to infer SLOs based on the PPIs calculated using process mining. An SLO is defined as follows.
Definition 3 (Service Level Objectives and Inferring Function). Let M be the universe of measurements, let x be the target value of a measurement m, and let P(t) denote the probability of t. An SLO P(m ≤ x) ≥ n% states that the probability that measurement m is less than or equal to x must be at least n%. Let I: P_n(M(E′)) → {P(m ≤ x) ≥ n%} be a function that infers an SLO from a PPI.
An SLO thus states that, in at least n% of the cases with the given entity identifier, the measure must be below the target value. CSLOs are automatically inferred from PPIs using the I function. The overall structures of CSLOs and PPIs are quite similar; thus, we can easily establish CSLOs from given PPIs. As explained earlier, a PPI states that the n-th percentile of a measure of an entity identifier is value. Based on the PPI, the corresponding CSLO states that the measure of the entity identifier must be less than value in n% of cases. For example, in Table 1, one of the PPIs is that the median (50th percentile) of working time of activity A is 30 min (i.e., PPI₁). Then, the related CSLO becomes: the working time of activity A must be less than 30 min in 50% of the cases. Another PPI is that the median of working time of originator Allen is 60 min (i.e., PPI₂). Then, the corresponding CSLO becomes: the working time performed by Allen must be less than 60 min in 50% of the cases.
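Once inferred, a CSLO can also be checked directly against observed data. The following tiny sketch (our own helper, not prescribed by the paper) tests whether P(m ≤ x) ≥ n% holds on a list of measured values:

```python
def slo_holds(durations, x, n):
    """Check the SLO P(m <= x) >= n% against observed measure values."""
    return 100.0 * sum(d <= x for d in durations) / len(durations) >= n

# Working times of originator Allen from the Table 1 running example:
allen = [30, 60, 60, 90]
print(slo_holds(allen, 60, 50))  # True: 75% of the cases are <= 60 min
```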
3.2 SLOs Optimization
The objective of the optimization step is to maximize the overall effect by minimizing the target value of each calculated SLO, while keeping it achievable and realistic by selecting the best improvement actions that enhance the process performance. Therefore, a multi-objective programming approach is needed to accomplish multiple goals. We employ the goal programming (GP) approach, one of the popular approaches for multi-objective programming problems [5]. Figure 4 shows the SLOs optimization step. In our approach, the inputs of the GP model are improvement actions, CSLOs, and business constraints. We assume that the improvement actions are given based on prior knowledge or qualitative research (e.g., interviews and surveys); employing more resources and providing incentives are typical examples of such actions. As explained in Sect. 3.1, the CSLOs are derived from event logs. Finally, a manager has to determine demands and constraints, including the costs of the improvement actions, the expected SLOs, and the importance of each SLO. Here, the expected SLOs signify the manager's expectations regarding the derived SLOs. On the basis of these three inputs, a GP model is constructed; the outputs of the model are which improvement actions are applied for each goal and how many times, and the minimized SLOs (DSLOs).
Fig. 4. Optimizing SLOs
Before explaining the GP model, we introduce the symbols that are described
in Table 2.
Table 2. Optimization symbols

Symbol        Meaning
V             Number of entity identifiers in all CSLOs
M             Number of available improvement actions
i             Index of entity identifiers (i = 1, 2, ..., V)
j             Index of available actions (j = 1, 2, ..., M)
m             Type of measure (m ∈ {d: duration, wo: working, wa: waiting})
x_{i,j}       Number of applications of action j for entity identifier i
l_{i,j}       Lower bound on the number of applications of action j for entity identifier i
u_{i,j}       Upper bound on the number of applications of action j for entity identifier i
μ_i^m         Current mean of measure m for entity identifier i
σ_i^m         Current standard deviation of measure m for entity identifier i
f_{i,j}^m     Effect on the mean of measure m of action j for entity identifier i
h_{i,j}^m     Effect on the std. dev. of measure m of action j for entity identifier i
c_{i,j}       Unit cost of action j for entity identifier i
C             Planned implementation action cost
X             Target percentage set by the manager (0 ≤ X ≤ 1)
T             Target value set by the manager
W             Range weight for the target value (0 ≤ W ≤ 1)
w_k           Importance of SLO k (k = 1, 2, ..., K)
P_n(μ, σ²)    n-th percentile function with μ and σ²

P_n(μ, σ²) denotes the percentile function for a normal distribution with mean μ and variance σ². In general, the percentile function is defined as the infimum function of the cumulative distribution function [6]. Here, based on two aspects, we consider that percentiles can be represented by a normal curve and expressed with the two parameters μ and σ². First, there is a principle that large populations follow a normal distribution [7]. Second, the improvement actions in this paper have the effect of decreasing the mean and standard deviation of the distributions. Figure 5 provides a graphical explanation. In the current distribution for an SLO, the target value based on 95% is V1. If an improvement action decreases the mean without any other change, the distribution shifts to the left; as a result, the reduced target value (V2) is derived, as shown in the left graph of Fig. 5. On the other hand, if an improvement action decreases the standard deviation, the distribution becomes more centralized than before, and the new target value (V3) is derived, as shown in the middle of Fig. 5. Furthermore, an improvement action can reduce both the mean and the standard deviation; then, as shown on the right of Fig. 5, the target value decreases to V5 according to the reduction of both. The following is the formalization of the percentile function with the normal distribution.
Definition 4 (Percentile Function). Let f be a probability density function; the cumulative distribution function F is defined as follows:

F(x) = ∫_{−∞}^{x} f(t) dt   (where −∞ ≤ x ≤ ∞)
Fig. 5. Effects on target values for SLOs based on improvement actions
With reference to the function F, the percentile function is

P(p) = inf{x ∈ ℝ : p ≤ F(x)}   (where inf is the infimum function)

for a probability 0 ≤ p ≤ 1. Based on the principle that large populations follow a normal distribution, the percentile function becomes the inverse of the cumulative normal distribution function. The cumulative distribution function of the normal distribution with μ and σ² is as follows:

F_x(μ, σ²) = (1/2)[1 + erf((x − μ)/(σ√2))]   (where erf is the error function)

The n-th percentile function for the normal distribution with μ and σ² is then defined as follows:

P_n(μ, σ²) = F_x^{−1}(μ, σ²) = μ + σ√2 · erf^{−1}(2p − 1)   (where n% = p)
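As a quick sanity check of this closed form, the following sketch (illustrative numbers only) compares μ + σ√2 · erf⁻¹(2p − 1) with a standard inverse normal CDF; scipy's erfinv is assumed to be available.

```python
import math
from statistics import NormalDist

from scipy.special import erfinv  # any erf^{-1} implementation would do

def P_n(mu, sigma, p):
    """n-th percentile of N(mu, sigma^2), as in Definition 4 (n% = p)."""
    return mu + sigma * math.sqrt(2) * erfinv(2 * p - 1)

# Illustrative values only, loosely inspired by the MRI working-time PPI:
mu, sigma, p = 61.0, 6.0, 0.95
assert math.isclose(P_n(mu, sigma, p), NormalDist(mu, sigma).inv_cdf(p))
print(round(P_n(mu, sigma, p), 1))  # ~70.9 for these assumed parameters
```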
As we explained earlier, the GP model aims at minimizing the target values
of all SLOs by employing improvement actions. Therefore, an individual opti-
mization model for each SLO is constructed. Then the GP model is formulated
by combining all optimization models together. An optimization model for each
goal is formalized as follows.
Definition 5 (Optimization Model for Each Goal)

O.F.    DSLO_i = min P_X(μ_i^m′, (σ_i^m′)²)

        where μ_i^m′ = μ_i^m + Σ_{j=1}^{M} x_{i,j} · f_{i,j}^m
              σ_i^m′ = σ_i^m + Σ_{j=1}^{M} x_{i,j} · h_{i,j}^m

Constraints    T × (1 − W) ≤ DSLO_i ≤ P_X(μ_i^m, (σ_i^m)²)
               0 ≤ x_{i,1}, x_{i,2}, ..., x_{i,M}
               x_{i,1}, x_{i,2}, ..., x_{i,M} ∈ ℤ
               l_{i,j} ≤ x_{i,j} ≤ u_{i,j}   (for j = 1, 2, ..., M)
               Σ_{i=1}^{V} Σ_{j=1}^{M} x_{i,j} · c_{i,j} ≤ C
As explained earlier, improvement actions can influence the mean and standard deviation of the distribution underlying an SLO. Accordingly, the objective function minimizes the percentile function evaluated at the modified mean and standard deviation, P_X(μ_i^m′, (σ_i^m′)²). Here, the updated mean (μ_i^m′) and standard deviation (σ_i^m′) are obtained from the current values (μ_i^m and σ_i^m) and the effects of the applied improvement actions (i.e., Σ_{j=1}^{M} x_{i,j} · f_{i,j}^m and Σ_{j=1}^{M} x_{i,j} · h_{i,j}^m, which denote the reduction of the mean and the standard deviation, respectively).
Regarding the constraints of the optimization model, the expected SLO determined by the manager is included as a target value with a specific target percentage. Considering the pre-determined expected SLOs, we constrain DSLO_i to be less than or equal to the current value (P_X(μ_i^m, (σ_i^m)²)) and greater than or equal to the bound T × (1 − W) derived from the target value (T) and the range weight (W). Moreover, another constraint is that the number of applications of each action (x_{i,j}) must be a non-negative integer. In this regard, we can also determine a lower bound (l_{i,j}) and an upper bound (u_{i,j}) on the number of applications of each action. Furthermore, a cost-related constraint is included so that the total implementation cost (Σ_{i=1}^{V} Σ_{j=1}^{M} x_{i,j} · c_{i,j}) does not exceed the planned implementation budget (C).
Finally, we describe how to formalize the GP model that combines the optimization models for the selected SLOs. The objective function of the GP model considers both the changes of the SLOs (i.e., the difference between the CSLOs and the minimized SLOs (DSLOs)) and the importance of each goal determined by the manager. Also, the constraints and bounds of the individual optimization models are included. The formalization of the GP model is as follows.
Definition 6 (GP Model)

O.F.    max Z = w₁ · (CSLO₁ − DSLO₁)/CSLO₁ + w₂ · (CSLO₂ − DSLO₂)/CSLO₂ + ... + w_K · (CSLO_K − DSLO_K)/CSLO_K

subject to    constraints and bounds in the optimization models for the goals
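Because the percentile objective is nonlinear in the integer variables x_{i,j}, a small instance of Definitions 5 and 6 can be prototyped by exhaustive search over the bounded action counts. The sketch below uses entirely illustrative numbers (two goals, two actions) and is not the case-study instance of Sect. 4:

```python
from itertools import product
from statistics import NormalDist

# Illustrative instance only: two goals, one measure each, two actions.
GOALS = [  # current mean/std, percentile p = X, CSLO, manager target T, weight w
    dict(mu=61.0, sigma=6.0, p=0.95, cslo=71.0, T=69.0, w=0.5),
    dict(mu=45.0, sigma=2.0, p=0.95, cslo=48.0, T=47.0, w=0.5),
]
# Signed per-application effects: f on the mean, h on the std dev
# (negative values denote a reduction); c is the unit cost, u the upper bound.
ACTIONS = [dict(f=-1.0, h=0.0, c=400, u=3),
           dict(f=0.0, h=-0.5, c=550, u=2)]
BUDGET, W = 3000, 0.05

def dslo(goal, counts):
    """DSLO_i = P_X(mu', sigma'^2) for the shifted normal distribution."""
    mu = goal["mu"] + sum(x * a["f"] for x, a in zip(counts, ACTIONS))
    sd = goal["sigma"] + sum(x * a["h"] for x, a in zip(counts, ACTIONS))
    return NormalDist(mu, max(sd, 1e-9)).inv_cdf(goal["p"])

best = None
ranges = [range(a["u"] + 1) for a in ACTIONS] * len(GOALS)
for flat in product(*ranges):  # enumerate all integer x_{i,j} within bounds
    x = [flat[i * len(ACTIONS):(i + 1) * len(ACTIONS)] for i in range(len(GOALS))]
    cost = sum(n * a["c"] for row in x for n, a in zip(row, ACTIONS))
    if cost > BUDGET:
        continue
    ds = [dslo(g, row) for g, row in zip(GOALS, x)]
    # Constraint: T*(1-W) <= DSLO_i <= current percentile (no actions applied).
    if any(not g["T"] * (1 - W) <= d <= dslo(g, (0,) * len(ACTIONS))
           for g, d in zip(GOALS, ds)):
        continue
    z = sum(g["w"] * (g["cslo"] - d) / g["cslo"] for g, d in zip(GOALS, ds))
    if best is None or z > best[0]:
        best = (z, x, [round(d, 1) for d in ds], cost)

print(best)  # (Z, action counts per goal, DSLOs, total cost)
```

For larger instances, a mixed-integer nonlinear solver would replace the enumeration; the exhaustive search is only meant to make the model's structure tangible.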
4 Experimental Evaluation
To demonstrate the effectiveness of our proposed approach, we apply it to an examination process in an outpatient clinic and the corresponding log utilized in [8]. In Sect. 4.1, we introduce the examination process and the corresponding log applied in the evaluation. In Sect. 4.2, we describe the results of the PPI calculation and the CSLO conversion. Section 4.3 introduces the setup for the optimization, while Sect. 4.4 provides the results of the optimization, i.e., the DSLOs.
4.1 Experiment Design and Data Set
As introduced earlier, we used the examination flows in the outpatient clinic and the corresponding event log. Figure 6 provides a graphical description of the examination process. In the process, patients (i.e., cases) first visit the hospital and undergo both the lab test and the X-ray test. Then, if needed, patients undergo the electrocardiogram test (ECG). After that, they visit the hospital again and undergo either computerized tomography (CT) or magnetic resonance imaging (MRI), according to the results of the tests in the first visit. Lastly, the process finishes with the third visit of the patient. The proposed framework was applied to the corresponding log of the examination process. The log included 7000 events performed by 17 resources for 1000 cases.
Fig. 6. The examination process used in the evaluation
In the case study, we focused on PPIs defined for the working and waiting times of the test-related activities included in the process. Also, for each indicator, we applied various aggregation functions, such as the median, the first quartile (1st Q.), the third quartile (3rd Q.), the 5th percentile (5%), and the 95th percentile (95%), to understand the distribution of the indicator. We computed the PPIs from the examination event log, and Table 3 provides the detailed results.
Table 3. Calculated results of PPIs (measure: minutes)

Measure        Activity   Median   1st Q.   3rd Q.   5%       95%
Working time   X-ray      20.0     19.0     21.0     17.0     23.0
               Lab Test   20.0     19.0     21.0     17.0     23.0
               ECG        30.0     27.0     33.0     22.0     38.0
               MRI        61.0     56.0     64.3     50.0     71.0
               CT         45.0     44.0     46.0     42.0     48.0
Waiting time   X-ray      30.0     27.0     33.0     22.0     38.0
               Lab Test   30.0     26.7     33.0     22.0     38.0
               ECG        0.0      0.0      0.0      0.0      1.0
               MRI        7223.5   6931.8   7547.2   6478.5   7975.7
               CT         4314.5   3994.7   4651.2   3506.9   5089.0
4.2 Results for PPIs and CSLOs
As described in Table 3, we identified that MRI had a higher working time than any other activity (e.g., the median working time of MRI was 61 min). With regard to waiting time, two activities had higher values than the others: MRI and CT. These results were used to determine the candidates for optimization (i.e., the CSLOs).
To decide which PPIs are taken into account for the CSLO extraction, we can consider two types of criteria. First, indicators linked to a critical part of a process (e.g., a primary activity or a sub-process) can be selected, because they are necessary to improve the process. However, this selection has to be made by a manager of the organization; in other words, it requires domain knowledge of the process. The other approach is to select problematic indicators with a high potential for improvement, such as indicators with high volatility or unexpectedly low values. Since such information from a manager was not available in our case study, we selected the second option. Among the several PPIs, we selected three of them, and the corresponding CSLOs were obtained as follows.
PPI₁: The 95th percentile (i.e., 95%) of working time of MRI is 71.0 min.
CSLO₁: Working time of MRI must be less than 71.0 min in 95% of patients.
PPI₂: The median (i.e., 50th percentile) of waiting time of MRI is 7223.5 min.
CSLO₂: Waiting time of MRI must be less than 7223.5 min in 50% of patients.
PPI₃: The median of waiting time of CT is 4314.5 min.
CSLO₃: Waiting time of CT must be less than 4314.5 min in 50% of patients.
4.3 Setup for Optimization
Based on the calculated CSLOs, we built a GP optimization model for two activities (i = {1: MRI, 2: CT}) and two time measures (m = {wo: working, wa: waiting}) in this case study. As inputs for the GP model, we first used the target values of the CSLOs derived in Sect. 4.2: CSLO₁ = 71.0, CSLO₂ = 7223.5, and CSLO₃ = 4314.5. Second, we employed three improvement actions (j = 1, 2, 3): employing more resources (Action 1), changing resources into more qualified people (Action 2), and employing managers (Action 3). As explained earlier, each action has the effect of decreasing the mean and the standard deviation of the time values for entity identifiers (i.e., the activities in this case study). Among the three actions, Action 1 lowers the average waiting time of activities, while Action 2 reduces the mean of both working and waiting time. Action 3, in turn, decreases the standard deviation of working time and the average of waiting time. The detailed effects and unit costs of each action are provided in Table 4. In the table, the costs and the effects on working time were assumed, while the effects on waiting time were calculated from data. The effects on waiting time of Action 1 were inferred from the M/M/c model of queueing theory. With regard to Actions 2 and 3, we calculated the reduction of waiting time according to the change in working time.
Table 4. Effects and unit costs of each action for MRI and CT

Action 1 (cost 1600): MRI: f^wa = 1187 m. CT: f^wa = 1187 m.
Action 2 (cost 400): MRI: f^wo = 1%, f^wa = 47.62 m per 1 m of f^wo. CT: f^wo = 1%, f^wa = 62.63 m per 1 m of f^wo.
Action 3 (cost 550): MRI: h^wo = 10%, f^wa = 10 m per 1% of h^wo. CT: h^wo = 10%, f^wa = 10 m per 1% of h^wo.
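The waiting-time effect of Action 1 was inferred from the M/M/c model, as stated above; one standard way to compute such an effect is the Erlang C formula, sketched below with purely illustrative arrival and service rates (the paper does not report the rates behind the 1187 m figure):

```python
from math import factorial

def erlang_c(lam, mu, c):
    """Probability that an arriving case must wait in an M/M/c queue."""
    a = lam / mu          # offered load
    rho = a / c           # server utilization; must be < 1 for stability
    assert rho < 1.0
    top = a ** c / factorial(c)
    return top / ((1 - rho) * sum(a ** k / factorial(k) for k in range(c)) + top)

def mean_wait(lam, mu, c):
    """Expected waiting time W_q = C(c, a) / (c*mu - lam)."""
    return erlang_c(lam, mu, c) / (c * mu - lam)

# Illustrative rates only: the reduction in mean waiting time achieved by
# adding one resource (Action 1) is the difference of the two W_q values.
lam, mu = 2.0, 1.05   # cases/hour arriving; cases/hour served per resource
print(mean_wait(lam, mu, 2) - mean_wait(lam, mu, 3))
```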
Lastly, several assumptions were encoded in the model as the manager's decisions and business constraints: the expected SLOs, the bounds on the number of applications of each action, the planned implementation cost, and the importance of each goal. The expected SLOs (i.e., the manager's target SLOs) were assumed as follows. These values were applied as constraints in the model with the determined range weight (W = 0.05).

ESLO₁: Working time of MRI must be less than 69.0 min in 95% of patients.
ESLO₂: Waiting time of MRI must be less than 7000.0 min in 50% of patients.
ESLO₃: Waiting time of CT must be less than 3200.0 min in 50% of patients.

Also, based on the current status of resources, the numbers of applications of employing more resources (x_{i,1}) and of changing resources into more qualified people (x_{i,2}) for each activity were limited to 1 and 3, respectively. Moreover, we assumed that the planned implementation cost was 3000 and that the importance weights of all goals were equal, at 0.5. Based on these inputs, we built the GP model. The complete formulation of each goal and the GP model is presented in Table 5.
Table 5. The GP model for optimization

Goal 1:
  O.F.  DSLO₁ = min P₉₅(μ₁^wo′, (σ₁^wo′)²)
        where μ₁^wo′ = μ₁^wo + Σ_{j=1}^{3} x_{1,j} · f_{1,j}^wo
              σ₁^wo′ = σ₁^wo + Σ_{j=1}^{3} x_{1,j} · h_{1,j}^wo

Goal 2:
  O.F.  DSLO₂ = min P₅₀(μ₁^wa′, (σ₁^wa′)²)
        where μ₁^wa′ = μ₁^wa + Σ_{j=1}^{3} x_{1,j} · f_{1,j}^wa
              σ₁^wa′ = σ₁^wa + Σ_{j=1}^{3} x_{1,j} · h_{1,j}^wa

Goal 3:
  O.F.  DSLO₃ = min P₅₀(μ₂^wa′, (σ₂^wa′)²)
        where μ₂^wa′ = μ₂^wa + Σ_{j=1}^{3} x_{2,j} · f_{2,j}^wa
              σ₂^wa′ = σ₂^wa + Σ_{j=1}^{3} x_{2,j} · h_{2,j}^wa

GP model:
  O.F.  max Z = 0.5 · (71.0 − DSLO₁)/71.0 + 0.5 · (7223.5 − DSLO₂)/7223.5 + 0.5 · (4314.5 − DSLO₃)/4314.5
  subject to
        69.0 × (1 − 0.05) ≤ DSLO₁ ≤ 71.0
        7000.0 × (1 − 0.05) ≤ DSLO₂ ≤ 7223.5
        3200.0 × (1 − 0.05) ≤ DSLO₃ ≤ 4314.5
        0 ≤ x_{i,j}, x_{i,j} ∈ ℤ   (i = 1, 2; j = 1, 2, 3)
        x_{i,1} ≤ 1, x_{i,2} ≤ 3   (i = 1, 2)
        Σ_{i=1}^{2} Σ_{j=1}^{3} x_{i,j} · c_{i,j} ≤ 3000
4.4 Optimization Results
Based on the constructed GP model, we obtained the optimal solution. Table 6 provides the optimization results for the case study. The optimization with the GP model recommended changing two resources into more qualified people (Action 2) and employing one manager (Action 3) for MRI. Moreover, for the CT activity, employing one more resource (Action 1) was suggested. The total implementation cost thus came to 2950. Also, through the optimization, all SLOs were improved. For example, the target value of CSLO₁ went from 71.0 min to 68.3 min. Likewise, the target values of CSLO₂ and CSLO₃ decreased by 147.1 and 1181.1 min, respectively. Lastly, combining the importance-weighted improvements of the goals, there was a 16.7% overall reduction.
Table 6. Optimization results for the case study

Applied improvement actions
  Changing resources into more qualified people (Action 2) for MRI: 2 (times)
  Employing managers (Action 3) for MRI: 1
  Employing more resources (Action 1) for CT: 1

Total used cost
  2950 (= 400 × 2 + 550 × 1 + 1600 × 1)

Derived SLOs
  DSLO₁: Working time of MRI must be less than 68.3 min in 95% of patients
  DSLO₂: Waiting time of MRI must be less than 7076.4 min in 50% of patients
  DSLO₃: Waiting time of CT must be less than 3133.4 min in 50% of patients
The result provides the optimal solution within the given cost limit. In other words, it suggests the best actions for addressing the problems of the current process. Therefore, managers can obtain direct improvement effects by applying the recommended actions to the activities of the process.
5 Related Work
Numerous research efforts have focused on proposing models for SLA definition in computational and non-computational domains [2,9,10]; however, none of them deals with the definition of challenging yet achievable SLOs. Some work has been carried out in this direction in the context of computational services. [11] proposes a methodology to calculate SLO thresholds for signing IT service SLAs according to service function cost from a business perspective, but it is useful only for SLAs that apply to the software infrastructure supporting business processes, not for business processes offered as a service. [12] describes a categorization of IT services and outlines a mechanism to obtain efficient SLOs for them. However, they do so at a conceptual level and do not detail how the SLOs can be formalized to enable their automated computation.
Regarding the definition of data-based target values or thresholds for PPIs,
[13] presents an approach to determine PPI thresholds based on the relationship
of different PPIs and their values computed from the process execution data. In
this approach, though, a proven relationship between certain PPIs is required in
order to extract their thresholds.
Concerning our SLO optimization proposal, some related works exist in the
context of process measurement and improvement. A series of proposals exist,
e.g. [4,14,15], that identify correlations between PPIs that, eventually, can lead
to the definition of process improvement actions. Also related to this is the
business process redesign (BPR) area, which tackles the radical change of a process to dramatically enhance its performance. In this area, a number of works propose heuristic-based BPR frameworks, methodologies, and best practices [16,17]. The main drawback of these works with respect to our motivating problem is that they are not SLA-aware and leave out of their scope the establishment of target values for the performance measures, or SLOs, in the context of business processes offered as services.
6 Conclusion
This paper proposes a structured framework to define realistic SLAs with a systematic, evidence-driven approach. The evaluation results obtained from its application to an examination process in an outpatient clinic have shown its applicability and the improvements in the performance of that process.

Our work has a couple of limitations and challenges. The case study adopted for validation covered only time-related measures. Therefore, a more comprehensive approach that handles various indicators, such as frequency and quality, is required. Also, with regard to the improvement actions in the optimization part, we made assumptions about the types of actions, their costs, and their effects. As future work, we will establish more systematic improvement actions by exploring existing works and conducting interviews. In addition, we used the normal distribution-based percentile function under the normality principle. However, if we use the distribution itself (e.g., a histogram), we can apply further improvement actions that modify skewness or kurtosis. Therefore, we need to develop a method to support this idea and to formulate those improvement actions.

Furthermore, at the beginning we claimed that our approach aims at reducing human involvement in the specification of SLOs, but experts are still needed in some steps to gather relevant information. Therefore, we plan to improve our approach by minimizing human involvement as much as possible and increasing the share of data analysis. Finally, in this paper we focused on the first three steps of the proposed framework. We are already working on implementing the remaining steps and a tool that supports the whole framework. Also, more case studies with real data in different contexts will be performed for further validation.
References

1. Harmon, P.: The scope and evolution of business process management. In: Brocke, J., Rosemann, M. (eds.) Handbook on Business Process Management, vol. 1, pp. 169–194. Springer, Heidelberg (2010). doi:10.1007/978-3-642-00416-2_3
2. del-Río-Ortega, A., Gutiérrez, A.M., Durán, A., Resinas, M., Ruiz-Cortés, A.: Modelling service level agreements for business process outsourcing services. In: Zdravkovic, J., Kirikova, M., Johannesson, P. (eds.) CAiSE 2015. LNCS, vol. 9097, pp. 485–500. Springer, Cham (2015). doi:10.1007/978-3-319-19069-3_30
3. Alves, T.L., Ypma, C., Visser, J.: Deriving metric thresholds from benchmark data. In: 26th IEEE International Conference on Software Maintenance (ICSM 2010), pp. 1–10 (2010)
4. de Leoni, M., van der Aalst, W.M.P., Dees, M.: A general process mining framework for correlating, predicting and clustering dynamic behavior based on event logs. Inf. Syst. 56, 235–257 (2016)
5. Aouni, B., Kettani, O.: Goal programming model: a glorious history and a promising future. Eur. J. Oper. Res. 133, 225–231 (2001)
6. Wichura, M.J.: Algorithm AS 241: the percentage points of the normal distribution. J. R. Stat. Soc. Ser. C (Appl. Stat.) 37(3), 477–484 (1988)
7. Whitley, E., Ball, J.: Statistics review 2: samples and populations. Crit. Care 6(2), 143 (2002)
8. Rozinat, A., Mans, R., Song, M., van der Aalst, W.M.P.: Discovering simulation models. Inf. Syst. 34(3), 305–327 (2009)
9. Cardoso, J., Barros, A., May, N., Kylau, U.: Towards a unified service description language for the internet of services: requirements and first developments. In: 2010 IEEE International Conference on Services Computing (SCC), pp. 602–609, July 2010
10. Wieder, P., Butler, J., Theilmann, W., Yahyapour, R. (eds.): Service Level Agreements for Cloud Computing, vol. 2506. Springer, New York (2011). doi:10.1007/978-1-4614-1614-2
11. Sauvé, J., Marques, F., Moura, A., Sampaio, M., Jornada, J., Radziuk, E.: SLA design from a business perspective. In: Schönwälder, J., Serrat, J. (eds.) DSOM 2005. LNCS, vol. 3775, pp. 72–83. Springer, Heidelberg (2005). doi:10.1007/11568285_7
12. Kieninger, A., Baltadzhiev, D., Schmitz, B., Satzger, G.: Towards service level engineering for IT services: defining IT services from a line of business perspective. In: 2011 Annual SRII Global Conference, pp. 759–766, March 2011
13. del-Río-Ortega, A., García, F., Resinas, M., Weber, E., Ruiz, F., Ruiz-Cortés, A.: Enriching decision making with data-based thresholds of process-related KPIs. In: Dubois, E., Pohl, K. (eds.) CAiSE 2017. LNCS, vol. 10253, pp. 193–209. Springer, Cham (2017). doi:10.1007/978-3-319-59536-8_13
14. Rodriguez, R.R., Saiz, J.J.A., Bas, A.O.: Quantitative relationships between key performance indicators for supporting decision-making processes. Comput. Ind. 60(2), 104–113 (2009)
15. Diamantini, C., Genga, L., Potena, D., Storti, E.: Collaborative building of an ontology of key performance indicators. In: Meersman, R., et al. (eds.) OTM 2014. LNCS, vol. 8841, pp. 148–165. Springer, Heidelberg (2014). doi:10.1007/978-3-662-45563-0_9
16. Mansar, S.L., Reijers, H.A.: Best practices in business process redesign: validation of a redesign framework. Comput. Ind. 56(5), 457–471 (2005)
17. Watson, H.J., Wixom, B.H.: The current state of business intelligence. Computer 40(9), 96–99 (2007)