ICIT 2015 The 7th International Conference on Information Technology
doi:10.15849/icit.2015.0119 © ICIT 2015 (http://icit.zuj.edu.jo/ICIT15)
A Domain-Specific Language for Service Level
Agreement Specification
Renata Vaderna, Željko Vuković, Dušan Okanović, Igor Dejanović
Faculty of Technical Sciences
University of Novi Sad
Novi Sad, Serbia
{vrenata, zeljkov, oki, igord}@uns.ac.rs
Abstract—In order to perform continuous monitoring, an SLA document has to be signed between the interested parties. These documents should be in a machine-readable format so that the monitoring process can be automated. On the other hand, it is beneficial if they are human readable as well, since this makes configuration and maintenance of the monitoring subsystem easier. Building on our previous work, in this paper we present DProfLang, a domain-specific language for defining SLAs that are both human and machine readable.
Keywords—SLA, continuous monitoring, Domain-Specific Languages
I. INTRODUCTION
Requirements that certain software has to fulfill are usually
agreed between interested parties before the start of
implementation. There are two types of requirements:
functional and non-functional. Ensuring that software fulfills its
functional requirements means that it will "do what it is
expected to do." On the other hand, implementing the non-
functional requirements means that the software will "do what
is expected, but in a certain way." It is important to stress that
while performance measurements can be performed during the
development phase, it is only under production workload that
we can retrieve realistic software performance data. There are
often bugs that take a lot of time to manifest themselves [1],
and this kind of time is not available during development. In
contrast to profiling and debugging, when performing
continuous monitoring we measure application performance
parameters under production workload.
There is a wide array of non-functional requirements and
metrics that can be used to quantify them. Commonly used
metrics include response time, availability, security, robustness,
memory footprint, and CPU time. These parameters are usually
referred to as software performance and are specified in an
additional document that follows the initial agreement between
the parties. This document is called a Service Level Agreement
(SLA). It can contain non-functional requirements, ways of
measuring their fulfillment, reference values, ways of processing
these values, and whom to contact if something goes wrong,
either with the obtained values or with the measuring process itself.
In our previous works [2, 3], we have described the DProf
system for adaptive continuous monitoring. It is based on the
Kieker monitoring framework [4], and it monitors application
performance using monitoring probes. These probes are
inserted into the software using AspectJ or some other tool [5], and
collect monitoring data while the application is running.
Adaptation of the monitoring process allows for a reduction of
monitoring overhead. This is done by turning monitoring off in
the call tree [6] branches that show no discrepancy between the
obtained values and the values specified in the SLA.
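The adaptation step described above can be illustrated with a short sketch (all class and attribute names here are hypothetical stand-ins, not the actual DProf internals, which are built on Kieker and AspectJ): each call tree node compares the average of its measured execution times against the SLA threshold, and monitoring is switched off for branches that stay within bounds.

```python
class CallTreeNode:
    """Illustrative stand-in for a monitored call tree node."""

    def __init__(self, name, upper_threshold_ms, children=None):
        self.name = name
        self.upper_threshold_ms = upper_threshold_ms
        self.children = children or []
        self.samples = []          # measured execution times in ms
        self.monitoring_on = True

    def record(self, duration_ms):
        if self.monitoring_on:
            self.samples.append(duration_ms)

    def average(self):
        return sum(self.samples) / len(self.samples) if self.samples else 0.0

    def adapt(self):
        """Turn monitoring off in branches whose averages meet the SLA."""
        if self.samples and self.average() <= self.upper_threshold_ms:
            self.monitoring_on = False
            for child in self.children:
                child.monitoring_on = False   # prune the whole branch
        else:
            for child in self.children:
                child.adapt()


root = CallTreeNode("Shop.checkout", upper_threshold_ms=200,
                    children=[CallTreeNode("Cart.total", 50)])
for t in (120, 150, 130):
    root.record(t)
root.adapt()
print(root.monitoring_on)   # average of 133.3 ms is within the 200 ms threshold
```

The real system performs this comparison periodically at the checkup frequency defined in the SLA, rather than once as in this sketch.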
The SLA for the DProf system is an XML document based on
the DProfSLA XML schema [2]. Since XML is a machine-
readable format, but not well suited for human use [20], in this
paper we propose a new language, DProfLang, for the
definition of monitoring goals. The proposed domain-specific
language has the advantage of being both human and machine
readable, thus allowing easier maintenance of the monitoring
configuration, while remaining well suited for monitoring
automation.
The remainder of this paper is organized as follows. Section II
describes the XML schema that we currently use. In Section III,
the grammar of the new language is presented, together with the
transformation between the DProfSLA and DProfLang formats.
Section IV presents related work, while in the last section we
draw conclusions and outline future work.
II. DPROFSLA
The root element of the DProfSLA XML schema is shown in
Fig. 1. It has three subelements:
Fig. 1. Root element of DProfSLA XML schema
The Parties element is simple and is used to designate the
interested parties and their roles in the execution of the agreement.
The Timing element specifies the agreement's time constraints:
the start and the end of the monitoring process, and the
frequency of checkups.
The Trace element (of the CallTreeNodeType type, Fig. 2) is used to
specify which part of the application is monitored and how the
obtained data is processed. In essence, every Trace element
relates to one node in a call tree, i.e. a method call.
For designating call tree nodes we use the name attribute of
CallTreeNodeType and the syntax shown in [2]. For the call tree in
Fig. 3, we obtain the DProfSLA document shown in Listing 1.
Fig. 2. Call tree node representation in DProfSLA XML schema
A node is represented by its class and method name,
followed by the names of the methods that are invoked from it. In this
example, we monitor execution times, calculate averages, and
compare those values to the specified upper threshold.
Fig. 3. An example of call tree
Listing 1. DProfSLA XML for the example shown in Fig. 3.
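Since the content of Listing 1 is not reproduced in this text, the following sketch only suggests the general shape such a DProfSLA document could take. The element names Parties, Timing and Trace come from the schema description above; all attribute names and values are purely illustrative assumptions, not taken from the actual schema.

```xml
<dprofSLA name="ExampleAgreement">
  <parties>
    <party name="Provider" role="provider"/>
    <party name="Customer" role="customer"/>
  </parties>
  <timing start="2015-01-01" end="2015-12-31" checkupFrequency="60"/>
  <trace name="Shop.checkout" metric="responseTime" upperThreshold="200">
    <node name="Cart.total" metric="responseTime" upperThreshold="50"/>
  </trace>
</dprofSLA>
```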
As stated in the introductory section, the use of XML
provides the possibility of automating the monitoring
process, since XML is machine readable. However, the use of
a DSL would additionally provide human readability, while
retaining machine readability.
III. DPROFLANG LANGUAGE GRAMMAR
The DProfLang DSL is implemented using textX [7], a meta-
language and library for DSL development in the Python
programming language. From a single language description
(grammar), textX builds a parser and a meta-model (i.e. the abstract
syntax) of the language.
A textX grammar consists of a set of rules which define each
language construct; these rules are translated into Python classes
during Abstract Syntax Tree (AST) construction. Each rule
also defines the syntax of the corresponding language element.
Listing 2 presents a part of the DProfLang grammar.
From this grammar, textX creates the meta-model presented
in Fig. 4. The BASETYPE hierarchy is a part of the built-in textX
type system.
Listing 2. A part of the DProfLang grammar in textX
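As Listing 2 itself is not reproduced here, the fragment below is a hedged reconstruction of what a textX grammar with the attributes described in the next section could look like. The rule names DProfModel and CallNode and the assignment nodes*=CallNode follow the text; every keyword, attribute name, and the Timing/Party rules are assumptions for illustration only.

```
DProfModel:
    'agreement' name=ID
    ('description' description=STRING)?
    'parties' '{' parties+=Party '}'
    timing=Timing
    call_node=CallNode
;

Party:
    name=ID ':' role=ID
;

CallNode:
    'node' name=STRING '{'
        ('metric' metric=ID)?
        ('nominal' nominal=BASETYPE)?
        ('upper' upper=BASETYPE)?
        ('lower' lower=BASETYPE)?
        nodes*=CallNode
    '}'
;
```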
The DProfModel rule is the root of the meta-model.
Instances of these classes have the following attributes:
- name is the name of the SLA agreement,
- description is an optional description given as a string,
- parties is a list of the involved parties,
- timing is an interval specifying when the monitoring will be applied,
- call_node is the root of the call tree node hierarchy.
The CallNode rule defines a node in the call tree node hierarchy
and specifies monitoring parameters such as the metric used,
the number of repeats, the outlier percentage, the nominal value,
and the upper and lower thresholds. This rule uses the composite
pattern, as each node can contain other nodes, which are specified
by the assignment nodes*=CallNode. The textX assignment operator
'*=' matches zero or more right-hand-side rules and appends each
instance to the left-hand-side attribute.
Listing 3. An example of SLA specification written in DProfLang
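The content of Listing 3 is likewise not reproduced in this text; an agreement written in DProfLang might look roughly like the following sketch. The concrete syntax shown is assumed, chosen only to match the attributes listed above (name, description, parties, timing, nested call nodes with thresholds).

```
agreement ExampleAgreement
description "Monitoring of the checkout subsystem"
parties { Provider : provider  Customer : customer }
timing 2015-01-01 .. 2015-12-31 every 60s

node "Shop.checkout" {
    metric responseTime
    upper 200
    node "Cart.total" {
        metric responseTime
        upper 50
    }
}
```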
A DProfLang meta-model instance is a Python object which is
capable of parsing and instantiating DProfLang models written
as textual DSL specifications.
Listing 3 shows an example of a DProfLang agreement
equivalent to the DProfSLA document from Listing 1. The
readability and comprehensibility are vastly improved with the
DSL approach.
A. Transformation From DProfSLA to DProfLang
In order to integrate the new language with our previous
work, we have developed two code generators. The first
generator loads a DProfSLA document in the original XML
format and outputs the agreement in the new DSL format. The
second one does the reverse job: it parses agreements in the
DProfLang format and produces an XML-based DProfSLA
document.
For code generation, the Jinja2 template engine [8] for Python
has been used. A template engine is a piece of software that
combines a data model with a template specification to produce
textual output. In our case, the data model is based on the DProfLang
meta-model. Two templates have been used: a DProfSLA XML
template and a DProfLang DSL template. Instantiating the data
model from the DProfLang DSL is supported directly by textX, since
it automatically constructs the model from the grammar. In
order to support XML, we had to develop a procedure that
builds the data model out of DProfSLA XML.
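The XML-to-DSL direction can be sketched with standard-library tools only. Note that this is a simplified stand-in: the real generator uses Jinja2 templates and the textX meta-model, and the element and attribute names used here are assumptions, not taken from the actual DProfSLA schema.

```python
import xml.etree.ElementTree as ET

# A hypothetical, minimal DProfSLA-like document (names are illustrative).
XML = """
<dprofSLA name="Example">
  <trace name="Shop.checkout" upperThreshold="200">
    <node name="Cart.total" upperThreshold="50"/>
  </trace>
</dprofSLA>
"""

def node_to_dsl(elem, indent=0):
    """Recursively render one call tree node as DSL text."""
    pad = "    " * indent
    lines = [f'{pad}node "{elem.get("name")}" {{',
             f'{pad}    upper {elem.get("upperThreshold")}']
    for child in elem.findall("node"):
        lines.append(node_to_dsl(child, indent + 1))
    lines.append(pad + "}")
    return "\n".join(lines)

def to_dsl(xml_text):
    """Build a simple data model from the XML and emit DSL text."""
    root = ET.fromstring(xml_text)
    out = [f"agreement {root.get('name')}"]
    for trace in root.findall("trace"):
        out.append(node_to_dsl(trace))
    return "\n".join(out)

print(to_dsl(XML))
```

The reverse direction (DSL to XML) follows the same pattern, except that the data model comes for free from the textX parser and only the XML template has to be written.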
IV. RELATED WORK
SLAs must be defined in a machine-readable format to allow
automatic service level management. Tebbani et al. [9] have
shown that only a few formal SLA specification
languages exist. Usually, SLAs are written in an informal
language, which is not acceptable for automation of the
process. Therefore, the authors propose the Generalized Service Level
Agreement language (GSLA). A GSLA document is a contract
between interested parties that is designed to create a
measurable common understanding of each party's role. A
role is a set of rules which defines the service level
expectations and obligations a party has. To specify GSLA in a
machine-readable format, the GXLA XML schema has been
proposed. The sections of GXLA documents are as follows. The
Schedule section contains the temporal parameters of the contract.
The Party section models the involved parties. The Service package
is an abstraction that is used to describe the services and the
previously mentioned roles. By using GXLA, the service management
process can be automated.
Fig. 4. DProfLang textX meta-model
For web service SLAs, WSLA [10] can be used. It is also
XML-based. Similarly to GSLA/GXLA, WSLA documents
define the involved parties, metrics, measuring techniques,
responsibilities, and courses of action. The authors state that
every SLA language, such as WSLA, should contain 1)
information regarding the agreeing parties and their roles, 2)
SLA parameters and a measurement specification, as well as 3)
obligations for each party.
SLAng [11] is a language for specifying SLAs based on the
Meta Object Facility [12]. It can use different languages to
describe constraints, e.g., OCL [13] or HUTN [14].
The WS-Agreement specification language [15] has been
approved by the Open Grid Forum. It defines a language that
can be used by service providers to offer services and
resources, and by clients to create an agreement with that
provider.
Paschke et al. [16] propose to categorize SLA metrics in
order to support the design and implementation of SLAs that
can be monitored and enforced automatically. Standard
elements of each SLA are categorized as: technical (service
descriptions, service objects, metrics, and actions),
organizational (roles, monitoring parameters, reporting, and
change management), and legal (legal obligations, payment,
additional rights, etc.).
According to this categorization, our DProfLang documents
are operation-level documents intended for in-house use. By
the versatility categorization, they belong to standard agreements.
As was the case with DProfSLA schema documents, we do not
need all of the features of the described schemas. DProfLang is
specifically designed to be used with the DProf system. Our
documents provide a subset of the elements defined by GXLA
or WSLA. A transformation of SLA documents between
DProfLang and the mentioned schemas could, for example, be
performed using appropriate generators.
Aside from XML, an SLA can be specified using domain-
specific languages. Most of them are AOP-based, like DiSL
[17], Josh [18], or Scope [19]. The problem with using AOP-based
approaches is that they are very platform specific. The use of a
true DSL for SLA specification allows for writing human-readable
documents that can be translated into instrumentation for any
platform.
V. CONCLUSION
In this paper we have presented a new language for
instrumentation specification. The advantage of this approach
over the use of XML is that SLA documents written in
DProfLang are human readable. This allows for easier
maintenance of the monitoring system and better overall control
over the monitoring process. In contrast to the use of AOP and
AOP-like tools, our approach is platform independent.
Whatever the underlying platform might be, the DProfLang SLA
document will be translated into instrumentation for that
platform.
DProfLang is implemented in the textX meta-language, which
enables easy modification of the language grammar and meta-model,
thus facilitating its evolution. To enable integration with our
pre-existing XML-based solution, we have also implemented a
translator from XML to the new DSL and vice versa.
Our future work will focus on the development of
instrumentation generators for different platforms. As DProf
and Kieker use AspectJ instrumentation, our first step is to
develop an instrumentation generator for AspectJ. After that, our
work will include generators for DiSL and for .NET AOP
frameworks.
ACKNOWLEDGMENT
The research presented in this paper was supported by the
Ministry of Science and Technological Development of the
Republic of Serbia, grant III-44010, Title: Intelligent Systems
for Software Product Development and Business Support based
on Models.
REFERENCES
[1] M. Grottke, K. S. Trivedi. “Fighting Bugs: Remove, Retry, Replicate,
Rejuvenate,” IEEE Computer, v.40, n. 2, 2007, pp. 107-109.
[2] D. Okanović, A. Van Hoorn, Z. Konjović, M. Vidaković, “SLA-Driven
Adaptive Monitoring of Distributed Applications for Performance
Problem Localization,” Computer Science and Information Systems,
vol. 10, no. 1, 2013, pp. 25-50.
[3] D. Okanović, A. van Hoorn, Z. Konjović, M. Vidaković, “Towards
Adaptive Monitoring of Java EE Applications”, Proceedings of the 5th
International Conference on Information Technology - ICIT. Amman,
Jordan, 2011, CD.
[4] A. van Hoorn, W. Hasselbring, J. Waller, “Kieker: A Framework for
Application Performance Monitoring and Dynamic Software Analysis,”
Proceedings of the 3rd ACM/SPEC International Conference on
Performance Engineering (ICPE 2012), Boston, USA, 2012, pp. 247-
248.
[5] D. Okanović, M. Vidaković, “Evaluation of Alternative Instrumentation
Frameworks,” Symposium on Software Performance: Joint
Descartes/Kieker/Palladio Days, Stuttgart, Germany, 2014, pp. 83-90.
[6] W. Binder, J. Hulaas, P. Moret, “Advanced Java Bytecode
Instrumentation,” 5th International Symposium on Principles and
Practice of Programming in Java, Lisboa, Portugal, 2007, pp. 135-144.
[7] textX [Online] https://github.com/igordejanovic/textX (January 2015)
[8] Jinja2 [Online] http://jinja.pocoo.org/docs/dev/ (January 2015)
[9] B. Tebbani, I. Aib, “GXLA a Language for the Specification of Service
Level Agreements,” Lecture Notes in Computer Science, v. 4195.
Springer-Verlag, Berlin Heidelberg New York, 2006, pp. 201-214.
[10] A. Keller, H. Ludwig, “The WSLA Framework: Specifying and
Monitoring Service Level Agreements for Web Services,” Journal of
Network and Systems Management, vol. 11, no. 1, 2003, pp. 57-81.
[11] D. Lamanna, J. Skene, W. Emmerich, “SLAng: A Language for
Defining Service Level Agreements,” Proceedings of the 9th IEEE
Workshop on Future Trends of Distributed Computer Systems (FTDCS
'03), IEEE Computer Society, San Juan, Puerto Rico, 2003, pp. 100-107.
[12] Meta Object Facility (MOF) 2.0 Core Specification. OMG. [Online]
Available: http://www.omg.org/spec/MOF/2.0 (current September 2011)
[13] Object Constraint Language (OCL) 2.0. OMG. [Online] Available:
http://www.omg.org/spec/OCL/2.0 (January 2015)
[14] Human Usable Textual Notation (HUTN) Specification. OMG. [Online]
Available: http://www.omg.org/spec/HUTN/index.htm (January 2015)
[15] N. Oldham, K. Verma, A. Sheth, F. Hakimpour, “Semantic WS-
agreement partner selection,” 15th International Conference on World
Wide Web. ACM, Edinburgh, Scotland, UK, 2006, pp. 697-706.
[16] A. Paschke, E. Schnappinger-Gerull, “A Categorization Scheme for
SLA Metrics,” Multi-Conference Information Systems (MKWI 2006),
Passau, Germany, 2006, pp. 25-40.
[17] L. Marek, A. Villazón, Y. Zheng, D. Ansaloni, W. Binder, Z. Qi, “DiSL:
a Domain Specific Language for Bytecode Instrumentation,” 11th
Annual International Conference on Aspect-Oriented Software
Development (AOSD '12), 2012, pp. 239-250.
[18] S. Chiba, K. Nakagawa, “Josh: an Open AspectJ-Like Language,”
AOSD'04, ACM, 2004, pp. 102-111.
[19] T. Aotani, H. Masuhara, “Scope: an AspectJ Compiler for Supporting
User-Defined Analysis-Based Pointcuts,” AOSD'07, ACM, 2007, pp.
161-172.
[20] T. Parr, “Humans should not have to grok XML; Answers to the
question 'When shouldn't you use XML?'”, IBM DeveloperWorks, 2001
Page | 697
... calls for a format that is both machine-readable and human-readable [Par01;Vad+15]. Therefore, [Lud+15] propose a DSL for describing SLAs which is also human readable. ...
Thesis
Full-text available
Software performance is of particular relevance to software system design, operation, and evolution because it has a significant impact on key business indicators. During the life-cycle of a software system, its implementation, configuration, and deployment are subject to multiple changes that may affect the end-to-end performance characteristics. Consequently, performance analysts continually need to provide answers to and act based on performance-relevant concerns. To ensure a desired level of performance, software performance engineering provides a plethora of methods, techniques, and tools for measuring, modeling, and evaluating performance properties of software systems. However, the answering of performance concerns is subject to a significant semantic gap between the level on which performance concerns are formulated and the technical level on which performance evaluations are actually conducted. Performance evaluation approaches come with different strengths and limitations concerning, for example, accuracy, time-to-result, or system overhead. For the involved stakeholders, it can be an elaborate process to reasonably select, parameterize and correctly apply performance evaluation approaches, and to filter and interpret the obtained results. An additional challenge is that available performance evaluation artifacts may change over time, which requires to switch between different measurement-based and model-based performance evaluation approaches during the system evolution. At model-based analysis, the effort involved in creating performance models can also outweigh their benefits. To overcome the deficiencies and enable an automatic and holistic evaluation of performance throughout the software engineering life-cycle requires an approach that: (i) integrates multiple types of performance concerns and evaluation approaches, (ii) automates performance model creation, and (iii) automatically selects an evaluation methodology tailored to a specific scenario. 
This thesis presents a declarative approach —called Declarative Performance Engineering (DPE)— to automate performance evaluation based on a humanreadable specification of performance-related concerns. To this end, we separate the definition of performance concerns from their solution. The primary scientific contributions presented in this thesis are: A declarative language to express performance-related concerns and a corresponding processing framework: We provide a language to specify performance concerns independent of a concrete performance evaluation approach. Besides the specification of functional aspects, the language allows to include non-functional tradeoffs optionally. To answer these concerns, we provide a framework architecture and a corresponding reference implementation to process performance concerns automatically. It allows to integrate arbitrary performance evaluation approaches and is accompanied by reference implementations for model-based and measurement-based performance evaluation. Automated creation of architectural performance models from execution traces: The creation of performance models can be subject to significant efforts outweighing the benefits of model-based performance evaluation. We provide a model extraction framework that creates architectural performance models based on execution traces, provided by monitoring tools.The framework separates the derivation of generic information from model creation routines. To derive generic information, the framework combines state-of-the-art extraction and estimation techniques. We isolate object creation routines specified in a generic model builder interface based on concepts present in multiple performance-annotated architectural modeling formalisms. To create model extraction for a novel performance modeling formalism, developers only need to write object creation routines instead of creating model extraction software from scratch when reusing the generic framework. 
Automated and extensible decision support for performance evaluation approaches: We present a methodology and tooling for the automated selection of a performance evaluation approach tailored to the user concerns and application scenario. To this end, we propose to decouple the complexity of selecting a performance evaluation approach for a given scenario by providing solution approach capability models and a generic decision engine. The proposed capability meta-model enables to describe functional and non-functional capabilities of performance evaluation approaches and tools at different granularities. In contrast to existing tree-based decision support mechanisms, the decoupling approach allows to easily update characteristics of solution approaches as well as appending new rating criteria and thereby stay abreast of evolution in performance evaluation tooling and system technologies. Time-to-result estimation for model-based performance prediction: The time required to execute a model-based analysis plays an important role in different decision processes. For example, evaluation scenarios might require the prediction results to be available in a limited period of time such that the system can be adapted in time to ensure the desired quality of service. We propose a method to estimate the time-to-result for modelbased performance prediction based on model characteristics and analysis parametrization. We learn a prediction model using performancerelevant features thatwe determined using statistical tests. We implement the approach and demonstrate its practicability by applying it to analyze a simulation-based multi-step performance evaluation approach for a representative architectural performance modeling formalism. We validate each of the contributions based on representative case studies. The evaluation of automatic performance model extraction for two case study systems shows that the resulting models can accurately predict the performance behavior. 
Prediction accuracy errors are below 3% for resource utilization and mostly less than 20% for service response time. The separate evaluation of the reusability shows that the presented approach lowers the implementation efforts for automated model extraction tools by up to 91%. Based on two case studies applying measurement-based and model-based performance evaluation techniques, we demonstrate the suitability of the declarative performance engineering framework to answer multiple kinds of performance concerns customized to non-functional goals. Subsequently, we discuss reduced efforts in applying performance analyses using the integrated and automated declarative approach. Also, the evaluation of the declarative framework reviews benefits and savings integrating performance evaluation approaches into the declarative performance engineering framework. We demonstrate the applicability of the decision framework for performance evaluation approaches by applying it to depict existing decision trees. Then, we show how we can quickly adapt to the evolution of performance evaluation methods which is challenging for static tree-based decision support systems. At this, we show how to cope with the evolution of functional and non-functional capabilities of performance evaluation software and explain how to integrate new approaches. Finally, we evaluate the accuracy of the time-to-result estimation for a set of machinelearning algorithms and different training datasets. The predictions exhibit a mean percentage error below 20%, which can be further improved by including performance evaluations of the considered model into the training data. The presented contributions represent a significant step towards an integrated performance engineering process that combines the strengths of model-based and measurement-based performance evaluation. 
The proposed performance concern language in conjunction with the processing framework significantly reduces the complexity of applying performance evaluations for all stakeholders. Thereby it enables performance awareness throughout the software engineering life-cycle. The proposed performance concern language removes the semantic gap between the level on which performance concerns are formulated and the technical level on which performance evaluations are actually conducted by the user.
... XML, as they were supposed to be processed automatically. However, this increases the manual effort required for their maintenance, and calls for a format that is both machine-and humanreadable [13,18]. ...
Working Paper
The concept of service level agreements (SLAs) defines the idea of a reliable contract between service providers and their users. SLAs provide information on the scope, the quality and the responsibilities of a service and its provider. Service level objectives (SLOs) define the detailed, measurable conditions of the SLAs. After service deployment, SLAs are monitored for situations, that lead to SLA violations. However, the SLA monitoring infrastructure is usually specific to the underlying system infrastructure, lacks generalization, and is often limited to measurement-based approaches. This makes it hard to apply the results from SLA monitoring in other stages of the software life-cycle. In this paper we propose the mapping of concerns defined in SLAs to the performance metrics queries using the Descartes Query Language (DQL). The benefit of our approach is that the same performance query can then be reused for evaluation of performance concerns throughout the entire life-cycle, and regardless of which approach is used for evaluation.
Article
Full-text available
This paper presents an extension of the agent-oriented domain-specific language ALAS to support Distributed Non-Axiomatic Reasoning. ALAS is intended for the development of specific kind of intelligent agents. It is designed to support the Siebog Multi-Agent System (MAS) and implementation of the Siebog intelligent agents. Siebog is a distributed MAS based on the modern web and enterprise standards. Siebog offers support to reasoning based on the Distributed Non-Axiomatic Reasoning System (DNARS). DNARS is a reasoning system based on the Non-Axiomatic Logic (NAL). So far, DNARS-enabled agents could be written only in Java programming language. To solve the problem of interoperability and agent mobility within Siebog platforms, the ALAS language has been developed. The goal of such language is to allow programmers to develop intelligent agents easier by using domain specific constructs. The conversion process of ALAS code to Java code is also described in this paper.
Article
Full-text available
Continuous monitoring of software systems under production workload provides valuable data about application runtime behavior and usage. An adaptive monitoring infrastructure allows controlling, for instance, the overhead as well as the granularity and quality of collected data at runtime. Focusing on application-level monitoring, this paper presents the DProf approach which allows changing the instrumentation of software operations in monitored distributed applications at runtime. It simulates the process human testers employ-monitoring only such parts of an application that cause problems. DProf uses performance objectives specified in service level agreements (SLAs), along with call tree information, to detect and localize problems in application performance. As a proof-of-concept, DProf was used for adaptive monitoring of a sample distributed application.
Conference Paper
Full-text available
Continuous monitoring of software systems under production workload provides valuable data about application runtime behavior and usage. An adaptive monitoring infrastruc-ture allows to control, for instance, the overhead as well as the granularity and quality of collected data at runtime. Focusing on application-level monitoring, this paper presents how we extended the monitoring framework Kieker by reconfiguration capabilities based on JMX technology. The extension allows to change the instrumentation of software operations in monitored distributed Java EE applications. As a proof-of-concept, we demonstrate the adaptive monitoring of a distributed sample Java EE application deployed to a JBoss application server.
Conference Paper
Full-text available
Kieker is an extensible framework for monitoring and analyzing the runtime behavior of concurrent or distributed software systems. It provides measurement probes for application performance monitoring and control-flow tracing. Analysis plugins extract and visualize architectural models, augmented by quantitative observations. Configurable readers and writers allow Kieker to be used for online and offline analysis. This paper reviews the Kieker framework focusing on its features, its provided extension points for custom components, as well the imposed monitoring overhead.
Conference Paper
Full-text available
Effective SLAs are extremely important to assure business continu- ity, customer satisfaction and trust. The metrics used to measure and manage performance compliance to SLA commitments are the heart of a successful agreement and are a critical long term success factor. Lack of experience in the use and automation of performance metrics causes problems for many organi- zations as they attempt to formulate their SLA strategies and set the metrics needed to support those strategies. This paper contributes to a systematic cate- gorization of SLA contents with a particular focus on SLA metrics. The in- tended goal is to support the design and implementation of automatable SLAs based on efficient metrics for automated monitoring and reporting. The catego- rization facilitates design decisions, analysis of existing SLAs and helps to identify responsibilities for critical IT processes in disruption management dur- ing the execution of SLAs.
Conference Paper
Full-text available
In a dynamic service oriented environment it is desirable for service consumers and providers to offer and obtain guarantees regarding their capabilities and requirements. WS-Agreement defines a language and protocol for establishing agreements between two parties. The agreements are complex and expressive to the extent that the manual matching of these agreements would be expensive both in time and resources. It is essential to develop a method for matching agreements automatically. This work presents the framework and implementation of an innovative tool for the matching providers and consumers based on WS- Agreements. The approach utilizes Semantic Web technologies to achieve rich and accurate matches. A key feature is the novel and flexible approach for achieving user personalized matches.
Article
We describe a novel framework for specifying and monitoring Service Level Agreements (SLA) for Web Services. SLA monitoring and enforcement become increasingly important in a Web Service environment where enterprise applications and services rely on services that may be subscribed dynamically and on-demand. For economic and practical reasons, we want an automated provisioning process for both the service itself as well as the SLA management system that measures and monitors the QoS parameters, checks the agreed-upon service levels, and reports violations to the authorized parties involved in the SLA management process. Our approach to these issues is presented in this paper. The Web Service Level Agreement (WSLA) framework is targeted at defining and monitoring SLAs for Web Services. Although WSLA has been designed for a Web Services environment, it is applicable as well to any inter-domain management scenario, such as business process and service management, or the management of networks, systems and applications in general. The WSLA framework consists of a flexible and extensible language based on XML Schema and a runtime architecture comprising several SLA monitoring services, which may be outsourced to third parties to ensure a maximum of objectivity. WSLA enables service customers and providers to unambiguously define a wide variety of SLAs, specify the SLA parameters and the way they are measured, and relate them to managed resource instrumentations. Upon receipt of an SLA specification, the WSLA monitoring services are automatically configured to enforce the SLA. An implementation of the WSLA framework, termed SLA Compliance Monitor, is publicly available as part of the IBM Web Services Toolkit.
Article
Bytecode instrumentation is a valuable technique for transparently enhancing virtual execution environments for purposes such as monitoring or profiling. Current approaches to bytecode instrumentation either exclude some methods from instrumentation, severely restrict the ways certain methods may be instrumented, or require the use of native code. In this paper we compare different approaches to bytecode instrumentation in Java and come up with a novel instrumentation framework that goes beyond the aforementioned limitations. We evaluate our approach with an instrumentation for profiling which generates calling context trees of various platform-independent dynamic metrics.
Conference Paper
Application or web services are increasingly being used across organisational boundaries. Moreover, new services are being introduced at the network and storage level. Languages to specify interfaces for such services have been researched and transferred into industrial practice. We investigate end-to-end quality of service (QoS) and highlight that QoS provision has multiple facets and requires complex agreements between network services, storage services and middleware services. We introduce SLAng, a language for defining Service Level Agreements (SLAs) that accommodates these needs. We illustrate how SLAng is used to specify QoS in a case study that uses a web services specification to support the processing of images across multiple domains and we evaluate our language based on it.
Conference Paper
In this work we propose GXLA, a language for the specification of Service Level Agreements (SLA). GXLA represents the implementation of the Generalized Service Level Agreement (GSLA) information model we proposed in a previous work. It supports multi-party service relationships through a role-based mechanism. It is intended to capture the complex nature of service interactivity in the broader range of SLA modeling of all sorts of IT business relationships. GXLA is defined as an XML schema which provides a common ground between the entities in order to automate the configuration. GXLA can be used by service providers, service customers, and third parties in order to configure their respective IT systems. Each party can use its own independent SLA interpretation and deployment technique to enforce the role it has to play in the contract. An illustrative VoIP service negotiation shows how GXLA is used for automating the process of SLA negotiation and deployment.
Conference Paper
This paper proposes an approach called SCoPE, which supports user-defined analysis-based pointcuts in aspect-oriented programming (AOP) languages. The advantage of our approach is better integration with existing AOP languages than previous approaches. Instead of extending the language, SCoPE allows the programmer to write a pointcut that analyzes a program by using a conditional (if) pointcut with introspective reflection libraries. A compilation scheme automatically eliminates runtime tests for such a pointcut. The approach also makes effects of aspects visible to the analysis, which is essential for determining proper aspect interactions. We implemented a SCoPE compiler for the AspectJ language on top of the AspectBench compiler using a backpatching technique. The implementation efficiently finds analysis-based pointcuts, and generates woven code without runtime tests for those pointcuts. Our benchmark tests with JHotDraw and other programs showed that SCoPE compiles programs with less than 1% compile-time overhead, and generates a program that is as efficient as an equivalent program that uses merely static pointcuts.