
Abstract

Security is one of the central problems in network and service management, both in preventing attacks and using computational mechanisms to protect data and systems, and in administrative matters, which involve not just what needs to be protected but also what security service levels will be delivered. This paper explores Service Level Agreements for Security, or simply Sec-SLAs. It attempts to provide an overview of the subject, the difficulties faced during the security metrics definition process and Sec-SLA monitoring, as well as an analysis of the Sec-SLA's role in new paradigms like cloud computing.
... In this work, we focus on cloud data storage security, which has always been an important aspect of quality of service, to ensure the correctness of users' data in the cloud. [3,4,5,6,8,9,11,13,15] A. Trust: ...
... To overcome this issue, an intrusion detection system (IDS) is generally preferred in the cloud paradigm. [4,6,8,15] Since its first report in 1987, the IDS has gained prime importance as a tool to prevent and mitigate unauthorized access to user data over the network. I. Taxonomy of the security threats and attacks. Data security has gained prime importance in the present era of cloud computing owing to its distributed nature. ...
... This is one of the main sources of concern in the cloud service models, and it heightens the importance of security assurance in the cloud. In [186,44] the authors applied the method proposed in [103] to define metrics to be used in such contracts. In [44] a list of initial guides to start the process is presented. ...
... In [186,44] the authors applied the method proposed in [103] to define metrics to be used in such contracts. In [44] a list of initial guides to start the process is presented. Metrics such as password management (how often passwords should be changed), backup policies, and repair time are described. ...
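The Sec-SLA metrics cited above (password rotation, backup policy, repair time) can be pictured as machine-checkable terms. A minimal sketch in Python, where all field names, thresholds, and measurements are hypothetical illustrations rather than values from any real SLA:

```python
from dataclasses import dataclass

@dataclass
class SecSlaTerm:
    """One Sec-SLA term: a measured metric checked against a threshold."""
    name: str
    threshold: float
    comparator: str  # "max": measured must not exceed threshold; "min": must reach it

    def is_met(self, measured: float) -> bool:
        # e.g. repair time is a "max" metric; backup frequency is a "min" metric
        if self.comparator == "max":
            return measured <= self.threshold
        return measured >= self.threshold

# Hypothetical terms inspired by the metrics listed in [44]:
terms = {
    "password_max_age_days": SecSlaTerm("password_max_age_days", 90, "max"),
    "backups_per_week": SecSlaTerm("backups_per_week", 2, "min"),
    "repair_time_hours": SecSlaTerm("repair_time_hours", 24, "max"),
}

measured = {"password_max_age_days": 60, "backups_per_week": 1, "repair_time_hours": 12}
violations = [n for n, t in terms.items() if not t.is_met(measured[n])]
print(violations)  # → ['backups_per_week']
```

Expressing each term this way is what makes Sec-SLA monitoring automatable: the hard part, as the paper argues, is defining metrics that can actually be measured.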
Thesis
Migrating to the cloud results in losing full control of the physical infrastructure, as the cloud service provider (CSP) is responsible for managing the infrastructure, including its security. Because this forces tenants to rely on CSPs for the security of their information systems, it creates a trust issue. CSPs acknowledge the trust issue and provide a guarantee through a Service Level Agreement (SLA). The agreement describes the provided service and the penalties applied in case of violation. Almost all existing SLAs address only the functional features of the cloud and thus do not guarantee the security of tenants' hosted services. Security monitoring is the process of collecting and analyzing indicators of potential security threats, then triaging these threats with appropriate actions. It is highly desirable for CSPs to provide user-specific security monitoring services based on the requirements of each tenant. In this thesis we present our contribution towards including user-centric security monitoring terms in cloud SLAs. This requires performing different tasks in the cloud service life-cycle, starting before the actual service deployment and continuing until the end of the service. Our contributions are as follows: we design extensions to an existing SLA language called Cloud SLA (CSLA). Our extension, called Extended CSLA (ECSLA), allows tenants to describe their security monitoring requirements in terms of vulnerabilities. More precisely, a security monitoring service is described as a relation between user requirements expressed as vulnerabilities, a software product having those vulnerabilities, and an infrastructure where the software is running. To offer security monitoring SLAs, CSPs need to measure the performance of their security monitoring capability under different configurations. We propose a solution that reduces the required number of evaluations compared to the number of possible configurations. The proposed solution introduces two new ideas.
First, we design a knowledge-base building method that uses clustering to group vulnerabilities using heuristics. Second, we propose a model to quantify the interference between operations monitoring different vulnerabilities. Using these two methods we can estimate the performance of a monitoring device with far fewer evaluations than the naive approach. The metrics used in our SLA terms consider the operational environment of the security monitoring devices. To account for non-deterministic operational environment parameters, we propose an estimation mechanism in which the performance of a monitoring device is measured using known parameter values, and the result is used to model its performance and estimate it for unknown values of that parameter. An SLA definition contains the model, which can be used whenever the measurement is performed. We propose an in situ evaluation method for the security monitoring configuration, which can evaluate the performance of a security monitoring setup in a production environment. The method uses an attack injection technique, but the injected attacks do not affect the production virtual machines. We have implemented and evaluated the proposed method. It can be used by either party to compute the required metric; however, it requires cooperation between tenants and CSPs. To reduce the dependency between tenants and CSPs when performing verification, we propose the use of a logical secure component, illustrated in an SLA addressing data integrity in clouds. The method uses a secure, trusted, and distributed ledger (blockchain) to store evidence of data integrity, allowing each party to check data integrity without relying on the other. If there is any conflict between tenants and CSPs, the evidence can be used to resolve it.
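The ledger-based integrity evidence described in the thesis can be illustrated with a plain hash-chain sketch. The class and method names below are hypothetical stand-ins for the actual blockchain component, not the thesis's implementation:

```python
import hashlib
import json

def digest(payload: bytes) -> str:
    """SHA-256 digest as a hex string."""
    return hashlib.sha256(payload).hexdigest()

class EvidenceLedger:
    """Append-only hash chain storing data-integrity evidence.
    Each entry commits to the previous one, so neither tenant nor CSP
    can silently rewrite history once a conflict arises."""

    def __init__(self):
        self.entries = []

    def record(self, object_id: str, data: bytes) -> None:
        prev = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        entry = {"object_id": object_id, "data_digest": digest(data), "prev": prev}
        entry["entry_hash"] = digest(json.dumps(entry, sort_keys=True).encode())
        self.entries.append(entry)

    def verify(self, object_id: str, data: bytes) -> bool:
        # Check whether the current data matches any recorded digest.
        wanted = digest(data)
        return any(e["object_id"] == object_id and e["data_digest"] == wanted
                   for e in self.entries)

ledger = EvidenceLedger()
ledger.record("vm-image-1", b"original contents")
print(ledger.verify("vm-image-1", b"original contents"))   # True
print(ledger.verify("vm-image-1", b"tampered contents"))   # False
```

The point of chaining the entries is the same as in the thesis's design: evidence recorded before a dispute cannot be altered afterwards by either party.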
... A part of the security research community spent the last decade trying to pin down the concept of a Security SLA, or Sec-SLA, especially in the context of cloud computing [35,154,199,209]. The idea of a Sec-SLA is to take the components of an SLA (a time period, a metric, a threshold, and a penalty) and apply them to the security properties of the system. ...
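The four components named in this snippet (time period, metric, threshold, penalty) map naturally onto a small record type with a penalty computation. A sketch under stated assumptions: the clause fields, metric name, and all numeric values are illustrative only:

```python
from dataclasses import dataclass

@dataclass
class SecSlaClause:
    """One sec-SLA clause built from the four components named in the text.
    All concrete values used below are hypothetical."""
    period_days: int           # length of one evaluation window
    metric: str                # what is measured, e.g. mean time to patch
    threshold: float           # agreed upper bound for the metric
    penalty_per_breach: float  # credit owed for each violated window

def owed_credit(clause: SecSlaClause, measurements: list[float]) -> float:
    """One measurement per evaluation window; each breach accrues a penalty."""
    breaches = sum(1 for m in measurements if m > clause.threshold)
    return breaches * clause.penalty_per_breach

clause = SecSlaClause(period_days=30, metric="mean_time_to_patch_hours",
                      threshold=72.0, penalty_per_breach=50.0)
# Four monthly windows; two exceed the 72-hour threshold:
print(owed_credit(clause, [48.0, 96.0, 80.0, 24.0]))  # → 100.0
```

The structure is identical to an availability SLA clause; what makes it a sec-SLA clause is only that the metric measures a security property.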
Thesis
N-day vulnerabilities are public, recently disclosed, but not well-known software and hardware vulnerabilities. Past n-day vulnerabilities such as Heartbleed, Shellshock, and EternalBlue have already had a significant real-world impact on thousands of information systems and organizations across the world. One difficulty in mounting a proper defense against n-day vulnerabilities is that during the first days after a vulnerability disclosure, preliminary information about the vulnerability is not available in a machine-readable format, delaying the use of automated vulnerability management processes. This creates a period of time when only expensive manual analysis can be used to react to new vulnerabilities, because no data is yet available for cheaper automated analysis. We propose several contributions to automatically analyze newly disclosed vulnerabilities to predict inherent properties such as the software or hardware they affect, and to automatically evaluate the risk they pose in the context of a specific information system. We highlight how our contributions take part in a greater end-to-end strategy aimed at mitigating the risk that information systems face from n-day vulnerabilities.
... This paper addresses cloud security vulnerability issues and the threats posed by distributed denial of service (DDoS) attacks on cloud computing infrastructure, and also discusses means and techniques that could detect and prevent such attacks. [7] A survey on security problems in service delivery models of cloud computing: cloud computing is a way to increase capacity or add capabilities dynamically without investing in new infrastructure, training new personnel, or licensing new software. It extends Information Technology's (IT) existing capabilities. ...
Chapter
Monitoring plays an essential role in the management and engineering of today's communication networks. With paradigms such as cloud computing and network virtualization, the challenges of monitoring are even more demanding, given the variety and dynamics of the services and resources involved. The lack of standards for cloud service monitoring makes it even more difficult to reach common ground on how to assess these services. In this context, this paper takes a high-level stratified view of the problem to clarify and systematise the pieces involved in cloud service monitoring, from the physical/virtual infrastructure to the customer/provider layer. In each layer, relevant measurement requirements and metrics are discussed, following existing guidelines for the evaluation of performance, reliability, security, and service level agreement fulfilment. In addition, representative cloud services in the market are aggregated per service model, and relevant monitoring tools are surveyed and mapped onto the proposed functional layers. The present study is a step towards a more modular, flexible, and consensual view of cloud service monitoring.
Chapter
Continuous certification requires an ongoing review of selected certification criteria in order to increase the credibility of a certification. This chapter presents, by way of example, both methodologies and metrics that a cloud service provider can implement to make the required certification-relevant data available (supporting monitoring-based measurement methods), and methodologies and techniques that enable cloud service auditors to collect and analyze data independently (supporting test-based measurement methods). An automated comparison of measurement results against certification criteria further requires establishing rules that define how violations are handled. For this reason, the chapter concludes by presenting a rule framework for identifying violations and initiating countermeasures.
Conference Paper
Security challenges are the most important obstacles for the advancement of IT-based on-demand services and cloud computing as an emerging technology. In this paper, a structural policy management engine has been introduced to enhance the reliability of managing different policies in clouds and to provide standard as well as dedicated security levels (rings) based on the capabilities of the cloud provider and the requirements of cloud customers. Cloud security ontology (CSON) is an object-oriented framework defined to manage and enable appropriate communication between the potential security terms of cloud service providers. CSON uses two super classes to establish appropriate mapping between the requirements of cloud customers and the capabilities of the service provider.
Article
We examine the concept of security as a dimension of Quality of Service in distributed systems. Implicit to the concept of Quality of Service is the notion of choice or variation. Security services also offer a range of choice both from the user perspective and among the underlying resources. We provide a discussion and examples of user-specified security variables and show how the range of service levels associated with these variables can support the provision of Quality of Security Service, whereby security is a constructive network management tool rather than a performance obstacle. We also discuss various design implications regarding security ranges provided in a QoS-aware distributed system.
Article
The premise of Quality of Security Service is that system and network management functions can be more effective if variable levels of security services and requirements can be presented to users or network tasks. In this approach, the "level of service" must be within an acceptable range, and can indicate degrees of security with respect to various aspects of assurance, mechanistic strength, administrative diligence, etc. These ranges result in additional latitude for management functions to meet overall user and system demands, as well as to balance costs and projected benefits to specific users/clients. With a broader solution space to work within the security realm, the underlying system and network management functions can adapt more gracefully to resource shortages, and thereby do a better job at maintaining requested or required levels of service in all dimensions, transforming security from a performance obstacle into an adaptive, constructive network management tool.
Conference Paper
This paper presents a case study of the application of security metrics to a computer network. A detailed survey is conducted on existing security metric schemes. The Mean Time to Compromise (MTTC) metric and VEA-bility metric are selected for this study. The input data for both metrics are obtained from a network security tool. The results are used to determine the security level of the network using the two metrics. The feasibility and convenience of using both metrics are then compared.
Conference Paper
Measuring security is an important step in creating and deploying secure applications. In order to efficiently measure the level of security that an application provides, three problems need to be solved: obviously metrics need to be available, a suitable metrics framework needs to be chosen and implemented, and the resulting measurements need to be interpreted. This work focuses on the second and third problem. We propose an approach to facilitate the selection and integration of appropriate security metrics, and to support the aggregation and interpretation of measurements. Our approach associates security metrics to security patterns, and we exploit the relationships between security patterns and security objectives to enable the interpretation of measurements. The approach is illustrated in a case study.
Conference Paper
Emerging grid computing infrastructures such as cloud computing can only become viable alternatives for the enterprise if they can provide stable service levels for business processes and SLA-based costing. In this paper we describe and apply a three-step approach to map SLA and QoS requirements of business processes to such infrastructures. We start with formalization of service capabilities and business process requirements. We compare them and, if we detect a performance or reliability gap, we dynamically improve performance of individual services deployed in grid and cloud computing environments. Here we employ translucent replication of services. An experimental evaluation in Amazon EC2 verified our approach.
Conference Paper
Cloud computing is an emerging computing paradigm. It aims to share data, calculations, and services transparently among users of a massive grid. Although the industry has started selling cloud-computing products, research challenges in various areas, such as UI design, task decomposition, task distribution, and task coordination, are still unclear. Therefore, we study the methods to reason about and model cloud computing as a step toward identifying fundamental research questions in this paradigm. In this paper, we compare cloud computing with service computing and pervasive computing. Both the industry and research community have actively examined these three computing paradigms. We draw a qualitative comparison among them based on the classic model of computer architecture. We finally evaluate the comparison results and draw up a series of research questions in cloud computing for future exploration.
Article
In an e-enterprise, the tight coupling between business processes and the underlying information technology infrastructure amplifies the effect of hardware and software security failures. This accentuates the need for comprehensive security management of the infrastructure. This paper outlines the challenges posed by fulfilling myriad security requirements throughout the various stages of enterprise integration. To better categorize these requirements, the set of security domains that comprise the security profile of the e-enterprise is specified. The set of security metrics used to quantify various aspects of security for an e-enterprise is also identified.
Conference Paper
Large, distributed IT infrastructures providing business-critical services have to protect themselves against internal and external threats and adapt to changing environmental parameters, such as workload. The most widely applied structural resilience mechanisms use some form of local static redundancy deployed to each critical resource for failover. However, both large-scale interconnected distributed systems and virtualization have recently enabled online structural reconfiguration that exploits globally managed spare capacity as an on-demand failover resource. In this paper, we present system and service resilience as a control problem and briefly describe how classes of the widely used but vague notion of 'IT metrics' map to the concepts of generic control, with special emphasis on the control aspects of structural reconfiguration as a generic resilience mechanism. Most importantly, we introduce some initial metrics that aim at measuring the self-healing capability of systems employing structural reconfiguration.
Article
In this paper, service level agreement (SLA) management is presented. An SLA defines minimum acceptable performance measures for an IT unit and its users. The first part of this paper introduces the role of the SLA in IT governance. The definition of an SLA is then given, followed by an example. In the third part, SLA management is presented using COBIT (Control Objectives for Information and Related Technology). The COBIT process "DS1: Define and Manage Service Levels" is discussed, including its management guidelines. After that, the advantages and disadvantages of SLAs are described. Finally, the SLA is recommended as a possible solution for controlling internal IT units.
Article
Even though the technology faces several significant challenges, many vendors and industry observers predict a bright future for cloud computing.