Conference Paper

Architecture based analysis of performance, reliability and security of software systems

Authors: Sharma and Trivedi

Abstract

With software systems becoming more complex and handling diverse and critical applications, the need for their thorough evaluation has become ever more important at each phase of software development. With the prevalent use of component-based design, the software architecture as well as the behavior of the individual components of the system needs to be taken into account when evaluating it. In the recent past a number of studies have focused on architecture-based reliability estimation, but areas such as security and cache behavior still lack such an approach. In this paper we propose an architecture-based unified hierarchical model for software reliability, performance, security and cache behavior prediction. We define a metric called the vulnerability index of a software component for quantifying its (in)security. We provide expressions for predicting the overall behavior of the system based on the characteristics of individual components, which also take into account second-order architectural effects to provide an accurate prediction. This approach also facilitates the identification of reliability, performance, security and cache performance bottlenecks. In addition, we illustrate through case studies how the approach can be applied to software systems, and we provide expressions for performing sensitivity analysis.
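For orientation, the kind of composition expressions referred to in the abstract can be sketched as follows, where V_j is the expected number of visits to component j obtained from a DTMC model of the architecture, R_j a component reliability, t_j a mean time per visit, and v_j a vulnerability index. The notation is ours, and the paper's exact expressions, including its second-order corrections, may differ.

```latex
% Hedged sketch of hierarchical composition via expected visit counts (notation ours).
\begin{align*}
  V_j &= \big[\,\pi_0^{\top} (I - Q)^{-1}\big]_j
      && \text{expected visits to component } j \ (Q:\ \text{transient part of the DTMC}) \\
  R_{\text{sys}} &\approx \prod_j R_j^{\,V_j}
      && \text{system reliability from component reliabilities} \\
  T_{\text{sys}} &\approx \sum_j V_j\, t_j
      && \text{mean execution time from per-visit times} \\
  S_{\text{sys}} &\approx 1 - \prod_j (1 - v_j)^{\,V_j}
      && \text{system-level insecurity from vulnerability indices}
\end{align*}
```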


... In this paper we present such an approach, which aims at helping software engineers build reliable and better-performing component-based systems. The main contributions of this work are along the following four dimensions: (i) extending the existing Discrete Time Markov Chain (DTMC) based reliability modeling techniques [8, 7, 21] to allow component-level restarts and application retries and to evaluate the resulting increase in system reliability, (ii) allowing for performance modeling of such systems with unreliable components and fault recovery techniques, (iii) incorporating the effects of operating system and hardware level failures and the corresponding reboots and repairs using a Continuous Time Markov Chain (CTMC) based hardware availability model, and (iv) assessing the relationships between performance, reliability and machine availability for such systems and studying the variation in these attributes with changes in the system. We discuss some related work in this field in the next section. ...
... For the purpose of evaluation, component-based systems have been modeled using DTMCs by [8, 21, 7, 18]. In such DTMC models (Figure 1), the state at any point in time is determined by the component in execution. ...
... The model can be used to calculate the average visit counts [25, 21]. The system reliability (Figure 3), R_s, is given by the probability of the system eventually ending in the state S, which can be calculated by solving the DTMC. This is the reliability of the software system as perceived by the user and will in general be higher than the reliability of a system without any restarts or retries, as was modeled in Figure 2. One should note that the component failures that can be mitigated using restarts and retries are essentially transient in nature. ...
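To make the excerpt above concrete, here is a minimal sketch of solving such a DTMC for expected visit counts and for the probability of eventually reaching the success state S. The components, transition probabilities and code are illustrative only, not taken from the cited papers.

```python
import numpy as np

# Hypothetical 3-component DTMC with absorbing states S (success) and F (failure).
# Rows/cols: C1, C2, C3 are transient; S, F are absorbing. Probabilities are made up.
P = np.array([
    #  C1    C2    C3     S     F
    [0.00, 0.70, 0.28, 0.00, 0.02],   # C1
    [0.10, 0.00, 0.85, 0.00, 0.05],   # C2
    [0.00, 0.00, 0.00, 0.97, 0.03],   # C3
    [0.00, 0.00, 0.00, 1.00, 0.00],   # S
    [0.00, 0.00, 0.00, 0.00, 1.00],   # F
])

Q = P[:3, :3]                        # transient-to-transient block
R = P[:3, 3:]                        # transient-to-absorbing block
N = np.linalg.inv(np.eye(3) - Q)     # fundamental matrix: expected visit counts

pi0 = np.array([1.0, 0.0, 0.0])      # execution starts in C1
visits = pi0 @ N                     # expected visits to each component per run
absorption = pi0 @ N @ R             # probabilities of ending in S and F

print("expected visits:", visits)
print("P(end in S), i.e. system reliability:", absorption[0])
```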
Conference Paper
High reliability and performance are vital for software systems handling diverse mission-critical applications. Such software systems are usually component-based and may possess multiple levels of fault recovery. A number of parameters, including the software architecture, the behavior of individual components, the underlying hardware, and the fault recovery measures, affect the behavior of such systems, and there is a need for an approach to study them. In this paper we present an integrated approach for modeling and analysis of component-based systems with multiple levels of failures and fault recovery at both the software and the hardware level. The approach is useful to analyze attributes such as overall reliability, performance, and machine availabilities for such systems, wherein failures may happen at the software components, the operating system, or the hardware, and corresponding restarts, retries, reboots or repairs are used for mitigation. Our approach encompasses Markov chain and queueing network modeling for estimating system reliability, machine availabilities and performance. The approach is helpful for designing and building better systems and also for improving existing systems.
... The first author was a graduate student at the Indian Institute of Technology Kanpur when this research work was conducted. This forms a part of his Ph.D. thesis [10]. ...
... While modeling software architectures using DTMCs, either a single state or a set of states of the DTMC represents the software component in execution at any point in time. Transitions between states are governed by the transfer of control from one component to another, and appropriate probabilities are assigned according to the behavior of the system [10,11]. Reaching an absorbing state, i.e., a state from which there is no transition to other states, indicates the successful completion of a job. ...
... For example, Figure 3 shows the DTMC model for the tiered architecture specified earlier in Figure 1. This is a DTMC model with two states for each tier, with the upper one representing forward flow of the client request, and the lower one showing the flow of the reply back towards the client [10,11]. ...
Conference Paper
Full-text available
Performance is a critical attribute of software systems and depends heavily on the software architecture. Though the impact of the component and connector architecture on performance is well appreciated and modeled, the impact of component deployment has not been studied much. For a given component and connector architecture, the system performance is also affected by how components are deployed onto hardware resources. In this work we first formulate this problem of finding the deployment that maximizes performance, and then present a heuristic-based solution approach for it. Our approach incorporates the software architecture, component resource requirements, and the hardware specifications of the system. We break the problem into two sub-problems and formulate heuristics for suggesting the best deployment in terms of performance. Our evaluation indicates that the proposed heuristic performs very well and outputs a deployment that is the best or close to the best in more than 96% of cases.
... Therefore, it may be difficult to come up with a measurable component-level property p that can be related to confidentiality or integrity. Despite this, a prediction technique for security has been proposed [25], where a metric called the vulnerability index represents both the component-level property p and the assembly-level property P. Although predictable assembly techniques widen our understanding of how properties can be predicted, they do not address how a composition that satisfies given requirements is found. ...
... However, both derivation tools are built on self-made inference engines, which do not utilise efficient AI-based techniques. The approach presented in [25] proposes to solve the problem of predicting security from component compositions using a vulnerability index. A vulnerability index is a probability that the vulnerability of a component is exposed in a single execution, and it is measured by subjecting the component to attacks for various possible vulnerabilities in the system and finding the ratio of successful attacks to the total number of attacks. ...
... A vulnerability index is a probability that the vulnerability of a component is exposed in a single execution, and it is measured by subjecting the component to attacks for various possible vulnerabilities in the system and finding the ratio of successful attacks to the total number of attacks. Although the approach in [25] has a solid quantitative basis, it is unknown whether such a vulnerability index is meaningful or measurable at the component level. The task of enumerating all vulnerabilities, constructing attacks for these vulnerabilities and evaluating the success of these vulnerabilities is definitely not easy. ...
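A minimal sketch of how such a vulnerability index could be estimated from attack outcomes; the attack list and results below are purely illustrative.

```python
# Hypothetical attack campaign against one component; outcomes are made up.
attack_results = {
    "sql_injection":    False,   # attack failed
    "buffer_overflow":  True,    # attack succeeded
    "path_traversal":   False,
    "weak_auth_bypass": False,
}

# Vulnerability index: fraction of attempted attacks that succeeded,
# interpreted as the probability that a vulnerability is exposed per execution.
vi = sum(attack_results.values()) / len(attack_results)
print(f"vulnerability index = {vi:.2f}")   # 0.25 for this toy data
```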
Conference Paper
Full-text available
Predicting the properties of a component composition has been studied in component-based engineering. However, most studies do not address how to find a component composition that satisfies given functional and nonfunctional requirements. This paper proposes that configurable products and configurators can be used for solving this task. The proposed solution is based on the research on traditional, mechanical configurable products. Due to the computational complexity, the solution should utilise existing techniques from the field of artificial intelligence. The applicability of the approach is demonstrated with KumbangSec, which is a conceptualisation, a language and a configurator tool for deriving component compositions with given functional and security requirements. KumbangSec configurator utilises existing inference engine models to ensure efficiency.
... Performance [6] Transforms UML activity diagrams into queueing network models through an algorithm and then uses queueing theory to obtain the measurement. Performance [127] Uses a Discrete Time Markov Chain (DTMC). [133] Establishes different ways of estimating performance: 1) by analysis: a) order-of-magnitude calculations (scales), b) analytical models (spreadsheets), c) queueing network models; 2) by simulation: a) commercial network models, b) vendor comparison models, c) hybrid models, d) custom-built models. Its approach is general, so applying it to software architecture requires adapting whichever estimation method is selected. ...
... Security [127] Vulnerability: a potential defect or weakness in a system, together with the knowledge required to exploit it. ...
Thesis
Full-text available
A reference architecture is an instrument that influences the design of the architectural models on which the construction of new systems is based, and that allows an organization's best software engineering practices to be preserved and transmitted. This research proposes a methodology for generating a software architecture that, once it has proven useful, is generalized and enriched to establish a reference architectural model that serves as a basis for creating new systems within the same domain. The methodology is an iterative forward-engineering process for building reference architectural models that evolve continuously and extend their benefits to different solutions. Likewise, by extracting reference architectures, the feedback obtained from applying architectural models to the construction of systems is exploited. Systems conceptualized under this methodology are generally integrated into heterogeneous computing environments containing software programs designed under different technologies and approaches, subject to evolutionary forces that have transformed them over time. The research recognizes that many of the systems found in a large number of organizations are legacy systems, so adapting them to satisfy new needs is costly and difficult; nevertheless, the methodology takes these systems into account as part of the real situation in which new software solutions must operate. By taking the integration of legacy systems into account, a framework is established that facilitates their evolution and gradual replacement, in addition to reducing the manual rework currently required to transfer information between systems that need to interact.
... VI [13] denotes the Vulnerability Index (VI), which is defined in terms of the number of successful attacks on a component. (Fig. 1: sort the list of component risk factors.) ...
Article
Full-text available
Security risk assessment is considered a significant and indispensable process in all phases of the software development lifecycle, and most importantly at the early phases. Estimating the security risk should be integrated with the other parts of product development; this will help developers and engineers determine the risky elements in the software system and reduce the failure consequences in that software. This is done by building models based on the data collected in the early development cycles. These models help identify the high security risk elements. In this paper, we introduce a new methodology used at the early phases based on the Unified Modeling Language (UML), attack graphs, and other factors. We estimate the probability and severity of security failure for each element in the software architecture based on UML, the attack graph, data sensitivity analysis, access rights, and a reachability matrix. Then risk factors are computed. An e-commerce case study is investigated as an example.
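As a hedged illustration of the kind of computation described, a per-element risk factor can be taken as estimated probability times severity and the elements ranked; the element names and values below are invented, and the paper's exact weighting may differ.

```python
# Hypothetical architecture elements with estimated probability of security
# failure and severity of its consequences (values are illustrative only).
elements = {
    "payment_service": (0.15, 9.0),
    "catalog_service": (0.05, 3.0),
    "auth_gateway":    (0.10, 8.0),
}

# Risk factor per element: probability x severity, then rank to find risky elements.
risk = {name: p * sev for name, (p, sev) in elements.items()}
for name, rf in sorted(risk.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: risk factor = {rf:.2f}")
```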
... Model-based and model-driven design methods have been proposed to improve the complex design process for embedded systems [7] [18] [13], but addressing the performance aspects remains difficult. Particularly, predicting performance in the early design stages is hard, since the system does not exist yet [19] [4]. This paper addresses early-in-design performance evaluation using a DSL. ...
Conference Paper
Full-text available
The increasing complexity of embedded systems requires performance evaluation early in the design phase. We introduce a generic way of generating performance models based on a system description given in a domain-specific language (DSL). We provide a transformation from a DSL to a performance model in the Parallel Object-Oriented Specification Language (POOSL). A case study shows the feasibility of the approach in a complex interventional X-ray system, which requires appropriate measurement data on a prototype. Since distance computations are an integral part of the system, performance profiles of our chosen distance package, Proximity Query Package, have been created. The overall model has been successfully validated by comparing its outcomes with real measurements.
... [17]. However, some quality attributes, such as security or safety, are more difficult in the sense that their prediction requires more information [21]. A prediction mechanism for security has also been proposed [30]. However, a problem with security is that it is not trivial to come up with a quantitative, composable metric that captures security concepts well. ...
Conference Paper
Full-text available
Software products often need to vary in terms of functionality, but also in terms of quality attributes. We describe KumbangSec, which is an approach for modelling functional and security variability applying different architectural viewpoints. KumbangSec builds on an existing variability modelling method, Kumbang, by extending it with security engineering concepts. KumbangSec consists of a conceptualisation and a language implementing the conceptualisation. The semantics for the conceptualisation has been described using natural language; the syntax for the language has been defined in a grammar. KumbangSec can be supported with a configurator, which is an intelligent tool supporting derivation of products that each satisfy specific and different functional and security requirements. Such a tool, the KumbangSec configurator, is currently being developed.
... This SLR did not show any papers that provided an existing framework or model for security and FDD combined, but there was one paper [57] that showed measurements for a matrix comparing compatible FDD elements and security elements. There were also many frameworks [58], architectures [59][60][61][62][63], models [64], and metrics [65] of security that were mentioned in related works. ...
Article
Full-text available
Agile methodologies have gained recognition as efficient development processes through their quick delivery of software, even under time constraints. However, like other agile methods such as Scrum, Extreme Programming (XP) and the Dynamic Systems Development Method (DSDM), Feature Driven Development (FDD) has been criticized due to the unavailability of security elements in its twelve practices. In order to examine this matter more closely, we conducted a systematic literature review (SLR) and studied the literature for the years 2001-2012. Our findings highlight that, in its current form, the FDD model partially supports the development of secure software. However, there is little research on this topic, as detailed information about the usage of secure software is rarely published. Thus, we have been able to conclude that, to date, the existing five phases of FDD have not been enough to develop secure software. For this reason, a security-based phase and practices in FDD need to be proposed.
... In component-based software engineering, there have been attempts to predict security from the properties of components, but the challenge has been to come up with a suitable component-level property. One such proposal is the use of a vulnerability index [32], which is a probability that the vulnerabilities of a component are exposed in a single execution. However, it requires considerable effort to enumerate all vulnerabilities, and construct, execute and measure all corresponding attacks. ...
Conference Paper
Full-text available
In a software product line, security may need to be varied. Consequently, security variability must be managed both from the customer and product line architecture point of view. We utilize design science to build an artifact and a generalized design theory for representing and configuring security and functional variability from the requirements to the architecture in a configurable software product line. An open source web shop product line, Magento, is used as a case example to instantiate and evaluate the contribution. The results indicate that security variability can be represented and distinguished as countermeasures; and that a configurator tool is able to find consistent products as stable models of answer set programs.
... Deubler et al. [34] propose an approach for facilitating the development of security-critical service-based software using a tool called AutoFocus, based on the formal method Focus. Sharma & Trivedi [43] propose an architecture-based unified hierarchical model for predicting software reliability, performance, security, and cache behaviour. Viega et al. [57] consider and explore trust assumptions during every stage of software development. ...
Technical Report
The rapid propagation of software systems into nearly every aspect of modern life together with the ever growing number of threats against these systems have given rise to one of the greatest challenges in information technology today. This is the challenge of obtaining software systems that are secure from threats. These threats range from exploitations of buffer overflows and unprotected critical memory locations to reverse engineering in order to find vulnerabilities. Researchers have risen to this challenge by proposing solutions that touch all aspects of software development and operation. Yet, an overall view of this research, showing how seemingly diverse research efforts fit together, does not appear to exist. Such an organized view may help the secure software research community understand where recent research has occurred and direct new research to interesting and promising areas. In addition, newcomers to this field will quickly see what secure software is all about. This paper provides this view and suggests a way to identify new research topics in secure software.
... Analyzing and evaluating software architecture during the design phase has been proved to be an effective way to find potential problems in the early stages of software life cycle, reduce costs and assure software quality [12]. Guided by this idea, some researchers have conducted research on architecture-level security analysis and design [13][14][15][16][17][18][19]. These methods can be used for verifying whether the design of software architecture has met the security requirements, or assuring the security of software architecture by designing security policies that constrain the components, connectors and configurations. ...
Article
Full-text available
Current research on software vulnerability analysis mostly focuses on source code or executable programs. But these methods can only be applied after the software is completely developed and source code is available. This may lead to high costs and tremendous difficulties in software revision. On the other hand, as an important product of the software design phase, the architecture can depict not only the static structure of software, but also the information flow due to the interaction of components. Architecture is crucial in determining the quality of software. As a result, by locating architecture-level information flows that violate security policies, vulnerabilities can be found and fixed in the early phase of the software development cycle, when revision is easier and cheaper. In this paper, an approach for analyzing information flow vulnerability in software architecture is proposed. First, the concept of information flow vulnerability in software architecture is elaborated, and corresponding security policies are proposed. Then, a method for constructing service invocation diagrams based on graph theory is proposed, which can depict information flow in software architecture. Moreover, an algorithm for vulnerability determination is designed to locate architecture-level vulnerabilities. Finally, a case study is provided, which verifies the effectiveness and feasibility of the proposed methods.
... Still, the static analysis of artefacts created as part of MDAD can be used to better understand the system. Additionally, it can be extended as shown by Sharma & Trivedi, who used Discrete Time Markov Chain modelling to estimate the reliability, performance and security of a given system by examining the number of visits to and the time spent in each module, calculated from a transition probability matrix (Sharma & Trivedi, 2005). The probabilities within this matrix are affected by the architecture, as different structures and behaviour would lead to variation in the time spent in each module and the number of visits to specific modules required to finish processing. ...
Chapter
Full-text available
The design optimisation guidance methodology aims to aid the designer in directing the overall system optimisation process. One of the major difficulties of providing such guidance is the nature by which this optimisation process is advanced. Specifically, the designer is essentially incapable of affecting the qualities directly. Instead, he or she is forced to consider a set of choices targeting the specific features of the design contributing towards achievement of desirable system qualities. As a result, since a single choice could affect multiple qualities, this introduces a requirement for guidance to provide the designer with an understanding of the causal relationships existing in the system. Achieving this involves the study of assumptions held by the designer and other stakeholders, the relevance of existing knowledge and the accuracy of possible predictions. The fusion of simulation modelling and BBNs can serve as a tool for such study, as its aim is to provide a tangible link between the way in which the system is structured and its observed levels of quality. Additionally, by combining the hybrid simulation model with a BBN discovery algorithm we managed to obtain a much more repeatable output that is validated against encoded assumptions and is less prone to human error. However, the method's success relies greatly on the validity of the model and the clarity of the BBN representation. To this end we have found that the simulation model should be built in an incremental manner using a variety of information sources and explicit encoding of the assumptions held by the participants. Consequently, the extracted BBN plays a dual role both as a guidance tool and a model verification tool, as the conditional probabilities it displays can quickly highlight inconsistencies within the model. The results presented herein warrant further investigation along four major axes: • further research is needed to help the designer with the choice of quality factors and criteria that contribute to the nodes of the CQM; • a taxonomy of simulation primitives needs to be developed to aid the designer with the construction of hybrid simulation models; • additional research is needed to examine how various BBN discovery algorithms perform on the types of simulation output produced by models of systems from different domains; • studies should be conducted into various stochastic methods of optimisation, such as Cross-Entropy (Caserata & Nodar, 2005), that could be implemented based on the outcomes of BBN use for qualitative guidance applied over a succession of system development cycles. Finally, the development of this approach to guidance should be used to construct a fully fledged decision support and optimisation framework described in Section 3.
Article
AFIT/GIA/ENG/06-04, March 2006. Thesis (M.S.), Air Force Institute of Technology, 2006. Includes bibliographical references (leaves 112-115).
Article
There is a growing demand for using commercial-off-the-shelf (COTS) software components to facilitate the development of software systems. Among the many research topics for component-based software, quality-of-service (QoS) evaluation is yet to be given the importance it deserves. In this paper, we propose a novel analytical model to evaluate the QoS of component-based software systems. We use the component execution graph (CEG) model to model the architecture at the process level and the interdependence among components. The CEG can explicitly capture sequential, parallel, selective and iterative compositions of components. For QoS estimation, each component in the CEG model is associated with an execution rate, a failure rate and a cost per unit time. Three metrics of QoS are considered and analytically calculated, namely make-span, reliability and cost. Through a case study, we show that our model is capable of modeling real-world COTS software systems effectively. Also, Monte-Carlo simulation in the case study indicates that the analytical results are consistent with simulation and all are covered by 95% confidence intervals. We also present a sensitivity analysis technique to identify QoS bottlenecks. This paper concludes with a comparison with related work. Copyright © 2007 John Wiley & Sons, Ltd.
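A minimal sketch of QoS composition in this spirit is given below; the sequential and parallel rules shown are simplified assumptions of ours and do not reproduce the CEG model's actual equations.

```python
from dataclasses import dataclass

@dataclass
class QoS:
    time: float          # expected execution time (make-span contribution)
    reliability: float   # probability of failure-free execution
    cost: float          # expected cost

def seq(*parts: QoS) -> QoS:
    """Sequential composition: times and costs add, reliabilities multiply."""
    r = 1.0
    for p in parts:
        r *= p.reliability
    return QoS(sum(p.time for p in parts), r, sum(p.cost for p in parts))

def par(*parts: QoS) -> QoS:
    """Parallel composition: make-span is the slowest branch; all branches must succeed."""
    r = 1.0
    for p in parts:
        r *= p.reliability
    return QoS(max(p.time for p in parts), r, sum(p.cost for p in parts))

# Toy components (values invented for illustration).
a, b, c = QoS(2.0, 0.99, 5.0), QoS(3.0, 0.98, 4.0), QoS(1.5, 0.999, 2.0)
print(seq(a, par(b, c)))   # QoS of A followed by B and C in parallel
```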
Conference Paper
Full-text available
The Feature-Oriented approach provides a way of modelling commonalities and variabilities among products of a software product line. A feature model can be used as input for generating an architectural representation of a product line. Product line architectures can be specified using one of the architecture description languages that already supports the specification of commonalities and variabilities. xADL 2.0 is a highly-extensible XML-based architecture description language that allows product line architectures to be defined. But, in the process of generating the architecture from a feature model, several crosscutting variable features and dependencies between features are commonly found from a feature-oriented analysis. These features and dependencies can be modeled using an aspect-oriented architecture approach. In this paper we present how a xADL 2.0 extension with aspects can help represent the crosscutting variable features and the dependencies in product line architectures from a feature-oriented analysis.
Article
Full-text available
As organizations increasingly operate, compete, and cooperate in a global context, business processes are also becoming global to propagate the benefits from coordination and standardization across geographical boundaries. In this context, security has gained significance due to increased threats, as well as legislation and compliance issues. This article presents a framework for assessing the security of Internet technology components that support a globally distributed workplace. Four distinct information flow and design architectures are identified based on location sensitivities and placements of the infrastructure components. Using a combination of scenarios, architectures, and technologies, the article presents the framework of a development tool for information security officers to evaluate the security posture of an information system. To aid managers in better understanding their options to improve security of the system, we also propose a three-dimensional representation, based on the framework, for embedding solution alternatives. To demonstrate its use in a real-world context, the article also applies the framework to assess a globally distributed workforce application at a northeast financial institution.
Conference Paper
This paper presents a continuous investigation on performance analysis with sensitivity analysis. As an effort to develop a systematic approach to improve the performance of service-oriented software systems, the original work introduced a statistical approach of two-factor-based sensitivity analysis to software performance analysis. The goal of generating accurate performance feedback, however, was only partially achieved as performance analysis needs to consider more factors. This paper presents a generalization for the statistical method to handle multiple factors. In addition, it gives detailed discussions on sensitivity analysis with three factors, and provides experiment results to demonstrate the need and advantages of analyzing multiple factors at the same time.
Article
The primary advantage of model-based performance analysis is its ability to facilitate sensitivity and predictive analysis, in addition to providing an estimate of the application performance. To conduct model-based analysis, it is necessary to build a performance model of an application which represents the application structure in terms of the interactions among its components, using an appropriate modeling paradigm. While several research efforts have been devoted to the development of the theoretical aspects of model-based analysis, its practical applicability has been limited despite the advantages it offers. This limited practical applicability is due to the lack of techniques available to estimate the parameters of the performance model of the application. Since the model parameters cannot be estimated in a realistic manner, the results obtained from model-based analysis may not be accurate. In this paper, we present an empirical approach in which profile data in the form of block coverage measurements is used to parameterize the performance model of an application. We illustrate the approach using a network routing simulator called the Maryland routing simulator (MaRS). Validation of the performance estimate of MaRS obtained from the performance model parameterized using our approach demonstrates the viability of our approach. We then illustrate how the model could be used for predictive performance analysis using two scenarios. By virtue of using code coverage measurements to parameterize a performance model, we integrate two mature, yet independent research areas, namely, software testing and model-based performance analysis.
Conference Paper
Service-oriented systems are large, dynamic, and heterogeneous distributed environments that can be plagued with performance problems. These problems are becoming increasingly difficult for human administrators to analyze and indeed rectify, due to the sheer size and complexity of the environments they occur in. To curb this trend, a high degree of performance resilience must be injected into the system, such that it can autonomously overcome problems and resume normal service with minimal human intervention. Attending to this need requires a feedback-loop solution that monitors service-oriented workloads, localizes guilty services using the gathered data, and recuperates the performance of those identified services. Further to previous work on the first two points, this paper seeks to address the last issue through three platform-independent, autonomic means: dynamically switching to another service, automatically restarting the suffering service, and autonomously reversing any incorrect configuration of that service. A strategy is devised to orchestrate these mechanisms to respond to different performance problems. Evaluations in a real-world service-oriented grid show that our approach is effective against two common types of performance problems.
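As a sketch of how the three recovery mechanisms could be orchestrated, the snippet below is illustrative only: the class, field and escalation order are invented and do not reproduce the paper's actual strategy.

```python
from dataclasses import dataclass, field

@dataclass
class Service:
    name: str
    config_ok: bool = True
    healthy: bool = True
    alternatives: list = field(default_factory=list)

def recuperate(svc: Service) -> str:
    """Pick one of three platform-independent recovery actions.
    The escalation order (reconfigure -> switch -> restart) is illustrative only."""
    if not svc.config_ok:
        svc.config_ok = True
        return f"reverted configuration of {svc.name}"
    if not svc.healthy and svc.alternatives:
        return f"switched from {svc.name} to {svc.alternatives[0]}"
    svc.healthy = True
    return f"restarted {svc.name}"

# Toy feedback loop over services flagged by a (not shown) monitoring/localization step.
flagged = [Service("pricing", config_ok=False),
           Service("billing", healthy=False, alternatives=["billing-v2"])]
for svc in flagged:
    print(recuperate(svc))
```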
Conference Paper
Software components and software architectures have emerged as a promising paradigm to improve the construction of software systems. Some attributes, such as reliability, require evidence about failures in the system. An approach addressing the software reliability estimation problem is based on considering all execution traces collected during the testing process. An execution trace is a sequence of blocks grouping source code statements. Following this approach, early reliability assessment of component assemblies requires addressing an important issue: a precise composition semantics representing the behavior of the assembled components. This paper describes a composition model for sequential component assemblies which uses as basic units of composition empirical evidence generated during the component testing process. These units are named Component Test Records.
Conference Paper
Architectural change heuristics are a very powerful mechanism for implementing architectural optimisation. They allow for both the capture of the systematic changes required to maintain system integrity and the often poorly understood rationale of expert knowledge. However, even though heuristics are one of the oldest and most widely used problem-solving mechanisms, they are also perhaps one of the most misused and ill-defined. In order to understand how heuristics can be used in optimising system architectures, it is important to understand the nature of heuristics, especially as they apply to architectural optimisation. This paper presents a framework that can be used to categorise and classify heuristics as they are used in system optimisation. It is anticipated that this framework will provide a common foundation within which to discuss heuristics in architectural optimisation.
Chapter
Full-text available
The Health Information Technology for Economic and Clinical Health (HITECH) Act of 2009 called for the widespread adoption and implementation of electronic medical records (EMR) to improve care quality, avoid unnecessary cost, and promote patient safety as a cultural norm. An electronic health record (EHR) is the systematic collection of patient and population health information, electronically stored in a digital format. EHR systems are designed to store data accurately and to capture the state of a patient across time. They eliminate the need to track down a patient's previous paper medical records and assist in ensuring data are accurate and legible. They can reduce data replication, as there is only one modifiable file, which means the file is more likely to be up to date and the risk of lost paperwork decreases. Sociotechnical models of EHR, EMR, CPOE, MAR, and eMAR use point to complex interactions between technology and other aspects of the environment related to human resources, among others. Diverse stakeholders, including clinicians, researchers, business leaders, policy makers, and the public, have good reason to believe that the effective use of EHRs is essential to meaningful advances in health care quality and patient safety. However, several reports have documented the potential of EHRs to contribute to health care system flaws and patient harm. As organizations with limited resources for care process transformation, human-factors engineering, software safety, and project management begin to use EHRs, the chance of EHR-associated harm may increase. Mitigation measures include setting EHR implementation in the context of health care process improvement, building safety into the specification and design of EHRs, safety testing and reporting, and rapid communication of EHR-related safety flaws and incidents. The most frequently mentioned security measures and techniques are categorized into three themes: administrative, physical, and technical safeguards. The sensitive nature of the information contained within electronic health records has prompted the need for advanced security techniques that are able to put these worries at ease. It is imperative for security techniques to cover the wide range of threats and challenges, such as a substantial learning curve, confidentiality and security issues, lack of standardized terminology, system architecture, and indexing.
Conference Paper
Full-text available
Controlling and managing variability across all life cycle stages is the key practice area that distinguishes product line engineering from single system development. Therefore, product line researchers have proposed many approaches and techniques for addressing variability management. There is, however, no standard or standardized framework yet that organizes and structures the different approaches in a clear and comparable way. This paper uses the concept of a variability dimension to clearly separate variability-related issues from all other aspects related to development artifacts. Along the variability dimension, the paper then presents the variability philosophy used by Fraunhofer PuLSE (Product Line Software and System Engineering).
Conference Paper
Features have been recognized as important building blocks of software product lines. Unfortunately, features have been mostly confined to modeling activities as they were originally conceived by Kang and his group in the early 90's. In this paper we address the negative impact this has had on product line development and how research on programming languages and UML support for features can help.
Conference Paper
This paper investigates the application of mathematical methodologies in software performance engineering. It applies the techniques of design of experiments (DOE) and develops a systematic approach to improve software performance and to reduce development cost for the design of service-oriented software systems. This work aims at developing a mechanism that helps software designers optimize software designs by providing more accurate feedback on software performance with sensitivity analysis. To illustrate the efficacy of the proposed approach, the paper also includes a case study on an existing Web service-oriented system and a clinical decision support system (CDSS).
Conference Paper
Blocking is the phenomenon where a service request is momentarily stopped, but not lost, until the service becomes available again. Despite its importance, blocking is a difficult phenomenon to model analytically, because it creates strong inter-dependencies among the system's components. Mean Value Analysis (MVA) is one of the most appealing evaluation methodologies because of its low computational cost and ease of use. In this paper, an approximate MVA for Blocking After Service is presented that greatly outperforms previous results. The new algorithm is obtained by analyzing the inter-dependencies due to the blocking mechanism and by consequently modifying the MVA equations. The proposed algorithm is tested and then applied to a capacity planning and admission control study of a web server system.
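For reference, the classic exact MVA recursion for a closed, single-class product-form network without blocking is sketched below; the paper's contribution, modifying these equations to approximate Blocking After Service, is not reproduced here.

```python
def mva(service_demands, num_customers):
    """Exact Mean Value Analysis for a closed, single-class, product-form
    queueing network of queueing stations (no blocking).
    service_demands[k] = visit ratio * mean service time at station k."""
    K = len(service_demands)
    queue = [0.0] * K                                   # mean queue lengths with 0 customers
    for n in range(1, num_customers + 1):
        # residence time: demand inflated by the queue an arriving customer sees
        resid = [service_demands[k] * (1.0 + queue[k]) for k in range(K)]
        throughput = n / sum(resid)                     # system throughput with n customers
        queue = [throughput * r for r in resid]         # Little's law per station
    return throughput, resid, queue

X, residences, queues = mva([0.40, 0.25, 0.10], num_customers=20)
print(f"throughput = {X:.3f} jobs/unit time, response time = {sum(residences):.3f}")
```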
Conference Paper
Full-text available
Background: Non-functional requirements (NFRs) are greatly influenced by the architectural choices and designs made during the architecture phase. Security is an important concern in today's world in most applications. Security, like other NFRs, is also related to software architecture. A lot of work has been reported on dealing with security at the architecture level. Aim: This paper synthesizes the work that has been reported about security at the architecture level. Results: This paper reports a mapping study conducted to see the quantity and type of work reported about security at the software architecture level. Results show that most of the work in this area is about proposing and evaluating a solution. Conclusion: The mapping study identified the efforts that have been reported about security at the architecture level. The use of the systematic mapping technique can bring a significant contribution to research areas; based on the systematic maps, not only can the state of the art of an area be identified at an abstract level, but future directions for research can also be suggested.
Conference Paper
Current reliability analysis techniques encounter a prohibitive challenge with respect to the control flow representation of large software systems with intricate control flow structures. Some techniques use a component-based Control Flow Graph (CFG) structure which represents only inter-component control flow transitions. This CFG structure disregards the dependencies among multiple outward control flow transitions of a system component and does not provide any details about a component internal control flow structure. To overcome these problems, some techniques use statement-based or block-based CFGs. However, these CFG structures are remarkably complex and difficult to use for large software systems. In this paper, we propose a simple CFG structure called Connection Dependency Graph (CDG) that represents inter-component and intra-component control flow transitions and preserves the dependencies among them. We describe the CDG structure and explain how to derive it from a program source code. Our derivation exploits a number of architectural patterns to capture the control flow transitions and identify the execution paths among connections. We provide a case study to examine the effect of program size on the CDG, the statement-based, and the block-based CFGs by comparing them with respect to complexity using the PostgreSQL open source database system.
Article
Full-text available
Welcome to VaMoS'07 – the First International Workshop on Variability Modelling of Software-intensive Systems! The goal of VaMoS is to help the variability modelling community to more thoroughly understand how the different variability modelling approaches complement each other and how these approaches can be improved and integrated to better meet the needs of practitioners. To this end, VaMoS is planned to be a highly interactive event. Each session is organized in order to stimulate discussions among the participants (see page 4). The high number of submissions and the quality of the accepted papers show that variability modelling is an important field of research. We hope that VaMoS will stimulate work on new challenges in variability modelling and thus will help to shape the future of variability modelling research. In total, we have received 38 papers for VaMoS, out of which 20 were accepted. Each paper was reviewed by at least three members of the programme committee. Based on the PC members' reviews, the final decision was taken by the organization committee during a two-day meeting in Essen on the 1st and 2nd of December.
Article
Software vulnerabilities are a cause of information security incidents, which are becoming more and more damaging. Therefore, software vulnerability analysis is important in the theory and practice of information security. This paper describes software vulnerabilities and software vulnerability analysis methods, and then presents a software vulnerability analysis system. Various methods are compared to identify technical and engineering problems and future trends in software vulnerability analysis.
Thesis
Full-text available
Software product line engineering is a plan-driven paradigm to produce varying software products. Software product lines typically differentiate the products by their functionality. However, customers may have different needs regarding performance, security, reliability, or other quality attributes. Building a software product line that is able to efficiently produce products with purposefully different quality attributes is a challenging task. The aim in this dissertation was to study why and how to vary quality attributes purposefully in a software product line. The study focused on two quality attributes, performance and security. We conducted a systematic literature review on quality attribute variability, conducted two case studies on performance variability, and constructed a design theory and artifacts addressing security variability. The results indicate that quality attributes can be purposefully varied to serve different customer needs, to conduct price differentiation, and to better balance product and design trade-offs. Additionally, quality attributes can be varied to adapt to varying operating environment constraints. The quality attribute differences can be communicated to the customers as observable product properties, as internal resources, or as the target operating environments. In particular, security can be distinguished as countermeasures. In the product line architecture, quality attribute differences can be designed through software or hardware design tactics or by relying on indirect variation. Just designing the differences may not be enough to ensure the product has given quality attributes, but the impact of other variability may need to be handled at the product-line or product level. Our contributions are as follows. Instead of focusing on how to represent quality attribute variability, we focused on understanding the phenomenon of how specific quality attributes vary. We identified several differences between performance and security variability, for example, that security is more difficult to distinguish to the customers but more straightforward to design and derive. We combined design and customer viewpoints: the reason to vary and the means to communicate to the customers should be analyzed both from the technical and non-technical viewpoints. Finally, we drew evidence-based generalizable knowledge from the industrial context.
Conference Paper
With the increasing use of mobile phones in contemporary society, more and more networked computers are connected to each other. This has brought along security issues. To solve these issues, both the research and development communities are trying to build more secure software. However, the question remains how secure software is defined and how security can be measured. In this paper, we study this problem by examining what kinds of security measurement tools (i.e. metrics) are available, and what these tools and metrics reveal about the security of software. As a result of the study, we noticed that security verification activities fall into two main categories, evaluation and assurance. There exist 34 metrics for measuring security, of which 29 are assurance metrics and 5 are evaluation metrics. Evaluating and studying these metrics led us to the conclusion that the general quality of the security metrics is not at a satisfying level for suitable use in daily engineering workflows. They have both theoretical and practical issues that require further research and need to be improved.
Article
Full-text available
Developing a new reliable framework based on IEEE 802.15.4 for communication among Internet-connected smart devices can be valuable for improving communication reliability. The Internet of Things enables nearby devices to communicate and share information, but secure and reliable communication remains a challenge. In its beginnings, the Internet was developed for one device to communicate with another through browser access; in the current era, however, high-speed, efficient smart devices with advanced characteristics such as low power consumption and low cost are available to communicate with each other. Communication reliability has been raised as one of the most critical issues in wireless networking, and resolving it would support the continued growth in the use and popularity of the Internet of Things. The proposed research creates a framework for providing communication reliability in networks of Internet-connected smart devices using IEEE 802.15.4. Our main contribution is a study that integrates reliability into the communication framework and provides reliable connections among smart devices. This study will be useful for Internet of Things frameworks. The algorithm has been implemented experimentally, and the proposed framework performs well in our comprehensive experiments.
Chapter
Full-text available
Deciding on the optimal architecture of a software system is difficult, as the number of design alternatives and component interactions can be overwhelmingly large. Adding security considerations can make architecture evaluation even more challenging. Existing model-based approaches for architecture optimisation usually focus on performance and cost constraints. This paper proposes a model-based architecture optimisation approach that advances the state-of-the-art by adding security constraints. The proposed approach is implemented in a prototype tool, by extending Palladio Component Model (PCM) and PerOpteryx. Through a laboratory-based evaluation study of a multi-party confidential data analytics system, we show how our tool discovers secure architectural design options on the Pareto frontier of cost and performance.
Article
This article's objective is to develop a model for measuring the security strength of dongle-protected software. We believe such a measure is important because it can attach a clear, simple, and understandable monetization number to security. Dongles are USB keys or small boxes attached to the host parallel port. The copy-protected application interacts with the dongle and progresses its execution only if the dongle answers appropriately. The interaction between the software and the dongle takes place through calls to the dongle API.
Article
Full-text available
A virtual-address translation buffer (TB) is a hardware cache of recently used virtual-to-physical address mappings. The authors present the results of a set of measurements and simulations of translation buffer performance in the VAX-11/780. Two different hardware monitors were attached to VAX-11/780 computers, and translation buffer behavior was measured. Measurements were made under normal time-sharing use and while running reproducible synthetic time-sharing workloads. Reported measurements include the miss ratios of data and instruction references, the rate of TB invalidations due to context switches, and the amount of time taken to service TB misses. Additional hardware measurements were made with half the TB disabled. Trace-driven simulations of several programs were also run; the traces captured system activity as well as user-mode execution. Several variants of the 11/780 TB structure were simulated.
Article
Full-text available
This paper introduces a reliability model, and a reliability analysis technique for component-based software. The technique is named Scenario-Based Reliability Analysis (SBRA). Using scenarios of component interactions, we construct a probabilistic model named Component-Dependency Graph (CDG). Based on CDG, a reliability analysis algorithm is developed to analyze the reliability of the system as a function of reliabilities of its architectural constituents. An extension of the proposed model and algorithm is also developed for distributed software systems. The proposed approach has the following benefits: 1) It is used to analyze the impact of variations and uncertainties in the reliability of individual components, subsystems, and links between components on the overall reliability estimate of the software system. This is particularly useful when the system is built partially or fully from existing off-the-shelf components; 2) It is suitable for analyzing the reliability of distributed software systems because it incorporates link and delivery channel reliabilities; 3) The technique is used to identify critical components, interfaces, and subsystems; and to investigate the sensitivity of the application reliability to these elements; 4) The approach is applicable early in the development lifecycle, at the architecture level. Early detection of critical architecture elements, those that affect the overall reliability of the system the most, is useful in delegating resources in later development phases.
Article
Full-text available
Dependability evaluation is a basic component in assessing the quality of repairable systems. A general model (Op) is presented and is specifically designed for software systems; it allows the evaluation of various dependability metrics, in particular of availability measures. Op is of the structural type, based on Markov process theory. In particular, Op is an attempt to overcome some limitations of the well-known Littlewood reliability model for modular software. This paper gives the mathematical results necessary for the transient analysis of this general model, and algorithms that can efficiently evaluate it. More specifically, from the parameters describing (i) the evolution of the execution process when there is no failure, (ii) the failure processes together with the way they affect the execution, and (iii) the recovery process, results are obtained for the distribution function of the number of failures in a fixed mission and for dependability metrics that are much more informative than the usual ones in a white-box approach. The estimation procedures for the Op parameters are briefly discussed. Some simple examples illustrate the interest of such a structural view and explain how to consider reliability growth of part of the software with the transformation approach developed by Laprie et al. The complete transient analysis of Op allows discussion of the Poisson approximation by Littlewood for his model.
Conference Paper
Full-text available
Using commercial off-the-shelf (COTS) components to build large, complex systems has become the standard way that systems are designed and implemented by government and industry. Much of the literature on COTS-based systems concedes that such systems are not suitable for mission-critical applications. However, there is considerable evidence that COTS-based systems are being used in domains where significant economic damage and even loss-of-life are possible in the event of a major system failure or compromise. Can we ever build such systems so that the risks are commensurate with those typically taken in other areas of life and commerce? This paper describes a risk-mitigation framework for deciding when and how COTS components can be used to build survivable systems. Successful application of the framework will require working with vendors to reduce the risks associated with using the vendors’ products, and improving and making the best use of your own organization’s risk-management skills.
Article
ATOM (Analysis Tools with OM) is a single framework for building a wide range of customized program analysis tools. It provides the common infrastructure present in all code-instrumenting tools; this is the difficult and time-consuming part. The user simply defines the tool-specific details in instrumentation and analysis routines. Building a basic block counting tool like Pixie with ATOM requires only a page of code. ATOM, using OM link-time technology, organizes the final executable such that the application program and user's analysis routines run in the same address space. Information is directly passed from the application program to the analysis routines through simple procedure calls instead of inter-process communication or files on disk. ATOM takes care that analysis routines do not interfere with the program's execution, and precise information about the program is presented to the analysis routines at all times. ATOM uses no simulation or interpretation. ATOM has been implemented on the Alpha AXP under OSF/1. It is efficient and has been used to build a diverse set of tools for basic block counting, profiling, dynamic memory recording, instruction and data cache simulation, pipeline simulation, evaluating branch prediction, and instruction scheduling.
Article
A system is considered in which switching takes place between sub-systems according to a continuous parameter Markov chain. Failures may occur in Poisson processes in the sub-systems, and in the transitions between subsystems. All failure processes are independent. The overall failure process is described exactly and asymptotically for highly reliable sub-systems. An application to process-control computer software is suggested.
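The asymptotic result described above can be illustrated numerically. The sketch below is a minimal illustration, assuming a small example generator matrix Q for the switching process, per-sub-system Poisson failure rates, and per-transition failure probabilities (all values made up, not taken from the paper); it computes the stationary distribution of the switching Markov chain and combines it into an overall asymptotic failure rate in the way the result suggests.

```python
import numpy as np

# Generator matrix of the continuous-time Markov chain governing
# switching between three sub-systems (illustrative values).
Q = np.array([[-0.5,  0.3,  0.2],
              [ 0.4, -0.6,  0.2],
              [ 0.1,  0.5, -0.6]])

nu = np.array([1e-4, 5e-5, 2e-4])        # Poisson failure rates inside each sub-system
mu = np.array([[0.0, 1e-6, 2e-6],        # probability of failure on a transition i -> j
               [1e-6, 0.0, 1e-6],
               [2e-6, 1e-6, 0.0]])

# Stationary distribution pi of the switching process: pi Q = 0, sum(pi) = 1.
A = np.vstack([Q.T, np.ones(len(Q))])
b = np.append(np.zeros(len(Q)), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Asymptotic overall failure rate: time spent in each sub-system weighted by its
# failure rate, plus the rate contributed by failure-prone transitions.
rate_in_modules = pi @ nu
rate_on_transitions = sum(pi[i] * Q[i, j] * mu[i, j]
                          for i in range(len(Q)) for j in range(len(Q)) if i != j)
print("asymptotic failure rate ~", rate_in_modules + rate_on_transitions)
```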
Article
This paper deals with evaluation of the dependability (considered as a generic term, whose main measures are reliability, availability, and maintainability) of software systems during their operational life, in contrast to most of the work performed up to now, devoted mainly to development and validation phases. The failure process due to design faults, and the behavior of a software system up to the first failure and during its life cycle are successively examined. An approximate model is derived which enables one to account for the failures due to the design faults in a simple way when evaluating a system's dependability. This model is then used for evaluating the dependability of 1) a software system tolerating design faults, and 2) a computing system with respect to physical and design faults.
Article
A stochastic model which describes behavior of a modular software system is developed and the software system failure rate is derived. The optimal value for the individual module failure rate is derived, under the assumptions that a certain cost function is minimized and that the entire system is guaranteed to have an overall failure rate of a prescribed level.
Conference Paper
Many architecture-based software reliability models have been proposed in the past without any attempt to establish a relationship among them. The aim of this paper is to fill this gap. First, the unifying structural properties of the models are exhibited and the theoretical relationship is established. Then, the estimates provided by the models are compared using an empirical case study. The program chosen for the case study consists of almost 10,000 lines of C code divided into several components. The faulty version of the program was obtained by reinserting the faults discovered during integration testing and operational usage, and the correct version was used as an oracle. A set of test cases was generated randomly according to the known operational profile. The results show that 1) all models give reasonably accurate estimates compared to the actual reliability and 2) faults present in the components influence both the components' reliabilities and the way the components interact.
Article
A probabilistic model of program material in a paging machine is presented. The sequences of page references in the model are associated with certain sequences of LRU stack distances and have reference patterns formalizing a notion of “locality” of reference. Values for the parameters of the model can be chosen to make the page-exception characteristics of the generated sequences of page references consistent with those of actual program traces. The statistical properties of the execution intervals (times between page exceptions) for sequences of references in the model are derived, and an application of these results is made to a queuing analysis of a simple multiprogrammed paging system. Some numerical results pertaining to the program model and the queuing analysis are given.
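The LRU stack distances that underlie this model can be made concrete with a short sketch. The code below works on a made-up reference string (not the paper's generator): it computes the LRU stack distance of each page reference, after which the page exceptions for a memory of m frames are simply the references whose distance exceeds m.

```python
def lru_stack_distances(references):
    """Return the LRU stack distance of each reference (inf on first use)."""
    stack = []               # most-recently-used page at the front
    distances = []
    for page in references:
        if page in stack:
            d = stack.index(page) + 1   # 1-based depth in the LRU stack
            stack.remove(page)
        else:
            d = float('inf')            # first reference: always a page exception
        stack.insert(0, page)
        distances.append(d)
    return distances

refs = ['a', 'b', 'a', 'c', 'b', 'a', 'd', 'c', 'a']
dists = lru_stack_distances(refs)
m = 3  # number of page frames
faults = sum(1 for d in dists if d > m)
print(dists)
print("page exceptions with", m, "frames:", faults)
```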
Article
Using the Independent Reference assumption to model program behavior, the performance of different buffer organizations (Fully Associative, Direct Mapping, Set Associative, and Sector) is analyzed: (1) Expressions for their fault rates are derived. To show more explicitly the dependence of the fault rate on the factors that affect it, distribution-free upper bounds on fault rates are computed for the Direct Mapping, Set Associative, and Sector buffers; the use of such bounds is illustrated in the case of the Direct Mapping buffer. (2) The performance of the buffers under FIFO and Random Replacement is shown to be identical. (3) It is possible to restructure programs to take advantage of the basic organization of the buffers. The effect of such restructuring is quantified for the Direct Mapping buffer, and the performance of the Direct Mapping buffer under near-optimal restructuring is shown to be comparable to that of the Fully Associative buffer. Further, the effect of this restructuring is shown to be potentially stronger than that of buffer replacement algorithms.
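The Direct Mapping case of this style of analysis is easy to reproduce. Under the Independent Reference assumption, the block resident in a given cache set at steady state is block i of that set with probability proportional to its reference probability, so the hit ratio is the sum over sets of (sum of p_i squared) divided by the set's total reference probability, and the fault rate is one minus that. The snippet below evaluates this expression for example reference probabilities and an example mapping (both assumed, not from the paper).

```python
import numpy as np

# Example reference probabilities for 8 blocks (Independent Reference Model).
p = np.array([0.30, 0.20, 0.15, 0.10, 0.10, 0.06, 0.05, 0.04])
n_sets = 4
sets = np.arange(len(p)) % n_sets        # direct mapping: block b -> set b mod n_sets

# Steady-state hit ratio of a direct-mapped buffer under the IRM:
# the resident block of set s is block i with probability p_i / P_s,
# so P(hit) = sum over sets of sum(p_i^2) / P_s.
hit = sum(np.sum(p[sets == s] ** 2) / np.sum(p[sets == s]) for s in range(n_sets))
print("hit ratio :", hit)
print("fault rate:", 1 - hit)
```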
Article
The optimum capacity of a cache memory with given access time is determined analytically based upon a model of linear storage hierarchies wherein both the hit ratio function and the device technology-cost function are assumed to be power functions. Explicit formulas for the capacities and access times of the storage levels in the matching hierarchy of required capacity and allowable cost are derived. The optimal number of storage levels in a hierarchy is shown to increase linearly with the logarithm of the ratio of the required hierarchy capacity and the cache capacity.
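As a rough illustration of this style of analysis (not the paper's closed-form solution), one can assume a power-law miss-ratio function m(C) = k·C^(-a) and a power-law cost per byte c(T) = g·T^(-b) for a device of access time T, and then search numerically for the cache capacity that minimizes the average access time of a two-level hierarchy within a cost budget. All constants in the sketch below are made up.

```python
import numpy as np

# Assumed power laws (illustrative constants only).
miss_ratio = lambda C: min(1.0, 50.0 * C ** -0.5)   # miss ratio vs. cache capacity C (bytes)
cost_per_B = lambda T: 1e-3 * T ** -0.8             # $/byte vs. device access time T (ns)

T_cache, T_main = 2.0, 100.0        # access times of the two levels (ns)
main_capacity   = 16 * 2 ** 20      # fixed backing-store size (bytes)
budget          = 1000.0            # total allowed cost ($)

best = None
for C in np.logspace(10, 22, 400, base=2):          # candidate cache capacities
    cost = C * cost_per_B(T_cache) + main_capacity * cost_per_B(T_main)
    if cost > budget:
        continue
    t_avg = T_cache + miss_ratio(C) * T_main        # average access time of the hierarchy
    if best is None or t_avg < best[1]:
        best = (C, t_avg)

C_opt, t_opt = best
print(f"optimal cache capacity ~ {C_opt / 2**10:.0f} KiB, "
      f"average access time ~ {t_opt:.2f} ns")
```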
Conference Paper
This paper describes a software assessment method that is being implemented to quantitatively assess information system security and survivability. Our approach, which we call Adaptive Vulnerability Analysis (AVA), exercises software (in source-code form) by simulating incoming malicious and non-malicious attacks that fall under various threat classes. A quantitative metric is computed by determining whether the simulated threats undermine the security of the system as defined by the user according to the application program. This approach stands in contrast to common security assurance methods that rely on black-box techniques for testing completely installed systems. AVA does not provide an absolute metric, such as mean time to failure, but instead provides a relative metric, allowing a user to compare the security of different versions of the same system, or to compare unrelated systems with similar functionality.
Conference Paper
Describes ATAC (Automatic Test Analysis for C), a tool for data flow coverage testing of C programs. ATAC is being used as a research instrument at Purdue and Bellcore and as a software development tool at Bellcore. The authors discuss the design of ATAC, a preliminary view of its uses in development, and its research uses.
Article
A user-oriented reliability model has been developed to measure the reliability of service that a system provides to a user community. It has been observed that in many systems, especially software systems, reliable service can be provided to a user when it is known that errors exist, provided that the service requested does not utilize the defective parts. The reliability of service, therefore, depends both on the reliability of the components and the probabilistic distribution of the utilization of the components to provide the service. In this paper, a user-oriented software reliability figure of merit is defined to measure the reliability of a software system with respect to a user environment. The effects of the user profile, which summarizes the characteristics of the users of a system, on system reliability are discussed. A simple Markov model is formulated to determine the reliability of a software system based on the reliability of each individual module and the measured intermodular transition probabilities as the user profile. Sensitivity analysis techniques are developed to determine modules most critical to system reliability. The applications of this model to develop cost-effective testing strategies and to determine the expected penalty cost of failures are also discussed. Some future refinements and extensions of the model are presented.
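The Markov model described here is straightforward to evaluate numerically. The sketch below assumes illustrative module reliabilities R_i and intermodular transition probabilities P (example values, not taken from the paper), forms the matrix W with entries W_ij = R_i P_ij, and computes the system reliability as the (1, n) entry of (I - W)^(-1) multiplied by R_n, where module n is the terminal module; a small perturbation loop gives a crude sensitivity check of the kind discussed in the abstract.

```python
import numpy as np

# Module reliabilities (probability a module executes correctly) -- example values.
R = np.array([0.999, 0.995, 0.990, 0.998])

# Intermodular transition probabilities measured from the user profile (example).
# Module 4 (index 3) is the terminal module; each row sums to 1 over successors.
P = np.array([[0.0, 0.6, 0.4, 0.0],
              [0.0, 0.0, 0.7, 0.3],
              [0.0, 0.2, 0.0, 0.8],
              [0.0, 0.0, 0.0, 0.0]])

# W_ij: probability that module i executes correctly and then transfers to j.
W = np.diag(R) @ P

# S = (I - W)^(-1) accumulates the probability of all correct execution paths.
S = np.linalg.inv(np.eye(len(R)) - W)

# System reliability: reach the terminal module and execute it correctly.
reliability = S[0, -1] * R[-1]
print("system reliability:", reliability)

# Crude sensitivity check: which module's reliability matters most?
for k in range(len(R)):
    Rk = R.copy(); Rk[k] *= 0.999   # 0.1% drop in module k's reliability
    Sk = np.linalg.inv(np.eye(len(R)) - np.diag(Rk) @ P)
    print(f"drop in system reliability for module {k+1}: "
          f"{reliability - Sk[0, -1] * Rk[-1]:.6f}")
```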
Article
A mathematical model for the behavior of programs or workloads is presented and from it is extracted the miss ratio of a finite, fully associative cache (or other first-level memory) using least-recently-used replacement under those workloads. To obtain miss ratios, the function u(t, L), defined to be the number of unique lines of size L referenced before time t, is modeled. Empirical observations show that this function appears to have the form u(t, L) = (W L^a t^b)(d^(log L · log t)), where W, a, b, d are constants that are related, respectively, to the working set size, locality of references to nearby addresses (spatial locality), temporal locality (locality in time not attributable to spatial locality), and interactions between spatial locality and temporal locality. The miss ratio of a finite fully associative cache can be approximated as the time derivative of u(t, L) evaluated where the function has a value equal to the size of the cache. When the miss ratios from this model are compared to measured miss ratios for a representative trace, the accuracy is high for large caches. For smaller caches the model is close but not highly precise.
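A small numerical sketch clarifies how a miss ratio is extracted from u(t, L). With illustrative constants W, a, b, d, an assumed line size, and an assumed cache size (none of them fitted to a real trace), the code finds the time t* at which u(t*, L) equals the cache capacity in lines and takes the numerical time derivative of u at t* as the predicted miss ratio.

```python
import math
from scipy.optimize import brentq

# Illustrative constants of the model u(t, L) = (W * L**a * t**b) * d**(log L * log t).
W, a, b, d = 0.4, 0.2, 0.7, 1.005
L = 32                                   # assumed line size (bytes)

def u(t):
    """Number of unique lines of size L referenced before time t (model form)."""
    return W * L**a * t**b * d**(math.log(L) * math.log(t))

cache_lines = 64 * 1024 // L             # assumed 64 KiB cache, expressed in lines

# Find t* such that u(t*) equals the cache size in lines.
t_star = brentq(lambda t: u(t) - cache_lines, 1.0, 1e9)

# Miss ratio ~ du/dt evaluated at t* (central finite difference).
h = t_star * 1e-6
miss_ratio = (u(t_star + h) - u(t_star - h)) / (2 * h)
print(f"t* = {t_star:.0f} references, predicted miss ratio = {miss_ratio:.4f}")
```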
Article
Prevalent approaches to software reliability modeling are black-box based, i.e., the software system is treated as a monolithic entity and only its interactions with the outside world are modeled. However, with the advancement and widespread use of object-oriented systems design and web-based development, the use of component-based software development is on the rise. Software systems are being developed in a heterogeneous fashion using components developed in-house, contractually, or picked off-the-shelf, and hence it may be inappropriate to model the overall failure process of such systems using the existing software reliability growth models. Predicting the reliability of a heterogeneous software system based on its architecture and the failure behavior of its components is thus absolutely essential. In this paper we present an analytical approach to architecture-based software reliability prediction. The novelty of this approach lies in the idea of parameterizing the analytic ...
A Markovian Analytical Cache Performance Model
  • Wei Li
A hierarchical approach to architecture-based analysis of software systems
  • Swapna S Gokhale
  • Kishor S Trivedi