Book

Model-Based Software Performance Analysis

Authors: Vittorio Cortellessa, Antinisca Di Marco, Paola Inverardi

Abstract

Poor performance is one of the main quality-related shortcomings that cause software projects to fail. The need to address performance concerns early in the software development process is therefore fully acknowledged, and there is growing interest in the research and software industry communities in techniques, methods and tools that make it possible to manage system performance concerns as an integral part of software engineering. Model-based software performance analysis introduces performance concerns into the scope of software modeling, thus allowing the developer to carry out performance analysis throughout the software lifecycle. With this book, Cortellessa, Di Marco and Inverardi provide the cross-knowledge that allows developers to tackle software performance issues from the very early phases of software development. They explain the basic concepts of performance analysis and describe the most representative methodologies used to annotate and transform software models into performance models. To this end, they go all the way from performance primers through software and performance modeling notations to the latest transformation-based methodologies. As a result, their book is a self-contained reference text on software performance engineering, from which different target groups will benefit: professional software engineers and graduate students in software engineering will learn both basic concepts of performance modeling and new methodologies, while performance specialists will find out how to investigate software performance model building.

Chapters (7)

The increasing complexity of software and its pervasiveness in everyday life have in recent years motivated growing interest in software analysis. This interest has mainly been directed at assessing functional properties of software systems (related to their structure and their behavior) and, in the case of safety-critical systems, dependability properties. The quantitative behavior of a software system has gained relevance only recently, with the advent of software performance analysis. This kind of analysis aims at assessing the quantitative behavior of a software system by comprehensively analyzing its structure and its behavior, from design to code. In this chapter we introduce the concepts and definitions that will be used throughout the entire book.
Software engineers describe the static and dynamic aspects of a software system by using ad-hoc models. The static description consists of the identification of software modules or components. The dynamics of a software system concerns its behavior at run time. Many notations exist to describe either the statics or the dynamics of a software system. This chapter focuses on notations that allow behavior to be described, since performance is an attribute of the system dynamics. The chapter is divided into two parts: (i) basic notations historically introduced by computer scientists to model software systems, such as Automata, Process Algebras and Petri Nets; (ii) the Unified Modeling Language (UML), which has become a de facto standard for modeling complex software systems.
A major problem for stably embedding software performance modeling and analysis within the software lifecycle resides in the distance between notations for static and dynamic modeling of software (such as UML) and notations for modeling performance (such as Queueing Networks). In Chap. 2 we introduced the major notations for software modeling, whereas in this chapter we introduce basic performance modeling notations. A question may arise at this point from readers who are not familiar with performance analysis: “If all the performance notations are able to provide the desired indices, then why use different notations for performance modeling?”. The software performance community is still far from unifying languages and notations, although some recent efforts have been devoted to building a performance ontology as a shared vocabulary of the domain (see Chap. 7). The performance notations that we describe in this chapter are well documented in the literature, where many references can be found. Although more sophisticated notations have been introduced, most of them build on the basic notations described in this chapter.
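To make the notion of "desired indices" concrete, here is a small illustrative sketch (not taken from the book) that computes the classic indices of an M/M/1 queue, the simplest building block of Queueing Networks; the arrival rate lam and service rate mu are assumed example values.

```python
# Illustrative only: closed-form indices of an M/M/1 queue
# (arrival rate lam, service rate mu, both in jobs per second).
def mm1_indices(lam: float, mu: float) -> dict:
    assert lam < mu, "queue is unstable unless lam < mu"
    utilization = lam / mu                # Utilization Law
    response_time = 1.0 / (mu - lam)      # mean time spent in the system
    population = lam * response_time      # Little's Law: N = X * R
    return {"utilization": utilization,
            "throughput": lam,            # in steady state, X = lam
            "response_time": response_time,
            "population": population}

print(mm1_indices(lam=8.0, mu=10.0))
# {'utilization': 0.8, 'throughput': 8.0, 'response_time': 0.5, 'population': 4.0}
```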
This chapter is aimed at illustrating performance modeling and analysis issues within the software lifecycle. After having introduced software and performance modeling notations, the goal here is to illustrate their role within the software development process. In Chap. 5 we will describe in more detail several approaches that, based on model transformations, can be used to integrate software performance analysis into the software development process. After briefly introducing the most common software lifecycle stages, we present our unifying view of software performance analysis as integrated within a software development process (i.e. the Q-Model).
This chapter focuses on transformational approaches from software system specifications to performance models. These transformations aim at filling the gap between the software development process and performance analysis by generating, from the software models, performance models that are ready to be validated. Three approaches are discussed in detail, presenting their foundations and their application to an e-commerce case study. Moreover, the chapter briefly reviews representative examples of other transformational approaches in the literature. All the presented approaches are discussed with respect to a set of relevant dimensions, such as software specification, performance model, evaluation methods and level of automated support for performance prediction.
In the process of software performance modeling and analysis, although these two activities do not act in a strict pipeline, a performance model, once generated or built (at whatever level of abstraction in the software lifecycle), has to be solved to obtain the values of the performance indices of interest. It is helpful to recall here that the main targets of a performance model solution are the values of performance indices. The existing literature is rich in methodologies, techniques and tools for solving a wide variety of performance models. This is a very active research topic and, despite the complexity of the problems encountered in this direction, very promising results have been obtained in the last few decades. Moreover, new tools have been developed to support this key step of the software performance process. Therefore, the contents of this chapter are not limited to the basics of model solution techniques; a short summary of the major tools for model solution is also provided.
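As a minimal illustration of the numeric flavour of analytical model solution (a sketch, not one of the chapter's tools), the steady-state distribution of a small continuous-time Markov chain can be obtained by solving pi Q = 0 together with a normalization constraint; the generator Q below is an arbitrary example.

```python
import numpy as np

# Minimal sketch: exact steady-state solution of a small CTMC, the kind of
# computation underlying many analytical solvers. Q is an infinitesimal
# generator (rows sum to zero); we solve pi @ Q = 0 with sum(pi) = 1.
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -2.0,  1.0],
              [ 2.0,  2.0, -4.0]])

A = np.vstack([Q.T, np.ones(len(Q))])       # append the normalization equation
b = np.zeros(len(Q) + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)  # least squares for the stacked system
print(pi)  # steady-state probability of each state, here [0.3, 0.5, 0.2]
```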
This chapter collects some advanced issues of software performance. They address different aspects of the discipline and are not necessarily related to each other. The chapter is not meant to be a structured overview of the main open issues in the field; rather, it is an anthology of issues that we have faced in the last few years.
... For example, they are fundamental for carrying out all kinds of non-functional analyses, e.g. security [4], performance [15] or dependability [6,7]; behavioural specifications also provide the means for executable specifications [49]. Behaviour-driven development (BDD), as an evolution of Test-driven development (TDD) [5], elevates behavioural specifications to guide development, system testing and communication with stakeholders. ...
... In case the detection of the requirement pattern succeeds, the translation phase follows (lines 9–19), where the User Layer model is possibly refined by adding new model elements. ...
... A candidate transition t is firstly created (line 10); then t is refined with references to model elements (like source and target states) named in the detected requirement pattern (lines 11–15), and it is added to the model mdl (line 16). Also, mdl is enriched with traceability information between the requirement, specified in the Requirement view of the model, and the newly added transition (lines 17–19). ...
Article
Full-text available
MDE enables the centrality of models in semi-automated development processes. However, its level of usage in industrial settings is still not adequate for the benefits MDE can introduce. This paper proposes a semi-automatic approach for the completion of high-level models in the lifecycle of critical systems that exhibit an event-driven behaviour. The proposal suggests a specification guideline that starts from a partial SysML model of a system and from a set of requirements expressed in the well-known Given–When–Then paradigm. On the basis of such requirements, the approach enables the semi-automatic generation of new SysML state machine model elements. Accordingly, the approach focuses on the completion of the state machines by adding proper transitions (with triggers, guards and effects) among pre-existing states. Traceability modelling elements are also added to the model. Two case studies demonstrate the feasibility of the proposed approach.
... Constraints are defined at the meta-level, and the consistency relationships between models are guaranteed by means of (bidirectional) model transformations specified on source and target metamodels. With the introduction of model-driven techniques in the software lifecycle, also the analysis of non-functional properties has become effective by means of dedicated tools for the automated assessment of quality attributes [13]. ...
... This step is aimed at analyzing the design model and removing possible performance flaws. In particular, we use runtime data (i.e., traces) to augment the design model, and then we execute a performance analysis driven by antipatterns [13] on the augmented model. ...
... In this step, PADRE provides a list of possible refactoring actions for each performance antipattern. While executing one refactoring action at a time, the refactored design model is given as input to the PADRE Performance analysis step, in which a model transformation is executed to transform the UML-MARTE design model into a closed Queueing Model [13]. Thereafter, the Queueing Model is solved through the Mean-Value Analysis (MVA) algorithm [24], which allows performance indices to be computed rapidly. ...
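For readers unfamiliar with MVA, the following is a minimal illustrative sketch of the exact single-class algorithm named in the excerpt above (not PADRE's implementation); the service demands D, population N, and think time Z are made-up values.

```python
# Illustrative sketch of exact single-class Mean-Value Analysis.
# D[k]: total service demand at station k (seconds),
# N: number of jobs in the closed network, Z: think time (seconds).
def mva(D, N, Z=0.0):
    assert N >= 1
    Q = [0.0] * len(D)                       # mean queue lengths, population 0
    for n in range(1, N + 1):
        R = [D[k] * (1.0 + Q[k]) for k in range(len(D))]  # residence times
        X = n / (Z + sum(R))                 # throughput via Little's Law
        Q = [X * r for r in R]               # queue lengths for population n
    return X, sum(R)                         # system throughput, response time

X, R = mva(D=[0.005, 0.030, 0.027], N=20, Z=1.0)
print(f"throughput={X:.2f} jobs/s, response time={R:.3f} s")
```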
Preprint
Full-text available
Microservices have been widely impacting the software industry in recent years. Rapid evolution and continuous deployment represent specific benefits of microservice-based systems, but they may have a significant impact on non-functional properties like performance. Despite the obvious relevance of this property, there is still a lack of systematic approaches that explicitly take into account performance issues in the lifecycle of microservice-based systems. In such a context of evolution and re-deployment, Model-Driven Engineering techniques can provide major support to various software engineering activities, and in particular they can allow managing the relationships between a running system and its architectural model. In this paper, we propose a model-driven integrated approach that exploits traceability relationships between the monitored data of a microservice-based running system and its architectural model to derive recommended refactoring actions that lead to performance improvement. The approach has been applied and validated on two microservice-based systems, in the domain of e-commerce and ticket reservation, respectively, whose architectural models have been designed in UML profiled with MARTE.
... The software performance engineering (SPE) field (Smith and Lloyd 2002, 2003; Cortellessa et al. 2011) deals with the above issues by assessing the quantitative behavior of software systems. SPE was defined by Smith and Lloyd (2002) as “a systematic, quantitative approach to the cost-effective development of software systems to meet performance requirements”. ...
... In consequence, the idea is to identify performance flaws, even before the system is deployed, hence comprehensively analyzing the structure and behavior of the software system, from design to code. As stated by Cortellessa et al. (2011), performance analysis should be common practice within the software development process, thus introducing performance concerns into the scope of software models. But for this to become a reality, we need methodologies and tools. ...
... • Transforms UML-profiled models into analyzable models (Woodside et al. 2014), i.e., Petri nets and reliability models. • Allows assessing performance metrics (Cortellessa et al. 2011), concretely response time, throughput and resource utilization. ...
Article
Full-text available
In recent years, we have seen many performance fiascos in the deployment of new systems, such as the US health insurance website. This paper describes the functionality and architecture, as well as success stories, of a tool that helps address these types of issues. The tool allows assessing software designs with regard to quality, in particular performance and reliability. Starting from a UML design with quality annotations, the tool applies model-transformation techniques to yield analyzable models. Such models are then leveraged by the tool to compute quality metrics. Finally, quality results, over the design, are presented to the engineer in terms of the problem domain. Hence, the tool is an asset for the software engineer to evaluate system quality through software designs. While leveraging the Eclipse platform, the tool uses UML and the MARTE, DAM and DICE profiles for the system design and the quality modeling.
... Queueing Networks (QN) [32] are well-known performance models, good at capturing contention for resources. QN have been successfully applied in previous work to the performance analysis of computer-based systems, software systems and cyber-physical systems [8,17,38]. Efficient analytical solutions exist for a class of QN (separable or product-form QN), which make it possible to derive steady-state performance measures without resorting to building the underlying state space. The advantage is that the solution is faster and larger models can be solved. ...
... The Object Management Group has adopted two standard profiles for performance, schedulability, and time annotations: an earlier UML Profile for Schedulability, Performance, and Time (SPT) defined for UML 1.x [36], and a later replacement UML Profile for Modeling and Analysis of Real-Time and Embedded systems (MARTE) for UML 2.x [37]. The adoption of SPT and MARTE laid the groundwork for research on the automatic generation of different kinds of performance models from annotated UML [8,17]. ...
... In this section we use an e-commerce system model from the literature [17] to show how performance is analyzed. The e-commerce source model contains a deployment diagram and three activity diagrams representing performance-critical scenarios selected for performance analysis. ...
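The "no state space" advantage of the product-form networks mentioned in the first excerpt above can be illustrated with a small sketch (assumed rates and routing, not the cited paper's model): solve the linear traffic equations of an open Jackson network, then evaluate each station in isolation as an M/M/1 queue.

```python
import numpy as np

# Illustrative sketch of solving a product-form open (Jackson) network:
# per-station arrival rates come from linear traffic equations, after which
# each station is analyzed in isolation -- no global state space is built.
gamma = np.array([2.0, 0.0])            # external arrival rates (jobs/s)
P = np.array([[0.0, 0.5],               # P[i, j]: routing probability i -> j
              [0.0, 0.0]])              # jobs leave the system otherwise
mu = np.array([5.0, 3.0])               # per-station service rates

lam = np.linalg.solve(np.eye(2) - P.T, gamma)  # lam = gamma + P^T lam
rho = lam / mu                          # utilizations (must all be < 1)
N = rho / (1.0 - rho)                   # mean number of jobs at each station
R = N.sum() / gamma.sum()               # system response time by Little's Law
print(f"lam={lam}, rho={rho}, R={R:.3f} s")
```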
Article
Full-text available
This paper discusses the progress made so far and future challenges in integrating the analysis of multiple Non-Functional Properties (NFP) (such as performance, schedulability, reliability, availability, scalability, security, safety, and maintainability) into the Model-Driven Engineering (MDE) process. The goal is to guide the design choices from an early stage and to ensure that the system under construction will meet its non-functional requirements. The evaluation of the NFPs considered in this paper uses various kinds of NFP analysis models (also known as quality models) based on existent formalisms and tools developed over the years. Examples are queueing networks, stochastic Petri nets, stochastic process algebras, Markov chains, fault trees, probabilistic time automata, etc. In the MDE context, these models are automatically derived by model transformations from the software models built for development. Developing software systems that exhibit a good trade-off between multiple NFPs is difficult because the design of the software under construction and its underlying platforms have a large number of degrees of freedom spanning a very large discontinuous design space, which cannot be exhaustively explored. Another challenge in balancing the NFPs of a system under construction is due to the fact that some NFPs are conflicting—when one gets better the other gets worse—so an appropriate software process is needed to evaluate and balance all the non-functional requirements. The integration approach discussed in this paper is based on an ecosystem of inter-related heterogeneous modeling artifacts intended to support the following features: feedback of analysis results, consistent co-evolution of the software and analysis models, cross-model traceability, incremental propagation of changes across models, (semi)automated software process steps, and metaheuristics for reducing the design space size to be explored.
... Constraints are defined at the meta-level, and the consistency relationships between models are guaranteed by means of (bidirectional) model transformations specified on source and target metamodels. With the introduction of model-driven techniques in the software lifecycle, also the analysis of non-functional properties has become effective by means of dedicated tools for the automated assessment of quality attributes [13]. ...
... This step is aimed at analyzing the design model and removing possible performance flaws. In particular, we use runtime data (i.e., traces) to augment the design model, and then we execute a performance analysis driven by antipatterns [13] on the augmented model. ...
... PADRE [9] has been adopted for this goal, as it is an approach that detects performance antipatterns and provides refactoring actions for each of them. While executing one refactoring action at a time, the refactored design model is given as input to the PADRE Performance analysis step, in which a model transformation is executed to transform the UML-MARTE design model into a closed Queueing Model [13]. Thereafter, the Queueing Model is solved through the Mean-Value Analysis (MVA) algorithm [24], which allows performance indices to be computed rapidly. ...
Article
Microservices have been widely impacting the software industry in recent years. Rapid evolution and continuous deployment represent specific benefits of microservice-based systems, but they may have a significant impact on non-functional properties like performance. Despite the obvious relevance of this property, there is still a lack of systematic approaches that explicitly take into account performance issues in the lifecycle of microservice-based systems. In such a context of evolution and re-deployment, Model-Driven Engineering techniques can provide major support to various software engineering activities, and in particular they can allow managing the relationships between a running system and its architectural model. In this paper, we propose a model-driven integrated approach that exploits traceability relationships between the monitored data of a microservice-based running system and its architectural model to derive recommended refactoring actions that lead to performance improvement. The approach has been applied and validated on two microservice-based systems, in the domain of e-commerce and ticket reservation, respectively, whose architectural models have been designed in UML profiled with MARTE.
... Interpretability refers to the capability of the fuzzy model to express the behaviour of the system in an understandable way (Casillas et al., 2003). 3) Performance: in its general definition (Cortellessa et al., 2011), it measures how effective a software system is with respect to time constraints and allocation of resources. In the analysed papers, some authors explicitly mentioned real-time constraints, like real-time prediction (Lee, 2019), real-time control (Chen et al., 2016), online rule learning from real-time data streams (Bouchachia and Vanaret, 2014), etc. 4) Robustness: as described in (Fernandez et al., 2005), robustness is the ability of a computer system to cope with errors during execution and erroneous input. ...
... Authors of (Modi et al., 2007) understand flexibility as the ability of a fuzzy system to form any number of clusters. 6) Efficiency: the degree to which a system performs its designated functions with minimum consumption of resources (Cortellessa et al., 2011). According to the authors of (Rajeswari and Deisy, 2019), efficiency refers to cost-effective training. ...
... Fateh (2010) analysed stability for fuzzy control of robot manipulators without knowing the explicit dynamics of a system. 8) User-friendliness: refers to ease of use as a primary objective (Cortellessa et al., 2011). In (Kóczy and Sugeno, 1996), the authors mention that their FIS is user-friendly. ...
Article
Nowadays, data-driven fuzzy inference systems (FIS) have become popular for solving different vague, imprecise, and uncertain problems in various application domains. However, many authors have identified different challenges and issues in FIS development because of its complexity, which also influences FIS quality attributes. Still, there is no common agreement on a systematic view of these complexity issues and their relationship to quality attributes. In this paper, we present a systematic literature review of 1340 scientific papers published between 1991 and 2019 on the topic of FIS complexity issues. The obtained results were systematized and classified according to the complexity issues as computational complexity, complexity of fuzzy rules, complexity of membership functions, data complexity, and knowledge representation complexity. Further, the current research was extended by extracting FIS quality attributes related to the found complexity issues. The key, but not the only, FIS quality attributes found are performance, accuracy, efficiency, and interpretability.
... Appropriate architectural changes driven by non-functional requirements are particularly challenging to identify, mainly because non-functional analysis is based on specific languages and tools (e.g., Petri Nets, Markov Models) that are different from typical software architecture notations like Architecture Description Languages (e.g., ACME [5]). In fact, very few ADLs embed constructs that enable the specification of non-functional properties, although several approaches have been proposed in the last decades to generate non-functional models from software architectural descriptions [10,11]. There is instead a clear lack of automation in the backward path, which basically consists in the interpretation of the analysis results and the generation of architectural feedback to be propagated back to the software architecture. ...
... In order to validate non-functional requirements on a software architecture, some approaches, mostly based on model transformations, have been proposed in the last decades to generate non-functional models from software architectural descriptions [10,11]. This generation step is also called the forward path, and it is represented by the topmost steps of Fig. 1. ...
... Constraints are expressed at the meta-level, and model transformations are based on source and target metamodels. With the introduction of model-driven techniques in the software lifecycle, the analysis of quality attributes has become effective by means of automated transformations from software artifacts to analysis models [10]. ...
Article
Full-text available
Context: With the ever-increasing evolution of software systems, their architecture is subject to frequent changes due to multiple reasons, such as new requirements. Appropriate architectural changes driven by non-functional requirements are particularly challenging to identify because they concern quantitative analyses that are usually carried out with specific languages and tools. A considerable number of approaches have been proposed in the last decades to derive non-functional analysis models from architectural ones. However, there is an evident lack of automation in the backward path that brings the analysis results back to the software architecture. Objective: In this paper, we propose a model-driven approach to support designers in improving the availability of their software systems through refactoring actions. Method: The proposed framework makes use of bidirectional model transformations to map UML models onto Generalized Stochastic Petri Nets (GSPN) analysis models and vice versa. In particular, after availability analysis, our approach enables the application of model refactoring, possibly based on well-known fault tolerance patterns, aimed at improving the availability of the architectural model. Results: We validated the effectiveness of our approach on an Environmental Control System. Our results show that the approach can generate: (i) an analyzable availability model from a software architecture description, and (ii) valid software architecture models back from availability models. Finally, our results highlight that the application of fault tolerance patterns significantly improves the availability in each considered scenario. Conclusion: The approach integrates bidirectional model transformation and fault tolerance techniques to support the availability-driven refactoring of architectural models. The results of our experiment showed the effectiveness of the approach in improving the software availability of the system.
... As with all scientific and engineering disciplines, predictions can be made with models. Software performance models are mathematical abstractions whose analysis provides quantitative insights into real systems under consideration [15]. Typically, these are stochastic models based on Markov chains and other higher-level formalisms such as queueing networks, stochastic process algebra, and stochastic Petri nets (see, e.g., [15] for a detailed account). ...
... Software performance models are mathematical abstractions whose analysis provides quantitative insights into real systems under consideration [15]. Typically, these are stochastic models based on Markov chains and other higher-level formalisms such as queueing networks, stochastic process algebra, and stochastic Petri nets (see, e.g., [15] for a detailed account). Although they have proved effective in describing and predicting the performance behavior of complex software systems (e.g., [8,50]), a pressing limitation is that the current state of the art hinges on considerable craftsmanship to distill the appropriate abstraction level from a concrete software system, and relevant mathematical skills to develop, analyze, and validate the model. ...
... While model-driven approaches to software performance have been researched quite intensively [15], program-driven generation of performance models has been less explored, and has been concerned with specific kinds of applications. Indeed, the early approach by Hrischuk et al. is concerned with the generation of software performance models (specifically, layered queuing networks [19]) from a class of distributed applications whose components communicate solely by remote procedure calls [26]. ...
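A minimal sketch of the kind of deterministic average-dynamics approximation the excerpts above refer to (illustrative parameters, not the cited paper's model): a closed two-station network whose queue lengths evolve by a compact ODE, integrated with explicit Euler steps — the same per-step update that a recurrent cell can encode.

```python
# Fluid (mean-field) approximation of a closed two-station queueing network:
# dx_k/dt = inflow - outflow, with outflow capped by the busy servers,
# i.e. rate_k = mu_k * min(x_k, servers_k). Integrated by explicit Euler.
def fluid_step(x, dt, mu=(10.0, 4.0), servers=(1, 2)):
    rate = [mu[k] * min(x[k], servers[k]) for k in range(2)]
    dx0 = rate[1] - rate[0]              # station 0 is fed by station 1
    dx1 = rate[0] - rate[1]              # station 1 is fed by station 0
    return [x[0] + dt * dx0, x[1] + dt * dx1]

x = [5.0, 0.0]                           # all 5 jobs start at station 0
for _ in range(1000):                    # simulate 10 time units
    x = fluid_step(x, dt=0.01)
print(x)                                 # converges near [0.8, 4.2]
```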
Preprint
Full-text available
It is well known that building analytical performance models in practice is difficult because it requires a considerable degree of proficiency in the underlying mathematics. In this paper, we propose a machine-learning approach to derive performance models from data. We focus on queuing networks, and crucially exploit a deterministic approximation of their average dynamics in terms of a compact system of ordinary differential equations. We encode these equations into a recurrent neural network whose weights can be directly related to model parameters. This allows for an interpretable structure of the neural network, which can be trained from system measurements to yield a white-box parameterized model that can be used for prediction purposes such as what-if analyses and capacity planning. Using synthetic models as well as a real case study of a load-balancing system, we show the effectiveness of our technique in yielding models with high predictive power.
... Performance measurement should take into account a software system's efficiency, corresponding to the time and allocation of resources. It can be expressed through multiple metrics (indices), including response time, utilization, and throughput [15]. ...
... Analytical performance models represent a popular approach to determine timebased system behavior. Queuing Network (QN), Layered Queuing Network (LQN), and Petri Nets (PNs) are well-known examples of analytical performance models [15]. There are many solution techniques to identify performance indices by analyzing performance models. ...
Article
Full-text available
Bioinformatics is a branch of science that uses computers, algorithms, and databases to solve biological problems. To achieve more accurate results, researchers need to use large and complex datasets. Sequence alignment is a well-known field of bioinformatics that allows the comparison of different genomic sequences. Building on such comparisons, the comparative genomics field brings benefits to areas such as evolutionary biology, agriculture, and human health (e.g., mutation testing connects unknown genes to diseases). However, software engineering best practices, such as software performance engineering, are not taken into consideration in most bioinformatics tools and frameworks, which may lead to serious performance problems. Having an estimate of the software performance in the early phases of the Software Development Life Cycle (SDLC) is beneficial for making better decisions relating to the software design. Software performance engineering provides a reliable and observable method to build systems that can achieve their required performance goals. In this paper, we introduce the use of the Palladio Component Modeling (PCM) methodology to predict the performance of a sequence alignment system. Software performance engineering was not considered during the original system development. As a result of the performance analysis, an alternative design is proposed. Comparing the performance of the proposed design against the one already developed, a better response time is obtained. The response time of the usage scenario is reduced from 16 to 8.6 s. The study results show that using performance models at early stages in bioinformatics systems can help to achieve better software system performance.
... Software Performance Engineering (SPE) [9,28,29] aims to produce performance models early in the development cycle. Solving such models produces predictions that can trigger the process of refactoring the system design to meet performance requirements [29]. ...
... In this section, we model the system described in Section 4.1 using execution graphs (EG) [29] (solved with SPE·ED) and queuing networks combined with Petri Nets (PN) [9] (solved with JMT). The validity of the QN+PN model is assessed by comparing its results with those obtained by solving the EG model. ...
... Performance problems have been studied for several decades in the literature, and software performance engineering emerged as the discipline focused on fostering the specification of performance-related factors [95,8,94] and reporting experiences related to their management [81,41,78,3]. Performance bugs, i.e., suboptimal implementation choices that create significant performance degradation, have been demonstrated to hurt the satisfaction of end-users in the context of desktop applications [67]. ...
... Moving to the suboptimal CPU usage performance bugs, we classified as such 32 bugs resulting in excessive/unnecessary usage of the CPU (e.g., unneeded computation, excessive logging, etc.). We were able to identify the causes of the 32 bugs and we sub-categorized them into unneeded computation (15), energy leaks (7), excessive logging (2), and costly operations (8). Fig. 5 summarizes a bug falling into the costly operation category. ...
Article
Full-text available
Recent research showed that mobile apps nowadays represent 75% of the whole usage of mobile devices. This means that the mobile user experience, while tied to many factors (e.g., hardware device, connection speed, etc.), strongly depends on the quality of the apps being used. By “quality” here we do not simply refer to the features offered by the app, but also to its non-functional characteristics, such as security, reliability, and performance. The latter is particularly important considering the limited hardware resources (e.g., memory) mobile apps can exploit. In this paper, we present the largest study to date investigating performance bugs in mobile apps. In particular, we (i) define a taxonomy of the types of performance bugs affecting Android and iOS apps; and (ii) study the survivability of performance bugs (i.e., the number of days between the bug introduction and its fixing). Our findings aim to help researchers and app developers in building performance-bug detection tools and focusing their verification and validation activities on the most frequent types of performance bugs.
... Besides inheriting all limitations of the underlying software performance engineering research [36], our approach exhibits the following main threats to validity [37]. ...
Article
Full-text available
The design of cyber-physical systems (CPS) is challenging due to the heterogeneity of software and hardware components that operate in uncertain environments (e.g., fluctuating workloads), hence such systems are prone to performance issues. Software performance antipatterns could be a key means to tackle this challenge, since they recognize design problems that may lead to unacceptable system performance. This manuscript focuses on modeling and analyzing a variegated set of software performance antipatterns with the goal of quantifying their performance impact on CPS. Starting from the specification of eight software performance antipatterns, we build a baseline queuing network performance model that is properly extended to account for the corresponding bad practices. The approach is applied to a CPS consisting of a network of sensors, and experimental results show that performance degradation can be traced back to software performance antipatterns. Sensitivity analysis investigates the peculiar characteristics of antipatterns, such as the frequency of checking the status of resources, which provides quantitative information to software designers to help them identify potential performance problems and their root causes. Quantifying the performance impact of antipatterns on CPS paves the way for future work enabling the automated refactoring of systems to remove these bad practices.
... Therefore, quality prediction based on a system's architectural model is a valuable approach to avoid costs and effort caused by a "fix-it-later approach". Several approaches have been introduced in the last decades to ease the model-based prediction of quality attributes [20,57]. ...
Preprint
Full-text available
Software architecture optimization aims to enhance non-functional attributes like performance and reliability while meeting functional requirements. Multi-objective optimization employs metaheuristic search techniques, such as genetic algorithms, to explore feasible architectural changes and propose alternatives to designers. However, the resource-intensive process may not always align with practical constraints. This study investigates the impact of designer interactions on multi-objective software architecture optimization. Designers can intervene at intermediate points in the fully automated optimization process, making choices that guide exploration towards more desirable solutions. We compare this interactive approach with the fully automated optimization process, which serves as the baseline. The findings demonstrate that designer interactions lead to a more focused solution space, resulting in improved architectural quality. By directing the search towards regions of interest, the interaction uncovers architectures that remain unexplored in the fully automated process.
... Three main books cover a large extent of the lecture materials, including Model-based software performance analysis [5], the Palladio approach to modeling and simulating software architectures [10], and the application of queueing networks and Markov chains in performance evaluation [4]. Moreover, the literature contains, e.g., websites for tools such as PRISM [7] and specifications of standards such as UML. ...
... This is true considering that, in a given project, the same UML models can be leveraged by the engineer, in the MDE context, for multiple purposes. For example, UML models are useful for code generation, as previously mentioned, for automatic testing, and also for performance assessment [35] and dependability assessment [36]. However, UML falls short in representing those specific concepts of the blockchain domain that will eventually be needed for proving the security properties. ...
Article
Full-text available
This paper proposes a model-driven approach for the security modelling and analysis of blockchain-based protocols. The modelling is built upon the definition of a UML profile, which is able to capture transaction-oriented information. The analysis is based on existing formal analysis tools. In particular, the paper considers the Tweetchain protocol, a recent proposal that leverages online social networks, i.e., Twitter, to extend blockchain to domains with small-value transactions, such as IoT. A specialized textual notation is added to the UML profile to capture features of this protocol. Furthermore, a model transformation is defined to generate a Tamarin model from the UML models, via an intermediate well-known notation, i.e., the Alice & Bob notation. Finally, Tamarin Prover is used to verify the model of the protocol against some security properties. This work extends a previous one, where the Tamarin formal models were generated by hand. A comparison of the analysis results, under both the functional and non-functional aspects, is also reported.
... There has been a plethora of literature that discusses performance analysis and modeling techniques of software systems [11][12][13][14][15], using a multitude of different viewpoints and approaches to the problem. Traditionally, performance models in literature are designed to capture operation metrics (e.g., CPU, latency, memory consumption, I/O writes) in relation to an underlying software system and a workload. ...
Preprint
Full-text available
Catching and attributing code change-induced performance regressions in production is hard; predicting them beforehand, even harder. A primer on automatically learning to predict performance regressions in software, this article gives an account of the experiences we gained when researching and deploying an ML-based regression prediction pipeline at Meta. In this paper, we report on a comparative study with four ML models of increasing complexity, from (1) code-opaque, over (2) Bag of Words, (3) off-the-shelf Transformer-based, to (4) a bespoke Transformer-based model, coined SuperPerforator. Our investigation shows the inherent difficulty of the performance prediction problem, which is characterized by a large imbalance of benign over regressing changes. Our results also call into question the general applicability of Transformer-based architectures for performance prediction: an off-the-shelf CodeBERT-based approach had surprisingly poor performance; our highly customized SuperPerforator architecture initially achieved prediction performance that was just on par with simpler Bag of Words models, and only outperformed them for down-stream use cases. This ability of SuperPerforator to transfer to an application with few learning examples afforded an opportunity to deploy it in practice at Meta: it can act as a pre-filter to sort out changes that are unlikely to introduce a regression, truncating the space of changes to search a regression in by up to 43%, a 45x improvement over a random baseline. To gain further insight into SuperPerforator, we explored it via a series of experiments computing counterfactual explanations. These highlight which parts of a code change the model deems important, thereby validating the learned black-box model.
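As a toy illustration of the Bag of Words baseline idea mentioned above (invented examples, not Meta's pipeline): represent each code change as token counts and train a classifier to flag changes likely to introduce a regression.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical sketch: classify code diffs as regressing (1) or benign (0)
# from simple token counts. Diffs and labels below are made up.
diffs = ["for item in items: total += cost(item)",
         "cache[key] = value",
         "while True: retry(query)",
         "return memoized(result)"]
labels = [1, 0, 1, 0]

X = CountVectorizer(token_pattern=r"\w+").fit_transform(diffs)
clf = LogisticRegression().fit(X, labels)
print(clf.predict(X))  # in-sample sanity check only, not an evaluation
```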
... To deal with the requirements of CPS applications, some researchers have focused on modelbased and model-driven engineering approaches to self-adaptation [47]. This technique provides early design-time evaluations of the system based on predefined metrics. ...
Thesis
Full-text available
Self-adaptive systems provide the ability of autonomous decision-making for handling the changes affecting the functionalities of cyber-physical systems. A self-adaptive system repeatedly monitors and analyzes the local system and the environment and makes significant decisions regarding fulfilling the system's functional optimization and safety requirements. Such a decision must be made before a deadline, and the autonomy helps the system meet the timing constraints. If the model of the cyber-physical system is available, it can be used for verification against specific formal properties to reveal whether the system complies with the properties or not. However, owing to the dynamicity of such systems, the system model needs to be reconstructed and reverified at runtime. As the model of a self-adaptive system is a composition of the local system and the environment models, the size of the composed model is relatively large. Therefore, we need efficient and scalable methods to verify the model at runtime in resource-constrained systems. Since the physical environment and the cyber part of the system usually have stochastic natures, the reflection of each behavior is modeled through probabilistic parameters, about which we have some predictions. If the system observes or predicts some changes in the behavior of the environment or the local system, the parameter(s) are updated. This research focuses on the problem of runtime model size reduction in self-adaptive systems. As a solution, the model is partitioned into sub-models that can be verified/approximated independently. At runtime, if a change occurs, only the affected sub-models are subject to re-verification/re-approximation. Finally, with the help of an aggregation algorithm, the partial results from the sub-models are composed, and the verification result for the whole model is calculated. In some situations, updating the model may cause some delays in the decision-making. To meet the decision-making deadlines, the self-adaptive system must decide on an incomplete model in which a few parameters are missing. We do this by conducting a set of behavioral simulations by random walk and matching the system's current behavior with its previous behavioral patterns. Thus, the system is equipped with a runtime parameter estimation method respecting a certain upper bound of errors. This thesis proposes a new metric for determining an upper bound of errors caused by applying the approximation technique. The metric is the basis for two proposed theorems that guarantee upper bounds of errors and accuracy of runtime verification. The evaluation results confirm that the proposed approximation framework reduces the model's size and helps decision-making within the time restrictions. The framework keeps the accuracy of the parameter estimations and verification results above 96.5% and 95%, respectively, while fully guaranteeing the system's safety.
... Performance modeling and testing are considered common approaches to accomplish the mentioned objectives at different stages of performance analysis. Although performance models [7]-[9] provide helpful insight into the behavior of a system, there are still many details of the implementation and the execution environment that might be ignored in the modeling [10]. Moreover, building a precise, detailed model of the system behavior with regard to all the factors at play is often costly and sometimes impossible. ...
Preprint
Full-text available
Performance testing with the aim of generating an efficient and effective workload to identify performance issues is challenging. Many of the automated approaches mainly rely on analyzing system models, analyzing source code, or extracting the usage pattern of the system during execution. However, such information and artifacts are not always available. Moreover, not all transactions within a generated workload impact the performance of the system in the same way; a finely tuned workload could accomplish the test objective in an efficient way. Model-free reinforcement learning is widely used for finding the optimal behavior to accomplish an objective in many decision-making problems without relying on a model of the system. This paper proposes that if the optimal policy (way) for generating test workload to meet a test objective can be learned by a test agent, then efficient test automation would be possible without relying on system models or source code. We present a self-adaptive reinforcement learning-driven load testing agent, RELOAD, that learns the optimal policy for test workload generation and generates an effective workload efficiently to meet the test objective. Once the agent learns the optimal policy, it can reuse the learned policy in subsequent testing activities. Our experiments show that the proposed intelligent load test agent can accomplish the test objective with lower test cost compared to common load testing procedures, and results in higher test efficiency.
... Hence, very few ADLs embed constructs to specify performance parameters, and even fewer ones provide tools to analyze performance within the same ADL environment natively. In most cases, instead, performance models are expressed in different stochastic notations (like Queueing Networks or Petri Nets), thus they have to be generated from architectural specifications through model transformations [35]. ...
Article
Full-text available
Context Software architecture refactoring can be induced by multiple reasons, such as satisfying new functional requirements or improving non-functional properties. Multi-objective optimization approaches have been widely used in the last few years to introduce automation into the refactoring process, and they have revealed their potential especially when quantifiable attributes are targeted. However, the effectiveness of such approaches can be heavily affected by configuration characteristics of the optimization algorithm, such as the composition of solutions. Objective In this paper, we analyze the behavior of EASIER, that is an Evolutionary Approach for Software archItecturE Refactoring, while varying its configuration characteristics, with the objective of studying its potential to find near-optimal solutions under different configurations. Method In particular, we use two different solution space inspection algorithms (i.e., NSGA-II and SPEA2) while varying the genome length and the solution composition. Results EASIER relies on the Æmilia ADL, thus we have conducted our experiments on a specific case study modeled in Æmilia. Conclusion Our results show that the EASIER thoroughly automated process for software architecture refactoring allows identifying configuration contexts of the evolutionary algorithm in which multi-objective optimization more effectively finds near-optimal Pareto solutions.
... Performance modeling and testing are common evaluation approaches to accomplish the associated objectives, such as measurement of performance metrics, detection of functional problems emerging under certain performance conditions, and violations of performance requirements (Jiang and Hassan 2015). Performance modeling mainly involves building a model of the software system's behavior using modeling notations such as queueing networks, Markov processes, Petri nets, and simulation models (Cortellessa et al. 2011; Harchol-Balter 2013; Kant and Srinivasan 1992). Although models provide helpful insights into the performance behavior of the system, there are also many details of the implementation and execution platform that might be ignored in the modeling (Denaro et al. 2004). ...
Article
Full-text available
Test automation brings the potential to reduce costs and human effort, but several aspects of software testing remain challenging to automate. One such example is automated performance testing to find performance breaking points. Current approaches to tackle automated generation of performance test cases mainly involve using source code or system model analysis or use-case-based techniques. However, source code and system models might not always be available at testing time. On the other hand, if the optimal performance testing policy for the intended objective in a testing process instead could be learned by the testing system, then test automation without advanced performance models could be possible. Furthermore, the learned policy could later be reused for similar software systems under test, thus leading to higher test efficiency. We propose SaFReL, a self-adaptive fuzzy reinforcement learning-based performance testing framework. SaFReL learns the optimal policy to generate performance test cases through an initial learning phase, then reuses it during a transfer learning phase, while keeping the learning running and updating the policy in the long term. Through multiple experiments in a simulated performance testing setup, we demonstrate that our approach generates the target performance test cases for different programs more efficiently than a typical testing process and performs adaptively without access to source code and performance models.
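As a rough illustration of the model-free learning idea in the abstract above (a hypothetical sketch, not SaFReL's actual code, which additionally uses fuzzy state abstraction), a tabular Q-learning agent can pick workload actions and reinforce those that push the system toward the test objective; apply_workload, the action names, and the reward are invented stand-ins for the real system under test.

```python
import random

# Hypothetical sketch: tabular Q-learning over coarse system states, where
# actions tune the workload and the reward reflects progress toward the
# test objective (here: pushing measured response time up).
ACTIONS = ["more_users", "heavier_requests", "shorter_think_time"]
Q = {}  # (state, action) -> estimated value

def choose(state, eps=0.2):
    if random.random() < eps:                 # epsilon-greedy exploration
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

def update(state, action, reward, next_state, alpha=0.1, gamma=0.9):
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

def apply_workload(action):                   # stub for the real system
    rt = random.uniform(0.1, 2.0)             # pretend measured response time
    return ("high_rt" if rt > 1.0 else "low_rt"), rt

state = "low_rt"
for _ in range(200):                          # one learning episode
    action = choose(state)
    next_state, reward = apply_workload(action)
    update(state, action, reward, next_state)
    state = next_state
```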
... Examples of excluded papers in this line are [186,209], which deal with probabilistic databases and uncertainty in complex event systems, respectively. Similarly, we did not consider the numerous works dealing with model-based performance or reliability engineering of software systems, which enrich software models with the information required for evaluation using different notations (e.g., UML Profiles such as SPT [190], MARTE [72] or DAM [129]), transform the enriched model to a formal and mathematical model supporting the evaluation (e.g., Queueing Networks [161,217], Probabilistic Process Algebras [157], Stochastic Petri Nets [182,183], Fault Trees [143] or Markov chains [214]) and evaluate the performance or reliability of the system using the tools available for the formal model [127,139]. The interested reader can consult already existing corresponding surveys about these topics, such as [131,156,175,195,200,224] in the context of databases, [125] in the context of complex event processing, or [127,128,160,168] on model-based performance or reliability engineering of software systems. ...
Article
Full-text available
This paper provides a comprehensive overview and analysis of research work on how uncertainty is currently represented in software models. The survey presents the definitions and current research status of different proposals for addressing uncertainty modeling and introduces a classification framework that allows to compare and classify existing proposals, analyze their current status and identify new trends. In addition, we discuss possible future research directions, opportunities and challenges.
... Code-driven generation of quantitative models. In software performance engineering, the use of Markov chains is widely accepted (e.g., [14]). However, most of the literature has focused on model-driven approaches [2], while code-driven generation of models has been less explored. ...
... Application Data Collection Service (ADC): ADC collects two performance metrics, response time and throughput. Response time is the end-to-end time that a task spends to traverse a certain path within the system (Cortellessa et al. 2011). It includes server processing time and network transmission time (Smith and Williams 2002). ...
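A minimal sketch of how the two metrics named in the excerpt above can be derived from collected records (hypothetical tuple format, not ADC's actual schema): response time is end-to-end completion minus start, and throughput is completions per unit of observation time.

```python
# Hypothetical monitoring records: (task_id, start_ts, end_ts), times in seconds.
records = [("t1", 0.00, 0.42), ("t2", 0.10, 0.61), ("t3", 0.35, 0.80)]

# Mean end-to-end response time across completed tasks.
response_times = [end - start for _, start, end in records]
mean_rt = sum(response_times) / len(response_times)

# Throughput: completed tasks per second over the observation window.
window = max(end for *_, end in records) - min(start for _, start, _ in records)
throughput = len(records) / window

print(f"mean response time: {mean_rt:.3f} s, throughput: {throughput:.2f} /s")
```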
Article
Full-text available
Cloud monitoring is the key technology for knowing the status and availability of the resources and services present in the current infrastructure. However, cloud monitoring faces many challenges due to inefficient monitoring capability and enormous resource consumption. We study adaptive monitoring for the cloud computing platform, and focus on the problem of balancing monitoring capability and resource consumption. We propose HSACMA, a hierarchical scalable adaptive monitoring architecture, that (1) monitors the physical and virtual infrastructure at the infrastructure layer, the middleware running at the platform layer, and the application services at the application layer; (2) achieves scalability of the monitoring based on microservices; and (3) adaptively adjusts the monitoring interval and data transmission strategy according to the running state of the cloud computing system. Moreover, we study the case of a real production system deployed and running on the cloud computing platform called CloudStack, to verify the effectiveness of applying our architecture in practice. The results show that HSACMA can guarantee the accuracy and real-time performance of monitoring while reducing resource consumption.
... Some of the well-known modeling notations are queuing networks, Markov processes, and Petri nets, which are used together with analysis techniques to address performance modeling [16,17,18]. ...
Preprint
Background: End-user satisfaction is not only dependent on the correct functioning of the software systems but is also heavily dependent on how well those functions are performed. Therefore, performance testing plays a critical role in making sure that the system responsively performs the intended functionality. Load test generation is a crucial activity in performance testing. Existing approaches for load test generation require expertise in performance modeling, or they are dependent on the system model or the source code. Aim: This thesis aims to propose and evaluate a model-free learning-based approach for load test generation, which doesn't require access to the system models or source code. Method: In this thesis, we treated the problem of optimal load test generation as a reinforcement learning (RL) problem. We proposed two RL-based approaches using q-learning and deep q-network for load test generation. In addition, we demonstrated the applicability of our tester agents on a real-world software system. Finally, we conducted an experiment to compare the efficiency of our proposed approaches to a random load test generation approach and a baseline approach. Results: Results from the experiment show that the RL-based approaches learned to generate effective workloads with smaller sizes and in fewer steps. The proposed approaches led to higher efficiency than the random and baseline approaches. Conclusion: Based on our findings, we conclude that RL-based agents can be used for load test generation, and they act more efficiently than the random and baseline approaches. Full text: https://arxiv.org/abs/2007.12094v1
... Performance antipatterns [8,31] and stochastic modelling (e.g., using queueing networks, stochastic Petri nets, and Markov models [7,16,33]) have long been used in conjunction, to analyse performance of software systems and to drive system refactoring when requirements are violated. End-to-end approaches supporting this analysis and refinement processes have been developed (e.g., [4,9,20]), often using established tools for the simulation or formal verification of stochastic models of the software system under development (SUD). ...
Chapter
Full-text available
Refactoring is often needed to ensure that software systems meet their performance requirements in deployments with different operational profiles, or when these operational profiles are not fully known or change over time. This is a complex activity in which software engineers have to choose from numerous combinations of refactoring actions. Our paper introduces a novel approach that uses performance antipatterns and stochastic modelling to support this activity. The new approach computes the performance antipatterns present across the operational profile space of a software system under development, enabling engineers to identify operational profiles likely to be problematic for the analysed design, and supporting the selection of refactoring actions when performance requirements are violated for an operational profile region of interest. We demonstrate the application of our approach for a software system comprising a combination of internal (i.e., in-house) components and external third-party services.
... Performance problems at the architecture level (IIb) are well-researched [17]. Methods for finding them include systematic experiments [22] and model analysis [5,19]. ...
Article
Designing hybrid controllers for cyber-physical systems (CPSs) where computational and physical components influence each other is a challenging task, as it requires considering the performance of very different types of dynamics simultaneously. Meanwhile, controlling each of these dynamics separately can lead to unacceptable results. Common approaches to controller design rely on the use of analytical methods. Although this approach can provide formal guarantees of stability and performance, the analytical design of hybrid controllers can become quite cumbersome. Alternatively, modeling and simulation (M&S)-based design techniques have proven successful for hybrid controllers, providing robust results based on Monte Carlo techniques. This requires simulation models and platforms capable of seamlessly composing the underlying hybrid domains. Unmanned Aerial Vehicles (UAVs) are CPSs with sensitive physical–computational couplings. We address the development of a hybrid model and simulation platform for a data collection application involving UAVs with onboard data processing. The quality of control (QoC) of the physical dynamics must be ensured together with the quality of service (QoS) of the onboard software competing for scarce processing resources. In this scenario, it is imperative to find safe trade-offs between flight stability and processing throughput that can adapt to uncertain environments. The goal is to design a hybrid supervisory controller that dynamically adapts the use of resources to balance the performance of both aspects in a CPS, while ensuring system-level QoS. We present the end-to-end M&S-based design methodology, which can be regarded as a design template for a broader class of CPSs.
Article
Software architecture optimization aims to enhance non-functional attributes like performance and reliability while meeting functional requirements. Multi-objective optimization employs metaheuristic search techniques, such as genetic algorithms, to explore feasible architectural changes and propose alternatives to designers. However, this resource-intensive process may not always align with practical constraints. This study investigates the impact of designer interactions on multi-objective software architecture optimization. Designers can intervene at intermediate points in the fully automated optimization process, making choices that guide exploration towards more desirable solutions. Through several controlled experiments as well as an initial user study (14 subjects), we compare this interactive approach with a fully automated optimization process, which serves as a baseline. The findings demonstrate that designer interactions lead to a more focused solution space, resulting in improved architectural quality. By directing the search towards regions of interest, the interaction uncovers architectures that remain unexplored in the fully automated process. In the user study, participants found that our interactive approach provides a better trade-off between sufficient exploration of the solution space and the required computation time.
Article
Layered queueing networks (LQNs) are a class of performance models for software systems in which multiple distributed resources may be possessed simultaneously by a job. Estimating response times in a layered system is an essential but challenging analysis dimension in Quality of Service (QoS) assessment. Current analytic methods are capable of providing accurate estimates of mean response times. However, accurately approximating response time distributions used in service-level objective analysis is a demanding task. This paper proposes a novel hybrid framework that leverages phase-type (PH) distributions and neural networks to provide accurate density estimates of response times in layered queueing networks. The core step of this framework is to recursively obtain response time distributions in the submodels that are used to analyse the network by means of decomposition. We describe these response time distributions as a mixture of density functions for which we learn the parameters through a Mixture Density Network (MDN). The approach recursively propagates MDN predictions across software layers using PH distributions and performs repeated moment-matching based refitting to efficiently estimate end-to-end response time densities. Extensive numerical experiment results show that our scheme significantly improves density estimations compared to the state-of-the-art.
Article
Queueing networks serve as a popular performance model in the analysis of business processes and computer systems [4]. Solving queueing network models supports the decision making of system designers. System response time and throughput are two key performance measures in queueing networks. The most widely used algorithms for computing these measures are mean value analysis (MVA) and its approximate extensions [3, §8-9]. However, conventional analytic methods are inaccurate at solving non-product-form queueing networks, which are frequently encountered in modeling real systems. Approximation formulas typically rely on assumptions that may lead, in particular regions of the parameter space, to inaccurate and misleading results. Simulation modeling is accurate, but it must be designed for each specific problem and usually takes longer to converge.
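Since the abstract singles out mean value analysis, a compact rendition of exact single-class MVA for a closed product-form network may help; the service demands and population below are illustrative:

    # Exact single-class Mean Value Analysis for a closed product-form
    # queueing network: D[i] is the service demand at queueing station i,
    # N the customer population.
    def mva(D, N):
        K = len(D)
        Q = [0.0] * K                                  # mean queue lengths
        for n in range(1, N + 1):
            R = [D[i] * (1 + Q[i]) for i in range(K)]  # residence times
            X = n / sum(R)                             # system throughput
            Q = [X * R[i] for i in range(K)]           # Little's law per station
        return X, sum(R)                               # throughput, response time

    X, R = mva(D=[0.005, 0.020, 0.012], N=20)          # assumed demands (seconds)
    print(f"throughput = {X:.2f}/s, mean response time = {R*1000:.1f} ms")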
Chapter
Dynamic routing is an essential part of service- and cloud-based applications. Routing architectures are based on vastly different implementation concepts, such as API Gateways, Enterprise Service Buses, Message Brokers, or Service Proxies. However, their basic operation is the same: these technologies dynamically route or block incoming requests. This paper proposes a new approach that abstracts all these routing patterns into one adaptive architecture. We hypothesize that self-adaptation of the dynamic routing is beneficial over any fixed architecture selection with respect to reliability and performance trade-offs. Our approach dynamically self-adapts between more central or more distributed routing to optimize system reliability and performance. This adaptation is computed by a multi-criteria optimization analysis. We evaluate our approach by analyzing our previously measured data from an experiment of 1200 h of runtime. Our extensive systematic evaluation of 4356 cases confirms that our hypothesis holds and that our approach is beneficial regarding reliability and performance. Even on average, where right and wrong architecture choices are analyzed together, our novel architecture offers 9.82% reliability and 47.86% performance gains. Keywords: Self-Adaptive Systems, Dynamic Routing Architectures, Service- and Cloud-Based Applications, Reliability and Performance Trade-Offs, Prototypical Tool Support
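The multi-criteria adaptation step can be pictured as a weighted trade-off between predicted reliability and performance. The candidate set, metric values, and weights below are illustrative assumptions, not the paper's actual analysis:

    # Hypothetical weighted-sum selection between routing architectures.
    def choose_architecture(candidates, w_rel=0.5, w_perf=0.5):
        """candidates: {name: (reliability, normalised performance)}, both in [0,1]."""
        return max(candidates,
                   key=lambda n: w_rel * candidates[n][0] + w_perf * candidates[n][1])

    routing = {"central_gateway": (0.99, 0.70),
               "distributed_proxies": (0.95, 0.90)}
    print(choose_architecture(routing, w_rel=0.7, w_perf=0.3))  # favours reliability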
Preprint
Full-text available
Efficiency has been a pivotal aspect of the software industry since its inception, as a system that serves the end-user fast, and the service provider cost-efficiently benefits all parties. A database management system (DBMS) is an integral part of effectively all software systems, and therefore it is logical that different studies have compared the performance of different DBMSs in hopes of finding the most efficient one. This survey systematically synthesizes the results and approaches of studies that compare DBMS performance and provides recommendations for industry and research. The results show that performance is usually tested in a way that does not reflect real-world use cases, and that tests are typically reported in insufficient detail for replication or for drawing conclusions from the stated results.
Chapter
Uncertainty is present in model-based developments in many different ways. In the context of composing model-based analysis tools, this chapter discusses how the combination of different models can increase or decrease the overall uncertainty. It explores how such uncertainty could be more explicitly addressed and systematically managed, with the goal of defining a conceptual framework to deal with and manage it. We proceed towards this goal both with a theoretical reasoning and a practical application through an example of designing a peer-to-peer file-sharing protocol. We distinguish two main steps: (i) software system modelling and (ii) model-based performance analysis by highlighting the challenges related to the awareness that model-based development in software engineering needs to coexist with uncertainty. This core chapter addresses Challenge 5 introduced in Chap. 3 of this book (living with uncertainty).
Chapter
Full-text available
Any analysis produces results to be used by analysis users to understand and improve the system being analysed. But what are the ways in which analysis results can be exploited? And how is exploitation of analysis results related to analysis composition? In this chapter, we provide a conceptual model of analysis-result exploitation and a model of the variability and commonalities between different analysis approaches, leading to a feature-based description of results exploitation. We demonstrate different instantiations of our feature model in nine case studies of specific analysis techniques. Through this discussion, we also showcase different forms of analysis composition, leading to different forms of exploitation of analysis results for refined analysis, improving analysis mechanisms, exploring results, etc. We, thus, present the fundamental terminology for researchers to discuss exploitation of analysis results, including under composition, and highlight some of the challenges and opportunities for future research.
Article
Performance is an important feature in determining the quality of developed software. Performance testing of modular software is a time-consuming and costly task. Several performance testing tools (PTTs) are available on the market to help software developers test the performance of their software. In this paper, we propose an integrated multiobjective optimization model for the evaluation and selection of the best-fit PTT for a modular software system. The total performance tool cost is minimized and the fitness evaluation score of the PTTs is maximized. The fitness evaluation of a PTT is based on various attributes, using the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). The model allows software developers to select the number of PTTs as per their requirements. The individual performance of the modules is considered based on selected performance properties. Reusability constraints are included, since a PTT can be used in the same module to test different properties and/or in different modules to test the same or different performance properties. A real-world case study from the domain of enterprise resource planning (ERP) shows the working of the suggested optimization model.
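TOPSIS itself is a standard procedure, sketched below for benefit attributes only; the attribute matrix and weights are made-up example data, not the paper's case study:

    import math

    # TOPSIS: rank alternatives by closeness to the ideal solution.
    def topsis(matrix, weights):
        """matrix[i][j]: score of tool i on benefit attribute j."""
        norms = [math.sqrt(sum(v * v for v in col)) for col in zip(*matrix)]
        V = [[w * v / n for v, w, n in zip(row, weights, norms)] for row in matrix]
        best = [max(col) for col in zip(*V)]       # ideal solution
        worst = [min(col) for col in zip(*V)]      # anti-ideal solution
        d = lambda row, ref: math.sqrt(sum((a - b) ** 2 for a, b in zip(row, ref)))
        return [d(r, worst) / (d(r, worst) + d(r, best)) for r in V]

    tools = [[7, 9, 6], [8, 7, 8], [6, 8, 9]]      # three PTTs, three attributes
    print(topsis(tools, weights=[0.5, 0.3, 0.2]))  # higher score = better fit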
Article
Full-text available
When SaaS software suffers from response-time degradation, scaling the deployment resources that support its operation can improve response time, but it also increases costs due to the additional resources. To save costs while improving response time, scaling out the SaaS software itself is an alternative approach. However, how scaling out the software affects response time when deployment resources are fixed is a key issue for improving response time effectively. Therefore, in this paper we propose a method for analysing the impact of scaling out software on response time. Specifically, we define the scaling-out operation of SaaS software and then leverage queueing theory to analyse its impact on response time. Based on the conclusions of this analysis, we further derive an algorithm for improving response time by scaling out software without using additional deployment resources. Finally, the effectiveness of the analysis conclusions and of the proposed algorithm is validated on a practical case, which indicates that the conclusions of the impact analysis can guide scaling out software and improving response time effectively while saving deployment resources.
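A queueing-theoretic reading of the scale-out question can be illustrated with an M/M/c model: mean response time as the number of software instances c grows while the total arrival rate stays fixed. The parameters are assumptions for the example, not taken from the paper:

    import math

    def mmc_response_time(lam, mu, c):
        """Mean response time of an M/M/c queue (Erlang C formula)."""
        rho = lam / (c * mu)
        assert rho < 1, "queue must be stable"
        a = lam / mu
        p0 = 1 / (sum(a ** k / math.factorial(k) for k in range(c))
                  + a ** c / (math.factorial(c) * (1 - rho)))
        erlang_c = (a ** c / (math.factorial(c) * (1 - rho))) * p0
        return erlang_c / (c * mu - lam) + 1 / mu   # waiting time + service time

    for c in (2, 3, 4):                             # scaling out instances
        print(c, round(mmc_response_time(lam=8.0, mu=5.0, c=c), 3))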
Conference Paper
Full-text available
DevOps is a modern software engineering paradigm that is gaining widespread adoption in industry. The goal of DevOps is to bring software changes into production with a high frequency and fast feedback cycles. This conflicts with software quality assurance activities, particularly with respect to performance. For instance, performance evaluation activities --- such as load testing --- require a considerable amount of time to get statistically significant results. We conducted an industrial survey to get insights into how performance is addressed in industrial DevOps settings. In particular, we were interested in the frequency of executing performance evaluations, the tools being used, the granularity of the obtained performance data, and the use of model-based techniques. The survey responses, which come from a wide variety of participants from different industry sectors, indicate that the complexity of performance engineering approaches and tools is a barrier for wide-spread adoption of performance analysis in DevOps. The implication of our results is that performance analysis tools need to have a short learning curve, and should be easy to integrate into the DevOps pipeline in order to be adopted by practitioners.
Article
Changes to the deployment of SaaS (Software as a Service) software influence its response time, an important performance metric. Therefore, studying the impact of deployment change on the response time of SaaS software can contribute to performance improvement of the software. However, few performance analysis methods can directly analyze the relationship between deployment change and response time of SaaS software. In this paper, we propose an approach that explicitly analyses the impact of specific deployment change operations on the response time of SaaS software. Specifically, we present an evaluation method for the response time of SaaS software under a given deployment scheme by leveraging queueing theory. Through mathematical derivation based on the proposed evaluation method, we qualitatively analyze the variation trend of response time with respect to deployment change. Furthermore, we study the relationship between two specific types of deployment change operations and the response time variation of SaaS software, and use it to propose a response time improvement method based on deployment change. Finally, the effectiveness of the analysis conclusions and the proposed method is validated on practical cases, which indicates that adjusting the deployment scheme according to the conclusions obtained in this paper can help improve the response time of SaaS software.
Thesis
The increasing topological changes in the urban environment have exposed human civilization to a growing risk of emergencies. Fires, earthquakes, floods, hurricanes, overcrowding, or even pandemic viruses endanger human lives. Hence, designing infrastructures to handle possible emergencies has become an ever-increasing need. The safe evacuation of occupants from a building takes precedence when dealing with the necessary mitigation and disaster risk management. Nowadays, evacuation plans appear as static maps, designed by civil protection operators, that provide some pre-selected routes through which pedestrians should move in case of emergency. Static models may work in low-congestion, spacious areas. However, the situation can hardly be assumed static in case of a disaster. A static emergency map has several limitations: i) it ignores abrupt congestion, obstacles, or dangerous routes and areas; ii) it leads all pedestrians to the same route, making specific areas highly crowded; iii) it ignores the individual movement behavior of people and special categories (e.g. elderly, children, disabled); iv) it cannot provide proper training for security operators in various scenarios; v) it cannot provide comprehensive situational awareness for evacuation managers.
By simply tracking people in an indoor area, possible congestions can be detected, the best evacuation paths can be periodically re-calculated, and the minimum evacuation time under ever-changing emergency conditions can be evaluated. A well-designed Internet of Things (IoT) infrastructure can provide various solutions both at design-time and at real-time. At design-time, a building architecture can be assessed with regard to safety conditions, even before its (re-)construction; simulations are among the feasible solutions to assess the evacuability of buildings and the feasibility of evacuation plans. At design-time, an IoT-based evacuation system provides: i) safety considerations for the building architecture in the early (re-)construction phase; ii) building dimensions that lead to optimum evacuation performance; iii) discovery of bottlenecks tied to the building characteristics; iv) comparison of various routing optimization models to pick the best match as the basis of a real-time evacuation system; v) visualization of dynamic evacuation executions to demonstrate a variety of scenarios to security operators and train them. At real-time, an IoT architecture supports the gathering of data used for dynamic monitoring and evacuation planning, and an IoT-based evacuation system provides: i) optimal solutions that can be continuously updated, so evacuation guidelines can be adjusted according to visitors' positions as they evolve over time; ii) automatic discarding of paths that suddenly become unfeasible; iii) a model that can be incorporated into a mobile app supporting emergency units in evacuating closed or open spaces.
Since the time to evacuate people from the scene of an emergency (e.g. a building) is crucial, IoT-based evacuation infrastructures need an optimization algorithm at their core. To reduce the time taken for evacuation, a better and more robust exit strategy is developed. Algorithms are used to model the exit patterns and strategies of participating agents and to evaluate their movement behavior with respect to performance, efficiency, and practicality attributes.
The algorithms normally provide a way to evacuate the occupants as quickly as possible. While this research and all associated experiments were carried out in Italy, we see the problem from an international viewpoint. Within this thesis, we carried out the following research and experiments to analyze and develop an IoT-based emergency evacuation system. The first two chapters present systematic mapping studies to review the state of the art and help design high-quality IoT architectures: chapter one investigates IoT software architectural styles, and chapter two assesses architectural fault-tolerance. Chapter three proposes some adaptive architectural styles and their associated energy-consumption qualities. After the preliminary design decisions about the architecture, in chapter four we propose a core computational component in charge of minimizing the time necessary to evacuate people from a building. We developed a network flow algorithm that decomposes the building space and time into finite elements: unit cells and time slots. In chapter five, we assessed the effectiveness of the IoT system in providing good real-time and design-time solutions. Chapter six focuses on real-time performance and reduces computational and evacuation delays to a minimum by using a queueing network. During our research, we designed and implemented a hardware and software IoT infrastructure. We installed sensors throughout the selected building, whose data constantly feed the algorithm to show the best evacuation routes to the occupants. We further realized that such a system may lack accuracy since: i) a pure optimization approach can lack realism, as building occupants may not evacuate immediately; ii) occupants may not always follow the recommended optimal paths due to various behavioral and organizational issues; iii) the physical space may prevent an effective emergency evacuation. Therefore, in chapter seven we introduce a simulation-optimization approach, which allows us to test more realistic evacuation scenarios and compare them with the optimal approach. We simulated the optimized Netflow algorithm under different realistic behavioral agent-based modeling (ABM) constraints, such as social attachment, and improved the IoT system accordingly.
This thesis makes the following main contributions. Contributions on new and legitimate IoT architectures:
- Providing an up-to-date state of the art of IoT architectural styles and patterns.
- Proposing a set of self-adaptive IoT patterns and assessing their specific quality attributes (fault-tolerance, energy consumption, and performance).
- Designing an IoT infrastructure and testing its performance in both real-time and design-time applications.
Algorithmic contribution:
- Developing a network flow algorithm that minimizes the time necessary to evacuate people from the scene of a disaster.
Evaluation / experimentation environment contributions:
- Modeling various social agents and their interactions during an emergency to improve the IoT system accordingly.
- Evaluating the system using empirical and real case studies.
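The space-time decomposition behind the network flow algorithm can be sketched on a time-expanded graph: nodes are (cell, time-slot) pairs, edge capacities model corridor widths, and the maximum flow from the occupied cells to the exit bounds how many people can evacuate within the horizon. The graph and capacities below are toy assumptions, not the thesis's actual model:

    from collections import deque, defaultdict

    def max_flow(cap, s, t):
        """Edmonds-Karp max flow; cap[u][v] is the residual capacity."""
        flow = 0
        while True:
            parent, q = {s: None}, deque([s])
            while q and t not in parent:               # BFS for an augmenting path
                u = q.popleft()
                for v, c in cap[u].items():
                    if c > 0 and v not in parent:
                        parent[v] = u
                        q.append(v)
            if t not in parent:
                return flow
            v, bottleneck = t, float("inf")            # find the path bottleneck
            while parent[v] is not None:
                bottleneck = min(bottleneck, cap[parent[v]][v])
                v = parent[v]
            v = t                                      # update residual capacities
            while parent[v] is not None:
                cap[parent[v]][v] -= bottleneck
                cap[v][parent[v]] += bottleneck
                v = parent[v]
            flow += bottleneck

    # (room, t=0) -> (corridor, t=1) -> (exit, t=2); corridor passes 2 per slot
    g = defaultdict(lambda: defaultdict(int))
    g[("room", 0)][("corridor", 1)] = 4
    g[("corridor", 1)][("exit", 2)] = 2
    print(max_flow(g, ("room", 0), ("exit", 2)))       # 2 people within 2 slots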
Thesis
Full-text available
Fires, earthquakes, floods, hurricanes, overcrowding, or even pandemic viruses endanger human lives. Hence, designing infrastructures to handle possible emergencies has become an ever-increasing need. The safe evacuation of occupants from a building takes precedence when dealing with the necessary mitigation and disaster risk management. This thesis deals with designing an IoT system that provides safe and quick evacuation suggestions. The IoT-based evacuation system provides optimal evacuation paths that can be continuously updated based on run-time sensory data, so evacuation guidelines can be adjusted according to occupants' positions as they evolve over time. This thesis makes the following main contributions: i) providing an up-to-date state of the art of IoT architectural styles and patterns; ii) proposing a set of self-adaptive IoT patterns and assessing their specific quality attributes (fault-tolerance, energy consumption, and performance); iii) designing an IoT infrastructure and testing its performance in both real-time and design-time applications; iv) developing a network flow algorithm that minimizes the time necessary to evacuate people from the scene of a disaster; v) modeling various social agents and their interactions during an emergency to improve the IoT system accordingly; vi) evaluating the system using empirical and real case studies.
Chapter
Full-text available
Functional specifications describe what program components can do: the sufficient conditions to invoke components' operations. They allow us to reason about the use of components in a closed-world setting, where components interact with known client code, and where the client code must establish the appropriate pre-conditions before calling into a component. Sufficient conditions are not enough to reason about the use of components in an open-world setting, where components interact with external code, possibly of unknown provenance, and where components may evolve over time. In this open-world setting, we must also consider the necessary conditions, i.e. the conditions without which an effect will not happen. In this paper we propose the Chainmail specification language for writing holistic specifications that focus on necessary conditions (as well as sufficient conditions). We give a formal semantics for Chainmail and discuss several examples. The core of Chainmail has been mechanised in the Coq proof assistant.
Chapter
Full-text available
This report describes the 2020 Competition on Software Testing (Test-Comp), the 2nd edition of a series of comparative evaluations of fully automatic software test-case generators for C programs. The competition provides a snapshot of the current state of the art in the area, and has a strong focus on replicability of its results. The competition was based on 3,230 test tasks for C programs. Each test task consisted of a program and a test specification (error coverage, branch coverage). Test-Comp 2020 had 10 participating test-generation systems. Keywords: Software Testing, Test-Case Generation, Competition, Software Analysis, Software Validation, Test Validation, Test-Comp, Benchmarking, Test Coverage, Bug Finding, BenchExec, TestCov
Article
Full-text available
The current generation of network-centric applications exhibits an increasingly higher degree of mobility. Wireless networks allow devices to move from one location to another without losing connectivity. Also, new software technologies allow code fragments or entire running applications to migrate from one host to another. Performance modeling of such complex systems is a difficult task, which should be carried out during the early design stages of system development. However, the integration of performance modeling and analysis with software system specification for mobile systems is still an open problem, since there is no unique, widely accepted notation for describing mobile systems. Moreover, performance modeling is usually developed separately from high-level system description. This is not only time consuming, but the separation of performance model and system specification also makes it more difficult to report the performance analysis results at the system design level and to modify the system model to analyze design alternatives. In this paper we address the problem of integrating system performance modeling and analysis with a specification of mobile software systems based on UML. In particular, we introduce a unified UML notation for high-level description and performance modeling of mobile systems. The notation allows the inclusion of quantitative information, which is used to build a process-oriented simulation model of the system. The simulation model is executed, and the results are reported back in the UML notation. We describe a prototype tool for translating annotated UML models into simulation programs and we present a simple case study.
Conference Paper
Full-text available
Architectural decisions are among the earliest made in a software development project. They are also the most costly to fix if, when the software is completed, the architecture is found to be inappropriate for meeting quality objectives. Thus, it is important to be able to assess the impact of architectural decisions on quality objectives such as performance and reliability at the time that they are made. This paper describes PASA, a method for performance assessment of software architectures. It was developed from our experience in conducting performance assessments of software architectures in a variety of application domains including web-based systems, financial applications, and real-time systems. PASA uses the principles and techniques of software performance engineering (SPE) to determine whether an architecture is capable of supporting its performance objectives. The method may be applied to new development to uncover potential problems when they are easier and less expensive to fix. It may also be used when upgrading legacy systems to decide whether to continue to commit resources to the current architecture or migrate to a new one. The method is illustrated with an example drawn from an actual assessment.
Conference Paper
Full-text available
Integration of non-functional validation in Model-Driven Architecture is still far from being achieved, although it is ever more necessary in the development of modern software systems. In this paper we take a step towards the adoption of this activity as a daily practice for software engineers all along the MDA process. We consider the Non-Functional MDA framework (NFMDA) that, besides the typical MDA model transformations for code generation, embeds new types of model transformations that allow the generation of quantitative models for non-functional analysis. We plug into the framework two methodologies, one for performance analysis and one for reliability assessment, and we illustrate the relationships between non-functional models and software models. To this aim, Computation Independent, Platform Independent and Platform Specific Models are also defined in the non-functional domains taken into consideration, namely performance and reliability.
Conference Paper
Full-text available
Quantitative performance analysis of software systems should be integrated in the early stages of the development process. We propose a simulation-based approach to performance modeling of software architectures specified in UML, together with an algorithm for deriving a simulation model from annotated UML software architectures. We introduce annotations for some UML diagrams, i.e., Use Case, Activity and Deployment diagrams, to describe system performance parameters. Then we show how to derive a process-oriented simulation model by automatically extracting information from the UML diagrams. Simulation provides performance results that are reported into the UML diagrams as tagged values. The methodology has been implemented in a prototype tool called UML-Ψ and is illustrated on a simple case study.
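The process-oriented simulation flavour can be conveyed with a hand-rolled single-station model, where annotation-like parameters (arrival rate, service demand) drive the simulation and the measured response time would be reported back as a tagged value. This is an illustrative sketch, not the UML-Ψ tool:

    import random

    random.seed(1)
    LAMBDA, MU, N_JOBS = 4.0, 5.0, 10_000    # annotation-like parameters (assumed)

    t, free_at, responses = 0.0, 0.0, []
    for _ in range(N_JOBS):
        t += random.expovariate(LAMBDA)      # next arrival
        start = max(t, free_at)              # wait if the server is busy
        free_at = start + random.expovariate(MU)
        responses.append(free_at - t)        # response time = waiting + service

    print(f"simulated mean response time: {sum(responses)/len(responses):.3f} s")
    print(f"M/M/1 analytic check:         {1/(MU-LAMBDA):.3f} s")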
Conference Paper
Full-text available
Performance characteristics, such as response time and throughput, play an important role in defining the quality of software products, especially in the case of real-time and distributed systems. The developers of such systems should be able to assess and understand the performance effects of various architectural decisions, starting at an early stage, when changes are easy and less expensive, and continuing throughout the software life cycle. This can be achieved by constructing and analyzing quantitative performance models that capture the interactions between the main system components and point to the system's performance trouble spots. The paper proposes a formal approach to building Layered Queueing Network (LQN) performance models from UML descriptions of the high-level architecture of a system, and more exactly from the architectural patterns used in the system. The performance modelling formalism, LQN, is an extension of the well-known Queueing Network modelling technique. The transformation from UML architectural description of a given system to its LQN model is based on PROGRES, a well known visual language and environment for programming with graph rewriting systems.
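The gist of the pattern-based transformation above can be shown with a toy mapping from a client-server architectural pattern to an LQN-like structure: two tasks, with a synchronous call between their entries. The data structure and names are illustrative; the actual transformation uses graph rewriting in PROGRES:

    # Hypothetical pattern-to-LQN mapping for a single client-server pattern.
    def client_server_to_lqn(client, server, demand_ms):
        return {"tasks": [
            {"name": client,
             "entries": [{"name": f"{client}_e",
                          "calls": [(f"{server}_e", 1)]}]},   # one synchronous call
            {"name": server,
             "entries": [{"name": f"{server}_e",
                          "demand_ms": demand_ms}]},          # host service demand
        ]}

    print(client_server_to_lqn("Browser", "WebServer", demand_ms=12.0))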
Conference Paper
Full-text available
Model transformations in MDA mostly aim at stepping from a Platform Independent Model (PIM) to a Platform Specific Model (PSM) from a functional viewpoint. In order to develop high quality software products, non-functional attributes (such as performance) must be taken into account. In this paper we extend the canonical view of the MDA approach to embed additional types of models that allow to structure a Model Driven approach keeping into account performance issues. We define the relationships between MDA typical models and the newly introduced models, as well as relationships among the latter ones. In this extended framework new types of model-to-model transformations also need to be devised. We place an existing methodology for transforming software models into performance models within the scope of this framework.
Conference Paper
Full-text available
Software performance concerns begin at the very outset of a new project. The first definition of a software system may be in the form of Use Cases, which may be elaborated as scenarios: this work creates performance models from scenarios. The Use Case Maps notation captures the causal flow of intended execution in terms of responsibilities, which may be allocated to components, and which are annotated with expected resource demands. The SPT algorithm was developed to transform scenario models into performance models. The UCM2LQN tool implements SPT and converts UCM scenario models to layered queueing performance models, allowing rapid evaluation of an evolving scenario definition. The same reasoning can be applied to other scenario models such as Message Sequence Charts, UML Activity Graphs (or Collaboration Diagrams, or Sequence Diagrams), but UCMs are particularly powerful, in that they can combine interacting scenarios and show scenario interactions. Thus a solution for UCMs can be applied to multiple scenarios defined with other notations.
Article
Full-text available
A performance model interchange format (PMIF) provides a mechanism whereby system model information may be transferred among performance modeling tools. The PMIF allows diverse tools to exchange information and requires only that the importing and exporting tools support the PMIF. This paper presents the definition of a PMIF by describing a meta-model of the information requirements and the transfer format derived from it. It describes how tool developers can implement the PMIF, how the model interchange via export and import works in practice, and how the PMIF can be extended. A simple case study illustrates the format. The paper concludes with the current status of the PMIF, lessons learned, some suggestions for extensions, and current work in progress.
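The interchange idea can be illustrated by serialising a small queueing model to XML that an importing tool could read. The element and attribute names below follow the spirit of PMIF but are simplified assumptions, not the normative meta-model:

    import xml.etree.ElementTree as ET

    # Hypothetical PMIF-like export of a closed queueing network model.
    model = ET.Element("QueueingNetworkModel")
    ET.SubElement(model, "Server", Name="CPU", Quantity="1", SchedulingPolicy="PS")
    ET.SubElement(model, "Server", Name="Disk", Quantity="1", SchedulingPolicy="FCFS")
    wl = ET.SubElement(model, "ClosedWorkload", WorkloadName="users",
                       NumberOfJobs="20", ThinkTime="5.0")
    ET.SubElement(wl, "DemandedService", ServerID="CPU", ServiceDemand="0.010")
    ET.SubElement(wl, "DemandedService", ServerID="Disk", ServiceDemand="0.025")

    ET.ElementTree(model).write("model.pmif.xml", encoding="utf-8",
                                xml_declaration=True)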
Conference Paper
Full-text available
The paper proposes a formal approach to building software performance models for distributed and/or concurrent software systems from a description of the system's architecture by using graph transformations. The performance model is based on the Layered Queueing Network (LQN) formalism, an extension of the well-known Queueing Network modelling technique [16, 17, 8]. The transformation from the architectural description of a given system to its LQN model is based on PROGRES, a well-known visual language and environment for programming with graph rewriting systems [9-11]. The transformation result is an LQN model that can be analysed with existing solvers [5].
1 Introduction: It is generally accepted that performance characteristics, such as response time and throughput, play an important role in defining the quality of software products. In order to meet the performance requirements of such systems, the software developers should be able to assess and understand the effect of various desig...
Article
The modeling and analysis experience with process algebras has shown the necessity of extending them with priority, probabilistic internal/external choice, and time while preserving compositionality. The purpose of this paper is to make a further step by introducing a way to express performance measures, in order to allow the modeler to capture the QoS metrics of interest. We show that the standard technique of expressing stationary and transient performance measures as weighted sums of state probabilities and transition frequencies can be imported in the process algebra framework. Technically speaking, if we denote by n the number of performance measures of interest, in this paper we define a family of extended Markovian process algebras with a generative master–reactive slaves synchronization mechanism, including probabilities, priorities, exponentially distributed durations, and sequences of rewards of length n. Then we show that the Markovian bisimulation equivalence is a congruence for this family that preserves the specified performance measures, and we give a sound and complete axiomatization for finite terms. Finally, we present a case study conducted with the software tool TwoTowers in which we contrast the average performance of a selection of distributed algorithms for mutual exclusion modeled with these algebras.
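The reward-based scheme described here, i.e. performance measures as weighted sums of state probabilities, can be shown on a two-state continuous-time Markov chain, whose stationary distribution has a closed form; the rates and rewards are illustrative assumptions:

    # Two-state CTMC (idle <-> busy); reward 1 while busy yields utilisation.
    lam, mu = 3.0, 5.0                  # idle->busy and busy->idle rates (assumed)
    pi_idle = mu / (lam + mu)           # balance: pi_idle * lam = pi_busy * mu
    pi_busy = lam / (lam + mu)

    rewards = {"idle": 0.0, "busy": 1.0}
    utilisation = pi_idle * rewards["idle"] + pi_busy * rewards["busy"]
    print(f"utilisation = {utilisation:.3f}")   # 3/8 = 0.375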
Conference Paper
Different paradigms (client-server, mobility-based, etc.) have been suggested and adopted to cope with the complexity of designing the software architecture of distributed applications for wide-area environments, and selecting the "best" paradigm is a typical choice to be made in the very early software design phases. Several factors should drive this choice, one of them being the impact of the adopted paradigm on the application performance. Within this framework our contribution is as follows: we apply an extension of UML to better model the possible adoption of mobility-based paradigms in the software architecture of an application; we extend classical models, like queueing network models and execution graphs, to cope with mobile architectures; and we introduce a complete methodology that, starting from a software architecture described using this extended notation, generates a performance model (namely an Extended Queueing Network augmented with mobility features) that allows the designer to evaluate the convenience of introducing logical mobility into a software application.
Conference Paper
Stochastic Process Algebras have been proposed as compositional specification formalisms for performance models. In this paper, we describe a tool which aims at realising all the beneficial aspects of compositional performance modelling: the TIPPtool. It incorporates methods for compositional specification as well as solution, based on state-of-the-art techniques, and wrapped in a user-friendly graphical front end.