Model-Based Software Performance Analysis
Abstract
Poor performance is one of the main quality-related shortcomings that cause software projects to fail. Thus, the need to address performance concerns early in the software development process is fully acknowledged, and there is growing interest in the research and software industry communities in techniques, methods and tools that make it possible to manage system performance concerns as an integral part of software engineering. Model-based software performance analysis introduces performance concerns into the scope of software modeling, thus allowing the developer to carry out performance analysis throughout the software lifecycle. With this book, Cortellessa, Di Marco and Inverardi provide the cross-knowledge that allows developers to tackle software performance issues from the very early phases of software development. They explain the basic concepts of performance analysis and describe the most representative methodologies used to annotate and transform software models into performance models. To this end, they go all the way from performance primers through software and performance modeling notations to the latest transformation-based methodologies. As a result, their book is a self-contained reference text on software performance engineering, from which different target groups will benefit: professional software engineers and graduate students in software engineering will learn both the basic concepts of performance modeling and new methodologies, while performance specialists will find out how software performance models can be built and investigated.
Chapters (7)
The increasing complexity of software and its pervasiveness in everyday life have in recent years motivated growing interest in software analysis. This has mainly been directed at assessing functional properties of software systems (related to their structure and their behavior) and, in the case of safety-critical systems, dependability properties. The quantitative behavior
of a software system has gained relevance only recently with the advent of software performance analysis. This kind of analysis
aims at assessing the quantitative behavior of a software system by comprehensively analyzing its structure and its behavior,
from design to code. In this chapter we introduce the concepts and definitions that will be used throughout the entire book.
Software engineers describe static and dynamic aspects of a software system by using ad-hoc models. The static description
consists of the identification of software modules or components. The dynamics of a software system concerns its behavior
at run time. There exist many notations to describe either the statics or the dynamics of a software system. This chapter
focuses on notations that allow for behavioral descriptions, since performance is an attribute of the system dynamics. This
chapter is divided into two parts: (i) basic notations historically introduced by computer scientists to model software systems,
such as Automata, Process Algebras and Petri Nets; and (ii) the Unified Modeling Language, which has become a de facto standard for modeling
complex software systems.
A major problem for stably embedding software performance modeling and analysis within the software lifecycle resides in the
distance between notations for static and dynamic modeling of software (such as UML) and notations for modeling performance
(such as Queueing Networks). In Chap. 2 we have introduced the major notations for software modeling, whereas in this chapter we introduce basic performance modeling
notations. A question may arise at this point from readers who are not familiar with performance analysis: “If all the performance notations are able to provide the desired indices, then why use different notations for performance modeling?”. The software performance community is still far from unifying languages and notations, although some recent efforts have been devoted to building a performance ontology as a shared vocabulary of the domain (see Chap. 7). The performance notations that we describe in this chapter are well documented in the literature, where many references can be found. Although more sophisticated notations have been introduced, most of them build upon the basic notations described in this chapter.
This chapter is aimed at illustrating performance modeling and analysis issues within the software lifecycle. After having
introduced software and performance modeling notations, here the goal is to illustrate their role within the software development
process. In Chap. 5 we will describe in more detail several approaches that, based on model transformations, can be used to implement the integration of software performance analysis into the software development process. After briefly introducing the most common software lifecycle
stages, we present our unifying view of software performance analysis as integrated within a software development process
(i.e. the Q-Model).
This chapter focuses on the transformational approaches from software system specifications to performance models. These transformations
aim at filling the gap between the software development process and performance analysis by generating, from the software models, performance models that are ready to be validated. Three approaches are discussed in detail, presenting their foundations and
their application to an e-commerce case study. Moreover, the chapter briefly reviews representatives of other transformational
approaches present in the literature. All the presented approaches are discussed with respect to a set of relevant dimensions
such as software specification, performance model, evaluation methods and level of automated support for performance prediction.
In the process of software performance modeling and analysis, although these two activities do not form a strict pipeline, once a performance model has been generated or built (at whatever level of abstraction in the software lifecycle) it has to be solved to obtain the values of the performance indices of interest. It is helpful to recall here that the main targets of a performance model solution are the values of performance indices. The existing literature is rich in methodologies, techniques and tools for
solving a wide variety of performance models. This is a very active research topic and, despite the complexity of problems
encountered in this direction, in the last few decades very promising results have been obtained. Moreover, new tools have
been developed to support this key step of the software performance process. Therefore, the contents of this chapter are not limited
to the basics of model solution techniques, and a short summary of the major tools for model solution is also provided.
In this chapter some advanced issues of software performance have been collected. They address different aspects of this discipline,
not necessarily related to each other. The chapter is not meant to be a structured overview of the main open issues in the
field; rather, it is an anthology of issues that we have faced in the last few years.
... Constraints are defined at the meta-level, and the consistency relationships between models are guaranteed by means of (bidirectional) model transformations specified on source and target metamodels. With the introduction of model-driven techniques in the software lifecycle, the analysis of non-functional properties has also become effective by means of dedicated tools for the automated assessment of quality attributes [13]. ...
... This step is aimed at analyzing the design model and removing possible performance flaws. In particular, we use runtime data (i.e., traces) to augment the design model, and then we execute a performance analysis driven by antipatterns [13] on the augmented model. ...
... In this step, PADRE provides a list of possible refactoring actions for each performance antipattern. While executing one refactoring action at a time, the refactored design model is given as input to the PADRE Performance analysis step, in which a model transformation is executed to transform the UML-MARTE design model into a closed Queueing Model [13]. Thereafter, the Queueing Model is solved through the Mean-Value Analysis (MVA) algorithm [24], which allows performance indices to be computed rapidly. ...
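As a concrete illustration of the Mean-Value Analysis step mentioned above, a minimal exact-MVA solver for a closed, single-class, product-form queueing network could look like the sketch below; this is the generic textbook algorithm with hypothetical demand values, not the PADRE implementation:

```python
def mva(demands, n_customers, think_time=0.0):
    """Exact Mean-Value Analysis for a closed, single-class, product-form
    queueing network with load-independent (queueing) stations.

    demands     -- service demand D_k per station (visits x service time), seconds
    n_customers -- closed population size N
    think_time  -- client think time Z (0 for a pure batch workload)
    """
    queue = [0.0] * len(demands)               # Q_k(0) = 0
    resid, x = list(demands), 0.0
    for n in range(1, n_customers + 1):
        # An arriving job finds, on average, Q_k(n-1) jobs already at station k
        resid = [d * (1.0 + q) for d, q in zip(demands, queue)]
        x = n / (think_time + sum(resid))      # throughput X(n), response time law
        queue = [x * r for r in resid]         # Little's law applied per station
    util = [x * d for d in demands]            # utilization law: U_k = X * D_k
    return x, resid, queue, util


# Hypothetical three-tier scenario: demands (s) for web, application and DB tiers
X, R, Q, U = mva(demands=[0.005, 0.010, 0.004], n_customers=50, think_time=1.0)
print(f"throughput = {X:.1f} req/s, response time = {sum(R) * 1000:.1f} ms")
```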
Microservices have been widely impacting the software industry in recent years. Rapid evolution and continuous deployment represent specific benefits of microservice-based systems, but they may have a significant impact on non-functional properties like performance. Despite the obvious relevance of this property, there is still a lack of systematic approaches that explicitly take into account performance issues in the lifecycle of microservice-based systems. In such a context of evolution and re-deployment, Model-Driven Engineering techniques can provide major support to various software engineering activities, and in particular they allow managing the relationships between a running system and its architectural model. In this paper, we propose a model-driven integrated approach that exploits traceability relationships between the monitored data of a microservice-based running system and its architectural model to derive recommended refactoring actions that lead to performance improvement. The approach has been applied and validated on two microservice-based systems, in the domain of e-commerce and ticket reservation, respectively, whose architectural models have been designed in UML profiled with MARTE.
... The software performance engineering (SPE) field (Smith and Lloyd 2002, 2003; Cortellessa et al. 2011) deals with the above issues by assessing the quantitative behavior of software systems. SPE was defined by Smith and Lloyd (2002) as "a systematic, quantitative approach to the cost-effective development of software systems to meet performance requirements." ...
... In consequence, the idea is to identify performance flaws, even before the system is deployed, hence comprehensively analyzing the structure and behavior of the software system, from design to code. As stated by Cortellessa et al. (2011), the performance analysis should be common practice within the software development process, then introducing performance concerns in the scope of software models. But for this to be real, we need methodologies and tools. ...
... • Transforms UML-profiled models into analyzable models (Woodside et al. 2014), i.e., Petri nets and reliability models. • Allows assessing performance metrics (Cortellessa et al. 2011); concretely, response time, throughput and resource utilization. ...
In recent years, we have seen many performance fiascos in the deployment of new systems, such as the US health insurance website. This paper describes the functionality and architecture, as well as success stories, of a tool that helps address these types of issues. The tool allows assessing software designs regarding quality, in particular performance and reliability. Starting from a UML design with quality annotations, the tool applies model-transformation techniques to yield analyzable models. Such models are then leveraged by the tool to compute quality metrics. Finally, quality results, over the design, are presented to the engineer, in terms of the problem domain. Hence, the tool is an asset for the software engineer to evaluate system quality through software designs. While leveraging the Eclipse platform, the tool uses UML and the MARTE, DAM and DICE profiles for the system design and the quality modeling.
... Queueing Networks (QN) [32] are well-known performance models, good at capturing the contention for resources. QN have been successfully applied in previous work to the performance analysis of computer-based systems, software systems, and cyber-physical systems [8,17,38]. Efficient analytical solutions exist for a class of QN (separable or product-form QN), which make it possible to derive steady-state performance measures without resorting to building the underlying state space. The advantage is that the solution is faster and larger models can be solved. ...
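For the separable (product-form) class mentioned here, the classic open Jackson-network result illustrates why no explicit state space is needed: the joint steady-state distribution factorizes into per-station terms, and the usual indices follow in closed form (standard textbook formulas, independent of the cited works):

```latex
% K stations, arrival rate \lambda_k and service rate \mu_k at station k,
% with utilization \rho_k = \lambda_k / \mu_k < 1:
\pi(n_1,\dots,n_K) \;=\; \prod_{k=1}^{K} (1-\rho_k)\,\rho_k^{\,n_k}
\qquad \text{(product form)}
% from which the mean queue length and mean response time per station follow:
\bar{N}_k \;=\; \frac{\rho_k}{1-\rho_k},
\qquad
\bar{R}_k \;=\; \frac{1}{\mu_k - \lambda_k}.
```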
... The Object Management Group has adopted two standard profiles for performance, schedulability, and time annotations: an earlier UML Profile for Schedulability, Performance, and Time (SPT) defined for UML 1.x [36], and a later replacement UML Profile for Modeling and Analysis of Real-Time and Embedded systems (MARTE) for UML 2.x [37]. The adoption of SPT and MARTE laid the groundwork for research on the automatic generation of different kinds of performance models from annotated UML [8,17]. ...
... In this section we use an e-commerce system model from the literature [17] to show how performance is analyzed. The e-commerce source model contains a deployment diagram and three activity diagrams representing performance-critical scenarios selected for performance analysis. ...
This paper discusses the progress made so far and future challenges in integrating the analysis of multiple Non-Functional Properties (NFP) (such as performance, schedulability, reliability, availability, scalability, security, safety, and maintainability) into the Model-Driven Engineering (MDE) process. The goal is to guide the design choices from an early stage and to ensure that the system under construction will meet its non-functional requirements. The evaluation of the NFPs considered in this paper uses various kinds of NFP analysis models (also known as quality models) based on existent formalisms and tools developed over the years. Examples are queueing networks, stochastic Petri nets, stochastic process algebras, Markov chains, fault trees, probabilistic time automata, etc. In the MDE context, these models are automatically derived by model transformations from the software models built for development. Developing software systems that exhibit a good trade-off between multiple NFPs is difficult because the design of the software under construction and its underlying platforms have a large number of degrees of freedom spanning a very large discontinuous design space, which cannot be exhaustively explored. Another challenge in balancing the NFPs of a system under construction is due to the fact that some NFPs are conflicting—when one gets better the other gets worse—so an appropriate software process is needed to evaluate and balance all the non-functional requirements. The integration approach discussed in this paper is based on an ecosystem of inter-related heterogeneous modeling artifacts intended to support the following features: feedback of analysis results, consistent co-evolution of the software and analysis models, cross-model traceability, incremental propagation of changes across models, (semi)automated software process steps, and metaheuristics for reducing the design space size to be explored.
... Interpretability refers to the capability of the fuzzy model to express the behaviour of the system in an understandable way (Casillas et al., 2003). 3) Performance - in the general definition (Cortellessa et al., 2011), it measures how effective a software system is with respect to time constraints and allocation of resources. In the analysed papers, some authors explicitly mentioned real-time constraints, such as real-time prediction (Lee, 2019), real-time control (Chen et al., 2016), online rule learning from real-time data streams (Bouchachia and Vanaret, 2014), etc. 4) Robustness - as described in (Fernandez et al., 2005), robustness is the ability of a computer system to cope with errors during execution and erroneous input. ...
... Authors of (Modi et al., 2007) understand flexibility as the ability of a fuzzy system to form any number of clusters. 6) Efficiency - the degree to which a system performs its designated functions with minimum consumption of resources (Cortellessa et al., 2011). According to the authors of (Rajeswari and Deisy, 2019), efficiency refers to cost-effective training. ...
... Fateh (2010) analysed stability for fuzzy control of robot manipulators without knowing the explicit dynamics of a system. 8) User-friendliness - refers to ease of use as a primary objective (Cortellessa et al., 2011). In (Kóczy and Sugeno, 1996), the authors mention that their FIS is user-friendly. ...
Nowadays, data-driven fuzzy inference systems (FIS) have become popular to solve different vague, imprecise, and uncertain problems in various application domains. However, plenty of authors have identified different challenges and issues of FIS development because of its complexity, which also influences FIS quality attributes. Still, there is no common agreement on a systematic view of these complexity issues and their relationship to quality attributes. In this paper, we present a systematic literature review of 1340 scientific papers published between 1991 and 2019 on the topic of FIS complexity issues. The obtained results were systematized and classified according to the complexity issues as computational complexity, complexity of fuzzy rules, complexity of membership functions, data complexity, and knowledge representation complexity. Further, the current research was extended by extracting FIS quality attributes related to the found complexity issues. The key FIS quality attributes found, although not the only ones, are performance, accuracy, efficiency, and interpretability.
... Appropriate architectural changes driven by non-functional requirements are particularly challenging to identify, mainly because non-functional analysis is based on specific languages and tools (e.g., Petri Nets, Markov Models) that are different from typical software architecture notations like Architecture Description Languages (e.g., ACME [5]). In fact, very few ADLs embed constructs that enable the specification of non-functional properties, even though several approaches have been proposed in the last decades to generate non-functional models from software architectural descriptions [10,11]. There is instead a clear lack of automation in the backward path that basically consists in the interpretation of the analysis results and the generation of architectural feedback to be propagated back to the software architecture. ...
... In order to validate non-functional requirements on a software architecture, some approaches, mostly based on model transformations, have been proposed in the last decades to generate non-functional models from software architectural descriptions [10,11]. This generation step is also called the forward path, and it is represented by the topmost steps of Fig. 1. ...
... Constraints are expressed at the meta-level, and model transformations are based on source and target metamodels. With the introduction of model-driven techniques in the software lifecycle, the analysis of quality attributes has become effective by means of automated transformations from software artifacts to analysis models [10]. ...
Context: With the ever-increasing evolution of software systems, their architecture is subject to frequent changes due to multiple reasons, such as new requirements. Appropriate architectural changes driven by non-functional requirements are particularly challenging to identify because they concern quantitative analyses that are usually carried out with specific languages and tools. A considerable number of approaches have been proposed in the last decades to derive non-functional analysis models from architectural ones. However, there is an evident lack of automation in the backward path that brings the analysis results back to the software architecture.
Objective: In this paper, we propose a model-driven approach to support designers in improving the availability of their software systems through refactoring actions.
Method: The proposed framework makes use of bidirectional model transformations to map UML models onto Generalized Stochastic Petri Nets (GSPN) analysis models and vice versa. In particular, after availability analysis, our approach enables the application of model refactoring, possibly based on well-known fault tolerance patterns, aimed at improving the availability of the architectural model.
Results: We validated the effectiveness of our approach on an Environmental Control System. Our results show that the approach can generate: (i) an analyzable availability model from a software architecture description, and (ii) valid software architecture models back from availability models. Finally, our results highlight that the application of fault tolerance patterns significantly improves the availability in each considered scenario.
Conclusion: The approach integrates bidirectional model transformation and fault tolerance techniques to support the availability-driven refactoring of architectural models. The results of our experiment showed the effectiveness of the approach in improving the software availability of the system.
... As with all scientific and engineering disciplines, predictions can be made with models. Software performance models are mathematical abstractions whose analysis provides quantitative insights into real systems under consideration [15]. Typically, these are stochastic models based on Markov chains and other higher-level formalisms such as queueing networks, stochastic process algebra, and stochastic Petri nets (see, e.g., [15] for a detailed account). ...
... Software performance models are mathematical abstractions whose analysis provides quantitative insights into real systems under consideration [15]. Typically, these are stochastic models based on Markov chains and other higher-level formalisms such as queueing networks, stochastic process algebra, and stochastic Petri nets (see, e.g., [15] for a detailed account). Although they have proved effective in describing and predicting the performance behavior of complex software systems (e.g., [8,50]), a pressing limitation is that the current state of the art hinges on considerable craftsmanship to distill the appropriate abstraction level from a concrete software system, and relevant mathematical skills to develop, analyze, and validate the model. ...
... While model-driven approaches to software performance have been researched quite intensively [15], program-driven generation of performance models has been less explored, and has been concerned with specific kinds of applications. Indeed, the early approach by Hrischuk et al. is concerned with the generation of software performance models (specifically, layered queuing networks [19]) from a class of distributed applications whose components communicate solely by remote procedure calls [26]. ...
It is well known that building analytical performance models in practice is difficult because it requires a considerable degree of proficiency in the underlying mathematics. In this paper, we propose a machine-learning approach to derive performance models from data. We focus on queuing networks, and crucially exploit a deterministic approximation of their average dynamics in terms of a compact system of ordinary differential equations. We encode these equations into a recurrent neural network whose weights can be directly related to model parameters. This allows for an interpretable structure of the neural network, which can be trained from system measurements to yield a white-box parameterized model that can be used for prediction purposes such as what-if analyses and capacity planning. Using synthetic models as well as a real case study of a load-balancing system, we show the effectiveness of our technique in yielding models with high predictive power.
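To make the idea of a "deterministic approximation of the average dynamics in terms of ordinary differential equations" concrete, here is a minimal sketch of a fluid approximation of a closed cyclic network of single-server stations, integrated with forward Euler steps; each step plays the role of one unrolled recurrent cell, and the service rates are the parameters such a network would learn. The topology, rates and step size are assumptions, not the architecture used in the paper:

```python
import numpy as np

def fluid_step(x, mu, dt):
    """One forward-Euler step of the mean-field (fluid) ODE
        dx_k/dt = mu_{k-1} * min(x_{k-1}, 1) - mu_k * min(x_k, 1)
    for a closed cyclic network of single-server stations.
    x  -- mean queue length per station; mu -- service rates (the learnable weights)
    """
    out_flow = mu * np.minimum(x, 1.0)   # departure rate of each station
    in_flow = np.roll(out_flow, 1)       # each station feeds the next one (cyclically)
    return x + dt * (in_flow - out_flow)

# Hypothetical 3-station network with 20 circulating jobs
x = np.array([20.0, 0.0, 0.0])
mu = np.array([5.0, 3.0, 4.0])           # assumed service rates (jobs/s)
for _ in range(2000):                    # 2000 unrolled steps ~ recurrent cells
    x = fluid_step(x, mu, dt=0.01)
print("steady-state mean queue lengths:", np.round(x, 2))  # bottleneck is station 2
```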
... In combination with formal tests, these methods are furthermore particularly suitable to be used for comparing different design alternatives in order to eliminate poor or faulty designs at an early stage and to focus on the most promising solutions from the beginning. The need to face potential performance problems at early design stages using model-based approaches has been recognized as one of the most important factors for cost-effective and highly productive development of complex software systems [15,49]. ...
... The metamodel that represents the abstract syntax for the relevant structure of a resulting test model after the application of our transformation rules is shown in Figure 5.15. Accordingly, the derived behaviors of single test components thereafter correspond to the UML metamodel for state machines, which is sketched in Figure 5.16. ...
... Although our framework is designed to enable realization of translators for arbitrary simulation engines, OMNeT++ is currently the primary target tool that we use. However, translations to another simulation tool, AnyLogic, ... (Footnote: The content of this section is deliberately condensed to the most relevant and substantial amendments to the transformation rules and their practical realization.) ...
In view of increasing the efficiency of development processes in the field of software and systems engineering, model-driven techniques are coming into ever more widespread use. On the one hand, the abstract graphic models help to master the complexity of the system under development. On the other hand, formal models serve as the source for analysis and automated synthesis of a system. Thereby, model-based transformation engines and generators allow the specifications to be defined platform- and target-language-independently and to be automatically mapped to the desired target platform. Test-driven development is a promising approach in the field of agile software development. In this method, the development process is based on relatively short iteration cycles with preceding test specifications. Due to the fact that the actual implementation is consistently carried out in compliance with the previously written tests, this method leads to a higher test coverage at early development stages and thus contributes to the overall quality assurance of the resulting system. This thesis introduces the concept of Test-driven Agile Simulation (TAS) as a consistent evolution of the systems engineering methods through the combination of test- and model-driven development techniques with model-driven simulation. With the help of simulations, performance evaluations and validation of the modeled system can be carried out in the early stages of the development process, even when no program code or a fully implemented system is yet available. The primary goal of this approach is to combine the advantages of the above techniques to enable a holistic model-based approach to systems engineering with improved quality assurance. In particular, special attention is thereby paid to the modeling and validation of the overall system, taking into account the effects of communication between its components. The whole approach is founded upon the widespread and established standards for Model-Driven Architecture (MDA) provided by the Object Management Group (OMG). Using the OMG's standard modeling language UML in combination with specialized extension profiles, it is possible to specify requirements, system models as well as test models in a uniform, formal, and standard-compliant manner. The creation and presentation of the essential elements of such specifications are largely done with the help of graphical diagrams, such as class, composite structure, state, and activity diagrams. In order to facilitate behavioral modeling using detailed activity diagrams, TAS provides support for the textual activity language Alf, which is also a standard provided by the OMG. UML models can be used at different levels of abstraction for specification as well as for analysis. In the TAS approach, these models are automatically transformed into executable simulation code which can then be executed to ensure primarily the required behavior of the system and the correctness of the tests. In this way, by running tests early on the simulated model, the mutual validation of the system and test specification is performed. The simulation of the modeled system also provides insights into the expected dynamic behavior of the system in terms of functional as well as non-functional properties. For supporting the TAS approach, a versatile integrated tool environment is provided by our framework SimTAny. The framework offers seamless support for modeling, transforming, simulating, and testing UML-based specification models.
In addition to the modeling methodology of TAS, the realization of the framework itself is largely based on the standards of the OMG. Thereby, model-based approaches and standardized transformation languages are widely applied in different components of SimTAny. Other helpful features of SimTAny include traceability of requirements across modeled elements and automatically generated code artifacts, as well as the management and design of simulation experiments. The service-oriented architecture of the framework also makes it possible to meet the challenges of distributed development processes. This also simplifies extending the functionality of the framework itself and integrating it into existing development environments and processes.
... At this moment, we need to recall that the UML models are not 'per se' useful to carry out performance prediction [20]. In fact, performance prediction, based on models, needs the use of analyzable models, e.g., Petri nets [1]. ...
... Table 3 shows the values given to the attributes of stereotypes in this proof of concept. The input parameters have the following values [10,20,30]: N ranges from 5000 to 30,000 in steps of 5000. ...
... TOSCA is being supported by a number of orchestrators that, given a TOSCA blueprint and all node and relationship types used there, are able to execute it, deploying the corresponding system and managing its lifecycle. Examples of such orchestrators are Cloudify, ARIA TOSCA, Indigo, Apache Brooklyn or ECoWare [6]. ...
Big Data or Data-Intensive applications (DIAs) seek to mine, manipulate, extract or otherwise exploit the potential intelligence hidden behind Big Data. However, several practitioner surveys remark that DIAs potential is still untapped because of very difficult and costly design, quality assessment and continuous refinement. To address the above shortcoming, we propose the use of a UML domain-specific modeling language or profile specifically tailored to support the design, assessment and continuous deployment of DIAs. This article illustrates our DIA-specific profile and outlines its usage in the context of DIA performance engineering and deployment. For DIA performance engineering, we rely on the Apache Hadoop technology, while for DIA deployment, we leverage the TOSCA language. We conclude that the proposed profile offers a powerful language for data-intensive software and systems modeling, quality evaluation and automated deployment of DIAs on private or public clouds.
... Performance measurement should take into account a software system's efficiency, corresponding to the time and allocation of resources. It can be expressed through multiple metrics (indices), including response time, utilization, and throughput [15]. ...
... Analytical performance models represent a popular approach to determine timebased system behavior. Queuing Network (QN), Layered Queuing Network (LQN), and Petri Nets (PNs) are well-known examples of analytical performance models [15]. There are many solution techniques to identify performance indices by analyzing performance models. ...
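Whatever solution technique is applied, the indices named above (response time, utilization, throughput) are linked by the standard operational laws, which hold for any of these model types (textbook relations, not specific to the cited works):

```latex
% Throughput X, per-request demand D_k at resource k, N interactive users,
% think time Z:
U_k = X \cdot D_k \;\;\text{(utilization law)},
\qquad
\bar{N} = X \cdot \bar{R} \;\;\text{(Little's law)},
\qquad
\bar{R} = \frac{N}{X} - Z \;\;\text{(interactive response time law)}.
```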
Bioinformatics is a branch of science that uses computers, algorithms, and databases to solve biological problems. To achieve more accurate results, researchers need to use large and complex datasets. Sequence alignment is a well-known field of bioinformatics that allows the comparison of different genomic sequences. The comparative genomics field allows the comparison of different genomic sequences, leading to benefits in areas such as evolutionary biology, agriculture, and human health (e.g., mutation testing connects unknown genes to diseases). However, software engineering best practices, such as software performance engineering, are not taken into consideration in most bioinformatics tools and frameworks, which may lead to serious performance problems. Having an estimate of the software performance in the early phases of the Software Development Life Cycle (SDLC) is beneficial in making better decisions relating to the software design. Software performance engineering provides a reliable and observable method to build systems that can achieve their required performance goals. In this paper, we introduce the use of the Palladio Component Modeling (PCM) methodology to predict the performance of a sequence alignment system. Software performance engineering was not considered during the original system development. As a result of the performance analysis, an alternative design is proposed. Comparing the performance of the proposed design against the one already developed, a better response time is obtained. The response time of the usage scenario is reduced from 16 to 8.6 s. The study results show that using performance models at early stages in bioinformatics systems can help to achieve better software system performance.
... Software Performance Engineering (SPE) [9,28,29] aims to produce performance models early in the development cycle. Solving such models produces predictions that can trigger the process of refactoring the system design to meet performance requirements [29]. ...
... In this section, we model the system described in Section 4.1 using execution graphs (EG) [29] (solved with SPE·ED) and queuing networks combined with Petri Nets (PN) [9] (solved with JMT). The validity of the QN+PN model is assessed by comparing its results with those obtained by solving the EG model. ...
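The execution graphs mentioned here are reduced to scenario resource demands by a few simple rules (sequences add, loops multiply by the repetition count, branches weight by probability); a minimal sketch, with a hypothetical node encoding and demand values, is:

```python
def eg_demand(node):
    """Total resource demand of an execution-graph fragment, using the usual
    SPE reduction rules: sequences add up, loops multiply the body demand by
    the repetition count, branches weight alternatives by their probability.
    The dictionary-based node encoding is hypothetical.
    """
    kind = node["kind"]
    if kind == "basic":                   # elementary step with a known demand
        return node["demand"]
    if kind == "seq":                     # sequential composition
        return sum(eg_demand(c) for c in node["children"])
    if kind == "loop":                    # repeated body
        return node["count"] * eg_demand(node["body"])
    if kind == "branch":                  # probabilistic choice
        return sum(p * eg_demand(c) for p, c in node["alternatives"])
    raise ValueError(f"unknown node kind: {kind}")

# Hypothetical scenario: validate a request, then run 5 queries that hit a
# cache (90%, cheap) or the database (10%, expensive).
scenario = {"kind": "seq", "children": [
    {"kind": "basic", "demand": 0.002},
    {"kind": "loop", "count": 5, "body": {"kind": "branch", "alternatives": [
        (0.9, {"kind": "basic", "demand": 0.001}),
        (0.1, {"kind": "basic", "demand": 0.020}),
    ]}},
]}
print(f"CPU demand per request: {eg_demand(scenario) * 1000:.1f} ms")  # 16.5 ms
```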
... Performance problems have been studied for several decades in the literature, and software performance engineering emerged as the discipline focused on fostering the specification of performance-related factors [95,8,94] and reporting experiences related to their management [81,41,78,3]. Performance bugs, i.e., suboptimal implementation choices that create significant performance degradation, have been demonstrated to hurt the satisfaction of end-users in the context of desktop applications [67]. ...
... Moving to the suboptimal CPU usage performance bugs, we classified as such 32 bugs resulting in the excessive/unnecessary usage of the CPU (e.g., unneeded computation, excessive logging, etc.). We were able to identify the causes for the 32 bugs and we sub-categorized them into unneeded computation (15), energy leaks (7), excessive logging (2), and costly operations (8). Fig. 5 summarizes a bug falling in the costly operation category. ...
Recent research showed that mobile apps nowadays represent 75% of the whole usage of mobile devices. This means that the mobile user experience, while tied to many factors (e.g., hardware device, connection speed, etc.), strongly depends on the quality of the apps being used. With “quality” here we do not simply refer to the features offered by the app, but also to its non-functional characteristics, such as security, reliability, and performance. The latter is particularly important considering the limited hardware resources (e.g., memory) mobile apps can exploit. In this paper, we present the largest study to date investigating performance bugs in mobile apps. In particular, we (i) define a taxonomy of the types of performance bugs affecting Android and iOS apps; and (ii) study the survivability of performance bugs (i.e., the number of days between the bug introduction and its fixing). Our findings aim to help researchers and app developers in building performance-bug detection tools and focusing their verification and validation activities on the most frequent types of performance bugs.
... Various modeling notations have been proposed for performance modeling, including queueing networks, Markov processes, Petri nets, process algebras and simulation models [9,10,11,12]. There are various analysis techniques in the literature for measuring the performance metrics based on performance modeling [13,14,15,16]. Performance testing is intended to achieve the aforementioned objectives through executing the software system under the actual execution conditions. ...
... Automating the performance model generation has received considerable attention as a way to replace manual model generation and bring agile performance analysis to the early stages of the software development life cycle. There is an extensive literature published in the field of performance modeling [13,14,15,16]. Performance testing is another family of approaches intended to address the objectives of performance analysis. Performance, load and stress testing are terms often used interchangeably, even though there are also some definitions/interpretations that distinguish between them [17]. ...
Test automation can result in reduction in cost and human effort. If the optimal policy, i.e., the course of actions taken for the intended objective in a testing process, could be learnt by the testing system (e.g., a smart tester agent), then it could be reused in similar situations, thus leading to higher efficiency, i.e., less computational time. Automating stress testing to find performance breaking points remains a challenge for complex software systems. Common approaches are mainly based on source code or system model analysis or use-case based techniques. However, source code or system models might not be available at testing time. In this paper, we propose a self-adaptive fuzzy reinforcement learning-based performance (stress) testing framework (SaFReL) that enables the tester agent to learn the optimal policy for generating stress test cases leading to the performance breaking point without access to a performance model of the system under test. SaFReL learns the optimal policy through an initial learning, then reuses it during a transfer learning phase, while keeping the learning running in the long term. Through multiple experiments on a simulated environment, we demonstrate that our approach generates the stress test cases for different programs efficiently and adaptively without access to performance models.
https://arxiv.org/abs/1908.06900
... These models can be used to predict performance indices of the respective versions and variants, for example CPU utilisation and response times. Two common approaches are used for prediction [CMI11]: (1) simulating the models and (2) transforming the architectural models to analytical models, for example queuing networks or Petri nets and solving or simulating these models using respective tools. Once implementation artefacts become available, performance indices can be obtained by measurements, for example using profilers or application performance management (APM) tools [Heg+17]. ...
... We implemented the transformation of sequence diagrams to LQNs based on the description of Cortellessa et al. [CMI11]. They suggest a transformation from three source models, namely activity diagrams, component diagrams, and sequence diagrams, to one target LQN model. ...
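As a rough illustration of what such a transformation extracts from a sequence diagram (not the actual rule set of [CMI11]), the per-component service demands and call multiplicities that an LQN-like model needs can be aggregated from the scenario's messages; the message encoding below is hypothetical:

```python
from collections import defaultdict

def aggregate_interactions(messages):
    """Aggregate sequence-diagram messages into the per-component information an
    LQN-like model needs: total host demand per component ("task") and the number
    of requests per caller->callee pair.  messages is a list of
    (caller, callee, cpu_demand_seconds) tuples, one per synchronous call.
    """
    demand = defaultdict(float)
    calls = defaultdict(int)
    for caller, callee, cpu in messages:
        demand[callee] += cpu
        calls[(caller, callee)] += 1
    return dict(demand), dict(calls)

# Hypothetical browse-catalogue scenario
msgs = [("Client", "WebServer", 0.004),
        ("WebServer", "CatalogueService", 0.006),
        ("CatalogueService", "Database", 0.012),
        ("CatalogueService", "Database", 0.012)]
demands, calls = aggregate_interactions(msgs)
print(demands)  # {'WebServer': 0.004, 'CatalogueService': 0.006, 'Database': 0.024}
print(calls)    # e.g. ('CatalogueService', 'Database') -> 2 calls per scenario
```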
In this chapter, we discuss the diverse set of challenges, from different perspectives, that we face because of our aim to incorporate knowledge in software and processes tailored for software and systems evolution. Firstly, the discovery and externalization of knowledge about requirements, the recording and representation of design decisions, and the learning from past experiences in evolution form the human perspective, including developers, operators, and users. Secondly, performance and security induce the software quality perspective. Thirdly, round-trip engineering, testing, and co-evolution define the technical perspective. And fourthly, formal methods for evolutionary changes provide the foundation and define the formal perspective.
This chapter is devoted to the performance analysis of configurable and evolving software. Both configurability and evolution imply a high degree of software variation, that is a large space of software variants and versions, that challenges state-of-the-art analysis techniques for software. We give an overview on strategies to cope with software variation, which mostly focuses either on configuration (variants) or evolution (versions). Interestingly, we found several directions where research on variants and versions can profit from one another.
The main challenges of rational decision-making, documentation, and exploitation are intrusiveness of the activities and consistency between decisions and between decisions and artefacts. This chapter presents three approaches that address these challenges and that contribute to a continuous design decision support: They assist the design decision-making by using a catalogue of design patterns. Once the design decisions are documented, they are made visible in the program code and are accessible from other artefacts such as requirements to increase the developers’ awareness of the decisions and to enable an easy exploitation. In addition, short-cycled practices in continuous software engineering are used to support the documentation and exploitation of design decisions.
This chapter describes two perspectives on the identification and externalisation of tacit knowledge, that is expertise that is difficult to verbalise, within long-living and continuously evolving systems. During the design time of a software system, heuristics and machine learning classifiers can be used to identify and externalise tacit knowledge. For instance, externalised tacit security knowledge supports requirement engineers to understand security-related requirements. During the run time of a software system, tacit knowledge about a system’s usability can be captured through monitoring user interactions. The identification and extraction of tacit usage knowledge can improve usability-related aspects and even trigger new functional requirement requests.
Successful system evolution is dependent on knowledge about the system itself, its past and its present, as well as the environment of the system. This chapter presents several approaches to automate the acquisition of knowledge about the system’s past, for example past evolution steps, and its present, for example models of its behaviour. Based on these results, further approaches support the validation and verification of evolution steps, as well as the recommendation of evolutions to the system, as well as similar systems. The approaches are illustrated using the joint automation production system case study, the Pick and Place Unit (PPU) and Extended Pick and Place Unit (xPPU).
... This is true considering that in a given project, the same UML models can be leveraged by the engineer, in the MDE context, for multiple purposes. For example, UML models are useful for code generation, as previously mentioned, for automatic testing, and also for performance assessment [35] and dependability assessment [36]. However, UML falls short in representing those specific concepts of the blockchain domain that will eventually be needed for proving the security properties. ...
This paper proposes a model-driven approach for the security modelling and analysis of blockchain-based protocols. The modelling is built upon the definition of a UML profile, which is able to capture transaction-oriented information. The analysis is based on existing formal analysis tools. In particular, the paper considers the Tweetchain protocol, a recent proposal that leverages online social networks, i.e., Twitter, for extending blockchain to domains with small-value transactions, such as IoT. A specialized textual notation is added to the UML profile to capture features of this protocol. Furthermore, a model transformation is defined to generate a Tamarin model from the UML models, via an intermediate well-known notation, i.e., the Alice & Bob notation. Finally, Tamarin Prover is used to verify the model of the protocol against some security properties. This work extends a previous one, where the Tamarin formal models were generated by hand. A comparison of the analysis results, under both the functional and non-functional aspects, is also reported here.
... There has been a plethora of literature that discusses performance analysis and modeling techniques of software systems [11][12][13][14][15], using a multitude of different viewpoints and approaches to the problem. Traditionally, performance models in literature are designed to capture operation metrics (e.g., CPU, latency, memory consumption, I/O writes) in relation to an underlying software system and a workload. ...
Catching and attributing code change-induced performance regressions in production is hard; predicting them beforehand, even harder. A primer on automatically learning to predict performance regressions in software, this article gives an account of the experiences we gained when researching and deploying an ML-based regression prediction pipeline at Meta. In this paper, we report on a comparative study with four ML models of increasing complexity, from (1) code-opaque, over (2) Bag of Words, (3) off-the-shelf Transformer-based, to (4) a bespoke Transformer-based model, coined SuperPerforator. Our investigation shows the inherent difficulty of the performance prediction problem, which is characterized by a large imbalance of benign over regressing changes. Our results also call into question the general applicability of Transformer-based architectures for performance prediction: an off-the-shelf CodeBERT-based approach had surprisingly poor performance; our highly customized SuperPerforator architecture initially achieved prediction performance that was just on par with simpler Bag of Words models, and only outperformed them for downstream use cases. This ability of SuperPerforator to transfer to an application with few learning examples afforded an opportunity to deploy it in practice at Meta: it can act as a pre-filter to sort out changes that are unlikely to introduce a regression, truncating the space of changes in which to search for a regression by up to 43%, a 45x improvement over a random baseline. To gain further insight into SuperPerforator, we explored it via a series of experiments computing counterfactual explanations. These highlight which parts of a code change the model deems important, thereby validating the learned black-box model.
... To deal with the requirements of CPS applications, some researchers have focused on modelbased and model-driven engineering approaches to self-adaptation [47]. This technique provides early design-time evaluations of the system based on predefined metrics. ...
Self-adaptive systems provide the ability to make autonomous decisions for handling the changes affecting the functionalities of cyber-physical systems. A self-adaptive system repeatedly monitors and analyzes the local system and the environment and makes significant decisions regarding fulfilling the system's functional optimization and safety requirements. Such a decision must be made before a deadline, and the autonomy helps the system meet the timing constraints. If the model of the cyber-physical system is available, it can be used for verification against specific formal properties to reveal whether or not the system satisfies the properties. However, owing to the dynamicity of such systems, the system model needs to be reconstructed and reverified at runtime. As the model of a self-adaptive system is a composition of the local system and the environment models, the size of the composed model is relatively large. Therefore, we need efficient and scalable methods to verify the model at runtime in resource-constrained systems. Since the physical environment and the cyber part of the system usually have stochastic natures, the reflection of each behavior is modeled through probabilistic parameters about which we have some predictions. If the system observes or predicts some changes in the behavior of the environment or the local system, the parameter(s) are updated. This research focuses on the problem of runtime model size reduction in self-adaptive systems. As a solution, the model is partitioned into sub-models that can be verified/approximated independently. At runtime, if a change occurs, only the affected sub-models are subject to re-verification/re-approximation. Finally, with the help of an aggregation algorithm, the partial results from the sub-models are composed, and the verification result for the whole model is calculated. In some situations, updating the model may cause some delays in the decision-making. To meet the decision-making deadlines, the self-adaptive system must decide based on an incomplete model in which a few parameters are missing. We do this by conducting a set of behavioral simulations by random walk and matching the system's current behavior with its previous behavioral patterns. Thus, the system is equipped with a runtime parameter estimation method respecting a certain upper bound of errors. This thesis proposes a new metric for determining an upper bound of errors caused by applying the approximation technique. The metric is the basis for two proposed theorems that guarantee upper bounds of errors and accuracy of runtime verification. The evaluation results confirm that the proposed approximation framework reduces the model's size and helps decision-making within the time restrictions. The framework keeps the accuracy of the parameter estimations and verification results above 96.5% and 95%, respectively, while fully guaranteeing the system's safety.
... Performance modeling and testing are considered common approaches to accomplish the mentioned objectives at different stages of performance analysis. Although performance models [7]-[9] provide helpful insight into the behavior of a system, there are still many details of the implementation and the execution environment that might be ignored in the modeling [10]. Moreover, building a precise detailed model of the system behavior with regard to all the factors at play is often costly and sometimes impossible. ...
Performance testing with the aim of generating an efficient and effective workload to identify performance issues is challenging. Many of the automated approaches rely mainly on analyzing system models, source code, or extracting the usage pattern of the system during the execution. However, such information and artifacts are not always available. Moreover, not all transactions within a generated workload impact the performance of the system in the same way; a finely tuned workload could accomplish the test objective in an efficient way. Model-free reinforcement learning is widely used for finding the optimal behavior to accomplish an objective in many decision-making problems without relying on a model of the system. This paper proposes that if the optimal policy (way) for generating test workload to meet a test objective can be learned by a test agent, then efficient test automation would be possible without relying on system models or source code. We present a self-adaptive reinforcement learning-driven load testing agent, RELOAD, that learns the optimal policy for test workload generation and generates an effective workload efficiently to meet the test objective. Once the agent learns the optimal policy, it can reuse the learned policy in subsequent testing activities. Our experiments show that the proposed intelligent load test agent can accomplish the test objective with lower test cost compared to common load testing procedures, and results in higher test efficiency.
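To make the reinforcement-learning formulation concrete, the following is a generic tabular Q-learning skeleton for workload generation; it is only a sketch under assumed state, action, and reward definitions, not RELOAD's actual design.

# Generic tabular Q-learning skeleton for workload generation (illustrative
# only; RELOAD's actual state, action and reward design is not reproduced here).
import random
from collections import defaultdict

actions = ["more_browse", "more_search", "more_checkout"]  # hypothetical transaction mixes
q = defaultdict(float)          # Q[(state, action)] -> value
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def observe_state():
    # Placeholder: a discretized response-time / error-rate bucket of the SUT.
    return random.choice(["ok", "degraded", "saturated"])

def apply_and_reward(action):
    # Placeholder: apply the workload step and reward progress toward the
    # test objective (e.g., approaching a target response-time threshold).
    return random.random()

state = observe_state()
for _ in range(1000):
    if random.random() < epsilon:
        action = random.choice(actions)          # explore
    else:
        action = max(actions, key=lambda a: q[(state, a)])  # exploit
    reward = apply_and_reward(action)
    next_state = observe_state()
    best_next = max(q[(next_state, a)] for a in actions)
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
    state = next_state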
... Performance modeling and testing are common evaluation approaches used to accomplish the associated objectives, such as measurement of performance metrics, detection of functional problems emerging under certain performance conditions, and detection of violations of performance requirements (Jiang and Hassan 2015). Performance modeling mainly involves building a model of the software system's behavior using modeling notations such as queueing networks, Markov processes, Petri nets, and simulation models (Cortellessa et al. 2011; Harchol-Balter 2013; Kant and Srinivasan 1992). Although models provide helpful insights into the performance behavior of the system, there are also many details of implementation and execution platform that might be ignored in the modeling (Denaro et al. 2004). ...
Test automation brings the potential to reduce costs and human effort, but several aspects of software testing remain challenging to automate. One such example is automated performance testing to find performance breaking points. Current approaches to tackle automated generation of performance test cases mainly involve using source code or system model analysis or use-case-based techniques. However, source code and system models might not always be available at testing time. On the other hand, if the optimal performance testing policy for the intended objective in a testing process instead could be learned by the testing system, then test automation without advanced performance models could be possible. Furthermore, the learned policy could later be reused for similar software systems under test, thus leading to higher test efficiency. We propose SaFReL, a self-adaptive fuzzy reinforcement learning-based performance testing framework. SaFReL learns the optimal policy to generate performance test cases through an initial learning phase, then reuses it during a transfer learning phase, while keeping the learning running and updating the policy in the long term. Through multiple experiments in a simulated performance testing setup, we demonstrate that our approach generates the target performance test cases for different programs more efficiently than a typical testing process and performs adaptively without access to source code and performance models.
... Examples of excluded papers in this line are [186,209], which deal with probabilistic databases and uncertainty in complex event systems, respectively. Similarly, we did not consider the numerous works dealing with model-based performance or reliability engineering of software systems, which enrich software models with the information required for evaluation using different notations (e.g., UML Profiles such as SPT [190], MARTE [72] or DAM [129]), transform the enriched model to a formal and mathematical model supporting the evaluation (e.g., Queueing Networks [161,217], Probabilistic Process Algebras [157], Stochastic Petri Nets [182,183], Fault Trees [143] or Markov chains [214]) and evaluate the performance or reliability of the system using the tools available for the formal model [127,139]. The interested reader can consult already existing corresponding surveys about these topics, such as [131,156,175,195,200,224] in the context of databases, [125] in the context of complex event processing, or [127,128,160,168] on model-based performance or reliability engineering of software systems. ...
This paper provides a comprehensive overview and analysis of research work on how uncertainty is currently represented in software models. The survey presents the definitions and current research status of different proposals for addressing uncertainty modeling and introduces a classification framework that allows existing proposals to be compared and classified, their current status analyzed, and new trends identified. In addition, we discuss possible future research directions, opportunities and challenges.
... Code-driven generation of quantitative models. In software performance engineering, the use of Markov chains is widely accepted (e.g., [14]). However, most of the literature has focused on model-driven approaches [2], while code-driven generation of models has been less explored. ...
... Application Data Collection Service (ADC): ADC collects two performance metrics, response time and throughput. Response time is the end-to-end time that a task spends to traverse a certain path within the system (Cortellessa et al. 2011). It includes server processing time and network transmission time (Smith and Williams 2002). ...
Monitoring is the key technology for knowing the status and availability of the resources and services present in a cloud infrastructure. However, cloud monitoring faces many challenges due to inefficient monitoring capability and enormous resource consumption. We study adaptive monitoring for cloud computing platforms, and focus on the problem of balancing monitoring capability and resource consumption. We propose HSACMA, a hierarchical scalable adaptive monitoring architecture, that (1) monitors the physical and virtual infrastructure at the infrastructure layer, the middleware running at the platform layer, and the application services at the application layer; (2) achieves the scalability of the monitoring based on microservices; and (3) adaptively adjusts the monitoring interval and data transmission strategy according to the running state of the cloud computing system. Moreover, we study the case of a real production system deployed and running on the cloud computing platform CloudStack to verify the effectiveness of applying our architecture in practice. The results show that HSACMA can guarantee the accuracy and real-time performance of monitoring and reduce resource consumption.
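As a minimal illustration of adaptive interval adjustment (not HSACMA's actual policy), the following sketch shortens the sampling interval when the monitored system is loaded or changing quickly and lengthens it when the system is stable; the thresholds and bounds are assumptions.

# Minimal illustration of adaptive monitoring (not HSACMA's actual policy):
# sample more often when load is high or volatile, back off when the system
# is quiet, within fixed bounds.
MIN_INTERVAL, MAX_INTERVAL = 1.0, 60.0  # seconds, assumed bounds

def next_interval(current: float, cpu_util: float, util_delta: float) -> float:
    if cpu_util > 0.8 or abs(util_delta) > 0.2:
        return max(MIN_INTERVAL, current / 2)   # stressed or changing fast
    if cpu_util < 0.3 and abs(util_delta) < 0.05:
        return min(MAX_INTERVAL, current * 2)   # stable and idle
    return current

interval = 10.0
for util, delta in [(0.9, 0.3), (0.85, 0.1), (0.2, 0.01)]:
    interval = next_interval(interval, util, delta)
    print(interval)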
... Some of the well-known modeling notations are queuing networks, Markov processes, and Petri nets, which are used together with analysis techniques to address performance modeling [16,17,18]. ...
Background: End-user satisfaction is not only dependent on the correct functioning of the software systems but is also heavily dependent on how well those functions are performed. Therefore, performance testing plays a critical role in making sure that the system performs the intended functionality responsively. Load test generation is a crucial activity in performance testing. Existing approaches for load test generation require expertise in performance modeling, or they are dependent on the system model or the source code. Aim: This thesis aims to propose and evaluate a model-free learning-based approach for load test generation, which does not require access to the system models or source code. Method: In this thesis, we treated the problem of optimal load test generation as a reinforcement learning (RL) problem. We proposed two RL-based approaches using q-learning and deep q-network for load test generation. In addition, we demonstrated the applicability of our tester agents on a real-world software system. Finally, we conducted an experiment to compare the efficiency of our proposed approaches to a random load test generation approach and a baseline approach. Results: Results from the experiment show that the RL-based approaches learned to generate effective workloads with smaller sizes and in fewer steps. The proposed approaches led to higher efficiency than the random and baseline approaches. Conclusion: Based on our findings, we conclude that RL-based agents can be used for load test generation, and they act more efficiently than the random and baseline approaches.
full text: https://arxiv.org/abs/2007.12094v1
... Performance antipatterns [8,31] and stochastic modelling (e.g., using queueing networks, stochastic Petri nets, and Markov models [7,16,33]) have long been used in conjunction, to analyse performance of software systems and to drive system refactoring when requirements are violated. End-to-end approaches supporting this analysis and refinement processes have been developed (e.g., [4,9,20]), often using established tools for the simulation or formal verification of stochastic models of the software system under development (SUD). ...
Refactoring is often needed to ensure that software systems meet their performance requirements in deployments with different operational profiles, or when these operational profiles are not fully known or change over time. This is a complex activity in which software engineers have to choose from numerous combinations of refactoring actions. Our paper introduces a novel approach that uses performance antipatterns and stochastic modelling to support this activity. The new approach computes the performance antipatterns present across the operational profile space of a software system under development, enabling engineers to identify operational profiles likely to be problematic for the analysed design, and supporting the selection of refactoring actions when performance requirements are violated for an operational profile region of interest. We demonstrate the application of our approach for a software system comprising a combination of internal (i.e., in-house) components and external third-party services.
... Performance problems at architecture level (IIb) are well-researched [17]. Methods for finding them include using systematic experiments [22] and model analysis [5], [19]. ...
... Performance modeling is often based on building a performance model of the system behavior and measuring the target performance metrics. It can be done using various modeling notations like Markov processes, queueing networks, Petri nets and simulation models [2,3,4,5,6,7]. Performance testing is considered a family of performance-related testing techniques intended for addressing the objectives of performance analysis. ...
Automated testing activities like automated test case generation imply a reduction in human effort and cost, with the potential to impact the test coverage positively. If the optimal policy, i.e., the course of actions adopted, for performing the intended test activity could be learnt by the testing system, i.e., a smart tester agent, then the learnt policy could be reused in analogous situations, which leads to even more efficiency in terms of required effort. Performance testing under stress execution conditions, i.e., stress testing, which involves providing extreme test conditions to find the performance breaking points, remains a challenge, particularly for complex software systems. Some common approaches for generating stress test conditions are based on source code or system model analysis, or on use-case-based design approaches. However, source code or precise system models might not be easily available for testing. Moreover, drawing a precise performance model is often difficult, particularly for complex systems. In this research, I have used model-free reinforcement learning to build a self-adaptive autonomous stress testing framework which is able to learn the optimal policy for stress test case generation without having a model of the system under test. The conducted experimental analysis shows that the proposed smart framework is able to generate the stress test conditions for different software systems efficiently and adaptively without access to performance models.
... The extraction of information about the interactions between components differs between design time and run time. At design time, models can be created based on designer expertise and design documents as proposed, e.g., by [SW01], [MG00], [PW02], and [CDI11]. Once a runnable state is reached, monitoring logs can be generated to derive an effective architecture that includes only the executed system elements [IWF07]. ...
Software performance is of particular relevance to software system design, operation, and evolution because it has a significant impact on key business indicators. During the life-cycle of a software system, its implementation, configuration, and deployment are subject to multiple changes that may affect the end-to-end performance characteristics. Consequently, performance analysts continually need to provide answers to and act based on performance-relevant concerns. To ensure a desired level of performance, software performance engineering provides a plethora of methods, techniques, and tools for measuring, modeling, and evaluating performance properties of software systems. However, answering performance concerns is subject to a significant semantic gap between the level on which performance concerns are formulated and the technical level on which performance evaluations are actually conducted. Performance evaluation approaches come with different strengths and limitations concerning, for example, accuracy, time-to-result, or system overhead. For the involved stakeholders, it can be an elaborate process to reasonably select, parameterize and correctly apply performance evaluation approaches, and to filter and interpret the obtained results. An additional challenge is that available performance evaluation artifacts may change over time, which requires switching between different measurement-based and model-based performance evaluation approaches during the system evolution. For model-based analysis, the effort involved in creating performance models can also outweigh their benefits. Overcoming these deficiencies and enabling an automatic and holistic evaluation of performance throughout the software engineering life-cycle requires an approach that: (i) integrates multiple types of performance concerns and evaluation approaches, (ii) automates performance model creation, and (iii) automatically selects an evaluation methodology tailored to a specific scenario. This thesis presents a declarative approach, called Declarative Performance Engineering (DPE), to automate performance evaluation based on a human-readable specification of performance-related concerns. To this end, we separate the definition of performance concerns from their solution. The primary scientific contributions presented in this thesis are: A declarative language to express performance-related concerns and a corresponding processing framework: We provide a language to specify performance concerns independent of a concrete performance evaluation approach. Besides the specification of functional aspects, the language optionally allows non-functional tradeoffs to be included. To answer these concerns, we provide a framework architecture and a corresponding reference implementation to process performance concerns automatically. It allows arbitrary performance evaluation approaches to be integrated and is accompanied by reference implementations for model-based and measurement-based performance evaluation. Automated creation of architectural performance models from execution traces: The creation of performance models can be subject to significant efforts outweighing the benefits of model-based performance evaluation. We provide a model extraction framework that creates architectural performance models based on execution traces, provided by monitoring tools. The framework separates the derivation of generic information from model creation routines.
To derive generic information, the framework combines state-of-the-art extraction and estimation techniques. We isolate object creation routines specified in a generic model builder interface based on concepts present in multiple performance-annotated architectural modeling formalisms. To create model extraction for a novel performance modeling formalism, developers only need to write object creation routines instead of creating model extraction software from scratch when reusing the generic framework. Automated and extensible decision support for performance evaluation approaches: We present a methodology and tooling for the automated selection of a performance evaluation approach tailored to the user concerns and application scenario. To this end, we propose to decouple the complexity of selecting a performance evaluation approach for a given scenario by providing solution approach capability models and a generic decision engine. The proposed capability meta-model enables describing functional and non-functional capabilities of performance evaluation approaches and tools at different granularities. In contrast to existing tree-based decision support mechanisms, the decoupling approach allows characteristics of solution approaches to be easily updated and new rating criteria to be appended, thereby staying abreast of evolution in performance evaluation tooling and system technologies. Time-to-result estimation for model-based performance prediction: The time required to execute a model-based analysis plays an important role in different decision processes. For example, evaluation scenarios might require the prediction results to be available in a limited period of time such that the system can be adapted in time to ensure the desired quality of service. We propose a method to estimate the time-to-result for model-based performance prediction based on model characteristics and analysis parametrization. We learn a prediction model using performance-relevant features that we determined using statistical tests. We implement the approach and demonstrate its practicability by applying it to analyze a simulation-based multi-step performance evaluation approach for a representative architectural performance modeling formalism. We validate each of the contributions based on representative case studies. The evaluation of automatic performance model extraction for two case study systems shows that the resulting models can accurately predict the performance behavior. Prediction accuracy errors are below 3% for resource utilization and mostly less than 20% for service response time. The separate evaluation of the reusability shows that the presented approach lowers the implementation efforts for automated model extraction tools by up to 91%. Based on two case studies applying measurement-based and model-based performance evaluation techniques, we demonstrate the suitability of the declarative performance engineering framework to answer multiple kinds of performance concerns customized to non-functional goals. Subsequently, we discuss the reduced effort in applying performance analyses using the integrated and automated declarative approach. Also, the evaluation of the declarative framework reviews the benefits and savings of integrating performance evaluation approaches into the declarative performance engineering framework. We demonstrate the applicability of the decision framework for performance evaluation approaches by applying it to depict existing decision trees.
Then, we show how we can quickly adapt to the evolution of performance evaluation methods, which is challenging for static tree-based decision support systems. In doing so, we show how to cope with the evolution of functional and non-functional capabilities of performance evaluation software and explain how to integrate new approaches. Finally, we evaluate the accuracy of the time-to-result estimation for a set of machine-learning algorithms and different training datasets. The predictions exhibit a mean percentage error below 20%, which can be further improved by including performance evaluations of the considered model into the training data. The presented contributions represent a significant step towards an integrated performance engineering process that combines the strengths of model-based and measurement-based performance evaluation. The proposed performance concern language in conjunction with the processing framework significantly reduces the complexity of applying performance evaluations for all stakeholders. Thereby it enables performance awareness throughout the software engineering life-cycle. The proposed performance concern language removes the semantic gap between the level on which performance concerns are formulated and the technical level on which performance evaluations are actually conducted by the user.
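A hypothetical sketch of the core idea, separating the concern from its solution and dispatching to a suitable back end, might look as follows; the class, field, and function names are illustrative assumptions, not the thesis's actual concern language or framework API.

# Hypothetical illustration only: the thesis defines its own concern language;
# this sketch merely shows the idea of separating "what is asked" from "how it
# is evaluated" and dispatching to a suitable back end.
from dataclasses import dataclass

@dataclass
class PerformanceConcern:
    metric: str                 # e.g. "response_time"
    target: str                 # e.g. "CheckoutService"
    question: str               # e.g. "p95 < 500ms under 2x load?"
    max_time_to_result: float   # seconds the analyst is willing to wait

def choose_backend(concern: PerformanceConcern, model_available: bool,
                   estimated_model_analysis_time: float) -> str:
    # Prefer model-based prediction when a model exists and its estimated
    # time-to-result fits the deadline; otherwise fall back to measurements.
    if model_available and estimated_model_analysis_time <= concern.max_time_to_result:
        return "model-based simulation"
    return "measurement-based load test"

concern = PerformanceConcern("response_time", "CheckoutService",
                             "p95 < 500ms under 2x load?", max_time_to_result=600)
print(choose_backend(concern, model_available=True, estimated_model_analysis_time=120))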
... Software architectures represent another key competence that will be exploited in three different directions: (i) for architecting the exoskeleton, (ii) for enabling automated synthesis and enforcing techniques, and (iii) for developing recommendations in order to provide stakeholders with practical guidelines to enable the deployment, execution, and run-time management of the exoskeleton. The group has a consolidated experience in architectural languages [43], [44], architecture analysis [48], [26], architectural connectors synthesis [16], [17], [18], [19], [40], [56], as well as architectural works in the robotic [34] or automotive domain [49], [59]. Techniques to characterize privacy and ethics: Privacy concerns data and has been historically addressed by means of permission systems that comprise both specification of access policies and their enforcement [50]. ...
Citizens of the digital world are threatened. The digital systems that surround them are increasingly able to make autonomous decisions over and above them and on their behalf. They feel that their moral rights, as well as the social, economic and political spheres, can be affected by the behavior of such systems. Although unavoidable, the digital world is becoming uncomfortable and potentially hostile to its users as human beings and as citizens. Notwithstanding the introduction of the GDPR and of initiatives to establish criteria on software transparency and accountability, users feel vulnerable and unprotected. In this paper we present EXOSOUL, an overarching research framework that aims at building a software personalized exoskeleton that enhances and protects users by mediating their interactions with the digital world according to their own ethics of actions and privacy of data. The exoskeleton disallows or adapts the interactions that would result in unacceptable or morally wrong behaviors according to the ethics and privacy preferences of the users. With their software shield, users will feel empowered and in control, and in a better balance of forces with the other actors of the digital world. To reach the breakthrough result of automatically building a personalized exoskeleton, EXOSOUL identifies multidisciplinary challenges never touched before: (i) defining the scope for and inferring citizens' ethical preferences; (ii) treating privacy as an ethical dimension managed through the disruptive notion of active data; and (iii) automatically synthesizing ethical actuators, i.e., connector components that mediate the interaction between the user and the digital world to enforce her ethical preferences. In this paper we discuss the research challenges of EXOSOUL in terms of their feasibility and risks.
... QNs have been widely and successfully applied to the hw/sw performance assessment domain (Cortellessa et al. 2011;Petriu et al. 2012) and several implementations have been developed by providing editors and analysis environments with QN models. Many existing approaches use QNs as first-class entities for performance analysis (Arcelli and Cortellessa 2013;Trubiani et al. 2014;Altamimi et al. 2016;Arcelli, Cortellessa, and Leva 2016). ...
This paper describes the design of an Internet of Things (IoT) system for building evacuation. There are two main design decisions for such systems: i) specifying the platform on which the IoT intelligent components should be located; and ii) establishing the level of collaboration among the components. For safety-critical systems, such as evacuation, real-time performance and evacuation time are critical. The approach aims to minimize computational and evacuation delays and uses Queuing Network (QN) models. The approach was tested, by computer simulation, on a real exhibition venue in the Alan Turing Building, Italy, that has 34 sets of IoT sensors and actuators. Experiments were performed that tested the effect of segmenting the physical space into different sized virtual cubes. Experiments were also conducted concerning the distribution of the software architecture. The results show that using a centralized architectural pattern with a segmentation of the space into large cubes is the only practical solution.
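The paper's QN models are not reproduced here, but as a minimal illustration of the kind of quantity such models yield, the mean time spent at a single IoT processing node can be approximated with an M/M/1 queue; the arrival and service rates below are hypothetical.

# Minimal illustration (not the paper's QN model): mean response time of a
# single IoT processing node approximated as an M/M/1 queue.
def mm1_response_time(arrival_rate: float, service_rate: float) -> float:
    assert arrival_rate < service_rate, "node would be unstable"
    return 1.0 / (service_rate - arrival_rate)

# Hypothetical numbers: 20 sensor events/s arriving, node serves 25 events/s.
print(mm1_response_time(20.0, 25.0))  # 0.2 s mean time spent at the node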
... (1) Intermediate formats that sit between software models and performance models and enable the exchange and reusability of performance information of a system, (2) Transformation approaches that are available for model-to-model transformations, (3) Performance annotations that are a way to integrate performance information, requirements and the operational profiles, and also (4) several approaches introduced in the literature which adapt MDD to automate the derivation and detection of performance models from the software behavioral models, called Smodels, and remove performance problems in software models. A valuable book [25] reviews all the necessary concepts to analyze the performance of software systems. It also explains the most used methodologies in the relevant literature to convert and annotate Smodels to Pmodels. ...
With the increasing importance of performance in industrial and business software systems, efficient approaches to model-based performance engineering are becoming an inherent part of the development life cycle. Performance engineering at abstract levels of the software development process has an important effect on the eventual success of the software, by providing knowledge of optimal alternative designs. This paper introduces a performance-driven software development approach and a prediction technique that considers performance quality attributes at the abstract levels of software development in an incremental refinement manner. The approach provides a Z-based specification formalism at the meta-model level, whose instance models are automatically transformed into a formal performance analytical model called a refinable state machine (RSM). This paper analyses the throughput of an RSM by performing an approximation algorithm on two experimental case studies to determine weights of subjective performance characteristics. The approach can use the inherent performance parameters according to product usage and derive an incremental probabilistic policy determination method under design decisions in the performance plan hierarchy. The results exhibit significant support of abstract-level performance profiling in terms of the throughput values.
Efficiency has been a pivotal aspect of the software industry since its inception, as a system that serves the end-user fast, and the service provider cost-efficiently, benefits all parties. A database management system (DBMS) is an integral part of effectively all software systems, and therefore it is logical that different studies have compared the performance of different DBMSs in hopes of finding the most efficient one. This survey systematically synthesizes the results and approaches of studies that compare DBMS performance and provides recommendations for industry and research. The results show that performance is usually tested in a way that does not reflect real-world use cases, and that tests are typically reported in insufficient detail for replication or for drawing conclusions from the stated results.
Uncertainty is present in model-based developments in many different ways. In the context of composing model-based analysis tools, this chapter discusses how the combination of different models can increase or decrease the overall uncertainty. It explores how such uncertainty could be more explicitly addressed and systematically managed, with the goal of defining a conceptual framework to deal with and manage it. We proceed towards this goal both with a theoretical reasoning and a practical application through an example of designing a peer-to-peer file-sharing protocol. We distinguish two main steps: (i) software system modelling and (ii) model-based performance analysis by highlighting the challenges related to the awareness that model-based development in software engineering needs to coexist with uncertainty. This core chapter addresses Challenge 5 introduced in Chap. 3 of this book (living with uncertainty).
Any analysis produces results to be used by analysis users to understand and improve the system being analysed. But what are the ways in which analysis results can be exploited? And how is exploitation of analysis results related to analysis composition? In this chapter, we provide a conceptual model of analysis-result exploitation and a model of the variability and commonalities between different analysis approaches, leading to a feature-based description of results exploitation. We demonstrate different instantiations of our feature model in nine case studies of specific analysis techniques. Through this discussion, we also showcase different forms of analysis composition, leading to different forms of exploitation of analysis results for refined analysis, improving analysis mechanisms, exploring results, etc. We, thus, present the fundamental terminology for researchers to discuss exploitation of analysis results, including under composition, and highlight some of the challenges and opportunities for future research.
The performance of a software system is an important feature in determining the quality of the developed software. Performance testing of modular software is a time-consuming and costly task. Several performance testing tools (PTTs) are available on the market which help software developers to test their software's performance. In this paper, we propose an integrated multiobjective optimization model for the evaluation and selection of the best-fit PTT for a modular software system. The total performance tool cost is minimized and the fitness evaluation score of the PTTs is maximized. The fitness evaluation of a PTT is done based on various attributes by making use of the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). The model allows the software developers to select the number of PTTs as per their requirement. The individual performance of the modules is considered based on some performance properties. Reusability constraints are considered, as a PTT can be used in the same module to test different properties and/or it can be used in different modules to test the same or different performance properties. A real-world case study from the domain of enterprise resource planning (ERP) is used to show the working of the suggested optimization model.
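The following sketch shows the TOPSIS scoring step named above, using a hypothetical decision matrix of candidate PTTs, attribute weights, and cost/benefit flags; it illustrates the technique itself, not the paper's actual data.

# Sketch of TOPSIS scoring for ranking performance testing tools (PTTs);
# the attributes, weights and values below are hypothetical.
import numpy as np

# rows = candidate PTTs, columns = attributes (e.g. throughput support,
# protocol coverage, licence cost); is_cost marks "smaller is better".
matrix = np.array([[7.0, 9.0, 500.0],
                   [9.0, 6.0, 300.0],
                   [8.0, 8.0, 800.0]])
weights = np.array([0.4, 0.3, 0.3])
is_cost = np.array([False, False, True])

norm = matrix / np.linalg.norm(matrix, axis=0)        # vector normalization
weighted = norm * weights
ideal_best = np.where(is_cost, weighted.min(axis=0), weighted.max(axis=0))
ideal_worst = np.where(is_cost, weighted.max(axis=0), weighted.min(axis=0))
d_best = np.linalg.norm(weighted - ideal_best, axis=1)
d_worst = np.linalg.norm(weighted - ideal_worst, axis=1)
closeness = d_worst / (d_best + d_worst)              # higher = closer to ideal
print(closeness)  # fitness evaluation scores feeding the optimization model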
When SaaS software suffers from response time degradation, scaling the deployment resources that support the operation of the software can improve the response time, but it also increases costs due to the additional deployment resources. For the purpose of saving the costs of deployment resources while improving response time, scaling out the SaaS software is an alternative approach. However, how scaling out the software affects response time while saving deployment resources is an important issue for effectively improving response time. Therefore, in this paper, we propose a method for analysing the impact of scaling out software on response time. Specifically, we define the scaling-out operation of SaaS software and then leverage queueing theory to analyse the impact of the scaling-out operation on response time. According to the conclusions of the impact analysis, we further derive an algorithm for improving response time based on scaling out software without using additional deployment resources. Finally, the effectiveness of the analysis's conclusions and the proposed algorithm is validated by a practical case, which indicates that the conclusions of the impact analysis obtained in this paper can play a guiding role in scaling out software and improving response time effectively while saving deployment resources.
The deployment change of SaaS (Software as a Service) software will influence its response time, which is an important performance metric. Therefore, studying the impact of deployment change on the response time of SaaS software could contribute to performance improvement of the software. However, there are few performance analysis methods which can directly analyze the relationship between deployment change and response time of SaaS software. In this paper, we propose an approach which provides the impact analysis of specific deployment change operations on response time of SaaS software explicitly. Specifically, we present an evaluation method for the response time of SaaS software in specific deployment scheme by leveraging queueing theory. With mathematical derivation based on the proposed evaluation method, we qualitatively analyze the variation trend of response time with respect to deployment change. Furthermore, we study the relationship between two specific types of deployment change operations and response time variation of SaaS software, which is used to propose a response time improvement method based on deployment change. Finally, the effectiveness of the analysis conclusions and the proposed method in this paper is validated by practical cases, which indicates that adjusting deployment scheme according to the conclusions obtained in this paper can be helpful in improving the response time of SaaS software.
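As a minimal illustration of the kind of queueing-theoretic comparison such an analysis builds on (not the papers' actual model of the scaling-out operation), an M/M/c mean response time helper based on the Erlang C formula can contrast deployment alternatives; the rates and instance counts below are hypothetical.

# Generic M/M/c helper (mean response time via Erlang C); illustrative only,
# the actual queueing model of the scaling-out operation is not shown.
from math import factorial

def mmc_response_time(arrival_rate: float, service_rate: float, servers: int) -> float:
    a = arrival_rate / service_rate          # offered load
    rho = a / servers
    assert rho < 1, "system would be unstable"
    erlang_c = (a**servers / factorial(servers)) / (
        (1 - rho) * sum(a**k / factorial(k) for k in range(servers))
        + a**servers / factorial(servers)
    )
    wait = erlang_c / (servers * service_rate - arrival_rate)
    return wait + 1.0 / service_rate

# Hypothetical comparison: same total demand served by 1 vs 2 software instances.
print(mmc_response_time(8.0, 10.0, 1))   # single instance
print(mmc_response_time(8.0, 5.0, 2))    # two instances, each half as fast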
The increasing topological changes in the urban environment have caused human civilization to be subjected to an increased risk of emergencies. Fires, earthquakes, floods, hurricanes, overcrowding, or even pandemic viruses endanger human lives. Hence, designing infrastructures to handle possible emergencies has become an ever-increasing need. The safe evacuation of occupants from the building takes precedence when dealing with the necessary mitigation and disaster risk management. Nowadays, evacuation plans appear as static maps, designed by civil protection operators, that provide some pre-selected routes through which pedestrians should move in case of emergency. The static models may work in lightly congested, spacious areas. However, the situation can hardly be assumed to be static in case of a disaster. The static emergency map exposes several limitations such as: i) ignoring abrupt congestion, obstacles or dangerous routes and areas; ii) leading all pedestrians to the same route and making specific areas highly crowded; iii) ignoring the individual movement behavior of people and special categories (e.g. elderly, children, disabled); iv) lack of proper training for security operators in various scenarios; v) lack of comprehensive situational awareness for evacuation managers. By simply tracking people in an indoor area, possible congestions can be detected, the best evacuation paths can be periodically re-calculated, and the minimum evacuation time under ever-changing emergency conditions can be evaluated. A well-designed internet of things (IoT) infrastructure can provide various solutions both at design-time and in real-time. At design-time, a building architecture can be assessed regarding safety conditions, even before its (re-)construction. Simulations are among the feasible solutions to assess the evacuability of buildings and the feasibility of evacuation plans. At design-time, an IoT-based evacuation system provides: i) Safety considerations for the building architecture in the early (re-)construction phase; ii) Finding out the building dimensions that lead to an optimum evacuation performance; iii) Bottleneck discovery that is tied to the building characteristics; iv) Comparing various routing optimization models to pick the best-matching one as the basis of the real-time evacuation system; v) Visualizing dynamic evacuation executions to demonstrate a variety of scenarios to security operators and train them. In real-time, an IoT architecture supports the gathering of data that will be used for dynamic monitoring and evacuation planning. In real-time, an IoT-based evacuation system provides: i) Optimal solutions that can be continuously updated, so evacuation guidelines can be adjusted according to visitors' positions, which evolve over time; ii) Paths that suddenly become unfeasible can automatically be discarded by the system; iii) The model can be incorporated into a mobile app supporting emergency units to evacuate closed or open spaces. Since the evacuation time of people from a scene of an emergency (e.g. a building) is crucial, IoT-based evacuation infrastructures need to have an optimization algorithm at their core. In order to reduce the time taken for evacuation, a better and more robust exit strategy is developed. Some algorithms are used to model participating agents in terms of their exit patterns and strategies and to evaluate their movement behavior based on performance, efficiency, and practicality attributes.
The algorithms normally provide a way to evacuate the occupants as quickly as possible. While this research and all associated experiences are carried out in Italy, we see the problem from an international viewpoint. Within this thesis, we carried out the following research and experiments to analyze and develop an IoT-based emergency evacuation system: The first two chapters present systematic mapping studies to review the state of the art and help to design high-quality IoT architectures. More specifically, chapter one investigates IoT software architectural styles, and chapter two assesses architectural fault-tolerance. Chapter three proposes some adaptive architectural styles and their associated energy consumption qualities. After taking the preliminary design decisions about the architecture, in chapter four we propose a core computational component in charge of minimizing the time necessary to evacuate people from a building. We developed a network flow algorithm that decomposes the building space and time into finite elements: unit cells and time slots. In chapter five, we assessed the effectiveness of the IoT system in providing good real-time and design-time solutions. Chapter six focuses on real-time performance and reduces computational and evacuation delays to a minimum by using a queuing network. During our research, we designed and implemented a hardware and software IoT infrastructure. We installed sensors throughout the selected building, whose data constantly feed into the algorithm to show the best evacuation routes to the occupants. We further realized that such a system may lack accuracy since: i) a pure optimization approach can lack realism as building occupants may not evacuate immediately; ii) occupants may not always follow the recommended optimal paths due to various behavioral and organizational issues; iii) the physical space may prevent an effective emergency evacuation. Therefore, in chapter seven we introduced a simulation-optimization approach. The approach allows us to test more realistic evacuation scenarios and compare them with an optimal approach. We simulated the optimized Netflow algorithm under different realistic behavioral agent-based modeling (ABM) constraints, such as social attachment, and improved the IoT system accordingly. This thesis makes the following main contributions: Contributions on new and legitimate IoT architectures: - Addressing an up-to-date state-of-the-art class for IoT architectural styles and patterns. - Proposing a set of self-adaptive IoT patterns and assessing their specific quality attributes (fault-tolerance, energy consumption, and performance). - Designing an IoT infrastructure and testing its performance in both real-time and design-time applications. Algorithmic contribution: - Developing a network flow algorithm that facilitates minimizing the time necessary to evacuate people from a scene of a disaster. Evaluation / experimentation environment contributions: - Modeling various social agents and their interactions during an emergency to improve the IoT system accordingly. - Evaluating the system by using empirical and real case studies.
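As a minimal illustration of the network-flow idea (omitting the thesis's time expansion into unit cells and time slots), the following sketch routes occupants as flow on a small, hypothetical capacity graph using networkx.

# Illustrative only: routes occupants as flow on a static capacity graph;
# the thesis's actual time-expanded network flow algorithm is not reproduced.
import networkx as nx

G = nx.DiGraph()
# Hypothetical building graph: nodes are areas, capacities are people per time slot.
G.add_edge("room_A", "corridor", capacity=30)
G.add_edge("room_B", "corridor", capacity=20)
G.add_edge("corridor", "stairs", capacity=25)
G.add_edge("corridor", "emergency_exit", capacity=15)
G.add_edge("stairs", "emergency_exit", capacity=25)

# A super-source feeds the occupied rooms with their current occupants.
G.add_edge("source", "room_A", capacity=28)   # 28 occupants in room A
G.add_edge("source", "room_B", capacity=12)   # 12 occupants in room B

flow_value, flow_dict = nx.maximum_flow(G, "source", "emergency_exit")
print(flow_value)   # occupants that can leave per time slot under these capacities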
Fires, earthquakes, floods, hurricanes, overcrowding, or even pandemic viruses endanger human lives. Hence, designing infrastructures to handle possible emergencies has become an ever-increasing need. The safe evacuation of occupants from the building takes precedence when dealing with the necessary mitigation and disaster risk management. This thesis deals with designing an IoT system to provide safe and quick evacuation suggestions. The IoT-based evacuation system provides optimal evacuation paths that can be continuously updated based on run-time sensory data, so evacuation guidelines can be adjusted according to occupants' positions, which evolve over time. This thesis makes the following main contributions: i) Addressing an up-to-date state-of-the-art class for IoT architectural styles and patterns; ii) Proposing a set of self-adaptive IoT patterns and assessing their specific quality attributes (fault-tolerance, energy consumption, and performance); iii) Designing an IoT infrastructure and testing its performance in both real-time and design-time applications; iv) Developing a network flow algorithm that facilitates minimizing the time necessary to evacuate people from a scene of a disaster; v) Modeling various social agents and their interactions during an emergency to improve the IoT system accordingly; vi) Evaluating the system by using empirical and real case studies.
Functional specifications describe what program components can do: the sufficient conditions to invoke components’ operations. They allow us to reason about the use of components in a closed world setting, where components interact with known client code, and where the client code must establish the appropriate pre-conditions before calling into a component. Sufficient conditions are not enough to reason about the use of components in an open world setting, where components interact with external code, possibly of unknown provenance, and where components may evolve over time. In this open world setting, we must also consider the necessary conditions, i.e., the conditions without which an effect will not happen. In this paper we propose the hainmail specification language for writing holistic specifications that focus on necessary conditions (as well as sufficient conditions). We give a formal semantics for hainmail, and discuss several examples. The core of hainmail has been mechanised in the Coq proof assistant.
This report describes the 2020 Competition on Software Testing (Test-Comp), the 2nd edition of a series of comparative evaluations of fully automatic software test-case generators for C programs. The competition provides a snapshot of the current state of the art in the area, and has a strong focus on replicability of its results. The competition was based on 3 230 test tasks for C programs. Each test task consisted of a program and a test specification (error coverage, branch coverage). Test-Comp 2020 had 10 participating test-generation systems. Keywords: Software Testing, Test-Case Generation, Competition, Software Analysis, Software Validation, Test Validation, Test-Comp, Benchmarking, Test Coverage, Bug Finding
BenchExec
TestCov
Data logging is helpful for the operation and maintenance manager of SaaS-based solutions to diagnose performance issues. However, long-running SaaS software may generate huge amounts of log data which are difficult to analyze, and there is a lack of a systematic approach to collecting the running logs and of a unified data structure to normalize the performance-related data. All of these threaten the timeliness of SaaS performance issue diagnosis. In this paper, we propose an architecture for log collection and analysis to support the assessment of performance and the diagnosis of performance issues of SaaS-based applications in cloud computing. The architecture has a three-tier structure and includes a pivot data model to integrate heterogeneous logs. The two high-level metrics in the model, Average Response Time (ART) and Request Timeout Rate (RTR), are calculated by statistical measurement, and the lower-level metrics are monitored in real time. Operation and maintenance managers can evaluate the performance of the SaaS software based on the high-level metrics, then locate the issues in a timely manner from the low-level metrics and take appropriate measures. This study then presents general-purpose techniques for the architecture to support real-time collection, access, computation, and storage of big log data. The proposal has been implemented and validated in a case study.
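A minimal sketch of the two high-level metrics named above, computed from hypothetical normalized log records, could look as follows; the field names, and the choice to average only completed requests for ART, are assumptions.

# Minimal sketch: Average Response Time (ART) and Request Timeout Rate (RTR)
# computed from hypothetical normalized log records.
records = [
    {"response_time_ms": 120, "timed_out": False},
    {"response_time_ms": 340, "timed_out": False},
    {"response_time_ms": 5000, "timed_out": True},
]

completed = [r for r in records if not r["timed_out"]]
art = sum(r["response_time_ms"] for r in completed) / len(completed)  # Average Response Time
rtr = sum(r["timed_out"] for r in records) / len(records)             # Request Timeout Rate
print(art, rtr)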
We propose a model‐based test generation methodology to evaluate the impact of the interaction of the wireless network and application configurations on the performance of mobile networked applications. We consider waiting time delay to model wireless network quality. We classify mobile applications into two groups. Group I represents applications where end‐user experience is mainly affected by waiting time delay during service consumption, while group II represents applications where end‐user experience is affected by waiting time delay before service consumption. Test generation is formulated as an inversion problem. However, for group I applications, solving the inversion problem is expensive. Therefore, we utilize metamorphic testing to mitigate the cost of test oracles. We formulate metamorphic test generation as maximization of the distance between seed and follow‐up test cases. Two test coverage criteria are proposed: user experience and user‐experience‐and‐input interaction. Network models are developed for a mobile device that has network access through a WiFi hot spot and uses either transmission control protocol or user datagram protocol. Two mobile applications are used to demonstrate the methodology: multimedia streaming and web browsing. Application of the methodology when user actions are taken into consideration is also addressed. The effectiveness of the methodology is evaluated using two metrics: the incurred time cost and redundancy in the generated test suite. The obtained results show the advantage of casting test generation as an inversion problem, compared with random testing. For apps with intensive performance models, combining metamorphic testing with the methodology has tremendously reduced the cost of test oracles.
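One possible reading of the distance-maximization step (an illustrative assumption, not the paper's exact formulation) is to pick, among candidate follow-up configurations, the one farthest from the seed in parameter space; the parameter names below are hypothetical.

# Illustrative follow-up selection: maximize distance from the seed test case.
import math

def distance(a, b):
    # Euclidean distance over shared configuration parameters (no normalization shown).
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

seed = {"delay_ms": 50, "loss_rate": 0.01, "bitrate_kbps": 800}
candidates = [
    {"delay_ms": 60, "loss_rate": 0.02, "bitrate_kbps": 780},
    {"delay_ms": 300, "loss_rate": 0.05, "bitrate_kbps": 400},
    {"delay_ms": 120, "loss_rate": 0.10, "bitrate_kbps": 700},
]
follow_up = max(candidates, key=lambda c: distance(seed, c))
print(follow_up)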
The current generation of network-centric applications exhibits an increasingly higher degree of mobility. Wireless networks allow devices to move from one location to another without losing connectivity. Also, new software technologies allow code fragments or entire running applications to migrate from one host to another. Performance modeling of such complex systems is a difficult task, which should be carried out during the early design stages of system development. However, the integration of performance modeling analysis with software system specification for mobile systems is still an open problem, since there is no unique widely accepted notation for describing mobile systems. Moreover, performance modeling is usually developed separately from high-level system description. This is not only time consuming, but the separation of performance model and system specification makes more difficult the feedback process of reporting the performance analysis results at the system design level, and modifying the system model to analyze design alternatives. In this paper we address the problem of integrating system performance modeling and analysis with a specification of mobile software systems based on UML. In particular we introduce a unified UML notation for high-level description and performance modeling of mobile systems. The notation allows inclusion of quantitative information, which is used to build a process-oriented simulation model of the system. The simulation model is executed, and the results are reported back in the UML notation. We describe a prototype tool for translating annotated UML models into simulation programs and we present a simple case study.
Architectural decisions are among the earliest made in a software development project. They are also the most costly to fix if, when the software is completed, the architecture is found to be inappropriate for meeting quality objectives. Thus, it is important to be able to assess the impact of architectural decisions on quality objectives such as performance and reliability at the time that they are made. This paper describes PASA, a method for performance assessment of software architectures. It was developed from our experience in conducting performance assessments of software architectures in a variety of application domains including web-based systems, financial applications, and real-time systems. PASA uses the principles and techniques of software performance engineering (SPE) to determine whether an architecture is capable of supporting its performance objectives. The method may be applied to new development to uncover potential problems when they are easier and less expensive to fix. It may also be used when upgrading legacy systems to decide whether to continue to commit resources to the current architecture or migrate to a new one. The method is illustrated with an example drawn from an actual assessment.
Integration of non-functional validation in Model-Driven Architecture is still far from being achieved, although it is ever more necessary in the development of modern software systems. In this paper we make a step ahead towards the adoption of such activity as a daily practice for software engineers all along the MDA process. We consider the Non-Functional MDA framework (NFMDA) that, besides the typical MDA model transformations for code generation, embeds new types of model transformations that allow the generation of quantitative models for non-functional analysis. We plug into the framework two methodologies, one for performance analysis and one for reliability assessment, and we illustrate the relationships between non-functional models and software models. For this aim, Computation Independent, Platform Independent and Platform Specific Models are also defined in the non-functional domains taken into consideration, namely performance and reliability.
Quantitative performance analysis of software systems should be integrated in the early stages of the development process. We propose simulation-based performance modeling of software architectures specified in UML. We propose an algorithm for deriving a simulation model from annotated UML software architectures. We introduce annotations for some UML diagrams, i.e., Use Case, Activity and Deployment diagrams, to describe system performance parameters. Then we show how to derive a process-oriented simulation model by automatically extracting information from the UML diagrams. Simulation provides performance results that are reported into the UML diagrams as tagged values. The proposed methodology has been implemented in a prototype tool called UML-Ψ. The proposed methodology is illustrated on a simple case study.
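As an illustration of what a process-oriented simulation of one annotated scenario might look like (this is not the tool described above), the following SimPy sketch simulates requests arriving at a single deployment node with an assumed service demand and reports the mean response time that would be fed back as a tagged value; the arrival rate and demand are hypothetical annotation values.

# Minimal process-oriented simulation of one annotated scenario (illustrative
# only): requests arrive at a deployment node and hold it for their annotated
# service demand; the mean response time is reported back.
import random
import simpy

ARRIVAL_RATE, SERVICE_DEMAND = 8.0, 0.1   # hypothetical UML annotation values
response_times = []

def request(env, cpu):
    start = env.now
    with cpu.request() as req:
        yield req
        yield env.timeout(random.expovariate(1.0 / SERVICE_DEMAND))
    response_times.append(env.now - start)

def workload(env, cpu):
    while True:
        yield env.timeout(random.expovariate(ARRIVAL_RATE))
        env.process(request(env, cpu))

env = simpy.Environment()
cpu = simpy.Resource(env, capacity=1)     # the Deployment-diagram node
env.process(workload(env, cpu))
env.run(until=1000)
print(sum(response_times) / len(response_times))  # result reported back as a tagged value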
Performance characteristics, such as response time and throughput, play an important role in defining the quality of software products, especially in the case of real-time and distributed systems. The developers of such systems should be able to assess and understand the performance effects of various architectural decisions, starting at an early stage, when changes are easy and less expensive, and continuing throughout the software life cycle. This can be achieved by constructing and analyzing quantitative performance models that capture the interactions between the main system components and point to the system's performance trouble spots. The paper proposes a formal approach to building Layered Queueing Network (LQN) performance models from UML descriptions of the high-level architecture of a system, and more exactly from the architectural patterns used in the system. The performance modelling formalism, LQN, is an extension of the well-known Queueing Network modelling technique. The transformation from UML architectural description of a given system to its LQN model is based on PROGRES, a well known visual language and environment for programming with graph rewriting systems.
Model transformations in MDA mostly aim at stepping from a Platform Independent Model (PIM) to a Platform Specific Model (PSM) from a functional viewpoint. In order to develop high-quality software products, non-functional attributes (such as performance) must be taken into account. In this paper we extend the canonical view of the MDA approach to embed additional types of models that allow a Model Driven approach to be structured while taking performance issues into account. We define the relationships between the typical MDA models and the newly introduced models, as well as relationships among the latter. In this extended framework new types of model-to-model transformations also need to be devised. We place an existing methodology for transforming software models into performance models within the scope of this framework.
Software performance concerns begin at the very outset of a new project. The first definition of a software system may be
in the form of Use Cases, which may be elaborated as scenarios: this work creates performance models from scenarios. The Use
Case Maps notation captures the causal flow of intended execution in terms of responsibilities, which may be allocated to
components, and which are annotated with expected resource demands. The SPT algorithm was developed to transform scenario
models into performance models. The UCM2LQN tool implements SPT and converts UCM scenario models to layered queueing performance
models, allowing rapid evaluation of an evolving scenario definition. The same reasoning can be applied to other scenario
models such as Message Sequence Charts, UML Activity Graphs (or Collaboration Diagrams, or Sequence Diagrams), but UCMs are
particularly powerful, in that they can combine interacting scenarios and show scenario interactions. Thus a solution for
UCMs can be applied to multiple scenarios defined with other notations.
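A hedged sketch of the underlying idea (not the UCM2LQN tool itself): responsibilities along a causal path, each allocated to a component and annotated with a demand, are grouped into tasks and entries, and component boundaries along the path become call arcs. The scenario, component names and demands below are hypothetical.

    # Group responsibilities of a hypothetical causal path into LQN tasks and
    # entries; consecutive responsibilities on different components become calls.
    scenario = [  # (responsibility, component, CPU demand in ms)
        ("validate", "WebServer", 2.0),
        ("query",    "Database",  8.0),
        ("render",   "WebServer", 3.0),
    ]

    tasks = {}   # component -> {entry name: service demand}
    calls = []   # (caller entry, callee entry) pairs in causal order
    prev = None
    for resp, comp, demand in scenario:
        entry = f"{comp}.{resp}"
        tasks.setdefault(comp, {})[resp] = demand
        if prev is not None and prev[1] != comp:
            calls.append((f"{prev[1]}.{prev[0]}", entry))
        prev = (resp, comp, demand)

    print(tasks)   # LQN tasks with entry service demands
    print(calls)   # synchronous call arcs between entries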
A performance model interchange format (PMIF) provides a mechanism whereby system model information may be transferred among performance modeling tools. The PMIF allows diverse tools to exchange information and requires only that the importing and exporting tools support the PMIF. This paper presents the definition of a PMIF by describing a meta-model of the information requirements and the transfer format derived from it. It describes how tool developers can implement the PMIF, how the model interchange via export and import works in practice, and how the PMIF can be extended. A simple case study illustrates the format. The paper concludes with the current status of the PMIF, lessons learned, some suggestions for extensions, and current work in progress.
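For illustration only, an exporting tool might emit an interchange document along the following lines; the element and attribute names below are assumptions for the sketch, not the published PMIF schema.

    # Sketch of an exporter writing a queueing model to an XML interchange
    # document in the spirit of PMIF (element names are assumed, not the schema).
    import xml.etree.ElementTree as ET

    model = ET.Element("QueueingNetworkModel", name="web-shop")
    ET.SubElement(model, "Server", name="AppCPU", quantity="1",
                  schedulingPolicy="PS")
    ET.SubElement(model, "Server", name="Disk", quantity="1",
                  schedulingPolicy="FCFS")
    wl = ET.SubElement(model, "OpenWorkload", name="browse", arrivalRate="5.0")
    ET.SubElement(wl, "ServiceRequest", server="AppCPU", serviceDemand="0.080")
    ET.SubElement(wl, "ServiceRequest", server="Disk", serviceDemand="0.020")

    # An importing tool would parse the same document and rebuild its own model.
    print(ET.tostring(model, encoding="unicode"))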
The paper proposes a formal approach to building software performance models for distributed and/or concurrent software systems from a description of the system's architecture, by using graph transformations. The performance model is based on the Layered Queueing Network (LQN) formalism, an extension of the well-known Queueing Network modelling technique [16, 17, 8]. The transformation from the architectural description of a given system to its LQN model is based on PROGRES, a well-known visual language and environment for programming with graph rewriting systems [9-11]. The resulting LQN model can be analysed with existing solvers [5].
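The graph-rewriting idea can be conveyed by a toy production that matches a "uses" relation between architectural components and rewrites it into LQN tasks joined by a synchronous call. The sketch below is an assumption-laden stand-in; PROGRES itself is a separate visual language and environment, and the node and edge labels here are invented.

    # Toy graph-rewriting step: every "uses" edge in the architecture graph is
    # rewritten into two LQN tasks connected by a synchronous call arc.
    architecture = {
        "nodes": {"Browser": "client", "Shop": "server", "DB": "server"},
        "edges": [("Browser", "Shop", "uses"), ("Shop", "DB", "uses")],
    }

    lqn = {"tasks": set(), "calls": []}

    def rewrite_uses(graph, lqn_model):
        """Apply the 'uses -> synchronous call' production to every match."""
        for src, dst, label in graph["edges"]:
            if label == "uses":
                lqn_model["tasks"].update({src, dst})
                lqn_model["calls"].append((src, dst, "synchronous"))

    rewrite_uses(architecture, lqn)
    print(lqn)   # LQN skeleton, still to be completed with entries and demands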
The modeling and analysis experience with process algebras has shown the necessity of extending them with priority, probabilistic internal/external choice, and time while preserving compositionality. The purpose of this paper is to make a further step by introducing a way to express performance measures, in order to allow the modeler to capture the QoS metrics of interest. We show that the standard technique of expressing stationary and transient performance measures as weighted sums of state probabilities and transition frequencies can be imported into the process algebra framework. Technically speaking, if we denote by n the number of performance measures of interest, in this paper we define a family of extended Markovian process algebras with a generative master–reactive slaves synchronization mechanism, including probabilities, priorities, exponentially distributed durations, and sequences of rewards of length n. Then we show that the Markovian bisimulation equivalence is a congruence for this family which preserves the specified performance measures, and we give a sound and complete axiomatization for finite terms. Finally, we present a case study conducted with the software tool TwoTowers, in which we contrast the average performance of a selection of distributed algorithms for mutual exclusion modeled in this formalism.
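The standard technique the paper imports can be made concrete with a small worked example: solve the balance equations of a (hypothetical) three-state Markov chain and obtain a stationary measure as a reward-weighted sum of state probabilities. The generator matrix and reward values below are illustrative only.

    # Stationary performance measure as a reward-weighted sum of state
    # probabilities of a small, hypothetical continuous-time Markov chain.
    import numpy as np

    # Generator matrix Q (rows sum to zero).
    Q = np.array([[-2.0,  2.0,  0.0],
                  [ 1.0, -3.0,  2.0],
                  [ 0.0,  4.0, -4.0]])

    # Solve pi Q = 0 with sum(pi) = 1: replace one balance equation by the
    # normalisation condition.
    A = np.vstack([Q.T[:-1], np.ones(3)])
    b = np.array([0.0, 0.0, 1.0])
    pi = np.linalg.solve(A, b)

    # Reward structure: reward 1 in states where the resource is busy gives
    # its utilisation as a weighted sum of state probabilities.
    reward = np.array([0.0, 1.0, 1.0])
    print("stationary distribution:", pi)
    print("utilisation =", float(pi @ reward))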
Different paradigms (client-server, mobility based, etc.) have been suggested and adopted to cope with the complexity of designing the software architecture of distributed applications for wide area environments, and selecting the "best" paradigm is a typical choice to be made in the very early software design phases. Several factors should drive this choice, one of them being the impact of the adopted paradigm on the application performance. Within this framework our contribution is as follows: we apply an extension of UML to better model the possible adoption of mobility-based paradigms in the software architecture of an application; we extend classical models, like queueing network models and execution graphs, to cope with mobile architectures; we introduce a complete methodology that, starting from a software architecture described using this extended notation, generates a performance model (namely an Extended Queueing Network augmented with mobility features) that allows the designer to evaluate the convenience of introducing logical mobility into a software application.
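A back-of-the-envelope example of the kind of trade-off such a methodology quantifies, with purely hypothetical parameters: the network demand of a scenario under a client-server paradigm versus remote evaluation (logical mobility). These demands would then parameterise the Extended Queueing Network so the paradigms can also be compared under contention.

    # Hypothetical comparison of network demand: client-server vs. remote
    # evaluation (shipping the code once and returning only the final result).
    N_INTERACTIONS = 20            # request/reply pairs in the scenario
    REQ, REPLY = 2_000, 10_000     # bytes per request and per reply
    CODE, RESULT = 50_000, 4_000   # bytes of mobile code shipped / final result
    BANDWIDTH = 1_000_000          # bytes/s on the wide-area link

    client_server = N_INTERACTIONS * (REQ + REPLY) / BANDWIDTH
    remote_eval = (CODE + RESULT) / BANDWIDTH

    print(f"client-server network time: {client_server:.3f} s")
    print(f"remote evaluation time:     {remote_eval:.3f} s")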
Stochastic Process Algebras have been proposed as compositional specification formalisms for performance models. In this paper, we describe a tool which aims at realising all beneficial aspects of compositional performance modelling, the TIPPtool. It incorporates methods for compositional specification as well as solution, based on state-of-the-art techniques, and wrapped in a user-friendly graphical front end.