Book

Non-Functional Requirements in Software Engineering

Chapters (14)

Consider the design of an information system, such as one for managing credit card accounts. The system should debit and credit accounts, check credit limits, charge interest, issue monthly statements, and so forth.
In this chapter and the next, we present the NFR Framework in more detail. The NFR Framework helps developers deal with non-functional requirements (NFRs) during software development. The Framework helps developers express NFRs explicitly, deal with them systematically, and use them to drive the software development process rationally.
During software development, developers use softgoals and interdependencies to record and analyze their intentions, design alternatives, tradeoffs, and rationale in a softgoal interdependency graph. The evaluation procedure is then used to determine whether their softgoals have been met.
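The evaluation procedure can be pictured as a bottom-up label propagation over a softgoal graph. The following is a minimal sketch, assuming a simplified label algebra and invented softgoal names; it is an illustration, not the Framework's full procedure:

```python
# Minimal sketch of a softgoal interdependency graph (SIG) with a
# simplified bottom-up evaluation procedure. The label algebra and the
# softgoal names are illustrative assumptions.

SATISFICED, DENIED, UNDETERMINED = "satisficed", "denied", "undetermined"

def combine(label, contribution):
    """Propagate a child's label across a contribution link."""
    if label == UNDETERMINED:
        return UNDETERMINED
    if contribution in ("MAKE", "HELP"):
        return label
    if contribution in ("BREAK", "HURT"):      # negative links invert the label
        return DENIED if label == SATISFICED else SATISFICED
    return UNDETERMINED

def evaluate(softgoal, graph, leaf_labels):
    """graph maps a softgoal to [(child, contribution), ...]."""
    children = graph.get(softgoal, [])
    if not children:                           # leaf: label assigned by the developer
        return leaf_labels.get(softgoal, UNDETERMINED)
    results = [combine(evaluate(c, graph, leaf_labels), k) for c, k in children]
    if all(r == SATISFICED for r in results):
        return SATISFICED
    if any(r == DENIED for r in results):
        return DENIED
    return UNDETERMINED

# Example: response time for sales authorization, refined into two
# design alternatives that both help the parent softgoal.
graph = {
    "Performance[Authorization]": [("IndexedAccess", "HELP"),
                                   ("UncompressedFormat", "HELP")],
}
leaves = {"IndexedAccess": SATISFICED, "UncompressedFormat": SATISFICED}
print(evaluate("Performance[Authorization]", graph, leaves))  # satisficed
```

The recursion mirrors how developers read a SIG: decisions at the leaves determine, through the contribution links, whether the top-level softgoals are satisficed.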
Part II shows how the NFR Framework can accommodate different types of non-functional requirements, focusing on three: accuracy, security, and performance requirements.
The accuracy of information is often regarded as an inherent property of any automated information system. As a familiar example, some people inquire about a payment request, such as a monthly credit card or telephone bill, and get a reply from a staff member saying, “As the transactions were handled by the computer, there can’t be any error!”
One important concern in building an information system is information security, the protection of information as an asset of an enterprise, just like money or any other forms of property. But how do we “design in” information security?
As performance is a vital quality factor for systems, an important challenge during system development is to deal with performance requirements. For a credit card system, for example, a developer might consider performance requirements to “achieve good response time for sales authorization” and “achieve good space usage for storing customer information.” Being global in their nature and impact, performance requirements are an important class of non-functional requirements (NFRs). To be effective, systems should attain NFRs as well as functional requirements (e.g., requiring a credit card system to authorize sales). However, it is generally difficult to deal with performance requirements and other NFRs, as they can conflict and interact with each other and with the many implementation alternatives which have varying features and tradeoffs.
The previous chapter addressed performance requirements. To be more specific about performance requirements, we need to make assumptions about the kind of system under development. This chapter focuses on performance requirements for information systems, continuing the presentation of the “Performance Requirements Framework” started in the last chapter.
In Part III, we look at particular case studies and applications of the NFR Framework, having already presented the Framework and its specializations for particular types of NFRs.
In this chapter, credit card systems are studied. We consider an information system for a bank’s credit card operation. A body of information on cardholders and merchants is maintained. In this highly competitive market, it is important to provide fast response time and accuracy for sales authorizations. To reduce losses due to fraud, lost and stolen cards must be invalidated as soon as the bank is notified.
In this chapter, an administrative system is studied. Performance requirements are considered for a government system to help administer income tax appeals. This involves long-term, consultative processes. The study considers descriptions of the organization, its workload and procedures.
Non-functional requirements, such as modifiability, performance, reusability, comprehensibility and security, are often crucial for software systems. They should be addressed as early as possible in a software lifecycle, and properly reflected in a software architecture before a commitment is made to a specific implementation.
Applications of the NFR Framework are not limited to the development of software systems. In this chapter, we apply the NFR Framework to enterprise modelling and business process redesign.
The NFR Framework has been applied to a number of case studies. We now present some feedback on the NFR Framework and on some of these studies.
... However, Invisibility for UbiComp and IoT was indicated as an NFR that may negatively impact Usability [10]. In fact, it is well known that NFRs generally interact with each other [2,6,19,24,30,31,33,38], revealing positive correlations, when one NFR helps another, and negative correlations, when a procedure favors one NFR but creates difficulty for others [15]. A classic example of an interaction between NFRs is Security and Performance. ...
... Correlations can be captured from developers’ experience and stored in correlation catalogs, a common artifact used by the requirements community to help software engineers avoid conflicting NFRs and select suitable strategies to satisfy different NFRs [15]. The literature has several catalogs that generally focus on correlations generic to any system [18,31,44,52], but it lacks catalogs with Invisibility for the domain of UbiComp and IoT systems. ...
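At its simplest, a correlation catalog of this kind is a lookup table over pairs of NFRs. The sketch below is illustrative; its entries are assumptions chosen for demonstration, not the catalog published in the cited work:

```python
# Illustrative sketch of a correlation catalog as a symmetric lookup
# table over NFR pairs. The entries are assumptions for demonstration.

CATALOG = {
    ("Invisibility", "Usability"): "-",   # negative impact, as discussed above
    ("Security", "Performance"): "-",     # the classic NFR conflict
    ("Reliability", "Security"): "+",
}

def correlation(a, b):
    """Look up a correlation regardless of argument order;
    'unknown' means the pair is not in the catalog."""
    return CATALOG.get((a, b)) or CATALOG.get((b, a)) or "unknown"

print(correlation("Performance", "Security"))  # -
print(correlation("Usability", "Performance"))  # unknown
```

A real catalog would also record the development strategies behind each correlation, so engineers can see *why* a pair conflicts, not just that it does.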
... Invisibility has long been seen as an essential characteristic for achieving the goals of UbiComp [16,27,39,41,42], and this carries over to IoT systems [1,8]. This NFR was recently cataloged using the Softgoal Interdependency Graph (SIG) [15]. In this notation, every concept related to the NFR being cataloged is documented as a softgoal. ...
Article
The advance of Ubiquitous Computing (UbiComp) and the Internet of Things (IoT) brought a new set of Non-Functional Requirements (NFRs), especially related to Human-Computer Interaction (HCI). Invisibility is one of these NFRs and refers either to the merging of technology into the user environment or to a decrease in the interaction workload. This new NFR may impact traditional ones (e.g., Usability), revealing positive correlations, when one NFR helps another, and negative correlations, when a procedure favors one NFR but creates difficulty for another. Correlations between NFRs are usually stored in catalogs, which are well-defined bodies of knowledge gathered from previous experience. Although Invisibility has recently been cataloged with development strategies, the literature still lacks catalogs with correlations for this NFR. Therefore, this work aims at capturing and cataloging Invisibility correlations for UbiComp and IoT systems. To do that, we also propose to systematize the definition of correlations using the following well-defined research methods: Interview, Content Analysis and Questionnaire. As a result, we defined a catalog with 110 positive and negative correlations with 9 NFRs. We evaluated this correlation catalog using a controlled experiment to verify whether it helps developers when they are making decisions about NFRs in UbiComp and IoT systems. Results indicated that the catalog improved the decisions made by the participants. Therefore, this well-defined body of knowledge is useful for supporting software engineers in selecting appropriate strategies that satisfy Invisibility and other NFRs related to user interaction.
... The SysML/KAOS method represents non-functional requirements through a language similar to the one used for representing functional requirements [68,71], which reuses notions from the NFR Framework [38]. Thus, the hierarchy of non-functional goals is built by successive refinements using the And and Or refinement operators. ...
... A goal NFRType[Subject] can be refined either into subgoals NFRType_i[Subject] (refinement by type) or into subgoals NFRType[Subject_i] (refinement by subject), where NFRType_i is a subtype of NFRType and Subject_i is a sub-entity of Subject. For example, the non-functional goal Security[System] can be refined into the subgoals Confidentiality[System], Integrity[System] and Availability[System], in accordance with the taxonomy of non-functional goal types [38]. This is a refinement by type. ...
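The two refinement strategies can be sketched as follows; the Goal class and the taxonomy fragments below are illustrative assumptions, not the SysML/KAOS metamodel:

```python
# Sketch of the two non-functional goal refinement strategies described
# above: refinement by type and refinement by subject. The Goal class
# and the taxonomy/decomposition tables are illustrative assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class Goal:
    nfr_type: str
    subject: str

    def __str__(self):
        return f"{self.nfr_type}[{self.subject}]"

# Assumed fragment of the NFR-type taxonomy and of a subject decomposition.
SUBTYPES = {"Security": ["Confidentiality", "Integrity", "Availability"]}
SUBENTITIES = {"System": ["Database", "Network"]}

def refine_by_type(goal):
    """NFRType[Subject] -> NFRType_i[Subject] for each subtype NFRType_i."""
    return [Goal(t, goal.subject) for t in SUBTYPES.get(goal.nfr_type, [])]

def refine_by_subject(goal):
    """NFRType[Subject] -> NFRType[Subject_i] for each sub-entity Subject_i."""
    return [Goal(goal.nfr_type, s) for s in SUBENTITIES.get(goal.subject, [])]

g = Goal("Security", "System")
print([str(x) for x in refine_by_type(g)])
# ['Confidentiality[System]', 'Integrity[System]', 'Availability[System]']
print([str(x) for x in refine_by_subject(g)])
# ['Security[Database]', 'Security[Network]']
```

Either strategy produces an And-refinement of the parent goal; the two can be interleaved down the goal hierarchy.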
... This paper also reports improvements identified to enhance the expressiveness of SysML/KAOS goal modeling languages and validated with VdM stakeholders. This includes the introduction of (1) a way to quantify the impact or contribution of a goal (a contribution goal is a satisficity solution to a non-functional requirement [38]), (2) a non-functional goal refinement strategy based on logical formulas, (3) an approach to refine contribution goals similar to that of Chung et al. [38], and (4) an obstacle modeling language such as the one proposed by Lamsweerde in [139]. ...
Thesis
The SysML/KAOS method allows system requirements to be modeled through goal hierarchies. B System is a formal method used to construct, verify and validate system specifications. A B System model consists of a structural part (abstract and enumerated sets, constants with their associated properties, and variables with their associated invariant) and a behavioral part (events). Correspondence links were established in previous work between SysML/KAOS and B System to produce a formal specification from requirements models. This specification serves as a basis for formal verification and validation tasks to detect and correct inconsistencies. However, the structural part of the B System specification must be provided manually. This thesis aims at enriching SysML/KAOS with a language that allows the application domain of the system to be modeled and that is compatible with the requirements modeling language. This includes the definition of the domain modeling language and of mechanisms for leveraging domain modeling to provide the structural part of the B System specification obtained from requirements models. The defined language uses ontologies to allow the formal representation of the system domain. Moreover, the established correspondence links and rules, formally verified, allow both propagation and backpropagation of additions and deletions between domain models and B System specifications. An important part of the thesis is also devoted to the assessment of the SysML/KAOS method on case studies. Furthermore, since systems naturally break down into subsystems (enabling the distribution of work between several agents: hardware, software and human), SysML/KAOS goal models allow the capture of assignments of requirements to the subsystems responsible for their achievement.
This thesis therefore describes the mechanisms required to formally guarantee that each requirement assigned to a subsystem will be achieved by that subsystem, within the constraints defined by the high-level system and subsystem specifications. The SysML/KAOS method, thus enriched, has been implemented within an open-source tool using the model federation platform Openflexo, and has been evaluated on various industrial-scale case studies. It enables the formal verification of requirements and facilitates their validation by the various stakeholders, including those with little or no expertise in formal methods. However, both the specification of the body of B System events and of the domain model logical formulas (which give B System properties and invariants) and the formal assessment (verification and validation) of the specification remain manual. They are time-consuming and require experts in formal methods. But this is the price to pay to achieve formal verification and validation of requirements.
... Moreover, some important activities, such as the analysis of non-functional properties (NFPs) or quality attributes and the evolution of SPL artifacts [11], are left aside by existing SPL approaches. When considered, these activities are usually integrated into the traditional SPL process by reusing existing mechanisms that were not specifically designed for that purpose, for instance using attributes of extended feature models to specify quality attributes [12], even though there are more appropriate approaches to deal with quality attributes, such as the NFR Framework [13]. ...
... Also, the combination of multiple product lines (called MultiPLs) [42] allows several related families of products to be defined. Other extensions have been explicitly defined to deal with the modularization of large models and provide scalable models, such as hierarchical levels and composition units [43]; to deal with the evolution of models [44] using refined FMs or edits to FMs [45]; to handle non-functional properties (NFPs), such as the NFR Framework [13]; or to differentiate static and dynamic variability by defining binding modes such as binding states, units, or time [46]. ...
... Others are exclusive to a particular domain, such as FMCAT [93], which focuses on the analysis of dynamic services product lines; those activities are also supported by other tools such as FeatureIDE [14] or pure::variants [80], so they did not pass IC3. Finally, other tools such as HADAS [94] offer a spe- ...
13 A stable release (also called production release) is the last product version that has passed all verifications/tests, and whose remaining bugs are considered acceptable.
Article
For the last ten years, software product line (SPL) tool developers have been facing the implementation of different variability requirements and the support of SPL engineering activities demanded by emergent domains. Despite systematic literature reviews identifying the main characteristics of existing tools and the SPL activities they support, these reviews do not always help to understand whether such tools provide what complex variability projects demand. This paper presents empirical research in which we evaluate the degree of maturity of existing SPL tools, focusing on their support of variability modeling characteristics and SPL engineering activities required by current application domains. We first identify the characteristics and activities that are essential for the development of SPLs by analyzing a selected sample of case studies chosen from application domains with high variability. Second, we conduct an exploratory study to analyze whether the existing tools support those characteristics and activities. We conclude that, with the current tool support, it is possible to develop a basic SPL approach. But we also found that these tools present several limitations when dealing with complex variability requirements demanded by emergent application domains, such as non-Boolean features or large configuration spaces. Additionally, we identify the necessity for an integrated approach with appropriate tool support to completely cover all the activities and phases of SPL engineering. To mitigate this problem, we propose different road maps using the existing tools to partially or entirely support SPL engineering activities, from variability modeling to product derivation.
... Reactivity appeared to us as an NFR (Non-Functional Requirement) [4] during an e-commerce project [5]. In this project, we reused code from Yin's GitHub [6]. ...
... Through the analysis of Yin's architecture, we delved into the React paradigm, which led us to various sources of information such as the Reactive Manifesto [7], Reactive Design Patterns [8], and React JS' own documentation [3]. The cited literature uses the NFRs responsive, resilient, elastic and message-oriented, among others, to satisfice 2 [4] a Reactive System. This work draws from these information sources to elicit Reactivity as an NFR, which is modeled for the e-commerce application (Figure 1). ...
... To the best of our knowledge, Reactivity has not been dealt with from the NFR Framework [4] point of view, i.e., building a software application with NFRs as first-class requirements. Existing work focuses on concurrent and distributed systems and uses the Rebeca modeling language for formal verification [11] [12]. ...
Conference Paper
An understanding of Non-functional Requirements (NFRs) is important for designing a software system; however, time and resource constraints usually lead to systems being developed mostly from a functional perspective. We explore the case of using programming libraries as a support to the design of software systems that explicitly considers NFRs. We tackle the case of the React JS library, within the context of reengineering Web-based e-commerce software. This library operationalizes a set of NFRs needed for a system to be reactive. We abstracted these implementations as softgoals to derive an i* model with the NFRs made explicit. The resulting model, created collaboratively, is an example of using both functional and qualitative perspectives in designing a software system.
... Finally, in order to evaluate the platform architecture proposed in this study, we used the non-functional metrics outlined in previous work by Chung et al. 22 (see a detailed description in Section 3.2). ...
... First, starting from the metrics outlined by Chung et al., 22 it should be noted, however, that although we concentrated on the following list based on discussions/interviews with stakeholders (see Table 1), the technical availability of atomic measurements that the IoT devices had available forced us to concentrate on an even more limited set of non-functional requirements, relating primarily to energy consumption, performance and other data available from the devices themselves. ...
Preprint
Internet of things (IoT) technologies are becoming a more and more widespread part of civilian life in common urban spaces, which are rapidly turning into cyber-physical spaces. Simultaneously, the fear of terrorism and crime in such public spaces is ever-increasing. Due to the resulting increased demand for security, video-based IoT surveillance systems have become an important area for research. Considering the large number of devices involved in the illicit recognition task, we conducted a field study at a Dutch Easter music festival in a national interest project called VISOR to select the most appropriate device configuration in terms of performance and results. We iteratively architected solutions for the security of cyber-physical spaces using IoT devices. We tested the performance of multiple federated devices encompassing drones, closed-circuit television, smart phone cameras, and smart glasses to detect real-case scenarios of potentially malicious activities such as mosh-pits and pick-pocketing. Our results pave the way to select optimal IoT architecture configurations -- i.e., a mix of CCTV, drones, smart glasses, and camera phones in our case -- to make safer cyber-physical spaces a reality.
... We conducted sentiment analysis on the overall reviews for each app using AppBot data. We also identified and qualitatively analysed the reviews around the top discussed topics of NFRs in reviews and media, i.e., Privacy, Trust, Transparency, Effectiveness, Security, Reliability, User Satisfaction and Acceptance [30][31][32]. We used Factiva to collect the news articles to identify the relevant media coverage of the contact-tracing apps in these countries in order to understand the socio-political event-based timeline around the acceptance or rejection of the apps and the reasons outlined in the news. ...
... In general, NFRs for software are much harder to define, understand and translate into a design than functional requirements [31]. Contact-tracing apps present unique challenges for app designers that stem from the culture and behaviour of their users. ...
... Because of these issues, ensuring safe, fair, transparent, and accurate ML-enabled systems is challenging. From a Requirement Engineering (RE) perspective, these quality aspects are known as non-functional requirements (NFRs) [5]. ...
... In this work, we focus on NFRs as quality requirements. Despite the NFR-related challenges, progress in the area of NFR exploration has been made, including, for example, definitions (e.g., [7]), taxonomies (e.g., [9]), classification methods (e.g., [10]), modeling approaches (e.g., [5]), management methods (e.g., [11]), and industrial studies (e.g., [12]). ...
Preprint
Machine Learning (ML) is an application of Artificial Intelligence (AI) that uses big data to produce complex predictions and decision-making systems, which would be challenging to obtain otherwise. To ensure the success of ML-enabled systems, it is essential to be aware of certain qualities of ML solutions (performance, transparency, fairness), known from a Requirement Engineering (RE) perspective as non-functional requirements (NFRs). However, when systems involve ML, NFRs for traditional software may not apply in the same ways; some NFRs may become more prominent or less important; NFRs may be defined over the ML model, data, or the entire system; and NFRs for ML may be measured differently. In this work, we aim to understand the state-of-the-art and challenges of dealing with NFRs for ML in industry. We interviewed ten engineering practitioners working with NFRs and ML. We find examples of (1) the identification and measurement of NFRs for ML, (2) identification of more and less important NFRs for ML, and (3) the challenges associated with NFRs and ML in the industry. This knowledge paints a picture of how ML-related NFRs are treated in practice and helps to guide future RE for ML efforts.
... Catalogues are a common solution to help software engineers achieve quality characteristics [6]. According to [7], a catalogue is a collected body of knowledge about previous experience. ...
... It is possible to save time and effort by reusing requirements, since the reused requirements have already been analyzed in other systems [9]. Furthermore, the use of catalogues prevents engineers from spending time researching diverse sources or relying on experts in the field to make decisions on how to obtain requirements [6]. ...
Conference Paper
KAOS, a goal-based modeling language, has been extended since its creation. Searching for existing KAOS extensions and their constructs is a task that can be the starting point for requirements modeling or for the creation of new KAOS extensions. This search can be performed using string searches in databases or through a catalogue that supports them. This exploratory task can take a great deal of time and be prone to failure when performed without specific and adequate support. Catalogues have been used successfully to bring together a body of knowledge, including knowledge of modeling language extensions. Motivated by this scenario, this work proposes a catalogue of extensions to the KAOS modeling language. The results suggest that the proposed catalogue can be used to correctly retrieve information about extensions and their constructs and that it is easy to use.
... Drawing on our previous work Metis [8], we present GOMPHY, a Goal-Oriented and Machine learning-based framework using a Problem HYpothesis, to help validate business problems [9,10]. This paper proposes four main technical contributions: 1. ...
... Once the most likely problem hypothesis is validated, as shown by the 'check mark' in Fig. 9, qualitative reasoning, e.g., the label propagation procedure [9], is carried out to determine the validated problem's impact upward in the problem hierarchy. If the Minimum balance of an Account SourcePH and the Somewhat positively contributes PH contribution are satisficed, then the Balance of an Account AbstractPH is satisficed. ...
... Performance-related issues can have a large impact on cost, especially if those issues are not treated early [15,16,66]. Another example of a software performance issue was Pokemon Go [51], a mobile game that, after the initial rollout, became unusable in many countries. ...
... We found that 106 out of 149 requirements were quantified, while the remaining 43 were quantifiable but were not actually quantified (e.g., "The product will reside on the Internet so more than one user can access the product and download its content for use on their computer." 15 ). ...
Article
Model-based testing (MBT) is a method that supports the design and execution of test cases by models that specify the intended behaviors of a system under test. While systematic literature reviews on MBT in general exist, the state of the art on modeling and testing performance requirements has seen much less attention. Therefore, we conducted a systematic mapping study on model-based performance testing. Then, we studied natural language software requirements specifications in order to understand which and how performance requirements are typically specified. Since none of the identified MBT techniques supported a major benefit of modeling, namely identifying faults in requirements specifications, we developed the Performance Requirements verificatiOn and Test EnvironmentS generaTion approach (PRO-TEST). Finally, we evaluated PRO-TEST on 149 requirements specifications. We found and analyzed 57 primary studies from the systematic mapping study and extracted 50 performance requirements models. However, those models do not achieve the goals of MBT, which are validating requirements, ensuring their testability, and generating the minimum required test cases. We analyzed 77 Software Requirements Specification (SRS) documents, extracted 149 performance requirements from those SRS, and illustrate that with PRO-TEST we can model performance requirements, find issues in those requirements and detect missing ones. We detected three not-quantifiable requirements, 43 not-quantified requirements, and 180 underspecified parameters in the 149 modeled performance requirements. Furthermore, we generated 96 test environments from those models. By modeling performance requirements with PRO-TEST, we can identify issues in the requirements related to their ambiguity, measurability, and completeness. Additionally, it allows generating parameters for test environments.
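The distinction between quantified and merely quantifiable requirements can be illustrated with a naive check. This sketch, with its assumed unit list, is only an illustration of the idea, not the PRO-TEST approach itself:

```python
import re

# Naive sketch: a performance requirement counts as "quantified" if it
# contains a number followed by a recognizable unit. The unit list is an
# assumption for illustration; a real approach models parameters properly.

UNITS = r"(ms|milliseconds?|seconds?|minutes?|MB|GB|users?|requests?|s)"
QUANTIFIED = re.compile(r"\b\d+(\.\d+)?\s*" + UNITS + r"\b")

def is_quantified(requirement: str) -> bool:
    """True if the requirement states a measurable figure with a unit."""
    return bool(QUANTIFIED.search(requirement))

print(is_quantified("The system shall respond within 2 seconds."))  # True
# Quantifiable but not quantified, like the SRS example quoted above:
print(is_quantified("The product will reside on the Internet so more "
                    "than one user can access the product."))        # False
```

The second example is exactly the kind of requirement the study flags as quantifiable but not actually quantified: a bound on concurrent users could be stated, but no figure is given.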
... Overarching these pathways, there is a challenge of how to choose properties, analysis pathways, and combinations thereof to form an overall argument of fitness-for-purpose of the system as a whole. Goal-Question-Metric, safety cases, goal-oriented modelling (e.g., the NFR framework [Chu+00]) appear to have building blocks for answering this challenge, but as far as we are aware, there is currently no integrated approach for this purpose. ...
Chapter
Any analysis produces results to be used by analysis users to understand and improve the system being analysed. But what are the ways in which analysis results can be exploited? And how is exploitation of analysis results related to analysis composition? In this chapter, we provide a conceptual model of analysis-result exploitation and a model of the variability and commonalities between different analysis approaches, leading to a feature-based description of results exploitation. We demonstrate different instantiations of our feature model in nine case studies of specific analysis techniques. Through this discussion, we also showcase different forms of analysis composition, leading to different forms of exploitation of analysis results for refined analysis, improving analysis mechanisms, exploring results, etc. We, thus, present the fundamental terminology for researchers to discuss exploitation of analysis results, including under composition, and highlight some of the challenges and opportunities for future research.
... Requirements documents commonly contain two types of requirements: functional requirements, which define the features of the system-to-be, and non-functional requirements, which define the quality attributes of those features. Such attributes enforce operational constraints on different aspects of the system's behavior, such as its usability, security, reliability, performance, and portability [30]. ...
Thesis
In recent years, system design constraints have evolved more and more, requiring more stakeholders to be embedded in projects. Consequently, modern software projects are becoming many times larger and more complex than in the past. Model-Based Systems Engineering (MBSE) methods, for their part, are recognized to foster a holistic view of design and to empower high-quality, maintainable software architecture. However, architecture design models are still extracted manually by engineers, which has become a tedious, time-consuming and error-prone task. In particular, the exponential growth of the number of system requirements raises difficulties in managing the requirements manually and maintaining a crystal-clear view of the expectations and scope of the system to be designed. The lack of human expertise as well as of powerful automation tools is often cited as the main barrier that still slows down the spread of the MBSE approach and presents significant hurdles to demonstrating its Return On Investment (ROI). Recently, Artificial Intelligence (AI) has been receiving intensive attention and its applications have made their way into products in our daily life. In fact, AI techniques together with suitable technology have enabled systems to perceive, predict, and act in assisting humans in a wide range of applications. Hence, it stands to reason that advances in AI can bring great practical value in mitigating some of the challenges raised by the adoption of MBSE. In this thesis, we contributed a first step towards applying AI for MBSE to optimize the adoption of MBSE and resolve some of its challenges. Specifically, we proposed a new flow of Machine Learning (ML) and Natural Language Processing (NLP) components, empowering automation to go from natural language requirements to a preliminary UML architecture design model, including a package breakdown model denoting the system’s decomposition.
First, we proposed a clustering solution that helps to decompose a complex system into smaller sub-systems based on the semantic similarity of early requirements. The core of the proposed clustering solution is a semantic similarity computation module that analyzes the semantic information of both the words and the requirement statements of each pair of requirements using the neural word embedding model word2vec. Accordingly, a set of clusters of similar requirements is generated, denoting the identified sub-systems and hence helping to reduce the complexity of the target system. Then, we proposed a model extractor that extracts from each identified cluster (i.e., sub-system) the relevant elements needed to build the UML use-case package breakdown model, using a set of NLP heuristics. Finally, we proposed a mapping operation that programmatically maps the extracted model elements into their corresponding ones in the target UML use-case package model. Our proposal was prototyped on Papyrus and evaluated on several case studies encompassing different types of natural language software requirements.
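The clustering step can be illustrated with a greedy similarity-based grouping. In this sketch, toy two-dimensional vectors stand in for word2vec embeddings so the example stays self-contained; the greedy strategy and the threshold are assumptions, not the thesis's algorithm:

```python
import math

# Toy sketch of requirement clustering by semantic similarity. Real
# approaches embed each requirement with word2vec; here hand-picked 2-D
# vectors stand in so the example is self-contained.

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def cluster(reqs, vectors, threshold=0.8):
    """Greedy single-pass clustering: join a requirement to the first
    cluster whose representative is similar enough, else start a new one."""
    clusters = []  # list of (representative_vector, [requirements])
    for req, vec in zip(reqs, vectors):
        for rep, members in clusters:
            if cosine(rep, vec) >= threshold:
                members.append(req)
                break
        else:
            clusters.append((vec, [req]))
    return [members for _, members in clusters]

reqs = ["R1: log in with password", "R2: authenticate user", "R3: print report"]
vecs = [(1.0, 0.1), (0.9, 0.2), (0.0, 1.0)]
print(cluster(reqs, vecs))  # two clusters: R1 with R2, and R3 alone
```

Each resulting cluster would correspond to one candidate sub-system; similar requirements (here, the two authentication-related ones) end up together.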
... They point out that they found no established techniques for the elicitation of emotional requirements. Even Lawrence Chung [39], in his detailed analysis of non-functional requirements, does not take emotional issues into account. Salen and Zimmerman [40], Laramee [41] and Marc Saltzzman [42] address emotional issues and their communication within requirements teams. ...
Chapter
The software development process is a complex exercise in which product characteristics are considered from different points of view, roles, responsibilities, and objectives. This requires software engineers to apply structured practices in each phase of the product life cycle. Requirements Engineering is the most important phase of this process; its objective is to elicit, analyze, manage, and specify the needs of clients and users, and then generate a document that serves as a guide for the remaining phases. This work proposes MoDeMaRe, a Model for Developing and Managing Requirements Engineering, with which requirements can be specified in a structured way for the subsequent phases of the life cycle. It integrates, through iterative processes, good practices from research in the area and the principles of logic, abstraction, and formal methods to generate a specification document with adequate quality criteria.
... Goal model elements can be allocated to stakeholders or systems, often referred to as actors or agents. Over the years, several common goal modeling languages and notations have been developed in the Requirements Engineering (RE) community, including popular ones such as i* [60], Keep All Objects Satisfied (KAOS) [55], the NFR Framework [15], Tropos [22], and the Goal-oriented Requirement Language (GRL) [36] part of ITU-T's User Requirements Notation (URN) standard. Several techniques have been proposed to analyze goal models. ...
Article
Full-text available
Goal-oriented requirements engineering approaches aim to capture desired goals and strategies of relevant stakeholders during early requirements engineering stages, using goal models. Socio-technical systems (STSs) involve a rich interplay of human actors (traditional stakeholders, described as actors in goal models) and technical systems. Actors may depend on each other for goals to be achieved, activities to be performed, and resources to be supplied. These dependencies create new opportunities by extending actors’ capabilities but may make the actor vulnerable if the dependee fails to deliver the dependum (knowingly or unintentionally). This paper proposes a novel quantitative metric, called the Actor Interaction Metric (AIM), to measure inter-actor dependencies in Goal-oriented Requirements Language (GRL) models. The metric is used to categorize inter-actor dependencies into positive (beneficial), negative (harmful), and neutral (no impact). Furthermore, the AIM metric is used to identify the most harmful/beneficial dependency for each actor. The proposed approach is implemented in a tool targeting the textual GRL language, part of the User Requirements Notation (URN) standard. We evaluate our approach experimentally using 13 GRL models, with positive results on applicability and scalability.
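The categorization step can be sketched minimally as below. The per-dependency impact scores and the actor/dependum names are hypothetical, and the scoring function is not the paper's AIM formula; only the positive/negative/neutral bucketing and the per-actor extremes are illustrated.

```python
# Hypothetical (depender, dependee, dependum) -> impact score on the depender.
# Positive = beneficial, negative = harmful; the numbers are illustrative only.
dependencies = {
    ("Customer", "Bank", "authorize sale"): 0.6,
    ("Customer", "Bank", "share data"):    -0.4,
    ("Bank", "Network", "route request"):   0.0,
}

def categorize(score):
    """Bucket a dependency's impact score, as the AIM categorization does conceptually."""
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def most_extreme(actor):
    """Most beneficial and most harmful dependency for a given depender actor."""
    own = {d: s for d, s in dependencies.items() if d[0] == actor}
    return max(own, key=own.get), min(own, key=own.get)

for dep, score in dependencies.items():
    print(dep, categorize(score))
print(most_extreme("Customer"))
```

For the toy data, "authorize sale" is the Customer's most beneficial dependency and "share data" the most harmful, which is the kind of per-actor ranking the metric supports.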
... Specifically, the Strategic Alignment Model (SAM) from [11] indicated two types of strategies (the business strategy and the IT strategy). A graphical representation of the business objectives and IT objectives (at the strategic level) is proposed to demonstrate these strategies of the MFin application, based on a Non-Functional Requirements (NFR) tree [6] to refine the decomposition of the Strategic Objectives. The business strategy of the MFin is concentrated around the main objective: Provide high-quality FinTech services. ...
Chapter
Digital transformation requires FinTech organizations to be agile and apply innovative approaches and flexible architectures that allow the delivery of new digital services to their clients, partners, and employees. To consolidate this perspective, this paper proposes an agile approach using organization modeling techniques to illustrate all management processes and disciplines in microservices-based FinTech software development. On the one hand, agile methods are development processes that drive the system life cycle in terms of incremental and iterative engineering techniques. On the other hand, microservices architecture offers agility, scalability, a faster deployment life cycle, and the ability to provide solutions using a blend of different technologies. Typically, such methods are well suited for implementing and adapting software process management to incorporate stakeholders’ requirements and expectations into the development life cycle immediately.
... Evaluation criteria related to non-functional requirements (NFRs) are important for naval vessel acquisitions and as noted by Andrews (2017: p. 72) are "a key hidden decision in the ship's style from the beginning of any ship design study". NFRs have been termed quality attributes, constraints, goals, or extra functional requirements (Chung et al., 2000) or "ilities" (Mirakhorli and Cleland-Huang, 2013). NFRs relevant to naval vessels could include: reliability, availability, maintainability, logistic supportability, compatibility, interoperability, training, human factors, safety, security and resilience. ...
Article
This paper describes a research programme to construct a Model-Based Systems Engineering (MBSE) methodology that supports acquiring organisations in the early stages of Off-the-Shelf (OTS) naval vessel acquisitions. A structured approach to design and requirements definition activities has been incorporated into the methodology to provide an easily implemented, reusable approach that supports defensible acquisition of OTS naval vessels through traceability of decisions. The methodology comprises two main parts. Firstly, a design space is developed from the capability needs using Set-Based Design principles, Model-Based Conceptual Design, and Design Patterns. A key idea is to employ Concept and Requirements Exploration to trim the design space to the region of OTS designs most likely to meet the needs. This region can be used to specify Request for Tender (RFT) requirements. Secondly, the methodology supports trades-off between the OTS design options proposed in the RFT responses using a multi-criteria decision making method. The paper includes an example implementation of the methodology for an indicative Offshore Patrol Vessel capability.
... Testing will be carried out by three methods, namely Load Testing, Stress Testing, and Backup Testing. Hardware performance counters are used to capture the resulting differences between systems; they are used by software engineers for measuring performance and allow software vendors to enhance their code [28]. As a benchmarking tool, HammerDB uses automated tasks and categorizes the benchmark into five randomly selected mixed transaction types, following these percentages: ...
Article
Full-text available
Data security is one of the most crucial aspects to focus on in system development. However, using such a feature to enhance data security might affect the system's performance. This study aims to observe how substantially Transparent Data Encryption, as a solution for data security on Microsoft SQL Server, affects the database management system's performance. System performance is measured with stress and load tests. This paper concentrates on the upsides of using Transparent Data Encryption over a standard database by finding how significant the performance degradation is in terms of Reliability and Efficiency.
... Goal-oriented requirements engineering is closely related to agent-oriented requirements engineering but explicitly captures non-functional requirements such as reliability, flexibility, integrity and adaptability, by representing them as particular cases of goals (sometimes called soft-goals). Examples of this approach are KAOS (Knowledge Acquisition in autOmated Specification) [22], which is a formal framework focused on requirements acquisition, and NFR [18], which focuses on the representation of, and reasoning about, non-functional requirements. ...
Thesis
Even though there is evidence of the suitability of the multi-agent approach to cope with the complexity of current systems, its use is not widespread in other areas of computing science, nor in industrial and commercial environments. This can be explained, particularly for agent-oriented methodologies, by the absence of key software engineering best practices. In particular, we have identified three groups of drawbacks that limit the use of agent-oriented methodologies: incomplete coverage of the development cycle, a lack of tools for supporting the development process, and a high degree of dependence on specific toolkits, methods or platforms. Although these issues negatively affect the applicability of the multi-agent approach in general, it is arguably for open systems that their effect is particularly noticeable. In this thesis, therefore, we aim to address the issues involved in taking existing agent-oriented methodologies to a point where they can be effectively applied to the development of open systems. In order to do so, we consider the combination of organisational design and agent design, as well as the methodological process itself. Specifically, we address organisational design by constructing a software engineering technique (software patterns) for the representation and incorporation of standard organisations into the organisational design of a multi-agent system. The agent design aspect is addressed by constructing an agent design phase which uses standard agent architectures through a pattern catalogue. Based on this, we develop a methodological process that combines the organisational and agent designs, and that also considers the use of iterations for making the development of a system more agile. This methodological process is exemplified and assessed by means of a case study.
Finally, we address the problem of monitoring the correct behaviour of agents in an open system, by constructing a model for the specification of open multi-agent systems.
... Previous work in the HIS non-functional requirements area includes Lowry's [12] report on technical evaluation of the usability of Electronic Health Records (EHR) and Horbst's systematic review on EHR quality requirements [13]. There is also an extensive literature on non-functional requirements and associated processes; however, its intended target and applicability is focused mainly on systems engineering [14,15] or more specifically software systems engineering [16][17][18], rather than enterprises. As the 'health enterprise' and its HIS form a typically large scale, complex and evolving System-of-Systems (SoS) with a long-term viability requirement, the priorities and relevance of the existing -ilities will differ. ...
Conference Paper
Full-text available
The increasing adoption of Health Information Systems (HIS) does not seem to have resolved the ongoing lack of ubiquitous, dependable and accurate patient information so as to effectively prevent medical errors. Previous work has identified multiple causes, including but not limited to improper or incomplete HIS implementation, incompatibility in healthcare standards, lack of proper data input and validation, and accelerating evolution of technology triggering instability of candidate solutions depending on it. This paper continues the research by describing high-level non-functional requirements that any solution should satisfy relying on current best-practice, and subsequently customizing an established international standard so as to define an evaluation framework that can be used to assess candidate HIS architectures. The ultimate aim of the research is to support selection of stable, sustainable long-term architectural solutions and thus to assist HIS strategic decision making and self-evolution supporting agility.
... One promising research direction to this work is to improve the current support given to Requirement Engineering activities. Some works such as the ones presented in [Webster et al. 2005] and [Chung et al. 1999] have been trying to connect the non-functional requirements and the functional requirements. In this sense, it is possible to represent in a requirements document the association among the functional requirements, the nonfunctional requirements and the law specification to deal with them. ...
Conference Paper
In this paper we propose to incorporate the Dependability Explicit Computing (DepEx) ideas into a law-governed approach in order to build dependable open multi-agent systems. We show that the law specification can explicitly incorporate dependability concerns, collect data and publish them in a metadata registry. This data can be used to realize DepEx and, for example, can help to guide design and runtime decisions. The advantages of using a law-governed approach are (i) the explicit specification of the dependability concerns; (ii) the automatic collection of the dependability metadata, reusing the mediators’ infrastructure present in law-governed approaches; and (iii) the ability to specify reactions to undesirable situations, thus preventing service failures.
Chapter
Sustainability poses key challenges in software development for its complexity. Our goal is to contribute with a reusable sustainability software requirements catalog. We started by performing a systematic mapping to elicit and extract sustainability-related properties, and synthesized the results in feature models. Next we used iStar to model a more expressive configurable catalog with the collected data, and implemented a tool with several operations on the sustainability catalog. The sustainability catalog was qualitatively evaluated regarding its readability, interest, utility, and usefulness by 50 participants from the domain. The results were encouraging, showing that, on average, 79% of the respondents found the catalog “Good” or “Very Good” in endorsing the quality criteria evaluated. This paper discusses the social and technical dimensions of the sustainability catalog.
Chapter
The advent of socio-technical, cyber-physical and artificial intelligence systems has broadened the scope of requirements engineering, which must now deal with new classes of requirements, concerning ethics, privacy and trust. This brings new challenges to Requirements Engineering, in particular regarding the understanding of the non-functional requirements behind these new types of systems. To address this issue, we propose the Ontology-based Requirements Engineering (ObRE) method, which aims to systematize the elicitation and analysis of requirements, by using an ontology to conceptually clarify the meaning of a class of requirements, such as privacy, ethicality and trustworthiness. We illustrate the working of ObRE by applying it to a real case study concerning trustworthiness requirements.
Article
The initial and crucial phase of automation and software development is identifying requirements and documenting them in an appropriate format, denoted the software requirement specification (SRS). The quality and productivity of the different phases of software development depend on the requirements specification. The quality of the end product of software development is proportionate to the quality of the SRS. Hence, estimating the quality of the SRS is essential. Though numerous metrics have been defined in contemporary research, the IEEE standard metrics are considered authentic for scaling SRS quality. This manuscript endeavors to redefine the measuring approaches of the IEEE standard metrics to improve SRS quality assessment. An experimental study exhibits the significance and robustness of the proposed approach compared to contemporary contributions in the recent literature.
Chapter
In business or in industry, some entities are in collaboration with each other when they work together with or without common objectives. In this paper, we are interested in this collaboration relationship in the context of aeronautics. More precisely, we focus on a use case in which two actors’ objectives are respectively to design an aircraft and to design the assembly line for this aircraft. Following some previous work on coopetition, we analyse the dependency relationship between these actors and propose i* models. In order to solve dependency cycle issues, we introduce a third actor that is in charge of realising trade-offs between the two designs. Finally, we show how an existing methodology could be applied for supporting this trade-off activity.
Article
Internet of things (IoT) technologies are becoming a more and more widespread part of civilian life in common urban spaces, which are rapidly turning into cyber–physical spaces. Simultaneously, the fear of terrorism and crime in such public spaces is ever‐increasing. Due to the resulting increased demand for security, video‐based IoT surveillance systems have become an important area for research. Considering the large number of devices involved in the illicit recognition task, we conducted a field study at a Dutch Easter music festival in a national interest project called VISOR to select the most appropriate device configuration in terms of performance and results. We iteratively architected solutions for the security of cyber–physical spaces using IoT devices. We tested the performance of multiple federated devices encompassing drones, closed‐circuit television, smartphone cameras, and smart glasses to detect real‐case scenarios of potentially malicious activities such as mosh pits and pick‐pocketing. Our results pave the way to select optimal IoT architecture configurations—that is, a mix of CCTV, drones, smart glasses, and camera phones in our case—to make safer cyber–physical spaces a reality.
Article
In Requirements Engineering (RE), goal models are used to represent stakeholder objectives, also known as system requirements or system goals. Stakeholder requirements are of two types: functional requirements and non-functional requirements. Goal models are analysed to find suitable functional requirements amongst the ensemble of all functional requirements. The RE literature has addressed both qualitative and quantitative methods for performing goal analysis. Recently, operations research techniques have been used for performing optimal goal analysis. The existing optimisation approaches focus on maximising objective functions, but real-world problems involve simultaneous optimisation of both maximising and minimising objective functions. In this study, a game-theory-based approach is used for solving simultaneous optimisation of both maximum and minimum objective functions in goal models. The proposed approach is applied to the Goal-oriented Requirement Language (GRL) framework, which is perceived as a standard for goal modelling. The practicality of the proposed approach was assessed by running the case studies in a simulated environment using Java Eclipse combined with the IBM CPLEX tool. The results showed that the proposed approach aids the analysis of goals in goal models with opposing objective functions.
Article
Full-text available
Various architectures can be applied in software design. The aim of this research is to examine a typical implementation of Jakarta EE monolithic and microservice software architectures in the context of software quality attributes. Software quality standards are used to define quality models, as well as quality characteristics and sub-characteristics, i.e., software quality attributes. This paper evaluates monolithic and microservice architectures in the context of Coupling, Testability, Security, Complexity, Deployability, and Availability quality attributes. The performed examinations yielded a quality-based mixed integer goal programming mathematical model for software architecture optimization. The model incorporates various software metrics and considers their maximal, minimal or targeted values, as well as upper and lower deviations. The objective is the sum of all deviations, which should be minimal. Considering the presented model, a solution which incorporated multiple monoliths and microservices was defined. This way, the internal structure of the software is defined in a consistent and symmetrical context, while the external software behavior remains unchanged. In addition, an intersection point of monolithic and microservice software architectures, where software metrics obtain the same values, was introduced. Within the intersection point, either one of the architectures can be applied. With the exception of some metrics, an increase in the number of features leads to a value increase of software metrics in microservice software architecture, whilst these values are constant in monolithic software architecture. An increase in the number of features indicated a quality attribute’s importance for the software system should be examined and an appropriate architecture should be selected accordingly. Finally, practical recommendations regarding software architectures in terms of software quality were given. 
Since each software system needs to meet non-functional as well as functional requirements, a quality-driven software engineering process can be established.
Article
Full-text available
Most of the research related to Non-Functional Requirements (NFRs) has presented NFR frameworks that integrate non-functional requirements with functional requirements, whereas we propose that some NFRs (e.g., cost and performance) can be measured and others (e.g., usability) can be scaled. Our novel hybrid approach integrates three things rather than two: Functional Requirements (FRs), Measurable NFRs (M-NFRs) and Scalable NFRs (S-NFRs). We have also found the use of Fuzzy Logic and the Likert Scale effective for handling discretely measurable as well as scalable NFRs, as these techniques provide a simple way to arrive at a discrete or scalable NFR in contrast to a vague, ambiguous, imprecise, noisy or missing NFR. Our approach can act as a baseline for new NFR and aspect-oriented frameworks by using all types of UML diagrams.
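The fuzzy-logic idea for scalable NFRs can be sketched as follows, assuming triangular membership functions over a 1–5 Likert rating. The cut-points and linguistic terms are hypothetical, not the authors' scheme.

```python
# Illustrative sketch: map a 1-5 Likert rating of a scalable NFR such as
# usability onto fuzzy membership degrees for "low"/"medium"/"high".
# The triangle parameters below are assumed, not taken from the paper.

def triangular(x, a, b, c):
    """Degree of membership of x in a triangle rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 1.0 if x == b else 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(likert):
    """Fuzzify a Likert rating into membership degrees for three linguistic terms."""
    return {
        "low":    triangular(likert, 0, 1, 3),
        "medium": triangular(likert, 1, 3, 5),
        "high":   triangular(likert, 3, 5, 6),
    }

print(fuzzify(4))
# → {'low': 0.0, 'medium': 0.5, 'high': 0.5}
```

A rating of 4 is thus partly "medium" and partly "high", which is the kind of graded judgment a crisp scale cannot express.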
Article
Full-text available
Integrating user‐centered approaches into development processes is one of the main challenges nowadays, deriving from the different objectives of the software engineering (SE) and human‐computer interaction (HCI) fields. For SE experts, the main goal is quality code creation, whereas for HCI professionals, it is continuous product interaction with the users. The major question is what tools and timings can be used together to achieve these goals effectively. Therefore, this article provides comparative, exploratory, and qualitative research about possible solutions on how practitioners transfer HCI values and practices to SE processes. The current practice of software companies was studied by conducting interviews with a sample of 13 Hungarian Information Technology companies to explore their SE processes along several dimensions (applied development models, the integration of user‐centered methods, and user experience [UX] maturity). According to preliminary expectations, the development processes of the various companies proceed in different steps; nevertheless, they can be well grouped together based on the UX methods applied. The results, representing the various user‐centered processes, can be considered useful for future decision makers of software companies worldwide.
Article
Full-text available
Over a decade since transparency was introduced as a first-class concept in computing, transparency is still an emerging concept that is quite poorly understood. Also, despite existing research contributions, transparency is yet to be incorporated into the software engineering practice, and the promise it holds remains unfulfilled. Although there is evidence of increasing stakeholders’ demand for software and process transparency, the realization of such demand is yet to be fully witnessed within the software engineering practice. There is a need to uncover transparency and how it has so far been conceptualized, operationalized, and challenges faced. We applied a systematic literature review method in search of articles published between January 2006 and March 2022. This study reports a systematic review of the explicit conceptualization and application of transparency in 18 articles out of a total of 162 selected for review. Our study found that transparency remains an under-researched non-functional quality requirement concept, especially as it impacts information and software systems development. Of the 18 articles reviewed, only three studies representing 16.67% conceptualized transparency in software development and focused on the transparency of software artifacts. The remaining 83.33% of studies conceptualized transparency in information systems, focusing on general information and fully functional information systems. Transparency is yet to be fully explored from a theoretical gathering point of view and as a non-functional indicator of software quality hence its slow adoption and incorporation into mainstream software practice. Apart from providing a catalog of transparency factors that stakeholders can use to evaluate transparency achievement, the paper proposed a roadmap to enhance transparency implementation and also provides future research directions.
Conference Paper
Full-text available
In the early phases of a project, software architects and developers design solutions to satisfy quality concerns. However, as a byproduct of the long-term maintenance effort, qualities tend to erode, causing quality-related bugs to surface across the codebase. In principle, quality-related concerns can not only be expensive and difficult to detect, but can also have a detrimental effect on the system operating as intended. Moreover, quality-related concerns can directly affect users' experiences at large. To address this problem, we build a quality-based bug classifier that leverages several feature selection techniques, TF-IDF, Chi-square (χ²), Mutual Information, and Extra Randomized Trees, incorporating various machine learning algorithms. Our results indicate that Random Forest with the (TF-IDF + χ²) configuration achieved the best results for detecting six quality-related bug types, achieving a precision of 76%, recall of 70%, and F1 of 70%. However, the same approach returned a low precision of 48%, recall of 15%, and F1 of 23% for detecting functional-related bugs. We argue that such low performance results from overlapping content between functional and quality-related information, which opens another challenging topic that we aim to expand on in future work.
Article
Full-text available
The present research estimates the efficacy of a legacy program and the areas of its development. The research also intends to determine, on the basis of the estimation approach, to what extent reengineering of a legacy program has to be done. The study outlines current issues and trends in the reengineering of legacy programs from various perspectives. An all-inclusive literature review reveals that much work has already been done on legacy system estimation and the reengineering domain, yet the basic assumptions of Complexity, Quality and Effort have not been worked out collectively. Hence the present research underlines this very maxim and studies the reengineering of a legacy program on the paradigms of Quality, Complexity, and Effort Estimation collectively. The findings put forward an equation and a reengineering scale which would be highly compatible with present technology for the feasibility of effective reengineering.
Article
Context The definition and assessment of software quality have not converged to a single specification. Each team may formulate its own notion of quality and its own tools and methodologies for measuring it. Software quality can be improved via code changes, most often as part of a software maintenance loop. Objective This manuscript contributes towards providing decision support for code change selection given a) a set of preferences on a software product’s qualities and b) a pool of heterogeneous code changes to select from. Method We formulate the problem as an instance of Multiple-Criteria Decision Making, for which we provide both an abstract flavor and a prototype implementation. Our prototype targets energy efficiency, technical debt and dependability. Results This prototype achieved inconsistent results, in the sense of not always recommending changes reflecting the decision maker’s preferences. Encouraged by some positive cases and cognizant of our prototype’s shortcomings, we propose directions for future research. Conclusion This paper should thus be viewed as an imperfect first step towards quality-driven, code change-centered decision support and, simultaneously, as a curious yet pragmatic enough gaze on the road ahead.
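The decision step can be illustrated with a plain weighted-sum sketch of Multiple-Criteria Decision Making. The candidate changes, criteria scores, and weights are hypothetical, and the paper's prototype uses a fuller MCDM formulation than a simple weighted sum.

```python
# Hypothetical benefit scores (0..1, higher is better) of candidate code changes
# against the three qualities the prototype targets.
changes = {
    "refactor-cache":  {"energy": 0.8, "tech_debt": 0.3, "dependability": 0.5},
    "add-retry-logic": {"energy": 0.2, "tech_debt": 0.4, "dependability": 0.9},
}
# Decision maker's preferences over the qualities (sum to 1).
weights = {"energy": 0.2, "tech_debt": 0.3, "dependability": 0.5}

def score(change):
    """Weighted sum of a change's per-criterion scores."""
    return sum(weights[criterion] * v for criterion, v in changes[change].items())

best = max(changes, key=score)
print(best, round(score(best), 2))
# → add-retry-logic 0.61
```

With dependability weighted highest, the retry-logic change wins; shifting weight towards energy efficiency would flip the recommendation, which is exactly the preference sensitivity the paper studies.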
Article
Full-text available
Goal-oriented models are gaining significant attention from researchers and practitioners in various domains, especially in software requirements engineering. Similar to other software engineering models, goal models are subject to bad practices (i.e., bad smells). Detecting and rectifying these bad smells would improve the quality of these models. In this paper, we formally define the circular dependency bad smell and then develop an approach based on the simulated annealing (SA) search-based algorithm to detect its instances. Furthermore, we propose two mechanisms (namely, pruning and pairing) to improve the effectiveness of the proposed approach. We empirically evaluate three algorithm combinations, i.e., (1) the base SA search algorithm, (2) the base SA search algorithm augmented with pruning mechanism, and (3) the base SA search algorithm augmented with pruning and pairing mechanisms, using several case studies. Results show that simulated annealing augmented with pruning and pairing is the most effective approach, while the simulated annealing augmented with pruning mechanism is more effective than the base SA search algorithm. We also found that the proposed pruning and pairing mechanisms provide a significant improvement in the detection of circular dependency bad smell, in terms of computation time and accuracy.
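The circular-dependency bad smell itself can be checked exactly on small models with depth-first search; the paper's simulated-annealing search is aimed at scaling this detection to large models. A DFS baseline sketch, with hypothetical goal-model element names:

```python
# Baseline detector for the circular-dependency bad smell in a small goal model:
# depth-first search for a cycle in the dependency graph.

def find_cycle(deps):
    """deps: {element: [elements it depends on]}. Return one cycle as a path, or None."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on current path / done
    color = {n: WHITE for n in deps}
    stack = []

    def dfs(n):
        color[n] = GRAY
        stack.append(n)
        for m in deps.get(n, []):
            if color.get(m, WHITE) == GRAY:            # back edge: cycle found
                return stack[stack.index(m):] + [m]
            if color.get(m, WHITE) == WHITE:
                found = dfs(m)
                if found:
                    return found
        color[n] = BLACK
        stack.pop()
        return None

    for n in list(deps):
        if color[n] == WHITE:
            found = dfs(n)
            if found:
                return found
    return None

model = {"GoalA": ["TaskB"], "TaskB": ["ResourceC"], "ResourceC": ["GoalA"]}
print(find_cycle(model))
# → ['GoalA', 'TaskB', 'ResourceC', 'GoalA']
```

Exhaustive DFS is fine here; the pruning and pairing mechanisms in the abstract improve the search-based approach precisely where exhaustive traversal becomes too expensive.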
Chapter
Every system that is fielded must be operated and maintained (supported). Lack of supportability analysis and requirements will affect the design and deployment and may make the resulting system unsupportable in the field. Here we discuss the notion of supportability and its criticality to successful system development and operations. We start by describing a sensor system and then move into more general discussions of supportability.