Conference Paper

Unleashing the Power of Implicit Feedback in Software Product Lines: Benefits Ahead


... An FM can include other artefacts that are used for analysis purposes, for instance a table with desired or existing configurations, user ratings of features, or implicit feedback of users during product usage [36,40], to mention just a few. It is important to clarify that the software product line engineering process of Figure 2.2 is an iterative process, and some of the additional artefacts for FM analysis can be produced in other stages of the process, for example the implicit feedback of feature usage or the feature ratings after deploying a product. ...
Chapter
Full-text available
Developing and maintaining Feature Models (FMs) can become an error-prone activity. In this chapter, we focus on different aspects of analyzing relevant properties of FMs. Such an analysis helps to increase the maintainability and correctness of FMs and also makes them more manageable in industrial settings. Analysis operations are discussed in detail and also presented formally. In addition to analysis operations, we also show how to automatically determine erroneous elements of an FM that have to be adapted or deleted in order to restore the intended FM semantics.
... The project is maintained and promoted by four different universities and its spirit is to serve as a common base for the development of FM analysis and configuration capabilities. Many applications use flama as the basis for their analysis capabilities [15,49,58,59,71,77,95,96,103]. ...
Chapter
Full-text available
Feature Models (FMs) are not only an active scientific topic but they are supported by many tools from industry and academia. In this chapter, we provide an overview of example feature modelling tools and corresponding FM configurator applications. In our discussion, we first focus on different tools supporting the design of FMs. Thereafter, we provide an overview of tools that also support FM analysis. Finally, we discuss different existing FM configurator applications.
Chapter
Full-text available
In this chapter, we discuss different AI techniques that can be applied to support interactive FM configuration scenarios. We have in mind situations where the user of an FM configurator is in need of support, for example, in terms of requiring recommendations and related explanations for feature inclusions or exclusions, or recommendations of how to get out of an inconsistent situation. We show how to support feature selection on the basis of recommendation technologies and also show how to apply the concepts of conflict detection and model-based diagnosis to support users in inconsistent situations as well as in the context of reconfiguration.
Chapter
Full-text available
In this chapter, we describe the basis of Feature Models (FMs) using graphical as well as textual representations. We introduce a smartwatch FM that will be used as a working example for this and later chapters. Based on this example, we describe feature modelling extensions using cardinalities and attributes. In the following, we show how FMs can be translated into a formal representation (constraint satisfaction problems and SAT problems) and introduce corresponding definitions of an FM configuration task and a corresponding FM configuration (also known as configuration, product, or solution). Finally, we discuss example machine learning (ML) approaches that can be applied in the context of feature modelling tasks.
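The translation into a propositional representation can be illustrated with a brute-force sketch. The feature names and constraints below are illustrative (loosely modelled on a smartwatch-style example, not taken from the chapter): each feature becomes a Boolean variable, the FM semantics becomes a conjunction of constraints, and valid configurations are enumerated exhaustively, which is feasible only for tiny models.

```python
import itertools

# Hypothetical toy feature model; names and constraints are illustrative.
FEATURES = ["smartwatch", "display", "analog", "digital", "gps"]

def is_valid(cfg):
    """Check the FM constraints for one assignment (dict: feature -> bool)."""
    return (
        cfg["smartwatch"]                          # root feature is mandatory
        and cfg["display"] == cfg["smartwatch"]    # mandatory child of root
        and (not cfg["analog"] or cfg["display"])  # analog requires display
        and (not cfg["digital"] or cfg["display"]) # digital requires display
        and (cfg["analog"] != cfg["digital"])      # XOR group: exactly one
        and (not cfg["gps"] or cfg["digital"])     # cross-tree: gps => digital
    )

def all_configurations():
    """Enumerate the configuration space; feasible only for tiny FMs."""
    for bits in itertools.product([False, True], repeat=len(FEATURES)):
        cfg = dict(zip(FEATURES, bits))
        if is_valid(cfg):
            yield cfg

products = list(all_configurations())
```

Real FM analysis tools hand the same conjunction to a SAT or CSP solver instead of enumerating, which is what makes analysis of non-toy models tractable.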
Article
Full-text available
For the last ten years, software product line (SPL) tool developers have been facing the implementation of different variability requirements and the support of SPL engineering activities demanded by emergent domains. Despite systematic literature reviews identifying the main characteristics of existing tools and the SPL activities they support, these reviews do not always help to understand if such tools provide what complex variability projects demand. This paper presents an empirical research in which we evaluate the degree of maturity of existing SPL tools focusing on their support of variability modeling characteristics and SPL engineering activities required by current application domains. We first identify the characteristics and activities that are essential for the development of SPLs by analyzing a selected sample of case studies chosen from application domains with high variability. Second, we conduct an exploratory study to analyze whether the existing tools support those characteristics and activities. We conclude that, with the current tool support, it is possible to develop a basic SPL approach. But we have also found out that these tools present several limitations when dealing with complex variability requirements demanded by emergent application domains, such as non-Boolean features or large configuration spaces. Additionally, we identify the necessity for an integrated approach with appropriate tool support to completely cover all the activities and phases of SPL engineering. To mitigate this problem, we propose different road maps using the existing tools to partially or entirely support SPL engineering activities, from variability modeling to product derivation.
Article
Full-text available
Many analyses on configurable software systems are intractable when confronted with colossal and highly-constrained configuration spaces. These analyses could instead use statistical inference, where a tractable sample accurately predicts results for the entire space. To do so, the laws of statistical inference require each member of the population to be equally likely to be included in the sample, i.e., the sampling process needs to be “uniform”. SAT-samplers have been developed to generate uniform random samples at a reasonable computational cost. However, there is a lack of experimental validation over colossal spaces to show whether the samplers indeed produce uniform samples or not. This paper (i) proposes a new sampler named BDDSampler, (ii) presents a new statistical test to verify sampler uniformity, and (iii) reports the evaluation of BDDSampler and five other state-of-the-art samplers: KUS, QuickSampler, Smarch, Spur, and Unigen2. Our experimental results show only BDDSampler satisfies both scalability and uniformity.
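The uniformity question the paper addresses can be illustrated at toy scale. The sketch below is not the statistical test proposed in the paper; it is a naive chi-square goodness-of-fit check against the uniform distribution, workable only when the population is small enough to enumerate (which real configuration spaces are not, hence the need for the paper's test).

```python
import random
from collections import Counter

def chi_square_uniformity(samples, population_size):
    """Naive chi-square goodness-of-fit statistic against uniform sampling.
    Larger values indicate a stronger departure from uniformity."""
    expected = len(samples) / population_size
    counts = Counter(samples)
    # Categories that were never drawn contribute with observed count 0.
    return sum(
        (counts.get(i, 0) - expected) ** 2 / expected
        for i in range(population_size)
    )

random.seed(0)
population = 10
# A uniform sampler versus a sampler biased towards small values.
uniform_draws = [random.randrange(population) for _ in range(10_000)]
biased_draws = [min(random.randrange(population), random.randrange(population))
                for _ in range(10_000)]

stat_uniform = chi_square_uniformity(uniform_draws, population)
stat_biased = chi_square_uniformity(biased_draws, population)
```

The biased sampler yields a much larger statistic, which is the kind of deviation a uniformity test is designed to expose.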
Conference Paper
Full-text available
Continuous Deployment (CD) advocates for quick and frequent deployments of software to production. The goal is to bring new functionality as early as possible to users while learning from their usage. CD has emerged from web-based applications where it has been gaining traction over the past years. While CD is appealing for many software development organizations, empirical evidence on perceived benefits in software-intensive embedded systems is scarce. The objective of this paper is to identify perceived benefits after transitioning to continuous deployment from a long-cycle release and deployment process. To do that, a case study at a multinational telecommunication company was conducted focusing on large and complex embedded software; the Third Generation (3G) Radio Access Network (RAN) software.
Article
Full-text available
Agile software development (ASD) and software product line (SPL) have shown significant benefits for software engineering processes and practices. Although both methodologies promise similar benefits, they are based on different foundations. SPL encourages systematic reuse that exploits the commonalities of various products belonging to a common domain and manages their variations systematically. In contrast, ASD stresses a flexible and rapid development of products using iterative and incremental approaches. ASD encourages active involvement of customers and their frequent feedback. Both ASD and SPL require alternatives to extend agile methods for several reasons such as (1) to manage reusability and variability across the products of any domain, (2) to avoid the risk of developing core assets that will become obsolete and not used in future projects, and (3) to meet the requirements of changing markets. This motivates researchers to integrate ASD and SPL approaches. As a result, an innovative approach called agile product line engineering (APLE), which integrates SPL and ASD, has been introduced. The principal aim of APLE is to maximize the benefits of ASD and SPL and address the shortcomings of both. However, combining both is a major challenge. Researchers have proposed a few approaches that try to put APLE into practice, but none of the existing approaches cover all APLE features needed. This paper proposes a new dynamic variability approach for APLE that uses APLE practices for reusing features. The proposed approach (PA) is based on the agile method Scrum and the reactive approach of SPL. In this approach, reusable core assets respond reactively to customer requirements. The PA constructs and develops the SPL architecture iteratively and incrementally. It provides the benefits of reusability and maintainability of SPLs while keeping the delivery-focused approach of agile methods.
We conducted a quantitative survey of software companies applying APLE to assess the performance of the PA and the hypotheses of the empirical study. The findings of the empirical evaluation provide evidence on the integration of ASD and SPL and the application of APLE in practice.
Conference Paper
Full-text available
The evolution of variant-rich systems is a challenging task. To support developers, the research community has proposed a range of different techniques over the last decades. However, many techniques have not been adopted in practice so far. To advance such techniques and to support their adoption, it is crucial to evaluate them against realistic baselines, ideally in the form of generally accessible benchmarks. To this end, we need to improve our empirical understanding of typical evolution scenarios for variant-rich systems and their relevance for benchmarking. In this paper, we establish eleven evolution scenarios in which benchmarks would be beneficial. Our scenarios cover typical lifecycles of variant-rich systems, ranging from clone & own to adopting and evolving a configurable product-line platform. For each scenario, we formulate benchmarking requirements and assess its clarity and relevance via a survey with experts in variant-rich systems and software evolution. We also surveyed the existing benchmarking landscape, identifying synergies and gaps. We observed that most scenarios, despite being perceived as important by experts, are only partially or not at all supported by existing benchmarks, a call to arms for building community benchmarks upon our requirements. We hope that our work raises awareness for benchmarking as a means to advance techniques for evolving variant-rich systems, and that it will lead to a benchmarking initiative in our community.
Article
Full-text available
In the last decade, modern data analytics technologies have enabled the creation of software analytics tools offering real-time visualization of various aspects related to software development and usage. These tools seem to be particularly attractive for companies doing agile software development. However, the information provided by the available tools is neither aggregated nor connected to higher quality goals. At the same time, assessing and improving software quality has also been a key target for the software engineering community, yielding several proposals for standards and software quality models. Integrating such quality models into software analytics tools could close the gap by providing the connection to higher quality goals. This study aims at understanding whether the integration of quality models into software analytics tools provides understandable, reliable, useful, and relevant information at the right level of detail about the quality of a process or product, and whether practitioners intend to use it. Over the course of more than one year, the four companies involved in this case study deployed such a tool to assess and improve software quality in several projects. We used standardized measurement instruments to elicit the perception of 22 practitioners regarding their use of the tool. We complemented the findings with debriefing sessions held at the companies. In addition, we discussed challenges and lessons learned with four practitioners leading the use of the tool. Quantitative and qualitative analyses provided positive results; i.e., the practitioners’ perception with regard to the tool’s understandability, reliability, usefulness, and relevance was positive. Individual statements support the statistical findings and constructive feedback can be used for future improvements. 
We conclude that potential for future adoption of quality models within software analytics tools definitely exists and encourage other practitioners to use the presented seven challenges and seven lessons learned and adopt them in their companies.
Conference Paper
Full-text available
The analysis of software product lines is challenging due to the potentially large number of products, which grow exponentially in terms of the number of features. Product sampling is a technique used to avoid exhaustive testing, which is often infeasible. In this paper, we propose a classification for product sampling techniques and classify the existing literature accordingly. We distinguish the important characteristics of such approaches based on the information used for sampling, the kind of algorithm, and the achieved coverage criteria. Furthermore, we give an overview on existing tools and evaluations of product sampling techniques. We share our insights on the state-of-the-art of product sampling and discuss potential future work.
Conference Paper
Full-text available
Context: Software evolution ensures that software systems in use stay up to date and provide value for end-users. However, it is challenging for requirements engineers to continuously elicit needs for systems used by heterogeneous end-users who are out of organisational reach. Objective: We aim at supporting continuous requirements elicitation by combining user feedback and usage monitoring. Online feedback mechanisms enable end-users to remotely communicate problems, experiences, and opinions, while monitoring provides valuable information about runtime events. It is argued that bringing both information sources together can help requirements engineers to understand end-user needs better. Method/Tool: We present FAME, a framework for the combined and simultaneous collection of feedback and monitoring data in web and mobile contexts to support continuous requirements elicitation. In addition to a detailed discussion of our technical solution, we present the first evidence that FAME can be successfully introduced in real-world contexts. Therefore, we deployed FAME in a web application of a German small and medium-sized enterprise (SME) to collect user feedback and usage data. Results/Conclusion: Our results suggest that FAME not only can be successfully used in industrial environments but that bringing feedback and monitoring data together helps the SME to improve their understanding of end-user needs, ultimately supporting continuous requirements elicitation.
Article
Full-text available
Feature models have been used since the 1990s to describe software product lines as a way of reusing common parts in a family of software systems. In 2010, a systematic literature review was published summarizing the advances and settling the basis of the area of Automated Analysis of Feature Models (AAFM). From then on, different studies have applied the AAFM in different domains. In this paper, we provide an overview of the evolution of this field since 2010 by performing a systematic mapping study considering 423 primary sources. We found six different variability facets where the AAFM is being applied that define the tendencies: product configuration and derivation; testing and evolution; reverse engineering; multi-model variability-analysis; variability modelling and variability-intensive systems. We also confirmed that there is a lack of industrial evidence in most of the cases. Finally, we present where and when the papers have been published and who are the authors and institutions that are contributing to the field. We observed that the maturity is proven by the increment in the number of journals published along the years as well as the diversity of conferences and workshops where papers are published. We also suggest some synergies with other areas such as cloud or mobile computing among others that can motivate further research in the future.
Article
Full-text available
A software product line comprises a family of software products that share a common set of features. Testing an entire product-line product-by-product is infeasible due to the potentially exponential number of products in the number of features. Accordingly, several sampling approaches have been proposed to select a presumably minimal, yet sufficient number of products to be tested. Since the time budget for testing is limited or even a priori unknown, the order in which products are tested is crucial for effective product-line testing. Prioritizing products is required to increase the probability of detecting faults faster. In this article, we propose similarity-based prioritization, which can be efficiently applied on product samples. In our approach, we incrementally select the most diverse product in terms of features to be tested next in order to increase feature interaction coverage as fast as possible during product-by-product testing. We evaluate the gain in the effectiveness of similarity-based prioritization on three product lines with real faults. Furthermore, we compare similarity-based prioritization to random orders, an interaction-based approach, and the default orders produced by existing sampling algorithms considering feature models of various sizes. The results show that our approach potentially increases effectiveness in terms of fault detection ratio concerning faults within real-world product-line implementations as well as synthetically seeded faults. Moreover, we show that the default orders of recent sampling algorithms already show promising results, which, however, can still be improved in many cases using similarity-based prioritization.
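The greedy idea behind similarity-based prioritization can be sketched in a few lines. The following is an illustrative sketch, not the authors' implementation: products are represented as feature sets, and each next product to test is the one maximising its minimum Jaccard distance to the products already ordered.

```python
def jaccard_distance(a, b):
    """Distance between two products, each a frozenset of selected features."""
    union = a | b
    return 1 - len(a & b) / len(union) if union else 0.0

def similarity_prioritize(sample):
    """Order products so that each next product is the most dissimilar,
    feature-wise, to all already-ordered ones (greedy max-min distance)."""
    remaining = list(sample)
    ordered = [remaining.pop(0)]  # simple choice of a starting product
    while remaining:
        # Pick the product maximising its minimum distance to ordered ones.
        nxt = max(remaining,
                  key=lambda p: min(jaccard_distance(p, q) for q in ordered))
        ordered.append(nxt)
        remaining.remove(nxt)
    return ordered

# Tiny hypothetical sample of four products (feature sets).
sample = [frozenset({"a", "b"}), frozenset({"a", "b", "c"}),
          frozenset({"d", "e"}), frozenset({"a"})]
order = similarity_prioritize(sample)
```

Starting from {a, b}, the greedy step picks the disjoint product {d, e} next, since it maximises feature diversity early, which is exactly the intuition behind increasing interaction coverage as fast as possible.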
Conference Paper
Full-text available
E-commerce is a steadily growing business sector with considerable potential. The continuing growth in customers leads to increasingly serious competition: new players keep entering the market with additional features, putting pressure on pricing, and a pricing war can significantly affect the profitability of companies and may depress market prices over the long term. Google Analytics is an analytics tool from Google that helps trace visitors and gather a wide range of valuable data about them. The tool has become very popular among site administrators and holds a large market share; its ease of use has established it as a good alternative to traditional web analytics tools. However, when it comes to delivering raw data, things get difficult: Google Analytics keeps the collected data aggregated without providing a proper capability to export raw data. It is an enterprise-oriented analysis tool that gives a straightforward view of site traffic and marketing effectiveness, with powerful and advanced features that provide insight into websites and improve site ROI. This paper comprises a case study on Google Analytics that demonstrates its components and the reports it generates. Based on the assessment of web usage, site owners can improve the efficiency of marketing and web traffic flow. The paper also presents the limitations of Google Analytics and proposes better approaches to address these concerns, analysing how these drawbacks can be tackled and whether Google Analytics can be seen as a state-of-the-art alternative for gathering data for web usage mining.
Chapter
Full-text available
Today, products within telecommunication, transportation, consumer electronics, home automation, security etc. involve an increasing amount of software. As a result, organizations that have a tradition within hardware development are transforming to become software-intensive organizations. This implies products where software constitutes the majority of functionality, costs, future investments, and potential. While this shift poses a number of challenges, it brings with it opportunities as well. One of these opportunities is to collect product data in order to learn about product use, to inform product management decisions, and for improving already deployed products. In this paper, we focus on the opportunity to use post-deployment data, i.e. data that is generated while products are used, as a basis for product improvement and new product development. We do so by studying three software development companies involved in large-scale development of embedded software. In our study, we highlight limitations in post-deployment data usage and we conclude that post-deployment data remains an untapped resource for most companies. The contribution of the paper is two-fold. First, we present key opportunities for more effective product development based on post-deployment data usage. Second, we propose a framework for organizations interested in advancing their use of post-deployment product data.
Conference Paper
Full-text available
Software-intensive product companies are becoming increasingly data-driven as can be witnessed by the big data and Internet of Things trends. However, optimally prioritizing customer needs in a mass-market context is notoriously difficult. While most companies use product owners or managers to represent the customer, research shows that the prioritization made is far from optimal. In earlier research, we have coined the term ‘the open loop problem’ to characterize this challenge. For instance, research shows that up to half of all the features in products are never used. This paper presents a conceptual model that emphasizes the need for combining qualitative feedback in early stages of development with quantitative customer observation in later stages of development. Our model is inductively derived from an 18 months close collaboration with six large global software-intensive companies.
Conference Paper
Full-text available
Combining Software Product Line Engineering (SPLE) and Agile Software Development (ASD) is an approach for companies working with similar systems in scenarios of volatile requirements, aiming to address fast changes and a systematic variability management. However, a development process covering the whole SPLE lifecycle and using agile practices in small and medium-sized development projects has not been established yet. There is a need to disseminate such a combination through well-defined roles, activities, tasks and artifacts. This paper presents SPLICE, a lightweight development process combining SPLE and agile practices, following reactive and extractive approaches to build similar systems. SPLICE addresses the needs of small development teams aiming to adopt SPL practices with low upfront costs and fast return on investment. In order to evaluate our proposal, we report our experience in a case study by developing Rescue MeSPL, a product line for mobile applications that assists users in emergency situations. The case study results indicate that SPLICE achieves the three evaluated aspects by providing short and appropriate iterations, possibilities for adapting activities, and continuous feedback.
Article
Software product line (SPL) scoping helps companies to define the boundaries of their resources, such as products, domains, and assets, which are the target of reuse tasks, covering both technical and organizational aspects. As scoping guides the management of the resources in SPL development, it becomes one of the core activities in this process. We can find in the literature several approaches on this topic, proposing techniques and methodologies to be applicable in different organizational scenarios. However, no work comprehensively reviews such approaches and describes the advances in the state of the art in recent years. In this context, we look into identifying, analyzing, and extracting detailed characteristics from SPL scoping proposals found in the literature. These characteristics allowed us to compare these approaches, reason about their applicability, and identify existing limitations and research opportunities. Thus, we conducted a systematic literature review alongside snowballing, following a well-defined protocol to retrieve, classify and extract information from the literature. We analyzed a total of 58 studies, identifying 41 different approaches in the field, highlighting their similarities and differences, and establishing a generic scoping process. Furthermore, we discuss research opportunities in the SPL scoping field.
Article
Most modern software systems (operating systems like Linux or Android, Web browsers like Firefox or Chrome, video encoders like ffmpeg, x264 or VLC, mobile and cloud applications, etc.) are highly configurable. Hundreds of configuration options, features, or plugins can be combined, each potentially with distinct functionality and effects on execution time, security, energy consumption, etc. Due to the combinatorial explosion and the cost of executing software, it quickly becomes impossible to exhaustively explore the whole configuration space. Hence, numerous works have investigated the idea of learning it from a small sample of configurations’ measurements. The pattern “sampling, measuring, learning” has emerged in the literature, with several practical interests for both software developers and end-users of configurable systems. In this systematic literature review, we report on the different application objectives (e.g., performance prediction, configuration optimization, constraint mining), use-cases, targeted software systems, and application domains. We review the various strategies employed to gather a representative and cost-effective sample. We describe automated software techniques used to measure functional and non-functional properties of configurations. We classify machine learning algorithms and how they relate to the pursued application. Finally, we also describe how researchers evaluate the quality of the learning process. The findings from this systematic review show that the potential application objective is important; there are a vast number of case studies reported in the literature related to particular domains or software systems. Yet, the huge variant space of configurable systems is still challenging and calls to further investigate the synergies between artificial intelligence and software engineering.
Chapter
[Context and motivation] According to Data-Driven Requirements Engineering (RE), explicit and implicit user feedback can be considered a relevant source of requirements, thus supporting requirements elicitation. [Question/problem] Less attention has been paid so far to the role of implicit feedback in RE tasks, such as requirements validation, and on how to specify what implicit feedback to collect and analyse. [Principal idea/results] We propose an approach that leverages goal-oriented requirements modelling combined with Goal-Question-Metric. We explore the applicability of the approach on an industrial project in which a platform for online training has been adapted to realise a citizen information service that has been used by hundreds of people during the COVID-19 pandemic. [Contributions] Our contribution is twofold: (i) we present our approach towards a systematic definition of requirements for data collection and analysis, at support of software requirements validation and evolution; (ii) we discuss our ideas using concrete examples from an industrial case study and formulate a research question that will be addressed by conducting experiments as part of our research.
Article
Developers often need to use appropriate APIs to program efficiently, but it is usually a difficult task to identify the exact one they need from a vast list of candidates. To ease the burden, a multitude of API recommendation approaches have been proposed. However, most of the currently available API recommenders do not support the effective integration of user feedback into the recommendation loop. In this paper, we propose a framework, BRAID (Boosting RecommendAtion with Implicit FeeDback), which leverages learning-to-rank and active learning techniques to boost recommendation performance. By exploiting user feedback information, we train a learning-to-rank model to re-rank the recommendation results. In addition, we speed up the feedback learning process with active learning. Existing query-based API recommendation approaches can be plugged into BRAID. We select three state-of-the-art API recommendation approaches as baselines to demonstrate the performance enhancement of BRAID measured by Hit@k (Top-k), MAP, and MRR. Empirical experiments show that, with acceptable overheads, the recommendation performance improves steadily and substantially with the increasing percentage of feedback data, comparing with the baselines.
Chapter
The shift from on-premise to cloud enterprise software has fundamentally changed the interactions between software vendors and users. Since enterprise software users are now working directly on an infrastructure that is provided or monitored by the software vendor, enterprise cloud software providers are technically able to measure nearly every interaction of each individual user with their cloud products. The novel insights into actual usage that can thereby be gained provide an opportunity for requirements engineering to improve and effectively extend enterprise cloud products while they are being used. Even though academic literature has been proposing ideas and conceptualizations of leveraging usage data in requirements engineering for nearly a decade, there are no functioning prototypes that implement such ideas. Drawing on an exploratory case study at one of the world’s leading cloud software vendors, we conceptualize an Action Design Research project that fills this gap. The project aims to establish a software prototype that supports requirements engineering activities to incrementally improve enterprise cloud software in the post-delivery phase based on actual usage data.
Article
Product lines are designed to support the reuse of features across multiple products. Features are product functional requirements that are important to stakeholders. In this context, feature models are used to establish a reuse platform and allow the configuration of multiple products through the interactive selection of a valid combination of features. Although there are many specialized configurator tools that aim to provide configuration support, they only assure that all dependencies from selected features are automatically satisfied. However, no support is provided to help decision makers focus on likely relevant configuration options. Consequently, since decision makers are often unsure about their needs, the configuration of large feature models becomes challenging. To improve the efficiency and quality of the product configuration process, we propose a new approach that provides users with a limited set of permitted, necessary and relevant choices. To this end, we adapt six state-of-the-art recommender algorithms to the product line configuration context. We empirically demonstrate the usability of the implemented algorithms in different domain scenarios, based on two real-world datasets of configurations. The results of our evaluation show that recommender algorithms, such as CF-shrinkage, CF-significance weighting, and BRISMF, when applied in the context of product-line configuration can efficiently support decision makers in the selection of features.
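As an illustration of the underlying idea (not the CF-shrinkage, significance-weighting, or BRISMF variants evaluated in the article), a minimal user-based collaborative-filtering sketch can score undecided features by similarity-weighted votes from historical configurations. All names and data below are hypothetical.

```python
# Rows of `history` are past configurations; 1 = feature selected, 0 = not.

def cosine(u, v):
    """Cosine similarity between two numeric vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(partial, history, features):
    """Rank undecided features for a partial configuration (dict feature->0/1)
    by similarity-weighted votes from historical configurations."""
    decided = [f for f in features if f in partial]
    scores = {}
    for f in features:
        if f in partial:
            continue
        num = den = 0.0
        for past in history:
            sim = cosine([partial[d] for d in decided],
                         [past[d] for d in decided])
            num += sim * past[f]
            den += sim
        scores[f] = num / den if den else 0.0
    return sorted(scores, key=scores.get, reverse=True)

features = ["a", "b", "c"]
history = [{"a": 1, "b": 1, "c": 0},
           {"a": 1, "b": 1, "c": 1},
           {"a": 0, "b": 0, "c": 1}]
ranking = recommend({"a": 1}, history, features)
```

A production configurator would additionally intersect such a ranking with the choices that the feature model's constraints still permit, so that only valid selections are ever recommended.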
Article
Context-aware recommender systems leverage the value of recommendations by exploiting context information that affects user preferences and situations, with the goal of recommending items that are really relevant to changing user needs. Despite the importance of context-awareness in the recommender systems realm, researchers and practitioners lack guides that help them understand the state of the art and how to exploit context information to smarten up recommender systems. This paper presents the results of a comprehensive systematic literature review we conducted to survey context-aware recommenders and their mechanisms to exploit context information. The main contribution of this paper is a framework that characterizes context-aware recommendation processes in terms of: i) the recommendation techniques used at every stage of the process, ii) the techniques used to incorporate context, and iii) the stages of the process where context is integrated into the system. This systematic literature review provides a clear understanding about the integration of context into recommender systems, including the context types most frequently used in the different application domains and validation mechanisms, explained in terms of the used datasets, properties, metrics, and evaluation protocols. The paper concludes with a set of research opportunities in this field.
Chapter
A Software Product Line (SPL) aims to support the development of a family of similar software products from a common set of shared assets. SPLs represent a long-term investment and have a considerable life-span. In order to realize a return on investment, companies dealing with SPLs often plan their product portfolios and software engineering activities strategically over many months or years ahead. Compared to single-system engineering, SPL evolution exhibits higher complexity due to the variability and the interdependencies between products. This chapter provides an overview of concepts and challenges in SPL evolution and summarizes the state of the art. For this, we first describe the general process for SPL evolution and general modeling concepts to specify SPL evolution. On this basis, we provide an overview of the state of the art in each of the main process tasks, which are migration towards SPLs, analysis of (existing) SPL evolution, planning of future SPL evolution, and implementation of SPL evolution.
Article
Maintenance of unused features leads to unnecessary costs. Therefore, identifying unused features can help product owners prioritize maintenance efforts. We present a tool that combines dynamic analysis and text mining techniques to identify use-case documents describing unused features, using them as an approximation of unnecessary features. We report on a preliminary study of an industrial business information system over the course of one year, quantifying unused features and measuring the performance of the approach. It indicates the relevance of the problem and the capability of the presented approach to detect unused features.
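The matching step the abstract describes can be sketched as follows: a use-case document is flagged as a candidate for an unused feature when the operations mentioned in its text never show up in the dynamically collected usage log. This is a minimal sketch under assumed data shapes (term sets per document, invocation counts per operation); the actual tool's text-mining pipeline is more involved, and all names here are illustrative.

```python
def find_unused_features(use_cases, usage_log, threshold=0):
    """Flag use-case documents whose descriptive terms appear at most
    `threshold` times in the usage log -- a proxy for unused features.
    `use_cases` maps a document name to its set of descriptive terms;
    `usage_log` maps an executed operation name to its invocation count."""
    unused = []
    for doc, terms in use_cases.items():
        invocations = sum(usage_log.get(term, 0) for term in terms)
        if invocations <= threshold:
            unused.append(doc)
    return unused

# Illustrative inputs: one exercised feature, one never executed.
use_cases = {
    "export_report": {"export", "report"},
    "archive_order": {"archive", "order"},
}
usage_log = {"export": 12, "report": 12}
print(find_unused_features(use_cases, usage_log))  # ['archive_order']
```

A production version would stem and weight terms (e.g. TF-IDF) rather than require exact matches, and would account for a monitoring period long enough to cover seasonal usage.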
Article
Highly configurable systems allow users to tailor software to specific needs. Valid combinations of configuration options are often restricted by intricate constraints. Describing options and constraints in a variability model allows reasoning about the supported configurations. To automate creating and verifying such models, we need to identify the origin of such constraints. We propose a static analysis approach, based on two rules, to extract configuration constraints from code. We apply it to four highly configurable systems to evaluate the accuracy of our approach and to determine which constraints are recoverable from the code. We find that our approach is highly accurate (93% and 77% for the two rules, respectively) and that we can recover 28% of existing constraints. We complement our approach with a qualitative study to identify constraint sources, triangulating results from our automatic extraction, manual inspections, and interviews with 27 developers. We find that, apart from low-level implementation dependencies, configuration constraints enforce correct runtime behavior, improve users' configuration experience, and prevent corner cases. While the majority of constraints are extractable from code, our results indicate that creating a complete model requires further substantial domain knowledge and testing. Our results aim at supporting researchers and practitioners working on variability model engineering, evolution, and verification techniques.
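One recurring code pattern behind such constraints is a preprocessor guard that raises an error when one option is enabled without another. A minimal, regex-based sketch of extracting "A requires B" constraints from that single pattern is shown below; the paper's actual rules operate on build systems and preprocessor structure far more thoroughly, and the option names here are made up.

```python
import re

def extract_constraints(source):
    """Derive (A, B) pairs meaning 'A requires B' from C preprocessor
    guards that raise an #error when A is defined but B is not.
    Handles only this one textual pattern -- a deliberate simplification."""
    pattern = re.compile(
        r"#if\s+defined\((\w+)\)\s*&&\s*!defined\((\w+)\)\s*\n\s*#error"
    )
    return [(a, b) for a, b in pattern.findall(source)]

# Illustrative C snippet with a guard enforcing a feature dependency.
code = """
#if defined(CONFIG_TLS) && !defined(CONFIG_CRYPTO)
#error "TLS support requires the crypto subsystem"
#endif
"""
print(extract_constraints(code))  # [('CONFIG_TLS', 'CONFIG_CRYPTO')]
```

Extracted pairs like these can then be checked against, or merged into, the constraints of an existing variability model.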
Article
Context: Due to increased competition and the advent of mass customization, many software firms are utilizing product families, groups of related products derived from a product platform, to provide product variety in a cost-effective manner. The key to designing a successful software product family is the product platform, so it is important to determine the product platform scope most appropriate to the business objectives of product line development. Aim: This paper proposes a novel method to find the optimized scope of a software product platform based on end-user features. Method: The proposed method, PPSMS (Product Platform Scoping Method for Software Product Lines), mathematically formulates product platform scope selection as an optimization problem. The problem formulation targets identification of an optimized product platform scope that maximizes life-cycle cost savings and the amount of commonality, while meeting the goals and needs of the envisioned customer segments. A simulated-annealing-based algorithm is then used to heuristically help the decision maker select a scope for the product platform, by performing trade-off analysis of the commonality and cost-savings objectives. Results: In a case study, PPSMS helped identify 5 non-dominated solutions considered to be of highest preference for decision making, taking into account both cost-savings and commonality objectives. A quantitative and qualitative analysis indicated that human experts perceived value in adopting the method in practice, and that it was effective in identifying an appropriate product platform scope.
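The simulated-annealing search over candidate platform scopes can be sketched as follows. This is a generic annealing skeleton with a single weighted objective combining cost savings and commonality, under assumed data shapes; it is not PPSMS itself, whose formulation, objectives, and cooling schedule differ, and all names and weights are illustrative.

```python
import math
import random

def anneal_platform_scope(features, products, cost_saving, steps=2000, seed=0):
    """Search for a platform scope (subset of features) by simulated
    annealing. `cost_saving` maps each feature to its reuse saving;
    commonality is the fraction of products containing every scope feature.
    The 10.0 weight combining the two objectives is arbitrary."""
    rng = random.Random(seed)

    def score(scope):
        if not scope:
            return 0.0
        savings = sum(cost_saving[f] for f in scope)
        common = sum(all(f in p for f in scope) for p in products) / len(products)
        return savings + 10.0 * common

    scope = set(rng.sample(sorted(features), k=len(features) // 2))
    best, best_score = set(scope), score(scope)
    for step in range(steps):
        temp = max(0.01, 1.0 - step / steps)  # linear cooling schedule
        candidate = set(scope)
        candidate.symmetric_difference_update({rng.choice(sorted(features))})
        delta = score(candidate) - score(scope)
        # Accept improvements always; accept worsening moves with a
        # probability that shrinks as the temperature drops.
        if delta > 0 or rng.random() < math.exp(delta / temp):
            scope = candidate
            if score(scope) > best_score:
                best, best_score = set(scope), score(scope)
    return best

# Illustrative inputs: three products sharing an authentication feature.
features = {"auth", "logging", "billing", "legacy_ui"}
products = [{"auth", "logging", "billing"}, {"auth", "logging"}, {"auth", "billing"}]
cost_saving = {"auth": 5, "logging": 3, "billing": 2, "legacy_ui": 1}
print(anneal_platform_scope(features, products, cost_saving))
```

A method like PPSMS keeps the two objectives separate to enumerate non-dominated (Pareto-optimal) scopes instead of collapsing them into one weighted score as done here.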
Paul Clements and Linda M. Northrop. 2002. Software Product Lines: Practices and Patterns. Addison-Wesley.
Houssem Chemingui, Camille Salinesi, Inès Gam, Raul Mazo, and Henda Ghezala. 2021. Devising Configuration Guidance with Process Mining Support. (2021).
Ruben Heradio, David Fernandez-Amoros, José A. Galindo, David Benavides, and Don Batory. 2022. Uniform and scalable sampling of highly configurable systems. Empirical Software Engineering 27, 2 (2022), 44.
Charles W. Krueger. 2006. New methods in software product line practice. Commun. ACM 49, 12 (2006), 37-40. https://doi.org/10.1145/1183236.1183262