Article

How can feature usage be tracked across product variants? Implicit Feedback in Software Product Lines

... This calls for implicit feedback to be moved to the SPL platform level. In previous work, we addressed the feasibility of specifying feedback requirements at the platform level by using features as the unit of tracking: the Feedback Model [10]. At the time the variant is generated from the Configuration Model, the Feedback Model is checked out. ...
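To make the excerpt above concrete, the sketch below illustrates how a platform-level Feedback Model might pair features with the usage metrics to collect, so that each derived variant inherits only the trackers of its selected features. This is our illustration of the idea, not the authors' implementation; all names are hypothetical.

```python
# Hypothetical sketch: a Feedback Model defined once at the SPL platform
# level, using features as the unit of tracking. Not the authors' code.

FEEDBACK_MODEL = {
    "Reporting":     ["open_count", "export_format"],
    "RemoteControl": ["session_duration"],
    "Diagnostics":   ["error_code_frequency"],
}

def derive_trackers(selected_features):
    """At variant derivation time, 'check out' the Feedback Model:
    keep only the trackers of the features in the Configuration Model."""
    return {f: FEEDBACK_MODEL[f]
            for f in selected_features if f in FEEDBACK_MODEL}

variant_config = ["Reporting", "Diagnostics"]  # a sample variant
print(derive_trackers(variant_config))
# {'Reporting': ['open_count', 'export_format'],
#  'Diagnostics': ['error_code_frequency']}
```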
Article
Full-text available
Software Product Lines (SPLs) aim at systematically reusing software assets, and deriving products (a.k.a., variants) out of those assets. However, it is not always possible to handle SPL evolution directly through these reusable assets. Time-to-market pressure, expedited bug fixes, or product specifics lead to the evolution to first happen at the product level, and to be later merged back into the SPL platform where the core assets reside. This is referred to as product-based evolution. In this scenario, deciding when and what should go into the next SPL release is far from trivial. Distinct questions arise. How much effort are developers spending on product customization? Which are the most customized core assets? To which extent is the core asset code being reused for a given product? We refer to this endeavor as Customization Analysis, i.e., understanding the functional increments in adjusting products from the last SPL platform release. The scale of the SPLs’ code-base calls for customization analysis to be conducted through Visual Analytics tools. This work addresses the design principles for such tools through a joint effort between academia and industry, specifically, Danfoss Drives, a company division in charge of the P400 SPL. Accordingly, we adopt an Action Design Research approach where answers are sought by interacting with the practitioners in the studied situations. We contribute by providing informed goals for customization analysis as well as an intervention in terms of a visual analytics tool. We conclude by discussing to what extent this experience can be generalized to product-based evolving SPL organizations other than Danfoss Drives.
Conference Paper
Full-text available
Continuous Deployment (CD) advocates for quick and frequent deployments of software to production. The goal is to bring new functionality to users as early as possible while learning from their usage. CD emerged from web-based applications, where it has been gaining traction over the past years. While CD is appealing for many software development organizations, empirical evidence on its perceived benefits in software-intensive embedded systems is scarce. The objective of this paper is to identify the benefits perceived after transitioning from a long-cycle release and deployment process to continuous deployment. To do that, a case study was conducted at a multinational telecommunication company, focusing on large and complex embedded software: the Third Generation (3G) Radio Access Network (RAN) software.
Article
Full-text available
Software product-line engineering is arguably one of the most successful methods for establishing large portfolios of software variants in an application domain. However, despite the benefits, establishing a product line requires substantial upfront investments into a software platform with a proper product-line architecture, into new software-engineering processes (domain engineering and application engineering), into business strategies with commercially successful product-line visions and financial planning, as well as into re-organization of development teams. Moreover, establishing a full-fledged product line is not always possible or desired, and thus organizations often adopt product-line engineering only to the extent that was deemed necessary or possible. However, understanding the current state of adoption, namely the maturity or performance of product-line engineering in an organization, is challenging, yet crucial to steer investments. To this end, several measurement methods have been proposed in the literature, the most prominent being the Family Evaluation Framework (FEF), introduced almost two decades ago. Unfortunately, applying it is not straightforward, and the benefits of using it have not been assessed so far. We present an experience report of applying the FEF to nine medium- to large-scale product lines in the avionics domain. We discuss how we tailored and executed the FEF, together with the relevant adaptations and extensions we needed to perform. Specifically, we elicited the data for the FEF assessment with 27 interviews over a period of 11 months. We discuss experiences and assess the benefits of using the FEF, aiming to help other organizations assess their practices for engineering their portfolios of software variants.
Article
Full-text available
Crowdsourcing is an appealing concept for achieving good-enough requirements and just-in-time requirements engineering (RE). A promising form of crowdsourcing in RE is the use of feedback on software systems, generated by a large network of anonymous users of these systems over a period of time. Prior research indicated implicit and explicit user feedback as key for RE practitioners to discover new and changed requirements and to decide on software features to add, enhance, or abandon. However, a structured account of the types and characteristics of user feedback useful for RE purposes is still lacking. This research fills the gap by providing a mapping study of literature on crowdsourced user feedback employed for RE purposes. On the basis of the analysis of 44 selected papers, we found nine pieces of metadata that characterize crowdsourced user feedback and that are employed in seven specific RE activities. We also found that the published research has a strong focus on crowd-generated comments (explicit feedback) for RE purposes, rather than on application logs or usage-generated data (implicit feedback). Our findings suggest a need to broaden the scope of research effort in order to leverage the benefits of both explicit and implicit feedback in RE.
Article
Full-text available
In the last decade, modern data analytics technologies have enabled the creation of software analytics tools offering real-time visualization of various aspects related to software development and usage. These tools seem to be particularly attractive for companies doing agile software development. However, the information provided by the available tools is neither aggregated nor connected to higher quality goals. At the same time, assessing and improving software quality has also been a key target for the software engineering community, yielding several proposals for standards and software quality models. Integrating such quality models into software analytics tools could close the gap by providing the connection to higher quality goals. This study aims at understanding whether the integration of quality models into software analytics tools provides understandable, reliable, useful, and relevant information at the right level of detail about the quality of a process or product, and whether practitioners intend to use it. Over the course of more than one year, the four companies involved in this case study deployed such a tool to assess and improve software quality in several projects. We used standardized measurement instruments to elicit the perception of 22 practitioners regarding their use of the tool. We complemented the findings with debriefing sessions held at the companies. In addition, we discussed challenges and lessons learned with four practitioners leading the use of the tool. Quantitative and qualitative analyses provided positive results; i.e., the practitioners’ perception with regard to the tool’s understandability, reliability, usefulness, and relevance was positive. Individual statements support the statistical findings and constructive feedback can be used for future improvements. We conclude that potential for future adoption of quality models within software analytics tools definitely exists and encourage other practitioners to use the presented seven challenges and seven lessons learned and adopt them in their companies.
Conference Paper
Full-text available
Context: Software evolution ensures that software systems in use stay up to date and provide value for end-users. However, it is challenging for requirements engineers to continuously elicit needs for systems used by heterogeneous end-users who are out of organisational reach. Objective: We aim at supporting continuous requirements elicitation by combining user feedback and usage monitoring. Online feedback mechanisms enable end-users to remotely communicate problems, experiences, and opinions, while monitoring provides valuable information about runtime events. It is argued that bringing both information sources together can help requirements engineers to understand end-user needs better. Method/Tool: We present FAME, a framework for the combined and simultaneous collection of feedback and monitoring data in web and mobile contexts to support continuous requirements elicitation. In addition to a detailed discussion of our technical solution, we present the first evidence that FAME can be successfully introduced in real-world contexts. Therefore, we deployed FAME in a web application of a German small and medium-sized enterprise (SME) to collect user feedback and usage data. Results/Conclusion: Our results suggest that FAME not only can be successfully used in industrial environments but that bringing feedback and monitoring data together helps the SME to improve their understanding of end-user needs, ultimately supporting continuous requirements elicitation.
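The core idea of combining the two data sources can be sketched in a few lines: attach to each feedback item the monitoring events that immediately preceded it, so requirements engineers see the runtime context behind each comment. This is a minimal illustration under assumed data shapes, not FAME's actual API.

```python
# Minimal sketch of pairing explicit feedback with monitoring data,
# in the spirit of FAME (not its actual API; data shapes are assumed).
from datetime import datetime, timedelta

monitoring_log = [
    {"session": "s1", "time": datetime(2024, 1, 1, 10, 0), "event": "search_failed"},
    {"session": "s1", "time": datetime(2024, 1, 1, 10, 1), "event": "page_reload"},
]
feedback_log = [
    {"session": "s1", "time": datetime(2024, 1, 1, 10, 2),
     "text": "Search never finds my invoices"},
]

def contextualize(feedback, events, window=timedelta(minutes=5)):
    """Attach to each feedback item the runtime events that preceded it
    within the same session, revealing the usage context of the complaint."""
    for fb in feedback:
        fb["context"] = [e["event"] for e in events
                         if e["session"] == fb["session"]
                         and timedelta(0) <= fb["time"] - e["time"] <= window]
    return feedback

print(contextualize(feedback_log, monitoring_log))
# The complaint is now linked to the 'search_failed' and 'page_reload' events.
```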
Article
Full-text available
Feature models have been used since the 1990s to describe software product lines as a way of reusing common parts in a family of software systems. In 2010, a systematic literature review was published summarizing the advances and laying the foundations of the area of Automated Analysis of Feature Models (AAFM). Since then, different studies have applied AAFM in different domains. In this paper, we provide an overview of the evolution of this field since 2010 by performing a systematic mapping study considering 423 primary sources. We found six variability facets where AAFM is being applied that define the tendencies: product configuration and derivation; testing and evolution; reverse engineering; multi-model variability analysis; variability modelling; and variability-intensive systems. We also confirmed that there is a lack of industrial evidence in most cases. Finally, we present where and when the papers have been published and which authors and institutions are contributing to the field. We observed that the field's maturity is evidenced by the growing number of journal publications over the years as well as the diversity of conferences and workshops where papers are published. We also suggest synergies with other areas, such as cloud and mobile computing, that can motivate further research in the future.
Chapter
Full-text available
Context: Continuous experimentation is frequently used in web-facing companies and is starting to gain the attention of embedded-systems companies. However, embedded-systems companies face different challenges and requirements when running experiments in their systems. Objective: This paper explores the challenges encountered during the adoption of continuous experimentation in embedded systems, from both industry practice and academic research. It presents strategies, guidelines, and solutions to overcome each of the identified challenges. Method: This research was conducted in two parts. The first part is a literature review analyzing the challenges in adopting continuous experimentation from the research perspective. The second part is a multiple case study based on interviews and workshop sessions with five companies, aimed at understanding the challenges from the industry perspective and how companies are working to overcome them. Results: This study found a set of twelve challenges divided into three areas (technical, business, and organizational) and strategies grouped into three categories: architecture, data handling, and development processes. Conclusions: The identified challenges are presented together with a set of strategies, guidelines, and solutions. To the knowledge of the authors, this paper is the first to provide an extensive list of challenges and strategies for continuous experimentation in embedded systems. Moreover, this research points out open challenges and the need for new tools and novel solutions for the further development of experimentation in embedded systems.
Chapter
Full-text available
Today, products within telecommunications, transportation, consumer electronics, home automation, security, etc. involve an increasing amount of software. As a result, organizations with a tradition in hardware development are transforming into software-intensive organizations. This implies products where software constitutes the majority of functionality, costs, future investments, and potential. While this shift poses a number of challenges, it brings opportunities as well. One of these opportunities is to collect product data in order to learn about product use, to inform product management decisions, and to improve already deployed products. In this paper, we focus on the opportunity to use post-deployment data, i.e., data generated while products are used, as a basis for product improvement and new product development. We do so by studying three software development companies involved in large-scale development of embedded software. In our study, we highlight limitations in post-deployment data usage, and we conclude that post-deployment data remains an untapped resource for most companies. The contribution of the paper is twofold. First, we present key opportunities for more effective product development based on post-deployment data usage. Second, we propose a framework for organizations interested in advancing their use of post-deployment product data.
Article
Full-text available
In socio-economic research we often need to measure non-observable, latent variables. For this purpose we use special research instruments, with uni- and multidimensional scales designed to measure the constructs of interest. The validity and reliability of these scales are crucial, and special tests have been developed in this respect. Reliability concerns often arise due to external factors that can influence the power and significance of such tests. Even for standardized instruments such variations are possible, and they could seriously affect research results. The purpose of the present study is to investigate if and how external factors can influence a widely used reliability estimator, Cronbach's Alpha. Several scales commonly used in marketing research were tested using a bootstrapping technique. Results show that important differences in the values of Cronbach's Alpha are possible due to indirect influence from external factors such as respondents' age, gender, level of study, religiousness, rural/urban living, survey type, and the relevance of the research subject for the survey participants.
Article
Full-text available
Medical educators attempt to create reliable and valid tests and questionnaires in order to enhance the accuracy of their assessments and evaluations. Validity and reliability are two fundamental elements in the evaluation of a measurement instrument. Instruments can be conventional knowledge, skill or attitude tests, clinical simulations, or survey questionnaires. Instruments can measure concepts, psychomotor skills, or affective values. Validity is concerned with the extent to which an instrument measures what it is intended to measure. Reliability is concerned with the ability of an instrument to measure consistently [1]. It should be noted that the reliability of an instrument is closely associated with its validity. An instrument cannot be valid unless it is reliable. However, the reliability of an instrument does not depend on its validity [2]. It is possible to objectively measure the reliability of an instrument, and in this paper we explain the meaning of Cronbach's alpha, the most widely used objective measure of reliability.
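Cronbach's alpha is straightforward to compute from raw item scores: alpha = k/(k-1) * (1 - sum of item variances / variance of the total score), where k is the number of items. A minimal sketch with invented Likert data:

```python
# Standard Cronbach's alpha computation; the score matrix is invented
# for illustration only.
import numpy as np

def cronbach_alpha(items):
    """items: (respondents x items) matrix of scale scores.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Illustrative 5-respondent, 4-item scale (Likert 1-5).
scores = [[4, 5, 4, 4],
          [3, 3, 2, 3],
          [5, 5, 5, 4],
          [2, 2, 3, 2],
          [4, 4, 4, 5]]
print(round(cronbach_alpha(scores), 3))
# Values above roughly 0.7 are conventionally deemed acceptable.
```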
Article
Full-text available
This is a preview of the workshop on Aspect-Oriented Programming at ICSE 98. It includes an overview of the position papers. The workshop takes place on Monday, April 20th, 1998.
Conference Paper
Full-text available
When developing software platforms for product lines, decisions on which features to implement are affected by factors such as changing markets and evolving technologies. Effective scoping thus requires continuous assessment of how changes in the domain impact scoping decisions. Decisions may have to be changed as circumstances change, resulting in a dynamic evolution of the scope of software asset investments. This paper presents an industrial case study in a large-scale setting where a technique called feature survival charts for visualization of scoping change dynamics has been implemented and evaluated in three projects. The evaluation demonstrated that the charts can effectively focus investigations of reasons behind scoping decisions, valuable for future process improvements. A set of scoping measurements is also proposed, analyzed theoretically and evaluated empirically with data from the cases. The conclusions by the case company practitioners are positive, and the solution is integrated with their current requirements engineering measurement process.
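As a rough reconstruction (ours, not the case company's tool), the data behind a feature survival chart is simply each candidate feature's scoping status sampled over time; scoping measurements such as survival and status churn fall out of the same matrix:

```python
# Illustrative reconstruction of the data behind a feature survival chart
# (not the paper's implementation). Each row tracks one candidate
# feature's scoping status over successive weeks of a release project.
scope_history = {
    "feat-A": ["in", "in", "in", "in"],
    "feat-B": ["in", "?",  "out", "out"],   # descoped mid-project
    "feat-C": ["?",  "in", "in",  "out"],   # descoped late
}

# Two simple scoping measurements derived from the matrix:
survived = [f for f, h in scope_history.items() if h[-1] == "in"]
churn = {f: sum(1 for a, b in zip(h, h[1:]) if a != b)
         for f, h in scope_history.items()}

print("survived to release:", survived)   # ['feat-A']
print("status changes:", churn)           # {'feat-A': 0, 'feat-B': 2, 'feat-C': 2}
```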
Article
Full-text available
Software product line engineering is about producing a set of related products that share more commonalities than variabilities. Feature models are widely used for variability and commonality management in software product lines. Feature models are information models in which a set of products is represented as a set of features in a single model. The automated analysis of feature models deals with the computer-aided extraction of information from feature models. The literature on this topic has contributed a set of operations, techniques, tools, and empirical results which had not been surveyed until now. This paper provides a comprehensive literature review on the automated analysis of feature models 20 years after their invention. It contributes by bringing together previously disparate streams of work to help shed light on this thriving area. We also present a conceptual framework to understand the different proposals and to categorise future contributions. We finally discuss the different studies and propose some challenges to be faced in the future.
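A toy example of one classic analysis operation, counting the valid products of a feature model, conveys the flavor of automated analysis. Real tools encode the model for SAT/CSP/BDD solvers; the brute-force sketch below is only illustrative, and the feature names are invented.

```python
# Toy automated feature-model analysis: encode tree constraints as
# predicates and enumerate valid products by brute force. Real AAFM
# tools use SAT/CSP/BDD solvers instead of enumeration.
from itertools import product

FEATURES = ["Base", "Feedback", "Explicit", "Implicit"]

def valid(cfg):
    c = dict(zip(FEATURES, cfg))
    return (c["Base"]                                   # root is mandatory
            and (not c["Explicit"] or c["Feedback"])    # child requires parent
            and (not c["Implicit"] or c["Feedback"])
            and (not c["Feedback"] or c["Explicit"] or c["Implicit"]))  # or-group

products = [dict(zip(FEATURES, cfg))
            for cfg in product([False, True], repeat=len(FEATURES))
            if valid(cfg)]
print(len(products))  # "number of products" analysis: 4 valid configurations
```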
Article
Full-text available
Design research (DR) positions information technology artifacts at the core of the Information Systems discipline. However, dominant DR thinking takes a technological view of the IT artifact, paying scant attention to its shaping by the organizational context. Consequently, existing DR methods focus on building the artifact and relegate evaluation to a subsequent and separate phase. They value technological rigor at the cost of organizational relevance, and fail to recognize that the artifact emerges from interaction with the organizational context even when its initial design is guided by the researchers' intent. We propose action design research (ADR) as a new DR method to address this problem. ADR reflects the premise that IT artifacts are ensembles shaped by the organizational context during development and use. The method conceptualizes the research process as containing the inseparable and inherently interwoven activities of building the IT artifact, intervening in the organization, and evaluating it concurrently. The essay describes the stages of ADR and associated principles that encapsulate its underlying beliefs and values. We illustrate ADR through a case of competence management at Volvo IT.
Article
Full-text available
Valid measurement scales for predicting user acceptance of computers are in short supply. Most subjective measures used in practice are unvalidated, and their relationship to system usage is unknown. The present research develops and validates new scales for two specific variables, perceived usefulness and perceived ease of use, which are hypothesized to be fundamental determinants of user acceptance. Definitions for these two variables were used to develop scale items that were pretested for content validity and then tested for reliability and construct validity in two studies involving a total of 152 users and four application programs. The measures were refined and streamlined, resulting in two six-item scales with reliabilities of .98 for usefulness and .94 for ease of use. The scales exhibited high convergent, discriminant, and factorial validity. Perceived usefulness was significantly correlated with both self-reported current usage (r = .63, Study 1) and self-predicted future usage (r = .85, Study 2). Perceived ease of use was also significantly correlated with current usage (r = .45, Study 1) and future usage (r = .59, Study 2). In both studies, usefulness had a significantly greater correlation with usage behavior than did ease of use. Regression analyses suggest that perceived ease of use may actually be a causal antecedent to perceived usefulness, as opposed to a parallel, direct determinant of system usage. Implications are drawn for future research on user acceptance.
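The kind of analysis behind these correlations is easy to reproduce in miniature: average the scale items per respondent and correlate with usage. The numbers below are invented for illustration, not Davis's data.

```python
# Miniature TAM-style analysis with invented data (not the original study):
# correlate averaged perceived-usefulness scores with self-reported usage.
import numpy as np

# Six-item perceived-usefulness scores (1-7 Likert), one row per user.
pu_items = np.array([[6, 7, 6, 6, 7, 6],
                     [3, 2, 3, 3, 2, 3],
                     [5, 5, 6, 5, 5, 6],
                     [2, 2, 1, 2, 2, 2],
                     [7, 6, 7, 7, 6, 7]])
usage_hours = np.array([10, 2, 7, 1, 12])  # self-reported weekly usage

pu_score = pu_items.mean(axis=1)            # one scale score per respondent
r = np.corrcoef(pu_score, usage_hours)[0, 1]
print(round(r, 2))  # a strong positive correlation, as TAM predicts
```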
Conference Paper
Full-text available
An important goal of most empirical software engineering research is the transfer of research results to industrial applications. Two important obstacles for this transfer are the lack of control of variables of case studies, i.e., the lack of explanatory power, and the lack of realism of controlled experiments. While it may be difficult to increase the explanatory power of case studies, there is a large potential for increasing the realism of controlled software engineering experiments. To convince industry about the validity and applicability of the experimental results, the tasks, subjects and the environments of the experiments should be as realistic as practically possible. Such experiments are, however, more expensive than experiments involving students, small tasks and pen-and-paper environments. Consequently, a change towards more realistic experiments requires a change in the amount of resources spent on software engineering experiments. This paper argues that software engineering researchers should apply for resources enabling expensive and realistic software engineering experiments similar to how other researchers apply for resources for expensive software and hardware that are necessary for their research. The paper describes experiences from recent experiments that varied in size from involving one software professional for 5 days to 130 software professionals, from 9 consultancy companies, for one day each.
Article
A significant amount of research project funding is spent creating customized annotation systems, re-inventing the wheel time and again by developing the same common features. In this paper, we present WACline, a Software Product Line to facilitate the customization of browser-extension Web annotation clients. WACline reduces the development effort by reusing common features (e.g., highlighting and commenting) while putting the main focus on customization. To this end, WACline provides 111 already-implemented features that can be extended with new ones. In this way, researchers can reduce the development and maintenance costs of annotation clients.
Chapter
[Context and motivation] According to Data-Driven Requirements Engineering (RE), explicit and implicit user feedback can be considered a relevant source of requirements, thus supporting requirements elicitation. [Question/problem] Less attention has been paid so far to the role of implicit feedback in RE tasks, such as requirements validation, and to how to specify what implicit feedback to collect and analyse. [Principal idea/results] We propose an approach that leverages goal-oriented requirements modelling combined with Goal-Question-Metric. We explore the applicability of the approach on an industrial project in which a platform for online training has been adapted to realise a citizen information service used by hundreds of people during the COVID-19 pandemic. [Contributions] Our contribution is twofold: (i) we present our approach towards a systematic definition of requirements for data collection and analysis, in support of software requirements validation and evolution; (ii) we discuss our ideas using concrete examples from an industrial case study and formulate a research question that will be addressed by conducting experiments as part of our research.
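A sketch of what such a specification could look like (our illustration of combining goals with Goal-Question-Metric, not the authors' notation): each metric states which implicit feedback the platform must log.

```python
# Hedged sketch of specifying *what implicit feedback to collect* via
# Goal-Question-Metric. All goal, question, and metric names are
# invented for illustration.
GQM = {
    "goal": "Validate that citizens find COVID-19 information quickly",
    "questions": {
        "Q1: Do users reach the FAQ within one session?": [
            "metric: sessions_with_faq_visit / total_sessions",
        ],
        "Q2: Where do users abandon the information flow?": [
            "metric: exit_rate per page",
            "metric: median time-on-page before exit",
        ],
    },
}

# The metrics double as a validation checklist: each metric tells the
# platform which usage events must be logged as implicit feedback.
for question, metrics in GQM["questions"].items():
    print(question)
    for m in metrics:
        print("  -", m)
```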
Article
Software and data analytics solutions support improving development processes and the quality of the software produced in Agile Software Development (ASD). However, decision makers in software teams (e.g., product owners, project managers) are demanding powerful tools that provide evidence supporting their strategic decision-making processes. In this paper, we present and provide access to QaSD, a Quality-aware Strategic Dashboard supporting decision makers in ASD. The dashboard allows decision makers to define high-level strategic indicators (e.g., customer satisfaction, process performance) related to software quality and to measure, explore, simulate, and forecast the values of those indicators in order to explain and justify their decisions. Moreover, we provide the results of an evaluation of the dashboard's quality in a real environment, in which QaSD was rated usable, easy to use, well navigable, and reliable.
Conference Paper
The paper describes a demonstration of pure::variants, a commercial tool for variant and variability management for product lines. The demonstration shows how flexible product line (PL) architectures can be built, tested, and maintained using the modeling and integration capabilities provided by pure::variants. Since pure::variants has been available for a long time, the demonstration (and the paper) combines basics of pure::variants, known to parts of the audience, with new capabilities introduced within the last year.
Conference Paper
This paper describes a demonstration of the product line engineering tool and framework Gears from BigLever Software. Gears provides a single feature modeling language, a single variation point mechanism that works across the entire product lifecycle, and a single automated product configurator that are used to configure a product portfolio's shared engineering assets appropriately for each product in the portfolio. The result is an automated production line capability that can quickly produce any product in the portfolio from the same, single set of shared assets.
Chapter
This chapter provides a prospective look at the “big research issues” in data quality. It is based on 25 years' experience, most of it as a practitioner; early work with a terrific team of researchers and business people at Bell Labs and AT&T; constant reflection on the meanings and methods of quality, the strange and wondrous properties of data, the importance of data and data quality in markets and companies, and the underlying reasons why some enterprises make rapid progress and others fall flat; and interactions with most of the leading companies, practitioners, and researchers.
Conference Paper
Product Line Engineering (PLE) with feature models has gained recognition in science and industry as a successful reuse strategy in the domain of systems engineering. However, initially developing every new piece of functionality as a reusable feature does not always fit companies' needs. To profit from PLE while remaining free to develop new functionality within the scope of a specific product variant, a proper update and feedback strategy has to be in place to prevent variants from decoupling from the product line, which would make reuse impossible. In this work we discuss the challenges that need to be solved to realize a successful feedback strategy, based on four examples from industry.
Conference Paper
Internetware is required to respond quickly to emergent user requirements or requirements changes by providing application upgrades or making context-aware recommendations. As user requirements in the Internet computing environment change fast and new requirements increasingly emerge in creative ways, traditional requirements engineering approaches based on requirements elicitation and analysis cannot ensure the quick response of Internetware. In this paper, we propose an approach for mining context-aware user requirements from crowd-contributed mobile data. The approach captures behavior records contributed by a crowd of mobile users and automatically mines context-aware user behavior patterns (i.e., when, where, and under what conditions users require a specific service) from them using the Apriori-M algorithm. Emergent requirements or requirements changes can then be inferred from the mined behavior patterns, and solutions that satisfy the requirements can be recommended to users. To evaluate the proposed approach, we conduct an experimental study and show the effectiveness of the requirements mining approach.
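The mining step can be illustrated with plain frequent-itemset mining over behavior records; note this is textbook Apriori-style brute force, not the paper's Apriori-M variant, and the records are invented.

```python
# Minimal frequent-itemset mining over crowd-contributed behavior records.
# Plain brute-force Apriori flavor (no candidate pruning); the paper's
# Apriori-M variant adds context handling. Data is invented.
from itertools import combinations

records = [  # each record: context items plus the service the user required
    {"evening", "home", "wifi", "video_app"},
    {"evening", "home", "wifi", "video_app"},
    {"morning", "commute", "4g", "news_app"},
    {"evening", "home", "wifi", "music_app"},
]

def frequent_itemsets(transactions, min_support=0.5, max_len=3):
    """Return itemsets occurring in at least min_support of transactions."""
    n = len(transactions)
    items = {i for t in transactions for i in t}
    result = {}
    for k in range(1, max_len + 1):
        for cand in combinations(sorted(items), k):
            support = sum(1 for t in transactions if set(cand) <= t) / n
            if support >= min_support:
                result[cand] = support
    return result

# ('evening', 'home', 'video_app') surfaces as a context-aware pattern:
# "in the evening, at home, users require the video service".
print(frequent_itemsets(records))
```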
Article
Nowadays, Agile Software Development (ASD) is used to cope with increasing complexity in system development. Hybrid development models, integrating User-Centered Design (UCD), are applied with the aim of delivering competitive products with a suitable User Experience (UX). Therefore, stakeholder and user involvement during Requirements Engineering (RE) is essential in order to establish a collaborative environment with constant feedback loops. The aim of this study is to capture the current state of the art in the literature related to Agile RE, with a focus on stakeholder and user involvement. In particular, we investigate which approaches exist to involve stakeholders in the process, which methodologies are commonly used to represent the user perspective, and how requirements management is carried out. We conducted a Systematic Literature Review (SLR) with an extensive quality assessment of the included studies. We identified 27 relevant papers. After analyzing them in detail, we derive deep insights into the following aspects of Agile RE: stakeholder and user involvement, data gathering, user perspective, integrated methodologies, shared understanding, artifacts, documentation, and Non-Functional Requirements (NFR). Agile RE is a complex research field with cross-functional influences. This study contributes to the software development body of knowledge by assessing the involvement of stakeholders and users in Agile RE, providing methodologies that make ASD more human-centric, and giving an overview of requirements management in ASD.
Article
On autopsy, a patient is found to have hypertrophic cardiomyopathy. The patient's family pursues genetic testing that shows a "likely pathogenic" variant for the condition on the basis of a study in an original research publication. Given the dominant inheritance of the condition and the risk of sudden cardiac death, other family members are tested for the genetic variant to determine their risk. Several family members test negative and are told that they are not at risk for hypertrophic cardiomyopathy and sudden cardiac death, and those who test positive are told that they need to be regularly monitored for cardiomyopathy . . .
Article
Qualitative researchers rely — implicitly or explicitly — on a variety of understandings and corresponding types of validity in the process of describing, interpreting, and explaining phenomena of interest. In this article, Joseph Maxwell makes explicit this process by defining five types of understanding and validity commonly used in qualitative research. After discussing the nature of validity in qualitative research, the author details the philosophical and practical dimensions of: descriptive validity, interpretive validity, theoretical validity, generalizability, and evaluative validity. In each case, he addresses corresponding issues of understanding. In conclusion, Maxwell discusses the implications of the proposed typology as a useful checklist of the kinds of threats to validity that one needs to consider and as a framework for thinking about the nature of these threats and the possible ways that specific threats might be addressed.
Article
Experience from a dozen years of analyzing software engineering processes and products is summarized as a set of software engineering and measurement principles that argue for software engineering process models that integrate sound planning and analysis into the construction process. In the TAME (Tailoring A Measurement Environment) project at the University of Maryland, such an improvement-oriented software engineering process model was developed that uses the goal/question/metric paradigm to integrate the constructive and analytic aspects of software development. The model provides a mechanism for formalizing the characterization and planning tasks, controlling and improving projects based on quantitative analysis, learning in a deeper and more systematic way about the software process and product, and feeding the appropriate experience back into current and future projects. The TAME system is an instantiation of the TAME software engineering process model as an ISEE (integrated software engineering environment). The first in a series of TAME system prototypes has been developed. An assessment of experience with this first limited prototype is presented, including a reassessment of its initial architecture.
Article
The use of software measures for project management and software process improvement has been encouraged for many years. However, the low level of acceptance and use of software measures in practice has been a constant concern. In this paper we propose and test a model which explains and predicts the use of software measures. The model is based on the technology acceptance model (TAM) and operationalizes the perceived usefulness construct according to the “desirable properties of software measures.” Our research provides guidance for software engineers in selecting among different software measures and for software metrics coordinators who are planning measurement programs.
Article
Performance measurement of tourism websites is becoming a critical issue for effective online marketing. The aim of this article is to analyse the effectiveness of entries (visit behaviour and length of sessions) depending on their traffic source: direct visits, in-link entries (for instance, en.wikipedia.org), and search engine visits (for example, Google). For this purpose, time series analysis of Google Analytics data is employed. This method could be of interest for any tourism website optimizer.
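In miniature, the analysis amounts to grouping sessions by traffic source and comparing visit behavior; a sketch with invented numbers (not Google Analytics data):

```python
# Toy version of a per-traffic-source comparison of session lengths.
# All numbers are invented; real analyses would pull Google Analytics
# exports and apply proper time series methods.
from statistics import mean

sessions = [  # (traffic_source, session_length_seconds)
    ("direct", 210), ("direct", 180),
    ("in-link", 320), ("in-link", 290),   # e.g. entries via en.wikipedia.org
    ("search", 95), ("search", 120),      # e.g. entries via Google
]

by_source = {}
for source, length in sessions:
    by_source.setdefault(source, []).append(length)

for source, lengths in by_source.items():
    print(f"{source}: mean session {mean(lengths):.0f}s over {len(lengths)} visits")
# In this toy data, in-link entries yield the longest sessions.
```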
Article
Context: The technology acceptance model (TAM) was proposed in 1989 as a means of predicting technology usage. However, it is usually validated using a measure of behavioural intention to use (BI) rather than actual usage. Objective: This review examines the evidence that the TAM predicts actual usage, using both subjective and objective measures of actual usage. Method: We performed a systematic literature review based on a search of six digital libraries, along with a vote-counting meta-analysis to analyse the overall results. Results: The search identified 79 relevant empirical studies in 73 articles. The results show that BI is likely to be correlated with actual usage. However, the TAM variables perceived ease of use (PEU) and perceived usefulness (PU) are less likely to be correlated with actual usage. Conclusion: Care should be taken when using the TAM outside the context in which it has been validated.
Conference Paper
Current requirements engineering practices for gathering user input are characterized by a number of communication gaps between users and engineers, which might lead to wrong requirements. The problem situations and context that underlie user input are either gathered long after the fact or submitted with a wrong level of detail. We think that making user input a first-order concern of both software processes and software systems harbours many innovation opportunities. We propose and discuss a continuous and context-aware approach for communicating user input to engineering teams and other users by a) instrumenting the problem domain, b) proactively recommending to share feedback, and c) annotating graphical interfaces.
Controlled continuous deployment: A case study from the telecommunications domain
  • Dakkak
The technology acceptance model: 30 years of TAM
  • Davis
Research perspectives: The anatomy of a design principle
  • Gregor
Feature crumbs: Adapting usage monitoring to continuous software engineering
  • Johanssen
Automatic and manual web annotations in an infrastructure to handle fake news and other online media phenomena
  • Rehm