Chapter

Collaborative Web Accessibility Evaluation: An EARL-Based Workflow Approach


Abstract

The Web Accessibility Guidelines are designed to help developers ensure that web content is accessible to all users. These guidelines provide the foundation for evaluation tools that automate inspection processes. However, due to the heterogeneity of these guidelines and the subjectivity involved in their evaluation, humans are still necessary in the process. As a result, evaluating accessibility becomes a collaborative endeavour in which different human experts and tools interact. Although the W3C noticed this early on, it has largely been overlooked in the existing literature. Tool vendors often focus on providing a thorough evaluation rather than on importing, integrating, and combining results from diverse sources. This paper examines an EARL-based, document-centric workflow. It introduces a dedicated editor for EARL documents that accounts for their life-cycle, in which evaluation episodes feed back on each other. Expert evaluations were conducted (n = 5 experts), focused not so much on the tool itself as on its ability to facilitate a collaborative approach.

Keywords: Web engineering, Web accessibility, Web accessibility evaluation, Browser extension, Aggregation
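The EARL documents at the centre of this workflow follow the W3C Evaluation and Report Language, an RDF vocabulary in which each result is an assertion linking an assertor, a test subject, a test criterion, and an outcome. A minimal sketch of that structure, using plain Python dictionaries in a JSON-LD-like shape rather than an RDF library; the property names follow the EARL 1.0 schema, but the subject URL, assertor names, and criterion identifier are illustrative only:

```python
# Minimal sketch of EARL assertions as JSON-LD-style data.
# Property names follow the W3C EARL 1.0 schema; the subject URL,
# assertors, and test criterion below are hypothetical examples.
import json

def make_assertion(assertor, subject_url, test, outcome):
    """Build one EARL assertion: who evaluated what, against which
    criterion, with which outcome (earl:passed / earl:failed /
    earl:cantTell for cases that need human judgement)."""
    return {
        "@type": "earl:Assertion",
        "earl:assertedBy": assertor,
        "earl:subject": {"@type": "earl:TestSubject", "@id": subject_url},
        "earl:test": {"@type": "earl:TestCriterion", "@id": test},
        "earl:result": {"@type": "earl:TestResult", "earl:outcome": outcome},
    }

def merge_reports(*reports):
    """Aggregate assertions from several evaluators into one document,
    as in the collaborative workflow: a tool emits cantTell results,
    human experts later refine them in a new evaluation episode."""
    return [a for report in reports for a in report]

if __name__ == "__main__":
    tool = [make_assertion("ExampleChecker", "https://example.org/",
                           "WCAG2:1.1.1", "earl:cantTell")]
    expert = [make_assertion("Expert A", "https://example.org/",
                             "WCAG2:1.1.1", "earl:passed")]
    print(json.dumps(merge_reports(tool, expert), indent=2))
```

Because assertions are plain data, combining tool output with later expert judgements reduces to concatenating and re-editing documents, which is the life-cycle the dedicated editor supports.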


Article
Full-text available
Several Web accessibility evaluation tools have been put forward to reduce the burden of identifying accessibility barriers for users, especially those with disabilities. One common issue in using accessibility evaluation tools in practice is that the results provided by different tools are sometimes unclear, and often diverging. Such limitations may confuse the users who may not understand the reasons behind them, and thus hamper the possible adoption of such tools. Hence, there is a need for tools that shed light on their actual functioning, and the success criteria and techniques supported. For this purpose, we must identify what criteria should be adopted in order for such tools to be transparent and to help users better interpret their results. In this paper, we discuss such issues, provide design criteria for obtaining user-centred and transparent accessibility evaluation tools, and analyse how they have been addressed by a representative set of open, license-free, accessibility tools. We also report on the results of a survey with 138 users of such tools, aimed at capturing the perceived usefulness of previously identified transparency requirements. Finally, we performed a user study with 18 users working in the Web design or accessibility fields with the goal of receiving more feedback about the transparency of a selected subset of accessibility tools.
Article
Full-text available
For the last ten years, software product line (SPL) tool developers have been facing the implementation of different variability requirements and the support of SPL engineering activities demanded by emergent domains. Despite systematic literature reviews identifying the main characteristics of existing tools and the SPL activities they support, these reviews do not always help to understand if such tools provide what complex variability projects demand. This paper presents empirical research in which we evaluate the degree of maturity of existing SPL tools focusing on their support of variability modeling characteristics and SPL engineering activities required by current application domains. We first identify the characteristics and activities that are essential for the development of SPLs by analyzing a selected sample of case studies chosen from application domains with high variability. Second, we conduct an exploratory study to analyze whether the existing tools support those characteristics and activities. We conclude that, with the current tool support, it is possible to develop a basic SPL approach. But we have also found out that these tools present several limitations when dealing with complex variability requirements demanded by emergent application domains, such as non-Boolean features or large configuration spaces. Additionally, we identify the necessity for an integrated approach with appropriate tool support to completely cover all the activities and phases of SPL engineering. To mitigate this problem, we propose different road maps using the existing tools to partially or entirely support SPL engineering activities, from variability modeling to product derivation.
Conference Paper
Full-text available
Continuous Deployment (CD) advocates for quick and frequent deployments of software to production. The goal is to bring new functionality as early as possible to users while learning from their usage. CD has emerged from web-based applications, where it has been gaining traction over the past years. While CD is appealing for many software development organizations, empirical evidence on perceived benefits in software-intensive embedded systems is scarce. The objective of this paper is to identify perceived benefits after transitioning to continuous deployment from a long-cycle release and deployment process. To do that, a case study at a multinational telecommunication company was conducted focusing on large and complex embedded software: the Third Generation (3G) Radio Access Network (RAN) software.
Article
Full-text available
Agile software development (ASD) and software product line (SPL) have shown significant benefits for software engineering processes and practices. Although both methodologies promise similar benefits, they are based on different foundations. SPL encourages systematic reuse that exploits the commonalities of various products belonging to a common domain and manages their variations systematically. In contrast, ASD stresses a flexible and rapid development of products using iterative and incremental approaches. ASD encourages active involvement of customers and their frequent feedback. Both ASD and SPL require alternatives to extend agile methods for several reasons such as (1) to manage reusability and variability across the products of any domain, (2) to avoid the risk of developing core assets that will become obsolete and not used in future projects, and (3) to meet the requirements of changing markets. This motivates the researchers for the integration of ASD and SPL approaches. As a result, an innovative approach called agile product line engineering (APLE) by integrating SPL and ASD has been introduced. The principal aim of APLE is to maximize the benefits of ASD and SPL and address the shortcomings of both. However, combining both is a major challenge. Researchers have proposed a few approaches that try to put APLE into practice, but none of the existing approaches cover all APLE features needed. This paper proposes a new dynamic variability approach for APLE that uses APLE practices for reusing features. The proposed approach (PA) is based on the agile method Scrum and the reactive approach of SPL. In this approach, reusable core assets respond reactively to customer requirements. The PA constructs and develops the SPL architecture iteratively and incrementally. It provides the benefits of reusability and maintainability of SPLs while keeping the delivery-focused approach from agile methods. We conducted a quantitative survey of software companies applying the APLE to assess the performance of the PA and hypotheses of empirical study. Findings of empirical evaluation provide evidence on integrating ASD and SPL and the application of APLE into practices.
Conference Paper
Full-text available
The evolution of variant-rich systems is a challenging task. To support developers, the research community has proposed a range of different techniques over the last decades. However, many techniques have not been adopted in practice so far. To advance such techniques and to support their adoption, it is crucial to evaluate them against realistic baselines, ideally in the form of generally accessible benchmarks. To this end, we need to improve our empirical understanding of typical evolution scenarios for variant-rich systems and their relevance for benchmarking. In this paper, we establish eleven evolution scenarios in which benchmarks would be beneficial. Our scenarios cover typical lifecycles of variant-rich systems, ranging from clone & own to adopting and evolving a configurable product-line platform. For each scenario, we formulate benchmarking requirements and assess its clarity and relevance via a survey with experts in variant-rich systems and software evolution. We also surveyed the existing benchmarking landscape, identifying synergies and gaps. We observed that most scenarios, despite being perceived as important by experts, are only partially or not at all supported by existing benchmarks: a call to arms for building community benchmarks upon our requirements. We hope that our work raises awareness for benchmarking as a means to advance techniques for evolving variant-rich systems, and that it will lead to a benchmarking initiative in our community.
Article
Full-text available
In the last decade, modern data analytics technologies have enabled the creation of software analytics tools offering real-time visualization of various aspects related to software development and usage. These tools seem to be particularly attractive for companies doing agile software development. However, the information provided by the available tools is neither aggregated nor connected to higher quality goals. At the same time, assessing and improving software quality has also been a key target for the software engineering community, yielding several proposals for standards and software quality models. Integrating such quality models into software analytics tools could close the gap by providing the connection to higher quality goals. This study aims at understanding whether the integration of quality models into software analytics tools provides understandable, reliable, useful, and relevant information at the right level of detail about the quality of a process or product, and whether practitioners intend to use it. Over the course of more than one year, the four companies involved in this case study deployed such a tool to assess and improve software quality in several projects. We used standardized measurement instruments to elicit the perception of 22 practitioners regarding their use of the tool. We complemented the findings with debriefing sessions held at the companies. In addition, we discussed challenges and lessons learned with four practitioners leading the use of the tool. Quantitative and qualitative analyses provided positive results; i.e., the practitioners’ perception with regard to the tool’s understandability, reliability, usefulness, and relevance was positive. Individual statements support the statistical findings and constructive feedback can be used for future improvements. We conclude that potential for future adoption of quality models within software analytics tools definitely exists and encourage other practitioners to use the presented seven challenges and seven lessons learned and adopt them in their companies.
Conference Paper
Full-text available
The analysis of software product lines is challenging due to the potentially large number of products, which grow exponentially in terms of the number of features. Product sampling is a technique used to avoid exhaustive testing, which is often infeasible. In this paper, we propose a classification for product sampling techniques and classify the existing literature accordingly. We distinguish the important characteristics of such approaches based on the information used for sampling, the kind of algorithm, and the achieved coverage criteria. Furthermore, we give an overview on existing tools and evaluations of product sampling techniques. We share our insights on the state-of-the-art of product sampling and discuss potential future work.
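The coverage criteria such classifications distinguish are often stated in terms of t-wise feature interactions. As an illustration (not taken from the paper), the sketch below computes 2-wise (pairwise) coverage of a product sample: the fraction of feature-value pairs that occur together in at least one sampled product. Products are modelled simply as dictionaries mapping feature names to booleans:

```python
# Illustrative 2-wise (pairwise) coverage of a product sample.
# A product is a dict mapping feature name -> bool (selected or not).
from itertools import combinations

def pairs(product):
    """All (feature, value) pairs that a single product covers."""
    items = sorted(product.items())
    return {frozenset([a, b]) for a, b in combinations(items, 2)}

def pairwise_coverage(sample, universe):
    """Fraction of pairs in `universe` covered by at least one product."""
    if not sample:
        return 0.0
    covered = set().union(*(pairs(p) for p in sample)) & universe
    return len(covered) / len(universe)
```

In practice the universe of pairs is restricted to combinations valid under the feature model's constraints; sampling algorithms then try to reach full pairwise coverage with as few products as possible.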
Conference Paper
Full-text available
Context: Software evolution ensures that software systems in use stay up to date and provide value for end-users. However, it is challenging for requirements engineers to continuously elicit needs for systems used by heterogeneous end-users who are out of organisational reach. Objective: We aim at supporting continuous requirements elicitation by combining user feedback and usage monitoring. Online feedback mechanisms enable end-users to remotely communicate problems, experiences, and opinions, while monitoring provides valuable information about runtime events. It is argued that bringing both information sources together can help requirements engineers to understand end-user needs better. Method/Tool: We present FAME, a framework for the combined and simultaneous collection of feedback and monitoring data in web and mobile contexts to support continuous requirements elicitation. In addition to a detailed discussion of our technical solution, we present the first evidence that FAME can be successfully introduced in real-world contexts. Therefore, we deployed FAME in a web application of a German small and medium-sized enterprise (SME) to collect user feedback and usage data. Results/Conclusion: Our results suggest that FAME not only can be successfully used in industrial environments but that bringing feedback and monitoring data together helps the SME to improve their understanding of end-user needs, ultimately supporting continuous requirements elicitation.
Article
Full-text available
Feature models have been used since the 90's to describe software product lines as a way of reusing common parts in a family of software systems. In 2010, a systematic literature review was published summarizing the advances and settling the basis of the area of Automated Analysis of Feature Models (AAFM). From then on, different studies have applied the AAFM in different domains. In this paper, we provide an overview of the evolution of this field since 2010 by performing a systematic mapping study considering 423 primary sources. We found six different variability facets where the AAFM is being applied that define the tendencies: product configuration and derivation; testing and evolution; reverse engineering; multi-model variability-analysis; variability modelling and variability-intensive systems. We also confirmed that there is a lack of industrial evidence in most of the cases. Finally, we present where and when the papers have been published and who are the authors and institutions that are contributing to the field. We observed that the maturity is proven by the increment in the number of journals published along the years as well as the diversity of conferences and workshops where papers are published. We also suggest some synergies with other areas such as cloud or mobile computing among others that can motivate further research in the future.
Article
Full-text available
A software product line comprises a family of software products that share a common set of features. Testing an entire product-line product-by-product is infeasible due to the potentially exponential number of products in the number of features. Accordingly, several sampling approaches have been proposed to select a presumably minimal, yet sufficient number of products to be tested. Since the time budget for testing is limited or even a priori unknown, the order in which products are tested is crucial for effective product-line testing. Prioritizing products is required to increase the probability of detecting faults faster. In this article, we propose similarity-based prioritization, which can be efficiently applied on product samples. In our approach, we incrementally select the most diverse product in terms of features to be tested next in order to increase feature interaction coverage as fast as possible during product-by-product testing. We evaluate the gain in the effectiveness of similarity-based prioritization on three product lines with real faults. Furthermore, we compare similarity-based prioritization to random orders, an interaction-based approach, and the default orders produced by existing sampling algorithms considering feature models of various sizes. The results show that our approach potentially increases effectiveness in terms of fault detection ratio concerning faults within real-world product-line implementations as well as synthetically seeded faults. Moreover, we show that the default orders of recent sampling algorithms already show promising results, which, however, can still be improved in many cases using similarity-based prioritization.
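The core idea of selecting the most feature-diverse product next can be sketched as a greedy loop over Jaccard distances between feature sets. This is a simplified illustration of the general technique, not the authors' implementation:

```python
# Greedy similarity-based prioritization sketch: repeatedly pick the
# product whose feature set is most distant (Jaccard) from everything
# already ordered. Illustrative only, not the paper's implementation.

def jaccard_distance(a, b):
    """1 - |intersection| / |union| of two feature sets."""
    union = a | b
    return 1.0 if not union else 1.0 - len(a & b) / len(union)

def prioritize(products):
    """Order products (feature sets) so that diverse ones come first.
    Start with the largest feature set, then greedily maximize the
    minimum distance to all already-selected products."""
    remaining = list(products)
    order = [max(remaining, key=len)]
    remaining.remove(order[0])
    while remaining:
        nxt = max(remaining,
                  key=lambda p: min(jaccard_distance(p, s) for s in order))
        order.append(nxt)
        remaining.remove(nxt)
    return order
```

Testing products in such an order raises feature-interaction coverage early, which is exactly what matters when the testing budget may run out before the whole sample is processed.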
Conference Paper
Full-text available
E-commerce is a steadily growing business sector with a great deal of potential. The continuous growth in the number of customers leads to increasingly serious competition. New businesses keep arriving in the market with added features, putting pressure on pricing. A pricing war can significantly affect companies' profits and may depress market prices over the long term, and in the end many businesses may close as a result of this price competition. Google Analytics is an analytics tool from Google that helps track visitors and gather a wide range of valuable data about them. The tool has become very popular among site administrators and holds a large market share; its ease of use has established it as a good alternative to traditional web analytics tools. On the other hand, when it comes to delivering raw data, things get difficult: Google Analytics aggregates the collected data without providing a convenient way to export the raw data. It is an enterprise-oriented analytics tool that gives a straightforward view of site traffic and marketing effectiveness, with powerful and advanced features that provide insight into websites and improve site ROI. This paper presents a case study of Google Analytics that demonstrates its components and report generation. Based on the assessment of web usage, site owners can improve the efficiency of their marketing and web traffic flow. The paper also presents the limitations of Google Analytics and proposes better approaches to deal with these concerns. It analyses and describes how these drawbacks can be addressed, and considers whether Google Analytics can be regarded as a state-of-the-art alternative for gathering data for web usage mining.
Conference Paper
Full-text available
The UMUX-LITE is a two-item questionnaire that assesses perceived usability. In previous research it correlated highly with the System Usability Scale (SUS) and, with appropriate adjustment using a regression formula, had close correspondence to the magnitude of SUS scores, enabling its comparison with emerging SUS norms. Those results, however, were based on the data used to compute the regression formula. In this paper we describe a study conducted to investigate the quality of the published formula using independent data. The formula worked well. As expected, the correlation between the SUS and UMUX-LITE was significant and substantial, and the overall mean difference between their scores was just 1.1, about 1 % of the range of values the questionnaires can take, verifying the efficacy of the regression formula.
Chapter
Full-text available
Today, products within telecommunication, transportation, consumer electronics, home automation, security etc. involve an increasing amount of software. As a result, organizations that have a tradition within hardware development are transforming to become software-intensive organizations. This implies products where software constitutes the majority of functionality, costs, future investments, and potential. While this shift poses a number of challenges, it brings with it opportunities as well. One of these opportunities is to collect product data in order to learn about product use, to inform product management decisions, and for improving already deployed products. In this paper, we focus on the opportunity to use post-deployment data, i.e. data that is generated while products are used, as a basis for product improvement and new product development. We do so by studying three software development companies involved in large-scale development of embedded software. In our study, we highlight limitations in post-deployment data usage and we conclude that post-deployment data remains an untapped resource for most companies. The contribution of the paper is two-fold. First, we present key opportunities for more effective product development based on post-deployment data usage. Second, we propose a framework for organizations interested in advancing their use of post-deployment product data.
Conference Paper
Full-text available
Software-intensive product companies are becoming increasingly data-driven as can be witnessed by the big data and Internet of Things trends. However, optimally prioritizing customer needs in a mass-market context is notoriously difficult. While most companies use product owners or managers to represent the customer, research shows that the prioritization made is far from optimal. In earlier research, we have coined the term ‘the open loop problem’ to characterize this challenge. For instance, research shows that up to half of all the features in products are never used. This paper presents a conceptual model that emphasizes the need for combining qualitative feedback in early stages of development with quantitative customer observation in later stages of development. Our model is inductively derived from an 18 months close collaboration with six large global software-intensive companies.
Conference Paper
Full-text available
Combining Software Product Line Engineering (SPLE) and Agile Software Development (ASD) is an approach for companies working with similar systems in scenarios of volatile requirements, aiming to address fast changes and systematic variability management. However, a development process covering the whole SPLE lifecycle and using agile practices in small and medium-sized development projects has not been established yet. There is a need to disseminate such a combination through well-defined roles, activities, tasks, and artifacts. This paper presents SPLICE, a lightweight development process combining SPLE and agile practices, following reactive and extractive approaches to build similar systems. SPLICE addresses the needs of small development teams aiming to adopt SPL practices with low upfront costs and fast return on investment. In order to evaluate our proposal, we report our experience in a case study by developing Rescue MeSPL, a product line for mobile applications that assists users in emergency situations. The case study results indicate that SPLICE achieves the three evaluated aspects by providing short and proper iterations, possibilities for activity adaptation, and continuous feedback.
Conference Paper
Full-text available
The use of web accessibility evaluation tools is a widespread practice. Evaluation tools are heavily employed as they help in reducing the burden of identifying accessibility barriers. However, an over-reliance on automated tests often leads to setting aside further testing that entails expert evaluation and user tests. In this paper we empirically show the capabilities of current automated evaluation tools. To do so, we investigate the effectiveness of 6 state-of-the-art tools by analysing their coverage, completeness and correctness with regard to WCAG 2.0 conformance. We corroborate that relying on automated tests alone has negative effects and can have undesirable consequences. Coverage is very narrow as, at most, 50% of the success criteria are covered. Similarly, completeness ranges between 14% and 38%; however, some of the tools that exhibit higher completeness scores produce lower correctness scores (66-71%) due to the fact that catching as many violations as possible can lead to an increase in false positives. Therefore, relying on just automated tests entails that 1 of 2 success criteria will not even be analysed and among those analysed, only 4 out of 10 will be caught at the further risk of generating false positives.
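The three measures used in such evaluations map onto simple ratios: coverage is the share of success criteria a tool checks at all, completeness is the share of actual violations it reports (recall), and correctness is the share of reported violations that are real (precision). A sketch making these definitions explicit; the example counts are hypothetical, not the study's data:

```python
# Tool-quality measures as used in accessibility-tool benchmarking:
# coverage     - fraction of success criteria the tool checks at all,
# completeness - fraction of actual violations the tool reports (recall),
# correctness  - fraction of reported violations that are real (precision).
# Any concrete counts used with these functions here are hypothetical.

def coverage(checked_criteria, total_criteria):
    return checked_criteria / total_criteria

def completeness(true_positives, actual_violations):
    return true_positives / actual_violations

def correctness(true_positives, reported_violations):
    return true_positives / reported_violations
```

The study's trade-off shows up directly in these ratios: reporting more candidate violations can raise completeness while lowering correctness, because the extra reports include more false positives.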
Conference Paper
Full-text available
In this paper we present the UMUX-LITE, a two-item questionnaire based on the Usability Metric for User Experience (UMUX) [6]. The UMUX-LITE items are This system's capabilities meet my requirements and This system is easy to use." Data from two independent surveys demonstrated adequate psychometric quality of the questionnaire. Estimates of reliability were .82 and .83 -- excellent for a two-item instrument. Concurrent validity was also high, with significant correlation with the SUS (.81, .81) and with likelihood-to-recommend (LTR) scores (.74, .73). The scores were sensitive to respondents' frequency-of-use. UMUX-LITE score means were slightly lower than those for the SUS, but easily adjusted using linear regression to match the SUS scores. Due to its parsimony (two items), reliability, validity, structural basis (usefulness and usability) and, after applying the corrective regression formula, its correspondence to SUS scores, the UMUX-LITE appears to be a promising alternative to the SUS when it is not desirable to use a 10-item instrument.
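For readers who want to apply the questionnaire, scoring works roughly as follows: both items use 7-point scales, the raw sum is rescaled to 0-100, and the corrective regression aligns the result with SUS scores. The sketch below uses the regression constants (0.65, 22.9) as they are commonly cited for this formula; they are quoted from memory here and should be verified against the original paper before use:

```python
# UMUX-LITE scoring sketch. Two items on 7-point scales (1..7);
# the raw sum is rescaled to 0-100, then adjusted with a linear
# regression for comparison with SUS scores. The constants 0.65 and
# 22.9 are as commonly cited for the published formula; verify them
# against the original paper before relying on this.

def umux_lite_raw(item1, item2):
    """Rescale the two 7-point responses to a 0-100 score."""
    return (item1 + item2 - 2) * (100 / 12)

def umux_lite_sus_adjusted(item1, item2):
    """Apply the regression adjustment for comparison with SUS norms."""
    return 0.65 * umux_lite_raw(item1, item2) + 22.9
```

The follow-up study summarized earlier found the adjusted score and the SUS differing by about 1 point on average, which is why the regression-adjusted form is the one compared against SUS norms.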
Conference Paper
Full-text available
Processing documents is a critical and crucial aspect for enterprises. The management of documents involves several people and can be a long and time-wasting process. We developed a document workflow engine based on the email paradigm. Through a web application, the subject of the workflow, the document, can be sent as an email attachment. Our solution overcomes the current limitations in the use of document workflow software, especially regarding user experience. With our system there is no need for users to learn how a new framework works. In addition, users with different roles have different customized views of the document. Moreover, a suggestion feature has been implemented: the system suggests a possible receiver for the document, depending on the document flow.
Article
Full-text available
The advent of the WWW has caused a dramatic evolution in how information is obtained. Nowadays, most people can easily get information from the Internet anywhere and at any time. Web sites have also become an important tool that governments use to market their institutions to prospective customers and to make government information and services available online. To make these web sites more functional, they must be accessible to all users, including those with disabilities. This study was undertaken with the purpose of identifying the accessibility of e-government websites based on the World Wide Web Consortium (W3C) guidelines. In addition, the study was also intended to investigate webmasters' knowledge and practices pertaining to accessibility. The results of the analysis indicated that not a single Malaysian e-government website passed the W3C Priority 1 accessibility checkpoints. A follow-up study using interviews revealed that most webmasters did not fully adhere to the Web Content Accessibility Guidelines (WCAG) standard.
Article
Full-text available
The present research develops and tests a theoretical extension of the Technology Acceptance Model (TAM) that explains perceived usefulness and usage intentions in terms of social influence and cognitive instrumental processes. The extended model, referred to as TAM2, was tested using longitudinal data collected regarding four different systems at four organizations (N = 156), two involving voluntary usage and two involving mandatory usage. Model constructs were measured at three points in time at each organization: preimplementation, one month postimplementation, and three months postimplementation. The extended model was strongly supported for all four organizations at all three points of measurement, accounting for 40%--60% of the variance in usefulness perceptions and 34%--52% of the variance in usage intentions. Both social influence processes (subjective norm, voluntariness, and image) and cognitive instrumental processes (job relevance, output quality, result demonstrability, and perceived ease of use) significantly influenced user acceptance. These findings advance theory and contribute to the foundation for future research aimed at improving our understanding of user adoption behavior.
Article
Full-text available
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, 1986. Includes bibliographical references (leaves 233-250). Photocopy.
Article
Software product line (SPL) scoping aids companies to define the boundaries of their resources such as products, domains, and assets, the target of reuse tasks scoping technical and organizational aspects. As scoping guides the management of the resources in SPL development, it becomes one of the core activities in this process. We can find in the literature several approaches on this topic, proposing techniques and methodologies to be applicable in different organizational scenarios. However, no work comprehensively reviews such approaches and describes the advances in state of the art in the last years. In this context, we look into identifying, analyzing, and extracting detailed characteristics from SPL scoping proposals found in the literature. These characteristics allowed us to compare these approaches, reason about their applicability, and identify existing limitations and research opportunities. Thus, we conducted a systematic literature review alongside snowballing, following a well-defined protocol to retrieve, classify and extract information from the literature. We analyzed a total of 58 studies, identifying 41 different approaches in the field, highlighting their similarities and differences, and establishing a generic scoping process. Furthermore, we discuss research opportunities in the SPL scoping field.
Article
Most modern software systems (operating systems like Linux or Android, Web browsers like Firefox or Chrome, video encoders like ffmpeg, x264 or VLC, mobile and cloud applications, etc.) are highly configurable. Hundreds of configuration options, features, or plugins can be combined, each potentially with distinct functionality and effects on execution time, security, energy consumption, etc. Due to the combinatorial explosion and the cost of executing software, it is quickly impossible to exhaustively explore the whole configuration space. Hence, numerous works have investigated the idea of learning it from a small sample of configurations’ measurements. The pattern ”sampling, measuring, learning” has emerged in the literature, with several practical interests for both software developers and end-users of configurable systems. In this systematic literature review, we report on the different application objectives (e.g., performance prediction, configuration optimization, constraint mining), use-cases, targeted software systems, and application domains. We review the various strategies employed to gather a representative and cost-effective sample. We describe automated software techniques used to measure functional and non-functional properties of configurations. We classify machine learning algorithms and how they relate to the pursued application. Finally, we also describe how researchers evaluate the quality of the learning process. The findings from this systematic review show that the potential application objective is important; there are a vast number of case studies reported in the literature related to particular domains or software systems. Yet, the huge variant space of configurable systems is still challenging and calls to further investigate the synergies between artificial intelligence and software engineering.
Chapter
[Context and motivation] According to Data-Driven Requirements Engineering (RE), explicit and implicit user feedback can be considered a relevant source of requirements, thus supporting requirements elicitation. [Question/problem] Less attention has been paid so far to the role of implicit feedback in RE tasks, such as requirements validation, and to how to specify what implicit feedback to collect and analyse. [Principal idea/results] We propose an approach that leverages goal-oriented requirements modelling combined with Goal-Question-Metric. We explore the applicability of the approach on an industrial project in which a platform for online training has been adapted to realise a citizen information service that has been used by hundreds of people during the COVID-19 pandemic. [Contributions] Our contribution is twofold: (i) we present our approach towards a systematic definition of requirements for data collection and analysis, in support of software requirements validation and evolution; (ii) we discuss our ideas using concrete examples from an industrial case study and formulate a research question that will be addressed by conducting experiments as part of our research.
Article
Developers often need to use appropriate APIs to program efficiently, but it is usually a difficult task to identify the exact one they need from a vast list of candidates. To ease the burden, a multitude of API recommendation approaches have been proposed. However, most of the currently available API recommenders do not support the effective integration of user feedback into the recommendation loop. In this paper, we propose a framework, BRAID (Boosting RecommendAtion with Implicit FeeDback), which leverages learning-to-rank and active learning techniques to boost recommendation performance. By exploiting user feedback information, we train a learning-to-rank model to re-rank the recommendation results. In addition, we speed up the feedback learning process with active learning. Existing query-based API recommendation approaches can be plugged into BRAID. We select three state-of-the-art API recommendation approaches as baselines to demonstrate the performance enhancement of BRAID measured by Hit@k (Top-k), MAP, and MRR. Empirical experiments show that, with acceptable overheads, the recommendation performance improves steadily and substantially with an increasing percentage of feedback data, compared with the baselines.
Chapter
The shift from on-premise to cloud enterprise software has fundamentally changed the interactions between software vendors and users. Since enterprise software users are now working directly on an infrastructure that is provided or monitored by the software vendor, enterprise cloud software providers are technically able to measure nearly every interaction of each individual user with their cloud products. The novel insights into actual usage that can thereby be gained provide an opportunity for requirements engineering to improve and effectively extend enterprise cloud products while they are being used. Even though academic literature has been proposing ideas and conceptualizations of leveraging usage data in requirements engineering for nearly a decade, there are no functioning prototypes that implement such ideas. Drawing on an exploratory case study at one of the world’s leading cloud software vendors, we conceptualize an Action Design Research project that fills this gap. The project aims to establish a software prototype that supports requirements engineering activities to incrementally improve enterprise cloud software in the post-delivery phase based on actual usage data.
Chapter
The objective of Web accessibility evaluation is to verify that all users are able to use the Web, this means that they can perceive, understand, navigate, and interact with it (Henry 2018a). Since the manual verification of the fulfilment of guidelines that specify accessibility requirements can often turn out to be difficult and cumbersome, it is crucial to have appropriate computer tools available to assist this activity. There exist numerous applications that perform diverse types of automatic accessibility evaluations. On the other hand, on-site and remote evaluations with users can also be supported by specific tools. Even manual evaluations may be supported by crowdsourcing-based tools. All these innovations may have crucial importance in the advancement of Web accessibility. This chapter studies the need for tools in this field, reviews the main characteristics of the tools used for Web accessibility evaluation, and reflects upon their future.
Article
Product lines are designed to support the reuse of features across multiple products. Features are product functional requirements that are important to stakeholders. In this context, feature models are used to establish a reuse platform and allow the configuration of multiple products through the interactive selection of a valid combination of features. Although there are many specialized configurator tools that aim to provide configuration support, they only ensure that all dependencies of selected features are automatically satisfied. However, no support is provided to help decision makers focus on likely relevant configuration options. Consequently, since decision makers are often unsure about their needs, the configuration of large feature models becomes challenging. To improve the efficiency and quality of the product configuration process, we propose a new approach that provides users with a limited set of permitted, necessary, and relevant choices. To this end, we adapt six state-of-the-art recommender algorithms to the product line configuration context. We empirically demonstrate the usability of the implemented algorithms in different domain scenarios, based on two real-world datasets of configurations. The results of our evaluation show that recommender algorithms, such as CF-shrinkage, CF-significance weighting, and BRISMF, when applied in the context of product-line configuration, can efficiently support decision makers in the selection of features.
Article
Context-aware recommender systems leverage the value of recommendations by exploiting context information that affects user preferences and situations, with the goal of recommending items that are really relevant to changing user needs. Despite the importance of context-awareness in the recommender systems realm, researchers and practitioners lack guides that help them understand the state of the art and how to exploit context information to smarten up recommender systems. This paper presents the results of a comprehensive systematic literature review we conducted to survey context-aware recommenders and their mechanisms to exploit context information. The main contribution of this paper is a framework that characterizes context-aware recommendation processes in terms of: i) the recommendation techniques used at every stage of the process, ii) the techniques used to incorporate context, and iii) the stages of the process where context is integrated into the system. This systematic literature review provides a clear understanding of the integration of context into recommender systems, including the context types most frequently used in the different application domains and validation mechanisms, explained in terms of the datasets, properties, metrics, and evaluation protocols used. The paper concludes with a set of research opportunities in this field.
Book
From the Publisher: Accessibility is about making a website accessible to those with aural, visual, or physical disabilities, or rather, constructing websites that don't exclude these people from accessing the content or services being provided. The purpose of this book is to enable web professionals to create and retrofit accessible websites quickly and easily. It includes discussion of the technologies and techniques that are used to access websites, and the legal stipulations and precedents that exist in the US and around the world. The main body of the book is devoted to the business of making websites and their content accessible: testing techniques, web development tools, and advanced techniques. The book concludes with a quick reference checklist for creating accessible websites. This is a practical book with lots of step-by-step examples, supported by a Section 508 checklist enabling developers to refer to the book as they work, as well as a complete list of accessibility testing and approval sites. What's great about this book? It teaches you how to make your content accessible to people with disabilities, explains in detail how to test sites for accessibility issues, shows how to use a wide range of accessibility tools effectively, and covers how to make your website fully Section 508 compliant. It also includes detailed coverage of accessibility law, plenty of practical examples, and a tutorial on accessible authoring with Flash MX.
Chapter
A Software Product Line (SPL) aims to support the development of a family of similar software products from a common set of shared assets. SPLs represent a long-term investment and have a considerable life-span. In order to realize a return-on-investment, companies dealing with SPLs often plan their product portfolios and software engineering activities strategically over many months or years ahead. Compared to single-system engineering, SPL evolution exhibits higher complexity due to the variability and the interdependencies between products. This chapter provides an overview of concepts and challenges in SPL evolution and summarizes the state of the art. For this, we first describe the general process for SPL evolution and general modeling concepts to specify SPL evolution. On this basis, we provide an overview of the state of the art in each of the main process tasks, which are migration towards SPLs, analysis of (existing) SPL evolution, planning of future SPL evolution, and implementation of SPL evolution.
Article
Maintenance of unused features leads to unnecessary costs. Therefore, identifying unused features can help product owners to prioritize maintenance efforts. We present a tool that employs dynamic analyses and text mining techniques to identify use case documents describing unused features to approximate unnecessary features. We report on a preliminary study of an industrial business information system over the course of one year quantifying unused features and measuring the performance of the approach. It indicates the relevance of the problem and the capability of the presented approach to detect unused features.
Chapter
Before we go any further, we would like to begin by providing the reader with a step-by-step introduction to the methodological debate surrounding expert interviews. In doing so, we will start with a brief discussion of the generally accepted advantages and risks of expert interviews in research practice (1). We will follow this by outlining current trends in the sociological debate regarding experts and expertise, since expert interviews are — at least on the surface — defined by their object, namely the expert (2). We will then conclude with a look at the current methodological debate regarding expert interviews, an overview of the layout and structure of this book, as well as summaries of the 12 articles it contains (3).
Article
The use of learning technologies is becoming ubiquitous in higher education. As a result, there is a pressing need to develop methods to evaluate their accessibility to ensure that students do not encounter barriers to accessibility while engaging in e-learning. In this study, sample online units were evaluated for accessibility by automated tools and by student participants (in sessions moderated and unmoderated by researchers), and the data from these different methods of e-learning accessibility evaluation were compared. Nearly all students were observed encountering one or more barriers to accessibility while completing the online units, though the automated tools did not predict these barriers and instead predicted potential barriers that were not relevant to the study participants. These data underscore the need to carry out student-centered accessibility evaluation in addition to relying on automated tools and accessibility guideline conformance as measures of accessibility. Students preferred to participate in unmoderated sessions, and the data from the unmoderated sessions were comparable to that from the more traditional moderated sessions. Additional work is needed to further explore methods of student-centered evaluation, including different variations of unmoderated sessions. © 2015 Association for Educational Communications and Technology
Article
Highly configurable systems allow users to tailor software to specific needs. Valid combinations of configuration options are often restricted by intricate constraints. Describing options and constraints in a variability model allows reasoning about the supported configurations. To automate creating and verifying such models, we need to identify the origin of such constraints. We propose a static analysis approach, based on two rules, to extract configuration constraints from code. We apply it on four highly configurable systems to evaluate the accuracy of our approach and to determine which constraints are recoverable from the code. We find that our approach is highly accurate (93% and 77% respectively) and that we can recover 28% of existing constraints. We complement our approach with a qualitative study to identify constraint sources, triangulating results from our automatic extraction, manual inspections, and interviews with 27 developers. We find that, apart from low-level implementation dependencies, configuration constraints enforce correct runtime behavior, improve users’ configuration experience, and prevent corner cases. While the majority of constraints is extractable from code, our results indicate that creating a complete model requires further substantial domain knowledge and testing. Our results aim at supporting researchers and practitioners working on variability model engineering, evolution, and verification techniques.
Article
Context: Due to increased competition and the advent of mass customization, many software firms are utilizing product families (groups of related products derived from a product platform) to provide product variety in a cost-effective manner. The key to designing a successful software product family is the product platform, so it is important to determine the most appropriate product platform scope, related to business objectives, for product line development. Aim: This paper proposes a novel method to find the optimized scope of a software product platform based on end-user features. Method: The proposed method, PPSMS (Product Platform Scoping Method for Software Product Lines), mathematically formulates product platform scope selection as an optimization problem. The formulation targets identification of an optimized product platform scope that maximizes life-cycle cost savings and the amount of commonality, while meeting the goals and needs of the envisioned customer segments. A simulated annealing based algorithm that can solve problems heuristically is then used to help the decision maker select a scope for the product platform, by performing a trade-off analysis of the commonality and cost-savings objectives. Results: In a case study, PPSMS helped identify 5 non-dominated solutions considered to be of highest preference for decision making, taking into account both cost-savings and commonality objectives. A quantitative and qualitative analysis indicated that human experts perceived value in adopting the method in practice, and that it was effective in identifying an appropriate product platform scope.
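The actual PPSMS formulation is not reproduced in this abstract; the following is only a generic, hypothetical sketch of the simulated-annealing idea it describes: a binary in-scope/out-of-scope decision per feature, with a weighted trade-off between cost savings and commonality. All scoring functions and data here are invented for illustration.

```python
import math
import random

def score(scope, savings, commonality, w=0.5):
    # Hypothetical objective: weighted sum of per-feature cost savings and
    # commonality. The real PPSMS objectives are more elaborate than this.
    s = sum(savings[f] for f in scope)
    c = sum(commonality[f] for f in scope)
    return w * s + (1 - w) * c

def anneal_scope(features, savings, commonality,
                 steps=10000, t0=1.0, cooling=0.999):
    # Simulated annealing over a binary "in scope / out of scope" decision
    # per feature, keeping the best scope seen so far.
    random.seed(0)  # deterministic for illustration
    scope = set(random.sample(features, k=max(1, len(features) // 2)))
    best = set(scope)
    t = t0
    for _ in range(steps):
        candidate = set(scope)
        candidate ^= {random.choice(features)}  # flip one feature in or out
        delta = (score(candidate, savings, commonality)
                 - score(scope, savings, commonality))
        # Always accept improvements; accept worsening moves with a
        # probability that shrinks as the temperature cools.
        if delta >= 0 or random.random() < math.exp(delta / t):
            scope = candidate
            if score(scope, savings, commonality) > score(best, savings, commonality):
                best = set(scope)
        t *= cooling
    return best
```

With equal weights and a toy feature set where two features have positive scores and two negative, the search converges on the positively scored pair, which is the kind of trade-off exploration the abstract attributes to the method.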
Article
Web accessibility means that disabled people can effectively perceive, understand, navigate, and interact with the web. Web accessibility evaluation methods are needed to validate the accessibility of web pages. However, the role of subjectivity and of expertise in such methods is unknown and has not previously been studied. This article investigates the effect of expertise in web accessibility evaluation methods by conducting a Barrier Walkthrough (BW) study with 19 expert and 57 nonexpert judges. The BW method is an evaluation method that can be used to manually assess the accessibility of web pages for different user groups such as motor impaired, low vision, blind, and mobile users. Our results show that expertise matters, and even though the effect of expertise varies depending on the metric used to measure quality, the level of expertise is an important factor in the quality of accessibility evaluation of web pages. In brief, when pages are evaluated with nonexperts, we observe a drop in validity and reliability. We also observe a negative monotonic relationship between number of judges and reproducibility: more evaluators mean more diverse outputs. After five experts, reproducibility stabilizes, but this is not the case with nonexperts. The ability to detect all the problems increases with the number of judges: with 3 experts all problems can be found, but for such a level 14 nonexperts are needed. Even though our data show that experts rated pages differently, the difference is quite small. Finally, compared to nonexperts, experts spent much less time and the variability among them is smaller, they were significantly more confident, and they rated themselves as being more productive. The article discusses practical implications regarding how BW results should be interpreted, how to recruit evaluators, and what happens when more than one evaluator is hired. Supplemental materials are available for this article.
Article
This article presents nearly 10 years' worth of System Usability Scale (SUS) data collected on numerous products in all phases of the development lifecycle. The SUS, developed by Brooke (1996), reflected a strong need in the usability community for a tool that could quickly and easily collect a user's subjective rating of a product's usability. The data in this study indicate that the SUS fulfills that need. Results from the analysis of this large number of SUS scores show that the SUS is a highly robust and versatile tool for usability professionals. The article presents these results and discusses their implications, describes nontraditional uses of the SUS, explains a proposed modification to the SUS to provide an adjective rating that correlates with a given score, and provides details of what constitutes an acceptable SUS score.
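The standard SUS scoring rule behind the scores discussed above is simple to compute. As a minimal sketch (the function name is ours, not from the article): odd-numbered items are positively worded and contribute their response minus 1, even-numbered items are negatively worded and contribute 5 minus their response, and the summed contributions are scaled by 2.5 to yield a 0-100 score.

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (1-indexed) contribute (response - 1);
    even-numbered items contribute (5 - response).
    The sum (0-40) is scaled by 2.5 to give a 0-100 score.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly ten item responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5
```

For example, the most favorable response pattern (5 on every odd item, 1 on every even item) yields 100, and all-neutral responses (3 on every item) yield 50.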
J. Brewer. Using combined expertise to evaluate web accessibility.
Houssem Chemingui, Camille Salinesi, Inès Gam, Raul Mazo, and Henda Ghezala. 2021. Devising Configuration Guidance with Process Mining Support.
Paul Clements and Linda M. Northrop. 2002. Software Product Lines: Practices and Patterns. Addison-Wesley.
Oscar Díaz, Raul Medeiros, and Mustafa Al-Hajjaji. 2023. How can feature usage be tracked across product variants? Implicit Feedback in Software Product Lines. https://arxiv.org/abs/2309.04278 Manuscript submitted for publication.