Conference Paper

Towards Software Quality and User Satisfaction through User Interfaces


Abstract

This paper is a position paper based on my PhD thesis: https://www.researchgate.net/publication/264194567_Enhancing_Software_Quality_and_Quality_of_Experience_through_User_Interfaces. With this work we expect to provide the community and the industry with a solid basis for the development, integration, and deployment of software testing tools. By a solid basis we mean, on the one hand, a set of guidelines, recommendations, and clues for better comprehending, analyzing, and performing software testing processes, and, on the other hand, a set of robust software frameworks that serve as a starting point for the development of future testing tools.


Thesis
Full-text available
Table of contents:

1 Introduction
  1.1 Motivation
    1.1.1 Human-Computer Interaction
    1.1.2 Software Testing
    1.1.3 Data Verification
    1.1.4 Software Usability
    1.1.5 Quality of Experience
  1.2 Enhancing Software Quality
    1.2.1 Block 1: Achieving Quality in Interaction Components Separately
    1.2.2 Block 2: Achieving Quality of User-System Interaction as a Whole
  1.3 Goals of this PhD Thesis
  1.4 Publications Related to this PhD Thesis
  1.5 Software Contributions of this PhD Thesis
    1.5.1 OHT: Open HMI Tester
    1.5.2 S-DAVER: Script-based Data Verification
    1.5.3 PALADIN: Practice-oriented Analysis and Description of Multimodal Interaction
    1.5.4 CARIM: Context-Aware and Ratings Interaction Metamodel
  1.6 Summary of Research Goals, Publications, and Software Contributions
  1.7 Context of this PhD Thesis
  1.8 Structure of this PhD Thesis
2 Related Work
  2.1 Group 1: Approaches Assuring Quality of a Particular Interaction Component
    2.1.1 Validation of Software Output
      2.1.1.1 Methods Using a Complete Model of the GUI
      2.1.1.2 Methods Using a Partial Model of the GUI
      2.1.1.3 Methods Based on GUI Interaction
    2.1.2 Validation of User Input
      2.1.2.1 Data Verification Using Formal Logic
      2.1.2.2 Data Verification Using Formal Property Monitors
      2.1.2.3 Data Verification in GUIs and in the Web
  2.2 Group 2: Approaches Describing and Analyzing User-System Interaction as a Whole
    2.2.1 Analysis of User-System Interaction
      2.2.1.1 Analysis for the Development of Multimodal Systems
      2.2.1.2 Evaluation of Multimodal Interaction
      2.2.1.3 Evaluation of User Experiences
    2.2.2 Analysis of Subjective Data of Users
      2.2.2.1 User Ratings Collection
      2.2.2.2 Users' Mood and Attitude Measurement
    2.2.3 Analysis of Interaction Context
      2.2.3.1 Interaction Context Factors Analysis
      2.2.3.2 Interaction Context Modeling
3 Evaluating Quality of System Output
  3.1 Introduction and Motivation
  3.2 GUI Testing Requirements
  3.3 Preliminary Considerations for the Design of a GUI Testing Architecture
    3.3.1 Architecture Actors
    3.3.2 Organization of the Test Cases
    3.3.3 Interaction and Control Events
  3.4 The OHT Architecture Design
    3.4.1 The HMI Tester Module Architecture
    3.4.2 The Preload Module Architecture
    3.4.3 The Event Capture Process
    3.4.4 The Event Playback Process
  3.5 The OHT Implementation
    3.5.1 Implementation of Generic and Final Functionality
      3.5.1.1 Generic Data Model
      3.5.1.2 Generic Recording and Playback Processes
    3.5.2 Implementation of Specific and Adaptable Functionality
      3.5.2.1 Using the DataModelAdapter
      3.5.2.2 The Preloading Process
      3.5.2.3 Adapting the GUI Event Recording and Playback Processes
    3.5.3 Technical Details About the OHT Implementation
  3.6 Discussion
    3.6.1 Architecture
    3.6.2 The Test Case Generation Process
    3.6.3 Validation of Software Response
    3.6.4 Tolerance to Modifications, Robustness, and Scalability
    3.6.5 Performance Analysis
  3.7 Conclusions
4 Evaluating Quality of Users' Input
  4.1 Introduction and Motivation
  4.2 Practical Analysis of Common GUI Data Verification Approaches
  4.3 Monitoring GUI Data at Runtime
  4.4 Verification Rules
    4.4.1 Rule Definition
    4.4.2 Using the Rules to Apply Correction
    4.4.3 Rule Arrangement
    4.4.4 Rule Management
      4.4.4.1 Loading the Rules
      4.4.4.2 Evolution of the Rules and the GUI
    4.4.5 Correctness and Consistency of the Rules
  4.5 The Verification Feedback
  4.6 S-DAVER Architecture Design
    4.6.1 Architecture Details
    4.6.2 Architecture Adaptation
  4.7 S-DAVER Implementation and Integration Considerations
  4.8 Practical Use Cases
    4.8.1 Integration, Configuration, and Deployment of S-DAVER
    4.8.2 Defining the Rules in Qt Bitcoin Trader
    4.8.3 Defining the Rules in Transmission
    4.8.4 Development and Verification Experience with S-DAVER
  4.9 Performance Analysis of S-DAVER
  4.10 Discussion
    4.10.1 A Lightweight Data Verification Approach
    4.10.2 The S-DAVER Open-Source Implementation
    4.10.3 S-DAVER Compared with Other Verification Approaches
  4.11 Conclusions
5 Modeling and Evaluating Quality of Multimodal User-System Interaction
  5.1 Introduction and Motivation
  5.2 A Model-based Framework to Evaluate Multimodal Interaction
    5.2.1 Classification of Dialog Models by Level of Abstraction
    5.2.2 The Dialog Structure
    5.2.3 Using Parameters to Describe Multimodal Interaction
      5.2.3.1 Adaptation of Base Parameters
      5.2.3.2 Defining New Modality and Meta-communication Parameters
      5.2.3.3 Defining New Parameters for GUI and Gesture Interaction
      5.2.3.4 Classification of the Multimodal Interaction Parameters
  5.3 Design of PALADIN
  5.4 Implementation, Integration, and Usage of PALADIN
  5.5 Application Use Cases
    5.5.1 Assessment of PALADIN as an Evaluation Tool
      5.5.1.1 Participants and Material
      5.5.1.2 Procedure
      5.5.1.3 Data Analysis
    5.5.2 Usage of PALADIN in a User Study
      5.5.2.1 Participants and Material
      5.5.2.2 Procedure
      5.5.2.3 Results
  5.6 Discussion
    5.6.1 Research Questions
    5.6.2 Practical Application of PALADIN
    5.6.3 Completeness of PALADIN According to Evaluation Guidelines
    5.6.4 Limitations in Automatic Logging of Interaction Parameters
  5.7 Conclusions
  5.8 Parameters Used in PALADIN
6 Modeling and Evaluating Mobile Quality of Experience
  6.1 Introduction and Motivation
  6.2 Context- and QoE-aware Interaction Analysis
    6.2.1 Incorporating Context Information and User Ratings into Interaction Analysis
    6.2.2 Arranging the Parameters for the Analysis of Mobile Experiences
    6.2.3 Using CARIM for QoE Assessment
  6.3 Context Parameters
    6.3.1 Quantifying the Surrounding Context
    6.3.2 Arranging Context Parameters into CARIM
  6.4 User Perceived Quality Parameters
    6.4.1 Measuring the Attractiveness of Interaction
    6.4.2 Measuring Users' Emotional State and Attitude toward Technology Use
    6.4.3 Arranging User Parameters into CARIM
  6.5 CARIM Model Design
    6.5.1 The Base Design: PALADIN
    6.5.2 The New Proposed Design: CARIM
  6.6 CARIM Model Implementation
  6.7 Experiment
    6.7.1 Participants and Material
    6.7.2 Procedure
    6.7.3 Results
      6.7.3.1 Comparing the Two Interaction Designs for UMU Lander
      6.7.3.2 Validating the User Behavior Hypotheses
  6.8 Discussion
    6.8.1 Modeling Mobile Interaction and QoE
    6.8.2 CARIM Implementation and Experimental Validation
    6.8.3 CARIM Compared with Other Representative Approaches
  6.9 Conclusions
7 Conclusions and Further Work
  7.1 Conclusions of this PhD Thesis
    7.1.1 Driving Forces of this PhD Thesis
    7.1.2 Work and Research in User-System Interaction Assessment
    7.1.3 Goals Achieved in this PhD Thesis
  7.2 Future Lines of Work
Bibliography
A List of Acronyms
Conference Paper
We have developed a platform named Advanced Test Environment (ATE) to support the design and automatic execution of UX tests for applications running on Android smartphones. The platform collects objective metrics used to estimate the UX. In this paper, we investigate the extent to which the metrics captured by ATE approximate the results obtained from UX testing with real human users. Our findings suggest that ATE produces UX estimations comparable to those reported by human users. We have also compared ATE with three widespread benchmark tools commonly used in industry, and the results show that ATE outperforms them.
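The abstract does not detail which objective metrics ATE collects or how it combines them. Purely as an illustration of the general idea, the following Python sketch normalizes a few hypothetical interaction metrics (launch time, response latency, dropped frames) and aggregates them into a single estimate on a 1-5 scale; the metric names, bounds, and weighting are assumptions, not part of ATE.

```python
# Illustrative sketch only: aggregating hypothetical objective metrics into a
# UX estimate on a 1-5 scale. Metric names, weights, and thresholds are
# assumptions, not the ATE platform described in the paper.

def normalize(value, best, worst):
    """Map a raw metric onto [0, 1], where 1 is the best observable value."""
    score = 1.0 - (value - best) / (worst - best)
    return max(0.0, min(1.0, score))

def estimate_ux(metrics, weights=None):
    """Combine normalized metrics into a 1-5 UX estimate (weighted mean)."""
    # (best, worst) bounds per metric are illustrative thresholds.
    bounds = {
        "launch_time_ms": (500, 5000),
        "response_latency_ms": (100, 2000),
        "dropped_frames_pct": (0, 30),
    }
    weights = weights or {name: 1.0 for name in bounds}
    total = sum(weights[m] * normalize(metrics[m], *bounds[m]) for m in bounds)
    return 1.0 + 4.0 * total / sum(weights.values())

if __name__ == "__main__":
    sample = {"launch_time_ms": 1200, "response_latency_ms": 350, "dropped_frames_pct": 3}
    print(f"Estimated UX score: {estimate_ux(sample):.2f} / 5")
```

Such an estimate can then be compared directly against the ratings that human participants report on the same tasks, which is the comparison the paper investigates.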
Article
Full-text available
Software testing is usually used to report on and/or assure the quality, reliability, and robustness of software in the context or scenario where it is intended to work. This is especially true in the case of user interfaces, where the testing phase is critical before the software can be accepted by the final user and put into operation. This paper presents the design, and its later implementation as a contribution to the open-source community, of a Human-Machine Interface (HMI) testing architecture named OpenHMI-Tester. The design aims to support the major event-based, open-source windowing systems, thus providing generality, in addition to other features such as scalability and tolerance to modifications introduced during the HMI design process. The proposed architecture has also been integrated into a complex industrial scenario, which helped to identify a set of realistic requirements for the testing architecture and to exercise it with real HMI developers.
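OpenHMI-Tester itself is implemented in C++ on top of event-based toolkits such as Qt. The Python sketch below is only a language-agnostic illustration of the record/playback idea behind such a tool: interaction events are captured into a test case as (widget, action, value) steps and later re-injected into the GUI. The EventRecorder and EventPlayer classes and the FakeGui used to keep the example runnable are hypothetical stand-ins, not the OHT modules.

```python
# Minimal, self-contained sketch of GUI event capture and playback.
# Names (EventRecorder, EventPlayer, FakeGui) are illustrative; the real
# OpenHMI-Tester hooks into the windowing system's event loop instead.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Step:
    widget: str   # identifier of the GUI element (e.g., "loginButton")
    action: str   # e.g., "click", "type"
    value: str = ""

@dataclass
class TestCase:
    name: str
    steps: List[Step] = field(default_factory=list)

class EventRecorder:
    """Collects interaction events emitted by the GUI into a test case."""
    def __init__(self, name):
        self.test_case = TestCase(name)
    def on_event(self, widget, action, value=""):
        self.test_case.steps.append(Step(widget, action, value))

class EventPlayer:
    """Replays a recorded test case by re-injecting each step into the GUI."""
    def __init__(self, gui):
        self.gui = gui
    def play(self, test_case):
        for step in test_case.steps:
            self.gui.inject(step.widget, step.action, step.value)

class FakeGui:
    """Stand-in for a real GUI toolkit; it simply logs injected events."""
    def inject(self, widget, action, value):
        print(f"replay: {action}({value!r}) on {widget}")

if __name__ == "__main__":
    recorder = EventRecorder("login flow")
    recorder.on_event("userField", "type", "alice")
    recorder.on_event("loginButton", "click")
    EventPlayer(FakeGui()).play(recorder.test_case)
```

Keeping the recorded steps toolkit-agnostic, and delegating injection to an adapter per windowing system, is what gives this kind of architecture its generality and tolerance to GUI changes.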
Article
Full-text available
Software testing is a very important phase of the software development process. These tests are performed to ensure the quality, reliability, and robustness of software within the execution context in which it is expected to be used. Some of these tests focus on ensuring that the graphical user interfaces (GUIs) work properly. GUI testing represents a critical step before the software is deployed and accepted by the end user. This paper describes the main results obtained from our research work in the software testing and GUI testing areas. It also describes the design and implementation of an open-source architecture used as a framework for developing automated GUI testing tools.
Article
Full-text available
This paper presents a new approach to automatically generate GUI test cases and validation points from a set of annotated use cases. This technique helps to reduce the effort required for GUI modeling and test coverage analysis during the software testing process. The test case generation process described in this paper is initially guided by use cases describing the GUI behavior, recorded as a set of interactions with the GUI elements (e.g., widgets being clicked, data input, etc.). These use cases (modeled as a set of initial test cases) are annotated by the tester to indicate interesting variations in widget values (ranges, valid or invalid values) and validation rules with expected results. Once the use cases are annotated, the approach uses the newly defined values and validation rules to automatically generate new test cases and validation points, easily expanding the test coverage. The process also allows the tester to narrow the GUI testing model to precisely the set of GUI elements, interactions, and values of interest.
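As an illustration of the idea (not the authors' actual tool), the sketch below takes a recorded use case whose steps have been annotated with candidate widget values and a validation rule, and expands it into one concrete test case per combination of annotated values. All widget names and the expand function are hypothetical.

```python
# Hypothetical sketch: expanding an annotated use case into concrete GUI test
# cases plus validation points. This is not the paper's implementation.
from itertools import product

# A recorded use case: an ordered list of interactions with GUI widgets.
# Steps annotated with "values" are varied; "expect" becomes a validation point.
use_case = [
    {"widget": "ageField",  "action": "type",   "values": ["17", "18", "99"]},
    {"widget": "planBox",   "action": "select", "values": ["basic", "pro"]},
    {"widget": "submitBtn", "action": "click"},
    {"widget": "statusLbl",
     "expect": lambda v: "rejected" if int(v["ageField"]) < 18 else "accepted"},
]

def expand(use_case):
    """Yield one test case per combination of annotated widget values."""
    annotated = [s for s in use_case if "values" in s]
    for combo in product(*(s["values"] for s in annotated)):
        chosen = {s["widget"]: value for s, value in zip(annotated, combo)}
        steps, validation_points = [], []
        for step in use_case:
            if "values" in step:
                steps.append((step["widget"], step["action"], chosen[step["widget"]]))
            elif "expect" in step:
                validation_points.append((step["widget"], step["expect"](chosen)))
            else:
                steps.append((step["widget"], step["action"], None))
        yield {"steps": steps, "validation_points": validation_points}

if __name__ == "__main__":
    for i, test_case in enumerate(expand(use_case), 1):
        print(f"test case {i}:", test_case)
```

The single recorded use case expands here into six test cases, each carrying its own expected value for the status label, which is how annotation widens coverage without additional GUI modeling effort.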
Conference Paper
The usability of Free/Libre/Open Source Software (FLOSS) is a new challenge for HCI professionals. Although HCI professionals are working on usability issues in FLOSS, the CHI community has not yet organized with respect to FLOSS. The purpose of this SIG is to bring together HCI professionals and researchers to discuss current issues in FLOSS. Specifically, this SIG looks at usability, the role of HCI expertise, and design rationale in FLOSS projects.
Conference Paper
In this SIG experienced usability test practitioners and HCI researchers will discuss common errors in usability test facilitation. Usability test facilitation is the actual encounter between a test participant and a facilitator from the moment the test participant arrives at the test location to the moment the test participant leaves the test location. The purpose of this SIG is to identify common approaches to usability test facilitation that work and do not work, and to come up with realistic suggestions for how to prevent typical problems.
Conference Paper
Although the World-Wide-Web (WWW) has significantly enhanced open-source software (OSS) development, it has also created new challenges for quality assurance (QA), especially for OSS with a graphical-user interface (GUI) front-end. Distributed communities of developers, connected by the WWW, work concurrently on loosely-coupled parts of the OSS and the corresponding GUI code. Due to the unprecedented code churn rates enabled by the WWW, developers may not have time to determine whether their recent modifications have caused integration problems with the overall OSS; these problems can often be detected via GUI integration testing. However, the resource-intensive nature of GUI testing prevents the application of existing automated QA techniques used during conventional OSS evolution. In this paper we develop new process support for three nested techniques that leverage developer communities interconnected by the WWW to automate model-based testing of evolving GUI-based OSS. The "innermost" technique (crash testing) operates on each code check-in of the GUI software and performs a quick and fully automatic integration test. The second technique (smoke testing) operates on each day's GUI build and performs functional "reference testing" of the newly integrated version of the GUI. The third (outermost) technique (comprehensive GUI testing) conducts detailed integration testing of a major GUI release. An empirical study involving four popular OSS shows that (1) the overall approach is useful to detect severe faults in GUI-based OSS and (2) the nesting paradigm helps to target feedback and makes effective use of the WWW by implicitly distributing QA.
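The three nested techniques form a pipeline in which cheaper, fully automatic checks run far more often than the expensive ones. The Python sketch below, with hypothetical trigger names and a stubbed build object, illustrates that nesting as it might look in a continuous-integration script; it is not the authors' tooling.

```python
# Illustrative nesting of the three QA levels described above; the trigger
# names and build methods are hypothetical placeholders, not the paper's tools.

class FakeBuild:
    """Stub standing in for a freshly built GUI-based OSS version."""
    def launch_gui_and_click_through(self, max_events): return True
    def run_reference_suite(self, oracle): return True
    def run_full_model_based_suite(self): return True

def crash_test(build):
    """Innermost level: quick, fully automatic check on every code check-in."""
    return build.launch_gui_and_click_through(max_events=50)

def smoke_test(build):
    """Middle level: functional reference testing of each daily GUI build."""
    return build.run_reference_suite(oracle="daily_baseline")

def comprehensive_test(build):
    """Outermost level: detailed integration testing of a major GUI release."""
    return build.run_full_model_based_suite()

def on_event(trigger, build):
    # More frequent triggers run cheaper tests; a release runs the full nest.
    if trigger == "check-in":
        return crash_test(build)
    if trigger == "daily-build":
        return crash_test(build) and smoke_test(build)
    if trigger == "release":
        return crash_test(build) and smoke_test(build) and comprehensive_test(build)
    raise ValueError(f"unknown trigger: {trigger}")

if __name__ == "__main__":
    print("daily build passed:", on_event("daily-build", FakeBuild()))
```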
Article
In this paper, we present a technique for selective capture and replay of program executions. Given an application, the technique allows for (1) selecting a subsystem of interest, (2) capturing at runtime all the interactions between that subsystem and the rest of the application, and (3) replaying the recorded interactions on the subsystem in isolation. The technique can be used in several scenarios. For example, it can be used to generate test cases from users' executions by capturing and collecting partial executions in the field. As another example, it can be used to perform expensive dynamic analyses off-line. As yet another example, it can be used to extract subsystem or unit tests from system tests. Our technique is designed to be efficient, in that we only capture information that is relevant to the considered execution. To this end, we disregard all data that, although flowing through the boundary of the subsystem of interest, do not affect the execution. In the paper, we also present a preliminary evaluation of the technique performed using SCARPE, a prototype tool that implements our approach.
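A minimal sketch of the capture-and-replay idea, not of SCARPE itself (which instruments Java bytecode): a proxy records every call that crosses the boundary into the subsystem of interest, and a replayer later re-issues those calls against the subsystem in isolation, checking that the results match. All class and function names here are hypothetical.

```python
# Hypothetical sketch of boundary capture and replay, in the spirit of
# selective capture/replay; SCARPE itself works on Java bytecode.

class CapturingProxy:
    """Wraps the subsystem and records every incoming call and its result."""
    def __init__(self, subsystem):
        self._subsystem = subsystem
        self.log = []  # captured (method, args, kwargs, result) tuples

    def __getattr__(self, name):
        target = getattr(self._subsystem, name)
        def recorded_call(*args, **kwargs):
            result = target(*args, **kwargs)
            self.log.append((name, args, kwargs, result))
            return result
        return recorded_call

def replay(subsystem, log):
    """Re-issue captured calls on the subsystem in isolation and check results."""
    for name, args, kwargs, expected in log:
        actual = getattr(subsystem, name)(*args, **kwargs)
        status = "ok" if actual == expected else "MISMATCH"
        print(f"{status}: {name}{args} -> {actual} (expected {expected})")

class Cart:
    """Example subsystem of interest."""
    def __init__(self): self.items = []
    def add(self, item, price):
        self.items.append((item, price))
        return round(sum(p for _, p in self.items), 2)

if __name__ == "__main__":
    proxy = CapturingProxy(Cart())   # capture phase: the app talks to the proxy
    proxy.add("book", 12.5)
    proxy.add("pen", 1.2)
    replay(Cart(), proxy.log)        # replay phase: the subsystem in isolation
```

The efficiency point in the abstract corresponds to filtering this log: data that crosses the boundary but never influences the subsystem's behavior need not be recorded.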
Article
Test designers widely believe that the overall effectiveness and cost of software testing depend largely on the type and number of test cases executed on the software. This article shows that the test oracle, a mechanism that determines whether the software executed correctly for a test case, also significantly impacts the fault-detection effectiveness and cost of a test case. Graphical user interfaces (GUIs), which have become ubiquitous for interacting with today's software, have created new challenges for test oracle development. Test designers manually "assert" the expected values of specific properties of certain GUI widgets in each test case; during test execution, these assertions are used as test oracles to determine whether the GUI executed correctly. Since a test case for a GUI is a sequence of events, a test designer must decide: (1) what to assert; and (2) how frequently to check an assertion, for example, after each event in the test case or after the entire test case has completed execution. Variations of these two factors significantly impact the fault-detection ability and cost of a GUI test case. A technique to declaratively specify different types of automated GUI test oracles is described. Six instances of test oracles are developed and compared in an experiment on four software systems. The results show that test oracles do affect the fault-detection ability of test cases in different and interesting ways: (1) test cases significantly lose their fault-detection ability when using "weak" test oracles; (2) in many cases, invoking a "thorough" oracle at the end of test case execution yields the best cost-benefit ratio; (3) certain test cases detect faults only if the oracle is invoked during a small "window of opportunity" during test execution; and (4) using thorough and frequently executing test oracles can compensate for not having long test cases.
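To make the two oracle dimensions concrete (what to assert, and how often to check it), here is a small hypothetical sketch contrasting a per-event oracle with an end-of-test oracle over the same event sequence; it is not the declarative framework described in the article, and the widget state, event handler, and seeded fault are invented for illustration.

```python
# Hypothetical sketch: the same GUI test case checked under two oracle
# invocation policies. Widget properties are modelled as a plain dict.

def run_test(events, apply_event, oracle, check_after_each_event):
    """Execute events; invoke the oracle per event or only once at the end."""
    state = {"status": "idle", "total": 0}
    for i, event in enumerate(events):
        apply_event(state, event)
        if check_after_each_event and not oracle(state):
            return f"fault detected after event {i}: {state}"
    if not oracle(state):
        return f"fault detected at end of test: {state}"
    return "pass"

def apply_event(state, event):
    if event == "add":
        state["total"] += 10
        state["status"] = "editing"
    elif event == "reset":
        state["total"] = -1          # seeded bug: reset leaves an invalid total
        state["status"] = "idle"

def oracle(state):
    # Expected property values asserted by the test designer.
    return state["total"] >= 0 and state["status"] in {"idle", "editing"}

if __name__ == "__main__":
    events = ["add", "reset", "add"]
    print("per-event oracle  :", run_test(events, apply_event, oracle, True))
    print("end-of-test oracle:", run_test(events, apply_event, oracle, False))
```

Running the sketch, the per-event policy catches the fault right after the faulty reset, while the end-of-test policy misses it because a later event masks the bad state, which mirrors the "window of opportunity" finding reported in the abstract.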
Article
Monitoring-Oriented Programming (MOP) is a formal framework for software development and analysis, in which the developer specifies desired properties using definable specification formalisms, along with code to execute when properties are violated or validated. The MOP framework automatically generates monitors from the specified properties and then integrates them together with the user-defined code into the original system. The previous design of MOP only allowed specifications without parameters, so it could not be used to state and monitor safety properties referring to two or more related objects. In this paper we propose a parametric specification-formalism-independent extension of MOP, together with an implementation of JavaMOP that supports parameters. In our current implementation, parametric specifications are translated into AspectJ code and then woven into the application using off-the-shelf AspectJ compilers; hence, MOP specifications can be seen as formal or logical aspects. Our JavaMOP implementation was extensively evaluated on two benchmarks, Dacapo and Tracematches, showing that runtime verification in general and MOP in particular are feasible. In some of the examples, millions of monitor instances are generated, each observing a set of related objects. To keep the runtime overhead of monitoring and event observation low, we devised and implemented a decentralized indexing optimization. Less than 8% of the experiments showed more than 10% runtime overhead; in most cases our tool generates monitoring code as efficient as the hand-optimized code. Despite its genericity, JavaMOP is empirically shown to be more efficient than runtime verification systems specialized and optimized for particular specification formalisms. Many property violations were detected during our experiments; some of them are benign, others indicate defects in programs. Many of these are subtle and hard to find by ordinary testing.
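JavaMOP generates its monitors from formal specifications and weaves them in as aspects. Purely as an illustration of the parametric-monitoring idea (one monitor state per group of related objects, plus handler code run on violation), here is a hand-written Python sketch that flags use of an iterator after its underlying collection has been modified; it is not generated by, nor equivalent to, JavaMOP.

```python
# Hand-written illustration of a parametric runtime monitor: state is indexed
# per (collection, iterator) pair, and a handler runs when the property
# "no next() after the collection was updated" is violated.

class UnsafeIterationMonitor:
    def __init__(self):
        # index: iterator id -> state of its (collection, iterator) instance
        self.instances = {}

    def on_create_iter(self, coll, it):
        self.instances[id(it)] = {"coll": id(coll), "modified": False}

    def on_update(self, coll):
        for state in self.instances.values():
            if state["coll"] == id(coll):
                state["modified"] = True

    def on_next(self, it):
        state = self.instances.get(id(it))
        if state and state["modified"]:
            self.violation_handler(it)

    def violation_handler(self, it):
        # User-defined code to run on violation (log, raise, recover, ...).
        print(f"property violated: iterator {id(it):#x} used after update")

if __name__ == "__main__":
    monitor = UnsafeIterationMonitor()
    data = [1, 2, 3]
    it = iter(data)
    monitor.on_create_iter(data, it)          # event: iterator created
    next(it); monitor.on_next(it)             # safe use
    data.append(4); monitor.on_update(data)   # event: collection updated
    next(it); monitor.on_next(it)             # violation reported by the monitor
```

In MOP the event notifications shown here as explicit calls would instead be injected automatically by the generated aspects, and the decentralized indexing optimization mentioned in the abstract addresses exactly the cost of maintaining many such per-instance states.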
Conference Paper
"Nightly/daily building and smoke testing" have become widespread since they often reveal bugs early in the software development process. During these builds, software is compiled, linked, and (re)tested with the goal of validating its basic functionality. Although successful for conventional software, smoke tests are difficult to develop and automatically rerun for software that has a graphical user interface (GUI). In this paper, we describe a framework called DART (daily automated regression tester) that addresses the needs of frequent and automated re-testing of GUI software. The key to our success is automation: DART automates everything from structural GUI analysis; test case generation; test oracle creation; to code instrumentation; test execution; coverage evaluation; regeneration of test cases; and their re-execution. Together with the operating system's task scheduler, DART can execute frequently with little input from the developer/tester to retest the GUI software. We provide results of experiments showing the time taken and memory required for GUI analysis, test case and test oracle generation, and test execution. We also empirically compare the relative costs of employing different levels of detail in the GUI test cases.
Article
A Computing Community is a group of cooperating machines that behaves like a single system and runs general-purpose applications without any modifications to the shrink-wrapped binary applications or the operating system. In order to realize such a system, we inject a wrapper DLL into an application at runtime; the wrapper manages the execution of the application and endows it with features such as virtualization and mobility. This paper describes the concept of virtualization, the injection mechanism, and the implementation of a wrapper DLL. We focus on one kind of application: those that use sockets to communicate with other processes. We show how these processes can migrate between machines without disrupting the socket communications. We have implemented the software that needs to be injected into the application to enable this feature. Handling more application types is part of the continued research in the Computing Communities project.