OpenHMI-Tester: An Open and
Cross-Platform Architecture for GUI Testing
and Certification
Pedro Luis Mateo Navarro, Gregorio Martínez Pérez
Departamento de Ingeniería de la Información y las Comunicaciones
University of Murcia, 30.071 Murcia, Spain
Diego Sevilla Ruiz
Departamento de Ingeniería y Tecnología de Computadores
University of Murcia, 30.071 Murcia, Spain
Abstract
Software testing is usually employed to report on, or to assure, the quality, reliability
and robustness of software in the context or scenario where it is intended to
work. This is especially true in the case of user interfaces, where the testing phase is
critical before the software can be accepted by the final user and put into operation.
This paper presents the design, and the later implementation as a contribution
to the open-source community, of a Human-Machine Interface (HMI) testing
architecture, named OpenHMI-Tester. The current design aims to support major
event-based and open-source windowing systems, thus providing generality, besides
other features such as scalability and tolerance to modifications in the HMI
design process. The proposed architecture has also been integrated as part of a
complex industrial scenario, which helped to identify a set of realistic requirements
for the testing architecture, as well as to test it with real HMI developers.
Key words: Graphical User Interfaces, GUI Testing, Testing Tools, GUI
Verification, Test-based Frameworks, GUI Certification
Email addresses:
pedromateo@um.es (Pedro Luis Mateo Navarro),
gregorio@um.es (Gregorio Martínez Pérez),
dsevilla@um.es (Diego Sevilla Ruiz).
Preprint submitted to IJCSSE Special Issue on Open Source Certification. July 1, 2009
1 Introduction
GUIs (Graphical User Interfaces) can constitute as much as 60 percent of the
code of an application today [1], and their importance is increasing with the
recognition of their usefulness [2]. Given their importance in current developments,
testing GUIs for correctness can enhance the safety, robustness and usability
of the entire system [3]. One might therefore expect advanced GUI testing
tools to be present in most developments today, but in most cases this is not
true.
While the use of GUIs continues to increase, GUI testing has, until recently, remained
a neglected area of research [2]. Although there is evidence that advanced
development processes and tools have helped organizations reduce the
time to build products, they have not yet been able to significantly reduce the
time and effort required to test them. Clearly, there is a need for improvement
in testing support [4], but since GUIs have special characteristics, techniques
developed to test conventional software cannot be directly applied to GUI
testing [2].
Apart from other limitations found in GUI testing (e.g. coverage criteria, the
verification process, etc.), one of the most important comes from the number
of different windowing systems employed in developments today. GUIs can be
implemented using a windowing system chosen from a wide range of alternatives,
so open architectures that support different windowing systems should
be a clear requisite for GUI testing tools.
This is why the research, design, and development of an open and cross-
platform GUI Testing tool is an interesting challenge.
Nowadays, the available methods, tools and technologies in Graphical User
Interface Testing can be classified into three different approaches, depending
on the way the test case generation process is performed.
The first approach builds a complete testing model matching the whole GUI
Model of the application. This model includes all the objects or components
and their properties, and it is analyzed in order to explore all the possible
paths on an automated test case generation process [5,6,7,8,9,10,11].
Techniques belonging to the second approach do not build a complete GUI
Model: they build a smaller model corresponding only to the part of the GUI
to be tested, thus reducing the number of generated test cases. To build and
annotate that model, these techniques usually employ modeling languages
such as UML [4,12].
Finally, techniques belonging to the third approach do not build any model,
as test cases are generated directly using the GUI to be tested. These tech-
niques normally use capture and replay tools which capture events from the
tested application and use them to generate test cases that replay the actions
performed by the user [13]. These techniques allow developers to perform a
lightweight testing process that only takes into account the required elements,
actions and properties of the GUI to be tested, avoiding the rest of unneeded
test cases that get automatically generated using other approaches [14].
In this paper we describe a GUI Testing tool architecture belonging to the third
approach described above. We propose an open architecture which describes
a capture/replay tool [15,16] based on GUI Events System, which represents
one of the most used techniques in current windowing systems. The OpenHMI-
Tester architecture, by performing a non-intrusive application hooking, is able
to capture generated events from an application and post new events to it.
The proposed architecture also allows adapting the implementation of a few
modules of the OpenHMI-Tester to fit both the windowing system and the
operating system in use.
This paper is structured as follows. Related work is presented in Section 2.
In Section 3 we present the requirements that provide the driving philosophy
behind this design. The OpenHMI-Tester architecture is described in Section 4
and the implementation in Section 5. In Section 6 we discuss some of the most
relevant issues of the design presented in this paper. Finally, Section 7 provides
conclusions and lines of future work.
2 Related work
As commented before, GUI Testing Tools can be classified in three different
approaches depending on the test case generation process:
First approach: tools that build a complete GUI Model which is explored
on an automated test case generation process.
Second approach: tools that build a smaller model corresponding to the part
of the GUI to be tested. These tools use modeling languages to define and
annotate the model in order to guide the test case generation process.
Third approach: these tools do not build any model, as test cases are generated
directly using the GUI to be tested. These techniques usually use
capture and replay tools.
One of the techniques belonging to the first approach is the one described in [6] by
Memon, Banerjee, and Nagarajan. They describe the GUI Ripping process, a
method which traverses all the windows of the GUI and analyses all the events
and elements that may appear to automatically build a model composed of
a GUI Forest (a tree composed of all the GUI elements) and an Event-Flow
Graph (EFG), a graph which describes all the GUI events. This model has to
be verified, fixed and completed manually by the developer.
The DART Framework, described in [7] and [8], follows this philosophy.
In DART, once the model is built and manually verified, the process explores
all the possible test cases. Of those, the developers select the set of test cases
identified as meaningful, and the Oracle Generator¹ creates the expected
output. Finally, test cases are automatically executed and their output is compared
with the Oracle's expected results.
In [5] and [9] White, Almezen, and Alzeidi describe a technique that follows a
similar approach. They describe a GUI Model based on reduced FSMs (finite-state
machines). They also introduce the concept of Responsibility (a desired
or expected activity in the GUI) and define a Complete Interaction Sequence
(CIS) as a sequence of GUI objects and actions that will raise an identified
responsibility.
Once the test cases are automatically generated from the model, they are
executed to find “defects” (serious departures from the specified behavior)
and “surprises” (user-recognized departures from the expected behavior that
are not explicitly indicated in the specifications of the GUI).
This approach [18] focuses part of its efforts on building a model of the GUI
from which the test cases are generated automatically. Creating and maintaining
these models is a very expensive process [19]. Since a GUI is often composed
of a complex hierarchy of widgets in which many of them are irrelevant to
the developer (the same happens with their properties), this process may
generate a vast amount of “useless” and “senseless” test cases. It gets worse if
we consider GUI elements like graphic panels or similar, which have complex
properties whose values are very difficult to store and maintain.
This leads to other problems, such as scalability and tolerance to modifications.
In these techniques, adding a new GUI element (e.g. a new widget or event)
has two worrying side effects: first, it may cause the set of generated test
cases to grow exponentially (all paths are explored); second, it forces a GUI
Model update (and a manual verification and completion) and the regeneration
of all affected test cases. The problem gets worse if we consider that the
model contains all the properties of the GUI elements, so minimal changes in
the appearance or distribution of the GUI (e.g. the window-border size has
changed) may cause a lot of errors during the validation process because the
expected outputs used by oracles may become obsolete [1].
¹ A Test Oracle [17] is a mechanism which generates the outputs that a product
should have, in order to determine, after a comparison process, whether the product
has passed or failed a test.
Finally, there are other limitations associated with this approach such as, for
instance, the fact that the model has to be manually corrected and completed,
or that the test case generation process does not take into account dynamic
GUI behavior performed by the application code (for example, disabling a
widget) or specific widget properties (e.g. its “disabled” property).
In [4] Vieira, Leduc, Hasling, Subramanyan, and Kazmeier describe a method
belonging to the second approach in which UML Use Cases and Activity
Diagrams are used to respectively describe which functionalities should be
tested and how to test them. The main goal in this approach is to generate
test cases automatically from an enriched UML Model.
Models may be enriched in two ways: first, by refining activities in the UML
Activity Diagrams in order to improve their accuracy (e.g. decreasing the level
of abstraction); second, by annotating the activity diagrams with custom
UML Stereotypes which represent additional test requirements.
The basis of this approach is closer to the needs of GUI verification, because
testing a scenario can usually be performed in three steps: launch the GUI,
perform several use cases in sequence, and exit. Its scalability is better than
the previously mentioned approach, because it focuses its efforts only on a
section of the model, though the combination of functionalities would lead to
a very large number of test cases. The use case refinement also helps to reduce
the number of generated test cases. On the other hand, this method, like the
previously described approach, has two very important limitations: first, the
developers have to spend considerable effort building, refining and annotating the
model, which is not a lightweight process (inconceivable in some methodologies
such as, for instance, Extreme Programming [20]); second, these techniques
have a low tolerance to modifications, since a change in the GUI forces
developers to review and update the model and to regenerate the affected test cases.
Finally, the techniques belonging to the third approach work as follows: once
the application to be tested is launched, the developer interacts with the GUI,
which generates GUI events that are automatically captured and stored into a
test case. This test case can be replayed whenever the developer wants.
The generated test cases can be completed by adding new actions or meta-
events in order to insert messages, verification points, or anything that can
help to refine it. The process of capturing, executing, and analyzing executions
is an example of Observation-based Testing [13].
In [14] Steven, Chandra, Fleck, and Podgurski describe a tool for capturing
and replaying Java [21] program executions called jRapture. jRapture employs
an unobtrusive capture process that captures interactions (GUI, file, and
console inputs) between a Java program and the environment. The captured
inputs are replayed with exactly the same input sequence observed during
capture.
The event capture process uses a modified version of the Java API which lets
jRapture interact directly with the underlying operating system or window-
ing system (Peer Components). During this process, jRapture, along with the
modified API, constructs a System Interaction Sequence (SIS) which repre-
sents the sequence of inputs to the program together with other information
necessary for the future replay. Once the SIS sequences are correctly created,
they can be replayed by reproducing the effects of calls to Java API methods
(stored in the SIS).
The OpenHMI-Tester also belongs to this approach, and follows a philosophy
similar to jRapture, but it provides an open and portable architecture instead
(as described in Section 4). This allows OpenHMI-Tester to be independent
of the operating system (e.g. Windows, Linux, FreeBSD, etc.), windowing
toolkits and systems (e.g. Qt [22] or GTK+ [23]), event capture processes (e.g.
by using event listeners or peer components), event filtering rules (e.g. capture
only GUI events or also capture signaling events), event execution techniques
(e.g. by sending captured events or by using a GUI Interaction API), etc.
All these mix-and-match features depend on the actual configuration of the
software being tested.
The OpenHMI-Tester architecture is described in Section 4; its implementa-
tion in Section 5, and discussions to this approach are included in Section 6.
3 Requirements
Before we discuss the details of the OpenHMI-Tester architecture, the
requirements that provide the driving philosophy behind this design are introduced.
These requirements have been extracted from GUI developments
belonging to medium and large applications carried out in industrial environments.
Since one of the strongest requirements that we impose on the OpenHMI-Tester
is that it has to be cross-platform and open to any windowing system (e.g. Qt,
GTK, etc.), it has to have an open and flexible architecture. The OpenHMI-Tester
should also perform non-intrusive application hooking of the tested
software in order to be compatible with both software under development and
already developed software.
This architecture has to allow us to implement a clear and easy-to-use tool
which includes both the event capture process and the event execution process.
These processes should work within the real software in order to ensure that the
test case execution process (explained later in Section 4) matches the real execution
of the tested software. The architecture should also provide the tester with
a stable testing environment (e.g. tolerance to missing objects, support for window
moving and resizing, etc.) and good overall performance during the capture and
execution processes.
The developer should also be able to implement advanced testing features (e.g.
property pickers,² screenshot tools, etc.) under this architecture.
Finally, the OpenHMI-Tester architecture also requires a data model description
which supports the representation of any test case and any GUI or non-GUI
event in a scalable way.³ The architecture should also allow the developer
to add new events or actions which add new functionality to the test cases
(e.g. pauses, breakpoints, messages, etc.), and to implement a validation process
which checks whether object property values have the expected values during
the test case playback.
The exact representation of this data model is left to the developer (e.g. a
markup language, script code, etc.), but in any case it has to support being stored
and retrieved during the capture and execution processes, respectively.
4 Software architecture
The architecture presented in this paper is composed of two main software
elements, each one with a different purpose. The first one is the HMI Tester,
whose aim is to control the record (capture) and playback (execution) processes
and to manage test suite creation and maintenance. The other one is the Preload
Module, a software element which will behave like a module “injected” into the
tested application, capturing the generated events and executing new ones.
Both modules will communicate with each other.
This architecture may also be divided into two parts according to the functionality
of the modules. Some functionality is implemented as generic and
never changes (e.g. record and playback processes); this functionality is de-
picted using non-colored boxes in Figure 1, and represents the major part of
the whole proposed architecture. Other functionality has to be adapted in or-
der to support the characteristics of the testing environment (e.g. operating
system, GUI system, etc.); it is represented by colored boxes in Figure 1.
² A Property Picker is a tool used in GUI Testing which allows the tester to check
the properties of a selected object.
³ By scalable we mean that, as the size of the GUI system increases, the number of
tests required by the strategy increases linearly with it [9].
Figure 1. HMI Tester and Preload Module architecture.
As we can see in Figure 1, the whole process involves communication between
three software elements: the two mentioned before and the tested application.
The communication between the tested application and the Preload Module
will be performed using GUI events: during the capture process the tested application
will generate events that will be captured by the Preload Module;
during the execution process the Preload Module will post new events which will
be executed by the tested application. The communication between the Preload
Module and the HMI Tester will be performed using sockets or any other IPC
mechanism: during the capture process the Preload Module will send the events
generated by the tested application to the HMI Tester for it to store them; during
the execution process the HMI Tester will send events to the Preload Module for
it to execute them on the tested application.
Communication between the HMI Tester and the Preload Module will be done
by using a communications channel, but how is the communication between
the Preload Module and the tested application possible? What is the method
or process by which two independent applications can send events to each
other? The answer is preloading.
The preloading method [24,25] involves adding new functionality to a “closed”
application by preloading a library which includes new classes and methods. In
the HMI Tester architecture, the Preload Module is a dynamic library
which includes the functionality needed to perform the preloading action, the event
capture and execution processes, and the communication with the other side. When
the tested application is launched by the HMI Tester (for both the event capture
process and the event execution process), the HMI Tester first enables in the
operating system the preload option pointing to the Preload Module dynamic
library, and then launches the application to be tested. While the tested application
is being launched, the Preload Module is loaded and all the testing functionality is
included in the tested application. The HMI Tester is then able to communicate
with the Preload Module and use its functionality as mentioned above.
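On Linux-like systems, the library side of this mechanism can be sketched as follows. This is a minimal, Qt-free illustration with hypothetical names (`preload_init`, `preload_hooks_installed`); the real Preload Module would set up its Comm, Event Consumer and Event Executor modules instead of a flag. A function marked with GCC's `constructor` attribute in a preloaded shared library runs inside the tested application's process before its `main()`:

```cpp
#include <cstdio>

// Hypothetical sketch of the Preload Module's entry point. When this
// translation unit is built as a shared library and preloaded (e.g. via
// LD_PRELOAD on Linux), the dynamic loader runs the constructor inside
// the tested application's process before main() starts.
static bool hooks_installed = false;

__attribute__((constructor))
static void preload_init() {
    // The real module would create and initialize the Comm, Event
    // Consumer and Event Executor modules here; a flag stands in for
    // that work in this sketch.
    hooks_installed = true;
    std::fprintf(stderr, "Preload Module: hooks installed\n");
}

// Helper so the injection can be observed from the host process
// (illustration only).
bool preload_hooks_installed() { return hooks_installed; }
```

Because the constructor runs during loading, the tested application needs no source changes at all, which is what makes the hooking non-intrusive.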
4.1 Architecture Actors
In the OpenHMI-Tester architecture, two different roles or actors can be identified.
The tester corresponds to the user that interacts with the OpenHMI-Tester
environment. The tester also provides the application to be tested. The
other one is the developer, whose responsibility is to write the code to adapt
the OpenHMI-Tester to a given testing environment (implementing the colored
boxes in Figure 1).
4.2 Data Model overview
The Data Model (Figure 2) is the data structure used to describe a set of
test cases which can be performed over the tested application. This structure,
which is divided into three levels (like other existing data models such
as those of CppUnit [26] or JUnit [27]), can include all the necessary information to
represent all the characteristics of a set of tests (a test suite).
Figure 2. Data Model.
The Data Model is structured as follows:
Test Suite: this element includes a set of test cases referring to the same
application and usually sharing a common goal. It may also include meta
information and a reference to the tested application.
Test Case: this element describes a set of ordered test items to be per-
formed on the tested application and it also may include meta information
(e.g. test case description and purpose).
Test Item: this is the smallest element in the data model description and
represents a single action which can be performed on the tested application,
together with its meta-information. In the OpenHMI-Tester a test item matches
an event, as described below in Subsection 4.3.
The fact that a complete description of a test suite is encapsulated in a single
object eases other tasks and processes, such as dumping a test suite
description to a file or applying a filter to an entire test suite.
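As a rough sketch, the three-level structure can be expressed with three nested container types, each carrying a free-form property map. The field names here are hypothetical; as noted in Section 3, the exact representation is left to the developer.

```cpp
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch of the three-level data model. Each level carries a
// free-form map so that meta information (descriptions, the tested
// application reference, event data) can be attached without changing types.
struct TestItem {
    std::map<std::string, std::string> data;   // single action + its meta-info
};

struct TestCase {
    std::map<std::string, std::string> meta;   // e.g. description and purpose
    std::vector<TestItem> items;               // ordered actions to perform
};

struct TestSuite {
    std::map<std::string, std::string> meta;   // e.g. tested application path
    std::vector<TestCase> cases;               // test cases with a common goal
};
```

Since a whole suite lives in a single object, dumping it to a file or filtering it becomes a single traversal of this structure.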
4.3 Events overview
Different types of events, which can include different kinds of information,
appear throughout the OpenHMI-Tester architecture. In this architecture, each
event is represented by a Test Item object, in which the “type” and “subtype”
values determine its nature.
These events are classified in four groups according to their purpose:
GUI Events: events that contain information related to GUI elements
(e.g. layout change events, mouse click events, etc.). These events are normally
posted towards a single widget.
Non-GUI Events: these events do not contain information related to the
GUI and its elements, but may contain relevant information about the
tested application (e.g. timer events, meta-call events, etc.).
Meta Events: events defined by the developer that implement actions
which are not natively supported by the windowing system (e.g. messages
in dialog boxes, pauses, sounds, etc.).
Control Events: these events are also defined by the developer and are used
in control signaling along the architecture (e.g. the “execution process started”
event).
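The classification can be sketched as follows. The enum values and helper functions are hypothetical; the real type and subtype codes depend on the adaptation modules.

```cpp
// Hypothetical sketch of how an event's group is derived from the "type"
// value carried by its Test Item; the "subtype" narrows the nature further
// (e.g. mouse click vs. key press within the GUI group).
enum class EventGroup { Gui, NonGui, Meta, Control };

struct EventHeader {
    EventGroup type;   // which of the four groups the event belongs to
    int subtype;       // group-specific nature of the event
};

// Only GUI events are posted towards a widget; control events are consumed
// by the architecture itself and never reach the tested application's GUI.
inline bool postedToWidget(const EventHeader& e) {
    return e.type == EventGroup::Gui;
}
inline bool consumedInternally(const EventHeader& e) {
    return e.type == EventGroup::Control;
}
```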
4.4 HMI Tester architecture
The HMI Tester Module is the software element with which the tester interacts,
and it has two main functions: the first one involves controlling both the test case
recording (event capture) process and the test case playback (event execution)
process; the second one is to manage the test suite lifecycle (e.g. create a new test
suite, add new test cases, include received test items in the current test case, etc.).
It also provides a graphical user interface which allows the tester to perform
the tasks mentioned above.
In order to control and keep track of the event capture and execution processes
(explained below in Sections 4.6 and 4.7), the HMI Tester communicates
with the Preload Module by using sockets or an IPC mechanism.
Figure 3. HMI Tester detailed architecture.
As shown in Figure 3, the HMI Tester Module architecture is composed of a set
of modules which include the functionality necessary to manage recording and
playback processes, and a special module whose aim is to perform, depending
on the operating system, the preloading process described at the beginning of
this section. Also, another special submodule lets the developer add his or
her own representation of the Data Model to the architecture.
The most significant modules are described as follows:
Data Model Adapter: to integrate a custom representation of the data model
into the OpenHMI-Tester architecture, the developer has to implement
his or her own Data Model Adapter submodule in order to provide the Data Model
Manager with the functionality needed to manage the test suite lifecycle.
Comm module: this module includes functionality related to communi-
cations. It will be used by both the Playback Control Module to send new
events (GUI events and Control events) to the Preload Module, and the
Recording Control Module to receive captured (and controlling) event data.
Recording Control module: this module controls the test case recording
(capture) process by sending control signaling events to the Preload Module
when necessary; it also manages (and stores in the current test case) the
captured event data received from the Preload Module.
Playback Control module: this module controls the test case playback
(execution) process by sending GUI events to the Preload Module and, when
necessary, including control signaling events to guide the process remotely.
Preloading Action module: this module is intended to perform the
preloading process on the operating system where the testing process is
being done (so it has to be adapted depending on the OS). The preloading
process includes the establishment of the preloading library and then the
launching of the tested application.
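On a Linux system with a glibc-style dynamic loader, the preloading process performed by this module can be sketched as follows. `launchWithPreload` is a hypothetical helper written for this illustration, not part of the OpenHMI-Tester code:

```cpp
#include <cstdlib>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

// Hypothetical sketch of the Preloading Action module on Linux: enable
// the preload option by exporting LD_PRELOAD in the child process, then
// launch the application to be tested. Returns the application's exit
// status, or -1 if it did not exit normally.
int launchWithPreload(const char* appPath, const char* preloadLib) {
    pid_t pid = fork();
    if (pid == 0) {                              // child: becomes the tested app
        setenv("LD_PRELOAD", preloadLib, 1);     // loader will inject the module
        execl(appPath, appPath, (char*)nullptr); // replace image with the app
        _exit(127);                              // only reached if exec failed
    }
    int status = 0;
    waitpid(pid, &status, 0);                    // parent: wait for termination
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

On other platforms an equivalent loader mechanism would be used instead, which is precisely why this module has to be adapted per operating system.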
4.5 Preload Module architecture
The Preload Module is the software element which will hook up to the tested
application in order to capture the generated events and post new events
received from the HMI Tester. As we can see in Figure 4, it is composed of
some modules that implement common behavior (e.g. communication to the
HMI Tester, event data encoding, etc.) and other modules whose behavior
changes depending on the windowing system.
Figure 4. Preload Module detailed architecture.
Modules that implement common behavior are the following:
Logic module: its main task is the initialization process, in which all modules
(Comm, Event Consumer and Event Executor) have to be created,
installed and initialized in order to hook up the Preload Module to the
tested application properly.
Comm module: it works similarly to the one described in the last subsection. In this
case, it is used by the rest of the modules to send data (e.g. captured event
data) to the HMI Tester; it also delivers the received messages to the
corresponding module (e.g. control events are delivered to the Logic module and
normal events to the Event Executor module).
As mentioned at the beginning of this section, in order to design an open
architecture some modules have to be extended depending on the windowing
system. These modules are the following:
Preloading Control module: this module is responsible for detecting the
application launch and calling the Logic initialization method.
Since this module might use non-cross-platform methods, it will probably
also have to be extended depending on the operating system.
Event Consumer module: this module captures generated events, manages
the data contained in them, and notifies that a new event has been captured
so that it can be sent. This module should also implement a configuration
method if the installation of an instance or any similar process has to be
performed.
Event Executor module: this module executes events received from the
HMI Tester. Once a new event is received, it has to extract event data and
post it into the application event system (or execute an equivalent action)
which performs the requested behavior. This module should also implement
a configuration method if needed.
4.6 Event Capture process
The capture process is the process by which the HMI Tester obtains the events
generated by the tested application. In order for the HMI Tester not to be intrusive,
it uses the Preload Module, which captures the events generated by the tested
application by preloading some classes while launching the application to be tested.
Figure 5. Event Capture process.
The capture process can be summarized in the following steps:
(1) Event Generation: while the tester interacts with the GUI (e.g. clicking
a button, pressing a key, etc.), the application generates events (GUI events and
non-GUI events). The data included in these events has to be captured and
sent to the HMI Tester.
(2) Event Capture and Control Signaling: in this phase, the Preload
Module gets the events generated by the tested application and encapsulates
their relevant data in new objects (Test Items) which are sent
to the HMI Tester. Control signaling is performed in this phase too: the
Preload Module may notify the HMI Tester of the tested application's state
(e.g. the execution has finished) or of other interesting information.
(3) Event Handle: “Comm Module” in the HMI Tester notifies that a new
Test Item has been received. If it is a control event it has to be handled;
if not, it will be stored.
(4) Event Store and Control Event Handle: the new Test Item is stored
in the corresponding Test Case, respecting its order of arrival, unless it is
a control event, in which case it is handled instead.
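Steps (3) and (4) on the HMI Tester side amount to a small routing rule, sketched here with hypothetical types: control events are handled, and everything else is appended to the current test case in order of arrival.

```cpp
#include <string>
#include <vector>

// Hypothetical sketch of the HMI Tester's handling of received Test Items
// (steps 3-4): control events are handled, the rest are stored in order.
struct ReceivedItem {
    bool control;          // true for control signaling events
    std::string payload;   // captured event data or control message
};

struct CurrentTestCase {
    std::vector<std::string> stored;   // order of arrival is preserved
    int controlsHandled = 0;

    void onItemReceived(const ReceivedItem& item) {
        if (item.control)
            ++controlsHandled;         // e.g. "the execution has finished"
        else
            stored.push_back(item.payload);
    }
};
```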
4.7 Event Execution process
The execution process is the process by which the HMI Tester sends stored (and
control signaling) events to the Preload Module. When the Preload Module
receives new events, it will post them into the tested application's event system.
These events describe the actions to be performed over the tested GUI.
Figure 6. Event Execution process.
The execution process may be described in these steps:
(1) Event Dispatching and Control Signaling: events (Test Items) stored
in the current Test Case are sent to the Preload Module; new control
events could also be sent in order to notify about the process state (e.g.
execution finished), actions to be taken (e.g. stop capturing events), etc.
(2) Event Pre-handle: when a new event is received in the Preload Module,
its type value is used to decide if the event is a GUI event (it has to
be posted to the tested application) or a control event (handled by the
Preload Module).
(3) Event Posting and Control Event Handle: the Preload Module per-
forms the received action (event) by posting a new system event (built
from the received event data) into the application or by executing an
equivalent action (e.g. call the click method in a GUI button). If the
received event is a control event it has to be handled by the Preload
Module.
(4) Event Handle: posted events (including those posted indirectly by the
equivalent action performed by the Preload Module) arrive at the GUI event
system and are fulfilled.
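The pre-handle decision of step (2) can likewise be sketched with hypothetical names: the received item's type chooses between posting to the tested application's event system and handling the event inside the Preload Module.

```cpp
#include <functional>
#include <string>

// Hypothetical sketch of the Preload Module's pre-handle step: a GUI event
// is posted to the tested application (or an equivalent action is executed),
// while a control event is handled by the module itself.
enum class ItemKind { GuiEvent, ControlEvent };

struct IncomingItem {
    ItemKind kind;
    std::string data;
};

void prehandle(const IncomingItem& item,
               const std::function<void(const std::string&)>& postToApplication,
               const std::function<void(const std::string&)>& handleControl) {
    if (item.kind == ItemKind::GuiEvent)
        postToApplication(item.data);   // step 3: build and post a system event
    else
        handleControl(item.data);       // control events never reach the app
}
```

The two callbacks stand in for the Event Executor's posting mechanism and the module's internal control logic, both of which depend on the windowing system in use.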
5 Implementation
As stated before, a few modules of the OpenHMI-Tester architecture have to
be adapted to suit the windowing environment used by the tested application.
The adaptation encompasses implementing just the specific behavior required
to interact with that particular environment. By leveraging the common
architecture, the adaptation modules thus allow the OpenHMI-Tester
architecture to be flexible enough to support a wide range of existing
operating and windowing systems.
This section presents some implementation details of the common
functionality; then, we describe the implementation of an OpenHMI-Tester
prototype using the Qt toolkit on an X11 windowing system.
5.1 Common Functionality Implementation Details
5.1.1 Generic Data Model Implementation
In OpenHMI-Tester, a data model is introduced to describe the different tests
that can be performed on an application. As shown in Figure 7, each event
is identified as a test item. Test cases are introduced as an ordered set of
test items to be performed. Finally, a test suite represents the set of test
cases for the application.
Figure 7. Generic Data Model Hierarchy.
This Generic Data Model is flexible enough to represent, in a generic way,
all the data carried by the different types of events needed by the
adaptation modules. The data maps allow storing different data associated
with each event; each element of the data model also has its own properties.
The adaptation modules, in turn, are responsible for converting the different
events between the module's internal representation and the generic
representation, in order for them to be managed by the architecture.
5.1.2 Recording Process Implementation
The recording process refers to the process of capturing and storing the events
produced during an interaction with the tested application.
In the OpenHMI-Tester Prototype, the recording process is implemented using
a series of control events to signal the start of the recording, pause, stop,
etc. (Figure 8). During the process, the events produced by the application
are stored in the current test case.
Figure 8. Control Signaling Events Hierarchy.
All the control signaling events are defined as subclasses of a generic
ControlTestItem, which in turn inherits from the generic test item. These
control events are used in both the recording (capture) and playback
(execution) processes.
5.1.3 Playback Process Implementation
The playback process reproduces sequences of events captured in a previous
recording process. (Note that the sequence of events stored in a Test Case
may have been modified by an external editing tool.) The events are injected
into the application as if a real human tester were using it.
In the OpenHMI-Tester Prototype, the playback process is implemented using
the test items (events) stored in a test case. These events are sent from the
HMI Tester to the Preload Module, which handles them and answers with a
CTI_EventExecuted control event for each executed test item (this control
event allows the process to be synchronized). Similar signaling events are
used to indicate the start, stop, and pause of the playback process.
5.2 Specific Functionality Implementation Details
The adaptation to a given windowing and operating environment requires a
small amount of specific code to be written and plugged into the architecture.
5.2.1 Data Model Adapter Implementation
The adaptation modules can define a test suite using the representation that
best suits the specific environment. However, they have to provide code to
convert the test suite information back and forth between that representation
and the generic one described in Subsection 5.1.1. This piece of code
constitutes the Data Model Adapters, which allow the Data Model Manager to
handle the information of the test suites in a generic way.
In the OpenHMI-Tester Prototype, where an XML description has been selected
as the Test Suite representation, this conversion is handled by two methods:
one creates a Test Suite object from a given file path by using an XML DOM
parser (which builds a data tree from an XML file); the other performs the
opposite process (Test Suite to XML file) using a set of XML visitors (a
visitor extracts the data from an object and returns the corresponding XML
string).
5.2.2 Preloading Process Implementation
One of the key points of the OpenHMI-Tester architecture is the process of
preloading a library that hooks into the unmodified application being tested
and allows intercepting the flow of graphical events managed by the
application. This process is divided into two steps:
First, the HMI Tester has to indicate to the operating system that the
preload library has to be loaded just before the tested application starts.
The Preloading Action Module is in charge of executing this specific code,
which depends on the operating environment. On most UNIX systems, such as
the one used in the prototype, the LD_PRELOAD environment variable is used
to modify the normal behavior of the dynamic linker.
The second step is performed in the Preload Module whenever the application
starts. The Preloading Control Module includes an interception function that
filters the events received by the application. In the case of the combination
of Qt under X11, the QWidget::x11Event call is captured.
5.2.3 Capture Process Implementation
The representation of the events by the adaptation modules is usually arranged
in a hierarchical way. This event hierarchy can leverage the test item object
included in the generic Data Model described in Section 5.1.1. Figure 9 shows
the three-level event architecture chosen for the OpenHMI-Tester Prototype.
Figure 9. OpenHMI-Tester Prototype Events Hierarchy for Qt.
Once an event hierarchy has been defined, it is necessary to capture the
events and send them to the HMI Tester module. The capture process depends
on the chosen windowing system (e.g. by installing event filters, as in
Qt [22] and GTK+ [23], by using event listeners, or by using peer components,
as in Java [21]). In the prototype we use a Qt event filter which is
installed during the initialization of the preloading code. Events of
interest to the module are then redirected to the Event Consumer Module.
5.2.4 Execution Process Implementation
The process of executing (applying) the events received from the HMI Tester
module is what we call the execution process.
In the OpenHMI-Tester Prototype, when a new event is received, the Event
Executor Module classifies it using the "type" and "subtype" attributes.
Once the type of the event is determined, and the generic test item is
converted to its corresponding most-derived class, the event executor
extracts the event data and performs the action required by that event.
Some of the functionality required by the events may be simulated. For
instance, if the mouse movement is not relevant, a simulation of the movement
may be carried out from the starting point to the destination point.
5.3 “Open HMI Tester” Prototype
To show the validity of our approach, as stated before, a prototype
implementation has been developed following the architecture introduced in
this paper. The prototype adapts the generic architecture to a specific
environment consisting of:
- A Linux distribution as the operating system.
- Trolltech’s Qt4 toolkit [22] under X-Window as the windowing system.
- An XML [28] description for the representation of the Test Suite.
As described in this section, this prototype implementation includes the basic
functionality related to the event capture and execution processes.
It captures events from the tested application, uses them to build a Test
Suite structure, and stores this structure in a file using an XML
representation. The set of captured events includes a basic representation
of mouse and keyboard events (e.g. mouse press, mouse double click, key
press, etc.) and some window events (e.g. close window).
It is also capable of executing the events mentioned above. The execution
process uses a Test Suite object built from the representation included in
an XML file and allows the tester to choose which of the available Test
Cases to execute. All the captured events can be performed on the tested
application. This prototype also implements some extra functionality such
as, for instance, mouse movement simulation.
5.3.1 Prototype technical specifications
This prototype has been written in C++ and uses version 4.x of the Qt
library. It can be downloaded from [29].
Figure 10. Open HMI Tester prototype at work.
5.3.2 Prototype validation
In order to check the viability of the proposed architecture and
implementation, the OpenHMI-Tester Prototype has been tested with some of
the Qt demo applications offered by Trolltech in [30]. The set of
applications selected for testing includes those with a rich GUI containing
many of the common widgets, as well as other special widgets (e.g. graphic
panels, calendar widgets, etc.).
The first performance analysis obtained during the evaluation of the
OpenHMI-Tester Prototype (Figure 10 belongs to one of the performed tests)
presents promising results: during the testing process, the OpenHMI-Tester
captured the events generated by the tester's interactions with the
different GUIs and replayed them later by simulating keyboard and mouse
events. Nevertheless, since the OpenHMI-Tester Prototype has been released
quite recently, its implementation still has to be refined to improve a few
aspects of the simulation, such as drag-and-drop movements and other
advanced actions that can be performed by a human tester.
6 Discussion
As mentioned in Section 2, the OpenHMI-Tester belongs to the approach in
which no model or representation is built, since test cases are generated
directly over the area of the GUI of interest to the tester. The
OpenHMI-Tester uses capture and execution processes which work as follows:
once the application to be tested is launched, the tester performs a set of
actions in the GUI which generate GUI events. These GUI events are
automatically captured and stored, and are used later to generate a test
case that executes the actions performed by the tester. Generated test cases
can be refined and completed by adding meta-events (e.g. messages,
verification points, etc.). The process of capturing, replaying and
analyzing executions is an example of Observation-based Testing [13].
6.1 Architecture
The OpenHMI-Tester architecture is fully portable, since its definition is
not tied to any operating system or windowing system. The open architecture
of the OpenHMI-Tester makes it agnostic to the windowing system (e.g. Qt or
GTK), the event capture process (e.g. by using event listeners or peer
components), the event filtering rules (e.g. capture only GUI events or also
capture signaling events), the event execution technique (e.g. by sending
captured events or by using a GUI interaction API), etc.; the mix and match
of features depends on the implementation available to the developer at a
given point. Also, the developer may use their own implementation in order
to keep the capture and execution processes under control. Adding these
implementations to the architecture allows the developer to control what
events are captured and how during the capture process, and to select what
events are executed and how during the execution process.
The architecture describes a lightweight method to automatically generate
test cases. The developer does not need to build (and maintain) any model
definition or GUI representation; they only have to select the area to be
tested (it might involve the whole GUI), launch the target application, and
perform the actions corresponding to the test case. A new test case
including all the actions performed in the last step is automatically
generated, and it may be executed as many times as needed. These test cases
can then be used as the application evolves to check that the expected
functionality of the GUI application holds against application changes
(i.e. replayed as regression tests).
6.2 Test Case Generation
In the other mentioned approaches, the test generation phase includes
searching all possible test cases by traversing a GUI model or
representation (e.g. DART [7,8] and GUI Model Driven Testing [4]). However,
in the approach presented in this paper, the test case generation process is
tester-guided, since the tester is responsible for indicating, during the
capture process, what widgets and actions are relevant in the application by
performing actions within the GUI. The coverage criterion, then, is
determined by the tester.
A tester-guided test case generation process allows the tester to focus
testing efforts on the relevant widgets, widget properties and actions,
avoiding those that are not of interest for the test. In most cases this
leads to a smaller set of generated test cases. Sometimes, however, since
this is a human-guided process, some test cases may be missing, leaving the
corresponding widgets or actions out of the testing process. The robustness
of the process would not be affected, but the process would be incomplete if
those widgets and actions were part of the testing plan; therefore, the
responsibility of creating a complete test suite falls on the tester.
6.3 Verification Process
In the approaches mentioned previously, the verification process involves
generating a set of expected results by using a Test Oracle, a tool that
generates the expected results for a test case and compares them with the
output obtained after the test case execution.
In [1], Atif Memon introduces the use of test oracle tools in GUI testing.
Since a test oracle usually compares the output with the expected results
once the test case execution has finished, Memon argues that during GUI
testing the GUI should be verified step by step, and not only at the end of
the test case execution: the final output may be correct while intermediate
outputs are incorrect.
Thus, in OpenHMI-Tester we describe two ways to perform the verification
process. The first is as simple as a visual verification performed by the
tester during a test case execution. Since having the tester check that
everything works correctly during test case execution may be a very tedious
process, a semi-automated alternative is proposed: the OpenHMI-Tester allows
the developer to implement their own verification method. We propose a
verification process based on the on-demand introduction of meta-events
called verification points, which can verify one or more properties
belonging to one or more GUI elements at a given time. During the test case
execution process, these verification points are executed to perform the
requested verifications, and the results may be reported to the tester at
the end of the execution. These verification points may include, for
example, expected values in text boxes or the background color of some
other widget.
The fact that the test case generation and execution processes are performed
on the software at execution time means that the actual application is
acting on the GUI. Therefore, all the modifications and restrictions encoded
both in the application code and in the widget properties are active, and
are thus exercised by the OpenHMI-Tester at execution time. This does not
necessarily hold if the test case generation process is not performed at the
real application execution time, as in [7] and [4], where the test cases are
extracted from a model.
6.4 Modifications Tolerance, Robustness and Scalability
If a new element is added to the GUI, the tester can deal with the change in
two ways: by creating a new test case involving the new GUI object (a new
test case would be added to the existing test suite), or by editing an old
test case and adding the new actions to be performed (the structure of the
test suite would not be modified). If a GUI element is removed, the tester
can also deal with the change in two ways: by replacing the test cases
having actions over the deleted object with other test cases, or by deleting
those actions from the test cases. Should the tester decide not to take any
of the proposed solutions, the execution process can detect the missing
widgets and skip the corresponding actions.
This makes the OpenHMI-Tester architecture highly tolerant to change, since
the tester can choose among several options to approach a GUI modification
(or even let the system take care of it). The fact that no model has to be
maintained also helps the test case generation process to be more tolerant
to changes.
Another strong point of the OpenHMI-Tester architecture is robustness. The
fact that human intervention is kept to a minimum during the test case
generation process, in which the tester only has to perform the actions that
are going to be analysed later, increases the robustness of the process, as
the tester does not have to build, verify or complete any model or test case
description. Moreover, the generated test cases can be edited to add
meta-events or to remove existing events. Robustness is a very important
feature throughout the process, because preserving it during the capture and
test generation processes lets the system perform a better event execution
process. In order not to put the robustness of the process at risk, the
editing process should be performed using an editing tool which allows the
tester to edit test cases in a safe way.
Scalability is another of the strengths of this architecture. When new
elements or actions are added to the GUI, as discussed above, the tester may
create a new test case involving those new elements or actions, or may
simply edit an old test case and add the new actions to be performed. If the
tester chooses the first option, the test suite (the set of generated test
cases) will grow linearly in the worst case; if the tester opts for the
second option, the test suite size will not increase. The tester is solely
responsible for the number of test cases that compose the test suite, since
a sufficiently high number of test cases has to be generated to test all the
relevant functionality of the GUI.
6.5 Performance Analysis
Since the OpenHMI-Tester Prototype has been released quite recently, its
performance has not yet been evaluated extensively in open-source
frameworks. However, the first performance analysis obtained during the
evaluation of the downloadable prototype (described in Subsection 5.3)
presents promising results.
During the capture process (which involves catching events, handling and
packaging event information, and sending these events to the HMI Tester),
the observed behavior does not introduce any problem or delay in either
event handling or data transmission. Since delays are negligible, the
capture process takes place while the tester is performing the test on the
tested application. During the execution process (which involves sending,
managing, and executing events), the observed behavior is the same as
described before. The fact that relevant GUI events, in most cases,
represent less than half of the events available, and that event generation
is performed on demand in response to the actions of the tester, leads to an
irregular data flow between the HMI Tester and the tested application, which
in turn allows the architecture to work without any major efficiency problem.
The fact that no GUI model or representation has to be built allows the
OpenHMI-Tester to avoid tedious test case and oracle generation processes,
which might delay the process for somewhat complex GUIs. In the architecture
described in this paper, both the test case generation and execution
processes are performed in real time, while the tester is interacting with
the GUI or while the events are being sent to the Preload Module,
respectively.
Nonetheless, the performance of the architecture might be compromised if the
developer does not use an efficient implementation of the event handling in
both the capture and execution processes. If the chosen implementation does
not perform acceptable event filtering during the capture process, and is
not able to rule out unsuitable events (e.g. non-GUI events, application
timing events, object hierarchy events, etc.), bottlenecks could emerge
during event handling and data transmission.
Another performance limitation is that a capture/replay tool may spend a lot
of effort trying to replay executions faithfully, but it cannot execute them
with complete fidelity [14]. The execution environment may have different
features (e.g. open windows, screen resolution, system memory, CPU speed,
etc.) than the capture environment, and storing all the environment features
and all the performed events would be unfeasible. In most cases this poses
no problem, because a small set of GUI events is enough to simulate the
tester's actions, and the environment configuration does not matter (an
application should work correctly on a wide range of environment
configurations).
7 Conclusions and Future Work
The Human-Machine Interface (HMI) of any software represents the means,
in terms of inputs and outputs, by which users interact with that software.
Although it usually tends to be as simple as possible, in medium and large
projects, especially in industry, the HMI used to control a given system or
platform usually takes a significant part of the design and development
time. In fact, it requires specialised tools to be developed and tested.
Although plenty of tools exist to assist with the design and implementation,
testing-support tools are not that frequent, especially in the open-source
community.
Additionally, testing platforms in use for other parts of the software are
not directly applicable to HMIs. The development of such systems would help
reduce the time needed to develop a software product, as well as provide
robustness and increase the usability of the final product. In this context,
this paper provides the definition of a general and open HMI testing
architecture named OpenHMI-Tester and the details of an open-source
implementation, whose requirements have been driven by industrial HMI
applications, and which has also been tested with real scenarios and
applications.
As a statement of direction, we are currently working on the implementation
of the adaptation modules for different windowing systems, and on performing
extensive performance measurements in real scenarios. We are also working on
the design and later implementation, also as another contribution to the
open-source community, of an editor which helps any user of the
OpenHMI-Tester architecture to create any meta-action that can be of
interest in a given scenario. This tool will also help editing any currently
existing test and adapting it as needed.
Acknowledgements
This paper has been partially funded by the Catedra SAES of the University
of Murcia initiative, which is a joint effort between SAES (Sociedad
Anónima de Electrónica Submarina, http://www.electronica-submarina.com/)
and the University of Murcia to work on open-source software, and real-time
and critical information systems.
References
[1] A. Memon, GUI Testing: Pitfalls and Process, IEEE.
[2] A. Memon, Coverage Criteria for GUI Testing, ACM.
[3] A. M. Memon, A Comprehensive Framework for Testing Graphical User
Interfaces, Ph.D. thesis, Department of Computer Science, University of
Maryland (2001).
[4] M. Vieira, J. Leduc, B. Hasling, R. Subramanyan, J. Kazmeier, Automation of
GUI Testing Using a Model-driven Approach, Siemens Corporate Research.
[5] L. White, H. Almezen, Generating Test Cases for GUI Responsibilities Using
Complete Interaction Sequences, IEEE Transactions on SMC Associate Editors.
[6] A. Memon, I. Banerjee, A. Nagarajan, GUI Ripping: Reverse Engineering
of Graphical User Interfaces for Testing, IEEE 10th Working Conference on
Reverse Engineering (WCRE’03).
[7] A. Memon, I. Banerjee, N. Hashmi, A. Nagarajan, DART: A Framework
for Regression Testing “Nightly/daily Builds” of GUI Applications, IEEE
International Conference on Software Maintenance (ICSM’03).
[8] A. Memon, Q. Xie, Studying the Fault-Detection Effectiveness of GUI Test
Cases for Rapidly Evolving Software, IEEE Computer Society.
[9] L. White, H. Almezen, N. Alzeidi, User-Based Testing of GUI Sequences
and Their Interactions, IEEE 12th International Symposium on Software
Reliability Engineering (ISSRE’01).
[10] X. Yuan, A. M. Memon, Using GUI Run-Time State as Feedback to Generate
Test Cases, in: ICSE ’07: Proceedings of the 29th International Conference on
Software Engineering, IEEE Computer Society, Washington, DC, USA, 2007,
pp. 396–405 (May 23–25 2007).
[11] M. R. Karam, S. M. Dascalu, R. H. Hazimé, Challenges and Opportunities
for Improving Code-Based Testing of Graphical User Interfaces, Journal of
Computational Methods in Sciences and Engineering.
[12] C. Mingsong, Q. Xiaokang, L. Xuandong, Automatic Test Case Generation for
UML Activity Diagrams, Proceedings of the 2006 International Workshop on
Automation of Software Test.
[13] D. Leon, A. Podgurski, L. White, Multivariate visualization in observation-
based testing, Proceedings of the 22nd International Conference on Software
Engineering (Limerick, Ireland).
[14] J. Steven, P. Chandra, B. Fleck, A. Podgurski, jRapture: A Capture/Replay
Tool for Observation-Based Testing, ACM.
[15] M. Ronsse, K. De Bosschere, RecPlay: a Fully Integrated Practical
Record/Replay System, ACM Transactions on Computer Systems.
[16] A. Orso, B. Kennedy, Selective Capture and Replay of Program Executions,
in: Workshop on Dynamic Analysis (WODA), ACM, St. Louis, Missouri, USA,
2005 (July 2005).
[17] Q. Xie, A. M. Memon, Designing and Comparing Automated Test Oracles for
GUI-Based Software Applications, ACM Transactions on Software Engineering
and Methodology 16, 1, 4.
[18] A. C. D. Neto, R. Subramanyan, M. Vieira, G. H. Travassos, A Survey on
Model-Based Testing Approaches: a Systematic Review, Proceedings of the 1st
ACM International Workshop on Empirical Assessment of Software Engineering
Languages and Technologies: held in conjunction with the 22nd IEEE/ACM
International Conference on Automated Software Engineering (ASE).
[19] A. Memon, An Event-Flow Model of GUI-based Applications for Testing,
Software Testing Verification and Reliability.
[20] K. Beck, eXtreme Programming,
http://www.XProgramming.com/software.htm (2003).
[21] Sun Microsystems Inc., Java 2 SDK, Standard Edition Documentation (1995-
2000).
[22] Trolltech Inc., Qt Cross-Platform Application Framework,
http://www.qtsoftware.com/ (2009).
[23] The GTK+ Team, The GIMP Toolkit (GTK), version 2.x,
http://www.gtk.org (2009).
[24] R. Nasika, P. Dasgupta, Transparent Migration of Distributed Communicating
Processes, in: 13th ISCA International Conference on Parallel and Distributed
Computing Systems (PDCS), Las Vegas, Nevada, USA, 2000 (November 2000).
[25] S.-J. Horng, M.-Y. Su, J.-G. Tsai, A Dynamic Backdoor Detection System
Based on Dynamic Link Libraries, International Journal of Business and
Systems Research.
[26] CppUnit: C++ Unit Testing Framework, http://cppunit.sourceforge.net
(2009).
[27] E. Gamma, K. Beck, JUnit: Java unit testing framework,
http://junit.sourceforge.net (2009).
[28] W3C, XML: Extensible Markup Language, http://www.w3.org/XML/ (2009).
[29] Catedra SAES of the University of Murcia, OpenHMI-Tester Prototype,
http://sourceforge.net/projects/openhmitester/ (2009).
[30] Trolltech Inc., Trolltech Qt 4.x Demo Applications,
http://doc.trolltech.com/4.0/ (2009).
... In earlier work, our group described a GUI testing architecture aimed at supporting major eventbased and open-source GUI platforms [41]. This architecture is based on the interception of GUI events by using GUI introspection. ...
Article
Runtime verification (RV) provides essential mechanisms to enhance software robustness and prevent malfunction. However, RV often entails complex and formal processes that could be avoided in scenarios in which only invariants or simple safety properties are verified, for example, when verifying input data in Graphical User Interfaces (GUIs). This paper describes S-DAVER, a lightweight framework aimed at supporting separate data verification in GUIs. All the verification processes are encapsulated in an independent layer and then transparently integrated into an application. The verification rules are specified in separate files and written in interpreted languages to be changed/reloaded at runtime without recompilation. Superimposed visual feedback is used to assist developers during the testing stage and to improve the experience of users during execution. S-DAVER provides a lightweight, easy-to-integrate and dynamic verification framework for GUI data. It is an integral part of the development, testing and execution stages. An implementation of S-DAVER was successfully integrated into existing open-source applications, with promising results. Copyright © 2015 John Wiley & Sons, Ltd.
... La arquitectura Open-HMI Tester (OHT) [1] se trata de una arquitectura abierta (no está ligada a ningún sistema operativo ni sistema de ventanas) para pruebas de interfaz gráfica. Esta arquitectura (Figura 1) está compuesta por una serie de módulos que implementan una funcionalidad genérica, y que por lo tanto nunca cambian (representados en la figura por las cajas no coloreadas); y un conjunto de módulos que deben ser adaptados con el fin de dotar a la arquitectura de la funcionalidad necesaria para poder operar sobre un sistema operativo y sistema de ventanas concretos (estos módulos están representados en la figura por cajas coloreadas). ...
... Estos monitores están basados en el Autómata de Seguridad (Schneider [25]) y el Autómata de Edición (Ligatti et al. [13]). En trabajos anteriores se propuso el Open HMI Tester [15,20], un framework abierto y adaptable para el desarrollo de herramientas de pruebas. Este framework es capaz de extraer la información incluida en los eventos GUI generados por la interacción del usuario utilizando introspección no intrusiva. ...
... La arquitectura Open-HMI Tester (OHT) [1] se trata de una arquitectura abierta (no está ligada a ningún sistema operativo ni sistema de ventanas concretos) para pruebas de Revista Española de Innovación, Calidad e Ingeniería del Software, Vol.5, No. 4, 2009 ISSN: 1885-4486 © ATI, 2009 8 interfaz gráfica. Esta arquitectura (figura 1) está compuesta por una serie de módulos que implementan la funcionalidad genérica, y que por lo tanto nunca cambian (representados en la figura por las cajas no coloreadas), y un conjunto de módulos que deben ser adaptados (representados en la figura por las cajas coloreadas) con el fin de dotar a la arquitectura de la funcionalidad necesaria para poder operar sobre un entorno de pruebas (sistema operativo, sistema de ventanas, etc.) concreto. ...
Article
Full-text available
Software testing is a very important phase in the software development process. These tests are performed to ensure the quality, reliability, and robustness of software within the execution context it is expected to be used. Some of these tests are focused on ensuring that the graphical user interfaces (GUIs) are working properly. GUI Testing represents a critical step before the software is deployed and accepted by the end user. This paper describes the main results obtained from our research work in the software testing and GUI testing areas. It also describes the design and implementation of an open source architecture used as a framework for developing automated GUI testing tools.
Article
GUI testing is essential to provide validity and quality of system response, but applying it to a development is not straightforward: it is time consuming, requires specialized personnel, and involves complex activities that sometimes are implemented manually. GUI testing tools help supporting these processes. However, integrating them into software projects may be troublesome, mainly due to the diversity of GUI platforms and operating systems in use. This work presents the design and implementation of Open HMI Tester (OHT), an application framework for the automation of testing processes based on GUI introspection. It is cross-platform, and provides an adaptable design aimed at supporting major event-based GUI platforms. It can also be integrated into ongoing and legacy developments using dynamic library preloading. OHT provides a robust and extensible basis to implement GUI testing tools. A capture and replay approach has been implemented as proof of concept. Introspection is used to capture essential GUI and interaction data. It is used also to simulate real human interaction in order to increase robustness and tolerance to changes between testing iterations. OHT is being actively developed by the Open-source Community and, as shown in this paper, it is ready to be used in current software projects.
Article
Prototypes are described as a successful mechanism to incorporate user-experience (UX) design into Agile developments, but their integration into such developments is not exempt from difficulties. Prototypes and final applications are often developed using different tools, which hinders collaboration between designers and developers and also complicates reuse. Moreover, integrating stakeholders such as clients and users into the Agile process of designing, evaluating, and refining a prototype is not straightforward, mainly because of the iterative nature of that process. In an attempt to tackle these problems, this work presents the design and implementation of a new framework in which scripting languages are used to code prototyped behaviors. Prototyping is then treated as a separate aspect that coexists and runs together with final functionality. With this framework, communication is enhanced because designers and developers work in parallel on the same software artifact. Prototypes are fully reused and are iteratively extended with final functionality as the corresponding prototyped behaviors are removed. They can also be modified on the fly to implement participatory design techniques.
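The coexistence of prototyped and final behaviors described above can be sketched as a simple dispatch table. The names here are hypothetical, not the framework's real API; the sketch only shows the idea that scripted prototypes act as stand-ins that final implementations replace incrementally, without touching calling code.

```python
class BehaviorDispatcher:
    """Routes behavior calls to final code when it exists, otherwise to a
    scripted prototype stand-in that can be edited on the fly."""

    def __init__(self):
        self.final = {}       # behavior name -> final implementation
        self.prototyped = {}  # behavior name -> scripted prototype

    def run(self, name, *args):
        if name in self.final:        # final code wins once it exists
            return self.final[name](*args)
        if name in self.prototyped:   # otherwise fall back to the prototype
            return self.prototyped[name](*args)
        raise KeyError(f"no behavior registered for {name!r}")

    def promote(self, name, func):
        """Replace a prototyped behavior with its final implementation."""
        self.final[name] = func
        self.prototyped.pop(name, None)
```

Because callers only ever invoke `run`, a prototype can be swapped for final functionality (or tweaked during a participatory design session) without any change on the calling side.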
Thesis
Full-text available
1 Introduction — 1.1 Motivation (Human-Computer Interaction; Software Testing; Data Verification; Software Usability; Quality of Experience); 1.2 Enhancing Software Quality (Block 1: Achieving Quality in Interaction Components Separately; Block 2: Achieving Quality of User-System Interaction as a Whole); 1.3 Goals of this PhD Thesis; 1.4 Publications Related to this PhD Thesis; 1.5 Software Contributions of this PhD Thesis (OHT: Open HMI Tester; S-DAVER: Script-based Data Verification; PALADIN: Practice-oriented Analysis and Description of Multimodal Interaction; CARIM: Context-Aware and Ratings Interaction Metamodel); 1.6 Summary of Research Goals, Publications, and Software Contributions; 1.7 Context of this PhD Thesis; 1.8 Structure of this PhD Thesis
2 Related Work — 2.1 Group 1: Approaches Assuring Quality of a Particular Interaction Component (Validation of Software Output: methods using a complete model of the GUI, methods using a partial model of the GUI, methods based on GUI interaction; Validation of User Input: data verification using formal logic, using formal property monitors, and in GUIs and the Web); 2.2 Group 2: Approaches Describing and Analyzing User-System Interaction as a Whole (Analysis of User-System Interaction: analysis for the development of multimodal systems, evaluation of multimodal interaction, evaluation of user experiences; Analysis of Subjective Data of Users: user ratings collection, users' mood and attitude measurement; Analysis of Interaction Context: interaction context factors analysis, interaction context modeling)
3 Evaluating Quality of System Output — introduction and motivation; GUI testing requirements; preliminary considerations for the design of a GUI testing architecture (architecture actors, organization of the test cases, interaction and control events); the OHT architecture design (the HMI Tester module, the Preload module, the event capture and playback processes); the OHT implementation (generic data model, generic recording and playback processes, the DataModelAdapter, the preloading process, adapting the GUI event recording and playback processes, technical details); discussion (architecture, the test case generation process, validation of software response, tolerance to modifications, robustness, scalability, performance analysis); conclusions
4 Evaluating Quality of Users' Input — introduction and motivation; practical analysis of common GUI data verification approaches; monitoring GUI data at runtime; verification rules (rule definition, using the rules to apply correction, rule arrangement, rule management, loading the rules, evolution of the rules and the GUI, correctness and consistency of the rules); the verification feedback; S-DAVER architecture design (architecture details, architecture adaptation); S-DAVER implementation and integration considerations; practical use cases (integration, configuration, and deployment; defining the rules in Qt Bitcoin Trader and in Transmission; development and verification experience); performance analysis of S-DAVER; discussion (a lightweight data verification approach, the open-source implementation, comparison with other verification approaches); conclusions
5 Modeling and Evaluating Quality of Multimodal User-System Interaction — introduction and motivation; a model-based framework to evaluate multimodal interaction (classification of dialog models by level of abstraction, the dialog structure, using parameters to describe multimodal interaction); design of PALADIN; implementation, integration, and usage of PALADIN; application use cases (assessment of PALADIN as an evaluation tool, usage of PALADIN in a user study); discussion (research questions, practical application, completeness according to evaluation guidelines, limitations in automatic logging of interaction parameters); conclusions; parameters used in PALADIN
6 Modeling and Evaluating Mobile Quality of Experience — introduction and motivation; context- and QoE-aware interaction analysis (incorporating context information and user ratings into interaction analysis, arranging the parameters for the analysis of mobile experiences, using CARIM for QoE assessment); context parameters (quantifying the surrounding context, arranging context parameters into CARIM); user perceived quality parameters (measuring the attractiveness of interaction, measuring users' emotional state and attitude toward technology use, arranging user parameters into CARIM); CARIM model design (the base design: PALADIN; the new proposed design: CARIM); CARIM model implementation; experiment (participants and material, procedure, results, comparing the two interaction designs for UMU Lander, validating the user behavior hypotheses); discussion (modeling mobile interaction and QoE, implementation and experimental validation, comparison with other representative approaches); conclusions
7 Conclusions and Further Work — driving forces of this PhD thesis; work and research in user-system interaction assessment; goals achieved; future lines of work
Bibliography; A: List of Acronyms
Conference Paper
This paper is a position paper of my PhD thesis: https://www.researchgate.net/publication/264194567_Enhancing_Software_Quality_and_Quality_of_Experience_through_User_Interfaces With this work we expect to provide the community and the industry with a solid basis for the development, integration, and deployment of software testing tools. By a solid basis we mean, on the one hand, a set of guidelines, recommendations, and clues to better comprehend, analyze, and perform software testing processes, and on the other hand, a set of robust software frameworks that serve as a starting point for the development of future testing tools.
Conference Paper
Full-text available
This paper describes a systematic review performed on model-based testing (MBT) approaches. A selection criterion was used to narrow the four hundred and six papers initially identified down to seventy-eight papers for detailed analysis. Detailed analysis of these papers shows where MBT approaches have been applied, their characteristics, and their limitations. The comparison criteria include representation models, support tools, test coverage criteria, the level of automation, intermediate models, and complexity. This paper defines and explains the review methodology and presents some results.
Article
Full-text available
The research presented in this paper introduces an execution model for graphical user interfaces (GUIs) that we have developed and formalized as a sequence of actions and finite output states. This model has allowed us to investigate the possibility of applying code-based testing methodologies to testing graphical user interfaces. Our findings highlighted challenges and revealed opportunities to adapt code-based testing methodology to verify the correctness of such interfaces. In particular, the "All-OP-DUs" technique provides important error detection capability and can be applied effectively to test GUIs. This paper also introduces Xtester, the GUI testing tool we are currently building to empirically evaluate our proposed testing criteria.
Conference Paper
Full-text available
This paper describes ongoing research on test case generation based on the Unified Modeling Language (UML). The described approach builds on and combines existing techniques for data and graph coverage. It first uses the Category-Partition method to introduce data into the UML model. UML use cases and activity diagrams are used to describe, respectively, which functionalities should be tested and how to test them. This combination has the potential to create a very large number of test cases. The approach offers two ways to manage the number of tests. First, custom annotations and guards use the Category-Partition data, which gives the designer tight control over which paths are possible or impossible. Second, automation allows different configurations for both the data and the graph coverage. The process of modeling UML activity diagrams, annotating them with test data requirements, and generating test scripts from the models is described. The goal of this paper is to illustrate the benefits of our model-based approach for improving automation in software testing. The approach is demonstrated and evaluated based on use cases developed for testing a graphical user interface (GUI).
Article
This article presents a two-layer backdoor detection system. In the first layer, Zhang and Paxson's method is applied to identify keystroke-interactive connections in network traffic. In the second layer, we adopt the Dynamic Link Library (DLL) injection technique to record all DLLs employed by the program that initiates such an interactive connection. By comparing the recorded data with pre-defined Common Feature Tables, the second layer can then determine whether the monitored program is a backdoor. In experiments, the best result of our system was a 94.44% detection rate with zero false positives; in that case, the overall accuracy was 97.22%.
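The second-layer check described above can be reduced to a set comparison. This is a simplified, hypothetical sketch (the paper's actual feature tables and matching rule are not specified here): the libraries recorded for a suspicious process are matched against a pre-defined feature table, and a sufficiently high overlap flags the process.

```python
def matches_feature_table(loaded_dlls, feature_table, threshold=0.8):
    """Return True if the fraction of feature-table entries found among
    the process's loaded libraries reaches the threshold.

    loaded_dlls:   set of library names recorded via DLL injection.
    feature_table: set of library names characteristic of known backdoors.
    threshold:     assumed tunable cut-off (not from the paper).
    """
    if not feature_table:
        return False
    hits = sum(1 for dll in feature_table if dll in loaded_dlls)
    return hits / len(feature_table) >= threshold
```

In practice the threshold trades detection rate against false positives, which is exactly the balance the reported 94.44% / zero-false-positive result reflects.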
Article
An abstract is not available.
Conference Paper
Test case generation from design specifications is an important task in the testing phase. In this paper, we use UML activity diagrams as design specifications and present an automatic test case generation approach. The approach first randomly generates abundant test cases for a Java program under test. Then, by running the program with the generated test cases, we obtain the corresponding program execution traces. Last, by comparing these traces with the given activity diagram according to specific coverage criteria, we obtain a reduced test case set that meets the test adequacy criteria. The approach can also be used to check the consistency between the program execution traces and the behavior of UML activity diagrams.
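The reduction step described above can be sketched with assumed data shapes (this is not the paper's actual tool): each execution trace is abstracted to the set of activity-diagram edges it covers, and a greedy set-cover pass keeps the fewest test cases that preserve edge coverage.

```python
def reduce_tests(traces, required_edges):
    """Greedily select a small test set preserving edge coverage.

    traces:         dict mapping test_id -> set of diagram edges covered
                    by that test's execution trace.
    required_edges: set of edges demanded by the coverage criterion.
    Returns (kept test ids, edges no test could cover).
    """
    uncovered = set(required_edges)
    kept = []
    while uncovered:
        # Pick the test whose trace covers the most still-uncovered edges.
        best = max(traces, key=lambda t: len(traces[t] & uncovered))
        gain = traces[best] & uncovered
        if not gain:
            break  # remaining edges are unreachable by any test
        kept.append(best)
        uncovered -= gain
    return kept, uncovered
```

Any edge left in the second return value also signals an inconsistency of the kind the paper checks: behavior present in the activity diagram that no execution trace exhibits.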
Conference Paper
We describe the design of jRapture: a tool for capturing and replaying Java program executions in the field. jRapture works with Java binaries (byte code) and any compliant implementation of the Java virtual machine. It employs a lightweight, transparent capture process that permits unobtrusive capture of a Java program's executions. jRapture captures interactions between a Java program and the system, including GUI, file, and console inputs, among other types, and on replay it presents each thread with exactly the same input sequence it saw during capture. In addition, jRapture has a profiling interface that permits a Java program to be instrumented for profiling after its executions have been captured. Using an XML-based profiling specification language, a tester can specify various forms of profiling to be carried out during replay.
Article
In this paper, we present a technique for selective capture and replay of program executions. Given an application, the technique allows for (1) selecting a subsystem of interest, (2) capturing at runtime all the interactions between that subsystem and the rest of the application, and (3) replaying the recorded interactions on the subsystem in isolation. The technique can be used in several scenarios. For example, it can be used to generate test cases from users' executions, by capturing and collecting partial executions in the field. For another example, it can be used to perform expensive dynamic analyses off-line. For yet another example, it can be used to extract subsystem or unit tests from system tests. Our technique is designed to be efficient, in that we capture only information that is relevant to the considered execution. To this end, we disregard all data that, although flowing through the boundary of the subsystem of interest, do not affect the execution. In the paper, we also present a preliminary evaluation of the technique performed using SCARPE, a prototype tool that implements our approach.
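The boundary capture idea above can be illustrated with a small sketch, under assumptions: this is not SCARPE (which instruments Java bytecode), just a hypothetical proxy that records only the calls crossing into a chosen subsystem, so the log can later drive the subsystem in isolation.

```python
class BoundaryRecorder:
    """Proxy that records calls crossing into the subsystem of interest."""

    def __init__(self, subsystem):
        self._subsystem = subsystem
        self.log = []  # (method name, args, result) triples at the boundary

    def __getattr__(self, name):
        # Called only for attributes not found on the proxy itself,
        # i.e. the subsystem's methods.
        method = getattr(self._subsystem, name)
        def wrapper(*args):
            result = method(*args)
            self.log.append((name, args, result))
            return result
        return wrapper


def replay(subsystem, log):
    """Re-drive a fresh subsystem instance from a recorded log and check
    that it reproduces the captured results."""
    for name, args, expected in log:
        if getattr(subsystem, name)(*args) != expected:
            return False
    return True
```

The efficiency argument in the abstract corresponds to keeping this log small: data that crosses the boundary but cannot affect the subsystem's execution need not be recorded at all.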