Intl. Journal of Human–Computer Interaction, 30: 408–421, 2014
Copyright © Taylor & Francis Group, LLC
ISSN: 1044-7318 print / 1532-7590 online
DOI: 10.1080/10447318.2013.873280
Widgets Dedicated to User Interface Evaluation
Selem Charfi1, Abdelwaheb Trabelsi2, Houcine Ezzedine1, and Christophe Kolski1
1LAMIH-UMR CNRS, University of Valenciennes and Hainaut-Cambrésis, Valenciennes, France
2University of Sfax, Sfax, Tunisia
In this article, evaluation-based widgets are proposed as a contribution to assist evaluators in the early evaluation of user interfaces. This contribution imbricates the ergonomic quality evaluation process into the widgets used for the graphical composition of user interfaces. In other words, these widgets evaluate themselves according to a defined set of ergonomic guidelines. The proposed widgets indicate possible ergonomic inconsistencies in the interface design as notifications to the designer. The guideline set can be modified through an interface dedicated to defining guidelines in XML files. The proposed widgets are intended for the evaluation of different kinds of user interfaces: WIMP, web, and mobile. An experimental evaluation involving these evaluation-based widgets is reported to illustrate and validate the approach.
1. INTRODUCTION
Software verification and validation is a common practice in the field of software engineering (Sommerville, 2010). Among the tested aspects is the software user interface (UI). The UI evaluation domain is very rich in terms of research, concepts, tools, and techniques,1 and it dates back more than 40 years (Bardzell, 2011; Nielsen, 1994; Rogers, Sharp, & Preece, 2011; Wright, Blythe, McCarthy, Gilroy, & Harrison, 2006). It consists in improving and optimizing interactive systems to reduce erroneous, incorrect, inappropriate, and ineffective user actions. It is generally based on utility and usability as quality criteria (Folmer & Bosch, 2004; Grudin, 1992; Juristo, Moreno, & Sanchez-Segura, 2007; Nielsen, 1994; Rafla, Robillard, & Desmarais, 2006). In some cases, user interface evaluation is essential, such as in the case of critical systems (power production, transportation, aeronautics, health care domains, etc.; Boy, 2011; Kortum, 2009).
Address correspondence to Christophe Kolski, University of Valenciennes and Hainaut Cambrésis (UVHC), Laboratoire d'Automatique, de Mécanique et d'Informatique industrielles et Humaines (LAMIH-UMR CNRS), Le Mont Houy, F-59313 Valenciennes cedex 9, France. E-mail: Christophe.Kolski@univ-valenciennes.fr

Color versions of one or more of the figures in the article can be found online at www.tandfonline.com/hihc.

In the international human–computer interaction community, this research is abundant and revolves essentially around approaches, tools, and techniques. Each of them possesses its specificities and requirements. They differ according to many features, mainly the phase of the software development process at which they are applied (e.g., waterfall systems development life-cycle phases: requirements, design, implementation, verification, and maintenance; Medvidovic & Jakobac, 2005).
We distinguish essentially four categories of evaluation
tools:
• Tools that are used on the interactive systems once
finished (e.g., Access Enable by Brinck, Hermann,
Minnebo, & Hakim, 2002, and EISEval by Tran,
Ezzedine, & Kolski, 2008);
• Tools that are used by evaluation experts to evalu-
ate advanced prototypes (e.g., Cognitive Walkthrough
method and its numerous extensions and variants;
Mahatody, Sagar, & Kolski, 2010; Wharton, Bradford,
Jeffries, & Franzke, 1992);
• Tools for interface generation that consider diverse
usability aspects (Folmer & Bosch, 2004; Savidis &
Stephanidis, 2006); and
• Tools for evaluating interactive systems from the first development phases onward (e.g., THEA,2 Pocock, Harrison, Wright, & Johnson, 2001).
The last category of evaluation tools explicitly couples the design phase and the evaluation phase (Nielsen, 1994; Tarby, Ezzedine, & Kolski, 2008). One of its advantages is that the evaluation process is less costly: there is no need to improve and correct a UI that has already been implemented. Correcting an already implemented interface can turn out to be expensive in terms of effort and time.

1. In several bibliographical resources, authors use the term "evaluation method." In this article we opt for the term "evaluation technique." This choice is due to the fact that a method is generally defined as an ordered set of principles, rules, steps, and so on, whereas a technique is defined as a set of processes and practical means for an activity. Thus, we think that "technique" is the most adequate term, because there is generally no well-ordered process for UI evaluation. Typically, evaluation tools are meant to automatically support some underlying evaluation techniques.

2. THEA is a technique for designing interactive systems that are resilient to erroneous user actions, in which the evaluation takes place in the first stages of the software development cycle.
According to Nielsen (1994), it can be
100 times more expensive to correct an already designed system
than to correct it at the early stages of the systems development
life cycle.
Dix, Finlay, Abowd, and Beale (2003) distinguished mainly two categories of UI evaluation techniques:

• Evaluation through expert analysis techniques. These techniques concentrate mainly on evaluation of the system design by the designer and/or an expert evaluator. They aim at identifying any aspects that can lead to usage difficulties or can violate known cognitive principles. Their main advantage is that the evaluation process is inexpensive, because it does not require testing the system with users. Illustrative examples of such techniques are Cognitive Walkthrough and Heuristic Evaluation.

• Evaluation through user participation techniques. This set of techniques includes empirical techniques, experimental techniques, observational techniques, query techniques, and techniques using physiological monitoring. It requires user participation to test the system. The system under test can be a prototype, an early version, or the final system.

Filippi and Barattin (2012) distinguished a third, "hybrid" category; the associated evaluation techniques involve both users and experts during the evaluation process.
Although there are many tools for user interface evaluation, evaluators still find it difficult to evaluate UIs. First, the evaluation process needed to identify UI utility and usability problems is complex and difficult to establish (Hearst, 2009). In addition, early evaluation tools are rare. Note that early evaluation tools are mainly structured into three categories: heuristic evaluation, usability principle application, and usability tests on system prototypes (Hvannberg, Law, & Lárusdóttir, 2007).
To proceed to an early evaluation, evaluators usually have to rely on prototyping (Buxton, 2007; Leonidis, Antona, & Stephanidis, 2012). Indeed, prototype implementation is fast and therefore inexpensive. These prototypes are improved and modified until the UIs conform to specific usability standards (Konstan, 2011). Early evaluation requires one or more experienced evaluators to exploit ergonomic guidelines (or heuristics) for UI evaluation (Salvendy & Turley, 2002). Among the existing early evaluation approaches, we can mention the approach of Tarby et al. (2008). It is based on aspect-oriented programming. This paradigm makes it possible to "graft" traces onto the kernel of the evaluated system from the first phase of the system development life cycle (Delannay, 2003). In other words, the evaluation of the interactive system is taken into consideration from the first development phase.
As mentioned previously, UI evaluation is generally supported only in the latest phases of the interactive system development cycle (e.g., the testing phase in the waterfall systems development life cycle; Larman & Basili, 2003). Consequently, many designers neglect UI evaluation because of hardware and time constraints.
The major motivation of the present work is to simplify the UI evaluation process. In addition, we intend to automate the evaluation process in order to provide more reliable results. In this article, we therefore propose an automated UI evaluation approach based on inspecting UI usability by exploiting ergonomic guidelines.

We are especially interested in tools that validate ergonomic guidelines in the evaluated UI. This interest is due to the fact that such evaluation is inexpensive compared with other evaluation techniques; in addition, it is simple to set up and yields reliable results. Section 2 presents the state of the art for UI evaluation tools. Section 3 proposes our widgets dedicated to UI evaluation. They can be seen as a global tool for automated ergonomic guideline validation during the interface design process. Section 4 applies our approach to a network supervision system. Section 5 reports the results obtained and discusses our approach. Section 6 concludes the article and proposes perspectives for future research work.
2. TOOLS BASED ON ERGONOMIC GUIDELINES
VALIDATION FOR UI EVALUATION
Interaction devices are currently omnipresent in all domains. They vary according to many aspects (screen size, support medium, etc.). Among these devices we can cite PCs, smartphones, and tablets. With such devices, users can access information wherever and whenever they want (Bacha, Oliveira, & Abed, 2011). This device diversity poses new challenges for UI evaluation. Therefore, the evaluation tools are presented in the following three main categories, which are related to the kind of interface on which the interactive system operates (Figure 1):
[Figure 1 classifies the tools for UI evaluation into three branches according to the evaluated UI type (WIMP UI, Web UI, and Mobile UI). The tools listed include ERGOVAL (Farenc, 2001), Sherlock (Grammenos et al., 2000), Synop (Kolski, 1999), EISEval (Tran et al., 2008), Web Tango (Ivory & Hearst, 2001), the Takata tool (Takata et al., 2004), Destine (Beirekdar, 2004), Magenta (Leporini et al., 2006), WAEX (Centeno et al., 2007), EvalAccess (Abascal et al., 2006), TAW (Taw, 2006), ReWeb and TestWeb (Ricca & Tonella, 2001), LIFT Machine (UsableNet, 2004), Google Analytics (Google, 2010), WebXACT, WebSAT (NIST Web Metrics), Ocawa (Ocawa, 2002), AccessEnable (Brinck et al., 2002), A-Prompt (ATRC, 2002), and the Tarby approach (Tarby et al., 2006), the last appearing in all three branches.]
FIG. 1. Classification of the tools for user interface (UI) evaluation.
• WIMP3 UI evaluation tools: This category lists all the tools allowing the evaluation of WIMP UIs. These UIs generally operate on personal computers, and the interaction between the interface and the user is mainly based on the use of mouse, screen, and keyboard. In this category, there are not many tools. In fact, it is difficult to evaluate the ergonomic quality of such interfaces because they are implemented through different programming languages. In addition, their source code is often not available to the evaluator. This is the reason why evaluating such systems mostly consists of integrating specific mechanisms and techniques (e.g., the MESIA electronic informer, Trabelsi, Ezzedine, & Kolski, 2009, and questionnaire exploitation,4 van Velsen, van der Geest, & Klaassen, 2011) to collect information for the evaluation. This information is analyzed to determine the ergonomic quality of the user interface and/or to detect the interface's ergonomic inconsistencies.
• Web UI (or WUI)5 evaluation tools: This category includes the majority of the existing evaluation tools. It is dedicated to evaluating web pages. It is easier to evaluate the ergonomic quality of web pages than that of a WIMP interface: the evaluator generally has access to the HTML code and can identify the graphic control attribute values needed for the evaluation. Therefore, the evaluation principle generally lies in inspecting the conformity of the interface to a guidelines set (e.g., ReWeb and TestWeb, Ricca & Tonella, 2000; AccessEnable, Brinck et al., 2002; EvalAccess, Abascal, Arue, Farjado, & Garay, 2006).
• Mobile UI evaluation tools: Nowadays, interactive systems operating on mobile phones, tablets, and personal digital assistant terminals are evolving exponentially. Numerous applications are increasingly available for iPhone, Android, and Windows Mobile (Monk, Carroll, Parker, & Blythe, 2004; Rogers et al., 2011). Nevertheless, evaluation techniques and tools for these systems are rather rare. For instance, Lift Machine (UsableNet, Inc., 2004) evaluates BlackBerry6 terminal application interfaces.
Table 1 lists representative evaluation tools, presenting some
features for each tool:
3. WIMP is the acronym for Windows, Icons, Menus and Pointing devices. WIMP user interfaces are the traditional user interfaces in which the interaction is based on the mouse and the keyboard.

4. To support usability tests, there are three types of questions: pretest questions, posttask questions, and posttest questions (Sauro & Dumas, 2009).

5. WUI is the acronym for Web User Interface. WUIs are user interfaces specific to web pages, used through the Internet browser.

6. http://us.blackberry.com/
• Acquisition: The technique to acquire data for the
evaluation process (e.g., source code parser, textual
description, questionnaire, electronic informer, and log
file); the information can be captured automatically or
manually.
• The evaluated user interface: WUI, WIMP or mobile
interfaces.
• Provided service: nonrespected ergonomic guidelines
and/or UI correction suggestions.
• Evaluation type: static (UI graphic control attributes)
and/or dynamic (UI interaction).
• Design phase: specification, design, implementation,
or final system testing.
• The inspected quality factor: accessibility, utility, and
usability.
• Automation: According to the evaluation process phases introduced by Ivory and Hearst (2001), we distinguish three phases (a minimal code sketch of these phases follows this list):
◦ Acquire the necessary data for the evaluation process,
◦ Analyze the acquired data, and
◦ Critique the user interface using the analyzed data to develop suggestions.
Every automation phase can be done manually (M), semiautomatically (S), or automatically (A).
• Flexibility: whether the evaluation tool allows the
evaluator to select the guidelines to be evaluated and to
add new guidelines to the ergonomic guidelines (EG)
database.
• The type of the evaluation tool: website or software.
• Contributor: the user, the evaluator and/or the designer.
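To make the three automation phases concrete, the following minimal sketch, in Python purely for illustration, wires an acquire/analyze/critique pipeline together; every name in it (Finding, the attribute keys, the example guideline) is hypothetical rather than taken from any surveyed tool.

    # Illustrative sketch of the three automation phases of Ivory and Hearst (2001).
    from dataclasses import dataclass

    @dataclass
    class Finding:
        widget: str      # e.g., "label1"
        guideline: str   # e.g., "font size >= 10 pt"
        observed: str    # e.g., "8.25 pt"

    def acquire(ui_description):
        """Phase 1: capture the data needed for the evaluation (widget attributes)."""
        return ui_description["widgets"]

    def analyze(widgets):
        """Phase 2: check the acquired data against one example guideline."""
        return [Finding(w["name"], "font size >= 10 pt", f'{w["font_size"]} pt')
                for w in widgets if w["font_size"] < 10]

    def critique(findings):
        """Phase 3: turn the analysis results into suggestions for the designer."""
        return [f'{f.widget}: violates "{f.guideline}" (observed {f.observed}); '
                "consider a larger font." for f in findings]

    ui = {"widgets": [{"name": "label1", "font_size": 8.25}]}
    for line in critique(analyze(acquire(ui))):
        print(line)

A fully automated tool performs all three phases without human intervention (A/A/A); in a manual or semiautomatic tool, a human carries out one or more of these functions instead.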
Table 1 lists some of the existing evaluation tools. These tools do not evaluate different types of UIs; for example, they evaluate only web or only WIMP user interfaces. Table 1 illustrates that most of the user interface evaluation tools (16 of 20) evaluate WUIs. Although mobile applications are increasingly widespread, mobile UI evaluation tools are rare (only 1 tool of 20). In addition, the existing tools are applied during the last phase of the system development life cycle: the testing phase (in the waterfall systems development life cycle; Larman & Basili, 2003). Tools proposing an early evaluation are few (e.g., THEA, Pocock et al., 2001, and the Tarby approach,7 Tarby et al., 2008). Most of the tools in Table 1 do not provide a fully automated evaluation process. As seen in Table 1, 13 of 20 tools do not propose automatic critiques; their critique phase is semiautomatic (S) or manual (M). For these tools, part of the acquisition, analysis, and critique is thus done manually.
7. In the first phase of the interactive system development cycle, this approach grafts use-based features onto the functional kernel, thus facilitating the evaluation phase. The approach is based on aspect-oriented programming and tasks.
TABLE 1
List of the Existing User Interface (UI) Evaluation Tools
[Table 1 compares 20 tools (TestWeb, Takata, A-Prompt, AccessEnable, HTML Toolbox, Lift Machine, Taw, Bobby, Magenta, Destine, Waex, Hyper AT, Ocawa, Ergoval, EISEval, Sherlock, THEA, MESIA, the Tarby approach(a), and AccessEnable again under Mobile UI) along the features listed above: input acquisition (parser, textual description, questionnaire, electronic informer, log file); provided service (nonrespected EG, correction suggestions); evaluation type (static, dynamic); design phase (specification through final system testing); quality factor (accessibility, utility, usability); automation of the acquisition, analysis, and critique phases (M, S, or A); flexibility (EG selection, EG addition); tool type (web site or software); and contributor (user, evaluator, designer). The per-tool marks were lost in extraction; aggregate counts appear in Table 2.]

(a) This approach can be applied for Windows, Icons, Menus and Pointing devices (WIMP), web, and mobile user interfaces.
FIG. 2. The general functioning of the evaluation process modeled in Business Process Modeling Notation (BPMN).
Most of the evaluation tools in Table 1 propose only nonrespected ergonomic guidelines as evaluation results. Some tools indicate the graphic elements that do not correspond to the inspected ergonomic guidelines. They generally do not correct the UI automatically; however, some propose suggestions to improve the evaluated interface. In addition, the evaluation process is not easy to set up with the tools presented in Table 1. In fact, they require a good preparation of the UI evaluation and specific knowledge of the tools involved in the evaluation process.
3. PROPOSITION: WIDGETS DEDICATED TO UI
EVALUATION
Evaluating the user interface can be defined as validating the user interface's conformity to ergonomic guidelines (Abascal et al., 2006; Beirekdar, Vanderdonckt, & Noirhomme, 2002). Based on this definition, our approach evaluates a set of ergonomic guidelines that are integrated into the widgets constituting the user interface. This evaluation is made locally by each widget. In other words, our approach exploits a set of graphic interface widgets that are able to self-evaluate according to predefined guidelines.
3.1. General Presentation of Our Approach
Our approach comprises three widget categories, one dedicated to each UI type: WIMP, web, and mobile UI. Each category is encapsulated into a DLL8 file, thus making its exploitation easier. The objective of these widgets is to provide self-evaluation according to ergonomic guidelines defined in advance by the evaluator. The originality of this approach lies in the coupling between the design phase and the evaluation phase.

8. Dynamic Link Library: a file format used by the Windows operating system to package libraries used by programs.
The proposed evaluation process is automated at all three evaluation levels: acquisition, analysis, and critique (Section 2). Widget use is intended for WYSIWYG9 programming environments. The proposed widgets are mostly used to help the evaluator evaluate usability, that is, an interactive system's ease of use in executing well-defined tasks; usability guarantees intuitive handling and learnability as well as support for using the graphic user interface.

This approach is classified under evaluation through expert analysis techniques (Dix et al., 2003). As shown in Figure 2, it requires three contributors: a programmer, a designer, and an evaluator. The designer has to conceive the interactive system's graphic interface. The programmer has to implement the personalized widgets. The evaluator has to specify and select the guidelines to use for the evaluation. The evaluation is based on the interface presentation according to ergonomic guidelines; it detects aspects related to these guidelines in the user interface.
9. WYSIWYG is the acronym for "What You See Is What You Get." This acronym is used to indicate development environments that allow composing user interfaces visually.

The proposed widgets produce two evaluation reports:
• The first report informs the designer about ergonomic inconsistencies with the specified guidelines. This report shows the widget aspects that do not correspond to the specified guidelines and contains suggestions to solve the widget's ergonomic inconsistencies.

• The second report is a PDF file that contains the ergonomic inconsistencies of the widgets and recommendations for improving the interface. It gathers the notifications of the different widgets.

In other words, the first report is specific to a widget, and the second one concerns the whole user interface being evaluated.
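To illustrate how the two reports could relate, here is a minimal sketch; Python and all names are hypothetical, and a plain-text file stands in for the PDF that the actual tool produces.

    # Illustrative sketch: per-widget notifications (first report) are also queued
    # so that a global report over the whole UI (second report) can be produced.
    GLOBAL_REPORT = []

    def notify(widget_name, inconsistencies):
        """First report: immediate feedback to the designer for one widget."""
        text = widget_name + ": " + ("; ".join(inconsistencies)
                                     or "no inconsistency detected")
        GLOBAL_REPORT.append(text)  # queued for the global report
        return text

    def write_global_report(path="evaluation_report.txt"):
        """Second report: the whole evaluated interface (a PDF in the real tool)."""
        with open(path, "w", encoding="utf-8") as f:
            f.write("\n".join(GLOBAL_REPORT))

    notify("button1", ["font size below 10 points"])
    notify("label1", [])
    write_global_report()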
Figure 2 illustrates the proposed evaluation process, which revolves around two major stages. First, the evaluator selects the EG for the evaluation. Then the evaluator formalizes and defines the guidelines for the evaluation process during the specification phase (in the sense of the requirements phase in the waterfall systems development life cycle; Larman & Basili, 2003). These guidelines are saved into XML files (one file per guideline). Each XML file is created with a dedicated interface.

Figure 3 illustrates the actions performed by the widget during its creation. First, it initializes itself via a constructor inherited from the widget library provided by the development environment. Then the widget evaluates its conformity to the specified guideline set. As mentioned earlier, this set, specified by the evaluator, is appropriate for the interface type (i.e., WIMP, WUI, or mobile interfaces). Finally, the widget notifies the designer about ergonomic inconsistencies; this notification contains the nonrespected guidelines and improvement suggestions (Figure 4). If the widget is coherent with the guideline set, it informs the designer that there are no inconsistencies according to the selected EG. The widget then feeds the global report of the inconsistencies detected with respect to the specified guidelines and the suggestions for improvement.

FIG. 4. Notification example.
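As a minimal sketch of this creation-time behavior, assuming Python for illustration (the actual widgets inherit from the development environment's .NET widget classes) and a single hard-wired guideline standing in for the XML-defined set, a self-evaluating button could look as follows; every class, attribute, and value here is hypothetical.

    from typing import NamedTuple

    class Guideline(NamedTuple):
        attribute: str   # widget attribute to inspect, e.g., "font_size"
        minimum: float   # recommended lower bound (one operator only, for brevity)
        error: str       # inconsistency reported when the check fails
        recomm: str      # improvement suggestion attached to the notification

    # One example guideline; in the real tool these are read from XML files.
    BUTTON_GUIDELINES = [
        Guideline("font_size", 10, "font size below 10 points",
                  "use a font size of at least 10 points"),
    ]

    class EvaluatingButton:
        """Stands in for a widget class inheriting the framework's Button."""

        def __init__(self, name, font_size):
            # 1. Initialization normally done by the inherited constructor.
            self.name, self.font_size = name, font_size
            # 2. Self-evaluation against the guidelines for this widget type.
            self._self_evaluate(BUTTON_GUIDELINES)

        def _self_evaluate(self, guidelines):
            problems = [g for g in guidelines
                        if getattr(self, g.attribute) < g.minimum]
            # 3. Notify the designer (inconsistencies, or conformity).
            if problems:
                for g in problems:
                    print(f"{self.name}: {g.error}; {g.recomm}")
            else:
                print(f"{self.name}: no inconsistency for the selected EG")

    EvaluatingButton("button1", font_size=8.25)  # triggers one notification

The point of the design is that the conformity check runs inside the inherited constructor, so simply dropping the widget onto the interface triggers the evaluation.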
3.2. Widgets Dedicated to Early UI Evaluation
The proposed widgets appear similar to those provided by WYSIWYG development environments; in fact, they offer the same functions. Shown on the widget toolbar, they can be used according to the drag-and-drop principle, with no apparent specificity for the designer. However, these widgets are endowed with additional mechanisms to evaluate their conformity to the ergonomic guideline set. The proposed widgets are separated into three categories: WIMP interface design, web UI design, and mobile UI design. Each category is encapsulated in a DLL file, which can be used to insert these widgets into the widget toolbar. In this article, our widgets are intended for the "MS Visual Studio 2010" development environment. It is possible to extend these categories to other environments ("Borland C++," "Eclipse," etc.).
The pseudocode in Figure 5 illustrates the self-evaluation process of the proposed widgets. First, the widget initializes itself on the graphic UI using the inherited constructor. Then it loads the ergonomic guidelines related to its type into a queue. Next, it analyzes its conformity to the guidelines according to the logical or arithmetic operator used (e.g., superior, inferior, equal, different). Each operator is associated with a method; the widget calls the appropriate method, passing its attribute values and the guideline values as arguments. At the end of the queue parsing, the widget notifies the designer of the ergonomic guideline inconsistencies and saves them in the evaluation report (PDF file). Note that a guideline can apply to more than one widget type (e.g., to textbox, label, and button); such a guideline is defined only once, and the associated widget types are mentioned in its component tag.

FIG. 5. The pseudocode of the widget dedicated to user interface evaluation.
The operating principle of these widgets is as follows. Once created using drag and drop, the personalized widget launches the constructor inherited from the original class proposed by the development environment framework. Then it traces its shape on the interface. Next, it selects the ergonomic guidelines associated with its type (e.g., button, text field, checkbox). It extracts its attribute values to perform a comparison, which indicates the widget's conformity to the ergonomic guidelines (Figure 3).
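The operator-to-method association just described might be sketched as a dispatch table; Python's operator module is used for illustration, and the guideline fields and operator names below are assumptions rather than the tool's actual identifiers.

    import operator

    # Each operator name that a guideline may use maps to a comparison function.
    OPERATORS = {
        "superior":  operator.gt,
        "inferior":  operator.lt,
        "equal":     operator.eq,
        "different": operator.ne,
    }

    def conforms(widget_attributes, guideline):
        """Compare one widget attribute with the guideline's recommended value."""
        check = OPERATORS[guideline["operator"]]
        return check(widget_attributes[guideline["attribute"]], guideline["value"])

    # Example: the 30-character label guideline of Section 3.3 ("inferior 31").
    label = {"text_length": 42}
    eg = {"attribute": "text_length", "operator": "inferior", "value": 31,
          "error": "label text exceeds 30 characters",
          "recomm": "use short phrases with simple words"}
    if not conforms(label, eg):
        print(eg["error"], "-", eg["recomm"])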
3.3. Ergonomic Guideline Modeling
According to Vanderdonckt (1999), an ergonomic guideline is a design and/or evaluation principle to be observed in order to obtain and/or guarantee an ergonomic interface. Such guidelines generally come from other disciplines, such as software engineering, or from observing or studying interactive system users. They are usually expressed in natural language to guide the designer and/or the evaluator toward useful, accessible, and usable interfaces.
The proposed tool (Figure 6), called the Ergonomic Guidelines Manager, defines standardized guidelines so that they can be exploited for the UI evaluation. This tool allows the following:

• Consulting the saved ergonomic guidelines (Search tab)—The search can be done via the guideline identifier, name, or reference.

• Adding a new guideline (Add tab)—The guideline identifier is automatically generated by the system. The evaluator has to specify the name and the bibliographical reference, as well as all the widget types to which the guideline can be applied; the tool proposes a widget list to the evaluator. In addition, the evaluator has to express the guideline through logical operators (e.g., equal, superior, inferior, between, different, different from the group) and widget attributes (e.g., title, font, size, color, background). Then, the generated inconsistencies and correction suggestions have to be specified.

• Modifying the existing guidelines (Modify tab)—The guidelines saved in XML files can be modified, except for the guideline identifier.

• Configuring the Ergonomic Guidelines Manager (Configuration tab)—The path for saving evaluation reports and XML files must be specified (Figure 6).

FIG. 3. Activity diagram for the widget self-evaluation process.
Let us take this guideline example: "An icon is a graphic that takes up a small portion of screen real estate and provides a quick, intuitive representation of an action, a status, or an app" (Android, 2012). Figure 7 shows the XML representation of this guideline.

Another example is "Keep it brief: Use short phrases with simple words. People are likely to skip sentences if they're long" (Android, 2012). We accordingly estimate that a text label should not exceed 30 characters. This guideline is then modeled as shown in Figure 8.
Another example is "Given the unpredictability of colour screens and users, the choice can be very complicated. The colour is often best used to highlight key information. In general, do not use more than three primary colours for information" (Watzman, 2002). This example is modeled as shown in Figure 9.
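Figures 7 through 9 (the XML encodings) are not reproduced here. As a plausible sketch of what such a file could contain, the 30-character label guideline of Figure 8 might be encoded as follows; only the <EG_widgets> tag name comes from the article (Section 3.4), and every other tag name is an assumption.

    <ergonomic_guideline>
      <EG_id>EG_012</EG_id>                        <!-- generated by the manager -->
      <EG_name>Keep it brief</EG_name>
      <EG_reference>Android (2012)</EG_reference>
      <EG_widgets>label</EG_widgets>               <!-- widget types concerned -->
      <EG_attribute>text_length</EG_attribute>     <!-- inspected graphic aspect -->
      <EG_operator>inferior</EG_operator>          <!-- one operator per guideline -->
      <EG_value>31</EG_value>
      <EG_error>Label text exceeds 30 characters</EG_error>
      <EG_recomm>Use short phrases with simple words</EG_recomm>
    </ergonomic_guideline>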
The EG should be contextualized, adequately interpreted, then unambiguously specified and structured so as to be "quantifiable" and thus suitable for use with the evaluation widgets. Once contextualized, the EG have to be defined using a formal language. Typically, they are expressed in ergonomic manuals in natural language, which makes exploiting them rather difficult: their exploitation rests on their contextual interpretation. Many languages have been proposed for defining ergonomic guidelines (e.g., Guideline Definition Language [GDL; Beirekdar et al., 2002], Guideline Abstraction Language [GAL; Leporini & Paterno, 2004], Unified Guideline Language [UGL; Arrue, Vigo, Aizpurua, & Abascal, 2007]). Even so, many ergonomic guidelines cannot be expressed in them, and these languages are complicated and demand special tools. Arrue et al. (2007) proposed UGL, a specific language for better guideline management. They also proposed a tool for modeling guidelines, dedicated to evaluating web site accessibility (Takata, Nakamura, & Seki, 2004). The guideline definition languages cited are based on XML notation for reliability and simplicity, and they are dedicated to evaluating web sites.

FIG. 6. The Ergonomic Guidelines Manager.

FIG. 7. Example of a guideline expressed in XML notation.

FIG. 8. A second example of a guideline expressed in XML notation.

FIG. 9. A third example of a guideline expressed in XML notation.

FIG. 10. The process of ergonomic guideline definition into an XML file.
In our approach, we opted for a simpler guideline model (Figure 10). Our guideline modeling process consists of five steps. First, the guideline to be considered for the design or evaluation phase is chosen. Second, the guideline's graphic aspect10 (e.g., font, size, color, dimension) has to be specified. Third, the widget type associated with the guideline has to be selected. Fourth, the guideline is expressed through arithmetical (e.g., superior, inferior, equal) and logical (e.g., and, or) operators.11 Finally, the guideline is associated with the engendered inconsistency and the suggestions for improvement. The guideline is saved into an XML file (Figure 7).

10. Note that if a guideline deals with more than one aspect (e.g., font color and control size), it is defined through two distinct guidelines (one aspect per guideline).
3.4. Exploitation of Ergonomic Guidelines by the Proposed Widgets for the UI Evaluation Process

For its self-evaluation, the widget goes through the guidelines selected by the evaluator. It duplicates in its memory the guidelines whose <EG_widgets> tag in the XML file mentions the widget's type (Figure 7). Then the widget evaluates its conformity to these guidelines. For every guideline, the widget identifies the selected operator (e.g., superior, inferior); a procedure corresponds to each operator. As inputs, it passes its attribute values and the recommended guideline values as arguments. Every time an inconsistency is detected, the character strings "recomm" and "error" are filled with the detected design inconsistency and the improvement suggestions. At the end of the self-evaluation process, the widget notifies the designer with these character strings in order to inform him or her about the detected ergonomic inconsistencies.

11. As with the related aspects, a guideline can support only one operator. Thus, it is not possible to combine several operators to define one guideline.
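A minimal sketch of this guideline-selection step, assuming the hypothetical XML schema sketched above in Section 3.3 (one file per guideline, a comma-separated <EG_widgets> tag) and using Python's standard XML parser for illustration:

    import xml.etree.ElementTree as ET
    from collections import deque
    from pathlib import Path

    def load_guidelines_for(widget_type, folder="guidelines"):
        """Queue only the guidelines whose <EG_widgets> tag lists widget_type."""
        queue = deque()
        for path in Path(folder).glob("*.xml"):
            root = ET.parse(path).getroot()
            applicable = root.findtext("EG_widgets", default="").split(",")
            if widget_type in (t.strip() for t in applicable):
                # Keep a flat {tag: text} view of the guideline for evaluation.
                queue.append({child.tag: child.text for child in root})
        return queue

    # Example: a button widget duplicates only its own guidelines in memory.
    button_guidelines = load_guidelines_for("button")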
Downloaded by [79.175.119.90] at 10:09 23 April 2014
WIDGETS DEDICATED TO USER INTERFACE EVALUATION 417
4. EXPERIMENTAL EVALUATION
To validate and improve the proposed early evaluation approach, an experimental evaluation is reported in this section. It deals with a prototype of a system dedicated to transportation network supervision (Figure 11). The prototype was designed using the proposed widgets.

4.1. Evaluated System: The Information Assistance System
The Information Assistance System (IAS)12 is a system dedicated to presenting transportation network information. It is used by network regulators. Its main aim is to inform human regulators about the positions of the different vehicles in the transport network. In addition, it enables regulators to communicate with vehicle drivers and passengers by sending messages (Figure 11).
4.2. Design/Evaluation Process
As mentioned earlier, the proposed evaluation process is coupled to the design phase: the evaluation takes place during interface design with the proposed evaluation widgets. Before proceeding to the interface design, evaluators have to select an EG set to take into consideration for the evaluation/design process. These EG are defined in XML files. The selected rules focus on the writing size, the writing color, the writing font, the image dimensions, the graphic component sizes, and the number of menu items.

Then the designer composes the user interface graphically with the proposed widgets. Every time a widget is added, design errors and recommendations are displayed as a notification to the user. Once the interface is finalized, the designer receives a global report on the design errors and improvement suggestions. Indeed, during the IAS design, the designer is informed by a set of recommendations proposed by the different UI components (Figure 12). The widgets used for the IAS design are button, label, picture box, text box, and combo box.

FIG. 11. A prototype of the Information Assistance System (IAS) implemented using the evaluation widgets.

12. The IAS is a cooperative project involving an industrial partner (Transvilles) and several research laboratories (LAGIS, LAMIH, and INRETS). This project is sponsored by the Nord-Pas de Calais regional authorities and by the FEDER (Fonds Européen de Développement Régional—European fund for regional development).
We defined an ergonomic guideline set for every UI design
or evaluation phase. Each set revolved around the information
display:
• Character size, color, and font;
• Size and number of the pictures and icons;
• Text length;
• Widget dimensions;
• Number of colors used; and
• Global interface density; and
• Background color.
These guidelines were used to evaluate the usability of the
interface. Our early evaluation verified the interfaces’ confor-
mity to the specified guidelines.
4.3. Evaluation Results
The IAS prototype evaluation did not raise major usability problems. Indeed, the design errors detected mainly concerned the writing font adopted by the IDE "MS Visual Studio 2010": the font used is "Microsoft Sans Serif," whereas the associated EG recommends using the font of the operating system. In addition, one of the selected EG recommends a writing font size of 10 points, whereas the font size used is 8.25 points.
5. RESULTS AND DISCUSSION
In the UI design or evaluation phase, the proposed approach is an easy and effective way to assist user interface evaluators in early evaluation. It provides information about usability problems. Depending on the type of interactive system interface, an ergonomic guideline can be interpreted differently. One of our approach's advantages is the notification provided to the designer concerning the detected ergonomic inconsistencies.

In our approach, the UI evaluation is established during the design phase, which makes it possible to save time and resources. Indeed, ergonomic inconsistencies are detected in the early stages of the systems development life cycle. As stated in Section 1, Nielsen (1994) considers that it is 100 times cheaper to correct errors during the first design phase than during the last phase.

The proposed evaluation provides a list of design errors and improvement suggestions. Although the evaluator can evaluate conformity to the guidelines, he or she cannot evaluate the overall ergonomic quality: our widgets do not indicate the quality of the evaluated user interface.

FIG. 12. A screenshot of the user interface design/evaluation of the Information Assistance System (IAS) using the evaluation-based widgets.

Compared to existing UI evaluation techniques, our approach is easy to apply during the earliest phases of the systems development life cycle: the design phase (in the case of the waterfall systems development life cycle; Larman & Basili, 2003). As shown in Table 1, most of the tools are applied during the final testing phase; only one technique, THEA (Pocock et al., 2001), evaluates in the design phase. In addition, our approach can be applied to web, WIMP, and mobile UIs. The evaluation process consists in detecting ergonomic inconsistencies in the evaluated UI. Furthermore, this approach is not limited to a fixed set of ergonomic guidelines for evaluating the ergonomic quality of interactive systems, because the guidelines are defined in XML files. Note that the ergonomic guidelines supported for the evaluation are simple ones that can be defined over the graphical interface controls through the proposed logical and mathematical operators. For instance, the guideline "Controls should allow individual users ease of access to media components that serve their individual needs" (ISO/DIS 14915-2; ISO 14915-2, 2001) cannot be supported in the proposed evaluation process through the evaluation widgets.
Table 2 compares the proposed approach to those presented in Table 1. Our approach makes it possible to save time in the evaluation of the UI. The evaluation process through the proposed approach is totally automated in its acquisition, analysis, and critique phases. Its main advantage, compared to existing approaches, is that it is applied in the early stages of system development. In addition, unlike most of the tools, the proposed approach does not hard-code the guidelines into the evaluation engine: they are coded externally, as XML files, and can therefore easily be modified. The proposed approach focuses on the static presentation of a UI, unlike the THEA technique, which is dedicated to asking questions and exploring interactive system designs to know how a device functions in a scenario; the proposed approach is used independently of usage scenarios. Another distinguishing aspect is that the proposed approach can be applied to WIMP, web, and mobile UIs.
6. CONCLUSION
This article presented an approach for early UI evaluation. The originality of this research lies in imbricating the evaluation into the widgets themselves. This evaluation is based on the widgets checking their conformity to a set of ergonomic guidelines. The advantage of our approach is its ease of use during the design or evaluation phase. In addition, it integrates new ergonomic guidelines without touching the widget source code. These widgets can be used for WIMP, web, and mobile UIs. The proposed approach does not require user participation in the evaluation process. It belongs to the category of tools related to evaluation through expert analysis techniques (in the sense of Dix et al., 2003).
As a perspective for future research, we will integrate more widgets into the evaluation process (in the three categories); the current widgets were developed to study the feasibility of our approach. In addition, we will improve the quality of the evaluation reports. To this end, we will use report standards, such as RDL (Microsoft Corporation, 2009) and EARL (World Wide Web Consortium, 2009), making the evaluation reports easier to manage and to understand. The evaluation report should integrate graphs for a better understanding.
Our approach identifies only the ergonomic inconsistencies within the widgets. This evaluation is done locally at the widget level, checking the interface's conformity to the guidelines widget by widget; it does not evaluate the interface as a whole. This can prove inadequate, because the interface may contain ergonomic inconsistencies even when every widget conforms to the specified guidelines. Therefore, we suggest combining our approach with an approach permitting a dynamic evaluation of the interaction between the user and the interactive system.
TABLE 2
Our Approach Compared With the Existing Tools Shown in Table 1

                              Existing Tools (20 Total)   Our Approach
Input
  Acquisition
    Parser                    16                          X
    Textual description       1
    Questionnaire             2
    Electronic informer       2
    Log file                  2
  Evaluated UI
    Web UI                    14                          X
    WIMP UI                   5                           X
    Mobile UI                 1                           X
Output
  Provided service
    Nonrespected EG           16                          X
    Correction suggestions    2                           Perspective
  Evaluation type
    Static                    17                          X
    Dynamic                   7
  Design phase
    Specification             1
    Design                    1                           X
    Implementation            0
    Final system testing      20
  Quality factor
    Accessibility             11                          X
    Utility                   4
    Usability                 14                          X
  Automation
    Acquisition               17 automatic                A
    Analysis                  19 automatic                A
    Critiques                 7 automatic                 A
  Flexibility
    EG selection              13                          A
    EG addition               9                           A
  Tool type
    Web site                  5
    Software                  16                          X
  Contributor
    User                      16
    Evaluator                 19                          X
    Designer                  3                           X

Note. UI = user interface; WIMP = Windows, Icons, Menus and Pointing devices.
One limitation of the proposed widgets is that they support only basic features. We intend to develop the proposed widgets further by taking their behavior into consideration (whether they are activated or not, the associated events, etc.). In addition, we intend to integrate artificial intelligence into the proposed widgets to allow them to communicate and handle design problems and to support the evaluation of distributed interfaces (de la Guía, Penichet, Garrido, & Albertos, 2012).

As a further perspective, we also intend to extend the evaluation process to other systems development life-cycle phases (e.g., in the case of the waterfall systems development life cycle, the implementation, verification, and maintenance phases) in order to take more aspects into consideration for the evaluation and thus obtain better evaluation results.

We also intend to extend this approach to support the evaluation of other types of user interfaces, such as post-WIMP and distributed UIs (Lepreux, Kubicki, Kolski, & Caelen, 2012; Tesoriero & Lozano, 2012).
FUNDING
This research is partially financed by the International
Campus on Safety and Intermodality in Transportation, the
Nord/Pas-de-Calais Region, the European Community, the
Regional Delegation for Research and Technology, the Ministry
of Higher Education and Research, and the CNRS.
REFERENCES
Abascal, J., Arue, M., Farjado, I., & Garay, N. (2006). An expert-based usability
evaluation of the EvalAccess web service. In R. Navarro-Prieto & J. Lorés
(Eds.), HCI related papers of Interacción (pp. 1–17). New York, NY:
Springer.
Android. (2012). User interface guide. Retrieved from http://developer.android.
com/design/get-started/principles.html
Arrue, M., Vigo, M., Aizpurua, A., & Abascal, J. (2007). Accessibility guide-
lines management framework. In C. Stephanidis (Ed.), Universal access in
human-computer interaction. Applications and services (Lecture Notes in
Computer Science, Vol. 4556, pp. 3–10). Berlin, Germany: Springer.
Bacha, F., Oliveira, K., & Abed, M. (2011). Using context modeling and domain
ontology in the design of personalized user interface. International Journal
on Computer Science and Information Systems,6, 69–94,
Bardzell, J. (2011). Interaction criticism: An introduction to the practice.
Interacting With Computers,23, 604–621.
Beirekdar, A., Vanderdonckt, J., & Noirhomme, M. (2002). A framework and
a language for usability automatic evaluation of web sites by static analysis
of HTML source code. In Proceedings of the 4th International CADUI (pp.
337–348). Dordrecht, the Netherlands: Kluwer Academic.
Boy, G. A. (2011). The handbook of human-machine interaction: A human-
centered design approach. Florida Institute of Technology, Florida Institute
for Human and Machine Cognition, and NASA Kennedy Space Center.
Brinck, T., Hermann, D., Minnebo, B., & Hakim, A. (2002). AccessEnable:
A tool for evaluating compliance with accessibility standards.
CHI’2002 Workshop on Automatically Evaluating the Usability of
Web Sites, 906–907.
Buxton, B. (2007). Sketching user experiences: Getting the design right
and the right design (Interactive technologies). Burlington, MA: Morgan
Kaufmann.
de la Guía, E., Penichet, V. R., Garrido, J. E., & Albertos, F. (2012). Design and
evaluation of a collaborative system that supports distributed user interfaces.
International Journal of Human-Computer Interaction,28, 768–774.
Delannay, G. (2003). A generic traceability tool. Retrieved from http://www.
info.fundp.ac.be/∼pth/fundpdocs/gde.pdf
Dix, A., Finlay, J., Abowd, G., & Beale, R. (2003). Human–computer interac-
tion (3rd ed.). Upper Saddle River, NJ: Prentice Hall.
Filippi, S., & Barattin, D. (2012). Generation, adoption, and tuning of
usability evaluation multimethods. International Journal of Human-
Computer Interaction,28, 406–422.
Folmer, E., & Bosch, J. (2004). Architecting for usability: A survey. Journal of
Systems and Software,70, 61–78.
Grudin, J. (1992). Utility and usability: Research issues and development
contexts. Interacting with Computers,4, 209–217.
Hearst, M. A. (2009). Search user interface. New York, NY: Cambridge
University Press.
Hvannberg, E., Law, E., & Lárusdóttir, M. (2007). Heuristic evaluation:
Comparing ways of finding and reporting usability problems. Interacting
With Computers,19, 225–240.
ISO 14915-2. (2001). DIS 14915-2, Software ergonomics for multimedia
user interfaces - Part 2: Multimedia navigation and control. Geneva,
Switzerland: International Organization for Standardization.
Ivory, M., & Hearst, M. (2001). The state of the art in automated usability
evaluation of user interfaces. ACM Computing Surveys,33, 173–197.
Juristo, N., Moreno, A. M., & Sanchez-Segura, M. I. (2007). Analysing the
impact of usability on software design. Journal of Systems and Software,
80, 1506–1516.
Konstan, J. (2011). Tutorial/HCI for recommender systems: A tutorial.
Proceedings of the 16th International Conference on Intelligent User
Interfaces, 463–464.
Kortum, P. (2009). HCI beyond the GUI: Design for haptic, speech, olfactory,
and other nontraditional interfaces (Interactive technologies). Burlington,
MA: Morgan Kaufmann.
Larman, C., & Basili, V.R. (2003). Iterative and incremental development: A
brief history. Computer,36(6), 47–56.
Leonidis, A., Antona, M., & Stephanidis, C. (2012). Rapid prototyping of adapt-
able user interfaces. International Journal of Human-Computer Interaction,
28, 213–235.
Leporini, B., & Paternò, F. (2004). Increasing usability when interacting through screen readers. International Journal Universal Access in the Information Society, 3, 57–70.
Lepreux, S., Kubicki, S., Kolski, C., & Caelen, J. (2012). From Centralized
interactive tabletops to Distributed surfaces: The Tangiget concept.
International Journal of Human-Computer Interaction,28, 709–721.
Mahatody, T., Sagar, M., & Kolski, C. (2010). State of the art on the cognitive
walkthrough method, its variants and evolutions. International Journal of
Human-Computer Interaction,26, 741–785.
Medvidovic, N., & Jakobac, V. (2005). Using software evolution to focus
architectural recovery. Automated Software Engineering,13, 225–256.
Microsoft Corporation. (2009). Report definition language specification (3rd
ed.). Redmond, WA: Author.
Monk, A. F., Carroll, J., Parker, S., & Blythe, M. (2004). Why are mobile phones
annoying? Behaviour and Information Technology,23, 33–42.
Nielsen, J. (1994). Heuristic evaluation. In J. Nielsen & R. L. Mack (Eds.),
Usability inspection methods (pp. 25–62). New York, NY: Wiley.
Pocock, S., Harrison, M., Wright, P., & Johnson, P. (2001). THEA—A tech-
nique for human error assessment early in design. In M. Hirose (Ed),
Human–computer interaction: INTERACT’01 (pp. 247–254). Amsterdam,
the Netherlands: IOS Press.
Rafla, T., Robillard, P. N., & Desmarais, M. (2006). Investigating the impact of
usability on software architecture through scenarios: A case study on Web
systems. Journal of Systems and Software,79, 415–426.
Ricca, F., & Tonella, P. (2000). Web site analysis: Structure and evolu-
tion. Proceedings of the 16th IEEE International Conference on Software
Maintenance (ICSM’00), 76–86.
Rogers, Y., Sharp, H., & Preece, J. (2011). Interaction design: Beyond human–computer interaction (3rd ed.). Hoboken, NJ: Wiley.
Salvendy, G., & Turley, L. (2002). Effectiveness of user testing and heuristic evaluation as a function of performance classification. Behaviour & Information Technology, 21, 137–143.
Sauro, J., & Dumas, J. S. (2009). Comparison of three one-question, post-
task usability questionnaires. In Proceedings of the SIGCHI Conference
on Human Factors in Computing Systems, CHI ‘09 (pp. 1599–1608). New
York, NY: ACM.
Savidis, A., & Stephanidis, C. (2006). Automated user interface engineer-
ing with a pattern reflecting programming language. Automated Software
Engineering,13, 303–339.
Sommerville, I. (2010). Software engineering (9th ed.). Reading, MA: Addison-Wesley.
Takata, Y., Nakamura, T., & Seki, H. (2004). Accessibility verification of
WWW documents by an automatic guideline verification tool. Proceedings
of the 37th Annual Hawaii International Conference on System Sciences
(HICSS’04).
Tarby, J. C., Ezzedine, H., & Kolski, C. (2008). Prevision of evaluation by
traces during the software design of interactive systems: Two approaches
compared. In A. Seffah, J. Vanderdonckt, & M. Desmarais (Eds.), Human-
centered software engineering: Architectures and models-driven integration
(pp. 257–276). New York, NY: Springer.
Tesoriero, R., & Lozano, M.D. (2012). Distributed User Interfaces: Applications
and Challenges, International Journal of Human-Computer Interaction,
28(11), 697–699. doi:10.1080/10447318.2012.715048
Trabelsi, A., & Ezzedine, H. (2013). Evaluation of an Information Assistance
System based on an agent-based architecture in transportation domain: first
results. International Journal of Computers, Communications and Control,
8(2), 320–333.
Tran, C. D., Ezzedine, H., & Kolski, C. (2008). Evaluation of agent-based inter-
active systems: proposal of an electronic informer using Petri Nets. Journal
of Universal Computer Science,14, 3202–3216.
UsableNet Inc. (2004). LIFT for Dreamweaver Nielsen Norman Group edition.
Retrieved from http://www.usablenet.com/productsservices/lfdnng/lfdnng.
html
Vanderdonckt, J. (1999). Development milestones towards a tool for working
with guidelines. Interacting With Computers,12, 81–118.
van Velsen, L., van der Geest, T., & Klaassen, R. (2011). Identifying usability
issues for personalization during formative evaluations: A comparison of
three methods. International Journal of Human-Computer Interaction,27,
670–698.
Watzman, S. (2002). Visual design principles for usable interfaces. In A. Sears
& J. A. Jacko (Eds.), The human–computer interaction handbook (pp.
263–285). Boca Raton, FL: CRC Press.
Wharton, C., Bradford, J., Jeffries, J., & Franzke, M. (1992). Applying cogni-
tive walkthroughs to more complex user interfaces: Experiences, issues and
recommendations. Proceedings of CHI ’92, 381–388.
World Wide Web Consortium, (2009). Evaluation and Report Language 1.0.
Retrieved from http://www.w3.org/TR/EARL10/
Wright, P., Blythe, M., McCarthy, J., Gilroy, S., & Harrison, M. (2006). User
experience and the idea of design. In Proceedings of the 12th International
Conference on Interactive Systems: Design, Specification, and Verification
(pp. 1–14). Berlin, Germany: Springer.
ABOUT THE AUTHORS
Selem Charfi obtained his Ph.D. at the University of
Valenciennes (France) in 2013. His research concerns human–
computer interaction (HCI), agent-based architecture models of
interactive systems, software engineering, and HCI evaluation,
with application to the supervision of transport systems. He is
the coauthor of several papers in international conferences. He
is involved in several research networks.
Abdelwaheb Trabelsi obtained his Ph.D. at the University
of Valenciennes (France) in 2006. He is an assistant profes-
sor in Computer Science at the University of Sfax (Tunisia).
A member of the LOGIC laboratory, he is involved in several
research networks and projects. He specializes in HCI and SE
for interactive systems.
Houcine Ezzedine obtained his Ph.D. in 1985. He is pro-
fessor of Industrial Computer Science at the University of
Valenciennes (France) and member of the “Human-Computer
Interaction and Automated Reasoning” research group in the
LAMIH. He is involved in several research networks, projects,
and associations. He specializes in human–computer interaction
and software engineering for interactive systems.
Christophe Kolski obtained his Ph.D. in 1989. He spe-
cializes in human–computer interaction, software engineer-
ing for interactive system design and evaluation, adap-
tive User Interface, tangible and distributed interaction.
He is a professor of Computer Science at the University
of Valenciennes (France), and a member of the LAMIH
laboratory.