Evaluating Usability Evaluation Methods: Criteria, Method and a Case Study

DOI: 10.1007/978-3-540-73105-4_63. In: J. Jacko (ed.), Human-Computer Interaction, Part I, HCII 2007, LNCS 4550, Springer, pp. 569-578.


The paper proposes an approach to comparative usability evaluation that incorporates relevant criteria identified in previous work. It applies the proposed approach to a case study: a comparative evaluation of an academic website employing four widely-used usability evaluation methods (UEMs): heuristic evaluation, cognitive walkthroughs, think-aloud protocol and co-discovery learning.
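The criteria the paper builds on include the UEM-comparison measures of Hartson et al. (2001), where a method's thoroughness and validity are simple ratios over sets of usability problems. A minimal sketch of these computations; the problem IDs, method names, and numbers below are invented for illustration, not taken from the case study:

```python
# Hypothetical sketch of the Hartson et al. (2001) UEM-comparison
# metrics, computed as set ratios over usability problem identifiers.
# All problem sets below are invented for illustration.

def uem_metrics(found: set, real: set) -> dict:
    """Compare the problems one UEM reported against a reference set of
    real problems (e.g. the union of all methods' confirmed findings)."""
    real_found = found & real                      # correctly identified problems
    thoroughness = len(real_found) / len(real)     # share of real problems found
    validity = len(real_found) / len(found)        # share of findings that are real
    return {
        "thoroughness": thoroughness,
        "validity": validity,
        "effectiveness": thoroughness * validity,  # combined figure of merit
    }

# Invented example: two methods evaluated against 10 confirmed problems.
real = {f"P{i}" for i in range(1, 11)}
heuristic = {"P1", "P2", "P3", "P4", "F1", "F2"}   # F* = false positives
think_aloud = {"P1", "P2", "P5", "P6", "P7"}

print(uem_metrics(heuristic, real))    # thoroughness 0.4, validity ~0.67
print(uem_metrics(think_aloud, real))  # thoroughness 0.5, validity 1.0
```

Because each metric is a plain ratio over problem sets, the same function can score any of the four UEMs in the case study once their findings are coded against a common reference list.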



Available from: Panayiotis Koutsabasis
    • "Usability evaluation method refers to any method or procedure used to undertake usability evaluation of a specific application's user interface to highlight usability trouble areas. Some UEMs provide additional output, such as usability problem reports, to categorise usability issues according to type, to map issues to causative features from within the system's design, or to recommend alternative design solutions [Hartson et al., 2001]."
    ABSTRACT: Traditional in-lab usability testing has long been the standard method for evaluating and improving the usability of software interfaces. In-lab testing, though effective, has drawbacks such as the unavailability of representative end-users, high testing costs, and the difficulty of reproducing a user's everyday environment. To overcome these issues, various alternative usability evaluation methods (UEMs) have been developed over the past two decades, one of the most commonly used being remote usability testing. Ever since remote usability testing was introduced fourteen years ago, its effectiveness has been judged against that of traditional in-lab testing. However, there is a distinct lack of research exploring the effectiveness of the various modes of remote usability testing. This research conducted a comparative study of two remote usability testing methods, synchronous and asynchronous, through an evaluation of a website on three points of comparison: the number and type of problems discovered, overall task performance, and test participants' satisfaction. The results showed that the synchronous method performed better than the asynchronous method in the number and types of usability problems discovered, although no statistically significant differences were found. Participants in the synchronous test were notably more successful than those in the asynchronous test in completing the test tasks; however, the asynchronous participants were significantly quicker in performing those tasks. Synchronous participants also reported slightly higher satisfaction with the targeted website, whereas asynchronous participants were considerably more satisfied with the remote method in which they had participated. The paper concludes with a set of recommendations for conducting such research.
    Full-text · Article · Jul 2013
    • "Software developers are struggling to manage software complexities while embedded system developers are trying to manage hardware or firmware complexities (Cavet et al., 2007; Gulliksen, 2007; Kim et al., 2007; Navarre et al., 2011). However, physical interface and interaction complexities are often neglected, resulting in the development of complex and poorly interactive embedded systems (Coskun and Grabowski, 2005; Dix et al., 2009; Ferscha et al., 2007; Hare et al., 2009; Koutsabasis et al., 2007; Majid et al., 2011; Meier et al., 2011; Reddy et al., 2010). Although all these development efforts aim to facilitate the user, the user ends up with products that are difficult to work with."
    ABSTRACT: Embedded systems are becoming more significant in our daily lives with the advent of ubiquitous computing. The increasing demand for multifarious functionality, among other factors, has led development to focus on internal software issues, and this neglect of the interaction aspects of the physical interface creates interaction complexities for the user. This work evaluates, compares, and highlights the significance of the physicality aspects of embedded system interfaces using five subjects: a washing machine, a camera, an oven, a sound system, and an MP3 player. The quantitative evaluation approach enables a simple investigation by assigning numeric values to each aspect. The analysis highlights the significance of exposed state, tangible transition, and inverse action over other physicality aspects. This study is especially valuable for embedded system developers who may lack exposure or expertise in Human-Computer Interaction or its sub-field, Physicality. Managing and incorporating physicality aspects in embedded systems is a key factor in producing naturally interactive products.
    Full-text · Article · Mar 2013
    • "In order that usability evaluation methods (UEMs) and their evaluation can be fully comprehended, one must understand usability evaluation. Koutsabasis et al. (2007) defined usability evaluation as the appraisal of a particular application's user interface, an interaction metaphor or method, or an input device, to determine its actual or likely usability. Overall, usability evaluation can be split "

    Full-text · Article · Jan 2013