Intl. Journal of Human–Computer Interaction, 31: 555–556, 2015
Copyright © Taylor & Francis Group, LLC
ISSN: 1044-7318 print / 1532-7590 online
DOI: 10.1080/10447318.2015.1065689
Introduction to the Special Issue on Usability and User Experience: Methodological Evolution
James R. Lewis
IBM Corporation, Delray Beach, Florida, USA
This special issue focuses on the evolution of development and assessment methodologies related to usability and user experience. The five articles cover a diverse range of topics: a comparison of moderated and unmoderated think-aloud usability sessions, a new usability inspection method based on concept mapping, an analysis of the fitness of scrum (agile) and kanban (lean) development methodologies to incorporate user experience methodologies, an exploration of the relation between expectations and user experience, and a case study describing difficulties encountered when assessing usability "in the wild." These articles should prove to be of value to practitioners and researchers with an interest in the evolution of usability and user experience methodologies.
1. INTRODUCTION
This is the second of two special issues of the journal this
year devoted to the topics of usability and user experience.
The first focused on the application of psychometrics to the
development and evaluation of standardized instruments for
the assessment of perceived usability and user experience. The
articles in this issue explore other aspects of the evolution
of methodologies used by user experience professionals
during product design, development, and evaluation.
Evolution occurs as a consequence of the interaction between changing environments and natural selection, which is itself made possible by the process of mutation. Over the past decade, usability and user experience practitioners have encountered "environmental" changes as a result of the following:

• The introduction of unmoderated usability testing methods (Albert, Tullis, & Tedesco, 2010).
• The adoption in some development settings of agile and lean methodologies—methodologies in which it can be difficult to incorporate user experience input and assessments (Stellman & Greene, 2014).
• The expansion of the concerns of user researchers beyond the domain of classical usability to broader conceptions of the user experience (Diefenbach, Kolb, & Hassenzahl, 2014).
The contributors to this special issue have conducted research that informs our current knowledge of the consequences of these evolutionary pressures and points the way to appropriate methodological adaptations that should be of value to both researchers and practitioners.
2. CONTRIBUTIONS TO THIS ISSUE
2.1. What Do Thinking-Aloud Participants Say? A Comparison of Moderated and Unmoderated Usability Sessions
The thinking-aloud (TA) method has its roots in the cognitive psychology of the early 1980s (Lewis, 2012, 2014). The decades since its introduction have led to variation in TA practice. One distinction that researchers have examined is that of strict versus relaxed TA, where "relaxed" refers not to the mental state of participants but rather to relaxing the procedures of the strict TA protocols of Ericsson and Simon (1980), with consequent variation in explanations to participants about how to do TA, practice periods, styles of reminding participants to TA, prompting intervals, and styles of intervention (Boren & Ramey, 2000). Another line of research has examined differences in the verbal data collected in moderated and unmoderated usability studies—research that has produced inconsistent findings (Lewis, 2012, 2014). In this issue, Hertzum, Borlund, and Kristoffersen connect these lines of research with their comparison of user verbalizations in moderated and unmoderated usability studies using relaxed TA.
2.2. Concept Mapping Usability Evaluation: An Exploratory Study of a New Usability Inspection Method
Usability inspection methods have roots in the 1980s (Nielsen & Mack, 1994) and have evolved over the decades into the processes of expert review, heuristic review, and cognitive walkthrough. In this issue, Bias, Moon, and Hoffman propose a new inspection method based on the methodology of Concept Mapping and provide an initial assessment of its utility when adapted for usability inspection.
2.3. Whose Experience Do We Care About? Analysis of the Fitness of Scrum and Kanban to User Experience
Agile and lean methodologies have made their way into mainstream software development. Despite the purported advantages of these methods, it can be difficult to incorporate into them the user experience (UX) practices that are part of more traditional development methodologies. Drawing on interviews and analyses of the fitness of agile (scrum) and lean (kanban) development to incorporate UX methodologies, Law and Lárusdóttir describe the strengths and weaknesses of scrum and kanban and discuss the potential consequences of developer confusion regarding the "user" and the "customer" when working in the context of these customer-centric development methodologies.
2.4. An Exploration of the Relation Between Expectations and User Experience
Currently, there are multiple and incompatible theories about the relationship between user expectations before an experience and the effect of expectations on users' assessments of the quality of experiences after they have happened. Michalco, Simonsen, and Hornbæk report the results of experiments investigating how expectations—in particular, expectation disconfirmations—affect UX measures.
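For readers new to the construct, the following is a minimal formal sketch of how disconfirmation is commonly defined in classic expectation-disconfirmation models from consumer research (e.g., Oliver's work); the symbols P, E, D, and S and the weights below are illustrative assumptions, not notation taken from the article:

\[
\begin{aligned}
  D &= P - E, && \text{disconfirmation: perceived performance $P$ minus prior expectation $E$}\\
  S &= \beta_0 + \beta_1 P + \beta_2 D, && \text{post-use assessment $S$ (e.g., a UX rating) modeled from $P$ and $D$}
\end{aligned}
\]

In this reading, positive disconfirmation (D > 0, the experience exceeds expectations) should raise post-use ratings and negative disconfirmation should lower them; competing theories differ in the relative weights, and even the signs, they assign to expectations, performance, and the gap between them.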
2.5. Challenges to Assessing Usability in the Wild: A Case Study
Lindgaard provides a case study of her experience spending 3 months in an Australian manufacturing plant during the transition from an older to a newer system for plant management. Unlike other contexts of use in which users have a choice regarding the adoption of new technology, in this business-to-employee setting the adoption of the new system was mandatory, and there were serious trust and communication failures between management and the affected plant employees. No lives were lost, but the consequences of management downplaying the likely severity of several potential usability issues turned out to be quite costly.
3. AFTERWORD
The articles in this special issue are truly multinational, with
contributions from the United States, Europe, and Australia
and from a combination of university researchers and industrial
practitioners. The key topics—think-aloud, usability inspection,
integration with software development, effect of expectation,
and complexity of real-world consulting—should resonate with
most user experience practitioners. I hope that both practitioners
and researchers will benefit from the new research and insights
presented in this special issue of the journal.
ACKNOWLEDGEMENTS
Editing a special issue of the journal is an effortful but deeply
rewarding experience. I sincerely thank Gavriel Salvendy for
giving me the editorial freedom to seek out and work with the
talented researchers who graciously volunteered their time and
effort as authors and reviewers for the contents of this (and the
previous) issue of the journal.
Address correspondence to James R. Lewis, 7329 Serrano Terrace, Delray Beach, FL 33446, USA. E-mail: jimlewis@us.ibm.com
REFERENCES
Albert, W., Tullis, T., & Tedesco, D. (2010). Beyond the usability lab: Conducting large-scale online user experience studies. Burlington, MA: Morgan Kaufmann.
Boren, T., & Ramey, J. (2000). Thinking aloud: Reconciling theory and practice. IEEE Transactions on Professional Communication, 43, 261–278.
Diefenbach, S., Kolb, N., & Hassenzahl, M. (2014). The "hedonic" in human-computer interaction: History, contributions, and future research directions. In Proceedings of the 2014 Conference on Designing Interactive Systems - DIS '14 (pp. 305–314). New York, NY: ACM.
Ericsson, K. A., & Simon, H. A. (1980). Verbal reports as data. Psychological Review, 87, 215–251.
Lewis, J. R. (2012). Usability testing. In G. Salvendy (Ed.), Handbook of human factors and ergonomics (4th ed., pp. 1267–1312). New York, NY: Wiley.
Lewis, J. R. (2014). Usability: Lessons learned ... and yet to be learned. International Journal of Human-Computer Interaction, 30, 663–684.
Nielsen, J., & Mack, R. L. (1994). Usability inspection methods. New York, NY: Wiley.
Stellman, A., & Greene, J. (2014). Learning agile: Understanding scrum, XP, lean, and kanban. Sebastopol, CA: O'Reilly Media.