Towards Practical User Experience Evaluation Methods
Kaisa Väänänen-Vainio-Mattila
Tampere University of Technology
Human-Centered Technology
(IHTE)
Korkeakoulunkatu 6
33720 Tampere, Finland
kaisa.vaananen-vainio-mattila@tut.fi
Virpi Roto
Nokia Research Center
P.O.Box 407
00045 Nokia Group, Finland
virpi.roto@nokia.com
Marc Hassenzahl
University of Koblenz-Landau
Economic Psychology and
Human-Computer Interaction,
Campus Landau, Im Fort 7
76829 Landau, Germany
hassenzahl@uni-landau.de
ABSTRACT
In the last decade, User eXperience (UX) research in the
academic community has produced a multitude of UX
models and frameworks. These models address the key
issues of UX: its subjective, highly situated and dynamic
nature, as well as the pragmatic and hedonic factors leading
to UX. At the same time, industry is adopting the UX term,
but practices in product development are still largely
based on traditional usability methods. In this paper we
discuss the need for pragmatic UX evaluation methods and
how such methods can be used in product development in
industry. We conclude that even though the definition of UX
still needs work, many of the methods from HCI and other
disciplines can be adapted to the particular aspects of UX
evaluation. The paper is partly based on the results of the
User Experience Evaluation Methods in Product
Development (UXEM) workshop at CHI 2008.
Author Keywords
User experience, evaluation methods, product development.
ACM Classification Keywords
H5.m. Information interfaces and presentation (e.g., HCI):
Miscellaneous.
INTRODUCTION
Companies in many industrial sectors have become aware
that designing products and services is not enough;
designing experiences is the next level of competition [19,
22]. Product development is no longer only about
implementing features and testing their usability, but about
designing products that are enjoyable and support
fundamental human needs and values. Thus, experience
should be a key concern of product development.
There are many definitions of UX, but no agreed-upon one
[16]. However, even the most diverse definitions of user
experience all agree that it is more than just a product's
usefulness and usability [2,6,17,18,23,26]. In addition, they
stress the subjective nature of UX: UX is affected by the
user’s internal state, the context, and perceptions of the
product [2,6,17].
However, definitions alone are not sufficient for the proper
consideration of UX throughout product development.
Product development in its current form needs tools from
the very early concept to market introduction: UX must be
assessable and manageable. An important element of this is
a set of evaluation methods focused on UX.
Apparently, there is a gap between the research
community's and the product developers' understanding of
what UX is and how it should be evaluated (see Figure 1).
Figure 1. Currently, academic UX research and industrial
UX development focus on different issues.
As an attempt to close the gap, we organised a workshop on
UX evaluation methods for product development (UXEM)
[25] in the context of the CHI 2008 conference on human
factors in computing. The aim of the workshop was to
identify truly experiential evaluation methods (in the
research sense) and discuss their applicability and
practicability in engineering-driven product development.
In addition, we hoped for lively and fruitful discussions
between academics and practitioners about UX itself and
evaluation methods. In this paper, we present the central
findings of this workshop.
[Figure 1 contents: academic UX research (theories, models,
frameworks) focuses on hedonic aspects and emotions,
co-experience, the dynamics of experience, etc.; practical UX
work in industrial product development focuses on
functionality and usability, novelty, the product life cycle,
etc. A gap separates the two.]
CURRENT STATE OF UX EVALUATION METHODS
Traditionally, technology-oriented companies have tested
their products against technical and usability requirements.
Experiential aspects were predominantly the focus of the
marketing department, which tried to create a certain image
of a product through advertising. For example, when the
Internet became an important channel for communicating the
brand and image, technical and usability evaluations of
Web sites needed to be integrated with more experiential
goals [4,15]. Today, industry is in need of user experience
evaluation methods for a wide variety of products and
services.
User-centered development (UCD) is still the key to
designing for good user experiences. We must understand
users’ needs and values first, before designing and
evaluating solutions. Several methods exist for
understanding users and generating ideas in the early phases
of concept design, such as Probes [5] or Contextual Inquiry
[3]. Fewer methods are available for concept evaluation
that would assess the experiential aspects of the chosen
concept.
EXPERIENTIAL EVALUATION METHODS
A number of evaluation methods were presented and
discussed in the UXEM workshop. However, only a few
were "experiential" in the sense of going beyond traditional
usability methods by emphasizing the subjective, positive
and dynamic nature of UX.
Isomursu's "experimental pilots" [11], for example, stress
the importance of evaluating before (i.e., expectation),
while (i.e., experience) and after product use (i.e.,
judgment). This acknowledges the subjective and dynamic
nature of UX: expectations influence experience, experience
influences retrospective judgments, and these judgments in
turn set the stage for further expectations, and so forth. In
addition, Isomursu points to the importance of creating an
evaluation setting that resembles an actual use setting. UX
is highly situated; its assessment requires a
strong focus on situational aspects. Roto and colleagues as
well as Hoonhout [21,10] stress the importance of positive
emotional responses to products and embrace the notion
that task effectiveness and efficiency (i.e., usability) might
not be the only source of positive emotions. Their focus is
on the early phases of development, where idea creation and
evaluation are closely linked and short-cycled.
Hole and Williams suggest "emotion sampling" as an
evaluation method [9]. While using a product, people are
repeatedly prompted to assess their current emotional state
by going through a number of questions. This approach
takes UX evaluation a step further, by focusing on the
experience itself instead of the product. However, in the
context of product development additional steps would
have to be taken to establish a causal link between a
positive experience and the product: how does the product
affect the measured experience? Bear in mind that product
evaluation is not interested in experiences per se, but in
experiences caused by the product at hand.
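To make the emotion-sampling idea concrete, the following is a minimal sketch of how such a protocol might be instrumented, not the authors' actual tool: the participant is prompted at random intervals during a session with a short, fixed question set. The question wording, the gap bounds, and the simulated participant are all hypothetical.

```python
import random
from dataclasses import dataclass

# Hypothetical question set; real emotion-sampling studies would use
# validated items rather than these placeholder wordings.
QUESTIONS = [
    "How pleasant do you feel right now? (1-7)",
    "How excited do you feel right now? (1-7)",
]

@dataclass
class EmotionSample:
    minute: int     # time into the session when the prompt fired
    ratings: list   # one 1-7 rating per question

def schedule_prompts(session_minutes, min_gap=5, max_gap=15, rng=None):
    """Pick random prompt times covering the session, at least
    `min_gap` minutes apart so prompts do not cluster."""
    rng = rng or random.Random(0)
    times, t = [], 0
    while True:
        t += rng.randint(min_gap, max_gap)
        if t >= session_minutes:
            return times
        times.append(t)

def run_session(session_minutes, answer):
    """`answer(minute, question) -> rating` stands in for the real
    prompt UI shown to the participant."""
    return [EmotionSample(m, [answer(m, q) for q in QUESTIONS])
            for m in schedule_prompts(session_minutes)]

# Example: a simulated participant whose mood drifts upward over an
# hour of product use.
samples = run_session(60, answer=lambda m, q: min(7, 3 + m // 20))
```

The repeated time-stamped samples are what distinguish this from a single post-hoc questionnaire: they capture the experience while it unfolds.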
Two further methods presented in the workshop (Repertory
Grid, Multiple Sorting) [1,12] make use of Kelly’s
"personal construct psychology" [e.g., 13]. Basically, these
are methods to capture the personal meaning of objects.
They have a strong procedural structure, but are open to any
sort of meaning, whether pragmatic or hedonic.
Interestingly, the methods derived from Kelly’s theory tend
to provide a tool for both analysis and evaluation [7]. The
results give an idea of the themes, topics, and concerns
people have with a particular group of products (i.e., content). At
the same time, all positive and negative feelings (i.e.,
evaluations) towards topics and products become apparent.
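A repertory grid can be pictured as a small matrix: products (elements) rated on bipolar constructs elicited from the participant. The sketch below is purely illustrative, with hypothetical products, constructs, and ratings; it shows one simple analysis, namely asking which construct discriminates most between the products.

```python
# Illustrative repertory grid data, not from the cited studies:
# three hypothetical products rated on two participant-elicited
# bipolar constructs, on a 1-5 scale (1 = left pole, 5 = right pole).
elements = ["phone A", "phone B", "phone C"]
constructs = [
    ("playful", "serious"),
    ("simple", "cluttered"),
]
ratings = [
    [1, 4, 5],   # playful .. serious
    [2, 2, 3],   # simple .. cluttered
]

def construct_spread(row):
    """Range of ratings across products: a rough measure of how much
    a construct discriminates between the rated products."""
    return max(row) - min(row)

spreads = {c: construct_spread(r) for c, r in zip(constructs, ratings)}
most_discriminating = max(spreads, key=spreads.get)
```

Here the grid captures both content (the constructs this person uses to give the products meaning) and evaluation (how each product sits between the poles), mirroring the dual analysis/evaluation role noted above.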
Finally, Heimonen and colleagues [8] use "forced choice"
to evaluate the "desirability" of a product. This method
highlights another potential feature of UX, which may pose
additional requirements for UX evaluation methods: there
might be drivers of product appeal and choice that are
not obvious to the users themselves. Tractinsky and Zmiri
[24], for example, found hedonic aspects (e.g., symbolism,
beauty) to be predictive of product choice. When asked,
however, participants gave predominantly pragmatic
reasons for the choice. Note that the majority of the
"experiential" methods discussed so far rely on people's self
report. This might be misleading, given that experiential
aspects are hard to justify or even to verbalize. In other
words, choice might be driven by criteria not readily
available to the people choosing. Forced choice might bring
this out.
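The appeal of forced choice is that aggregation needs no verbal justification from participants: choices alone yield a ranking. A minimal sketch, with hypothetical design names and simulated participants, might tally pairwise choices into a per-design "win share":

```python
from collections import Counter
from itertools import combinations

def desirability_scores(choices, designs):
    """choices: (winner, loser) pairs from forced-choice trials.
    Returns each design's share of the comparisons it appeared in
    and won; no reasons from participants are needed."""
    wins = Counter(w for w, _ in choices)
    appearances = Counter()
    for w, l in choices:
        appearances[w] += 1
        appearances[l] += 1
    return {d: wins[d] / appearances[d] if appearances[d] else 0.0
            for d in designs}

# Example: three hypothetical search UI variants; ten simulated
# participants must pick one from every pair, even when they could
# not articulate why.
designs = ["grid", "list", "carousel"]
order = {"grid": 0, "list": 1, "carousel": 2}  # latent preference
trials = []
for _ in range(10):
    for a, b in combinations(designs, 2):
        winner = a if order[a] < order[b] else b
        trials.append((winner, a if winner == b else b))

scores = desirability_scores(trials, designs)
```

Because the ranking comes from behavior (choices) rather than stated reasons, it can surface drivers of appeal that participants' verbal self-reports would miss.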
All in all, a number of interesting approaches to measure
UX were suggested and discussed in the workshop. All of
them addressed at least one key feature of UX, thereby
demonstrating that "experiential" evaluation is possible.
More work, however, has to be done to integrate methods to
capture more aspects of UX simultaneously. In addition,
methods need to be adapted to the requirements of
evaluation in an industrial setting. So far, most of the
suggested methods are still demanding in terms of the skills
and time they require.
REQUIREMENTS FOR PRACTICAL UX EVALUATION
METHODS
In industry, user experience evaluation is done in order to
improve a product. Product development is often a hectic
process and the resources for UX evaluation scarce.
Evaluating early and often is recommended: the earlier the
evaluations can be done, the easier it is to steer the product
in the right direction.
The early phases of product development are challenging
for UX evaluation, since at that point, the material available
about the concept may be hard for participants to understand
and assess [10, 21]. In the early phases, it is not
possible to test the non-functional concept in the real
context of use, although user experience is tied to the
context [6]. We need good ideas for simulating real context
in a lab [14]. Later on, when prototypes are stable enough
to be handed to field study participants, UX evaluation
becomes much easier. The most reliable UX evaluation data
comes from people who have actually purchased and used a
product on the market. This feedback helps improve future
versions of the product.
In summary, the UXEM workshop presentations and group
work produced the following requirements for practical
UX evaluation methods:
- Valid, reliable, and repeatable: for managing UX even in
  a large company
- Fast, lightweight, and cost-efficient: for fast-paced
  iterative development
- Requiring a low level of expertise: for easy deployment
  (no extensive training needed)
- Applicable to various types of products: for comparisons
  and trend monitoring
- Applicable to concept ideas, prototypes, and products:
  for following how UX develops during the process
- Suitable for different target user groups: for a fair
  outcome
- Suitable for different product lifecycle phases: for
  improving, e.g., the taking-into-use and repurchase UX
- Producing comparable output (quantitative and
  qualitative): for UX target setting and iterative
  improvement
- Useful for the different in-house stakeholders: as UX is
  multidisciplinary, many company departments are
  interested in UX evaluation results.
Clearly, it is not possible to have just one method that
would fulfill all the requirements above. Some of the
requirements may be contradictory, or even unrealistic. For
example, a method which is very lightweight may not
necessarily be totally reliable. Also, it might be challenging,
if not impossible, to find a single method that is suitable for
different types of products, product development phases,
and product lifecycle phases. We thus need a toolkit of
experiential methods for different purposes.
In the UXEM workshop, we noticed that there is not always
a clear line between design and evaluation methods,
since evaluating current solutions often gives ideas for new
ones. On the other hand, companies do need evaluation
methods that focus on efficiently producing UX scores or a
list of pros and cons for a pool of concept ideas.
After the product specification has been approved, the
primary interest is to check that the user experience
matches the original goal. In this phase, the methods
applied are clearly about evaluation, not about creating new
ideas.
DISCUSSION AND CONCLUSIONS
Obviously, applying and developing methods for UX
evaluation requires an understanding of what UX actually
is. This is still far from being settled. Although everybody
in the workshop agreed that the UX perspective adds
something to the traditional usability perspective, it was
hard to even put a name to this added component: Is it
"emotional", "experiential" or "hedonic"? The lack of a
shared understanding on what UX means was identified as
one of the major problems of UX evaluation in its current
state. As long as we do not agree, or at least make a
decision, on what we are looking for, we cannot pose the
right questions. Without an idea of the appropriate
questions, selecting a method is futile. Nevertheless, once a
decision is made, for example, to look at the emotional
consequences of product use, there seems to be a wealth of
methods already in use within HCI or from other disciplines
that could be adapted to this particular aspect of evaluation.
Working with UX evaluation is a double task: We have to
understand UX and make it manageable and measurable.
Given the fruitful discussions in the workshop, a practice-
driven development of the UX concept may be a valid road
to a better understanding of UX. "UX is what we measure"
might be an approach as long as there is no accepted
definition of UX at hand. However, this approach requires
some reflection on the evaluation needs and practices. By
discussing the implicit notions embedded in the evaluation
requirements and methods, we might be able to better
articulate what UX actually should be. The UXEM
workshop and this paper hopefully open up the discussion.
ACKNOWLEDGMENTS
We thank all participants of the UXEM workshop: Jim
Hudson, Jon Innes, Nigel Bevan, Minna Isomursu, Pekka
Ketola, Susan Huotari, Jettie Hoonhout, Audrius
Jurgelionis, Sylvia Barnard, Eva Wischnewski, Cecilia
Oyugi, Tomi Heimonen, Anne Aula, Linda Hole, Oliver
Williams, Ali al-Azzawi, David Frohlich, Heather Vaughn,
Hannu Koskela, Elaine M. Raybourn, Jean-Bernard
Martens, and Evangelos Karapanos. We also thank the
programme committee of UXEM: Anne Aula, Katja
Battarbee, Michael "Mitch" Hatscher, Andreas Hauser, Jon
Innes, Titti Kallio, Gitte Lindgaard, Kees Overbeeke, and
Rainer Wessler.
REFERENCES
1. al-Azzawi, A., Frohlich, D. and Wilson, M. User
Experience: A Multiple Sorting Method based on
Personal Construct Theory, Proc. of UXEM,
www.cs.tut.fi/ihte/CHI08_workshop/papers.shtml
2. Alben, L. (1996), Quality of Experience: Defining the
Criteria for Effective Interaction Design. Interactions,
3, 3, pp. 11-15.
3. Beyer, H. & Holtzblatt, K. (1998). Contextual Design.
Defining Customer-Centered Systems. San Francisco:
Morgan Kaufmann.
4. Ellis, P., Ellis, S. (2001), Measuring User Experience.
New Architect 6, 2 (2001), pp. 29-31.
5. Gaver, W. W., Boucher, A., Pennington, S., & Walker,
B. (2004). Cultural probes and the value of uncertainty.
Interactions.
6. Hassenzahl, M., Tractinsky, N. (2006), User
Experience – a Research Agenda. Behaviour and
Information Technology, Vol. 25, No. 2, March-April
2006, pp. 91-97.
7. Hassenzahl, M. & Wessler, R. (2000). Capturing
design space from a user perspective: the Repertory
Grid Technique revisited. International Journal of
Human-Computer Interaction, 12, 441-459.
8. Heimonen, T., Aula, A., Hutchinson, H. and Granka, L.
Comparing the User Experience of Search User
Interface Designs, Proc. of UXEM,
www.cs.tut.fi/ihte/CHI08_workshop/papers.shtml
9. Hole, L. and Williams, O. Emotion Sampling and the
Product Development Life Cycle, Proc. of UXEM,
www.cs.tut.fi/ihte/CHI08_workshop/papers.shtml
10. Hoonhout, J. Let's start to Create a Fun Product:
Where Is My Toolbox?, Proc. of UXEM,
www.cs.tut.fi/ihte/CHI08_workshop/papers.shtml
11. Isomursu, M. User experience evaluation with
experimental pilots, Proc. of UXEM,
www.cs.tut.fi/ihte/CHI08_workshop/papers.shtml
12. Karapanos, E. and Martens, J.-B. The quantitative side
of the Repertory Grid Technique: some concerns, Proc.
of UXEM,
www.cs.tut.fi/ihte/CHI08_workshop/papers.shtml
13. Kelly, G. A. (1963). A theory of personality. The
psychology of personal constructs, paperback. New
York: Norton.
14. Kozlow, S., Rameckers, L., Schots, P. (2007). People
Research for Experience Design. Philips white paper.
http://philipsdesign.trimm.nl/People_Reseach_and_Experience_design.pdf
15. Kuniavsky, M. (2003), Observing The User Experience
– A Practitioner’s Guide to User Research. Morgan
Kaufmann Publishers, Elsevier Science, USA
16. Law, E., Roto, V., Vermeeren, A., Kort, J., &
Hassenzahl, M. (2008). Towards a Shared Definition
for User Experience. Special Interest Group in CHI’08.
Proc. Human Factors in Computing Systems 2008, pp.
2395-2398.
17. Mäkelä, A., Fulton Suri, J. (2001), Supporting Users’
Creativity: Design to Induce Pleasurable Experiences.
Proceedings of the International Conference on
Affective Human Factors Design, pp. 387-394.
18. Nielsen-Norman group (online). User Experience –
Our Definition.
http://www.nngroup.com/about/userexperience.html
19. Nokia Corporation (2005), Inspired Human
Technology. White paper available at
http://www.nokia.com/NOKIA_COM_1/About_Nokia/Press/White_Papers/pdf_files/backgrounder_inspired_human_technology.pdf
20. Norman, D.A., Miller, J., and Henderson, A. (1995)
What you see, some of what's in the future, and how
we go about doing it: HI at Apple Computer. Proc. CHI
1995, ACM Press (1995), 155.
21. Roto, V., Ketola, P. and Huotari, S. User Experience
Evaluation in Nokia, Proc. of UXEM,
www.cs.tut.fi/ihte/CHI08_workshop/papers.shtml
22. Seidel, M., Loch, C., Chahil, S. (2005). Quo Vadis,
Automotive Industry? A Vision of Possible Industry
Transformations. European Management Journal, Vol.
23, No. 4, pp. 439–449, 2005
23. Shedroff, N. An Evolving Glossary of Experience
Design, online glossary at
http://www.nathan.com/ed/glossary/
24. Tractinsky, N. & Zmiri, D. (2006). Exploring attributes
of skins as potential antecedents of emotion in HCI. In
P.Fishwick (Ed.), Aesthetic computing (pp. 405-421).
Cambridge, MA: MIT Press.
25. Väänänen-Vainio-Mattila, K., Roto, V., & Hassenzahl,
M. (2008). Now Let's Do It in Practice: User
Experience Evaluation Methods for Product
Development. Workshop in CHI’08. Proc. Human
Factors in Computing Systems, pp. 3961-3964.
http://www.cs.tut.fi/ihte/CHI08_workshop/index.shtml
26. UPA (Usability Professionals’ Association): “Usability
Body of Knowledge”,
http://www.usabilitybok.org/glossary (22.5.2008)