Triangulation in UX Studies: Learning from Experience

Ingrid Pettersson
Volvo Car Group, Sweden
ingrid.pettersson@volvocars.com

Andreas Riener
University of Applied Sciences Ingolstadt (THI), Germany
andreas.riener@thi.de

Anna-Katharina Frison
University of Applied Sciences Ingolstadt (THI), Germany
anna-katharina.frison@thi.de

Jesper Nolhage
Volvo Car Group, Sweden
jesper.nolhage@volvocars.com

Florian Lachner
University of Munich (LMU), Germany
florian.lachner@ifi.lmu.de
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.
Copyright is held by the owner/author(s).
DIS'17 Companion, June 10-14, 2017, Edinburgh, United Kingdom
ACM 978-1-4503-4991-8/17/06.
http://dx.doi.org/10.1145/3064857.3064858
Abstract
While the consideration of User Experience (UX) has become embedded in research and design processes, UX evaluation remains a challenging and much-debated area for both researchers in academia and practitioners in industry. A variety of evaluation methods have been developed or adapted from related fields, building on identified methodological gaps. Although the importance of mixed methods and data-driven approaches for obtaining well-founded results in studies of interactive systems has been emphasized numerous times, there is still a lack of established understanding and recommendations on when, and in which ways, to combine different methods, theories, and data related to the UX of interactive systems. The workshop aims to gather UX professionals' and academics' experiences of user studies in order to contribute to the knowledge of mixed methods, theories, and data in UX evaluation. We will discuss individual experiences, best practices, risks, and gaps, and identify commonalities among triangulation strategies.
Author Keywords
User Experience; Evaluation; Mixed Methods; Triangulation
ACM Classification Keywords
H.5.2 [Information interfaces and presentation (e.g., HCI)]: Evaluation/methodology
Introduction
As an academic discipline, the field of User Experience (UX) research has a multi-disciplinary heritage, involving a variety of perspectives focused on studying human experiences with products, systems, and services. This has led to a wide spectrum of methods for studying users' experiences. Traditional Human-Computer Interaction (HCI) theory has passed on methodological approaches akin to those used in usability evaluation studies. Other disciplines that have significantly influenced UX research include the social sciences, ethnography, and philosophy.
There have been great efforts in academia to create new methods for effectively evaluating UX, aimed at both academic and industrial application [1]. Our proposition in this workshop is, however, that we often do not need to develop new methods, but rather need to use the existing tools and approaches from the wide flora of UX evaluation more effectively. UX evaluation is no longer unknown territory, and we want to encourage reflection on established approaches as well as on lessons learned along the way. We want to explore the existing know-how of UX professionals from academia and industry in combining different UX evaluation methods (e.g., qualitative and quantitative methods) within so-called mixed-methods approaches and triangulation strategies.
Background & Motivation

[Figure 1: How can holistic User Experience (UX) evaluation be optimized by triangulation?]

Past workshops in the ACM community have already explored UX methods from different perspectives [3, 4, 5, 6]. However, a focus on triangulation, also called mixed-methods or multi-method approaches, is still missing. Combining different ways of doing research to obtain a more holistic view of UX is now one of the key areas for further UX research [1, 4, 8]. In a SIG session, Roto et al. [4] analyzed UX evaluation methods in industrial and academic contexts. They revealed that rich data can be collected by applying mixed methods, e.g., through the combination of system logging with subjective user statements from questionnaires and interviews. The authors conclude that mixing methods allows researchers to understand the reasoning behind the concept of UX. Van Turnhout et al. [7] investigated common mixed-methods research approaches in the NordiCHI 2012 proceedings to lay a foundation for further research and a more thoughtful application of multi-methods. However, best practices for using such multi-method perspectives, informed by the needs of academia and industry, have not yet been explored in depth.
Employing a mix of methods and theories to study a subject has been claimed to contribute to more reliable, holistic, and well-motivated understandings of a phenomenon [2]. Furthermore, a mixed-methods approach can uncover unexpected results and generate important, unforeseen research questions while at the same time providing answers to them. This is particularly important for complex topics such as the concept of UX. We argue that investigating UX design and evaluation from different angles will lead to a well-founded understanding of UX.
Workshop Theme & Goal
Researchers and practitioners have developed their own best practices over decades, based on experience, reflection, theoretical background, or intuition. We want to bring this wide-spread knowledge together and learn from each other by uncovering basic challenges, aims, and strategies related to UX work.

The workshop will be an opportunity to share experiences with different UX evaluation methods, to collect empirical data on practices, and to jointly suggest ways of improving how we learn from user studies. Finally, we want to support a more holistic understanding of the quality of a given experience that is applicable to research projects in both academia and industry. Specifically, we want to answer the following questions:

- What are the motivations and the outcomes of different UX research and evaluation methods?
- How do we best draw conclusions from multiple and different sources, such as qualitative and quantitative or attitudinal and behavioral data (see the sketch after this list)?
- Can combinations of contrasting theories that exist in UX be better exploited, and if so, how?
- How can we define best practices, and where are gaps or development needs in mixed-methods approaches in the field of UX?
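As a concrete, hypothetical starting point for the second question, the sketch below checks whether attitudinal and behavioral data collected from the same participants converge. The numbers and the 1-7 rating scale are illustrative placeholders, not study data.

```python
# Hypothetical sketch: contrasting attitudinal and behavioral data for the
# same participants; values below are illustrative placeholders.
from scipy.stats import spearmanr

ux_ratings = [6.5, 4.0, 5.5, 3.0, 6.0, 2.5]        # self-reported UX (1-7 scale)
task_times = [41.0, 55.2, 48.3, 73.9, 44.1, 80.5]  # seconds per task

rho, p = spearmanr(ux_ratings, task_times)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```

A clearly negative rho would suggest the two sources tell the same story (faster participants also report better experiences), whereas a weak or positive rho points to a divergence between preference and performance that is worth probing further with qualitative methods.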
Duration
The theme and questions presented above will be discussed and elaborated in one full-day workshop.
Intended Outcome & Future Work
Our ambition is for the workshop to develop and spread knowledge of, and raise awareness about, how to get more out of UX studies. Consequently, participants will be able to apply particular methods more efficiently and effectively. A cooperatively developed mixed-methods map will summarize the outcomes. In combination with an already ongoing literature review of documented UX studies, the outcomes of the workshop will reveal the state of the art of using mixed-methods approaches in UX research. Directions for future work can be identified during the day and in the networking session.
REFERENCES
1. Javier A. Bargas-Avila and Kasper Hornbæk. 2011. Old wine in new bottles or novel challenges: a critical analysis of empirical studies of user experience. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2689–2698.
2. R. Burke Johnson, Anthony J. Onwuegbuzie, and Lisa A. Turner. 2007. Toward a definition of mixed methods research. Journal of Mixed Methods Research 1, 2 (2007), 112–133.
3. Marianna Obrist, Virpi Roto, and Kaisa Väänänen-Vainio-Mattila. 2009. User experience evaluation: do you know which method to use? In CHI '09 Extended Abstracts on Human Factors in Computing Systems. ACM, 2763–2766.
4. Virpi Roto, Marianna Obrist, and Kaisa Väänänen-Vainio-Mattila. 2009a. User experience evaluation methods in academic and industrial contexts. In Proceedings of the Workshop UXEM, Vol. 9. Citeseer.
5. Virpi Roto, Kaisa Väänänen-Vainio-Mattila, Effie Law, and Arnold Vermeeren. 2009b. User experience evaluation methods in product development (UXEM'09). In IFIP Conference on Human-Computer Interaction. Springer, 981–982.
6. Kaisa Väänänen-Vainio-Mattila, Virpi Roto, and Marc Hassenzahl. 2008. Now let's do it in practice: user experience evaluation methods in product development. In CHI '08 Extended Abstracts on Human Factors in Computing Systems. ACM, 3961–3964.
7. Koen van Turnhout, Arthur Bennis, Sabine Craenmehr, Robert Holwerda, Marjolein Jacobs, Ralph Niels, Lambert Zaad, Stijn Hoppenbrouwers, Dick Lenior, and René Bakker. 2014. Design patterns for mixed-method research in HCI. In Proceedings of the 8th Nordic Conference on Human-Computer Interaction: Fun, Fast, Foundational. ACM, 361–370.
8. Arnold P.O.S. Vermeeren, Effie Lai-Chong Law, Virpi Roto, Marianna Obrist, Jettie Hoonhout, and Kaisa Väänänen-Vainio-Mattila. 2010. User experience evaluation methods: current state and development needs. In Proceedings of the 6th Nordic Conference on Human-Computer Interaction: Extending Boundaries. ACM, 521–530.