Toward a User Experience Tool Selector
for Voice User Interfaces
Doctoral Consortium
Andreas M. Klein
Department of Computer Languages and Systems
University of Seville
Seville, Spain
andreas.klein@iwt2.org
ABSTRACT
Voice user interfaces (VUIs) are currently a trending topic, but the means to measure and improve their user experience (UX) are still missing. We aim to develop a tool selector as a web application that recommends a suitable tool for measuring the UX quality of VUIs. The UX tool selector for VUIs will include a UX measurement toolbox containing several existing and new VUI assessment methods. The UX tool selector will provide context-dependent measurement recommendations, without requiring prior extensive research, to evaluate and improve VUIs.
CCS CONCEPTS
• Human-centered computing • Human computer interaction
(HCI) • HCI design and evaluation methods • Accessibility
KEYWORDS
User Experience, Voice User Interfaces, Web Application,
Conversational User Interfaces, Voice Assistants, Evaluation,
Measurement
ACM Reference format:
Andreas M. Klein. 2021. Toward a User Experience Tool Selector for Voice
User Interfaces. In Proceedings of the 18th International Web for All Conference (W4A '21), April 19-20, 2021, Ljubljana, Slovenia, 2 pages.
https://doi.org/10.1145/3430263.3456728
1 MOTIVATION
Voice user interfaces (VUIs) [1] or voice assistants (VAs) [2]
(general assistants with various services) have developed into a
leading-edge technology with a wide range of applications.
Nowadays, VAs are available in many devices and systems, e.g.,
Alexa (Amazon), Google Assistant (Google), and Siri (Apple).
Since UX is an essential aspect of evaluating interactive systems, several approaches to measuring the UX quality of VUIs already exist [3]–[5]. However, these methods require extensive resources, and there are no general guidelines for applying UX assessment tools to VUIs.
Additionally, our pilot study revealed users' concerns about VAs as well as technical limitations of these systems [6]. In contrast, persons with disabilities use VUIs very intensively for activities such as web surfing or games. Hence, we see a demand for UX evaluation that focuses on the context of use and on easy-to-apply measures.
Therefore, we aim to design a UX tool selector for VUIs (see Figure 1) as a web application that allows one to select a suitable UX measurement tool to evaluate a VUI test object. The selector draws on a UX measurement toolbox that contains standard UX measurement methods for VUI evaluation. The context of use of the given VUI test object determines the tool selection.
Figure 1: Proposed UX tool selector for VUIs
To the best of our knowledge, this is the first approach to
provide context-dependent UX measurement recommendations
for assessing UX quality of VUIs.
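To make the intended selection mechanism more tangible, the following minimal sketch (Python) shows one possible way a context-of-use description could be mapped to recommended tools from the toolbox. The attribute names, rules, and tool assignments are illustrative assumptions only; the actual selection factors and logic are precisely what the research plan below is meant to determine.

```python
# Illustrative sketch only: attributes, rules, and tool names are assumptions,
# not the thesis's actual selection logic.
from dataclasses import dataclass


@dataclass
class ContextOfUse:
    """Simplified context-of-use description for a VUI test object."""
    user_group: str        # e.g. "general" or "visually impaired"
    setting: str           # e.g. "lab" or "field"
    evaluation_goal: str   # e.g. "questionnaire" or "long-term behavior"


def recommend_tools(context: ContextOfUse) -> list[str]:
    """Return UX measurement tools that plausibly fit the given context."""
    recommendations = []
    if context.evaluation_goal == "questionnaire":
        recommendations.append("UEQ+ with voice quality scales")
    if context.setting == "field":
        recommendations += ["diary study", "conditional voice recorder"]
    if context.setting == "lab":
        recommendations.append("psychophysiological measures")
    return recommendations or ["structured interview"]


print(recommend_tools(ContextOfUse("visually impaired", "field", "long-term behavior")))
```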
2 METHODOLOGY
For the interpretation and evaluation of the proposed thesis, we apply the standardized design science research methodology (DSRM) [7]. Following the six DSRM steps, we aim to answer the research questions (RQs) below:
1. What factors should be considered when selecting UX tools for VUIs?
2. Which tools are needed to measure UX quality of VUIs?
3. How can the VUI context of use be comprehensively
captured?
4. How can the UX tool selector for VUIs be designed?
5. Are the recommendations for measurement relevant?
To answer these research questions, we plan to apply several methods: a systematic literature review (SLR) [8], user studies, structured interviews and observations with VUI users from Germany and Spain, and a case study or Delphi study.
3 RESEARCH PLAN
First, we intend to conduct an SLR to search for factors for selecting UX tools for VUIs (RQ1) and to determine which tools exist to evaluate the UX quality of VUIs (RQ2). Existing tools include, e.g., the UEQ+ framework [9], diary studies [10], a conditional voice recorder [11], and psychophysiological approaches that complement traditional UX assessment [5].
We will then conduct an extensive survey and interviews to capture the context of use (RQ3), enabling, e.g., an extension of the UEQ+ framework. We are in contact with users with visual or motor impairments to capture their context of use.
Afterward, we will design the UX tool selector (RQ4) and
evaluate it (RQ5) using distinct methods, e.g., a case study and/or
Delphi study.
4 PRELIMINARY RESULTS
A brief literature search revealed that no such UX tool selector for
VUIs is available (RQ1). Applying the tool selector and measuring
with the recommended UX assessment tool will help to meet
users’ needs and increase VUI acceptance. Our pilot study showed
that 49% of technology-based users have no interest in VA use, as
voice interaction is still a challenge due to factors such as speech
intelligibility issues and privacy concerns [6]. Improvements to VUIs are currently needed to increase the UX quality and adoption of this widely available, cutting-edge technology.
In our preliminary literature review we did not find any tool that measures the complete UX quality of VUIs (RQ2). Therefore, the modular UEQ+ framework, which is based on the User Experience Questionnaire, was extended toward a flexible evaluation of VUI systems [4, 12]. The UEQ+ contains 20 scales measuring specific UX aspects, which can be combined into a product-related questionnaire. The newly constructed scales for voice quality cover the UX aspects of VUIs and fill the voice interaction gap within the UEQ+ [13].
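For illustration, the following sketch shows how UEQ+ results are commonly aggregated: item ratings are averaged per scale, and the scale means are combined into an overall score weighted by user-rated scale importance. The scale names, item values, and rating ranges below are illustrative only; the authoritative procedure is defined by the UEQ+ framework [9].

```python
# Minimal sketch of UEQ+ aggregation (illustrative values; see the UEQ+
# materials for the authoritative procedure).

def scale_mean(item_ratings: list[float]) -> float:
    """Mean of a scale's item ratings (here assumed on a -3 to +3 range)."""
    return sum(item_ratings) / len(item_ratings)


def ueq_plus_kpi(scales: dict[str, tuple[list[float], float]]) -> float:
    """Overall score: scale means weighted by user-rated scale importance."""
    weighted_sum = sum(scale_mean(items) * importance
                       for items, importance in scales.values())
    total_importance = sum(importance for _, importance in scales.values())
    return weighted_sum / total_importance


# scale name -> (item ratings, importance rating); values are made up.
example = {
    "Response Quality":  ([2.0, 1.5, 2.5, 2.0], 6.0),
    "Comprehensibility": ([1.0, 1.5, 1.0, 2.0], 7.0),
}
print(round(ueq_plus_kpi(example), 2))
```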
We conducted a pre-test of semi-structured interviews with visually impaired persons, asking "How do people with disabilities use VAs?" in order to identify potential research gaps regarding web accessibility, the measuring tool, and the context of use (RQ3). We still have to evaluate the results and finalize our guidelines for further interviews and observations. VUIs have great potential within this user group, as persons with disabilities
can interact with VAs very effectively, use them frequently, and report a wide range of user experiences [14].
The core of this work is to design a tool selector (RQ4) as a web
application that can be used to select suitable measurements from
the toolbox with regard to the context in which the VUI test object
is used. We consider the VUI context of use holistically, from an
overall business goal to a detailed answer to a follow-up question
[1]. The tool selector will provide measurement recommendations at the push of a button.
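As a rough sketch of this "push of a button" interaction, the snippet below assumes the selector is exposed as a small HTTP endpoint built with Flask; the framework choice, route, and toy context-to-tool mapping are illustrative assumptions rather than the thesis's actual design.

```python
# Hypothetical web-application sketch (Flask assumed); route and mapping are
# illustrative only.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Toy mapping from a context-of-use keyword to recommended measurement tools.
TOOLBOX = {
    "questionnaire": ["UEQ+ with voice quality scales"],
    "field": ["diary study", "conditional voice recorder"],
    "lab": ["psychophysiological measures"],
}


@app.route("/recommend")
def recommend():
    """Return tool recommendations for the context passed as ?context=..."""
    context = request.args.get("context", "")
    return jsonify(TOOLBOX.get(context, ["structured interview"]))


if __name__ == "__main__":
    app.run()
```

A request such as /recommend?context=field would then return the field-oriented tools from the toy toolbox above.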
5 NEXT STEPS
The next step is to find factors and UX measurement tools for VUIs with an SLR. Then we will validate the UEQ+ scales for voice quality so that they can be included in the toolbox. Furthermore, we plan to continue with structured interviews of VUI power users to capture their contexts of use.
ACKNOWLEDGMENTS
This work was supported by the NICO project (PID2019-
105455GB-C31) from the Ministerio de Ciencia, Innovación y
Universidades (Spanish Government).
REFERENCES
[1] M. H. Cohen, J. P. Giangola, and J. Balogh, Voice User Interface Design. Addison-
Wesley, 2004.
[2] M. B. Hoy, “Alexa, Siri, Cortana, and More: An Introduction to Voice
Assistants,” Med. Ref. Serv. Q., vol. 37, no. 1, pp. 81–88, 2018, doi:
10.1080/02763869.2018.1404391.
[3] A. B. Kocaballi, L. Laranjo, and E. Coiera, “Measuring User Experience in
Conversational Interfaces: A Comparison of Six Questionnaires,” 2018, doi:
10.14236/ewic/HCI2018.21.
[4] A. M. Klein, A. Hinderks, M. Schrepp, and J. Thomaschewski, “Measuring User
Experience Quality of Voice Assistants,” CISTI, Jun. 2020, pp. 1–4, doi:
10.23919/CISTI49556.2020.9140966.
[5] F. Le Pailleur, B. Huang, P.-M. Léger, and S. Sénécal, “A New Approach to
Measure User Experience with Voice-Controlled Intelligent Assistants: A Pilot
Study,” in Human-Computer Interaction. Multimodal and Natural Interaction,
2020, pp. 197–208.
[6] A. M. Klein, A. Hinderks, M. Rauschenberger, and J. Thomaschewski,
“Exploring Voice Assistant Risks and Potential with Technology-based Users,”
WEBIST 2020, Volume 1, 2020, pp. 147–154, doi:
10.5220/0010150101470154.
[7] K. Peffers, T. Tuunanen, M. A. Rothenberger, and S. Chatterjee, “A Design
Science Research Methodology for Information Systems Research,” J. Manag.
Inf. Syst., vol. 24, no. 3, pp. 45–77, 2007, doi: 10.2753/MIS0742-1222240302.
[8] B. Kitchenham and S. Charters, “Guidelines for performing Systematic
Literature Reviews in Software Engineering,” 2007. [Online]. Available:
https://www.elsevier.com/__data/promis_misc/525444systematicreviewsguide.
pdf.
[9] M. Schrepp and J. Thomaschewski, “Design and Validation of a Framework for
the Creation of User Experience Questionnaires,” Int. J. Interact. Multimed. Artif.
Intell., pp. 88–95, 2019, doi: 10.9781/ijimai.2019.06.006.
[10] J. Lau, B. Zimmerman, and F. Schaub, “Alexa, Are You Listening? Privacy
Perceptions, Concerns and Privacy-Seeking Behaviors with Smart Speakers,”
Proc. ACM Hum.-Comput. Interact., vol. 2, no. CSCW, Nov. 2018, doi:
10.1145/3274371.
[11] M. Porcheron, J. E. Fischer, S. Reeves, and S. Sharples, “Voice Interfaces in
Everyday Life,” CHI, 2018, pp. 1–12, doi: 10.1145/3173574.3174214.
[12] B. Laugwitz, T. Held, and M. Schrepp, “Construction and Evaluation of a User
Experience Questionnaire,” in HCI and Usability for Education and Work, 2008.
[13] A. M. Klein, A. Hinderks, M. Schrepp, and J. Thomaschewski, “Construction of
UEQ+ Scales for Voice Quality,” MuC '20, 2020, pp. 1–5, doi:
10.1145/3404983.3410003.
[14] F. Masina et al., “Investigating the Accessibility of Voice Assistants With
Impaired Users: Mixed Methods Study,” J. Med. Internet Res., 2020, doi:
10.2196/18431.