UMUX-LITE – When There’s No Time for the SUS
James R. Lewis
IBM Software Group
Boca Raton, FL
Brian S. Utesch
IBM Software Group
Durham, NC
Deborah E. Maher
IBM Software Group
Cambridge, MA
Abstract
In this paper we present the UMUX-LITE, a two-item
questionnaire based on the Usability Metric for User
Experience (UMUX) [6]. The UMUX-LITE items are
“This system’s capabilities meet my requirements” and
“This system is easy to use.” Data from two independent
surveys demonstrated adequate psychometric quality of the
questionnaire. Estimates of reliability were .82 and .83,
excellent for a two-item instrument. Concurrent validity
was also high, with significant correlation with the SUS
(.81, .81) and with likelihood-to-recommend (LTR) scores
(.74, .73). The scores were sensitive to respondents’
frequency-of-use. UMUX-LITE score means were slightly
lower than those for the SUS, but easily adjusted using
linear regression to match the SUS scores. Due to its
parsimony (two items), reliability, validity, structural basis
(usefulness and usability) and, after applying the corrective
regression formula, its correspondence to SUS scores, the
UMUX-LITE appears to be a promising alternative to the
SUS when it is not desirable to use a 10-item instrument.
Author Keywords
System Usability Scale; SUS; Usability Metric for User
Experience; UMUX; UMUX-LITE; psychometric
evaluation; usability evaluation; standardized
questionnaires; satisfaction measures
ACM Classification Keywords
H.5.2. Information interfaces and presentation (e.g., HCI):
User Interfaces–Evaluation/Methodology.
General Terms
Human Factors; Design; Measurement.
Research Motivation
A typical summative usability test includes the assessment
of satisfaction along with assessments of effectiveness and
efficiency [7, 14]. Starting in the late 1980s, standardized
usability questionnaires appropriate for usability testing
began to appear [14]. One of the most popular of these is
the System Usability Scale (SUS), accounting for an
estimated 43% of post-test questionnaire usage [12].
With just ten items, the SUS is fairly short, but in our
practice we have encountered situations in which an even
more concise questionnaire is desirable. This is especially
the case when post-test debriefing involves a large number
of questions or when the satisfaction questionnaire is part of
a much larger survey. For this reason we were intrigued
when we came across the Usability Metric for User
Experience (UMUX) [6], a four-item questionnaire
claimed to be an effective proxy for the SUS.
Despite the initial research supporting the use of the
UMUX as a proxy for the SUS [6], a recent review of the
UMUX [8] raised several criticisms, including:
• How much time do respondents really save when
answering four rather than ten questions?
• A parallel analysis [4] of the eigenvalues from a
principal components analysis of UMUX scores
suggested a bidimensional rather than the claimed
unidimensional structure.
Because we routinely use the SUS, we decided to continue
collecting SUS scores while simultaneously collecting
UMUX scores in pursuit of two research goals:
1. Attempt to replicate the results reported in the
original UMUX research [6].
2. Conduct additional item and structural analyses to
investigate the feasibility of further reducing the
number of UMUX items to use in a quickly-conducted
unidimensional surrogate of the SUS, the UMUX-LITE.
Next, we provide summaries of the key properties and prior
psychometric research of the SUS and UMUX, followed by
analyses and conclusions based on new data from two
independent surveys.
The System Usability Scale (SUS)
The SUS is a ten-item questionnaire using five-point scales.
Responses to SUS items are recoded to produce an overall
SUS score that ranges from 0 to 100 in 2.5 point
increments. Although a self-described “quick-and-dirty”
questionnaire [3], the SUS appears to have excellent
psychometric properties (estimates of reliability typically
exceeding 0.9, significant concurrent validity with ratings
of user friendliness, and sensitivity to variables such as
system and frequency of use) [1, 2, 9].
Permission to make digital or hard copies of all or part of this work for
personal or classroom use is granted without fee provided that copies are
not made or distributed for profit or commercial advantage and that copies
bear this notice and the full citation on the first page. To copy otherwise,
or republish, to post on servers or to redistribute to lists, requires prior
specific permission and/or a fee.
CHI 2013, April 27–May 2, 2013, Paris, France.
Copyright © 2013 ACM 978-1-4503-1899-0/13/04...$15.00.
Session: Evaluation Methods 2
CHI 2013: Changing Perspectives, Paris, France
The SUS is available in two versions. The Standard version
has items with mixed tone – odd items have a positive tone;
even items have a negative tone. In the Positive version, all
items have a positive tone. Sauro and Lewis [13] found that
the Positive version had advantages over the Standard
version with regard to reductions in misinterpretation,
mistakes, and miscoding. Both versions had high reliability
(Standard: 0.92; Positive: 0.96), and had no significant
difference in their mean scores. There was no evidence of
acquiescence or extreme response biases in the Positive
version. Table 1 shows the item content for both versions
of the SUS.
Although generally treated as a unidimensional measure,
recent analyses suggest that the SUS is more likely a
bidimensional measure, with factors associated with the
constructs of Usable (Items 1-3, 5-9) and Learnable (Items
4, 10) [2, 9].
Item 1. Standard: "I think that I would like to use this system frequently." Positive: "I think that I would like to use this system frequently."
Item 2. Standard: "I found the system unnecessarily complex." Positive: "I found the system to be simple."
Item 3. Standard: "I thought the system was easy to use." Positive: "I thought the system was easy to use."
Item 4. Standard: "I think that I would need the support of a technical person to be able to use this system." Positive: "I think I could use the system without the support of a technical person."
Item 5. Standard: "I found the various functions in the system were well integrated." Positive: "I found the various functions in the system were well integrated."
Item 6. Standard: "I thought there was too much inconsistency in this system." Positive: "I thought there was a lot of consistency in the system."
Item 7. Standard: "I would imagine that most people would learn to use this system very quickly." Positive: "I would imagine that most people would learn to use this system very quickly."
Item 8. Standard: "I found the system very cumbersome to use." Positive: "I found the system very intuitive."
Item 9. Standard: "I felt very confident using the system." Positive: "I felt very confident using the system."
Item 10. Standard: "I needed to learn a lot of things before I could get going with this system." Positive: "I could use the system without having to learn anything new."
Table 1. Standard and Positive Versions of the SUS.
The Usability Metric for User Experience (UMUX)
The UMUX [6] is a relatively new standardized usability
questionnaire designed to get a measurement of perceived
usability consistent with the SUS, but using fewer items
that more closely conformed to the ISO definition of
usability (effective, efficient, satisfying) [7]. UMUX items
vary in tone and have seven scale steps from 1 (strongly
disagree) to 7 (strongly agree). Starting with an initial pool
of 12 items, the final UMUX had four items that included a
general question similar to the Single Ease Question (“This
system is easy to use”) [11] and the best candidate item
from each of the item sets associated with efficiency,
effectiveness, and satisfaction, where “best” meant the item
with the highest correlation to the concurrently collected
overall SUS score. Using a recoding scheme similar to the
SUS, a UMUX score can range from 0 to 100. The four
UMUX items are:
1. This system’s capabilities meet my requirements.
2. Using this system is a frustrating experience.
3. This system is easy to use.
4. I have to spend too much time correcting things
with this system.
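The paper describes the UMUX recoding only as "similar to the SUS." Assuming Finstad's published rule (odd, positive-tone items contribute response minus 1; even, negative-tone items contribute 7 minus response; the raw sum out of 24 is rescaled to 0-100), a minimal illustrative Python sketch is:

```python
def umux_score(responses):
    """Overall UMUX score from the four items (7-point scales).

    Assumed recoding (SUS-like, per Finstad [6]): odd (positive-tone)
    items contribute (response - 1); even (negative-tone) items
    contribute (7 - response). The raw sum (0-24) is rescaled to 0-100.
    """
    if len(responses) != 4:
        raise ValueError("the UMUX has four items")
    if any(not 1 <= r <= 7 for r in responses):
        raise ValueError("UMUX items use 7-point scales (1-7)")
    raw = sum((r - 1) if i % 2 == 1 else (7 - r)
              for i, r in enumerate(responses, start=1))
    return raw * 100 / 24
```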
To validate the UMUX, Finstad (its developer) [6] had
users of two systems, one with a reputation for poor
usability (System 1, n = 273) and the other perceived as
having good usability (System 2, n = 285), complete the
UMUX and the Standard SUS. As expected, the reliability
of the SUS was high, with a coefficient alpha of 0.97. The
reliability of the UMUX was also high, with a coefficient
alpha of 0.94. The UMUX scores for the two systems were
significantly different (t(533) = 39.04, p < 0.01) with
System 2 getting better scores than System 1 (evidence of
sensitivity). More importantly, there was an extremely high
correlation between the SUS and UMUX scores (r = 0.96, p
< 0.001), providing evidence of strong concurrent validity
and suggesting that the UMUX was statistically equivalent
to the SUS.
As part of two independent surveys, we had an opportunity
to simultaneously capture responses to the SUS, the
UMUX, and a likelihood-to-recommend (LTR) item.
Respondents were IBM employees with varying amounts of
experience with the evaluated system (from using the
system once every few months to more than once a day). In
one survey, respondents completed the Positive version of
the SUS (n = 402); in the other they completed the Standard
version (n = 389).
UMUX Item Analysis
Table 2 shows the correlations (including 99% confidence
intervals) for the UMUX items with the Positive and
Standard versions of the SUS. Across the datasets the
intervals were identical for the odd-numbered (positive
tone) items. The correlations with the negative tone items,
in contrast, were significantly different between the positive
and standard versions of the SUS. Item 4 had the lowest
correlation with the SUS in both datasets.
Item 1: r(Standard) = .75, 99% CI [.69, .81]; r(Positive) = .75, 99% CI [.69, .80]
Item 2: r(Standard) = .80, 99% CI [.75, .84]; r(Positive) = .60, 99% CI [.52, .68]
Item 3: r(Standard) = .81, 99% CI [.76, .85]; r(Positive) = .81, 99% CI [.76, .85]
Item 4: r(Standard) = .66, 99% CI [.57, .72]; r(Positive) = .38, 99% CI [.27, .49]
Table 2. UMUX Item to SUS Correlations (with 99%
confidence intervals).
UMUX and SUS Factor Analyses
Table 3 shows the results of a factor analysis of the UMUX
items combined across the datasets (analyses by dataset
showed the same pattern). As predicted in the review of the
original UMUX research [8], the UMUX had a clear
bidimensional structure with positive-tone items aligning
with one factor and negative-tone items aligning with the
other, a solution supported by parallel analysis of the
eigenvalues [4].
Item Tone Factor 1 Factor 2
1 Pos 0.762 0.295
2 Neg 0.485 0.716
3 Pos 0.776 0.393
4 Neg 0.235 0.659
Table 3. Factor Analysis of the UMUX.
Item Factor 1 Factor 2
1 0.351 0.203
2 0.731 0.326
3 0.697 0.341
4 0.226 0.698
5 0.734 0.209
6 0.668 0.252
7 0.653 0.403
8 0.743 0.412
9 0.600 0.507
10 0.330 0.634
Table 4. Factor Analysis of the SUS.
Table 4 shows the results of a factor analysis of the SUS
items combined across the datasets (analyses by dataset
showed the same pattern). The results essentially replicated
previous analyses showing a two-factor structure [2, 9],
with one exception: Item 1 did not strongly associate with
either factor (though it did load more highly on the first
factor, consistent with earlier findings).
Psychometric Quality of the UMUX
In general, our results replicated the findings reported by
Finstad [6]. For the two datasets, the UMUX correlated
significantly with the SUS (Standard: r = .90, 99% CI
[.87, .92]; Positive: r = .79, 99% CI [.74, .84]). Although
this is significantly less than the originally claimed
correlation of .96 (99% confidence interval ranging from
0.95 to 0.97), it is evidence of concurrent validity. The
estimated reliabilities of the UMUX were adequate (.87,
.81), but like the correlations with the SUS, quite a bit less
than the originally reported value of .97. For both datasets,
there was no significant difference between the mean SUS
and mean UMUX scores (extensive overlap between the
99% confidence intervals), consistent with the original data.
Potential UMUX Variants
There are many potential variants of the UMUX, but based
on the item and factor analyses above, two stand out. One
variant would be to drop Item 4 due to its relatively low
correlation with the SUS, leaving three items. A second
would be to drop the negative-tone items leaving the two
positive-tone items, Item 1 associated with usefulness
(functional adequacy) and Item 3 associated with usability
(ease-of-use). Despite the common wisdom that attitudinal
questionnaires should contain a mix of positively- and
negatively-toned items, there is a body of research that
argues against this practice [5, 10, 13, 15, 16].
We decided to pursue the second variant composed of the
positive-tone UMUX items. The primary reasons for this
choice were the parsimony of the resulting instrument (two
items) and its connection through the content of the items to
the Technology Acceptance Model [5], a questionnaire
from the market research literature that assesses the
usefulness and ease-of-use of systems, and has an
established relationship to likelihood of future use.
Psychometric Assessment of the UMUX-LITE
Psychometric analyses using just the positive-tone items of
the UMUX indicated good psychometric quality. Like the
full UMUX, this metric correlated significantly with both
Standard and Positive versions of the SUS (.81, .85) and
with LTR (.73, .74), evidence of concurrent validity.
Coefficient alpha indicated acceptable scale reliability (>
.7) for both datasets (.83, .82).
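Coefficient alpha itself is straightforward to compute from raw responses. The following is an illustrative Python sketch (not the authors' code), where each inner list holds one item's scores across all respondents:

```python
def cronbach_alpha(item_scores):
    """Coefficient (Cronbach's) alpha from a list of per-item score lists.

    item_scores[i][j] is respondent j's score on item i.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)
    """
    k = len(item_scores)
    if k < 2:
        raise ValueError("alpha needs at least two items")

    def variance(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # One total score per respondent
    totals = [sum(resp) for resp in zip(*item_scores)]
    item_var_sum = sum(variance(item) for item in item_scores)
    return k / (k - 1) * (1 - item_var_sum / variance(totals))
```

With only two items, as in the UMUX-LITE, alpha is a simple function of the inter-item correlation, which is why values above .8 are notable for so short a scale.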
Unlike the full UMUX scores, there was a small but
statistically significant difference between the overall SUS
scores and the scores based just on UMUX Items 1 and 3.
To compensate for that difference, we used linear
regression on the combined samples (n = 791) to compute
UMUX-LITE = .65(UMUX(1,3)) + 22.9
In that formula, (UMUX(1,3)) refers to a UMUX score
computed from just Items 1 and 3, using a SUS-like
procedure to obtain a score that ranges from 0 to 100
(specifically, subtract 1 from each 7-point item, add them
together, then multiply by 100/12). Applying the regression
equation to compute the UMUX-LITE from UMUX(1,3)
brought the UMUX-LITE scores into correspondence with
the SUS scores for both datasets.
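Putting the two steps together, a minimal Python sketch of UMUX-LITE scoring is shown below; the function names are illustrative, not from the paper.

```python
def umux_lite_raw(item1, item3):
    """0-100 recoding of the two positive-tone UMUX items (1-7 scales):
    subtract 1 from each item, add them, and multiply by 100/12."""
    if any(not 1 <= v <= 7 for v in (item1, item3)):
        raise ValueError("UMUX-LITE items use 7-point scales (1-7)")
    return ((item1 - 1) + (item3 - 1)) * 100 / 12


def umux_lite(item1, item3):
    """Regression-adjusted UMUX-LITE, placed in correspondence with
    SUS scores: 0.65 * UMUX(1,3) + 22.9."""
    return 0.65 * umux_lite_raw(item1, item3) + 22.9
```

Note that under this adjustment the possible scores run from 22.9 (both items rated 1) to 87.9 (both items rated 7), a compression of the raw 0-100 range.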
Discussion
We set out to see if an independent assessment of the
UMUX would lead to the same results as the original
research that produced the UMUX [6]. We replicated many
of the original findings with regard to typical goals of
psychometric evaluation (adequate reliability and validity),
although our estimates of reliability and validity tended to
be lower than those from the original research. One notable
exception was that our structural analysis indicated that the
UMUX is bidimensional rather than unidimensional, with
items aligning on factors as a function of the tone of the
item (positive/negative).
We also wanted to see if an even shorter questionnaire
based on the UMUX would have acceptable psychometric
properties and, using item and structural analysis, settled on
an instrument based on its positive-tone items, the
UMUX-LITE. This two-item instrument (with adjustment
based on a regression equation to match it to the SUS) had
acceptable reliability and validity – in fact, its psychometric
properties were very good given it only has two items.
Although these results are encouraging, it is important to
keep in mind that this is just a first step. In contrast to the
relatively rich research literature on the SUS, the only
published research to date on the UMUX is the original
paper [6] and this one, which is also the first to report
psychometric properties of the UMUX-LITE. Until
researchers have validated the UMUX-LITE across a wider
variety of systems, we do not recommend its use
independent of the SUS. Given the promising results so far,
however, we do recommend that practitioners and
researchers who use the SUS include the UMUX-LITE
items in their work to begin building independent databases
for future evaluation of its reliability, validity, and
sensitivity. We certainly intend to do so.
References
1. Bangor, A., Kortum, P.T., and Miller, J.T. An empirical
evaluation of the System Usability Scale. International
Journal of Human-Computer Interaction, 24 (2008),
574-594.
2. Borsci, S., Federici, S., and Lauriola, M. On the
dimensionality of the system usability scale: A test of
alternative measurement models. Cognitive Processes,
10 (2009), 193-197.
3. Brooke, J. SUS: A “quick and dirty” usability scale. In:
Jordan, P., Thomas, B., Weerdmeester, B. (Eds.),
Usability Evaluation in Industry. Taylor & Francis,
London, UK, (1996) 189–194.
4. Coovert, M.D., and McNelis, K. Determining the
number of common factors in factor analysis: A review
and program. Educational and Psychological
Measurement, 48 (1988), 687-693.
5. Davis, D. Perceived usefulness, perceived ease of use,
and user acceptance of information technology. MIS
Quarterly, 13 (1989), 319-339.
6. Finstad, K. The usability metric for user experience.
Interacting with Computers, 22 (2010), 323-327.
7. ISO 9241-11. Ergonomic Requirements for Office
Work with Visual Display Terminals (VDTs). Part 11:
Guidance on Usability, 1998.
8. Lewis, J. R. Critical review of “The Usability Metric
for User Experience”. Interacting with Computers (in
press).
9. Lewis, J.R., and Sauro, J. The factor structure of the
System Usability Scale. In: Kurosu, M. (Ed.), Human
Centered Design, HCII 2009. Springer-Verlag,
Heidelberg, Germany, (2009) 94-103.
10. Pilotte, W.J., and Gable, R.K. The impact of positive
and negative item stems on the validity of a computer
anxiety scale. Educational and Psychological
Measurement, 50 (1990), 603-610.
11. Sauro, J., and Dumas, J.S. Comparison of three one-
question, post-task usability questionnaires. In Proc.
CHI 2009, ACM Press (2009), 1599-1608.
12. Sauro, J., and Lewis, J.R. Correlations among
prototypical usability metrics: Evidence for the
construct of usability. In Proc. CHI 2009, ACM Press
(2009), 1609-1618.
13. Sauro, J., and Lewis, J.R. When designing usability
questionnaires, does it hurt to be positive? In Proc. CHI
2011, ACM Press (2011), 2215-2223.
14. Sauro, J., and Lewis, J. R. Quantifying the User
Experience: Practical Statistics for User Research.
Morgan-Kaufmann, Waltham, MA, USA, 2012.
15. Schmitt N., and Stuits, D. Factors defined by negatively
keyed items: The result of careless respondents? Applied
Psychological Measurement, 9 (1985), 367-373.
16. Stewart, T.J., and Frye, A.W. Investigating the use of
negatively-phrased survey items in medical education
settings: Common wisdom or common mistake?
Academic Medicine, 79 (2004), S1-S3.
... We perform both quantitative and qualitative data analysis. Throughout the paper, we measure usability using the UMUX Lite questionnaire [34]. We compare responses to the UMUX Lite across conditions with Pearson's chi-squared test (χ 2 ). ...
... Usability, Trust, Security, Privacy and Satisfaction We asked participants to answer usability, perceived trust, security and privacy and satisfaction questions. For usability, we asked participants the two items UMUX lite scale [34]. Based on prior work [44], we built a 10-item scale of perceived trust, security and privacy (cf. ...
Conference Paper
Communication tools with end-to-end (E2E) encryption help users maintain their privacy. Although messengers like Whats-App and Signal bring E2E encryption to a broad audience, past work has documented misconceptions of their security and privacy properties. Through a series of five online studies with 683 total participants, we investigated whether making an app's E2E encryption more visible improves perceptions of trust, security, and privacy. We first investigated why participants use particular messaging tools, validating a prior finding that many users mistakenly think SMS and e-mail are more secure than E2E-encrypted messengers. We then studied the effect of making E2E encryption more visible in a messaging app. We compared six different text disclosures, three different icons, and three different animations of the encryption process. We found that simple text disclosures that messages are "encrypted" are sufficient. Surprisingly, the icons negatively impacted perceptions. While qualitative responses to the animations showed they successfully conveyed and emphasized "security" and "encryption," the animations did not significantly impact participants' quantitative perceptions of the overall trustworthiness, security, and privacy of E2E-encrypted messaging. We confirmed and unpacked this result through a validation study, finding that user perceptions depend more on preconceived expectations and an app's reputation than visualizations of security mechanisms.
... In addition, two questionnaires were administered to quantitatively measure participants' satisfaction with the framework and their potential loyalty. Specifically, UMUX-Lite [12] and NPS were used [13], to ask participants the following questions: ...
... • The overall UMUX-Lite score for the framework was 6.21, which is a very good score, corresponding to a SUS score of 79.32 [12], and qualifies the overall experience with the framework with an A-Grade [14]. • The NPS score was very high (M: 9.5, SD: 0.76), with almost all participants rating it with 9 or 10, thus being classified as 'promoters' of the framework. ...
Full-text available
Advancements in AI and ML approaches are the reason for the current hype of this technology. A lot of products and services, either in consumer-facing solutions, as well as in the industrial context, embrace the advancement of smart algorithms. Designing such systems entails several challenges, including designing for black-box decision-making with a potentially infinite and unknown set of UI manifestations, delivering easy-to-understand explanations, involving end-users in requirements specification and product evaluation, and communication with software engineers and data scientists among others. Although designers are today equipped with several UX tools for capturing and presenting users’ experience with the products they are designing, the question that arises in the AI context is whether and how existing contemporary tools can adapt and scale to support the design of AI-enabled interactive systems. Therefore, AI and ML are perceived as a new design material. This work aims to assist researchers and practitioners involved in AI-infused projects by proposing a framework to collect and document these. The framework was designed following a workshop with representative stakeholders, through which different use cases were presented and elaborated. Evaluation of the framework highlighted that it is an easy to use and useful tool for documenting use cases and communicating them to a wide audience.
... Pengujian Usability biasanya digunakan System Usability Scale (SUS) yang merupakan suatu kuesioner yang digunakan untuk penilaian kegunaan pengguna [15]. SUS sendiri merupakan penilaian yang paling populer dalam melakukan pengujian Usability [16]- [18]. ...
Full-text available
Untuk mengukur suatu perangkat lunak lebih spesifik berbasis telepon seluler smartphone dapat diterima oleh pengguna maka dilakukan pengujian Usability. Untuk membuat quisioner yang dapat digunakan untuk melakukan pengujian System Usability Scale (SUS) yang dinyatakan valid dan reliabel. Quisioner diuji menggunakan Expert Review dan Product-Moment Coefficient untuk uji validitas, serta Cronbach Alfa untuk uji reliabilitas. Berdasarkan hasil uji yang dilakukan didapatkan 10 butir quisioner untuk uji SUS dengan seluruh butir dinyatakan valid secara Expert Review dan Product-Moment Coefficient, serta reliabel dengan skor Cronbach Alfa 0,778086452. Terdapat beberapa penelitian terkait pembuatan quisioner untuk uji SUS dimana bahasa yang digunakan bahasa indonesia dan bahasa inggris dengan jumlah pertanyaan sebanyak 10 butir. Penelitian ini sendiri memberikan opsi lain quisioner dengan mengadopsi penelitian yang sudah dilakukan dengan melakukan pengujian validitas dan realibilitas untuk menguji suatu perangkat lunak menggunakan uji SUS.
... Users were asked to fill out a questionnaire every evening. To avoid too many questions repeatedly each day, a questionnaire based on the usability metric for user experience lite (UMUX-LITE) was used [12]. At the end of the test, a 20 minute interview was conducted in which the users were asked to complete a more detailed questionnaire based on the system usability scale (SUS) developed by ...
Conference Paper
Full-text available
A holistic determination and improvement of the quality of the indoor environment includes, in addition to the “classic” parameters such as air temperature and humidity, other influencing variables such as air quality, noise, and lighting conditions (brightness, color temperature). Since Covid-19, air quality came back into focus. The interaction of these factors in their entirety has an effect on people and significantly determines their well-being and performance. This paper presents the implementation of a monitoring system for these indoor comfort variables (temperature, humidity, wind speed, CO2, VOC, lighting, and noise) based on the Arduino microcontroller ecosystem and corresponding sensor technology. This setup is complemented by the development of a graphical user interface (GUI) with an interactive feedback system. Via touchscreen or the accompanying app for desktop PCs, users can monitor real-time measurements and change settings such as the model of thermal comfort, algorithm parameters that are used to predict comfort indices, database connection, application programming interface, or the language of the software. Feedback can be augmented by using system notifications, color notifications in the GUI, and changeable animated images according to user preferences. Furthermore, user tests were conducted to investigate the system usability and to explore the differences between these two interaction possibilities. During the user testing phase (N = 4), two questionnaires based on the usability metric for user experience lite (UMUX-LITE) and the system usability scale (SUS) proved the high usability of this monitoring system. Additionally, it was found that users increasingly prefer to use the touchscreen as the testing phase progressed.
... 2.7.6. Usability Metric for User Experience (UMUX)-LITE We will ask the intervention group about the system usability at the post-intervention stage, including two items; "This system's capabilities meet my requirements" and "This system is easy to use," and the score range is "0 (strongly disagree) to 7 (strongly agree)" [47]. ...
Full-text available
Most nursing simulation programs focus on persons’ healthcare needs in hospital settings, and little is known about how to identify them in home settings. This study aims to develop and validate a virtual reality (VR) simulation program for nursing students to improve their clinical reasoning skills and confidence in assessing persons’ healthcare needs in home settings. We developed a VR simulation program based on a literature review and expert discussion. In Phase 1, home visit nurses or public health nurses will validate the program through their interviews in 2022. In Phase 2, we will conduct a pilot and main single-blinded randomized trial for nursing students to confirm the effectiveness from 2022 and 2023. Participants will be randomly allocated into an intervention group using VR simulations and a control group receiving videos regarding three kinds of community residents’ lives [1:1]. After obtaining informed consent, the students will submit their anonymous data to the researchers to prevent associating their grade evaluation. The primary outcome will be their clinical reasoning skills. The second outcome will include their satisfaction and self-confidence. This study will examine the effectiveness of improving their clinical reasoning skills and confidence in assessing persons’ healthcare needs in home settings.
Full-text available
Mid-air gestures as a new form of human–computer interaction has a wide range of satisfaction factors, for which the primary and secondary relationships and hierarchical relationships between factors are unclear. By examining usability definitions, collecting satisfaction questionnaires and user interviews, 30 observed variables were obtained and a scale was developed. A total of 310 valid questionnaires were collected, and six latent variables were summarized through factor analysis. The matrix quantitative analysis of latent variables based on interpretative structural model theory was used to construct a hierarchical model of influencing factors of satisfaction with mid-air gestures. The study shows that the influencing factors of mid-air gesture satisfaction can be divided into three levels. The first layer of attractiveness is the direct influencing factor on the surface and the goal of mid-air gesture design. In the second layer, Simplicity and Efficiency, Simplicity and Tiredness, and Tiredness and Friendliness interact with each other. Simplicity positively affects Friendliness, and Efficiency positively affects Tiredness. The third layer, Intuitiveness is the root layer influencing factor, which affects Simplicity. This study provides a theoretical basis for the design of mid-air gesture so that it can be designed and selected more objectively.
Tourism is important due to its benefits and role as a commercial activity that creates demand and growth in many industries. Tourism is vital not only in increasing economic activities but also in generating additional employment and revenue. Malaysia has increased its efforts in diversifying the economy and decreasing its dependence on exports by promoting increased tourism in the country. For this reason, the use of information technology in tourism has increased. Hence, mobile applications and applet tools play an important role among Internet users. This research reviews 24 standardised usability questionnaires in the literature for choosing the appropriate usability instrument. Then, this study investigated the usability measurement scales of the well-known mobile application, Malaysia Trip Planner, on the basis of Nielsen’s usability principles. Therefore, this study could provide future research directions and recommendations on improving the attributes of such applications.
Antecedents of technology acceptance (TA) are known to be positively associated with measures such as usage intention, behavioral intention, attitude, and satisfaction. Although technology acceptance is investigated widely in prior research, it is not currently clear which variables or factors drive technology acceptance and under different service contexts or conditions. To examine the strength these effects in the artificial intelligence literature, we adopt a meta-analysis approach. We have scoped the literature on artificial intelligence, acceptance measures, and factors affecting acceptance in extant literature. We narrowed our search to business context to find AI-based tools that users, consumers, and customers interact with transactionally, such as chatbots. Findings show AI-based technology factors affect acceptance differently in various service industry contexts as preliminary results. These results have critical implications for researchers and practitioners studying which type of AI-based technology strengthen consumers use in different service contexts. These preliminary findings will be extended to look at interactive relationships of factors affecting acceptance in different contexts.KeywordsTechnology acceptanceArtificial intelligenceMeta-analysisAI factors
Effective tropical cyclone risk reduction is possible only if all relevant threats are considered and analyzed. Recently, weather organizations have expanded their forecast products to include more information about tropical cyclone hazards. Of these products, the Hurricane Threats and Impacts (HTI) graphics are gaining support. Although the HTI graphics have been operational since 2015, no user evaluations of their efficacy have been published. In an online experiment with 114 non-expert participants, we explored the effect of prior tropical cyclone experience and storm characteristics on HTI perceived ease of use, task completion time, and comprehension. Overall, perceived ease of use and comprehension scores were low across the experience groups. Completion times were similar across the groups and storms. The results suggest that the HTI graphics are misinterpreted by the public and may not be effective in communicating tropical cyclone hazards.
Usability does not exist in any absolute sense; it can only be defined with reference to particular contexts. This, in turn, means that there are no absolute measures of usability, since, if the usability of an artefact is defined by the context in which that artefact is used, measures of usability must of necessity be defined by that context too. Despite this, there is a need for broad general measures which can be used to compare usability across a range of contexts. In addition, there is a need for "quick and dirty" methods to allow low cost assessments of usability in industrial systems evaluation. This chapter describes the System Usability Scale (SUS), a reliable, low-cost usability scale that can be used for global assessments of systems usability.
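Brooke's standard SUS scoring converts ten 1–5 Likert responses into a 0–100 score: odd-numbered (positively worded) items contribute response − 1, even-numbered (negatively worded) items contribute 5 − response, and the sum is multiplied by 2.5. A minimal sketch:

```python
def sus_score(responses):
    """Compute the SUS score (0-100) from ten 1-5 Likert responses,
    ordered item 1 through item 10."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    # Odd-numbered items (index 0, 2, ...) are positively worded: r - 1.
    # Even-numbered items are negatively worded: 5 - r.
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# Example: "agree" (4) on positive items, "disagree" (2) on negative ones.
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```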
In 2010, Kraig Finstad published (in this journal) ‘The Usability Metric for User Experience’—the UMUX. The UMUX is a standardized usability questionnaire designed to produce scores similar to the System Usability Scale (SUS), but with 4 rather than 10 items. The development of the questionnaire followed standard psychometric practice. Psychometric evaluation of the final version of the UMUX indicated acceptable levels of reliability (internal consistency), concurrent validity, and sensitivity. Critical review of this research suggests that its weakest element was the structural analysis, which concluded that the UMUX is unidimensional based on insufficient evidence. Mixed-tone item content and parallel analysis of the eigenvalues point to a possible two-factor structure. This weakness, however, is of more theoretical than practical importance, given the overall scale’s apparent reliability, validity, and sensitivity.
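For context, the UMUX uses four 7-point items with alternating positive and negative tone, rescaled to a 0–100 range to make scores comparable with the SUS. A sketch of the commonly reported scoring (odd items scored response − 1, even items 7 − response, summed and divided by the 24-point maximum):

```python
def umux_score(responses):
    """Score the 4-item UMUX (7-point scale) on a 0-100 range.
    Odd-numbered items are positively worded, even-numbered negatively."""
    if len(responses) != 4:
        raise ValueError("UMUX has exactly 4 items")
    total = sum((r - 1) if i % 2 == 0 else (7 - r)
                for i, r in enumerate(responses))
    return total / 24 * 100

print(umux_score([7, 1, 7, 1]))  # 100.0 (best possible responses)
```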
For a sample of 270 high school students, the differences between positive and negative item stems are studied using three forms of a computer anxiety scale (original scale, negated items, and mixed stems) to ascertain if the items for each form tended to define a single construct or two different constructs. Internal-consistency estimates of reliability for each form yielded alpha coefficients which ranged from .73 for the mixed-stem format to .95 for the original format. Confirmatory factor analysis (LISREL VI) was employed to test the hypothesis that, for high school students, negative and positive item stems are indicative of different latent variables. The findings tended to support the hypothesis and were consistent with those obtained by other researchers (Benson, 1987; Benson and Hocevar, 1985; Schmitt and Stults, 1985). It was concluded that one should view results with caution when the instrument includes mixed stems, as the two sets of items do not appear to define a single construct.
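The internal-consistency coefficients reported here are Cronbach's alpha, computed as k/(k−1) × (1 − Σ item variances / variance of total scores). A minimal self-contained computation (function and data layout are illustrative):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item columns, where each column
    holds one item's scores across all respondents."""
    k = len(items)          # number of items
    n = len(items[0])       # number of respondents

    def var(xs):
        # Sample variance (n - 1 denominator).
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_vars = sum(var(col) for col in items)
    totals = [sum(col[j] for col in items) for j in range(n)]
    return k / (k - 1) * (1 - sum_item_vars / var(totals))
```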
A frequently occurring phenomenon in factor and cluster analysis of personality or attitude scale items is that all or nearly all questionnaire items that are negatively keyed will define a single factor. Although substantive interpretations of these negative factors are usually attempted, this study demonstrates that the negative factor could be produced by a relatively small portion of the respondents who fail to attend to the negative-positive wording of the items. Data were generated using three different correlation matrices, which demonstrated that regardless of data source, when only 10% of the respondents are careless in this fashion, a clearly definable negative factor is generated. Recommendations for instrument development and data editing are presented.
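The mechanism described here is easy to reproduce in a toy simulation (all names and parameters below are illustrative): when a fraction of respondents answer a negatively keyed item as if it were positively worded, the reverse-scored negative item correlates less with its positive twin, which is what allows the negative items to split off as a separate factor.

```python
import random

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def simulate(careless_rate, n=2000, seed=42):
    """Correlation between a positively keyed item and the reverse-scored
    version of its negatively keyed twin, when some respondents ignore
    the negation and answer as if the item were positively worded."""
    rng = random.Random(seed)
    pos, neg_rev = [], []
    for _ in range(n):
        trait = rng.randint(1, 5)                  # true attitude, 1-5 scale
        pos.append(trait)
        careless = rng.random() < careless_rate
        answer = trait if careless else 6 - trait  # careless: ignores negation
        neg_rev.append(6 - answer)                 # reverse-score negative item
    return corr(pos, neg_rev)
```

With no careless responders the two items correlate perfectly; with the paper's 10% figure the correlation is visibly attenuated (to roughly .8 under this toy model).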
Determining the number of common factors is one of the most important decisions which must be made in the application of factor analysis. Several different approaches and techniques are reviewed here along with associated strengths and weaknesses. It is argued that a combination of approaches will lead to the best judgment regarding the number of factors to retain. A computer program is available which presents the number of factors to retain as suggested by both discontinuity and parallel analyses. Utilization of the program removes the negative aspect associated with the use of each technique.
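One of the techniques reviewed, Horn's parallel analysis, retains a factor only when the observed eigenvalue exceeds the corresponding mean eigenvalue from random data of the same shape. A minimal numpy-based sketch (function name and retention rule shown are the common formulation, not the reviewed program itself):

```python
import numpy as np

def parallel_analysis(data, n_iter=100, seed=0):
    """Count factors whose observed correlation-matrix eigenvalues exceed
    the mean eigenvalues obtained from random normal data of the same
    shape (Horn's parallel analysis, mean-eigenvalue criterion)."""
    rng = np.random.default_rng(seed)
    n, k = data.shape
    # Observed eigenvalues, sorted descending.
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    # Average eigenvalues over n_iter random datasets of the same shape.
    rand = np.zeros(k)
    for _ in range(n_iter):
        sim = rng.standard_normal((n, k))
        rand += np.sort(np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False)))[::-1]
    rand /= n_iter
    return int(np.sum(obs > rand))
```

For data generated from a single common factor plus noise, the rule retains exactly one factor, matching the intuition the chapter argues for combining with other criteria.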