The Effect of Perceived Hedonic Quality
on Product Appealingness
Marc Hassenzahl
Department of Psychology, Darmstadt University of Technology
Germany
Usability can be broadly defined as quality of use. However, even this broad definition
neglects the contribution of perceived fun and enjoyment to user satisfaction and pref-
erences. Therefore, we recently suggested a model taking “hedonic quality” (HQ; i.e.,
non-task-oriented quality aspects such as innovativeness, originality, etc.) and the sub-
jective nature of “appealingness” into account (Hassenzahl, Platz, Burmester, & Leh-
ner, 2000).
In this study, I aimed to further elaborate and test this model. I assessed the user per-
ceptions and evaluations of 3 different visual display units (screen types). The results
replicate and qualify the key findings of Hassenzahl, Platz, et al. (2000) and lend fur-
ther support to the model’s notion of hedonic quality and its importance for subjective
judgments of product appealingness.
INTERNATIONAL JOURNAL OF HUMAN–COMPUTER INTERACTION, 13(4), 481–499
Copyright © 2001, Lawrence Erlbaum Associates, Inc.

Marc Hassenzahl is now at Darmstadt University of Technology. I am grateful to Manfred Wäger and Silke Gosibat from Siemens Corporate Technology—User Interface Design (CT IC 7) for collecting the data presented in this article and giving me the opportunity to analyze and report it. I also thank Jim Lewis for helpful methodological and editorial comments and Uta Sailer for many clarifying comments on an earlier draft of this article.

Requests for reprints should be sent to Marc Hassenzahl, Darmstadt University of Technology, Department of Psychology, Steubenplatz 12, 64293 Darmstadt, Germany. E-mail: hassenzahl@psychologie.tu-darmstadt.de

1. INTRODUCTION

Since the 1980s, usability as a quality aspect of products has become more and more important. Nevertheless, the exact meaning of the term usability remains fuzzy. There are at least two distinct perspectives on usability (Bevan, 1995). It can either be thought of as a narrow product-oriented quality aspect complementing, for example, reliability (i.e., freedom from error) and portability, or as a broad, general “quality of use”; in other words, “that the product can be used for its intended purpose in the real world” (Bevan, 1995, p. 350).

The quality of use approach defines the usability of a product as its efficiency and effectiveness, together with the satisfaction of the user in a given “context of use” (see Bevan & Macleod, 1994; International Organization for Standardization [ISO], 1998). Efficiency and effectiveness are product characteristics that can be objectively assessed, whereas satisfaction is the subjectively experienced positive or
negative attitude toward a given product (ISO, 1998). Taking a close look at the ac-
tual measurement of satisfaction, it appears that some current approaches may test
users’ recognition of design objectives rather than actual user satisfaction. For example, the Software Usability Measurement Inventory (SUMI; Kirakowski & Corbett, 1993; see also Bevan & Macleod, 1994) contains five subscales (Efficiency,
Helpfulness, Control, Learnability, and Positive Affect). Except for affect, all of
these aspects map onto the efficiency or effectiveness claim of quality in use. To
give a second example, the End User Computing Satisfaction Instrument (Doll &
Torkzadeh, 1988; Harrison & Rainer, 1996) describes satisfaction as a higher order
construct, including content and accuracy of the provided information, ease of use,
timeliness, and format of information. Again, all these subdimensions point at the
perceived efficiency or effectiveness of the product. Satisfaction, so this perspective
assumes, is the mere consequence of recognizing the quality designed “into” a
product, leading to a simple equation: If users perceive the product as effective and
efficient, they will be satisfied. Thus, assuring efficiency and effectiveness and
making it obvious to the user should guarantee satisfaction.
Studies from the Technology Acceptance literature suggest a more complex per-
spective. A study investigating the impact of perceived usefulness (i.e., usability
and utility) and perceived fun on usage of software products, and user satisfaction
in a work context (Igbaria, Schiffman, & Wieckowski, 1994) demonstrated an al-
most equal effect of perceived fun and perceived usefulness on system usage. Per-
ceived fun had an even stronger effect on user satisfaction than perceived useful-
ness. The authors concluded that “fun features” (e.g., sounds, games, cartoons)
might encourage people to work with new software products.
An experiment by Mundorf, Westin, and Dholakia (1993) partly confirmed this
conclusion. They analyzed the effect of so-called hedonic components (i.e., non-
task-related “fun factors” such as music) on enjoyment and intention of using a
screen-based information system. They varied some of the most basic hedonic
components: color versus monochrome, graphics versus nongraphics, and music
versus nonmusic. The inclusion of hedonic components increased enjoyment as
well as usage intentions. Hence, fun or enjoyment seems to be an aspect of user ex-
perience that contributes to overall satisfaction with a product. Moreover, fun or
enjoyment may be stimulated by product features that do not necessarily increase
user efficiency and effectiveness—or that even partially hamper those quality as-
pects (Carroll & Thomas, 1988). Thus, even the broad definition of quality of use as
previously presented omits the important aspect of hedonic quality. This calls for
an expanded concept of usability.
1.1. Appealing Products: A Suggested Research Model
Hassenzahl, Platz, et al. (2000) recently suggested and tested a research model that
addresses ergonomic (usability) and hedonic aspects as key factors for appealing—
and thus satisfying—products. It consists of three separate layers: (a) objective
product quality (intended by the designers), (b) subjective quality perceptions and
evaluations (cognitive appraisal by the users), and (c) behavioral and emotional
consequences (for the user).
Hassenzahl, Platz, et al. (2000) suggest that a product might be described by a
large number of different quality dimensions (e.g., predictability, controllability,
etc.) grouped into two distinct quality aspects: ergonomic quality and hedonic
quality. Ergonomic quality (EQ) refers to the usability of the product, which ad-
dresses the underlying human need for security and control. The more EQ a
product has, the easier it is to reach task-related goals with effectiveness and effi-
ciency. EQ focuses on goal-related functions or design issues. Hedonic quality
(HQ) refers to quality dimensions with no obvious—or at least a second order—
relation to task-related goals such as originality, innovativeness, and so forth. HQ
is a quality aspect addressing human needs for novelty or change and social
power (status) induced, for example, by visual design, sound design, novel inter-
action techniques, or novel functionality. A product can possess more or less of
these two quality aspects.
To have an effect, users must first perceive these different quality aspects. A
product intended to be very clear in presenting information inevitably fails if users
cannot perceive this intended clarity. The correspondence of intended and appar-
ent (perceived) quality of a product can be low (Kurosu & Kashimura, 1995), indi-
cating differences in how designers (or usability experts) think of a product and
how the users perceive it. If the major design goal is the efficiency or effectiveness
of the product, objective usability is the appropriate evaluation target (e.g., oper-
ationalized by performance measures). By contrast, if the goal is to design a rich
and appealing user experience (Laurel, 1993) with a product, focusing on individu-
als’ perceptions (i.e., subjective usability, user-perceived quality) would be more
appropriate (Leventhal et al., 1996). This justifies a separate model layer that explic-
itly addresses user perceptions.
On the basis of their perceptions, users may form a judgment of the product’s
appealingness (APPEAL). In other words, the user may weight and combine the
perceived EQ and HQ into a single judgment. APPEAL (or a lack thereof) manifests itself in a global judgment about the product (e.g., good vs. bad).
Perception and its subsequent evaluation are similar to the concept of cognitive
appraisal. This appraisal can have two different outcomes. On one hand, it may
lead to behavioral consequences, such as increased usage frequency, increased
quality of work results, or decreased learning time. On the other hand, it may lead
to emotional consequences, such as feelings of enjoyment or fun or satisfaction (or
frustration or distress or disappointment). To view emotions as a consequence of
the user’s cognitive appraisal is generally consistent with numerous emotion theo-
ries (see Ekman & Davidson, 1994, for an overview; Ortony, Clore, & Collins, 1988,
for a more specific example). Behavioral and emotional consequences may be re-
lated to each other. For example, Igbaria et al. (1994) found that perceived fun and
actual usage of software systems were correlated.
The proposed model puts emotions such as fun, satisfaction, joy, or anger slightly
out of focus, viewing them as consequences of a cognitive appraisal process. The
model’s primary concern is determining whether HQ perceptions are a valuable
road to increase a product’s APPEAL. However, it is likely that a product that emphasizes HQ may elicit different emotions from a product that emphasizes EQ.
Figure 1 summarizes the key elements of the proposed research model.
In a study directed at testing parts of the research model, Hassenzahl, Platz, et al.
(2000) concentrated on the cognitive appraisal stage of the model. Specifically, they
investigated (a) whether users perceive EQ and HQ independently and (b) how the
judgment of APPEAL depends on the perceptions of the two different quality
aspects:
1. Robinson (1993, as cited in Leventhal et al., 1996) stated that the artifacts
people choose to use are interpretable as statements in an ongoing “dialog” that
people have with their environment. He suggested that the artifact’s quality as-
pects, which satisfy the requirements of that dialog, may be independent from the
artifact’s usability and utility. Taking this assumption into account, one may expect
independent perception of HQ and EQ. Indeed, a principal components analysis
(PCA) of perceived quality dimensions in the form of a semantic differential (e.g.,
simple–complex, dull–exciting) revealed two independent components consistent
with the a priori proposed EQ and HQ groups. Internal consistency (Cronbach’s α)
of both EQ and HQ proved to be very high (> .90). This confirmed the assumption
that users perceive HQ and EQ consistently, distinguishing task-related aspects
from non-task-related aspects.
2. It is possible that although individuals may perceive HQ, they do not consider
it important for a judgment of APPEAL. In other words, users may find a product
innovative (possessing HQ) but may be indifferent about it or find other aspects
much more important. Hassenzahl, Platz, et al. (2000) measured APPEAL in the
form of a semantic differential (e.g., good–bad; Cronbach’s α = .95). To determine
the contribution of HQ and EQ to APPEAL, they performed a regression analysis.
The results showed an almost equal contribution of HQ and EQ to APPEAL. The data indicated a simple averaging model in which EQ and HQ have equal weights. This finding is consistent with the averaging model of information integration theory (Anderson, 1981), which proposes the integration of different pieces of information by an averaging process into a multiattributive judgment (such as APPEAL).

FIGURE 1 Research model.
To conclude, users perceive EQ and HQ aspects independently, and these quali-
ties appear to contribute equally to the overall judgment of a product’s APPEAL.
These findings support the cognitive appraisal layer of the research model.
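The averaging rule borrowed from information integration theory is simple enough to state directly. The following sketch is illustrative only; the function name, the scale range, and the equal default weights are assumptions consistent with the findings above, not the authors' code:

```python
def predict_appeal(eq, hq, w_eq=0.5, w_hq=0.5):
    """Weighted-averaging integration (Anderson, 1981) of two quality
    perceptions into a single multiattributive judgment.

    eq, hq: mean scale values on the -3..+3 semantic differential.
    Equal weights reproduce the simple averaging model described above.
    """
    return (w_eq * eq + w_hq * hq) / (w_eq + w_hq)

# A product perceived as highly ergonomic (EQ = 2.0) but only moderately
# hedonic (HQ = 1.0) would be predicted to land midway:
print(predict_appeal(2.0, 1.0))  # -> 1.5
```

Under this rule, a gain in HQ can compensate for a loss in EQ (and vice versa), which is exactly why hedonic quality matters for APPEAL even when usability is held constant.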
1.2. Aims of This Study
Hassenzahl, Platz, et al. (2000) also discussed some limitations of their study: First, a
lack of ecological validity is apparent. They used special stimuli (software proto-
types) to induce EQ and HQ perceptions. The observed effects may heavily depend
on the stimulus material provided, limiting generalizability. Furthermore, partici-
pants had to perceive and evaluate stimuli (software prototypes) that represented a
very simple task from a domain that was not relevant for the participants’ day-
to-day life. In a sense, the whole study was more or less context free, which may
have led to an artificial situation.
Second, the participants had an interaction time of only about 2 min with each
stimulus (software prototype). This very short interaction time may have led to
superficial cognitive processing of the stimuli, which in turn may have influ-
enced the results.
Third, although internal consistency and factorial validity are important indica-
tors of the reliability of the used semantic differential, the question of whether the
scales really measure the hypothesized quality aspects and the user’s evaluation
(i.e., the construct validity of the scales) remains unanswered.
To provide further information about the validity of the research model for ap-
pealing products, I examine in this article some of those limitations and some addi-
tional aspects. Specifically, I address the following research questions:
Question 1 (Q1). Is it possible to replicate the results of Hassenzahl, Platz, et
al. (2000) with a real product from a domain that matches the experiences and con-
cerns of the participants and provides a longer interaction time to build perceptions
and judgment? A replication of key findings under different conditions would lend
support to the reliability and validity of the research model.
The comparison of three different screen types—namely, a standard cathode-ray tube (CRT), a liquid crystal display (LCD), and a so-called virtual screen (VS; i.e., an image projected onto the desk by a ceiling-mounted projector)—provided the opportunity to
study different but existing products that serve the identical purpose of displaying
an image. An evaluation of real computer displays has more relevance than the
evaluation of artificial software. Moreover, this type of evaluation involves differ-
ent tasks and an extended interaction time relative to Hassenzahl, Platz, et al.
(2000). For these reasons, this study should broaden the experiential basis for the
perception of EQ and HQ and the judgment of APPEAL.
Question 2 (Q2). Do EQ, HQ, and APPEAL scales actually measure the intended constructs? Evidence of this would contribute to the validity of the scales
and the research model.
To address Q2, an independent assessment of constructs similar to EQ, HQ, or
APPEAL is necessary for comparison. In a usability study, Hassenzahl (2000)
found that individuals who spent a higher proportion of task-completion time
with usability problem-handling expended more mental effort, defined as the
amount of energy an individual has to activate to meet the perceived task de-
mands (Arnold, 1999). Individuals who experience many or hard-to-overcome
usability problems or both (indicated by a higher rate of expended mental effort)
should perceive the system as considerably less ergonomic and should rate the
system as having less EQ. Furthermore, a higher expenditure of mental effort
goes together with a reduced liking of a given system (Arnold, 1999). Given that
liking and appeal are similar concepts, individuals with a higher expenditure of
mental effort should rate the system to be less appealing (APPEAL). The percep-
tions of HQ should be independent from the expenditure of mental effort due to
HQ’s non-task-related character.
Question 3 (Q3). Is it possible to provide an a priori prediction of the relative
amount of user-perceived EQ and HQ per product from a given set of products?
Moreover, can APPEAL be predicted on the basis of an averaging rule as assumed
by Hassenzahl, Platz, et al. (2000)? A successful prediction of the screen types’ EQ,
HQ, and APPEAL would support the construct validity of the research model.
An informal inspection of the different screen types led to the following predic-
tions. From an ergonomic perspective, the VS had some apparent drawbacks com-
pared to the other two screen types. First, the image on the desk was relatively
large, covering a significant amount of desk space. Users could perceive this as im-
practical for daily use. Second, the image was projected flatly onto the desk, mak-
ing the required viewing angle awkward. In a normal sitting posture, users would
experience image distortions. Most likely, they would compensate for this with an
inconvenient, forward-bending seated posture. Based on these two drawbacks, it is
likely that users would perceive the VS as having less EQ than the other two
screens. Even though the CRT and LCD had different underlying technologies and
screen sizes (19 in. [48.26 cm] and 15 in. [38.1 cm], respectively), the perceptible dif-
ferences among them were minimal with regard to ergonomic variables such as
reading speed, screen clarity, brightness, and contrast.
From a hedonic perspective, users should perceive the CRT as less hedonic than
the other two screen types. Almost every computer user will have worked with and
be familiar with a CRT. This is probably not true for the flat LCD and is surely not
true for the VS. The CRT also lacks the potential gain in status derived from having
a “smart” LCD or a “cool” VS sitting on one’s desk. If this is true, users should per-
ceive the CRT as having less HQ than the other two screens. If the APPEAL of the
product is a consequence of perceptions of EQ and HQ, the LCD should have a
higher APPEAL than the other two screens.
2. METHOD
2.1. Participants
Fifteen individuals (6 women and 9 men) participated in the study. Most partici-
pants were Siemens employees from Corporate Technology, Munich, Germany. All
participants worked on visual display terminals regularly as a part of their job (e.g.,
secretaries, multimedia or Web designers, students). The sample’s mean age was
35.4 years (ranging from 22 to 56). Computer expertise (measured with a five-item
questionnaire) varied from moderate (10 participants) to high (5 participants).
2.2. Screen Types
The three tested screen types were (a) a standard screen with a 19-in. CRT (Siemens MCM 1902); (b) a 15-in. LCD (Siemens MCF 3811 TA); and (c) a 24-in. VS, that is, a projection of the screen from the ceiling flatly onto the user’s desk (Sharp Vision LC).
2.3. Measures
A semantic differential (see Hassenzahl, Platz, et al., 2000) was used to measure per-
ceived EQ, perceived HQ, and APPEAL of the screen type. It consists of 23
seven-point scale items with bipolar verbal anchors (see Table 1). All verbal anchors
were originally in German.
The Subjective Mental Effort Questionnaire (SMEQ; Arnold, 1999; German
translation by Eilers, Nachreiner, & Hänecke, 1986; Zijlstra, 1993; Zijlstra & van
Doorn, 1985) was applied to measure the expended mental effort. The SMEQ is a
unidimensional rating scale ranging from 0 to 220. Different verbal anchors such as
hardly effortful or very effortful facilitate the rating process.
2.4. Procedure
The study took place in the usability laboratory of Siemens Corporate Technology.
Each participant came separately into the laboratory. After an introduction and in-
structions by the experimenter, each participant sat at a desk adapted from a typical
office workplace. Each participant worked through three different tasks (a mah-
jongg game, an Internet search task, and a text editing task) with each screen type
(CRT, LCD, and VS). The tasks covered many different aspects of working with a
computer. Participants used the screen types in random order. After finishing the
three tasks with one screen type, each participant completed the SMEQ and the se-
mantic differential, then repeated the procedure with the remaining screen types.
Questionnaires concerning computer expertise and general demographics and a
short interview completed the session. A whole session took from 2 to 3 hr; the inter-
action time with a single screen type ranged from 30 to 45 min.
3. RESULTS
3.1. Replication of Hassenzahl, Platz, et al. (2000; Q1)
Scale Validity: Principal Components Analysis
A slope analysis (Coovert & McNelis, 1988) of the eigenvalues from a PCA of the
EQ and HQ items indicated a two-component solution. The varimax-rotated solu-
tion had a reasonably clear structure with HQ items loading on the first component
and EQ items loading on the second component (Table 2, OVERALL). Together, the two components accounted for approximately 59% of the total variance. A Kaiser–Meyer–Olkin (Kaiser & Rice, 1974) measure of sampling adequacy was .67, exceeding the required minimum of .5.

Table 1: Bipolar Verbal Scale Anchors

Scale Item   Left Anchor      Right Anchor
EQ 1         Comprehensible   Incomprehensible
EQ 2         Supporting       Obstructing
EQ 3         Simple           Complex
EQ 4         Predictable      Unpredictable
EQ 5         Clear            Confusing
EQ 6         Trustworthy      Shady
EQ 7         Controllable     Uncontrollable
EQ 8         Familiar         Strange
HQ 1         Interesting      Boring
HQ 2         Costly           Cheap
HQ 3         Exciting         Dull
HQ 4         Exclusive        Standard
HQ 5         Impressive       Nondescript
HQ 6         Original         Ordinary
HQ 7         Innovative       Conservative
APPEAL 1     Pleasant         Unpleasant
APPEAL 2     Good             Bad
APPEAL 3     Aesthetic        Unaesthetic
APPEAL 4     Inviting         Rejecting
APPEAL 5     Attractive       Unattractive
APPEAL 6     Sympathetic      Unsympathetic
APPEAL 7     Motivating       Discouraging
APPEAL 8     Desirable        Undesirable

Note. Verbal anchors of the differential are translated from German. EQ = ergonomic quality; HQ = hedonic quality; APPEAL = appealingness.
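The percentages of explained variance reported for the two-component solution follow directly from the eigenvalues: with p standardized items, the total variance is p, so each component's share is its eigenvalue divided by p. A quick check, using the OVERALL eigenvalues from Table 2 (this is an after-the-fact verification, not the study's analysis code):

```python
# Each component's share of variance = eigenvalue / number of items.
# The 15 EQ and HQ items (8 + 7) were standardized, so total variance = 15.
eigenvalues = {"HQ": 4.89, "EQ": 4.01}  # OVERALL solution, Table 2
n_items = 15

shares = {name: ev / n_items for name, ev in eigenvalues.items()}
total = sum(shares.values())
# shares: HQ about .326, EQ about .267; together about .593, i.e. the
# "approximately 59%" reported in the text (small deviations from the
# per-component table figures reflect rounding of the eigenvalues).
```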
PCA requires a sample of independent measurements, but the present sample
violates this requirement. Each participant evaluated each of the three screens and
thus contributed three dependent measurements. This could reduce error variance,
making the correlations in the matrix submitted to the PCA appear stronger than
they actually were, which in turn may lead to an overestimation of explained vari-
ance or an unjustifiably clear component structure.
To check for this possible bias, I computed separate PCAs for each screen type.
Because the model under test has two components (EQ and HQ), I restricted the so-
lutions to an extraction of two components. Table 2 (CRT, LCD, VS) shows the re-
sulting varimax-rotated solutions. The variance explained ranged from 50% to 60%
of the total variance. Even in the worst case (LCD = 50.5%), the percentage of ex-
plained variance of the separate solution did not substantially differ from the
OVERALL solution (OVERALL = 59.3%). Furthermore, each separate PCA more or
less replicated the structure found in the OVERALL analysis, with HQ items con-
sistently loading on one component and EQ items on the other. The only two obvi-
ous exceptions were the strong loading (.779) of the EQ item “supporting–obstruct-
ing” on the HQ component in the CRT analysis and the medium loading (.556) of
the HQ item “innovative–conservative” on the EQ component of the VS analysis.
A PCA of the APPEAL items revealed only one major component, which ac-
counted for about 68% of the variance. A Kaiser–Meyer–Olkin measure of sample
adequacy was .83. Separate PCAs for each screen type all revealed one major
component that accounted for 78% (CRT), 69% (LCD), and 54% (VS) of the vari-
ance, respectively.
Internal Consistency and Characteristics of the Single Scales
Table 3 summarizes the internal consistency (Cronbach’s α) and general charac-
teristics of each scale. Internal consistency was satisfactory. These results justify cal-
culating EQ, HQ, and APPEAL values for each participant by averaging the single
scale items.
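For readers who want to verify such figures on their own data, Cronbach's α has a closed form: α = k/(k − 1) · (1 − Σ item variances / variance of the item sums). A self-contained sketch; the ratings below are invented for illustration and are not the study's data:

```python
from statistics import mean, variance

def cronbach_alpha(rows):
    """Cronbach's alpha for `rows` = one list of item ratings per respondent.
    Uses sample variances (ddof = 1), as is conventional."""
    k = len(rows[0])
    items = list(zip(*rows))                      # transpose to per-item columns
    item_var_sum = sum(variance(col) for col in items)
    total_var = variance([sum(r) for r in rows])  # variance of scale sums
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Invented ratings (5 respondents x 4 items of one scale):
ratings = [[3, 2, 3, 3],
           [1, 1, 2, 1],
           [2, 2, 2, 3],
           [0, 1, 1, 0],
           [3, 3, 2, 3]]
alpha = cronbach_alpha(ratings)      # roughly .92 for these data
scores = [mean(r) for r in ratings]  # per-respondent scale values, as above
```

The last line mirrors the procedure described in the text: once internal consistency is satisfactory, each participant's EQ, HQ, and APPEAL value is simply the mean of the corresponding items.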
Predicting the Participant’s Judgment of Appealingness
Both EQ and HQ should affect the participant’s judgment of APPEAL. To check
this assumption, I conducted a regression analysis of EQ and HQ on APPEAL over
all screen types (Table 4, OVERALL) and for each screen type separately (Table 4,
CRT, LCD, VS).
Over all screen types, EQ and HQ effectively predicted APPEAL. Consistent
with the assumptions, EQ and HQ contributed almost equally to APPEAL (see
standardized regression coefficients, β). However, for the single screen types the results differed. Unlike the other screen types, the LCD showed a low (but still significant) multiple correlation (adjusted R²). For the CRT, EQ and HQ contributed equally to APPEAL, whereas for the LCD, APPEAL was based more on EQ. For the VS, HQ had a more pronounced role. Although the inspection of regression coefficients (b) implied differences in the contribution of either EQ or HQ to APPEAL, depending on the screen type, the 95% confidence intervals for the regression coefficients strongly overlapped. Thus, the data do not allow for a definite rejection of the equal contribution hypothesis.

Table 2: Factorial Validity of Ergonomic Quality (EQ) and Hedonic Quality (HQ) Over All Screen Types and for Each Screen Type Separately

Principal Components With Varimax Rotation

OVERALL CRT LCD VS
Scale Item HQ EQ HQ EQ HQ EQ HQ EQ
EQ
Comprehensible–incomprehensible .675 .667 .715 .534
Supporting–obstructing .514 .779 .480 –.654
Simple–complex .759 .648 .754 .857
Predictable–unpredictable –.483 .504 .716 .786
Clear–confusing .740 .707 .717 .712
Trustworthy–shady .802 .787 .798 .753
Controllable–uncontrollable .887 .827 .892 .914
Familiar–strange –.418 .636 .589 –.684 .701
HQ
Interesting–boring .857 .807 –.409 .652 .419
Costly–cheap .519 .408 .609 .782
Exciting–dull .842 .728 .847
Exclusive–standard .855 .826 .757 .729
Impressive–nondescript .612 .594 –.659 .645
Original–ordinary .882 .889 .859 .755
Innovative–conservative .877 .884 –.511 .556
Eigenvalue 4.89 4.01 5.23 3.77 3.01 4.57 3.52 4.76
Variance explained (%) 32.57 26.76 34.88 25.15 20.04 30.45 23.48 31.71

Note. CRT = cathode-ray tube; LCD = liquid crystal display; VS = virtual screen. OVERALL N = 45 (15 participants × 3 screen types); CRT, LCD, and VS N = 15. Component loadings < .40 are omitted.
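The regression reported in Table 4 is ordinary least squares with two predictors. As an illustrative sketch (not the original analysis; the function name and toy data are invented), the unstandardized coefficients can be obtained by solving the normal equations (XᵀX)b = Xᵀy:

```python
def ols_two_predictors(eq, hq, appeal):
    """OLS fit of appeal ~ intercept + eq + hq via the normal equations,
    solved by Gauss-Jordan elimination. Returns [intercept, b_eq, b_hq]."""
    n = len(appeal)
    X = [[1.0, e, h] for e, h in zip(eq, hq)]
    # Build X'X (3x3) and X'y (3x1).
    xtx = [[sum(X[i][r] * X[i][c] for i in range(n)) for c in range(3)]
           for r in range(3)]
    xty = [sum(X[i][r] * appeal[i] for i in range(n)) for r in range(3)]
    # Gauss-Jordan elimination (no pivoting; fine for well-conditioned toys).
    for col in range(3):
        pivot = xtx[col][col]
        for j in range(col, 3):
            xtx[col][j] /= pivot
        xty[col] /= pivot
        for row in range(3):
            if row != col and xtx[row][col]:
                f = xtx[row][col]
                for j in range(col, 3):
                    xtx[row][j] -= f * xtx[col][j]
                xty[row] -= f * xty[col]
    return xty

# Toy data constructed so that APPEAL = 0.5*EQ + 0.5*HQ exactly:
eq     = [1.0, 2.0, 3.0, 0.0]
hq     = [0.0, 2.0, 1.0, 3.0]
appeal = [0.5, 2.0, 2.0, 1.5]
intercept, b_eq, b_hq = ols_two_predictors(eq, hq, appeal)
# Recovers b_eq = b_hq = 0.5, the equal-contribution pattern discussed above.
```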
3.2. Scale Validity: Correlation of SMEQ With EQ, HQ, and APPEAL (Q2)
It is reasonable to hypothesize that a high SMEQ value indicates usability problems experienced while performing the tasks. Participants should attribute a certain proportion of these problems to the EQ of the tested screen type, resulting in a perception of lower EQ. A highly significant negative correlation of SMEQ with EQ (r = –.61, p < .01, two-tailed; N = 45) supported this assumption. Furthermore, experiencing usability problems (manifest in high SMEQ values) should lead to reduced product APPEAL, an assumption supported by a highly significant negative correlation of SMEQ and APPEAL (r = –.58, p < .01, two-tailed; N = 45). Finally, if HQ is a perception that is distinct and independent from perceptions of EQ, HQ should not correlate with experienced usability problems (indicated by high SMEQ values). In other words, experiencing usability problems should not affect a product’s perceived HQ. Consistent with this assumption, the correlation between SMEQ and HQ was almost zero (r = .01, ns, two-tailed; N = 45).

Table 3: Internal Consistency and Scale Characteristics

Scale    Cronbach’s α   M       SD      Minimum   Maximum
EQ       .83            1.37    0.96    –1.75     2.88
HQ       .90            0.60    1.32    –2.43     2.86
APPEAL   .93            1.09    1.32    –2.63     3.00
SMEQ     —a             69.20   51.80   0         185

Note. N = 45 (15 participants × 3 screen types). EQ = ergonomic quality; HQ = hedonic quality; APPEAL = judgment of appealingness; SMEQ = Subjective Mental Effort Questionnaire (Arnold, 1999).
a The SMEQ is a single-item measurement tool; therefore, no Cronbach’s α can be computed.

Table 4: Regression Analysis of EQ and HQ on APPEAL Over All Screen Types and for Each Screen Type Separately

Criterion          Adjusted R²   Predictor   β     b      SE b   95% CI for b
APPEAL (OVERALL)   .62**         EQ***       .62   .85    .13    [.59, 1.11]
                                 HQ***       .61   .61    .09    [.42, .80]
APPEAL (CRT)       .70**         EQ**        .53   1.01   .29    [.38, 1.65]
                                 HQ**        .55   .74    .20    [.29, 1.18]
APPEAL (LCD)       .33*          EQ*         .71   .91    .31    [.23, 1.59]
                                 HQa         .47   .59    .30    [–.08, 1.25]
APPEAL (VS)        .61**         EQ*         .40   .41    .17    [.03, .79]
                                 HQ**        .63   .79    .22    [.32, 1.26]

Note. OVERALL N = 45 (15 participants × 3 screen types); CRT, LCD, and VS N = 15. EQ = ergonomic quality; HQ = hedonic quality; APPEAL = judgment of appealingness; CI = confidence interval; CRT = cathode-ray tube; LCD = liquid crystal display; VS = virtual screen.
a p < .10. * p < .05. ** p < .01. *** p < .001, two-tailed.
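The SMEQ correlations in this section are ordinary Pearson product-moment coefficients. A minimal sketch for readers replicating this kind of validity check; the data below are invented and merely mimic the direction of the SMEQ-EQ result:

```python
from math import sqrt
from statistics import mean

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length sequences."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    ssx = sum((a - mx) ** 2 for a in x)
    ssy = sum((b - my) ** 2 for b in y)
    return cov / sqrt(ssx * ssy)

# Invented example: high mental-effort ratings going together with low EQ
# ratings yield a strong negative correlation, as in the SMEQ-EQ finding.
smeq = [10, 40, 80, 120, 160]
eq   = [2.5, 2.0, 1.0, 0.0, -1.5]
r = pearson_r(smeq, eq)   # strongly negative, close to -1 for these data
```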
3.3. A Priori Prediction of EQ, HQ, and APPEAL (Q3)
Figure 2 shows the mean scale values of EQ, HQ, and APPEAL for each screen type
(CRT, LCD, VS).
I performed three separate repeated measurements analyses of variance
(ANOVAs) with screen type (CRT, LCD, VS) as a within-subjects factor and each
scale as a dependent variable (EQ, HQ, APPEAL). For each scale there was a signifi-
cant main effect of screen type. Detailed analyses (by planned comparisons—re-
peated method) showed that participants perceived VS as less ergonomic (EQ)
than LCD, with no differences between CRT and LCD. They perceived CRT as less
hedonic (HQ) than LCD, with no differences between LCD and VS. Furthermore,
they rated LCD as more appealing (APPEAL) than CRT and VS (see Table 5).
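The repeated measures F statistics in Table 5 follow from the usual within-subjects decomposition of sums of squares into condition, subject, and residual parts. A compact sketch with invented data (the Greenhouse-Geisser correction applied in Table 5 is not included here):

```python
from statistics import mean

def rm_anova_f(data):
    """One-way repeated-measures ANOVA F for `data[subject][condition]`.
    Returns (F, df_effect, df_error). No sphericity correction."""
    n = len(data)        # subjects (here: 15 participants)
    k = len(data[0])     # conditions (here: 3 screen types)
    grand = mean(v for row in data for v in row)
    cond_means = [mean(row[j] for row in data) for j in range(k)]
    subj_means = [mean(row) for row in data]
    ss_cond = n * sum((m - grand) ** 2 for m in cond_means)
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_total = sum((v - grand) ** 2 for row in data for v in row)
    ss_error = ss_total - ss_cond - ss_subj   # condition x subject residual
    df_cond, df_error = k - 1, (n - 1) * (k - 1)
    return (ss_cond / df_cond) / (ss_error / df_error), df_cond, df_error

# Three invented participants rating three conditions:
f_value, df1, df2 = rm_anova_f([[1, 2, 3],
                                [2, 3, 4],
                                [1, 3, 5]])
```

Removing the between-subject sum of squares from the error term is what distinguishes this from a between-groups ANOVA and gives the design its power with only 15 participants.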
FIGURE 2 Mean scale values (EQ, HQ, APPEAL) for each screen type (CRT, LCD,
VS).
4. DISCUSSION
4.1. Replication of Hassenzahl, Platz, et al. (2000; Q1)
Q1 addressed the replication of the results of a previous study (Hassenzahl, Platz, et
al., 2000) under very different conditions. In contrast to Hassenzahl, Platz, et al., this
study included real products (computer screens) relevant to the participants, in-
creasing the ecological validity of the results. Moreover, the three different tasks and
the increased amount of interaction time gave participants the opportunity to build
up a reasonable experiential basis for their perceptions and judgments.
PCAs
The PCAs indicated satisfactory component validity and internal consistency of
EQ, HQ, and APPEAL. EQ and HQ are two distinctly perceived groups of quality
dimensions respectively concerned with task-oriented or non-task-oriented qual-
ity aspects of the product. The results of the PCA of APPEAL items supported the
notion of APPEAL being a unidimensional construct.
The OVERALL PCA of EQ and HQ items suffered from lack of independence
of measurement. In an attempt to correct for this, I conducted separate analyses
for each screen. However, with a sample size of 15 participants and 15 variables,
the sample size to variables ratio of these analyses was only 1:1, which is lower
than even the least conservative recommendations (e.g., Gorsuch, 1997, recom-
mends a sample size to variables ratio of 3:1). PCAs with a low sample size to
variables ratio tend to produce unstable component structures. For this reason, it
is encouraging that each separate PCA resulted in components that explained a
Effect of Perceived Hedonic Quality 493
Table 5: Results of Repeated Measurements Analyses of Variance
for Each Scale (EQ, HQ, APPEAL) and Planned Comparisons

Scale     Effect          F       df       Significance
EQ        Main effect     6.70    1.425a   .011
          CRT vs. LCD     0.17    1        ns
          LCD vs. VS     11.87    1        .004
HQ        Main effect    22.20    1.568a   .000
          CRT vs. LCD    34.75    1        .000
          LCD vs. VS      2.56    1        ns
APPEAL    Main effect     5.70    1.452a   .017
          CRT vs. LCD     7.75    1        .015
          LCD vs. VS     17.39    1        .001

Note. EQ = ergonomic quality; HQ = hedonic quality; APPEAL = judgment of appealingness; CRT =
cathode-ray tube; LCD = liquid crystal display; VS = virtual screen.
a Greenhouse-Geisser corrected.
substantial portion of the total variance and, apart from minor exceptions,
showed a consistent component structure.
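A small simulation makes this instability concrete: with as many variables as participants (15:15), a PCA extracts a seemingly substantial first component even from pure, uncorrelated noise, whereas a comfortable sample size to variables ratio does not. The numbers are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(2)
p = 15  # number of items, as in the analyses above

def mean_top_eigenvalue(n, reps=200):
    """Mean largest eigenvalue of sample correlation matrices computed from
    n observations on p independent standard-normal variables (pure noise)."""
    tops = []
    for _ in range(reps):
        X = rng.normal(size=(n, p))
        R = np.corrcoef(X, rowvar=False)
        tops.append(np.linalg.eigvalsh(R)[-1])
    return float(np.mean(tops))

small = mean_top_eigenvalue(n=15)    # 1:1 sample size to variables ratio
large = mean_top_eigenvalue(n=500)   # comfortable ratio
```

At a 1:1 ratio the apparent leading component of pure noise is several times larger than at a generous ratio, so component structures found at such ratios must be interpreted with caution, and replicated consistency across separate analyses (as observed here) is the more meaningful signal.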
The EQ items “predictable–unpredictable” and “familiar–strange” showed a
moderate negative loading on the HQ component (–.483 and –.418, respectively).
This indicates that a perception of a predictable, familiar product—two positive at-
tributes from the usability perspective—goes together with a reduced perception
of HQ. Carroll and Thomas (1988) pointed out that “ease of use” (i.e., EQ aspects)
and “fun of use” (i.e., HQ aspects) may not necessarily complement each other.
From a design perspective they may even be antagonistic, exclusive design goals.
To make a product hedonic may only be possible by sacrificing part of its EQ and
vice versa. Product designers should realize that HQ and EQ may be mutually ex-
clusive concepts that they must bring into balance.
To conclude, the PCA presented herein is generally consistent with the preced-
ing analyses. Nevertheless, it is important to replicate this analysis with a larger
number of independent samples. A larger sample size would permit the use of con-
firmatory factor analysis or even structural equation methods to explicitly check
the assumed relations among latent factors in the model.
Regression Analyses
The successful regression analysis of EQ and HQ on APPEAL over all screen
types lent support to the notion that EQ and HQ perceptions equally contribute
to the judgment of a product’s APPEAL and tentatively supported the model of
cognitive appraisal with its distinction between perception and evaluation. Sub-
jective APPEAL appears to be a judgment that integrates different perceptions
into a global construct. As proposed by information integration theory (Ander-
son, 1981; see also Dougherty & Shanteau, 1999), this might be the result of a cog-
nitive averaging process. Such an averaging process implies, for example, that in-
creased EQ can compensate for a lack of HQ with regard to the subjective
evaluation of a product’s APPEAL.
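The averaging account can be made concrete with a short sketch using invented data: if APPEAL is formed by (noisily) averaging the EQ and HQ perceptions, a linear regression of APPEAL on EQ and HQ should recover roughly equal weights of about .5:

```python
import numpy as np

# Invented data only: APPEAL constructed as the average of EQ and HQ
# plus judgment noise, then recovered by ordinary least squares.
rng = np.random.default_rng(3)
n = 120
eq = rng.normal(4.5, 1.0, size=n)
hq = rng.normal(4.0, 1.0, size=n)
appeal = 0.5 * eq + 0.5 * hq + rng.normal(0.0, 0.3, size=n)

X = np.column_stack([np.ones(n), eq, hq])      # intercept, EQ, HQ
coef, *_ = np.linalg.lstsq(X, appeal, rcond=None)
b0, b_eq, b_hq = coef                          # b_eq and b_hq land near .5
```

In the separate per-screen analyses reported above, the analogous weights were unequal (EQ heavier for the LCD, HQ heavier for the VS), which in this framework corresponds to a weighted rather than an equal-weight average.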
However, the regression analyses done separately for each screen type render a
picture more complicated than a simple cognitive integration process. For the CRT,
APPEAL was almost the numerical average of EQ and HQ. For the LCD, EQ affected
the judgment of APPEAL more than HQ, and vice versa for the VS. This effect may
be due to the participants' familiarity with the screen types. Participants had
more experience with CRTs than with LCDs or VSs. In contrast to familiar objects,
individuals may judge unfamiliar objects on the basis of particularly salient
object features. Thus, the high image quality of the LCD and the surprising
novelty of the VS may have given greater weight to EQ and HQ, respectively.
4.2. Scale Validity: Correlation of SMEQ With EQ, HQ, and APPEAL (Q2)
Q2 addressed the topic of scale validity. I assessed construct validity of EQ, HQ, and
APPEAL by correlating each scale with a measure of SMEQ. As expected, EQ and
APPEAL values were greater when participants experienced less severe usability
problems (as indicated by a low SMEQ value). This lends support to the view that
the EQ scale measures perceived (experienced) EQ and that the occurrence of us-
ability problems during task execution lowers APPEAL. The lack of correlation of
HQ with SMEQ points to the ability of the two scales (EQ and HQ) to discriminate
between the distinct constructs.
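The validity pattern described here (a negative correlation of SMEQ with EQ, essentially none with HQ) can be expressed as a simple correlational check. The data below are invented for illustration:

```python
import numpy as np

# Invented data: higher experienced mental effort (SMEQ) lowers perceived
# EQ, while HQ is generated independently of effort.
rng = np.random.default_rng(4)
n = 90
smeq = rng.normal(50.0, 15.0, size=n)                  # higher = more effort
eq = 7.0 - 0.05 * smeq + rng.normal(0.0, 0.8, size=n)  # effort lowers perceived EQ
hq = rng.normal(4.0, 1.0, size=n)                      # unrelated to effort

r_eq = np.corrcoef(smeq, eq)[0, 1]   # clearly negative
r_hq = np.corrcoef(smeq, hq)[0, 1]   # near zero
```

Under this construction r_eq comes out clearly negative and r_hq near zero, which is the discriminant pattern the two scales showed against SMEQ.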
However, validation of the scales and their underlying constructs is far from
complete. First, the correspondence of EQ to other, more comprehensive usability
evaluation questionnaires, such as the ISONORM 9241/10 (e.g., Prümper, 1999) or
the IsoMetrics (Gediga, Hamborg, & Düntsch, 1999), deserves study. Second, experi-
ments including more satisfaction- and liking-oriented questionnaires such as the
SUMI (Kirakowski & Corbett, 1993) and QUIS (e.g., Shneiderman, 1998) could elu-
cidate the APPEAL construct. Regarding APPEAL, this construct integrates differ-
ent product aspects (e.g., EQ, HQ), which is typical for judgments or evaluations.
Also, the APPEAL items are semantically evaluative (in contrast to the EQ and HQ
items), but a way must still be found to determine whether EQ and HQ are truly
more perceptual and APPEAL more evaluative in nature. Finally, it is important to
develop methods that explicitly confirm the validity of the HQ construct. So
far, I have only shown that HQ is distinct from EQ.
4.3. Prediction of EQ and HQ (Q3)
Q3 explored whether it is possible for an expert to use the research model to predict
the relative EQ, HQ, and judgment of APPEAL of a given set of products. Detailed
ANOVAs confirmed the a priori stated predictions.
However, there is one point to be discussed in more detail. In this study, the pre-
diction of the relative amount of EQ and HQ per screen type was based on careful
reasoning. From an interaction perspective, computer screens are fairly simple
products that presumably serve only one practical purpose: to display information.
Thus, display quality (readability) is one crucial criterion for predicting EQ. More-
over, the screen’s intended context of use, on top of an office desk, is a well-known
and understood context. Thus, the second criterion for EQ is the way the screen in-
tegrates into the workplace. Regarding HQ, it was reasonable to identify the nov-
elty of the technology as the main source of HQ (see also Hassenzahl, Burmester, &
Sandweg, 2000), for example, by the status gained from possessing a new technol-
ogy in the social context of an organization.
The prediction of EQ and HQ heavily depends on a thorough analysis of the
product and its context of use. A rational analysis might be adequate, given a sim-
ple technology in a well-understood context of use. For more complex products,
used by experts in a special context of use, prediction of EQ is more or less impossi-
ble without reference to the wide array of analysis techniques used in the field of
human–computer interaction (e.g., usability testing, expert evaluation methods,
task and context analysis). The same is true for HQ, but no specialized analysis
techniques for HQ are in industrial use. Therefore, it is important to de-
velop appropriate analytical techniques to help product designers or usability en-
gineers to gather hedonic requirements (i.e., user requirements addressing hedonic
needs) that are appropriate in a certain context of use. The information gathered
must support product designers or usability engineers in finding ways to fulfill
hedonic requirements. Because EQ and HQ appear to be at least partially incom-
patible, finding trade-offs is also likely to be an important part of an appropriate
analytical technique.
A first attempt to devise such a technique is the Structured Hierarchical Inter-
viewing for Requirement Analysis (SHIRA; Hassenzahl, Wessler, & Hamborg,
2001; see also Hassenzahl, Beu, & Burmester, 2001). This interviewing technique
seeks to contextualize abstract attributes such as original, innovative, and so forth.
First, the interviewee explains what, in his or her mind, a term such as
originality connotes for a specific product. This makes the transition from an
abstract design goal (e.g., "to be original") to a number of concrete design
goals (e.g., "to look different"). Second, the interviewee indicates ways to meet the concrete de-
sign goals (e.g., “do not use the typical Windows gray as the dominant color”).
These solution proposals further typify the idiographic meaning of a concrete de-
sign goal. By aggregating the participants' idiographic views, a product's design
space can be explored. If the attributes serving as the starting point of the interview
are hedonic in nature, SHIRA has the potential to support the exploration of these
and, thus, might help in gathering hedonic requirements.
To conclude, the successful prediction of EQ and HQ by reference to the research
model clearly contributes to the model's validity. It does not imply, however,
that a prediction of EQ and HQ based solely on the intuitions of product designers
or usability engineers is likely to succeed without appropriate analytical
techniques.
5. HEDONIC QUALITY IN PRACTICE
To simultaneously consider EQ and HQ alters the focus of traditional usability engi-
neering. In this section, I provide examples of how HQ affected some of the projects
in which I was involved as a usability engineer.
In a project to design a telephone-based interface for a home automation system
(Sandweg, Hassenzahl, & Kuhn, 2000), an evaluation of a first design revealed
minor usability problems, an acceptable perceived EQ but a lowered HQ and
APPEAL. We concluded that this lack of HQ (and APPEAL) was because of the de-
sign decision to use only spoken menus. A combination of spoken menus and
nonspeech sounds was found to have an enriching effect on user–system interac-
tion (Stevens, Brewster, Wright, & Edwards, 1994), which can be compared to the
enriching effect of graphics on script (Brewster, 1998). Without the simultaneous
consideration of perceived EQ and HQ, we might have missed the consequences that
a speech-only design would have had for the APPEAL of the auditory interface.
HQ played an even more pronounced role in the redesign of a Web site we per-
formed. A small-scale questionnaire study showed that the Web site’s design
lacked HQ, whereas EQ was perceived and experienced as very good. Participants
described the design as visually dull and boring. This result provided an interest-
ing and more or less new opportunity. Instead of ensuring a minimum of usability
problems (as usability engineers usually do), we had to make the design more in-
teresting, without compromising the good usability. We decided to perform a care-
ful visual redesign, using mouse-over effects to provide a stronger impression of
interactivity and some small animations to make transitions between pages more
interesting. Unfortunately, no subsequent evaluation was performed, and thus, the
question of whether HQ was increased by the redesign remains unanswered.
In a current project, we evaluated a redesigned software application for pro-
gramming and configuring industrial automation solutions. The goals of the rede-
sign were to increase usability—especially learnability—and to be recognized as
innovative and original. Having both goals in mind, our client asked us to explic-
itly assess the EQ and HQ of the software to find appropriate trade-offs between
usability and innovation. This involves, for example, the identification of “good”
innovations, which enrich user experience without impairing usability.
These examples show the importance of taking HQ aspects into account. Had
we not done so, we would have failed to recognize the potential improvement of
the APPEAL of the home automation interface. The Web site’s unappealing design
would have gone unnoticed and a pure usability evaluation of the industrial auto-
mation software would have neglected important, market-relevant design goals.
Moreover, we have experienced an additional positive side effect since the intro-
duction of HQ in our usability engineering practice: a better relationship with our
clients’ marketing departments. The relation between human factors and market-
ing is often described as chilly or distant (Nardi, 1995; but see Atyeo, Sidhu, Coyle,
& Robinson, 1996, for a more positive perspective). This might be explained by the
typically narrow perspective of usability engineering or human factors on product
design. By solely focusing on usability and utility, some design goals (e.g., innova-
tion) important to marketing (and to the user) are neglected.
6. CONCLUSION
The work discussed in this article provides a model for stimulating and guiding fur-
ther research on product APPEAL. Many questions remain unanswered. For exam-
ple, the relation of the APPEAL of a product to the consequences of this judgment is
unknown. It would be interesting to study whether different distributions of EQ
and HQ in comparable products (i.e., ergonomic-laden vs. hedonic-laden product
types) may lead to different emotional reactions (e.g., satisfaction vs. joy). To give an
example, although APPEAL of the CRT and the VS in this study is comparable, it
stems from different sources: Participants perceived the CRT as mainly ergonomic
(i.e., ergonomic laden), but they perceived the VS as mainly hedonic (i.e., hedonic
laden). Although judged to be equally appealing, emotional or behavioral conse-
quences might be quite different.
A second, most important topic of further research should be the analysis and
transformation of hedonic requirements into actual product design. This means de-
termining which features of a product intended for a certain context of use will in-
duce the perception of HQ aspects. In other words, what makes a certain product
appear to be interesting, innovative, or original? To answer this question, it will be
necessary to complement the approach here with research that is more qualitative
in nature.
With this article, I have attempted to put HQ—a quality aspect mostly irrelevant
to the efficiency and effectiveness of a product—into perspective. Taking it into ac-
count when designing (or evaluating) a product allows software designers to go a
step further, from simply making a product a useful tool to designing rich user ex-
periences (Laurel, 1993). Whether a product’s intended use is at work or at home,
software designers should consider the gathering and analysis of hedonic require-
ments in addition to usability and functional requirements.
REFERENCES
Anderson, N. H. (1981). Foundations of information integration theory. New York: Academic
Press.
Arnold, A. G. (1999). Mental effort and evaluation of user interfaces: A questionnaire ap-
proach. In Proceedings of the HCI International 1999 Conference on Human Computer Interac-
tion (pp. 1003–1007). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
Atyeo, M., Sidhu, C., Coyle, G., & Robinson, S. (1996). Working with marketing. In Proceedings
of the CHI 1996 Conference Companion on Human Factors in Computing Systems: Common
ground (pp. 313–314). New York: ACM, Addison-Wesley.
Bevan, N. (1995). Usability is quality of use. In Proceedings of the HCI International 1995 Confer-
ence on Human Computer Interaction (pp. 349–354). Mahwah, NJ: Lawrence Erlbaum Asso-
ciates, Inc.
Bevan, N., & Macleod, M. (1994). Usability measurement in context. Behaviour & Information
Technology, 13, 132–145.
Brewster, S. A. (1998). Using nonspeech sounds to provide navigation cues. Transactions on
Computer-Human Interaction, 5, 224–259.
Carroll, J. M., & Thomas, J. C. (1988). Fun. SIGCHI Bulletin, 19(3), 21–24.
Coovert, M. D., & McNelis, K. (1988). Determining the number of common factors in factor
analysis: A review and program. Educational and Psychological Measurement, 48, 687–693.
Doll, W. J., & Torkzadeh, G. (1988). The measurement of end user computing satisfaction. MIS
Quarterly, 12, 259–274.
Dougherty, M. R. P., & Shanteau, J. (1999). Averaging expectancies and perceptual experi-
ences in the assessment of quality. Acta Psychologica, 101, 49–67.
Eilers, K., Nachreiner, F., & Hänecke, K. (1986). Entwicklung und Überprüfung einer Skala zur
Erfassung subjektiv erlebter Anstrengung [Development and evaluation of a questionnaire
to assess subjectively experienced effort]. Zeitschrift für Arbeitswissenschaft, 40, 215–224.
Ekman, P., & Davidson, R. J. (1994). The nature of emotion. New York: Oxford University Press.
Gediga, G., Hamborg, K.-C., & Düntsch, I. (1999). The IsoMetrics usability inventory: An
operationalization of ISO 9241–10 supporting summative and formative evaluation of
software systems. Behaviour & Information Technology, 18, 151–164.
Gorsuch, R. L. (1997). Exploratory factor analysis: Its role in item analysis. Journal of Personal-
ity Assessment, 68, 532–560.
Harrison, A. W., & Rainer, R. K. (1996). A general measure of user computing satisfaction.
Computers in Human Behavior, 12(1), 79–92.
Hassenzahl, M. (2000). Prioritising usability problems: Data-driven and judgement-driven
severity estimates. Behaviour & Information Technology, 19, 29–42.
Hassenzahl, M., Beu, A., & Burmester, M. (2001). Engineering joy. IEEE Software, 1&2, 70–76.
Hassenzahl, M., Burmester, M., & Sandweg, N. (2000). Perceived novelty of functions—A
source of hedonic quality. Interfaces, 42, 11.
Hassenzahl, M., Platz, A., Burmester, M., & Lehner, K. (2000). Hedonic and ergonomic quality
aspects determine a software’s appeal. In Proceedings of the CHI 2000 Conference on Human
Factors in Computing Systems (pp. 201–208). New York: ACM, Addison-Wesley.
Hassenzahl, M., Wessler, R., & Hamborg, K.-C. (2001). Exploring and understanding product
qualities that users desire. In Proceedings of the IHM/HCI Conference on Human–Computer
Interaction, Volume 2 (pp. 95–96). Toulouse, France: Cépaduès.
Igbaria, M., Schiffman, S. J., & Wieckowski, T. J. (1994). The respective roles of perceived use-
fulness and perceived fun in the acceptance of microcomputer technology. Behaviour & In-
formation Technology, 13, 349–361.
ISO. (1998). Ergonomic requirements for office work with visual display terminals (VDTs)—Part 11:
Guidance on usability (ISO No. 9241). Geneva, Switzerland: Author.
Kaiser, H. F., & Rice, J. (1974). Little Jiffy, Mark IV. Educational and Psychological Measurement,
34, 111–117.
Kirakowski, J., & Corbett, M. (1993). SUMI: The Software Usability Measurement Inventory. British
Journal of Educational Technology, 24, 210–212.
Kurosu, M., & Kashimura, K. (1995). Apparent usability vs. inherent usability. In Proceedings
of the CHI 1995 Conference Companion on Human Factors in Computing Systems (pp. 292–293).
New York: ACM, Addison-Wesley.
Laurel, B. (1993). Computers as theatre. Reading, MA: Addison-Wesley.
Leventhal, L., Teasley, B., Blumenthal, B., Instone, K., Stone, D., & Donskoy, M. V. (1996). As-
sessing user interfaces for diverse user groups: Evaluation strategies and defining charac-
teristics. Behaviour & Information Technology, 15, 127–137.
Mundorf, N., Westin, S., & Dholakia, N. (1993). Effects of hedonic components and user’s
gender on the acceptance of screen-based information services. Behaviour & Information
Technology, 12, 293–303.
Nardi, B. (1995). Some reflections on scenarios. In J. M. Carroll (Ed.), Scenario-based design: En-
visioning work and technology in system development (pp. 397–399). New York: Wiley.
Ortony, A., Clore, G. L., & Collins, A. (1988). The cognitive structure of emotions. Cambridge,
England: Cambridge University Press.
Prümper, J. (1999). Test IT: ISONORM 9241/10. In Proceedings of the HCI International 1999
Conference on Human Computer Interaction (pp. 1028–1032). Mahwah, NJ: Lawrence Erl-
baum Associates, Inc.
Sandweg, N., Hassenzahl, M., & Kuhn, K. (2000). Designing a telephone-based interface for a
home automation system. International Journal of Human–Computer Interaction, 12,
401–414.
Shneiderman, B. (1998). Designing the user interface: Strategies for effective human–computer in-
teraction (3rd ed.). Reading, MA: Addison-Wesley.
Stevens, R. D., Brewster, S. A., Wright, P. C., & Edwards, A. D. N. (1994). Providing an audio
glance at algebra for blind readers. In G. Kramer & S. Smith (Eds.), Proceedings of ICAD ’94
(pp. 21–30). Santa Fe, NM: Santa Fe Institute, Addison-Wesley.
Zijlstra, R. (1993). Efficiency in work behaviour: A design approach for modern tools. Delft, The
Netherlands: Delft University Press.
Zijlstra, R., & van Doorn, L. (1985). The construction of a scale to measure subjective effort.
Delft, The Netherlands: Delft University of Technology, Department of Philosophy and
Social Sciences.
... Due to a literature gap regarding examination of the user experience of e-learning platforms, which are an important part of SLE, the aim of our research is to evaluate VLE platforms used in a higher education setting from a user experience viewpoint to get an insight into their attractiveness perceived by students using them. Our research model is based on adapted factors from the User Experience (UX) Model from Hassenzahl [47], Laugwitz, et al. [48], Schrepp [49], and Schrepp et al. [50][51][52], which are perceived efficiency (EF), perceived perspicuity (PE), perceived dependability (DE), perceived stimulation (ST), perceived novelty (NO), and perceived attractiveness (AT). From a sustainability point of view, perceived attractiveness could be one of the features of an e-learning tool for long-term use. ...
... Our research model is based on adapted factors from the user experience model [47][48][49][50][51][52]. Table 3. From a sustainability point of view, perceived attractiveness (AT) could be one of the features of an e-learning tool for long-term use. ...
... With this research, we examined whether user experience (UX) factors (named perceived efficiency (EF), perceived perspicuity (PE), perceived dependability (DE), perceived stimulation (SI), and perceived novelty (NO)) affect perceived attractiveness (AT), which is an important factor in the sustainability of e-learning tools. We adapted factors from UX model proposed from Hassenzahl [47], Laugwitz et al. [48], Schrepp [49], and Schrepp et al. [50][51][52]. UX model through UEQ questionnaire (available [128]) measures means of items, where the scale of the items is between −3 (terrible) to +3 (excellent). ...
Article
Full-text available
E-learning platforms have become more and more complex. Their functionality included in learning management systems is extended with collaborative platforms, which allow better communication, group collaboration, and face-to-face lectures. Universities are facing the challenge of advanced use of these platforms to fulfil sustainable learning goals. Better usability and attractiveness became essential in successful e-learning platforms, especially due to the more intensive interactivity expected from students. In the study, we researched the user experience of students who have used Moodle, Microsoft Teams, and Google Meet. User experience is, in most cases, connected with a person’s perception, person’s feelings, and satisfaction with the platform used. Data were collected using a standard UEQ questionnaire. With this research, we examined whether user experience factors: perceived efficiency, perceived perspicuity, perceived dependability, perceived stimulation, and perceived novelty affect perceived attractiveness, which is an important factor in the sustainability of e-learning tools. The collected data were processed using SmartPLS. The research study showed that all studied factors have a statistically significant impact on perceived attractiveness. Factor perceived stimulation has the strongest statistically significant impact on the perceived attractiveness of e-learning platforms, followed by perceived efficiency, perceived perspicuity, perceived novelty, and perceived dependability.
... Subsequently, the different layer arrangements (condensed, widening, equidistant, see Fig. 2) with randomized order among participants were investigated in the three main blocks of the laboratory experiment. Each block consisted of 18 tasks with different numbers of layers (6,9,12,15,18,21). The amount of layers was randomly selected for each task. ...
... Apart from recording demographic information, the NASA-RTLX [3], a shortened version of the NASA-TLX [8], was used to separately record subjective mental load after each of the three blocks of the laboratory study in order to relate it to the different layer arrangements. To assess the perceived system qualities as well as the general attractiveness of the elastic display, we applied the AttrakDiff questionnaire [9,10]. In addition, we recorded the participants' feedback on the prototype to more accurately capture the user experience. ...
... EVALUATION OF WEBSITE QUALITY WQ [12], HQ [13], and WEBQUAL [14] are three questionnaires that can be used for measuring website quality. A special focus is placed on website quality in these three questionnaires (Table I). ...
... HQ: HQ is an aspect of quality that is relatively unconnected to the effectiveness and efficiency of a product. Software designers are able to go beyond making a product a useful tool by considering this when designing (or evaluating) a product [13]. ...
... e image is the first information channel in the consumer-product interaction [14], and consequently, aesthetics becomes a key factor in the product judgement by consumers [15][16][17]. It has been shown, for instance, that products with higher hedonic qualities are more appealing [18]. An in-depth analysis of these complex aspects of user-product interaction is carried out in [19]. ...
Article
Full-text available
Consumer behavior knowledge is essential to designing successful products. However, measuring subjective perceptions affecting this behavior is a complex issue that depends on many factors. Identifying visual cues elicited by the product’s appearance is key in many cases. Marketing research on this topic has produced different approaches to the question. This paper proposes the use of Noise-Based Reverse Correlation techniques in the identification of product form features carrying a particular semantic message. This technique has been successfully utilized in social sciences to obtain prototypical images of faces representing social stereotypes from different judgements. In this work, an exploratory study on subcompact cars is performed by applying Noise-Based Reverse Correlation to identify relevant form features conveying a sports car image. The results provide meaningful information about the car attributes involved in communicating this idea, thus validating the use of the technique in this particular case. More research is needed to generalize and adapt Noise-Based Reverse Correlation procedures to different product scenarios and semantic concepts.
... A different interpretation defines user experience as a set of distinct quality criteria [1] that includes the classical usability criteria or pragmatic qualities, such as efficiency, controllability, or learnability, and non-goal directed or hedonic quality criteria [3] like stimulation, novelty, or aesthetics [4]. This definition has the advantage that it splits the general notion of user experience into a number of quality criteria, thereby describing the distinct and relatively well-defined aspects of user experience. ...
Article
Full-text available
Context Software development companies use Agile methods to develop their products or services efficiently and in a goal-oriented way. But this alone is not enough to satisfy user demands today. It is much more important nowadays that a product or service should offer a great user experience—the user wants to have some positive user experience while interacting with the product or service. Objective An essential requirement is the integration of user experience methods in Agile software development. Based on this, the development of positive user experience must be managed. We understand management in general as a combination of a goal, a strategy, and resources. When applied to UX, user experience management consists of a UX goal, a UX strategy, and UX resources. Method We have conducted a systematic literature review (SLR) to analyse suitable approaches for managing user experience in the context of Agile software development. Results We have identified 49 relevant studies in this regard. After analysing the studies in detail, we have identified different primary approaches that can be deemed suitable for UX management. Additionally, we have identified several UX methods that are used in combination with the primary approaches. Conclusions However, we could not identify any approaches that directly address UX management. There is also no general definition or common understanding of UX management. To successfully implement UX management, it is important to know what UX management actually is and how to measure or determine successful UX management.
... Users' involvement is critical, as their experience confirms the success or failure of a product [44]. Accordingly, Stickel et al. [104] and Hassenzahl [38] have pointed out that user satisfaction is an important feature that determines whether the product has met a user's expectation. From this review of related work, it becomes clear that the potential of using wearable devices to enhance user QoE of viewing multimedia content has largely been ignored by the literature. ...
Article
Full-text available
Quality of Experience (QoE) is inextricably linked to the user experience of multimedia computing and, although QoE has been explored in relation to other types of multimedia devices, thus far its applicability to wearables has remained largely ignored. Given the proliferation of wearable devices and their growing use to augment and complement the multimedia user experience, the need for a set of QoE guidelines becomes imperative. This study meets that need and puts forward a set of guidelines tailored exclusively towards wearables’ QoE. Accordingly, an extensive experimental investigation has been undertaken to see how wearables impact users’ QoE in multiple sensorial media (mulsemedia) context. Based on the exploratory study, the findings have shown that the haptic vest (KOR-FX) enhanced user QoE to a certain extent. In terms of adoption, participants reported they would generally incorporate the heart rate (HR) monitor wristband (Mio Go) into their daily lives as opposed to the haptic vest. Other findings revealed that human factors play a part in user’s attitudes towards wearables and predominantly age was the major influencing factor. Moreover, the participants’ HR varied throughout the experiments, suggesting an enhanced level of engagement whilst viewing the multimedia video clips. Furthermore, the results suggest that there is a potential future for wearables, if the QoE is a positive one and if the design of such devices are appealing as well as unobtrusive.
Chapter
In view of the weak quantitative research foundation of the user experience and the lack of comprehensive measurement in the automobile industry, this research proposes a user experience evaluation measurement for the automotive industry products. Firstly, analyze the process of the experience to deconstruct the hierarchy of user experience: referring to the “three-dimensional structure model” which is specifically divided into three levels - product usability, multimodal perception and psychological experience, so that an evaluation framework of user experience of the automobiles is constructed. Secondly, specify the measurement system of user experience from three dimensions - behavioral performance, sensory perception and psychological experience: according to the characteristics of different measurement dimensions, the user experience is decomposed into the measurable evaluation indicators. Finally, use the analytic hierarchy process, to integrate the selected indicators according to a certain weight, so as to comprehensively evaluate the user experience in the process of using car. This paper conducts a specific study on the user-centered quantitative method of user experience. The research results lay a theoretical foundation for guiding empirical evaluation of automotive products,which also can provide some reliable suggestions for main engine factories and automobile sales servicshop to further improve the user experience in the process of using cars.
Article
The present work compares the two-alternative forced choice (2AFC) task to rating scales for measuring aesthetic perception of neural style transfer-generated images and investigates whether, and to what extent, the 2AFC task extracts clearer and more differentiated patterns of aesthetic preferences. To this aim, 8250 pairwise comparisons of 75 neural style transfer-generated images, varied across five parameter configurations, were measured with the 2AFC task and compared with rating scales. Statistical and qualitative results demonstrated the higher precision of the 2AFC task over rating scales in detecting three different aesthetic preference patterns: (a) convergence (number of iterations), (b) an inverted U-shape (learning rate), and (c) a double peak (content-style ratio). Importantly for practitioners, finding such aesthetically optimal parameter configurations with the 2AFC task enables the reproducibility of aesthetic outcomes from the neural style transfer algorithm, which saves time and computational cost and yields new insights about parameter-dependent aesthetic preferences.
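As a minimal sketch of how 2AFC data yield a preference ordering (my illustration, not the study's analysis), one can count how often each stimulus wins the comparisons it appears in. The image names and trial outcomes below are made up.

```python
from collections import Counter

# Each trial records (winner, loser) of one forced-choice comparison.
choices = [
    ("img_a", "img_b"),
    ("img_a", "img_c"),
    ("img_b", "img_c"),
    ("img_a", "img_b"),
    ("img_c", "img_b"),
]

wins = Counter(winner for winner, _ in choices)
appearances = Counter()
for winner, loser in choices:
    appearances[winner] += 1
    appearances[loser] += 1

# Win rate = wins / times shown; rank stimuli from most to least preferred.
win_rate = {img: wins[img] / appearances[img] for img in appearances}
ranking = sorted(win_rate, key=win_rate.get, reverse=True)

print(ranking)  # ['img_a', 'img_c', 'img_b']
```

In practice one would fit a proper paired-comparison model (e.g., Bradley–Terry or Thurstone scaling) rather than raw win rates, but the intuition is the same: pairwise choices aggregate into a scale of preference.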
Book
Table of contents:
1. Introduction: The study of emotion; Types of evidence for theories of emotion; Some goals for a cognitive theory of emotion
2. Structure of the theory: The organisation of emotion types; Basic emotions; Some implications of the emotions-as-valenced-reactions claim
3. The cognitive psychology of appraisal: The appraisal structure; Central intensity variables
4. The intensity of emotions: Global variables; Local variables; Variable-values, variable-weights, and emotion thresholds
5. Reactions to events, I: The well-being emotions; Loss emotions and fine-grained analyses; The fortunes-of-others emotions; Self-pity and related states
6. Reactions to events, II: The prospect-based emotions; Shock and pleasant surprise; Some interrelationships between prospect-based emotions; Suspense, resignation, hopelessness, and other related states
7. Reactions to agents: The attribution emotions; Gratitude, anger, and some other compound emotions
8. Reactions to objects: The attraction emotions; Fine-grained analyses and emotion sequences
9. The boundaries of the theory: Emotion words and cross-cultural issues; Emotion experiences and unconscious emotions; Coping and the function of emotions; Computational tractability
Article
This article contrasts traditional and end-user computing environments and reports on the development of an instrument that merges ease-of-use and information-product items to measure the satisfaction of users who interact directly with the computer for a specific application. Using a survey of 618 end users, the researchers conducted a factor analysis and modified the instrument. The results suggest a 12-item instrument that measures five components of end-user satisfaction: content, accuracy, format, ease of use, and timeliness. Evidence of the instrument's discriminant validity is presented, and reliability and validity are assessed by nature and type of application. Finally, standards for evaluating end-user applications are presented, and the instrument's usefulness for achieving more precision in research questions is explored.
Article
Determining the number of common factors is one of the most important decisions that must be made in the application of factor analysis. Several approaches and techniques are reviewed here, along with their associated strengths and weaknesses. It is argued that a combination of approaches will lead to the best judgment regarding the number of factors to retain. A computer program is available which presents the number of factors to retain as suggested by both discontinuity and parallel analyses; using the program removes the negative aspects associated with applying each technique by hand.
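The parallel analysis mentioned in the abstract can be sketched in a few lines. This is my illustration of Horn's procedure, not the program the abstract refers to: factors are retained while the observed eigenvalues of the correlation matrix exceed the mean eigenvalues obtained from random data of the same shape. The synthetic data set below is built to contain exactly one common factor.

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_vars = 300, 6

# Synthetic one-factor data: each variable loads 0.8 on a common factor plus noise.
factor = rng.normal(size=(n_obs, 1))
loadings = np.full((1, n_vars), 0.8)
data = factor @ loadings + rng.normal(size=(n_obs, n_vars))

# Observed eigenvalues of the sample correlation matrix, largest first.
obs_eig = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]

# Reference eigenvalues: average over correlation matrices of pure-noise data.
n_sims = 50
rand_eig = np.zeros((n_sims, n_vars))
for i in range(n_sims):
    noise = rng.normal(size=(n_obs, n_vars))
    rand_eig[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False)))[::-1]

# Retain factors whose observed eigenvalue beats the random-data baseline.
n_retain = int(np.sum(obs_eig > rand_eig.mean(axis=0)))
print(n_retain)  # 1 for this one-factor data set
```

The random-data baseline is what distinguishes parallel analysis from the simpler "eigenvalues greater than one" rule, which tends to over-extract factors in small samples.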
Article
As end-user computing (EUC) becomes more pervasive in organizations, a need arises to measure and understand the factors that make EUC successful. EUC success is viewed as a subclass of organizational information system (IS) success, with distinct characteristics that distinguish it from other sources of organizational computing success, namely applications developed by the information systems department (ISD), software vendors, or outsourcing companies. The literature shows that despite the volitional nature of end-user computing, end-user satisfaction is the most popular measure of EUC success. Moreover, despite known limitations reported in the literature, self-reported scales are the instruments of choice for most researchers. This paper explores the literature on EUC success measurement and discusses the main issues and concerns researchers face. While alluding to the difficulty of devising economic and quantitative measures of EUC success, it makes several recommendations: use unobtrusive measures of success, take contextual factors into account, use well-defined concepts and measures, and seek a comprehensive integrated model that incorporates a global view.
Article
This study examined the effects of two main factors affecting microcomputer technology acceptance: perceived usefulness and perceived fun. We examined whether users are motivated to accept a new technology due to its usefulness or fun. Results of this study suggest that perceived usefulness is more influential than perceived fun in determining whether to accept or reject microcomputer technology. We also examined the impact of computer anxiety on acceptance. Results showed that computer anxiety had both direct and indirect effects on user acceptance of microcomputer technology, through perceived usefulness and fun. We also found attitude (satisfaction) to be less influential than perceived usefulness and fun. Implications for the design and acceptance of microcomputer technology and future research are discussed.