Article
Measuring Anticipated and Episodic UX of Tasks in
Social Networks
Luis Martín Sánchez-Adame, José Fidel Urquiza-Yllescas and Sonia Mendoza *
Computer Science Department, CINVESTAV-IPN, Mexico City 07360, Mexico;
luismartin.sanchez@cinvestav.mx (L.M.S.-A.); fuy@computacion.cs.cinvestav.mx (J.F.U.-Y.)
* Correspondence: smendoza@cs.cinvestav.mx
Received: 30 October 2020; Accepted: 17 November 2020; Published: 19 November 2020


Abstract:
Today, social networks are crucial commodities that allow people to share different contents
and opinions. In addition to participation, the information shared within social networks makes
them attractive, but success is also accompanied by a positive User eXperience (UX). Social networks
must offer useful and well-designed user-tools, i.e., sets of widgets that allow interaction among
users. To satisfy this requirement, Episodic User eXperience (EUX) yields reactions of users after
having interacted with an artifact. Anticipated User eXperience (AUX) grants the designers the
capacity to recollect users’ aspirations, assumptions, and needs in the initial development phase of
an artifact. In this work, we collect UX perceived in both periods to contrast user expectations and
experiences offered on social networks, in order to find elements that could improve the design of
user-tools. We arrange a test where participants (
N=
20) designed prototypes on paper to solve
tasks and then did the same tasks on online social networks. Both stages are assessed with the help
of AttrakDiff, and then we analyze the results through t-tests. The results we obtained suggest that
users are inclined towards pragmatic aspects of their user-tools expectations.
Keywords: Anticipated User eXperience; assessment methods; Episodic User eXperience; social networks; user-tools
1. Introduction
The popularity of social networks has increased in recent years [1], especially due to the COVID-19 pandemic [2,3]. However, they are neither a new topic nor an unknown one. Social networks have been studied by Computer Science researchers for a long time and from different angles, which is why we can find several definitions in the state-of-the-art [4–8]. Among them, we adopted the one by Boyd and Ellison [9]—“web-based services that allow individuals to (1) construct a public or semi-public profile within a bounded system, (2) articulate a list of other users with whom they share a connection, and (3) view and traverse their list of connections and those made by others within the system. The nature and nomenclature of these connections may vary from site to site”—because it denotes the main elements of social networks and their interaction.
Sociability and usability are vital factors for any social network. Sociability refers to the contact and exchange of information among users, whereas usability concerns how well the technology enables those exchanges [10,11].
A result of sociability is participation. Therefore, several studies have been carried out in order to understand the motives of individuals to engage in a social network [12–15]. We believe that the progress of any social network heavily relies on: (1) the collaboration among its users to create contents and make contributions to the community [16], and (2) the user interaction with businesses, organizations, colleagues, family members, and friends to create together their production and consumption experience and meet their necessities [17–19].
While it is true that User eXperience (UX) is a crucial factor for interaction among users on any digital platform, it is not the only aspect to consider. Social networks are a complex phenomenon. Therefore, it is useless to oversimplify them and try to study them from a single front [20]. For example, since the eighties, Grudin has studied why collaborative work applications fail [21]. Google+ is a perfect example that there is no formula for success: despite having elements of good design, it never caught on and ended up shutting down [22].
Although UX is not the only aspect that should concern developers, it is essential to help make interactions among members of a social network as seamless as possible. The two components of UX are hedonism and pragmatism [23,24]. The hedonic components refer to the preferences, convictions, sensations, and conclusions of users that arise from the anticipated or episodic usage of a system, product, or service. The pragmatic components come from the features of the assessed system, such as functionality, interactive behavior, supporting capabilities, usability, and performance [25].
While the core of UX is the current experience of usage, this is not enough to completely cover all the relevant issues that can be studied. UX is a highly dynamic concept, since it changes continuously when interacting with an artifact [26]. People can have diverse and very different experiences before, during, and after interacting with a product [27]. Consequently, it is a critical design aspect to be able to measure the UX of an artifact at multiple times [28].
We can explain the concept of UX over time through four periods. Each period is dynamic and can be viewed as an iterative process within and among those stages [27]:
Anticipated UX (AUX): Obtained before the use of an artifact from imagination, expectations, and existing experiences.
Momentary UX (MUX): Perceived during the usage period of an artifact.
Episodic UX (EUX): Conceived after the use of an artifact through reflections of the experience.
Cumulative UX (CUX): Determined over time by the recollection of multiple periods of use.
Periods are essential because user responses may differ, e.g., measuring momentary UX can elicit a visceral response from the user, whereas if UX is measured some time after the use of an artifact, the user may remember more positive things and suppress the negative ones [29]. In this way, a study that considers more than one period could be more enriching.
While EUX is simply the experience that is obtained after having used a system, product, or service [30], AUX has to do with the attitudes and experiences that the user assumes will happen when envisioning using an artifact [31]. Thus, the goal of an AUX assessment is recognizing whether a given idea offers the type of UX anticipated by developers for potential users [32]. The worth of conducting AUX trials is well established, even though there are not many research works on this subject [27,33–35].
This paper gives continuity to and expands the research that we have previously published on this topic [36,37]. The objective of the present work was to determine whether there are differences between user expectations (AUX) and the experiences users find on social networks (EUX), and if so, what they are. Identifying these contrast elements could help improve the design of user-tools. Thus, we propose the following hypothesis for our study:
There is no significant difference in perceived UX between the prototypes imagined by the participants and the actual social networks performing the same tasks.
To confirm or refute this hypothesis, we propose a method that allows us to assess the AUX and EUX of daily tasks on social networks: sending messages, sharing multimedia, and doing searches. Our participants (N = 20) completed these tasks in two phases. In the former phase, they had to make a paper prototype with the elements they considered necessary to solve the task; once their prototype was finished, they evaluated it with the AttrakDiff questionnaire [38]. In the latter phase, the participants solved the task on real social networks and, in the same way, evaluated it with AttrakDiff. Our main finding is that user expectations are mainly composed of pragmatic aspects.
The organization of this article is as follows. First, we present a brief analysis of related works (Section 2). After describing the research methodology that we followed to develop our proposal (Section 3), we explain our assessment method (Section 4). Subsequently, we report all the details of our tests (Section 5), followed by the results of said tests (Section 6). After that, we present the discussion of our results, as well as the implications and limitations of our study (Section 7). Finally, we present our conclusions and some proposals for future work (Section 8).
2. Related Work
In this section, we present a brief review of some outstanding works that involve AUX and
EUX. We classify them into these two groups because it seems that this is the trend in most UX work.
The former group are researchers who study popular systems in the market and then propose theories
(Section 2). The latter group are those who, after studying theoretical works, use their knowledge to
propose changes in practical systems (Section 2). We consider that our work is a hybrid approach,
trying to bring together the best of both paths.
2.1. From Practice to Theory
Practice is vital, as it allows collecting people’s opinions and reactions. Aladwan et al. [39] designed a framework through review searches and constructed a prototype that describes user anticipations and experiences, using instructional fitness applications. The main limitation of this work is the difficulty in unraveling ambiguous user reviews.
Although qualitative evaluations are, in general, complicated to analyze precisely because they lend themselves to ambiguities, they are an indispensable resource when the investigation is about transferring real-world interactions to a virtual environment. Such is the case of Moser et al. [40], who organized workshops for children around the world. Through various types of activities, they managed to gather children’s expectations and idealizations regarding games. Although they detailed the way to capture AUX, they did not make comparisons, nor propose elements for the design of GUIs.
The works of Margetis et al. [41] and Zhang et al. [42] also fall into this area of gathering the users’ know-how. The former created an augmented reality (AR) system that facilitates reading and writing in books without being invasive to users. Apart from a heuristic evaluation, there is no evidence of AUX evaluation, only of EUX after testing the prototype. The latter designed a card game that encourages the practice of people who are learning a foreign language. Even though their design included an AUX study, there are no contrasts with EUX.
User expectations are also gathered when new environments are studied. For example, Kukka et al. [43] investigated the integration of Facebook content into three-dimensional applications. They created design guidelines based on the problems they could identify in this kind of environment. Being a preliminary investigation, it did not compare AUX and EUX. Another example is Wurhofer et al. [44], who examined the context of motorists’ UX. Through a study of cumulative UX, they compared expectations against the real experiences of drivers. Although this is a study of UX over time, it does not include GUIs.
2.2. From Theory to Practice
Theory is essential because it identifies and proposes elements that can be used to design and evaluate systems. Such is the case of Magin et al. [45], who described possible factors that cause a negative UX when using apps. Through a prototype app, AUX and EUX were measured by the participants. They concluded that the lack of usability causes negative emotions. Similarly, Sato et al. [46] reported a series of elements used in multi-agent systems that could possibly be applied in Communities of Practice (CoP). Though the impact that these elements would have on UX can be deduced, they did not evaluate UX.
The works gathered here are a sample of how AUX and EUX studies can be applied, as well as of their worth. Although these studies present elements that stand out, particularly in AUX or EUX, none of them describes which dimension (or dimensions) are more critical for one period or the other. Table 1 summarizes and compares each of the works analyzed in this section.
Table 1. Synthesis of related works.

Work | Highlights | AUX and EUX Evaluation | Limitations | Context
[39] | Framework for user anticipations | Online reviews | Ambiguous user reviews | Mobile apps
[40] | Envisioned gameplay ideas | Workshops | Lack of generalizations | Games
[41] | AR system for books | Questionnaires | Absence of end-user evaluations | Augmented reality
[42] | Card game to practice a foreign language | Paper prototypes | No results over time | Language learning
[43] | Social networks user-tools in 3D applications | Paper prototypes | No contrast between AUX and EUX | 3D applications
[44] | Drivers’ UX over time | Interviews | Evaluation involves many resources | Driving UX
[45] | Aspects that cause deficient UX | Questionnaires | Preliminary study | Mobile apps
[46] | Elements of multi-agent systems for CoP | None | UX analysis is not presented | CoP
3. Research Methodology
As a guide to carry out our research, we use the Design Science Research Methodology (DSRM) process model by Peffers et al. [47]. This methodology was selected because it has been used in works that are under the same UX study spectrum. For example, Carey et al. [48] used this methodology to develop and validate their interactive evaluation instrument—their goal was to improve the process for mobile service innovation. Strohmann et al. [49] followed DSRM to create recommendations for the representation and interaction design of virtual in-vehicle assistants. Lastly, Kumar et al. [50] used this methodology to design an app that provides remote students with learning support.
The DSRM iterative process consists of a research entry point and six stages [47]. The initiation point could be problem-centered, objective-centered, design-and-development-centered, or client/content-centered. The six stages of the methodology are:
1. Identify problem and motivate: define the problem, show importance.
2. Define objectives of a solution: what would a better artifact accomplish?
3. Design and development: solution artifact.
4. Demonstration: find a suitable context. Use artifact to solve the problem.
5. Evaluation: observe how effective and efficient the artifact is. Iterate back to design.
6. Communication: scholarly and professional publications.
In our case, we selected the objective-centered initiation research entry point of DSRM, given that our aim was to help improve the design of user-tools. Regarding the first step, identify problem and motivate, we have already highlighted the role that UX plays in the design of user-tools within social networks. The second step of the methodology, define objectives of a solution, concerns the construction of the assessment method, whose objective is to compare AUX and EUX to find elements of contrast. The third step, design and development, refers to the specification of the proposed assessment method. The fourth and fifth steps, demonstration and evaluation, are respectively the tests we prepared and the outcomes we achieved. The final step, communication, is realized through this article. To refine the proposed AUX and EUX assessment method, we will initiate succeeding iterations at the design and development step.
4. Assessment Method
As described in Section 3, this section presents stages two and three of DSRM applied to our proposal, i.e., define objectives of a solution, and design and development.
4.1. Define Objectives of a Solution
Social networks have problems in the two areas that comprise them: technological (the platform that supports them) and social (misinformation, lack of motivation and guidance) [51]. User-tools can help to solve the problems in these areas (see Figure 1), which are vital in a successful social network [52–55].
User-tools are groups of widgets that make up the GUI of a social network, in order to allow users to perform tasks and communicate with each other, e.g., friend lists, newsfeeds, chats, and publishing menus. The granularity of user-tools is dictated by activities, i.e., a specific set of widgets that allows solving a specific activity conforms a user-tool.
Figure 1. Design elements and influence of user-tools.
As we have mentioned, user-tools are the elements that allow interaction among users on a social network, so their design should be a primary issue. To this end, our work focuses on contrasting AUX and EUX, since we hope to identify which dimensions of UX carry the most significant weight in each period. Therefore, we introduce a six-step assessment method (see Figure 2), explained in the following subsection.
Figure 2. Steps of the Anticipated User eXperience (AUX) and Episodic UX (EUX) assessment method: Set Goals, Identify Tasks, Identify User-Tools, Assess AUX, Assess EUX, and Compare Results.
4.2. Design and Development
Here, we describe each step of our assessment method. To demonstrate how our proposal works, we take the basic case where one person uses a chat to make contact with another person:
Set Goals: This step is about the objectives that developers need to achieve, e.g., a chat must allow users to communicate effectively with each other.
Identify Tasks: It refers to the stages that the user has to follow with the aim of attaining the aforementioned objectives, e.g., a user has to recognize the receiver of the message, display the direct message option or window, compose the message, and finally send it.
Identify User-tools: This step involves determining which user-tools are available to accomplish the previously identified tasks, e.g., avatars, user profiles, lists, buttons, commands, and text boxes.
Assess AUX: It concerns an AUX evaluation of the prototyped artifact. This stage can be done with various tools, e.g., low-fidelity prototypes [56,57], or techniques such as the Wizard of Oz [58,59]. Nevertheless, the important thing is to stimulate the creativity of participants, so that we can obtain their idealizations and expectations. To know what aspects should be taken into account at this stage, we rely on the bases proposed by Yogasara et al. [31]:
Intended Use: It is about the practical connotation of each user-tool, e.g., the functioning of a chat from the user’s point of view.
Positive Anticipated Emotion: It refers to agreeable feelings that the user expects to undergo as a result of the interaction with a user-tool, e.g., satisfaction after sending a message, happiness when the answer comes, or general pleasure at not receiving errors or any other type of alert.
Desired Product Characteristics: As for this aspect, we accommodated the principles suggested by Morville [60] to our case of study. These principles specify that a user-tool must be worthy, functional, helpful, attractive, attainable, honest, and discoverable.
User Characteristics: It concerns the mental and physical faculties of users, e.g., developing a generic chat does not imply the same endeavor as developing one intended for children or for seniors, since each group has specific needs.
Experiential Knowledge: We need to know the background of users, because they rely on their experience to gather information, then compare and contrast, e.g., a user might ask whether the new chat is more suitable than the one provided by Facebook.
Favorable Existing Characteristics: This aspect is about the properties that users have identified in the past as assertive in comparable tools, e.g., a user could think that they enjoy the chat from another platform thanks to its response time, availability, and ease of use.
Assess EUX: This step involves conducting an EUX assessment of the developed artifact. For this step, we need at least a mid-fidelity prototype [61,62], i.e., something that lets participants already experience the tool on a PC or a mobile device. However, to make the comparison of results achievable, it is vital to evaluate all the aspects taken into account for the AUX assessment, e.g., if the NASA TLX questionnaire [63] was used in the AUX assessment, it is necessary to reapply it, this time for EUX, being careful to measure similar parts or functionalities between both stages.
Compare Results: Once the AUX and EUX assessments have been carried out, the results have to be contrasted, so that developers can make decisions on the design of user-tools, placing the idealizations of users side by side with reality, and examining whether their propositions were developed or not, e.g., comparing the NASA TLX evaluations of the prototype and of the developed chat.
5. Demonstration and Evaluation
This section represents steps 4 and 5 of the DSRM methodology. It details the Materials
(Section 5.1) and Method (Section 5.2) that we used in our tests.
5.1. Materials
To carry out our tests, we used basic materials. For the development of prototypes, we provided stationery such as sheets of paper, pens, pencils, and markers of various colors, whereas for the social media tests, we used a 15-inch laptop with Internet access and Firefox as a web browser. For each social network, we created a new user profile.
An essential factor that can compromise the validity and reliability of a study is improvisation. Choosing the wrong instrument invalidates the results, no matter how rigorous a study’s proposed methodology is [64,65]. That is why we weighed the various factors that could affect our tests. AttrakDiff, since its original proposal in 2003 [38], has been used in multiple tests to measure UX based on its pragmatic and hedonic factors [23]. In many studies, experts have used this tool: it has been tested for validity and reliability in different contexts [66–73], it has been translated into various languages [26], and it has been modified to suit the specific needs of particular experiments [74]. In addition, it is simple to answer and does not represent a burden for participants [75]. All these results made us choose AttrakDiff as a valid tool to study UX.
The full AttrakDiff questionnaire is composed of 28 semantic pairs, i.e., pairs of words that make a strong contrast with each other (e.g., good–bad). Through these semantic pairs, the questionnaire measures the following aspects [76]:
Pragmatic Quality: It refers to the perceived quality of manipulation, i.e., effectiveness and efficiency of use.
Hedonic Quality—Identity: It indicates the user’s self-identification with the artifact.
Hedonic Quality—Stimulation: It reflects the human need for individual development, i.e., improvement of knowledge and skills.
Attractiveness: It reports the overall worth of an artifact based on perceived quality.
The hedonic and pragmatic dimensions are independent of each other and contribute evenly to the UX evaluation [23]. We used a printed version, in English, of the questionnaire available on the official website of the tool (http://attrakdiff.de/index-en.html). All participants had the same materials at their disposal.
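As a concrete illustration of this scoring scheme, the sketch below computes per-dimension scores from raw questionnaire data in R. It is a minimal example that assumes a 28-column layout with seven consecutive pairs per dimension; the official AttrakDiff item ordering may differ, so the column grouping here is illustrative only.

```r
# Minimal sketch of AttrakDiff scoring, assuming ratings arrive as a data
# frame with one row per filled questionnaire and 28 columns (one per
# semantic pair), each rated 1-7 (higher is better). The column-to-dimension
# grouping below is an assumption for illustration only.
score_attrakdiff <- function(ratings) {
  dims <- list(
    PragmaticQuality   = 1:7,
    HedonicIdentity    = 8:14,
    HedonicStimulation = 15:21,
    Attractiveness     = 22:28
  )
  # Mean rating per dimension, averaged over all questionnaires
  sapply(dims, function(cols) mean(rowMeans(ratings[, cols])))
}

# Example with simulated ratings for 20 questionnaires
set.seed(42)
ratings <- as.data.frame(matrix(sample(1:7, 20 * 28, replace = TRUE), nrow = 20))
score_attrakdiff(ratings)
```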
5.2. Method
Since we study the user-tools of social networks, and we have one independent variable with two factors, prototypes and social networks, our tests follow a basic design [77]. Moreover, since we had only one group of participants who were exposed to both factors, our tests have a within-group design [77].
Our only dependent variable is UX, of course, but since it is a latent variable and therefore cannot be measured directly [78], we used AttrakDiff, whose four dimensions help us measure the UX perceived by our participants (cf. Section 5.1).
Finally, our control variables are the environment where we carried out the tests, since all the participants were exposed to the same conditions (e.g., materials, noise and light levels, desk, chair, and room). The characteristics of our participants were also controlled (cf. Section 5.2.1). Table 2 summarizes the variables of our tests. The method for conducting our tests has been widely used by various authors in similar contexts [79–82].
Table 2. Variables of our study.

Independent Variable | Dependent Variable | Control Variables
User-tools (prototypes and social networks) | UX (Pragmatic Quality, Hedonic Quality—Identity, Hedonic Quality—Stimulation, Attractiveness) | Ambient and participants
5.2.1. Participants
We used an opportunistic sample to recruit our participants, given that they are all members of our department. All participants gave their informed consent for inclusion before they participated in the study. In addition, the study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Ethics Committee of our department.
Our testing group was composed of 20 participants (five of whom were women), with an average age of 28.15 years (minimum 20, maximum 38). We made the decision of limiting their age to a range between 20 and 40 years, in order to prevent our results from being biased by participants with particular needs (e.g., oversimplifying the language and instructions used or making the fonts of the GUIs larger). Although we know that this is a rather small sample, it is within the average for this kind of test [72].
Participants were selected because of their familiarity with social networks. We think that people unconnected to such platforms would be constrained in their ability to perform the assigned tasks, invalidating our study. Moreover, we believe that better results are obtained when participants have experience with social networks.
5.2.2. Procedure
We carried out the AUX and EUX assessments of user-tools in a peaceful environment to limit outer sources of noise in our study. Each volunteer participated individually in the testing sessions, which were conducted by a moderator in situ.
As the first step of our tests, participants filled out a questionnaire about their demographic information and former contact with social networks. Afterwards, participants performed the tasks and assessments.
Each session had a length of around 40 min. We ran the tests over 20 days, i.e., one participant per day. All tests were done around 10 a.m.; we did this to try to have a similar state in each participant.
The results of the aforementioned questionnaire showed that YouTube was the most used platform among our participants, with 100% usage. Facebook got moderate use at 47%, and Reddit was the least used at 2%. Therefore, we decided to use these three platforms to assess EUX.
As we stated earlier, our goal was to improve the design of user-tools through the contrast of AUX and EUX. To achieve this, we devised the following three tasks, which represent common activities within social networks. Participants had to complete each one twice during the trial, once for AUX and once for EUX:
1. Message: Transmit a private message to another user.
2. Publication: Share multimedia.
3. Search: Look for somebody or for a certain theme.
To identify the required user-tools for accomplishing each task, we considered different approaches, e.g., giving participants user-tools made up of paper cut-outs. Nevertheless, if we provided participants with a predetermined set of user-tools, they would have prejudices, i.e., we would obtain very similar results across participants, including the possibility of identical prototypes, consequently limiting their feedback significantly. Thus, for the AUX assessment step, the best alternative was for each participant to create their own user-tools.
The next two steps of our method are the AUX and EUX assessments of user-tools:
Prototype construction: First, we asked participants to imagine that they took the role of a Web designer with the aim of creating a novel GUI for a social network. Afterwards, relying on their experience, they had to create three paper prototypes, corresponding to the three tasks previously defined. Participants had to draw GUI elements to solve the tasks, just as if they were designing a website GUI. In our pilot tests, we obtained prototypes similar to the one depicted in Figure 3a, so we resolved to design a canvas to make it easier for the participants to create their prototypes. Figure 3b–d show random samples of prototypes from our actual tests. When they concluded the construction and description of each prototype, participants had to assess it with the AttrakDiff questionnaire. Therefore, this stage allowed participants to explain their decisions about how they conceived the behavior of the GUI, the rationale behind their designs, and the user-tools required to accomplish each task. In this manner, we assessed the AUX of user-tools.
Tasks using online social networks: Once the three prototypes and their assessments were concluded, we asked participants to carry out the same three tasks, but now using online social networks. Hence, on Reddit, participants transmitted a private message to another user; on Facebook, they shared multimedia; and on YouTube, they searched for somebody or for a certain topic. As in the previous stage, after finishing each task, they had to assess it through the AttrakDiff questionnaire. In this way, we assessed the EUX of user-tools.
In this way, taking into account that each of the 20 participants made six evaluations (three tasks in two phases), we finished with 120 questionnaires: 60 corresponding to AUX and 60 to EUX.
Figure 3. Samples of prototypes from the pilot tests (a) and from the actual tests (b–d).
6. Results
Seven semantic pairs correspond to each dimension of AttrakDiff. The ratings go from one to seven; the higher, the better. Table 3 contains the results from the 120 questionnaires: the means (µ) and standard deviations (σ) of each dimension for the three tasks.
Table 3. AttrakDiff dimensions results.

Task | Phase | Statistic | Pragmatic Quality | Identity | Stimulation | Attractiveness
Message | AUX | µ | 5.65 | 4.75 | 3.42 | 5.17
Message | AUX | σ | 0.29 | 0.95 | 0.53 | 0.29
Message | EUX | µ | 3.22 | 3.50 | 3.52 | 3.30
Message | EUX | σ | 0.49 | 0.44 | 0.51 | 0.31
Publication | AUX | µ | 5.57 | 4.77 | 3.70 | 5.10
Publication | AUX | σ | 0.44 | 0.69 | 0.45 | 0.48
Publication | EUX | µ | 5.97 | 5.35 | 3.99 | 5.74
Publication | EUX | σ | 0.23 | 1.08 | 1.19 | 0.29
Search | AUX | µ | 5.51 | 4.67 | 2.98 | 4.96
Search | AUX | σ | 0.54 | 1.02 | 0.57 | 0.52
Search | EUX | µ | 6.23 | 5.42 | 3.75 | 6.02
Search | EUX | σ | 0.45 | 1.05 | 1.31 | 0.26
Figure 4 is the graphical representation of the results. For all plots, the X-axis contains the four dimensions of AttrakDiff, and the Y-axis measures their averages. As the legend indicates, light bars are the measurements of AUX, while dark bars represent the results of EUX in our three tasks: Figure 4a contrasts the results for messages, Figure 4b does the same for publications, and Figure 4c for searches.
Figure 4. AttrakDiff results for messages (a), publications (b), and searches (c).
In assessing these results, we also looked at the reliability scores for the different dimensions. Table 4 shows the Cronbach’s alpha values for the AttrakDiff dimensions in each task (α level = 0.05).
Table 4. AttrakDiff dimensions reliability analysis (Cronbach’s alpha values).

Dimension | Message AUX | Message EUX | Publication AUX | Publication EUX | Search AUX | Search EUX
Overall | 0.82 | 0.87 | 0.83 | 0.83 | 0.78 | 0.83
Pragmatic Quality | 0.79 | 0.87 | 0.80 | 0.62 | 0.83 | 0.70
Identity | 0.56 | 0.65 | 0.53 | 0.67 | 0.62 | 0.57
Stimulation | 0.92 | 0.83 | 0.94 | 0.76 | 0.86 | 0.77
Attractiveness | 0.81 | 0.93 | 0.80 | 0.86 | 0.76 | 0.93
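For reference, Cronbach’s alpha for a single dimension can be computed directly from its standard formula. The base-R sketch below is a minimal illustration with simulated data, not the exact script used to produce Table 4.

```r
# Minimal sketch (not our exact analysis script): Cronbach's alpha for one
# AttrakDiff dimension. 'items' has one row per participant and one column
# per semantic pair (seven per dimension), each rated 1-7.
cronbach_alpha <- function(items) {
  k <- ncol(items)                   # number of items (semantic pairs)
  item_vars <- apply(items, 2, var)  # variance of each item across participants
  total_var <- var(rowSums(items))   # variance of the summed scale scores
  (k / (k - 1)) * (1 - sum(item_vars) / total_var)
}

# Example with simulated ratings for 20 participants and 7 pairs
set.seed(7)
items <- matrix(sample(1:7, 20 * 7, replace = TRUE), nrow = 20)
cronbach_alpha(items)
```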
To contrast the results of the tests, and given that our design is within-groups with an independent variable of two factors, the statistical analysis we performed was a paired-samples t-test [83]. In this way, we determined whether there are significant differences between the means of each dimension of AttrakDiff in the AUX and EUX tests for each task (see Table 5). To obtain all the statistical analyses, we used the R language.
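To illustrate, the snippet below runs one such comparison in R. The vectors stand for each participant’s mean rating on a single dimension in the AUX and EUX phases; they are simulated stand-ins seeded from the Message-task means in Table 3, not our actual data.

```r
# Illustrative paired-samples t-test for one AttrakDiff dimension.
# 'aux' and 'eux' hold each participant's mean rating (N = 20) for, e.g.,
# Pragmatic Quality in the Message task. These values are simulated
# stand-ins, not our actual data.
set.seed(20)
aux <- pmin(pmax(rnorm(20, mean = 5.65, sd = 0.29), 1), 7)  # AUX phase
eux <- pmin(pmax(rnorm(20, mean = 3.22, sd = 0.49), 1), 7)  # EUX phase
t.test(aux, eux, paired = TRUE)  # reports t, df = 19, and the p value
```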
Table 5. p values for paired-samples t-tests (comparisons between AUX and EUX in each dimension).

Dimension | Message | Publication | Search
Pragmatic Quality | 2.79 × 10⁻⁶ * | 0.18 | 0.01 *
Identity | 3.08 × 10⁻⁵ * | 0.05 * | 0.003 *
Stimulation | 0.82 | 0.53 | 0.08
Attractiveness | 6.53 × 10⁻⁶ * | 0.06 | 0.0005 *

* p ≤ 0.05 significant.
7. Discussion
Table 3 clearly shows that the paper prototypes were better evaluated than their counterpart on Reddit, and Figure 4a reveals something similar. The prototypes for messages were the only ones where the assessment of AUX exceeded that of EUX. This is likely because, for most participants, this was their first time using Reddit. It can also be attributed to Reddit offering a negative UX, since it was not so easy for participants to apply their previous experience to a new platform.
Even though participants were free to design their user-tools at their convenience, based on their experience, real social networks gave them more satisfying experiences. Figure 4b,c show that, indeed, all the dimensions were superior in social networks, although it is interesting that the difference, while present, is not that large. An intriguing observation is that the participants were quite incisive in criticizing their prototypes, i.e., they complained that they had not done a good job because they lacked the experience or knowledge necessary to design a GUI.
In general, we can say that the reliability of the data is good, since most of the dimensions obtained good results (>0.7), as can be seen in Table 4. The result that stands out the most is that of the Hedonic Quality—Identity dimension, as it did not reach this level in any of the tests. This could mean that AttrakDiff has a weakness in measuring the Identity dimension. Of course, we would need more evidence to verify or refute that assumption.
Table 5 shows which results of the paired-samples t-tests allow us to reject our null hypothesis. The comparison between AUX and EUX for the messages task was significant in the dimensions of Pragmatic Quality, Identity, and Attractiveness. For the publications task, only Identity was significant, while for the search task, Pragmatic Quality, Identity, and Attractiveness were significant. These significant dimensions indicate that, in these tests, we can reject the null hypothesis, because there is a significant difference in the UX perceived by the participants between the prototypes and the social networks. It is interesting to note that the only dimension that was consistently not significant in any task was Stimulation.
According to Aladwan et al. [39], when users of fitness applications were physically stressed by exercise and tried to use said apps to no avail, their stress increased, as their expectations were not met. This is consistent with our findings, since it is likely that, in an altered state of mind, users need to rely on pragmatic elements that are familiar to them. Something similar happens in the tests carried out by Kukka et al. [43], Margetis et al. [41], Wurhofer et al. [44], and Zhang et al. [42], as their participants focused on interactions that they considered safe when they found themselves in an unfamiliar environment.
Magin et al. [45] studied the possible sources of negative emotions in UX (e.g., anger, sadness, and confusion). They determined that a significant part came from instrumental elements, i.e., usability, which agrees with our findings, since users expect things such as a button being active under certain circumstances or a selected item being removable, i.e., practical tasks.
The work by Moser et al. [40] is interesting because the expectations they measured came from children. It seems that the children’s imagination was more oriented towards hedonic aspects, mainly self-identification, since they cared that the games reflected their personality and decisions. This is striking because it goes against our findings: perhaps the AUX perceived by children gives more weight to the hedonic factors, which could indicate a future path of investigation.
7.1. Implications
An exciting result we obtained was that the differences in the Stimulation means were not significant, as this could indicate that participants thought about basic user-tools to make their prototypes and found similarly essential elements in social networks. We know that drawing more reliable conclusions from this will require more research. However, we could speculate that the experience and imagination of the participants are limited to the essential elements that are commonly found in all GUIs, i.e., they prefer to play it safe. Users look for security rather than for new experiences when testing new GUIs, so Stimulation could become a more decisive factor once they are already familiar with the GUIs.
Such behavior could also indicate that user expectations are more grounded in pragmatic aspects than in hedonic ones. This could have significant implications. For example, it would imply that, when creating new GUIs, designers have to pay more attention to including basic user-tools that allow users to efficiently complete tasks, since user expectations would be mainly focused on practical aspects, e.g., users imagine a button and its action, but not how it looks.
7.2. Limitations
The results presented in this work could have been affected by the sampling of our participants. Given that each evaluation took around 40 min, obtaining a random sample would have represented a significant challenge. Our participants did not receive any kind of incentive.
Similarly, the limitations of the within-groups design make it difficult to control the effects of learning and fatigue. We tried to alleviate this by offering a comfortable and relaxed environment for the participants and by reiterating to them that they were helping us to evaluate the systems, not being evaluated themselves [77].
8. Conclusions and Future Work
UX evaluation is always valuable, regardless of the nature or purpose of the evaluated artifact. In this paper, we proposed a study that compares the AUX and EUX of user-tools through daily tasks on social networks. Our tests revealed that our participants built their expectations with pragmatic criteria, i.e., hedonic and attractiveness aspects were secondary when they were building their prototypes.
Our research contributes to further increasing the understanding of UX, how perceived experiences are measured, and which factors are most relevant at a certain point in an evaluation or development. As we explained in the discussion (Section 7), our results quantitatively suggest that AUX is mainly composed of pragmatic aspects. The development of this idea could lead to improving existing evaluation methods and to the creation of new ones.
As future work, we intend to replicate our tests, but this time with children. As the work by Moser et al. [40] suggests, children may build prototypes with hedonic aspects in mind, i.e., we would expect to obtain results opposite to what we found. We also consider it essential to use other questionnaires besides AttrakDiff, which would help validate our conclusions quantitatively. While in this work we focused on social networks, our assessment method can be used in multiple areas. To prove this, we will use this proposal to assess a chatbot that supports the teaching-learning process in middle schools.
Author Contributions: Conceptualization, L.M.S.-A. and S.M.; methodology, L.M.S.-A., S.M., and J.F.U.-Y.; formal analysis, L.M.S.-A.; supervision, S.M. and J.F.U.-Y.; validation, S.M. and J.F.U.-Y.; writing—original draft preparation, L.M.S.-A.; writing—review and editing, S.M. and J.F.U.-Y.; funding acquisition, S.M. All authors have read and agreed to the published version of the manuscript.
Funding: This research was funded by the “Fondo SEP-CINVESTAV de Apoyo a la Investigación” (Call 2018), project number 120, titled “Desarrollo de un chatbot inteligente para asistir el proceso de enseñanza/aprendizaje en temas educativos y tecnológicos”.
Conflicts of Interest: The authors declare no conflict of interest.
References
1.
Carta, S.; Podda, A.S.; Recupero, D.R.; Saia, R.; Usai, G. Popularity Prediction of Instagram Posts. Information
2020,11, 453. [CrossRef]
2.
Wiederhold, B.K. Social Media Use During Social Distancing. Cyberpsychol. Behav. Soc. Netw.
2020
,23, 275–276.
[CrossRef] [PubMed]
Appl. Sci. 2020,10, 8199 13 of 17
3.
Király, O.; Potenza, M.N.; Stein, D.J.; King, D.L.; Hodgins, D.C.; Saunders, J.B.; Griffiths, M.D.; Gjoneska, B.;
Billieux, J.; Brand, M.; et al. Preventing problematic internet use during the COVID-19 pandemic: Consensus
guidance. Compr. Psychiatry 2020,100, 152–180. [CrossRef] [PubMed]
4.
Chen, L.S.; Chang, P.C. Identifying crucial website quality factors of virtual communities. In Proceedings of the
International MultiConference of Engineers and Computer Scientists, Hong Kong, China, 17–19 March 2010;
Volume 1, pp. 17–19.
5.
El Morr, C.; Eftychiou, L. Evaluation Frameworks for Health Virtual Communities. In The Digitization of
Healthcare: New Challenges and Opportunities; Menvielle, L., Audrain-Pontevia, A.F., Menvielle, W., Eds.;
Palgrave Macmillan UK: London, UK, 2017; pp. 99–118.
6.
Lee, F.S.; Vogel, D.; Limayem, M. Virtual community informatics: A review and research agenda. JITTA J.
Inf. Technol. Theory Appl. 2003,5, 47.
7.
Preece, J.; Abras, C.; Maloney-Krichmar, D. Designing and Evaluating Online Communities: Research
Speaks to Emerging Practice. Int. J. Web Based Commun. 2004,1, 2–18. [CrossRef]
8.
Wang, Y.; Li, Y. Proactive Engagement of Opinion Leaders and Organization Advocates on Social Networking
Sites. Int. J. Strateg. Commun. 2016,10, 115–132. [CrossRef]
9.
Boyd, D.M.; Ellison, N.B. Social Network Sites: Definition, History, and Scholarship. J. Comput. Mediat. Commun.
2007,13, 210–230. [CrossRef]
10.
Chen, V.H.H.; Duh, H.B.L. Investigating User Experience of Online Communities: The Influence of
Community Type. In Proceedings of the 2009 International Conference on Computational Science and
Engineering, Vancouver, BC, Canada, 29–31 August 2009; Volume 4, pp. 509–514.
11.
Preece, J. Online Communities: Designing Usability and Supporting Socialbilty; John Wiley & Sons, Inc.: Hoboken,
NJ, USA, 2000.
12.
Jacobsen, L.F.; Tudoran, A.A.; Lähteenmäki, L. Consumers’ motivation to interact in virtual food
communities—The importance of self-presentation and learning. Food Qual. Prefer.
2017
,62, 8–16. [CrossRef]
13.
Nov, O.; Ye, C. Why Do People Tag?: Motivations for Photo Tagging. Commun. ACM
2010
,53, 128–131.
[CrossRef]
14.
Tella, A.; Babatunde, B.J. Determinants of Continuance Intention of Facebook Usage Among Library and
Information Science Female Undergraduates in Selected Nigerian Universities. Int. J. E-Adopt. (IJEA)
2017
,
9, 59–76. [CrossRef]
15.
Zhou, T. Understanding online community user participation: A social influence perspective. Internet Res.
2011,21, 67–81. [CrossRef]
16.
Lamprecht, J.; Siemon, D.; Robra-Bissantz, S. Cooperation Isn’t Just About Doing the Same Thing—Using
Personality for a Cooperation-Recommender-System in Online Social Networks; Collaboration and Technology;
Yuizono, T., Ogata, H., Hoppe, U., Vassileva, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2016;
pp. 131–138.
17.
Fragidis, G.; Ignatiadis, I.; Wills, C. Value Co-creation and Customer-Driven Innovation in Social Networking
Systems; Exploring Services Science; Morin, J.H.; Ralyté, J., Snene, M., Eds.; Springer: Berlin/Heidelberg,
Germany, 2010; pp. 254–258.
18.
Mai, H.T.X.; Olsen, S.O. Consumer participation in virtual communities: The role of personal values and
personality. J. Mark. Commun. 2015,21, 144–164. [CrossRef]
19.
McCormick, T.J. A success-Oriented Framework to Enable Co-Created E-Services; The George Washington
University: Washington, DC, USA, 2010.
20.
Ling, K.; Beenen, G.; Ludford, P.; Wang, X.; Chang, K.; Li, X.; Cosley, D.; Frankowski, D.; Terveen, L.;
Rashid, A.M.; et al. Using Social Psychology to Motivate Contributions to Online Communities. J. Comput.
Mediat. Commun. 2005,10. [CrossRef]
21.
Grudin, J. Why CSCW Applications Fail: Problems in the Design and Evaluation of Organizational Interfaces.
In Proceedings of the 1988 ACM Conference on Computer-supported Cooperative Work, CSCW ’88, Portland,
OR, USA, 26–28 September 1988; pp. 85–93. [CrossRef]
22.
Talin. Why Google+ Failed. 2019. Available online: https://onezero.medium.com/why-google-failed-
4b9db05b973b (accessed on 14 October 2019).
23. Hassenzahl, M. The hedonic/pragmatic model of user experience. Towards UX Manif. 2007,10, 10–14.
Appl. Sci. 2020,10, 8199 14 of 17
24.
Hassenzahl, M.; Platz, A.; Burmester, M.; Lehner, K. Hedonic and Ergonomic Quality Aspects Determine a
Software’s Appeal. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems,
CHI ’00, The Hague, The Netherlands, 1–6 April 2000; pp. 201–208.
25.
ISO. Ergonomics of Human-System Interaction-Part 210: Human-Centred Design for Interactive Systems; Technical
Report; International Organization for Standardization: Geneva, CH, USA, 2010.
26.
Lallemand, C.; Gronier, G.; Koenig, V. User experience: A concept without consensus? Exploring
practitioners’ perspectives through an international survey. Comput. Hum. Behav.
2015
,43, 35–48. [CrossRef]
27.
Roto, V.; Law, E.L.C.; Vermeeren, A.; Hoonhout, J. 10373 Abstracts Collection—Demarcating User eXperience.
In Proceedings of the Dagstuhl Seminar on Demarcating User Experience; Hoonhout, J., Law, E.L.C., Roto, V.,
Vermeeren, A., Eds.; Schloss Dagstuhl—Leibniz-Zentrum fuer Informatik, Germany: Dagstuhl, Germany,
2011; Number 10373 in Dagstuhl Seminar Proceedings.
28.
Karapanos, E.; Zimmerman, J.; Forlizzi, J.; Martens, J.B. Measuring the dynamics of remembered experience
over time. Modelling user experience—An agenda for research and practice. Interact. Comput.
2010
,
22, 328–335. [CrossRef]
29.
Kujala, S.; Roto, V.; Väänänen-Vainio-Mattila, K.; Karapanos, E.; Sinnelä, A. UX Curve: A method for evaluating
long-term user experience. Feminism and HCI: New Perspectives. Interact. Comput.
2011
,23, 473–483. [CrossRef]
30.
Winckler, M.; Bernhaupt, R.; Bach, C. Identification of UX dimensions for incident reporting systems
with mobile applications in urban contexts: A longitudinal study. Cogn. Technol. Work
2016
,18, 673–694.
[CrossRef]
31.
Yogasara, T.; Popovic, V.; Kraal, B.J.; Chamorro-Koc, M. General characteristics of anticipated user experience
(AUX) with interactive products. In Proceedings of the IASDR2011: The 4th World Conference on Design
Research: Diversity and Unity, Delft, The Netherlands, 31 October–4 November 2011; pp. 1–11.
32.
Stone, D.; Jarrett, C.; Woodroffe, M.; Minocha, S. User Interface Design and Evaluation; Morgan Kaufmann
Series in Interactive Technologies; Morgan Kaufman: San Francisco, CA, USA, 2005.
33.
Bargas-Avila, J.A.; Hornbæk, K. Old Wine in New Bottles or Novel Challenges: A Critical Analysis of
Empirical Studies of User Experience. In Proceedings of the SIGCHI Conference on Human Factors in
Computing Systems, CHI ’11, Vancouver, BC, Canada, 7–12 May 2011; pp. 2689–2698.
34.
Karapanos, E.; Zimmerman, J.; Forlizzi, J.; Martens, J.B. User Experience over Time: An Initial Framework.
In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’09, Boston, MA,
USA, 4–9 April 2009; pp. 729–738.
35.
Vermeeren, A.P.O.S.; Law, E.L.C.; Roto, V.; Obrist, M.; Hoonhout, J.; Väänänen-Vainio-Mattila, K.
User Experience Evaluation Methods: Current State and Development Needs. In Proceedings of the
6th Nordic Conference on Human-Computer Interaction: Extending Boundaries, NordiCHI ’10, Reykjavik,
Iceland, 16–20 October 2010; pp. 521–530.
36.
Sánchez-Adame, L.M.; Mendoza, S.; González-Beltrán, B.A.; Rodríguez, J.; Meneses Viveros, A. AUX and
UX Evaluation of User Tools in Social Networks. In Proceedings of the 2018 IEEE/WIC/ACM International
Conference on Web Intelligence (WI), Santiago, Chile, 3–6 December 2018; pp. 104–111. [CrossRef]
37.
Sánchez-Adame, L.M.; Mendoza, S.; González-Beltrán, B.A.; Rodríguez, J.; Viveros, A.M. UX Evaluation Over
Time: User Tools in Social Networks. In Proceedings of the 2018 15th International Conference on Electrical
Engineering, Computing Science and Automatic Control (CCE), Mexico City, Mexico, 5–7 September 2018;
pp. 1–6. [CrossRef]
38.
Hassenzahl, M.; Burmester, M.; Koller, F. AttrakDiff: Ein Fragebogen zur Messung wahrgenommener
hedonischer und pragmatischer Qualität. In Mensch & Computer; Springer: Berlin/Heidelberg, Germany,
2003; pp. 187–196.
39.
Aladwan, A.; Kelly, R.M.; Baker, S.; Velloso, E. A Tale of Two Perspectives: A Conceptual Framework of User
Expectations and Experiences of Instructional Fitness Apps. In Proceedings of the 2019 CHI Conference on
Human Factors in Computing Systems, CHI ’19, Glasgow, Scotland, UK, 4–9 May 2019; pp. 394:1–394:15.
[CrossRef]
40.
Moser, C.; Chisik, Y.; Tscheligi, M. Around the World in 8 Workshops: Investigating Anticipated Player
Experiences of Children. In Proceedings of the First ACM SIGCHI Annual Symposium on Computer-human
Interaction in Play, CHI PLAY ’14, Toronto, ON, Canada, 18–22 October 2014; pp. 207–216.
41.
Margetis, G.; Zabulis, X.; Koutlemanis, P.; Antona, M.; Stephanidis, C. Augmented interaction with physical
books in an Ambient Intelligence learning environment. Multimed. Tools Appl.
2013
,67, 473–495. [CrossRef]
Appl. Sci. 2020,10, 8199 15 of 17
42.
Zhang, E.; Culbertson, G.; Shen, S.; Jung, M. Utilizing Narrative Grounding to Design Storytelling Games
for Creative Foreign Language Production. In Proceedings of the 2018 CHI Conference on Human Factors in
Computing Systems, CHI ’18, Montreal, QC, Canada, 21–26 April 2018; pp. 197:1–197:11.
43.
Kukka, H.; Pakanen, M.; Badri, M.; Ojala, T. Immersive Street-level Social Media in the 3D Virtual City:
Anticipated User Experience and Conceptual Development. In Proceedings of the 2017 ACM Conference on
Computer Supported Cooperative Work and Social Computing, CSCW ’17, Portland, OR, USA, 25 February–1
March 2017; pp. 2422–2435.
44.
Wurhofer, D.; Krischkowsky, A.; Obrist, M.; Karapanos, E.; Niforatos, E.; Tscheligi, M. Everyday
Commuting: Prediction, Actual Experience and Recall of Anger and Frustration in the Car. In Proceedings
of the 7th International Conference on Automotive User Interfaces and Interactive Vehicular Applications,
AutomotiveUI ’15, Nottingham, UK, 1–3 September 2015; pp. 233–240.
45.
Magin, D.P.; Maier, A.; Hess, S. Measuring Negative User Experience. In Design, User Experience, and
Usability: Users and Interactions; Marcus, A., Ed.; Springer: Berlin/Heidelberg, Germany, 2015; pp. 95–106.
46.
Sato, G.Y.; de Azevedo, H.J.S.; Barthès, J.P.A. Agent and multi-agent applications to support distributed
communities of practice: A short review. Auton. Agents Multi-Agent Syst. 2012,25, 87–129. [CrossRef]
47.
Peffers, K.; Tuunanen, T.; Rothenberger, M.; Chatterjee, S. A Design Science Research Methodology for
Information Systems Research. J. Manag. Inf. Syst. 2007,24, 45–77. [CrossRef]
48.
Carey, K.; Helfert, M. An Interactive Assessment Instrument to Improve the Process for Mobile Service
Application Innovation. In HCI in Business; Fui-Hoon Nah, F., Tan, C.H., Eds.; Springer International
Publishing: Berlin/Heidelberg, Germany, 2015; pp. 244–255.
49.
Strohmann, T.; Höper, L.; Robra-Bissantz, S. Design Guidelines for Creating a Convincing User Experience
with Virtual In-vehicle Assistants. In Proceedings of the 52nd Hawaii International Conference on System
Sciences, Maui, HI, USA, 8–11 January 2019; pp. 4813–4822.
50.
Kumar, B.A.; Chand, S. Mobile App to Support Teaching in Distance Mode at Fiji National University:
Design and Evaluation. Int. J. Virtual Pers. Learn. Environ. (IJVPLE) 2018,8, 25–37. [CrossRef]
51.
Koh, J.; Kim, Y.G.; Butler, B.; Bock, G.W. Encouraging Participation in Virtual Communities. Commun. ACM
2007,50, 68–73. [CrossRef]
52.
Apostolou, B.; Bélanger, F.; Schaupp, L.C. Online communities: Satisfaction and continued use intention.
Inf. Res. 2017,22, 774.
53.
Hummel, J.; Lechner, U. Social profiles of virtual communities. In Proceedings of the 35th Annual Hawaii
International Conference on System Sciences, Big Island, HI, USA, 10 January 2002; pp. 2245–2254.
54.
Iriberri, A.; Leroy, G. A Life-cycle Perspective on Online Community Success. ACM Comput. Surv.
2009
,
41, 11:1–11:29. [CrossRef]
55.
Preece, J. Sociability and usability in online communities: Determining and measuring success. Behav. Inf. Technol.
2001,20, 347–356. [CrossRef]
56.
Virzi, R.A. What can you Learn from a Low-Fidelity Prototype? Proc. Hum. Factors Soc. Annu. Meet.
1989
,
33, 224–228. [CrossRef]
57.
Walker, M.; Takayama, L.; Landay, J.A. High-Fidelity or Low-Fidelity, Paper or Computer? Choosing Attributes
when Testing Web Prototypes. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2002,46, 661–665. [CrossRef]
58.
Maulsby, D.; Greenberg, S.; Mander, R. Prototyping an Intelligent Agent Through Wizard of Oz.
In Proceedings of the INTERACT ’93 and CHI ’93 Conference on Human Factors in Computing Systems,
CHI’93, Amsterdam, The Netherlands, 24–29 April 1993; pp. 277–284. [CrossRef]
59.
Davis, R.C.; Saponas, T.S.; Shilman, M.; Landay, J.A. SketchWizard: Wizard of Oz Prototyping of Pen-based
User Interfaces. In Proceedings of the 20th Annual ACM Symposium on User Interface Software and
Technology, UIST ’07, Newport, RI, USA, 7–10 October 2007; pp. 119–128. [CrossRef]
60.
Morville, P. Experience Design Unplugged. In ACM SIGGRAPH 2005 Web Program; ACM: New York, NY,
USA, 2005.
61.
Coyette, A.; Kieffer, S.; Vanderdonckt, J. Multi-fidelity Prototyping of User Interfaces. Human-Computer
Interaction—INTERACT 2007; Baranauskas, C., Palanque, P., Abascal, J., Barbosa, S.D.J., Eds.; Springer:
Berlin/Heidelberg, Germany, 2007; pp. 150–164.
62.
D-LABS. Medium-Fidelity-Prototyping. 2019. Available online: https://www.d-labs.com/en/services-and-
methods/medium-fidelity-prototyping.html (accessed on 14 October 2019)
Appl. Sci. 2020,10, 8199 16 of 17
63.
Hart, S.G.; Staveland, L.E. Development of NASA-TLX (Task Load Index): Results of empirical and
theoretical research. In Advances in Psychology; Elsevier: Amsterdam, The Netherlands, 1988; Volume 52,
pp. 139–183.
64.
Hernández-Sampieri, R.; Torres, C.P.M. Metodología de la Investigación; McGraw-Hill Interamericana: Ciudad
de México, México, 2018; Volume 4.
65.
Kothari, C.R. Research Methodology: Methods and Techniques; New Age International: New Delhi, India, 2004.
66.
Isleifsdottir, J.; Larusdottir, M. Measuring the User Experience of a Task Oriented Software. In Proceedings of
the international Workshop on Meaningful Measures: Valid Useful User Experience Measurement, Reykjavik,
Iceland, 18 June 2008; Volume 8, pp. 97–101.
67.
Takahashi, L.; Nebe, K. Observed Differences between Lab and Online Tests Using the AttrakDiff Semantic
Differential Scale. J. Usability Stud. 2019,14, 65–75.
68.
Hassenzahl, M.; Monk, A. The Inference of Perceived Usability From Beauty. Hum. Comput. Interact.
2010
,
25, 235–260. [CrossRef]
69. Braun, P. AttrakDiff, I feel so I am? Measuring affects tested by digital sensors. In Digital Klee Esquisses Pédagogiques. Enquête sur le futur de la forme. Présent Composé (Rennes); Les Presses du Réel: Dijon, France, 2020; pp. 140–154.
70. Ribeiro, I.M.; Providência, B. Quality Perception with AttrakDiff Method: A Study in Higher Education. In Advances in Design and Digital Communication; Martins, N., Brandão, D., Eds.; Springer International Publishing: Cham, Switzerland, 2021; pp. 222–233.
71. Klaassen, R.; op den Akker, R.; Lavrysen, T.; van Wissen, S. User preferences for multi-device context-aware feedback in a digital coaching system. J. Multimodal User Interfaces 2013, 7, 247–267. [CrossRef]
72. Díaz-Oreiro, I.; López, G.; Quesada, L.; Guerrero, L.A. Standardized Questionnaires for User Experience Evaluation: A Systematic Literature Review. Proceedings 2019, 31, 1014. [CrossRef]
73. Lallemand, C.; Koenig, V. Measuring the Contextual Dimension of User Experience: Development of the User Experience Context Scale (UXCS). In Proceedings of the 11th Nordic Conference on Human-Computer Interaction: Shaping Experiences, Shaping Society, NordiCHI ’20, Tallinn, Estonia, 25–29 October 2020. [CrossRef]
74. Isomursu, P.; Virkkula, M.; Niemelä, K.; Juntunen, J.; Kumpuoja, J. Modified AttrakDiff in UX Evaluation of a Mobile Prototype. In Proceedings of the International Conference on Advanced Visual Interfaces, AVI ’20, Salerno, Italy, 28 September–2 October 2020. [CrossRef]
75. Walsh, T.; Varsaluoma, J.; Kujala, S.; Nurkka, P.; Petrie, H.; Power, C. Axe UX: Exploring Long-term User Experience with iScale and AttrakDiff. In Proceedings of the 18th International Academic MindTrek Conference: Media Business, Management, Content & Services, AcademicMindTrek ’14, Tampere, Finland, 4–6 November 2014; pp. 32–39.
76. Hu, J.; Le, D.; Funk, M.; Wang, F.; Rauterberg, M. Attractiveness of an Interactive Public Art Installation. In Distributed, Ambient, and Pervasive Interactions; Streitz, N., Stephanidis, C., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; pp. 430–438.
77. Lazar, J.; Feng, J.H.; Hochheiser, H. Chapter 3—Experimental design. In Research Methods in Human Computer Interaction, 2nd ed.; Lazar, J., Feng, J.H., Hochheiser, H., Eds.; Morgan Kaufmann: Boston, MA, USA, 2017; pp. 45–69. [CrossRef]
78. Sauro, J. Measuring the Quality of the Website User Experience; University of Denver: Denver, CO, USA, 2016.
79. Bevan, N.; Liu, Z.; Barnes, C.; Hassenzahl, M.; Wei, W. Comparison of Kansei Engineering and AttrakDiff to Evaluate Kitchen Products. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems, CHI EA ’16, San Jose, CA, USA, 7–12 May 2016; pp. 2999–3005. [CrossRef]
80. Merz, B.; Tuch, A.N.; Opwis, K. Perceived User Experience of Animated Transitions in Mobile User Interfaces. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems, CHI EA ’16, San Jose, CA, USA, 7–12 May 2016; pp. 3152–3158. [CrossRef]
81. Aula, A.; Khan, R.M.; Guan, Z. How Does Search Behavior Change as Search Becomes More Difficult? In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’10, Atlanta, GA, USA, 10–15 April 2010; pp. 35–44. [CrossRef]
82. Chin, J.; Fu, W.T. Interactive Effects of Age and Interface Differences on Search Strategies and Performance. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’10, Atlanta, GA, USA, 10–15 April 2010; pp. 403–412. [CrossRef]
83. Lazar, J.; Feng, J.H.; Hochheiser, H. Chapter 4—Statistical analysis. In Research Methods in Human Computer Interaction, 2nd ed.; Lazar, J., Feng, J.H., Hochheiser, H., Eds.; Morgan Kaufmann: Boston, MA, USA, 2017; pp. 71–104. [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).