IJARCCE
ISSN (Online) 2278-1021
ISSN (Print) 2319-5940
International Journal of Advanced Research in Computer and Communication Engineering
Vol. 10, Issue 2, February 2021
DOI 10.17148/IJARCCE.2021.10203
Copyright to IJARCCE. This work is licensed under a Creative Commons Attribution 4.0 International License.
Evaluating and Improving the User Experience
(UX) of Organization Workflow
Abrar Mubarak Munef1, Wajdi Al Jedaibi2, Emad AlBassam3
Department of Computer Science, Faculty of Computing and Information Technology,
King Abdulaziz University, Jeddah, 21589, Saudi Arabia1,2,3
Abstract: User Experience (UX) is becoming recognized as a necessary success factor across many sectors and organizations, including software development products such as workflow management systems. These systems are important for handling the complex needs of organizations, helping them control, organize, and better manage their routine processes. To assess the performance and quality of a workflow system, it is essential to evaluate UX against different criteria, since UX reflects how well the system fulfils users' needs. This research provides a framework to assess the user experience of using a workflow management system from multiple perspectives: (i) Ease of Use, (ii) Ease of Learning, (iii) System Usefulness, (iv) Informational Quality, (v) Interface Quality, and (vi) Overall Experience. In addition, the framework was validated using real case scenarios to assess the current state of UX for an organization's workflow system. The collected responses were analysed using different statistical techniques to understand the performance of the proposed model. The results suggest a high level of correlation among the evaluation criteria, while Cronbach's Alpha and the Split-Half Reliability Test show excellent performance of the model in evaluating the UX criteria. The research helps organizations assess the quality and use of workflow management systems from the user's perspective.
Keywords: User Experience, Organization Workflow, User Experience Evaluation, User Experience Criteria.
I. INTRODUCTION
Nowadays, User Experience (UX) has become an important and popular success factor across many sectors and organizations. It is the result of the interaction of the user, the system, and the context. The word "experience" covers all aspects of how users use an interactive product [1]: (i) the way it feels in their hands, (ii) how well they understand how it works, (iii) how they feel about it while using it, and (iv) how well it serves their purposes and fits the entire context in which they use it [2]; that is, the experiential, affective, meaningful, and valuable aspects of product use [3]. In the context of UX, an experience is a series of events over time during a user's interaction with a software product [4]. The workflow system in any organization is described as the set of processes, the resources available, the people required, and the interactions among them necessary to accomplish a given organizational goal [5]; this motivates evaluating the user experience of the organization's workflow system.
Measuring and evaluating UX is important for building a good system or service. It can give additional insight into users' perception of particular criteria of the system [1]. UX has been evaluated using a variety of criteria such as effectiveness, efficiency, ease of use, ease of learning, and others. Through evaluation, the researcher can identify the requirements needed to develop and improve a system. Evaluation focuses on selecting the best design to ensure that the development and improvement of the software move in the correct direction and fulfil users' needs [1], [6]. UX encompasses the range of human reactions to be measured, including pleasure, as well as the circumstances under which they are measured, including anticipated use and consideration of use [7]. The lifecycle of UX has three levels: before, during, and after interacting with the system; to achieve an improved user experience, this study focuses on measuring and evaluating the system after interaction.
According to the findings in this study, users of a workflow system in an organization faced difficulty in searching for information, following up on or tracking specific transactions, and completing the tasks that need to be completed within the system. In addition, the lack of a proper and clear data organization framework makes gathering information from the system quite difficult. All of this adds complexity to the use of the system and reduces user productivity. Based on this problem statement, the main objective of this study is to understand the user experience process and to evaluate the criteria for an organization's workflow management system.
II. RELATED WORK
A. User Experience (UX) Process
UX has been defined as the value provided to users when they use products or services. UX design, in turn, is the process undertaken to enhance user satisfaction with products and services by improving accessibility, usability, and the pleasure provided while transacting with the product or service [8]. However, developing UX to the ultimate level of user satisfaction is not the responsibility of a single person or a single team; rather, it is the organization's overall vision [9]. The UX process is defined as an iterative method that facilitates the continuous improvement of designs by incorporating user feedback [10]. User feedback must be used as much as possible at every phase of developing the UX design. A UX design process entails an approach similar to design thinking, consisting of five phases: empathizing with users (learning about the users); defining the problem (identifying the users' needs); ideating (generating design ideas); prototyping (transforming ideas into practical examples); and testing (evaluating the designs with users) [11]. However, despite the standardized phases, the UX process will necessarily differ from project to project, organization to organization, and designer to designer, depending on factors such as the nature of the project, deadlines, the client, and the experience of the UX designer. One study [12] expands the UX process to six phases, as highlighted below:
● Understand: the designer should understand requirements so as to create user personas and define use cases.
● Research: the designer must analyse competitors, research the latest UX trends, and observe guidelines.
● Sketch: the designer gathers ideas, draws sketches and wireframes, and then evaluates and re-draws them.
● Design: the designer designs images, creates prototypes, and defines the UX guidelines.
● Implement: the functionality is implemented and the experience built.
● Evaluate: the designer performs usability testing, creates audit reports, and identifies improvements.
B. User Experience (UX) Evaluation
UX evaluation is defined as the collection of tools, methods, and skills used to uncover the way users perceive products, services, and non-commercial items (or a combination of the three, also known as a system) before, during, and after they interact with them [13]. Another study [9] pointed out that evaluating UX is complex because of its subjective nature and the fact that it is dynamic over time and context-dependent. Improving UX evaluation matters because the more sectors mature, the more they take for granted the technical reliability and usability of their products. According to [14], this leads consumers to start looking for alternative products that offer them a more engaging UX. Therefore, in order to retain customers and attract new ones, it is imperative that businesses evaluate UX accurately and meaningfully enough to be able to provide a more engaging UX [8].
As noted in [9], UX is generally understood as inherently dynamic, which is associated with people's ever-changing emotional and internal states and the differences in their circumstances before, during, and after they come into contact and interact with a product. Thus, it is not accurate to view UX simply as a concept to be evaluated after interacting with a product; rather, the evaluation should also take place before and during the interaction. On one hand, it is important to evaluate short-term UX, mainly in consideration of the dynamic changes in users' needs and goals related to contextual factors. On the other hand, [15] also pointed out the importance of knowing how and why UX evolves over time. Further, users' values influence their UX with products and services, hence the importance of considering this relationship in the design process. From these assertions, it is inferred that looking beyond static aspects and investigating temporal aspects of UX (i.e., understanding how UX changes over time) is important in UX evaluation.
The most common approaches to UX evaluation include Attribute Analysis; Formal Experiment; Question or Survey; and the Goal/Question/Metric (GQM) Paradigm [16]. The attribute analysis method entails a heuristic analysis that focuses on usability, whereby experts compare the design of a digital product against a list of predefined principles to identify areas in which the product does not follow those principles (the name heuristics is derived from this list of principles) [16]. However, [17] criticized this method, arguing that traditional usability evaluations are essentially different from UX evaluation. They point out that while the emphasis of usability is on efficiency and effectiveness, UX additionally entails hedonic attributes besides the pragmatic ones, which makes it subjective. Another study argued that while objective measures such as the number of clicks, errors, or time spent executing a task are not valid UX measures on their own, it is important to consider them to understand the user's feelings about using the product [16]. This is because user expectations and motivation affect the experience more than in conventional usability.
C. Elements of User Experience (UX)
When one interacts with a website, several decisions are made while surfing, and those decisions usually take into account all the actions the user can make. Users can therefore be motivated to continue using and interacting with the website (or product) when their experience is enhanced and their needs are satisfied [18]. According to [19], the design decisions build upon each other, informing and influencing the different aspects of UX. Although the elements are numerous, they can comprehensively be compressed into five: strategy, scope, structure, skeleton, and surface. Each element depends on the one directly below it, and every decision made at one layer affects the decisions at subsequent layers. The strategy element captures the reason for the website, product, or application: why it is created, whom it targets, why the target audience is willing to use it, and why they need it [20]. The goal of this element is to define business objectives and user needs, which can be achieved via a strategic research process using interviews and reviews of competing organizations or products. The scope element defines the content and functional requirements, which must be aligned with the strategic goals in order to fulfil them. Reference [12] describes content requirements as the information (videos, images, text, audio, etc.) needed to provide value; without it, it is difficult to estimate the magnitude of the project and the time required to complete it. Functional requirements are the requirements related to how the features of the product interrelate and work with each other; ideally, users need these features to accomplish their objectives. To contextualize content versus functional needs [12]: the feature could be a media player to play songs, while the content is the audio files for the songs.
The structure element defines the user-product interaction: the way the system behaves when a user interacts with it, and the way it is organized and prioritized [5]. This element is divided into two components, Information Architecture and Interaction Design. Given the functional requirements, interaction design defines the way a user can interact with the product and how, in turn, the system behaves in response to such interactions. Information architecture, given the content requirements, defines how content elements are arranged and organized in order to facilitate human understanding. A good interaction design helps users achieve their goals, as it communicates functionality and interactivity effectively. It also informs users of state changes, such as files having been saved while they interact, and helps prevent user errors, for example by prompting the user to verify potentially risky actions such as a deletion. For its part, good information architecture organizes, categorizes, and prioritizes information based on business objectives and user needs, making it easy for the user to understand and navigate the information presented to them.
D. User Experience (UX) Criteria
The common criteria used to measure UX are satisfaction, usability, ratings, user tasks, and product description. According to [13], it is important to capture ratings as well as the reasons for the ratings. User satisfaction is described by [11] as the most relevant UX criterion, based on the almost obvious assumption that a bad experience is not likely to leave users satisfied. According to [21], users in the real world are more likely to talk about their frustrations than about how satisfied they are. Thus, a practical approach involves asking them to rate their experiences using, say, a 5-point Likert scale ranging from very dissatisfied to very satisfied. Surveys are also considered an effective way of capturing satisfaction ratings along with the feedback provided while using a website or an application. The usability criterion describes how easily users can accomplish the tasks they set out to do. Although [19] argue that usability may not be the differentiator it once was, they acknowledge that it is still important to a product's UX, because a product that is difficult to use will not provide a great UX. A practical approach to capturing overall usability is to ask users how they would describe a product on a scale ranging from extremely hard to use to extremely easy to use. Thus, in contrast with the view presented by [19], [22] opine that usability is a key UX criterion. The System Usability Scale (SUS) is commonly used to measure usability; it comprises 10 questions asked of product users or administered following usability testing. The SUS score is considered most useful for benchmarking usability, either historically (relative to a product before a change) or against similar products. However, [22] also caution that SUS is not a percentage despite the score being out of 100; rather, it is a relative scale that should be applied with care.
Another criterion is engagement; this may be considered important for most websites but is characteristically an ambiguous category [17]. However, [23] argue that UX teams can make real contributions towards understanding the degree to which users interact with a website, the attention they pay to it, the amount of time spent in a flow state, and how good they eventually feel about it. Although time remains an important factor in engagement criteria, it must be used alongside other categories such as event streams, page views, or scrolling at certain intervals. Further, because engagement is a tricky category to read, it generates better results when used in combination with qualitative insights [13]. Rating criteria provide a way of judging the quality of a product; users can be asked to provide overall ratings along with ratings for a product's different features. A 5-point scale is acceptable, but it is also important to capture the reasons for the ratings [16]. Users may be asked to rate, for instance, the entire website or specific aspects of its features. Tasks are considered to be at the core of a product, because products that do not support user tasks are not expected to provide a great user experience [24]. According to [25], criteria for user tasks need to be captured immediately after the
user has attempted a task, which generally implies following usability testing. It is also worth noting that user task criteria may require focusing on a number of measures such as completion rate, error rate, the average number of errors, time spent on tasks, and ease of completion. Effectiveness criteria describe the completeness and accuracy with which a user can achieve a specified goal and can be calculated by measuring the completion rate. Grudin explains that, with regard to speed and errors, the essential scope of tasks has to be completed at a level better than some required level of performance [9]. Users must be able to use a system after a given period of proper training or first self-usage. According to [22], learnability is a measure of the degree to which a user interface can be learned not only quickly but also effectively, the typical measure being learning time. It has also been established that user interfaces are easier to learn when they are designed to be user-friendly based on fundamental psychological properties, and when they are familiar.
III. PROPOSED FRAMEWORK AND EXPERIMENTAL DESIGN
In this study, the proposed framework conducts an empirical study of a workflow system at a Saudi public organization to evaluate and assess the current state of user experience (UX) for the workflow system. We identified preferred UX criteria and analysed their weaknesses based on a survey study. In addition, the relationship between the UX criteria and user satisfaction is examined in order to suggest improved UX criteria and to apply the suggested criteria to an existing workflow system. Our proposed framework is explained in the following two sub-sections (A & B):
A. Identification Phases of Criteria
To identify the criteria, our study was conducted in three phases:
● Discover phase, for defining the proposed criteria. First, we studied the user experience of an organization workflow system against the user experience criteria from previous studies (discussed in the previous section) that can serve as preferred criteria for a workflow system. We classify the proposed criteria according to ease of use, ease of learning, system usefulness, informational quality, and interface quality.
● Testing phase, for analysing potential solutions. We then took into account those aspects that can be potential problems for users of a workflow system, and gathered opinions on previous experiences from the authorities responsible for the system. For this purpose, we analysed an existing workflow system in order to conduct an empirical study.
● Empirical phase, for identifying issues. Finally, according to those aspects, we collected feedback from a number of employees through a distributed survey to discover the preferred UX criteria to solve the problems identified.
B. In-depth review of an organizational workflow system
Our empirical study concerns the Request Management System (RMS) at King Abdulaziz University, taken as a representative Saudi public organization. We chose this workflow system because it covers the many different fields, categories, and functions found in most other organizations, such as administrative staff, faculty, doctors, and others, and different positions such as Head of Department, Vice/Deputy, Manager, Supervisor, and Employee. The university is one of the leading universities in the Kingdom at the local and regional level. The workflow system at the organization is powered by various technologies. Various support systems have been developed to aid information management through the use of singular centred solutions. The software is implemented in various departments and integrated into one single system for effective management; this integration allows seamless information flow across the university's departments. Operations at the university are greatly enhanced through the application of technologies that enable real-time communication. The workflow system enables receiving and transferring incoming transactions to entities or persons, viewing confidential transaction attachments, and producing inquiry reports showing the incoming and outgoing transactions for each department during specific time periods along with the completion rate.
Some of the negative aspects include the fact that the design is not responsive. As shown in Fig. 1 and Fig. 2, it does not fit all screens, creating multiple problems for the users. The elements of the interface do not comply with accepted standards, and the information is not labelled or categorized clearly.
Fig. 1 Appearance of Request Management System (RMS) on Various Devices-I
Fig. 2 Appearance of Request Management System (RMS) on Various Devices-II
Secondly, completing tasks is complex in this workflow system. As shown in Fig. 3 and Fig. 4, users find it quite difficult to complete the tasks that need to be completed within the system. It is not clear what the consequences of various actions are, which creates a lot of confusion; it is also quite difficult to insert descriptions or comments into the transactions. Another negative aspect is that viewing several pages is not straightforward, and viewing a department's or section's referrals is a difficult affair.
Fig. 3 Completing Task Page-I
Fig. 4 Completing Task Page-II
Fig. 5 and Fig. 6 illustrate a major problem when searching for information: the function is not practical and hence ineffective. Following up on or tracking transactions after they have been made is therefore difficult, leading to a poor flow of information. The difficulty of tracking transactions contributes to the overall difficulty of the system, and the lack of a proper and clear data organization framework also makes gathering information from the system quite difficult. Finally, the complexity experienced in the use of the system reduces user productivity and satisfaction.
Fig. 5 Follow-up the transactions
Fig. 6 Search Function
C. Survey Study
I. Survey Development
Web-based electronic forms were used to collect the survey responses. The benefits of web-based electronic forms include efficient data collection; online forms make it easier to reach a number of individuals in the target population and to collate and analyse the results. The survey consists of closed-ended questions formulated based on the objectives, the research question, and the review of the organization's workflow system. The questions were organized into three groups: the first contained 5 demographic questions; the second contained 2 multiple-choice questions about the main issues users experienced in the workflow system and the frequently used actions, gathering opinions from the authorities responsible for the system; and the third group was organized into 6 sections. Each section in the third group represents a different criterion, covering 29 questions in total, as shown in Table 1. The final question captures the respondents' opinion of the overall workflow system.
TABLE 1 QUESTIONS RELATED TO CRITERIA

Section | Criteria | Question Numbers | Percentage
1 | Ease of use | Q8-Q15 | 26.67
2 | Ease of learning | Q16-Q19 | 13.33
3 | System Usefulness | Q20-Q24 | 16.67
4 | Informational Quality | Q25-Q32 | 26.67
5 | Interface Quality | Q33-Q36 | 13.33
6 | Overall | Q37 | 3.33
The criteria questions use a multiple-choice format with a scale from 1 to 5, where 1 means strongly disagree and 5 means strongly agree; "I don't know" is coded as 0. The Likert categorical scale assigns a weight to each answer from a low to a high score, as shown in Table 2.
TABLE 2 LIKERT CATEGORICAL SCALE

Weighted Mean | Possible Answer
From 0 to 0.83 | I don't know
From 0.84 to 1.66 | Strongly Disagree
From 1.67 to 2.49 | Disagree
From 2.50 to 3.32 | Neutral
From 3.33 to 4.15 | Agree
From 4.16 to 5 | Strongly Agree
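To make this coding concrete, the sketch below maps a weighted mean to its Likert category using the thresholds in Table 2. This is an illustrative Python snippet, not the study's actual analysis script; the function name and the example value are ours.

```python
def likert_category(weighted_mean: float) -> str:
    """Map a weighted mean in [0, 5] to its Likert category (Table 2)."""
    bands = [
        (0.83, "I don't know"),
        (1.66, "Strongly Disagree"),
        (2.49, "Disagree"),
        (3.32, "Neutral"),
        (4.15, "Agree"),
        (5.00, "Strongly Agree"),
    ]
    for upper_bound, label in bands:
        if weighted_mean <= upper_bound:
            return label
    raise ValueError("weighted mean must lie between 0 and 5")

# Example: the ease-of-use criterion mean reported later in the paper
print(likert_category(3.3004))  # -> "Neutral" (2.50 to 3.32 band)
```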
II. Sample Selection
The sample was selected from the deanships of a Saudi public organization. The population sample included people in different positions at King Abdulaziz University who used the workflow system; the respondents already had practical experience in using it.
D. Data Collection
The respondents were given access to the online survey through web-based forms. The responses were collected from 8th April 2019 until 13th September 2019, and the data were recorded and updated as responses arrived. The answers were exported as an Excel sheet and organized in an SPSS spreadsheet with a code sheet based on the Likert categorical scale, so that attitudes could be evaluated from the survey results. The answers were organized into separate rows and columns with the allocated attitudinal score, and each answer was assigned a numerical value for the data analysis.
E. Data Analysis
The Statistical Package for the Social Sciences (SPSS), statistical analysis software that also supports data management and documentation, was used to analyse the survey data; it supports various statistical methods including descriptive statistics, bivariate statistics, cluster analysis, and linear regression [26]. The main statistical analyses implemented are the correlation and the mean, across the dimensions of each criterion. To establish the reliability and validity of the survey answers, Cronbach's Alpha and the split-half coefficient are computed, and the Pearson product-moment correlation coefficient is used for validity. Line charts clarify the percentage frequencies and the relationships between the criteria; a pie chart presents the first group of general questions and bar charts the second group.
IV. RESULTS AND ANALYSIS
A. Demographic Question’s Result
The demographic questions comprised two personal questions (gender and age) and three job-related questions (years of experience, position, and field of the organization) to ensure in-depth information is provided. The last two questions in this section concern (1) whether participants had experienced issues in the organization workflow system (e.g., the design is not responsive, information is not labelled or categorized, completing tasks is complex, the search function is not practical, inserting descriptions or comments into transactions is difficult), and (2) the tasks the participants usually perform (e.g., receiving and transferring transactions, creating transactions, activating and determining the permissions of the communications staff, inquiring about and searching for transactions, printing reports, and following up transactions).
I. The Gender Question
Fig. 7 shows the gender distribution of the participants: 89.5% of the participants were male and 10.5% were female.
Fig. 7 The Percentage of Participants Gender
II. The Age Question
Fig. 8 shows the percentage of participants in each age group. The highest percentage, 29.8%, is shared by the 30-34 and 35-39 age groups, and the lowest percentage, 7%, is shared by participants under 30 and those aged 45-49.
Fig. 8 The Percentage of Participants Age
III. Years’ Experience Question
Fig. 9 illustrates the participants' years of practical experience in using workflow systems. Most participants, 33.3%, had 5 to 9 years of experience, while 31.6% had 10-14 years of working experience.
Fig. 9 The Percentage of Years’ Experience of Participants
IV. Organization field Question
Fig. 10 shows the fields of the organization the participants work in, namely Computer Information Technology and Education. The highest percentage is 73.7% for Computer Information Technology, followed by 26.3% for Education.
Fig. 10 The Percentage of the Field Organization
V. Participants Position Question
Fig. 11 shows the distribution of the participants across positions in the organization.
Fig. 11 The Percentage of the Participants Position
VI. The Participants' Experience Question
Fig. 12 shows the number of participants who experienced one or more of the issues listed in Table 3. It was found that 30 participants consider the design not responsive (it does not fit all screens), and 28 participants find the search function impractical. In addition, 25 participants found following up on or tracking transactions after transfer difficult, and 25 found that information is not labelled or categorized clearly. At least 22 participants have difficulty viewing the department's/section's referrals, 21 found viewing several pages not straightforward, and 20 reported that interface elements do not follow the normal standard.
Fig. 12 The Percentage of the Participants' Experience Question
TABLE 3 LIST OF ISSUES PARTICIPANTS EXPERIENCED IN THE WORKFLOW SYSTEM (COUNTS AND PERCENTAGES FROM FIG. 12)

Choice No. | Issue experienced | Participants (%)
Choice 1 | Design is not responsive (does not fit all screens). | 30 (52.63%)
Choice 2 | Interface elements do not follow the normal standard. | 20 (35.09%)
Choice 3 | Information is not labelled or categorized clearly. | 25 (43.86%)
Choice 4 | Completing tasks is complex. | 18 (31.58%)
Choice 5 | The consequences of actions are confusing. | 19 (33.33%)
Choice 6 | The system does not send notifications. | 15 (26.32%)
Choice 7 | The search function is not practical. | 28 (49.12%)
Choice 8 | Viewing several pages is not straightforward. | 21 (36.84%)
Choice 9 | Viewing the department's/section's referrals is difficult. | 22 (38.60%)
Choice 10 | Inserting descriptions or comments into transactions is difficult. | 7 (12.28%)
Choice 11 | Transferring transactions is complex. | 1 (1.75%)
Choice 12 | Following up on or tracking transactions after transfer is difficult. | 25 (43.86%)
VII. Frequently Used Actions Question
Fig. 13 shows the frequently used actions of the participants, who could select one or more of the actions listed in Table 4; the table also shows the count and percentage for each action (from Fig. 13).
Fig. 13 The Percentage of Frequently used Actions Question
TABLE 4 LIST OF FREQUENTLY USED ACTIONS IN THE WORKFLOW SYSTEM (COUNTS AND PERCENTAGES FROM FIG. 13)

Action No. | Frequently used action | Participants (%)
Action 1 | Receiving and transferring incoming transactions to entities or persons and viewing attachments. | 40 (70.18%)
Action 2 | Viewing confidential transaction attachments sent to the employee. | 12 (21.05%)
Action 3 | Activating and determining the permissions of the communications staff in the department based on the type of transaction. | 9 (15.79%)
Action 4 | Exporting transactions by barcode for trading and recognizing them. | 3 (5.26%)
Action 5 | Creating transactions within the agency, directing them to employees to accomplish the tasks assigned to them, and following them up. | 17 (29.82%)
Action 6 | Inquiring about and searching for transactions in more than one way, with printability. | 31 (54.39%)
Action 7 | Inquiring about outgoing personal transactions and knowing the action taken by the persons they were transferred to. | 20 (35.09%)
Action 8 | Following up transactions and knowing what has been done, with the possibility of exporting them to an Excel file. | 13 (22.81%)
Action 9 | Printing reports showing the incoming and outgoing transactions for each department during specific time periods and the completion rate. | 9 (15.79%)
Action 10 | Following up the job performance of departments or employees during a specific period and knowing their status, with the possibility of exporting the reports to an Excel file. | 8 (14.04%)
Action 11 | Reports (following up the transactions of employees and knowing the status and the time spent on completed transactions). | 15 (26.32%)
B. UX Criteria Questions
This section presents the results of the survey, analysed by calculating the mean, standard deviation, and correlation coefficient. Each table presents a different criterion over the 30 questions, with the results of the criterion analysis based on participants' opinions of the workflow system. Each table contains the frequency (F), percentage (%), criterion (C), question (Q), measure (M), the responses from "I don't know" to "Strongly Agree", the question mean (Q-Mean), the criterion mean (C-Mean), and the total result in percentage (Result %).
I. Ease of use criteria question
Table 5 shows the ease of use criterion practices, covered by questions 8 to 15. The mean values of the questions range from 2.84 to 3.52, and the resulting score for the ease of use criterion is 66%.
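Although the text does not state the formula explicitly, the Result % column is consistent with expressing the criterion mean as a fraction of the maximum score of 5 (our inference from the reported numbers):

Result % = (C-Mean / 5) × 100, e.g. (3.3004 / 5) × 100 ≈ 66% for ease of use.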
TABLE 5 EASE OF USE CRITERIA ANALYSIS

Q | I Don't Know F (%) | Strongly Disagree F (%) | Disagree F (%) | Neutral F (%) | Agree F (%) | Strongly Agree F (%) | Q-Mean
Q8 | - | 5 (8.8) | 5 (8.8) | 20 (35.1) | 17 (29.8) | 10 (17.5) | 3.386
Q9 | 1 (1.8) | 3 (5.3) | 7 (12.3) | 15 (26.3) | 21 (36.8) | 10 (17.5) | 3.438
Q10 | 1 (1.8) | 4 (7.0) | 1 (1.8) | 24 (42.1) | 12 (21.1) | 15 (26.3) | 3.526
Q11 | 2 (3.5) | 3 (5.3) | 3 (5.3) | 20 (35.1) | 17 (29.8) | 12 (21.1) | 3.456
Q12 | 2 (3.5) | 3 (5.3) | 5 (8.8) | 20 (35.1) | 16 (28.1) | 11 (19.3) | 3.368
Q13 | 1 (1.8) | 4 (7.0) | 12 (21.1) | 16 (28.1) | 14 (24.6) | 10 (17.5) | 3.193
Q14 | 2 (3.5) | 7 (12.3) | 13 (22.8) | 16 (28.1) | 14 (24.6) | 5 (8.8) | 2.842
Q15 | 2 (3.5) | 2 (3.5) | 12 (21.1) | 16 (28.1) | 17 (29.8) | 8 (14.0) | 3.193
C-Mean: 3.3004 | Result: 66%
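As a consistency check (our inference: responses are coded 0 for "I don't know" up to 5 for "Strongly Agree", over the 57 respondents), each Q-Mean reproduces as a simple weighted average of the frequencies; for Q8:

Q-Mean(Q8) = (0×0 + 1×5 + 2×5 + 3×20 + 4×17 + 5×10) / 57 = 193 / 57 ≈ 3.386.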
II. Ease of learning criteria question
Table 6 shows the ease of learning criterion practices, covered by questions 16 to 19. The mean values of the questions range from 3.42 to 3.54, and the result is 69.21%.
TABLE 6 EASE OF LEARNING CRITERIA ANALYSIS

Q | I Don't Know F (%) | Strongly Disagree F (%) | Disagree F (%) | Neutral F (%) | Agree F (%) | Strongly Agree F (%) | Q-Mean
Q16 | 2 (3.5) | 4 (7.0) | 5 (8.8) | 13 (22.8) | 21 (36.8) | 12 (21.1) | 3.456
Q17 | 1 (1.8) | 3 (5.3) | 7 (12.3) | 11 (19.3) | 23 (40.4) | 12 (21.1) | 3.543
Q18 | 1 (1.8) | 3 (5.3) | 5 (8.8) | 19 (33.3) | 20 (35.1) | 9 (15.8) | 3.421
Q19 | 1 (1.8) | 4 (7.0) | 5 (8.8) | 18 (31.6) | 18 (31.6) | 11 (19.3) | 3.421
C-Mean: 3.4605 | Result: 69.21%
III. System usefulness criteria question
Table 7 shows the system usefulness criterion practices, covered by questions 20 to 24. The mean values of the questions range from 3.07 to 3.43, and the result is 65.26%.
TABLE 7 SYSTEM USEFULNESS CRITERIA ANALYSIS

Q | I Don't Know F (%) | Strongly Disagree F (%) | Disagree F (%) | Neutral F (%) | Agree F (%) | Strongly Agree F (%) | Q-Mean
Q20 | 2 (3.5) | 3 (5.3) | 7 (12.3) | 17 (29.8) | 19 (33.3) | 9 (15.8) | 3.315
Q21 | 2 (3.5) | 4 (7.0) | 7 (12.3) | 15 (26.3) | 21 (36.8) | 8 (14.0) | 3.280
Q22 | 1 (1.8) | 4 (7.0) | 7 (12.3) | 14 (24.6) | 19 (33.3) | 12 (21.1) | 3.438
Q23 | 4 (7.0) | 8 (14.0) | 4 (7.0) | 14 (24.6) | 18 (31.6) | 9 (15.8) | 3.070
Q24 | 2 (3.5) | 6 (10.5) | 6 (10.5) | 17 (29.8) | 16 (28.1) | 10 (17.5) | 3.210
C-Mean: 3.2632 | Result: 65.26%
IV. Informational Quality Criteria Question
Table 8 shows the informational quality criterion practices, covered by questions 25 to 32. The mean values of the questions range from 2.85 to 3.19, and the result is 60.53%.
TABLE 8 INFORMATIONAL QUALITY CRITERIA ANALYSIS

Q | I Don't Know F (%) | Strongly Disagree F (%) | Disagree F (%) | Neutral F (%) | Agree F (%) | Strongly Agree F (%) | Q-Mean
Q25 | 4 (7.0) | 6 (10.5) | 9 (15.8) | 16 (28.1) | 14 (24.6) | 8 (14.0) | 2.947
Q26 | 3 (5.3) | 9 (15.8) | 8 (14.0) | 18 (31.6) | 11 (19.3) | 8 (14.0) | 2.859
Q27 | 1 (1.8) | 4 (7.0) | 14 (24.6) | 16 (28.1) | 14 (24.6) | 8 (14.0) | 3.087
Q28 | 1 (1.8) | 8 (14.0) | 11 (19.3) | 16 (28.1) | 13 (22.8) | 8 (14.0) | 2.982
Q29 | 1 (1.8) | 5 (8.8) | 9 (15.8) | 19 (33.3) | 13 (22.8) | 10 (17.5) | 3.193
Q30 | 2 (3.5) | 7 (12.3) | 6 (10.5) | 20 (35.1) | 14 (24.6) | 8 (14.0) | 3.070
Q31 | 1 (1.8) | 7 (12.3) | 6 (10.5) | 16 (28.1) | 20 (35.1) | 7 (12.3) | 3.193
Q32 | 3 (5.3) | 9 (15.8) | 7 (12.3) | 18 (31.6) | 13 (22.8) | 7 (12.3) | 2.877
C-Mean: 3.0263 | Result: 60.53%
V. Interface Quality Question
Table 9 shows the interface quality criterion practices, covered by questions 33 to 36. The mean values of the questions range from 2.91 to 3.29, and the result is 60.61%.
TABLE 9 INTERFACE QUALITY CRITERIA ANALYSIS

Q | I Don't Know F (%) | Strongly Disagree F (%) | Disagree F (%) | Neutral F (%) | Agree F (%) | Strongly Agree F (%) | Q-Mean
Q33 | 1 (1.8) | 6 (10.5) | 8 (14.0) | 11 (19.3) | 22 (38.6) | 9 (15.8) | 3.298
Q34 | 3 (5.3) | 6 (10.5) | 11 (19.3) | 11 (19.3) | 20 (35.1) | 6 (10.5) | 3.000
Q35 | 2 (3.5) | 9 (15.8) | 10 (17.5) | 12 (21.1) | 19 (33.3) | 5 (8.8) | 2.912
Q36 | 2 (3.5) | 6 (10.5) | 13 (22.8) | 15 (26.3) | 16 (28.1) | 5 (8.8) | 2.912
C-Mean: 3.0307 | Result: 60.61%
VI. Overall System Question
Table 10 shows the overall system question, covered by question 37. The mean value of this question is 3.00, so the result is 60%.

TABLE 10 OVERALL SYSTEM ANALYSIS

Q | I Don't Know F (%) | Strongly Disagree F (%) | Disagree F (%) | Neutral F (%) | Agree F (%) | Strongly Agree F (%) | Q-Mean
Q37 | 2 (3.5) | 10 (17.5) | 6 (10.5) | 15 (26.3) | 16 (28.1) | 8 (14.0) | 3.000
C-Mean: 3.0 | Result: 60%
C. UX Criteria correlation
I. Overview
The purpose of simple linear correlation analysis is to determine the type and strength of the relationship between two variables; the sample correlation coefficient is denoted by r. Given this purpose, the correlation coefficient captures two points:
Relationship type, indicated by the sign of the correlation coefficient:
1. If the correlation coefficient is negative (r < 0), there is an inverse relationship between the two variables, meaning that an increase in one variable is accompanied by a decrease in the other.
2. If the correlation coefficient is positive (r > 0), there is a positive relationship between the two variables, meaning that an increase in one variable is accompanied by an increase in the other.
3. If the correlation coefficient is zero (r = 0), there is no correlation between the two variables.
Strength of the relationship: the strength can be judged by how close the coefficient is to ±1, as shown in Table 11, where the correlation coefficient value falls within the range (-1 ≤ r ≤ 1). We can calculate the Pearson correlation with the equation:
r_p = [n∑xy − (∑x)(∑y)] / √[(n∑x² − (∑x)²)(n∑y² − (∑y)²)]
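For illustration, the coefficient can be computed directly from this formula. The following Python sketch uses made-up scores; the study itself used SPSS for this computation, so the function and data below are ours.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation, following the formula above."""
    n = len(x)
    sum_x, sum_y = sum(x), sum(y)
    sum_xy = sum(a * b for a, b in zip(x, y))
    sum_x2 = sum(a * a for a in x)
    sum_y2 = sum(b * b for b in y)
    numerator = n * sum_xy - sum_x * sum_y
    denominator = math.sqrt((n * sum_x2 - sum_x ** 2) * (n * sum_y2 - sum_y ** 2))
    return numerator / denominator

# Hypothetical per-respondent criterion scores (not the study's data):
ease_of_use = [3.4, 2.8, 4.1, 3.9, 2.5]
usefulness = [3.2, 2.9, 4.3, 3.7, 2.4]
print(round(pearson_r(ease_of_use, usefulness), 3))  # strong positive r
```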
TABLE 11 STRENGTH OF THE CORRELATION

Strength of the relationship | r
Completely positive correlation | +1
Strong positive correlation | +0.70 to +0.99
Moderate positive correlation | +0.50 to +0.69
Weak positive correlation | +0.01 to +0.49
No correlation | 0
Weak negative correlation | -0.01 to -0.49
Moderate negative correlation | -0.50 to -0.69
Strong negative correlation | -0.70 to -0.99
Completely negative correlation | -1
II. Correlation between each criterion
Table 12 shows the correlation between the criteria: ease of use (C1), ease of learning (C2), system usefulness (C3), informational quality (C4), interface quality (C5), and overall system (C6). The strongest relationship, 0.877 between informational quality and interface quality, is a strong positive correlation.
TABLE 12 CORRELATION BETWEEN EACH CRITERIA

Correlation | C1 | C2 | C3 | C4 | C5 | C6
C1 | 1 | | | | |
C2 | .841** | 1 | | | |
C3 | .855** | .857** | 1 | | |
C4 | .773** | .766** | .818** | 1 | |
C5 | .759** | .752** | .821** | .877** | 1 |
C6 | .744** | .715** | .839** | .822** | .845** | 1
D. Reliability and Validity Analysis Result
I. Overview
In reliability analysis, internal consistency is used to evaluate the reliability of a summated scale in which several items are summed to form a total score; the analysis focuses on the internal consistency of the set of questions forming the scale. Table 13 shows how to interpret Cronbach's alpha value.
TABLE 13 CRONBACH'S ALPHA SCALE

α | Interpretation
0 to 0.49 | Unacceptable
0.50 to 0.59 | Poor
0.60 to 0.69 | Questionable
0.70 to 0.79 | Acceptable
0.80 to 0.89 | Good
0.90 to 1 | Excellent
II. Cronbach's Alpha Reliability Analysis
The reliability of the criteria, measured using Cronbach's alpha, is .981, as shown in Table 14.
TABLE 14 RELIABILITY STATISTICS

Cronbach's Alpha | N of Items
.981 | 30
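For readers who want to reproduce this statistic outside SPSS, a minimal sketch of Cronbach's alpha is given below. The toy matrix and variable names are hypothetical; the study's actual input is the 57 × 30 matrix of coded responses.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of Likert scores."""
    k = items.shape[1]                              # number of items (30 in the study)
    item_variances = items.var(axis=0, ddof=1)      # variance of each question
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Toy data: 5 respondents x 4 items (illustrative only)
scores = np.array([[4, 5, 4, 4],
                   [2, 2, 3, 2],
                   [3, 3, 3, 4],
                   [5, 5, 4, 5],
                   [1, 2, 2, 1]])
print(round(cronbach_alpha(scores), 3))
```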
III. Split-Half Reliability Test
The split-half reliability test assesses the internal consistency of a test such as a questionnaire; it measures the extent to which all parts of the test contribute equally to what is being measured. The scale questions are split into two halves, the scores of the two halves are summed, and the two halves are correlated; a high correlation between the two parts indicates high internal consistency, and Cronbach's alpha is also reported for each half. Table 15 shows the result of the split-half analysis for the 30 questions, where Part 1 contains the items from Q8 to Q22 and Part 2 the items from Q23 to Q37. The Guttman split-half coefficient equals 0.920, which falls in the excellent range.
TABLE 15 SPLIT-HALF RELIABILITY RESULT
Measure | Value
Cronbach's Alpha, Part 1 (Q8-Q22, 15 items) | .963
Cronbach's Alpha, Part 2 (Q23-Q37, 15 items) | .973
Total N of Items | 30
Correlation Between Forms | .861
Spearman-Brown Coefficient (Equal Length) | .925
Spearman-Brown Coefficient (Unequal Length) | .925
Guttman Split-Half Coefficient | .920
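A sketch of the split-half computation with the Spearman-Brown correction follows: the half scores are summed per respondent, correlated, and then corrected for the halved test length. As above, the toy matrix is hypothetical and the study's actual computation was done in SPSS.

```python
import numpy as np

def split_half_reliability(items: np.ndarray) -> float:
    """Split the items into two halves, correlate the half scores,
    and apply the equal-length Spearman-Brown correction."""
    half = items.shape[1] // 2
    first = items[:, :half].sum(axis=1)    # e.g., Q8-Q22 in the study
    second = items[:, half:].sum(axis=1)   # e.g., Q23-Q37 in the study
    r = np.corrcoef(first, second)[0, 1]   # correlation between forms
    return 2 * r / (1 + r)                 # Spearman-Brown, equal length

# Toy data: 6 respondents x 4 items (illustrative only)
scores = np.array([[4, 5, 4, 4],
                   [2, 2, 3, 2],
                   [3, 3, 3, 4],
                   [5, 5, 4, 5],
                   [1, 2, 2, 1],
                   [3, 4, 4, 3]])
print(round(split_half_reliability(scores), 3))
```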
V. CONCLUSION
The aim of this study was to conduct an empirical study of the user experience of a workflow system, the Request Management System (RMS) at King Abdulaziz University, a Saudi public organization, using five suggested UX criteria: ease of use, ease of learning, system usefulness, informational quality, and interface quality. A key element of the study was to use people's opinions in the system development to reveal possible problems and then eliminate them in the design. The feedback was taken into account and helped define preferred criteria for how the content, efficiency of use, and visual appearance of the system can be improved. According to the results, all criteria were judged approximately equally by the participants, although ease of learning scored slightly higher and is therefore the least problematic. The statistical analysis showed that the performance of the model is high and that the model can be applied in other organizations to measure the quality and use of workflow management systems from the user's perspective. The suggested criteria can be used to assess and improve the UX of other organizations' workflow systems in order to fulfil users' needs and provide the positive experiences that are most conducive to business success. In future work, the proposed framework can be enhanced based on the findings of this study, and the model will be validated using multiple case studies to compare and analyse the user experience in different organizations.
REFERENCES
[1] Paredes, R. K., & Hernandez, A. A. (2017, December). Measuring the quality of user experience on web services: A case of university in the Philippines. In 2017 IEEE 9th International Conference on Humanoid, Nanotechnology, Information Technology, Communica.
[2] Haaksma, T. R., de Jong, M. D. T., & Karreman, J. (2018, June). Users' personal conceptions of usability and user experience of electronic and software products. IEEE Transactions on Professional Communication, pp. 116-132.
[3] Vermeeren, A. P. O. S., Law, E. L.-C., Roto, V., Obrist, M., Hoonhout, J., & Väänänen-Vainio-Mattila, K. (2010). User experience evaluation methods: current state and development needs. In Proceedings of the 6th Nordic Conference on.
[4] Lew, P., Olsina, L., & Zhang, L. (2010). Quality, quality in use, actual usability and user experience as key drivers for web application evaluation. In International Conference on Web Engineering. Springer, Berlin, Heidelberg.
[5] Deterding, S. (2015). The lens of intrinsic skill atoms: A method for gameful design. Human-Computer Interaction, 30(3-4), 294-335.
[6] Rodden, K., Hutchinson, H., & Fu, X. (2010). Measuring the user experience on a large scale: user-centered metrics for web applications. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '10). ACM, New York, NY, USA.
[7] Law, E. L.-C., & Van Schaik, P. (2010). Modelling user experience: An agenda for research and practice. Interacting with Computers, 22(5), 313-322.
[8] Stanton, N. A., Salmon, P. M., Rafferty, L. A., Walker, G. H., Baber, C., & Jenkins, D. P. (2017). Human factors methods: a practical guide for engineering and design. CRC Press.
[9] Grudin, J. (2017). Obstacles to participatory design in large product development organizations. In Participatory Design (pp. 99-119). CRC Press.
[10] Øvad, T., & Larsen, L. B. (2015, August). The prevalence of UX design in agile development processes in industry. In 2015 Agile Conference (pp. 40-49). IEEE.
[11] Del Rio, M., Cortes, L., & Salamanca, L. (2016, August). The use of relevant content in Mexican government websites: A ranking proposal to increase citizen-oriented websites. In 2016 4th International Conference on User Science and Engineering (i-USEr).
[12] Appiah, S., Benites, P., Chang, R., & Geleg, T. (2018). UI/UX proposal for the Montgomery County Department of Health and Human Services. Partnership for Action Learning in Sustainability (PALS).
[13] Ghazali, M., Hussein, I., Mahmud, M., & Md Noor, N. L. (2018, April). Smart city readiness in Malaysia: The role of HCI and UX. In Proceedings of the Asian HCI Symposium '18 on Emerging Research Collection (pp. 17-20). ACM.
[14] Kamau, G., Njihia, J., & Wausi, A. (2016, May). E-government websites user experience from public value perspective: Case study of iTax website in Kenya. In 2016 IST-Africa Week Conference (pp. 1-8). IEEE.
[15] Pretorius, M. (2017, June). Categorisation of digital government services informed by user research. In The Proceedings of the 17th European Conference on Digital Government ECDG 2017 (p. 145).
[16] Mercer, K., Giangregorio, L., Schneider, E., Chilana, P., Li, M., & Grindrod, K. (2016). Acceptance of commercially available wearable activity trackers among adults aged over 50 and with chronic illness: a mixed-methods evaluation. JMIR mHealth and uHealth.
[17] Horst, D., Fitzpatrick, S., Panzer, R., Lebson, C., & Pass, J. (2016, September). Panel: Peculiarities in human factors consulting on government projects. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 60, No. 1, pp. 499-5.
[18] Porter, C. (2015). Designing for experience: a requirements framework for enrolment based and public facing e-government services (Doctoral dissertation, UCL (University College London)).
[19] Shneiderman, B., Plaisant, C., Cohen, M., Jacobs, S., Elmqvist, N., & Diakopoulos, N. (2016). Designing the user interface: strategies for effective human-computer interaction. Pearson.
[20] Abujarad, F., O'Bara, I., Swierenga, S. J., & Raile, E. D. (2017, July). The role of UX in government system expansion. In International Conference of Design, User Experience, and Usability (pp. 559-569). Springer, Cham.
[21] Oppermann, R. (2017). Adaptive user support: ergonomic design of manually and automatically adaptable software. Routledge.
[22] Malinverni, L., Mora-Guiard, J., & Pares, N. (2016). Towards methods for evaluating and communicating participatory design: A multimodal approach. International Journal of Human-Computer Studies, 94, 53-63.
[23] Blomberg, J., Giacomi, J., Mosher, A., & Swenton-Wall, P. (2017). Ethnographic field methods and their relation to design. In Participatory Design (pp. 123-155). CRC Press.
[24] Noor, N. L. M., Harun, A. F., Adnan, W. A. W., Saman, F. M., & Noh, M. A. M. (2016, August). Towards the conceptualization of citizen user experience: Citizens' preference for emotional design in e-Government portal. In 2016 4th International Conference o.
[25] Kuliga, S. F., Thrash, T., Dalton, R. C., & Hölscher, C. (2015). Virtual reality as an empirical research tool: Exploring user experience in a real building and a corresponding virtual model. Computers, Environment and Urban Systems, 54, 363-375.
[26] Saunders, M., Lewis, P., & Thornhill, A. (2016). Research Methods for Business Students (7th ed.). London: Pearson.