Supporting Usability Studies
in Uganda
A case study contributing to the planning phase of usability
facilities
Att främja användbarhetsstudier i Uganda
Bidrag till planeringen av ett resurscenter för användbarhetsstudier
Malin Wik
Faculty of Economics, Communication and IT
Information Systems
Bachelor thesis, 15 ECTS
Supervisor: John Sören Pettersson
Examiner: Remigijus Gustas
2012-06-19
Serial number:
Abstract
Usability studies are conducted as part of the usability engineering process, ensuring the usability of a product under development. Such usability studies can be conducted in a usability laboratory or in the anticipated context of use. At the School of Computing & Informatics Technology (CIT) at Makerere University in Kampala, Uganda, plans for usability facilities are being developed.
This study maps what facilities are beneficial for CIT at Makerere University to adopt in order to fulfil the potential stakeholders' needs and to enable the stakeholders to conduct the usability studies they want. Furthermore, the study presents various usability engineering methods, to be compared with the needs of the stakeholders.
26 potential stakeholders of the usability facilities answered two different surveys. The results show that the stakeholders' conceptions about usability studies in some cases are misconceptions, which is why educational activities about usability and usability studies should be planned alongside the development of the facilities. Further, the study shows that the facilities must support usability studies conducted in the field as well as studies conducted in a controlled laboratory environment. Moreover, the facilities need to support testing of mobile services, web applications and user interfaces, and provide for stress and load testing.
Keywords: usability engineering, usability studies, user involvement, usability facilities,
laboratories, in-field studies, controlled laboratory environment
Table of Contents
Abstract
1. Introduction
1.1 Background
1.2 Scope
1.3 Target groups
1.4 Structure of this thesis
2. Working with usability
2.1 Introduction
2.2 Definition of usability
2.3 Why should usability be ensured?
2.4 Usability engineering
2.4.1 The Usability engineering lifecycle model
2.4.2 Testing usability
2.5 What can be tested?
2.6 Who can ensure usability?
2.7 Where can usability be tested?
2.7.1 Usability facilities
2.7.2 Should usability studies be conducted in a usability laboratory or in field?
2.8 Uganda
2.9 Makerere University
3. Methodology
3.1 Choosing topic
3.2 Choosing respondents
3.3 Data from primary sources
3.3.1 The two surveys
3.3.2 Pilot testing the digital survey
3.4 Data from secondary sources
3.4.1 Choosing and collecting data from secondary sources
3.5 Research model
4. Results
4.1 The potential stakeholders' take on usability
4.1.1 What does usability studies mean to you? When, why and how are they conducted?
4.2 The potential stakeholders' usability engineering processes
4.2.1 Are the users of the software you develop involved in the collection and specification of requirements? How are the users involved?
4.3 The stakeholders' potential usage of the usability facilities
4.3.1 If a usability lab was established at Makerere University, would you be interested in using it?
4.3.2 Where would you want to use the facilities?
4.3.3 When would you want to use the facilities?
4.3.4 State services you would want the facilities to provide, and what services your organization would use
4.3.5 How are these services your organization would use currently met?
4.3.6 Would your organization be willing to pay (subsidized) for the services?
4.3.7 What might be your issues of concern that you would want addressed before you can trust and use the facility?
5. Analysis
5.1 The potential stakeholders' take on usability
5.2 The stakeholders' usability engineering processes
5.3 The stakeholders' potential usage of the usability facilities
5.3.1 When the stakeholders would like to use the facilities
5.3.2 Services the facilities should provide
5.3.3 How the stakeholders' needs for services wanted are currently met
5.3.4 The stakeholders' willingness to pay for the services provided
5.3.5 The stakeholders' issues of concern
5.4 Answering the research questions
5.4.1 What conceptions or misconceptions of usability studies do the potential stakeholders have?
5.4.2 What needs do the stakeholders have for usability facilities?
5.5 Validity issues
6. Conclusions
6.1 Services needed to be provided by the usability facilities
6.2 The facilities at Makerere University
6.2.1 Hardware
6.2.2 Software
6.2.3 Miscellaneous
6.2.4 Next step to further develop the plans of the usability facilities
Acknowledgement
Bibliography
Appendices
Appendix 1: The first survey
Appendix 2: The digital survey
1. Introduction
In the field of Human-Computer Interaction (HCI), usability is an important aspect towards which all HCI practitioners should strive (Leventhal & Barnes 2008). Computer-based interactive systems benefit from being developed with a human-centred perspective, which enhances usability.
“Computer-based interactive systems vary in scale and complexity. Examples
include off-the-shelf (shrink-wrap) software products, custom office systems,
process control systems, automated banking systems, Web sites and applications,
and consumer products such as vending machines, mobile phones and digital
television.” (ISO 9241-210:2010 p.1)
As pointed out in the citation above, computer-based interactive systems can mean
various products or systems. In this study such computer-based interactive systems will be
referred to as systems or products.
As claimed in the ISO standard for human-centred design for interactive systems, systems with high usability bring both commercial and technical benefits to the users as well as to the organization developing the system, the suppliers of the system and more (ISO 9241-210:2010). Usability engineering, which is further explained in Chapter 2, ensures a high degree of usability throughout a product's whole development cycle. Usability studies are an important part of usability engineering, used to reach the point where the usability, and therefore the success, of a system or product is ensured. Usability studies can be conducted in laboratories as well as in the anticipated context of use.
1.1 Background
Makerere University is the biggest university in Uganda, situated in the capital city, Kampala. At Makerere University, plans for usability study facilities are being developed. The facilities are intended to be situated at the School of Computing & Informatics Technology (CIT) at Makerere University and to serve as part of the education for students taking courses at CIT. Usability testing is a subject that the students encounter in theory, but they never get an opportunity to perform usability testing in a controlled laboratory environment. Local organizations, companies and other universities in Kampala are also seen as potential users of the usability testing facilities. Therefore, the facilities must be accommodated to fit the needs of Makerere University as well as those of other universities, organizations and businesses in Kampala. The present study has aimed to find out what needs prospective stakeholders of the usability facilities have, what conceptions and misconceptions of usability studies they hold, and what the facilities should consist of in order to fulfil the stakeholders' needs.
1.2 Scope
The main purpose of this study is to find out what kinds of usability facilities are beneficial for CIT at Makerere University to have. The university educates students for society at large, and the facilities must be relevant for many parties outside the university in order to be valid to include in the education curricula and research. Therefore, the facility should also be of
interest to such parties. Because external parties are potential stakeholders in addition to CIT,
and the facilities might need to be adapted for the use of such partners, two research
questions were formulated:
a) What conceptions or misconceptions of usability studies do the potential
stakeholders have?
b) What needs do the stakeholders have for usability facilities?
This study focuses on usability facilities adapted to a specific geographical (and thus economic) context (Kampala, Uganda), which is why other locations are excluded. The study may be possible to adapt to geographical and economic contexts similar to the one it was conducted in, but as will be obvious from the account of the (potential) stakeholders, whom we have not only identified but also managed to get responses from, much of the value of the present report lies in its sensitivity to the specific contextual circumstances. To be able to generalize the results, the reader will have to be equally context-sensitive in his or her specific research setting. Probably, the different considerations for gathering data are of as much interest as the results in themselves.
This study is not focused on accessibility (testing). Whereas usability concerns use by specified users, accessibility is about the "usability of a product, service, environment or facility by people with the widest range of capabilities" (ISO 9241-171).
Nor are cost calculations for establishing usability facilities included in this study,
since they can vary widely from country to country and from time to time.
1.3 Target groups
This study can be of use to CIT at Makerere University and to others, such as organizations or universities, who are planning to adopt usability facilities in a certain context. It could be especially helpful for those who are planning usability facilities in contexts similar to the one in this study.
1.4 Structure of this thesis
In order to gather data about what needs, conceptions and misconceptions the stakeholders have, two surveys have been used. The first survey was put together and handed out to 17 potential stakeholders by Dr Baguma. The analysis of the first survey showed that additional information was needed in order to map the needs of as many potential stakeholders as possible. Therefore, a second survey was put together and sent to 25 additional potential stakeholders. The methodology of this study is presented in Chapter 3.
In order to form the second survey, as well as to compare and validate the data collected from both surveys, an extensive review of theories about usability engineering, usability facilities and usability testing was conducted. The review therefore gives the background needed to understand and validate the results from the surveys. The result of the review is presented in Chapter 2. Chapter 4 contains the results from the surveys. In Chapter 5 the results from the surveys are compared with the literature presented in Chapter 2. The conclusions reached in the analysis are finally summed up and presented in Chapter 6.
2. Working with usability
The aim of this study is to answer what kinds of usability facilities would be beneficial for CIT at Makerere University to have and provide. Therefore, various usability engineering methods and techniques are presented in this chapter, as well as what usability facilities can contain and where, when, how and why usability studies are conducted. The usability facilities are to be adapted to a geographical (and thus economic) context, which is why information about the context is also presented in this chapter.
2.1 Introduction
Testing a product can mean testing different aspects of the product. Functionality is one example of a feature that can be tested. "Functionality refers to what the product can do," explain Dumas and Redish in their guide to usability testing (1999, p.4). Functionality is the functions or tasks that can be performed by using the product. But according to Dumas and Redish, functionality is nothing without usability. The authors argue that a product can have several functions, but if the user does not know that the functions exist or how to use them, then the functions are useless. Since how to use the product is important, usability comes to mind. Usability is another aspect of a product, system or service that can be tested, according to the authors (ibid.). But what is usability?
2.2 Definition of usability
In this study, the definition of usability developed by the International Organization for Standardization will be used. Usability means the "extent to which a system, product or service can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use" (ISO 9241-210:2010).
This means that for a product, system or service to have a high degree of usability, it must to a high extent allow the intended person interacting with it to complete goals with effectiveness, efficiency and satisfaction. Effectiveness means the "accuracy and completeness with which users achieve specified goals" (ibid.). Efficiency is defined as the "resources expended in relation to the accuracy and completeness with which users achieve goals" (ibid.). The definition used in this study for satisfaction is: "Freedom from discomfort, and positive attitudes towards the use of the product." (ISO 9241-11:1998)
Usability can be planned for, measured and incorporated throughout a product's whole development cycle (ISO 9241-11:1998). This process is called usability engineering. Usability tests are a tool used to help with that process, ensuring that a high degree of usability is reached (Dumas & Redish 1999). How usability is engineered and tested is further explained in section 2.4.
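To make the definition concrete, the sketch below illustrates how effectiveness, efficiency and satisfaction could be operationalised from the raw data of a small usability test. The concrete measures chosen (task completion rate, completed tasks per minute, mean questionnaire rating) are common operationalisations but are assumptions made here for illustration; they are not prescribed by the ISO standards cited above.

    # Minimal sketch of how the three ISO 9241 usability components could be
    # quantified from usability-test data. The measures chosen (completion rate,
    # tasks per minute, mean 1-5 satisfaction rating) are illustrative assumptions.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class TestSession:
        tasks_attempted: int        # tasks the participant tried
        tasks_completed: int        # tasks completed correctly
        minutes_spent: float        # total time on the tasks
        satisfaction_rating: float  # e.g. questionnaire score on a 1-5 scale

    def effectiveness(sessions: List[TestSession]) -> float:
        """Share of attempted tasks completed correctly (accuracy and completeness)."""
        attempted = sum(s.tasks_attempted for s in sessions)
        completed = sum(s.tasks_completed for s in sessions)
        return completed / attempted if attempted else 0.0

    def efficiency(sessions: List[TestSession]) -> float:
        """Completed tasks per minute, i.e. goals achieved relative to resources expended."""
        completed = sum(s.tasks_completed for s in sessions)
        minutes = sum(s.minutes_spent for s in sessions)
        return completed / minutes if minutes else 0.0

    def satisfaction(sessions: List[TestSession]) -> float:
        """Mean self-reported satisfaction rating."""
        return sum(s.satisfaction_rating for s in sessions) / len(sessions)

    if __name__ == "__main__":
        data = [TestSession(5, 4, 12.0, 4.2), TestSession(5, 3, 15.5, 3.5)]
        print(f"effectiveness: {effectiveness(data):.2f}")   # 0.70
        print(f"efficiency:    {efficiency(data):.2f} tasks/min")
        print(f"satisfaction:  {satisfaction(data):.1f} / 5")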
2.3 Why should usability be ensured?
“Usability is an important consideration in the design of products because it is concerned
with the extent to which the users of products are able to work effectively, efficiently and
with satisfaction.” (ISO 9241-11:1998)
Dumas and Redish (1999, p.10) argue that usability must be striven for if the product being developed is going to be successful: "In short, to have a successful product, the design must be driven by the goal of meeting users' needs."
Nielsen (1992) argues that the work with a product's usability should be conducted iteratively, allowing changes to the product throughout the whole development cycle. Further, the author notes that "It is much too expensive to change a completely implemented product, especially if testing reveals the need for fundamental changes in the interface structure." (Nielsen 1992, p.13) Thus, usability should be tested or engineered into a product as a way of ensuring the product's success as well as saving both time and money (Nielsen 1992; Dumas & Redish 1999). Usability engineering can also be conducted as a way to find out what kind of attributes and functionality the product should have (Nielsen 1992). This can help with pinpointing what functionality the product really should have. The users of a product will probably not complain about too much functionality, but if the users do not use the functionality because it is redundant, the time spent developing it is wasted. Thus, money is wasted on developing functionality that is not needed or wanted by the users. Usability studies and engineering can therefore help the development of a product focus on the right things. (Nielsen 1993)
Nielsen (1992) argues that users of today will not put up with bad design, since better functionality can be found in other products, as the supply today is far wider than in the early days of computers. Some users interpret the user interface as the whole product. If the user interface is not usable, then the whole product becomes useless to the user. (Leventhal & Barnes 2008) Dumas and Redish also stress that usability is a major part of a product's success, and that "Ease of use has become a major point of competition." (Dumas & Redish 1999, p.10) Therefore, the conclusion can be drawn that usability and the users' point of view really do matter today.
Dumas and Redish argue that usability is good for everybody, from the users of the product to the company providing it.
2.4 Usability engineering
In the early days of software development, the development process often did not follow an
iterative cycle but a sequential time line, later called the waterfall model. The sequential
development process follows a number of phases, where each phase contributes to the next
phase. Each phase is finalized before moving on to the next phase, and the previous, finalized
phases are never returned to. (Leventhal & Barnes 2008)
Royce (1970) presents this model for software development, which follows a sequential line. However, the author also provides models that include iteration, and stresses that bigger software projects are condemned to fail if they do not include steps beyond analysis and coding. When Royce (1970, p.329) presents the sequential development process (nowadays called the waterfall model) he also argues that this model is "risky and invites failure". Further, Royce argues that errors and problems (that cannot be found during the analysis step) discovered in the second to last step, testing, will bring the development back to the first step, thereby increasing schedules and/or costs by 100 per cent.
According to Leventhal and Barnes (2008, p.57) there are some problems with the
waterfall model, problems that are “especially significant when developing a user interface”.
What the waterfall model lacks, according to the authors, is called iteration (though, as stated
above, the sequential model were from the beginning argued to benefit from iteration).
Iteration means that the phases of the development sometimes are returned to, if or as it
mostly is when needed, during the development process. Leventhal and Barnes mean that in
reality the sequential development process is not beneficial, just as Royce stated in 1970.
Leventhal and Barnes state this since the early phases of the development process often needs
to be returned to, for example when the requirements change (as they usually do in a project).
Nielsen (1993) argues that the user interface won’t be finished and available for user tests
until the very last minute, when everything else has already been developed, when using the
waterfall model. Further the author means that user tests cannot be conducted earlier without
(a prototype of) the graphical interface, since users do not understand technical specifications
on a system or interface. Gould and Lewis (1985) recommend iterative design as one of the
three principles needed for a successful design (along with the other two principles: “Early
Focus on Users and Tasks” and “Empirical Measurement”). Nielsen (1992, p.13) argues that it
is “nearly impossible to design a user interface right the first time, we need to test, prototype
and plan for modification by using iterative design”.
The development process needs to be iterative if the developed product or system is
going to be successful (Gould & Lewis 1985; Nielsen 1992, 1993; Leventhal & Barnes 2008).
"Usability engineering is not a one-shot affair where the user interface is fixed up
before the release of a product. Rather, usability engineering is a set of activities
that ideally take place throughout the lifecycle of the product, with significant
activities happening at the early stages before the user interface has even been
designed.” (Nielsen 1993, p. 71)
Nielsen (1993) also suggests that usability must be taken care of throughout the whole development process of a product if the result is to be as good as possible. This process is called usability engineering. The model is explained below, in the following section.
2.4.1 The Usability engineering lifecycle model
Ensuring usability is a big part of a product's success (Dumas & Redish 1999). As stated at the end of section 2.2, ensuring usability can be part of the whole development cycle. But it is not enough to think about usability when the product is almost or completely finished. "Usability is not a surface gloss that can be applied at the last minute. […] Therefore, usability has to be built in from the beginning." (Dumas & Redish 1999, p.8) The authors argue that usability has to be considered during the whole development lifecycle. Nielsen (1992, 1993) also argues that usability should not be ensured just before the final release of the product, but throughout the product's whole lifecycle. A lot of work on a product can be done before implementation starts, making sure that the right functionality gets implemented
and therefore costly changes to the final product can be avoided (Nielsen 1992, 1993). Nielsen
(1993, p.72) states that “The life cycle model emphasizes that one should not rush straight into
design." and further describes the usability engineering lifecycle model. The usability engineering lifecycle model is developed from Gould and Lewis's (1985) three design principles, expanding them into a model with a number of defined stages (Nielsen 1992; Mayhew 1999). Gould and Lewis (1985, p.300) advise that three main principles be in focus when designing a product and engineering usability: "Early Focus on Users and Tasks", "Empirical Measurement" and "Iterative Design". Mayhew (1999) states that the usability engineering lifecycle model is used to apply an engineering perspective to the process of developing user interfaces with high usability. Mayhew (1999) contrasts usability engineering with software engineering, arguing that the processes are the same though the tasks and models may differ. Further, Mayhew argues that both software engineering and usability engineering are about defining requirements and goals, working iteratively with design and tests in order to reach and fulfil the goals.
Mayhew (1999, p.5-6) describes the usability engineering lifecycle as containing the following steps:
• “Structured usability requirements analysis tasks
• An explicit usability goal setting task, driven directly from requirements analysis
data
• Tasks supporting a structured, top-down approach to user interface design driven
directly from usability goals and other requirements data
• Objective usability evaluation tasks for iterating design towards usability goals”
Mayhew argues that the usability engineering lifecycle model is a structured engineering technique that helps the team developing a product or system, ensuring usability in the process. Further, the author states that specific tasks are supposed to be executed during the development process, tasks that all strive to fulfil requirements and goals set to ensure usability. Mayhew also points out the importance of an iterative development process, where the usability evaluation of the product is supposed to be conducted iteratively, making sure that each iteration gets the product closer to the usability goals.
Nielsen’s usability engineering lifecycle model
1. Know the user
a. Individual user characteristics
b. The user’s current and desired tasks
c. Functional analysis
d. The evolution of the user and the job
2. Competitive analysis
3. Setting usability goals
a. Financial impact analysis
4. Parallel design
5. Participatory design
6. Coordinated design of the total interface
7. Apply guidelines and heuristic analysis
8. Prototyping
9. Empirical testing
10. Iterative design
a. Capture design rationale
11. Collect feedback from field use
Table 1. The usability engineering lifecycle model (Nielsen 1993, p.72, Table 7).
Nielsen presented one usability engineering lifecycle model in 1992, and another slightly altered usability engineering lifecycle model in 1993 (see Table 1). The latter model is used in this study, though the main characteristics of the two models are the same.
All the steps in the lifecycle model might not be crucial (or possible, because of time or financial constraints) to go through in all development projects, and the steps do not have to be followed in numerical sequence (Nielsen 1992). Nielsen (1992, 1993) argues that not all development teams can afford to conduct the whole usability engineering lifecycle model, but argues further that all teams should at least get to know their users by visiting the users' workspace (see step 1 of the usability engineering lifecycle model), let the users participate throughout the design process (see step 5), design iteratively (step 10), use prototyping (step 8) and conduct user tests (step 9). How user tests and other usability engineering methods can be conducted is explained in section 2.4.2.
Predesign stage
The first stage is the predesign stage, where the first step is getting to know the user (see Table 1). Listing a) the user characteristics (such as computer experience etc.), b) what goals the users have (and what they need in order to achieve these goals), c) how users conduct specific tasks (and whether the execution can be improved in the system), and d) how the users change while using the system (for example turning into experts) and how the system should handle this "evolution", are all part of the first step in the usability engineering life cycle. Step two (2) in the lifecycle is to analyse already existing products in the same field as the system being developed. The analysis consists of identifying the existing systems' strengths and weaknesses. This can be done by testing the existing systems on users (more about user tests under section 2.4.2) or by comparing systems if several systems exist. Step three (3) in Nielsen's model is about defining when the system fulfils the required usability, how to measure whether the requirements are fulfilled, and what attributes should get the most attention while developing. Depending on what the system is supposed to be used for, Nielsen (1993) argues that different attributes should get different amounts of attention. In this step, the author states that a "Financial Impact Analysis" should be performed as well. This analysis should give a picture of what financial benefits the system will contribute to the company adopting the system. The next step (step four) in the usability engineering lifecycle model is to get a range of interface design suggestions from the designers on the development team. The idea of Parallel design is that designers should work individually on developing rough drafts of how the interface should be designed. Having a couple of designers work individually and independently will, according to the author (1993, p.86), give "as much diversity as possible". Nielsen (1993) suggests that when the drafts are finished, the best features can be merged into one interface to be evaluated, or, if the drafts are so different that they cannot be merged, the designs can be developed further so that a few prototypes can be produced (see section 2.4.2) and then evaluated. The author argues that Parallel design is good cost-wise, since the developers are working on several design ideas in parallel.
The design stage
Stage two in Nielsen's (1992, 1993) usability engineering lifecycle model is the design stage, where step number five (5) is to include the users in the development team (for more about participatory design, see section 2.4.2). Coordinating the Total Interface means that all the different parts of the product (such as guides, different releases, documentation, the product itself) should be consistent. Nielsen (1993, p.90) suggests that this can be done by having one person "coordinate the various aspects of the interface" and by developing a sharing mentality (such as code sharing) throughout the project. "Apply guidelines and heuristic analysis", step seven (7) in Nielsen's model, is about letting experts evaluate and analyse the system using standards and guidelines (see section 2.4.2). The next step in the model is prototyping, a method where a prototype of the product or interface is developed (for example sketched on a piece of paper) and then tested on a user (see section 2.4.2). Step nine (9) is where tests are carried out with real users. The tests are conducted either to evaluate the (developing) interface against the previously established usability goals, or to evaluate whether the interface works for the users, and why. Methods commonly used in this step are, according to Nielsen (1992), Thinking Aloud, Constructive Interaction, questionnaires, observation and logging (for further information about these methods, see section 2.4.2). Step ten (10) is to conduct the design iteratively, which means for example that the usability problems recognized in the previous step should somehow be dealt with and then tested on users again. Nielsen (1992) argues that it is important not to overuse the test subjects by conducting tests on every single design detail. Instead, the author (1992, p.19) argues that users "should be conserved for the testing of major iterations".
Post design stage
"Collect feedback from field use" is the last step in Nielsen's usability engineering lifecycle model. It is conducted in order to collect data about the system's usability for further development (either of the same system or of other, future projects). (Nielsen 1993)
2.4.2 Testing usability
“Some type of usability testing fits into every phase of a development lifecycle.” (Rubin &
Chisnell 2008, p.27) Usability testing is not to be conducted only when the implementation and development are finished. Usability testing can and should, according to Rubin and Chisnell (2008), be conducted during the whole development lifecycle. This means that usability tests should be performed from the start to the end of the development.
“Usability testing is appropriate iteratively from predesign (test a similar product
or earlier version), through early design (test prototypes), and throughout
development (test different aspects, retest changes).” (Dumas & Redish 1999, p.26)
According to Dumas and Redish (1999), tests should be conducted during the whole development lifecycle; testing should therefore be conducted iteratively throughout the development of the product.
“Testing usability means making sure that people can find and work with the functions to
meet their needs.” (Dumas & Redish 1999, p.4) By conducting usability tests, the usability of
the product can be measured and evaluated. The usability test shows if people can use the
product’s functions to perform a task.
“[…] we use the term usability testing to refer to a process that employs people as
testing participants who are representative of the target audience to evaluate the
degree to which a product meets specific usability criteria.” (Rubin & Chisnell
2008, p.21)
Since usability concerns the user and the user's needs, usability testing should also focus on the user. Therefore, Rubin and Chisnell (2008) argue that the intended user should be included in the usability test as a test person. Dumas and Redish also point out the importance of having the expected users (or the ones already using the product) represented in the usability tests; otherwise the test will not show credible results.
Since it is the expected user who should be involved in the tests, testing can begin even before the product is fully developed and before any current users exist.
However, usability testing is not just about conducting a test; it involves many different techniques, methods and tasks (Dumas & Redish 1999). While conducting this research, the word testing has been found to cover several methods and techniques, such as experimenting, exploring, prototyping, evaluating and inspecting, all in order to improve the product's or the system's usability.
User tests
User tests are tests performed involving the intended end user of the system or product. During user tests it is the user who reveals usability problems, as opposed to, for example, evaluations made by experts, and user testing "is the most fundamental usability method" according to Nielsen. According to Dumas and Redish, user tests are the best way to find major usability problems in a system. User tests are sometimes described simply as usability testing, but usability testing can also include methods and techniques where the users are not involved in the actual test. Below, a variety of such usability testing and usability engineering techniques and methods will be explained.
Usability engineering methods and techniques
Participatory Design is a method where the actual user of the system participates in the
design process as a part of the design team (Nielsen 1993; Rubin & Chisnell 2008; Leventhal &
Barnes 2008). In order to get substantial feedback, the design ideas need to be presented to
the user in a way that the user can comprehend, since the user is not a designer. Nielsen (1993, p.89) suggests that "Instead of voluminous system specifications, concrete and visible designs" should be presented to the user. There is a danger with participatory design that the user might become too much a part of the development team, making the user hold back negative feedback and other input valuable to the development (Rubin & Chisnell 2008). Mayhew (1999) also suggests that the technique does not really involve the user in the "initial design process", which is, according to the author, a drawback.
Observation is described as a way to gather information about how the users normally perform tasks. The technique is conducted by going to the users' normal workplace and observing them performing their daily work without interfering. By observing the user, unpredictable user scenarios and tasks can be discovered. (Nielsen 1994) Mayhew (1999) argues that the user can better explain how and why a certain task is performed while it is being carried out than at another time, such as during an interview. Mayhew (1999) also suggests that the user is not always aware of how a task is performed, which is why asking about it during an interview could contribute false data. Rubin and Chisnell (2008) describe the Observation technique as Ethnographic Research. Observations can also be done in a controlled laboratory environment, for example watching the user use the system, but are then to be seen as part of a user test.
Card sorting can be used to group and categorize content, and to find the right wordings and labels for the user interface (Rubin & Chisnell 2008). Nielsen (1993) describes the technique as inexpensive. Further, the author (1993, p.127) describes how the technique is carried out: "each concept is written on a card, and the user sorts the cards into piles". Rubin and Chisnell (2008) suggest that the user can be given cards that are not sorted into categories, together with the assignment of writing labels for the cards. Further, the authors suggest that the technique can also be carried out by letting the user sort the cards into already existing categories.
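The sources cited above do not describe how the resulting piles are analysed; purely as an illustration, the sketch below shows one common way to summarise open card-sorting data, by counting how often each pair of cards ends up in the same pile across participants. All card labels and data are invented.

    # Sketch: aggregate open card-sorting results into pair co-occurrence counts.
    # Cards frequently placed in the same pile by many participants are candidates
    # for being grouped together in the interface. Data are invented for illustration.
    from itertools import combinations
    from collections import Counter

    # Each participant's sort: a list of piles, each pile a list of card labels.
    sorts = [
        [["pay fees", "view invoice"], ["edit profile", "change password"]],
        [["pay fees", "view invoice", "change password"], ["edit profile"]],
        [["pay fees"], ["view invoice", "edit profile", "change password"]],
    ]

    co_occurrence = Counter()
    for piles in sorts:
        for pile in piles:
            for a, b in combinations(sorted(pile), 2):
                co_occurrence[(a, b)] += 1

    # Print pairs sorted by how often they were grouped together.
    for (a, b), n in co_occurrence.most_common():
        print(f"{n}/{len(sorts)}  {a}  +  {b}")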
Questionnaires, Interviews and Surveys do not study the user using a system, but ask what the user thinks about using the system, or about the system, interface, etc. itself (Nielsen 1993). Rubin and Chisnell (2008) consider the method good for getting a generalised view. However, the authors argue that since the method only asks for the users' views and does not study the user using the system, it should not replace user tests.
Focus Groups are used to gather information from a group of users about their opinions and feelings on certain topics. Focus groups always contain a group of users; Nielsen suggests 6-9 users per group. Rubin and Chisnell (2008) claim that focus groups should be used in the early stages of the development cycle, while Nielsen (1993) states that focus groups can be used both during the early stages of development and after the system has been used for a period of time. Rubin and Chisnell (2008) argue that the users in focus groups only tell what they want to tell, which is why the method should not be used instead of user tests.
Logging actual use is, according to Nielsen (1993), usually employed after a system is already in use by users, but this technique can also be used during the development process. The technique is conducted by letting the computer automatically log data about how the user uses a system. The log file can then show how the system is used (such as how often tasks are performed, how long tasks take to perform, etc.). The data can be used to evaluate, for example, whether certain functionality is used. But as the author argues on page 221, "A major problem with logging data is that it only shows what the users did but not why they did it", which is why the data itself might not say much about the system's actual usability. If the data is going to be used as part of a bigger evaluation where the users are asked to explain the data, for example why certain tasks were performed in a particular way, the author argues that it must be done very carefully.
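As an illustration of the kind of automatic logging Nielsen describes, the sketch below records timestamped events and then summarises how often each event occurred; the event names and the JSON-lines log format are assumptions made for this example, not taken from the source.

    # Sketch of a simple interaction logger: the application records what the user
    # did and when, and the log can later be summarised (e.g. how often tasks are
    # performed). Event names and file format are illustrative assumptions.
    import json, time

    LOG_FILE = "usage_log.jsonl"

    def log_event(event: str, **details) -> None:
        """Append one timestamped event as a JSON line."""
        record = {"timestamp": time.time(), "event": event, **details}
        with open(LOG_FILE, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")

    def summarise(path: str = LOG_FILE) -> dict:
        """Count how often each event occurred; this shows what users did, not why."""
        counts = {}
        with open(path, encoding="utf-8") as f:
            for line in f:
                event = json.loads(line)["event"]
                counts[event] = counts.get(event, 0) + 1
        return counts

    if __name__ == "__main__":
        log_event("open_report", report_id=42)
        log_event("search", query="fees")
        log_event("open_report", report_id=7)
        print(summarise())   # e.g. {'open_report': 2, 'search': 1}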
User Feedback can be collected in different ways. Nielsen (1993) gives the examples of collecting it directly in the system itself, when conducting beta testing, or by providing the users with a specific email address where feedback can be sent. Further, the author argues that user feedback is an easy way to gather data about the system in use, since it is the users themselves who take the initiative to share their thoughts and feelings.
Thinking Aloud is used in order to get to know what the user (test participant) is thinking and feeling while performing tasks (Rubin & Chisnell 2008). In this way, the user's conceptions and misconceptions of the system or interface can easily be identified (Nielsen 1993). Nielsen (1993, p.195) describes the technique as perhaps the "single most valuable usability engineering method". The method is carried out by asking the user or test participant to verbalize his or her thoughts and feelings while using a system. This can feel unnatural to the user, which is why it might distract the user from the actual task. The method might also simplify the task the user is performing, since how the task is being performed gets much more attention than it normally would. (Rubin & Chisnell 2008) Dumas and Redish (1999) suggest that the user should get to practise the thinking-aloud technique before the actual test starts, just to "warm up".
Constructive interaction, or codiscovery learning, is used just as Thinking Aloud (see the section above), in that the test participants verbalize thoughts and feelings throughout the test. The difference between the two methods is that constructive interaction uses two test subjects, who perform the tasks together (and talk to each other). Nielsen (1992) notes that the technique demands additional test subjects, but at the same time the method can feel more natural for the test subjects (especially if the test subjects are children). (Nielsen 1992)
Follow-Up Studies are, according to Rubin and Chisnell (2008), the most reliable method, giving the most accurate data for evaluating usability. The authors state this because when follow-up studies are conducted, all the contributing aspects and characteristics are in place, which is why an accurate picture of how usable the system is can be obtained.
Eye-tracking is a technique that can be used to evaluate where a user looks at a screen (i.e. an interface) (Nielsen 1993). According to Benyon (2010), eye-tracking, or eye-movement tracking, can show what in the interface attracts the user's attention and what parts are completely overlooked. Eye-tracking software can be used to record what the screen was showing while the user was looking at it (Nielsen & Pernice 2010). Nielsen and Pernice (2010) stress that the eye-tracking technology cannot explain why some parts are looked at and why some are not; neither can eye-tracking show what the user was feeling or thinking when looking at a certain thing. The technique therefore cannot show why certain parts of the interface were looked at and why some parts were overlooked.
Rubin and Chisnell (2008) argue that eye-tracking devices are expensive, and that the
data can be hard to interpret.
Prototyping is a technique where a prototype is being used to evaluate a system or a
product. A prototype means a “representation of all or part of an interactive system, that,
although limited in some way, can be used for analysis, design and evaluation” (ISO 9241-
210:2010). A prototype can be a product with fully working interactivity, but less developed
functionality, a simple paper sketch or a “static mock-up” (ISO 9241-210:2010).
The main characteristic of the prototype is that it is interactive, according to Benyon. Further, Benyon (2010, p.184) describes that "Prototypes may be used to demonstrate a concept (e.g. a prototype car) in early design, to test details of that concept at a later stage and sometimes as a specification for the final product." The author claims, "The point is to explore ideas, not to build an entire parallel system or product." (Benyon 2010, p.95) This means that prototypes are supposed to be developed as part of the process of understanding what is to be developed, and used to evaluate the design and ideas with the development team as well as with the customers and users.
Nielsen (1993, p.94) states, "The entire idea behind prototyping is to save on the time and cost to develop something that can be tested with real users." The strength of prototypes is that a prototype can give an insight into how the system will feel, what it can do, or how it will look when it is finished. The advantage of prototyping is that just a part of the system is prototyped, letting the user, the team, the stakeholders, etc. see, try and evaluate the prototyped part. Benyon (2010, p.185) states that prototypes are especially beneficial to show to "clients and ordinary people", since they will not understand technical descriptions and the like. Nielsen (1993, p.94) describes that there are "two dimensions of prototyping: Horizontal prototyping keeps the features but eliminates depth of functionality, and vertical prototyping gives full functionality for a few features." This means that prototypes can be constructed either to show the breadth of the system's features or to show its functionality in depth. Nielsen (1993, p.95) claims that horizontal prototyping will show how "well the entire interface "hangs together" and feels as a whole". Further, Nielsen states that a vertical prototype shows a specific function in full, enabling that specific function to be fully evaluated and tested. Therefore, depending on where in the usability engineering lifecycle the system is, different types of prototypes are beneficial to develop.
Benyon (2010, p.185) claims that there are two types of prototypes: "low fidelity (lo-fi) and high fidelity (hi-fi)". Further, Benyon states that high-fidelity prototypes often look like what the final system will look like, but do not have all the functionality that the finished system will have. Low-fidelity prototypes are, according to Benyon (2010, p.187), often made from paper and concentrate on basic ideas of how the finished system should be when it is implemented, such as "content, form and structure, the 'tone' of the design, key functionality requirements and navigational structure".
According to Leventhal and Barnes (2008), the horizontal prototype is high-fidelity across a wide range of features, but low-fidelity in terms of functionality. Further, they state that the vertical prototype is "high-fidelity on only a portion of the final product", i.e. some of the features are in high fidelity.
High-fidelity prototypes
High-fidelity prototypes focus on the details of the system or design, and can sometimes serve to show the "final design" to the user or customer (Benyon 2010). Examples of high-fidelity prototypes are video prototypes or prototypes using the Wizard of Oz technique (though it is important to point out that Wizard of Oz prototypes can also be seen as low-fidelity prototypes, described in a later section, since they do not need to feel or look "finished"). Below, the Wizard of Oz technique is described further.
Wizard of Oz
The Wizard of Oz technique is a method where the functionality of the system is controlled and handled by a human (the "wizard") instead of by the system itself. This means that the functionality does not have to be implemented in order to be tested on users. (Nielsen 1993) The user who interacts with the prototype is unaware that the input and output of the prototype are handled by the "wizard" (Leventhal & Barnes 2008). The technique requires that the "wizard" has some experience, so that the prototype and the possible interaction are held at a reasonable, manageable level and the prototype works in a way that "fools" the user into believing that he or she is in "control" and that the interaction is real (Nielsen 1993; Leventhal & Barnes 2008). The roles can also be switched by letting the user be the "wizard" and letting the developer be the user. Then the developer gets to see what kind of output the user thinks the system should give, depending on what interaction is carried out. (Pettersson 2003)
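The console sketch below illustrates the core idea of the technique under simplified assumptions: nothing is implemented behind the "system", and every reply the participant sees is typed by the human wizard. In a real set-up the wizard would work from a separate, hidden screen; here both roles share one console purely for illustration, and the dialogue is logged so that the team can review it afterwards.

    # Minimal Wizard of Oz sketch: no system logic is implemented; every reply
    # shown to the participant is improvised by a human wizard. In practice the
    # wizard would sit at a separate, hidden keyboard (e.g. over a network
    # connection); here both prompts share one console purely for illustration.
    transcript = []

    print("Wizard of Oz session - type 'quit' to stop.")
    while True:
        user_input = input("[participant] ")          # what the test participant does or asks
        if user_input.strip().lower() == "quit":
            break
        wizard_reply = input("[wizard, hidden] ")     # the wizard improvises the system's answer
        print(f"[system output] {wizard_reply}")      # shown to the participant as if computed
        transcript.append((user_input, wizard_reply))

    print(f"Logged {len(transcript)} exchanges for later analysis.")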
At Karlstad University in Sweden, a laboratory called Ozlab has been set up, where a system based on the Wizard of Oz technique is used. The Ozlab system makes it possible to test the interactivity of multimedia products before any programming is done. This is a beneficial strategy especially where paper prototypes are not suitable for testing the interaction between the user and the system. (Molin & Pettersson 2003) To be able to use the Ozlab, only a few pictures and some "wizard"-supporting functions need to be set up. This enables the Ozlab system to be used for "explorative experiments" (p.78), where improvisation is part of the development work, and further as an aid in the requirements work. Requirements work for multimedia products is, according to the authors, a complicated job, since the requirements for multimedia products can, for example, be hard to express explicitly and make measurable. The Ozlab can, according to the authors, be used for showing layout alternatives to the client, including interactivity. The client can therefore be part of the requirements work, without the costly need for the development team to develop a set of product alternatives. The requirements are then visualized instead of just written, and possibly loose ideas and thoughts can be presented more clearly.
"Admittedly, even if this method makes it impossible for the designer/developer to fool her- or himself, the whole set-up is built on fooling someone else" (Pettersson 2003, p.163). Since the Wizard of Oz technique is built upon "tricking" the test subject into believing that the interaction is real, the technique comes with some ethical obligations. The test subject should, according to Pettersson, be informed about how the test was carried out after the test is finished, and if the test subject wants the collected data to be deleted, the test leader should agree.
However, Pettersson argues that fooling the test subject is not always required in order to use the Wizard of Oz technique successfully. Depending on what kind of product is being tested, the interactivity can be tested without fooling the test subject. For example, if a mobile interface is being tested on a computer screen, it may be clear to the test subject that the interface and the interaction are not real. According to Pettersson, the Ozlab then rather becomes a communication tool for the development team.
Disadvantages with high-fidelity prototypes
There are some disadvantages with high-fidelity prototypes according to Benyon. "A problem with developing hi-fi prototypes is that people believe them!" (Benyon 2010, p.185) Benyon means that high-fidelity prototypes look so real and fully implemented that the user or customer can be tricked into believing that this is the case. "Another problem with hi-fi prototyping is that it suggests such a system can be implemented." (p.196) Further, Benyon holds that some things implemented in the prototype using techniques from animation programmes etc. can fool the customer into believing that the implementations are possible for the actual system. But some functions might not be implementable in the programming language used for the actual system.
Another aspect of prototypes is time delays. "If you can anticipate the length of any delays, build them into the prototype." (Dumas and Redish 1999, p.75) The response time that will be present in the actual system might be missing in the prototype. This might make the user's interpretation of the prototype more positive than it would be for the actual system, since the time delays might be shorter. Dumas and Redish argue that prototypes should contain such response times if the user feedback is to be accurate.
Low-fidelity prototypes
Low-fidelity prototypes can be, and often are, made of paper. They are then called paper prototypes or paper mock-ups (Nielsen 1993). The main characteristics of low-fidelity prototypes are, according to Benyon, that they are quickly developed, quickly used and quickly thrown away. Since they are developed so easily and quickly, they are also cheap to use as a usability engineering tool (Rubin & Chisnell 2008). Benyon argues that low-fidelity prototypes focus on design ideas rather than on details of the design and system.
“The value of the paper prototype or paper-and-pencil evaluation is that critical
information can be collected quickly and inexpensively. One can ascertain those
functions and features that are intuitive and those that are not, before one line of
code has been written.” (Rubin & Chisnell 2008, p.18)
Rubin and Chisnell argue that paper prototypes are cheap to use as part of usability engineering, and that the results are easily collected, even though it is "critical information" that is very important for the product's success. Further, Rubin and Chisnell note that this critical information can be collected before any time (and money) has been spent on implementing features and functions of the product.
Nielsen states that prototyping can be used for designing the interface, and that user tests can be performed on those prototypes, adding understanding and information about how the interface and system should be implemented.
Disadvantages with low-fidelity prototypes
Disadvantages with low-fidelity prototypes, according to Benyon, are that they can be fragile, and when shown to and used by a lot of people the prototype may become worn, impaired or shredded. Further, Benyon argues that a risk with low-fidelity prototypes can also be that too much detail is included in the prototype, making it hard to understand. However, if too little detail is included, the users might add the details themselves, or just "simply watch low-fidelity prototypes since they have only limited interactivity" (Leventhal & Barnes 2008, p.198), thereby decreasing the users' feedback.
To assess, inspect or evaluate usability
Usability inspection is a term used by Mack and Nielsen (1994) to group together different usability engineering methods. Leventhal and Barnes describe these techniques as usability assessment. Usability evaluation is another term used for the same methods and techniques (Rubin & Chisnell 2008). These methods are not used in the earliest parts of the usability engineering lifecycle (see section 2.4.1), since they inspect and assess the interface; to be able to inspect the interface, it has to be somewhat developed, although not implemented. (Mack & Nielsen 1994) Leventhal and Barnes (2008) state that the evaluation techniques conducted by experts (see methods below) can be used early in the development cycle, nipping some of the usability problems in the bud. The authors also hold, however, that some techniques are better used later in the development cycle. When to assess and inspect an interface therefore depends on which technique or method is used.
Methods for usability assessment, inspection or evaluation
According to Mack and Nielsen (1994), usability inspection methods are beneficially used as part of an iterative development cycle, as well as in combination with user tests. The usability inspection methods can be used first; then, after the design has been updated and revised, the interface can go through user tests as well. Dumas and Redish also suggest that the techniques and methods be combined with user tests, since the severity of the problems found varies between the techniques and methods. Heuristic evaluation, for example, as stated earlier in this section, often reveals local and less severe problems, while user tests reveal the problems that can have an actual effect on the usability and the users’ experience.
Leventhal and Barnes (2008) as well as Mack and Nielsen (1994) list a number of usability evaluation techniques:
Analytic evaluation can be used to foresee or describe how an interface will or should perform. Leventhal and Barnes (2008, p.214) explain that analytic evaluation can predict “how long it will take users to operate a screen”, which can then be used to assess different interfaces against each other.
Evaluation by experts, or heuristic evaluation, is a way for a group of usability experts to find usability problems in an interface. The problems are found by the experts, working individually, looking at the interface using a set of principles (i.e. heuristics). (Mack & Nielsen 1994; Leventhal & Barnes 2008) Nielsen (1993) holds that more usability problems can be found if the experts are allowed to communicate their findings to each other after their individual evaluations are done. Further, the author recommends using 3-5 experts in order to maximise the findings.
A disadvantage with heuristic evaluation is that the experts usually find only minor and less severe problems. This means that the problems found may not be severe from the user’s point of view, and might not even be worth the time spent fixing them, since fixing them might not improve the usability of the system. (Dumas & Redish 1999)
Guideline reviews check whether the interface conforms to usability guidelines. Such guideline documents can, according to Mack and Nielsen (1994), contain 1000 guidelines each, which is why the authors hold that the method is not commonly practised: it demands a high level of knowledge of, and expertise in, the guideline documents. Nielsen (1993, p.91), though, argues that “In any given project, several different levels of guidelines should be used”. The levels are “general guidelines”, “category-specific guidelines”, and “product-specific guidelines”; the deeper the level the guidelines are drawn from, the more specific the advice becomes. An interface can also be reviewed against a set of standards. Nielsen (1993, p.92) differentiates guidelines from standards: a standard “specifies how the interface should appear to the user”, whereas a guideline “provides advice about the usability characteristics of the interface”.
Pluralistic walkthroughs are conducted by walking through a specific scenario and discussing usability problems related to the scenario and the interface, together with people associated with the product being developed, such as users and developers. (Mack & Nielsen 1994)
Standards inspections mean that an expert evaluates an interface against given standards. This method aims at getting all similar systems on the market to comply with the same standards. (Mack & Nielsen 1994)
Cognitive walkthroughs simulate and evaluate the ease of learning an interface, a process that can be seen as problem solving or as a “complex guessing strategy” (Dumas & Redish 1999, p.68). Users prefer this process, or guessing strategy, since it lets them learn the interface while using it (Wharton et al. 1993; Mack & Nielsen 1994; Dumas & Redish 1999; Leventhal & Barnes 2008). Dumas and Redish argue that the cognitive walkthrough is not as good at finding usability problems as other methods, such as heuristic evaluations and user tests.
Formal usability inspections are a formalized method for engineers and inspectors to find and describe usability problems in an efficient and time-effective way. (Kahn & Prail 1993)
Feature inspection is a technique where the features of a system are inspected and evaluated according to how well they help the users reach their intended goals and perform their tasks. (Mack & Nielsen 1994)
2.5 What can be tested?
Dumas and Redish (1999) argue that most, if not all, products benefit from usability studies and usability engineering, since everything that is used or read by a user has an interface that can be improved.
An interface, or user interface (UI), is the part of the product that the user interacts with (even though user interfaces were not interactive from the beginning, according to Nielsen, 1993). According to Leventhal and Barnes (2008), some users may interpret the UI as the system itself, even though the UI is just the visual representation of, and the boundary between, the user and the functional part of the system.
Human-Computer Interaction (HCI), sometimes called CHI (Computer-Human Interaction), is a large field of study of which usability and UI design are parts (Leventhal & Barnes 2008). Today the acronym HCI is the more commonly used one, since the field focuses on how humans interact with computers (systems). Usability is a common goal within HCI. (ibid.)
Dumas and Redish (1999, p.27) suggest that usability testing can be used to ensure the usability of, for example, questionnaires, “interviewing techniques”, “instructions for non-computer products”, hardware, documentation, or software. The authors note that the products or techniques tested can be medical products, consumer products, application software, engineering devices, or products from other areas such as “voice response systems” or navigation systems. Further, the authors argue that usability tests should always be conducted as a means of improving the product’s usability.
2.6 Who can ensure usability?
As established earlier (see section 2.4), a product’s usability should be analysed by asking the product’s intended users, and by evaluating, whether the product fulfils the users’ needs and conceptions. Therefore the user must be included in the usability engineering process. Since usability engineering is an iterative process consisting of a set of different tasks, how the user is involved differs from cycle to cycle and from task to task (for more information about these tasks, see section 2.4.1).
It is, however, important to acknowledge that the user does not always know what is best for her or him; the user does not always know what she or he wants. Neither can the development team put the design work into the users’ hands (for example by letting the user adapt the user interface of the finished product as she or he wants), since not all user groups will have enough confidence or knowledge to use such a feature. The latter aspect goes hand in hand with the former: since the user does not always know what is right for her or him, such design decisions should not be placed in her or his hands. Nevertheless, it is not beneficial to fully exclude the user from the development process and simply put all design decisions into the hands of the designer. (Nielsen 1993)
The designer is blind to flaws in the system, since the designer’s knowledge of how the system works (or should work) fills the gaps. For the designer and the team involved in the development of the system or product, all such gaps will be filled, making them believe that the system or product is perfect. A user lacks such knowledge of the system or product and will not be able to fill in the gaps, which is why the system will be hard to use or hard to understand. Designers and developers are therefore not suitable as participants in user tests, nor are they suited to make all design decisions based on their own liking and feeling. (Nielsen 1993)
The same goes for letting a powerful person in the organization or company review the system: the manager does not represent the user and the user’s needs any more than the developer or the designer does. (Nielsen 1993)
To be able to ensure usability, a number of different resources are needed beyond the activities of the usability engineering lifecycle model. Conducting user tests is endorsed as a complement to activities such as heuristic evaluation (Mack & Nielsen 1994; Dumas & Redish 1999). To be able to conduct user tests, a test team is needed. A test team can consist of just one person, but a few more people are preferred (Dumas & Redish 1999). Below, the test team and the roles it includes are explained.
Putting together a usability test team
Depending on the type of test conducted, different roles need to be cast. Dumas and Redish (1999) argue that the test team members should take on the following roles: test administrator, briefer, camera operator, data recorder, help desk operator, product expert and narrator (these roles are explained below). Further, the authors hold that each team member can take on a couple of these roles, since they argue that three people is the most beneficial test team size. A bigger test team would require a large space where the test is conducted, while a smaller team would demand that the people on the test team be very experienced. Dumas and Redish (p.234) note that “some usability groups can only afford to have one usability specialist conducting each test” and then argue for the disadvantages of this setup. The authors hold that tests conducted with this one-person setup require some compromises, since one person cannot shoulder all roles. According to Dumas and Redish, this compromise often concerns collecting data: some data might be missed or not collected at all, since it is hard to observe, take notes, assist the test subject (if needed), log data and take care of the cameras (if used during the test) all at the same time. After the test, the observations and data cannot be compared with data collected by someone else. The authors also point out that much more time might be needed for analysing recordings from the test if observations could not be made during the test.
Dumas and Redish state that the test administrator leads both the team and the test. The test administrator has the responsibility of seeing to it that the test runs as planned, and of handing out tasks and work to the rest of the team. The authors note that the test administrator often is “the project leader for the entire testing project” (p.242).
The briefer takes care of the test participants. This includes, for example, welcoming the test participant and explaining how the test will be conducted, as well as what obligations (and rights) the test participant has. (Dumas & Redish 1999)
The camera operator handles the equipment that records audio and video. According to Dumas and Redish, it is important that the camera operator understands what is supposed to be recorded and what to focus on.
The data recorder is a very busy role to shoulder according to Dumas and Redish, and the most time-consuming one. The data recorder is responsible for taking notes of everything of interest during the test, from, for example, what the test participant says to how many times the test participant makes “incorrect field entries” (p.245).
The one shouldering the role of help desk operator assists the test participant when needed during the test (how much help the participant is allowed to get during the test must be decided beforehand; too little or too much help will make the collected data less accurate). (Dumas & Redish 1999)
The product expert is responsible for keeping the product being tested up and running. During a test the product may crash, and the product expert is supposed to handle this so that the test can continue as soon as possible.
Dumas and Redish state that the narrator’s responsibility is to interpret what the test participant is doing and perhaps saying, and then communicate this to the data recorder, who logs the information.
Rubin and Chisnell (2008) argue that the test moderator is the most important role in the usability test team. According to the authors, the test moderator is responsible for taking care of the test participant before, during and after the test. The test moderator’s responsibilities also include collecting data, as well as compiling and comparing the data after the test. The authors hold that the test moderator should be someone who is not deeply involved in the product’s development, since “it is almost impossible to remain objective when conducting a usability test of your own product” (p.45). However, if there is no one else who can do the usability testing, having a person who is involved in the product’s development as test moderator is better than conducting no test at all.
2.7 Where can usability be tested?
Usability studies can be conducted in either a controlled laboratory environment or in an
uncontrolled environment, so-called field testing, testing “in the wild” or “in-situ studies”
(Rogers et al. 2007).
Rubin and Chisnell (2008, p.93) argue that “Rather, a commitment to user-centered design and usability must be embedded in the very philosophy and underpinning of the organization itself in order to guarantee success.” The authors mean that just because an organization builds a usability laboratory, usability will not by itself be present in all the products it develops. The usability laboratory must be used, and the organization itself must adapt its processes so that usability is considered and engineered throughout the development process. Further, the authors hold that the most important thing is that the person(s) conducting the usability tests and evaluations have the right understanding of, and knowledge about, methods and techniques. If that knowledge is missing, the usability laboratory, no matter how advanced and well equipped, will be useless.
Nielsen (1994) also argues that the first step towards engineering usability is not to invest in and build a usability laboratory. Further, the author claims that:
“Once a company has recognized the benefits of the systematic use of usability
methods to improve its products, management often decides to make usability a
permanent part of the development process and to establish resources to facilitate
the use of usability methods by the various project teams. Two such typical
usability resources are the usability group and the usability laboratory.” (Nielsen
1994)
This means that the organization should first make the effort of making usability engineering a part of the development process; the usability laboratory is a second step.
Capra, Andre, Brandt, Collingwood and Kempic (2009) discuss what should be part of a usability laboratory, and whether a “standing lab” is necessary or not. Capra et al. (2009) also discuss whether some organizations benefit from having a portable laboratory setup instead. The authors further discuss what facilities both setups should or could contain.
Rubin and Chisnell argue that a usability laboratory can be expensive to set up, but that an expensive laboratory is not needed for conducting usability tests. Further, the authors argue that not all usability tests have to be conducted in a laboratory; some tests are better conducted in other environments. Rogers (2011) notes that more and more usability studies are conducted outside the controlled laboratory environment, in the field, such as in the streets or in people’s homes.
Rubin and Chisnell also discuss the possibility of conducting remote usability tests. This technique is useful for collecting data from test participants far away from where the team is situated. When conducting remote usability tests, the Internet is mostly used.
The laboratory does not, however, have to be used only for user tests. Nielsen (1994) argues that the laboratory can be used for more than conducting usability tests: other activities related to usability engineering can also be carried out there. Nielsen (1994) states that activities such as focus groups, task analysis and participatory design can also be valuable to perform in the usability laboratory. The latter is especially beneficial if the setup includes video cameras. Further, the author holds that heuristic evaluation can also be conducted in the laboratory.
During this study it has been found that there are few “recipes” for how a usability laboratory should be set up, what it should contain, etc. (see section 3.4). There are, however, a few facilities that are common in usability laboratories, both permanent and portable ones. In the following section, such facilities for usability laboratories and field testing will be explored and explained.
2.7.1 Usability facilities
When conducting usability tests with users, it is common to conduct the tests in a usability laboratory. Nielsen (1993) argues that a laboratory set up specifically for conducting user tests is not obligatory in order to run tests. Further, the author argues that a laboratory can make the test procedures easier, and that user tests conducted as a part of the development process then stand a better chance of becoming a part of every development cycle and project.
Below, a couple of facilities that are commonly used in usability laboratories are explained.
Common facilities in usability laboratories
Figure 1. A usability laboratory, according to Nielsen (1993, p.201, Figure 20)
Many usability laboratories consist of at least two rooms, one room where the test participant
participates in the usability test and another room where the usability test team is situated
(called either the observation room or the control room) (see Figure 1). Some usability
laboratories also have an executive viewing room that overlooks the test room as well as the
observation room, as seen in Figure 1. (Nielsen 1993, 1994; Dumas & Redish 1999; Pettersson
2003)
Nielsen (1993) notes that the usability laboratory often is soundproof so that the test team can talk with each other without disturbing the test participant. If the usability laboratory consists of at least two rooms, the observation room and the test room are often separated by a one-way mirror, allowing the test team to observe the test and the test participant (Nielsen 1993, 1994).
Computers are used both in the test room and in the observation room. Depending on how flexible the laboratory needs to be, either stationary computers or laptops can be used. In the observation room, additional monitors can be available, showing the test participant’s screen and the views of the cameras. (Nielsen 1993; Pettersson 2003; Capra et al. 2009)
Cameras are common in the usability laboratory and are used to record the usability test. The cameras can be either portable or mounted, depending on how flexible the usability laboratory needs to be (Nielsen 1993; Capra et al. 2009). The cameras can usually be controlled from the observation room, enabling the test team to focus on different things throughout the test. Nielsen (1993) notes that the cameras in the lab often show a view of the whole test room, the participant’s face, and the parts that the test participant interacts with during the test (such as the computer screen, instructions, etc.).
When conducting usability tests, it can sometimes be suitable for the team in the observation room to talk to the test participant or give instructions. When conducting tests using the Wizard of Oz technique, the output from the system can sometimes be audio (Pettersson 2003). For this, a microphone is needed in the observation room, as well as speakers in the test room. A microphone is also suitable to have in the test room so that the test participant can be recorded (Pettersson 2003).
Pettersson (2003) argues that a voice disguiser can be suitable when conducting Wizard of Oz tests, allowing the test moderator to act as both “wizard” and test moderator. Since the interview after the test can contain questions about the audio feedback, the test moderator’s voice should then be disguised during the test, so that the test participant does not hesitate to give honest feedback.
For logging data during the test, data-logging software can be used (see the role of data recorder in section 2.6). Software that records the screen can also be used to log data during the test. (Dumas & Redish 1999; Pettersson 2003; Capra et al. 2009)
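As an illustration only (my own sketch, not drawn from the cited literature), simple data-logging software could amount to little more than writing time-stamped observations to a file; the file name, participant code and event labels below are assumptions made for the example. Python is used here:

import csv
from datetime import datetime

LOG_FILE = "usability_test_log.csv"  # hypothetical file name

def log_event(writer, participant, event, note=""):
    # Each observation gets a timestamp so it can later be compared
    # with the video recording and the other observers' notes.
    writer.writerow([datetime.now().isoformat(), participant, event, note])

with open(LOG_FILE, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "participant", "event", "note"])
    log_event(writer, "P01", "task_started", "Task 1: find the weather forecast")
    log_event(writer, "P01", "incorrect_field_entry")
    log_event(writer, "P01", "task_completed")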
Eye-tracking devices are not common in usability laboratories according to Nielsen (1993), but are sometimes available (see section 2.4.2 for more information about eye-tracking). Other facilities that can be useful to have in the laboratory are a printer, an audio mixer, a portable wall (if the laboratory consists of only one room) and headphones (Dumas & Redish 1999).
Common facilities in portable laboratories
Performing usability studies outside the usability laboratory is becoming more common (Rogers 2011). Some usability studies are better conducted in the context of use (Rubin & Chisnell 2008). Nielsen (1993) argues that a portable laboratory does not need to contain more facilities than a notepad and a laptop running the software being tested. Further, the author notes that portable laboratories usually contain a few more facilities. Common facilities in portable laboratories, or facilities used for in-situ studies, are explained below.
When conducting in-field studies, a camera is often used to record the usability test as well as the user’s expressions and reactions (Nielsen 1993; Rogers et al. 2007). If the test runs for a long period of time, it is wise to use a stand for the camera (Nielsen 1993). In order to also record the user’s comments, at least one microphone is beneficial. Nielsen argues that built-in microphones (in the web camera, USB camera or other video equipment) often do not provide good sound quality, which is why external microphones are beneficial to use. Further, Nielsen claims that additional microphones can be beneficial if sounds or comments other than the user’s are to be recorded. Capra et al. (2009), however, argue that tests can be successfully recorded with a web camera and its built-in microphone.
Eye-tracking devices can be a part of a portable laboratory, since portable eye-
tracking devices are available (Tobii 2011).
2.7.2 Should usability studies be conducted in a usability laboratory or in
field?
Rubin and Chisnell (2008) argue that not all usability tests should be conducted in a controlled testing environment such as a usability laboratory, since another environment may fit the product being tested better and give more accurate data.
Nielsen (1993, p.205) argues that with a portable usability laboratory “any office can
be rapidly converted to a test room, and the user testing can be conducted where the users
are rather than having to bring the users to a fixed location.”
Rogers (2011) argues that designing and evaluating outside the laboratory and other controlled environments is becoming common, thanks to new technologies, materials and methods. Prototypes can be designed and assembled in the field by interaction designers, rather than only by engineers and scientists, as the author holds was previously the case. Rogers notes that results from in-field studies differ from results of studies conducted in a controlled laboratory environment. The author holds that the controlled laboratory environment does not include properties of HCI that are present in real life: HCI in real life does not take place as it does in a controlled laboratory, since the laboratory lacks the distractions and disruptions that would normally be present when a user interacts with a computer (system). The author therefore argues that theories about interaction design and HCI that derive from studies conducted in laboratories are not fully applicable, since such theories do not take the actual context of use into consideration. Rogers suggests that part of the solution would be “importing different theories into interaction design that have been developed to explain behaviour as it occurs in the real world, rather than having been condensed in the lab.” (Rogers 2011, p.60). Further, the author argues that new theories should be developed from research conducted in the field, and that the way already available theories are used should be adjusted to in-field studies.
According to Kaikkonen, Kallio, Kekäläinen, Kankainen and Cankar (2005), however, in-field studies are time consuming, and testing in a controlled laboratory environment can sometimes replace in-field testing. This decision should, according to Kaikkonen et al., depend on what is being tested. The authors hold that if it is the interaction in a mobile application that is being tested, a test conducted in the usability laboratory works just as well as in-field testing (the same usability problems are found) and is more effective.
Kjeldskov and Stage (2004) developed various techniques for testing mobile applications in a controlled laboratory environment. They used techniques that were supposed to imitate using a mobile application in a real-life environment that typically demands attention from the user, such as using a mobile application while walking down a city street. They found that simply letting the test subject sit down at a table and use the talk-aloud technique (see section 2.4.2) revealed as many usability problems as, or more than, the techniques imitating scenarios that demand shifting focus. The authors also compared their laboratory techniques to a test done in an actual street. The difference between the techniques was, according to Kjeldskov and Stage (2004), that the test subjects pointed out the most critical problems when performing the tests that demanded a shift of focus, while the tests where the test subject sat down revealed the most critical usability problems as well as less critical ones.
Razak, Hafit, Sedi, Zubaidi and Haron (2010) compared usability testing in laboratories with in-field studies where the test participants were children. In their laboratory studies the authors used usability testing guidelines, but for the in-field studies the authors state that no such guidelines exist, which is why they “applied some techniques suggested from the social studies” (p.104) instead. Razak et al. conducted the in-field studies in a pre-school, where the school’s computers were used. The computers were placed in an area where children who did not participate in the test could “disturb” the test participants. The authors note that it was “very hard to prevent other children from disturbing the test participants due to its physical location” (p.107). According to Rogers (2011), the whole idea of in-field studies is that such disruptions as Razak et al. describe do occur, allowing the test team to evaluate how well (or poorly) the product can be used and understood despite disruptions and disturbances.
Razak et al. found through their studies that in-field studies with children as test participants are not well suited to finding usability problems, but that the “field study is more suitable for understanding children experience with technology [sic]” (p.108). The authors suggest that some steps be added to the guidelines they used, such as visiting the children in their natural environment (of use) early in the development cycle. This suggestion is similar to the first step of Nielsen’s usability engineering lifecycle model (see section 2.4.1 The Usability engineering lifecycle model). Further, Razak et al. suggest that a pilot test should be conducted with a “child representative” (p.108) and that the laboratory should be made safe for the children to spend time in.
According to Kjeldskov and Stage (2004), Kaikkonen et al. (2005) and Razak et al. (2010), studies aimed at finding usability problems are thus not necessarily best conducted in the field. Rogers, Connelly, Tedesco, Hazlewood, Kurtz, Hall, Hursey and Toscos (2007), though, are of a different opinion. Rogers et al. (p.337) argue that “Traditional evaluation methods and metrics, designed for controlled laboratory settings, fail to capture the complexities and richness of the real world in which the applications are placed.” Rogers et al. conducted an in-situ study, “greatly improving both its [a mobile learning device] situated use and usability” (p.338). The in-situ study showed how the mobile learning device actually was used, rather than how it was supposed to be used. The authors argue that not everything can, or should, be collected through in-situ studies; the challenge is to combine techniques that gather the data needed for the research. Rogers et al. combined data logging, video capture of the use, focus groups (consisting of the users who participated in the in-situ study), and logging of user comments throughout the in-situ study. Rogers et al. argue that while the in-situ study was time consuming and demanded a big effort from the test team, the usability problems found in their in-situ study could not have been found in a laboratory environment or anticipated beforehand. The authors note, for example, that the usability problems were not found when a heuristic evaluation was conducted beforehand.
2.8 Uganda
Uganda is one of the fastest growing economies in sub-Saharan Africa. Industrialization and the service sector are, however, currently held back by the daily power outages caused by the energy shortage that Uganda is suffering from. The lack of power in Uganda is a result of increasing power demand, drought, an erratic power distribution system, and delays in the plans for further expanding power generation. (World Bank & Wagner 2010)
The population of Uganda was estimated at 33,425,000 people in 2010 (United Nations 2011). In 2009 there were 9,617,267 telephone subscribers in Uganda; of these, 9,383,734 were subscribers of cellular phones and just 233,533 were subscribers of fixed telephones, according to statistics from the Uganda Communications Commission (UCC) compiled by the Uganda Bureau of Statistics (Uganda Bureau of Statistics n.d.).
“The short message service (SMS) is popular in Uganda. According to UCC, some
294 million SMS messages were sent during the January–March 2009 period,
compared to 190 million in the preceding quarter (October–December 2008).”
(International Telecommunication Union 2009)
The citation above illustrates how widespread the use of SMS services is in Uganda. Ugandans can use their mobile phones for services that let them receive weather forecasts, sports results and other information from databases, such as farming techniques, and for m-banking services (ITU 2009). M-banking is a service used for withdrawing money from ATMs, transferring money and paying bills via the mobile phone (Wireless Federation 2011b). For low-literacy users, some service providers offer voice SMS, making the information available to those who cannot read or write (International Telecommunication Union 2009).
Even though the mobile industry in Uganda is reported to be large and growing, the Wireless Federation (2011a) reported that the performance of the mobile networks is low, since many calls are blocked or dropped (two service providers were said to have 11.1 versus 15.2 per cent blocked and dropped calls).
According to the Uganda Communications Commission (2012), all unregistered SIM cards in Uganda must be registered during 2012-2013. This also includes Internet modems for computers and mobile fixed lines. Unregistered SIM cards will not be usable after 1 March 2013. The registration is done to identify mobile phone SIM card owners, to “track criminals who use phones for illegal activities”, etc. (ibid.)
Internet connectivity in Uganda is increasing. Earlier, Uganda depended on satellites for Internet connectivity, but in 2009 the country was connected to the fibre optics installed along the African east coast (BuddeComm 2012b). Most connections to the Internet are, however, provided by wireless options. Wireless techniques (such as the standards WiMAX (4G) and 3G1) have improved and expanded Internet connectivity in Uganda. According to the International Telecommunication Union (2009), 12.5 per cent of the population in Uganda use the Internet. BuddeComm (2012a) states that less than 20% of the people in Uganda are connected to the Internet or have bank accounts.
1 Wikipedia. Electronic: http://en.wikipedia.org/ [Inspected: 2012-05-23]
2.9 Makerere University
Makerere University is, along with Mbarara University, the biggest university in Uganda in terms of academic research (World Bank & Wagner 2010).
At CIT at Makerere University the undergraduate students can attend courses in interaction
design, among other subjects.
“Human Computer Interaction” is a course that focuses on different aspects of HCI (such as psychology, ergonomics, human information processing and design principles) and on how a user interacts with an interface. (Makerere University n.d. a)
“User Interface Design” is a course focusing on the design, implementation and evaluation of user interfaces such as graphical user interfaces and web sites. The course includes teaching the students how to identify users’ tasks and needs by using different techniques such as prototyping. Both courses are given at the undergraduate level. (Makerere University n.d. a)
At the Master’s level, the course “Web Design and Usability” enables students without prior education in IT, as well as students with prior IT education, to learn about web design (languages such as HTML, XML, CSS, JavaScript and PHP), multimedia technologies (Flash), and how to create a web site with a high degree of usability. (Makerere University n.d. b)
The students at Makerere University do not get any opportunity to conduct usability studies in a controlled laboratory environment, as such facilities are not currently available at Makerere University. As stated above, though, the students do come into contact with usability and with activities related to the field of HCI.
3. Methodology
To fulfil the purpose of this study – to find out what kinds of usability facilities are beneficial for CIT at Makerere University to have, and thereby answer a) What conceptions or misconceptions of usability studies do the stakeholders have? and b) What needs do the stakeholders have for usability facilities? – both primary and secondary sources of data and information have been used. This chapter presents how the topic and methods were chosen, how data has been collected and analysed, and how the research has been carried out.
3.1 Choosing topic
Thanks to John Sören Pettersson, professor and dean at Karlstad University, and his contacts with Dr Rehema Baguma, senior lecturer at the School of Computing & IT, College of Computing & Information Sciences at Makerere University, the opportunity came up for me to go to Kampala, Uganda, and collect data for this study. Dr Baguma is responsible for developing the plans for the usability facilities at Makerere University, and before this study started the plans were at an early stage where stakeholders had just been contacted. This study has been a part of developing those plans further.
My interest in usability studies developed during courses taken at Karlstad University as a part of my education.
3.2 Choosing respondents
The respondents in this study are representatives of the potential stakeholders (for simplicity, sometimes referred to as just “the stakeholders” in this study) of the usability facilities at Makerere University in Kampala, Uganda. The respondents were picked purposively, some for this study in particular. Before my work started, 20 stakeholders were initially picked and invited via email to a consultative meeting. 17 of these 20 invited stakeholders participated, representing 9 different companies, organizations and universities. These companies and organizations come from different sectors, such as the telecommunication sector, non-profit organizations dealing with human rights, an organization dealing with information technology, web and application development, and a university. The stakeholders from universities participating in the consultative meeting belong to Makerere University. At the meeting a survey was handed out in order to map the needs of the stakeholders (see section 3.3.1 The two surveys).
My field research started after the consultative meeting had been held. The data collected at the consultative meeting were then analysed for this study. The analysis showed that stakeholders from areas such as software development, economics and health organizations were missing, stakeholders who were seen as crucial both for the plans for the usability facilities and for this study. Therefore 25 additional stakeholders from 10 companies and organizations were purposively chosen and encouraged to participate in a digitally distributed survey. Some of the respondents were personal contacts of Dr Baguma. These 10 companies and organizations represented the sectors of software development (private sector and Makerere University), banking, health, mobile and telecom.
3.3 Data from primary sources
Data from primary sources means first-hand information such as observations or answers in a survey (Patel & Davidson 2011).
All respondents (the stakeholders) in this study are sources of primary data, and the method used to gather these data has been surveys.
“Surveys can be used at any time in the lifecycle but are most often used in the
early stages to better understand the potential user. An important aspect of
surveys is that their language must be crystal clear and understood in the same
way by all readers, a task impossible to perform without multiple tested iterations
and adequate preparation time. “ (Rubin & Chisnell 2008, p.18)
According to Rubin and Chisnell (2008), a survey is a tool often used to collect data about the potential user. Since this research has been preparatory work, i.e. it has been conducted in the early stages of the project, a survey was judged to be a good method for gathering data. Surveys, though, have the disadvantage of being static. Once the survey has been handed out to the respondents, little or nothing can be changed. If the questions in the survey are not clear enough or are not understood by the respondents, little can be done. There is also a risk with surveys that the respondent interprets the questions in another way than intended. (Rubin & Chisnell 2008) The risk with surveys is therefore that the degree of validity2 is low, since other data than the intended are collected (Silverman 2010).
Conducting interviews with stakeholders might have been the most obvious method of collecting data for this study, but because my time in Kampala for collecting data was limited (I was in Kampala for four weeks) and none of the additional stakeholders had been contacted beforehand for the second survey, I did not have the time to conduct interviews. Conducting interviews would have demanded that the data from the first survey be analysed before my arrival in Uganda, so that it was already clear what additional information needed to be collected. Such an analysis was not possible to conduct before I arrived in Kampala. In Kampala it also became clear that it would have been hard for me to arrange interviews with the local stakeholders: the bureaucracy of the companies and organizations would hinder me, as a student, from interviewing stakeholders about how they conceive of their development of products and services.
Thus, handing out a digital survey was considered to be the best option. Since surveys were also the tool used to gather data in the project before my arrival, the data from the second survey could be used to complement the data from the first survey.
3.3.1 The two surveys
The work of collecting data for this study has been done iteratively, with two surveys handed out to purposively picked respondents.
2 Validity: ”The extent to which an account accurately represents the social phenomena to which it refers”
(Hammersley 1990, cited Silverman 2010, p.439)
The first survey
The first survey was handed out at a consultative meeting held at Makerere University in Kampala. The meeting was held with 17 people representing companies, organizations and Makerere University who were thought to be possible stakeholders of the usability facilities (see 3.2 Choosing respondents). The meeting started with an introduction to what usability studies are and why they are conducted. Then a survey was handed out to all the participants at the meeting, in order to collect the stakeholders’ needs and conceptions (and misconceptions) of the usability facilities. The survey consisted of 11 fill-in questions about what the stakeholder would want to see in the facilities and what the stakeholder would like to use the facilities for (see appendix 1). Fill-in questions are questions that allow the respondents to write the answers themselves, instead of choosing among fixed alternatives with checkboxes or scales such as the Likert scale (Rubin & Chisnell 2008).
After answering the survey, the participants had the opportunity to ask additional questions, which they did (see appendix 1). Dr Baguma held the consultative meeting and put the first survey together.
The second survey
It was decided that a second survey would be handed out to additional stakeholders after an analysis of the data initially collected through the first survey had been conducted. As mentioned in 3.2, this decision was made because information from certain stakeholder groups was missing. Because this study aimed to map as much as possible of the stakeholders’ needs and conceptions, all possible stakeholders needed to be included. The analysis also showed that certain information needed for answering the research questions of this study was missing, such as what conceptions the stakeholders have about usability and usability studies.
The conceptions the stakeholders have about usability studies might affect how their answers to the rest of the questions in the survey should be interpreted. Therefore the second survey included a question where the respondents were asked to explain what usability studies mean to them, and when, why and how such studies are conducted. Their (mis)conceptions might show that knowledge of usability studies must be disseminated in Ugandan companies and organizations before the usability facilities can be successful. The stakeholders’ (mis)conceptions might also show that the usability facilities could be used to educate or train the companies and organizations in Uganda to better understand usability, getting them to realise that usability engineering is a beneficial process both for themselves and for the users of their products.
It was also decided that a question about how requirement engineering is conducted would be included in the digital survey.
The choice of method for collecting data meant that no verbal introduction was given for the second survey, in contrast to the first survey. Since the data from the first survey showed that knowledge about usability and usability studies was not widespread among the stakeholders, it was decided that a written introduction would be attached to the survey. This introduction would give the stakeholders just as much information as they needed to be able to answer the questions of the survey. By looking at the questions asked at the consultative meeting, and roughly using the same introduction as given there, the written introduction to the second survey was phrased and tested in a pilot (see below); the final version read as follows:
Usability testing is a process where products including software products are
evaluated by testing them on users. This process can be a part of the whole
development cycle, from requirement engineering, to evaluating the finished
product. Usability tests can be conducted in a defined laboratory set up or at the
place of use. The purpose of usability testing is to measure how a product meets
the user’s expectations and needs: if the product is easy and efficient to use, easy
to learn and error free. Our interest in this exercise is in testing users’ ability to
deal with computer programs, mobile services, and web sites.
The purpose of this survey is to find out from key stakeholders like you, if you
would find such a facility useful and what services you would want from it.
Please spare for us a few minutes and let us know what you think by filling this
short survey.
Setting up the second survey
The second survey (referred to as “the digital survey”) was set up using the survey tool in Google Docs. This is a free tool available online, which is why it was chosen for this study. To distribute the survey, a link can be handed out to participants, who then answer the survey and submit it directly in the web browser. The answers are automatically collected and put together in a spreadsheet by Google Docs, and the data is available for export3.
3 Available formats for export from the Google Docs survey tool are Excel, OpenOffice, PDF, CSV, Text and HTML.
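As an illustration only (the file name and column layout below are assumptions, not the actual export used in this study), the exported CSV file could be read and grouped per question with a short Python script such as the following:

import csv
from collections import defaultdict

answers_per_question = defaultdict(list)

# "survey_responses.csv" is a hypothetical name for the exported file;
# each column is assumed to hold the answers to one survey question.
with open("survey_responses.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        for question, answer in row.items():
            if answer and answer.strip():
                answers_per_question[question].append(answer.strip())

for question, answers in answers_per_question.items():
    print(question, "-", len(answers), "answers")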
The digital survey consisted of 16 questions. The questions in the digital survey were structured into three thematic groups (apart from the demographic data): 1) the respondents’ view of usability, 2) the usability engineering processes of the stakeholders, and 3) the stakeholders’ potential use of usability facilities at the university.
Two of the questions were yes/no questions, while the rest were fill-in questions. All questions were on the same page so that the participant could easily scroll down the page to see all the questions. The written introduction was included at the beginning of the survey.
The respondents were invited to participate in the survey via email. In addition to the link pointing to the survey, the email contained a short introduction to what the survey was about. The emails were sent from a CIT email address, which presumably added trustworthiness to the survey itself.
To minimize errors and bugs in the survey, a pilot test of the digital survey was conducted (Rubin & Chisnell 2008).
3.3.2 Pilot testing the digital survey
A pilot test is used to get the errors and “bugs” out of the test (Rubin & Chisnell 2008). “The importance of conducting one or more pilot tests cannot be overstated,” say Rubin and Chisnell (2008, p.215). They hold that conducting at least one pilot test is of high importance, which is why the digital survey used to gather data in this study was pilot tested.
“Ideally, you should use a ‘real’ participant, perhaps someone who is on the lower end of the expertise scale for what you are doing in this test.” (Rubin & Chisnell 2008, p.215) The test subject of the pilot test was intended to be as similar as possible to the stakeholders answering the survey. Therefore the participant in the pilot test of the digital survey worked in the IT/web industry and was said to have good knowledge of English.
The pilot test was conducted by inviting the test subject to the survey via email. The email included information about the pilot test, as well as additional questions for the test subject to answer. These questions were:
• “Did you fill out the whole survey?”
• “Did you read the whole introduction? If no: Why?”
• “Did you find anything in the survey hard to understand (phrasing, words,
questions, introduction, etc.)? What?”
• “Do you have any further comments?”
Only one pilot test was conducted on the digital survey. Ideally, a few more pilot tests would have been conducted after changes had been made to the survey, but because of time constraints no more pilot tests were carried out. The pilot test showed:
• That the introduction to the survey was too long, which is why the respondent did not read all of it.
• That one question, referring to an earlier question, referred to the wrong question.
• That the test subject wanted to change the order of some of the questions.
After the pilot test, the introduction was shortened and the errors stated above were corrected.
3.4 Data from secondary sources
Secondary sources mean knowledge that is already available. Data from secondary sources have earlier been created or collected by someone else for another study or other research. (Patel & Davidson 2011)
3.4.1 Choosing and collecting data from secondary sources
The result of the literature review is reported in chapter 2. In this study, Libris and Ebrary have been used to find books useful for the study, in addition to what had been gained from courses taken by the author. Libris is a database of books available at Swedish libraries, and it was used to find books available in the library at Karlstad University. Libris can also be used for inter-library loans, but this service was not used during the study since all books were available at the library at Karlstad University. Ebrary is a service that provides books electronically, and access is granted via Karlstad University.
The study also includes articles found in databases. The databases searched for this study were those listed as databases within Computer Science and Information Systems by the library at Karlstad University. The library at Karlstad University offers full-length articles from some databases, a service which was used during this study. The databases in which articles were searched for were INSPEC/Engineering Village and IEEE Xplore. Some of the articles found in the databases were not available as full-text articles, which is why they were also searched for in full text in Google Scholar. In some cases the articles were still not available in full length, which is why they are not part of this study.
Google Scholar has also been used to search for other articles, to make sure no articles of interest were missed. Some of the articles found in both the databases and Google Scholar were not available in full length and were not used in this study.
A number of keywords were used in this study in order to find articles4. The search for articles started with keywords followed by an asterisk, which instructs the database to include every result containing a word that begins with the truncated form. This means that the search results for, for example, the keyword “lab*” will include both “laboratory” and “laboratories”. These searches showed that usability is a broad topic, which meant that the search needed to be narrowed down. Several keywords were then combined to accomplish a more exact search. The keywords used in this study derive from reading some of the abstracts and keywords included in the articles from the first searches. As the search continued and more abstracts were read, more keywords were included and combined.
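As a small illustration (my own, not drawn from the cited sources), the effect of such truncation can be mimicked in Python with a regular expression; the word list is invented for the example:

import re

# The database query "lab*" roughly corresponds to this pattern:
# the stem "lab" followed by any (possibly empty) run of word characters.
pattern = re.compile(r"^lab\w*$")

words = ["laboratory", "laboratories", "lab", "usability"]  # invented examples
print([w for w in words if pattern.match(w)])  # ['laboratory', 'laboratories', 'lab']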
Much of the literature used in this study is written by well-known authors with a lot of experience in the field of usability and HCI. I decided which articles to use and not to use by reading the article abstracts, since the title does not always mirror the content of the article. Thanks to the bibliographies in both books and articles, I was able to find more articles and books, as well as to rank the literature found. The ranking of the articles and books used in this study was done by looking at how much the articles and books are referred to by other authors. Many referrals have been taken as a mark of quality, which is why such books and articles have been included in this study.
3.5 Research model
The intention of this study is to analyse the stakeholders’ conceptions, misconceptions and needs. These data are then compared with data from secondary sources such as articles and books. The result of the analysis is a set of recommendations for the usability facilities.
The research was conducted by collecting data from secondary sources in order to give this study a theoretical framework. The data from the secondary sources were used to compare with and validate the data from the primary sources. Data from primary sources have been collected in order to map the needs, conceptions and misconceptions of the stakeholders. By comparing the secondary and the primary data, the analysis was conducted by interpreting the answers to the questionnaires according to the perspectives given in the literature and according to the relevance they have for the plans to set up usability facilities at CIT. The conclusions are recommendations for what usability facilities would be beneficial for CIT to have. The analysis was carried out by reading and compiling the respondents’ answers using three thematic groups (apart from the demographic data): 1) the respondents’ view of usability, 2) the usability engineering processes of the stakeholders, and 3) the stakeholders’ potential use of usability facilities at the university.
4 The following search words were used to find articles in the databases:
use*, usa*, eval*, lab*, test*, equip*, mobile*, usability lab, usability engineering, usability studies, usability tests,
usability, laboratories, computer software selection and evaluation, testing, mobile, equipment, rural, Uganda,
developing, countries, cross-cultural, non-Western countries, usability testing environments, internet, living lab,
design engineering, laboratory design, program testing, software usability laboratories, telecommunication
equipment, test facilities.
4. Results
Respondents of the first survey
20 persons were invited to the consultative meeting. 17 persons attended the meeting and answered the first survey. The respondents of the first survey, who were also attendees of the consultative meeting, came from the telecommunication sector, non-profit organizations dealing with human rights, an organization dealing with information technology, web and application development, and one university.
The respondents in the first survey are not referred to with numbers, since the
compilation did not include such groupings. The answers to the first survey are available in
full length in appendix 1.
Respondents of the digital survey
The digital survey was sent to 25 additional potential stakeholders. 9 persons answered the digital survey. The respondents of the digital survey stated that they held the following positions in their organizations: head of fraud prevention, project manager, senior developer, web administrator (2 respondents), technical manager, applications developer, and software developer. The respondents of the digital survey belong to the following sectors: economics, universities, telecom/mobile, and software development. Two of the respondents from the software development sector belong to the same company. Two of the respondents from universities belong to the same university.
The respondents of the digital survey are referred to with numbers. The answers to the digital survey are available in full length in appendix 2.
4.1 The potential stakeholders’ take on usability
In this section the answers from question four (4) in the digital survey will be presented.
4.1.1 What does usability studies mean to you? When, why and how are
they conducted?
As explained in section 3.3.1, the first survey did not include any questions about what
conceptions the stakeholders have about usability studies. The second survey included a
question where the respondents were asked to explain what usability studies mean to them,
and when, why and how such studies are conducted. Below are two examples of what the
respondents of the digital survey stated that usability studies are:
“Usability studies refer to the fact finding techniques on how easy a user is able to
use something. Which something may be a website, system, phone, car, computer
etc.” (R5)
“Usability is the science and art of ensuring that designs can easily be interpreted
by first time users, learnt and providing the functionality intended to translate
into satisfaction.” (R6)
The stakeholders highlighted that usability studies involve the user or the targeted user. The respondents said that usability studies are conducted as a process of getting to know how “easy and intuitive” (R2) the system is, how well a product “meets its intended use / purpose” (R3) and how “friendly, efficient, relevant” (R4) the system is to its users. Respondent 6 also included “first time users” in his or her answer. Another argued that usability studies are about finding out how “easy to learn, easy to use and how stable the product is” (R7).
4.2 The potential stakeholders’ usability engineering processes
In the following section the answers to question 13 (Are the users of the software you develop involved in the collection and specification of requirements?) and question 14 (If yes: How are the users involved?) from the digital survey are presented. These answers show how the potential stakeholders conduct their usability engineering processes today.
4.2.1 Are the users of the software you develop involved in the collection
and specification of requirements? How are the users involved?
As explained in section 3.3.1, the first survey did not include a question about how the stakeholders collect and specify requirements for the software they develop. The respondents of the digital survey were asked if the users of the software that the stakeholders develop are involved in the collection and specification of requirements. All the respondents stated that the users are involved; all the stakeholders said yes. The stakeholders were then asked to describe how the users are involved.
Four respondents described how the users of the software being developed are included in the requirements work by saying that they conduct interviews or discussions, or hold meetings with the users. Another claimed that
“The users are involved during requirement analysis to identify what exactly is
needed of the new system and their expectations.
Also users are involved at the testing phase to know whether the desidned system
meets their needs. [sic]” (R8)
4.3 The stakeholders’ potential usage of the usability facilities
The answers reported in this section come from both the first survey and the second survey. They show for what and how the potential stakeholders would like to use the usability facilities, if such facilities were to be established by Makerere University. This section includes answers to questions 2-9 in the first survey, as well as questions 5-6 and 9-12 in the digital survey.
4.3.1 If a usability lab was established at Makerere University, would you
be interested in using it?
All respondents of the two surveys stated that they were interested in using the usability
laboratory if it were established at Makerere University.
4.3.2 Where would you want to use the facilities?
The majority of the respondents of the first survey answered that they want to conduct their usability tests in environments other than a controlled laboratory environment.
The digital survey showed that the respondents wanted to use the facilities both in a controlled laboratory environment and in the field. Five potential stakeholders stated that
they wanted to use the facilities in other places than a laboratory at Makerere University. One
respondent said that he or she wanted to use the facilities at the company the stakeholder
belongs to. Three other stakeholders stated similar needs, such as “Both at the school and in
the field” (R7), “In an environment where customers are and other staff” (R9) and ”at various
national reseach centres [sic]” (R1). One stakeholder explicitly asked for a portable usability
laboratory by stating
“Also, a mobile usability lab might be a good idea (like if the facility could have a
mobile unit equipped with means of performing tests "out in the wild", on the
streets or right at the target-user's premises (e.g for products meant to be used in
hospital, the army, factories, etc)” (R3)
The same stakeholder also wanted to be able to use the facilities at Makerere University, stating “Preferably at the University premises (advantage of high concentration of highly skilled stakeholders and a good share of totally un-skilled / green users).”
The answer “CoCIS Block B” (R4) also shows that some of the stakeholders want to be able to use the facilities at Makerere University, since CoCIS is an abbreviation of College of Computing and Information Sciences (at Makerere University) and “Block B” is a building on the campus where the college is situated. One stakeholder stated that he or she wanted to use the facilities “In the useability lab [sic]” (R8), which is interpreted as meaning that the stakeholder would like to have access to a permanent usability facility at Makerere University.
4.3.3 When would you want to use the facilities?
In the first survey the stakeholders were asked if they had any preferences as to when they would like to use the facilities (see question 8 in appendix 1). Five of the eight answers were along the lines of “Soon” and “Hopefully this year”. Two respondents gave answers implying that they would like to use the facilities when the timing is right, given that their answers were “During product test launch” and “when I have a new app that I want to launch”. One stakeholder said that he or she wanted to use the facilities only “When it has achieved a good percentage of good testing results”. In the digital survey the respondents were asked the same question, and their answers showed that five of the nine stakeholders answering the digital survey want to use the facilities soon, giving answers such as “As soon as its available” (R5), “before the end of the year 2012” (R4), “ASAP” (R6), and “By October 2012” (R7). Three of them gave answers showing that when they would like to use the facilities depends on the project they are in, whether the price is right, or “wherever a need arises” (R8). One stakeholder answered that he or she wanted to use the facilities “any time” (R1).
4.3.4 State services you would want the facilities to provide, and what
services your organization would use
Question four was divided into two parts in both surveys (what services the stakeholder would want the usability facility to have, and what services the stakeholder’s organization would use). The answers to the two questions from both surveys are put together and presented as one in this study. The first survey showed that the stakeholders would like the facility to contain:
• Stress testing facilities
• Ability to simulate connectivity challenges in the field
• Relevant audiences for the systems
• Testing for the design
• Cross platform
• Open source
• Back end testing
• Interface testing
• Entire evaluation of the software product
• 3G coverage
• Wifi, computers, GSM coverage
• Experts
• Address grass root users
• Policy on issue of licenses
• Mobility to reach out to the user environment
• Factors in the local challenges such as power cuts
• A way to enable users express their version of the product
• Some level of networking (both wired and wireless)
• Mobile apps testing
• Platform (OS) independent
• Mobile testing lab
• Tests to be in a typical working environment
• Training and evaluation reports and design testing
• Load and Stress testing infrastructure
• Virtual environment
• Consultation on user interface prototypes (e.g. discoverability, localization, back trackability), assessment of client’s ability to state their needs
• Web apps ease of use
• Mobile apps ease of use
• Simulation lab
• Training section
• Testing section
The first survey showed that the stakeholders would use diverse testing services. Such services would be used to test functional usability, integration, synergy and networking, to conduct stress testing, and to evaluate load times, response times, designs and interfaces. One respondent wanted to test User environment, which I interpret as meaning that the stakeholder would like to test a product in the field, performing in situ studies. One stakeholder wanted the facilities to supply experts who can observe the intended (naïve) users.
Two stakeholders answering the first survey brought up the issue of testing “Sms-web based platforms” and “GSM and Data”. Further, one respondent of the digital survey wanted to use the facilities for “mobile money services testing” and “internet banking testing”.
There were not many wishes for certain equipment in the first survey, but one respondent stated that his or her organization would use “All IT products”, which leads on to what the respondents of the digital survey stated. Wishes for equipment and environment were few in the digital survey as well, but two respondents expressed wishes for a well-equipped laboratory when it comes to computers:
“a fully fledged networked computer labaratory with developent applications.
[sic]”(R1) “Enough Computers and Peripherals please!” (R3)
One respondent also had wishes about the facility itself, saying that the space should be big enough and dedicated, making it possible for users of the facilities to rearrange the space so that it resembles the environment where the product will be used by the end user.
Another respondent wished that the laboratory could also be used elsewhere, stating that it should be possible to conduct field studies of new products. The respondent argued that products that are supposed to be used by “the common man in Uganda” should also be tested “on” the common man.
One respondent answered that the facility should have “Client’s version of the product design”.
The digital survey provided answers that show what types of services the
stakeholders would use in the facilities. Respondent number 2 answered that the facilities
should contain “Data entry operational tools.” (R2).
Echoing what was brought up by the stakeholders in the first survey, one stakeholder answering the digital survey wanted to use the facilities for “3) Research into new devices (tabs, smart phone) usability” (R2). One stakeholder wanted the facility to supply:
“- A service to explain to users what a new product does.
- Collection of comments and responses from users.
- provide the actual product to users and measure easy of use, ease of learning and
error rate.” (R7)
There were also some wishes concerning the practicalities of the usability facility. One stakeholder wanted to “prevent bureaucracy” (R3) by setting up a remote access service available to external parties. The remote access would be used by external parties who want to book the facilities and to monitor, request, and assess tests.
4.3.5 How are these services your organization would use currently met?
To the question “How are the services your organization would use currently met?” the
respondents of the first survey answered:
• Pre-Launch tests
• Pilots
• in house testing
• UI experts
• We do in-house testing by one of the developers with one of the users.
• They are not
• Fitness for purpose verification through Uganda National Bureau of Standards
(UNBS)
• External resources hired for the job
• Management review, regular demos
• Interactive development involving client users
The respondents of the digital survey answered the question by stating:
“(example) is relying heavily on the mother organisation for product development, testing
and evaluation.”(R1).
“We are not a perfect team (no team is :-), but we try to have on board a wide
range of skilled minds. Currently, when someone works on a product, you use
principles to do the first set of usability tests, next someone else from the rest of
staff can offer to emulate a client / user to further test, and eventually, we push
the product out as a beta for the initial set of external users (who might not be the
exact end / final users) to test and provide feedback.” (R3)
R3 stated that they perform tests during the early parts of the development cycle by using members of the development team. When the product is somewhat developed, the product is beta tested by users, but these users are not always the intended end users of the product. Another respondent stated that his or her organisation also uses its team members when developing products, by holding “scrum discussions” (R5).
One respondent (R6) stated that his or her organisation follows standards (heuristics) for the design of menus, fonts, etc. in their products. Further, the respondent argued, “but a lot still needs to be done”.
“1. We deploy the software before it is widely tested by end users or possible end
users and users learn as they use the software.
2. We keep close links with the users through the development process and they
keep testing versions and new functionality as they are completed” (R7)
Respondent R7 stated that his or her organisation educates its users in the use of the software before it is further tested. Further, the respondent said that their users test new functionality of the software as it is implemented. Another respondent (R2) stated that their users are involved in the development process by giving feedback, and that improvements to the software are implemented based on this feedback.
4.3.6 Would your organization be willing to pay (subsidized) for the
services?
Only the first survey included this question, and the respondents’ answers were:
• 12 answered Yes
• 1 answered No
• 1 answered Yes and No
The respondent who answered no stated that the reason was:
• Because the cost may be too high for our organization
4.3.7 What might be your issues of concern that you would want addressed
before you can trust and use the facility?
The respondents of the first survey stated that the issues of concern they would want addressed before trusting and using the facility were:
• Intellectual property issues
• non disclosure agreement, a required time frame
• confidentiality, commitment and adherence to completion
• Usability testing goes inline with load and stress testing: The other tests before
usability testing should be factored in because if the product is not efficient it will fail
• Mobility
• Human resources
• Harmonize issues of standards with the National Standards Body (UNBS) for ease of enforcement
• Confidentiality of software products provided
• Quality of testing
• Need for confidentiality
• Arrangements of Non Disclosure Agreement where info will not be passed on to
competitors
• Ip protection and standards
The respondents’ answers to the digital survey showed that three of the respondents had concerns about confidentiality, as stated below:
“Prior to performing usability tests in the provided facility / premises, there
should be clear and transparent legal steps taken to safeguard the product owner's
interests from prying eyes and evil intentions / users!” (R3)
“Also, I would be happy if the facility would offer some sort of guarantee that
they offer reliable services-- as in ability to allow clients to question the methods
employed in assessing their products, but this might vary from client to client
anyway...” (R3)
The same respondent stated that he or she wanted the facility to be able to show the respondent’s clients that the services, and thus the methods and activities conducted and used in the facility, are reliable.
“- The facility should be able to involve an equivalent of end users in the process.
(example) is a software is meant for secondary school teachers, the facility should
be able to involve a secondary school teacher from (examples)” (R7)
R7 would like the facility to be able to provide the tests with the intended end users of the product being tested, so that the tests contribute accurate data.
The facility should, according to R8, be adapted to persons with disabilities, by enabling access for those who cannot “go to high storage buildings”, and by providing headphones and LCD screens (instead of CRT screens) for those with hearing disabilities and “eye problems” (R8). Further, the same respondent wanted the facilities to be air-conditioned.
5. Analysis
This chapter interprets the answers to the questionnaires according to the perspectives given in the literature and to their relevance for the plans to set up usability facilities at CIT. The chapter is structured according to the three major thematic groups of questions in the surveys (apart from the demographic data), that is, 1) the respondents’ view of usability, 2) the usability engineering processes of the stakeholders, and 3) the stakeholders’ potential use of usability facilities at the university. The third section is the largest, as this is the main focus of this study; it looks ahead and analyses what usability facilities would be beneficial for CIT to have in the future. Finally, a fourth section condenses all the analyses into two precise answers to the two research questions posed in section 1.2: a) What conceptions or misconceptions of usability studies do the potential stakeholders have? and b) What needs do the stakeholders have for usability facilities?
5.1 The potential stakeholders’ take on usability
As explained in section 3.3.1, the first survey did not include any questions about what conceptions the stakeholders have about usability studies. The second survey included a question where the respondents were asked to explain what usability studies mean to them, and when, why and how such studies are conducted. This question was added since the stakeholders’ conceptions of usability studies might affect how their answers to the rest of the questions in the survey should be interpreted. When the feedback from the first survey was analysed, it showed that the stakeholders’ conceptions of usability studies might be that such studies are conducted only once the product is finalised; that usability studies are performed as an inspection. If the stakeholders have severe misconceptions about usability studies, it would be hard for them to give accurate answers to, for example, what services they would like to use in the facilities. Their (mis)conceptions might also show that the knowledge of usability studies must be extended in Ugandan companies and organizations before the usability facilities would be successful. The stakeholders’ (mis)conceptions might further show that the usability facilities could be used to educate or train companies and organizations in Uganda to better understand usability, getting them to realise that usability engineering is a beneficial process for themselves, for the users of their products, and therefore for Ugandan society in general.
Judging from what Chapter 2 has shown usability studies to be about, and from what the stakeholders answering the digital survey stated, the stakeholders have an accurate conception of what usability studies can be and why they are conducted. However, some of the respondents’ conceptions of how and when usability studies are conducted did differ from what Chapter 2 showed.
One respondent stated that usability studies are performed on “a finished product”
(R7). As Chapter 2 explained, usability studies can be conducted on finished products as well,
but the work to ensure usability is more beneficial if it is started early in the development
process. “Through series of interactions with the intended end-user in meetings, discussion groups, opening and trying out already existent sites.” (R5) is an answer indicating that one respondent interprets usability studies as being performed on finished products, and not so much by letting the users use the product as by studying usability through talking with users in focus groups and meetings. These methods can be used for gathering feelings and opinions from users, but should be complemented by user tests as well (Rubin & Chisnell 2008). Three respondents did not answer at all how and when usability studies are supposed to be conducted. Either they do not know, or the question was not phrased well enough.
One respondent stated that the usability studies are performed during “system
simulation time” (R1). This indicates that the respondent thinks that usability studies should
be performed before the system is fully developed, as a way to find out how the system
should work and what functionality it should contain, which the authors reviewed in
Chapter 2 agree on.
One respondent stated that “usability studies should be employed during design and
construction of a product, and before a product is rolled out to the public / final user
domain.” (R3), just as Chapter 2 stated to be a valuable development process.
“Any product designed to be used (regardless of who the user is), can be tested for
usability by employing users chosen using stochastic sampling from the possible
user domain, to use / interface with the product (or its prototype) in free AND
controlled sessions, and notes / scores taken of their satisfaction and efficiency
while using it.” (R3)
The same respondent (see the citation above) stated that the user tests could be performed either by letting the user use the product, or by letting the user use a prototype of it. This implies that the respondent is aware that usability studies can be conducted before the product is fully developed, for example by evaluating the product using a prototype. It is unclear to me what the respondent means by free sessions, but I interpret them as the opposite of controlled sessions, i.e. sessions conducted in the field.
“conducting usability tests would vary from product to product, mostly
depending on the kind of technology under test and the intended user / target
domain.” (R3)
Further, the respondent is aware that usability studies vary from product to product (see the citation above).
These answers, even though some of the respondents have accurate conceptions of usability studies, show that some work remains to be done in order to get the stakeholders to understand that usability studies and the facilities can be used throughout the whole development process. This would make the facilities more used if they were established by or at Makerere University, and would further benefit Ugandan society in general, as the software and products being developed would be made with usability in mind from the beginning.
Since three respondents did not provide any answer to the question, it is possible that it would have been clearer if the question had been divided into at least two parts: one where the respondents were asked what usability studies mean to them, and one where the respondents were asked to illustrate when, why and how usability studies are conducted.
One of the three respondents who did not answer the question provided citations from different websites about what usability studies are, which made it hard to interpret whether the respondent actually understands what usability studies are or not. The chosen citations from the websites were, however, accurate. They explained and answered the latter part of question 4, but I will not analyse the answer in this study, since I wanted the respondent’s conception of usability studies, not a website’s.
5.2 The stakeholders’ usability engineering processes
As explained in section 3.3.1, the first survey did not include a question about how the stakeholders collect and specify requirements for the software they develop. However, the analysis of the feedback from the first survey showed that the stakeholders’ conceptions of usability studies might be that such studies are conducted only once the product is finalised; that usability studies are performed as an inspection. Therefore it was decided that a question about how the requirements engineering is conducted would be included in the digital survey.
Two of the respondents said that they visit their users’ premises. One of these stakeholders described how their users are involved in the development process, further stating that
“[…] - we conduct interviews,
- we review organizational documents with the help of users
- we document the process, do visual representations and review the process with
the users.
- we involve users thru the development process so as to keep on track. Users
confirm that that is what they want or help us incorporate change before long.”
(R7)
There were two answers that did not describe how the users are involved: “They provide usually 50 - 70% of the system functional requirements” and “Our clients’ participation in the SDLC5 varies from product to product, but we tend to follow an Agile approach many times.” (R3) These answers state that the users are involved, but not really how. Thus, some of the respondents did not actually describe how their users are involved. Perhaps the question should have asked for details, but it is also possible that some of the stakeholder representatives answering the digital survey are not themselves involved in the requirements work, which is why this work cannot be described any further. The answer “There is a project manager who interacts with them to identify the needs.” (R9) implies that this can be the case.
By analysing the respondents’ answers I draw the conclusion that the respondents do not see the requirements work as usability engineering, nor do they see usability engineering methods as a way to involve their users in specifying requirements. This indicates, just like the respondents’ answers to the prior question, that the potential stakeholders do have misconceptions about usability studies. Their misconceptions must be addressed if the usability facilities are to be successful and used to the fullest.
5 SDLC is "Software Development Life Cycle" or "System Design Life Cycle" according to Wikipedia. (2012-05-15)
Electronic: http://en.wikipedia.org/wiki [Inspected: 2012-05-29]
5.3 The stakeholders’ potential usage of the usability facilities
Even though all 26 potential stakeholders stated that they are interested in using the usability laboratory if one were to be established at Makerere University, this cannot be interpreted as meaning that the laboratory already has 26 clients. Showing interest in a project is not the same as actually paying for and using such facilities. On the other hand, there might be additional potential stakeholders who did not participate in this research but who are potential clients. Therefore the conclusion cannot be drawn that the usability facilities will have 26 clients from the start, or that the laboratory would have “only” 26 clients; there might be additional clients who are yet to be contacted and informed.
Since the majority of the respondents of the first survey answered that they wanted to use the facilities in places other than a laboratory, most of the facilities need to be portable. The digital survey shows, just like the first survey, that the stakeholders would like the possibility of using the laboratory outside the university’s premises. But because the students of Makerere University are going to use the facilities as a part of their education, the facilities need to be available at the university as well. In addition, some of the stakeholders (2 in the first survey, 3 in the digital) would benefit if the facilities were accessible on the campus premises.
5.3.1 When the stakeholders would like to use the facilities
It is a positive fact that many of the respondents said that they would like to use the facilities soon, since if the facilities are established at Makerere University, potential “clients” that are aware of the facilities will be available from the beginning. However, the stakeholders raised two issues that need to be addressed when establishing usability facilities.
Respondent number 3 of the digital survey, who said that he or she wanted to see some results from the usability tests conducted in the usability facilities before using them him- or herself, seems to have somewhat misinterpreted the facilities, or to have some doubts about the actual benefits or expertise that the facilities could add to his or her development or
organization/company. The demand of showing what the facilities have accomplished is a
hard demand to meet. It is not simply a question of distributing reports from conducted usability evaluations, since the parties involved in these may not at all want CIT to disclose information, as several respondents wrote (see section 4.3). Moreover, what would good testing results mean? Is it how many usability problems have been found in the facilities, or perhaps how much the facilities have contributed to the usability of the tested products? If few usability problems are found through a usability test, are the tests really unproductive, or is the product being tested so well developed that few problems are found? Or are the usability tests conducted in a poor way? By conducting a heuristic evaluation many usability problems can be found, but as Chapter 2 showed, a heuristic evaluation may not identify the most severe usability problems from the intended end users’ point of view. Further, the usability engineering lifecycle model by Nielsen (1993) shows that the iterative development cycle is an important part of the development process. If the usability problems found through usability studies are not dealt with, the usability of the product will not be improved. The results of the usability tests must be acted upon; the development process must be iterative if the product’s usability is to be ensured. The problems found in usability studies depend on which iteration the product is in and how the studies are conducted (Nielsen 1993). Nielsen argues that it is hard to develop a perfect interface from the beginning, which is why many usability problems can be found during the first iteration. But if no usability studies are conducted at all, naturally no usability problems are found (by the development team). One possible solution to this issue is to provide an executive viewing room, though this would require that the facilities are situated in a fairly large dedicated space where a mirrored wall is installed. An executive viewing room would let the external parties watch the tests being conducted, letting them decide on the quality of the usability evaluations for themselves. This would, however, require that they agree to have their own products tested at least once, since other parties may not want CIT to disclose information or results from tests. Recording the tests would provide another way for the external parties to judge the quality of the tests being conducted, by watching the recordings. This would, however, also require that they agree to at least one usability test being conducted, for the same reason as stated earlier.
”As soon as they are available and affordable -- because there's already a bunch of
products that would benefit from such a facility.” (R3)
This respondent identified an issue that needs to be dealt with if Makerere University is going to establish the usability facilities: the cost issue. How much is it reasonable to charge for the services and facilities; what are the external parties that use the usability facilities going to pay? How much are the stakeholders willing to pay for such services? If the facilities are adapted to the potential stakeholders’ needs researched in this study, the charge should take into account that the external parties come from companies of different sizes, and that some of the stakeholders are stand-alone developers. Depending on what services the facility provides, the charge could be adapted accordingly. The charge could also vary depending on what services the client is using. One question is whether all services should be charged at the same rate. Portable facilities could be charged for by letting the clients pay a rental fee. The “risks” of using devices elsewhere than in a controlled laboratory could be incorporated into the rent. Risks with letting clients use facilities outside a laboratory could be that the equipment is forgotten, stolen, or simply worn out. One solution to the charge issue could be, as implied by the question about the respondents’ willingness to pay for the services, that the charge would be subsidized.
5.3.2 Services the facilities should provide
Indeed, the potential stakeholders want to use the usability facilities if they are established at Makerere University. There are probably additional potential stakeholders with needs who could be contacted about the usability laboratory, but who are not aware of the benefits of conducting usability studies and usability engineering. Therefore some needs might not have been included in this study, since those needs were not collected. Still, all the respondents participating in this study have stated that they are interested in using the usability laboratory if one is to be established at Makerere University.
If the stakeholders had not used any usability facilities before, it might be hard for them to state what services they would like to see as part of the facility possibly being established at Makerere University. But this did not seem to be a problem, since the respondents had many opinions about what services such facilities should contain and what facilities they would use. However, not many of the stakeholders provided answers that showed an interest in using services of the kind described in Chapter 2. Some of the respondents of the first survey stated that they wanted to perform