Accessibility of Different Natural User Interfaces for
People with Intellectual Disabilities
Melinda Braun∗, Matthias Wölfel∗, Gregor Renner†, Christian Menschik‡
∗Karlsruhe University of Applied Sciences, Germany, {melinda.braun,matthias.woelfel}@hs-karlsruhe.de
†Catholic University of Applied Sciences, Freiburg, Germany, gregor.renner@kh-freiburg.de
‡Furtwangen University, Germany, christian.menschik@hs-furtwangen.de
Abstract—Digital technologies have many advantages for
users, such as virtually unlimited access to information, enter-
tainment, and communication. Most modern human-computer
interfaces are developed under the assumption that they will
be used by a person with typical physical, intellectual, and
perceptual abilities. Although some operating systems already
include accessibility features, in most cases the effective use
of the respective interface can be severely restricted if a
person's abilities deviate from this norm. To what extent the class of 'natural' user interfaces—including touch, voice, and touchless—is accessible to people with intellectual and possibly motor disabilities is an important but not yet investigated question. Therefore, this paper investigates the current
accessibility of these three interface types. First, we conducted
a field study to figure out how the target group interacts
with these types of interfaces in general. Second, quantitative
data on cognitive and motor skills was collected using parts
of the Questionnaire for Observing Communicative Skills -
Revision (OCS-R) which is widely used in institutions for
people with disabilities in Germany. Finally, the accessibility
of each interface type was analyzed with the help of the data
obtained from the questionnaire and an expert survey, which
determined the important and unimportant skills required for
each interface. These findings show how usable different types
of natural user interfaces are for this target group.
Keywords-touch user interface; voice user interface; touch-
less user interface; accessibility; cognitive disabilities; intellec-
tual disabilities; digital divide
I. INTRODUCTION
Digital technologies have many advantages for users, such
as virtually unlimited access to information, entertainment,
and communication. Although these opportunities would
also be relevant for people with intellectual disabilities (ID),
for this target group it is particularly difficult to use infor-
mation and communication technology (ICT). This results
in a digital divide—a separation between users and those
who are excluded from usage due to various constraints—
between people with (primarily intellectual) disabilities and
the ”networked ordinary citizen” [1]. We refer to persons
with ID, following the World Health Organization (WHO),
as persons with “significantly reduced ability to understand
new or complex information and to learn and apply new
skills (impaired intelligence)” [2]. This includes a spectrum
from persons with mild learning disabilities and reading and
writing skills or people using simple language, to persons
with profound intellectual disabilities, e.g. without apparent
language comprehension or intentionality. This may imply
access barriers to use digital devices, as they are mainly
developed under the assumption that they will be used by
a person with typical physical, intellectual, and perceptual
abilities. Although some operating systems already include
accessibility features, in most cases the effective use of the
provided interface can be severely restricted if a person’s
abilities deviate from this norm. It is interesting to note that this effect is not exclusive to people with disabilities, but can also affect people depending on their heritage or the region where they grew up, for example when the dialect they speak is not well understood by automatic speech recognition [3].
Although there are a lot of configuration options available
today, it has not yet been investigated to what extent new
types of interfaces and their configuration options can be
used by people with ID. While traditional interfaces required
simple algorithms that could be easily written in code, new
interfaces rely on pattern recognition, which requires training data. The particular problem with this training data lies in its sparsity and bias, as it covers only the general public.
For people with disabilities (PWD), access to information
may be limited by barriers, such as access to spoken infor-
mation for deaf people, written information for blind people,
or writing with a pen or typewriter for people with motor
disabilities. On the one hand, ICT has created new barriers
in its development, on the other hand, assistive technologies
have opened up new possibilities, such as screen readers,
braille lines and braille keyboards for blind people. Special
keyboards and/or special input systems with a single sensor
in combination with a scanning process via the alphabet, as
used by Stephen Hawking, enabled people with severe motor
disabilities to participate via the computer in a completely
new way.
In addition, more and more interface types have been developed over the last decades—from keyboard and mouse to speech or gesture and now even brain-computer interfaces. This variety can make it difficult for many PWD to use certain technologies and to determine which of those technologies are usable or can be adapted to fit their particular needs.
Therefore, more research is needed to study the use of
different interfaces by people with ID and different cognitive
or motor abilities needed to use those interfaces today.
This can help to find out what interfaces are suitable for
individuals with different forms of ID or what needs to be adapted in
those interfaces. This paper provides a first glance at what
abilities are needed to use different types of interfaces.
II. RELATED WORK
Current research on technological equipment of people
with ID indicates that the target group rarely uses smart-
phones or other digital devices. Research on structural barri-
ers has shown that people with ID often do not have access
to technological devices and often live in institutions that
don’t provide internet access [4], [5]. Individual problems
in computer-aided thinking and digital competence make
it difficult for people with ID to use structurally complex
digital devices. As research also shows, ICT can have many
benefits for people with ID in terms of participation in the
physical world. ICT can be used to increase participation in
social interaction [6], in indoor pathfinding [7], in access to
leisure activities [8], in teaching skills for daily living [9],
and even as a replacement for augmentative and alternative
communication (AAC) devices [10].
The universal design approach (aka design for all) was
developed to make it easier for PWD to access standard
commercial technologies, since these technologies are often
cheaper to purchase and are updated more regularly than
special assistive technology. This approach aims to design
and develop systems that can be used by everyone, regardless of their physical or cognitive abilities. Since the abilities
of users are very diverse, it is almost impossible to take
everything into account when designing technologies, but
some mistakes that make access to technology difficult for
certain groups of people can be avoided [11].
Education and skills training are common areas where
digital devices are used by people with ID. In order to
develop an e-training platform for the target group on which
the use of common applications such as Facebook, YouTube
or WhatsApp can be trained, Ferreras et al. found that the
most common problem preventing the target group from
using ICTs is the difficulty in finding suitable apps and that
they lack knowledge on how to use them [12].
Due to the large number of different technologies, interfaces, and applications available today, it is often difficult
to find suitable and accessible interfaces or devices for
people with ID. Current research refers mainly to special
applications and functions and special types of disabilities,
not to the type of interface in relation to people with ID
in general. To limit the digital divide, the use of natural
interfaces, as discussed below, and their possible adaptation
for people with ID is needed.
A. Touch User Interfaces
We refer to touch user interfaces as any type of computer-pointing input technology that requires touching a surface with or without a display. In the last decade, touch interfaces have replaced other forms of interfaces, are now widely used on smartphones and tablets, and are getting more attention as an alternative input device on PCs [13].
While smartphones and touch interfaces are ubiquitous in
society, people with ID still face problems in their use, such
as small screen, text, or button sizes, difficult error handling, insufficient feedback, and a large number of interaction methods (tap, flick, pinch, etc.) [14], [15]. People
with additional visual impairment in particular experience
problems interacting with touch interfaces, although there
are some touch interfaces with accessibility features [16].
Saenz de Urturi Breton et al. offer a set of guidelines that
developers can consider when designing accessible touch
interfaces to improve usability [15].
B. Voice User Interfaces
We refer to voice user interfaces—which are also known as conversational user interfaces and include voice assistants—as any type of interface that lets the user interact with a machine and perform tasks with voice input, and which can also include voice output [17]. Voice user interfaces can run
on devices such as PCs and smartphones but are now more
commonly used on specially designed devices such as Echo
from Amazon (with Alexa) or Google Home (with Google
Assistant).
According to Pradhan et al., voice assistants can un-
intentionally be accessible to people with disabilities and
increase efficiency and independence when using a digital
device. At the same time, people with speech impairments
can face accessibility issues due to the device's speech recognition [18]. Even for people with ID, voice assistants can
be a suitable interface for operating a device. Balasuriya
et al. [19] observed people with ID using voice assistants
to perform specific tasks and showed that most of the
participants could easily use their voice to activate the
interface at the first attempt. Some participants had difficul-
ties pronouncing “Siri” or “Google”, others needed several
attempts to activate the interface, but then had problems
consistently maintaining correct pronunciation. Participants
appreciated voice assistants, as they can avoid spelling and
typing problems. For this reason, Balasuriya et al. propose
adjustable input settings to make voice interfaces accessible
to speech-impaired users [19].
C. Touchless User Interfaces
We refer to touchless user interfaces as any type of interface that allows the user to command the computer via body motion and gestures without physically touching a keyboard, mouse, or screen; this class excludes voice user interfaces. Touchless user interfaces have
not yet entered the mainstream, but are widely used in
special applications such as gaming (e.g. using the Microsoft
Kinect), as an input modality in virtual reality (e.g. using
a Leap Motion Controller), and head- or eye-tracking (i.e.
Camera Mouse, Tobii Dynavox). In the field of assistive
technologies, touchless approaches exist that enable the
control of wheelchairs or other assistive devices. They can
recognize specific movements of the hands, face or other
parts of the body and thus are especially interesting for
people who are restricted in motor control and cannot use
buttons, joysticks or touch [20].
Currently there is not much research on touchless user
interfaces and people with (intellectual) disabilities. Saenz-
de-Urturi and Garcia-Zapirain Soto developed and tested a
Kinect-based game to correct poor posture of elderly people
that took into account cognitive and physical disabilities
of their target group. They stated that those interfaces
have much potential for specialized forms of (elderly) care
applications that are low-cost and enjoyable [21].
D. Adjustability of User Interfaces
Current research indicates that commercial technologies
have the potential to contribute to assistive or educational
settings for people with ID. It also shows, however, that
these technologies either need to be adapted or users with
ID need to be guided by non-disabled persons in order to
exploit the full potential of the respective device [19], [22],
[23]. While simpler interfaces can be adjusted without much
effort—e.g. extending a knob with clay or replacing one button with another—interfaces that heavily rely on pattern recognition, such as the interfaces investigated here, are much harder to adapt.
III. METHODOLOGY & TARGET GROUP
In order to find out how interfaces need to be adapted
in the future, this study must first determine the current
accessibility status of various user interfaces when used
by people with ID. For this first overview, touch, voice,
and touchless interfaces have been considered. Since there
is no standardized method or questionnaire to date that
examines the use of different user interfaces by people with
ID, an experimental approach using an observational study,
a questionnaire and an accessibility analysis was chosen.
Of course, such a study can never be comprehensive and
depends on the use case. In our observation, we decided on
a generic application for navigating and playing music that can be designed well for different interface types.
A. Questionnaire for Observing Communicative Skills
To find out what basic cognitive and motor skills the
people with ID have, a questionnaire using modules of
the Questionnaire for Observing Communicative Skills -
Revision (OCS-R) was distributed to the participating in-
stitutions and filled out by carers or managers for the
individuals involved. The OCS-R is a structured diagnostic
observation instrument for assessing communicative abilities
and forms of expression of children, adolescents, and adults
with limitations in their communicative abilities or their
communicative development [24]. The OCS-R is initially
not designed for interfaces-related decisions and of course
cannot take into account all parameters that would be
required to define the fit to a particular interface, but the
importance of this questionnaire lies in its availability as
it is already present (filled in) in many institutions. This
fact can speed up and simplify the process of selecting
suitable interfaces enormously since the carers of people
with ID do not have to fill out an additional questionnaire.
In addition, institutions often employ people without a
technical background. A questionnaire that is too interface-
specific could be quite difficult to fill out for people
without technical background knowledge. This makes the
OCS-R a potentially suitable tool to find out which interface
types might be suitable for an individual or which interface
may be used when adapted correctly.
To find out which categories or parameters of the OCS-R
are relevant for the three interface types mentioned above,
all items were later evaluated by experts in the field of
human-machine interaction or interaction/interface design.
This evaluation resulted in an adapted version of the OCS-
R, which contains only the relevant items about interface-
specific abilities.
B. Target Group
This study included individuals with a range of differ-
ent forms and degrees of ID, some with additional motor
impairments. All participants were recruited from three
different institutions in southern Germany. The majority of
the participants (76.9%) live in residential groups in these
facilities. Some less restricted participants (23.1%) live
in outpatient residential groups, which also belong to the re-
spective facilities. Intellectual disabilities include a spectrum
from persons with mild learning disabilities to persons with
profound ID, e.g. without apparent intentionality. Since this
study uses oral and visual tasks related to different interface
technologies, the study includes an observation to find out
if the participants are able to complete these tasks or not.
Not all participants who took part in the observation filled
out the questionnaire afterwards; for this reason, the number
of participants in the observation is higher than in the rest
of the study. The aim of this study is to include “all” people
with intellectual disabilities as far as possible, thus obtaining
a cross-section of the target group.
Access to this target group is made particularly difficult
by the fact that some individuals cannot give their consent
to studies themselves. In such cases, for example, parents or legal advisors must give their consent before the individuals participate in scientific studies.
This process can be quite difficult and time-consuming. In
order to represent the best possible interests of all parties
involved, we have had an ethical application approved by
the German Society for Educational Science (DGfE) and
only use anonymized data.
IV. STUDY
In this section, we describe our user study which took
place from February to March 2020. A total of 91 par-
ticipants were observed using one of the three interfaces.
Of those 91 people, 23.1% had prior technical experi-
ence (i.e. experience with a smartphone or tablet). 14.3%
additionally—besides their ID—had some sort of visual
impairment, 48.4% had language restrictions (such as un-
clear pronunciation, communication only through sounds or
complete lack of speech), 4.4% had hearing impairments and
18.7% some other kind of physical limitation (like using a
wheelchair). The age of the participants ranges from 31 to
79 years, with an average of 55 years.
A. Observation of Interface Use
In order to find out how the target group interacts with the
different types of interfaces, the participants were observed
and observations noted. The allocation of the respective
interface was randomized. The task for the
• touch user interface was playing music on an iPad via touch gestures; it was tried out by 48.4% of the participants.
• voice user interface was playing music on an iPad via their voice (Siri); it was tried out by 32.8% of the participants.
• touchless user interface was playing music on a laptop with hand gestures; it was tried out by 18.7% of the participants. We used an application with the Leap Motion controller in which the user has to stretch the index finger and swipe to trigger an action.
B. Questionnaire (OCS-R)
In our adapted questionnaire we used the OCS-R modules
“basic communication skills” containing questions about
signal production, signal perception, and interaction, “per-
ception” containing questions about general perception and
specific perceptual competencies and “motor skills”. In
detail:
• Signal production questions the abilities in vocaliza-
tions and spoken language as well as gestures and
manual signs. It consists of 17 items, e.g.: ”can speak
single words intelligibly”, ”uses conventional terms”,
”can speak simple sentences intelligibly”, ”can point to objects purposefully”, or ”uses conventional gestures/manual signs” [24].
• Signal perception includes questions about the abili-
ties in speech comprehension and ways of perceiving
information. It consists of 11 items, e.g.: ”can gather
information from pictorial symbols”, ”can gather infor-
mation from spoken language”, or ”understands simple
prompts” [24].
• Interaction includes 14 items that determine whether
the individual has difficulties in conversation or differ-
ent situations when interacting with people, e.g.: ”can
focus their attention on a person or object”, ”can start
a conversation with someone of their own initiative”,
or ”can maintain a conversation” [24].
• Perception first asks about general perception (is the
individual able to see/hear correctly) and then ques-
tions specific perceptual competencies. It consists of
11 items, e.g.: ”can hear with no impairments”, ”can
see with no impairments”, ”can recognize shapes”, or
”can recognize pictorial symbols” [24].
• Motor skills asks about the individuals’ motor abilities.
It consists of 14 items, e.g.: ”can press a switch”, ”can
purposefully look in a direction”, ”can purposefully
grasp an object”, ”can hold an object”, or ”can point accurately” [24].
The answers range from 1 = never to 4 = always (2 = rarely, 3 = frequently) [24]. The questionnaire was filled out by caregivers for 51 of the 91 persons who also
participated in the observations.
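To illustrate how this answer scale translates into the category scores reported in Section V-B, the short Python sketch below averages a participant's item ratings per OCS-R module. The item groupings and rating values are hypothetical placeholders, not study data, and the exact aggregation prescribed by the OCS-R manual [24] may differ.

```python
# A minimal sketch, assuming hypothetical item ratings; not taken from the study data.
from statistics import mean

# Ratings on the 1-4 scale (1 = never, 2 = rarely, 3 = frequently, 4 = always)
# for one participant, grouped by OCS-R module (item lists shortened for brevity).
ratings = {
    "signal_production": [2, 3, 1, 2, 4],  # e.g. "can speak single words intelligibly", ...
    "signal_perception": [3, 3, 2, 4],     # e.g. "understands simple prompts", ...
    "interaction":       [2, 1, 2, 3],
    "perception":        [3, 4, 3, 2],
    "motor_skills":      [4, 4, 3, 3],     # e.g. "can press a switch", ...
}

# Mean score per module, comparable to the category means reported in Section V-B.
category_means = {module: round(mean(scores), 1) for module, scores in ratings.items()}
print(category_means)  # e.g. {'signal_production': 2.4, ...}
```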
C. Interface Accessibility
In order to find out whether the examined interfaces
can currently be used by the participants, we asked seven
experts working in the field of human-machine interaction
or interaction/interface design to give their opinion on the
importance of the statements of the OCS-R for the three
interface types surveyed. The expert survey used a scale
from 4 = very important to 1 = not important for each
statement. With these results, the existing skills of the 51
participants—collected from the questionnaire (OCS-R)—
can be compared with the relevant skills for each interface
to find out how many of the participants can currently use
the respective interfaces.
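The comparison can be pictured with the following minimal sketch: items whose mean expert importance rating exceeds 2.5 are treated as important (the threshold reported in Section V-C), and a participant's mean score over those items is bucketed into the ranges used in Table IV. All ratings and scores below are hypothetical and only illustrate the procedure.

```python
# A minimal sketch with hypothetical ratings; thresholds follow Section V-C and Table IV.
import numpy as np

# Hypothetical expert importance ratings (7 experts x 6 items, scale 1 = not important
# to 4 = very important) for one interface type.
expert_ratings = np.array([
    [4, 3, 1, 4, 2, 1],
    [4, 4, 2, 3, 2, 1],
    [3, 4, 1, 4, 3, 2],
    [4, 3, 2, 4, 2, 1],
    [4, 4, 1, 3, 2, 2],
    [3, 4, 2, 4, 3, 1],
    [4, 3, 1, 4, 2, 1],
])
# Hypothetical OCS-R scores (1-4) of one participant for the same six items.
participant_scores = np.array([4, 3, 2, 4, 3, 1])

# Items whose mean expert rating exceeds 2.5 count as important.
important = expert_ratings.mean(axis=0) > 2.5

# Mean participant score over the important items, bucketed as in Table IV.
score = participant_scores[important].mean()
if score > 3.5:
    level = "always"
elif score >= 2.5:
    level = "frequently"
elif score >= 1.5:
    level = "rarely"
else:
    level = "never"
print(f"mean score over important items: {score:.2f} -> {level}")
```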
V. FINDINGS
In this section, we analyze the collected data and present
our findings.
A. Observation of Interface Use
Table I gives a summary of the interface usage, task
understanding, and task completion. In discussions with the
participants and their carers it became clear that many of
the participants would like to use more technology in their
daily life to help them with various everyday tasks. Most
participants were very interested and had fun using the
devices. However, difficulties currently exist in this regard,
for instance in the procurement and financing of technology
such as smartphones or tablets and the availability of Internet
access in German institutions. It is especially difficult to find
out what kind of technology or interfaces and applications
are available for an individual person and to teach the
person how to use this technology. Furthermore, some care workers expressed concerns regarding data privacy and security when people with ID use such devices without knowing the privacy settings.

Table I
OBSERVATION OF INTERFACE USE
User Interface | In Total | Task Understood | Task Partially Understood | Task Not Understood | Task Completed | Task Partially Completed | Task Not Completed
Touch | 44 | 37 (84.1%) | 6 (13.6%) | 1 (2.3%) | 25 (56.8%) | 12 (27.3%) | 7 (15.9%)
Voice | 30 | 15 (50.0%) | 6 (20.0%) | 9 (30.0%) | 8 (26.7%) | 2 (6.7%) | 20 (66.7%)
Touchless | 17 | 12 (70.6%) | 3 (17.7%) | 2 (11.8%) | 9 (52.9%) | 2 (11.8%) | 6 (35.3%)
Touch User Interface: The task of playing music on a
touch interface was understood by 37 out of 44 (84.1%)
participants and completed by 25 (56.8%) participants. In
this study group, only 4 participants (9.1%) had previous
experience with technology. Of these 4 people, 3 could com-
plete the task without problems. Due to impaired vision, one
of the four people could not complete the task but claimed
to have experience with voice commands on an iPhone. It
should be noted that of the remaining 40 participants with
no previous technical experience, 22 (55.0%) were able to
complete the task. Many people who did not fully complete the task had problems with the small size of the play button, with letting go of the button (pressing too long and using it like a physical button), with keeping their whole hand on the screen, and with not being able to use only one finger to touch.
Voice User Interface: In the observation of voice interface
use, 15 out of 30 (50.0%) participants understood the task
and only 8 (26.7%) completed the task. Many participants had some sort of speech restriction, and background noise caused additional errors. This study group consisted of only 6 people (20.0%) with previous technical experience. Of these 6 people, 4 were able to complete the task. One person did not want to use voice and used touch instead, and another was not able to stay concentrated during the task.
Touchless User Interface: Trying out the touchless user
interface, 12 out of 17 (70.6%) participants understood what
they had to do and 9 (52.9%) completed the task. The
task was particularly difficult for those with problems in
the fine motor skills of their hands, as they were unable to
stretch a finger in isolation. In addition, many participants
did not understand the concept of gesture recognition, which
was shown by the fact that they touched the controller
and wanted to use it as a button. In this study group, 11 participants (64.7%) had previous technical experience. Seven of those people could complete the task without problems, and one did not want to try. Although the other three understood
the task and tried to complete it, they could not operate the
interface properly due to motor limitations of their hands.
Among the remaining participants with no previous technical
knowledge, only two were able to complete the task.
B. Questionnaire (OCS-R)
Here we present the results of the respective categories of
the OCS-R.
Signal Production: Looking at all 51 participants, the
mean value in this category is 2.5 (rarely to frequently).
Looking at the distribution in Table II, it becomes clear
that 47.1% of all participants have scores below 2.4, meaning
they rarely to never have these abilities. Only 25.5% of
all participants always have these abilities. This indicates that, overall, the target group has large gaps in skills in this category.
Signal Perception: The mean value in this category is 2.9
(frequently). The majority of participants have these abilities
frequently (52.9%) or always (21.6%). 25.5% rarely or never
have these abilities (Table II).
Interaction: The mean value is 2.3 (rarely). Looking at the
percentage values in Table II, it is obvious that the majority
of the participants have problems in this category. 54.9%
rarely or never have these abilities.
Perception: The mean value in this category is 2.8 (fre-
quently). 31.4% rarely or never have the abilities in this
category, but the majority (68.6%) of participants have
scored over 2.4 (frequently or always).
Motor skills: The participants have an average score of
3.4 (frequently) in this category. Only a few have larger
problems here, with scores of 2.4 or lower (7.8%). The
majority frequently or always has all motor skills queried
(Table II). As our target group is people with ID, these
results were predictable. Not all people with an ID have
additional problems with motor skills, but there are still
participants who lack some of these skills.
C. Interface Accessibility
This section examines the results of the expert survey and
identifies the relevant skills of the OCS-R for each interface.
The important skills are then compared to the actual skills
the participants have. This helps to find out how many of
the participants can currently use the respective interfaces.
In the following accessibility analyses, only those items of
the OCS-R that were considered important by the experts
were looked at.
Table II
QUESTIONNAIRE CATEGORIES
Range | Signal production | Signal perception | Interaction | Perception | Motor skills
always (>3.5) | 25.5% | 21.6% | 11.8% | 17.6% | 60.8%
frequently (2.5..3.5) | 27.5% | 52.9% | 33.3% | 51.0% | 31.4%
rarely (1.5..2.5) | 21.6% | 19.6% | 31.4% | 27.5% | 7.8%
never (<1.5) | 25.5% | 5.9% | 23.5% | 3.9% | 0.0%

Table III
EXPERT SURVEY: IMPORTANT ABILITIES
User Interface | Signal Production | Signal Perception | Interaction | Perception | Motor Skills
Touch | 1/17 (5.9%) | 2/11 (18.2%) | 2/14 (14.3%) | 6/11 (54.6%) | 7/14 (50.0%)
Voice | 6/17 (35.3%) | 8/11 (72.7%) | 10/14 (71.4%) | 1/11 (9.1%) | 0/14 (0.0%)
Touchless | 3/17 (17.7%) | 3/11 (27.3%) | 2/14 (14.3%) | 1/11 (9.1%) | 2/14 (14.3%)

Touch User Interface: As seen in Table III, 18 different skills from all five categories are important for touch interface use. In the category ”signal production” 1 of 17 skills is important, in ”signal perception” 2 of 11 skills are important, in ”interaction” 2 of 14 skills are important, in ”perception” 6 of 11 are important, and in ”motor skills” 7 of 14 skills are important for touch interface use. The mean
values for importance range from 4.0 (very important) to
1.0 (unimportant). Only values above 2.5 were classified as
important.
Comparing the important skills with the actual skills of
the individuals (Table IV), 19.6% of the participants would
be able to use a touch interface without any problems. The
majority (51.0%) would still have the important abilities most of the time, possibly facing some minor problems during use. For 29.5% there would be major problems because
they lack the abilities that are essential for the use of touch
interfaces.
Voice User Interface: As seen in Table III, 25 relevant abilities were identified for voice interface use, belonging to the categories ”signal production” (6 of 17), ”signal perception” (8 of 11), ”interaction” (10 of 14) and ”perception” (1 of 11).
Looking at the important abilities for voice interfaces,
25.5% would be able to use this interface type without any
problems, as they have a score of 3.5 or higher (Table IV).
39.2% would still be able to use the interface, possibly facing
some issues while using. 35.3% would most likely not be
able to use voice interfaces without any adjustments.
Touchless User Interface: According to the experts, 11
skills from all five categories are important for the efficient
use of touchless interfaces (Table III). In the category ”signal
production” 3 of 17 skills have been rated important, in
”signal perception” 3 of 11 skills, in ”interaction” 2 of 14,
in ”perception” 1 of 11 and in ”motor skills” 2 of 14.
19.6% of the participants would be able to use touchless
user interfaces, as they have a score of 3.5 or higher in the
important skills (Table IV). 47.1% might face some problems but would most likely still be able to use the interface.
33.3% would not be able to use touchless user interfaces or
face major problems while using them.
Table IV
MEAN VALUE OF IMPORTANT ABILITIES AND INTERFACE TYPES
Range | Touch | Voice | Touchless
always (>3.5) | 19.6% | 25.5% | 19.6%
frequently (2.5..3.5) | 51.0% | 39.2% | 47.1%
rarely (1.5..2.5) | 27.5% | 29.4% | 29.4%
never (<1.5) | 2.0% | 5.9% | 3.9%

Table V
NUMBER OF USABLE INTERFACES
Range | can use all three | can use at least two | can use at least one
always (>3.5) | 13.7% | 19.6% | 31.4%
at least frequently (>2.5) | 54.9% | 64.7% | 76.5%
at least rarely (>1.5) | 90.2% | 94.1% | 98.0%

Comparison between the User Interfaces: Looking at the individual participants and their scores for all three interfaces, it is interesting to note that only 13.7% can always use all three of them (Table V). For these people, today's interfaces are most likely usable and accessible, and they
do not experience a digital divide when dealing with them.
19.6% can always use at least two of the interfaces and
31.4% can always use at least one of the interfaces. 54.9%
can use all three, 64.7% can use two, and 76.5% can use at
least one of the interfaces frequently. For this group it would
still be possible to use all or some of the interfaces, maybe
facing smaller problems. It is promising to note that 90.2%
can at least rarely use all three, two (94.1%) or one (98.0%)
of the interfaces. For people with scores below 2.5 (can
rarely use...), it would make sense to work out adaptations
to enable them to participate in digital technologies.
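The shares in Table V can be derived by counting, per threshold, for how many interfaces a participant's mean score over the important items exceeds that threshold. The sketch below does this for three hypothetical participants; only the thresholds correspond to the ranges used above.

```python
# A minimal sketch with hypothetical per-participant mean scores; not study data.
scores = [
    {"touch": 3.8, "voice": 3.6, "touchless": 3.7},
    {"touch": 3.0, "voice": 2.2, "touchless": 2.9},
    {"touch": 1.8, "voice": 1.4, "touchless": 1.9},
]

def share_with_at_least(n_interfaces: int, threshold: float) -> float:
    """Share of participants exceeding the threshold for at least n interfaces."""
    hits = sum(
        1 for p in scores
        if sum(1 for s in p.values() if s > threshold) >= n_interfaces
    )
    return hits / len(scores)

# Rows of a Table-V-like summary: three / two / one interface(s).
for label, threshold in [("always", 3.5), ("at least frequently", 2.5), ("at least rarely", 1.5)]:
    row = [share_with_at_least(n, threshold) for n in (3, 2, 1)]
    print(label, [f"{v:.0%}" for v in row])
```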
To investigate the correlation between the possible usage of the interfaces, we used Spearman's rank correlation. Touch and voice interfaces show a moderate positive relationship of 0.56,
touch and touchless interfaces show a strong positive re-
lationship of 0.91, and voice and touchless interfaces show
a positive relationship of 0.78. This shows, not surprisingly,
that a person who is able to operate a touch interface
could most likely also operate a touchless interface and vice
versa. It is interesting to observe that voice and touchless
interfaces seem to have more in common than voice and
touch interfaces.
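As an illustration of this analysis step, the sketch below computes Spearman's rank correlation with SciPy; the per-participant mean scores are hypothetical placeholders, not the study data.

```python
# A minimal sketch using scipy.stats.spearmanr on hypothetical scores.
import numpy as np
from scipy.stats import spearmanr

# Hypothetical mean scores (1-4) over the important items, one entry per participant.
touch     = np.array([3.8, 2.1, 3.0, 1.4, 3.6, 2.7])
voice     = np.array([3.5, 1.8, 2.7, 1.6, 3.9, 2.2])
touchless = np.array([3.7, 2.0, 3.1, 1.5, 3.4, 2.6])

pairs = [
    ("touch vs. voice", touch, voice),
    ("touch vs. touchless", touch, touchless),
    ("voice vs. touchless", voice, touchless),
]
for name, a, b in pairs:
    rho, p = spearmanr(a, b)  # Spearman's rank correlation coefficient and p-value
    print(f"{name}: rho = {rho:.2f} (p = {p:.3f})")
```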
VI. CONCLUSION
This study examined the current state of accessibility and
the need for adaptation of present natural user interfaces
when used by people with intellectual disabilities. It focused
on the three interface types touch, voice and touchless and
consisted of three parts: an on-site observation, a question-
naire and an accessibility analysis. In conclusion, in the
on-site observation, most of the test persons understood
the operation of touch interfaces, but only slightly more
than half of the participants were able to complete the
task. Problems exist especially when people with ID have
additional visual difficulties or limited fine motor skills.
Moreover, a large number of test persons found it difficult
to operate a virtual button. Observing the voice interface
use, it is noticeable that many of the participants had some
sort of speech restriction, which made it difficult for them
to use the interface or complete the task they were given.
The touchless interface was particularly difficult for those
having problems with motor skills. In our observation, most
of the participants trying the touchless interface were quite familiar with technology and had rather mild ID. This could bias
the results in this regard.
The expert survey revealed important skills for the respec-
tive interface types. For touch user interfaces this resulted
in 18 important skills from all five categories of the ques-
tionnaire, for voice user interfaces 25 important skills from
the categories “signal production”, “interaction” and “signal
perception” and for touchless user interfaces 11 skills from
all five categories were identified. In the future, it might be
possible to extend the questionnaire with further skills that
are important for the respective interface types in order to
make it even more detailed and accurate.
The analysis of the accessibility status of the current
interface types touch, voice, and touchless showed that there
still exist large gaps in this area, but also that there is great
potential for improvement. People with ID currently face
major problems in accessing, selecting, or using different
types of interfaces. In our study, 29.5% would most likely
not be able to use a touch interface, 35.3% would not be able
to use a voice interface and 33.3% would not be able to use
a touchless interface. 51.0% with touch, 39.2% with voice,
and 47.1% with touchless interfaces would still face minor
problems in daily use, having mean scores between 2.5 and
3.4 (only frequently and not always having the important
abilities). The comparison showed the strongest correlation
between touch and touchless interfaces, meaning that people who are (or are not) able to use touch interfaces are most likely also able (or not able) to use touchless interfaces. For our
target group—people with intellectual disabilities—we share
the statement of Pradhan et al. that accessibility issues can
occur in voice interfaces due to speech recognition problems
[18].
This analysis and the prior observation in the participating
institutions once again highlighted the existing digital divide.
At the moment, there are major obstacles in the use of tech-
nology for people with ID. It is noticeable that the technical infrastructure in German institutions is limited or non-existent. Only a few of the participants in the
observation had technical equipment available themselves
or were able to use the Internet. The caregivers are usually
not technically competent and do not know which technical
devices might be suitable for an individual. Moreover, the
acquisition of technical equipment is often a problem from a
financial point of view. Nevertheless, the participants in the
observation and the staff and managers of the institutions
were very interested in changing the current status and
integrating more technology into the everyday life of people
with ID. Some even had more or less precise ideas about
the areas of life in which technology could be of particular help to the participants.
This study had some limitations, since there is no stan-
dardized method or questionnaire to date that examines the
use of different interface types by people with ID. We
had to choose an experimental approach using parts of
the OCS-R, which is initially not designed for interface-
related decisions. In addition, fewer participants were able
to try out the touchless interface because the Leap Motion
Controller was not available for testing at first. However, the
results show that the three interface types are accessible to some people with ID, but the majority faces major problems while using them. In order to make current interface
types usable for people with any level of knowledge or
ability, accessibility or universal design decisions can be
made in the development process. Because the abilities
of users are very diverse, it is almost impossible to take
everything into account when designing technologies [11].
While some mistakes can be avoided by those decisions,
people with ID have to be trained in interface use and
existing interfaces must be adapted or modified so that the
individual can use them in the best possible way. In addition,
the best fitting interface based on the abilities of the person
in question must be selected, for which this study should
provide a first indication. Further studies will be necessary
to investigate other types of interfaces and abilities and how
exactly they have to be adapted to fit the needs of people
with ID.
ACKNOWLEDGMENT
This study is part of a project funded by the Federal
Ministry of Education and Research (BMBF) in Germany
within the framework of the program “FH Sozial 2017”.
We would also like to thank the participating institutions
for people with disabilities. Without their help and patience,
it would not have been possible to conduct this survey.
REFERENCES
[1] D. Lussier-Desrochers, C. L. Normand, A. Romero-Torres,
Y. Lachapelle, V. Godin-Tremblay, M.-v. Dupont, J. Roux,
L. Pépin-Beauchesne, and P. Bilodeau, “Bridging the digital
divide for people with intellectual disability,” CP, vol. 11,
no. 1, May 2017.
[2] World Health Organization (WHO), “Definition: intellectual disability.” [Online]. Available: http://www.euro.who.int/en/health-topics/noncommunicable-diseases/mental-health/news/news/2010/15/childrens-right-to-family-life/definition-intellectual-disability
[3] A. Koenecke, A. Nam, E. Lake, J. Nudell, M. Quartey,
Z. Mengesha, C. Toups, J. R. Rickford, D. Jurafsky, and
S. Goel, “Racial disparities in automated speech recognition,”
Proceedings of the National Academy of Sciences, vol. 117,
no. 14, pp. 7684–7689, 2020.
[4] S. Johansson, J. Gulliksen, and C. Gustavsson, “Disability
digital divide: the use of the internet, smartphones, computers
and tablets among people with disabilities in Sweden,” Univ
Access Inf Soc, Mar. 2020.
[5] National Telecommunications and Information Administra-
tion and Economics and Statistics Administration, “Exploring
the Digital Nation: America’s Emerging Online Experience,”
U.S. Department of Commerce, Tech. Rep., Jun. 2013.
[6] M. Donnelly, R. Bond, M. Mulvenna, L. Taggart, D. Hill,
P. Fitzsimons, S. Martin, and A. Hassiotis, “Facilitating social
connectedness for people with autism and intellectual dis-
ability using an interactive app,” in Proceedings of the 32nd
International BCS Human Computer Interaction Conference
32, 2018, pp. 1–4.
[7] J. C. Torrado, G. Montoro, and J. Gomez, “Easing the
integration: A feasible indoor wayfinding system for cognitive
impaired people,” Pervasive and Mobile Computing, vol. 31,
pp. 137–146, Sep. 2016.
[8] G. E. Lancioni, N. N. Singh, M. F. O’Reilly, J. Sigafoos,
G. Alberti, V. Perilli, V. Chiariello, G. Grillo, and C. Turi, “A
tablet-based program to enable people with intellectual and
other disabilities to access leisure activities and video calls,”
Disability and Rehabilitation: Assistive Technology, vol. 15,
no. 1, pp. 14–20, Jan. 2020.
[9] P.-F. Wu, H. I. Cannella-Malone, J. E. Wheaton, and C. A.
Tullis, “Using Video Prompting With Different Fading Pro-
cedures to Teach Daily Living Skills: A Preliminary Exami-
nation,” Focus Autism Other Dev Disabl, vol. 31, no. 2, pp.
129–139, Jun. 2016.
[10] C. Mongeau and D. Lussier-Desrochers, “Mobile Technolo-
gies Used as Communication Support System for People with
Intellectual Disabilities: An Exploratory Study,” in Advances
in Design for Inclusion, G. Di Bucchianico and P. F. Kercher,
Eds. Cham: Springer, 2018, vol. 587, pp. 254–263.
[11] J. Abascal, “Human-computer interaction in assistive tech-
nology: from ”Patchwork” to ”Universal Design”,” in IEEE
International Conference on Systems, Man and Cybernetics,
vol. vol.3. Yasmine Hammamet, Tunisia: IEEE, 2002, p. 6.
[12] A. Ferreras, R. Poveda, M. Quílez, and N. Poll, “Improving
the Quality of Life of Persons with Intellectual Disabilities
Through ICTs,” Stud Health Technol Inform, vol. 242, pp.
257–264, 2017.
[13] R. Gündogdu, A. Bejan, C. Kunze, and M. Wölfel, “Ac-
tivating people with dementia using natural user interface
interaction on a surface computer,” in Proceedings of the 11th
EAI International Conference on Pervasive Computing Tech-
nologies for Healthcare - PervasiveHealth ’17. Barcelona,
Spain: ACM Press, 2017, pp. 386–394.
[14] P. Williams and S. Shekhar, “People with Learning Disabili-
ties and Smartphones: Testing the Usability of a Touch-Screen
Interface,” Education Sciences, vol. 9, no. 4, p. 263, Oct.
2019.
[15] Z. Saenz de Urturi Breton, F. Jorge Hernandez,
A. Mendez Zorrilla, and B. Garcia Zapirain, “Mobile
communication for Intellectually Challenged people: A
proposed set of requirements for interface design on touch
screen devices,” Commun Mob Comput, vol. 1, no. 1, p. 1,
2012.
[16] M. Rodriguez-Sanchez, M. Moreno-Alvarez, E. Martin,
S. Borromeo, and J. Hernandez-Tamames, “Accessible smart-
phones for blind users: A case study for a wayfinding system,”
Expert Systems with Applications, vol. 41, no. 16, pp. 7210–
7222, Nov. 2014.
[17] R. Dasgupta, Voice User Interface Design: Moving from GUI
to Mixed Modal Interaction. Berkeley, CA: Apress, 2018.
[18] A. Pradhan, K. Mehta, and L. Findlater, “”Accessibility Came
by Accident”: Use of Voice-Controlled Intelligent Personal
Assistants by People with Disabilities,” in Proceedings of
the 2018 CHI Conference on Human Factors in Computing
Systems - CHI ’18. Montreal QC, Canada: ACM Press, 2018,
pp. 1–13.
[19] S. S. Balasuriya, L. Sitbon, A. A. Bayor, M. Hoogstrate, and
M. Brereton, “Use of voice activated interfaces by people with
intellectual disability,” in Proceedings of the 30th Australian
Conference on Computer-Human Interaction. Melbourne
Australia: ACM, Dec. 2018, pp. 102–112.
[20] G. Kouroupetroglou and P. Das, Eds., Assistive Technologies
and Computer Access for Motor Disabilities:, ser. Advances
in Medical Technologies and Clinical Practice. IGI Global,
2014.
[21] Z. Saenz-de Urturi and B. Garcia-Zapirain Soto, “Kinect-
Based Virtual Game for the Elderly that Detects Incorrect
Body Postures in Real Time,” Sensors, vol. 16, no. 5, p. 704,
May 2016.
[22] T. Barlott, T. Aplin, E. Catchpole, R. Kranz, D. Le Goullon,
A. Toivanen, and S. Hutchens, “Connectedness and ICT:
Opening the door to possibilities for people with intellectual
disabilities,” Journal of Intellectual Disabilities, pp. 1–19,
Feb. 2019.
[23] B. A. Jimenez and K. Alamer, “Using Graduated Guidance
to Teach iPad Accessibility Skills to High School Students
With Severe Intellectual Disabilities,” J Spec Educ Technol,
vol. 33, no. 4, pp. 237–246, Dec. 2018.
[24] M. Scholz, M. Wagner, and J. M. Stegkemper, “OCS-
R Manual (Version 1.06),” 2019. [Online]. Available:
www.ocs-r.com