Proc. 10th Intl Conf. Disability, Virtual Reality & Associated Technologies
Gothenburg, Sweden, 2–4 Sept. 2014
2014 ICDVRAT; ISBN 978-0-7049-1546-6
Web Accessibility by Morse Code Modulated Haptics for Deaf-Blind
Lena Norberg, Thomas Westin, Peter Mozelius, Mats Wiklund
Department of Computer and Systems Sciences, Stockholm University,
Forum 100, Kista, Stockholm, SWEDEN
{lenan, thomasw, mozelius, matsw}@dsv.su.se
www.dsv.su.se
ABSTRACT
Providing information using a modality that is both non-visual and non-auditory, such as haptic
feedback, may be a viable approach to web accessibility for the deaf-blind. Haptic navigation
systems have been shown to be easy to learn (Venesvirta, 2008), and modulating navigation-related
information as patterns of vibrations has been shown to be perceived as natural and non-intrusive
(Szymczak, Magnusson and Rassmus-Gröhn, 2012). To minimise the bandwidth needed, a variable-
length encoding scheme such as Morse code may be considered. A prototype Morse code vibration
modulated system for web page navigation was developed, using a standard game controller as a
means of output. Results show that simulated deaf-blind test subjects using the system were able to
navigate a web site successfully in three cases out of four, and that in some situations a version of
the system with a higher degree of manual interaction performed better.
1. INTRODUCTION
Deaf-blind people rely on routines and layout, where routines are the temporal ordering of events, while the layout is
the spatial arrangement (Goode, 1990). The routines are signed in relation to context in the sense that the same
sign can provide different meanings in different temporal and spatial contexts. The shared knowledge about both
routines and layout enables an interpretation of the limited repertoire of expressions at hand for a person who is
deaf-blind. Such signs are not equal to using a more generic sign language, which may also be used when
communicating with a wider group than the family. (Ibid.) Thus, the interpretation is a key consideration in data
collection as well as in design of human-computer interaction.
As discussed in Thinus-Blanc and Gaunet (1997) and Klatzky (1998), visually impaired persons are highly
dependent on non-visual cues in their surroundings when navigating an environment. Much work has been done
on developing such cues, e.g. in the form of speaking signs and tactile rails on subway platforms. Without such
cues, many places in the physical world pose a risk to the blind (Ceipidor et al., 2007). Regarding web content,
work has also been done to improve accessibility for deaf and blind people, respectively (Debevc, Kosec, &
Holzinger, 2011; Di Blas, Paolini, & Speroni, 2004). Providing non-visual and non-auditory cues, e.g. by haptic
feedback in some form, may thus be a viable approach to enable web accessibility for the deaf-blind.
The situation touches on challenges of independence, as summarised by Fiedler (1991, p. 87):
“Disabled people wanted access to, and enablement for, the same range of opportunities and responsibilities as
their able bodied peers.” While this issue is not limited to the visually impaired, blind people lack allocentric
frames of reference and are consequently highly dependent on tactile and audio cues in their surroundings when
navigating an environment (Klatzky, 1998; Thinus-Blanc & Gaunet, 1997). Other studies have suggested
using audio in addition to the haptic experience (Gutschmidt, Schiewe, Zinke, & Jürgensen, 2010; Sepchat,
Monmarché, Slimane, & Archambault, 2006), which may be useful for those deaf-blind users who have some
residual hearing.
Furthermore, an overview is important for a blind person (Karlsson & Magnusson, 1994). Typically, a screen
reader presents a selection of text at a time, depending on the web page element currently in focus. It should be
noted that a screen reader is independent of the output mode of the information, which may be presented with
devices for speech synthesis or Braille. A screen reader may also present an overview of e.g. the number and
type of elements on a web page. However, as many web pages are not properly designed according to W3C web
accessibility standards, using screen readers can be problematic. For example, Lazar, Allen, Kleinman and
Malarkey (2007) conducted a study of 100 blind users of screen readers, using time diaries in which they recorded their
frustration while using the web. The problems were mainly caused by poor web design and were often time-
consuming or even unsolvable for the user. (Ibid.)
According to Ford and Walhof (1999), Braille reading speeds of upwards of 200 to 400 words per minute can be
achieved when learnt at a young age. In a study by Mousty and Bertelson (1985), mean reading speeds were
123.0 and 106.3 words/min for congenital and late blindness, respectively. As discussed by Thurlow (1986),
Braille is the most established coding system, although with a literacy rate no higher than 20% of the blind
population (Lazar et al., 2007). Further, Braille has been shown to be difficult both to learn and to discriminate
tactually (Thurlow, 1986). While Braille is well established, the approach with Morse-coded vibrations has some
advantages. For mobile applications the relatively small form factor of vibrating actuators enables increased
mobility. Further, the cost of a Braille display relative to a vibrating actuator is important to consider, especially
in developing regions.
In a participatory design approach, Zhu, Kuber, Tretter, and O'Modhrain (2011) tested a haptic assistive web
interface using HTML mapping and a force-feedback mouse. Findings showed that participants were able to
identify objects presented haptically, and to develop a structural representation of layout from exploring content.
Further, a comparison between three different haptic devices providing non-visual access to the web was made.
Three areas of limitation were listed: ergonomic, device and psychophysical. In the first, the user's freedom of
natural motion might be restricted; in the second, the design of the device gives different ways of experiencing a
haptic force, which affects performance and user satisfaction; the third shows that certain haptic properties can
be extracted more efficiently than others. (Ibid.)
To overcome limitations imposed by the lack of tactile feedback on touch screens, V-Braille represents
Braille characters with haptic vibration (Jayant, Acuario, Johnson, Hollier, & Ladner, 2010). With V-Braille, the
screen is divided into six squares, each corresponding to one of the six dots which together represent a single
Braille character. Results of a reading test with nine deaf-blind test users showed that it took between 4.2 and
26.6 seconds to read a V-Braille character. The nine test users also reported being very enthusiastic about V-
Braille. (Ibid.)
People suffering from Usher syndrome, constituting about 50% of the deaf-blind in the US, are more likely
to become deaf-blind as adults due to aging rather than being affected from birth (Jayant et al., 2010). According
to Venesvirta (2008), "Haptic navigation devices can be learnt to use fast, even after short practise." This could
be especially beneficial for people who become deaf-blind at an older age when learning obstacles may be
higher, such as a hearing impaired person who develops macular degeneration later in life. From a training
perspective, we therefore suggest the use of a haptic modality to support deaf-blind people when navigating the
Web.
Navigation using vibrations has also been employed by Szymczak, Magnusson and Rassmus-Gröhn (2012) in the
Lund Time Machine, a system that uses sound and vibration feedback to help users navigate through the
medieval part of a city. While perceiving sounds from the Middle Ages, bearings and distances to points of
interest were communicated through vibrations. The system was implemented as an Android app and used on a
mobile phone. Findings include that the patterns of vibrations used to communicate direction and distance were
perceived as natural and non-intrusive by the users. (Ibid.) From a usability perspective, we therefore suggest
that vibration may be an appropriate modality to support deaf-blind users who navigate the Web.
An interesting aspect of the findings in de Pascale, Mulatto, and Prattichizzo (2008) is that while using variations
in the vibrations themselves has the potential to convey more information, the test subjects still reported some
remaining difficulties, indicating the need for a more elaborate encoding/modulation scheme. Thus, this study
examines Morse-encoded vibrations as a possible avenue. As Morse code is a variable-length encoding system
(Golin & Rote, 1998), it may, given a character distribution in accordance with Morse code's intended frequency
distribution, be used to represent information with a minimum of vibration bandwidth.
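To make the bandwidth argument concrete, the following sketch (in Python, illustrative only and not part of the prototype; the letter table is abbreviated) computes the transmission time of a word in standard Morse timing units, where a dot lasts one unit, a dash three units, with one unit between elements and three between characters:

    # Illustrative sketch: total vibration time, in Morse timing units, for a word.
    MORSE = {'E': '.', 'T': '-', 'A': '.-', 'S': '...', 'O': '---', 'Q': '--.-'}

    def duration_units(word):
        total = 0
        for i, ch in enumerate(word.upper()):
            code = MORSE[ch]
            total += sum(1 if element == '.' else 3 for element in code)
            total += len(code) - 1      # gaps between elements within a character
            if i < len(word) - 1:
                total += 3              # gap between characters
        return total

    print(duration_units('E'), duration_units('Q'))  # 1 vs. 13 units

A frequent letter such as E thus costs a single unit while a rare letter such as Q costs thirteen, which is the property that keeps the average vibration time low for natural text.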
There are a number of technical prerequisites for implementing Morse-encoded vibrations. First, the application
needs access to the information to be presented, which in this pilot study consists of text accessible through a
web browser. Second, the application must be able to communicate with the hardware, preferably via USB,
which is currently the de facto standard hardware interface for human interface devices, in this case an Xbox360
controller. Since there is only output to the device, there is no need to handle polling or interrupts of the
hardware (Gregory, 2009). Third, the application should be open and platform independent, to ensure both
longevity and scalability of the solution, as far as possible.
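As a concrete illustration of the second prerequisite, a minimal sketch of the output path follows, assuming a Windows host where the controller is driven through the stock XInput runtime via ctypes and reusing the MORSE table from the previous sketch; the API choice and the timing constant are assumptions, as the paper does not specify its implementation:

    import ctypes
    import time

    class XINPUT_VIBRATION(ctypes.Structure):
        # The Xbox360 pad has two motors: the left gives a low-frequency
        # rumble, the right a high-frequency buzz.
        _fields_ = [("wLeftMotorSpeed", ctypes.c_ushort),
                    ("wRightMotorSpeed", ctypes.c_ushort)]

    xinput = ctypes.windll.xinput1_4  # xinput1_3 on older Windows versions

    def pulse(seconds, strength=65535, controller=0):
        """Run both motors at the given strength for `seconds`, then stop.
        The device is used for output only, so no polling or interrupt
        handling is needed."""
        xinput.XInputSetState(controller, ctypes.byref(XINPUT_VIBRATION(strength, strength)))
        time.sleep(seconds)
        xinput.XInputSetState(controller, ctypes.byref(XINPUT_VIBRATION(0, 0)))

    UNIT = 0.2  # assumed length of one Morse unit, in seconds

    def morse_vibrate(text):
        """Play text as Morse vibrations, using the MORSE table from the
        earlier sketch (dot = 1 unit, dash = 3 units, standard gaps)."""
        for ch in text.upper():
            code = MORSE.get(ch)
            if code is None:            # treat unknown characters as word gaps
                time.sleep(7 * UNIT)
                continue
            for j, element in enumerate(code):
                pulse(UNIT if element == '.' else 3 * UNIT)
                if j < len(code) - 1:
                    time.sleep(UNIT)    # gap between elements of a character
            time.sleep(3 * UNIT)        # gap between characters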
2. RESEARCH PROBLEM & QUESTIONS
While haptics has been found to be a viable communication approach for the deaf-blind, Morse-encoded haptics
adds the advantage of representing information with a minimum of vibration bandwidth. This approach may
allow the use of low-cost devices with some haptic capabilities, such as handheld game controllers. As these have
not been designed for web browsing by deaf-blind users, the outcome of such use is unclear.
In this pilot study the questions are: 1) Is a game-type controller suitable for implementing a vibrating web
interface intended for the deaf-blind?; 2) Are simulated deaf-blind users (blindfolded and with ear protection)
able to discern Morse-coded information modulated through vibrations, enabling them to understand the content
of menu links?; 3) How will system designs with different degrees of interaction affect the outcome? The
significance of finding answers to these questions is related to possible redesign of the solution presented here,
followed by the inclusion of real deaf-blind users in tests.
3. METHODOLOGY
3.1 Design Science
The overall framework for this study is design science, a special strand of design research that has its roots in the
areas of information systems and IT. Design science aims to create new and innovative artefacts to support people
in using, maintaining and developing IT devices and systems. The artefacts in this sense are human-made
solutions to practical problems, and can be physical devices as well as blueprints, methods, or sets of guidelines
(Johannesson & Perjons, 2012). These artefacts are not isolated phenomena, since they are embedded in larger
contexts and have relationships to people and people's problems.
A design science process can be divided into five main steps. The first two, explicating the problem by
investigating and analysing the practical difficulty or need at hand, and defining requirements for a solution,
have been touched upon so far. The remaining steps of the design science process involve one or more
iterations of developing, demonstrating and evaluating the various implementations of the artefact, of which
the first two versions are discussed in the remainder of this paper.
During the iterations of the evaluation phase, the design science framework typically uses traditional research
strategies, such as experiments or case studies, to compare the different versions of the artefact and its relation to
the intended users. Hence, the study complies with Johannesson and Perjons (2012) in that the two iterations
performed so far were treated as an experiment, comparing the first two implementations of the artefact using
two groups of test subjects.
3.2 Limitations
In this study we constrained the area of interest to information accessible through a web browser, and chose not
to build a solution for devices such as screen readers. The reason was to focus on communication principles and to
be able to evaluate the outcome with pilot test subjects, before focusing on the optimal technical solution for a
final implementation. None of the test subjects had any previous experience using Morse code, thus the only
such training the test subjects received was during the 20 minute familiarisation period immediately prior to the
test session. At this stage of the testing the main objective was to evaluate the possibility of detecting different
combinations of long and short vibrations, which is a prerequisite to interpret Morse code.
3.3 Study Setup
A pilot study using four test subjects simulating deaf-blindness using blindfolds and ear protection was carried
out. The study included development of a prototype software system to explore how the translation of the
information could be done, and to evaluate how simulated deaf-blind users perceived the vibrating Morse
signals. Since deaf-blind individuals are scarce, it is important to preserve this resource to test situations where
technical errors and flaws in test design have been addressed, motivating the initial use of non-deaf-blind test
subjects. One example of a pilot study successfully using two fully sighted and hearing test subjects wearing
blindfolds and ear protection to simulate deaf-blindness can be found in (Owen, 2008). Issues of using simulated
test-subjects versus actual deaf-blind are discussed in (Ranjbar, 2008), noting that deaf-blind subjects are more
used to interpret vibrations. However, from a practical perspective the simulated setup can be motivated (ibid).
Demographic data of the test subjects in our study were as follows:
• Test subject 1: 38-year-old male, trained professional 3D-graphics designer
• Test subject 2: 40-year-old male, trained professional web site developer
• Test subject 3: 44-year-old male, senior university lecturer in Computer Science
• Test subject 4: 46-year-old female, project manager with a B.A. in Pedagogy
The prototype, called the GamePadServer, was implemented as a local web server on the user's PC, which
modulates the vibrations of an Xbox360 controller. The GamePadServer handles two events, triggered by the
user in a web client: the connect event is triggered by a button on a web page when the user wants to use the
device; the message event is triggered by another button and sends the text to be translated into Morse-encoded
vibrations to the GamePadServer. A widely available, consumer-oriented Xbox360 hand-held game controller
was used as the means of output.
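A minimal sketch of this two-event design is given below, using Python's standard http.server; the port, the endpoint paths and the connect_controller helper are illustrative assumptions rather than details of the actual prototype, and morse_vibrate is the sketch shown earlier:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    def connect_controller():
        """Hypothetical helper: initialise the controller for output."""
        pass

    class GamePadHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get('Content-Length', 0))
            text = self.rfile.read(length).decode('utf-8')
            if self.path == '/connect':    # connect event: user requests the device
                connect_controller()
            elif self.path == '/message':  # message event: text to be translated
                morse_vibrate(text)        # play as Morse-encoded vibrations
            self.send_response(200)
            self.end_headers()

    # Runs locally on the user's PC, alongside the web client that triggers the events.
    HTTPServer(('127.0.0.1', 8080), GamePadHandler).serve_forever()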
Data collection was performed using a subset of the cognitive walkthrough method (Wharton, Rieman, Lewis, &
Polson, 1994), where the test subject conducts a task with a predefined goal typical of an end-user scenario,
while playing the role of a person in the defined target group, and the task is conducted in a predefined system.
As the predefined system, the website of the Department of Computer and Systems Sciences at Stockholm
University was used. The Swedish language version was used, with link names translated to English for the
purpose of appearing in this paper.
Four test sessions were conducted, with each test subject tested individually. The test subjects wore
blindfolds and ear protection, simulating deaf-blindness. During all four tests, the GamePadServer system
allowed the user to detect the edges of the web page through vibrations in the hand-held controller: when the
cursor was moved near the browser borders, it caused the controller to vibrate accordingly.
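The border feedback could be realised along the following lines; the pixel margin and the coordinate handling are assumed values, as the paper does not specify them:

    EDGE_MARGIN = 20  # assumed distance, in pixels, at which border feedback starts

    def near_edge(x, y, width, height, margin=EDGE_MARGIN):
        """True when the cursor is within `margin` pixels of a browser border,
        at which point the controller is made to vibrate."""
        return (x < margin or y < margin or
                x > width - margin or y > height - margin)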
The first group of two test subjects used version 1 of the GamePadServer system, requiring the user to click
on a link (without the link being followed) to start the Morse code vibrations representing the link. Links can be
detected through the transmission of a link prefix when the cursor hovers over a link. The version 1 system then
waits for the user to click again if (s)he wants to follow the link.
The second group of two test subjects used version 2 of the GamePadServer system, which does not require
the user to click on anything for the Morse code vibrations to start. These follow after an initial transmission of a
link prefix identifying the link type. In the version 2 system, a link prefix consisting of a fast train of five
vibrations is sent when the cursor hovers over an image link, while a link prefix consisting of a single long
low-frequency vibration is sent when the cursor hovers over a text link. If the user then takes no further action
for a short interval, the Morse code vibrations representing the link start automatically.
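The two interaction designs could be sketched as follows, reusing the hypothetical pulse and morse_vibrate helpers from the earlier sketches; the pulse lengths and the auto-start delay are assumed values:

    import time

    def link_prefix(is_image_link):
        if is_image_link:
            for _ in range(5):   # image link: fast train of five vibrations
                pulse(0.05)
                time.sleep(0.05)
        else:
            pulse(0.6)           # text link: one long vibration (the paper
                                 # specifies a low-frequency one, i.e. the
                                 # left motor on an Xbox360 pad)

    def on_hover(link_text, is_image_link, version, wait_for_click):
        link_prefix(is_image_link)  # a link prefix is transmitted on hover
        if version == 1:
            wait_for_click()        # version 1: Morse starts only after a click
        else:
            time.sleep(1.0)         # version 2: Morse starts after a short interval
        morse_vibrate(link_text)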
Figure 1. Test setup showing use of laptop touch pad for input and Xbox360 controller for output.
Test subjects were blindfolded and wore ear protection.
Each test subject was given 15 minutes to familiarise themselves with the system before the test began.
This limited time represents the only training in interpreting Morse code that the test subjects received, as none
of them had any such training before participating in the test.
Both groups of test subjects were asked to perform the same set of five tasks (T):
• Find a link on the page and indicate when one link was found (T1)
• Move the mouse pointer and indicate when it approaches the edge of the browser window (T2)
• Find the link "Research" (T3)
• Find the link "Employees" and follow it to get to that webpage (T4)
• Find the link "Library" on the "Employees" page (T5)
For tasks 1 and 2, two questions were answered, based on the performance of the test subject. Since tasks 3-5
involve locating a specific entity, a third question was also answered.
The questions (Q) answered in conjunction with the tasks were:
• Will the user find the control that is addressed in the task? (Q1)
• Will the user recognise that the control is of the right type? (Q2)
• Will the user recognise that the control is the specific control sought for? (Q3, only relevant for T3-T5,
and in the case of T4 also implying success in using the control)
After the test, each test subject had the opportunity to provide additional comments regarding their perception
of the experience.
4. RESULTS
4.1 Hardware
The Xbox360 controller was found to be of limited use. The vibration motors have slow acceleration and
deceleration rates, making the signalling unnecessarily time-consuming. Still, even with the limitations of the
controller, the user test showed that it was possible to discern types of vibrations, e.g. links and edges of pages,
and even though the test subjects were inexperienced with Morse code, three out of four were also able to
distinguish between several links.
4.2 Test Sessions
Results from the four test sessions are shown in Table 1. Questions ultimately answered positively, but only after
several tries or other initial difficulty, are noted as "With effort". Note that questions and answers in this
context do not refer to regular question/answer sessions; rather, they were answered by the test leader by
observing the performance of the test subjects, also taking into account verbal indications made in
the process. Test subject 4 aborted task 5 after failing to achieve a positive result regarding question 1, and thus
chose not to attempt the activities associated with questions 2 and 3 for that task.
Table 1. Results from testing the GamePadServer version 1 and 2 with two test subjects (Ts) each.
Five predefined tasks were attempted while simulating deaf-blindness, evaluated through two to
three questions each, as detailed in the methodology section.
                     GamePadServer ver. 1          GamePadServer ver. 2
                     Ts 1          Ts 2            Ts 3          Ts 4
Task 1   Q1          Yes           Yes             Yes           Yes
         Q2          Yes           Yes             Yes           Yes
Task 2   Q1          Yes           Yes             Yes           Yes
         Q2          Yes           Yes             Yes           Yes
Task 3   Q1          Yes           Yes             Yes           Yes
         Q2          With effort   With effort     With effort   With effort
         Q3          Yes           Yes             No            No
Task 4   Q1          With effort   With effort     With effort   No
         Q2          Yes           With effort     Yes           No
         Q3          Yes           Yes             Yes           No
Task 5   Q1          Yes           Yes             Yes           No
         Q2          No            No              With effort   Aborted
         Q3          No            No              Yes           Aborted
4.3 Quotes from Test Subjects
In addition to the results listed in Table 1, the test subjects offered the following comments regarding their
perception of using the system (translated by the authors for the purpose of appearing in this paper):
“I gradually made a mental image of the web page, and to some extent I began to count the number of links I
passed to know where I was on the page. It took a long time waiting for a link to finish vibrating, before it was
all through. For someone skilled in Morse code, the system could probably use a higher speed. The controller is
a bit clumsy; the vibrations are not so easy to distinguish from one another.” (Ts 1)
“It was interesting. But hard. I got tired towards the end. I tried to remember where the links were. The hand
controller vibrates very much. It wasn’t pleasant, and it is quite big.” (Ts 2)
“Sometimes there was a delay when I moved the cursor to a new place; it took a little time before the
vibrations changed. You should make it possible to jump between links with the arrow keys. The Morse vibrations
seem very fast, maybe you should slow it down a bit. Maybe you could put in a function to pause, but it should
only pause between two words.” (Ts 3)
“It was easy to feel the difference between an image link and a text link, but it seems extremely hard to
understand what the link says. It was hard to see in my mind where the cursor was on the page. It was hard to
find the Research and Employees links. At the end I just felt a blur of vibrations, I got really tired.” (Ts 4)
5. DISCUSSION
5.1 Tested Tasks
As shown in Table 1, tasks 1 and 2 were carried out successfully by all four test subjects, indicating that both
version 1 and version 2 of the GamePadServer were capable of communicating basic navigation information, such as
the presence of a link or a page border, to the user. Regarding task 3, involving locating a particular link, some
differences are apparent. While all test subjects found the link (Q1), it took additional effort in the form of one or
more retries before they were convinced that it was the correct one (Q2).
In the case of task 3, neither of the test subjects in the GamePadServer version 2 group (Ts 3 and Ts 4)
achieved a positive result regarding Q3, and thus they were not able to identify the link as the particular one
sought for. This was not a problem for test subjects 1 and 2 in the GamePadServer version 1 group. One possible
explanation for this is that the manual behaviour of the version 1 system, requiring a click on the link before
starting to transmit the Morse code representing the link, provided an opportunity for the test subjects to gather
their thoughts before continuing. This may be a desirable arrangement for non-experienced users, while it is still
possible that the automatic behaviour of GamePadServer 2, automatically continuing with the transmission of the
link after a short pause without waiting for any action from the user, may be desirable for more experienced
users.
Tasks 4 and 5, implying both locating and following a particular link, and in the case of task 5 then locating
another link on the new page, showed more varying results. In task 4, both test subjects in the GamePadServer 1
group succeeded in following the link in question (Q3) after varying degree of effort, while one of the test
subjects in the GamePadServer 2 group succeeded (with no effort) and the other was unsuccessful altogether.
In contrast, in task 5, none of the test subjects in the GamePadServer 1 group succeeded, other than
initially locating a link (Q1), although not the right one. Here, test subject 3 (using GamePadServer 2)
reached the ultimate goal (Q3) for task 5, after some effort identifying the link to be followed. Test subject 4
gave verbal accounts of being tired and chose to abort the remainder of task 5 after failing initial identification
of a link (Q1); since none of the GamePadServer 1 users succeeded with this task, it was only accomplished
successfully using GamePadServer 2, by one of its test subjects. While this pilot study does provide the insight of
a working concept, a larger study is needed to evaluate the implementation details further.
5.2 Verbal Feedback
Opinions on the speed of the Morse code transmissions varied, from “For someone skilled in Morse code, the
system could probably use a higher speed” (Ts 1) to “The Morse vibrations seem very fast, maybe you should slow
it down a bit” (Ts 3). Two of the test subjects mentioned undesirable properties of the Xbox360 controller when
used in this context, describing it as “clumsy” (Ts 1) and “quite big” (Ts 2). This indicates that smaller
controllers, possibly including those designed to be attached to the body rather than hand-held, may be used
instead.
It is worth noting that three of the four test subjects spontaneously expressed various notions of picturing the
web page, or remembering the positions of links on the web page, in their minds. However, adopting such an
approach, and trying to relate that information to what was happening on the web page, may not be typical for
actual deaf-blind persons. It seems possible that the test subjects, being sighted and only temporarily simulating
(deaf-)blindness, retain a visually oriented mindset not necessarily present among (deaf-)blind users attempting
to use the GamePadServer system.
6. CONCLUSIONS
A game-type controller can be successfully used as an output device for a vibrating web interface intended for
the deaf-blind. Based on feedback from the test subjects, its size and shape were perceived as less than ideal by
some, indicating that more suitable devices can likely be found.
Simulated deaf-blind users (blindfolded and with ear protection) were able to identify vibrations indicating
presence of links and page borders, and in three cases out of four were able to discern Morse-coded information
modulated through vibrations, enabling them to understand the content of menu links.
A manual system design requiring test subjects to click on links to start Morse code transmissions was
successfully used in two cases where a design with a more automatic interaction principle was unsuccessful,
while the automatic system was successfully used (with effort) in one case where the manual system was
unsuccessful.
7. FUTURE RESEARCH
The initial positive results from the work described here merit a study involving a larger number of test subjects, as
well as a longer usage period for the developed technology. A shift from technical proof of concept to a focus on
users' perception of the tested accessibility mechanisms is a natural next step.
Focusing on perceived usefulness may lead to a deeper understanding of the functionality preferred by
the intended target group. Further refinement should ideally be conducted as a collaboration between users and
researchers in the form of a constant feedback loop, utilising the users' attitudes as a frame of reference when
selecting future features of the system.
The strategy of remembering displayed by the test subjects in some cases may be related to the need for
routines and layout (Goode, 1990). This depends on the website structure being static, which neither the end-
user nor the developer of accessibility solutions has any control over. Thus, a question for further research is
whether existing web technology for detecting changes in the structure of a previously visited website can
feasibly be made use of, thereby guiding the user in the choice between strategies of exploring and guessing.
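One conceivable realisation, sketched below, is to fingerprint the sequence of links on a page and compare it with the fingerprint stored from a previous visit; the fetch-and-parse details are illustrative assumptions rather than a reference to any particular existing tool:

    from hashlib import sha256
    from html.parser import HTMLParser
    from urllib.request import urlopen

    class LinkCollector(HTMLParser):
        """Collects href attributes in document order, as a crude
        fingerprint of the page's link structure."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == 'a':
                self.links.extend(v for k, v in attrs if k == 'href' and v)

    def structure_fingerprint(url):
        parser = LinkCollector()
        parser.feed(urlopen(url).read().decode('utf-8', errors='replace'))
        return sha256('\n'.join(parser.links).encode('utf-8')).hexdigest()

If the fingerprint differs from the one stored at the previous visit, the user could be advised to explore the page anew rather than rely on a memorised layout.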
Acknowledgements: This work was made possible by funds from The Swedish Post and Telecom Authority.
8. REFERENCES
Ceipidor, U. B., D’Atri, E., Medaglia, C. M., Mei, M., Serbanati, A., Azzalin, G., ... & D’Atri, A. (2007). A
RFID System to help visually impaired people in mobility. RFIDLab–CATTID University of Rome “La
Sapienza” Rome, Italy.
Debevc, M., Kosec, P., & Holzinger, A. (2011). Improving multimodal web accessibility for deaf people: sign
language interpreter module. Multimedia Tools and Applications, 54(1), pp. 181-199.
Di Blas, N., Paolini, P., & Speroni, M. (2004). “Usable accessibility” to the Web for blind users. Proceedings of
the 8th ERCIM Workshop: User Interfaces for All, Vienna.
Fiedler, B. (1991). Housing and independence. In M. Oliver (Ed.), Social Work, Disabled People and Disabling
Environments. London: Jessica Kingsley.
Fjord, L. L. (1996). Images of Difference: Deaf and Hearing in the United States. Anthropology and Humanism,
21(1), pp. 55-69.
Ford, S., Walhof, R. (1999). Braille reading speed: Are you willing to do what it takes? Braille Monitor
(4/15):4/20/10.
Golin, M. J., & Rote, G. (1998). A Dynamic Programming Algorithm for Constructing Optimal Prefix-Free
Codes with Unequal Letter Costs. IEEE Transactions on Information Theory, 44(5).
Goode, D. A. (1990). On understanding without words: Communication between a deaf-blind child and her
parents. Human Studies, 13(1), pp. 1-37.
Gregory, J. (2009). Game Engine Architecture. Boca Raton: A K Peters/CRC Press.
Gutschmidt, R., Schiewe, M., Zinke, F., & Jürgensen, H. (2010). Haptic emulation of games: haptic Sudoku for
the blind. Proceedings of the 3rd International Conference on Pervasive Technologies Related to Assistive
Environments. ACM.
Jayant, C., Acuario, C., Johnson, W., Hollier, J., & Ladner, R. (2010). V-braille: haptic braille perception using a
touch-screen and vibration on mobile phones. Proceedings of the 12th international ACM SIGACCESS
conference on Computers and accessibility, Orlando, Florida, USA.
Johannesson, P., & Perjons, E. (2012). A Design Science Primer (1 ed.). Createspace.
Karlsson, G., & Magnusson, A.-K. (1994). Blinda personers förflyttning och orientering. En fenomenologisk-
psykologisk studie. Rapporter / Stockholms universitet, Psykologiska institutionen, vol. 74. Stockholm
University, Department of Psychology.
Klatzky, R. L. (1998). Allocentric and Egocentric Spatial Representations: Definitions, Distinctions, and
Interconnections. C. Freksa, C. Habel & K. F. Wender (Eds.), Spatial cognition - An interdisciplinary
approach to representation and processing of spatial knowledge, pp. 1-17. Berlin Heidelberg: Springer.
Lazar, J., Allen, A., Kleinman, J., & Malarkey, C. (2007). What frustrates screen reader users on the web: A
study of 100 blind users. International Journal of Human-Computer Interaction, 22(3), pp. 247-269.
Mousty, P., & Bertelson, P. (1985). A study of braille reading: 1. Reading speed as a function of hand usage and
context. The Quarterly Journal of Experimental Psychology Section A: Human Experimental Psychology,
37:2, 217-233.
Owen, J. M. (2008). Quiet vehicle avoidance systems for blind and deaf-blind pedestrians. VCU Bioinformatics
and Bioengineering Summer Institute, Final report.
de Pascale, M., Mulatto, S., & Prattichizzo, D. (2008). Bringing haptics to second life for visually impaired
people. Haptics: Perception, Devices and Scenarios, pp. 896-905. Springer Berlin Heidelberg.
Ranjbar, P. (2008). Vibrotactile identification of signal-processed sounds from environmental events presented
by a portable vibrator: A laboratory study. Iranian Rehabilitation Journal, 6(7), 24-38.
Sepchat, A., Monmarché, N., Slimane, M., & Archambault, D. (2006). Semi Automatic Generator of Tactile
Video Games for Visually Impaired Children. Computers Helping People with Special Needs, 4061, pp. 372-
379.
Szymczak, D., Magnusson, C., & Rassmus-Gröhn, K. (2012). Guiding tourists through haptic interaction:
vibration feedback in the Lund Time Machine. P. Isokoski & J. Springare (Eds.), Haptics: Perception, Devices,
Mobility, and Communication, pp. 157-162. Springer Berlin Heidelberg.
Thurlow, W. R. (1986). Some comparisons of characteristics of alphabetic codes for the deaf-blind. Human
Factors: The Journal of the Human Factors and Ergonomics Society, 28(2), 175-186.
Thinus-Blanc, C., & Gaunet, F. (1997). Representation of space in blind persons: vision as a spatial sense?
Psychological Bulletin, 121, pp. 20-42.
Venesvirta, H. (2008). Haptic Navigation in Mobile Context. Presented at the 2008 Seminar on Haptic
Communication in Mobile Contexts, University of Tampere.
Wharton, C., Rieman, J., Lewis, C., & Polson, P. (1994). The Cognitive Walkthrough: A practitioner’s guide. In
J. Nielsen & R. L. Mack (Eds.), Usability inspection methods, pp. 105-140. New York: Wiley.
Zhu, S., Kuber, R., Tretter, M., & O'Modhrain, M. S. (2011). Identifying the effectiveness of using three
different haptic devices for providing non-visual access to the web. Interacting with Computers, 23(6), pp.
565-581.