FingerTalkie: Designing A Low-cost Finger-worn Device for Interactive Audio
Labeling of Tactile Diagrams
ARSHAD NASSER, City University of Hong Kong, Hong Kong
TAIZHOU CHEN, CAN LIU, and KENING ZHU, City University of Hong Kong, Hong Kong
PVM RAO, Indian Institute of Technology Delhi, India
Traditional tactile diagrams for the visually-impaired (VI) use short Braille keys and annotations to provide additional information in separate Braille legend pages. Frequent navigation between the tactile diagram and the annex pages during the diagram exploration results in low efficiency in diagram comprehension. We present the design of FingerTalkie, a finger-worn device that uses discrete colors on a color-tagged tactile diagram for interactive audio labeling of the graphical elements. Through an iterative design process involving 8 VI users, we designed a unique offset point-and-click technique that enables the bimanual exploration of the diagrams without hindering the tactile perception of the fingertips. Unlike existing camera-based and finger-worn audio-tactile devices, FingerTalkie supports one-finger interaction and can work in any lighting conditions without calibration. We conducted a controlled experiment with 12 blind-folded sighted users to evaluate the usability of the device. Further, a focus-group interview with 8 VI users shows their appreciation for FingerTalkie's ease of use, support for two-hand exploration, and its potential in improving the efficiency of comprehending tactile diagrams by replacing Braille labels.
CCS Concepts: • Human-centered computing → Accessibility systems and tools.
Additional Key Words and Phrases: Audio-tactile diagram, finger-worn device, offset point and click, blind, visually impaired.
ACM Reference Format:
Arshad Nasser, Taizhou Chen, Can Liu, Kening Zhu, and PVM Rao. 2018. FingerTalkie: Designing A Low-cost Finger-worn Device
for Interactive Audio Labeling of Tactile Diagrams. In Woodstock ’18: ACM Symposium on Neural Gaze Detection, June 03–05, 2018,
Woodstock, NY. ACM, New York, NY, USA, 18 pages. https://doi.org/10.1145/1122445.1122456
1 INTRODUCTION
Images and diagrams are an integral part of many educational materials [10]. A tactile diagram is the representation of an image in a simplified form that makes the content accessible by touch. Tactile diagrams are widely adopted in textbooks for visually impaired (VI) people. Several studies [4, 14, 33] have shown that tactile perception is good for the comprehension of graphical images, and tactile diagrams have proved to be useful for VI students learning graphically intensive subjects. Apart from tactile textbooks, tactile diagrams are widely used in public spaces as maps and floor plans for guiding VI people.
Despite the wide acceptance of tactile diagrams, they are often limited by their spatial resolution and local perception range [26]. Traditional tactile graphics make use of Braille annotations as a type of markup for the discrete areas of tactile diagrams. However, Tatham [39] states that the extensive use of Braille annotations can worsen the overall legibility of the tactile graphics. While textures and tactile patterns are prominently used for marking
Fig. 1. Deciphering tactile images: (A) exploring the tactile image, Braille keys, and symbols with two hands; (B) using both hands to decipher the Braille legend on the consecutive page; (C) exploring the tactile image, Braille keys, and symbols with two hands.
areas, it still involves finding the key and the corresponding description, which are often placed on other pages. The number of textures that can be clearly distinguished remains limited and can vary with the tactile acuity of the user [16, 40]. Additionally, the Braille legend of a diagram is placed on multiple pages, which demands flipping of pages for comprehending pictorial information. This in turn complicates the interpretation of tactile images [18]. Another reason for excluding Braille annotations from tactile graphics is the limited adoption of Braille among the VI community.
Research [6] shows that the number of blind people who can read Braille is small, and it can be estimated that an even smaller proportion can read Braille-labelled tactile graphics. Another argument for reducing Braille labels is to limit the tactile
complexity of the graphics. A widely adopted alternative is to combine tactile graphics with interactive assistive technologies. Recent studies have shown that tactile diagrams complemented with interactive audio support are advantageous according to the usability design goals (ISO 9241) [7]. There are various existing devices and approaches (discussed in Section 2) for audio-tactile graphics. However, the factors pertaining to wearability, setup time, effects of ambient lighting conditions, and scalability were not fully investigated in the existing audio-tactile methodologies.
In this paper, we present the design of FingerTalkie, a finger-worn interactive device with an offset point-and-click method that can be used with existing tactile diagrams to obtain audio descriptions. Compared to the existing interactive audio-tactile devices, FingerTalkie does not use camera-based methods or back-end image processing. Our concept leverages the usage of color tactile diagrams, which are gaining popularity, thus reducing the barrier for technology adoption. The FingerTalkie device was designed through an iterative user-centred design process involving 8 visually-impaired users. Minimal and low-cost hardware has helped in the design of a standalone and compact device. We conducted a controlled experiment with 12 blind-folded sighted users to evaluate the usability of the device. The results showed that the user performance of pointing and clicking with FingerTalkie could be influenced by the size and the complexity of the tactile shape. We further conducted a focus-group interview with 8 VI users. The qualitative results showed that, compared with existing audio-based assistive products in the market, the VI users appreciated FingerTalkie's ease of setup, support for two-hand exploration of the tactile diagrams, and potential in improving the efficiency of comprehending tactile diagrams.
2 RELATED WORK
We discuss prior work related to two areas of our system: (i) audio-/touch-based assistive devices for VI users and (ii) finger-based wearable interfaces.
2.1 Audio-/Touch-based Assistive Technologies
Adding auditory information (e.g., speech, verbal landmarks, earcons, and recorded environmental sounds) to tactile diagrams has been considered an efficient way of improving the reading experience of VI users [7, 28]. Furthermore, it was intuitive for VI users to obtain such auditory information with their fingers touching the tactile diagrams or other tangible interfaces. Early prototypes, such as KnowWhere [24], 3DFinger [34], and Tangible Newspaper [38], supported computer-vision-based tracking of a VI user's finger on 2D printed material (e.g., maps and newspapers) and retrieval of the corresponding speech information. Nanayakkara et al. [30] developed EyeRing, a finger-worn device with an embedded camera connected to an external micro-controller for converting printed text into speech output based on OCR and text-to-speech techniques. Later, the same research group developed FingerReader [37] and FingerReader 2.0 [5] to assist blind users in reading printed text on the go by harnessing the technologies of computer vision and cloud-based object recognition. Shi et al. [35] developed Magic Touch, a computer-vision-based system that augments printed graphics with audio files associated with specific locations on the model. The system used an external webcam to track the user's finger on the 3D-printed object and retrieve the corresponding audio information. Later, Shi et al. [36] expanded the functionality of Magic Touch into Markit and Talkit with the feature of touch-based audio annotation on the 3D-printed object. Using the front camera of a smart tablet and a front-mounted mirror, the Tactile Graphics Helper [13] tracked a student's fingers as the user explored a tactile diagram, and allowed the student to gain clarifying audio information about the tactile graphic without sighted assistance. Several researchers have also developed hand-gesture interaction with 2D maps for the VI [9, 21].
These works suggested that camera-based finger-tracking methods can be used by VI users to retrieve audio information by touching physical objects. However, there are major drawbacks in using camera-based technologies, including the need for back-end processing hardware, the size of the camera and system, the requirement for ambient light, and difficulty with near focus distances. Furthermore, it is costly to embed a camera and set up an external connection to the processing hardware. Due to these limitations, this solution may not be suitable for VI users in developing countries.
Besides computer-vision-based finger tracking, researchers have also investigated other techniques based on embedded sensors, such as the Pen Friend [22], Near Field Communication (NFC)/Radio-Frequency Identification (RFID) readers [42], and QR-code readers [1, 3], for retrieving audio with tactile diagrams. While these devices may overcome the requirement for high resolution found in camera-based solutions, they often require users to hold the device in their hands, thus keeping at least one hand constantly occupied. As the distal phalanges of the index fingers (Figure 2) are primarily used for exploring Braille and tactile diagrams, it is advised that VI users' hands should not be occupied by any other means [11]. Moreover, it is difficult to paste a Pen Friend label, RFID tag, or QR code in smaller regions and areas with irregular boundaries on a tactile diagram. In addition, QR-code detection demands an optimal amount of ambient light for the reader to operate, which makes it quite unusable in low-light conditions [3]. The Talking Tactile Tablet (TTT) [25], in turn, may support the user in reading the tactile diagram with both hands and getting audio feedback simultaneously. However, the size and weight of the device make it non-portable.
In this paper, we explain the design and implementation of FingerTalkie in a finger-wearable form factor, with cheap, off-the-shelf, and robust color-sensing technology. It supports audio retrieval from color-printed tactile diagrams without any extra hardware embedded in the diagrams. Our technical experiments showed that FingerTalkie can retrieve correct audio information in low-light or even dark settings.
Fig. 2. (a) Parts of the fingers (b) Bending of fingers during tactile reading
2.2 Finger-based Wearable Interfaces
Wearable devices for the hand have often focused on the fingers, since they are among the most sensitive parts of the body and are most often used for grasping and exploring the environment. The design of the interaction technique in FingerTalkie was largely inspired by existing wearable finger-based interaction for general purposes. Fukumoto and Suenaga's FingeRing [12] in 1994 was considered to be the first digital prototype exploring a finger-worn interface. It embedded an accelerometer into the form factor of a finger ring to detect gesture input in the form of taps performed with the fingertips. Since then, various technologies have been used to implement ring-shaped input devices. For instance, Nenya by Ashbrook et al. [2] detected finger rotation via magnetic tracking. Yang et al. introduced Magic Finger [46] with IR beacons to recognize surface textures. Ogata et al. [31] developed iRing, using infrared reflection to detect directional gesture swipes and finger bending. Jing et al. developed Magic Ring [20] with an accelerometer to detect motion gestures of the index finger. eRing [44] employed electric field sensing to detect multiple finger gestures. OctaRing [27] achieved multi-touch input by pressure sensing, and LightRing [23] fused the results of infrared proximity sensing and a gyroscope to locate the fingertip on any surface for cursor pointing and target selection. All these existing finger-based input techniques utilized embedded motion sensors in the ring-shaped form factor to achieve surface or mid-air gesture recognition.
When it comes to designing finger-based interaction for VI users reading tactile diagrams, one should take into account the ease of input registration and the robustness of input detection. Motion sensors may face the issue of robustness due to low sensor bandwidth. As discussed before, VI users often understand tactile diagrams with both hands resting on and touching the diagrams. Thus, performing complex gestures on the surface or in mid-air may cause fatigue.
To ensure the robustness of finger-based interaction, researchers have leveraged thumb-to-finger touch with buttons [15, 37] and touch sensors [8, 45]. Inspired by these configurations, we incorporated a button in the FingerTalkie device for VI users to register the input. The choice of using buttons instead of sensors aimed to further reduce the cost of the device. Different from the existing devices, which mostly place buttons on the side of the proximal phalanx, we investigated the placement of the button around the finger through iterative design processes, and designed the one-finger offset-clicking input technique in our final prototype. The quantitative and qualitative studies suggested that VI users could successfully explore the tactile diagrams and retrieve corresponding audio information using the offset-clicking technique with the button placed in front of the fingertip.
3 OUR SOLUTION - FINGERTALKIE
Based on the problems and challenges identified in the existing literature, we designed a device with an embedded color sensor on the fingertip that does not obstruct the finger movements or the touch-sensing area of the fingertip. The initial design of the FingerTalkie device is illustrated in Figure 3. The color sensor on the tip of the finger can read/sense colors printed on a tactile diagram. A user can click the button on the proximal phalanx to play the audio associated with the colored area, via an external device (e.g., laptop, smartphone, or smartwatch) that is connected to it wirelessly.
Fig. 3. First prototype sketch
The external device handles the computation and stores the database of colors and mapped audio files. In the following, we describe the rationale behind our design choices.
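To make this mapping concrete, the following Python sketch shows one way the host-side database could be structured and queried. The profile contents, file names, and the macOS afplay playback call are illustrative assumptions rather than the prototype's actual implementation.

```python
# A minimal sketch of the host-side color-to-audio lookup (assumed design).
import subprocess

# One audio-color mapping profile: reference RGB values -> audio description.
PROFILE = {
    (0, 0, 255): "audio/blue_region.wav",    # hypothetical file names
    (255, 0, 255): "audio/pink_region.wav",
}

def closest_color(rgb, profile):
    """Return the reference color nearest to the sensed RGB (Euclidean distance)."""
    return min(profile, key=lambda ref: sum((a - b) ** 2 for a, b in zip(rgb, ref)))

def on_click(rgb):
    """Play the audio file mapped to the color under the fingertip."""
    ref = closest_color(rgb, PROFILE)
    subprocess.run(["afplay", PROFILE[ref]])  # macOS command-line audio player

on_click((10, 5, 240))  # a click over the blue square plays blue_region.wav
```

Matching to the nearest reference color, rather than requiring exact values, tolerates sensor noise and print variation in the discrete colors.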
3.1 Problems and Considerations
There are several studies that investigated the haptic exploration styles of visually impaired and sighted people [17]. When using two hands to explore the tactile diagram and its annex Braille legend page, VI users may use one hand as a stationary reference point (Figure 1C) or move both hands simultaneously (Figure 1B). The exploration strategies consist of the usage of only one finger (the index) or multiple fingers [17]. The precise nature of these exploratory modes and their relation to performance levels remains obscure [41]. Nevertheless, a common problem with tactile diagrams is their labelling. Braille labelling becomes cumbersome, as it often becomes cluttered and illegible due to spatial constraints [39]. Moreover, placing the Braille legend on separate pages disrupts the referencing and reduces the immediacy of the graphic, thereby resulting in comprehension issues [18].
To address this issue, several existing studies associate auditory information with touch exploration to enhance the experience of VI users obtaining information through physical interfaces. Finger-worn devices with motion sensors and camera-based setups can be costly and difficult to calibrate and set up. These devices also require the user to aim a camera, which can be difficult for blind users [21, 43, 47, 48], and to use one of their hands to hold the camera, preventing bimanual exploration of the diagram, which can be necessary for good performance [29]. Based on the above factors and constraints, we formulated the following design considerations for developing a system that:
(1) Allows users to use both hands to probe tactile boundaries without restricting the movement and the tactile sensation of the fingertips.
(2) Supports access to real-time audio feedback while exploring discrete areas of a tactile diagram, irrespective of the boundary conditions (irregular boundaries, 2.5D diagrams, textured diagrams, etc.).
(3) Is portable, easy to set up, inexpensive, and easily adaptable to the existing tactile graphics for VI users in developing countries.
Fig. 4. First prototype
3.2 Design Rationale
Existing interactive technologies for audio-tactile diagrams include embedding physical buttons or capacitive touch, RGB cameras with QR codes, text recognition, and RFID tags to map audio to the discrete areas. These technologies lack flexibility, as the users have to point at particular locations within the tactile area to trigger the audio. Moreover, it is difficult for QR codes and RFID tags to be used with tactile diagrams with irregular boundary lines. Exploring simpler sensing mechanisms, the idea of color tagging and sensing for audio-tactile diagrams may offer advantages over other methods for the following reasons:
(1) Contrasting colors have been widely used in tactile diagrams to assist low-vision and color-blind people in easily recognizing boundaries and distinct areas. The device could leverage the potential of existing colored tactile diagrams, without requiring the fabrication of new ones.
(2) Non-colored tactile diagrams can be colored with stickers or easily painted.
(3) The color-sensing action is unaffected by ambient lighting, owing to the usage of a sensor module with an embedded white LED light (see the sketch after this list).
(4) Color sensors are a low-cost, frugal technology with low power consumption and low requirements on background processing.
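As a rough illustration of point (3), the sketch below normalizes raw readings from a TCS34725-style sensor by its clear (overall brightness) channel before matching. Dividing out the brightness is one common way to make the classification insensitive to light level; the reference values here are hypothetical calibration numbers, not measurements from our device.

```python
# Illumination-robust color classification (assumed approach, not the firmware).
def normalize(r, g, b, c):
    """Convert raw channel counts into brightness-independent chromaticity."""
    c = max(c, 1)  # avoid division by zero in complete darkness
    return (r / c, g / c, b / c)

def classify(sample, references):
    """Match a normalized sample against normalized reference colors."""
    dist = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q))
    return min(references, key=lambda name: dist(sample, references[name]))

refs = {"blue": (0.18, 0.30, 0.52), "red": (0.55, 0.25, 0.20)}  # example values
print(classify(normalize(120, 200, 380, 720), refs))  # -> "blue"
```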
4 ITERATIVE DESIGN AND PROTOTYPING
Following the design considerations and the conceptual design, we adopted a multiple-stage iterative design process involving 8 VI users evaluating 3 prototypes.
4.1 First Prototype
We followed the existing work on finger-worn assistive devices [30] to design the first prototype of FingerTalkie. As shown in Figure 4, it consisted of two wearable parts: (i) a straight 3D-printed case to be worn at the middle phalanx, with the color sensor (Flora TCS34725A) at the tip, and (ii) a push button sewn onto a velcro ring worn at the finger base. A velcro strap was attached to the 3D-printed case to cater to different finger sizes.
For this prototype, we used an Arduino UNO with a laptop (MacBook Pro) as the external peripherals. The wearable part of the prototype device was connected to the Arduino UNO using thin wires. We used the Arduino IDE with the standard audio package library to store color-to-audio profiles and perform the back-end processing.
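As an illustration of what this back end could look like, the sketch below assumes the Arduino prints one "R G B" line over USB serial each time the button is pressed; the port name, baud rate, and message format are our assumptions, and the script requires the pyserial package.

```python
# Assumed laptop-side listener for the wired first prototype.
import serial      # pyserial
import subprocess

PROFILE = {(0, 0, 255): "audio/blue.wav", (255, 0, 255): "audio/pink.wav"}

def nearest(rgb):
    return min(PROFILE, key=lambda ref: sum((a - b) ** 2 for a, b in zip(rgb, ref)))

with serial.Serial("/dev/tty.usbmodem1101", 9600, timeout=1) as port:
    while True:
        parts = port.readline().decode(errors="ignore").split()
        if len(parts) != 3:
            continue  # read timed out or malformed line; keep polling
        rgb = tuple(int(v) for v in parts)
        subprocess.run(["afplay", PROFILE[nearest(rgb)]])  # play mapped audio
```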
Fig. 5. The tactile diagram used in the pilot studies.
User Study 1 - Design
The main goal of testing the first prototype was to investigate the feasibility of the hardware setup and collect user feedback on the early design and prototype of FingerTalkie.
4.1.1 Participants. For the first pilot study, we recruited 4 congenitally blind participants (4 males) aged between 27 and 36 (Mean = 31.5, SD = 3.6). All the participants were familiar with using tactile diagrams.
4.1.2 Apparatus. We tested the first prototype with a simple tactile diagram of two squares (blue and pink) as shown in Figure 5. Pressing the button on the device while pointing to the area within the squares activated different sounds.
4.1.3 Task and Procedure. The participants were initially given a demo on how to wear the prototype and how to point and click on a designated area. Then they were asked to wear the prototype on their own and adjust the velcro strap according to their comfort. Later, the tactile diagram (Figure 5) was given to them, and the participants were asked to explore and click within the tactile shapes to trigger different sounds played on the laptop speaker. Each participant could perform this action as many times as they wanted within 5 minutes. After all the participants performed the above task, a group interview was conducted. The participants were asked for subjective feedback on the wearability, the ease of use, the drawbacks and issues faced while using the device, and possibilities for improvement.
Study 1 - Feedback and Insights
All the participants showed positive responses and stated that it was a new experience for them. They did not face any difficulty in wearing the device. One participant accidentally pulled off the wires that connected the device [to the Arduino] while trying to wear the prototype. All the participants reported that the device was lightweight and that it was easy to get the real-time audio feedback. 3 participants reported that the device did not restrict the movements of their fingers during exploration of the diagram. For one participant, we noticed that the color sensor at the tip of the device was intermittently touching the embossed lines on the tactile diagram. This was due to his peculiar exploration style, in which the angle of the fingers with respect to the diagram surface was higher compared to the rest of the participants. This induced the problem of unintended sensor contact with the tactile diagram during exploration. Moreover, the embossed elevations can also vary based on the type of the tactile diagram, which could worsen the obstruction for the color sensor.
Fig. 6. Second prototype and the angular compensation at the tip
4.2 Second Prototype
In order to avoid the unwanted touching of the color sensor while exploring a tactile diagram, we affixed the sensor at an angular position with respect to the platform. We observed that the participants' fingers were at an angle of 45° with respect to the tactile diagram. Thus, we redesigned the tip of the device and fixed the color sensor at an angle of 45°, as shown in Figure 6. The overall length of the finger-wearable platform was also reduced from 6 cm to 5 cm.
The second prototype is a wrist-worn stand-alone device, as shown in Figure 6. It consisted of an Arduino Nano, a 7.2 V LiPo battery, a 5 V regulator IC, and an HC-05 Bluetooth module. All the components were integrated into a single PCB that was connected to the finger-worn part with flexible ribbon wires. This design solved the problem of the excess tangled wires, as the device could now connect with the laptop wirelessly through Bluetooth.
User Study 2 - Design
We evaluated the second prototype with another user study to assess the new design and gain insights for further
improvement.
4.2.1 Participants. During the second pilot study, we ran a hands-on workshop with 4 visually impaired people (3 male and 1 female) aged between 22 and 36 years (Mean=29, SD=2.7). We used the second prototype and the tactile diagram of squares that was used in the first pilot study.
4.2.2 Task and Procedure. The users were initially given a demo on how to wear the prototype and then how to point and click on a designated area. Later, they were asked to wear the prototype on their own and adjust the velcro strap according to their comfort. Then, they were asked to explore the tactile diagram and click within the tactile shapes. Whenever the participant pointed within the squares and pressed the button correctly, Tone A (the ‘Glass’ sound file in the MacOS sound effects) was played on the laptop speakers. When they made a wrong point-and-click (outside the squares), Tone B (the ‘Basso’ sound file in the MacOS sound effects) was played to denote the wrong pointing. Each participant was given 10 minutes for the entire task. After the entire task, the participants were individually asked to provide their feedback regarding the ease of use, the drawbacks and issues faced while using the device, and the potential areas of improvement.
Fig. 7. Left: Final standalone prototype; Center: Internal hardware; Right: Exploring the tactile diagram with the final prototype.
4.3 Study 2 - Feedback and Insights
We observed that, with the refined length and angle of contact of the device, the participants were able to explore the tactile diagrams more easily. However, two participants said that they found it difficult to simultaneously point to the diagram and press the button on the proximal phalanx. One participant said, “I feel that the area being pointed by [my] finger shifts while simultaneously trying to press the button on the index finger”. We found that the above-mentioned participants had relatively stubby thumbs, which might have increased the difficulty of clicking the button while pointing. This means that the activation button on the proximal phalanx may not be ergonomically suitable for all users. Another participant, who is partially visually impaired, was concerned about the maximum number of colors (or discrete areas) the sensor could detect and whether colors could be reused.
5 FINAL PROTOTYPE
Based on the findings from the two user studies, we came up with a novel point-and-click technique and finalized the design of the device with further hardware improvements to make it a complete standalone device.
5.1 Offset Point-and-Click Technique
We replaced the button at the proximal phalanx of the finger with a limit-switch button on the tip of the finger-worn device, as shown in Figure 7. The color sensor is attached to the limit switch. The purpose of this design is to avoid affecting the pointing accuracy when the users simultaneously point the device and click a button on the proximal phalanx. With the new design, the users can click the button by simply tilting the finger forward, and they also get tactile click feedback on their fingertip.
5.2 RFID Sensing for Color Reuse
In order to enable the reuse of colors across different tactile diagrams, we introduced a mechanism to support multiple audio-color mapping profiles. This was achieved by embedding an RFID-reader coil in the FingerTalkie device. One unique RFID tag was attached to each tactile diagram. Before reading the main content, the user scans the tag to load the color-audio mapping profile of the current diagram. A micro 125 kHz RFID module was embedded on top of the Arduino Nano. We made a sandwiched arrangement of the Arduino Nano, a much smaller HC-05 Bluetooth chip, and the RFID chip, creating a compact arrangement of circuits on top of the finger-worn platform. An RFID coil with a diameter of 15 mm was placed on top of the limit switch to support the selection of the audio-color mapping profile through the offset pointing interaction.
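The profile-selection logic can be sketched as follows. The message format, tag IDs, and file names are hypothetical, and play=print stands in for real audio output; only the idea of switching the active color-audio map on a tag scan is taken from the design above.

```python
# Assumed host-side dispatch: RFID scans select a profile, clicks look up audio.
PROFILES = {
    "3F00215A6B": ("Cell diagram", {(0, 0, 255): "cell/nucleus.wav"}),
    "3F00219C01": ("Campus map", {(0, 0, 255): "map/library.wav"}),
}

active = None  # profile of the most recently scanned diagram

def nearest(rgb, colors):
    return min(colors, key=lambda ref: sum((a - b) ** 2 for a, b in zip(rgb, ref)))

def handle_message(msg, play=print):
    """Dispatch one line from the device; `play` stands in for audio playback."""
    global active
    parts = msg.split()
    if not parts:
        return
    if parts[0] == "TAG":
        active = PROFILES.get(parts[1])
        if active:
            play(f"Selected: {active[0]}")  # announce the page title
    elif parts[0] == "CLICK" and active:
        rgb = tuple(int(v) for v in parts[1:4])
        play(active[1][nearest(rgb, active[1])])  # audio file for pointed color

handle_message("TAG 3F00215A6B")  # -> Selected: Cell diagram
handle_message("CLICK 10 5 240")  # -> cell/nucleus.wav
```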
5.3 Interaction Flow
The user begins exploring a tactile diagram by hovering the FingerTalkie device over the RFID tag, which is placed at the top-left corner of the tactile diagram and marked by a small tactile dot. The page selection is indicated by audio feedback denoting the page number or title of the diagram. The user can then move the finger to the rest of the diagram for further exploration. To retrieve audio information about a colored area, the user applies the point-and-click technique by pointing to the area with an offset and tilting the finger to click.
6 EVALUATING THE OFFSET POINT-AND-CLICK TECHNIQUE
We have designed FingerTalkie with a new interaction technique that requires users to point to areas with an offset distance and tilt to click. Can users perform it efficiently and accurately? To answer this question, we conducted a controlled experiment to formally evaluate the performance of this new technique and the usability of the FingerTalkie device. Participants were asked to use the FingerTalkie device to point and click within the tactile areas of predefined graphical shapes. The following hypotheses were tested:
•H1: It is faster to select larger tactile areas than smaller ones.
•H2: It is slower to perform a correct click on areas with sharper angles.
•H3: It is more error-prone to select smaller tactile areas than larger ones.
•H4: It yields more errors to select shapes with sharper angles.
6.1 Design
We employed a 4 × 3 within-subject experimental design with two independent factors: Shape (Circle, Square, Triangle, and Star) and Size (Small, Medium, Large). The tactile diagrams we used were made of flashcards with a size of 20 × 18 cm. The tactile shapes were created by laser-cutting a thick paper board, which gave a 1.5 mm tactile elevation for the tactile shapes. We used tactile diagrams of four basic figures (circle, triangle, square, and star), chosen based on the increasing number of edges and corners and the decreasing angular measurements between adjacent sides. We made 3 different sizes (large, medium, and small) of each shape, as shown in Figure 8. The large size of each shape was made such that it could be inscribed in a circle of 5 cm. The medium size was set to 40% (2 cm) of the large size, and the smallest size to 20% (1 cm). According to the tactile graphics guidelines [40], the minimum area that can be perceived on a tactile diagram is 25.4 mm × 12.5 mm. We chose our smallest size slightly below this threshold to include the worst-case scenario.
All the elevated shapes were blue, and the surrounding area was white, as shown in Figure 8. All the shapes were placed at the vertical center of the flashcard. The bottom of each shape was at a fixed distance from the bottom of the flashcard, as seen in Figure 8. This was done in order to maintain consistency while exploring the shapes and to mitigate shape and size bias.
6.2 Participants
To eliminate biases caused by prior experience with tactile diagrams, we recruited 12 sighted users (5 female) and blindfolded them during the experiment. They were recruited from a local university and were aged between 25 and 35 years (Mean=30, SD=2.8). 8 out of 12 participants were right-handed. None of them had any prior experience in using tactile diagrams.
Fig. 8. Tactile flashcards
Fig. 9. Testing setup
6.3 Apparatus
The testing setup involved the finger-worn device connected to an Arduino Nano, which interfaced with a laptop. The testing table, as shown in Figure 9, consisted of a fixed slot from which the flashcards could be removed and replaced manually by the moderator. A press button (Figure 9) was placed beneath the flashcard slot in order to trigger the start command whenever the user was ready to explore the next diagram.
6.4 Task and Procedure
The experiment begins with a training session before going into the measured session. The participants are blindfolded and asked to wear the FingerTalkie prototype. During the training session, the participants are briefed about the motive of the experiment and also guided through the actions to be performed during the tests. A dummy tactile flashcard of a blue square (side of 20 mm) is used for the demo session. In order to avoid bias, the shape and position of the tactile image on the flashcard are not revealed or explained. The participants are asked to explore the tactile flashcard and to point-and-click within the area of the tactile shape. When a click is received while pointing within the shape, Tone A (‘Glass’ sound file in the MacOS sound effects) is played to notify the correct operation. When the point-and-click occurs outside the tactile boundary (the white area), Tone B (‘Basso’ sound file in the MacOS sound effects) is played to denote the error. The participants are allowed to practice the clicks as many times as they want during the training session. The training session for each participant took about 5 minutes.
During the measured session, the participants are asked to register correct clicks for the given tactile flashcards as fast and as accurately as possible. The moderator gives an audio cue to notify the participants every time a tactile flashcard is replaced. The participant then has to press the start button at the bottom of the setup (Figure 9), explore the flashcard, point within the boundary of the tactile area, and perform a click. Once a correct click is received, the moderator replaces the flashcard, and the participant starts the next trial until all trials are finished. If the participant performs a wrong click, they can try as many times as they want to achieve the correct click, until the session reaches the timeout (75 seconds). The order of trials in each condition is counterbalanced with a Latin square. This design results in (4 shapes) × (3 sizes) × (2 replications) × (12 participants) = 288 measured trials.
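The paper specifies only that a Latin square was used, so the following sketch shows one standard construction of a balanced Latin square over the 12 shape-size conditions (valid when the number of conditions is even) and how a per-participant trial order could be derived from it; the replications could then be obtained, e.g., by running the sequence twice.

```python
# One possible counterbalancing scheme (assumed; the exact square is not given).
from itertools import product

conditions = list(product(["circle", "square", "triangle", "star"],
                          ["small", "medium", "large"]))  # 12 conditions

def balanced_latin_square(n):
    """Row i gives the condition order for participant i (n must be even)."""
    seq, lo, hi = [0], 1, n - 1
    for k in range(1, n):           # interleave 0, 1, n-1, 2, n-2, ...
        if k % 2 == 1:
            seq.append(lo); lo += 1
        else:
            seq.append(hi); hi -= 1
    return [[(x + i) % n for x in seq] for i in range(n)]

orders = balanced_latin_square(len(conditions))
print([conditions[i] for i in orders[0]])  # trial order for participant 1
```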
6.5 Data Collection
We collected: 1) the Task Completion Time, recorded from pressing the start button to achieving a correct click (a click within the boundary of the shape on the flashcard), and 2) the error rate, logged as the number of wrong clicks on each flashcard before the correct click was registered.
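A minimal sketch of this per-trial logging, with event names assumed for illustration:

```python
# Records completion time and wrong-click count for one flashcard trial.
import time

class TrialLog:
    def __init__(self, shape, size):
        self.shape, self.size = shape, size
        self.errors = 0
        self.t0 = None

    def start(self):        # participant presses the start button
        self.t0 = time.monotonic()

    def wrong_click(self):  # click landed outside the tactile boundary
        self.errors += 1

    def correct_click(self):  # click inside the shape ends the trial
        return {"shape": self.shape, "size": self.size, "errors": self.errors,
                "completion_time": time.monotonic() - self.t0}

trial = TrialLog("star", "small")
trial.start()
trial.wrong_click()
print(trial.correct_click())  # e.g. {'shape': 'star', ..., 'completion_time': 2.4}
```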
6.6 Results
We post-processed the collected data by removing four outliers that deviated from the mean by more than two standard deviations. A two-way repeated-measures ANOVA was then performed on the TaskCompletionTime and the NumberOfErrors, with the Size and the Shape as the independent variables. The mean time and the mean number of errors for achieving a correct click for all the shapes and sizes are shown in Figure 10 and Figure 11.
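Assuming the trials are stored in a long-format table, the outlier removal and the repeated-measures ANOVA described here could be reproduced along the following lines with pandas and statsmodels; the file and column names are placeholders.

```python
# Sketch of the analysis pipeline (assumed data layout, not the authors' script).
import pandas as pd
from statsmodels.stats.anova import AnovaRM

df = pd.read_csv("trials.csv")  # columns: pid, size, shape, time, errors

for dv in ["time", "errors"]:
    # Drop observations more than two standard deviations from the mean.
    m, s = df[dv].mean(), df[dv].std()
    clean = df[(df[dv] - m).abs() <= 2 * s]
    # Two-way repeated-measures ANOVA; aggregate_func averages the two
    # replications per cell (each participant must keep data in every cell).
    fit = AnovaRM(clean, depvar=dv, subject="pid",
                  within=["size", "shape"], aggregate_func="mean").fit()
    print(fit)  # F and p values for Size, Shape, and their interaction
```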
6.6.1 H1-TaskCompletionTime (Size). There was a significant effect of Size on TaskCompletionTime [F(2,22)=3.94, p<0.05, ηp² = 0.264]. Post-hoc pair-wise comparison showed a significant difference in the TaskCompletionTime between the Large and the Small tactile shapes (p < 0.005), with a mean time of 9.01 s (SD=1.3) for Small and 5.1 s (SD=0.662) for Large. The mean time for a correct click at the Medium size was 6.49 s (SD=1.41), but there was no significant difference between Medium and Small or between Medium and Large. The mean task completion times of the small, medium, and large sizes show that the large versions of all shapes were easily identifiable. Hence, H1 is fully supported.
6.6.2 H2-TaskCompletionTime (Shape). H2 is partially supported. We found a significant effect of Shape on TaskCompletionTime [F(3,33)=12.881, p<0.05, ηp² = 0.539]. No significant interaction effect between Size and Shape was identified.
Fig. 11. Mean number of errors for the shapes based on their sizes
Post-hoc pair-wise comparison showed that, for the small size, the star shape took significantly longer than the triangle (p<0.05), the circle (p<0.05), and the square (p<0.05), while the triangle took significantly longer than the square (p<0.05). No significant difference was found between the square and the circle or between the triangle and the circle. For the medium size, significant differences in task completion time were found between star and triangle (p<0.05), star and circle (p<0.05), and star and square (p<0.05), while there was no significant difference among the triangle, the circle, and the square. For the large size, there was no significant difference among the four shapes. The mean time for reaching a correct click for each shape in each size is shown in Figure 10. We hypothesized (H2) that the sharper the angles of a shape, the longer it would take to make a correct click. As expected, smaller tactile areas were more sensitive to this effect. The results were as predicted, except that the circle performed worse than the square in all sizes, although no significant difference was found between circle and square. We speculate that one major reason for the square performing better than the circle in our experiment is the rectangular shape of the color sensor, which aligns better with straight lines than with curves. While future investigation is needed, this raises an alert on the potential impact of the shape of the sensing area of any sensing technology to be used in this context.
6.6.3 H3-Number of Errors (Size). H3 is fully supported. We found a significant effect of Size on NumberOfErrors [F(2,22)=9.82, p<0.05, ηp² = 0.472]. Post-hoc comparison showed that the small size yielded a significantly larger number of errors than the large size (p < 0.005). The number of errors for the small size was also significantly larger than that for the medium size (p < 0.05), while there was no significant difference between the medium and large sizes. The mean numbers of errors for the small, medium, and large shapes were 1.05 (SD=0.225), 0.521 (SD=0.235), and 0.26 (SD=0.09), respectively. In general, the error rates are rather low: most trials were completed in one or two attempts even at the smallest size.
6.6.4 H4-Number of Errors (Shape). H4 is partially supported, in a similar way to H2. There was a significant effect of Shape on NumberOfErrors [F(3,33)=10.96, p<0.001, ηp² = 0.499]. Post-hoc pair-wise comparison showed that the star shape yielded significantly more errors compared to the square (small size: p<0.005, medium size: p<0.005, large size:
p<0.005), the triangle (small size: p<0.05, medium size: p<0.05, large size: p<0.05), and the circle (small size: p<0.005, medium size: p<0.005, large size: p<0.005). There was no significant difference between the square and the circle across the different sizes, whereas the square yielded significantly fewer errors than the triangle (small size: p<0.05, medium size: p<0.05, large size: p<0.05). Figure 11 shows the detailed results for the number of errors across the different shapes and sizes. The error rate is consistent with the task completion time, which accords with our observation that failed attempts were a major cause of slower performance.
Overall, FingerTalkie was effective in selecting a wide range of tactile shapes. Participants could make a correct selection easily in one or two attempts in most cases, even when the size was smaller than the smallest tactile areas used in the real world. Effects of sharp angles were shown in smaller tactile shapes. A potential effect of the shape of the sensing area was uncovered, which should be paid attention to in the future development of similar technologies.
7 FOCUS-GROUP INTERVIEW WITH BLIND USERS
The aim of the focus-group interview was to obtain a deeper understanding of key factors such as wearability and form factor, novelty and usefulness of the device, difficulty in using the device, learnability, cost of the device, audio data input, and the sharing interface.
7.1 Participants
The subjective feedback session was conducted with 8 congenitally blind participants. The participant group consisted of 1 adult male (Age=35) and 7 children aged from 11 to 14 (Mean=13.0, SD=1.0). All the users were right-handed.
7.2 Apparatus
We used two tactile figures: squares of two different sizes (5 cm and 3 cm) placed side by side to demonstrate the working of the device. One of the squares was filled with blue color, while the other, smaller square was filled with red color. Each square color was annotated with a discrete audio clip that could be heard through the laptop speakers. The finger-worn device used for the evaluation was the standalone prototype, which was connected to an external battery pack using a USB cable.
7.3 Procedure
The hands-on session was done in an informal setup where the participants were briefed initially about the concept of a finger-wearable device and the nature of the problem that it solves. The users were instructed to wear the device, and they were guided to understand the position of the sensor on the tip. They were also instructed to touch the tip of the device to confirm its angle of tilt. In this way, they could get a clear understanding of the distance of the sensor from the tip of the finger. The offset point-and-click mechanism was explained to each participant. The whole process was administered by a sighted external helper. The participants were then asked to explore the tactile diagram and perform the correct clicking freely within the boundaries of the squares. Tone A (the ‘Glass’ sound file in the MacOS sound effects) was played for correct clicks on the big and small squares. Tone B (the ‘Basso’ sound file in the MacOS sound effects) was played for a wrong click outside the tactile boundary. Each participant experienced the device and performed clicking for approximately 10 minutes.
7.4 Results
After the exploratory hands-on session, all participants were asked to provide feedback regarding the following factors:
7.4.1 Usability of the device. After wearing the device for about 5 minutes, all the users were impressed by the uniqueness of the device. It was also noted that none of the participants had ever used a finger-wearable interactive device in the past. On the other hand, 3 out of 8 users had used or were familiar with the Pen Friend/annotating pens [22] for audio-tactile markings. A Pen Friend user said, “Reusability of the colors is a really good feature as we don’t have to worry about the tags running out.” Another user said, “The best thing I like about the finger device [FingerTalkie] when compared to Pen Friend is that I can use both my hands to explore the tactile diagrams.” One user had prior experience in using an image-processing-based audio-tactile system where a smartphone/camera is placed on a vertical stand on top of the tactile diagram. To use such a system, the user needs to affix a sticker to his/her index finger to explore the tactile diagram. This user stated, “Though this system enabled me to use both the hands for tactile exploration, it was cumbersome to set up and calibrate the phone with the stand, and sometimes it didn’t work as expected due to the poor ambient lighting or improper positioning of the smartphone.” While all the users agreed on the application and usefulness of the device for audio annotation of tactile graphics, some even suggested different levels of applications. A user stated, “I can use this device for annotating everyday objects like medicines and other personal artifacts for identification. It will save me a lot of time in printing Braille and sticking it to the objects.”
7.4.2 Learnability/Ease of use/Adaptability. After wearing the device, the users were able to understand the relation of the sensor, and its distance and angle with respect to the tactile surface, after trying for a couple of minutes. Overall, the participants showed great interest in wearing it and exploring the different sounds while moving between the two different tactile images. All the users stated that they could adapt to this clicking method easily by using it for a couple of hours. Asked about the ease of use, a participant stated, “This is like a magic device. I just have to tilt my (index) finger to get the audio description of the place being pointed at. Getting audio information from the tactile diagram has never been so easy.” Another user said, “I have used a mobile phone application which can detect the boundaries of the tactile diagram using the camera and gives audio output corresponding to the area being pointed at and double-tapped. But for that, I require a stand on which the mobile phone should be fixed first, and I should also make sure that the room is well lit to get the best result. With this device, the advantage I find over the others is that it is lightweight, portable, and it works irrespective of the lighting conditions in the room.”
7.4.3 Wearability. It was observed that the finger-wearable device could fit perfectly on the index finger for seven out of eight participants with only minor adjustments of the strap. The one exception was a case in which the device was extremely loose and tended to sway while the user tried to perform a click. One of the participants claimed, “I don’t think it’s complicated and I can wear it on my own. It is easy to wear and I can adjust it by myself.” The device was found to protrude beyond the index finger in half of the cases; however, this did not affect the usability of the device. The users were still able to make the offset click without fail.
7.4.4 Need for a mobile application for user data input. The majority of users were eager to know the mechanism and the software interface by which the audio can be tagged to a specified color. The child participants were eager to know if they would be able to do it on their own. Four out of five child participants insisted that a mobile or computer application should be made accessible to VI people so that they can do it on their own without external assistance. A user said, “Being proficient in using smartphones, I am disappointed by the fact that most of the mobile applications
are not designed with accessibility in mind, and hence they are rendered useless.” One of the special educators said, “If the teachers could themselves make an audio-color profile for each diagram or chapter and then share it with the students, it would save a lot of time for both the students and the special educators.”
In summary, the participants showed enthusiasm for using FingerTalkie in their daily and educational activities. Their feedback shows the promise of FingerTalkie for providing an intuitive and seamless user experience. Most participants expressed appreciation for the simple design of the device. The offset point-and-click method appeared to be easy to learn and perform. Overall, the users liked the experience of FingerTalkie and suggested a sturdier design and an accessible back-end software system.
8 LIMITATIONS AND FUTURE WORK
Though we were able to address most of the usability and hardware drawbacks of FingerTalkie during the iterative process, the following factors could be improved in future designs:
During the entire design and evaluation process, we used only blue, green, and red in the tactile diagrams, to achieve better detection accuracy. A better color sensor with noise-filtering algorithms and a well-calibrated sensor position could help in efficiently detecting more colors on a single tactile diagram.
Though the final prototype has a compact wearable form factor, it is still bulky, as we used off-the-shelf hardware components. It could be further miniaturized by the use of a custom PCB design and SMD electronic components. In order to achieve a comprehensive and ready-to-use system, an accessible and stable back-end PC software or mobile app should be developed in the near future. The back-end software/mobile application should include features for creating and sharing audio-color mapping profiles. Last but not least, we will also explore other modalities of on-finger feedback (e.g., vibration [32], thermal [49], poking [19], etc.) for VI users comprehending tactile diagrams.
9 CONCLUSION
In this paper, we introduced FingerTalkie, a novel finger-worn device with a new offset point-and-click technique that enables easy access to audio information on tactile diagrams. The design requirements and choices were established through an iterative user-centered design process. FingerTalkie is an easy-to-use, reliable, and inexpensive solution that can help the VI reduce the bulkiness of tactile textbooks by eliminating the Braille legend pages. The offset point-and-click technique can easily be performed even with the smallest tactile areas suggested by the tactile graphics guidelines. The subjective feedback from VI users shows high acceptance of FingerTalkie in terms of dual-hand exploration ability when compared to the mainstream audio-tactile devices in the market. As high-contrast colored tactile diagrams are gaining popularity amongst people with low or partial vision, we aim to use the same printed colors to build the color palette for FingerTalkie. In addition, we envision that FingerTalkie can be used not only by VI users but also by sighted users with special needs, such as the elderly and children, to annotate everyday physical objects, such as medicine containers and textbooks. Due to the versatility of the design with the point-and-click method, researchers can in the future adopt such techniques in other devices and systems where the fingertips should not be occluded while performing touch input.
REFERENCES
[1] Hend S Al-Khalifa. 2008. Utilizing QR code and mobile phones for blinds and visually impaired people. In International Conference on Computers for Handicapped Persons. Springer, 1065–1069.
[2] Daniel Ashbrook, Patrick Baudisch, and Sean White. 2011. Nenya: subtle and eyes-free mobile input with a magnetically-tracked finger ring. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2043–2046.
[3] Catherine M Baker, Lauren R Milne, Jeffrey Scofield, Cynthia L Bennett, and Richard E Ladner. 2014. Tactile graphics with a voice: using QR codes to access text in tactile graphics. In Proceedings of the 16th International ACM SIGACCESS Conference on Computers & Accessibility. ACM, 75–82.
[4] Olivier Bau, Ivan Poupyrev, Ali Israr, and Chris Harrison. 2010. TeslaTouch: electrovibration for touch surfaces. In Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology. ACM, 283–292.
[5] Roger Boldu, Alexandru Dancu, Denys JC Matthies, Thisum Buddhika, Shamane Siriwardhana, and Suranga Nanayakkara. 2018. FingerReader2.0: Designing and Evaluating a Wearable Finger-Worn Camera to Assist People with Visual Impairments while Shopping. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 2, 3 (2018), 94.
[6] Anke Brock. 2013. Interactive maps for visually impaired people: design, usability and spatial cognition. Ph.D. Dissertation.
[7] Anke M. Brock, Philippe Truillet, Bernard Oriola, Delphine Picard, and Christophe Jouffrais. 2015. Interactivity Improves Usability of Geographic Maps for Visually Impaired People. Hum.-Comput. Interact. 30, 2 (March 2015), 156–194. https://doi.org/10.1080/07370024.2014.924412
[8] Liwei Chan, Rong-Hao Liang, Ming-Chang Tsai, Kai-Yin Cheng, Chao-Huai Su, Mike Y Chen, Wen-Huang Cheng, and Bing-Yu Chen. 2013. FingerPad: private and subtle interaction using fingertips. In Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology. ACM, 255–260.
[9] Julie Ducasse, Anke M Brock, and Christophe Jouffrais. 2018. Accessible interactive maps for visually impaired users. In Mobility of Visually Impaired People. Springer, 537–584.
[10] Polly Edman. 1992. Tactile graphics. American Foundation for the Blind.
[11] Emerson Foulke. 1982. Reading braille. Tactual Perception: A Sourcebook 168 (1982).
[12] Masaaki Fukumoto and Yasuhito Suenaga. 1994. “FingeRing”: a full-time wearable interface. In Conference Companion on Human Factors in Computing Systems. ACM, 81–82.
[13] Giovanni Fusco and Valerie S Morash. 2015. The tactile graphics helper: providing audio clarification for tactile graphics using machine vision. In Proceedings of the 17th International ACM SIGACCESS Conference on Computers & Accessibility. 97–106.
[14] John A Gardner. 2002. Access by blind students and professionals to mainstream math and science. In International Conference on Computers for Handicapped Persons. Springer, 502–507.
[15] Sarthak Ghosh, Hyeong Cheol Kim, Yang Cao, Arne Wessels, Simon T Perrault, and Shengdong Zhao. 2016. Ringteraction: Coordinated Thumb-index Interaction Using a Ring. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems. ACM, 2640–2647.
[16] Arthur C Grant, Mahesh C Thiagarajah, and Krishnankutty Sathian. 2000. Tactile perception in blind Braille readers: a psychophysical study of acuity and hyperacuity using gratings and dot patterns. Perception & Psychophysics 62, 2 (2000), 301–312.
[17] Morton A Heller. 1989. Picture and pattern perception in the sighted and the blind: the advantage of the late blind. Perception 18, 3 (1989), 379–389.
[18] Ronald AL Hinton. 1993. Tactile and audio-tactile images as vehicles for learning. COLLOQUES-INSTITUT NATIONAL DE LA SANTE ET DE LA RECHERCHE MEDICALE COLLOQUES ET SEMINAIRES (1993), 169–169.
[19] Seungwoo Je, Minkyeong Lee, Yoonji Kim, Liwei Chan, Xing-Dong Yang, and Andrea Bianchi. 2018. PokeRing: Notifications by Poking Around the Finger. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, 542.
[20] Lei Jing, Yinghui Zhou, Zixue Cheng, and Tongjun Huang. 2012. Magic ring: A finger-worn device for multiple appliances control using static finger gestures. Sensors 12, 5 (2012), 5775–5790.
[21] Shaun K Kane, Brian Frey, and Jacob O Wobbrock. 2013. Access lens: a gesture-based screen reader for real-world documents. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 347–350.
[22] Deborah Kendrick. 2011. PenFriend and Touch Memo: A Comparison of Labeling Tools. AFB AccessWorld Magazine 12, 9 (2011).
[23] Wolf Kienzle and Ken Hinckley. 2014. LightRing: always-available 2D input on any surface. In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology. ACM, 157–160.
[24] Myron W Krueger and Deborah Gilden. 1997. KnowWhere: an audio/spatial interface for blind people. Georgia Institute of Technology.
[25] Steven Landau and Lesley Wells. 2003. Merging tactile sensory input and audio data by means of the Talking Tactile Tablet. In Proceedings of EuroHaptics, Vol. 3. 414–418.
[26] Vincent Lévesque. 2009. Virtual display of tactile graphics and Braille by lateral skin deformation. Ph.D. Dissertation. McGill University Library.
[27] Hyunchul Lim, Jungmin Chung, Changhoon Oh, SoHyun Park, and Bongwon Suh. 2016. OctaRing: Examining Pressure-Sensitive Multi-Touch Input on a Finger Ring Device. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology. ACM, 223–224.
[28] Hiroki Minagawa, Noboru Ohnishi, and Noboru Sugie. 1996. Tactile-audio diagram for blind persons. IEEE Transactions on Rehabilitation Engineering 4, 4 (1996), 431–437.
[29] Valerie S Morash, Allison E Connell Pensky, Steven TW Tseng, and Joshua A Miele. 2014. Effects of using multiple hands and fingers on haptic performance in individuals who are blind. Perception 43, 6 (2014), 569–588.
[30] Suranga Nanayakkara, Roy Shilkrot, and Pattie Maes. 2012. EyeRing: an eye on a finger. In CHI '12 Extended Abstracts on Human Factors in Computing Systems. ACM, 1047–1050.
[31] Masa Ogata, Yuta Sugiura, Hirotaka Osawa, and Michita Imai. 2012. iRing: intelligent ring using infrared reflection. In Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology. ACM, 131–136.
[32] Thijs Roumen, Simon T Perrault, and Shengdong Zhao. 2015. NotiRing: A comparative study of notification channels for wearable interactive rings. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. ACM, 2497–2500.
[33] William Schiff and Emerson Foulke. 1982. Tactual Perception: A Sourcebook. Cambridge University Press.
[34] Gottfried Seisenbacher, Peter Mayer, Paul Panek, and Wolfgang L Zagler. 2005. 3D-Finger for Blind and Visually Impaired Students - Idea and Feasibility Study. Assistive Technology: From Virtuality to Reality: AAATE 2005 16 (2005), 73.
[35] Lei Shi, Ross McLachlan, Yuhang Zhao, and Shiri Azenkot. 2016. Magic touch: Interacting with 3D printed graphics. In Proceedings of the 18th International ACM SIGACCESS Conference on Computers and Accessibility. ACM, 329–330.
[36] Lei Shi, Yuhang Zhao, and Shiri Azenkot. 2017. Markit and Talkit: A Low-Barrier Toolkit to Augment 3D Printed Models with Audio Annotations. In Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology. ACM, 493–506.
[37] Roy Shilkrot, Jochen Huber, Wong Meng Ee, Pattie Maes, and Suranga Chandima Nanayakkara. 2015. FingerReader: a wearable device to explore printed text on the go. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. ACM, 2363–2372.
[38] Adam J Sporka, Vladislav Němec, and Pavel Slavík. 2005. Tangible newspaper for the visually impaired users. In CHI '05 Extended Abstracts on Human Factors in Computing Systems. ACM, 1809–1812.
[39] Andrew F Tatham. 1991. The design of tactile maps: theoretical and practical considerations. Proceedings of the International Cartographic Association: Mapping the Nations (1991), 157–166.
[40] The Braille Authority of North America. 2012. Guidelines and Standards for Tactile Graphics. http://www.brailleauthority.org/tg/web-manual/index.html
[41] Catherine Thinus-Blanc and Florence Gaunet. 1997. Representation of space in blind persons: vision as a spatial sense? Psychological Bulletin 121, 1 (1997), 20.
[42] Harald Vogt. 2002. Efficient object identification with passive RFID tags. In International Conference on Pervasive Computing. Springer, 98–113.
[43] Samuel White, Hanjie Ji, and Jeffrey P Bigham. 2010. EasySnap: real-time audio feedback for blind photography. In Adjunct Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology. ACM, 409–410.
[44] Mathias Wilhelm, Daniel Krakowczyk, Frank Trollmann, and Sahin Albayrak. 2015. eRing: multiple finger gesture recognition with one ring using an electric field. In Proceedings of the 2nd International Workshop on Sensor-based Activity Recognition and Interaction. ACM, 7.
[45] Pui Chung Wong, Kening Zhu, and Hongbo Fu. 2018. FingerT9: Leveraging thumb-to-finger interaction for same-side-hand text entry on smartwatches. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, 178.
[46] Xing-Dong Yang, Tovi Grossman, Daniel Wigdor, and George Fitzmaurice. 2012. Magic finger: always-available input through finger instrumentation. In Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology. ACM, 147–156.
[47] Yu Zhong, Pierre J Garrigues, and Jeffrey P Bigham. 2013. Real time object scanning using a mobile phone and cloud-based visual search engine. In Proceedings of the 15th International ACM SIGACCESS Conference on Computers and Accessibility. ACM, 20.
[48] Yu Zhong, Walter S Lasecki, Erin Brady, and Jeffrey P Bigham. 2015. RegionSpeak: Quick comprehensive spatial descriptions of complex images for blind users. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. ACM, 2353–2362.
[49] Kening Zhu, Simon Perrault, Taizhou Chen, Shaoyu Cai, and Roshan Lalintha Peiris. 2019. A Sense of Ice and Fire: Exploring Thermal Feedback with Multiple Thermoelectric-cooling Elements on a Smart Ring. International Journal of Human-Computer Studies (2019).