Empowering Persons with Deafblindness: Designing an
Intelligent Assistive Wearable in the SUITCEYES Project
Oliver Korn
Offenburg University
Badstr. 24, 77652 Offenburg,
Germany
oliver.korn@acm.org
Raymond Holt
University of Leeds
Woodhouse Lane, Leeds,
West Yorkshire, LS2 9JT, UK
R.J.Holt@leeds.ac.uk
Efstratios Kontopoulos
CERTH-ITI
6th Km Charilaou-Thermi Road,
57001 Thessaloniki, Greece
skontopo@iti.gr
Astrid M.L. Kappers
VU Amsterdam, Van der
Boechorststraat 9, 1081 BT
Amsterdam, The Netherlands
a.m.l.kappers@vu.nl
Nils-Krister Persson
University of Borås
50190 Borås, Sweden
Nils-Krister.Persson@hb.se
Nasrine Olson
University of Borås
50190 Borås, Sweden
Nasrine.Olson@hb.se
ABSTRACT
Deafblindness is a condition that limits communication
capabilities primarily to the haptic channel. In the EU-funded
project SUITCEYES we design a system which allows haptic and
thermal communication via soft interfaces and textiles. Based on
user needs and informed by disability studies, we combine
elements from smart textiles, sensors, semantic technologies,
image processing, face and object recognition, machine learning,
affective computing, and gamification. In this work, we present
the underlying concepts and the overall design vision of the
resulting assistive smart wearable.
CCS Concepts
• Human-centered computing~Empirical studies in HCI
• Human-centered computing~Collaborative and social computing devices
• Human-centered computing~User studies
• Human-centered computing~Empirical studies in interaction design
• Human-centered computing~Accessibility theory, concepts and paradigms
• Human-centered computing~Accessibility systems and tools
• Social and professional topics~History of hardware
• Social and professional topics~Codes of ethics
• Social and professional topics~Assistive technologies
• Computing methodologies~Cognitive robotics
• Computing methodologies~Robotic planning
• Applied computing~Consumer health
Keywords
Deafblindness; Assistive Technologies; Haptics; Smart Textiles;
Wearables; Visual Impairments; Hearing Impairment; Gamification.
1. INTRODUCTION
Innovations in information and communication technology improve
the quality of life for many people. However, most solutions rely on
vision and sound. Thus, they often exclude people with severe dual
vision and hearing impairments. Deafblindness is such a condition,
limiting communication primarily to the haptic channel (Figure 1).
Figure 1: There are about 2.5 million people with
deafblindness in Europe. [Image courtesy of LightHouse for
the Blind and Visually Impaired, see http://lighthouse-sf.org.]
This entails multiple challenges at different levels for a wide
range of people, from individuals with deafblindness to society as
a whole. At the individual level, a person with deafblindness is typically reliant on other people and, as a result, has limited autonomy. A person who can see and hear consciously or unconsciously observes and absorbs an enormous amount of stimuli from the environment. In comparison, the information communicated to a person with deafblindness is only a very small, selected fragment.
At a societal level, the costs involved in the required human interventions are often discussed. However, the contributions of this community, missed due to the lack of adequate inclusion measures in our social structures, are an even greater loss. We
need to improve the accessibility of the environment for all
people, regardless of their background or abilities.
Though rare at birth, deafblindness can be acquired due to
different causes. Currently, an estimated 2.5 million people with deafblindness are living in Europe. Due to the increasing life
expectancy and the aging population, this number is forecasted to
rise substantially by 2030. The members of the EU-funded project
SUITCEYES acknowledge that changes in society are needed to address these challenges. At the same time, the project takes first steps towards this objective: European policy developments are
evaluated to improve accessibility and inclusion. The results of
these analyses will inform and guide the development of the
project’s technical solution.
In SUITCEYES, three main challenges of users with deafblindness are addressed:
• perception of the environment;
• communication and exchange of semantic content;
• learning and joyful life experiences.
As limited communication is the predominant problem, state-of-the-art technologies will be explored and a haptic communication
interface will be developed to enable an improved context-aware
mode of communication. We aim to extract environmental and
linguistic clues and translate them into haptic and thermal signals
that will be communicated to the user via a “haptic intelligent,
personalized, interface” (HIPI).
In the SUITCEYES project, we are combining elements from
smart textiles, sensors, semantic technologies, image processing,
face and object recognition, machine learning, affective
computing, and gamification in order to develop a novel mode of
communication. Such technology-driven innovations are often
welcomed by people with deafblindness and disabilities due to the
independence they can offer. However, in many cases they remain
unused and discarded [1] for two main reasons:
1. The development may not take into account the needs and
preferences of the users, for example by overemphasizing the
priorities of professionals. To address this, people with
deafblindness are involved in the project as advisors at all stages.
2. Policies may fail to promote access. Although most countries have signed the UN Convention on the Rights of Persons with Disabilities [2] and the European Accessibility Act is already under development, knowledge of technological possibilities remains restricted. The project addresses such aspects of environmental accessibility by linking technological developments to national and international policy and practice on accessibility.
Designed based on the expressed needs and preferences of the users, the HIPI system will afford new means of perceiving the environment as well as user-triggered communication. This in
turn allows the users to take a more active part in society,
improving possibilities for inclusion in social life and
employment. In addition, learning experiences will be enriched by
gamification and mediated social interactions. While the scope of
the project is broader, the focus of this paper is to present the
outline for the technical solution proposed in the project.
2. DEAFBLINDNESS
A definition of deafblindness is provided by the Nordic Welfare
Center [3] as “a combined vision and hearing impairment of such
severity that it is hard for the impaired senses to compensate for
each other. Thus, deafblindness is a distinct disability.” The
severity of the condition can vary: at one end of the spectrum there are profound impairments of both the visual and hearing senses, while at the other end, a slight degree of sight or hearing may remain. Variations
in conditions can also stem from the causes of deafblindness.
Deafblindness can be congenital or acquired through accident,
illness, or age. Furthermore, deafblindness can be accompanied by various levels of physical and/or cognitive abilities. Figure 3 provides an overview of these different levels. The aim of the
project is to improve perception, communication and quality of
life for people with deafblindness.
Research on deafblindness and related issues remains limited. As
of February 2018, only 809 scholarly publications on
deafblindness could be found on the Web of Science database in
contrast to almost 400,000 items on blindness in the same
database. Of the publications on deafblindness, only 23 were to
some degree related to ‘haptic’, and of those only a handful
related to what is proposed in the SUITCEYES project (Figure 2).
Figure 3: Deafblindness: causes and cognitive impact.
SUITCEYES challenges and expected impact areas.
Figure 2: Network visualization of keyword co-occurrences of 809 publications using the data analysis tool VOSviewer.
Even research on the life and behavior of people with
deafblindness is scarce. In 2016 and 2017 two comprehensive
Swedish dissertations (deemed to be the first in the world) were
published focusing on people with Alström’s syndrome [4] and
with Usher’s syndrome [5]. In both cases, the researchers
identified constant pressures that lead to experiences of
overwhelming exhaustion. The number of people that a person
with deafblindness can communicate with and trust is very limited, not to mention the diminished possibilities of finding a job, earning money, or leading an independent life.
The cumulative effects of such difficulties are manifold and
varying, including high levels of depression, suicidal thoughts and
behavior, diminished life quality, worsening of cognitive abilities
and the development of neurological disorders. These researchers found that the most fundamental problems are interaction with the surroundings and communication with other people.
In 2015, the Journal of Deafblind Studies on Communication [6]
was established by the University of Groningen. In the last three
years, only 13 articles have been published there. SUITCEYES
addresses some of the communication challenges identified in earlier research, and recognizes that facilitating communication for people with deafblindness reduces stress levels and is crucial for improving their quality of life and health.
Furthermore, the solution developed within SUITCEYES will be
of help to family members and professional caregivers, facilitating
their work and enabling it to become more efficient and effective.
3. SOLUTION DESIGN
The overall objective of SUITCEYES is to improve the level of
independence and participation of people with deafblindness.
Together with experts, caregivers and future users, we aim to
augment communication, perception of the environment,
knowledge acquisition and the conduct of daily routines.
As introduced in section 1, our proposed solution is a haptic
intelligent, personalized, interface (HIPI) integrating elements
from smart textiles, sensors, semantic technologies, image
processing, face and object recognition, machine learning,
affective computing, and gamification.
The varied nature of the experiences and needs of persons with deafblindness implies that such a system will need to be modular and reconfigurable, so that it can be adapted to different
individuals’ needs. SUITCEYES aims to provide a first step
towards such a system, developing a suite of sensors and actuators
that can be combined and configured to fit a variety of user needs
and provide a basis for future research. The system’s elements are
shown in Figure 4.
Figure 4: Schematic overview of the system’s components.
A range of potential feedback modalities will be explored for inclusion in the HIPI: vibration, pressure, and temperature. We will also explore different ways in which these can be combined or positioned to provide distinct signals.
Likewise, a variety of sensors will be explored to provide
information about the environment: ultrasonic distance sensors to
detect proximity of obstacles, a camera feed to allow recognition
of objects or people, indoor positioning systems to help locate
objects, or radio frequency identification to identify when given
objects come near. A processing unit will be used to interpret
sensor input against a knowledge base and determine appropriate
feedback. Smart textiles will be used to accommodate sensors,
feedback units and the processing unit on different parts of the
body, either mounted on the textiles, or built into them, as
appropriate.
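To make the interplay of sensors, knowledge base, and actuators described above more concrete, the following Python sketch shows one way a processing unit could map sensor readings to feedback commands. It is a minimal illustration under assumed names and rules (the Reading class, the sensor labels, and the thresholds are all hypothetical), not the project's actual design:

```python
from dataclasses import dataclass

@dataclass
class Reading:
    sensor: str      # e.g. "ultrasonic", "camera", "rfid" (hypothetical labels)
    value: float     # e.g. distance in metres or a recognition confidence
    label: str = ""  # e.g. the name of a recognized object or person

def interpret(reading: Reading) -> dict:
    """Map a sensor reading to a feedback command.
    A real system would consult a knowledge base; fixed rules stand in here."""
    if reading.sensor == "ultrasonic" and reading.value < 1.0:
        # Obstacle closer than 1 m: vibration intensity grows with proximity.
        return {"actuator": "vibration", "intensity": 1.0 - reading.value}
    if reading.sensor == "camera" and reading.label:
        # A recognized face or object triggers a gentle thermal cue.
        return {"actuator": "thermal", "intensity": 0.3}
    return {"actuator": "none", "intensity": 0.0}

# Example: an obstacle detected 0.4 m away yields a strong vibration command.
print(interpret(Reading(sensor="ultrasonic", value=0.4)))
```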
3.1 Wearables and Smart Textiles
Wearables have become an important domain, embracing
watches, smartphones and smart glasses [7]. Still, textiles are
perhaps the ultimate class of wearables, playing a profound role in the daily lives of humans regardless of age, sex, health status, occupation, or activity level. Textiles are ever-present, offer high comfort and low weight, and are very close to the human body.
Inherent properties of textiles, such as pliability, drapability and softness, make them tactual objects from the start. Textiles can become "smart", active haptic communication units through added functionalities such as sensing [8], [9], monitoring [10], and actuation [11], [12] (Figure 5). In particular, textiles with the ability to adapt to the environment are often called smart textiles [13]. As
Profita et al. point out, “textile-based wearable computing systems
have the ability to assist and augment the human sensory network
by leveraging their close proximity with the skin to provide
contextual information to a user” [14].
Figure 5: Smart Textile with conductive elements. [Photo from
the Textile Showroom of University of Borås by Oliver Korn]
For users without disabilities, such smart garment interfaces can
offer a sensory augmentation. However, for persons with
impairments or disabilities, the benefits are substantial: smart
textiles can partially replace the impaired senses. It is already
possible to integrate a broad spectrum of mechanisms especially
for haptic communication within common textile processing
methods such as weaving, knitting, embroidery and sewing.
Mechanisms include vibrotactile [15] and pressure [16] modes; all of these will be explored to extend the communication space of deafblind users in the SUITCEYES project.
3.2 Haptic Psychophysics
Important factors in the design of assistive wearables that should
be taken into account are how well humans can discriminate and
recognize different stimulation patterns. Moreover, stimulation
should not be painful or irritating, the patterns should be relatively
easy to learn, and they should not require too much attention.
Although much research has been done in recent years (for example, on vibratory [17] or thermal [18] stimulation), design requirements for specific groups cannot simply be looked up in a "handbook".
In the SUITCEYES project, psychophysical experiments will
inform the designers about the requirements of persons with
deafblindness. In systematic and extensive tests, users will be exposed to specific types of stimulation to investigate its relevant aspects, such as intensity, resolution, and pattern identification. At first, we will focus on
dedicated laboratory set-ups, but at later stages, prototypes of
garments will be tested.
The psychophysical techniques and the subsequent statistical
analyses that will be employed in this project have already been
described [19]. Depending on the types and number of prototypes,
the most suitable method will be chosen. We foresee the
following possibilities:
• Discrimination: The task is to decide which of a pair of stimuli has the higher intensity of the property of interest. By repeating this with varying intensity differences, the minimum noticeable difference can be determined for each property (Figure 6); a minimal analysis sketch follows Figure 6.
• Magnitude estimation: The task is to provide, on an arbitrary scale, a numerical estimate of the perceived intensity of the stimulation of interest. By testing a series of stimuli with different intensities, the relationship between physical and perceptual properties can be obtained.
Figure 6: Participant comparing two stimuli placed in a
temperature-controlled box. [Image by Astrid Kappers]
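As an illustration of how discrimination data from such experiments could be analyzed, the sketch below fits a cumulative-Gaussian psychometric function to invented two-alternative forced-choice (2AFC) responses and reads off a just-noticeable difference. The data points and the 75%-correct convention are assumptions for illustration, not results from the project:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Invented data: stimulus intensity differences (arbitrary units) and the
# fraction of trials in which the stronger stimulus was correctly identified.
diffs = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
p_correct = np.array([0.55, 0.65, 0.78, 0.88, 0.95, 0.98])

def psychometric(x, mu, sigma):
    # In a 2AFC task, performance ranges from chance (0.5) to perfect (1.0).
    return 0.5 + 0.5 * norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(psychometric, diffs, p_correct, p0=[1.5, 1.0])
# One common convention takes the JND as the difference yielding 75% correct;
# with this parameterization that is exactly x = mu.
print(f"Estimated JND: {mu:.2f} (slope parameter sigma = {sigma:.2f})")
```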
As psychophysical experiments are very time-consuming, testing
in earlier stages will be done with blindfolded sighted and hearing
participants. Involving persons with deafblindness at this stage is
not necessary as all humans have the same touch receptors. At
more advanced stages, individuals with deafblindness will test the
prototypes: their feedback and involvement in the development of
the solution is central to our approach. Moreover, already at this early stage we will stay in close contact with the community of persons with deafblindness and take their ideas and preferences into account.
3.3 Haptic and Thermal Feedback
In human computer interaction (HCI), feedback is a key element.
There is a huge community researching assistive technologies for
persons with physical or cognitive impairments, for example in
the context of SIGACCESS (ACM Special Interest Group on
Accessible Computing) [20] and the related ASSETS conference.
However, most research focuses on impairments of one major sense, whereas combined sensory impairments are rarely in focus, probably also because it is difficult to approach users with multiple impairments. Nevertheless, much research for users with
visual or hearing impairments can be used or adapted for people
with deafblindness.
Haptic feedback has received a significant amount of research,
particularly in (but not limited to) the area of HCI. It comes in a
variety of forms:
• Force feedback simulates contact by using actuators to apply forces in response to movement. For example, various degrees of mechanical impedance can be simulated by accelerator pedals becoming stiffer above a certain speed [21].
• Vibrotactile feedback uses vibration motors to provide feedback: its most familiar uses are "rumbles" in videogames or alerts on smartphones, but it can also be used in quite nuanced ways, such as the display of variable stiffness through different vibration intensities [22]. Such uses can readily be adapted for the communication needs of deafblind persons.
• Electrotactile displays use an electric current to stimulate nerve endings in the fingers, thus recreating tactile sensations [23].
As navigation and the detection of obstacles are an evident form of assistance that deafblind users require, some applications using haptic feedback already exist: the EyeCane [24] uses vibrotactile feedback to alert the user to nearby obstacles, while the Haptic Taco [25] adjusts its shape to guide users towards a target.
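The following minimal sketch illustrates the kind of proximity encoding such aids use, assuming a simple linear mapping from measured obstacle distance to vibration intensity; the ranges and the linear ramp are invented for illustration:

```python
def vibration_intensity(distance_m: float,
                        min_range: float = 0.2,
                        max_range: float = 3.0) -> float:
    """Return an intensity in [0, 1]: strongest when an obstacle is close,
    silent beyond max_range. Ranges are illustrative assumptions."""
    if distance_m >= max_range:
        return 0.0
    if distance_m <= min_range:
        return 1.0
    # Linear ramp between the nearest and farthest detectable distances.
    return (max_range - distance_m) / (max_range - min_range)

for d in (0.2, 1.0, 2.0, 3.5):
    print(f"{d:.1f} m -> intensity {vibration_intensity(d):.2f}")
```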
Interestingly, compared to haptic feedback, thermal feedback is a relatively new area of research in HCI. Early research dates back to 2012, with "thermal icons" [26]. In recent years, especially the Glasgow Interactive Systems Group, notably Graham Wilson and Stephen Brewster, has advanced this area. A main focus is the strong connection between affect and thermal feedback: it has been linked to models of emotion [27], and there are even technical approaches that diversify the communicative range of thermal feedback by using an array of three thermal stimulators [28]. This emotional potential in communication will feed very well into the areas of gamification and "life enrichment" targeted in the SUITCEYES project.
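As a rough illustration of how such affective thermal cues might be encoded, the sketch below maps a few named cues to warm or cool pulses of given magnitude and duration. The cue names, temperature deltas, and durations are invented assumptions, not values from the cited studies:

```python
# Each cue maps to (direction, temperature change in deg C, duration in s).
# All values are invented placeholders, not calibrated stimuli.
THERMAL_CUES = {
    "positive": ("warm", +2.0, 3.0),  # gentle warming for pleasant events
    "alert":    ("cool", -3.0, 1.0),  # brief cooling to attract attention
    "neutral":  ("warm", +1.0, 2.0),
}

def thermal_command(cue: str) -> str:
    """Render a named cue as a command string for a thermal stimulator."""
    direction, delta, duration = THERMAL_CUES[cue]
    return f"{direction} by {abs(delta):.1f} C for {duration:.1f} s"

print(thermal_command("alert"))  # -> "cool by 3.0 C for 1.0 s"
```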
3.4 Recognition of Objects and Persons
Providing users with vision and hearing impairments with
artificial capabilities for real-time object detection and face
recognition will significantly extend their perception of the environment: from the limited area within arm's reach to a much larger one. Thus, the solution we are developing
in SUITCEYES will augment their feeling of safety and security.
The current state of the art in face detection and recognition
incorporates breakthroughs from deep neural networks [29],
which gave rise to new facial point detection and recognition
algorithms. In this context, deep architectures, such as the deep convolutional network cascade [30], have almost solved the face detection problem. Face recognition algorithms like DeepFace [31] and the VGG-Face descriptor [32] deploy deep convolutional architectures and achieve very high classification accuracy on unconstrained datasets.
On the other hand, state of the art techniques in object detection
use deep convolutional neural networks to represent objects inside
images. They train the parameters and weights of their models on
large datasets and use a spatial window to localize object
candidates inside the images. Processing-intensive sliding-window methods with part-based models have been replaced by selective search [33] and other sophisticated techniques that employ multi-scale bounding boxes instead of dense sampling. The current state of the art has turned its attention to developing faster, rather than more accurate, techniques, while some recent techniques such as YOLO [34] and SSD [35] achieve even lower computational cost, rendering them more than appropriate for embedded vision purposes, for example in a wearable.
SUITCEYES will extend the face detection and recognition state
of the art by leveraging facial point detection and a combination
of shallow features with a deep convolutional framework [36].
Regarding object recognition, similar hybrid representations will
be deployed, combined with the selective scheme that YOLO
uses, in order to design a low computational cost and accurate
system, specifically tailored for embedded vision purposes [37].
From a technical perspective, video will be captured in HD resolution at a moderate frame rate (4-8 FPS). For example, when using a mobile camera connected to a Jetson TX2, the frames will be stored in local memory. Facial and object
detection algorithms will apply facial point detection and selective
search algorithms within the captured frames, so that they can find
the candidate bounding boxes that contain faces and objects in
each video frame. A hybrid shallow-to-deep representation will be
used to describe the appropriate features for recognizing familiar
faces and objects inside the provided bounding boxes. These
hybrid features will be classified based on pre-trained face and
object databases.
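The sketch below illustrates the frame-capture and face-localization stage of such a pipeline, using OpenCV's bundled Haar cascade as a stand-in for the project's facial point detection and hybrid shallow-to-deep recognition components; the camera index and parameters are assumptions:

```python
import cv2

# OpenCV ships a pre-trained Haar cascade for frontal faces.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)  # camera index 0 is an assumption

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Candidate bounding boxes that may contain faces in this frame.
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in boxes:
        crop = frame[y:y + h, x:x + w]
        # A real pipeline would extract hybrid shallow-to-deep features from
        # `crop` here and classify them against pre-trained face databases.
cap.release()
```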
3.5 Gamification and Social Interaction
As pointed out before, deafblindness is a severe condition.
Especially persons who have lost their hearing and sight due to accident or illness typically experience these limitations as burdensome and depressing. In our project we
acknowledge this problem. However, we aim to move towards
playful challenges, which extend the users’ interests, add
engagement and offer joyful experiences. The way towards this is
the gamification of everyday situations and learning.
Gamification is the integration of video game elements into non-
game services and applications, to improve the user experience,
engagement and performance [38]. In areas like education [39]
and health [40], gamified approaches are already quite popular
and successful. They have even been incorporated into work environments [41], for example in production [42] and in the
automotive domain [43]. However, like most games, gamification
focuses on the visual and the auditory channel. For people with
deafblindness, these methods are not feasible. Although there are
already elements of vibration in gaming (e.g. rumble controllers),
these elements just aim to enrich an existing experience. If a user
experience is to be not just enriched but constituted by haptic or
thermal feedback, gamification concepts need to be strongly
adapted and partially re-invented.
Therefore, new designs and concepts with positive feedback loops are being developed. The first step is to grasp deafblind persons’
concept of playfulness. What makes a person with deafblindness
laugh, what is considered humorous?
This requires intensive involvement of the users, their families
and the care providers. Their input is fed into an iterative agile
development process. However, measuring the level of fun and
enjoyment is difficult. The most common ways for evaluation are
interviews and surveys. Since these methods can be problematic for some users in this target group, methods from affective computing will also be used. For instance, facial expression analysis can be used to directly assess the engagement of a user [44]. Another way to deduce the positive effects of gamified scenarios is through structured records kept by the users' families and educators. These close persons often notice mood changes.
The most desirable result of integrating gamification is to create a motivating flow state [45], in which skill level and task challenge converge and good performance is achieved seemingly effortlessly. This enables users with deafblindness to
increase their communicative space while enjoying themselves.
A haptic intelligent, personalized, interface (HIPI) is thus integrated into a wearable, making it smart (section 3.1). But how does this offer ways to gamify everyday experiences? An exemplary
scenario is the “Easter Egg Hunt”: the HIPI’s haptic and thermal
actuators guide a person with deafblindness towards a target
object. Temperature changes of the thermal actuators and
vibration of the haptic actuators indicate proximity (section 3.3). The person with deafblindness moves according to this feedback. This process may sound straightforward. However, not only does the system require the capability to navigate the user around obstacles (section 3.4); the user also has to "learn" to read and interpret the signals (section 3.2). Within a safe environment, the Easter Egg Hunt offers a way to make this learning process fun. As users become more proficient, the game can easily be extended to include social interaction, for example by playing "Hide and Seek": an everyday game for most children, but one that is currently out of reach for users with deafblindness.
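As a playful illustration of the Easter Egg Hunt logic, the sketch below maps the distance between the player and a hidden target to thermal and vibration intensities; positions, thresholds, and the fade-out radius are invented for illustration:

```python
import math

def hunt_feedback(player, target, found_radius=0.3, fade_radius=5.0):
    """Return (thermal, vibration) intensities in [0, 1] for 2D positions."""
    dist = math.dist(player, target)
    if dist <= found_radius:
        return 1.0, 1.0  # target found: maximum "hot" signal on both channels
    intensity = max(0.0, 1.0 - dist / fade_radius)  # fades out with distance
    return intensity, intensity

# The player approaches the hidden egg at position (2, 1) step by step.
for pos in [(0.0, 0.0), (1.0, 0.5), (1.8, 0.9), (2.0, 1.0)]:
    thermal, vib = hunt_feedback(pos, (2.0, 1.0))
    print(f"at {pos}: thermal {thermal:.2f}, vibration {vib:.2f}")
```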
As described in the introduction, innovations in assistive
technologies often remain unused and discarded. Enriching both
the learning and the usage processes with gamification will
increase the motivation to use the solution. As with learning a language or a musical instrument, the first steps are the most difficult. Only over time, and potentially only if learning is fun, will the use of the HIPI become fast and proficient.
4. CONCLUSIONS AND FUTURE WORK
In this paper, we introduced the vision of a haptic intelligent,
personalized, interface (HIPI) that integrates elements from smart
textiles, sensors, semantic technologies, image processing, face
and object recognition, machine learning, affective computing,
and gamification. This solution is being designed in a user-
centered and agile process for the community of deafblind
persons. Their special situation has been described in section 2.
In section 3, we presented the five underlying concepts: Wearables and Smart Textiles (section 3.1), as the solution needs to be portable and close to the users; Haptic Psychophysics (section 3.2), to design the stimuli best suited for communication, which will typically be Haptic and Thermal Feedback (section 3.3). We then described how the wearable becomes context-aware through the Recognition of Objects and Persons (section 3.4). Finally, we discussed how Gamification and Social Interaction (section 3.5) can make the difference in motivating users to learn and "play" with the HIPI. The vision is that users with deafblindness extend their abilities while enjoying themselves.
Although there is a large community of people with deafblindness
in Europe and all over the world (section 2), the local
communities are often not well connected. Work on the needs of
deafblind persons is just beginning. We hope that this vision paper
is a first step, and that the SUITCEYES project as a whole will
make a difference.
5. ACKNOWLEDGMENTS
This paper is based on the SUITCEYES project proposal [46].
The SUITCEYES project has received funding from the European
Union’s Horizon 2020 research and innovation programme under
grant agreement No 780814.
We thank all contributors and advisors; in alphabetical order:
Konstantinos Avgerinakis, Lea Buchweitz, Panagiotis Mitzias,
Jan Nolin, Panagiotis Petrantonakis, and Sarah Woodin.
6. REFERENCES
[1] A. Roulstone, Disability and Technology: An
Interdisciplinary and International Approach. Springer,
2016.
[2] United Nations, Convention on the Rights of Persons with
Disabilities. 2008.
[3] “NVC - Nordic Welfare Center.” [Online]. Available:
http://www.nordicwelfare.org/. [Accessed: 28-Feb-2018].
[4] H.-E. Frölander, “Deafblindness: Theory-of-mind, cognitive
functioning and social network in Alström syndrome,”
Örebro University, 2016.
[5] A.-B. Johansson, “Se och hör mig. Personer med förvärvad
dövblindhets erfarenheter av delaktighet, rehabilitering och
medborgerligt liv,” University of Gothenburg, 2016.
[6] “Journal of Deafblind Studies on Communication.” [Online].
Available: http://jdbsc.rug.nl/index/. [Accessed: 08-Feb-
2018].
[7] C. Hill, “Wearables – the future of biometric technology?,”
Biom. Technol. Today, vol. 2015, no. 8, pp. 5–9, Sep. 2015.
[8] K. Cherenack, C. Zysset, T. Kinkeldei, N. Münzenrieder,
and G. Tröster, “Wearable Electronics: Woven Electronic
Fibers with Sensing and Display Functions for Smart
Textiles (Adv. Mater. 45/2010),” Adv. Mater., vol. 22, no.
45, pp. 5071–5071, Dec. 2010.
[9] K. Nesenbergs and L. Selavo, “Smart textiles for wearable
sensor networks: Review and early lessons,” in 2015 IEEE
International Symposium on Medical Measurements and
Applications (MeMeA) Proceedings, 2015, pp. 402–406.
[10] L. Langenhove, Advances in smart medical textiles: Treatments and health monitoring. Woodhead Publishing,
2016.
[11] R. Paradiso and D. D. Rossi, “Advances in textile sensing
and actuation for e-textile applications,” in 2008 30th Annual
International Conference of the IEEE Engineering in
Medicine and Biology Society, 2008, pp. 3629–3629.
[12] A. Maziz, A. Concas, A. Khaldi, J. Stålhand, N.-K. Persson,
and E. W. H. Jager, “Knitting and weaving artificial
muscles,” Sci. Adv., vol. 3, no. 1, p. e1600327, Jan. 2017.
[13] L. Guo, T. Bashir, E. Bresky, and N.-K. Persson, “Electroconductive textiles and textile-based electromechanical sensors: integration as an approach for smart textiles,” in Smart Textiles and their Applications, V. Koncar, Ed. Oxford: Woodhead Publishing, 2016, pp. 657–693.
[14] H. Profita, N. Farrow, and N. Correll, “Flutter: An
Exploration of an Assistive Garment Using Distributed
Sensing, Computation and Actuation,” in Proceedings of the
Ninth International Conference on Tangible, Embedded, and
Embodied Interaction, New York, NY, USA, 2015, pp. 359–
362.
[15] R. W. Lindeman, Y. Yanagida, H. Noma, and K. Hosaka,
“Wearable vibrotactile systems for virtual contact and
information display,” Virtual Real., vol. 9, no. 2–3, pp. 203–
213, Mar. 2006.
[16] B. Holschuh, E. Obropta, and D. Newman, “Low Spring
Index NiTi Coil Actuators for Use in Active Compression
Garments,” IEEE/ASME Trans. Mechatron., vol. 20, no. 3,
pp. 1264–1277, Jun. 2015.
[17] G. A. Gescheider, S. J. Bolanowski, and R. T. Verrillo,
“Some characteristics of tactile channels,” Behav. Brain
Res., vol. 148, no. 1, pp. 35–40, Jan. 2004.
[18] L. Jones, “Thermal touch,” Scholarpedia, vol. 4, no. 5, p.
7955, 2009.
[19] A. M. L. Kappers and W. M. Bergmann Tiest, “Haptic
perception,” Wiley Interdiscip. Rev. Cogn. Sci., vol. 4, no. 4,
pp. 357–374, Jul. 2013.
[20] “SIGACCESS.” [Online]. Available:
https://www.acm.org/special-interest-groups/sigs/sigaccess.
[Accessed: 10-Feb-2018].
[21] A. H. Jamson, D. L. Hibberd, and N. Merat, “The design of
haptic gas pedal feedback to support eco-driving,” in Proceedings of the Seventh International Driving Symposium on Human Factors in Driver Assessment, Training, and Vehicle Design, 2013, pp. 17–20.
[22] A. T. Maereg, A. Nagar, D. Reid, and E. L. Secco,
“Wearable Vibrotactile Haptic Device for Stiffness
Discrimination during Virtual Interactions,” Front. Robot.
AI, vol. 4, 2017.
[23] M. Tezuka, N. Kitamura, K. Tanaka, and N. Miki,
“Presentation of Various Tactile Sensations Using Micro-
Needle Electrotactile Display,” PLOS ONE, vol. 11, no. 2, p.
e0148410, Feb. 2016.
[24] S. Maidenbaum et al., “The ‘EyeCane’, a new electronic
travel aid for the blind: Technology, behavior & swift
learning,” Restor. Neurol. Neurosci., vol. 32, no. 6, pp. 813–
824, 2014.
[25] A. J. Spiers and A. M. Dollar, “Design and Evaluation of
Shape-Changing Haptic Interfaces for Pedestrian Navigation
Assistance,” IEEE Trans. Haptics, vol. 10, no. 1, pp. 17–28,
Jan. 2017.
[26] G. Wilson, S. Brewster, M. Halvey, and S. Hughes,
“Thermal Icons: Evaluating Structured Thermal Feedback
for Mobile Interaction,” in Proceedings of the 14th
International Conference on Human-computer Interaction
with Mobile Devices and Services, New York, NY, USA,
2012, pp. 309–312.
[27] G. Wilson, D. Dobrev, and S. A. Brewster, “Hot Under the
Collar: Mapping Thermal Feedback to Dimensional Models
of Emotion,” in Proceedings of the 2016 CHI Conference on
Human Factors in Computing Systems, New York, NY,
USA, 2016, pp. 4838–4849.
[28] J. Tewell, J. Bird, and G. R. Buchanan, “The Heat is On: A
Temperature Display for Conveying Affective Feedback,” in
Proceedings of the 2017 CHI Conference on Human Factors
in Computing Systems, New York, NY, USA, 2017, pp.
1756–1767.
[29] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet
Classification with Deep Convolutional Neural Networks,”
in Proceedings of the 25th International Conference on
Neural Information Processing Systems - Volume 1, USA,
2012, pp. 1097–1105.
[30] Y. Sun, X. Wang, and X. Tang, “Deep Convolutional
Network Cascade for Facial Point Detection,” in 2013 IEEE
Conference on Computer Vision and Pattern Recognition,
2013, pp. 3476–3483.
[31] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf,
“DeepFace: Closing the Gap to Human-Level Performance
in Face Verification,” in 2014 IEEE Conference on
Computer Vision and Pattern Recognition, 2014, pp. 1701–
1708.
[32] O. M. Parkhi, A. Vedaldi, and A. Zisserman, “Deep Face
Recognition,” in Proceedings of the British Machine Vision Conference (BMVC), 2015.
[33] J. R. R. Uijlings, K. E. A. van de Sande, T. Gevers, and A.
W. M. Smeulders, “Selective Search for Object
Recognition,” Int. J. Comput. Vis., vol. 104, no. 2, pp. 154–
171, Sep. 2013.
[34] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You
Only Look Once: Unified, Real-Time Object Detection,” in
2016 IEEE Conference on Computer Vision and Pattern
Recognition (CVPR), 2016, pp. 779–788.
[35] W. Liu et al., “SSD: Single Shot MultiBox Detector,” in ECCV 2016, LNCS vol. 9905, 2016, pp. 21–37.
[36] F. Markatopoulou et al., “ITI-CERTH participation in
TRECVID 2017.” TRECVID-SED, 2017.
[37] K. Avgerinakis, P. Giannakeris, A. Briassouli, A.
Karakostas, and S. Vrochidis, “Intelligent traffic city
management from surveillance systems (CERTH-ITI),” in
NVIDIA AI City Challenge 2017, San Francisco, 2017.
[38] S. Deterding, M. Sicart, L. Nacke, K. O’Hara, and D. Dixon, “Gamification: Using Game Design Elements in Non-gaming Contexts,” in CHI ’11 Extended Abstracts on Human Factors in Computing Systems, 2011.
[39] O. Korn and A. Dix, “Educational Playgrounds: How
Context-aware Systems Enable Playful Coached Learning,”
interactions, vol. 24, no. 1, pp. 54–57, Dec. 2016.
[40] O. Korn and S. Tietz, “Strategies for Playful Design when
Gamifying Rehabilitation: A Study on User Experience,” in
Proceedings of the 10th International Conference on
PErvasive Technologies Related to Assistive Environments,
New York, NY, USA, 2017, pp. 209–214.
[41] O. Korn, “Industrial playgrounds: how gamification helps to
enrich work for elderly or impaired persons in production,”
in Proceedings of the 4th ACM SIGCHI Symposium on
Engineering Interactive Computing Systems, New York, NY,
USA, 2012, pp. 313–316.
[42] O. Korn, M. Funk, and A. Schmidt, “Design Approaches for
the Gamification of Production Environments: A Study
Focusing on Acceptance,” in Proceedings of the 8th ACM
International Conference on PErvasive Technologies
Related to Assistive Environments, New York, NY, USA,
2015, pp. 6:1–6:7.
[43] O. Korn, P. Muschick, and A. Schmidt, “Gamification of
Production? A Study on the Acceptance of Gamified Work
Processes in the Automotive Industry,” in Advances in
Affective and Pleasurable Design. Proceedings of the AHFE
2016 International Conference, New York, NY, USA, 2016,
pp. 433–445.
[44] O. Korn, S. Boffo, and A. Schmidt, “The Effect of
Gamification on Emotions - The Potential of Facial
Recognition in Work Environments,” in Human-Computer
Interaction: Design and Evaluation, 2015, pp. 489–499.
[45] M. Csikszentmihalyi, Beyond Boredom and Anxiety. Jossey-
Bass Publishers, 1975.
[46] N. Olson et al., “Smart, User-friendly, Interactive, Tactual,
Cognition-Enhancer that Yields Extended Sensosphere -
Appropriating sensor technologies, machine learning,
gamification and smart haptic interfaces.” [Online].
Available:
https://cordis.europa.eu/project/rcn/213173_en.html.
[Accessed: 29-Mar-2018].