Empowering Persons with Deafblindness: Designing an
Intelligent Assistive Wearable in the SUITCEYES Project
Oliver Korn
Offenburg University
Badstr. 24, 77652 Offenburg,
Germany
oliver.korn@acm.org
Raymond Holt
University of Leeds
Woodhouse Lane, Leeds,
West Yorkshire, LS2 9JT, UK
R.J.Holt@leeds.ac.uk
Efstratios Kontopoulos
CERTH-ITI
6th Km Charilaou-Thermi Road,
57001 Thessaloniki, Greece
skontopo@iti.gr
Astrid M.L. Kappers
VU Amsterdam, Van der
Boechorststraat 9, 1081 BT
Amsterdam, The Netherlands
a.m.l.kappers@vu.nl
Nils-Krister Persson
University of Borås
50190 Borås, Sweden
Nils-Krister.Persson@hb.se
Nasrine Olson
University of Borås
50190 Borås, Sweden
Nasrine.Olson@hb.se
ABSTRACT
Deafblindness is a condition that limits communication
capabilities primarily to the haptic channel. In the EU-funded
project SUITCEYES we design a system which allows haptic and
thermal communication via soft interfaces and textiles. Based on
user needs and informed by disability studies, we combine
elements from smart textiles, sensors, semantic technologies,
image processing, face and object recognition, machine learning,
affective computing, and gamification. In this work, we present
the underlying concepts and the overall design vision of the
resulting assistive smart wearable.
CCS Concepts
• Human-centered computing~Empirical studies in HCI
• Human-centered computing~Collaborative and social
computing devices • Human-centered computing~User studies
• Human-centered computing~Empirical studies in interaction
design • Human-centered computing~Accessibility theory,
concepts and paradigms • Human-centered computing~
Accessibility systems and tools • Social and professional
topics~History of hardware • Social and professional topics~
Codes of ethics • Social and professional topics~Assistive
technologies • Computing methodologies~Cognitive robotics
• Computing methodologies~Robotic planning • Applied
computing~Consumer health
Keywords
Deafblindness; Assistive Technologies; Haptics; Smart Textiles;
Wearables; Visual Impairments; Hearing Impairment; Gamification.
1. INTRODUCTION
Innovations in information and communication technology improve
the quality of life for many people. However, most solutions rely on
vision and sound. Thus, they often exclude people with severe dual
vision and hearing impairments. Deafblindness is such a condition,
limiting communication primarily to the haptic channel (Figure 1).
Figure 1: There are about 2.5 million people with
deafblindness in Europe. [Image courtesy of LightHouse for
the Blind and Visually Impaired, see http://lighthouse-sf.org.]
This entails multiple challenges at different levels for a wide
range of people, from individuals with deafblindness to society as
a whole. At the individual level, a person with deafblindness is typically reliant on other people, with limited autonomy as a result. A person who can see and hear consciously or unconsciously observes and absorbs an enormous amount of stimuli from the environment; in comparison, the information communicated to a person with deafblindness is only a very limited, selected fragment.
At the societal level, the costs of the required human interventions are often highlighted. However, the contributions of this community that are missed due to the lack of adequate inclusion measures in our social structures are an even greater loss. We need to improve the accessibility of the environment for all people, regardless of their background or abilities.
Though rare at birth, deafblindness can be acquired through various causes. Currently, an estimated 2.5 million people with deafblindness live in Europe. Due to increasing life expectancy and the aging population, this number is forecast to rise substantially by 2030. The members of the EU-funded project SUITCEYES acknowledge that changes in society are needed to address these challenges. At the same time, the project takes first steps towards this objective: European policy developments are evaluated to improve accessibility and inclusion. The results of these analyses will inform and guide the development of the project's technical solution.
In SUITCEYES three main challenges of users with deafblindness
are addressed:
● perception of the environment;
● communication and exchange of semantic content;
● learning and joyful life experiences.
As limited communication is the predominant problem, state-of-the-art technologies will be explored and a haptic communication interface will be developed to enable an improved, context-aware mode of communication. We aim to extract environmental and linguistic cues and translate them into haptic and thermal signals that are communicated to the user via a "haptic, intelligent, personalized interface" (HIPI).
In the SUITCEYES project, we are combining elements from
smart textiles, sensors, semantic technologies, image processing,
face and object recognition, machine learning, affective
computing, and gamification in order to develop a novel mode of
communication. Such technology-driven innovations are often
welcomed by people with deafblindness and disabilities due to the
independence they can offer. However, in many cases they remain
unused and discarded [1] for two main reasons:
1. The development may not take into account the needs and
preferences of the users, for example by overemphasizing the
priorities of professionals. To address this, people with
deafblindness are involved in the project as advisors at all stages.
2. Policies may fail to promote access. Although most countries have signed the UN Convention on the Rights of Persons with Disabilities [2] and the European Accessibility Act is already under development, knowledge of technological possibilities remains restricted. The project addresses such aspects of environmental accessibility by linking technological developments to national and international policy and practice on accessibility.
Designed based on the expressed needs and preferences of the users, the HIPI system will afford new means of perceiving the environment as well as user-triggered communication. This in
turn allows the users to take a more active part in society,
improving possibilities for inclusion in social life and
employment. In addition, learning experiences will be enriched by
gamification and mediated social interactions. While the scope of
the project is broader, the focus of this paper is to present the
outline for the technical solution proposed in the project.
2. DEAFBLINDNESS
A definition of deafblindness is provided by the Nordic Welfare
Center [3] as “a combined vision and hearing impairment of such
severity that it is hard for the impaired senses to compensate for
each other. Thus, deafblindness is a distinct disability.” The
severity of the condition can vary: at one end of the spectrum there are profound impairments of both vision and hearing; at the other end, some residual sight or hearing may remain. Variations in the condition can also stem from the causes of deafblindness, which can be congenital or acquired through accident, illness, or age. Furthermore, deafblindness can be accompanied by various levels of physical and/or cognitive abilities. Figure 3 provides an overview of these causes and levels. The aim of the
project is to improve perception, communication and quality of
life for people with deafblindness.
Research on deafblindness and related issues remains limited. As
of February 2018, only 809 scholarly publications on
deafblindness could be found on the Web of Science database – in
contrast to almost 400,000 items on blindness in the same
database. Of the publications on deafblindness, only 23 were to
some degree related to ‘haptic’, and of those only a handful
related to what is proposed in the SUITCEYES project (Figure 2).
Figure 3: Deafblindness: causes and cognitive impact.
SUITCEYES challenges and expected impact areas.
Figure 2: Network visualization of keyword co-occurrences of 809 publications using the data analysis tool VOSviewer.
Even research on the life and behavior of people with
deafblindness is scarce. In 2016 and 2017 two comprehensive
Swedish dissertations (deemed to be the first in the world) were
published focusing on people with Alström’s syndrome [4] and
with Usher’s syndrome [5]. In both cases, the researchers
identified constant pressures that lead to experiences of
overwhelming exhaustion. The number of people that a person with deafblindness can communicate with and trust is very limited, not to mention the diminished possibilities of finding a job, earning money, or leading an independent life.
The cumulative effects of such difficulties are manifold and varied, including high levels of depression, suicidal thoughts and behavior, diminished quality of life, deterioration of cognitive abilities and the development of neurological disorders. These researchers found that the most fundamental problem is interaction with the surroundings and communication with other people.
In 2015, the Journal of Deafblind Studies on Communication [6]
was established by the University of Groningen. In the last three
years, only 13 articles have been published there. SUITCEYES
addresses some of the communication challenges identified in earlier research, and recognizes that facilitating communication for people with deafblindness reduces stress and is crucial for improving their quality of life and health. Furthermore, the solution developed within SUITCEYES will help family members and professional caregivers, making their work more efficient and effective.
3. SOLUTION DESIGN
The overall objective of SUITCEYES is to improve the level of
independence and participation of people with deafblindness.
Together with experts, caregivers and future users, we aim to augment communication, perception of the environment, knowledge acquisition and the conduct of daily routines. As introduced in section 1, our proposed solution is a haptic, intelligent, personalized interface (HIPI) integrating elements from smart textiles, sensors, semantic technologies, image processing, face and object recognition, machine learning, affective computing, and gamification.
The varied nature of the experiences and needs of persons with deafblindness implies that such a system will need to be modular and reconfigurable, so that it can be adapted to different
individuals’ needs. SUITCEYES aims to provide a first step
towards such a system, developing a suite of sensors and actuators
that can be combined and configured to fit a variety of user needs
and provide a basis for future research. The system’s elements are
shown in Figure 4.
Figure 4: Schematic overview of the system’s components.
A range of potential feedback modalities will be explored for inclusion in the HIPI: vibration, pressure and temperature. We will also explore different ways in which these can be combined or positioned to provide different signals.
Likewise, a variety of sensors will be explored to provide
information about the environment: ultrasonic distance sensors to
detect proximity of obstacles, a camera feed to allow recognition
of objects or people, indoor positioning systems to help locate
objects, or radio frequency identification to identify when given
objects come near. A processing unit will be used to interpret
sensor input against a knowledge base and determine appropriate
feedback. Smart textiles will be used to accommodate sensors,
feedback units and the processing unit on different parts of the
body, either mounted on the textiles, or built into them, as
appropriate.
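To make this pipeline concrete, the following minimal Python sketch illustrates how sensor readings could be interpreted against a simple rule base to produce feedback commands. It is for illustration only; all class names, sensor labels and thresholds are assumptions and do not reflect the project's actual implementation.

# Minimal sketch of the sensing-interpretation-feedback loop (illustrative only;
# names, sensor labels and thresholds are assumptions, not the SUITCEYES design).
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SensorReading:
    source: str    # e.g. "ultrasonic_front", "rfid", "camera"
    value: object  # a distance in metres, a tag ID, a recognized label, ...

@dataclass
class FeedbackCommand:
    actuator: str     # e.g. "vibration_chest", "thermal_back"
    intensity: float  # normalized 0..1

Rule = Callable[[SensorReading], List[FeedbackCommand]]

KNOWN_PEOPLE = {"caregiver_anna", "brother_tom"}  # illustrative labels

def familiar_face_rule(reading: SensorReading) -> List[FeedbackCommand]:
    """Signal a recognized, familiar person with a strong pulse on the chest actuator."""
    if reading.source == "camera" and reading.value in KNOWN_PEOPLE:
        return [FeedbackCommand("vibration_chest", 0.8)]
    return []

def interpret(readings: List[SensorReading], rules: List[Rule]) -> List[FeedbackCommand]:
    """Run every reading through the rule base and collect the resulting feedback."""
    commands: List[FeedbackCommand] = []
    for reading in readings:
        for rule in rules:
            commands.extend(rule(reading))
    return commands

if __name__ == "__main__":
    readings = [SensorReading("camera", "caregiver_anna")]
    print(interpret(readings, [familiar_face_rule]))

In the actual system, such a rule base would be replaced by the project's semantic knowledge base, and the resulting commands would drive the actuators embedded in the textile.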
3.1 Wearables and Smart Textiles
Wearables have become an important domain, embracing
watches, smartphones and smart glasses [7]. Still, textiles are
perhaps the ultimate class of wearables, playing a profound role in the daily life of humans of any age, sex, health status, occupation, or activity level. Textiles are ever-present, offer high comfort and low weight, and are very close to the human body. Inherent properties of textiles – pliability, drapability and softness – make them tactual objects from the outset. Textiles can become "smart", active haptic communication units through added functionalities such as sensing [8], [9], monitoring [10], or actuation [11], [12] (Figure 5). Textiles that can adapt to their environment are commonly called smart textiles [13]. As
Profita et al. point out, “textile-based wearable computing systems
have the ability to assist and augment the human sensory network
by leveraging their close proximity with the skin to provide
contextual information to a user” [14].
Figure 5: Smart Textile with conductive elements. [Photo from
the Textile Showroom of University of Borås by Oliver Korn]
For users without disabilities, such smart garment interfaces can offer sensory augmentation. For persons with impairments or disabilities, however, the benefits are substantial: smart textiles can partially replace the impaired senses. It is already possible to integrate a broad spectrum of mechanisms for haptic communication using common textile processing methods such as weaving, knitting, embroidery and sewing. These mechanisms include vibrotactile [15] and pressure [16] modes – all of which will be explored to extend the communication space of deafblind users in the SUITCEYES project.
3.2 Haptic Psychophysics
Important factors in the design of assistive wearables are how well humans can discriminate and recognize different stimulation patterns. Moreover, stimulation
should not be painful or irritating, the patterns should be relatively
easy to learn, and they should not require too much attention.
Although in recent years much research has been done (for
example, on vibratory [17] or thermal [18] stimulation), design
requirements for specific groups cannot simply be “looked up in a
handbook”.
In the SUITCEYES project, psychophysical experiments will
inform the designers about the requirements of persons with
deafblindness. In systematic and extensive tests, users will be exposed to certain types of stimulation to investigate the relevant aspects, such as intensity, resolution, and pattern identification. At first, we will focus on
dedicated laboratory set-ups, but at later stages, prototypes of
garments will be tested.
The psychophysical techniques and the subsequent statistical
analyses that will be employed in this project have already been
described [19]. Depending on the types and number of prototypes,
the most suitable method will be chosen. We foresee the
following possibilities:
● Discrimination: The task is to decide which of a pair of
stimuli has the higher intensity of the property of interest. By
repeating this with varying intensity differences, the
minimum noticeable difference can be determined for each
property (Figure 6).
● Magnitude estimation: The task is to provide, on an
arbitrary scale, a numerical estimate of the perceived
intensity of the stimulation of interest. By testing a series of
stimuli with different intensities, the relationship between
physical and perceptual properties will be obtained.
Figure 6: Participant comparing two stimuli placed in a
temperature-controlled box. [Image by Astrid Kappers]
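As a simple illustration of the discrimination procedure, the following Python sketch estimates a discrimination threshold from hypothetical two-alternative responses; the data values are invented, and the 75%-correct criterion with linear interpolation is a simplifying assumption (a real analysis would fit a psychometric function, see [19]).

# Illustrative estimate of a discrimination threshold from paired comparisons
# ("which of the two stimuli is more intense?"). All numbers are invented.
import numpy as np

# Intensity differences between the two stimuli (arbitrary units) and the
# proportion of trials answered correctly at each difference.
deltas = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0])
p_correct = np.array([0.55, 0.62, 0.71, 0.80, 0.92, 0.98])

# Take the threshold as the difference at which performance reaches 75% correct
# (halfway between chance and perfect in a two-alternative task).
threshold = np.interp(0.75, p_correct, deltas)
print(f"Estimated discrimination threshold: {threshold:.2f} units")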
As psychophysical experiments are very time-consuming, testing
in earlier stages will be done with blindfolded sighted and hearing
participants. Involving persons with deafblindness at this stage is
not necessary as all humans have the same touch receptors. At
more advanced stages, individuals with deafblindness will test the
prototypes: their feedback and involvement in the development of
the solution is central to our approach. Even in the early stages, we will stay in close contact with the community of persons with deafblindness and take their ideas and preferences into account.
3.3 Haptic and Thermal Feedback
In human-computer interaction (HCI), feedback is a key element. There is a large community researching assistive technologies for persons with physical or cognitive impairments, for example in the context of SIGACCESS (the ACM Special Interest Group on Accessible Computing) [20] and the related ASSETS conference. However, most research focuses on impairments of one major sense; combined sensory impairments are rarely the focus – probably also because it is difficult to reach users with multiple impairments. Nevertheless, much research for users with
visual or hearing impairments can be used or adapted for people
with deafblindness.
Haptic feedback has received significant research attention, particularly in (but not limited to) the area of HCI. It comes in a variety of forms:
• Force feedback simulates contact by using actuators to
apply forces in response to movement. For example, various
degrees of mechanical impedance can be simulated by
accelerator pedals becoming stiffer above a certain speed [21].
• Vibrotactile feedback uses vibration motors to provide
feedback: its most familiar uses are “rumbles” in videogames
or alerts on smartphones, but it can also be used in quite
nuanced ways, such as the display of variable stiffness
through different vibration intensities [22]. Such uses can readily be adapted for communication with deafblind persons.
• Electrotactile displays use an electric current to stimulate
nerve endings in the fingers, thus recreating tactile sensations
[23].
As navigation and obstacle detection are evident forms of assistance that deafblind users require, some applications already use haptic feedback: the EyeCane [24] uses vibrotactile feedback to alert the user to nearby obstacles, while the Haptic Taco [25] adjusts its shape to guide users towards a target.
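As a sketch of how such distance-based vibrotactile feedback could work, the Python function below maps an obstacle distance to a vibration intensity in the spirit of the EyeCane; the range and the linear mapping are illustrative assumptions, and the actual motor-driver interface is not shown.

# Illustrative mapping from obstacle distance to vibration intensity
# (EyeCane-style "closer means stronger"); range and mapping are assumptions.
def distance_to_intensity(distance_m: float, max_range_m: float = 2.0) -> float:
    """Return a normalized vibration intensity (0..1) for a given obstacle distance."""
    if distance_m >= max_range_m:
        return 0.0                      # nothing within range: motor off
    return 1.0 - distance_m / max_range_m

# Example: an obstacle 0.5 m ahead yields an intensity of 0.75, which would
# then be written to the motor driver (hardware interface not shown).
print(distance_to_intensity(0.5))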
Compared to haptic feedback, thermal feedback is a relatively new area of research in HCI. Early research started in 2012 with "thermal icons" [26]. In recent years, especially the Glasgow Interactive Systems Group, notably Graham Wilson and Stephen Brewster, has advanced this area. A main focus is the strong connection between affect and thermal feedback: it has been linked to models of emotion [27], and there are even technical approaches that diversify the communication range of thermal feedback by using an array of three thermal stimulators [28]. This emotional potential in communication fits well with the areas of gamification and "life enrichment" targeted in the SUITCEYES project.
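To illustrate how an array of thermal stimulators might encode an affective cue, the Python sketch below maps a valence value to small temperature offsets around a neutral skin temperature; the neutral point, excursion range and spatial weighting are assumptions for illustration and are not taken from [27] or [28].

# Illustrative mapping from emotional valence to target temperatures for an
# array of three thermal stimulators. All constants are assumptions.
NEUTRAL_C = 32.0     # approximate neutral skin temperature
MAX_DELTA_C = 3.0    # small, safe excursion around neutral

def valence_to_temperatures(valence: float) -> list:
    """valence in [-1, 1]: negative values map to cooling, positive to warming.
    The three stimulators receive graded offsets so the cue has a spatial profile."""
    valence = max(-1.0, min(1.0, valence))
    weights = [0.5, 1.0, 0.5]  # centre stimulator strongest
    return [NEUTRAL_C + valence * MAX_DELTA_C * w for w in weights]

print(valence_to_temperatures(0.8))   # e.g. [33.2, 34.4, 33.2]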
3.4 Recognition of Objects and Persons
Providing users with vision and hearing impairments with
artificial capabilities for real-time object detection and face
recognition will significantly extend their perception of the
environment: from the limited area reached by stretching their
arms to a much larger area. Thus, the solution we are developing
in SUITCEYES will augment their feeling of safety and security.
The current state of the art in face detection and recognition incorporates breakthroughs from deep neural networks [29], which gave rise to new facial point detection and recognition algorithms. In this context, deep architectures such as the deep convolutional network cascade [30] have almost solved the face detection problem. Face recognition algorithms like DeepFace [31] and the VGG-Face descriptor [32] deploy deep convolutional architectures and achieve very high classification accuracy on unconstrained datasets.
Turning to object detection, state-of-the-art techniques use deep convolutional neural networks to represent objects inside images. They train the parameters and weights of their models on large datasets and use a spatial window to localize object candidates within the images. Processing-intensive sliding-window methods with part-based models have been replaced by selective search [33] and other sophisticated techniques that employ multi-scale bounding boxes instead of dense sampling. The current state of the art has turned its attention to developing faster rather than more accurate techniques; recent approaches such as YOLO [34] and SSD [35] achieve even lower computational cost, rendering them well suited for embedded vision purposes – for example in a wearable.
SUITCEYES will extend the state of the art in face detection and recognition by leveraging facial point detection and a combination of shallow features with a deep convolutional framework [36]. For object recognition, similar hybrid representations will be deployed, combined with the selective scheme used by YOLO, in order to design an accurate, low-computational-cost system specifically tailored for embedded vision purposes [37].
From a technical perspective, video will be captured in HD resolution at a moderate frame rate (4-8 FPS). For example, when using a mobile camera connected to a Jetson TX2, the frames will be stored in local memory. Facial point detection and selective search will then be applied to the captured frames to find the candidate bounding boxes that contain faces and objects in
each video frame. A hybrid shallow-to-deep representation will be
used to describe the appropriate features for recognizing familiar
faces and objects inside the provided bounding boxes. These
hybrid features will be classified based on pre-trained face and
object databases.
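The following Python sketch outlines such a capture-and-recognition loop. OpenCV is used only for frame capture; detect_candidates() and recognize() are hypothetical placeholders standing in for the facial-point/selective-search detectors and the hybrid shallow-to-deep descriptors, which are not reproduced here.

# Sketch of the frame-processing loop described above (illustrative only).
import time
import cv2

def detect_candidates(frame):
    """Placeholder: return candidate bounding boxes (x, y, w, h) for faces and objects."""
    return []

def recognize(frame, box):
    """Placeholder: classify the crop inside `box` against pre-trained databases."""
    x, y, w, h = box
    crop = frame[y:y + h, x:x + w]
    return "unknown"

def run(camera_index=0, target_fps=6):          # 4-8 FPS as stated above
    cap = cv2.VideoCapture(camera_index)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)     # HD capture
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
    period = 1.0 / target_fps
    while True:
        start = time.time()
        ok, frame = cap.read()
        if not ok:
            break
        for box in detect_candidates(frame):
            label = recognize(frame, box)
            print(box, label)                   # would feed the knowledge base / feedback stage
        time.sleep(max(0.0, period - (time.time() - start)))
    cap.release()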
3.5 Gamification and Social Interaction
As pointed out before, deafblindness is a severe condition. Especially persons who have lost their hearing and sight due to accident or illness typically experience these limitations as burdensome and depressing. In our project we acknowledge this problem. However, we aim to move towards playful challenges that extend the users' interests, add engagement and offer joyful experiences. The way towards this is the gamification of everyday situations and learning.
Gamification is the integration of video game elements into non-game services and applications to improve user experience, engagement and performance [38]. In areas like education [39] and health [40], gamified approaches are already quite popular and successful. They have even been incorporated into work environments [41], for example in production [42] and in the automotive domain [43]. However, like most games, gamification focuses on the visual and auditory channels. For people with
deafblindness, these methods are not feasible. Although there are
already elements of vibration in gaming (e.g. rumble controllers),
these elements just aim to enrich an existing experience. If a user
experience is to be not just enriched but constituted by haptic or
thermal feedback, gamification concepts need to be strongly
adapted and partially re-invented.
Therefore, new designs and concepts with positive feedback loops will be developed. The first step is to grasp the deafblind persons' concept of playfulness: what makes a person with deafblindness laugh, what is considered humorous?
This requires intensive involvement of the users, their families
and the care providers. Their input is fed into an iterative agile
development process. However, measuring the level of fun and enjoyment is difficult. The most common evaluation methods are interviews and surveys. Since these methods can be problematic for some users in this target group, methods from affective computing will also be used. For instance, facial expression analysis can directly assess the engagement of a user [44]. Another way to deduce the positive effects of gamified scenarios is through structured records kept by the families and educators of the users, as these close persons often notice mood changes.
The most desirable result of integrating gamification is to create a motivating flow state [45]: a state where skill level and task affordance converge and good performance is achieved seemingly effortlessly. This enables users with deafblindness to increase their communicative space while enjoying themselves.
The haptic, intelligent, personalized interface (HIPI) is thus integrated into a wearable, making it smart (section 3.1). But how does this offer ways to gamify everyday experiences? An exemplary scenario is the "Easter Egg Hunt": the HIPI's haptic and thermal actuators guide a person with deafblindness towards a target object. Temperature changes of the thermal actuators and vibration of the haptic actuators indicate proximity (section 3.3), and the person with deafblindness moves according to this feedback. This process may sound straightforward. However, not only does the system require the capability to navigate the user around obstacles (section 3.4), the user also has to "learn" to read and interpret the signals (section 3.2). Within a safe environment, the Easter Egg Hunt offers a way to make this learning process fun. As users become more proficient, it can easily be extended to include social interaction, for example by playing "Hide and Seek": an everyday game for most children that is currently out of reach for users with deafblindness.
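As a small illustration of this scenario, the Python sketch below translates the distance to the target into a "warmer/colder" thermal cue and a vibration intensity; the ranges, the neutral temperature and the quadratic emphasis are assumptions, not design decisions of the project.

# Illustrative "Easter Egg Hunt" cue generator: distance to the hidden target
# becomes a warmer/colder thermal cue plus a vibration cue. Constants are assumptions.
NEUTRAL_C = 32.0

def hunt_cues(distance_m: float, max_range_m: float = 5.0):
    """Return (target_temperature_C, vibration_intensity) for a given distance to the target."""
    closeness = max(0.0, 1.0 - min(distance_m, max_range_m) / max_range_m)
    temperature_c = NEUTRAL_C + 2.0 * closeness   # literally "getting warmer"
    vibration = closeness ** 2                    # emphasize the final approach
    return temperature_c, vibration

for d in (5.0, 2.5, 1.0, 0.2):
    print(d, hunt_cues(d))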
As described in the introduction, innovations in assistive technologies often remain unused and are discarded. Enriching both the learning and the usage processes with gamification will increase the motivation to use the solution. As with learning a language or a musical instrument, the first steps are the hardest. Only over time, and potentially only if learning is fun, will the use of the HIPI become fast and proficient.
4 CONCLUSIONS AND FUTURE WORK
In this paper, we introduced the vision of a haptic, intelligent, personalized interface (HIPI) that integrates elements from smart
textiles, sensors, semantic technologies, image processing, face
and object recognition, machine learning, affective computing,
and gamification. This solution is being designed in a user-
centered and agile process for the community of deafblind
persons. Their special situation has been described in section 2.
In section 3, we presented the five underlying concepts: Wearables and Smart Textiles (section 3.1), as the solution needs to be portable and close to the users; Haptic Psychophysics (section 3.2), to design the stimuli best suited for communication, which will typically be Haptic and Thermal Feedback (section 3.3); and Recognition of Objects and Persons (section 3.4), which makes the textile "smart". Finally, we discussed how Gamification and Social Interaction (section 3.5) can make the difference in motivating users to learn and "play" with the HIPI. The vision is that users with deafblindness extend their abilities while enjoying themselves.
Although there is a large community of people with deafblindness
in Europe and all over the world (section 2), the local
communities are often not well connected. Work on the needs of
deafblind persons is just beginning. We hope that this vision paper
is a first step, and that the SUITCEYES project as a whole will
make a difference.
5 ACKNOWLEDGMENTS
This paper is based on the SUITCEYES project proposal [46].
The SUITCEYES project has received funding from the European
Union’s Horizon 2020 research and innovation programme under
grant agreement No 780814.
We thank all contributors and advisors; in alphabetical order:
Konstantinos Avgerinakis, Lea Buchweitz, Panagiotis Mitzias,
Jan Nolin, Panagiotis Petrantonakis, and Sarah Woodin.
6 REFERENCES
[1] A. Roulstone, Disability and Technology: An
Interdisciplinary and International Approach. Springer,
2016.
[2] United Nations, Convention on the Rights of Persons with
Disabilities. 2008.
[3] “NVC - Nordic Welfare Center.” [Online]. Available:
http://www.nordicwelfare.org/. [Accessed: 28-Feb-2018].
[4] H.-E. Frölander, "Deafblindness: Theory-of-mind, cognitive functioning and social network in Alström syndrome," Örebro University, 2016.
[5] A.-B. Johansson, "Se och hör mig. Personer med förvärvad dövblindhets erfarenheter av delaktighet, rehabilitering och medborgerligt liv [See and hear me: Experiences of participation, rehabilitation and civic life among persons with acquired deafblindness]," University of Gothenburg, 2016.
[6] “Journal of Deafblind Studies on Communication.” [Online].
Available: http://jdbsc.rug.nl/index/. [Accessed: 08-Feb-
2018].
[7] C. Hill, “Wearables – the future of biometric technology?,”
Biom. Technol. Today, vol. 2015, no. 8, pp. 5–9, Sep. 2015.
[8] K. Cherenack, C. Zysset, T. Kinkeldei, N. Münzenrieder,
and G. Tröster, “Wearable Electronics: Woven Electronic
Fibers with Sensing and Display Functions for Smart
Textiles (Adv. Mater. 45/2010),” Adv. Mater., vol. 22, no.
45, pp. 5071–5071, Dec. 2010.
[9] K. Nesenbergs and L. Selavo, “Smart textiles for wearable
sensor networks: Review and early lessons,” in 2015 IEEE
International Symposium on Medical Measurements and
Applications (MeMeA) Proceedings, 2015, pp. 402–406.
[10] L. Langenhove, Advances in Smart Medical Textiles: Treatments and Health Monitoring. Woodhead Publishing, 2016.
[11] R. Paradiso and D. D. Rossi, “Advances in textile sensing
and actuation for e-textile applications,” in 2008 30th Annual
International Conference of the IEEE Engineering in
Medicine and Biology Society, 2008, pp. 3629–3629.
[12] A. Maziz, A. Concas, A. Khaldi, J. Stålhand, N.-K. Persson,
and E. W. H. Jager, “Knitting and weaving artificial
muscles,” Sci. Adv., vol. 3, no. 1, p. e1600327, Jan. 2017.
[13] L. Guo, T. Bashir, E. Bresky, and N.-K. Persson, “28 -
Electroconductive textiles and textile-based
electromechanical sensors—integration in as an approach for
smart textiles,” in Smart Textiles and their Applications, V.
Koncar, Ed. Oxford: Woodhead Publishing, 2016, pp. 657–
693.
[14] H. Profita, N. Farrow, and N. Correll, “Flutter: An
Exploration of an Assistive Garment Using Distributed
Sensing, Computation and Actuation,” in Proceedings of the
Ninth International Conference on Tangible, Embedded, and
Embodied Interaction, New York, NY, USA, 2015, pp. 359–
362.
[15] R. W. Lindeman, Y. Yanagida, H. Noma, and K. Hosaka,
“Wearable vibrotactile systems for virtual contact and
information display,” Virtual Real., vol. 9, no. 2–3, pp. 203–
213, Mar. 2006.
[16] B. Holschuh, E. Obropta, and D. Newman, "Low Spring Index NiTi Coil Actuators for Use in Active Compression Garments," IEEE/ASME Trans. Mechatron., vol. 20, no. 3, pp. 1264–1277, Jun. 2015.
[17] G. A. Gescheider, S. J. Bolanowski, and R. T. Verrillo,
“Some characteristics of tactile channels,” Behav. Brain
Res., vol. 148, no. 1, pp. 35–40, Jan. 2004.
[18] L. Jones, “Thermal touch,” Scholarpedia, vol. 4, no. 5, p.
7955, 2009.
[19] A. M. L. Kappers and W. M. Bergmann Tiest, “Haptic
perception,” Wiley Interdiscip. Rev. Cogn. Sci., vol. 4, no. 4,
pp. 357–374, Jul. 2013.
[20] “SIGACCESS.” [Online]. Available:
https://www.acm.org/special-interest-groups/sigs/sigaccess.
[Accessed: 10-Feb-2018].
[21] A. H. Jamson, D. L. Hibberd, and N. Merat, "The design of haptic gas pedal feedback to support eco-driving," in Proceedings of the Seventh International Driving Symposium on Human Factors in Driver Assessment, Training, and Vehicle Design, 2013, pp. 17–20.
[22] A. T. Maereg, A. Nagar, D. Reid, and E. L. Secco,
“Wearable Vibrotactile Haptic Device for Stiffness
Discrimination during Virtual Interactions,” Front. Robot.
AI, vol. 4, 2017.
[23] M. Tezuka, N. Kitamura, K. Tanaka, and N. Miki,
“Presentation of Various Tactile Sensations Using Micro-
Needle Electrotactile Display,” PLOS ONE, vol. 11, no. 2, p.
e0148410, Feb. 2016.
[24] S. Maidenbaum et al., “The ‘EyeCane’, a new electronic
travel aid for the blind: Technology, behavior & swift
learning,” Restor. Neurol. Neurosci., vol. 32, no. 6, pp. 813–
824, 2014.
[25] A. J. Spiers and A. M. Dollar, “Design and Evaluation of
Shape-Changing Haptic Interfaces for Pedestrian Navigation
Assistance,” IEEE Trans. Haptics, vol. 10, no. 1, pp. 17–28,
Jan. 2017.
[26] G. Wilson, S. Brewster, M. Halvey, and S. Hughes,
“Thermal Icons: Evaluating Structured Thermal Feedback
for Mobile Interaction,” in Proceedings of the 14th
International Conference on Human-computer Interaction
with Mobile Devices and Services, New York, NY, USA,
2012, pp. 309–312.
[27] G. Wilson, D. Dobrev, and S. A. Brewster, “Hot Under the
Collar: Mapping Thermal Feedback to Dimensional Models
of Emotion,” in Proceedings of the 2016 CHI Conference on
Human Factors in Computing Systems, New York, NY,
USA, 2016, pp. 4838–4849.
[28] J. Tewell, J. Bird, and G. R. Buchanan, “The Heat is On: A
Temperature Display for Conveying Affective Feedback,” in
Proceedings of the 2017 CHI Conference on Human Factors
in Computing Systems, New York, NY, USA, 2017, pp.
1756–1767.
[29] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet
Classification with Deep Convolutional Neural Networks,”
in Proceedings of the 25th International Conference on
Neural Information Processing Systems - Volume 1, USA,
2012, pp. 1097–1105.
[30] Y. Sun, X. Wang, and X. Tang, “Deep Convolutional
Network Cascade for Facial Point Detection,” in 2013 IEEE
Conference on Computer Vision and Pattern Recognition,
2013, pp. 3476–3483.
[31] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf,
“DeepFace: Closing the Gap to Human-Level Performance
in Face Verification,” in 2014 IEEE Conference on
Computer Vision and Pattern Recognition, 2014, pp. 1701–
1708.
[32] O. M. Parkhi, A. Vedaldi, and A. Zisserman, "Deep Face Recognition," BMVC, vol. 1, no. 3, p. 6, Sep. 2016.
[33] J. R. R. Uijlings, K. E. A. van de Sande, T. Gevers, and A.
W. M. Smeulders, “Selective Search for Object
Recognition,” Int. J. Comput. Vis., vol. 104, no. 2, pp. 154–
171, Sep. 2013.
[34] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You
Only Look Once: Unified, Real-Time Object Detection,” in
2016 IEEE Conference on Computer Vision and Pattern
Recognition (CVPR), 2016, pp. 779–788.
[35] W. Liu et al., "SSD: Single Shot MultiBox Detector," arXiv:1512.02325 [cs], vol. 9905, pp. 21–37, 2016.
[36] F. Markatopoulou et al., “ITI-CERTH participation in
TRECVID 2017.” TRECVID-SED, 2017.
[37] K. Avgerinakis, P. Giannakeris, A. Briassouli, A. Karakostas, and S. Vrochidis, "Intelligent traffic city management from surveillance systems (CERTH-ITI)," in NVIDIA AI City Challenge 2017, San Francisco, 2017.
[38] S. Deterding, M. Sicart, L. Nacke, K. O’Hara, and D. Dixon,
Gamification: Using game design elements in non-gaming
contexts, vol. 66. 2011.
[39] O. Korn and A. Dix, “Educational Playgrounds: How
Context-aware Systems Enable Playful Coached Learning,”
interactions, vol. 24, no. 1, pp. 54–57, Dec. 2016.
[40] O. Korn and S. Tietz, “Strategies for Playful Design when
Gamifying Rehabilitation: A Study on User Experience,” in
Proceedings of the 10th International Conference on
PErvasive Technologies Related to Assistive Environments,
New York, NY, USA, 2017, pp. 209–214.
[41] O. Korn, “Industrial playgrounds: how gamification helps to
enrich work for elderly or impaired persons in production,”
in Proceedings of the 4th ACM SIGCHI Symposium on
Engineering Interactive Computing Systems, New York, NY,
USA, 2012, pp. 313–316.
[42] O. Korn, M. Funk, and A. Schmidt, “Design Approaches for
the Gamification of Production Environments: A Study
Focusing on Acceptance,” in Proceedings of the 8th ACM
International Conference on PErvasive Technologies
Related to Assistive Environments, New York, NY, USA,
2015, pp. 6:1–6:7.
[43] O. Korn, P. Muschick, and A. Schmidt, “Gamification of
Production? A Study on the Acceptance of Gamified Work
Processes in the Automotive Industry,” in Advances in
Affective and Pleasurable Design. Proceedings of the AHFE
2016 International Conference, New York, NY, USA, 2016,
pp. 433–445.
[44] O. Korn, S. Boffo, and A. Schmidt, “The Effect of
Gamification on Emotions - The Potential of Facial
Recognition in Work Environments,” in Human-Computer
Interaction: Design and Evaluation, 2015, pp. 489–499.
[45] M. Csikszentmihalyi, Beyond Boredom and Anxiety. Jossey-
Bass Publishers, 1975.
[46] N. Olson et al., “Smart, User-friendly, Interactive, Tactual,
Cognition-Enhancer that Yields Extended Sensosphere -
Appropriating sensor technologies, machine learning,
gamification and smart haptic interfaces.” [Online].
Available:
https://cordis.europa.eu/project/rcn/213173_en.html.
[Accessed: 29-Mar-2018].