4
Nothing Comes between My Robot and Me: Privacy and Human-Robot Interaction in Robotised Healthcare

EDUARD FOSCH VILLARONGA, HEIKE FELZMANN, ROBIN L. PIERCE, SILVIA DE CONCA, AVIVA DE GROOT, AIDA PONCE DEL CASTILLO AND SCOTT ROBBINS

Errant consilia nostra, quia non habent quo derigantur; ignoranti quem portum petat nullus suus ventus est
If a man does not know to which port he is sailing, no wind is favorable to him
LXXI Seneca Lucilio Suo Salutem, Epistulae Morales Ad Lucilium, L.A. Seneca
Abstract
The integration of cyber-physical robotic systems in healthcare settings is accelerating, with robots used as diagnostic aids, mobile assistants, physical rehabilitation providers, cognitive assistants, social and cognitive skills trainers, or therapists. This chapter investigates currently still underexplored privacy and data protection issues in the use of robotic technologies in healthcare, focusing on privacy issues that are specifically related to human engagement with robots as cyber-physical systems in healthcare contexts. It addresses six relevant privacy concerns and analyses them with regard to the European context: 1. The distinctive privacy impacts of subconsciously incentivised disclosure in human-robot interaction. 2. The complexity of consent requirements, including consent for data processing as well as consent for robotic care provision, both governed by different norms and user expectations. 3. Privacy challenges and opportunities arising from conversational approaches to privacy management with robots. 4. The application of data portability requirements in the context of a person's substantive reliance on robots. 5. The privacy risks related to robot-based data collection in the workplace. 6. The need to go beyond simpler Privacy by Design approaches, which reduce privacy to data protection, towards designing robots for privacy in a wider sense. We argue that the communication and interaction with robots in healthcare contexts impacts not just data protection concerns, but wider consideration of privacy values, and that these privacy concerns pose challenges that need to be considered during robot design and their implementation in healthcare settings.

1 Global Medical Robots Market Forecast 2017–2024, Inkwood Research, accessed 27 February 2018. Available at: www.inkwoodresearch.com/reports/medical-robotics-market/#report-summary.
2 'Multi-annual Roadmap 2020', SPARC, last modified 2 February 2015. Available at: www.eu-robotics.net/cms/upload/downloads/ppp-documents/Multi-Annual_Roadmap2020_ICT-24_Rev_B_full.pdf.
3 'New Robot Strategy', The Headquarters for Japan's Economic Revitalization, last modified 10 February 2015. Available at: www.meti.go.jp/english/press/2015/pdf/0123_01b.pdf.
4 Yang, Guang-Zhong, Jim Bellingham, Pierre E. Dupont, Peer Fischer, Luciano Floridi, Robert Full, Neil Jacobstein et al, 'The Grand Challenges of Science Robotics'. Science Robotics 3, no 14 (2018): eaar7650.
5 Leenes, Ronald, Erica Palmerini, Bert-Jaap Koops, Andrea Bertolini, Pericle Salvini, and Federica Lucivero. 'Regulatory Challenges of Robotics: Some Guidelines for Addressing Legal and Ethical Issues'. Law, Innovation and Technology 9, no 1 (2017): 1–44.
6 Civil Law Rules on Robotics, European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), last modified 16 February 2017. Available at: www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//NONSGML+TA+P8-TA-2017-0051+0+DOC+PDF+V0//EN.

Keywords
Privacy, data protection, human-robot interaction, socially assistive robots, healthcare, consent, medical confidentiality, robots, artificial intelligence, exoskeletons, dementia
1. Introduction
The development of robots for use in healthcare settings has been accelerating significantly over the last few years and it is projected to grow by 22.1 per cent between 2017 and 2024.[1] National and international robot strategy documents highlight the importance of developing robots for use in the field of healthcare with a variety of arguments, from increased cost-effectiveness to protections for the health and safety of workers to the urgent need to address demographic challenges where increasing numbers of persons in need of care are met with declining numbers of available caregivers.[2] Japan plans to increase robot use by caregivers and care-receivers by 80 per cent across care settings.[3]

Because the adoption of robots raises substantial ethical, legal, societal[4] and regulatory issues,[5] the field of robotics is now beginning to receive attention from political and regulatory actors, and these issues are also increasingly discussed in general public discourse. Early in 2017, the European Parliament (EP) released the Resolution on Civil Law Rules on Robotics 2015/2103(INL).[6] It is a pioneering effort to raise legal and regulatory concerns of emerging robot technologies at the European level, including care and medical robots, but the resolution also shows some significant deficits: it lacks technical awareness and contains provisions that may be morally unnecessary and legally troublesome, such as the ascription of personality to synthetic persons.[7] As an all-embracing attempt to cover cyber-physical systems more generally, the resolution lacks complexity when addressing legal issues specifically concerning healthcare robots and artificial intelligence; it highlights primarily the dehumanisation of caring practices due to the insertion of care robots into the healthcare context. The European Commission (EC) agrees that legal action in the field of robotics is urgently required because doing otherwise would negatively affect the development and uptake of robots.[8] While the EC acknowledges the need to revise the Machinery or the Defective Product Directive,[9] other aspects like the definition of a robot and the identification of what exactly should fall under the protected scope of such pieces of legislation are not completely clear.

7 Bryson, Joanna J., Mihailis E. Diamantis, and Thomas D. Grant. 'Of, For, and By the People: The Legal Lacuna of Synthetic Persons'. Artificial Intelligence and Law 25, no 3 (2017): 273–291.
8 Follow up to the European Parliament resolution of 16 February 2017 on civil law rules on robotics, European Commission, last modified 16 May 2017. Available at: www.globalpolicywatch.com/2017/08/what-is-a-robot-under-eu-law/.
9 'Evaluation and Fitness Check (FC) Roadmap', European Commission, last modified 12 September 2016. Available at: http://ec.europa.eu/smart-regulation/roadmaps/docs/2016_grow_027_evaluation_defective_products_en.pdf.
10 BS 8611:2016 Robots and robotic devices. Guide to the ethical design and application of robots and robotic systems, British Standard Institute, last modified April 2016. Available at: https://shop.bsigroup.com/ProductDetail/?pid=000000000030320089.
11 'Ethically Aligned Design', IEEE, last modified December 2017. Available at: http://standards.ieee.org/develop/indconn/ec/ead_v2.pdf.
12 'ISO 13482:2014 Robots and Robotic Devices - Safety Requirements for Personal Care Robots', International Standard Organization, last modified February 2014. Available at: www.iso.org/standard/53820.html.
In light of these uncertainties and delays that characterise the current public policy making process, industry and regulatory private actors have stepped into the breach and have begun to develop standards that regulate various robot application domains. Currently, there are already established standards addressing ethical, legal and societal issues (ELSI) of these technologies, including the British Standard (BS) 8611:2016 Robots and robotic devices. Guide to the ethical design and application of robots and robotic systems[10] and the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems with their document on Ethically Aligned Design: A Vision for Prioritising Human Well-being with Autonomous and Intelligent Systems.[11] These contributions provide a general discussion of ethical concerns related to robotics, while specific issues relating to care robotics are covered by the ISO 13482:2014 Robots and Robotic Devices - Safety Requirements for Personal Care Robots.[12] However, while the attention to robotics by regulatory and private sector actors is welcome, these measures have certain shortcomings. Standards only cover particular impacts, predominantly safety concerns, and offer a limited protected scope, insofar as they do not address special protected categories of users. These documents are also inadequate from a legal point of view insofar as they do not provide legally binding rules; they are not directly enforceable because they do not establish consequences (sanctions) for violations and generally lack precision.[13] Reliance on private-sector initiatives also risks the decentralisation of regulation,[14] and conveys the impression that what used to be in the remit of law is now being privatised.[15] The focus on particular private sector interests has led to a situation where legally significant impacts such as privacy and data protection, while present in the debate, have been comparatively underemphasised.[16]

13 Delmas-Marty, M., Le flou du droit (Paris: Canopé, 1986).
14 Guihot, Michael, Anne F. Matthew, and Nicolas Suzor, 'Nudging Robots: Innovative Solutions to Regulate Artificial Intelligence' (28 July 2017). Vanderbilt Journal of Entertainment & Technology Law, forthcoming. Available at SSRN: https://ssrn.com/abstract=3017004.
15 Fosch-Villaronga, Eduard, and Antoni Roig. 'European Regulatory Framework for Person Carrier Robots'. Computer Law & Security Review 33, no 4 (2017): 502–520.
16 Fosch Villaronga, Eduard. Towards a Legal and Ethical Framework for Personal Care Robots. Analysis of Person Carrier, Physical Assistant and Mobile Servant Robots. (PhD diss., Erasmus Mundus Joint Doctorate in Law, Science and Technology Consortium, 2017).
17 Sabel, Charles, Gary Herrigel, and Peer Hull Kristensen. 'Regulation under Uncertainty: The Coevolution of Industry and Regulation'. Regulation & Governance (2017).
18 De Groot, Aviva. 'Dear Robot. The special effects of socially assistive robots on the privacy related rights of care receivers' (LLM diss., University of Amsterdam, 2017).
19 van den Berg, Bibi. 'Mind the Air Gap'. In Serge Gutwirth, Ronald Leenes and Paul De Hert (eds), Data Protection on the Move, Current Developments in ICT and Privacy/Data Protection (Dordrecht: Springer, 2016), 1–24.
In a context where multiple regulatory bodies provide patchy coverage of a field of practice due to diverging aims and interests, neither the regulator nor the addressees might be in the position to define clearly and with certainty what actions are required[17] when users' rights might be at stake. This is particularly problematic in the context of socially assistive robots (SARs) that are aimed at enhancing health and wellbeing, whether through caregiving, providing treatment, or health monitoring. The inherent vulnerability of the patient in the healthcare context pushes to the fore the need to ensure that vulnerabilities are not exacerbated through inadequate attention to relevant legal concerns.

Some of these concerns relate to privacy and data protection, which manifest themselves in a variety of ways. For instance, the processing of sensitive data in SARs to generate social reactivity, when based on models that have not been screened well enough, may have unpredictable feedback effects.[18] SARs may go beyond other Internet-of-Things (IoT) objects in obtaining intimate virtual and physical mapping of private or institutional living, due to their additional information gathering potential.[19] Although robots cannot yet reliably identify the social roles of actors in a care setting, such as inhabitants, personal and formal visitors, close and remote family or delivery services, they can nevertheless potentially share any information gathered on users and their environment with companies and third parties. SARs are also at risk of being hacked, exposing users to particular privacy risks.[20] Medical confidentiality issues may arise when patients confide in social robots as friends rather than agents of the care system and expect such information to remain outside the care system. Yet other issues might ensue from emotional data capture, a rapidly developing field where the reliable processing of emotions of the users and subsequent initiation of effective emotionally driven interaction might soon become feasible.[21]

20 Kaminski, Margot E., Matthew Rueben, William D. Smart, and Cindy M. Grimm. 'Averting Robot Eyes'. Maryland Law Review 76 (2016): 983.
21 Fosch-Villaronga, E. and J. Albo-Canals. 'Robotic Therapies: Notes on Governance'. Workshop on Social Robots in Therapy: Focusing on Autonomy and Ethical Challenges, Human Robot Interaction Conference 2018, forthcoming.
22 'Roadmap for Robotics for Healthcare', The European Foresight Monitoring Network, last modified November 2008. Available at: www.foresight-platform.eu/wp-content/uploads/2011/02/EFMN-Brief-No.-157_Robotics-for-Healthcare.pdf.
23 Garmann-Johnsen, Niels, Tobias Mettler, and Michaela Sprenger. 'Service Robotics in Healthcare: A Perspective for Information Systems Researchers?', last modified August 2014. Available at: www.researchgate.net/publication/267763443_Service_Robotics_in_Healthcare_A_Perspective_for_Information_Systems_Researcher.
24 World Population Ageing 2013, United Nations, Department of Economic and Social Affairs, Population Division, last modified 2013. Available at: www.un.org/en/development/desa/population/publications/pdf/ageing/WorldPopulationAgeing2013.pdf.

For this chapter, we selected critical concerns about the use of healthcare robots that are specifically linked to the interaction between humans and robots and that have been comparatively underexplored in the literature until now, in order to highlight the complexity of data protection and privacy concerns that arise in the use of healthcare robots.
2. Healthcare Robots and Stakeholders
in Robotised Healthcare
In recent years, healthcare robots, defined as systems able to perform coordinated mechatronic actions (force or movement exertions) on the basis of processing of information acquired through sensor technology, with the aim to support the functioning of impaired individuals, medical interventions, care and rehabilitation of patients and also to support individuals in prevention programs,[22] have been employed in a wide range of healthcare settings.[23] Driving this phenomenon are the consequences of the demographic regression in developed countries: older persons are projected to exceed the number of children for the first time in 2047.[24] These demographic developments mean that an increasing number of older persons in need of care will need to be cared for by a younger generation whose numbers have dramatically decreased.[25] Given the increase in professional care delivery, a large number of older persons can be expected to enter nursing homes and hospitals and substantially increase demand for medical care and assistance in daily living, on top of already significant levels of healthcare needs.[26]

25 'Will a Robot Care for my Mom?', Colin Angle, TedMed, last modified 2009. Available at: www.tedmed.com/talks/show?id=7193.
26 Supply and Demand Projections of the Nursing Workforce: 2014–2030, US Department of Health and Human Services, last modified 21 July 2017. Available at: https://bhw.hrsa.gov/sites/default/files/bhw/nchwa/projections/NCHWA_HRSA_Nursing_Report.pdf.
27 See 'Roadmap for Robotics for Healthcare', n 22.
Robot strategies envisage that the key objectives of healthcare (to contribute to quality, safety and efficiency of care; to promote the shift to preventive and personalised care; and to support the availability of long term care for people in need[27]) will be increasingly met by robots. The robots that have been developed so far are robots that promise to relieve human staff of physical burdens, such as delivery robots, lifting robots, exoskeletons for nurses or nurses' aids; robots that provide instruction, reminders and support of healthcare-related patient activities, such as cognitive assistive robots, or physical rehabilitation robots; robots that perform or support the realisation of specialised physical skills, such as surgery robots; robots that provide support for physical functions, such as exoskeletons for patients in rehabilitation for mobility impairments; and robots that provide cognitive therapeutic activities for patients, for example social competence training for patients with autism, or reminiscence therapy for patients with neurodegenerative disorders, such as dementia (see Figure 4.1).

Figure 4.1 Examples of healthcare robots: from left to right, Hospit(R) from Panasonic, eBuddy from Bluefrog Robotics, and RIBA from Tokai Rubber Industries
28 Barco, Alex, Jordi Albo-Canals, Carles Garriga-Berga, Xavier Vilasís-Cardona, Laura Callejón, Marc Turón, Claudia Gómez, and Anna López-Sala. 'A Drop-out Rate in a Long-term Cognitive Rehabilitation Program through Robotics aimed at Children with TBI'. In Robot and Human Interactive Communication, 2014 RO-MAN: The 23rd IEEE International Symposium. IEEE, 2014, 186–192.
29 Valenzuela, Emelideth, Alex Barco, and Jordi Albo-Canals. 'Learning social skills through LEGO-based social robots for children with autism spectrum disorder at CASPAN center in Panama'. In 2015 Conference Proceedings, NewFriends, edited by Heerink, M. and Jong, M. de (Windesheim: Flevoland, 2015).
30 'Meet Hookie', Dynatech, last accessed 1 March 2018. Available at: http://hookie.dynatech2012.com/home/.
31 Automated Guided Vehicles. Time Out Analysis, University of Michigan, last modified 1 July 1992. Available at: http://umich.edu/~ioe481/ioe481_past_reports/w9208.pdf.
Indeed, robots help deliver care in novel ways. For instance, researchers in autism-related traditional interventions were confronted with the task to investigate the complex relationship between the acquisition of communication skills, social-emotional factors and types of transactional support that predict better outcomes for children with autism. This was greatly challenged by the fact that, although autistic children have comparable developmental difficulties, there are huge differences among them. Robots promise to optimise care delivery because they can adapt easily to each individual's needs,[28] they are predictive and repetitive, and also very engaging.[29] Furthermore, with the help of ambient intelligence technologies, information from the session can be collected for further analysis and improvement in a way that was not feasible before.[30]

Concerning the efficiency of care, some hospitals have started incorporating autonomous ground vehicles (AGV) in their facilities to cover internal delivery routes. An idea already debated in the nineties,[31] these robots help streamline some of the basic tasks previously performed by nurses (ie deliver food, medicines and clothes from the kitchen, pharmacy or laundry room to the patients' rooms). They may have non-biomimetic or humanoid shape, and can work steadily 24/7, only requiring maintenance.

Each of these types of robots raises particular legal and ethical concerns. However, considered from a privacy and data protection perspective some common themes emerge, relating to the collection and use of potentially sensitive data from healthcare and domestic settings with regard to patients, bystanders and healthcare staff, and with corresponding implications regarding design responsibilities of robot developers. While patient users tend to be the focus of attention in most ethical and legal discussions of healthcare robots, patient users are only one of many stakeholder groups. As this chapter illustrates, a privacy and data protection angle on the discussion is particularly helpful for exploring the complexity of the stakeholder network for healthcare robotics, which may include cloud service providers, third parties, the institution as a whole, different types of users, workers interacting with the robots as well as manufacturers (see Figure 4.2).
Figure 4.2 Human-Robot Interaction stakeholder complexity: Worker (doctors, nurses, therapists); User (elderly, children, disabled, general user); Institution (type of institution, decision-making, public/private); Manufacturer (country, responsibility, artificial intelligence); Providers (cloud services, speech recognition, robot-as-a-service); Third parties (relatives, third users)
3. Six Privacy Issues for Healthcare Robots: The Distinctive Lens of Human-Robot Interaction
Having as a starting point the objectives of healthcare and the systemic issues that the sector is facing, this section shows how the complex net of relationships among the actors involved, all bearing different needs and interests, can be unravelled from the perspective of privacy and data protection. We outline privacy and data protection challenges for healthcare robots for different stakeholders, using the lens of human-robot interaction (ie the way robots and humans work and function together in the healthcare context). This focus will allow us to highlight a critical element that distinguishes healthcare from other services and sectors: the care of individuals, frequently in circumstances of vulnerability.
32 Allen, Anita, 'Compliance Limited Health Privacy Laws' in Social Dimensions of Privacy: Interdisciplinary Perspectives, edited by Roessler, B. and D. Mokrosinska (Cambridge: Cambridge University Press, 2015), 261–277.
33 Human Rights and Health, World Health Organization, last modified December 2017. Available at: www.who.int/mediacentre/factsheets/fs323/en/.
34 Dworkin, Gerald. 'Can you Trust Autonomy?'. The Hastings Center Report 33, no 2 (2003): 42–44.
3.1. Confidentiality, Induced Trust and the Nudging of Disclosure
The use of social robots has been shown to influence disclosure behaviour of those who interact with them. This phenomenon has particular significance in healthcare contexts, where disclosure by patients takes place in a protected confidential setting. The core principle of medical confidentiality, a species of privacy, governs the interaction between patients and healthcare professionals, protecting knowledge of what is being said, seen, and done from outsider access. A range of contextual values, ethical norms and considerations with regard to confidential information are at play in daily health care practices. These are inevitably not fully reflected by the laws that govern these relationships. Accordingly, using social robots in these settings poses challenges with regard to the management of complex privacy related responsibilities.

Medical confidentiality legally protects any kind of information shared between the patient and their professional caregivers and stretches out to anyone professionally involved in care delivery or the processing of relevant information, like administrative personnel and technicians. Outside of that circle, information can only be shared with a patient's explicit consent and following her autonomous choices. These requirements are governed by national laws and professional self-regulation. For informational relations that fall outside of these laws' legal domains (think of exchanges with visitors, or a patient's private health care app) data protection rules on medical and health-related data are applicable. The European General Data Protection Regulation prohibits processing of these data except on the basis of a limited list of exceptional grounds, medical treatment being one of them (GDPR, Article 9). These protections support the establishment of trusted relationships in which a care receiver feels comfortable to disclose sensitive information, without fear of wider disclosure, thereby facilitating care delivery that fits her needs.[32] In that sense, there is a clear public interest, as is recognised by the World Health Organisation.[33] Confidentiality also protects a care receiver's autonomous treatment decisions by shielding the process from outside influence, including paternalistic interference by persons around the care receiver.[34]
When social robots are cast in the role of care providers or care assistants, their behaviour needs to be compliant with these demands of confidentiality. This not only pertains to their own interactions with patients, as they are likely to impact the relationships of patients with their human caregivers as well. Effects on patients' disclosure autonomy and on trust building, and the consequences of secret robot confidences, are complications that need to be taken seriously.

Relations between SARs and care recipients have distinctive characteristics. Users appear to be keen on establishing a relationship, as illustrated by a quote from a test subject about a robot, who stated 'I think I want to see him as something that is alive. He's charming.'[35] On a more subconscious level, Human Robot Interaction (HRI) research convincingly shows that anthropomorphic projections and ascriptions of purpose, even in the face of unpredictable,[36] unexplained or erroneous robot behaviour,[37] are strong and highly resistant to elimination. One study showed how users explained a robot's movements in relation to the goals that they expected it to pursue, without considering the possibility of a purely technical explanation.[38] In another study, a robot which asked a human to put an object 'up there' but gestured towards the floor was experienced as particularly likeable or playful, rather than faulty.[39]

35 Frennert, Susanne, Håkan Eftring, and Britt Östlund. 'Case Report: Implications of Doing Research on Socially Assistive Robots in Real Homes'. International Journal of Social Robotics 9, no 3 (2017): 401–415.
36 Darling, Kate. 'Extending Legal Protection to Social Robots: The Effects of Anthropomorphism, Empathy, and Violent Behaviour Towards Robotic Objects' in Robot Law, edited by Ryan Calo, Michael Froomkin, Ian Kerr (Cheltenham: Edward Elgar Publishing, 2016), 213–231.
37 See Salem, Maha, Friederike Eyssel, Katharina Rohlfing, Stefan Kopp, and Frank Joublin. 'To Err is Human(-like): Effects of Robot Gesture on Perceived Anthropomorphism and Likability'. International Journal of Social Robotics 5, no 3 (2013): 313–323.
38 Wortham, Robert H., Andreas Theodorou, and Joanna J. Bryson. 'Robot Transparency: Improving Understanding of Intelligent Behaviour for Designers and Users'. In Conference Towards Autonomous Robotic Systems (Cham: Springer, 2017), 274–289.
39 See Salem et al, n 37.
40 Salem, Maha, Gabriella Lakatos, Farshid Amirabdollahian, and Kerstin Dautenhahn. 'Would You Trust A (Faulty) Robot?: Effects of Error, Task Type and Personality on Human-Robot Cooperation and Trust'. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction (New York: ACM, 2015), 141–148.
41 Robinette, Paul, Wenchen Li, Robert Allen, Ayanna M. Howard, and Alan R. Wagner. 'Overtrust of Robots in Emergency Evacuation Scenarios'. In Human-Robot Interaction (HRI), 2016 11th ACM/IEEE International Conference (IEEE, 2016), 101–108.
The same unconscious projections seem to stimulate the establishment of trust in robots, which happens easily and quickly. Research has shown that people trust robots even when they have clear evidence that such trust is unjustified.[40] In a study on emergency evacuation scenarios, participants consistently followed a robot despite previously seeing the same robot navigate poorly. The robot led 26 participants into a dark room with no exits during a simulated fire, and no one thought to safely exit the way they came in rather than following the robot's instructions.[41] As will be shown next, this trust sometimes trumps that which is put in humans.
This natural trust in robots has positive effects on patients' willingness to disclose, as they not only feel at ease to do so but are stimulated as well. In health care settings, the presence of robots has been shown to help care receivers feel at ease. Paro the baby seal has been found to alleviate stress in persons with dementia, some of whom are enticed into interacting with social robots even when they refuse to respond to human interaction.[42] Care receivers with less cognitive impairment have been found to experience robots as more predictable and less demanding in interaction than humans.[43] There is extensive evidence that social robots in different practice contexts consistently stimulate their interaction partners to disclose information more readily, and with fewer inhibitions than in the context of human relationships.[44] Patients are known to disclose more to a robot than to telepresence screens or cameras, making robots potentially particularly suitable for monitoring purposes.[45]

42 Kahl, Björn, Matthias Füller, Thomas Beer, and Sven Ziegler. 'Acceptance and Communicative Effectiveness of Different HRI Modalities for Mental Stimulation in Dementia Care' in New Frontiers of Service Robotics for the Elderly, edited by Di Nuovo, E., F. Broz, F. Cavallo, P. Dario (IEEE, 2014), 11–14.
43 Bryson, Joanna J. 'Robots Should Be Slaves'. Close Engagements with Artificial Companions: Key Social, Psychological, Ethical and Design Issues (2010): 63–74.
44 Thomasen, Kristen, 'Examining the Constitutionality of Robot-enhanced Interrogation' in Robot Law, edited by Ryan Calo, Michael Froomkin, Ian Kerr (Cheltenham: Edward Elgar Publishing, 2016), 306–330.
45 Sedenberg, Elaine, John Chuang, and Deirdre Mulligan. 'Designing Commercial Therapeutic Robots for Privacy Preserving Systems and Ethical Research Practices Within the Home'. International Journal of Social Robotics 8, no 4 (2016): 575–587.
46 See Kaminski et al, n 20 at 983.
47 See Thomasen, n 44.
48 Calo, M. Ryan. 'Robots and Privacy' in Robot Ethics: The Ethical and Social Implications of Robotics, edited by Lin, Patrick, Keith Abney, and George A. Bekey (Cambridge, MA: MIT Press, 2011).
ese e ects are noted by Kaminski et al, who describe the impact on the
disclosure autonomy function of medical con dentiality: sharing health related
information according to one s own preferences.
46 Robots framing as socially
interactive entities stimulates care receivers to disclose even more, partly because
disclosure of personal information is a behaviour that is generally rewarded in
interactive contexts.  e therapeutic trust-building phase of patients and caregiv-
ers can be speeded up when a robot enters the relationship; the initial testing
of an interactions partner s trustworthiness can be shortened.
47 Calo further
warns that robots might be employed to exert their super-human leverage. e
robot s human-like appeal is ampli ed in absence of the socially inhibiting, self-
censoring e ect that is a part of human interaction.
48 e trust building phase
between patient and caregiver has merits of its own, as patients need to be able
to trust their caregivers independently of robotic presence.  at system of natu-
ral trust development is interfered with and can be potentially exploited by using
robots. Manipulations of patients willingness to disclose risks to erode the disclo-
sure autonomy function of medical con dentiality.
Considering that social robots are designed to draw on these human inclinations to ensure their effective functioning, a robot's role should be carefully constructed to avoid complications. Transparency with regard to a robot's function and operational capacities is especially of concern in healthcare settings. The hidden nature of information processing by robots is likely to lead to lack of awareness about the nature and extent of information processing that goes on. A robot's capabilities should be communicated to patients and other users to allow them to reflect on their engagement with the robot. Disappointment about unexpected uses of information collected by robots will harm the trust of patients in robots and the caregivers that employ them.

49 Kaminski et al, n 20 at 983.
50 See Allen, n 32 at 261–277.
51 Pettinati, Michael J. et al, 'The Influence of a Peripheral Social Robot on Self-Disclosure' (2016) 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN).
This means the programming of robotic behaviours needs careful consideration. Using interpersonal clues, robots can be designed to make users more comfortable, for instance by 'averting robot eyes', thereby distracting them from the fact that information is still being collected.[49] Apart from how this harms a patient's legitimate expectations, this paves the way for abuse in the form of covert collection of, for example, sensitive images. With earlier introductions of covert recording technologies, these highly invasive privacy interferences became a pervasive problem, targeting black women disproportionately.[50] Depending on who the robot users are and in what contexts recording is taking place, covert robot recording might similarly disproportionately affect members of certain groups or include a significant level of sensitive information. This risks eroding these groups' trust in medical confidentiality, discouraging them from seeking care.
But robots might also be used to enhance an established human therapeutic relationship. Pettinati et al researched whether the robot Nao, as a peripheral tool, might be used to alert caregivers to norm violations in talking sessions between caregivers and care receivers, for example when patients that suffer from Parkinson's disease show a loss of expressivity and a doctor reacts negatively. As a first tested step, an attending, but not recording, Nao was considered non-intrusive by test subjects, even though it was experienced as listening in on them. Human attendance, on the other hand, they reported, would disturb their disclosure readiness.[51]
However, a positive difference in comfort that patients feel with robots compared to their human caregivers can cause problems of its own. If patients spend much time with a social robot, especially if this time is experienced as rewarding, they might begin to conceive of the robot as a friend rather than a healthcare aid. This might have an effect on how they perceive the norms of disclosure that the robot is likely to follow. Accordingly, they might consider information given to the robot to be confidential to the robot itself rather than to the care system within which the robot is deployed, and feel betrayed if such information is passed on to human caregivers. While human nurses are used to balancing conflicting demands for confidentiality and disclosure in the complex context of caring relationships,[52] and are able to use their discretion based on their understanding of specific patient needs and expectations, this goes beyond current capabilities of robots. The rules of transparency of robot data to the healthcare team might follow different principles and therefore pose particular challenges in terms of who has the control and authority to disclose patient information to the healthcare team. As Draper and Sorell point out, it would, for instance, be highly problematic from an informational autonomy point of view to use robotic data recording to 'second-guess the older person's own testimonies',[53] especially if that data has been gained as a result of induced willingness to disclose in the presence of the robot.

52 Nissenbaum, Helen. Privacy in Context: Technology, Policy, and the Integrity of Social Life (Stanford: Stanford University Press, 2009).
53 Sorell, Tom and Heather Draper, 'Robot Carers, Ethics, and Older People', Ethics and Information Technology 16, no 3 (2014): 183–195.
54 Faden, Ruth R., and Tom L. Beauchamp. A History and Theory of Informed Consent. (Oxford: Oxford University Press, 1986).
55 Beauchamp, Tom L., James F. Childress, Principles of Biomedical Ethics (7th edn). (New York: Oxford University Press, 2012).
56 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation (GDPR)).
3.2. Complexities of Consent in Healthcare Robotics
Consent is a core requirement for data processing. Consent must be voluntary and informed, with a clear explanation of what data is being collected, what the data will be used for, and how long it will be stored. However, consent to the collection and use of data by the robot also occurs within the context of healthcare, which carries its own norms and requirements for consent. Accordingly, the robot operates under both data governance schemes in that, as it collects data, it must comply with data protection laws and, as a healthcare aid, must comply with any requirements for consent for these health activities. The multi-functional SAR operating as a caregiver sits within the regulatory domains of each of those functions. Consequently, as a matter of medical ethics, consent must meet requirements of voluntariness, competence of the patient, and clear and full disclosure of relevant information accompanied by comprehension of that information.[54][55] From a legal viewpoint, consent should be given by 'a clear affirmative act establishing a freely given, specific, informed and unambiguous indication of the data subject's agreement to the processing of personal data relating to him or her'.[56] Moreover, beyond the fact that the controller shall demonstrate that the data subject has given their consent, and offers an easily accessible withdrawal possibility to the data subject, the processing of sensitive data such as health data implies that explicit consent has been given. During the consent process it should be made clear that the robot functions not only as a caregiver but also as a data processing device, and both types of functions should be explicitly authorised by the care receiver or his or her surrogate. In addition to that, it should also be clarified who is a data controller and who is a data processor. As seen in Figure 4.2, the healthcare sector may involve multiple actors, including doctors, personnel, the healthcare institution, providers of services, possible lessors for certain equipment or the users themselves. Based on the definitions and regimes established by Articles 4, 26, and 28 of the GDPR, this scenario requires the identification of the de facto role of each actor. This will be of greater importance for those patients using healthcare robots at home, because, whereas the household exception (GDPR, Article 2) would exist for the users vis-à-vis their guests/domestic collaborators provided that all the conditions are met, it would not apply with regard to the companies providing the monitoring services and carrying out the processing of the data.
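The identification exercise described above can be made concrete as a simple record of a deployment's actors and their de facto GDPR roles. The following Python sketch is purely illustrative: the actor names, role labels and the Article 28 check are our own assumptions for a hypothetical home-care deployment, not terminology prescribed by the GDPR or by any particular robot platform.

```python
from dataclasses import dataclass
from enum import Enum

class GdprRole(Enum):
    CONTROLLER = "controller"        # determines purposes and means (Art. 4(7))
    PROCESSOR = "processor"          # processes on the controller's behalf (Art. 28)
    DATA_SUBJECT = "data_subject"    # person whose data is processed

@dataclass
class Actor:
    name: str
    role: GdprRole
    purpose: str  # short description of why this actor touches the data

# Hypothetical home-care deployment of a socially assistive robot.
deployment = [
    Actor("care organisation", GdprRole.CONTROLLER, "remote monitoring and care planning"),
    Actor("robot manufacturer", GdprRole.PROCESSOR, "telemetry and software maintenance"),
    Actor("cloud speech-recognition service", GdprRole.PROCESSOR, "voice transcription"),
    Actor("patient", GdprRole.DATA_SUBJECT, "care receiver"),
    Actor("visiting relative", GdprRole.DATA_SUBJECT, "bystander captured by sensors"),
]

def requires_processing_agreement(actor: Actor) -> bool:
    """Processors acting for the controller need an Art. 28 agreement."""
    return actor.role is GdprRole.PROCESSOR

# The household exemption (Art. 2) may cover the patient's own purely personal
# use, but never the companies providing the monitoring service.
for actor in deployment:
    note = "Art. 28 agreement needed" if requires_processing_agreement(actor) else ""
    print(f"{actor.name:35} {actor.role.value:14} {note}")
```

Documenting the mapping in this explicit form is one way a deployer could show which actor is accountable for which processing purpose; the legal assessment itself, of course, remains a matter for the parties and their lawyers.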
The importance of regard for consent in the use of healthcare robots is underscored by the fact that, due to their interactive nature, robots as cyber-physical systems are often not immediately recognised by users as data-processing devices that may require consent for their data usage. Instead, they tend to be experienced and conceptualised primarily as interactive partners, albeit as liminal objects between animate and inanimate, as Turkle highlights.[57] Due to the anthropomorphisation effect,[58][59] humans tend to project and attribute human-like features to even very simple objects and often ascribe function on the basis of similarities to human anatomy.[60] As pointed out by de Groot, this might lead to significant misconceptions about data processing: not only might robot eyes see more than human eyes, there might also be other sensors with no human equivalent, capturing large amounts of data, for example for navigation purposes or for supporting effective and safe HRI.[61] Additionally, such data might be shared with different companies for improvement purposes (eg robots often rely on externally developed speech recognition services). In those cases of further data use, the controller might not see themselves as responsible for the processor's adherence to data protection laws.[62] Data may also be processed remotely in the cloud, blurring the relationships, responsibilities and liabilities among various actors, such as users, manufacturers, and cloud service providers.[63]

57 Turkle, Sherry. Alone Together: Why We Expect More from Technology and Less from Each Other. (London: Hachette, 2017).
58 See Salem et al, n 37 at 313–323.
59 Rault, Jean-Loup. 'Pets in the Digital Age: Live, Robot, or Virtual?'. Frontiers in Veterinary Science 2 (2015): 11.
60 Mead, Ross, and Maja J. Mataric. 'Robots Have Needs Too: People Adapt their Proxemic Preferences to Improve Autonomous Robot Recognition of Human Social Signals'. New Frontiers in Human-Robot Interaction 5, no 2 (2015): 48–68.
61 See De Groot, n 18 at 28.
62 'Privacy Policy', Terms of Use, ToyTalk, last modified 11 April 2017. Available at: https://toytalk.com/hellobarbie/privacy/.
63 Fosch-Villaronga, E. and Millard, C. 'Robots, Clouds and Humans. Challenges in Regulating Complex and Dynamic Cyber-Physical Ecosystems', SSRN (2018), forthcoming.
64 Ienca, M. and Fosch-Villaronga, E. (2018) 'Privacy and Security Issues in Assistive Technologies for Dementia: The Case of Ambient Assisted Living, Wearables and Service Robotics'. In Assistive Technologies for Dementia Care, edited by Fabrice Jotterand, Marcello Ienca, Tenzin Wangmo, and Bernice Elger (Oxford: Oxford University Press, 2018).
65 Kaye, Jane, Edgar A. Whitley, David Lund, Michael Morrison, Harriet Teare, and Karen Melham. 'Dynamic Consent: A Patient Interface for Twenty-first Century Research Networks'. European Journal of Human Genetics 23, no 2 (2015): 141.
66 Williams, Hawys, Karen Spencer, Caroline Sanders, David Lund, Edgar A. Whitley, Jane Kaye, and William G. Dixon. 'Dynamic Consent: A Possible Solution to Improve Patient Confidence and Trust in How Electronic Patient Records are used in Medical Research'. JMIR Medical Informatics 3, no 1 (2015).
67 Budin-Ljøsne, Isabelle, Harriet JA Teare, Jane Kaye, Stephan Beck, Heidi Beate Bentzen, Luciana Caenazzo, Clive Collett et al. 'Dynamic Consent: A Potential Solution to Some of the Challenges of Modern Biomedical Research'. BMC Medical Ethics 18, no 1 (2017): 4.
Consent requirements must take into account the vulnerability of the robot user. The nature of the vulnerability that has resulted in the need for robotic assistance may compound the complexity of consent requirements. Assistive robots are strongly promoted as a way to address the care and support needs of an aging demographic; many of those robots are targeted at users with early dementia or other cognitive impairments. The consent process for care receivers with cognitive impairments is particularly demanding due to the complexity of their capacity, which may be inconsistent and fluctuating. Jurisdiction-specific capacity legislation provides the legal basis for consent for persons with cognitive impairments, with a human rights based focus on the presumption of capacity, while acknowledging that a balance needs to be struck between the patient's autonomy and informational privacy rights and the duty of beneficence. The consent process might involve additional parties that support the patient, or in the case of severely impaired capacity serve as decision-making proxies. As highlighted above, consent needs to include both the authorisation of the use of the robotic assistance for healthcare purposes as well as data protection consent, such as consent to make available user information to additional parties to ensure the provision of competent and comprehensive care.[64]
The mechanism of achieving consent for the use of robotic assistance could be tailored to enhance understanding and the exercise of autonomy by the care receiver and achieve an optimal level of transparency and accuracy. A two-phase model of consent could be conceived here: it would initially be the responsibility of the service provider to facilitate the user's consent to the use of the robot as healthcare aid, including its most important data protection aspects. However, once the robot has been accepted as healthcare aid and is in use, it might be possible to have the robot initiate conversations about consent with users, provide further information and implement the ongoing consent process for any emerging changes. With the increasing availability of sophisticated conversational interfaces this option becomes increasingly attractive (see 3.3. below). Ongoing information and consent facilitation by the robot would constitute a form of dynamic consent,[65][66][67] allowing users to be kept abreast of relevant developments, but also to manage their own privacy preferences in a time-sensitive manner; in short, enhancing their opportunities to exercise informational autonomy. However, limitations of the robot's ability to facilitate such communications need to be kept in mind, and the option to engage in ongoing consent conversations with the service provider should always be available.
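To make the two-phase model more tangible, the sketch below outlines one possible consent flow in Python: an initial consent facilitated by the service provider, followed by robot-initiated dynamic consent prompts whenever the processing changes, with a fallback to the human provider when the user asks for it. All class and function names are invented for illustration and do not correspond to any existing robot platform or to prescribed GDPR wording.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ConsentRecord:
    purpose: str          # eg "medication reminders", "night-time monitoring"
    granted: bool
    timestamp: datetime
    obtained_by: str      # "service_provider", "robot" or "user"

@dataclass
class ConsentLedger:
    records: list = field(default_factory=list)

    def grant(self, purpose: str, obtained_by: str) -> None:
        self.records.append(ConsentRecord(purpose, True, datetime.now(), obtained_by))

    def withdraw(self, purpose: str) -> None:
        # Withdrawal must be as easy as granting (GDPR Art. 7(3)).
        self.records.append(ConsentRecord(purpose, False, datetime.now(), "user"))

    def is_granted(self, purpose: str) -> bool:
        relevant = [r for r in self.records if r.purpose == purpose]
        return bool(relevant) and relevant[-1].granted

# Phase 1: the service provider facilitates the initial consent conversation.
ledger = ConsentLedger()
ledger.grant("robot-assisted care at home", obtained_by="service_provider")
ledger.grant("processing of health data for care planning", obtained_by="service_provider")

# Phase 2: once in use, the robot raises consent conversations for emerging changes.
def robot_requests_new_purpose(ledger: ConsentLedger, purpose: str,
                               user_says_yes: bool, user_wants_human: bool) -> None:
    if user_wants_human:
        print(f"Deferring '{purpose}' to a consent conversation with the service provider.")
        return
    if user_says_yes:
        ledger.grant(purpose, obtained_by="robot")
    # Silence or refusal never counts as consent.

robot_requests_new_purpose(ledger, "sharing activity data with the physiotherapist",
                           user_says_yes=True, user_wants_human=False)
print(ledger.is_granted("sharing activity data with the physiotherapist"))  # True
```

The point of the ledger is simply that every grant and withdrawal is time-stamped and attributed, which is one way the controller could later demonstrate that (and how) consent was obtained.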
Another aspect of the consent process arises as a result of the interaction of the robot with any humans other than the care receiver who may be present. As we discuss later in this chapter, this may involve the observation and collection of information about the caregiver and other human staff who may be working in the presence of the care receiver and the robot. As we note below, SARs, whether in homes or institutions, will be functioning in the context of the workplace of other humans. This raises privacy considerations with regard to what extent these workers consent to the processing of data collected about them. As we argue, consent from these workers needs to be obtained if a data protection compliant workplace is to be provided. However, it remains uncertain whether workers are really free to provide consent in the full sense to the processing of data, or how this applies to purely machine-to-machine communications.[68]

68 Recital 12 of the Proposal for a Regulation of the European Parliament and of the Council concerning the respect for private life and the protection of personal data in electronic communications and repealing Directive 2002/58/EC (Regulation on Privacy and Electronic Communications).
69 Broadbent, Elizabeth, Rebecca Stafford, and Bruce MacDonald. 'Acceptance of Healthcare Robots for the Older Population: Review and Future Directions'. International Journal of Social Robotics 1, no 4 (2009): 319.
70 'Mabu', Catalia Health, last accessed 2 March 2018. Available at: www.cataliahealth.com.
71 'Buddy', Bluefrog Robotics, last accessed 2 March 2018. Available at: www.bluefrogrobotics.com/en/home/.
72 'Mario Project', Managing ACtive and Healthy Aging with the Use of Robotics, European Union, last accessed 2 March 2018. Available at: www.mario-project.eu/portal/.
3.3. Conversational Privacy Management with Robots
SARs with socially communicative capabilities can give physical and cognitive support to elderly persons, disabled persons, or many other types of users.[69] Robots that provide instructions or remind users of certain tasks, as well as those designed for cognitive therapeutic activities, often deploy vocal communication capabilities together with other types of user interfaces, such as buttons or interactive screens. Personal healthcare companions like Mabu or Buddy, for instance, use a vocal notification system to remind patients when to take their medicines.[70][71] These capabilities have also been used in robots that are designed to support elderly patients and patients with dementia.[72] Robots can ask patients why they got up in the middle of the night, remind them to go to the toilet or, if they see that patients are opening the door to exit, they can talk to them or broadcast messages from family members to try and dissuade them from leaving.[73] Therapeutic robots for neurodevelopmental disorders such as autism spectrum disorder talk with their users to play with them, help them learn and stimulate social responses, thereby providing skills training as well as creating emotional bonds.[74]

73 Sharkey, Amanda, and Noel Sharkey. 'Granny and the Robots: Ethical Issues in Robot Care for the Elderly'. Ethics and Information Technology 14, no 1 (2012): 27–40.
74 Diehl, Joshua J., Lauren M. Schmitt, Michael Villano, and Charles R. Crowell. 'The Clinical Use of Robots for Individuals with Autism Spectrum Disorders: A Critical Review'. Research in Autism Spectrum Disorders 6, no 1 (2012): 249–262.
75 Krishna, Golden. The Best Interface is no Interface: The Simple Path to Brilliant Technology. (Harlow: Pearson Education, 2015).
76 Harkous, Hamza, Kassem Fawaz, Kang G. Shin, and Karl Aberer. 'PriBots: Conversational Privacy with Chatbots'. In WSF@SOUPS, 2016.
77 Weiser, Mark. 'The Computer for the 21st Century'. Scientific American 265, no 3 (1991): 94–105.
78 See Ienca and Fosch-Villaronga, n 64.
79 Courtney, Karen L. 'Privacy and Senior Willingness to Adopt Smart Home Information Technology in Residential Care Facilities'. Methods Inf Med 47, no 1 (2008): 76–81.
Conversational user interfaces, although removing reliance on screens[75] and aiming towards achieving 'the ultimate human-computer interface',[76] raise the question whether users will, at some point, forget the fact that they are talking with a machine. Indeed, 'the most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it'.[77] The close integration between users and friction-free conversational interfaces could reduce the users' awareness of the information they share with a system that may be collecting and processing information comprehensively and continuously.[78] Indeed, in order to understand the patient's interactions and reply appropriately, robots deploy natural language analysis methods that collect vast amounts of data, which in turn serve to refine their overall performance over time. The collection and processing of personal data (voice patterns, content of the conversations) that is also particularly sensitive data (health and behavioural data) creates concerns with regard to processing, profiling, and data transfers to third parties, as required for reliance on outside services such as speech recognition services. Such data would also have potential value for possible secondary uses that are not primarily related to the performance of the service, such as marketing and behavioural advertising.
While the use of vocal commands is increasingly popular, due to ease of use, comfort and potential health benefits,[79] the possibility that family members, neighbours or caregivers themselves can gain intimate knowledge about users by overhearing their conversations with the robot might make users uncomfortable. Users might prefer a non-conversational interface that allows the content of their interactions to remain more private. Speech interfaces also come with additional vulnerabilities in terms of security and safety (eg the risk that other persons might be able to establish a conversation with the robot and potentially gain access to user information or change the robot's settings), but also in terms of autonomy and free will, as it may not be clear to what extent normative behaviours should be enforced by robot and artificial intelligent healthcare technologies.

80 Verbeek, Peter-Paul. 'Toward a Theory of Technological Mediation'. Technoscience and Postphenomenology: The Manhattan Papers (2015): 189.
81 Verbeek, Peter-Paul. 'Beyond Interaction: A Short Introduction to Mediation Theory'. Interactions 22, no 3 (2015): 26–31.
82 Gittins, Anthony J. 'The Universal in the Local: Power, Piety, and Paradox in the Formation of Missionary Community'. Mission and Culture: The Louis J. Luzbetak Lectures (2012): 133–87.
83 Gortzis, L.G. 'Designing and Redesigning Medical Telecare Services'. Methods of Information in Medicine 46, no 1 (2007): 27–35.
84 Calo, Ryan. 'Against Notice Skepticism in Privacy (and Elsewhere)'. Notre Dame L. Rev. 87 (2011): 1027.
85 See Harkous et al, n 76.
The interaction with robots with conversational capabilities may have an effect on how users perceive what counts as private. Technology shapes the way humans experience reality and operate within it. According to the theory of technological mediation, the types of relations, the points of contact, and the mutual influence between humans and technologies impact on how humans interpret and even construct reality.[80] Technology becomes a filter and, at the same time, an agent determining how individuals see the world.[81] If it is true that 'dialogue changes both parties, and its outcome cannot be predetermined',[82] then robot agents with conversational capabilities may affect us in ways that are potentially unpredictable. This means that, apart from other aspects,[83] designers of socially interactive robots need to be cognisant of the potential impact of seemingly innocuous technical solutions that may however change significantly the way in which the robot will be integrated into users' lives. Speech-based interaction may constitute such a substantially significant change.
As already indicated in section 3.2, conversational interfaces could potentially be suitable for enhancing privacy management, and could be specifically developed to help data subjects navigate privacy settings and policies in a more engaging and user-friendly manner. The communication of privacy settings and policies is a notoriously problematic area of privacy management; the traditional approach to notice and consent has long been criticised as ineffective and in need of improvement.84 An example of such a conversational interface for privacy management is PriBots, a conversational bot capable of providing support to users with regard to the privacy options of a website, product, or company.85 Inspired by commercial bots used to provide easy and clear responses to online shoppers, PriBots can give explanations, answer questions, and help users automatically change privacy settings. Such a product could potentially be used in healthcare robots. Conversational bots could deliver information to facilitate compliance with GDPR requirements, such as information concerning data processing, its purposes and data transfers, in line with the requirements of Articles 12 and 13 or Chapter III of the GDPR. Such bots might facilitate an increased level of interactivity, which could help data subjects improve their understanding of the content of the privacy policy of a given company. Such bots could also be deployed to alert users to changes and to deliver relevant information in an easily accessible and interactive way throughout the entire lifespan of the healthcare robot, in the manner of dynamic consent interfaces.86 Such an approach to the provision of privacy-relevant information would support the principle of transparency and privacy by design and by default. If such an approach to conversational privacy management is combined with voice recognition, it can also help foster privacy and data protection by delivering more tailored information to the different subjects using the same robot, such as multiple patients, medical staff, parents or tutors.
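To make this idea more concrete, the sketch below illustrates one way a conversational privacy interface for a healthcare robot might map a user's spoken question to the transparency information that Articles 13 and 14 GDPR require controllers to provide. It is a minimal, hypothetical illustration rather than a description of PriBots or of any existing product: the intent keywords, the disclosure texts and the matching logic are assumptions introduced purely for exposition.

    # Hypothetical sketch: routing user questions to GDPR transparency information.
    # The disclosure texts and keyword lists are illustrative placeholders only.

    DISCLOSURES = {
        "controller": "The data controller is the care provider operating this robot.",
        "purposes": "Voice data is processed to understand commands and to support your care plan.",
        "recipients": "Speech recognition is performed by an external provider acting as a processor.",
        "retention": "Conversation logs are kept for 30 days and then deleted.",
        "rights": "You can ask for access, rectification, erasure, or portability of your data at any time.",
    }

    KEYWORDS = {
        "controller": ["who is responsible", "controller", "who runs"],
        "purposes": ["why", "purpose", "what for"],
        "recipients": ["who receives", "third party", "share"],
        "retention": ["how long", "keep", "delete"],
        "rights": ["my rights", "erase", "copy of my data", "portability"],
    }

    def answer_privacy_question(utterance: str) -> str:
        """Return the transparency notice that best matches a user's question."""
        text = utterance.lower()
        for topic, phrases in KEYWORDS.items():
            if any(phrase in text for phrase in phrases):
                return DISCLOSURES[topic]
        return ("I can explain who processes your data, why, who receives it, "
                "how long it is kept, and your rights.")

    # Example interaction:
    # answer_privacy_question("How long do you keep what I tell you?")
    # -> "Conversation logs are kept for 30 days and then deleted."

The caveats set out in the next paragraph, concerning the accuracy of the legal information and the quality of the interaction, apply directly to any such implementation.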
It will be important to deploy these conversational interfaces only when it can be assured that the legal information can be presented accurately in this format, that the quality of the interaction is sufficiently high that users receive appropriate responses to their queries or wider interactions with the system, and that the interface performs at least as well as human interaction with regard to ensuring the user's understanding of the information delivered. This will require careful user studies, which should ideally be designed not just to capture the comparative performance of such interfaces against written privacy policies or human conversation, but also to capture qualitative differences in use and user experience, in order to do justice to the recent statement of the EP ascribing wider accountability to robotics engineers 'for the social, environmental and human health impacts that robotics may impose on present and future generations'.87 A data protection impact assessment (GDPR, Article 35) could demonstrate that the robots have been designed in the most privacy-friendly way possible. However, the consequences of technological mediation through the innovation of conversational interfaces, and their impact on wider elements of privacy, deserve attention in their own right.

86 See Kaye, Jane et al, n 65 at 141.
87 See Civil Law Rules on Robotics n 6 at 20.
3.4. Data Portability and the Robotised Self
The application of data protection and privacy regulation to healthcare robots needs to remain cognisant of the dual nature of robots, with their constitution as cyber-physical entities: they are at the same time part of the virtual and the tangible world. This greatly affects the translation of general regulatory principles, including data protection, to these entities. The right to data portability in relation to healthcare robots is a case in point, as ensuring appropriate robot functioning might not only depend on which data is collected and processed but also, and largely, on the robot's specific embodiment. The possibility to transfer data from one controller to another is unlikely to be as simple as 'copy-paste', because different specific embodiments of the robots in question might prevent users from adequately using such data, even if they are in possession and control of their data.

88 Zhang, Qian, Min Chen, and Limei Xu. 'Kinematics and Dynamics Modeling for Lower Limbs Rehabilitation Robot.' In International Conference on Social Robotics (Berlin: Springer, 2012), 641–649.
89 Yamawaki, Kanako, Ryohei Ariyasu, Shigeki Kubota, Hiroaki Kawamoto, Yoshio Nakata, Kiyotaka Kamibayashi, Yoshiyuki Sankai, Kiyoshi Eguchi, and Naoyuki Ochiai. 'Application of Robot Suit HAL to Gait Rehabilitation of Stroke Patients: A Case Study.' In International Conference on Computers for Handicapped Persons (Berlin: Springer, 2012), 184–187.
90 Nozawa, Masako, and Yoshiyuki Sankai. 'Control Method of Walking Speed and Step Length for Hybrid Assistive Leg.' In International Conference on Computers for Handicapped Persons (Berlin: Springer, 2002), 220–227.
91 'Assistive ExoSkeleton ExoLite', Exomed, last accessed 2 March 2018. Available at: www.exomed.org/main.
92 Rupal, B.S., A. Singla, and G.S. Virk. 'Lower Limb Exoskeletons: A Brief Review.' In Conference on Mechanical Engineering and Technology (COMET-2016) (IIT (BHU), Varanasi, India, 2016), 130–140.
93 'Wearable Computing', The Encyclopedia of Human-Computer Interaction, last accessed 2 March 2018. Available at: www.interaction-design.org/literature/book/the-encyclopedia-of-human-computer-interaction-2nd-ed.
94 'Soft Exosuits', Harvard BioDesign Lab, last accessed 2 March 2018. Available at: https://biodesign.seas.harvard.edu/soft-exosuits.
95 Tucker, Michael R., Jeremy Olivier, Anna Pagel, Hannes Bleuler, Mohamed Bouri, Olivier Lambercy, José del R. Millán, Robert Riener, Heike Vallery, and Roger Gassert. 'Control Strategies for Active Lower Extremity Prosthetics and Orthotics: A Review.' Journal of Neuroengineering and Rehabilitation 12, no 1 (2015): 1.
Embodiment is a particularly clear problem in relation to physical assistant robots such as exoskeletons. These have been used for lower limb rehabilitation, for gait rehabilitation of stroke patients, to help the mobility of persons with disability or older persons, or to support users with functional disorders in their legs.88,89,90 In essence, these robotic devices help users to walk, which does not just affect mobility but also has an impact on other essential physiological parameters.91 Such robots are envisaged as providing mobility support that might allow persons alternative forms of mobility outside of wheelchair use and also make them less dependent on human caregivers.92

From a data protection point of view, wearable exoskeletons are not just entities that provide physical support; like other wearable computing devices, they are body-borne computational and sensory devices that can collect a wide range of information from the user's body and from the user's environment.93 Exoskeletons are normally bulky, rigid and worn over clothing, although the latest research in soft materials and innovative textiles works towards making them more comfortable and unobtrusive.94 They are meant to work in 'seamless integration with the user's residual musculoskeletal system and sensory-motor control loops',95 in a way that intertwines robot design and human needs. To do so, a highly complex set of data relevant to this performance needs to be used. This data can be highly variable and depends on internal factors, such as age, the mental state of the person, physical strength, or any pathology that may indirectly be affecting the person's gait, as well as on factors external to the person, such as the quality of the surface or the lighting.96 Similar to the pre-existing maps used by autonomous cars, pre-existing gait patterns serve as reference trajectories for exoskeletons; such patterns are then combined with individual gait patterns and personalised data from the individual user in order to ensure a safe real-time performance tailored to individual needs.
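As a purely illustrative sketch of what combining a reference trajectory with personalised data can amount to in software, the toy example below scales a generic knee-angle reference pattern by user-specific calibration parameters and clamps the result to per-user safety limits. Real exoskeleton controllers are far more sophisticated; the parameter names and the simple blending rule here are assumptions made only to show the kind of personal, health-related data such adaptation relies on.

    # Toy illustration of personalising a reference gait trajectory (not a real controller).

    REFERENCE_KNEE_ANGLE = [0, 5, 15, 35, 55, 60, 40, 20, 8, 2]  # degrees over one gait cycle

    def personalised_trajectory(reference, amplitude_scale, max_safe_angle):
        """Scale a generic reference pattern to the user and clamp it to safety limits."""
        return [min(angle * amplitude_scale, max_safe_angle) for angle in reference]

    # Hypothetical per-user calibration data: exactly the kind of personal data whose
    # collection and portability this section discusses.
    user_profile = {
        "amplitude_scale": 0.85,   # derived from the user's own recorded gait
        "max_safe_angle": 50.0,    # limit set during clinical fitting
    }

    trajectory = personalised_trajectory(
        REFERENCE_KNEE_ANGLE,
        user_profile["amplitude_scale"],
        user_profile["max_safe_angle"],
    )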
The symbiotic nature of the interaction between the lower-limb exoskeleton and the individual user dramatically increases the dependence of the user on the robotic device, especially if the device facilitates the realisation of a highly valued function for the user. Such dependence could create anxiety when the device does not work, frustration when the manufacturer decides to stop producing it, as happened for instance with the iBot project,97 or when the user buys a new device and needs to engage once again in intensive training to adapt the device to their individual movement patterns. The right to data portability enshrined in Article 20 of the GDPR might serve as an effective remedy to this problem, as it could ensure that the data from a previous robot is not lost and could be implemented in a new robot. However, this requirement might imply significant changes to technical approaches to the design and development of such devices.

96 Rupal, Baltej Singh, Sajid Rafique, Ashish Singla, Ekta Singla, Magnus Isaksson, and Gurvinder Singh Virk. 'Lower-limb Exoskeletons: Research Trends and Regulatory Guidelines in Medical and Non-medical Applications.' International Journal of Advanced Robotic Systems 14, no 6 (2017).
97 'iBot', Wikipedia, last accessed 2 March 2018. Available at: https://en.wikipedia.org/wiki/IBOT#Production_Ends.
The right to data portability entitles data subjects to receive the personal data that they have provided to the controller in a structured, commonly used and machine-readable format. The Article gives users the right to transmit those data to another controller without hindrance from the controller to which the personal data have been provided. In the case of exoskeletons, for instance, the physical embodiment of the robot might play an important role, because the portability of such data will be limited by the physical characteristics and limitations of the particular devices in question. Gait patterns and other data that help configure the personal profile of the user might have to be translated to another device. The new exoskeleton might not include certain features of the previous robot or might be substantially physically different. As exoskeletons exert direct forces on the user, highly sensitive adaptation to the user's individual characteristics is required (eg for elderly or disabled users whose impairment might change over time). This means that while data portability might at first sight seem primarily a right centred on data protection, it is also highly safety-relevant, insofar as the safety of the user could be compromised if this portability is not implemented properly.
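To illustrate what receiving such data 'in a structured, commonly used and machine-readable format' might look like, the sketch below serialises a hypothetical exoskeleton user profile for export, separating data that plausibly travels with the user from calibration values tied to one specific device embodiment. The field names and the JSON structure are invented for illustration; no standard interchange format for exoskeleton data is implied.

    import json

    # Hypothetical Article 20 export: portable user data versus device-bound calibration.
    profile = {
        "portable": {                       # data the user could reuse on another device
            "user_id": "anon-4821",
            "leg_length_cm": 92.0,
            "preferred_cadence_steps_min": 96,
            "recorded_gait_cycles": "gait_cycles.csv",
        },
        "device_specific": {                # calibration tied to this embodiment;
            "device_model": "ExoModel-A",   # a different device may not be able to use it
            "joint_motor_offsets_deg": [1.2, -0.4, 0.7],
            "torque_calibration": [0.98, 1.03, 1.01],
        },
    }

    with open("portability_export.json", "w") as f:
        json.dump(profile, f, indent=2)  # structured, commonly used, machine-readable

The split between the two groups of fields is precisely where the embodiment problem arises: the portable part may be transmitted without hindrance, yet the receiving device may be unable to make safe use of it without renewed calibration.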
The safety relevance of portability extends to the security of processing mentioned in Article 32 of the GDPR, which pushes for the implementation of resilient measures to secure the processing of data. This provision should be read together with the concept of reversibility mentioned in the EP Resolution 2015/2103 (INL), which states that a reversibility model tells the robot which of its actions are reversible and how to reverse them if they are. The ability to undo the last action or a sequence of actions allows users to undo undesired actions and get back to the 'good' stage of their work.98 Although such measures may be implemented, the inextricable connection between data processing and the care-receiver's safety may suggest a need to revisit these provisions in the light of, for instance, a fall, as there may be no straightforward resilient or reversible measures once the care-receiver has suffered injury.
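A minimal sketch of how such a 'reversibility model' could be represented in software is given below: each robot action records whether it is reversible and, if so, how to undo it, so that undesired actions can be rolled back. This is a generic undo-stack pattern offered only as an illustration of the EP Resolution's concept; it is not drawn from any specific robot platform, and, as noted above, physical harms such as a fall have no software 'undo'.

    # Illustrative reversibility model: actions declare whether and how they can be undone.

    class Action:
        def __init__(self, name, execute, undo=None):
            self.name = name
            self.execute = execute
            self.undo = undo            # None means the action is irreversible

    history = []

    def perform(action):
        action.execute()
        history.append(action)

    def undo_last():
        """Roll back the most recent action if the reversibility model allows it."""
        if not history:
            return False
        action = history[-1]
        if action.undo is None:
            print(f"'{action.name}' is irreversible and cannot be undone")
            return False
        history.pop()
        action.undo()
        return True

    # Example: raising the bed head is reversible, dispensing medication is not.
    perform(Action("raise_bed_head",
                   execute=lambda: print("bed head raised"),
                   undo=lambda: print("bed head lowered")))
    perform(Action("dispense_medication",
                   execute=lambda: print("dose dispensed")))
    undo_last()   # reports that dispensing is irreversible; nothing is rolled back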
The cyber-physical nature of robot technology will force data controllers to take into account the specificities of the embodiment of the robot when developing interoperable formats that enable data portability (GDPR, recital 68). In the development of future standardised evaluation criteria and clear effectiveness and safety evaluations, which are currently still missing,99 the data portability requirement should be considered alongside other factors. This also suggests that, in the future, although a single impact assessment may apply (eg the data protection impact assessment enshrined in GDPR, Article 35), it might not fully mitigate the specific data-related risks robots may cause. A multi-factored impact assessment that identifies a range of different impacts of this technology, and carefully considers the links and dependencies between different factors, such as data protection compromising safety, could thereby help produce safer technology.100

98 European Parliament Resolution n 6.
99 See Tucker et al, n 95.
100 Fosch-Villaronga, E. 'Creation of a Care Robot Impact Assessment.' WASET, International Science Journal of Social, Behavioral, Educational, Economic and Management Engineering 9, no 6 (2015): 1817–1821.
101 Kristoffersson, Annica, Silvia Coradeschi, Amy Loutfi, and Kerstin Severinson-Eklundh. 'An Exploratory Study of Health Professionals' Attitudes about Robotic Telepresence Technology.' Journal of Technology in Human Services 29, no 4 (2011): 263–283.
3.5. Privacy and Robot Data Collection in the Workplace
As indicated in Section 3.2, the interplay of healthcare robot stakeholders is very complex: it includes private or public institutions, manufacturers, robot users, cloud services, third parties and many workers, including doctors, nurses and therapists, as well as other members of staff such as technicians and administrative personnel. Workers may be exposed to these technologies without having been consulted and may hold an attitude of suspicion towards them;101 the privacy and data protection rights of workers who are required to interact with robot technology during their professional duties may be compromised. Indeed, the use of robot technology opens the door to new and highly problematic privacy impacts, such as new forms of surveillance or the eventual data-driven displacement of workers. Common examples of robotisation in the healthcare work sector are exoskeletons to support staff who perform taxing physical tasks, such as lifting, or social robots working alongside humans and patients, both increasingly supported by artificial intelligence (AI) and deep learning technologies. While exoskeletons are meant to improve the working conditions of staff, social robots may be introduced as tools to be used by practitioners or therapists.

102 'Artificial Intelligence and Robotics and Their Impact on the Workplace', IBA Global Employment Institute, last modified April 2017. Available at: www.ibanet.org/Document/Default.aspx?DocumentUid=c06aa1a3-d355-4866-beda-9a3a8779ba6e.
103 Evelyn Theiss, 'Get Healthy or Pay Higher Insurance Rates, Cleveland Clinic Employees are Told', Cleveland, 12 February 2012. Available at: www.cleveland.com/healthfit/index.ssf/2012/02/join_or_pay_more_cleveland_cli.html.
104 See Fosch Villaronga n 16.
105 Chartrand, Gabriel, Phillip M. Cheng, Eugene Vorontsov, Michal Drozdzal, Simon Turcotte, Christopher J. Pal, Samuel Kadoury, and An Tang. 'Deep Learning: A Primer for Radiologists.' RadioGraphics 37, no 7 (2017): 2113–2131.
106 Lee, June-Goo, Sanghoon Jun, Young-Won Cho, Hyunna Lee, Guk Bae Kim, Joon Beom Seo, and Namkug Kim. 'Deep Learning in Medical Imaging: General Overview.' Korean Journal of Radiology 18, no 4 (2017): 570–584.
Due to the intrinsically symbiotic nature of exoskeletons, their use introduces safety and health concerns. For instance, exoskeletons can pose hazards to workers when transferring load from one part of the body to another. As robots may shift the worker's centre of gravity, the worker's balance may be compromised. However, they also raise privacy concerns insofar as the devices can collect large amounts of data regarding the performance of the worker, which is necessary to facilitate and optimise the performance of the device. This also entails comprehensive monitoring of the worker's activities: lifting, pushing, pulling, turning, throwing or catching. Activities that were not datafied before can now be quantified for efficiency purposes, whether for developing guidance on correct posture and usage, for identifying misuse of the device, or for identifying particularly high-performing workers (eg those who adapt more quickly to the device and perform better with it). Moreover, because these robotic devices are personalised by nature, the data collected from the worker may provide a comprehensive profile of worker characteristics, including health and personal information. This means that this technology is likely to push further the boundaries of the 'transparent employee',102 offering the possibility to monitor employees' behavioural and personal data and eventually perhaps even to link it to financial incentives or penalties via the same company or through insurance companies.103 For instance, discrimination on the grounds of weight is likely to arise, due to the fixed maximum weight requirements that such robotic devices frequently have.104
Robots are also increasingly combined with AI capabilities. Deep learning and AI can be used to assist health professionals by providing automated classification, detection, and image segmentation in ways humans could never do.105 Researchers assume that, especially in certain domains such as radiology, this can help professionals to perform more efficient screening procedures and to diagnose conditions and track their development over time.106 Although decision support systems that combine aggregated patient information have existed for a while, progress in this domain conveys the impression that for certain tasks humans will soon be outperformed by machines. The danger is that even in the domain of healthcare, which long seemed to be immune to automation, there is an increasing risk that professional tasks may become susceptible to computerisation or automation. Big data techniques could substitute non-routine cognitive tasks, and progress in robot dexterity could allow robots increasingly to perform manual tasks, leading to a deep transformation of healthcare workplaces.107,108,109 A large quantitative study on industrial robots and human replacement shows that there is a tendency towards worker replacement in industrial environments due to the productivity effect of robots; such developments are likely to apply to healthcare workplaces as well.110

107 Arntz, Melanie, Terry Gregory, and Ulrich Zierahn. 'The Risk of Automation for Jobs in OECD Countries: A Comparative Analysis.' OECD Social, Employment, and Migration Working Papers 189 (2016).
108 Frey, Carl Benedikt, and Michael A. Osborne. 'The Future of Employment: How Susceptible are Jobs to Computerisation?' Technological Forecasting and Social Change 114 (2017): 254–280.
109 Manyika, James, Susan Lund, Michael Chui, Jacques Bughin, Jonathan Woetzel, Parul Batra, Ryan Ko, and Saurabh Sanghvi. 'Jobs Lost, Jobs Gained: Workforce Transitions in a Time of Automation.' McKinsey Global Institute, December 2017.
110 Acemoglu, Daron, and Pascual Restrepo. 'Robots and Jobs: Evidence from US Labor Markets.' SSRN (2017).
111 See Frey and Osborne n 108.
112 'Will a Robot Take Your Job?' BBC, last accessed 2 March 2018. Available at: www.bbc.co.uk/news/technology-34066941.
When it comes to the healthcare sector, however, so far no comprehensive data has been collected on the effect of robot use on care worker replacement, most likely due to the early stage of robot use in healthcare settings. In 2015 the BBC released a software tool, based on research by Carl Frey and Michael Osborne,111 that shows the likelihood of automation of several types of work.112 Although this software was based on only one study, it gives some interesting indication of the likelihood of automation of several healthcare-related jobs. While medical practitioners and physiotherapists are said to run only a 2 or 2.1 per cent automation risk, other professions, namely dental nurses, nursery nurses or assistants, nursing auxiliary assistants, and care workers and home caregivers, are identified as having a much higher probability of replacement (60%, 55.7%, 46.8% and 39.9% respectively). Medical radiographers and dental technicians are thought to be at a 27.5 per cent risk of being automated, ophthalmic opticians at 13.7 per cent, whereas paramedics and speech and language therapists are given lower risk ratings of 4.9 per cent and 0.5 per cent respectively. Other related professions also face a high risk of being automated: for instance, healthcare practice managers and medical secretaries face an 85.1 per cent risk, and hospital porters a 57.3 per cent risk. However, at this point it is difficult to predict clearly which occupations and specific tasks are most likely to disappear or be replaced by machines, and how this will translate into jobs that will be lost. Hence, Frey and Osborne suggest that any such figures should be viewed with caution.
113 Yang, Guang-Zhong, Jim Bellingham, Pierre E. Dupont, Peer Fischer, Luciano Floridi, Robert Full, Neil Jacobstein et al. 'The Grand Challenges of Science Robotics.' Science Robotics 3, no 14 (2018): eaar7650.
114 Arntz, M., T. Gregory and U. Zierahn. 'The Risk of Automation for Jobs in OECD Countries: A Comparative Analysis.' OECD Social, Employment and Migration Working Papers, No 189 (Paris: OECD, 2016).
This nevertheless raises the question of how far a 'human in the loop' will remain desirable. Although data protection law refers to the right 'not to be subject to a decision based solely on automated processing [...] which produces legal effects concerning him or her' and the right to obtain 'human intervention on the part of the controller' (GDPR, Article 22), it is uncertain whether this will (and should) be enforced. Depending on technical advances, it is possible that referring to a human could endanger the rights of users, for example if humans turn out to be less safe or efficient at performing a task than the machine, or if the human referral process leads to unjustifiable delays. However, such automation may lead to the delegation of sensitive tasks and promote the de-responsibilisation of humans vis-à-vis machines.113
A larger question that draws on privacy considerations but has significance for wider workers' rights and employer responsibilities relates to what responsibility a hospital or care facility has with regard to the eventual displacement of workers if the employer introduces technologies in the workplace that ultimately contribute to the automation of some of the employees' tasks, while potentially relying on workers' data in the process. Although this problem might have to be addressed under national legislation (GDPR, Article 88), it suggests that workplaces will need to adjust and accommodate different adaptations to face the challenges of transformation brought about by robotic and AI innovations. This could be done in two ways.
Firstly, employers need to support their workforce by re-shaping training and introducing specific education initiatives on technological competences.114 This entails the redefinition of job descriptions, roles and responsibilities, as well as a need to clarify tasks in the daily routine and work methods, and to re-identify the risks faced by workers, whether junior or more senior. Additionally, and before implementing any robotic or automated system, the employer will need to perform a task analysis in order to re-assess skills by occupation, identify skill trends and look at deskilling effects.
Secondly, employers need to adapt the workplace environment, processes and communication to accommodate human-robot interaction in a collaborative manner. Knowing that healthcare workers are in constant dialogue and communication with other healthcare professionals and patients, the technology must be introduced in a way that allows workers to influence the changes positively and to determine how it will be used by them. For this new form of workplace collaboration, researchers recommend a proactive approach that integrates: (1) qualitative risk assessment; (2) adapting workplace strategies and refining requirements; (3) adopting an appropriate level of precaution; (4) ensuring global applicability; (5) promoting the ability to elicit voluntary cooperation by companies; and (6) ensuring stakeholder involvement.115

115 Murashov, Vladimir, Frank Hearl, and John Howard. 'Working Safely with Robot Workers: Recommendations for the New Workplace.' Journal of Occupational and Environmental Hygiene 13, no 3 (2016): D61–D71.
116 Decker, Michael, Martin Fischer, and Ingrid Ott. 'Service Robotics and Human Labor: A First Technology Assessment of Substitution and Cooperation.' Robotics and Autonomous Systems 87 (2017): 348–354.
117 Cavoukian, Ann. 'Privacy by Design [Leading Edge].' IEEE Technology and Society Magazine 31, no 4 (2012): 18–19.
The insertion of robot technologies in the healthcare workplace may impact the rights of workers. Working with robots and automated systems requires the provision of workers' consent to processes and operations that use personal data, in line with the data protection principles. Although the GDPR is not specific about this, such processing should be legitimate, and employers should explicitly and appropriately inform workers (the principle of transparency) about how their private information or knowledge will be used during and for their employment relationship (finality). Workers would need to agree in advance to working under conditions in which the employer might invade their privacy, drawing on the new individual rights under the GDPR. These issues could be dealt with in a collective agreement. If workers could be replaced by robots or AI technologies that have been fed with their data or built on their knowledge and savoir-faire, it appears justified to demand that employers take responsibility. The privacy of workers plays a meaningful role in the deployment of AI and robotics systems. As the workers whose data and knowledge inform the systems that are likely to replace them may not be in the same location or work for the same employer, a wider approach should be taken to reciprocity obligations and protections. In this respect, we claim that an international effort to tackle this problem should be promoted.116
3.6. Designing Privacy for Healthcare Robotics: Beyond
Simple Privacy by Design
The wide range of privacy concerns discussed in the previous sections raises particular challenges for the design process. While those in charge of the technical side of robot development tend to be familiar with the general idea of 'privacy by design',117 they often lack an understanding of exactly what types of concerns the protection of privacy should entail. In particular, there tends to be little understanding among technical staff of the requirements of the GDPR beyond ensuring data security. However, understanding the values that are at stake is necessary for the prospective design and implementation of robots that realise those values in healthcare contexts.
118 Friedman, Batya, Peter H. Kahn, Alan Borning, and Alina Huldtgren. 'Value Sensitive Design and Information Systems.' In Early Engagement and New Technologies: Opening Up the Laboratory (Dordrecht: Springer, 2013), 55–95.
119 Koops, Bert-Jaap, and Ronald Leenes. 'Privacy Regulation Cannot Be Hardcoded. A Critical Comment on the Privacy by Design Provision in Data Protection Law.' International Review of Law, Computers & Technology 28, no 2 (2014): 159–171.
120 Van Wynsberghe, Aimee. 'Designing Robots for Care: Care-centered Value-sensitive Design.' Science and Engineering Ethics 19, no 2 (2013): 407–433.
121 Van Wynsberghe, Aimee. 'A Method for Integrating Ethics into the Design of Robots.' Industrial Robot: An International Journal 40, no 5 (2013): 433–440.
122 Van Wynsberghe, Aimee. Healthcare Robots: Ethics, Design and Implementation (Abingdon: Routledge, 2016).
The design of privacy-preserving healthcare robots requires an integrated approach that ensures that the many facets of privacy are understood by developers and translated adequately into technical specifications. This includes clarifying the requirements of the GDPR for healthcare robotics, as well as doing justice to wider privacy concerns that go beyond data protection. According to Friedman et al's 'value sensitive design'118 approach, an integrated approach to design should combine conceptual analysis of privacy concerns (including their relation to other relevant values), stakeholder perspectives on privacy in this particular field of application, and reflections on the technical realisability of such considerations.
Value-sensitive design is a methodology that tries to unearth the values at stake through a comprehensive process that starts with an analysis of who the stakeholders are and what their needs, values and concerns are with regard to a specific problem. Developing a comprehensive conceptual understanding of these needs and values, communicating about them with designers, and helping designers understand what those values mean in practice is meant to enable them to translate those values into specific design requirements, although some authors contest these assumptions.119 To illustrate such an analysis, Van Wynsberghe uses the idea of a lifting robot to show that two different designs manifest different sets of values. The first robot she discusses is a robot which has a torso, two arms, and a face. This robot can lift the patient much like a human would. The problem, according to Van Wynsberghe, is that when a nurse or other member of hospital staff lifted the patient, this helped to manifest the care values (eg of interpersonal attentiveness and reciprocity). Without a human being there, these values cannot be realised. However, a different design, in which the nurse wears a robotic exoskeleton, solves the problem of enabling a nurse to lift a patient (maximising the care value of competence) whilst also not diminishing the values of attentiveness and reciprocity.120,121,122
With regard to the values of data protection and privacy, it is essential for designers to understand that there is not just one but many different design possibilities, due to significant case-by-case differences in technologies and application contexts. Robots are often made for use in a variety of contexts, and they come with a variety of sensors and capabilities. In some settings, a subset of capabilities may need to be disabled to ensure the realisation of important values in that context. For example, many contemporary robots are equipped with cameras and microphones for navigation and interaction, which, however, in certain healthcare contexts may invade a patient's privacy. For instance, a robot with the task of cleaning bedpans should not have a microphone that listens in on patients, nor cameras that record its environment.123 Thinking during the design process about the tasks that the robot will or can realise may inform the value modelling and may lead to the conclusion that there are tasks that should be considered 'off-limits'. This might include tasks that interfere with important values such as privacy, or tasks that involve complex capabilities, such as moral responsibility, which robots do not currently have (but may have at some point in the future).124
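One simple way to express the idea that certain capabilities should be disabled in certain contexts is a declarative capability policy that is checked before a sensor is activated. The contexts, sensor names and policy below are invented for illustration; they merely show how 'off-limits' sensing modes could be made explicit at design time rather than left to ad hoc decisions.

    # Hypothetical capability policy: which sensors a robot may use in which care context.

    CAPABILITY_POLICY = {
        "bedpan_cleaning": {"camera": False, "microphone": False, "lidar": True},
        "corridor_navigation": {"camera": True, "microphone": False, "lidar": True},
        "conversational_support": {"camera": False, "microphone": True, "lidar": True},
    }

    def sensor_allowed(context: str, sensor: str) -> bool:
        """Deny by default: a sensor is enabled only if the context explicitly permits it."""
        return CAPABILITY_POLICY.get(context, {}).get(sensor, False)

    # Example: a bedpan-cleaning robot should neither listen nor film.
    assert not sensor_allowed("bedpan_cleaning", "microphone")
    assert sensor_allowed("corridor_navigation", "camera")

A deny-by-default rule of this kind is one concrete way of operationalising privacy by design and by default, although, as the following paragraph stresses, such technical measures alone cannot anticipate every unintended consequence.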
There is an urgent need to put both technical and organisational measures in place to ensure appropriate consideration of privacy values in the design process. However, even if such measures are in place, substantive difficulties with designing and implementing for values would remain, especially with regard to foreseeing unintended consequences or possible future problems. Understanding whether values such as security, privacy and transparency will be diminished or enhanced in a particular context through the use of a particular robotic platform is highly complex, and such predictions are uncertain. Such assessment requires the combination and integration of a range of specialised expertise which no single profession can claim to have by itself. It requires input from technical people who know what a robot's capabilities are, from ethicists who understand what it means to realise ethical values, and from lawyers who know which legal requirements need to be followed.125,126 Therefore the use of multidisciplinary test beds or living labs should be regarded as a conditio sine qua non for the realisation of truly safe and ethical robots.127

123 Will Knight, 'The Roomba Now Sees and Maps a Home', MIT Technology Review, 16 September 2015. Available at: www.technologyreview.com/s/541326/the-roomba-now-sees-and-maps-a-home/.
124 Bryson, Joanna J., Mihailis E. Diamantis, and Thomas D. Grant. 'Of, For, and By the People: The Legal Lacuna of Synthetic Persons.' Artificial Intelligence and Law 25, no 3 (2017): 273–291.
125 van Gorp, Anke, and S. van der Molen. 'Parallel, Embedded or Just Part of the Team: Ethicists Cooperating within a European Security Research Project.' Science and Engineering Ethics 17, no 1 (2011): 31–43.
126 Van Wynsberghe, Aimee, and Scott Robbins. 'Ethicist as Designer: A Pragmatic Approach to Ethics in the Lab.' Science and Engineering Ethics 20, no 4 (2014): 947–961.
127 Fosch Villaronga, n 16.
4. Conclusion
Although it brings a wide range of benefits and possibilities, the growing presence of robots in our daily lives raises ethical, legal and societal concerns for experts, industry, workers and the general public. This chapter focused in particular on six challenging areas of concern where the use of robots raises data protection and privacy issues. We argue that the insertion of robots in the healthcare sector raises particularly challenging questions because, in healthcare, technological innovation such as robots both supports and challenges the realisation of the duty of care for vulnerable members of society. Because this element of care materialises via complex relations between healthcare institutions, doctors, medical and non-medical personnel, patients and relatives, and designers, this chapter has identified and reflected upon some of the less explored consequences for privacy and data protection arising within this network of stakeholders.
In the chapter, we acknowledge that SARs operate both as data-processing devices, governed by data governance schemes, and as healthcare aids, governed by professional norms. This means they will have to comply with the GDPR as well as with medical law and the medical ethics tradition. We also argue that the traditional elements of trust characterising the doctor-patient relationship are likely to undergo a shift with the insertion of robots into the care setting, as patients are often more prone to disclose information to robots, while robots are often not correctly identified as information-processing devices. From the perspective of confidentiality, we highlight that it is important to craft the purpose of the robot carefully, and to be transparent about its capabilities and functioning, in order to avoid undesirable privacy impacts. Risks might also derive from the unwitting disclosure of information through overheard conversations with robots, and from unconsented secondary uses of patient or worker data collected by the robot. In this respect, awareness mechanisms should be put in place to reduce privacy violations for those interacting socially with the robot.
Among the critical points emerging from this analysis, we give reasons to consider privacy-related aspects when designing robots that will interact with many types of users. Ensuring compliance with the GDPR, whose provisions are vague and technology-neutral, can pose a challenge for the realisation and implementation of robots that will have substantial variation in purposes, uses and embodiments. In particular, we highlight that structural differences among models and embodiments might impede full portability of the data collected. In this respect, we assert that the dual nature of robots, cyber and physical, needs to be integrated as a fundamental part of any future regulatory instrument governing robot technology, which requires going substantially beyond general data protection requirements in the regulation of robots.
In addition, we raise the question of the privacy and data protection implications of robots with conversational capabilities. We argue that conversational HRI might have unexpected consequences due to the way in which robotic technology mediates the perception and construction of reality for patients and workers. While greater transparency and clarity can be achieved through conversational interfaces applied to privacy and data protection settings, this may affect the construction of the notion of privacy. This could, in turn, affect the correct implementation of the GDPR. We also identify additional issues with regard to workers in the healthcare sector, who may be required to cooperate and interact with robots, which may affect their privacy in various ways. Appropriate training, adjustments in the work environment, as well as appropriate guidelines on the application of the GDPR for healthcare workers appear necessary for the future.
Based on the issues identified, a comprehensive multi-impact assessment is desirable to identify not only the range of different impacts of this technology, but also the links and dependencies between different factors and stakeholders, in order to implement the GDPR and other regulations in a way that captures these complexities. Accordingly, we suggest that the endorsement of a wider value-sensitive approach to designing for privacy, one that does justice to a wider range of privacy values, is preferable to a design approach focused primarily on data protection concerns. Achieving such an approach requires multidisciplinary collaboration of technologists, ethicists and lawyers, among others, and the willingness to give a voice to all relevant stakeholders, especially those who will be most directly affected by the use of robots in healthcare.