Security and Privacy in the Metaverse: The Threat of the Digital Human
Lauren Buck
Trinity College Dublin
Dublin, Ireland
lauren.e.buck.12@gmail.com
Rachel McDonnell
Trinity College Dublin
Dublin, Ireland
ramcdonn@tcd.ie
ABSTRACT
Each year, researchers and technologists are bringing the vision of the Metaverse, which is predicted to be the future of the internet, closer to becoming a reality. People will spend most of their time in this space interacting face-to-face, so to speak, with highly customizable digital avatars that seamlessly convey precise non-verbal cues from the physical movements of the users themselves. This is an exciting prospect; however, many privacy and security concerns arise from this new form of interaction. Precision motion tracking is required to drive high-fidelity animation, and this affords a mass of data that has never been available before. This data provides a wealth of physical and psychological information that can reveal medical conditions, mental disorders, personality, emotion, personal identity, and more. In this paper, we discuss some implications of the availability of this data, with a focus on the psychological manipulation and coercion capabilities made available by it.
CCS CONCEPTS
• Security and privacy → Social aspects of security and privacy; • Human-centered computing → Collaborative and social computing theory, concepts and paradigms.
KEYWORDS
virtual reality, personal data collection and use, shared virtual environments, avatars, agents, biometric data, machine learning, artificial intelligence
ACM Reference Format:
Lauren Buck and Rachel McDonnell. 2022. Security and Privacy in the Metaverse: The Threat of the Digital Human. In Proceedings of CHI Conference on Human Factors in Computing Systems (CHI EA '22, Proceedings of the 1st Workshop on Novel Challenges of Safety, Security and Privacy in Extended Reality). ACM, New York, NY, USA, 4 pages.
1 INTRODUCTION
We live in an age in which personal data has been considered by some to be more valuable than oil [2]. Tech companies deliberately design applications to be addictive and mine user data to create personalized ad experiences that generate revenue. Meta reported US$33bn in revenue in the fourth quarter of 2021 alone [1], and ByteDance (TikTok) is currently valued at around US$400bn as it reigns as the top-grossing social media app of 2021 [6]. As individuals and governments grapple with the impact of personal data collection and with how to protect individual users from big tech, small pockets of online scammers vie for the same data. Romance scams, in which individuals are lured into fake relationships that often end in manipulation and theft, occur frequently online, and infamous 'rug pull' scenarios involving cryptocurrencies have raked in over US$2.8bn in the last year alone [16]. Protecting the individual has never been more important as new technologies evolve and create new hosts of problems to resolve. In this paper, we consider some issues that are likely to become common and complex as a result of the widespread adoption of virtual reality (VR) technology.

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
CHI EA '22, Proceedings of the 1st Workshop on Novel Challenges of Safety, Security and Privacy in Extended Reality, April 29 - May 5, 2022, New Orleans, LA, USA
© 2022 Copyright held by the owner/author(s).
VR is a unique technological medium, set apart from other media by one defining attribute: 3D interaction. VR environments emulate real-world perception and allow users to move through space and interact with objects and people as they would naturally. This capability is driven by positional tracking, which calculates the precise position of a head-mounted display, controllers, and other trackers attached to the body within Euclidean space. This tracking captures the motions of users and allows for the embodiment of bodily self-representations (i.e., avatars), which are an essential component of the VR experience. These capabilities alone introduce a myriad of privacy and security issues that require attention, especially considering the growing body of knowledge about data extraction and psychological influence arising from the use of VR in experimental settings alone.
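To make the scale of this collection concrete, a tracking stream can be modeled as a sequence of timestamped poses. The sketch below is purely illustrative: the schema, field names, and 90 Hz sampling rate are our own assumptions, not taken from any particular headset SDK.

```python
from dataclasses import dataclass

@dataclass
class PoseSample:
    """One tracking frame from an HMD or controller (hypothetical schema)."""
    t: float          # timestamp in seconds
    position: tuple   # (x, y, z) in metres, Euclidean tracking space
    rotation: tuple   # (w, x, y, z) unit quaternion

def stream_duration(samples):
    """Capture length in seconds (samples assumed sorted by timestamp)."""
    return samples[-1].t - samples[0].t

# At a typical 90 Hz refresh rate, one minute of head tracking alone
# yields 5,400 pose samples, before adding controllers or body trackers.
samples = [PoseSample(i / 90.0, (0.0, 1.7, 0.0), (1.0, 0.0, 0.0, 0.0))
           for i in range(90 * 60)]
```

Even this minimal schema makes the privacy stakes apparent: every session leaves behind a dense, per-frame trace of the user's body in space.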
The existing catalog of literature on privacy and security in VR mainly focuses on motion tracking (gait analysis, eye tracking, general bodily motion), and there is increasing awareness that this data provides personally identifiable information. A striking study carried out by Mark Miller and colleagues reports that five minutes of motion tracking data gathered only from a head-mounted display during a typical viewing task is enough to identify a user with 95% accuracy [25]. This type of data has the potential to reveal information about users that reflects mental state and medical status. Buck and Bodenheimer note that tracking a user's personal space representation can reveal social preferences and disorders such as social anxiety [5]. Different types of detected body motion can predict levels of creativity [38] and learning [37], along with medical conditions such as autism, ADHD, and PTSD [25]. Diane Hosfelt, among others, has warned that the misuse and abuse of this data can produce life-altering consequences [15, 28].
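The identification threat can be sketched in a few lines. The toy example below is ours, not the method of Miller et al.: it enrols each user under a crude behavioural signature (mean head height and its jitter) and matches a new capture to the nearest enrolled signature.

```python
import statistics

def motion_features(head_heights_m):
    """Crude behavioural signature: mean head height and its jitter."""
    return (statistics.mean(head_heights_m), statistics.pstdev(head_heights_m))

def identify(query, enrolled):
    """Return the enrolled user whose signature is closest to the query."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(enrolled, key=lambda uid: sq_dist(enrolled[uid], query))

# Hypothetical enrolment data: per-user head-height traces in metres.
enrolled = {
    "alice": motion_features([1.62, 1.63, 1.61, 1.62]),
    "bob":   motion_features([1.80, 1.78, 1.81, 1.79]),
}
print(identify(motion_features([1.63, 1.62, 1.62, 1.61]), enrolled))  # prints "alice"
```

Real attacks use far richer features and learned models, but the principle is the same: motion data is a biometric, and a few summary statistics already separate users.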
However, even the safety and privacy issues of VR that we as a community are cognizant of remain a gray area. In this work, we hope to bring to light one pocket of issues directly related to the psychological manipulation of users by digital humans based on biometric data collection. To our knowledge, little has been published on this issue.
2 THE DANGERS OF THE DIGITAL HUMAN
It is no secret that VR experiences can be psychologically compelling, and Mel Slater and his colleagues have published an abundance of research attesting to this: men report feelings of empathy toward women after experiencing sexual harassment in a woman's body [27], observing a virtual body-double of oneself interacting with a crowd can reduce self-persecutory thoughts [13], and putatively stressful simulations can produce both physiological and psychological responses [22]. These simulations are so impactful because VR users can embody virtual self-representations. The embodiment of a virtual character is a commonplace phenomenon in VR [20] and increases the perceived plausibility of the simulation [32]. These graphical self-representations, or self-avatars, do not have to match the physicality of their users in order to elicit the sensation of embodiment [12], and virtual characters are not bound by physical constraints.
Avatar mutability matters because humans are naturally prone to making judgments based on physical appearance. Even in early 2D games with elementary graphics, the way users interacted with one another was shown to depend heavily on looks [10, 30], and it is clear that attractiveness dictates how VR users perceive themselves and others and the behaviors they choose to carry out. Avatars have already been posited as potential salespeople [17, 26] whose appearance can be persuasive enough to influence decision-making [9, 19] and, when embodied, can embolden users to engage in risk-taking behaviors [23]. VR gives us the flexibility to be digital chameleons, and it is easy to see that this ability, in the hands of those with malicious intent, will drive us toward the dystopian vision of the future with which we are all becoming increasingly familiar.
This is where biometric data comes into play: the appearance of the computer- or human-driven agents and avatars that users interact with can be adapted to user preferences based on data the user is unaware they have shared. Depending on the technology a system is using, eye tracking can provide pupil dilation and gaze fixation data in response to visual stimuli [36], motion data can provide proxemic behavior and body language cues [5, 8], and physiological data like EEG and skin conductance can provide levels of emotional activation [33]. Facial tracking can additionally provide a window into emotional response [3]. These non-verbal cues can be fed into clever machine learning and artificial intelligence algorithms to create personalized, idealized interaction partners.
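As an illustration of how such cues could drive personalization, the toy pipeline below fuses two physiological channels into an arousal score and selects an agent variant accordingly. The normalisation ranges, band thresholds, and variant names are invented for the example; a real system would use learned models rather than fixed rules.

```python
def arousal_score(pupil_dilation_mm, skin_conductance_us):
    """Toy fusion of two physiological channels into a [0, 1] score.
    Normalisation ranges are illustrative, not clinically grounded."""
    p = min(max((pupil_dilation_mm - 2.0) / 6.0, 0.0), 1.0)   # ~2-8 mm
    s = min(max(skin_conductance_us / 20.0, 0.0), 1.0)        # ~0-20 microsiemens
    return 0.5 * p + 0.5 * s

def adapt_agent(score, variants):
    """Pick the agent variant whose arousal band contains the score."""
    for lo, hi, variant in variants:
        if lo <= score < hi:
            return variant
    return variants[-1][2]

# Hypothetical variant bands an adaptive system might choose between.
variants = [(0.00, 0.33, "calm"),
            (0.33, 0.66, "neutral"),
            (0.66, 1.01, "energetic")]
```

The ethical problem is visible even at this scale: the user never consented to their pupil diameter steering the conversation partner they face.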
Besides outward appearance, both voice and motion can be manipulated to be appealing to users. Software can manipulate vocal tone, pitch, and amplitude, allowing users to change their voice from male to female and vice versa. Attractive voices that are smooth in texture and similar in pitch and timbre can be created easily via auditory morphing [4], and vocal cloning software can mimic the sound of a particular person's voice. Motion data can give way to an expanse of physical and psychological information, as discussed in the introduction. Vocal expression in virtual characters has already been shown to impact social influence and attraction [34], and motion data is rich with cues that can be categorized into levels of attractiveness, which influence interaction behaviors [40].
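A rough sketch of the simplest form of voice manipulation mentioned above is pitch shifting by resampling. This naive approach changes duration along with pitch; real voice changers use more sophisticated techniques (e.g., phase vocoders or PSOLA), so treat this only as a minimal demonstration of the principle.

```python
import math

def sine(freq_hz, seconds, rate=16000):
    """Pure tone as a list of float samples."""
    n = int(seconds * rate)
    return [math.sin(2.0 * math.pi * freq_hz * i / rate) for i in range(n)]

def pitch_shift(samples, ratio):
    """Naive pitch shift by linear-interpolation resampling.
    ratio > 1 raises the pitch (and shortens the clip)."""
    out = []
    pos = 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        out.append(samples[i] * (1.0 - frac) + samples[i + 1] * frac)
        pos += ratio
    return out

def zero_crossings(samples):
    """Count upward zero crossings (one per cycle of a tone)."""
    return sum(1 for a, b in zip(samples, samples[1:]) if a < 0.0 <= b)

low = sine(220.0, 0.5)        # 220 Hz tone, half a second
high = pitch_shift(low, 1.5)  # roughly 330 Hz after resampling
```

Counting zero crossings per second confirms the shift: the resampled clip packs the same number of cycles into a shorter duration, raising the perceived pitch.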
These interaction partners may not only be designed to be attractive to users, but may understand users deeply on both a physical and psychological level. The personality of an agent could be adapted to be most likeable to the personality type the user exhibits [21]. A more advanced iteration of artificial intelligence, or a human driving an avatar, could perhaps detect and empathize with medical and mental conditions to create a sense of closeness and trust with a user. There are many applications of this psychological information, with both benevolent and malicious intent.
Herein lies our danger: how far are we willing to take these technological capabilities? Generative adversarial networks (GANs) have already been leveraged to create human likenesses from scratch [18], and deepfakes generate video and audio of scenes that never happened in real life. Adapting agent and avatar appearance, personality, and interaction in order to sell products to users is not a far reach: Amazon populates recommended items based on shopping habits, and social media sites use machine learning techniques to generate personalised ads. Nor is it far-fetched to think that avatars may be used for political duress, as some social media sites are notorious for serving politically polarizing content to users and manipulating emotions to increase engagement, and radical groups have been known to recruit impressionable young people through online tactics. Online gambling sites take advantage of those suffering with gambling addictions, and VR is already considered to promote high-risk gambling behaviors [29]; virtual influencers, meanwhile, are already materializing [35]. Could not a strategically placed agent persuade a user to engage in high-risk behavior? Mental and physical traits extracted from biometric data could be exploited to coerce users into situations and behaviors they would otherwise refuse to engage in.
Particular attention needs to be paid to the potential for users to be manipulated not just by businesses and institutions, but by other individuals. It would be quite easy for a number of existing internet scams to spill into the Metaverse, and for their impact to be even more psychologically devastating because of the immersive nature of VR. Extortion and bullying could put users in more personal, compromising situations, and particular concern surrounds how children, who are especially impressionable, will be protected. Online predators will be handed a whole new toolkit of coercive measures with the availability of more natural interaction. Finally, cyberattacks could expose sensitive biometric data to be sold on the dark web, which would be devastating to one's personal privacy.
Fortunately, there are many things that can be done to combat the misuse of biometric data before it begins, and we are all called to make positive contributions in this space. Power should be given to the user: people should be made aware of and educated about the implications of biometric data coupled with VR, and should be given the option to opt out of this type of data collection. Developers can implement cybersecurity protocols and can also choose to introduce noise into this type of data in order to generalize it and prevent it from revealing personally identifiable information. Finally, legislators can introduce laws that prevent businesses and individuals from collecting and using this data with malicious intent.
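One concrete way developers could realize the noise-injection idea above is the Laplace mechanism from differential privacy. The sketch below adds calibrated noise to a stream of tracking values; the sensitivity and epsilon figures are illustrative placeholders, and a deployed system would need careful per-application calibration.

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) via the inverse-CDF transform."""
    u = rng.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def privatise(values, sensitivity, epsilon, seed=0):
    """Add Laplace(sensitivity / epsilon) noise to each tracking value,
    trading per-sample accuracy for resistance to re-identification."""
    rng = random.Random(seed)
    scale = sensitivity / epsilon
    return [v + laplace_noise(scale, rng) for v in values]
```

Because the noise is zero-mean, aggregate statistics needed for rendering and analytics largely survive, while no individual frame reveals the exact measurement.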
Academics are called to make an impact through ongoing research that helps understand and mitigate the known and unknown psychological problems that will arise in the Metaverse. Potential research avenues include continuing to understand how users can be manipulated in advertisement scenarios [14, 24], which physical properties of agents and avatars are likely to have psychological influence over users [39], how risky behaviors translate from real to virtual scenarios [7], and the overall psychological impact of digital interaction in the Metaverse as it carries over into the daily lives of users (consider how augmented images affect self-esteem [11, 31]). There are many positive impacts that interaction with digital humans can have, and it is up to us to bring about an ethical iteration of the Metaverse.
3 CONCLUSIONS
Widespread adoption of the Metaverse comes with many unique
threats to user privacy and security, some of which we have broached
in this work with regard to digital humans. Biometric data reveals
a host of personally identifiable information which can in turn
be used to potentially manipulate users on a psychological level
through the creation of avatars that are adaptable to user prefer-
ences. With respect to security and privacy issues, the VR commu-
nity is in the midst of the Collingridge Dilemma; it is faced with the
responsibility of understanding the potential risks that the Meta-
verse poses to the individual and mitigating those problems before
it is too late. In the grand scheme of things, the digital human is
something amazing and fearsome, and an aspect of VR that is not
to be considered lightly.
ACKNOWLEDGMENTS
This research was funded by Science Foundation Ireland under
the ADAPT Centre for Digital Content Technology (Grant No.
13/RC/2106_P2) and RADICal (Grant No. 19/FFP/6409).
REFERENCES
[1] 2021. Facebook Reports Third Quarter 2021 Results. https://investor.fb.com/investor-news/press-release-details/2022/Meta-Reports-Fourth-Quarter-and-Full-Year-2021-Results/default.aspx. Accessed: 2022-02-21.
[2] Anonymous. 2017. The world's most valuable resource is no longer oil, but data. The Economist (2017). https://www.economist.com/leaders/2017/05/06/the-worlds-most-valuable-resource-is-no-longer-oil-but-data Accessed: 2022-02-01.
[3] Jeremy N Bailenson, Emmanuel D Pontikakis, Iris B Mauss, James J Gross, Maria E Jabon, Cendri AC Hutcherson, Clifford Nass, and Oliver John. 2008. Real-time classification of evoked emotions using facial feature tracking and physiological responses. International Journal of Human-Computer Studies 66, 5 (2008), 303–317.
[4] Laetitia Bruckert, Patricia Bestelmeyer, Marianne Latinus, Julien Rouger, Ian Charest, Guillaume A Rousselet, Hideki Kawahara, and Pascal Belin. 2010. Vocal attractiveness increases by averaging. Current Biology 20, 2 (2010), 116–120.
[5] Lauren E Buck and Bobby Bodenheimer. 2021. Privacy and Personal Space: Addressing Interactions and Interaction Data as a Privacy Concern. In 2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW). IEEE, 399–400.
[6] David Curry. 2022. Top Grossing Apps (2022). Business of Apps (2022). https://www.businessofapps.com/data/top-grossing-apps/ Accessed: 2022-02-01.
[7] Carla de Juan-Ripoll, José L Soler-Domínguez, Jaime Guixeres, Manuel Contero, Noemi Álvarez Gutiérrez, and Mariano Alcañiz. 2018. Virtual reality as a new approach for risk taking assessment. Frontiers in Psychology (2018), 2532.
[8] Julius Fast. 1970. Body Language. Vol. 82348. Simon and Schuster.
[9] Ylva Ferstl, Elena Kokkinara, and Rachel McDonnell. 2017. Facial features of non-player creatures can influence moral decisions in video games. ACM Transactions on Applied Perception (TAP) 15, 1 (2017), 1–12.
[10] Ylva Ferstl, Michael McKay, and Rachel McDonnell. 2021. Facial feature manipulation for trait portrayal in realistic and cartoon-rendered characters. ACM Transactions on Applied Perception (TAP) 18, 4 (2021), 1–8.
[11] Rebecca Fribourg, Etienne Peillard, and Rachel McDonnell. 2021. Mirror, Mirror on My Phone: Investigating Dimensions of Self-Face Perception Induced by Augmented Reality Filters. In 2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). IEEE, 470–478.
[12] Mar Gonzalez-Franco and Tabitha C Peck. 2018. Avatar embodiment. Towards a standardized questionnaire. Frontiers in Robotics and AI 5 (2018), 74.
[13] Geoffrey Gorisse, Gizem Senel, Domna Banakou, Alejandro Beacco, Ramon Oliva, Daniel Freeman, and Mel Slater. 2021. Self-observation of a virtual body-double engaged in social interaction reduces persecutory thoughts. Scientific Reports 11, 1 (2021), 1–13.
[14] Brittan Heller and Avi Bar-Zeev. 2021. The Problems with Immersive Advertising: In AR/VR, Nobody Knows You Are an Ad. Journal of Online Trust and Safety 1, 1 (2021).
[15] Diane Hosfelt. 2019. Making ethical decisions for the immersive web. arXiv preprint arXiv:1905.06995 (2019).
[16] Stephanie Hughes. 2022. Rug-pull scams raked in over US$2.8 billion in crypto in 2021, report finds. Financial Post (2022). https://financialpost.com/fp-finance/cryptocurrency/rug-pull-scams-raked-in-over-us2-8-billion-in-crypto-in-2021-report-finds Accessed: 2022-02-01.
[17] Seung-A Annie Jin and Justin Bolebruch. 2009. Avatar-based advertising in Second Life: The role of presence and attractiveness of virtual spokespersons. Journal of Interactive Advertising 10, 1 (2009), 51–60.
[18] Tero Karras, Samuli Laine, and Timo Aila. 2019. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 4401–4410.
[19] Rabia Fatima Khan and Alistair Sutcliffe. 2014. Attractive agents are more persuasive. International Journal of Human-Computer Interaction 30, 2 (2014), 142–150.
[20] Konstantina Kilteni, Raphaela Groten, and Mel Slater. 2012. The sense of embodiment in virtual reality. Presence: Teleoperators and Virtual Environments 21, 4 (2012), 373–387.
[21] Tze Wei Liew and Su-Mae Tan. 2016. Virtual agents with personality: Adaptation of learner-agent personality in a virtual learning environment. In 2016 Eleventh International Conference on Digital Information Management (ICDIM). IEEE, 157–162.
[22] Marieke AG Martens, Angus Antley, Daniel Freeman, Mel Slater, Paul J Harrison, and Elizabeth M Tunbridge. 2019. It feels real: physiological responses to a stressful virtual reality environment and its impact on working memory. Journal of Psychopharmacology 33, 10 (2019), 1264–1273.
[23] Paul R Messinger, Xin Ge, Eleni Stroulia, Kelly Lyons, Kristen Smirnov, and Michael Bone. 2008. On the relationship between my avatar and myself. Journal For Virtual Worlds Research 1, 2 (2008).
[24] Abraham Hani Mhaidli and Florian Schaub. 2021. Identifying manipulative advertising techniques in XR through scenario construction. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–18.
[25] Mark Roman Miller, Fernanda Herrera, Hanseul Jun, James A Landay, and Jeremy N Bailenson. 2020. Personal identifiability of user tracking data during observation of 360-degree VR video. Scientific Reports 10, 1 (2020), 1–10.
[26] Ian Mull, Jamie Wyss, Eunjung Moon, and Seung-Eun Lee. 2015. An exploratory study of using 3D avatars as online salespeople: The effect of avatar type on credibility, homophily, attractiveness and intention to interact. Journal of Fashion Marketing and Management (2015).
[27] Solène Neyret, Xavi Navarro, Alejandro Beacco, Ramon Oliva, Pierre Bourdin, Jose Valenzuela, Itxaso Barberia, and Mel Slater. 2020. An embodied perspective as a victim of sexual harassment in virtual reality reduces action conformity in a later Milgram obedience scenario. Scientific Reports 10, 1 (2020), 1–18.
[28] Mónika Nogel, Gábor Kovács, and György Wersényi. 2021. The Regulation of Digital Reality in Nutshell. In 12th IEEE International Conference on Cognitive Infocommunications (CogInfoCom). 1–7.
[29] Sebastian Oberdörfer, David Schraudt, and Marc Erich Latoschik. 2022. Embodied Gambling: Investigating the Influence of Level of Embodiment, Avatar Appearance, and Virtual Environment Design on an Online VR Slot Machine. Frontiers in Virtual Reality (2022), 8.
[30] Connor P Principe and Judith H Langlois. 2013. Children and adults use attractiveness as a social cue in real people and avatars. Journal of Experimental Child Psychology 115, 3 (2013), 590–597.
[31] Susruthi Rajanala, Mayra BC Maymone, and Neelam A Vashi. 2018. Selfies—living in the era of filtered photographs. JAMA Facial Plastic Surgery 20, 6 (2018), 443–444.
[32] Mel Slater. 2009. Place illusion and plausibility can lead to realistic behaviour in immersive virtual environments. Philosophical Transactions of the Royal Society B: Biological Sciences 364, 1535 (2009), 3549–3557.
[33] Feng Tian, Minlei Hua, Wenrui Zhang, Yingjie Li, and Xiaoli Yang. 2021. Emotional arousal in 2D versus 3D virtual reality environments. PLOS ONE 16, 9 (2021), e0256211.
[34] Ilaria Torre, Emma Carrigan, Katarina Domijan, Rachel McDonnell, and Naomi Harte. 2021. The Effect of Audio-Visual Smiles on Social Influence in a Cooperative Human–Agent Interaction Task. ACM Transactions on Computer-Human Interaction (TOCHI) 28, 6 (2021), 1–38.
[35] Christopher Travers. 2022. Virtual Humans. https://www.virtualhumans.org/
[36] Joseph Tao-yi Wang. 2011. Pupil dilation and eye tracking. A Handbook of Process Tracing Methods for Decision Research: A Critical Review and User's Guide (2011), 185–204.
[37] Andrea Stevenson Won, Jeremy N Bailenson, and Joris H Janssen. 2014. Automatic detection of nonverbal behavior predicts learning in dyadic interactions. IEEE Transactions on Affective Computing 5, 2 (2014), 112–125.
[38] Andrea Stevenson Won, Jeremy N Bailenson, Suzanne C Stathatos, and Wenqing Dai. 2014. Automatically detected nonverbal behavior predicts creativity in collaborating dyads. Journal of Nonverbal Behavior 38, 3 (2014), 389–408.
[39] Nick Yee and Jeremy Bailenson. 2007. The Proteus effect: The effect of transformed self-representation on behavior. Human Communication Research 33, 3 (2007), 271–290.
[40] Katja Zibrek, Benjamin Niay, Anne-Hélène Olivier, Ludovic Hoyet, Julien Pettré, and Rachel McDonnell. 2020. The effect of gender and attractiveness of motion on proximity in virtual reality. ACM Transactions on Applied Perception (TAP) 17, 4 (2020), 1–15.
... The literature has highlighted the potential of XR to enhance gaming and socialization [79,81], arts and design [67], e-commerce advertisements [51], and education [57]. The rise of XR technology has prompted discussions on deceptive design 1 (also known as "dark pattern")-the user interface design that researchers deem manipulative [6,49]-from experts in engineering [39,73], security and privacy [8,21], cognitive science [20], and humanities and social science [67]. ...
... Thus, to build a strong foundation, our RQ1 asks, how has the existing literature defined deceptive design in the context of XR? Deceptive design found on websites, games, and mobile apps often relies on interface design elements (e.g., a countdown timer) [7,23]. Research indicates that XR's immersive capabilities [86], including multisensory feedback [8,51], have the potential to modify users' choice architecture [68]. Therefore, RQ2 asks, How can XR amplify the effects of deceptive design? ...
... XR encompasses Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) [51]. VR lets users interact with virtual objects by providing visual and auditory feedback in a fully immersive virtual environment [8,64,73,86]. By creating a virtual world that replaces reality, VR blocks users' perception of the real world [51]. ...
Article
Full-text available
The well-established deceptive design literature has focused on conventional user interfaces. With the rise of extended reality (XR), understanding deceptive design's unique manifestations in this immersive domain is crucial. However, existing research lacks a full, cross-disciplinary analysis that analyzes how XR technologies enable new forms of deceptive design. Our study reviews the literature on deceptive design in XR environments. We use thematic synthesis to identify key themes. We found that XR's immersive capabilities and extensive data collection enable subtle and powerful manipulation strategies. We identified eight themes outlining these strategies and discussed existing countermeasures. Our findings show the unique risks of deceptive design in XR, highlighting implications for researchers, designers, and policymakers. We propose future research directions that explore unintentional deceptive design, data-driven manipulation solutions, user education, and the link between ethical design and policy regulations.
... XR technologies create a virtual world for people to interact with digital objects by tracking their positions, movements, and surroundings through body sensors [7]. As XR changes the way people access digital information and interact with the real world, advertisers can profile users using XR sensor data [46]. ...
... XR uses body sensors to track user movements and generate visual, audio, and haptic feedback when interacting with the virtual environment [7]. Based on these data, inferences can be made about user's physical and mental conditions [47,51] as well as habitual movements [41,47,56]. ...
... Based on these data, inferences can be made about user's physical and mental conditions [47,51] as well as habitual movements [41,47,56]. Cognitive, emotional, and personality issues can be estimated [7,46,51,52]. To better use XR systems, trade-offs between providing users' biometric and demographic information to enable system functionality are inevitable [43]. All this information can further be used to deanonymize the user's identity [24,47,56], manipulate the user toward certain behaviours and buying decisions [7,46,51], and derive profit for the manufacturer and third-party companies [18,46]. ...
Preprint
Full-text available
Extended Reality (XR) technology is changing online interactions, but its granular data collection sensors may be more invasive to user privacy than web, mobile, and the Internet of Things technologies. Despite an increased interest in studying developers' concerns about XR device privacy, user perceptions have rarely been addressed. We surveyed 464 XR users to assess their awareness, concerns, and coping strategies around XR data in 18 scenarios. Our findings demonstrate that many factors, such as data types and sensitivity, affect users' perceptions of privacy in XR. However, users' limited awareness of XR sensors' granular data collection capabilities , such as involuntary body signals of emotional responses, restricted the range of privacy-protective strategies they used. Our results highlight a need to enhance users' awareness of data privacy threats in XR, design privacy-choice interfaces tailored to XR environments, and develop transparent XR data practices.
... Alongside these physical and cognitive transformations, big tech companies have competed to bring cutting-edge technologies to the market: after an announcement from Meta in June 2023 that they were going to invest millions in the Metaverse, Apple released the Apple Vision Pro, a device which allows for the combination of physical, virtual, and augmented reality. These changes, however, do not come without controversies: the Metaverse, which is envisaged to allow us to move away from the Internet as a distinctly separate space from our physical world, and virtual reality (VR), which is one of the possible ways we will access the Metaverse, have reignited debates on privacy, safety, and surveillance, especially when children are concerned [2][3][4]. In fact, the rapid development and cost accessibility of immersive technologies as well as the fast scaling of VR products in the gaming industry have made children the main target for the commercialization of devices and programs for experiences in virtual environments [5,6]. ...
Article
Full-text available
This literature review presents a comprehensive and systematic account of research on the experiences of children with extended reality (XR), including VR, AR, and other types of immersive technologies that enhance and augment children’s activities. The search on Scopus and Web of Science produced 531 outputs. Content analysis with inter-rater reliability (Krippendorff’s α) and Leximancer, a software for text mining, were used for analyzing the material. Four research strands were identified: (1) interventions, treatments, and medical procedures in clinical contexts; (2) teaching and learning enhanced by XR; (3) children’s adoption and user experiences; (4) design and prototyping of XR hardware and software for children. The results showed the following findings: (a) studies on children’s clinical interventions and treatments using HMD-supported immersive virtual reality comprise the most substantial strand of studies; (b) research in this area, and in teaching and learning studies, has grown dramatically since 2017, while the other areas have been stagnant over the years; (c) AR research is still limited and is mainly applied in educational contexts for design and prototyping; (d) few studies have considered children’s perspectives on XR safety issues; (e) research on the use of XR for enhancing social and emotional skills development is underrepresented. Future research should focus on the potential of XR technologies for interventions to enhance children’s psychosocial wellbeing and health more broadly. The further implications and study limitations for the fast-developing nature of this transdisciplinary research field are also discussed.
... Over the past decade, rapid advancements have been made in big data, cloud services, the Internet of Things, machine learning, deep learning, and other artificial intelligence technologies [1][2][3][4][5]. However, challenges have emerged alongside these developments, with personal privacy facing unprecedented threats [6,7]. In response to the limitations and constraints of traditional privacy protection methods, federated learning has emerged as a cutting-edge solution [8,9]. ...
Article
Federated learning, as a distributed machine learning framework, aims to protect data privacy while addressing the issue of data silos by collaboratively training models across multiple clients. However, a significant challenge to federated learning arises from the non-independent and identically distributed (non-iid) nature of data across different clients. Non-iid data can lead to inconsistencies between the minimal loss experienced by individual clients and the global loss observed after the central server aggregates the local models, affecting the model's convergence speed and generalization capability. To address this challenge, we propose a novel federated learning algorithm based on update bias (FedUB). Unlike traditional federated learning approaches such as FedAvg and FedProx, which independently update model parameters on each client before direct aggregation to form a global model, the FedUB algorithm incorporates an update bias in the loss function of local models, specifically, the difference between each round's local model updates and the global model updates. This design aims to reduce discrepancies between local and global updates, thus aligning the parameters of locally updated models more closely with those of the globally aggregated model, thereby mitigating the fundamental conflict between local and global optima. Additionally, during the aggregation phase at the server side, we introduce a metric called the bias metric, which assesses the similarity between each client's local model and the global model. This metric adaptively sets the weight of each client during aggregation after each training round to achieve a better global model. Extensive experiments conducted on multiple datasets have confirmed the effectiveness of the FedUB algorithm. The results indicate that FedUB generally outperforms methods such as FedDC, FedDyn, and Scaffold, especially in scenarios involving partial client participation and non-iid data distributions. It demonstrates superior performance and faster convergence in tasks such as image classification.
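The update-bias idea described in the abstract above can be sketched numerically. The following is a toy illustration under stated assumptions, not the authors' implementation: it models each client's task as a simple quadratic loss, uses a hypothetical penalty weight `lam` for the update-bias term, and stands in for the paper's bias metric with a softmax over cosine similarities between each client's update and the mean update.

```python
# Toy sketch of an "update bias" federated scheme in the spirit of FedUB.
# The quadratic losses, `lam`, and the similarity-based aggregation weights
# are illustrative assumptions, not the paper's exact formulation.
import numpy as np

def local_train(w_global, target, prev_global_update, lam=0.1, lr=0.1, steps=20):
    """Gradient descent on a toy quadratic loss ||w - target||^2, plus a
    penalty keeping this client's update (w - w_global) close to the
    previous global update direction."""
    w = w_global.copy()
    for _ in range(steps):
        grad_task = 2 * (w - target)
        grad_bias = 2 * ((w - w_global) - prev_global_update)
        w = w - lr * (grad_task + lam * grad_bias)
    return w

def aggregate(w_global, client_models):
    """Weight clients by cosine similarity between their update and the
    mean update (a stand-in for the paper's 'bias metric'), normalized
    with a softmax."""
    updates = [w - w_global for w in client_models]
    mean_up = np.mean(updates, axis=0)
    sims = np.array([float(np.dot(u, mean_up)) /
                     (np.linalg.norm(u) * np.linalg.norm(mean_up) + 1e-12)
                     for u in updates])
    weights = np.exp(sims) / np.exp(sims).sum()
    return w_global + sum(wt * u for wt, u in zip(weights, updates))

# Two clients with non-iid targets; the fixed point of these dynamics is
# the mean of the client targets.
w_global = np.zeros(3)
prev_update = np.zeros(3)
targets = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
for _ in range(30):
    local_models = [local_train(w_global, t, prev_update) for t in targets]
    new_global = aggregate(w_global, local_models)
    prev_update = new_global - w_global
    w_global = new_global
print(w_global)  # approaches the mean of the targets, [0.5, 0.5, 0.0]
```

The bias penalty discourages each client's local update from drifting away from the global update direction, and the similarity-based weights down-weight clients whose updates diverge most from the consensus, which is the qualitative mechanism the abstract describes for mitigating non-iid drift.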
Chapter
This chapter delves into nonverbal communication within the metaverse, encompassing aspects such as body language, emotional and facial expressions, as well as vocal cues. Equally significant are feedback channels from receivers, avatar customization, and the utilization of deep synthesis or deepfakes. The chapter concludes by examining the role of emoticons and emojis in communication within the metaverse.
Article
Dark Patterns are deceptive designs that influence a user's interactions with an interface to benefit someone other than the user. Prior work has identified dark patterns in WIMP interfaces and ubicomp environments, but how dark patterns can manifest in Augmented and Virtual Reality (collectively XR) requires more attention. We conducted ten co-design workshops with 20 experts in XR and deceptive design. Our participants co-designed 42 scenarios containing dark patterns, based on application archetypes presented in recent HCI/XR literature. In the co-designed scenarios, we identified ten novel dark patterns in addition to 39 existing ones, as well as ten examples in which specific characteristics associated with XR potentially amplified the effect dark patterns could have on users. Based on our findings and prior work, we present a classification of XR-specific properties that facilitate dark patterns: perception, spatiality, physical/virtual barriers, and XR device sensing. We also present the experts' assessments of the likelihood and severity of the co-designed scenarios and highlight key aspects they considered for this evaluation, for example, technological feasibility, ease of upscaling and distributing malicious implementations, and the application's context of use. Finally, we discuss means to mitigate XR dark patterns and support regulatory bodies to reduce potential harms.
Article
Slot machines are one of the most played games by players suffering from gambling disorder. New technologies like immersive Virtual Reality (VR) offer more possibilities to exploit erroneous beliefs in the context of gambling. Recent research indicates a higher risk potential when playing a slot machine in VR than on desktop. To continue this investigation, we evaluate the effects of providing different degrees of embodiment, i.e., minimal and full embodiment. The avatars used for the full embodiment further differ in their appearance, i.e., they elicit a high or a low socio-economic status. The virtual environment (VE) design can cause a potential influence on the overall gambling behavior. Thus, we also embed the slot machine in two different VEs that differ in their emotional design: a colorful underwater playground environment and a virtual counterpart of our lab. These design considerations resulted in four different versions of the same VR slot machine: 1) full embodiment with high socio-economic status, 2) full embodiment with low socio-economic status, 3) minimal embodiment playground VE, and 4) minimal embodiment laboratory VE. Both full embodiment versions also used the playground VE. We determine the risk potential by logging gambling frequency as well as stake size, and measuring harm-inducing factors, i.e., dissociation, urge to gamble, dark flow, and illusion of control, using questionnaires. Following a between-groups experimental design, 82 participants played one of the four versions for 20 game rounds. We recruited our sample from the students enrolled at the University of Würzburg. Our safety protocol ensured that only participants without any recent gambling activity took part in the experiment. In this comparative user study, we found no effect of embodiment or VE design on gambling frequency, stake size, or risk potential. However, our results provide further support for the hypothesis that the higher visual angle on gambling stimuli, and hence the increased emotional response, is the true cause of the higher risk potential.
Article
The proportion of the population who experience persecutory thoughts is 10–15%. People then engage in safety-seeking behaviours, typically avoiding social interactions, which prevents disconfirmatory experiences and hence paranoia persists. Here we show that persecutory thoughts can be reduced if, prior to engaging in social interaction in VR, participants first see their virtual body-double doing so. Thirty non-clinical participants were recruited to take part in a study where they were embodied in a virtual body that closely resembled themselves, and asked to interact with members of a crowd. In the Random condition (n = 15) they observed their body-double wandering around but not engaging with the crowd. In the Targeted condition the body-double correctly interacted with members of the crowd. The Green Paranoid Thoughts Scale was measured 1 week before and 1 week after the exposure and decreased only for those in the Targeted condition. The results suggest that the observation of the body-double correctly carrying out a social interaction task in VR may lead to anxiety-reducing mental rehearsal for interaction, thus overcoming safety behaviours. The results also extend knowledge of the effects of vicarious agency, suggesting that identification with the actions of the body-double can influence subsequent psychological state.
Article
The Problems with Immersive Advertising: In AR/VR, Nobody Knows You Are an Ad
Conference Paper
Digital reality refers to the wide spectrum of technologies and affordances, including Augmented Reality, Virtual Reality, and Mixed Reality, that simulate reality in various ways. The current level of digital technology, and its future development towards improving user involvement, entertainment, and accessibility, raises not only technological questions but also regulatory, policy, and liability issues. The ever-growing market of services using public networks will offer new possibilities and dangers for users and businesses, and will create room for criminal activity. Regulators try to follow and adjust laws according to these challenges. This paper briefly analyses the current state of regulation at the Hungarian and EU levels, directing the attention of developers, system engineers, and software designers to questions of responsibility. Based on a literature review, this paper discusses issues that are currently the focus of regulation in Europe in this regard.
Article
Previous studies have suggested that virtual reality (VR) can elicit emotions in different visual modes using 2D or 3D headsets. However, the effects on emotional arousal by using these two visual modes have not been comprehensively investigated, and the underlying neural mechanisms are not yet clear. This paper presents a cognitive psychological experiment that was conducted to analyze how these two visual modes impact emotional arousal. Forty volunteers were recruited and were randomly assigned to two groups. They were asked to watch a series of positive, neutral and negative short VR videos in 2D and 3D. Multichannel electroencephalograms (EEG) and skin conductance responses (SCR) were recorded simultaneously during their participation. The results indicated that emotional stimulation was more intense in the 3D environment due to the improved perception of the environment; greater emotional arousal was generated; and higher beta (21–30 Hz) EEG power was identified in 3D than in 2D. We also found that both hemispheres were involved in stereo vision processing and that brain lateralization existed in the processing.
Conference Paper
The privacy and security of personal data has been at the forefront of public concern for some time now, and is typically understood in the context of data collected from online interaction (social media, transactions, search engine queries, etc.). The advent of immersive technologies expands data collection beyond what can typically be extracted via online interaction, particularly in terms of the availability of biometric data (eye tracking and gait analysis). However, it has not yet been widely recognized that interactions themselves, and the data they generate, will soon be a privacy concern. We mediate interactions in everyday life through the maintenance of personal space, and allow certain individuals and objects into our personal space. We do the same in virtual reality. Our personal space allows us to preserve our feeling of safety, and the way we mediate it reveals our biases and preferences. In this work, we examine the implications of interaction data, whose availability will bring a host of ethical and privacy concerns.
Article
Emotional expressivity is essential for human interactions, informing both perception and decision-making. Here, we examine whether creating an audio-visual emotional channel mismatch influences decision-making in a cooperative task with a virtual character. We created a virtual character that was either congruent in its emotional expression (smiling in the face and voice) or incongruent (smiling in only one channel). People (N = 98) evaluated the character in terms of valence and arousal in an online study; then, visitors in a museum played the “lunar survival task” with the character over three experiments (N = 597, 78, 101, respectively). Exploratory results suggest that multi-modal expressions are perceived, and reacted upon, differently than unimodal expressions, supporting previous theories of audio-visual integration.
Article
Previous perceptual studies on human faces have shown that specific facial features have consistent effects on perceived personality and appeal, but it remains unclear if and how findings relate to perception of virtual characters. For example, wider human faces have been found to appear more aggressive and dominant, whereas studies on virtual characters have shown opposite trends but have suffered from significant eeriness of exaggerated features. In this study, we use highly realistic virtual faces obtained from 3D scanning, as well as cartoon-rendered counterparts retaining facial proportions. We assess the effects of facial width and eye size on perceptions of appeal, trustworthiness, aggressiveness, dominance, and eeriness. Our manipulations did not affect eeriness, and we find the same perceptual trends previously reported for human faces.