Regulating the Metaverse,
a Blueprint for the Future
Louis B. Rosenberg(B)
Unanimous AI, San Francisco, CA, USA
Louis@Unanimous.ai
Abstract. The core Immersive Media (IM) technologies of Virtual Reality (VR)
and Augmented Reality (AR) have steadily advanced over the last thirty years,
enabling high fidelity experiences at consumer prices. Over the same period, net-
working speeds have increased dramatically, culminating in the deployment of
5G networks. Combined, these advancements greatly increase the prospects for
widespread adoption of virtual and augmented worlds. Recently branded “the
metaverse” by Meta and other large platforms, major corporations are currently
investing billions to deploy immersive worlds that target mainstream activities
from socializing and shopping to education and business. With the prospect that
corporate-controlled metaverse environments proliferate society over the next
decade, it is important to consider the risks to consumers and plan for meaningful
regulation. This is especially true in light of the unexpected negative impact that
social media platforms have had on society in recent years. The dangers of the
metaverse are outlined herein along with proposals for sensible regulation.
Keywords: Metaverse · Augmented reality · Virtual reality · Extended reality ·
Mixed reality · XR · VR · AR · MR · Technology policy · Regulation
1 Background: Regulating Media
To provide a legal and philosophical basis for regulating the metaverse, it is helpful to first
consider the arguments made for regulating social media, as the metaverse can be viewed
as an evolution of the same industry. Assessing media regulation, Yale Law professor
Jack Balkin describes social media companies as “key institutions in the twenty-first
century digital public sphere,” and explains that the public sphere “doesn’t work properly
without trusted and trustworthy institutions.” He further argues the public sphere created
by social media is a successor to the public sphere created by print and broadcast media,
which has been regulated by industry norms and government oversight for generations
[1]. At the same time, Balkin and other scholars express caution about government
overreach, as aggressive regulation could be damaging to free speech and other rights,
with some experts pushing for industrywide self-regulation as a means of reducing the
level of necessary government oversight [2,3].
As we look beyond social media to emerging technologies such as virtual reality and
augmented reality, similar principles apply. In fact, the impact of the metaverse on the
© Springer Nature Switzerland AG 2022
L. T. De Paolis et al. (Eds.): XR Salento 2022, LNCS 13445, pp. 1–10, 2022.
https://doi.org/10.1007/978-3-031-15546-8_23
public sphere is likely to be far more encompassing. In October of 2021, Meta CEO
Mark Zuckerberg wrote that “in the metaverse, you’ll be able to do almost anything you
can imagine: get together with friends and family, work, learn, play, shop, create.”
Clearly the metaverse, when broadly deployed by major corporations, aims to take on
the role of “a public sphere” as much if not more than today’s social media [4]. This
transition may happen very quickly, as Meta is currently investing $10B per year with
the stated goal that “within the next decade, the metaverse will reach a billion people,
host hundreds of billions of dollars of digital commerce, and support jobs for millions
of creators and developers.” And Meta is not the only major corporation aggressively
pursuing this vision: Apple, Microsoft, Google, Sony, Samsung, and Snap are just a
few of the major players that have announced significant efforts [5, 6].
With Big Tech investing hundreds of billions of dollars in VR and AR products and
services, it is reasonable to predict that the metaverse will impact the lives of billions
of people within the next decade, driving a global transition from flat media to immer-
sive media as the primary means by which users access digital content [7]. This will
greatly impact the public sphere, giving even more control to platform providers than
current technologies. With the industry heading in this direction, it’s prudent to assess
the potential dangers of the metaverse and propose viable regulatory solutions [8].
2 The Metaverse: Potential Dangers
At the highest level, “the metaverse” can be described as the societal transition from
the current information ecosystem based on flat media viewed in the third person to a
new ecosystem rooted in immersive media experienced in the first person. It is not the
technology itself that is dangerous to consumers, but the fact that metaverse platforms
are likely to be controlled by large corporations that implement aggressive business
tactics similar to those used in social media. Before describing the potential dangers of
corporate controlled metaverse platforms, it’s worth defining “metaverse” along with
the two primary forms of immersive media, “virtual reality” and “augmented reality”:
Virtual Reality (VR) is an immersive and interactive simulated environment that
is experienced in the first person and provides a strong sense of presence to the
user [27].
Augmented Reality (AR) is an immersive and interactive environment in which
virtual content is spatially registered to the real world and experienced in the first
person, providing a strong sense of presence in a combined real/virtual space [9,
28].
Mixed Reality (MR) is commonly used as a synonym for augmented reality,
referring to mixed environments of real/virtual content. Extended Reality (XR)
is commonly used as a catch-all for all forms of immersive media [9,28].
Metaverse is a persistent and immersive simulated world that is experienced in
the first person by large groups of simultaneous users who share a strong sense
of mutual presence. It can be fully virtual (i.e. a Virtual Metaverse) or it can be
layers of virtual content overlaid on the real world (i.e. an Augmented Metaverse)
[9,27,28].
With these definitions in place, we can express that a virtual metaverse is a fully
simulated world in which users are represented by graphical avatars under their own
individual control (see Fig. 1). Conversely, in an augmented metaverse the human par-
ticipants are generally not avatar-based but interact directly with a real world that is
embellished with virtual content (see Fig. 2).
Fig. 1. Virtual Metaverse example - The Nth Floor (Accenture)
Fig. 2. Augmented Metaverse example - (Keiichi Matsuda)
Some say a metaverse must also include rules of conduct and a fully functional
economic system, but that is still open for debate [10]. In addition, some say that a meta-
verse must also be interoperable, enabling items and transactions to be exchanged among
multiple virtual worlds. While it is likely that many virtual and augmented worlds will
enable such interoperability, the definition of metaverse should allow for self-contained
worlds, as it is likely that many platforms will be independent.
Whether a virtual metaverse or an augmented metaverse, it’s not the technology
itself that poses a risk to society, but the fact that the infrastructure required to enable
immersive worlds will give powerful corporations the ability to monitor and mediate
intimate aspects of our lives, from what products, services, and information consumers
are exposed to, to what experiences they have throughout their day and who they are
having those experiences with. On the surface, this may sound similar to the impact
of today’s social media, which also monitors and mediates user experiences, but in the
metaverse the corporate intrusion could be far more extensive.
To explore the potential dangers in a structured way, it is helpful to describe what has
been called The Three M’s of the Metaverse, for the core social problems relate to the
inherent ability of metaverse platforms to monitor users, manipulate users, and monetize
users [11,27]. These three risk-categories are outlined as follows:
(1) Monitoring Consumers in the Metaverse: Over the last two decades, technology
companies have made a science of tracking and characterizing users on their platforms,
as it enables the sale of targeted advertising [12]. Such targeting has been a boon for
advertisers and a windfall for media platforms, resulting in some of the most valuable
corporations in history. Unfortunately, such targeting has exploited consumers, reduced
personal privacy, and made social media a polarizing force by allowing third parties
to deploy customized messaging that is skillfully aimed at very specific demographic
groups. This tactic has had the widespread effect of amplifying existing biases and
preconceptions in populations, radicalizing political views and spreading misinformation
[13].
In the metaverse, these problems are likely to get significantly worse [14]. That’s
because the technology will not just track where users click, but where they go, who
they’re with, what they do, what they look at, even how long their gaze lingers. Immer-
sive platforms will also track facial expressions, vocal inflections, and vital signs, while
intelligent algorithms use such data to predict each person’s real-time emotional state
[15–17]. Tracking will also include real-time monitoring of user gait and posture,
assessing when users slow down to browse products or services. Metaverse platforms will even
monitor manual reach, assessing when users grab for objects (both real and virtual) and
tracking how long they hold the objects to investigate. This will be especially invasive
in the augmented metaverse in which user gaze, gait, and reach will be monitored in the
real world, for example while shopping in augmented physical stores. This may sound
extreme, but real-time tracking of manual interactions with real objects goes back to
the first interactive augmented reality system developed in 1992 at the Air Force Research
Laboratory (Virtual Fixtures platform) [7,24,26].
These various forms of tracking, when taken together, suggest that the platform
providers controlling the metaverse will not just monitor how their users act, but how
they react and interact, profiling their behaviors and responses at far deeper levels than
has been possible in traditional media platforms. Of course, the danger is not merely
that these personal parameters can be tracked and stored, but that advertisers and other
paying third parties will be able to use this invasive data to manipulate consumers with
targeted content more effectively than ever before.
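To make the scope of this tracking concrete, the following sketch shows how a handful of telemetry samples could be aggregated into the gaze-dwell profiles described above. All names and fields (`TelemetrySample`, `dwell_times`) are invented for illustration, not any platform's actual schema:

```python
from dataclasses import dataclass

# Hypothetical telemetry record: the kinds of signals the text argues an
# immersive platform could capture many times per second.
@dataclass
class TelemetrySample:
    timestamp: float          # seconds since session start
    gaze_target: str          # id of the object the user is looking at
    heart_rate_bpm: float     # vital sign from a paired wearable
    walking_speed_mps: float  # gait, from headset position tracking

def dwell_times(samples, interval=0.1):
    """Aggregate how long the user's gaze lingered on each object."""
    totals = {}
    for s in samples:
        totals[s.gaze_target] = totals.get(s.gaze_target, 0.0) + interval
    return totals

samples = [
    TelemetrySample(0.0, "storefront_ad", 72.0, 1.2),
    TelemetrySample(0.1, "storefront_ad", 74.0, 0.6),  # user slows down
    TelemetrySample(0.2, "street", 73.0, 1.3),
]
print(dwell_times(samples))  # {'storefront_ad': 0.2, 'street': 0.1}
```

Even this toy aggregation already reveals which item held the user's attention and that they slowed down while looking at it, which is precisely the behavioral profiling at issue.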
(2) Manipulating Experiences in the Metaverse: From the early days of radio and
television, advertisers have skillfully influenced public opinion on topics ranging from
consumer products to political beliefs. With the advent of social media, custom targeted
advertising has greatly increased the persuasive ability of promotional messaging [18,
19]. In the metaverse, such targeting will get far more personal and the content will
get much harder to resist [20]. That’s because in today’s flat media environments like
Facebook, Instagram, TikTok and Twitter, consumers are generally aware when they are
being targeted by an advertisement and can muster a healthy dose of skepticism [21]. But
in the metaverse, consumers won’t be targeted with simple pop-up ads or promo-videos.
Instead, consumers will have immersive content injected into their world such as virtual
people, products, and activities that seem as real as everything else around them.
Virtual Product Placements (VPPs) are likely to become a widespread advertising
tactic within the metaverse, being applied in both virtual and augmented worlds. We
can define a VPP as a simulated product, service, or activity injected into a virtual
or augmented world on behalf of a paying sponsor such that it appears as a natural
element of the ambient environment. Such advertising can be quite effective because
users can easily mistake a VPP for an organic part of the world that was serendipitously
encountered rather than a targeted piece of promotional material that was deliberately
inserted into the world for that specific user to experience.
For example, imagine walking down the street in a virtual or augmented world. In
both cases, the platform provider will be able to track where you are in real-time, how
fast you are moving, and what your gaze is aimed at. The platform provider will also
have access to a database about your behaviors and interests, values and affiliations, and
of course, your shopping history. In social media, this personal information would be
used to target you with traditional advertising. In the metaverse, platform providers will
be able to manipulate your real-time experiences. This might include seeing particular
cars on the virtual streets, particular brands in store windows, or even include simulated
people drinking a particular soft drink as they walk past you on the sidewalk. You might
assume that everyone around you is seeing the same thing, but that is not the case: these
promotional experiences could have been injected exclusively for you by the platform
provider on behalf of a paying third party.
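As an illustration of the targeting logic just described, here is a deliberately simplified sketch; `select_vpp`, the profile fields, and the placement records are all invented for this example, and a real platform would use far richer models:

```python
# Hypothetical sketch of Virtual Product Placement (VPP) selection for
# one specific user, based on overlap between placement tags and the
# user's profiled interests.
def select_vpp(user_profile, placements):
    """Return the sponsored placement whose tags best match the user's
    interests, or None if nothing matches."""
    def score(p):
        return len(set(p["tags"]) & set(user_profile["interests"]))
    best = max(placements, key=score)
    return best if score(best) > 0 else None

user = {"interests": ["soda", "sneakers"]}
placements = [
    {"sponsor": "BrandA", "asset": "billboard_sports_car", "tags": ["cars"]},
    {"sponsor": "BrandB", "asset": "npc_drinking_soda", "tags": ["soda"]},
]
chosen = select_vpp(user, placements)
print(chosen["asset"])  # npc_drinking_soda
```

The point of the sketch is that the selection is invisible to the user: a different profile walking the same street would be served a different "serendipitous" scene.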
Virtual People (Veeple): In the metaverse, immersive promotional content will go
beyond inanimate objects to AI-driven simulated people that look and act like any other
user but are computer generated spokespeople controlled by AI engines programmed
to pursue a persuasive agenda. These virtual people will be placed in the metaverse,
targeting specific users for either (i) passive observation or (ii) direct engagement. As a
passive example, a targeted user might observe two people having a conversation about
a product, service, or idea. The targeted user may assume that the people are other
metaverse users like themselves, not realizing that a third party injected those virtual
people into their world as a subtle form of targeted advertising. As an active example, a
targeted user may be approached by a virtual person that engages them in promotional
conversation. The interaction could be so authentic, the targeted user would not realize
it’s an AI-driven avatar with a persuasive agenda.
For both active and passive uses, these AI agents will have access to profile data
collected about the targeted user, including their preferences, interests, and a historical
record of their reactions to prior promotional engagements. These AI agents will also
have access to real-time emotional data generated by capturing facial expressions, vocal
inflections, and vital signs of the targeted users. This will enable the AI agent to adjust
conversational tactics in real-time for optimal persuasion. Even the manner in which
these simulated people appear will be crafted for maximum persuasion—their gender,
hair color, eye color, clothing style, voice and mannerisms—will be custom generated
by algorithms that predict which sets of features are most likely to influence the targeted
user based on previous interactions and behaviors (see Fig. 3)[11,23,27].
Fig. 3. Virtual human used in social trust research (2019) [23]
In the past, researchers have expressed doubt that computer generated avatars could
successfully fool consumers, but recent research suggests otherwise. In a 2022 study,
researchers from Lancaster University and UC Berkeley demonstrated that when virtual
people are created using generative adversarial networks (GANs), the resulting faces are
indistinguishable from real humans to average consumers. Even more surprising, they
determined that average consumers perceive virtual people as “more trustworthy” than
real people [29]. This suggests that in the not-so-distant future, advertisers will prefer
AI-driven artificial humans as their promotional representatives.
(3) Monetizing Users in the Metaverse: It is worth acknowledging that platform
providers are not charities but commercial entities that require substantial revenue to
support the interests of their employees and shareholders. And because the public has
resisted paying subscriptions for access to online platforms, the most common industry
model is to provide free access in exchange for widespread advertising, focusing largely
on “targeted ads” that can be precisely delivered based on the unique behaviors and
interests of individual users. It is the business model of targeted advertising that has
driven platform providers to pursue extensive tracking and profiling of their users. This
suggests that one way to reduce these risks in future metaverse platforms is to shift from
ad-based to subscription-based business models [11,27].
3 The Metaverse: Non-regulatory Solutions
As described above, shifting from ad-based to subscription-based models could reduce
the motivation that platform providers have to profile and target their userbase. This is
only viable if consumers are willing to pay for access directly. Therefore, we cannot
assume this approach will become widespread anytime soon. We also can’t assume
that the industry will adopt trusted norms and practices that eliminate abuses without
government oversight. And finally, we can’t expect consumers to simply opt-out of the
metaverse if they are uncomfortable with extensive tracking and targeting. This is because
metaverse platforms will become a primary access point to digital content. Similar to
how consumers cannot opt out of using the internet, opting out of the metaverse would
mean missing out on critical information and services [8,11,27].
4 The Metaverse: Regulatory Solutions
Assuming the problems are not solved by industry norms or by changes in business
models, we will need some level of government regulation of the metaverse to prevent
exploitation of consumers. Of course, the question is what needs to be regulated? A
number of ideas are presented below for consideration:
4a Restrict User Monitoring: As described above, platform providers will have
access to everything their users do, say, touch and see inside the metaverse. It may
be impossible to prevent this, as tracking is required for real-time simulation of virtual
interactions. That said, platform providers should not be allowed to store this data for
more than the short periods of time required to mediate whatever virtual experience is
being generated. This would greatly reduce the degree to which providers can profile
user behaviors over time. In addition, providers should be required to inform the public
as to what is being tracked and how long it is retained. For example, if a platform is
monitoring the direction and duration of a user’s gaze as they walk through a virtual or
augmented world, that user should be overtly notified at the time of tracking.
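The retention limit proposed here could be approximated as a buffer that discards telemetry past a fixed age. This is an illustrative sketch only; `EphemeralTelemetryBuffer` and its parameters are invented, and a real regulation would specify the permitted retention window:

```python
import time
from collections import deque

# Minimal sketch of the proposed retention rule: telemetry may be held
# only long enough to mediate the current experience, then discarded.
class EphemeralTelemetryBuffer:
    def __init__(self, max_age_seconds=5.0):
        self.max_age = max_age_seconds
        self._buffer = deque()  # (timestamp, sample) pairs, oldest first

    def add(self, sample, now=None):
        now = time.time() if now is None else now
        self._buffer.append((now, sample))
        self._evict(now)

    def recent(self, now=None):
        """Return only samples still inside the retention window."""
        now = time.time() if now is None else now
        self._evict(now)
        return [s for _, s in self._buffer]

    def _evict(self, now):
        # Drop anything older than the permitted retention window.
        while self._buffer and now - self._buffer[0][0] > self.max_age:
            self._buffer.popleft()

buf = EphemeralTelemetryBuffer(max_age_seconds=5.0)
buf.add("gaze:storefront", now=0.0)
buf.add("gaze:street", now=4.0)
print(buf.recent(now=6.0))  # ['gaze:street'] -- older sample aged out
```

The design choice worth noting is that eviction happens on every access, so expired samples can never be read back for profiling, only the short-lived data needed to render the current experience.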
4b Restrict Emotional Analysis: As outlined above, the metaverse will likely use
advertising algorithms that monitor personal features such as facial expressions, vocal
inflections, posture, and vital signs including heart rate, respiration rate, blood pressure,
and galvanic skin response captured through smart-watches and other wearable devices
such as earbuds. Unless regulated, these invasive physiological signals will be used
to generate emotional profiles and optimize marketing messages, enabling AI agents to
adjust their conversational strategy in real time. Regulation should be crafted to limit
the scope of such advertising. In addition, users should be overtly informed whenever
these personal qualities are being tracked and stored for promotional purposes.
4c Regulate Virtual Product Placements: Inside the metaverse, advertisers will
move away from traditional marketing methods like pop-up ads and promo-videos,
instead leveraging the immersive features of the technology. This will include targeting
users with promotional artifacts and activities injected into their environment that seem
authentic. In the metaverse, a targeted user might observe a person walking down the
street drinking a particular brand of soft drink. That observation will influence the tar-
geted user, consciously or subconsciously, especially if they notice many people drinking
that same product throughout their day. Users could easily believe such virtual experi-
ences were authentic serendipitous encounters that reflect popularity of the particular
drink in their community, when really, it was a targeted experience, injected into their
world by an unknown third party. And this type of manipulation can extend beyond soft
drinks to biased political messaging, disinformation, and other socially destabilizing
forms of agenda-driven promotion.
Because it may be impossible to distinguish between authentic and manufactured
experiences in the metaverse, regulation is needed to protect consumers. At a minimum,
platform providers should be required to inform users of all product placement in virtual
or augmented worlds, ensuring that targeted promotional content is not misinterpreted as
natural serendipitous encounters. In fact, product placements should be visually distinct
from other items in the metaverse, enabling users to easily identify when an artifact has
been placed in the world for promotional purposes versus organic content. In addition,
platform providers should be required to inform users who sponsored each virtual product
placement. Such transparency will greatly protect consumers.
4d Regulate Virtual People: As described above, the most manipulative form of persuasion
in the metaverse is likely to be through agenda-driven artificial agents that engage
users in promotional conversation. These virtual people will look and sound like other
users in the metaverse, whether the virtual world uses cartoon avatars or photorealistic
human representations [14,22]. Regardless of fidelity, if consumers can’t distinguish
between real users and artificial agents, they can be misled into believing they are having
a natural encounter when really, it’s a targeted promotional interaction.
To protect consumers, platform providers should be required to overtly inform users
whenever they are engaged by conversational agents controlled by AI engines. This
becomes even more important when the algorithms have access to user behavioral profiles
and real-time emotional data. At a minimum, platforms should be required to distinguish,
through overt visual and audio cues, that a user is interacting with an artificial agent and
further indicate if the agent can perform emotional analysis. In addition, the use of
vital signs such as blood pressure and heart rate should be banned in AI-driven
conversational advertising.
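A platform could enforce such a disclosure rule with a simple validation gate on outgoing agent messages. The sketch below is purely illustrative; `validate_agent_message` and its message fields are invented for this example:

```python
# Illustrative enforcement sketch for the disclosure rule proposed above:
# every message from an AI-driven agent must carry overt visual and audio
# cues, and vital-sign data is banned from conversational targeting.
def validate_agent_message(message):
    """Reject AI-agent messages that lack the required disclosure cues."""
    if message.get("sender_is_ai"):
        if not message.get("visual_cue") or not message.get("audio_cue"):
            raise ValueError("AI agent message missing disclosure cues")
        if message.get("uses_vital_signs"):
            raise ValueError("vital-sign data banned in conversational ads")
    return message

msg = {"sender_is_ai": True, "visual_cue": "halo", "audio_cue": "chime",
       "text": "Have you tried the new soda?"}
print(validate_agent_message(msg)["visual_cue"])  # halo
```

Placing the check in the message pipeline, rather than in each agent, mirrors how the text assigns the disclosure obligation to the platform provider rather than the advertiser.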
5 The Metaverse: Is It Worth It?
In 2021, the Aspen Institute published an 80-page report indicating that social media
platforms have become a “force multiplier for exacerbating our worst problems as
a society” [25]. As described above, metaverse platforms could create similar but more
extreme problems, enabling more invasive forms of profiling and more persuasive
methods of targeting [27]. Despite these risks, the metaverse has enormous potential to unleash
creativity and productivity, expanding what it means to be human. To enable these ben-
efits while protecting consumers, government and industry actors should push quickly
for meaningful and aggressive regulation.
References
1. Balkin, J.: How to regulate (and not regulate) social media. J. Free Speech Law 71, 1–31
(2021). (Knight Institute Occasional Paper Series No. 1, 25 March 2020; Yale Law School
Public Law Research Paper)
2. Weissmann, S.: How not to regulate social media. The New Atlantis. 58, 58–64 (2019). https://
www.jstor.org/stable/26609117
3. Cusumano, M., Gawer, A., Yoffie, D.: Social media companies should self-regulate. Now.
Harvard Business Review (2021)
4. Zuckerberg, M.: Founder’s Letter, 28 October 2021. https://about.fb.com/news/2021/10/
founders-letter/
5. Strange, A.: Facebook planted the idea of the metaverse but Apple can actually populate it.
Quartz, 29 November 2021. https://qz.com/2095986/facebook-is-marketing-the-metaverse-
but-apple-can-make-it-real/
6. Burke, E.: Tim Cook, AR will pervade our entire lives. Silicon Republic, January 2020
7. Rosenberg, L.B.: Augmented reality: reflections at thirty years. In: Arai, K. (ed.) FTC 2021.
LNNS, vol. 358, pp. 1–11. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-899
06-6_1
8. Rosenberg, L.: The Metaverse needs Aggressive Regulation. VentureBeat Magazine, 4
December 2021. https://venturebeat.com/2021/11/30/the-power-of-community-3-ways-sco
pely-keeps-players-engaged-entertained-and-connected/
9. Rosenberg, L.: VR vs. AR vs. MR vs. XR: What’s the difference? Big Think. 18 January
2022. https://bigthink.com/the-future/vr-ar-mr-xr-metaverse/
10. Park, G.: Silicon Valley is racing to build the next version of the Internet. The Washington
Post, 17 April 2020. https://www.washingtonpost.com/video-games/2020/04/17/fortnite-met
averse-new-internet/
11. Rosenberg, L.: Fixing the Metaverse: Augmented reality pioneer shares ideas for avoiding
dystopia. Big Think; 9 December 2021. https://bigthink.com/the-future/metaverse-dystopia/
12. Tucker, C.: The economics of advertising and privacy. Int. J. Indust. Organiz. 30(3), 326–329
(2012). ISSN 0167–7187
13. Crain, M., Nadler, A.: Political Manipulation and Internet Advertising Infrastructure. J. Inf.
Policy. 9, 370–410 (2019). https://doi.org/10.5325/jinfopoli.9.2019.0370
14. Rosenberg, L.: Metaverse: Augmented reality pioneer warns it could be far worse than social
media. Big Think. 6 November 2021. https://bigthink.com/the-future/metaverse-augmented-
reality-danger/
15. Ivanova, E., Borzunov, G.: Optimization of machine learning algorithm of emotion recognition
in terms of human facial expressions. Proc. Comput. Sci. 169, 244–248 (2020)
16. van den Broek, E.L., Lisý, V., Janssen, J.H., Westerink, J.H.D.M., Schut, M.H., Tuinenbreijer,
K.: Affective man-machine interface: unveiling human emotions through biosignals. In:
Fred, A., Filipe, J., Gamboa, H. (eds.) Biomedical Engineering Systems and Technologies.
CCIS, vol. 52, pp. 21–47. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-
11721-3_2
17. Boz, H., Kose, U.: Emotion extraction from facial expressions by using artificial intelligence
techniques. BRAIN. Broad Res. Artif. Intell. Neurosci. 9(1), 5–16 (2018). ISSN 2067–3957
18. Zarouali, B., et al.: Using a personality-profiling algorithm to investigate political microtar-
geting: assessing the persuasion effects of personality-tailored ads on social media. Commun.
Res. 0093650220961965 (2020)
19. Van Reijmersdal, E.A., et al.: Processes and effects of targeted online advertising among
children. Int. J. Advert. 36(3), 396–414 (2017)
20. Hirsh, J.B., Kang, S.K., Bodenhausen, G.V.: Personalized persuasion: tailoring persuasive
appeals to recipients’ personality traits. Psychol. Sci. 23(6), 578–581 (2012)
21. Wojdynski, B.W., Evans, N.J.: The covert advertising recognition and effects (CARE) model:
processes of persuasion in native advertising and other masked formats. Int. J. Advert. 39(1),
4–31 (2020)
22. Rosenberg, L.: The Metaverse will be filled with Elves. TechCrunch, 12 January 2022. https://
techcrunch.com/2022/01/12/the-metaverse-will-be-filled-with-elves/
23. Zibrek, K., Martin, S., McDonnell, R.: Is Photorealism Important for Perception of Expressive
Virtual Humans in Virtual Reality? ACM Trans. Appl. Percept. 16(3), 19 (2019). https://doi.
org/10.1145/3349609
24. Rosenberg, L.: The Use of Virtual Fixtures as Perceptual Overlays to Enhance Operator
Performance in Remote Environments. Technical report AL-TR-0089, USAF Armstrong
Laboratory, Wright-Patterson AFB OH (1992)
25. Commission on Information Disorder Final Report. Aspen Institute, November 2021. https://
www.aspeninstitute.org/publications/commission-on-information-disorder-final-report/
26. Rosenberg, L.: How a Parachute Accident Helped Jump-start Augmented Reality. IEEE
Spectrum, 7 April 2022. https://spectrum.ieee.org/history-of-augmented-reality
27. Rosenberg, L.: Regulation of the metaverse: a roadmap. In: The 6th International Conference
on Virtual and Augmented Reality Simulations (ICVARS 2022), 25–27 March 2022, Brisbane
Australia (2022)
28. Metaverse 101: Defining the Key Components. VentureBeat, 5 February 2022. https://ventur
ebeat.com/2022/02/05/metaverse-101-defining-the-key-components/
29. Nightingale, S.J., Farid, H.: AI-synthesized faces are indistinguishable from real faces and
more trustworthy. Proc. Natl. Acad. Sci. 119(8) (2022). https://doi.org/10.1073/pnas.2120481119
Author Proof
... As defined above, the metaverse reflects a substantial transformation in how the general public relates to digital content, shifting the user from an outsider that peers in at digital content, to an insider that directly engages content all around them [7]. In some cases, users will be immersed in fully simulated worlds (virtual reality) while in other cases, users will engage the real world embellished with spatially registered virtual content (augmented reality) [8]. In addition, most metaverse environments will be persistent worlds that enable large populations of users to share mutual presence over expended periods [9,10]. ...
... For example, if a user of a particular demographic profile (age, income, education, etc.) is identified as a movie fan, that user might see a distinctive car from a newly released movie drive past them while walking down a virtual or augmented street. Because this is a targeted VPP, other people on that street would not see the same content [6][7][8][9][10]. ...
Chapter
Over the next decade, virtual and augmented worlds are expected to significantly impact global markets. Often called “the metaverse” by industry players, these technologies will significantly impact how consumers engage their digital life, shifting mainstream computing from an ecosystem of mostly flat media to one that largely employs immersive media. This shift into the metaverse will impact marketing professionals and their industry in profound ways, transforming basic tools and methods from flat images and videos to fully immersive content experienced in the 1st person. These tactics will likely include widespread use of Virtual Spokespeople (VSPs) and Virtual Product Placements (VPPs). These emerging methods have the potential to be extremely persuasive vehicles, for they target users through organic, intuitive, and deeply personal interactions. At the same time, these methods pose real dangers, for they can be misused on consumers in highly manipulative ways. This paper reviews the immersive marketing techniques expected to be employed in the metaverse, outlines their potential risks to consumers, and makes recommendations for regulating immersive marketing tactics in largescale platforms.KeywordsMetaverse marketingMetaverse advertisingRegulationVirtual Product Placements (VPPs)Virtual Spokespeople (VSPs)Augmented realityMixed realityVirtual realityImmersive rights
... In the context of the reality-virtuality continuum of Milgram and Kishino [81], our SoK focuses on virtual reality (VR), i.e., a computer-simulated interactive environment experienced in the first person [110]. The considered VR devices stem from reviewing the 68 collected publications. ...
... In the 2010s, researchers delved into ethical considerations of MR (2014, 2018) [52], [1], updated its challenges (2016) [104], discussed the threats of converging VR and social networks (2016) [91], and studied VR safety (e.g., epilepsy) (2018) [9]. Most recently, practitioners have continued work on VR safety attacks (e.g., misleading users to collide with their real-world surroundings) (2021, 2022) [19], [128], user authentication (2022) [118], [36], and advocated for new regulations for the upcoming metaverse applications (2022) [110]. While the ethics, authentication, safety, regulations, and underlying technologies are paramount for VR, these works lack a focus on privacy attacks and defenses. ...
Preprint
The adoption of virtual reality (VR) technologies has rapidly gained momentum in recent years as companies around the world begin to position the so-called "metaverse" as the next major medium for accessing and interacting with the internet. While consumers have become accustomed to a degree of data harvesting on the web, the real-time nature of data sharing in the metaverse indicates that privacy concerns are likely to be even more prevalent in the new "Web 3.0." Research into VR privacy has demonstrated that a plethora of sensitive personal information is observable by various would-be adversaries from just a few minutes of telemetry data. On the other hand, we have yet to see VR parallels for many privacy-preserving tools aimed at mitigating threats on conventional platforms. This paper aims to systematize knowledge on the landscape of VR privacy threats and countermeasures by proposing a comprehensive taxonomy of data attributes, protections, and adversaries based on the study of 68 collected publications. We complement our qualitative discussion with a statistical analysis of the risk associated with various data sources inherent to VR in consideration of the known attacks and defenses. By focusing on highlighting the clear outstanding opportunities, we hope to motivate and guide further research into this increasingly important field.
... For example, the VR metaverse is currently evolving out of the gaming and social media industry, while the AR metaverse is evolving out of the mobile phone industry. In both cases, the shift from flat media to immersive experiences is likely to transform marketing tactics, inspiring a wide range of new advertising methods while introducing many new risks to consumers that should be carefully considered [8][9][10]20]. ...
... For example, a user profiled as a sports fan of a particular age and income level might see someone walking past them down the street (virtual or augmented) wearing a jersey that promotes a high-end sports bar two blocks ahead. Because this is a targeted VPP, other people around them would not see the same content [6,8,10]. ...
Conference Paper
Full-text available
Over the next five to ten years, the metaverse is likely to transform how consumers interact with digital content, transitioning society from flat media viewed in the third person to immersive experiences engaged in the first person. This will greatly impact the marketing industry, transforming the basic tools, techniques, and tactics from flat artifacts such as images and videos, to immersive and interactive promotional experiences. In the metaverse, marketing campaigns will likely include extensive use of Virtual Product Placements (VPPs) and Virtual Spokespeople (VSPs). Such methods will be highly effective forms of advertising, for they will target users through natural, personal, and immersive means. At the same time, these methods can easily be used and abused in predatory ways. This paper reviews the most likely marketing techniques of the metaverse, outlines the potential risks to consumers, and makes recommendations for policymakers and business leaders that could protect the public.
... For these reasons, policymakers should consider aggressive and meaningful regulations that protect populations from abuse or misuse of interactive media technologies [15,16,[35][36][37]. For example, regulators could ban or highly restrict any use of AI that "closes the loop" around users in real time, establishing AI-powered feedback control systems that impart persuasion, coercion, or manipulation. ...
Conference Paper
Full-text available
Over the last 18 months, two human-computer interaction (HCI) technologies have rapidly come to mainstream markets, funded by massive investments from major corporations. The first area of advancement has been virtual and augmented worlds, now commonly called “The Metaverse.” The second area of advancement has been the foundational AI models that allow users to freely interact with computers through natural dialog. Commonly referred to as “Conversational AI,” this technology has advanced rapidly with the deployment of Large Language Models (LLMs). When combined, these two disciplines will enable users to hold conversations with realistic virtual agents. While this will unleash many positive applications, there is significant danger of abuse. Most significant is the potential deployment of real-time interactive experiences that are designed to persuade, coerce, or manipulate users as a form of AI-powered targeted influence. This issue has largely been overlooked by policymakers who have focused instead on traditional privacy, bias and surveillance risks. It is increasingly important for policymakers to appreciate that interactive influence campaigns can be deployed through AI-powered Virtual Spokespeople (VSPs) that look, speak, and act like authentic users but are designed to push the interests of third parties. Because this “AI Manipulation Problem” is unique to real-time interactive environments, it is presented in this paper in the context of Control Theory to help policymakers appreciate that regulations are likely needed to protect against closed-loop forms of influence, especially when Conversational AI is deployed. KEYWORDS: Virtual Reality, Augmented Reality, Mixed Reality, Conversational AI, Virtual Spokespeople, Epistemic Agency, AI Manipulation Problem, Metaverse Regulation, LLMs, Democracy
... I genuinely believe that the metaverse can be a positive technology for humanity. But if we don't protect against the downsides by crafting thoughtful regulation, it could challenge our most sacred personal freedoms including our basic capacity for free will [17][18][19]. ...
Conference Paper
Unless regulated, the metaverse could become one of the most dangerous tools of persuasion ever created. To explain why immersive technologies pose such a significant risk to consumers compared to previous forms of media, this article explores the issue from the engineering perspective of Control Theory.
Chapter
The metaverse can be described as the large-scale societal shift from flat media viewed in the third person to immersive media experienced in the first person. While there is nothing inherently dangerous about immersive media technologies such as virtual and augmented reality, many policymakers have raised concerns about the extreme surveillance capabilities that powerful metaverse platforms could wield over users. What is often overlooked, however, is how surveillance-related risks become amplified when platforms are allowed to simultaneously target users with promotionally altered experiences. When considered in the context of control theory, the pairing of real-time surveillance and real-time influence raises new concerns, as large metaverse platforms could become extremely efficient tools for deception, manipulation, and persuasion. For these reasons, regulation should be considered that limits the ability of metaverse platforms to impart real-time influence on users based on real-time surveillance.
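The control-theory framing in this abstract can be made concrete with a toy simulation: a platform that pairs real-time surveillance with real-time influence behaves like a feedback controller, observing a user's reaction, comparing it to a target response, and adjusting the stimulus to close the gap. The sketch below is purely illustrative (all names, the proportional-gain update, and the toy user model are assumptions, not anything described in the source):

```python
# Minimal sketch of a closed-loop influence system in control-theory terms.
# The "controller" observes a user's reaction (surveillance), compares it to
# a target response, and adjusts the promotional stimulus (influence).

def run_influence_loop(user_response, target=1.0, gain=0.5, steps=20):
    """Simulate a proportional feedback controller steering a user's
    observed reaction toward `target`. Returns the reaction history."""
    stimulus = 0.0
    history = []
    for _ in range(steps):
        observed = user_response(stimulus)   # real-time surveillance
        error = target - observed            # deviation from the goal
        stimulus += gain * error             # real-time influence update
        history.append(observed)
    return history

# A toy user whose reaction tracks the stimulus with some inertia.
state = {"belief": 0.0}

def toy_user(stimulus):
    state["belief"] += 0.3 * (stimulus - state["belief"])
    return state["belief"]

reactions = run_influence_loop(toy_user)
```

Even this crude loop steers the simulated user's reaction toward the target within a few iterations, which is the core of the concern: with feedback, influence becomes adaptive rather than static.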
Article
Full-text available
A maximalist, interconnected set of experiences straight out of science fiction, built on 3D virtual environments accessed through personal computing and augmented reality headsets: this world, known as the Metaverse, is the futuristic vision of the internet that technology giants are investing in. There has been some research on data privacy risks in the metaverse; however, detailed research on the cybersecurity risks of virtual reality platforms like the metaverse has not been performed. This research paper addresses this gap in understanding the various possible cybersecurity risks on metaverse platforms. This study seeks to understand the risks associated with the metaverse by describing the technologies supporting the metaverse platform and examining the inherent cybersecurity threats in each of these technologies. Further, the paper proposes a cybersecurity risk governance regulatory framework to mitigate these risks.
Technical Report
Full-text available
In 1992, hardware for the first interactive AR system literally fell from the sky. This document describes the development of that early AR system at Air Force Research Laboratory (AFRL) which for the first time allowed users to interact with a mixed reality of real and virtual objects. Known as the Virtual Fixtures platform, the system was built at Wright Patterson Air Force Base where a parachute accident had a pivotal role in helping to complete the system. KEYWORDS: Augmented Reality, Mixed Reality, Virtual Reality, AR, MR, VR, XR, History of Technology, Invention, Haptics, Virtual Fixtures, HCI
Conference Paper
Full-text available
Over the last thirty years, the immersive technologies of virtual reality (VR) and augmented reality (AR) have steadily advanced, enabling high fidelity experiences at consumer prices. Over the same period, networking speeds have increased dramatically, culminating in the deployment of 5G cellular networks. Combined, these advancements have greatly increased the prospects for widespread adoption of VR and AR worlds. Recently branded “the metaverse” by Facebook (now Meta) and other platform providers, major corporations have begun investing billions of dollars to deploy immersive environments aimed at mainstream activities from socializing and shopping to education and business. With the likelihood rising that metaverse platforms greatly impact society over the next decade, it is prudent to consider the risks and plan for meaningful regulation. This is especially true in light of the negative impacts that social media has had on society in recent years. The dangers of the metaverse are outlined herein along with proposals for regulation.
Article
Full-text available
Artificial intelligence (AI)–synthesized text, audio, image, and video are being weaponized for the purposes of nonconsensual intimate imagery, financial fraud, and disinformation campaigns. Our evaluation of the photorealism of AI-synthesized faces indicates that synthesis engines have passed through the uncanny valley and are capable of creating faces that are indistinguishable—and more trustworthy—than real faces.
Chapter
Full-text available
Three decades ago in a small lab at Wright Patterson Air Force Base, the first functional augmented reality (AR) system was formally tested on groups of human subjects, each of them experiencing for the first time, interactive virtual objects that were seamlessly merged into their perception of the real world. The system, known as the Virtual Fixtures platform, produced virtual overlays that felt physically real and were spatially registered with such 3D precision, they could be used to significantly boost human performance in dexterous real-world tasks. Now three decades later, the principal investigator of that early effort reflects on the lessons learned, the unexpected delays in achieving widespread adoption of AR, and the future prospects and dangers as the field goes mainstream. Key Words: Virtual Fixtures, Augmented Reality, Virtual Reality, Mixed Reality, AR, VR, MR, XR
Article
Full-text available
Political advertisers have access to increasingly sophisticated microtargeting techniques. One such technique is tailoring ads to the personality traits of citizens. Questions have been raised about the effectiveness of this political microtargeting (PMT) technique. In two experiments, we investigate the causal effects of personality-congruent political ads. In Study 1, we first assess participants' extraversion trait by means of their own text data (i.e., by using a personality profiling algorithm), and in a second phase, target them with either a personality-congruent or incongruent political ad. In Study 2, we followed the same protocol, but instead targeted participants with emotionally charged congruent ads, to establish whether PMT can be effective on an affect-based level. The results show that citizens are more strongly persuaded by political ads that match their own personality traits. These findings contribute to a relevant and timely academic and societal debate.
Article
Full-text available
This work is devoted to optimizing a method for recognizing seven basic emotions (joy, sadness, fear, anger, surprise, disgust, and neutral) from human facial expressions. Existing approaches to constructing emotion recognition systems based on human facial expressions were analyzed, with a focus on the advantages of schemes based on neural networks. We propose a method for constructing an emotion recognition system based on a neural network, which includes an optimized algorithm for generating training and test samples, as well as a procedure for determining a rational number of neural network layers.
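One step this abstract mentions, generating training and test samples, can be illustrated with a stratified split that preserves the balance of the seven emotion classes. This is a hedged sketch of one common approach, not the authors' actual algorithm; the function name, the toy feature vectors, and the split ratio are all hypothetical:

```python
import random

EMOTIONS = ["joy", "sadness", "fear", "anger", "surprise", "disgust", "neutral"]

def stratified_split(samples, test_fraction=0.2, seed=42):
    """Split (features, label) pairs so every emotion class contributes the
    same train/test proportion -- one simple way to generate balanced
    training and test samples for a classifier."""
    rng = random.Random(seed)
    by_label = {e: [] for e in EMOTIONS}
    for features, label in samples:
        by_label[label].append((features, label))
    train, test = [], []
    for group in by_label.values():
        rng.shuffle(group)
        cut = int(len(group) * test_fraction)
        test.extend(group[:cut])
        train.extend(group[cut:])
    return train, test

# Toy data: 10 dummy one-dimensional feature vectors per emotion.
data = [([i / 10.0], e) for e in EMOTIONS for i in range(10)]
train, test = stratified_split(data)
```

Stratifying matters here because with only seven classes, a naive random split can easily leave one emotion underrepresented in the test set, skewing the evaluation of the classifier.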
Article
IEEE SPECTRUM History of Technology: This article is about the early days of augmented reality (mixed reality) and the creation of the Virtual Fixtures Platform at the Air Force Research Laboratory (Armstrong Labs). Developed at Wright Patterson Air Force Base in 1992, it was the first augmented reality system to enable users to interact simultaneously with real and virtual objects in 3D space. Designed to enhance human performance in skilled manual tasks, the platform was not just visual but included 3D haptics and 3D spatial audio from both real and virtual worlds.
Article
Disinformation and other forms of manipulative, antidemocratic communication have emerged as a problem for Internet policy. While such operations are not limited to electoral politics, efforts to influence and disrupt elections have created significant concerns. Data-driven digital advertising has played a key role in facilitating political manipulation campaigns. Rather than standalone incidents, manipulation operations reflect systemic issues within digital advertising markets and infrastructures. Policy responses must include approaches that consider digital advertising platforms and the strategic communications capacities they enable. At their root, these systems are designed to facilitate asymmetrical relationships of influence.
Article
Covert advertisements, or those that utilize the guise and delivery mechanisms of familiar non-advertising formats, differ from more direct forms of advertising in several ways that are important for understanding users’ psychological responses. Research across various covert advertising formats, including sponsored editorial content, other native advertising formats, and product placement, has shown that variation in consumers’ persuasive responses to such messages is largely driven by whether they recognize that such messages are advertising at all. After reviewing the findings of empirical research into covert advertising effects, we present a model of covert advertising recognition effects (CARE) that outlines potential antecedents and processes underlying the recognition of covert advertising, and maps several pathways to persuasive outcomes that are contingent on advertising recognition and perceptions of the information in, and presentation of, the advertisement itself.