Regulating the Metaverse,
a Blueprint for the Future
Louis B. Rosenberg
Unanimous AI, San Francisco, CA, USA
Louis@Unanimous.ai
Abstract. The core Immersive Media (IM) technologies of Virtual Reality (VR)
and Augmented Reality (AR) have steadily advanced over the last thirty years,
enabling high fidelity experiences at consumer prices. Over the same period, net-
working speeds have increased dramatically, culminating in the deployment of
5G networks. Combined, these advancements greatly increase the prospects for
widespread adoption of virtual and augmented worlds. Recently branded "the
metaverse" by Meta and other large platforms, these environments are attracting
billions of dollars in corporate investment to deploy immersive worlds that target
mainstream activities from socializing and shopping to education and business.
With the prospect that corporate-controlled metaverse environments will proliferate
throughout society over the next decade, it is important to consider the risks to
consumers and plan for meaningful
regulation. This is especially true in light of the unexpected negative impact that
social media platforms have had on society in recent years. The dangers of the
metaverse are outlined herein along with proposals for sensible regulation.
Keywords: Metaverse · Augmented reality · Virtual reality · Extended reality ·
Mixed reality · XR · VR · AR · MR · Technology policy · Regulation
1 Background: Regulating Media
To provide a legal and philosophical basis for regulating the metaverse, it is helpful to first
consider the arguments made for regulating social media, as the metaverse can be viewed
as an evolution of the same industry. Assessing media regulation, Yale Law professor
Jack Balkin describes social media companies as “key institutions in the twenty-first
century digital public sphere,” and explains that the public sphere “doesn’t work properly
without trusted and trustworthy institutions.” He further argues the public sphere created
by social media is a successor to the public sphere created by print and broadcast media,
which has been regulated by industry norms and government oversight for generations
[1]. At the same time, Balkin and other scholars express caution about government
overreach, as aggressive regulation could be damaging to free speech and other rights,
with some experts pushing for industrywide self-regulation as a means of reducing the
level of necessary government oversight [2,3].
As we look beyond social media to emerging technologies such as virtual reality and
augmented reality, similar principles apply. In fact, the impact of the metaverse on the
© Springer Nature Switzerland AG 2022
L. T. De Paolis et al. (Eds.): XR Salento 2022, LNCS 13445, pp. 1–10, 2022.
https://doi.org/10.1007/978-3-031-15546-8_23
public sphere is likely to be far more encompassing. In October 2021, Meta CEO
Mark Zuckerberg wrote that "in the metaverse, you'll be able to do almost anything you
can imagine — get together with friends and family, work, learn, play, shop, create.”
Clearly the metaverse, when broadly deployed by major corporations, aims to take on
the role of "a public sphere" as much as, if not more than, today's social media [4]. This
transition may happen very quickly, as Meta is currently investing $10B per year with
the stated goal that “within the next decade, the metaverse will reach a billion people,
host hundreds of billions of dollars of digital commerce, and support jobs for millions
of creators and developers.” And Meta is not the only major corporation aggressively
pursuing this vision – Apple, Microsoft, Google, Sony, Samsung, and Snap are just a
few of the major players that have announced significant efforts [5,6].
With Big Tech investing hundreds of billions of dollars in VR and AR products and
services, it is reasonable to predict that the metaverse will impact the lives of billions
of people within the next decade, driving a global transition from flat media to immer-
sive media as the primary means by which users access digital content [7]. This will
greatly impact the public sphere, giving even more control to platform providers than
current technologies. With the industry heading in this direction, it’s prudent to assess
the potential dangers of the metaverse and propose viable regulatory solutions [8].
2 The Metaverse: Potential Dangers
At the highest level, “the metaverse” can be described as the societal transition from
the current information ecosystem based on flat media viewed in the third person to a
new ecosystem rooted in immersive media experienced in the first person. It is not the
technology itself that is dangerous to consumers, but the fact that metaverse platforms
are likely to be controlled by large corporations that implement aggressive business
tactics similar to those used in social media. Before describing the potential dangers of
corporate controlled metaverse platforms, it’s worth defining “metaverse” along with
the two primary forms of immersive media, “virtual reality” and “augmented reality”:
Virtual Reality (VR) is an immersive and interactive simulated environment that
is experienced in the first person and provides a strong sense of presence to the
user [27].
Augmented Reality (AR) is an immersive and interactive environment in which
virtual content is spatially registered to the real world and experienced in the first
person, providing a strong sense of presence in a combined real/virtual space [9,
28].
Mixed Reality (MR) is commonly used as a synonym for augmented reality,
referring to mixed environments of real/virtual content. Extended Reality (XR)
is commonly used as a catch-all for all forms of immersive media [9,28].
Metaverse is a persistent and immersive simulated world that is experienced in
the first person by large groups of simultaneous users who share a strong sense
of mutual presence. It can be fully virtual (i.e. a Virtual Metaverse) or it can be
layers of virtual content overlaid on the real world (i.e. an Augmented Metaverse)
[9,27,28].
With these definitions in place, we can express that a virtual metaverse is a fully
simulated world in which users are represented by graphical avatars under their own
individual control (see Fig. 1). Conversely, in an augmented metaverse the human par-
ticipants are generally not avatar-based but interact directly with a real world that is
embellished with virtual content (see Fig. 2).
Fig. 1. Virtual Metaverse example – The Nth Floor (Accenture)
Fig. 2. Augmented Metaverse example – (Keiichi Matsuda)
Some argue that a metaverse must also include rules of conduct and a fully functional
economic system, though that remains open for debate [10]. Others argue that a meta-
verse must be interoperable, enabling items and transactions to be exchanged among
multiple virtual worlds. While it is likely that many virtual and augmented worlds will
enable such interoperability, the definition of metaverse should allow for self-contained
worlds, as it is likely that many platforms will be independent.
Whether a virtual metaverse or an augmented metaverse, it’s not the technology
itself that poses a risk to society, but the fact that the infrastructure required to enable
immersive worlds will give powerful corporations the ability to monitor and mediate
intimate aspects of our lives, from what products, services, and information consumers
are exposed to, to what experiences they have throughout their day and who they are
having those experiences with. On the surface, this may sound similar to the impact
of today’s social media, which also monitors and mediates user experiences, but in the
metaverse the corporate intrusion could be far more extensive.
To explore the potential dangers in a structured way, it is helpful to describe what has
been called The Three M's of the Metaverse, as the core social problems relate to the
inherent ability of metaverse platforms to monitor users, manipulate users, and monetize
users [11,27]. These three risk-categories are outlined as follows:
(1) Monitoring Consumers in the Metaverse: Over the last two decades, technology
companies have made a science of tracking and characterizing users on their platforms,
as it enables the sale of targeted advertising [12]. Such targeting has been a boon for
advertisers and a windfall for media platforms, resulting in some of the most valuable
corporations in history. Unfortunately, such targeting has exploited consumers, reduced
personal privacy, and made social media a polarizing force by allowing third parties
to deploy customized messaging that is skillfully aimed at very specific demographic
groups. This tactic has had the widespread effect of amplifying existing biases and
preconceptions in populations, radicalizing political views and spreading misinformation
[13].
In the metaverse, these problems are likely to get significantly worse [14]. That’s
because the technology will not just track where users click, but where they go, who
they’re with, what they do, what they look at, even how long their gaze lingers. Immer-
sive platforms will also track facial expressions, vocal inflections, and vital signs, while
intelligent algorithms use such data to predict each person’s real-time emotional state
[15–17]. Tracking will also include real-time monitoring of user gait and posture, assess-
ing when users slow down to browse products or services. Metaverse platforms will even
monitor manual reach, assessing when users grab for objects (both real and virtual) and
tracking how long they hold the objects to investigate. This will be especially invasive
in the augmented metaverse in which user gaze, gait, and reach will be monitored in the
real world, for example while shopping in augmented physical stores. This may sound
extreme, but real-time tracking of manual interactions with real objects goes back to
the first interactive augmented reality system, the Virtual Fixtures platform, developed
in 1992 at the U.S. Air Force's Armstrong Laboratory [7,24,26].
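The tracking described above would feed recognition algorithms that map raw signals to inferred affect. As a toy sketch only (not any platform's actual method), the following shows how combined signals such as heart rate, gaze dwell time, and vocal pitch might be mapped to a coarse emotional state; all signal names and thresholds are hypothetical, and a real system would use trained models rather than hand-set rules:

```python
from dataclasses import dataclass

@dataclass
class BiosignalFrame:
    heart_rate_bpm: float      # e.g. from a smartwatch or earbud sensor
    gaze_dwell_s: float        # how long gaze lingered on the current object
    voice_pitch_delta: float   # deviation from the user's baseline pitch

def infer_emotional_state(frame: BiosignalFrame) -> str:
    """Crude rule-based stand-in for an emotion-recognition model."""
    if frame.heart_rate_bpm > 100 and frame.voice_pitch_delta > 0.2:
        return "aroused/stressed"
    if frame.gaze_dwell_s > 3.0:
        return "interested"
    return "neutral"
```

The point of the sketch is that no single signal is revealing on its own; it is the fusion of several streams, sampled continuously, that yields the real-time emotional profile discussed above.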
These various forms of tracking, when taken together, suggest that the platform
providers controlling the metaverse will not just monitor how their users act, but how
they react and interact, profiling their behaviors and responses at far deeper levels than
has been possible in traditional media platforms. Of course, the danger is not merely
that these personal parameters can be tracked and stored, but that advertisers and other
paying third parties will be able to use this invasive data to manipulate consumers with
targeted content more effectively than ever before.
(2) Manipulating Experiences in the Metaverse: From the early days of radio and
television, advertisers have skillfully influenced public opinion on topics ranging from
consumer products to political beliefs. With the advent of social media, custom targeted
advertising has greatly increased the persuasive ability of promotional messaging [18,
19]. In the metaverse, such targeting will get far more personal and the content will
get much harder to resist [20]. That’s because in today’s flat media environments like
Facebook, Instagram, TikTok and Twitter, consumers are generally aware when they are
being targeted by an advertisement and can muster a healthy dose of skepticism [21]. But
in the metaverse, consumers won’t be targeted with simple pop-up ads or promo-videos.
Instead, consumers will have immersive content injected into their world such as virtual
people, products, and activities that seem as real as everything else around them.
Virtual Product Placements (VPPs) are likely to become a widespread advertising
tactic within the metaverse, being applied in both virtual and augmented worlds. We
can define a VPP as a simulated product, service, or activity injected into a virtual
or augmented world on behalf of a paying sponsor such that it appears as a natural
element of the ambient environment. Such advertising can be quite effective because
users can easily mistake a VPP for an organic part of the world that was serendipitously
encountered rather than a targeted piece of promotional material that was deliberately
inserted into the world for that specific user to experience.
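As a hedged illustration of the definition above, the selection step behind a VPP might look like the following sketch, where the profile fields, sponsor names, and scoring rule are all hypothetical and stand in for whatever matching logic a platform actually deploys:

```python
def select_vpp(user_interests, placements):
    """Pick the sponsored placement whose tags best match the user's
    tracked interest weights, or None if nothing matches at all.

    user_interests: dict mapping interest tags to weights, e.g. {"soda": 0.9}
    placements: list of dicts with "sponsor", "tags", and "bid" keys
    """
    def score(p):
        # Weight the sponsor's bid by how well its tags match the profile.
        return sum(user_interests.get(tag, 0.0) for tag in p["tags"]) * p["bid"]

    best = max(placements, key=score, default=None)
    return best if best is not None and score(best) > 0 else None
```

Once selected, the placement would be rendered as an ordinary-looking element of the scene, which is precisely why users cannot tell it apart from organic content.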
For example, imagine walking down the street in a virtual or augmented world. In
both cases, the platform provider will be able to track where you are in real-time, how
fast you are moving, and what your gaze is aimed at. The platform provider will also
have access to a database about your behaviors and interests, values and affiliations, and
of course, your shopping history. In social media, this personal information would be
used to target you with traditional advertising. In the metaverse, platform providers will
be able to manipulate your real-time experiences. This might include seeing particular
cars on the virtual streets, particular brands in store windows, or even include simulated
people drinking a particular soft drink as they walk past you on the sidewalk. You might
assume that everyone around you is seeing the same thing, but that is not the case – these
promotional experiences could have been injected exclusively for you by the platform
provider on behalf of a paying third party.
Virtual People (Veeple): In the metaverse, immersive promotional content will go
beyond inanimate objects to AI-driven simulated people that look and act like any other
user but are computer generated spokespeople controlled by AI engines programmed
to pursue a persuasive agenda. These virtual people will be placed in the metaverse,
targeting specific users for either (i) passive observation or (ii) direct engagement. As a
passive example, a targeted user might observe two people having a conversation about
a product, service, or idea. The targeted user may assume that the people are other
metaverse users like themselves, not realizing that a third party injected those virtual
people into their world as a subtle form of targeted advertising. As an active example, a
targeted user may be approached by a virtual person that engages them in promotional
conversation. The interaction could be so authentic, the targeted user would not realize
it’s an AI-driven avatar with a persuasive agenda.
For both active and passive uses, these AI agents will have access to profile data
collected about the targeted user, including their preferences, interests, and a historical
record of their reactions to prior promotional engagements. These AI agents will also
have access to real-time emotional data generated by capturing facial expressions, vocal
inflections, and vital signs of the targeted users. This will enable the AI agent to adjust
conversational tactics in real-time for optimal persuasion. Even the manner in which
these simulated people appear will be crafted for maximum persuasion—their gender,
hair color, eye color, clothing style, voice and mannerisms—will be custom generated
by algorithms that predict which sets of features are most likely to influence the targeted
user based on previous interactions and behaviors (see Fig. 3) [11,23,27].
Fig. 3. Virtual human used in social trust research (2019) [23]
In the past, researchers have expressed doubt that computer generated avatars could
successfully fool consumers, but recent research suggests otherwise. In a 2022 study,
researchers from Lancaster University and UC Berkeley demonstrated that when virtual
people are created using generative adversarial networks (GANs), the resulting faces are
indistinguishable from real humans to average consumers. Even more surprising, they
determined that average consumers perceive virtual people as “more trustworthy” than
real people [29]. This suggests that in the not-so-distant future, advertisers will prefer
AI-driven artificial humans as their promotional representatives.
(3) Monetizing Users in the Metaverse: It is worth acknowledging that platform
providers are not charities but commercial entities that require substantial revenue to
support the interests of their employees and shareholders. And because the public has
resisted paying subscriptions for access to online platforms, the most common industry
model is to provide free access in exchange for widespread advertising, focusing largely
on “targeted ads” that can be precisely delivered based on the unique behaviors and
interests of individual users. It is the business model of targeted advertising that has
driven platform providers to pursue extensive tracking and profiling of their users. This
suggests that one way to reduce these risks in future metaverse platforms is to shift from
ad-based to subscription-based business models [11,27].
3 The Metaverse: Non-regulatory Solutions
As described above, shifting from ad-based to subscription-based models could reduce
the motivation that platform providers have to profile and target their userbase. This is
only viable if consumers are willing to pay directly for access, so we cannot assume
this approach will become widespread anytime soon. We also can't assume
that the industry will adopt trusted norms and practices that eliminate abuses without
government oversight. And finally, we can’t expect consumers to simply opt-out of the
metaverse if they are uncomfortable with extensive tracking and targeting. This is because
metaverse platforms will become a primary access point to digital content. Similar to
how consumers cannot opt out of using the internet, opting out of the metaverse would
mean missing out on critical information and services [8,11,27].
4 The Metaverse: Regulatory Solutions
Assuming the problems are not solved by industry norms or by changes in business
models, we will need some level of government regulation of the metaverse to prevent
exploitation of consumers. Of course, the question is – what needs to be regulated? A
number of ideas are presented below for consideration:
4a – Restrict User Monitoring: As described above, platform providers will have
access to everything their users do, say, touch and see inside the metaverse. It may
be impossible to prevent this, as tracking is required for real-time simulation of virtual
interactions. That said, platform providers should not be allowed to store this data for
more than the short periods of time required to mediate whatever virtual experience is
being generated. This would greatly reduce the degree to which providers can profile
user behaviors over time. In addition, providers should be required to inform the public
as to what is being tracked and how long it is retained. For example, if a platform is
monitoring the direction and duration of a user's gaze as they walk through a virtual or
augmented world, that user should be overtly notified at the time of tracking.
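The retention limit proposed here could be enforced with something as simple as a time-to-live purge on tracking records. A minimal sketch, assuming an illustrative 30-second window (the actual permitted window would be set by regulation, not hard-coded):

```python
import time

RETENTION_SECONDS = 30.0  # illustrative value only

class TrackingBuffer:
    """Holds tracking records just long enough to mediate the live
    experience, then discards them rather than archiving them."""

    def __init__(self):
        self._records = []  # list of (timestamp, payload) pairs

    def record(self, payload, now=None):
        self._records.append((now if now is not None else time.time(), payload))

    def purge_expired(self, now=None):
        now = now if now is not None else time.time()
        self._records = [(t, p) for (t, p) in self._records
                         if now - t <= RETENTION_SECONDS]

    def __len__(self):
        return len(self._records)
```

The regulatory substance is in what the class does not do: there is no path by which a record older than the window survives to be profiled.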
4b – Restrict Emotional Analysis: As outlined above, the metaverse will likely use
advertising algorithms that monitor personal features such as facial expressions, vocal
inflections, posture, and vital signs including heart rate, respiration rate, blood pressure,
and galvanic skin response captured through smart-watches and other wearable devices
such as earbuds. Unless regulated, this invasive physiological data will be used
to generate emotional profiles and optimize marketing messages, enabling AI agents to
adjust their conversational strategy in real time. Regulation should be crafted to limit
the scope of such advertising. In addition, users should be overtly informed whenever
these personal qualities are being tracked and stored for promotional purposes.
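One way to implement such a restriction is an explicit allow-list that strips banned physiological signals before any tracked data reaches an advertising or persuasion pipeline. A minimal sketch with hypothetical signal names:

```python
# Signals a regulator might permit for ad targeting (illustrative only).
ADVERTISING_ALLOWED = {"gaze_target", "dwell_time"}

def filter_for_advertising(signals):
    """Return only allow-listed keys; physiological signals such as
    heart rate or galvanic skin response are silently dropped."""
    return {k: v for k, v in signals.items() if k in ADVERTISING_ALLOWED}
```

An allow-list is preferable to a ban-list here: any new sensor a platform adds is excluded from advertising by default rather than permitted until regulators catch up.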
4c – Regulate Virtual Product Placements: Inside the metaverse, advertisers will
move away from traditional marketing methods like pop-up ads and promo-videos,
instead leveraging the immersive features of the technology. This will include targeting
users with promotional artifacts and activities injected into their environment that seem
authentic. In the metaverse, a targeted user might observe a person walking down the
street drinking a particular brand of soft drink. That observation will influence the tar-
geted user, consciously or subconsciously, especially if they notice many people drinking
that same product throughout their day. Users could easily believe such virtual experi-
ences were authentic serendipitous encounters that reflect the popularity of the particular
drink in their community, when really it was a targeted experience injected into their
world by an unknown third party. And this type of manipulation can extend beyond soft
drinks to biased political messaging, disinformation, and other socially destabilizing
forms of agenda-driven promotion.
Because it may be impossible to distinguish between authentic and manufactured
experiences in the metaverse, regulation is needed to protect consumers. At a minimum,
platform providers should be required to inform users of all product placement in virtual
or augmented worlds, ensuring that targeted promotional content is not misinterpreted as
natural serendipitous encounters. In fact, product placements should be visually distinct
from other items in the metaverse, enabling users to easily identify when an artifact has
been placed in the world for promotional purposes versus organic content. In addition,
platform providers should be required to inform users who sponsored each virtual product
placement. Such transparency will greatly protect consumers.
4d – Regulate Virtual People: As described above, the most manipulative form of per-
suasion in the metaverse is likely to be through agenda-driven artificial agents that engage
users in promotional conversation. These virtual people will look and sound like other
users in the metaverse, whether the virtual world uses cartoon avatars or photorealistic
human representations [14,22]. Regardless of fidelity, if consumers can’t distinguish
between real users and artificial agents, they can be misled into believing they are having
a natural encounter when really, it’s a targeted promotional interaction.
To protect consumers, platform providers should be required to overtly inform users
whenever they are engaged by conversational agents controlled by AI engines. This
becomes even more important when the algorithms have access to user behavioral profiles
and real-time emotional data. At a minimum, platforms should be required to indicate,
through overt visual and audio cues, that a user is interacting with an artificial agent and
further indicate if the agent can perform emotional analysis. In addition, vital signs
such as blood pressure and heart rate should be banned from use in AI-driven
conversational advertising.
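The disclosure requirement in 4d could be realized as a mandatory envelope around every agent message, declaring AI control, emotional-analysis capability, and sponsorship before the message reaches the user. A sketch under those assumptions (all field names are illustrative, not a proposed standard):

```python
from dataclasses import dataclass

@dataclass
class AgentMessage:
    text: str
    is_ai_agent: bool
    uses_emotional_analysis: bool
    sponsor: str  # who paid for this promotional interaction

def disclose(msg: AgentMessage) -> str:
    """Render the overt cue a compliant platform would show the user."""
    if not msg.is_ai_agent:
        return msg.text
    badge = "[AI agent"
    if msg.uses_emotional_analysis:
        badge += ", emotion-aware"
    badge += f", sponsored by {msg.sponsor}] "
    return badge + msg.text
```

In an immersive world the "badge" would be a persistent visual or audio cue on the avatar itself, but the principle is the same: the disclosure is attached by the platform, not left to the advertiser's discretion.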
5 The Metaverse: Is It Worth It?
In 2021, the Aspen Institute published an 80-page report indicating that social media
platforms have become a "force multiplier for exacerbating our worst problems as
a society" [25]. As described above, metaverse platforms could create similar but more
extreme problems, enabling more invasive forms of profiling and more persuasive meth-
ods of targeting [27]. Despite these risks, the metaverse has enormous potential to unleash
creativity and productivity, expanding what it means to be human. To enable these ben-
efits while protecting consumers, government and industry actors should push quickly
for meaningful and aggressive regulation.
References
1. Balkin, J.: How to regulate (and not regulate) social media. J. Free Speech Law 71, 1–31
(2021). (Knight Institute Occasional Paper Series, No. 1, 25 March 2020, Yale Law School,
Public Law Research Paper)
2. Weissmann, S.: How not to regulate social media. The New Atlantis. 58, 58–64 (2019). https://
www.jstor.org/stable/26609117
3. Cusumano, M., Gawer, A., Yoffie, D.: Social media companies should self-regulate. Now.
Harvard Business Review (2021)
4. Zuckerberg, M.: Founder's Letter, 28 October 2021. https://about.fb.com/news/2021/10/founders-let
ter/
5. Strange, A.: Facebook planted the idea of the metaverse but Apple can actually populate it.
Quartz, 29 November 2021. https://qz.com/2095986/facebook-is-marketing-the-metaverse-
but-apple-can-make-it-real/
6. Burke, E.: Tim Cook, AR will pervade our entire lives. Silicon Republic, January 2020
7. Rosenberg, L.B.: Augmented reality: reflections at thirty years. In: Arai, K. (ed.) FTC 2021.
LNNS, vol. 358, pp. 1–11. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-899
06-6_1
8. Rosenberg, L.: The Metaverse needs Aggressive Regulation. VentureBeat Magazine, 4
December 2021. https://venturebeat.com/2021/11/30/the-power-of-community-3-ways-sco
pely-keeps-players-engaged-entertained-and-connected/
9. Rosenberg, L.: VR vs. AR vs. MR vs. XR: What’s the difference? Big Think. 18 January
2022. https://bigthink.com/the-future/vr-ar-mr-xr-metaverse/
10. Park, G.: Silicon Valley is racing to build the next version of the Internet. The Washington
Post, 17 April 2020. https://www.washingtonpost.com/video-games/2020/04/17/fortnite-met
averse-new-internet/
11. Rosenberg, L.: Fixing the Metaverse: Augmented reality pioneer shares ideas for avoiding
dystopia. Big Think; 9 December 2021. https://bigthink.com/the-future/metaverse-dystopia/
12. Tucker, C.: The economics of advertising and privacy. Int. J. Indust. Organiz. 30(3), 326–329
(2012). ISSN 0167–7187
13. Crain, M., Nadler, A.: Political Manipulation and Internet Advertising Infrastructure. J. Inf.
Policy. 9, 370–410 (2019). https://doi.org/10.5325/jinfopoli.9.2019.0370
14. Rosenberg, L.: Metaverse: Augmented reality pioneer warns it could be far worse than social
media. Big Think. 6 November 2021. https://bigthink.com/the-future/metaverse-augmented-
reality-danger/
15. Ivanova, E., Borzunov, G.: Optimization of machine learning algorithm of emotion recognition
in terms of human facial expressions. Proc. Comput. Sci. 169, 244–248 (2020)
16. van den Broek, E.L., Lisý, V., Janssen, J.H., Westerink, J.H.D.M., Schut, M.H., Tuinenbrei-
jer, K.: Affective man-machine interface: unveiling human emotions through biosignals. In:
Fred, A., Filipe, J., Gamboa, H. (eds.) Biomedical Engineering Systems and Technologies.
CCIS, vol. 52, pp. 21–47. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-
11721-3_2
17. Boz, H., Kose, U.: Emotion extraction from facial expressions by using artificial intelligence
techniques. BRAIN. Broad Res. Artif. Intell. Neurosci. 9(1), 5–16 (2018). ISSN 2067–3957
18. Zarouali, B., et al.: Using a personality-profiling algorithm to investigate political microtar-
geting: assessing the persuasion effects of personality-tailored ads on social media. Commun.
Res. 0093650220961965 (2020)
19. Van Reijmersdal, E.A., et al.: Processes and effects of targeted online advertising among
children. Int. J. Advert. 36(3), 396–414 (2017)
20. Hirsh, J.B., Kang, S.K., Bodenhausen, G.V.: Personalized persuasion: tailoring persuasive
appeals to recipients’ personality traits. Psychol. Sci. 23(6), 578–581 (2012)
21. Wojdynski, B.W., Evans, N.J.: The covert advertising recognition and effects (CARE) model:
processes of persuasion in native advertising and other masked formats. Int. J. Advert. 39(1),
4–31 (2020)
22. Rosenberg, L.: The Metaverse will be filled with Elves. TechCrunch, 12 January 2022. https://
techcrunch.com/2022/01/12/the-metaverse-will-be-filled-with-elves/
23. Zibrek, K., Martin, S., McDonnell, R.: Is Photorealism Important for Perception of Expressive
Virtual Humans in Virtual Reality? ACM Trans. Appl. Percept. 16(3), 19 (2019). https://doi.
org/10.1145/3349609
24. Rosenberg, L.: The Use of Virtual Fixtures as Perceptual Overlays to Enhance Operator
Performance in Remote Environments. Technical report AL-TR-0089, USAF Armstrong
Laboratory, Wright-Patterson AFB OH (1992)
25. Commission on Information Disorder Final Report, Aspen Institute, November 2021. https://
www.aspeninstitute.org/publications/commission-on-information-disorder-final-report/Asp
enDigital
26. Rosenberg, L.: How a Parachute Accident Helped Jump-start Augmented Reality. IEEE
Spectrum, 7 April 2022. https://spectrum.ieee.org/history-of-augmented-reality
27. Rosenberg, L.: Regulation of the metaverse: a roadmap. In: The 6th International Conference
on Virtual and Augmented Reality Simulations (ICVARS 2022), 25–27 March 2022, Brisbane
Australia (2022)
28. Metaverse 101: Defining the Key Components. VentureBeat, 5 February 2022. https://ventur
ebeat.com/2022/02/05/metaverse-101-defining-the-key-components/
29. Nightingale, S.J., Farid, H.: AI-synthesized faces are indistinguishable from real faces and
more trustworthy. Proc. Natl. Acad. Sci. 119(8) (2022).
https://doi.org/10.1073/pnas.2120481119