Travel with Wander in the Metaverse: An AI
chatbot to Visit the Future Earth
Yuqian Sun1*, Ying Xu2, Chenhang Cheng, Yihua Li3, Chang Hee Lee4, Ali Asadipour1
1Computer Science Research Centre, Royal College of Art, London, UK
2Department of Industrial Design, School of Art and Design, Wuhan University of Technology, Wuhan, China
3Department of Product Design, Donghua University, Shanghai, China
4Affective Systems and Cognition Lab, Industrial Design Department,
College of Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea
Email: yuqiansun@network.rca.ac.uk, ali.asadipour@rca.ac.uk
Abstract—We developed Wander[001] as an experiment to discuss several visions toward the metaverse: through crowd contribution, how an AI agent can become a highly accessible storyteller, and how to link the virtual and physical world through AI-generated content (AIGC). In this artwork, we implement a hybrid AIGC and user-generated content (UGC) system to facilitate a narrative AI chatbot, Wander, that produces interactive fiction through knowledge graphs with text messages input by users on instant messaging social platforms. On Discord and WeChat, Wander can generate science-fiction-style travelogues about the future earth, including text, style-transferred images and global coordinates (GPS) based on real-world locations (e.g. Paris). The crowd interactions with Wander are visualised on an interactive globe map in real time, documenting the players' asynchronous contributions to exploring the speculative future earth. This paper presents Wander's concept, development and user study to demonstrate how people would interact with an AI agent in a narrative context for the future metaverse.
Index Terms—intelligent interactive system, co-creative AI,
chatbot, metaverse, gaming
I. INTRODUCTION
Metaverse, a term and concept combining 'meta' and 'verse', has been the focus of much attention and discussion in academia and industry. Even though it has not yet reached a coherent definition, the metaverse is generally regarded as a shared, immersive 3D world built with emerging technologies, integrating the real world with the virtual world [1] [2] [3]. Related AI technologies, including algorithms and technical aspects, are considered the foundation of the development of the metaverse [4] [5], assisting with content generation and user experience. While industry and academia are still articulating the definition and structure of the metaverse, we want to raise awareness of several aspects of its development.
Lee et al.'s description of metaverse development as a 'digital twins-native continuum' [1] clearly shows the duality of the metaverse as the 'co-existence of physical–virtual reality'. This description also points out the limited connectivity between the physical and virtual worlds. While previous research on how the real and virtual worlds merge has mostly focused on the technology infrastructure of holistic 3D worlds (for example, through extended reality (XR) technology), we use the very early category of the metaverse, text-based interactive games [6], to draft a futuristic process in which existing knowledge forms a future earth through human-AI co-creation.
Fig. 1. (Top) Concept art NFTs (non-fungible tokens) for Wander. (Bottom) Character design of Wander.
Regarding AI agents as messengers between real life and the fictional world, we created Wander[001]1 with a hybrid AIGC and UGC system. This work presents a chatbot called Wander (Fig. 2), a female android (Fig. 1) who travels the future earth, using modern instant messaging (IM) software, Discord and WeChat, as communication terminals. Each time a participant sends a location message, Wander generates a sci-fi interactive travelogue. The travelogues are visualised in real time on an interactive map that is updated with participants' data. The map shows the results of this hybrid UGC and AIGC system, bringing an asynchronous, crowd-sourced interaction that contributes to a chronicle of the future earth.
1Project website: https://www.wander001.com
Fig. 2. Wander's two types of command: 'Visit' and 'Action'. (A) presents Wander's feedback, which contains environmental descriptions, geographic coordinates and photos, and (B) demonstrates the co-creative story based on the commands players send.
We discuss world-building based on real-world knowledge
through AI-assisted crowd interactions. All fictional content,
including text and images, is generated in real time based
on a Google Knowledge Graph and story generation model.
Each trip is like a text-based adventure, but there are no fixed
choices, and the adventure can be easily accessed through text
messages. The training data, the interaction system and the
images all come from reality and existing knowledge. Thus, at a higher level, the world here can be seen differently through the AI's understanding, in the spirit of Shklovsky's defamiliarisation technique: to 'impart the sensation of things as they are perceived and not as they are known' [7].
The main contributions of this work are as follows:
(1) We designed a prototype interactive system that imports users' input and real-world knowledge to procedurally generate human-AI co-creative playable stories.
(2) We demonstrated a hybrid system of UGC and AIGC through a chatbot that can be easily accessed with natural-language text messages on familiar and commonly available IM software (WeChat and Discord).
(3) We visualised players' asynchronous participation on a globe-map website, forming a decentralised future earth in a metaverse that mixes UGC and AIGC.
This paper, focusing on the WeChat version, details our
perspectives, techniques, and processes in developing this
work. It includes the feedback we gathered from participants
to understand how people would interact with an AI agent in a
narrative context and how the system contributes to human-AI
interactions for the future metaverse.
II. MOTIVATIONS
A. Experimental world building with crowd interactions
Many UGC-driven 3D projects, such as Animal Crossing2, where people and virtual villagers live together, and CryptoVoxels3, where players can build assets and buildings with voxels, have been treated as cutting-edge examples of the metaverse. Beyond that, some online projects have received less attention because they lack 3D environments, but they in fact reflect the process of world-building in the metaverse: co-creation and certain perspectives on reality. One example is Ai Weiwei and Olafur Eliasson's work Moon [8], whose website becomes an ever-changing and bizarre monument by connecting people through a space of imagination. Another key example is the Earth 2050 website [9], which brings together experts' and professional artists' predictions of the future alongside selected public submissions and visualises them on a global map. However, it lacks real-time participation because experts and teams carefully curate all content, and participants are asked to write press-release-like articles in order to participate. In summary, these works question reality and ask how different worlds can co-exist in a virtual space, and they should be considered examples of metaverse development as 'creations in the realm of surrealistic cyberspace' [10].
B. AI’s character in the metaverse
In the projects and research mentioned above, the role of AI technology in virtual spaces is mainly limited to recommendation algorithms, content-creation assistants and scripted non-player characters (NPCs).
2https://www.animal-crossing.com/
3https://www.cryptovoxels.com/
However, with the development of affective computing, natural language processing (NLP) and other technologies, AI agents have taken on self-agency. In recent discussions, AI agents have been considered an 'indispensable part of our metaverse' [2] and even 'native species' [11]. These predictions are increasingly borne out in reality. In terms of emotional engagement, AI intelligence has gradually increased to the level of an intimate companion [12]. AI agents can already act as independent individuals, compete in a live survival livestream show [13], perform in a generative talk show [14] and even form a simulated society with humans4, with the ability to hold open-ended conversations with people. The core skill of AI agents in these emerging programs is their language ability. Unlike traditional rule-driven NPCs or those powered by drama-management systems like Facade [15], they can generate dynamic responses for different people, forming truly diverse content that meets diverse needs and is no longer limited to content pre-prepared by developers. On the one hand, their rich responses help to build social good in the metaverse and meet diverse requirements [2]. On the other hand, their data-processing capabilities can help humans better enjoy the metaverse, making these agents messengers connecting the real and virtual worlds.
C. Concept summary
We wanted to draft an experiment in which AI and humans can explore and document a virtual world together based on real-world information. This idea of a 'future traveller' took shape during the COVID-19 pandemic, when travel was limited. Our initial motivation was to let AI take us to see places we cannot reach, beyond time and space. Just like UC Berkeley's graduation ceremony in Minecraft5 and Travis Scott's concert in Fortnite6, the virtual world extends people's experience of daily life beyond the limits of geographical location and the pandemic, and everyone can easily participate in it. And, as in defamiliarisation theory [7], with the help of AI, people can see familiar yet somewhat different realities and leave their own traces. This is similar to the asynchronous multiplayer mode in games like Death Stranding7: players participate in and contribute to a shared building activity (e.g. building facilities that everyone can use), although not concurrently.
III. THE ARTWORK
A. Wander bot
The core of this project is the conversational agent Wander bot (Fig. 2). According to the pre-set background story, she is an android who wanders the future earth, contacting people in the twenty-first century through IMs.
Her journeys (travelogues) on the future earth are realised through two types of commands: 'Visit' and 'Action'. Each time a participant sends a location message with the 'Visit' command, Wander goes to that place in a random year between 3000 and 5000 CE and then sends back travel notes, including a GPS location, futuristic photos and an environmental description.
4https://island.xiaoice.com/
5https://www.minecraft.net/
6https://www.epicgames.com/fortnite
7https://store.epicgames.com/en-US/p/death-stranding
Then, with the 'Action' command, participants can ask Wander to explore the place in any way they like, such as searching for life, going into the ruins or finding other robots.
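A minimal sketch of this command routing is given below in Python. The message patterns, session fields and return values are illustrative assumptions for exposition, not the deployed implementation.

```python
import random
import re

# Assumed message patterns; the deployed bot also accepts Chinese variants.
VISIT_RE = re.compile(r"^Visit[::]\s*(.+)$", re.IGNORECASE)
ACTION_RE = re.compile(r"^Action[::]\s*(.+)$", re.IGNORECASE)


def handle_message(text, session):
    """Route one incoming IM message to the 'Visit' or 'Action' flow (illustrative only)."""
    text = text.strip()

    visit = VISIT_RE.match(text)
    if visit:
        location = visit.group(1)
        year = random.randint(3000, 5000)  # Wander arrives in a random year, 3000-5000 CE
        session.update({"location": location, "year": year})
        # The next steps (Sections III-A.1 and III-A.2) fetch GPS/photo and generate text.
        return {"command": "visit", "location": location, "year": year}

    action = ACTION_RE.match(text)
    if action and session.get("location"):
        return {"command": "action", "prompt": action.group(1), "location": session["location"]}

    return {"command": "help", "reply": "Please start with 'Visit: <a real place on Earth>'."}
```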
1) Image and style transfer: Through the Google Maps API, we can obtain the GPS location and an image of any place, if an image exists. We used the Arbitrary Style Transfer model from Runway.ml [16] because it only needs the original image and a style image to perform the transfer, and the average computation time is under 10 seconds. This off-the-shelf technology allows Wander to give instant responses.
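The sketch below illustrates this step using the public Google Geocoding and Street View Static web APIs; apply_style_transfer is a hypothetical placeholder standing in for the Runway.ml model call, not Runway's actual interface.

```python
import requests

GEOCODE_URL = "https://maps.googleapis.com/maps/api/geocode/json"
STREETVIEW_URL = "https://maps.googleapis.com/maps/api/streetview"


def apply_style_transfer(photo_bytes, style_image):
    """Placeholder for the hosted Arbitrary Style Transfer model (not Runway's real API)."""
    raise NotImplementedError("call the hosted style-transfer model here")


def fetch_location_media(place, api_key):
    """Resolve a place name to GPS coordinates and a reference photo, if one exists."""
    geo = requests.get(GEOCODE_URL, params={"address": place, "key": api_key}).json()
    if not geo.get("results"):
        return None  # Google Maps cannot resolve this place

    coords = geo["results"][0]["geometry"]["location"]  # {'lat': ..., 'lng': ...}
    photo = requests.get(
        STREETVIEW_URL,
        params={"size": "640x400",
                "location": f"{coords['lat']},{coords['lng']}",
                "key": api_key},
    ).content  # JPEG bytes when imagery exists for this location

    styled = apply_style_transfer(photo, style_image="future_ruins.jpg")
    return coords, styled
```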
2) Text generation model: Dreamily.ai: The text generation model we used is dreamily.ai [17], a creative-writing platform built on a modified transformer (a self-attention multi-layer neural network) model trained on quality fiction. Both the English and Chinese data sets consist of open-access fan fiction and ebooks from the web, and each is about 100 GB in size. Although this model is not suited to general-purpose tasks (e.g. writing official documents or code), it is very suitable for story generation. We designed prompts with the Google Knowledge Graph to extract descriptive sentences about the location. By varying the prompts, Wander produces rich and surprising responses that correspond to the destination.
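The following sketch shows one way such a prompt could be assembled from the public Google Knowledge Graph Search API; the prompt wording and function name are illustrative assumptions, and the call to the dreamily.ai model itself is omitted.

```python
import requests

KG_SEARCH_URL = "https://kgsearch.googleapis.com/v1/entities:search"


def build_prompt(place, year, api_key):
    """Pull a short factual description of the place and wrap it in a sci-fi prompt."""
    resp = requests.get(
        KG_SEARCH_URL, params={"query": place, "limit": 1, "key": api_key}
    ).json()
    items = resp.get("itemListElement", [])
    fact = ""
    if items:
        detail = items[0]["result"].get("detailedDescription", {})
        fact = detail.get("articleBody", "")  # a descriptive sentence or two about the place

    # The prompt seeds the dreamily.ai story model; varying it varies Wander's travelogue.
    return (
        f"In the year {year}, the android Wander arrives at {place}. {fact} "
        "She looks around and records what this place has become:"
    )
```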
B. Future map
Wander's Map website (Fig. 3) is a record of public participation. All records of Visit commands, including text, photos, visitor ID and visit time (including both real and future visits), are sorted on a timeline. Through public participation, knowledge of the future earth is continually updated, becoming a speculative future chronicle developed by humans and AI together.
Locations with more recorded visits have a taller light pillar. Participants can rotate the globe and check the journey history of each location online. In the physical installation, the map automatically locates and zooms to the latest journey record.
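As an illustration, visit records could be aggregated into pillar heights roughly as follows; the record fields and scaling constants are assumptions for exposition, not the production code behind the map.

```python
from collections import Counter


def pillar_heights(visit_records, base=0.02, scale=0.01):
    """Group visit records by rounded coordinates so more-visited locations get taller pillars."""
    counts = Counter(
        (round(rec["lat"], 2), round(rec["lng"], 2)) for rec in visit_records
    )
    return [
        {"lat": lat, "lng": lng, "height": base + scale * n}
        for (lat, lng), n in counts.items()
    ]
```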
IV. USER STUDY
We conducted a survey to quantitatively and qualitatively evaluate various aspects of the user experience of Wander. Participants were recruited by posting an announcement on WeChat Moments8 and in the WeChat groups where Wander has been deployed. All participants were told that their anonymised data would be used for academic study. Before the survey, we explained its purpose and procedures to the participants. To ensure participants experienced the full interaction flow, we required them to send Wander at least two Visit commands and five Action commands before filling in the questionnaire.
The survey question set was based on Changhoon Oh et al.'s [18] research and contained 15 evaluation criteria: 12 items normally used to assess the usability and user experience of user interfaces [19], [20] and three items used to assess the experience of AI interfaces [21]–[23].
8Social media within WeChat.
Fig. 3. Top: Wander's physical installation using the Discord version. Bottom: Wander's future map website.
Each question came with a description, for example: Fun - Wander's feedback always surprises me. I don't get bored interacting with Wander. Users evaluated each item on a 7-point Likert scale ranging from highly disagree (scored as -3) to highly agree (scored as 3). In addition, we asked 4 multiple-choice questions about motivation, expectations and feelings about the identity of Wander. A total of 268 participants (103 men, 143 women, 9 who chose not to disclose and 3 who chose other) submitted valid questionnaires. Their mean age was 21.730 (SD = 5.314; men: Mean = 22.83, SD = 5.06; women: Mean = 20.91, SD = 5.03). Participants received a non-fungible token (NFT) badge (Fig. 5) on the blockchain via FLOAT9 as a gift for completing the questionnaire.
A. Results
We obtained the participants' responses from the survey and computed the mean and standard deviation of all participants' scores on each item. The results of the analysis are as follows.
Wander's interaction is easy to learn, friendly, easy to use, satisfying, and comfortable.
9https://floats.city/
We identified that Wander received higher scores on five items (the mean and SD of each item are shown in Table I): easy to learn, friendly, easy to use, satisfying, and comfortable. Most participants said communicating with Wander via WeChat was a friendly way to interact with the AI and that the exhibition's visualisation was very clear. Because it resembles something they would typically do on a daily basis, this form of interaction with the AI through a familiar daily chat on IM software was simple for participants to grasp quickly, and they did not need to spend much extra effort before becoming familiar with the AI. Most participants indicated they enjoyed interacting with Wander and would recommend it to their friends.
The interaction with Wander is less consistent and efficient but somewhat communicative, useful, meaningful and interesting. We learned from the participants' assessments that Wander's feedback sometimes lacked logic in context and occasionally contained contradictory language. The participants reported that they could communicate with Wander, although the responses could sometimes be ineffective and feedback could take longer to arrive. On the whole, the returned photos matched the textual material. Most participants (90.3%) reported having fun while interacting with Wander, and some even claimed that Wander might inspire them to be more creative.
Fig. 4. Technical Workflow of Wander
Fig. 5. Wander’s gift NFT
In some ways, AI can facilitate the connection between the virtual world and the real world and establish a relationship with human beings. Along with their interest in AI robots (64.9%) and the future world (54.5%), participants' motivations for interacting with Wander included being able to 'travel' to a place they missed but were unable to visit (22.4%), perhaps due to COVID-19. More than 60% of the participants believed that Wander's text descriptions and images often matched the locations they were eager to see. An impressive 91% of subjects thought the feedback they received felt familiar or even like déjà vu. For instance, 82.4% of participants correctly identified real locations from the iconic buildings in the images, indicating that the style transfer provides some visual unfamiliarity but, used alone, cannot change existing buildings enough to make them unrecognisable. Furthermore, 13.1% of participants reported that Wander gave them a sense of company when they felt lonely. Surprisingly, 34.3% of individuals also identified Wander as a friend, which seems to imply that there may be potential for a human-AI relationship.
B. Special cases
In this section, we present some special cases shared by several passionate participants. These stories demonstrate how human-AI interaction can link emotion and creativity in real life to a fictional world.
TABLE I. The mean and standard deviation of all participant scores on each item
1) Case 1: Reflection on personal experiences: Sometimes, coincidences break the wall between fiction and reality, and they result from human-agent collaboration. A female Chinese visitor said that her boyfriend was in Hong Kong and that his nickname was White Dragon. When she travelled to Hong Kong with Wander and input the 'dive into water' action, Wander met a white dragon among the ruins. The visitor was still in mainland China and could not see her boyfriend in person due to the pandemic, but she met a representation of him through the interaction with the chatbot. When asked what she thought about Wander, she said:
She's like my avatar, but not actually myself. The most interesting thing is that each journey is randomly generated, so it really feels like wandering on earth. I talk to Wander now and then. It feels like a diary, and my message is decided by my mood that day. Another interesting thing is that Wander sends posts in Moments, so she feels more like a person in an alternate universe who has a connection to me.
2) Case 2: Creators: Even though we put 'please send Wander to real places on earth' in the instructions, a Polish artist found it interesting to visit fictional places. When given the name of a place that does not exist in the real world (such as 'metaverse land'), Google Maps will match it to some real-world location, which is completely unpredictable. The artist visited 'Fomo verse' and found that it was located in India. He visited many virtual places and tried to find himself and his artwork in galleries throughout the journey, commenting afterwards: 'I learn more about myself'. He posted screenshots of these travelogues on Twitter and noted that Wander is also a good tool for creation.
3) Case 3: Group creation: A male participant who had recently graduated from high school invited his classmates to travel with Wander together and collect the stories to document and remember their class. He had used dreamily.ai before. When asked about the difference between Wander and the AI writing platform, he said:
Wander is more like a virtual character. My classmates think Wander is better because Dreamily's opening story is not always fluent, but Wander always provides a specific location to visit. With a certain location, it is easier for my classmates to get in the mood...
V. CONCLUSION
Our work presents an AI chatbot-based artwork, Wander, that generates interactive fiction with open data from real-world maps. It introduces a narrative context for human-AI co-creativity and invites humans to explore the fictional future earth through accessible IMs. Using real-world information, we aim to create novel experiences that resonate with people's personal experiences and lead them to see the world from a new perspective through the metaverse.
A. Future work
While the metaverse emphasises decentralisation and data ownership, we believe that, at its current level of adoption, it is not yet possible to reach the point where everyone is in control of their own data (e.g. everyone has a crypto wallet). Instead, we may be able to practise decentralisation and co-creation within existing architecture. To put accessibility first, we have not put generated content on the blockchain as other metaverse projects have (e.g. The Sandbox, CryptoVoxels); players' data is saved on the project server. In future work, we will introduce blockchain technologies to make the interaction records transparent and more sustainable and to ensure creators' ownership of content. In fact, we have already created many concept artworks inspired by Wander's travelogues and minted them as NFTs, preserving them on the blockchain.
Like most mainstream text generation models, dreamily.ai may not perform well if the participants' inputs are too long. Due to technical limitations, the generated context loses consistency when player inputs are bizarre (e.g. 'Destroy the earth and fly into space'). To address these issues, Wander currently allows only eight actions per visit to prevent the story from devolving into nonsense, and the map only shows the initial travelogue after a 'Visit:' command to keep the visualisation safe from uncontrollable later user inputs. We plan to fine-tune the AI model with a more diverse data set and filter conflicting content.
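A rough sketch of these two guardrails, the per-visit action budget and the map filter, is shown below; the session and record fields are illustrative assumptions consistent with the earlier sketches.

```python
MAX_ACTIONS_PER_VISIT = 8  # hard cap before the generated story tends to drift into nonsense


def allow_action(session):
    """Refuse further 'Action' commands once the per-visit budget is spent."""
    remaining = session.setdefault("actions_left", MAX_ACTIONS_PER_VISIT)
    if remaining <= 0:
        return False  # prompt the player to start a new 'Visit:' instead
    session["actions_left"] = remaining - 1
    return True


def map_records(trip_turns):
    """Publish only the opening 'Visit' travelogue to the public map, so unpredictable
    later user inputs never reach the shared visualisation."""
    return [turn for turn in trip_turns if turn.get("command") == "visit"]
```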
VI. ACKNOWLEDGEMENT
We thank Xingyuan Yuan from ColorfulClouds Tech for supporting us with the AI model dreamily.ai [17].
REFERENCES
[1] Lik-Hang Lee, Tristan Braud, Pengyuan Zhou, Lin Wang, Dianlei Xu, Zijun Lin, Abhishek Kumar, Carlos Bermejo, and Pan Hui, "All one needs to know about metaverse: A complete survey on technological singularity, virtual ecosystem, and research agenda," 2021.
[2] Haihan Duan, Jiaye Li, Sizheng Fan, Zhonghao Lin, Xiao Wu, and Wei
Cai, “Metaverse for Social Good: A University Campus Prototype,” in
Proceedings of the 29th ACM International Conference on Multimedia,
pp. 153–161. Association for Computing Machinery, New York, NY,
USA, Oct. 2021.
[3] Sang-Min Park and Young-Gab Kim, “A metaverse: Taxonomy, com-
ponents, applications, and open challenges,” IEEE Access, vol. 10, pp.
4209–4251, 2022.
[4] Thien Huynh-The, Quoc-Viet Pham, Xuan-Qui Pham, Thanh Thi
Nguyen, Zhu Han, and Dong-Seong Kim, Artificial intelligence for
the metaverse: A survey, 2022.
[5] Qinglin Yang, Yetong Zhao, Huawei Huang, Zehui Xiong, Jiawen Kang, and Zibin Zheng, "Fusing blockchain and AI with metaverse: A survey," IEEE Open Journal of the Computer Society, vol. 3, pp. 122–136, 2022.
[6] John David N. Dionisio, William G. Burns III, and Richard Gilbert, "3D virtual worlds and the metaverse: Current status and future possibilities," ACM Comput. Surv., vol. 45, no. 3, Jul. 2013.
[7] Viktor Shklovsky, "Art as Technique," Literary Theory: An Anthology 3, 1917.
[8] Aurelio Cianciotta, "Moon by Ai Weiwei and Olafur Eliasson, collective testament," Neural, 2014.
[9] "Earth 2050: A glimpse into the future," Kaspersky.
[10] Lik-Hang Lee, Zijun Lin, Rui Hu, Zhengya Gong, Abhishek Kumar, Tangyao Li, Sijia Li, and Pan Hui, "When Creators Meet the Metaverse: A Survey on Computational Arts," Nov. 2021.
[11] rct AI, "The Metaverse Needs a 'Brain'," Aug. 2021.
[12] "Replika," https://replika.ai.
[13] "Rival Peak: An experimental competition reality show, featuring artificially intelligent contestants whose fate is controlled by YOU."
[14] "Robot Bores: AI-powered awkward first date," BBC News, Nov. 2020.
[15] Michael Mateas and Andrew Stern, "Façade: An Experiment in Building a Fully-Realized Interactive Drama," p. 24.
[16] "RunwayML: Machine learning for creators," https://runwayml.com/.
[17] ColorfulClouds Tech, “Dreamily-beta,” 2021.
[18] Changhoon Oh, Jungwoo Song, Jinhan Choi, Seonghyeon Kim, Sung-
woo Lee, and Bongwon Suh, “I lead, you help but only with enough
details: Understanding user experience of co-creation with artificial
intelligence,” in Proceedings of the 2018 CHI Conference on Human
Factors in Computing Systems, 2018, pp. 1–13.
[19] Bill Albert and Tom Tullis, Measuring the user experience: collecting,
analyzing, and presenting usability metrics, Newnes, 2013.
[20] Arnold M. Lund, "Measuring usability with the USE questionnaire," Usability Interface, vol. 8, no. 2, pp. 3–6, 2001.
[21] Melanie Hartmann, “Challenges in developing user-adaptive intelligent
user interfaces.,” in LWA. Citeseer, 2009, pp. ABIS–6.
[22] Anthony David Jameson, "Understanding and dealing with usability side effects of intelligent processing," AI Magazine, vol. 30, no. 4, pp. 23–23, 2009.
[23] Ben Shneiderman and Pattie Maes, "Direct manipulation vs. interface agents," Interactions, vol. 4, no. 6, pp. 42–61, 1997.