Beyond the Blue Sky of Multimodal Interaction: A Centennial Vision of
Interplanetary Virtual Spaces in Turn-based Metaverse
LIK-HANG LEE, CARLOS BERMEJO, AHMAD ALHILAL, Hong Kong University of Science and Technology, Hong Kong
TRISTAN BRAUD, Hong Kong University of Science and Technology, Hong Kong
SIMO HOSIO, University of Oulu, Finland
PAN HUI, Hong Kong University of Science and Technology, Hong Kong
Human habitation across multiple planets requires communication and social connection between planets. When the infrastructure
of a deep space network becomes mature, immersive cyberspace, known as the Metaverse, can exchange diversified user data and
host multitudinous virtual worlds. Nevertheless, such immersive cyberspace unavoidably encounters latency in minutes, and thus
operates in a turn-taking manner. This Blue Sky paper illustrates a vision of an interplanetary Metaverse that connects Earthian and
Martian users in a turn-based Metaverse. Accordingly, we briefly discuss several grand challenges to catalyze research initiatives for
the ‘Digital Big Bang’ on Mars.
CCS Concepts: • Human-centered computing → Mixed / augmented reality; Graphical user interfaces; • Software and its
engineering → Development frameworks and environments.
Additional Key Words and Phrases: Metaverse, Interplanetary Cyberspace, Space Communications, Digital Twins, Virtual Reality,
Space CHI.
ACM Reference Format:
Lik-Hang Lee, Carlos Bermejo, Ahmad Alhilal, Tristan Braud, Simo Hosio, Esmée de Haas, and Pan Hui. 2022. Beyond the Blue Sky
of Multimodal Interaction: A Centennial Vision of Interplanetary Virtual Spaces in Turn-based Metaverse. In Proceedings of the
International Conference on Multimodal Interaction (ICMI ’22), November 7–11, 2022. ACM, New York, NY, USA, 8 pages.
During the 2020 pandemic, the implementation of many preventive measures (such as community lockdowns) changed
people’s daily habits and lifestyles. Such a sudden change has brought our way of life into a post-COVID-19 era, and
several aspects of our lives are gradually moving towards the Internet and virtualized platforms. For instance, schools
rapidly adopted the online conference platform Zoom to conduct online classes, in many cases for a full academic
year or more. Many employers and employees now not only accept but expect work-from-home. Perhaps, the pandemic
This is the corresponding author:
Both authors contributed equally to this research (co-second authors).
The primary affiliation of Pan Hui is The Hong Kong University of Science and Technology (Guangzhou), China. He is also affiliated with the University
of Helsinki, Finland.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not
made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components
of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to
redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
© 2022 Association for Computing Machinery.
Manuscript submitted to ACM
ICMI’22, 7-11 Nov, ICMI 2022, Virtual Lee, et al.
Fig. 1. A vision of an interplanetary Metaverse connecting Earthians and Martians: the virtual space, as a ‘purple planet’, serves as a
turn-taking communication platform that leverages multimodal approaches to realize meetings and gatherings, even though people
on Earth and Mars are separated by an ultra-long distance.
since 2020 can be regarded as one of the largest “experiments” in history: do people accept the movement of various
functions of life into the online virtual world (i.e., the Metaverse)? Although we have not yet come up with a definite
answer, various indications suggest that we are open to numerous opportunities in virtual worlds and, remarkably,
that irreversible changes have taken place in the Metaverse era.
The Metaverse was first mentioned by Neal Stephenson in a science-fiction novel entitled Snow Crash. Nowadays,
the Metaverse refers to an immersive Internet, characterized by an endless and gigantic virtual environment that is
able to accommodate a million users for activities (e.g., content creation) simultaneously [6]. Since 2021, numerous
virtual worlds that connect people via their avatars have emerged (e.g., Meta Horizon Workrooms), although Xu et al.
pinpointed the lack of consensus on the Metaverse conceptualization among research communities [15]. An article [6]
recently presented a comprehensive survey regarding the technological enablers and ecosystem issues for implementing
the Metaverse, in which the authors mentioned that the Metaverse progression could be divided into three stages, namely (1)
digital twins, (2) digital natives, and eventually (3) the convergence of physical and virtual environments. It is important
to note that the three-stage progression will take at least two to three decades.
Coincidentally, the colonization of Mars is expected to achieve the milestone of establishing the first Mars city, named
Nüwa, by 2050. Nonetheless, the first trip to Mars will most likely be a one-way trip due to the unavailability of technology
for a return trip. Also, the vast distance between the Earth and Mars (at minimum, about 55 million kilometers) leads to
no less than a two-year round trip with our current spaceship technology. Theoretically speaking, humans on Mars
are in a situation analogous to a ‘community lockdown’ on a planet-wide scale. Thus, facilitating effective communication
channels between people on the two planets, and offering them a way of social gathering, are problems yet to be solved.
Virtual environments under the interplanetary scenario are regarded as one of the unexplored solutions driven by the
Metaverse (Figure 1).
This paper introduces a new perspective on multimodal interaction for a time when humans are already moving to
another planet. Additionally, we outline the potential roles of multimodal interaction in the Metaverse and briefly
present the grand challenges that respond to the rising interest in space-oriented human-computer interaction [11].
This section rst explains the physical constraints of implementing an interplanetary Metaverse. Accordingly, we
discuss possibilities for the people’s communication driven by multimodal interaction in virtual Earthian-Martian
The Primary Constraint: Martian Bandwidth. The distance between the Earth and Mars is not a fixed value. It
depends on the trajectory and relative orbital positions of the two planets. The varying relative positions result in a
minimum one-way latency of around four minutes, known as a ping. In addition, the latency can go up
to 20 minutes with our current IP network (e.g., direct links), and a round-trip communication between the two planets
will need roughly 40 minutes. In the worst scenario, a communication breakdown occurs about every two years,
when the Earth and Mars wind up on opposite sides of the Sun.
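As a back-of-the-envelope check, the one-way light delay follows directly from the Earth-Mars distance divided by the speed of light. The sketch below uses approximate orbital extremes for illustration only; the distance values are our assumptions, not mission data.

```python
# One-way signal latency between Earth and Mars at the speed of light.
# Distance figures are approximate orbital extremes, for illustration only.
C_KM_S = 299_792.458  # speed of light in km/s

def one_way_latency_minutes(distance_km: float) -> float:
    """Minimum possible one-way delay for a signal over `distance_km`."""
    return distance_km / C_KM_S / 60

closest = one_way_latency_minutes(54.6e6)   # ~closest approach
farthest = one_way_latency_minutes(401e6)   # ~superior conjunction
print(f"one-way: {closest:.1f} to {farthest:.1f} min")
print(f"round trip: {2 * closest:.1f} to {2 * farthest:.1f} min")
```

The result, roughly 3 to 22 minutes one-way, matches the 4-to-20-minute range discussed above and is a hard physical floor that no networking improvement can lower.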
Furthermore, the existence of an interplanetary network can only improve the reliability of the network, also known
as Delay/Disruption Tolerant Networking (DTN), but cannot alter the physical constraints, such as distance and the speed
of light [1]. That is, a laser link in a straight line from Earth to Mars can achieve the theoretical minimum latency of four
minutes. Nonetheless, direct and seamless transmission of messages will not hold in practice, as
the Mars orbiter (i.e., a key satellite for relaying communications) is only visible to its Earth counterpart for a short
time window. With such stringent constraints on the Martian bandwidth, NASA’s Perseverance rover makes only two
15-minute communication sessions every day. As such, high-resolution color still images will take several
hours to transmit, and a large file will be divided into and delivered as numerous smaller pieces.
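To make the file-splitting constraint concrete, the sketch below estimates how long a large file takes to deliver when it must be chunked into short daily contact windows. The data rate, window length, and window count are illustrative assumptions, not actual mission parameters.

```python
# Estimate delivery time for a file split across short daily relay windows.
# Rate and window figures are illustrative assumptions, not mission specs.
def delivery_days(file_mb: float, rate_kbps: float = 500,
                  window_min: float = 15, windows_per_day: int = 2) -> float:
    """Days needed to send `file_mb`, chunked into window-sized pieces."""
    bits = file_mb * 8e6
    bits_per_window = rate_kbps * 1e3 * window_min * 60
    windows_needed = bits / bits_per_window  # number of chunks
    return windows_needed / windows_per_day

print(f"{delivery_days(250):.1f} days for a 250 MB file")
```

Under these assumed figures, even a modest 250 MB payload occupies the link for more than two days, which is why the paper argues for text-driven interaction over the network and bulk data by rocket.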
Possibilities of Earthian-Martian Interaction. At first glance, the distance and network bandwidth make
multimodal interaction even more challenging. Considering the constraints mentioned above, implementing interactive virtual
environments and real-time communication becomes technically infeasible. As a round-trip communication session can
vary from hours to days, jitter and latency will deteriorate the user experience significantly, primarily with respect to the
sense of presence and realism [8]. In other words, the most common enriched communication media available on the
Earth, including real-time video conferencing, videos, and even color images, are no longer applicable, not to mention
multimodal interaction in virtual worlds, because of the demanding transmission of 3D graphics for virtual
scenes and avatars. In the sci-fi novel and movie The Martian (Figure 2), Earthian-Martian communication leveraged
text-based interaction and signage through black-and-white images in a turn-taking approach. Similarly, people’s
communication can be characterized by some type of turn-taking and non-real-time interaction, e.g., ARCAXER. As
such, we have to treat these characteristics as the core consideration in designing the interplanetary Metaverse.
Values of Multimodal Interaction. It is worth mentioning that turn-taking interaction is neither second-rate
nor mutually exclusive with multimodal interaction. In fact, our daily communication occurs as a two-way conversation
in which one person listens while the other speaks, or vice versa. Apart from verbal communication, people rely on
other non-verbal cues, including attention indication and direction, approval, social grooming, social disruption, and
Fig. 2. In the absence of other means of communication, the protagonist of Andy Weir’s “The Martian” (2011/2015) resorts to text and
pictorial interaction between Mars and the Earth.
interpersonal provocation [9]. Therefore, multimodal interaction can serve as a strategy to express these non-verbal cues
by employing various input sensors to capture our speech, emotions, gestures, etc., and meanwhile delivering
diversified (output) cues that stimulate our senses, commonly realized as visual, audio, and haptic feedback.
The Avatar Turn-taking in a Virtual ‘Metaverse’ Planet. The Metaverse acts as the connecting dot between
multimodally enriched interaction and non-real-time Earthian-Martian turn-taking. The key idea is that, when
AI-driven embodied agents meet the Metaverse, the avatars, perhaps assisted by AIs or intelligent agents [3], can
interact with other avatars in such an interplanetary virtual environment. In this context, the captured data from the
user’s facial expressions, physical body movements, and other internal states (e.g., heart rate) can be converted into
a dataset representing the user interaction trace, mapped to conversation dialogues. The avatars, as a type of digital
twin, can leverage the dataset to reconstruct and manage high-resolution user interaction in the Metaverse, including
verbal communication (e.g., text) and non-verbal interaction.
More importantly, we have to strike a balance between rational resource allocation and user experience (Figure 1).
On the one hand, the bulky data, including the digital twins of the respective planet(s), avatars, and virtual scenes,
will first be conveyed by physical means, i.e., rockets. As replenishment rockets will arrive on Mars regularly, the
Metaverse and avatars will probably get update patches quarterly. Also, once the infrastructure on Mars becomes
more mature, smaller rockets, if necessary, can deliver bulky data from Mars to the Earth. On the other hand, the
interplanetary network will primarily be responsible for transmitting the text-based messages that drive the behaviors
of the embodied agents (i.e., the avatars). The text-driven turn-taking can thus maintain low-cost triggers of two-way
user interaction between Earthians and Martians, preserving reasonable levels of interactivity and expressiveness.
Figure 3 illustrates the underlying technology that enables the interplanetary network (IPN) Metaverse. Earthians interact
with Martians through an intermediary spacecraft that operates as a Metaverse server to reduce the delay. The terrestrial
deep space network (DSN) communicates with the spacecraft using radio waves (dotted blue line) to counteract
atmospheric absorption, while the Martian orbiter communicates with the spacecraft over radio waves or optical links (solid red
lines). The intermediary could be any spacecraft stationed at one of the Lagrangian points (L2, L3, L4, L5), depending on
the relative positions of the Earth and Mars. The virtual space (the purple planet) is installed on the Metaverse server.

Fig. 3. Underlying communications and protocol stack to enable the IPN Metaverse [2].

The Metaverse application runs on top of a bundle protocol agent (BPA) to ensure reliable delivery, a crucial component
of the end-to-end DTN architecture. For space links, DTN can utilize the Ku- and Ka-bands of radio waves. However,
future space missions are envisioned to use the visible light spectrum to achieve much higher data rates [5]. This would enable
bandwidth-demanding multimodal data such as audio and video.
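The defining behavior of DTN is store-and-forward with custody: a relay holds bundles until a contact window opens rather than dropping them when the link is down. The sketch below illustrates only this idea; the class and node names are our illustrative inventions, not the Bundle Protocol (RFC 9171) API.

```python
# Minimal store-and-forward sketch of a DTN relay: bundles are held in a
# custody store until a contact window opens, never dropped on link loss.
# All names are illustrative, not the actual Bundle Protocol API.
from collections import deque

class BundleNode:
    def __init__(self, name: str):
        self.name = name
        self.store = deque()          # persistent custody store

    def receive(self, bundle: dict):
        self.store.append(bundle)     # take custody immediately

    def forward(self, next_hop: "BundleNode", link_up: bool) -> int:
        """Drain stored bundles only while a contact window is open."""
        sent = 0
        while link_up and self.store:
            next_hop.receive(self.store.popleft())
            sent += 1
        return sent

earth = BundleNode("DSN")
relay = BundleNode("Lagrangian-relay")
mars = BundleNode("Mars-orbiter")
earth.receive({"type": "avatar-update", "seq": 1})
sent_to_relay = earth.forward(relay, link_up=True)
sent_to_mars = relay.forward(mars, link_up=False)  # no contact: bundle waits
print(sent_to_relay, sent_to_mars, len(relay.store))  # 1 0 1
```

The bundle survives the broken Mars-side contact and is forwarded on the next window, which is exactly the reliability property the BPA layer contributes to the Metaverse application above it.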
In this section, we further detail the roles of multimodal interaction, including the data and technological requirements,
under the three-stage progression of the Metaverse [6]. The embodied agents leverage the data and technology to offer
functions, services, user connections, and social experiences.
Digital Twins (Stage I) are digital models that duplicate the characteristics and behaviors of their physical counter-
parts. These high-quality models require huge amounts of data, for example to represent the properties (e.g., temperature,
humidity) of the physical twins [6]. At first, the information about the digital twins will be transmitted on physical
storage delivered by the same replenishment rockets. The conveyed data will continue to update the original
digital twins via periodic patches. At the same time, inspired by the roadmap proposed in [2], continuous deployment
of relay satellites can enable a DTN communication system. The DTN will enable the transmission of huge amounts
of information that do not require real-time communications, such as updates of existing digital twins and a small
quantity of new digital twins. Similarly, this DTN architecture will allow bidirectional transmission, overcoming the
limits of sending resource-demanding rockets from Mars to the Earth.
Due to the bandwidth limitations of interplanetary communications [1], AI-based behaviour modeling will be the
core of these digital twins, whether human (e.g., avatars representing a friend or family member on the Earth) or
non-human (e.g., buildings or attraction sites on the Earth). The AI models behind the virtual entities can simulate,
with realism and accuracy, the properties and behaviors of their physical counterparts. In particular, the AIs of digital
twins are well-trained to present diversified behaviors once they receive text-based instructions, albeit with delays.
Also, the AIs can maintain some social responses autonomously while awaiting further instructions, for the sake of user
interactivity and social presence. These models will reduce the transmission traffic of (1) sensor-related data, such as
temperature and object motions, and (2) human-driven behaviors in the interplanetary network.
Following prior work on embodied virtual agents [3], the created avatars and other virtual assets should embed
smart features to facilitate the interaction (e.g., communication) between users in the interplanetary Metaverse. Other-
wise, the presence of avatars and their behaviors will be deteriorated by noticeable delays. The multimodal interaction,
e.g., gestures and speech tones/pitches, will be generated by the intelligence in the avatar’s digital twin model locally on
Mars. Moreover, we can see the creation of such an embodied avatar as a clone of a real user (e.g., an Earthian 55 million
kilometers away), where gestures, facial expressions, and emotions emulate the actual user. Therefore, we need an intelligent
system that captures multimodal data, such as user movements, facial expressions, speech, heart rate, mental states,
emotions, and nerve activities of real users on one planet, to have a faithful representation of the real user as an embodied avatar
on another planet. These embodied virtual agents can speed up turn-taking in a text-like format while keeping some
realistic reactions for the avatars at the distal end. In contrast, turn-taking that transmits graphics and videos (e.g., of
gestures) causes far more severe delays, and the physical Martian user would suffer from prolonged waiting and quit the
turn-based Metaverse. Finally, the digital twins can connect to local output devices on Mars that enrich the feedback cues
for Martian users. For instance, the avatars, after receiving a message ‘Give me five’, can give haptic feedback through
wearables with touch and kinesthetic features attached to the Martians’ bodies.
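The key pattern here is that only a short text trigger crosses the deep-space link, while the rich multimodal cues are rendered locally on Mars. A minimal sketch of such a Mars-side dispatcher follows; the intent names and cue types are illustrative assumptions, not a defined protocol.

```python
# Sketch of a local (Mars-side) dispatcher: a low-bandwidth text trigger is
# mapped to multimodal output cues rendered locally, so no rich media ever
# crosses the interplanetary link. Intent and cue names are illustrative.
CUE_MAP = {
    "high_five": [("haptic", "palm-impact"), ("visual", "raise-arm")],
    "greeting":  [("audio", "hello-voice"), ("visual", "wave")],
}

def dispatch(message: str) -> list:
    """Map a received text instruction to locally rendered output cues."""
    intent = "high_five" if "five" in message.lower() else "greeting"
    return CUE_MAP[intent]

print(dispatch("Give me five!"))
```

A few bytes of text thus expand into haptic and visual feedback at the receiving end, matching the ‘Give me five’ example above.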
Second, once a significant number of digital twins is available, the main activities in the Metaverse will shift to the creation
of digital content in the Martian environment (Digital Natives, Stage II). This stage will become feasible when a growing
bandwidth of the DSN exists. As we expect that turn-taking content creation and information exchange will become more
frequent, we have to halve the latency, i.e., reduce each turn from four to two minutes, by deploying a Metaverse server
between the Earth and Mars (denoted as the purple planet in Figure 3). Metaverse participants (Earthians or Martians)
will enable a new paradigm with embodied virtual agents and turn-based interactions in the virtual counterpart,
connecting to diversified ecosystems, e.g., culture, gaming, social, and economic. The interplanetary Metaverse will
require protocols for managing user instructions and (multi-)agent behaviors. Meanwhile, the participants will work
on virtual tasks and create content in iterations. This opens research opportunities in turn-based collaboration among
multiple users, and in the design space of tasks that contain virtual objects and their behaviors.
Next, the third stage depicts the convergence of the (physical) blue and red and (virtual) purple planets (Convergence
between Virtual Planets and Respective Planets, Stage III). We foresee that Metaverse servers will gradually form a
distributed network, potentially resulting in lower latency, improved bandwidth, and hence diversified user interaction
at the distal end. These servers, local to each planet, would be interconnected through well-established links, enabling faster
and more stable transmission of content across the solar system [2]. From the Martians’ perspective, we can see the
Metaverse as a door to explore and travel inside the virtual Earth. Digital twins that replicate buildings, e.g., museums,
can be virtually explored by the Martians in the Metaverse. Moreover, we see embodied avatars as digital twins of physical
users, allowing the Martians to have quasi-real interactions (e.g., chatting) in the turn-taking scenarios. Alternatively,
both Martians and Earthians can leverage social robots to achieve telepresence by physical means, visiting places and
people remotely.
The blue sky should not become the limit of multimodal interaction. Meanwhile, the Metaverse, initiated by the concept
of digital twins, will reach interplanetary users. This paper proposes a turn-taking interplanetary virtual space. We
outline several grand challenges for such an interplanetary Metaverse, as follows.
Trust and Multiple Identities. The use of virtual entities has an impact on the social interaction between users.
The Metaverse allows users to create a myriad of avatar representations, from a self-avatar that realistically represents
its owner to multiple identities such as animals, different genders, and impersonations of other avatars.
The representation of the avatar has a strong impact on social dynamics [4]. In the case of embodied avatars, users’
representation in the Metaverse goes beyond visuals. Gestures, emotions, and other data that simulate the physical
owner of the avatar are essential features in building the trust of other users. Reactions of these embodied avatars
that are unrealistic or approach the uncanny valley can reduce users’ trust in the Metaverse. In this case, the richer
the feedback and multimodal interactions, the greater the risk that users stop trusting an entity when these are unrealistic or
do not represent the behavior and actions of the physical owner (self-representation).
Integrity of Multimodal Data. As we have described, the embodied virtual agents will require data from sensors
to simulate the physical owners’ behavior. AI models will rely heavily on the quality of the sensed data to represent the
physical users while waiting for the users’ responses. The integrity of the data involves monitoring the sensors (hardware),
the data collection algorithms, and the AI models (training and testing). The data integrity should also be resilient to possible
attacks, such as biasing the data collection process. As discussed in the prior challenge, when the embodied virtual
agents do not present realistic behaviors, users will feel uncomfortable and modify their social dynamics with such
virtual agents.
Resolution of the Turn-taking Metaverse. Interplanetary communication is unequivocally afflicted with long
latency, on the order of minutes to days. Novel semi-synchronous communication modalities will need to be developed
for such constraints. These modalities will rely on both explicit (enforced) and implicit (cultural) turn-taking protocols
that oversee whose turn it is to communicate. Such protocols already implicitly exist in text- and voice-based instant
messaging applications such as WhatsApp, and to a lesser extent in email communication. However, how to integrate
such protocols within the richer spectrum of interaction modalities offered by the Metaverse, close to face-to-face
communication, at a scale spreading from minutes to days between the transmission of messages, remains unknown.
Specifically, these protocols will aim to manage or hide the downtime caused by network transmission. AI agents
may act as intermediary nodes in the communication chain, enabling more natural, albeit less direct, communication
between two users on different planets while enforcing such semi-synchronous protocols. Although human-agent
synchronous communication is currently the subject of significant research [12], there has been little work on human-
agent-human communication combining synchronous human-agent communication with asynchronous human-human
communication.
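An explicit (enforced) turn-taking protocol can be sketched as a token that travels with each message: a side may transmit only while it holds the token. This is a minimal illustration under our own assumptions; the class and field names are hypothetical, not part of any existing protocol.

```python
# Sketch of an explicit turn-taking protocol: each side may transmit only
# when it holds the turn token, which is passed with every accepted message.
# Class and field names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class TurnChannel:
    holder: str                       # whose turn it is to send
    log: list = field(default_factory=list)

    def send(self, sender: str, payload: str) -> bool:
        if sender != self.holder:     # not your turn: the send is rejected
            return False
        self.log.append((sender, payload))
        self.holder = "Mars" if sender == "Earth" else "Earth"  # pass token
        return True

ch = TurnChannel(holder="Earth")
assert ch.send("Earth", "How is the habitat?")
assert not ch.send("Earth", "Also...")       # must wait for Mars's reply
assert ch.send("Mars", "All systems nominal.")
```

An implicit (cultural) protocol would relax the hard rejection into social conventions, and the AI intermediaries discussed above could buffer rejected sends and replay them when the turn arrives.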
Embodied Virtual Agents and Human-scale Space. The space under the human perspective, i.e., human-scale
space [10], will influence user behaviors. Even in an identical (virtual) space, users with embodied virtual agents
hold diversified perspectives, in addition to the less favorable condition of non-real-time user interaction. Human
users have built-in sensing organs that help us understand and navigate the world through the five primary senses.
Remarkably, we have many subconscious understandings of places. For example, a dark, blue-colored house can
be regarded as a haunted house. However, if we feed these inputs (as text) into embodied agents, they may instead
produce a house in bad weather. Thus, this leaves open questions about how to construct a human-centric perception
of human-scale space in a turn-based Metaverse.
Scalability of Embodied Agents in the Metaverse. The IPN Metaverse should scale up to adapt to the evolving
IPN architecture. In particular, it must evolve as the IPN communication architecture progresses from a near-term (low
data rate, non-networked) to a long-term architecture (high data rate and availability), passing through a mid-
term architecture [2]. Ultimately, it must cope with bi-directional and bandwidth-demanding multimodal
interaction. As such, we have to prioritize the multimodal data for digital twins and user interaction along the aforementioned
timeline. The availability of multimodal data will govern the behaviors of digital twins, including embodied virtual
agents, and thus impact user perception of the immersive spaces, e.g., object, social, and spatial presence.
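One way to read this prioritization requirement is as a traffic-class scheduler that admits richer modalities only as the link matures across the near-, mid-, and long-term eras. The sketch below is our own illustration: the class names, priority order, and rate thresholds are assumptions, not values from the cited roadmap.

```python
# Sketch: prioritizing multimodal traffic classes as the IPN matures.
# Class names, priorities, and rate thresholds are illustrative assumptions.
PRIORITY = {"text": 0, "telemetry": 1, "audio": 2, "image": 3, "video": 4}

def schedule(items: list, link_rate_kbps: float) -> list:
    """Admit traffic classes the current link era supports, highest
    priority (lowest number) first. `items` is a list of (kind, size_mb)."""
    if link_rate_kbps < 100:        # near-term: text and telemetry only
        max_class = 1
    elif link_rate_kbps < 10_000:   # mid-term: still images join
        max_class = 3
    else:                           # long-term: full multimodal traffic
        max_class = 4
    admitted = [(PRIORITY[kind], kind) for kind, _ in items
                if PRIORITY[kind] <= max_class]
    return [kind for _, kind in sorted(admitted)]

print(schedule([("video", 900), ("text", 1), ("image", 50)], 50))
```

Under these assumed thresholds, a near-term 50 kbps link admits only text, while a mid-term link additionally admits images ahead of video, mirroring the staged availability of multimodal data described above.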
This work has been supported by the Academy of Finland, under Grant 319669 and Grant 325570, as well as Academy
of Finland Strategic Research, Grant 335729.
[1] Ahmad Alhilal, Tristan Braud, and Pan Hui. 2019. The Sky is NOT the Limit Anymore: Future Architecture of the Interplanetary Internet. IEEE
Aerospace and Electronic Systems Magazine 34, 8 (2019), 22–32.
[2] Ahmad Yousef Alhilal, Tristan Braud, and Pan Hui. 2021. A Roadmap toward a Unified Space Communication Architecture. IEEE Access 9 (2021).
[3] Justine Cassell. 2001. Embodied Conversational Agents: Representation and Intelligence in User Interfaces. AI Magazine 22, 4 (2001), 67–67.
[4] Tara Collingwoode-Williams, Zoë O’Shea, Marco Fyfe Pietro Gillies, and Xueni Pan. 2021. The Impact of Self-Representation and Consistency in
Collaborative Virtual Environments. Frontiers in Virtual Reality 2 (2021), 45.
[5] Brian Dunbar. 2015. Optical Communication.
[6] Lik-Hang Lee, Tristan Braud, Pengyuan Zhou, Lin Wang, Dianlei Xu, Zijun Lin, Abhishek Kumar, Carlos Bermejo, and Pan Hui. 2021. All One Needs
to Know about Metaverse: A Complete Survey on Technological Singularity, Virtual Ecosystem, and Research Agenda. arXiv abs/2110.05352 (2021).
[7] Lik-Hang Lee, Zijun Lin, Rui Hu, Zhengya Gong, Abhishek Kumar, Tangyao Li, Sijia Li, and Pan Hui. 2021. When Creators Meet the Metaverse: A
Survey on Computational Arts. arXiv abs/2111.13486 (2021).
[8] Thibault Louis, Jocelyne Troccaz, Amélie Rochet-Capellan, and François Bérard. 2019. Is It Real? Measuring the Effect of Resolution, Latency,
Frame Rate and Jitter on the Presence of Virtual Entities. In Proceedings of the 2019 ACM International Conference on Interactive Surfaces and Spaces
(Daejeon, Republic of Korea) (ISS ’19). Association for Computing Machinery, New York, NY, USA, 5–16.
[9] Divine Maloney, Guo Freeman, and Donghee Yvette Wohn. 2020. "Talking without a Voice": Understanding Non-Verbal Communication in Social
Virtual Reality. Proc. ACM Hum.-Comput. Interact. 4, CSCW2, Article 175 (Oct 2020), 25 pages.
[10] Ville Paananen, Jonas Oppenlaender, Jorge Goncalves, Danula Hettiachchi, and Simo Hosio. 2021. Investigating Human Scale Spatial Experience.
Proc. ACM Hum.-Comput. Interact. 5, ISS, Article 496 (Nov 2021), 18 pages.
[11] Pat Pataranutaporn, Valentina Sumini, Ariel Ekblaw, Melodie Yashar, Sandra Häuplik-Meusburger, Susanna Testa, Marianna Obrist, Dorit Donoviel,
Joseph Paradiso, and Pattie Maes. 2021. SpaceCHI: Designing Human-Computer Interaction Systems for Space Exploration. In Extended Abstracts of
the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI EA ’21). Association for Computing Machinery, New York,
NY, USA, Article 96, 6 pages.
[12] Katie Seaborn, Norihisa P. Miyake, Peter Pennefather, and Mihoko Otake-Matsuura. 2021. Voice in Human–Agent Interaction: A Survey. ACM
Comput. Surv. 54, 4, Article 81 (May 2021), 43 pages.
[13] Alexander Serenko, Nick Bontis, and Brian Detlor. 2007. End-User Adoption of Animated Interface Agents in Everyday Work Applications. Behaviour &
Information Technology 26 (2007), 119–132.
[14] Anthony Steed, Mel Slater, Amela Sadagic, Adrian Bullock, and Jolanda Tromp. 1999. Leadership and Collaboration in Shared Virtual Environments.
In Proceedings IEEE Virtual Reality (Cat. No. 99CB36316). IEEE, 112–115.
[15] Jiangnan Xu, Konstantinos Papangelis, John Dunham, Jorge Goncalves, Nicolas James LaLone, Alan Chamberlain, Ioanna Lykourentzou, Federica L.
Vinella, and David I. Schwartz. 2022. Metaverse: The Vision for the Future. In CHI Conference on Human Factors in Computing Systems Extended
Abstracts (New Orleans, LA, USA) (CHI EA ’22). Association for Computing Machinery, New York, NY, USA, Article 167, 3 pages.
ResearchGate has not been able to resolve any citations for this publication.
Full-text available
In recent years, the number of space exploration missions has multiplied. Such an increase raises the question of effective communication between the multitude of human-made objects spread across our solar system. An efficient and scalable communication architecture presents multiple challenges, including the distance between planetary entities, their motion and potential obstruction, the limited available payload on satellites, and the high mission cost. This paper brings together recent relevant specifications, standards, mission demonstrations, and the most recent proposals to develop a unified architecture for deep-space internetworked communication. After characterizing the transmission medium and its unique challenges, we explore the available communication technologies and frameworks to establish a reliable communication architecture across the solar system. We then draw an evolutive roadmap for establishing a scalable communication architecture. This roadmap builds upon the mission-centric communication architectures in the upcoming years towards a fully interconnected network or InterPlanetary Internet (IPN). We finally discuss the tools available to develop such an architecture in the short, medium, and long terms. The resulting architecture cross-supports space agencies on the solar system-scale while significantly decreasing space communication costs. Through this analysis, we derive the critical research questions remaining for creating the IPN regarding the considerable challenges of space communication.
Full-text available
This paper explores the impact of self-representation (full body Self Avatar vs. Just Controllers) in a Collaborate Virtual Environment (CVE) and the consistency of self-representation between the users. We conducted two studies: Study 1 between a confederate and a participant, Study 2 between two participants. In both studies, participants were asked to play a collaborative game, and we investigated the effect on trust with a questionnaire, money invested in a trust game, and performance data. Study 1 suggested that having a Self Avatar made the participant give more positive marks to the confederate and that when the confederate was without an avatar, they received more trust (measured by money). Study 2 showed that consistency led to more trust and better productivity. Overall, results imply consistency improves trust only when in an equal social dynamic in CVE, and that the use of confederate could shift the social dynamics.
Social robots, conversational agents, voice assistants, and other embodied AI are increasingly a feature of everyday life. What connects these various types of intelligent agents is their ability to interact with people through voice. Voice is becoming an essential modality of embodiment, communication, and interaction between computer-based agents and end-users. This survey presents a meta-synthesis on agent voice in the design and experience of agents from a human-centered perspective: voice-based human-agent interaction (vHAI). Findings emphasize the social role of voice in HAI and circumscribe a relationship between agent voice and body, corresponding to human models of social psychology and cognition. Additionally, changes in perceptions of and reactions to agent voice over time reveal a generational shift coinciding with the commercial proliferation of mobile voice assistants. The main contributions of this work are a vHAI classification framework for voice across various agent forms, contexts, and user groups; a critical analysis grounded in key theories; and an identification of future directions for the oncoming wave of vocal machines.
We present an experiment that investigates the behaviour of small groups of participants in a wide-area distributed collaborative virtual environment (CVE). This is the third and largest study in a series of experiments that have examined trios of participants carrying out a highly collaborative puzzle-solving task. The results, which reproduce those of the earlier studies, suggest a positive relationship between place-presence and co-presence and between co-presence and group accord, with evidence supporting the notion that immersion confers a leadership advantage.
Spatial experience, or how humans experience a given space, has been a pivotal topic especially in urban-scale environments. On the human scale, HCI researchers have mostly investigated personal meanings or aesthetic and embodied experiences. In this paper, we investigate the human scale as an ensemble of individual spatial features. Through large-scale online questionnaires, we first collected a rich set of spatial features that people generally use to characterize their surroundings. Second, we conducted a set of field interviews to develop a more nuanced understanding of the feature identified as most important: perceived safety. Our combined quantitative and qualitative analysis contributes to spatial understanding as a form of context information and presents a timely investigation into the perceived safety of human-scale spaces. By connecting our results to the broader scientific literature, we contribute to spatial understanding in HCI.
Space travel and becoming an interplanetary species have long been part of humanity's greatest imaginings. Research in space exploration helps us advance our knowledge of the fundamental sciences and challenges us to design new technology and create new industries for space. However, keeping humans healthy, happy, and productive in space is one of the most challenging aspects of current space programs. Our biological bodies, which evolved in Earth's specific environment, can barely survive by themselves in space's extreme conditions of high radiation, low gravity, and more. The same holds for the moons and planets in the solar system that humans plan to visit. Therefore, researchers have been developing different types of human-computer interface systems that support humans' physical and mental performance in space. With recent advancements in aerospace engineering and the democratized access to space through aerospace tech startups such as SpaceX and Blue Origin, space research is becoming more plausible and accessible. Thus, there is an exciting opportunity for researchers in HCI to contribute to the great endeavor of space exploration by designing new types of interactive systems and computer interfaces that can support humans living and working in space and elsewhere in the solar system.
The feeling of presence of virtual entities is an important objective in virtual reality, teleconferencing, augmented reality, exposure therapy, and video games. Presence creates emotional involvement and supports intuitive and efficient interactions. As a feeling, presence is mostly measured via subjective questionnaires, but their validity is disputed. We introduce a new method to measure the contribution of several technical parameters toward presence. Its robustness stems from asking participants to rank contrasts rather than report absolute values, and from the statistical analysis of repeated answers. We implemented this method in a user study where virtual entities were created with a handheld perspective-corrected display. We evaluated the impact of four important parameters of digital visual stimuli (resolution, latency, frame rate, and jitter) on the presence of two virtual entities. Results suggest that jitter and frame rate are critical for presence but latency is not, and that the effect of resolution depends on the explored entity.