Human Interaction in Multi-User Virtual Reality
Stephan Streuber, Astros Chatziastros
Max Planck Institute for Biological Cybernetics
Spemannstrasse 38
72076 Tuebingen
Germany
voice: [+49](7071)601-216; fax: [+49](7071)601-616
e-mail: Stephan.Streuber@Tuebingen.MPG.de
www: http://www.kyb.mpg.de/~stst
Abstract
In this paper we will present an immersive multi-
user environment for studying joint action and so-
cial interaction. Besides the technical challenges
of immersing multiple persons into a single virtual
environment, additional research questions arise:
Which parameters are coordinated during a joint
action transportation task? In what way does the
visual absence of the interaction partner affect the
coordination task? What role does haptic feed-
back play in a transportation task? To answer
these questions and to test the new experimental
environment we instructed pairs of subjects to per-
form a classical joint action transportation task:
carrying a stretcher through an obstacle course.
With this behavioral experiment we demonstrated
that joint action behavior (resulting from the co-
ordination task) is a stable process. Even though
visual and haptic information about the interac-
tion partner were reduced, humans quickly com-
pensated for the lack of information. After a short
time they did not perform significantly differently
from normal joint action behavior.
Keywords: Multi-User Virtual Reality; Joint
Action Transportation Task; Collaborative Vir-
tual Environment; Human Interaction
1 Introduction
In the first part of this paper we will compare
two technical environments (immersive virtual re-
ality and non-immersive multi-user environments)
for their eligibility to perform behavioral experi-
ments in the domain of joint action and spatial
coordination. We propose that a combination of
the advantages of both technologies can be used
effectively to study proximal, physical interaction
behavior in real-time (joint action). In the second
part of this paper we will describe an experiment
in which we investigated how a reduction of visual and/or haptic information affects interpersonal coordination.
2 IVR in Behavioral Sciences
Advanced developments in the field of media tech-
nology, including improvement of computational
power, display technologies and tracking systems,
led to the intensive use of immersive virtual reality (IVR) systems in various fields such as research, simulation, training, rehabilitation and entertainment [BC03][TW02]. These advances have extended the perceptual and technical limits of this medium in terms of feeling present (defined as the sense of being in the computer-generated environment) and immersed (the amount of sensory input provided by the technology) [Sch02][HD92].
These advantages of IVR are utilized in behav-
ioral sciences to investigate human behavior un-
der controlled conditions, where subjects’ percep-
tions and actions are linked through the virtual
environment [vdHB00][LBB99]. This experimen-
tal paradigm is effective when it comes to the
investigation of isolated humans and their inter-
action with the physical world. However, it can
be insufficient for studying real life situations, in
which individuals have to coordinate their actions
with those of others consistently. For example,
researchers in the domain of applied perception
investigate human car driving behavior by using
driving simulators [KP03]. The driving simulator
transforms the user's interactions with the virtual environment (e.g. acceleration, steering) into a realistic visual stimulus projected on a screen. Other road users (e.g. cars, pedestrians) are represented as computer-controlled, non-interactive 3D models. This reduced social environment omits the social component of driving: in the real world, each driver's behavior is strongly interconnected with the behavior of others. Driving in a social context therefore cannot simply be seen as an obstacle-avoidance navigation task, but must rather be seen as a dynamic feedback system (traffic) in which individual behavior results from interaction with other drivers and with the environment. Since many everyday behaviors take place in a social environment, a thorough investigation of human behavior also requires the inclusion of social information. However, most
of the current IVR systems are not designed for
multi-user simulations and therefore they are not
qualified for the integration of social elements in
the form of group interaction.
3 Non-immersive Multi User
Environments
Media that allow for social interaction include, for instance, non-immersive online multi-user environments (OMUE) such as social networks, chat programs, virtual trade fairs and computer games. Millions of users daily exchange information, sell or buy products, communicate, or play against (or with) each other over the internet. In all these
activities individuals have to coordinate their ac-
tions with others. Of course human interaction
using internet technology is strongly limited com-
pared to real life situations. For instance, the
interaction quality is drastically reduced (usually
occurs by clicking or typing), the sensory modali-
ties are mostly limited to the visual sense and the
user is represented only abstractly (responses affect only the virtual representation, not the user). Today's most realistic real-time interaction over the internet takes place in massively multiplayer online role-playing games (MMORPGs). Here, the user is represented as a human-like avatar, which allows for the exploration of three-dimensional landscapes, communication with other players, spatial coordination, and physical interaction (e.g. fighting, manipulating objects, or operating vehicles). The huge number of users, the advanced interaction possibilities, and the ability to track the behavioral data of avatars make OMUEs an interesting tool for analyzing complex social behavior
[DM04][NH06]. However, considering the limita-
tions of interaction possibilities, we have to move
from desktop virtual reality to immersive virtual
reality in order to study close human interaction.
4 Immersive multi-user environ-
ment (IMUE)
We believe it is time to create a new experimen-
tal environment in which the advantages of IVR
and OMUE are merged into a single framework in
which several humans can interact, communicate
and cooperate in a highly immersive virtual envi-
ronment. In this experimental environment it will
be possible to account for the social nature of per-
ception and to perform experiments in which we
can investigate real-time human interaction. For
this immersive environment it is required that the
standard technical setup is extended by three im-
portant features: synchronous real-time tracking
of multiple rigid bodies, a distributed application
to render one virtual world from different perspec-
tives (for each user) and the usage of avatars to
enable users to identify and localize each other. In
this setup participants can interact with the world
and with others from an egocentric perspective by
using their physical body as an interaction device.
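As a concrete illustration of the distributed-application requirement, the sketch below shows how each client might serialize its tracked head pose and broadcast it to the other clients, which would then place the corresponding avatar. This is an assumption-laden sketch, not the system's actual protocol: the original setup used Vicon tracking and Virtools, and the JSON-over-UDP format, field names and peer addresses here are purely illustrative.

```python
import json
import socket
from dataclasses import dataclass, asdict

@dataclass
class HeadPose:
    """Tracked head state shared between clients (names are illustrative)."""
    user_id: str
    position: tuple      # (x, y, z) in metres, tracking-space coordinates
    orientation: tuple   # unit quaternion (w, x, y, z) from the tracker

def encode_pose(pose: HeadPose) -> bytes:
    """Serialize a pose for a UDP broadcast to the other clients."""
    return json.dumps(asdict(pose)).encode("utf-8")

def decode_pose(datagram: bytes) -> HeadPose:
    """Rebuild a peer's pose; used to place that peer's avatar."""
    d = json.loads(datagram.decode("utf-8"))
    return HeadPose(d["user_id"], tuple(d["position"]), tuple(d["orientation"]))

def broadcast(pose: HeadPose, peers, sock=None):
    """Send this client's pose to every peer (hypothetical addresses)."""
    sock = sock or socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = encode_pose(pose)
    for host, port in peers:
        sock.sendto(payload, (host, port))
```

Each client would run such a broadcast, plus a matching receive loop, once per frame; the decoded peer poses drive the avatars, satisfying the "identify and localize each other" requirement above.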
5 Setup
The setup was implemented within a large, fully-
tracked, free-walking space: 12 by 15 meters in
size and equipped with an optical tracking system
(16 Vicon MX13 cameras). Participants’ head po-
sitions and orientations were tracked through the
monitoring of reflective markers attached to the
participants’ heads and to an additional interac-
tion object (stretcher). Each Vicon camera has a
resolution of 1280x1024 and the tracking system
has a maximum frame rate of 484 Hz. In addition to updating the visual environment as a function of participants' head movements, the tracking system also recorded the walking trajectories (head position and orientation) of both participants and of the stretcher.

                               IVR   OMUE   IMUE
Sensory-motor integration       x            x
Large field of view             x            x
Ego perspective                 x            x
Multi-user interaction                x      x
Interaction quality             x            x
(accuracy, realism)
User is visually represented          x      x
Somatosensory interaction       x            x

Table 1: Selection of features that are important for a realistic simulation of the world, including real social interaction.

Furthermore, both participants wore
lightweight HMDs (eMagin Z800) with a resolution of 800x600 pixels and a 40-degree diagonal field of view (FOV) per eye (the software extended the FOV to 60 degrees). We used stereo projection to display the stimulus to both participants, who saw the same virtual world from different perspectives, depending on their head positions and facing directions. The HMD had a refresh rate
of 60 Hz. Both the participants’ and the stretch-
ers’ positional information was sent from the opti-
cal trackers (via a wireless network connection) to
a backpack-mounted laptop worn by each partic-
ipant. This information was then used to update
participants’ virtual viewing camera within the
virtual environment. This setup allowed partici-
pants to move freely throughout the entire walking
space without being constrained or tethered.
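A minimal sketch of the per-frame camera update described above, assuming the head pose arrives as a position vector and a unit quaternion. The actual Vicon/Virtools data path is not specified in this level of detail, so the function and field names are illustrative:

```python
import numpy as np

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    u = np.array([x, y, z])
    v = np.asarray(v, dtype=float)
    # Standard identity: v' = v + 2 u x (u x v + w v)
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

def camera_from_head(position, orientation):
    """Derive the virtual camera's eye point, gaze direction and up vector
    from the tracked head pose, as the backpack laptop would on every frame."""
    eye = np.asarray(position, dtype=float)
    forward = quat_rotate(orientation, [0.0, 0.0, -1.0])  # -z is 'straight ahead'
    up = quat_rotate(orientation, [0.0, 1.0, 0.0])
    return eye, forward, up
```

The renderer would feed eye/forward/up into its look-at matrix each frame, so that head translation and rotation in the tracking hall map one-to-one onto the virtual viewpoint.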
The virtual world was rendered using Virtools
Dev 3.5 and the Virtools VR-Pack and contained
3D-models of a labyrinth, a stretcher and two
avatars. While the virtual maze was spatially fixed
to the physical boundaries of the tracking space,
both the avatars and the stretcher were rendered
depending on their real world position and orien-
tation. The participants carried a real stretcher, but what they perceived through the HMD was solely the visual representation of each other and of the virtual stretcher.

Figure 1: Overview of the hardware components used for the experimental setup. The two persons carry a stretcher, which is used as an interaction device. Furthermore, each person carries a laptop for the visualization of the virtual environment.
6 Behavioral Experiment
6.1 Introduction
Figure 2: Two participants carrying a stretcher. Left: subjects equipped with laptops, head-mounted displays and tracking helmets. Right: the visual stimulus displayed in the HMD.
We tested 10 pairs of subjects performing a
simple joint action task: to transport a stretcher
through a virtual maze without colliding with the
walls (if a collision occurred, an alarm sound was
activated). Subjects were instructed to follow a
corridor consisting of identical 90-degree corners.
Furthermore, we told subjects to walk as naturally as possible and not to influence the partner's behavior by pushing or pulling the stretcher. Analyzing the walking trajectories of both subjects revealed information about the interpersonal coordination.
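The collision alarm implies a per-frame test of the tracked stretcher against the maze walls. The paper does not describe this implementation, so the following is our own minimal 2D sketch: the stretcher (between its two tracked endpoints) and each wall are treated as line segments in the floor plane.

```python
def _orient(a, b, c):
    """Twice the signed triangle area: >0 counter-clockwise, <0 clockwise."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, q1, q2):
    """True if segment p1-p2 properly crosses segment q1-q2
    (general position; touching endpoints are not counted)."""
    d1, d2 = _orient(q1, q2, p1), _orient(q1, q2, p2)
    d3, d4 = _orient(p1, p2, q1), _orient(p1, p2, q2)
    return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

def stretcher_collides(front, back, walls):
    """Check the stretcher (front/back endpoints in floor-plane coordinates)
    against every wall segment; a hit would trigger the alarm sound."""
    return any(segments_intersect(front, back, w0, w1) for w0, w1 in walls)
```

Running such a test once per tracking frame against the fixed list of corridor wall segments is cheap relative to the 60 Hz render loop.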
To perform the coordination task successfully,
subjects were forced to coordinate their actions
with each other. To perform the joint action task
they could integrate different sources of informa-
tion, such as:
- Visual and haptic information (the latter received through physical contact with the stretcher) could be used to determine the position, walking direction and velocity of the other person.

- Forces applied through the stretcher could communicate the preferred direction and velocity of the other person.

- Visual and haptic information could increase the continuous awareness of the presence of the other person. This awareness could result in a stronger activation of joint action behavior in individuals.
We hypothesized that the more information
about partner and stretcher that is available, the
better the dyad would perform the joint action
task, resulting in a lower collision rate. Furthermore, we took the relative length of the walking trajectories of both partners as an indication of the amount of cooperation in this task. The
absolute path length, however, characterizes the
capability of the dyad to optimize its behavior.
Specifically, we expected that integrated visual and haptic information would lead to a strong feeling of co-presence of the partner, which should influence the cooperation positively. This in turn should be reflected in shorter trajectories, smaller relative length differences and lower collision rates.
6.2 Methods
For the behavioral experiment we designed five
conditions in which we selectively reduced visual
and/or haptic information (see table 2).
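For clarity, the five conditions can also be encoded as data. The cue assignments below follow our reading of Table 2 and the condition names, and the field names are illustrative, not taken from the experiment software:

```python
# Available information per condition (cf. Table 2); our reconstruction.
CONDITIONS = {
    "Baseline":     {"stretcher_visible": True,  "physical_stretcher": True,  "avatars_visible": True},
    "No Haptic":    {"stretcher_visible": True,  "physical_stretcher": False, "avatars_visible": True},
    "No Visuals":   {"stretcher_visible": False, "physical_stretcher": True,  "avatars_visible": False},
    "No Stretcher": {"stretcher_visible": False, "physical_stretcher": True,  "avatars_visible": True},
    "No Avatars":   {"stretcher_visible": True,  "physical_stretcher": True,  "avatars_visible": False},
}

def available_cues(name):
    """List the sensory cues present in a given condition."""
    return sorted(cue for cue, on in CONDITIONS[name].items() if on)
```

Encoding the design this way makes it easy to verify, for instance, that only the No Haptic condition removes the physical stretcher while leaving all visual information intact.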
In each condition, pairs of subjects navigated through a virtual corridor while transporting a stretcher (length = 2.5 m) together.

Condition         Stretcher   Physical    Avatars
                  visible     stretcher   visible
1. Baseline           x           x          x
2. No Haptic          x                      x
3. No Visuals                     x
4. No Stretcher                   x          x
5. No Avatars         x           x

Table 2: Overview of the experimental conditions. A marked field indicates that the corresponding sensory information was available in that condition.

As both subjects
faced the walking direction, the person in the front
(leader) was not able to see the person in the back
(follower). In each condition the dyad performed
two trials (first trial, second trial). In each trial
subjects walked 10 corner segments (90-degree
corners). After the first trial subjects switched
leader and follower positions. Each experiment
took approximately 1.5 hours with no breaks in
between. We tested 10 pairs of subjects with the
following gender combinations: 4 times female-
male; 2 times male-male; 4 times female-female.
All of the subjects were students within an age
range of 23 to 37 years.
To compare the performance in different condi-
tions, we analyzed the two dependent variables
that revealed information about the joint ac-
tion behavior and the coordination: collision rate
(number of collisions that occurred during each
trial) and the average path length (length of tra-
jectories in each segment).
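The path-length variable can be computed directly from the sampled head trajectories. A sketch under stated assumptions: trajectories arrive as arrays of (x, y) floor-plane samples, and the sample indices bounding each corner segment are known; both representations are illustrative, not the authors' actual analysis code.

```python
import numpy as np

def path_length(trajectory):
    """Total length of a walking trajectory given as an (N, 2) sequence of
    head positions (x, y) sampled by the tracker."""
    xy = np.asarray(trajectory, dtype=float)
    # Sum the Euclidean distances between consecutive samples.
    return float(np.sum(np.linalg.norm(np.diff(xy, axis=0), axis=1)))

def mean_segment_length(trajectory, segment_bounds):
    """Average path length per corner segment; segment_bounds are (start, end)
    sample indices delimiting each 90-degree segment."""
    lengths = [path_length(trajectory[a:b + 1]) for a, b in segment_bounds]
    return float(np.mean(lengths))
```

Applying this separately to the leader's and the follower's trajectory gives the absolute lengths compared in figure 5; their difference is the relative-length indicator of cooperation introduced above.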
6.3 Results and Discussion
As expected, we observed a higher collision rate
in the first trial of the No Haptic Condition (see
figure 4). This result can partially be explained
by an increase in task difficulty, because subjects
additionally had to control their distance (in all
other conditions interpersonal distance was easily
maintained through the physical stretcher). After the first trial, however, the collision rate dropped to baseline, which suggests that sufficient coordination in this task can be achieved without a physical connection providing haptic and tactile information. The physical stretcher nevertheless simplifies the task by keeping subjects at a constant distance.

Figure 3: Top view of a C-segment (90-degree corner). The black and grey dots represent the walking trajectories of two subjects connected via the stretcher. Subjects walked from top right to bottom left (black = leader; grey = follower).

Figure 4: The average number of collisions for each condition over all subjects (N=10).

Figure 5: The average path length per corner in each condition for all subjects, for leader (grey) and follower (black). In all conditions the follower walks a longer trajectory than the leader. The error bars show the standard error of the mean (N=10).

The path length indicator supports these results (see figure 5): there were no
significant differences in the path length in either
condition (Path Length in No Haptic Condition is
similar to Path Length in the Baseline Condition).
Surprisingly, there was no significant difference
in the collision rate between Baseline Condition
(3.8 collisions), No Stretcher Condition (4.4 colli-
sions), and No Avatar Condition (3.7 collisions).
This indicates that the subjects could compensate for the missing visual information, possibly because in these conditions the position of the non-visible elements could be inferred from the available visual information (if only the stretcher is visible, the follower knows where the leader is located; if only the avatars are visible, the follower knows where the stretcher is located). In
accordance with this interpretation, we observed
that the collision rate was slightly increased in
the No Visual Condition, where visual information
about the stretcher and partner was completely
absent. Thus, in this experiment, we did not find
an improvement of coordination by the visual (co-
)presence of the interaction partner.
Interestingly, we observed that subjects walked longer trajectories in the conditions in which the stretcher was not visible (No Visuals Condition and No Stretcher Condition). This observation could be explained by the fact that subjects could not easily judge the distance between the stretcher and the corner visually, and therefore preferred to walk a longer trajectory rather than risk a collision with the corner.
7 Conclusions
We showed that humans can quickly compensate for a lack of haptic and tactile feedback when they are immersed in the VE. Nevertheless, haptic feedback seems to be important in that it decreases the task difficulty. Our prediction of an increased collision rate due to the reduction of visual information was only partially confirmed.
In all conditions subjects showed very similar path lengths, which indicates robust coordination behavior that is relatively independent of the immediate feedback cues about the partner.
We have presented an approach that utilizes an immersive multi-user virtual environment for the behavioral investigation of human interaction and spatial coordination in a social context. Our approach within the behavioral sciences represents only one area of application for the future development of IMUEs. In other areas, too, there is interest in and demand for the incorporation of social interaction into immersive environments. First implementations and prototypes have been developed in the fields of architecture [KBP+00], learning and education [JF00], and entertainment [SFK+03]; these are expected to identify interesting areas of investigation for the behavioral sciences as well.
Acknowledgments
This study is supported by EU-project JAST
(IST-FP6-003747).
References
[BC03] Grigore C. Burdea and Philippe Coif-
fet, Virtual reality technology, John
Wiley & Sons, Inc., New York, NY,
USA, 2003.
[DM04] Nicolas Ducheneaut and Robert J.
Moore, The social side of gaming: a
study of interaction patterns in a mas-
sively multiplayer online game, CSCW
’04: Proceedings of the 2004 ACM con-
ference on Computer supported coop-
erative work (New York, NY, USA),
ACM, 2004, pp. 360–369.
[HD92] Richard H. Held and Nathaniel I.
Durlach, Telepresence, Presence 1
(1992), no. 1, 109–112.
[JF00] Randolph L. Jackson and Eileen Fa-
gan, Collaboration and learning within
immersive virtual reality, CVE ’00:
Proceedings of the third international
conference on Collaborative virtual en-
vironments (New York, NY, USA),
ACM, 2000, pp. 83–92.
[KBP+00] H. Kato, M. Billinghurst, I. Poupyrev, K. Imamoto, and K. Tachibana, Virtual object manipulation on a tabletop AR environment, Proceedings of the IEEE and ACM International Symposium on Augmented Reality (ISAR 2000), 2000, pp. 111–119.
[KP03] A. Kemeny and F. Panerai, Evaluating perception in driving simulation experiments, Trends in Cognitive Sciences 7 (2003), 31–37.
[LBB99] J. M. Loomis, J. J. Blascovich, and
A. C. Beall, Immersive virtual environ-
ment technology as a basic research tool
in psychology., Behav Res Methods In-
strum Comput 31 (1999), no. 4, 557–
564.
[NH06] Bonnie Nardi and Justin Harris,
Strangers and friends: collaborative
play in world of warcraft, CSCW ’06:
Proceedings of the 2006 20th anniver-
sary conference on Computer sup-
ported cooperative work (New York,
NY, USA), ACM, 2006, pp. 149–158.
[Sch02] R. Schroeder, Copresence and interaction in virtual environments: An overview of the range of issues, Presence 2002: Fifth International Workshop, 2002, pp. 274–295.
[SFK+03] Benjamin Schaeffer, Mark Flider,
Hank Kaczmarski, Luc Vanier, Lance
Chong, and Yu Hasegawa-Johnson,
Tele-sports and tele-dance: full-body
network interaction, VRST ’03: Pro-
ceedings of the ACM symposium on
Virtual reality software and technology
(New York, NY, USA), ACM, 2003,
pp. 108–116.
[TW02] Michael J Tarr and William H War-
ren, Virtual reality in behavioral neu-
roscience and beyond., Nat Neurosci 5
Suppl (2002), 1089–1092.
[vdHB00] M. von der Heyde and H. H. Buelthoff, Perception and action in virtual environments, Max Planck Institute for Biological Cybernetics, Tuebingen, Germany, 2000.
... As mentioned above, interaction with the virtual world is one of the key factors which determines the quality of VR systems. Additionally, interaction can be an essential element for investigating human behavior under controlled conditions [5]- [12]. We may assess how a user cooperates with a virtual environment to perform a specific task, a task which quite often requires not only cooperation with a virtual environment but also with other users [13]. ...
... We may assess how a user cooperates with a virtual environment to perform a specific task, a task which quite often requires not only cooperation with a virtual environment but also with other users [13]. As most every-day behavior occurs in a social environment, detailed investigation of human behavior also has to take into account the inclusion of social context [12]. The task is therefore to immerse multiple persons into a single virtual environment. ...
... Some authors distinguish between a nonimmersive multiuser environment and an immersive multiuser environment for social interaction. The former comprises online multiuser twodimensional (2-D) environments, such as social networks, chat programs, auction websites and computer games, including massive multiplayer online role-playing games [12]. The huge number of users and advanced interaction possibilities make multiuser 2-D environments an interesting tool for analyzing complex social behavior. ...
Article
Full-text available
Despite the development of increasingly popular head mounted displays, CAVE-type systems may still be considered one of the most immersive virtual reality systems with many advantages. However, a serious limitation of most CAVE-type systems is the generation of a three-dimensional (3-D) image from the perspective of only one person. This problem is significant because in some applications, the participants must cooperate with each other in the virtual world. This paper presents the adaptation of a one-user Cave Automatic Virtual Environment (CAVE) installation in the Immersive 3-D Visualization Lab at the Gdańsk University of Technology to a two-user stereoscopy system. Simultaneous use of two alternative one-user stereoscopies available in the I3DVL (a technique with spectrum separation—Infitec, and active stereo) and a simple electronic circuit have allowed us to transform the one-user stereoscopy CAVE installation to a two-user stereoscopic system. The experiments performed concentrated on several objective measurable parameters. The calculated crosstalk value was low, approximately 1%, which can be considered negligible and shows the proper operation of the proposed technique. Additionally, initial experiments based on the tested two-user application and related to user comfort in the developed two-user stereoscopy are discussed in this paper. However, this topic still needs further research. The proposed solutions are a cheap alternative to adapt the existing one-user CAVE-type systems which support two projection techniques to a two-user system.
... The multiuser scenario takes into account aspects like communication, collaboration and evaluation not explored in a single-player simulation: "This experimental paradigm is effective when it comes to the investigation of isolated humans and their interaction with the physical world. However, it can be insufficient for studying real-life situations, in which individuals have to coordinate their actions with those of others consistently" [13] . ...
... Here, the user is represented as a human-like avatar allowing exploration of threedimensional landscapes, communication with other players, spatial coordination and physical interaction (e.g., fighting, manipulating objects, or operating vehicles). The massive number of users, advanced interaction possibilities, and avatar behavioural data tracking capability make online multiuser environments a new tool for analysing complex social behaviour [13] . ...
Article
Full-text available
Simulations in augmented (AR) and virtual reality (VR) environments are increasingly part of space programs studying human-crewed missions to Moon and Mars. The goal of these simulation programs is to both train crews and study human behaviour in extreme conditions environments. A virtual multiuser experience (VMUE) consists of the virtual simulation of knowledge shared by at least two users interacting with each other and with the surrounding critical environment. In the space sector, VMUE is a relatively low cost means of simulating the concurrent activities of astronauts interacting with a simulated space environment that can be rolled out right from the start of the project. Applications include training in exploration, maintenance and inspection, ground/space station communication, command and control task execution on robotic systems, and most importantly, crisis resource management (CRM) with situational awareness training in safety analysis and risk prevention. The VR multiuser experience can also be combined with AR devices to obtain a wide range of simulation capabilities and collaborative operations in a hybrid environment. Given the critical importance of multiuser experience in VR and AR, this paper aims to offer a short review of the Mars Planet Research Group's activities, namely: High level-of-likelihood reconstruction of entire regions of Mars/Moon to be used in the VR environment; Development and combination of the two VR treadmills - MOTIVITY and MOTIGRAVITY - for a series of scientific operations and simulations, including physical and medical rehabilitation and education.
... Basically, the physical interactions require that when multiple users meet at a virtual location, they are asked to stay at the same real location. However, most existing multiuser VR systems either employ networking to allow remotely located users to share a virtual environment (Cherry Pop Games 2013; Sharma and Chen 2014;SimforHealth 2018) or request users to operate collaboratively to avoid collisions by pairing with an extra assistant or by grouping together to function as a single user (Chagué and Charbonnier 2016;Lages et al. 2016;Podkosova et al. 2016;Streuber and Chatziastros 2007). ...
... Cases in which users play a game with natural interactions have been studied in Lages et al. (2016); users could not walk freely owing to collisions. Multiple users could walk and interact in the same real workspace in Streuber and Chatziastros (2007), Chagué and Charbonnier (2016), and Podkosova et al. (2016), but they always stayed together to serve as a single user. Some VR systems use physical props to provide a sense of touch and enhance users' experience (Kohli et al. 2005;Steinicke et al. 2008b). ...
Article
We propose a novel technique to provide multiuser real walking experiences with physical interactions in virtual reality (VR) applications. In our system, multiple users walk freely while navigating a large virtual environment within a smaller physical workspace. These users can interact with other real users or physical props in the same physical locations. The key of our method is a redirected smooth mapping that incorporates the redirected walking technique to warp the input virtual scene with small bends and low distance distortion. Users possess a wide field of view to explore the mapped virtual environment while being redirected in the real workspace. To keep multiple users away from the overlaps of the mapped virtual scenes, we present an automatic collision avoidance technique based on dynamic virtual avatars. These avatars naturally appear, move, and disappear, producing as little influence as possible on users’ walking experiences. We evaluate our multiuser real walking system through formative user studies, and demonstrate the capability and practicability of our technique in two multiuser applications.
... Future studies might also address other kinds of interdependence (e.g., goal, reward) potentially leading to a stronger psychological involvement or longer lasting effects of cooperation (e.g., bonus distribution after the experiment). Another important contribution might be the assessment of VRspecific features (e.g., spatial aspects) and social effects as addressed by [46]. In this regard, the spatial, time, and role trajectories of participants according to [8] might give further insight into social behavior in multi-user environments (further discussion in [49]). ...
... In order to deal with the aforementioned functional constraints, we are of the view that a guiding vision for robot controller design can be inspired by the strategy of Human action in joint transportation tasks. In Streuber and Chatziastros (2007), a behavioral experiment was presented, in which several teams of two human subjects were studied while transporting a stretcher. For each experiment, there was a leader (in front) and a helper (at the rear). ...
Article
Full-text available
[A Full-text view version of this paper is possible with the link https://rdcu.be/Lrq1 ] This paper shows how non-linear attractor dynamics can be used to control teams of two autonomous mobile robots that coordinate their motion in order to transport large payloads in unknown environments, which might change over time and may include narrow passages, corners and sharp U-turns. Each robot generates its collision-free motion online as the sensed information changes. The control architecture for each robot is formalized as a non-linear dynamical system, where by design attractor states, i.e. asymptotically stable states, dominate and evolve over time. Implementation details are provided, and it is further shown that odometry or calibration errors are of no significance. Results demonstrate flexible and stable behavior in different circumstances: when the payload is of different sizes; when the layout of the environment changes from one run to another; when the environment is dynamic—e.g. following moving targets and avoiding moving obstacles; and when abrupt disturbances challenge team behavior during the execution of the joint transportation task.
... Für eine realistische Interaktion mit einem virtuellen Objekt sind sechs Freiheitsgrade anzustreben. Auch das Vorhandensein von haptischem Feedback, also einer Kraftübertragung erhöht die Interaktionseffizienz des jeweiligen Nutzers (Streuber, 2007). ...
Conference Paper
Full-text available
Virtuelle Realität (VR) wird bereits seit Jahrzehnten erfolgreich in der Industrie genutzt, etwa im Rahmen von Simulatoren zur Ausbildung von Piloten. In den letzten Jahren hat VR dank Oculus Rift und ähnlichen Produkten auch in Privathaushalten verstärkt Einzug gehalten. Dennoch existieren innerhalb virtueller Welten noch viele ungelöste Fragestellungen, insbesondere zum Thema Interaktion mehrerer Nutzer in der gleichen virtuellen Umgebung.
... Haptic feedback uses the sense of touch to convey additional information, such as object collisions, through vibration cues [11]. This feedback can decrease the difficulty of a cooperative task [12]. ...
Poster
Full-text available
When users are interacting in a collaborative virtual environment, it can neither be guaranteed that every user has the same input device nor that they have access to the same information. Our research aims at understanding the effects of such asymmetries on user embodiment in cooperative multi-user environments. To do this, a generic interaction application is needed, one which can be altered quickly to generate differences in information distribution and interaction possibilities. In this paper we therefore present the development of such a prototyping platform for cooperative interaction between two users in a stereoscopic cooperative virtual world. A flexible software framework enables us to quickly generate physics-based puzzles for the users to solve together. To change the information a user gets, we incorporate "special views" for each person. To diversify interaction fidelity, an easily expandable array of input devices is supported, e.g. Mouse/Keyboard, Novint Falcon, Razer Hydra, ART Flightstick, etc. These devices can provide additional information to a user, like haptic feedback, but they can also restrain a user by providing fewer degrees of freedom. Additionally, they can be used to compare different interaction metaphors, like handover, object possession, or picking behavior.
... Such multimodal communication streams can positively affect social presence (Bailenson et al., 2008; Bowman & McMahan, 2007; Brooks, 1999; Bryson, 1996; Carlson, 2009; Leung & Chen, 2001; Roussos et al., 1997; Streuber & Chatziastros, 2007). Thirdly, virtual worlds can be viewed as a closely related concept to immersive multi-user virtual environments. Unlike IMUVEs, virtual worlds are not only characterized by immersion, a feeling of presence, and social interaction but are also a persistent online environment, where a large population of users can interact over time. ...
Article
Collaborative learning activities apply different approaches in-class or out-of-class, which range from classroom discussions to group-based assignments, and can involve students more actively as well as stimulate social and interpersonal skills. Information and communication technology can support collaboration; however, many pre-existing technologies and implementations have limitations in terms of the interpersonal communication perspective, limited shared activity awareness, and a lack of a sense of co-location. Virtual 3D worlds offer an opportunity to mitigate or even overcome these issues. This book chapter focuses on how virtual 3D worlds can foster collaboration both between instructors and students and between student peers in diverse learning settings. Literature review findings are complemented by the results of practical experience in two case studies of collaborative learning in virtual 3D worlds: one on small-group learning and one on physics education. Overall, the findings suggest that such learning environments are a promising alternative that allows participants to meet more easily and spontaneously, and that an integrated platform with a set of tools and a variety of communication channels can reproduce real-world phenomena as well as offer entirely different ones. On the negative side, there are usability issues related to the technical limitations of 3D world platforms and applications, which reduce the potential for learning in such collaborative virtual environments.
Conference Paper
When users are interacting collaboratively in a virtual environment, it cannot be guaranteed that every user has the same input device or that they have access to the same information. Our research aims at understanding the effects of such asymmetries on user embodiment in collaborative virtual environments (CVEs). To do this, we have developed a prototyping platform for cooperative interaction between two users. To change the information a user has, we combine stereoscopic image separation techniques to give each person a different view, similar to the "special views" of Agrawala et al. (1997). Furthermore, an easily expandable array of input devices is supported, e.g. Mouse/Keyboard, Novint Falcon, Razer Hydra, ART Flightstick, Leap Motion, etc. These devices can provide additional information to a user, like haptic feedback, but they can also be used to restrain, for example by providing fewer degrees of freedom.
Conference Paper
Full-text available
Playing computer games has become a social experience. Hundreds of thousands of players interact in massively multiplayer online games (MMORPGs), a recent and successful genre descending from the pioneering multi-user dungeons (MUDs). These new games are purposefully designed to encourage interactions among players, but little is known about the nature and structure of these interactions. In this paper, we analyze player-to-player interactions in two locations in the game Star Wars Galaxies. We outline different patterns of interactivity, and discuss how they are affected by the structure of the game. We conclude with a series of recommendations for the design and support of social activities within multiplayer games.