“Studierstube”
An Environment for Collaboration in Augmented Reality
Zsolt Szalavári, Dieter Schmalstieg, Anton Fuhrmann, Michael Gervautz
Institute of Computer Graphics
Vienna University of Technology
Karlsplatz 13/186/2, A-1040 Vienna, Austria
szalavari|schmalstieg|fuhrmann|gervautz@cg.tuwien.ac.at
Abstract: We propose an architecture for multi-user augmented reality with
applications in visualization, presentation and education, which we call “Studierstube”.
Our system presents three-dimensional stereoscopic graphics simultaneously to a group of
users wearing lightweight see-through head mounted displays. The displays do not
affect natural communication and interaction, making collaboration very effective.
Users see the same spatially aligned model, but can independently control their
viewpoint and different layers of the data to be displayed. The setup serves computer-supported
cooperative work and enhances the cooperation of visualization experts. This
paper presents the client-server software architecture underlying this system and details
that must be addressed to create a high-quality augmented reality setup.
Keywords: augmented reality, multi-user applications, collaboration, distributed
graphics
1. Introduction
Daß ich erkenne, was die Welt
Im Innersten zusammenhält,
Schau alle Wirkenskraft und Samen,
Und tu nicht mehr in Worten kramen.
To realize what holds the world
Together in its core,
I see all seeds and force of act
And search for words no more.
Johann Wolfgang von Goethe, Faust
We selected the project name “Studierstube”, after the play Faust by Johann Wolfgang
von Goethe, in which the leading character uses a study room for performing research
and philosophy: the Studierstube.
This paper deals with an attempt to combine two very important evolving fields:
• The method for visual improvement or enrichment of the surrounding environment
by overlaying spatially aligned computer-generated information onto a human’s
view, called Augmented Reality (AR), has potential for a broad range of applications,
including mobile context-sensitive information systems, scientific visualization, in-
place display of measurement data, medicine and surgical planning, education,
training and entertainment.
• Scientific visualization, whose primary goal is to provide insight into a complicated
problem by enriching simulation data that is then mapped and rendered to a
displayable image (Figure 1), has become important to numerous fields of science
outside of computer graphics and augmented reality, and its projects reach higher
and higher complexity.
Figure 1: Visualization pipeline (Nielson, 1990): simulation data is enriched into derived data,
mapped to an abstract visualization object, and rendered into a displayable image;
interaction in AR closes the loop back to the simulation.
As a highly interdisciplinary field, scientific visualization frequently requires experts
with different background to cooperate. Collaborators may have different preferences
concerning the chosen visual representation of the data, or they may be interested in
different aspects. An efficient collaboration requires that each of the researchers has a
customized view of the data set. At the same time, presence in the same room is
preferred because of the natural interaction during a discussion. These requirements can
uniquely be fulfilled in an augmented reality system which combines real world
experience of the collaborators and physical equipment with the visualization of the
synthetic data.
Compared to visualization in immersive virtual reality, augmented reality allows the use
of detailed physical models, whose properties cannot be matched by virtual
counterparts: arbitrarily detailed visual representation, freedom from visual and
temporal artifacts, and force feedback for free. Only those aspects of the model that cannot be seen in
reality have to be added by the computer system: For example, one could take the
physical model of an airplane or airplane wing to investigate the flow around this object,
which is simulated by computer and added to the display. Manipulation of the real-world
model (e.g., its orientation) is more intuitive and simpler to support than in a purely
virtual environment. A related example would be the use of a humanoid torso or puppet
that is overlaid with medical information from inside the human body in the style of
(Bajura, 1992).
This combination of conventional experimental work with scientific visualization and
augmented reality technology leads to the concept of an augmented laboratory, which
would provide a superior research environment in which to conduct experiments that are
executed solely inside the computer, while maintaining a conventional and familiar
work setup.
The “Studierstube” approach concentrates on the seamless combination of a physical
world workspace and an augmented environment for multiple users in three dimensions,
with unaffected social communication channels and an augmented user interface that
supports natural handling of complex data at interactive rates. This type of distributed
multi-user system requires adequate communication strategies for continuous
synchronization and real-time performance that also allow interaction with a shared
geometric database.
2. Related Work
The evolution of augmented reality started in the early days of computer graphics, when
Sutherland pioneered research on head mounted displays (Sutherland, 1968). His work
still inspires the virtual reality research community of today. Although only capable of
simple vector drawings, his prototype head mounted display was the first binocular see-
through system, effectively the first augmented reality system. Feiner et al. (Feiner,
1992) (Feiner, 1993) described a knowledge based augmented reality system. As a
demonstration, they chose to configure the system to support people with the
maintenance of laser printers. However, a lot of effort is required to generate accurate
models, and extremely precise registration is required. Bajura et al. (Bajura, 1992)
described a medical visualization system based on augmented reality techniques. A see-
through head mounted display (HMD), also developed at UNC (Holmgren, 1992),
allows geometrically correct superposition of ultrasound data of the unborn onto the
belly of the mother-to-be, so the gynecologist can examine the position of the unborn
within the mother. Another medical application of AR has been presented by State et al.
(State, 1996) for ultrasound guided needle biopsy of the breast. Sharma and Molineros
(Sharma, 1996) present a system for mechanical assembly guidance using annotations
attached to real world scenery.
Scientific visualization in virtual reality is becoming a field of interest for
many researchers. In the early 1990s, within the GROPE project at UNC, a group around
Fred Brooks produced a haptic arm-like device and a large stereo display for the
visualization and manipulation of chemical data (Brooks, 1990). Their nanomanipulator
(Taylor, 1993) allows precise manipulation of a scanning tunneling microscope and
works also with force feedback. Another important milestone for the combination of VR
and Scientific Visualization was the development of the virtual wind tunnel at NASA-
AMES by Steve Bryson. Using a BOOM device and a data glove as interaction tool
(Bryson, 1991), scientists were able to see and interact with true stereoscopic images of
a flow field visualization. A follow-up project, the distributed wind tunnel (Bryson,
1993) was developed, which divided computation in a distributed system for better
efficiency, and allowed multiple users to experience the simulation at the same time.
Collaboration in a distributed virtual environment, not necessarily limited to scientific
visualization, has been proposed by Fahlén et al. (Fahlén, 1993).
Most existing augmented reality applications are single-user setups, or do not exploit the multi-
user character of their systems. Exceptions are the CAVE-System (Cruz-Neira, 1993 a)
(Cruz-Neira, 1993 b), the responsive workbench (Krüger, 1995) and the Shared Space
(Billinghurst, 1996) which are examples of multi-user augmented reality systems. In the
CAVE users see stereoscopic 3-D scenes with LCD-shutter glasses on large projection
walls surrounding them. One user is head-tracked, so that the images on all walls
correspond to this viewer’s position. The viewers have the impression of being surrounded
by a 3-D virtual scene. A disadvantage of this system is that the presented images fit to
the head position only for one viewer; noticeable visual artifacts exist for all other
viewers. The responsive workbench uses one display area, which is built into a table
top. As in the CAVE, viewers wear LCD shutter glasses and only one user can see the
objects in correct stereoscopy. Furthermore, a relatively steep viewing angle is
necessary to achieve a good 3D impression, i.e. the viewers have to stay close to the
table. Closest to our work is the prototype implementation of Shared Space. Users
wearing head-mounted see-through displays can discuss shared information in three
dimensions floating around them in space and interact using gestures and speech
commands. As the focus of that work is on ubiquitous computing and not on in situ
cooperative work, distribution of data, information sharing and interaction techniques
face different problems than those addressed in our work.
3. The “Studierstube” approach
We propose a system capable of visualization of three-dimensional scientific data for
multiple simultaneous viewers within one room. The choice of this setting limits the
complexity of the problem, as the “real world” is limited to a room, which is
complemented by the “virtual world”. Each viewer wears a magnetically tracked see-through
HMD providing a stereoscopic real-time display, and can freely walk around
in order to observe the augmented environment from different viewpoints.
Figure 2. Three people wearing see-through glasses at a meeting, viewing a virtual globe. Note that the
table is an object in the real world, while the globe is an image projected into space by the
headset.
The mixture of real and virtual visual experience, created in our system by see-through
HMDs, is a key feature. It allows users to move around freely without fear of bumping
into obstacles, as opposed to fully immersive displays, where only virtual objects can be
perceived. It also enables a work group to discuss the viewed object, because the
participants see one another and can therefore communicate in the usual way.
Interaction with the augmented part of the scenery is maintained using high-level
interaction metaphors and tools like the Personal Interaction Panel (PIP) (Szalavári,
1997). We incorporate this new two-handed input device that supports a multitude of
interaction styles and is particularly well suited for augmented reality applications. It
unifies general control functions of Studierstube, usual 3D manipulation tasks, as well
as application-specific interaction methods. The PIP is composed of a lightweight,
notebook-sized hand-held panel and a pen, both tracked in position and orientation, and
carries the augmented interface elements for interaction.
3.1 Properties of our system
The following key properties summarize the attributes of our system:
Virtuality
Objects that are not directly accessible or that do not exist in the real world can be
viewed and examined in this environment. Investigating data sets using information
visualization becomes a task of handling almost “real” objects. Size, complexity, and
physical properties are just parameters of a simulation; they are no longer constraints
on the analysis.
Augmentation
Real-world objects can be augmented with spatially aligned information. This allows
smooth extension of real objects with virtual properties in design processes, like
variations of new parts for an existing system. Superimposed information can also
incorporate enhancing elements for real objects, like descriptions or guidance in training
or education situations, which we call annotations.
Multi-user support
A situation where multiple users congregate to discuss, design, or perform other types of
joint work is generally categorized as CSCW (computer-supported cooperative work).
Much research has been devoted to the question of how conventional software and
desktop computers can be enhanced with measures to support effective group interaction.
Fortunately, a benefit of augmented reality is that sophisticated groupware mechanisms
are not really needed to perform real work. Normal human interactions (verbal, gestures,
etc.) are easily possible in an augmented reality setup, and they are probably richer than
any computer-governed interaction can ever be.
Independence
Unlike the CAVE and the Workbench, control is not limited to one guiding person while
other users act as passive observers. Each user has the option to move freely and
independently of the other users. In particular, each user may freely choose a viewpoint
with stereoscopy for correct depth perception. But not only is observation independent,
interaction can also be performed on a personal base. The semi-immersive character of
our augmentation helps to keep human communication channels open, thus improving
the quality of collaboration.
Sharing vs. Individuality
Investigated objects are in general shared among users in the sense of visibility: all
participants see the same coherent model, consistent in its state over time. Because the
visual sensation is presented directly to each user by the lightweight see-through
HMDs, the displayed data set can also differ for each viewer, as required by the
application’s needs and the individual’s choice. Personal preferences on different layers
of information can be switched on and off, as described in the next sub-section.
Interaction and Interactivity
With the support of augmented tools like the proposed PIP, visualized data can be
explored interactively. Changes inherent in the scientific simulation can be viewed
immediately. The visual components of the panel in one user’s hand can be kept private,
invisible to other users, or made public, sharing even 3D information through direct
visibility or projection onto projection walls, as described in the next section.
3.2 Augmented features
We incorporated layers and annotations as augmented features of our system.
Furthermore, we show uses of mobile tracked objects in an augmented environment.
Layers
We incorporate layers similar in concept to the ones found in technical illustration
programs or CAD packages and the work of Fitzmaurice (Fitzmaurice, 1993). Data is
separated into disjoint sets according to semantic considerations (e.g. floor plan with
walls only - furniture - measurements). Display can be turned on and off for every layer
individually. This concept is fundamental for allowing individuals to customize the
display to their needs. Users may see the same model and at the same time not see the
same model, as everyone sees a different set of aspects of the same thing. Aside from
personal taste and interest, this is useful if professional people (e.g. an architect) talk to
inexperienced people (e.g. customer), or if people with different interest (e.g. designer
and engineer) collaborate.
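To make the mechanism concrete, the following C++ sketch shows per-user layer
filtering as a bitmask over a shared scene database. It is purely illustrative; all names
are ours, and the actual Studierstube implementation (built on Open Inventor) is not
shown here.

    #include <cstdint>
    #include <iostream>
    #include <string>
    #include <vector>

    // Each datum in the shared database is tagged with a layer bit; every
    // user holds a personal mask selecting the layers rendered for them.
    enum Layer : std::uint32_t {
        FLOOR_PLAN   = 1u << 0,   // e.g. walls only
        FURNITURE    = 1u << 1,
        MEASUREMENTS = 1u << 2
    };

    struct SceneObject {
        std::string   name;
        std::uint32_t layer;          // layer this object belongs to
    };

    struct User {
        std::string   name;
        std::uint32_t visibleLayers;  // personal on/off switches
    };

    // All users traverse the same shared model, but each one only sees
    // the objects whose layer bit is set in their personal mask.
    void renderFor(const User& u, const std::vector<SceneObject>& scene) {
        std::cout << u.name << " sees:";
        for (const SceneObject& o : scene)
            if (o.layer & u.visibleLayers)
                std::cout << ' ' << o.name;
        std::cout << '\n';
    }

    int main() {
        std::vector<SceneObject> scene = {
            {"walls", FLOOR_PLAN}, {"sofa", FURNITURE},
            {"dimensions", MEASUREMENTS}};
        renderFor({"architect", FLOOR_PLAN | MEASUREMENTS}, scene);
        renderFor({"customer",  FLOOR_PLAN | FURNITURE},    scene);
    }

The same shared database thus yields a different, personally filtered view per user,
which is exactly the "same model, different aspects" behavior described above.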
Annotations
Augmentation is not necessarily limited to 3D graphics added to the physical world.
General multi-media data can be useful (e.g. sound cues), but what we consider
absolutely essential to support are textual annotations. While it is often true that
illustrations and graphics make difficult concepts clearer than textual explanations can,
for complicated models a legend that explains important parts and gives names is just as
important. The system provides a possibility to link text to specific 3D points of a
model. The text is then displayed “in place”, but in 2D overlaid onto the 3D image
similar to (Rekimoto, 1995). As the user moves his viewpoint, the text stays screen-
aligned so that it is always clearly readable. The system takes care that multiple text
elements neither overlap nor occlude each other. By means of the layer mechanism,
individual annotation sets can be switched on and off. The annotation concept will be
especially useful if physical props (e.g. demonstration objects or mock-ups for
education) are used, but it will also improve the quality of purely virtual presentations.
Annotations can be created, edited and directly placed or moved in the augmentation
with the Personal Interaction Panel. A three dimensional drag and drop mechanism
gives a natural interactive feeling of handling spatially aligned multi-media data.
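A minimal sketch of such screen-aligned placement follows, under our own assumptions:
a column-major view-projection matrix as an OpenGL-style renderer would supply it,
and a simple greedy nudge standing in for the system's actual de-overlap strategy, which
is not documented here.

    #include <array>
    #include <cmath>
    #include <string>
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Vec2 { float x, y; };
    using Mat4 = std::array<float, 16>;   // column-major view-projection

    struct Annotation {
        std::string text;    // legend entry linked to a model point
        Vec3        anchor;  // the 3D point the text is attached to
    };

    // Project the 3D anchor into normalized screen coordinates each frame;
    // the text itself is laid out in 2D at that position, so it stays
    // screen-aligned and readable as the user moves their viewpoint.
    Vec2 projectAnchor(const Mat4& m, const Vec3& p) {
        float x = m[0]*p.x + m[4]*p.y + m[8]*p.z  + m[12];
        float y = m[1]*p.x + m[5]*p.y + m[9]*p.z  + m[13];
        float w = m[3]*p.x + m[7]*p.y + m[11]*p.z + m[15];
        return { x / w, y / w };          // perspective divide
    }

    // Greedy de-overlap: nudge a label downward until it no longer
    // collides with any previously placed label.
    void layoutLabels(std::vector<Vec2>& pos, float minDist) {
        for (std::size_t i = 1; i < pos.size(); ++i) {
            bool moved = true;
            while (moved) {               // repeat until label i is free
                moved = false;
                for (std::size_t j = 0; j < i; ++j)
                    if (std::fabs(pos[i].x - pos[j].x) < minDist &&
                        std::fabs(pos[i].y - pos[j].y) < minDist) {
                        pos[i].y -= minDist;   // move below earlier label
                        moved = true;
                    }
            }
        }
    }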
Tracked mobile objects
Static objects become part of the augmentation in a simple setup phase. Geometric
properties such as size and position have to be registered for inclusion in an
environment. To include a real-world object completely in the system and an ongoing
simulation, the system needs information about changes in position, orientation and
state in addition to the static properties, so that the object can interact with other parts
of the augmentation.
For this reason we introduce tracked mobile objects as functional part of our system,
which can be moved, held in hand by users, passed on from user to user and so forth.
Typically, the number of such objects will be small, but their role in the application will
be significant. The main uses of mobile objects are manipulation tools such as the PIP,
and physical models (mock-ups) that are augmented with supplementary information not
physically available (e.g. isolines of stress). Technically, the position of these objects is
determined by a dedicated tracking sensor, and a representation of the physical model is
rendered in background color to resolve the occlusion problem among physical and
virtual objects.
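In OpenGL terms, this "phantom" trick can be sketched as follows. On an optical
see-through display the background color reads as transparent, so writing the stand-in
geometry to the depth buffer only has the same visible effect as drawing it in background
color. drawPhantomModel() and drawVirtualObjects() are placeholders for the
application's own rendering calls, not Studierstube API.

    #include <GL/gl.h>

    void drawPhantomModel();    // registered stand-in for the tracked prop
    void drawVirtualObjects();  // the actual augmentation

    void renderFrame() {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glEnable(GL_DEPTH_TEST);

        // 1. Render the phantom of the physical object, positioned by its
        //    tracking sensor, into the depth buffer only: it leaves no
        //    visible pixels but establishes correct depth values.
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
        drawPhantomModel();
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

        // 2. Render the virtual objects; fragments behind the phantom now
        //    fail the depth test and disappear, so the real object appears
        //    to occlude the virtual ones correctly.
        drawVirtualObjects();
    }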
4. System Overview
We consider our system to work in a stationary environment, e.g. a room, so we can
assume sufficient network bandwidth for communication between parts of our client-
server approach, both for geometric and application data, as well as supporting
information like tracker data. The representation of this data, the communication of
changes to it, and the interaction between users and system are crucial factors that call
for detailed presentation.
4.1 Data representation and modeling
We use three different kinds of 3D models, each for a different purpose:
Static data
Static data describes the geometry of the presentation room (walls, windows, doors etc.).
Because this kind of data is completely static (it does not change at all), it can be prepared
in advance so that occlusion by real objects is taken into account.
Data representing mobile objects
Within our environment the system also supports mobile objects, which can have virtual
data representing or supporting them. This type of data differs from static registration
data, as it has to be updated in real-time during operation.
Display data
Data presented or added to the environment is generally handled as display data. This
data is shared between the Studierstube and the underlying simulation. Simulation
engines have to provide visual output in the same format, so that inclusion in the
geometric database of the Studierstube is rather simple, but major changes to this
database can still be controlled directly by the application.
4.2 Client-Server Approach
We propose a software architecture for our augmented reality system, which is based on
a client-server structure. A server holds a database of all data types, including
registration, mobile object and display or application data. Users connect to this
Environment Server via a network using client software. The client obtains a replica of
the database from the server, which is used by the client locally to render the image
presented to the user.
Except for special customizations, the view that concurrent users have of the scene
(position, color of objects etc.) must be consistent. As there are multiple local copies of
an object, any change made to the presented scene (e.g. a change of an object’s color)
must be propagated to the other replicas. This is done by sending a message to the
server, which in turn distributes it to the other participants. As such update events
happen only occasionally (note that tracker data is handled separately!), the improved
consistency outweighs the longer communication path through the server.
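The propagation scheme might look like the following sketch; the message layout and
all function names are our assumptions, not the actual Studierstube protocol.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // A small, self-describing update event, e.g. "object 17 changed color".
    struct UpdateMsg {
        std::uint32_t objectId;   // which replica to modify
        std::uint32_t field;      // which attribute (color, transform, ...)
        float         value[4];   // new payload
    };

    struct ClientConn { int socket; };

    void applyToMasterDatabase(const UpdateMsg&); // server-side copy
    void sendTo(ClientConn&, const UpdateMsg&);   // transport placeholder

    // A client that edits an object sends one message to the server; the
    // server updates its master database and fans the event out to all
    // other participants, keeping every replica consistent.
    void onClientUpdate(std::vector<ClientConn>& clients,
                        std::size_t sender, const UpdateMsg& msg) {
        applyToMasterDatabase(msg);
        for (std::size_t i = 0; i < clients.size(); ++i)
            if (i != sender)            // the originator already applied it
                sendTo(clients[i], msg);
    }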
Tracker data is managed by a special tracker demon running on the Tracker Server. The
quality of tracking is crucial to the quality of the experience, so a separate machine is
dedicated to the tracking. The tracker demon is continuously running, and clients can
connect at will to obtain a stream of tracker data. Our system involves multiple tracked
points (head tracking for multiple users, hand/pointer tracking, tracking of mobile
objects). All the data from these tracked points influences the state of the scene and is
therefore propagated to the connected clients as a bundle, which improves throughput
and consistency of the data.
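A bundled tracker packet in this spirit could be laid out as follows; the field names and
the size limit are illustrative assumptions, not the actual wire format.

    #include <cstdint>

    struct Pose {
        float position[3];      // x, y, z
        float orientation[4];   // quaternion
    };

    // One packet carries a simultaneous snapshot of all tracked points
    // (heads, pens, panels, mobile props), so every client sees a mutually
    // consistent set of poses and per-point messages are avoided.
    struct TrackerBundle {
        std::uint32_t sequence;     // detects lost or reordered packets
        std::uint32_t numStations;  // valid entries in pose[]
        Pose          pose[16];     // fixed upper bound on tracked points
    };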
The proposed overall system architecture can be seen in Figure 3.
Figure 3. System architecture: The augmented reality environment is maintained by a server that takes
care of the synchronization needs of the clients and interoperates with the simulation backend.
The clients are responsible for displaying the environment. A tracker server manages input
devices.
4.3 Visualization loop
A Simulation Engine is required to provide the data for the scientific visualization task
in our implementation. This data can be precomputed and loaded into the system at
runtime. Simple visualizations such as analytical dynamical systems can be hand-coded.
However, a capable simulation system is better suited to address the diverse needs of
multiple visualization tasks, and also eases development. In previous projects, we have
used AVS (AVS, 1992) to create scientific visualization data. Its data flow concept
allows export of the data in almost any desired format and lends itself naturally to an
integration into our system architecture.
A loose coupling is defined between AVS as the computational back end and the
visualization server that coordinates interaction with the model in the Studierstube.
Visualization data is exported from AVS to the visualization server that takes care of
distribution of the data among the Studierstube’s clients. Computational steering is
achieved by using special input modules for AVS that accept new values for simulation
parameters from the Studierstube. If re-generation of the model or its parts with
modified parameters is reasonably fast, real-time or near real-time steering can be
achieved.
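Seen from the Studierstube side, this steering cycle can be summarized by the following
C++ sketch; the poll/send helpers are hypothetical names for the coupling to the AVS
input modules described above.

    struct SteeringParam { const char* name; float value; };
    struct Geometry { /* converted simulation output */ };

    bool running();
    bool pollParameterChangeFromPIP(SteeringParam& out);
    bool pollGeometryFromAVS(Geometry& out);
    void sendToAVSInputModule(const SteeringParam& p);
    void distributeDisplayData(const Geometry& g);  // via Environment Server
    void renderLocalFrame();

    void steeringLoop() {
        SteeringParam p;
        Geometry g;
        while (running()) {
            // Forward parameter changes made on the PIP to the simulation.
            if (pollParameterChangeFromPIP(p))
                sendToAVSInputModule(p);     // triggers re-simulation in AVS
            // Merge regenerated geometry back into the shared display data.
            if (pollGeometryFromAVS(g))
                distributeDisplayData(g);    // fanned out to all clients
            // Viewpoint changes and simple manipulation never wait for AVS;
            // they are handled locally with real-time response.
            renderLocalFrame();
        }
    }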
Figure 4. Integration of AVS-DynSys3D as a Simulation Engine in Studierstube
Modifications of the visualization data that do not involve the simulation (such as
rotating the simulated model) can be carried out in a closed loop by the Studierstube
system alone and do not pass data between Studierstube and AVS. Such simple
interactions are not affected by the performance penalty created by invoking a complex
software system such as AVS and can therefore always be carried out with real-time
response and high fidelity.
The distribution and consistency of the shared geometric database plays a significant
role in the quality of our system. To handle coherence and to merge the interactions of
multiple users with the communication between server and simulation engine, we base
the exchange between all parts of our client-server environment on sophisticated
protocols and message-passing algorithms. We are currently developing an in-house
communication standard for connecting three-dimensional user interfaces to
visualization systems.
4.4 Interaction tools
Interaction with the augmented part of the scene is performed with the Personal
Interaction Panel, a unique tool that combines physical and virtual attributes: the
physical nature of the pen and panel makes it a very simple, yet effective and precise
device for interaction that supports tactile feedback and has good ergonomics.
At the same time, the surface of the panel is a virtually unlimited display for
computer-generated (augmented) information.
There are many different ways to use the PIP as an interaction tool in the augmented
environment; here we describe features for general tasks, and in the application section
those supporting our implementation of a scientific visualization environment.
The pen alone can be used for any 3D pointing operation and direct manipulation, where
a 3D mouse (6 degrees of freedom) is normally used. This feature is integrated with the
extended PIP functionality, so that the PIP supports a superset of “standard” 3D
operations in virtual and augmented reality.
A conventional 2D computer display can be projected onto the panel, supporting a 2D
desktop metaphor better than “flying menus”, so traditional 2D user interaction and
parameter manipulation remain possible. In addition to “flat” 2D user interface elements,
three-dimensional widgets that “float” above the panel’s surface are supported (e.g.,
selection of a point on a sphere); clipboard functionality and drag-and-drop in 3D can
also be implemented.
Using the pen and panel, a snapshot camera metaphor has been implemented. The
direction of the pen orients a virtual camera, and the resulting snapshot is shown on the
PIP for immediate feedback.
Multiple navigation metaphors are supported by two handed interaction as featured by
the PIP: among them are the use of hand-held miniatures (compare (Pausch, 1995)),
specifying the direction of movement with the pen, and “spaceship” control gadgets
(2-D buttons or 3-D widgets) on the panel’s surface.
The general controls for Studierstube can easily be made available via the PIP, so
reconfiguration of the application can largely be achieved without leaving the
augmented environment. For example, loading a new model can be done with a
graphical file selector presented on the PIP.
4.5 Implementation details
Our current implementation of the system described above provides an environment
for two users. The hardware configuration includes i-Glasses head-mounted see-through
displays and a Polhemus Fastrak tracking device connected to a tracker server PC.
Tracker data is transmitted over Ethernet using TCP/IP protocols and multicast
technology. Rendering is done on Silicon Graphics workstations (Maximum Impact
graphics) using Open Inventor libraries. The hardware of the Personal Interaction Panel
consists of a lightweight wooden panel and a plastic pointer, both tracked in position
and orientation with Fastrak receivers. From our current implementation we conclude
that for high-fidelity augmented reality, precise registration of the real world with the
augmented display is crucial, and our current static registration is barely sufficient.
Nevertheless, our experience shows that users feel comfortable and that working in the
environment is pleasant. Concerning the tracking problem, enhancement of registration
by hybrid tracking technology is currently under development in cooperation with the
Vision Group of the Graz University of Technology as part of a parallel project.
5. Applications in Scientific Visualization
As described above, we set the focus of our applications to scientific visualization.
Augmented reality for scientific visualization can provide an intuitive, even transparent,
interface for computational steering. Consequently, a test case was needed in the
beginning that is simple enough for interactive steering even on conventional
workstations, yet complex enough to be interesting to researchers working in the field.
The Wonderland Model
Following a previous cooperation with researchers from econometrics, we first
concentrated on population models like the “Wonderland Model”, where for example
the interaction between population growth, economic activity and environmental impact
is modeled (Gröller, 1996). Changing certain parameters of such systems only slightly
may have significant impact on their long-term behavior, making interactive
computational steering essential for understanding them. The simulation of such systems
requires numerical approximation of differential equations that is fast enough for
interactive computational steering.
Figure 5. The Wonderland Model (Gröller, 1996) on the Personal Interaction Panel in Studierstube
Dynamical systems
Based on our first experiments with the integration, we generalized our concept of
connecting a simulation server to Studierstube and integrated DynSys3D, a
multi-purpose workbench for the rapid development of advanced visualization
techniques in the field of three-dimensional dynamical systems (Löffelmann, 1997)
(Fuhrmann, 1997). Standard visualization techniques including stream lines,
stream surfaces, and particles support the illustration of the investigated systems.
One design guideline of this system, namely that all of its modules have to produce
standard AVS geometry, was very important for the integration. A simple conversion
utility that converts AVS geometry into the display data-format of Studierstube (Open
Inventor) was sufficient to exchange geometric information. Interaction messages from
the Environment Server are sent to the visualization system’s input modules as AVS
geometry items. To prove our concept and test the stability of the system, the following
DynSys3D applications were selected as representative examples.
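As an aside, the conversion utility mentioned above can be sketched as follows; the flat
triangle-list input is our assumption standing in for the exported AVS geometry, while
the Open Inventor calls are standard API.

    #include <Inventor/SbLinear.h>
    #include <Inventor/nodes/SoSeparator.h>
    #include <Inventor/nodes/SoCoordinate3.h>
    #include <Inventor/nodes/SoIndexedFaceSet.h>

    // Hypothetical converter: a flat triangle list (standing in for the
    // geometry exported by AVS) becomes an Open Inventor scene graph.
    SoSeparator* toInventor(const float* xyz, int nVerts,
                            const int* tri, int nTris)
    {
        SoSeparator* root = new SoSeparator;   // caller is expected to ref()
        SoCoordinate3* coords = new SoCoordinate3;
        for (int i = 0; i < nVerts; ++i)
            coords->point.set1Value(i,
                SbVec3f(xyz[3*i], xyz[3*i + 1], xyz[3*i + 2]));
        root->addChild(coords);

        SoIndexedFaceSet* faces = new SoIndexedFaceSet;
        int k = 0;
        for (int t = 0; t < nTris; ++t) {      // -1 terminates each polygon
            faces->coordIndex.set1Value(k++, tri[3*t]);
            faces->coordIndex.set1Value(k++, tri[3*t + 1]);
            faces->coordIndex.set1Value(k++, tri[3*t + 2]);
            faces->coordIndex.set1Value(k++, -1);
        }
        root->addChild(faces);
        return root;
    }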
Mixed-mode Oscillations
A model we investigated together with colleagues from our econometrics department is
the 3D autocatalator (Milik, 1996), a simple 3D dynamical system that exhibits mixed-mode
oscillations. Such oscillating phenomena are often encountered in real-world
systems, e.g., chemical systems. Depending on the parameters of this system, either
periodic or quasiperiodic (chaotic) solutions can be found. Investigating the 3D phase
space as a direct three-dimensional augmentation provides a better understanding of the
structure of this system, and the direct control over placing streamlines or stream
surfaces makes the analysis of a given parameter set much more intuitive.
RTorus
RTorus is a ‘synthetic’ dynamical system that is very useful for demonstrating certain
properties common to many other systems. Abraham and Shaw use this model as an
example to explain several fundamental flow properties (Abraham, 1992). This
dynamical system models a coupled oscillation within three-space. Depending on the
parameters of the model, either an attracting cycle within the x-y-plane or an attracting
torus around the z-axis appears. By placing streamlines and stream surfaces
interactively, many interesting configurations of RTorus can be found, e.g., the Möbius
band. Due to the interactive response times, the RTorus system can be investigated
especially well within Studierstube; details can be obtained that would require long-term
adjustments using the standard AVS interface.
Figure 6: Investigating the RTorus on the Personal Interaction Panel
Rössler
As a rather well-known dynamical system, we also investigated the Rössler attractor in
Studierstube. Rössler is also a three-dimensional dynamical system that exhibits a
chaotic attractor if its parameters are set properly. Taking this familiar dynamical system
for analysis in Studierstube allowed us to easily compare visualization in AR to
established techniques.
Figure 7: Two users investigating the Rössler attractor in the Studierstube setup. Due to the direct
correspondence of positions in the real world and the augmentation, simple interactions, e.g.
pointing, are possible and enrich communication.
Interaction with visualization data
We enhance the expressive power of the display with interface techniques that exploit
the augmented reality setup. The PIP metaphor (see above), custom-tailored for
interaction with the dynamical system, serves as a probing tool to define 2-D cross
sections and to specify the origin of particles introduced into the flow, giving a natural
feeling of handling visualization data. The augmented reality setup also allows the use
of an additional high-resolution CRT monitor for the display of high-quality 2-D images
(e.g., the mentioned cross sections) without leaving the augmented environment.
6. Conclusions and Future Work
We presented a collaborative augmented environment setup supporting interactive
scientific visualization for multiple users. Our system provides 3D display of synthetic
data and augmentation of physical objects with geometrically aligned information. Co-workers
wear position- and orientation-tracked see-through head-mounted displays,
allowing independent choice of viewpoint. Interaction is performed using the Personal
Interaction Panel, a two-handed interface for augmented reality.
The system provides a natural working atmosphere by enriching reality with spatially
aligned information while leaving natural communication channels unaffected.
Annotations enhance the understandability of the discussed topic, while customization
of different data layers supports cooperation of experts from different fields. Direct
exploration and modification in visualization provides improved insight into complex
problems.
We have verified that true three-dimensional viewing and manipulation is indeed
superior to screen-and-mouse based interaction with complex 3D models. The tedious
work of positioning, orienting, and zooming, typical for conventional systems, can be
reduced significantly. Alternatives in operation (e.g. moving one’s head vs. rotating
the object) make exploration less computer-centric and are easy for inexperienced users
to learn, though surprising at first.
Although experiments with unskilled users show promising results regarding
acceptance, enhanced registration and correct matching of the real environment and
overlaid graphics are required. Our restricted implementation supporting two users
should be extended to a larger number of participants, allowing more complex
collaborative situations.
Connection to external modules via standardized protocols for image and interaction
data will enable a wide variety of applications. To improve the visualization
setup, we will also use the PIP’s pen as a probing tool to display local properties of the
visualization data with real-time update on the panel.
7. Acknowledgments
This work has been supported by the Austrian Science Fund (FWF) under project
no. P 12074-MAT.
8. References
Abraham, R.H., Shaw, C.D. (1992). Dynamics: The Geometry of Behavior. (Redwood
City/California: Addison-Wesley).
AVS (1992). AVS Developers Guide - Release 4. Advanced Visual Systems Inc.
Bajura, M., Fuchs, H. and Ohbuchi, R. (1992). Merging Virtual Objects with the Real
World: Seeing Ultrasound Imagery within the Patient. In proceedings of
SIGGRAPH’92: 203-210.
Billinghurst, M., Weghorst, S., Furness, T. III (1996). Shared Space: An Augmented
Reality Interface for Computer Supported Collaborative Work. In proceedings of
Collaborative Virtual Environments’96.
Brooks, F. Jr. et al. (1990). Project GROPE - Haptic Displays for Scientific
Visualization. In proceedings of SIGGRAPH’90: 177-185.
Bryson, S. (1991). The Virtual Wind Tunnel. In proceedings of IEEE Visualization’91:
17-25.
Bryson, S. (1993). The Distributed Virtual Wind Tunnel. In proceedings of
Supercomputing ’92, also in SIGGRAPH’93 Course Notes 43: 3.1-3.10.
Cruz-Neira, C., Sandin, D. and DeFanti, T. (1993 a). Surround-Screen Projection-Based
Virtual Reality: The Design and Implementation of the CAVE. In proceedings of
SIGGRAPH’93: 135-142.
Cruz-Neira, C. et al. (1993 b). Scientists in Wonderland: A Report on Visualization
Applications in the CAVE Virtual Reality Environment. In proceedings of the IEEE
1993 Symposium on Research Frontiers in Virtual Reality: 59-67.
Fahlén, L.E., Brown, C.G., Ståhl, O. and Carlsson, C. (1993). A Space Based Model for
User Interaction in Shared Synthetic Environments. In proceedings of
INTERCHI’93: 43-48.
Feiner, S., MacIntyre, B. and Seligmann, D. (1992). Annotating the Real World with
Knowledge-Based Graphics on a See-Through Head-Mounted Display. In
proceedings of Graphics Interface’92: 78-85.
Feiner, S., MacIntyre, B. and Seligmann, D. (1993). Knowledge-Based Augmented
Reality. Communications of the ACM 36(7): 53-62.
Fitzmaurice, G.W. (1993). Situated Information Spaces and Spatially Aware Palmtop
Computers. Communications of the ACM 36(7): 38-49.
Fuhrmann, A., Löffelmann, H., Schmalstieg, D. (1997). Collaborative Augmented
Reality: Exploring Dynamical Systems. In proceedings of Visualization’97.
Gröller, E., Wegenkittl, R., Milik, A., Prskawetz, A., Feichtinger, G. and Sanderson,
W.C. (1996). The Geometry of Wonderland. Chaos, Solitons & Fractals 7(12):1989-
2006.
Holmgren, D. (1992). Design and Construction of a 30-Degree See-Through Head-
Mounted-Display. Technical Report 92-030, University of North Carolina,
available at ftp://ftp.cs.unc.edu/pub/technical-reports/92-030.ps.Z .
Krüger, W., Bohn, C., Fröhlich, B., Schüth, H., Strauss, W. and Wesche, G. (1995). The
Responsive Workbench: A Virtual Work Environment. IEEE Computer 28(7): 42-48.
Löffelmann, H., Gröller, E. (1997). DynSys3D: A workbench for developing advanced
visualization techniques in the field of three-dimensional dynamical systems. In
proceedings of WSCG'97: 301-310.
Milik, A. (1996). Dynamics of Mixed-mode Oscillations. PhD thesis, Vienna University
of Technology, Austria.
Nielson, G., Shriver, B. and Rosenblum, L. (eds.) (1990). Visualization in Scientific
Computing (Los Alamitos/California: IEEE Computer Society Press).
Pausch, R., Burnette, T., Brockway, D. and Weiblen, M. (1995). Navigation and
Locomotion in Virtual Worlds via Flight into Hand-Held Miniatures. In proceedings
of SIGGRAPH’95: 399-401.
Rekimoto, J. and Nagao, K. (1995). The World through the Computer: Computer
Augmented Interactions with Real World Environments. In proceedings of UIST ‘95:
29-36.
Sharma, R., Molineros, J. (1996). Interactive Visualization and Augmentation of
Mechanical Assembly Sequences. In proceedings of Graphics Interface’96: 230-237.
State, A., Livingston, M.A., Garrett, F., Hirota, G., Whitton, M.C., Pisano, E.D. and
Fuchs, H. (1996). Technologies for Augmented Reality Systems: Realizing
Ultrasound-Guided Needle Biopsies. In proceedings of SIGGRAPH’96: 439-446.
Sutherland, I. (1968). A Head-Mounted Three Dimensional Display. Fall Joint
Computer Conference, In proceedings of AFIPS Conference 33: 757-764.
Szalavári, Zs. and Gervautz, M. (1997). The Personal Interaction Panel - A Two-handed
Interface for Augmented Reality. In proceedings of EUROGRAPHICS’97.
Taylor, R. M. et al. (1993). The Nanomanipulator: A Virtual Reality Interface for a
Scanning Tunneling Microscope. In proceedings of SIGGRAPH’93: 127-134.