A Subjective Virtual Environment for Collaborative Information Visualization
Department of Computer Science,
University of Nottingham, UK
Department of Numerical Analysis and Computing Science,
Royal Institute of Technology, Sweden
ABSTRACT

Although it is generally desirable that users in a collaborative virtual environment should perceive it in the same way, we argue that in some situations it may be useful to allow users’ views of the environment to diverge, allowing each user to tailor their view to one that best suits their needs whilst still allowing some form of collaboration. We refer to such environments as subjective environments. In this paper we describe an initial prototype of a subjective visualization system and the techniques used in its implementation.
1. Introduction

In the field of abstract information visualization it is possible to imagine several techniques for visualizing the same data set. Since the data set has no intrinsic “natural” representation (unlike a 3D CAD model, for example), the choice of a particular technique will depend on the task at hand or on the preferences of the user. It is therefore natural for visualization systems to present the user with a number of choices which govern the nature of the visualization presented to them. Since people need to work together, there is some interest in the field of Populated Information Terrains (PITs) [Benford’95a], in which both users and information are embodied in a Collaborative Virtual Environment (CVE).
However, using most current systems all users are forced to use the same representation of the
information and thereby trade flexibility for the ability to collaborate in the use of the information. In
this paper we describe a prototype environment that both allows multiple users to collaborate and
communicate in a shared space and allows the environment to contain viewer dependent features. We
refer to this as a subjective [Snowdon’95] environment.
Section 2 will introduce the concept of Populated Information Terrains and in section 3 we will provide a brief justification for extending the PITs concept to include subjectivity. Sections 4 and 5 will describe how we implemented a subjective PIT. Finally, in section 6, we shall conclude with a brief discussion of the trade-offs and potential problems with our current implementation.

1 This work was performed at the Swedish Institute of Computer Science.
2. Populated Information Terrains
The concept of Populated Information Terrains [Benford’95a] combines ideas from the fields of
Computer Supported Cooperative Work (CSCW), virtual reality and databases to create multi-user
virtual environments which support visualization of, and cooperative work within, shared data. The
underlying philosophy of PITs is that they should support people in working together within data as
opposed to merely with data. In other words users are explicitly embodied in the virtual environment
and not relegated to the status of external agents whose presence is merely implied as a side effect of
their actions. There are several reasons why we believe PITs are useful; these include:

- VR technology seems to hold great promise for constructing dynamic, interactive representations of abstract data that will enable users to work with the data in a more intuitive fashion.
- 3D virtual environments also provide a natural mechanism for representing other users, and the inclusion of support for other media such as real-time audio and/or textual communication allows very rich human-human communication to take place.
Possible applications include browsing large databases, browsing library catalogues, navigating
through large hypertext systems such as the World Wide Web or supporting teams of programmers
developing large pieces of software. Section 2.1 will describe VR-VIBE, an existing PIT developed using the SICS DIVE [Hagsand’96, Carlsson’93] multi-user virtual environment.
2.1 VR-VIBE
VR-VIBE [Benford’95b] is a multi-user 3D visualization of a collection of documents or document
references. The visualization is structured using a 3D spatial framework of keywords, called Points
of Interest or POIs. The spatial position of a document icon indicates the relative attraction of a document to the different POIs, where attraction is estimated in terms of thematic similarity. Thus, an icon midway between two POIs is equally relevant to both, while an icon close to a particular POI is relevant to that POI only.
Absolute relevance is depicted by the size and brightness of the document’s representation; the more
relevant the document, the bigger the icon and the brighter the colour. As spatial location only determines relatedness to each POI in the current query, relative icon size and brightness differentiate documents which are slightly relevant to all POIs from documents which are highly relevant to all POIs.
To determine these relative and overall measures, VR-VIBE employs a simple text matching
algorithm (i.e. counting the number of occurrences of keywords in titles, abstracts and the body of the
document) to compute a match between each document in the store and each POI. This computation
results in a normalised score which is translated into the appropriate visual representation.
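The text-matching step can be sketched as follows. This is a minimal illustration, not the VR-VIBE source: the function names and document structure are assumptions, and only the behaviour the paper states (counting keyword occurrences in title, abstract and body, then normalising) is reproduced.

```python
# Hypothetical sketch of VR-VIBE's simple text-matching algorithm.
# Function and field names are illustrative assumptions.

def keyword_count(text, keyword):
    """Count case-insensitive whole-word occurrences of a keyword."""
    return text.lower().split().count(keyword.lower())

def poi_scores(document, pois):
    """Return one raw match score per POI keyword for a document."""
    text = " ".join([document["title"], document["abstract"], document["body"]])
    return [keyword_count(text, poi) for poi in pois]

def normalised_scores(document, pois):
    """Normalise raw scores so per-POI weights sum to 1.

    The relative weights would drive the icon's position between the
    POIs; the overall score would drive its size and brightness.
    """
    raw = poi_scores(document, pois)
    total = sum(raw)
    if total == 0:
        return [0.0] * len(pois), 0
    return [r / total for r in raw], total

doc = {"title": "virtual environments",
       "abstract": "collaborative virtual environments for visualization",
       "body": "visualization of data in virtual environments"}
weights, overall = normalised_scores(doc, ["virtual", "visualization"])
# weights give relative attraction to each POI; overall gives absolute relevance
```

A real implementation would also need stemming and stop-word handling, which the paper does not discuss.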
Figure 1 illustrates VR-VIBE at work. Here 5 POIs are specified, represented by green octahedrons.
A white sphere above an octahedron indicates the POI is currently ‘active’. Blue blocks represent
documents. Also visible in the picture are three other users, all with differing styles of embodiment
ranging from the simple green T-shaped “blocky” on the right of the 3D view to the more complex
“cartoon” embodiment on the left. Users can navigate within the 3-D space, select individual
documents, control the display according to the dynamic relevance threshold and “drag” POIs to new
locations. Dragging alters the shape of the document icon space; icon location changes can be used to
gain further information about relative attraction to different POIs. In addition, new searches can be
specified by creation of new POIs and/or by specifying keywords. Selecting a document icon causes
some summary information to be displayed. If a document is available via the World Wide Web
[Berners-Lee’92], then VR-VIBE can invoke a web browser to display the entire document contents.
Figure 1: The world of documents in VR-VIBE
We chose VR-VIBE as our test application because we had access to the source code, and therefore could adapt it for subjective operation, and because it already supports a number of visualization styles, allowing the creation of subjective views that may differ substantially.
3. The need for subjectivity in PITs
Most current multi-user virtual reality systems provide a highly objective virtual environment. That
is, all users see the environment in the same way, albeit from different viewpoints, and all users see the same objects in the same places with the same appearances. However, experiences with the strictly objective WYSIWIS (What You See Is What I See) paradigm for 2D interfaces suggest that
collaborative applications in general require some degree of subjectivity, leading to variants such as
``Relaxed WYSIWIS’’ [Stefik’87].
Figure 2: Alternative views of the same dataset shown in figure 1.
Visualizations of abstract information, such as those provided by VR-VIBE, can have a great deal of
flexibility in the choice of visualization style since the source data has no intrinsic appearance. This is
illustrated by Figure 2, which shows alternative VR-VIBE visualizations of the same dataset as that
shown in Figure 1. Given this freedom of choice it is likely that users will form their own preferences
for the display of particular datasets. If the virtual environment does not support subjective views then
the users are forced to agree on a common (possibly non-optimal) visualization style. However, if the
virtual environment is capable of supporting subjective views then users are free to choose their own
preferred visualization style. We therefore hypothesize that the PITs concept would benefit if it were
extended to allow subjective views of the data and other users. In order to achieve this we need to be
able to do the following:
- Extend the virtual environment so that it is capable of displaying different representations of the same artefact to different users.
- Find appropriate techniques for representing users to each other in the case where the users are experiencing the virtual environment in quite different ways.
The next sections will show how we attempt to solve these problems.
4. Subjective VR-VIBE
In order to support subjective visualizations VR-VIBE needs to be capable of generating a different
visualization for each user. It does this by explicitly separating objective and subjective state
information. The objective state contains the content of the document store and other information that
is not dependent on the nature of the generated visualization. There is exactly one copy of the
objective state information. The subjective state contains the objects used to create a specific
visualization and any other parameters that are dependent on the nature of the visualization
(configuration options etc.). There is an instance of the subjective state for each subjective view
generated by VR-VIBE.
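The objective/subjective split described above can be sketched as follows. The class and field names here are illustrative assumptions, not taken from the VR-VIBE source; the sketch only captures the stated invariant that there is exactly one objective state but one subjective state per view.

```python
# Minimal sketch of separating objective from subjective state
# (names are hypothetical, not from the VR-VIBE implementation).
from dataclasses import dataclass, field

@dataclass
class ObjectiveState:
    # Exactly one copy: the document store and anything else that does
    # not depend on how a particular visualization is rendered.
    documents: list
    pois: list

@dataclass
class SubjectiveView:
    # One instance per user: visualization objects plus per-view
    # configuration options (visualization style, thresholds, etc.).
    owner: str
    style: str = "cube"
    icons: dict = field(default_factory=dict)

store = ObjectiveState(documents=["doc1", "doc2"], pois=["virtual"])
views = {u: SubjectiveView(owner=u) for u in ["alice", "bob"]}
views["bob"].style = "sphere"   # views diverge; the objective store is shared
```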
The latest versions of DIVE allow a virtual world to be hierarchically partitioned between several
multicast groups. Associating a multicast group with each user provides an efficient mechanism for
implementing subjective (viewer-dependent) views of parts of a DIVE world as each user-client is
only sent the updates relating to the top-level (objective) portion of the world and the portion
contained in their own multicast group.
VR-VIBE constructs a subjective environment in the following manner: The main DIVE world
(which also has a multicast group associated with it) contains only invisible objects containing
objective state information. Each user is required to use an embodiment with a subjectivity flag set, which upon entry into the world causes the creation of a multicast subgroup unique to that user. When VR-VIBE detects that a new user has entered, it creates copies of each object and places them in the new group. These copies, as well as the user’s embodiment, are thus invisible to everyone except the user in question. The next section will explain how users retain the ability to communicate.
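The construction step above can be sketched as follows. This is a hypothetical model only: DIVE's actual multicast API is not shown, and the `World` class and its methods are invented for illustration. It demonstrates the key point that each entering user receives private copies of the objective objects, which other users' views never see.

```python
# Hypothetical sketch of subjective world construction: an objective
# top-level group holds the shared state, and each entering user gets
# a private subgroup populated with copies of those objects.
import copy

class World:
    def __init__(self, objective_objects):
        self.objective = objective_objects    # replicated to every client
        self.subgroups = {}                   # one per-user private group

    def on_user_enter(self, user):
        # Copy each objective object into the new user's subgroup; the
        # copies (and the user's embodiment) are invisible to others.
        group = {name: copy.deepcopy(obj) for name, obj in self.objective.items()}
        group["embodiment"] = {"user": user, "visible_to": [user]}
        self.subgroups[user] = group
        return group

world = World({"doc_store": {"docs": ["a", "b"]}})
world.on_user_enter("alice")
world.on_user_enter("bob")
# bob's private view can now diverge without affecting alice or the
# shared objective state:
world.subgroups["bob"]["doc_store"]["docs"].append("c")
```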
5. Artifact Centred Coordinates
Since the artifacts in one user’s view may be in very different locations in another user’s view we
cannot place user embodiments at the same world coordinates in each view. We must therefore
consider some other mechanism for positioning user embodiments in subjective views. We observe
that an empty space conveys no useful information; it is the artifacts, other users in the space and
their actions in relation to one another that provide us with information. We therefore consider the position and orientation of users in terms of the artifacts they are accessing rather than in terms of their location in world coordinates. We do this using a technique we term artifact-centred coordinates, which uses the artifacts the user is aware of to determine the position and orientation of that user in other subjective views.
The basic concept behind artifact-centred coordinates is to compute a user’s awareness of a set of artifacts, find which of these artifacts exist in the subjective view in which we want to represent the user, and place the representation of the user at a position and orientation determined by the locations, in the target view, of the artifacts that the user is aware of.
A translator process is located in the same main world as VR-VIBE, and thus sees the same
subjective views being created, and subscribes to these. For each new user that enters, it will create a “pseudobody” for this user in every other private view; this is necessary since, as we noted above, firstly the user’s “true” body is invisible to all others and, secondly, the position of that body is meaningless in the reorganised space of another user.
To position the pseudobodies, the translator finds the set of artifacts that the user is considered to be
aware of by computing a “volume of interest’’ around the user which encompasses all the artifacts
that the user is currently capable of interacting with. (We assume that the user has some degree of
awareness of all artifacts which intersect their volume of interest.) An awareness function is then
applied to these artifacts to determine the degree of awareness of each artifact. This awareness function could simply use the distance between the user and the artifact and how close the artifact is to the user’s line of sight; alternatively, an arbitrarily complex function could be used which takes into account, e.g., the number of accesses of the artifact and the type of the artifact itself.
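The simple form of the awareness function could be sketched as follows. The spherical volume of interest, the specific distance/gaze weighting and all numeric values are assumptions for illustration; the paper deliberately leaves the function open.

```python
# A sketch of a simple awareness function combining distance with
# angular deviation from the user's line of sight. The spherical
# volume of interest and the weighting scheme are assumptions.
import math

def awareness(user_pos, gaze_dir, artifact_pos, radius=10.0):
    """Return a value in [0, 1]; 0 means the artifact lies outside the
    user's volume of interest (modelled here as a sphere)."""
    dx = [a - u for a, u in zip(artifact_pos, user_pos)]
    dist = math.sqrt(sum(d * d for d in dx))
    if dist > radius:
        return 0.0                  # outside the volume of interest
    if dist == 0:
        return 1.0                  # co-located with the user
    # Cosine of the angle between the gaze direction and the artifact.
    norm = math.sqrt(sum(g * g for g in gaze_dir))
    cos = sum(d * g for d, g in zip(dx, gaze_dir)) / (dist * norm)
    facing = max(cos, 0.0)          # ignore artifacts behind the user
    proximity = 1.0 - dist / radius
    return proximity * facing

# An artifact straight ahead scores higher than one off to the side:
ahead = awareness((0, 0, 0), (0, 0, 1), (0, 0, 5))
aside = awareness((0, 0, 0), (0, 0, 1), (5, 0, 0))
```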
For all other views, the translator then finds the objects that correspond to the set of objects the user is aware of in her private view, and adjusts the position of the pseudobodies in those views, according to some inverse of the awareness function, such that the pseudobodies in all other views end up close to the same objects the “true” embodiment was close to in the user’s own view. It may be that the objects the user is aware of do not exist in a particular subjective view, in which case we may decide either to move the pseudobody to a neutral position or not to move it at all.
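The placement step could be sketched as follows. The paper leaves the "inverse of the awareness function" unspecified, so this sketch approximates it as an awareness-weighted average of the corresponding artifact positions in the target view; the function name and data shapes are assumptions. It also implements the neutral-position fallback for the case where none of the artifacts exist in the target view.

```python
# Hypothetical sketch of pseudobody placement: approximate the inverse
# awareness mapping as an awareness-weighted average of the positions,
# in the target view, of the artifacts the user is aware of.

def place_pseudobody(awareness_by_artifact, target_positions, neutral=(0, 0, 0)):
    """awareness_by_artifact: {artifact_id: awareness in the user's own view}
    target_positions: {artifact_id: (x, y, z) in the target subjective view}."""
    shared = [a for a in awareness_by_artifact if a in target_positions]
    total = sum(awareness_by_artifact[a] for a in shared)
    if total == 0:
        # None of the artifacts exist in this view: fall back to a
        # neutral position (one of the two options discussed above).
        return neutral
    return tuple(
        sum(awareness_by_artifact[a] * target_positions[a][i] for a in shared) / total
        for i in range(3)
    )

# The pseudobody lands nearer the artifact the user is more aware of:
pos = place_pseudobody({"d1": 0.75, "d2": 0.25},
                       {"d1": (4, 0, 0), "d2": (0, 8, 0)})
```

A fuller implementation would also derive an orientation, e.g. by pointing the pseudobody at the most-attended artifact.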