
# A Simplification Architecture for Exploring Navigation Tradeoffs in Mobile VR

Carlos D. Correa and Ivan Marsic

Department of Electrical and Computer Engineering and the CAIP Center, Rutgers University

{cdcorrea, marsic}@caip.rutgers.edu

## Abstract

Interactive applications on mobile devices often reduce data fidelity to adapt to resource constraints and variable user preferences. In virtual reality applications, the problem of reducing scene graph fidelity can be stated as a combinatorial optimization problem, where a part of the scene graph with maximum fidelity is chosen such that the resources it requires stay below a given threshold and the hierarchical relationships are maintained. The problem can be formulated as a variation of the Tree Knapsack Problem, which is known to be NP-hard. For this reason, solutions to this problem involve a tradeoff that affects user navigation. On one hand, exact solutions provide the highest fidelity but may take a long time to compute. On the other hand, greedy solutions are fast but may sacrifice fidelity. We present a simplification architecture that allows the exploration of such navigation tradeoffs. This is achieved by formulating the problem in a generic way and developing software components that allow the dynamic selection of algorithms and constraints. Experimental results show that the architecture is flexible and supports dynamic reconfiguration.

## 1. Introduction

With the increasing popularity of mobile devices, many applications originally designed for powerful workstations are migrating to mobile computing systems, where resources are limited, diverse, and variable. Mobile devices are not capable of storing large datasets, and the available bandwidth limits the amount of data that can be transmitted over a wireless network. To meet these resource challenges, applications and their data representations need to take different forms on different devices.

Let us consider the scenario depicted in Figure 1. A server maintains a scene graph with a virtual world model. A mobile user connects to the server and requests download of the virtual world. Since system resources (network bandwidth, device memory) are scarce, a reduced-detail scene is sent to the mobile device. Subsequently, the user interacts with the simplified scene. In order to receive the most useful scene, the server computes the version of the scene graph that maximizes the user's utility function. The user might change this utility function at any point and request another version of the scene. This scenario, depicted in Figure 1(a), is what we call interactive simplification.

Another scenario involves real-time simplification. Here the mobile user interacts with the virtual environment such that a new simplified version of the scene must be displayed. This involves computing the simplification in real time, so that changes in the virtual world (delta), initiated by the user or a remote participant, are also displayed at the mobile device.

The problem of optimal scene graph simplification is NP-hard, since it can be reduced to a variation of the Tree Knapsack Problem [1], which we call the Exclusive Multiple Choice Tree Knapsack Problem (EMCTKP). As a result, it is not possible to compute an exact solution in polynomial time, which is particularly problematic for real-time simplification. The usual approach is to use greedy algorithms, but greedy solutions for the TKP are known to have no guarantee of optimality. This means that the simplified scene might be of little or no utility to the user.

Figure 1. Application scenarios. (a) Interactive simplification. (b) Real-time simplification.


Fidelity vs. latency is a key tradeoff for the user when navigating virtual worlds on mobile devices. On one hand, fast algorithms are needed to provide interactivity. On the other hand, optimal simplification is needed to obtain the best fidelity given the available resources. In addition, user preferences need to be considered.

In this paper we present a simplification architecture for exploring such navigation tradeoffs. The key aspects of this architecture are:

- It is independent of the simplification algorithm
- It can be configured dynamically, allowing users to express preferences for speed vs. fidelity and to define various constraints
- It shields the user and application developer from the complexity of simplification algorithms

We call this architecture the Stackable Solvers Architecture, and we test it in a scenario where a mobile PDA user navigates through a shared virtual environment.

This paper is organized as follows: Section 2 describes previous work on simplification. Section 3 describes scene graph simplification and its formulation as an optimization problem. Section 4 presents the architecture for supporting different simplification techniques. In Section 5 we show the results obtained as the user expresses a preference for speed or fidelity and sets various constraints.

## 2. Related Work

The main challenge of navigating virtual environments using mobile devices is the scarcity of resources. Mobile devices have three particular limitations: (i) computation power, which limits the number of polygons that can be displayed at interactive rates; (ii) network bandwidth, which limits the size in bytes of the 3D objects; and (iii) storage capacity, which limits the size of 3D objects stored locally on the mobile device.

An example of improving the frame rate is the work by Hekmatzada [6], which uses non-photorealistic rendering of 3D models. Approaches such as geometry compression [3] and progressive transmission [7] deal mainly with the low-bandwidth constraint.

Simplification of 3D models, however, addresses resource scarcity in general. In this paper, we focus on the optimal simplification of scene graphs. Optimal simplification is the computation of a simplified scene that maximizes fidelity while the required resources are kept below a given threshold. Simplification is accomplished with the use of impostors, which are less detailed substitutes of the original objects that require fewer resources [9].

Optimal simplification has been identified as a variation of the knapsack problem, which is known to be NP-hard. Funkhouser and Sequin [5] applied it to support adaptive frame rates in virtual environments. They used a greedy algorithm to find the best combination of impostors from a non-hierarchical set of objects. Their work was extended to hierarchical scenes [4],[9],[11],[12]. Mason and Blake [12] recognized that, although half-optimality can be guaranteed for the greedy algorithm in [5], there is no such guarantee for its extension to hierarchical scenes. They proposed a new greedy algorithm that guarantees half-optimality for hierarchical scenes when the fidelity metric has diminishing returns.

In general, there has been no effort to provide optimal solutions to the problem. This is due to the lack of a formal definition of the problem and an inherent preference for speed over fidelity.

In this paper, we provide a formulation of the problem as a variation of the Tree Knapsack Problem, identified by Cho and Shaw [1]. We call this variation EMCTKP, and we have developed an exact algorithm that provides the solution with the highest fidelity [2]. Our approach does not enforce a particular algorithm, greedy or exact, to solve EMCTKP. Instead, we allow the user to set a preference for speed or fidelity by choosing the algorithm at runtime. We also found that the problem can be modified to support different types of constraints, such as the scene completeness condition, as described in Section 3 below.

## 3. Scene Graph Simplification

The most common document representation is a tree data structure, particularly for graphics scenes. The graphics tree is called a scene graph, where each node represents an individual object or part of the scene. Semantics are embedded in the form of properties, such as geometric shape, color, position, etc. The next section defines the problem of simplification for a given tree data structure.

### 3.1. Formal Definition

Let T = (V, E) denote a tree structure with node and link sets V and E, respectively. Assume that for each node vi there is a set of impostors, denoted Impostors(vi), where |Impostors(vi)| ≥ 0. Then:

Definition 1. A simplified tree T′ = (V′, E′) of a tree T = (V, E) is a tree such that (1) V′ is closed under the predecessor operation [1], i.e., vi ∈ V′ implies predecessor(vi) ∈ V′, and (2) every leaf node of T′ corresponds either to a node vi ∈ V or to an element of Impostors(vi).

Definition 2. A simplified tree T′ = (V′, E′) satisfies the scene completeness (SC) condition if, for any leaf node v of T whose path from the root is v1, v2, …, vn, where v1 is the root and vn = v, either v1, v2, …, vn ∈ V′ or there exists a predecessor vk of v such that v1, v2, …, vk−1, vk′ ∈ V′, where vk′ ∈ Impostors(vk).


Figure 2 shows an example of scene simplification. A 3D scene of a city is represented as a tree. With the SC constraint, the simplified version must contain a representation of each node, or a representation that combines two or more elements, such as the contraction of Bldg A and Bldg B in Figure 2(b). In Figure 2(c) SC is not enforced, and the highlighted nodes in Figure 2(a) (e.g., Cars, Bldg A and Bldg B) have no corresponding representation. This relaxation of SC helps to omit elements that contribute little to the perceptual benefit of the simplified scene while consuming resources that can be allocated to more important objects.

### 3.2. Combinatorial Optimization Problem

In constrained environments, such as mobile devices, it is not enough to provide a solution that does not exceed the resource limits. It is necessary to provide the best scene graph that satisfies that constraint.

To measure how good a given scene graph is, a benefit metric is defined for each node and impostor, such that the elements the user considers more important have a greater value. Let us define for each node vi a benefit/resource pair (bi, ri), and the pairs (bi′, ri′), (bi′′, ri′′), … for the impostors of vi.

The problem of obtaining the best simplification can then be formulated as a variation of the well-known Knapsack Problem [10]. Shaw and Cho [1],[15] identified the generalization of this problem to tree structures, called the Tree Knapsack Problem. In this problem, a set of nodes is selected into the knapsack such that the total resources do not exceed a given upper limit and the sum of benefits is maximized. In addition, hierarchy constraints require that if a node is selected into the knapsack, all its predecessors are selected as well.

However, the definition of TKP does not take into account the presence of impostors. Thus, we define a new problem, called the Exclusive Multiple Choice Tree Knapsack Problem (EMCTKP), as follows:

First, let us assume, without loss of generality, that each node has exactly one impostor, i.e., |Impostors(vi)| = 1 for all nodes vi. In the following section, we show that the general case can be transformed into the case where each node has exactly one impostor.

Let xi = 1 if node vi ∈ V′ and xi = 0 otherwise, and let yi = 1 if the impostor of node vi ∈ V′ and yi = 0 otherwise. Let us also write u → v if node u is a predecessor of node v. Then we have the following optimization problem:

    Max ∑ xi bi + ∑ yi bi′                (1)

subject to

    ∑ xi ri + ∑ yi ri′ ≤ R                (2)
    xi + yi ≤ 1                           (3)
    xj + yi ≤ 1       if vi → vj          (4)
    xi ≥ xj + yj      if vi → vj          (5)
    xi, yi ∈ {0, 1}                       (6)

where (1) is the benefit function to be maximized and (2) is the upper bound on resources. Constraint (3) is the exclusivity constraint, specifying that a node and its impostor cannot be selected at the same time. Constraint (4) states that if the impostor of a node is selected, none of its descendants can be selected. Constraint (5) is the closed-under-predecessor constraint [1], which states that if either a node or its impostor is selected, its predecessor must be selected as well. Constraint (6) restricts the variables to binary values.

Figure 2. An example of scene graph simplification. (a) Original scene and its tree representation. (b) A simplification with scene completeness. (c) A simplification without preserving scene completeness. Items highlighted in (a) do not appear in (c), while they are represented in (b).
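To make the formulation concrete, the following brute-force sketch (our illustration, not the paper's implementation; the class name and the toy benefit/resource values are invented) enumerates every assignment of the xi and yi variables on a three-node tree and returns the best value of (1) among assignments that satisfy constraints (2) through (6):

```java
// Hypothetical brute-force check of the EMCTKP formulation on a toy tree:
// root v0 with children v1 and v2. All benefit/resource values are invented.
public class EmctkpBruteForce {
    static final int[] parent = {-1, 0, 0};  // parent[i] = predecessor index
    static final int[] b  = {5, 8, 7};       // node benefits b_i
    static final int[] r  = {2, 6, 5};       // node resources r_i
    static final int[] bI = {1, 3, 2};       // impostor benefits b_i'
    static final int[] rI = {1, 2, 2};       // impostor resources r_i'

    static boolean isPredecessor(int u, int v) {           // u -> v ?
        for (int p = parent[v]; p != -1; p = parent[p]) if (p == u) return true;
        return false;
    }

    /** Maximizes (1) over all assignments satisfying (2)-(6). */
    public static int solve(int maxResources) {
        int n = parent.length, best = 0, total = (int) Math.pow(3, n);
        for (int mask = 0; mask < total; mask++) {
            // One base-3 digit per node: 0 = excluded, 1 = node, 2 = impostor.
            // This encoding makes the exclusivity constraint (3) automatic.
            int[] s = new int[n];
            for (int i = 0, m = mask; i < n; i++, m /= 3) s[i] = m % 3;
            int benefit = 0, res = 0;
            boolean ok = true;
            for (int i = 0; i < n && ok; i++) {
                if (s[i] == 1) { benefit += b[i];  res += r[i];  }
                if (s[i] == 2) { benefit += bI[i]; res += rI[i]; }
                for (int j = 0; j < n; j++) {
                    if (!isPredecessor(i, j)) continue;
                    if (s[i] == 2 && s[j] != 0) ok = false; // (4): impostor prunes subtree
                    if (s[j] != 0 && s[i] != 1) ok = false; // (5): closed under predecessor
                }
            }
            if (ok && res <= maxResources) best = Math.max(best, benefit); // (2)
        }
        return best;
    }
}
```

This is exponential in n and only meant for checking a DP or greedy solver against the formulation on small trees.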

### 3.3. Navigation Tradeoffs

Let us consider again the scenarios in Figure 1. To support interactivity, the optimization problem must be solved quickly. However, since there is no known polynomial-time solution to the problem, this might not be possible. A brute-force approach is prohibitive even for off-line computation. However, there are dynamic programming algorithms that run in pseudo-polynomial time, which are suitable for interactive simplification. For details on such algorithms see [2].

The usual approach for virtual environments has been the use of greedy algorithms. However, their guarantees of optimality are usually very weak, and in the case of hierarchical structures, there is no guarantee at all [8].
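For contrast with the exact approach, the following is a minimal greedy sketch in the spirit of the non-hierarchical algorithm of [5] (our code; the paper does not specify this particular heuristic, and the names and data layout are invented). It repeatedly takes the item with the best benefit-per-resource density that still fits:

```java
import java.util.Arrays;
import java.util.Comparator;

// Hypothetical density-based greedy selection over a flat item set. It is fast
// (O(n log n)) but, as the text notes, may return far-from-optimal benefit.
public class GreedyKnapsack {
    /** items[i] = {benefit, resource}, resources assumed positive;
     *  returns the total benefit of the greedy selection. */
    public static int solve(int[][] items, int maxResources) {
        int[][] sorted = items.clone();         // shallow copy; original order kept
        // Highest benefit-per-resource density first.
        Arrays.sort(sorted, Comparator.comparingDouble(
                (int[] it) -> -(double) it[0] / it[1]));
        int benefit = 0, used = 0;
        for (int[] it : sorted) {
            if (used + it[1] <= maxResources) { used += it[1]; benefit += it[0]; }
        }
        return benefit;
    }
}
```

On the instance {(10, 5), (6, 3), (6, 3)} with R = 6, the greedy pass takes the first item for a benefit of 10, while the optimum is 12 (the two smaller items), which illustrates why the architecture leaves the algorithm choice to the user.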

This tradeoff between speed and fidelity is illustrated in Figure 3a. The figure illustrates the result of requesting the best fidelity vs. the fastest simplification. The former is obtained using a Dynamic Programming (DP) algorithm, which takes 3.43 ms on the server (see Section 5 below), while the latter is obtained by means of a greedy algorithm, which takes 0.31 ms.

In our system, we let the user decide which algorithm to use. To support such flexibility, the architecture is designed to be independent of the optimization algorithm.

Other navigation tradeoffs arise from constraints on the simplification problem. Here we consider the scene completeness constraint. As described above, if this constraint is not enforced, some elements may be omitted so that fidelity is improved for the interesting nodes. In other situations, this constraint may be enforced to provide context to the user. In [9],[11],[12], the scene completeness condition was considered implicitly in the greedy algorithms. Again, we let the user decide whether to apply it. Furthermore, we discovered that it is possible to represent the constraint as a modification to the problem tree, so that it can be reduced to a TKP instance. In this way, we do not need to provide a particular algorithm for this case; it can be solved using any algorithm, such as the ones in [1],[2],[14]. This modification is described in Section 3.5. Figure 3b illustrates the tradeoff in the presence of the SC condition. Note that in the left figure some object parts are missing, but there is higher fidelity in the area of interest circled by the user. In contrast, the right figure also tries to provide the maximum fidelity for the area of interest, but preserves the completeness of the scene.

### 3.4. General Case for EMCTKP Transformation

As stated above, the EMCTKP definition assumes that each node has exactly one impostor. In general, a node can have any number of impostors. This general case can easily be transformed into an EMCTKP problem by adding dummy nodes for each additional impostor. The added nodes contribute no benefit and no cost, i.e., both are equal to zero, so the problem is unaltered. For an illustration of this, see Figure 4.
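A sketch of this rewriting is shown below (our illustration; the `Node` class and all names are invented). Following our reading of the AddNode pseudocode in Section 4.4, each extra impostor of a node is moved onto a zero-benefit, zero-cost dummy node inserted between the node and its parent:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the general-case-to-EMCTKP transformation (Sec. 3.4).
public class EmctkpTransform {
    static class Node {
        int benefit, resource;
        List<int[]> impostors = new ArrayList<>(); // {benefit, resource} pairs
        List<Node> children = new ArrayList<>();
        Node(int b, int r) { benefit = b; resource = r; }
    }

    /** Rewrites v so that it keeps at most one impostor; each extra impostor
     *  moves to a zero-benefit, zero-cost dummy node chained above v.
     *  Returns the topmost node of the resulting chain. */
    public static Node transform(Node v) {
        Node top = v;
        while (v.impostors.size() > 1) {
            Node dummy = new Node(0, 0);   // b_u = r_u = 0: problem unaltered
            dummy.impostors.add(v.impostors.remove(v.impostors.size() - 1));
            dummy.children.add(top);
            top = dummy;
        }
        for (int i = 0; i < v.children.size(); i++) {
            v.children.set(i, transform(v.children.get(i))); // re-hang child chains
        }
        return top;
    }
}
```

Because the dummies carry zero benefit and zero cost, selecting them does not change the objective (1) or the resource bound (2), while constraints (4) and (5) keep the exclusive-choice semantics among the original impostors.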

### 3.5. Scene Completeness Transformation

The scene completeness constraint requires that each node have at least one representation. Instead of giving a particular solution when the constraint is present, we show that the problem can be transformed into a TKP. Assume that every node has a single impostor (obtained as in Figure 4) and benefit/resource pairs (bi, ri). Since in the solution a node must be replaced by all of its children, the problem can be transformed into a TKP instance by merging all the children v1, v2, …, vk of a given node vp into a single node. The corresponding benefit and resource of the merged node are

(bp − bp′ + b1′ + b2′ + … + bk′, rp − rp′ + r1′ + r2′ + … + rk′)

This transformation is illustrated in Figure 5.
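The merged pair can be computed mechanically; the following sketch (our code; the method name and the toy values used below are invented) evaluates the formula above for a parent vp with impostor pair (bp′, rp′) and the impostor pairs of its children:

```java
// Hypothetical sketch of the scene-completeness merge (Sec. 3.5): parent v_p
// with children v_1..v_k collapses into one TKP node with the pair
// (b_p - b_p' + b_1' + ... + b_k',  r_p - r_p' + r_1' + ... + r_k').
public class ScMerge {
    public static int[] mergedPair(int bp, int rp, int bpImp, int rpImp,
                                   int[] childImpB, int[] childImpR) {
        int b = bp - bpImp, r = rp - rpImp;
        for (int bi : childImpB) b += bi;   // add each child's impostor benefit
        for (int ri : childImpR) r += ri;   // add each child's impostor resources
        return new int[]{b, r};
    }
}
```

For example, a parent with (bp, rp) = (10, 8), impostor (2, 1), and child impostors (3, 2) and (4, 2) merges into (15, 11), matching the pattern shown in Figure 5.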

Figure 3. Examples of navigation tradeoffs. (a) Speed vs. fidelity: the DP algorithm (R = 1000, B = 15522, t = 3.43 ms) vs. the greedy algorithm (R = 1000, B = 9096, 58% of optimal fidelity, t = 0.31 ms). (b) No scene completeness vs. scene completeness.


### 3.6. Benefit Metrics

In previous work [5], several metrics have been identified for measuring the fidelity of a given representation. These heuristics include accuracy of the representation, physical size, focus-of-attention (distance to the visible area), and semantics, among others. Accuracy favors impostors that are visually more similar to the original object; focus favors elements that are centered in the user's view; and semantics defines the inherent importance of some object types. In mobile devices, for example, display real estate is limited, so the user only sees a portion of the scene at any given time. Elements on the periphery are less important than the elements in the center of the display, and elements outside the viewing window are much less important. The user can express viewing preferences explicitly, by assigning priorities to object types, or implicitly, by attending to the most interesting part of the scene.
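One way these heuristics might be combined is sketched below (our assumption; the paper does not give a formula, and the multiplicative combination and distance falloff are invented for illustration):

```java
// Hypothetical benefit metric combining the heuristics of Sec. 3.6 (accuracy,
// focus-of-attention, semantics). The multiplicative combination and the
// 1/(1 + distance) falloff are our assumptions, not the paper's formula.
public class BenefitMetric {
    /** accuracy in [0, 1]; distToFocus >= 0, in screen units; semanticWeight >= 0. */
    public static double benefit(double accuracy, double distToFocus,
                                 double semanticWeight) {
        double focus = 1.0 / (1.0 + distToFocus); // decays away from the focus point
        // Higher for accurate, central, semantically important elements.
        return accuracy * focus * semanticWeight;
    }
}
```

Under this sketch, an element at the focus point scores its full accuracy-weighted semantic value, while peripheral elements decay smoothly toward zero, matching the prioritization described above.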

## 4. Stackable Solvers Architecture

One of the key requirements for the simplification component is the flexibility to support different algorithms. It is also necessary to support different types of constraints, such as the scene completeness condition. A user who needs a complete (but simplified) view of the scene or object will need the SC condition, while some users can tolerate missing parts of the scene as long as its utility is maximized.

This flexibility can be achieved by allowing the component to change the optimization algorithms at runtime. Below we present a multi-layer architecture called the Stackable Solvers Architecture and its components, as shown in Figure 6.

### 4.1. The Solver Component

The optimization component is a stack of Solvers, where each solver can be considered a black-box component with two important interfaces:

    addNode(parent, node)
    setSolution(node, solution_state)

The addNode interface is used to generate the tree structure, while setSolution is used to mark the tree node with a particular solution state (0 if the node is not selected, 1 if selected, >1 if an impostor is selected).

An important characteristic of a Stackable Solver is that it is independent of the solvers below and above it. This characteristic is what allows the flexibility of the simplification process.

At the top of the stack is the Optimizer solver, a high-level optimizer that builds a tree as a generic simplification problem (i.e., where each node has 0 or more impostors); at the lower levels are the optimization algorithms. In between, it is possible to add problem transformers as needed. Figure 6(b) and (c) show the component stacks for EMCTKP simplification and for simplification with the SC condition, respectively, where the EMCTKP and SC transformers are described in Sections 3.4 and 3.5.

An important advantage of this architecture is that the stacked components can be exchanged transparently, since the application only interacts with the top-level component. Furthermore, the architecture is independent of the implementation of the stacked components. This means that new algorithms can be supported for the EMCTKP problem, as well as different approaches to the simplification problem, such as non-linear programming and genetic algorithms, among others [13].
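The two-interface contract can be sketched in Java as follows (our illustration; only the addNode/setSolution signatures and the layering rule come from the paper, while the class names and the placeholder behavior are invented):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the Stackable Solvers interface (Sec. 4.1):
// addNode calls flow down the stack, setSolution calls flow back up,
// and each layer knows its neighbors only through this interface.
interface Solver {
    void addNode(int parent, int node);            // problem tree flows down
    void setSolution(int node, int solutionState); // 0: out, 1: node, >1: impostor
}

// Top of the stack: collects the solution states reported by lower layers.
class RecordingOptimizer implements Solver {
    final List<int[]> solution = new ArrayList<>();
    public void addNode(int parent, int node) { /* application-facing side */ }
    public void setSolution(int node, int s) { solution.add(new int[]{node, s}); }
}

// Bottom of the stack: a placeholder "algorithm" that selects every node.
// A real implementation would run a greedy or DP solver here instead.
class SelectAllSolver implements Solver {
    private final Solver upper;
    SelectAllSolver(Solver upper) { this.upper = upper; }
    public void addNode(int parent, int node) { upper.setSolution(node, 1); }
    public void setSolution(int node, int s) { /* nothing below this layer */ }
}
```

Because every layer implements the same interface, a transformer (such as the EMCTKP or SC transformer) can be spliced between the optimizer and the algorithm without either side noticing.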

### 4.2. Support for Dynamic Simplification

A key aspect of this architecture is its support for dynamic simplification. In dynamic situations, one of the following operations occurs, affecting the optimization problem:

(i) A node is added to or removed from the tree. This is the case when the user is navigating a large environment, such that the simplification is performed on a subset of the entire data set, which changes as the user navigates. It is also the case for updates to the scene, initiated either by the mobile user or by remote participants in shared environments.

(ii) The benefit or resource metric for a node is modified. This may occur because one or several properties of the node are modified, or because the user has changed the parameters of the benefit and resource metrics.

(iii) The upper bound on resources is changed. This change may be initiated by the user or obtained by monitoring the available resources of the mobile client, such as bandwidth, memory, etc.

Figure 4. Transformation of the general case of optimal simplification to EMCTKP. Nodes are represented by circles and impostors by triangles. Nodes u1 and u2 are added to incorporate the extra impostors. Since these nodes have no cost, the problem is unaltered.

Figure 5. Transformation from EMCTKP with the scene completeness constraint to TKP. The parent (b1, r1) with children (b2, r2) and (b3, r3) becomes the merged node (b2′+b3′−b1′+b1, r2′+r3′−r1′+r1); the children become (b2−b2′, r2−r2′) and (b3−b3′, r3−r3′), and the impostor pair (b1′, r1′) is kept.

Dynamic simplification is supported with the use of additional interfaces:

    removeNode(node)
    updateMetric(oldValue, newValue)
    setMaxResources(max_resources)

The first interface is used together with the addNode interface to support topology changes, while the latter two are used to support changes in the optimization parameters.
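As a sketch of how these interfaces might extend the Solver contract of Section 4.1 (our illustration; the interface extension, the class names, and the staleness-tracking behavior are our assumptions, not the paper's implementation):

```java
// Hypothetical extension of the Solver interface with the dynamic operations
// of Sec. 4.2. Names beyond the three interfaces listed above are invented.
interface DynamicSolver {
    void addNode(int parent, int node);
    void setSolution(int node, int solutionState);
    void removeNode(int node);                      // (i) topology change
    void updateMetric(int oldValue, int newValue);  // (ii) metric change
    void setMaxResources(int maxResources);         // (iii) resource bound change
}

// Minimal implementation that only tracks whether the cached solution is
// stale, i.e., whether the problem must be re-solved after a dynamic change.
class StaleTrackingSolver implements DynamicSolver {
    boolean stale;
    int maxResources = Integer.MAX_VALUE;
    public void addNode(int p, int n) { stale = true; }
    public void setSolution(int n, int s) { stale = false; } // solution delivered
    public void removeNode(int n) { stale = true; }
    public void updateMetric(int o, int v) { stale = true; }
    public void setMaxResources(int r) { maxResources = r; stale = true; }
}
```

In this sketch, any of the three dynamic operations invalidates the current solution, and the next solver pass (reported through setSolution) clears the flag.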

### 4.3. Complexity Analysis

The time and space complexity of the optimization component depend on the particular algorithms and transformers used to solve the problem. Let us consider the optimization stack of Figure 6c for optimal simplification of trees with n nodes under scene completeness. There are three important stages of the simplification to consider in the complexity analysis:

(i) Creation of the problem tree: This is performed by successive calls to addNode(p, v) for every node v and its parent node p. A naïve approach would create intermediate problem trees for the EMCTKP and SC transformers, as illustrated in Figure 4 and Figure 5. The use of stackable solvers avoids the need for such intermediate structures, thus saving space. In addition, transformation is performed in constant time for each call to addNode. Note that the EMCTKP transformation involves the creation of at most |Impostors(v)| nodes, where |Impostors(v)| is usually a constant with respect to n. The SC transformation consists of simple arithmetic operations, which also take constant time.

(ii) Solution of the problem: This stage mostly depends on the particular optimization algorithm. Greedy algorithms are known to be fast, on the order of O(n log n) in most cases. In previous work, we developed an exact algorithm that runs in O(nR) time [2]. With the use of certain heuristics, we were able to compute near-optimal solutions in the same amount of time as greedy algorithms. In addition, the solution of the problem needs to be propagated from the algorithm to the application with successive calls to setSolution (each of these calls takes constant time for the example in Figure 6c).

(iii) Update of the problem: Similarly to the creation of the problem, individual nodes are updated using the updateMetric interface. This is also performed in constant time for the example in Figure 6c.

In summary, the time complexity of the simplification process can be defined as

    t = t_creation + t_solving + t_setsolution

and, for dynamic simplification,

    t = t_update + t_solving + t_setsolution

where t_creation = Ω(n) (for the example in Figure 6c this is O(n), where n is the size of the problem tree); t_update = O(m), where m < n is the number of updated nodes; t_setsolution = O(n′), where n′ is the size of the solution tree; and t_solving depends on the algorithm.

### 4.4. Example Implementation

Let us consider the implementation of the EMCTKP transformer. The code in the addNode interface is responsible for building the corresponding tree as described in Section 3.4 above.

    AddNode(p, v) {
        For each additional impostor i of node v do
            Create a new node u
            b_u = 0
            r_u = 0
            b_u' = benefit of impostor i
            r_u' = resources of impostor i
            lower.AddNode(p, u)
            p = u
        End for
        lower.AddNode(p, v)
    }

Figure 6. Stackable Solvers Architecture. (a) The generic stack: the Application calls the Optimizer, and addNode/setSolution calls are relayed through a Transformer to the Algorithm. (b) Stack for EMCTKP simplification: Optimizer, EMCTKP Transformer, EMCTKP DP Algorithm. (c) Stack for simplification with scene completeness: Optimizer, EMCTKP Transformer, SC Transformer, TKP DP Algorithm.

## 5. Experimental Results

The system was tested in a client-server mobile collaborative environment. The mobile client runs a Java application, which uses a Java software renderer, on a Pocket PC iPAQ h3860. The server ran on a 2.2 GHz Intel Xeon P4 with 1 GB of RAM.

We allow the user to define preferences for speed vs. fidelity and to enable or disable the scene completeness constraint. Preferences are stored in an XML file on the local host and sent to the server upon connection. The server uses this file to configure the optimization stack and to set up the benefit and resource metrics. These parameters can be changed dynamically by the user. To express particular interest in some objects, we allow the user to set a region of interest. Objects that are closer to that area are more important than those farther away.

The stack configuration is the one shown in Figure 6(b) and (c) for the cases where SC is disabled or enabled, respectively. This configuration is used to provide optimal fidelity. When speed is preferred over fidelity, the algorithms at the bottom of the stacks are replaced by greedy algorithms.

We measured the average computation time and the fidelity of the resulting simplifications for navigation of virtual environments with very different topologies: (i) octrees, where objects are localized in a spatial subdivision; (ii) grids; and (iii) arbitrary scenes, where objects are located arbitrarily in a spatial region. We noticed that the sub-optimal fidelity of greedy algorithms is most noticeable in octrees and least noticeable in arbitrary scenes.

Figure 7 shows the fidelity comparison of greedy vs. DP algorithms, with and without scene completeness, for octrees. (The results for grids and arbitrary scenes are slightly more favorable for greedy algorithms.) This fidelity comparison uses the benefit ratio, defined as b_greedy / b_optimal, and the results are graphed in two dimensions: the amount of available resources R and the scene instance. The scene complexity (scene graph size) grows exponentially, i.e., the xcity44 world is approximately twice as large as xcity43, which in turn is twice as large as xcity42. The graph shows how the greedy algorithm performs in terms of fidelity relative to the optimal solution. For instance, the benefit ratio for simplifying the scene xcity44 with R = 500 is 0.33, i.e., the fidelity provided by the greedy algorithm is 33% of the optimal fidelity. A benefit ratio of 1 means that the greedy algorithm obtains the optimal solution.

Figure 8 shows the speed comparison of greedy vs. DP algorithms. The speed is computed as the ratio between the times for the greedy and exact algorithms. For example, for the scene xcity44 with R = 500, the time ratio is 0.5, which means that the greedy algorithm is twice as fast as the exact one. Note that the speed-up grows linearly with R, because the optimal solution takes O(nR) time, while the greedy algorithm is not significantly affected by R.

Some of the key findings of our experiments are:

•

The greedy algorithm results in a varying degree of

fidelity, depending on the structure of the scene and

the benefit metric. For small scenes, the greedy

algorithm results in a fidelity of 99%. We attribute

this to the low sensitivity of the benefit metric in trees

of little depth. In other words, the greedy algorithm is

more prone to fail in trees of larger complexity, both

in depth and degree.

•

The difference in fidelity is considerably larger when

the scene completeness condition is enforced.

•

The difference in fidelity is more dramatic when the

bound on resources is low relative to the total

resources needed for that scene, i.e., R ≪ ∑ r_i. This is

of special interest since the DP optimal solutions are

fast for small values of R. In mobile devices this is

also particularly useful since resources are scarce.

[Figure: two panels plotting benefit ratio against R (500–20000) and world (xcity41–xcity44).]

Figure 7. Fidelity comparison of Exact vs. Greedy simplification (a) with Scene Completeness, (b) without Scene Completeness.


The presence of such an evident tradeoff makes it necessary to allow the user to express preferences on the simplification parameters. In our architecture, this is performed by dynamically reconfiguring the optimization stack.
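As a sketch of what such dynamic reconfiguration could look like (class and function names here are hypothetical, not the paper's API), the stack below holds exchangeable problem transformations plus a solver slot that can be swapped at run time, e.g., greedy for interactivity or exact when the user prefers fidelity; the solvers operate on a deliberately simplified flat item set rather than a scene graph:

```python
from itertools import combinations
from typing import Callable, Dict, List, Tuple

Scene = Dict[str, Tuple[int, int]]    # item name -> (benefit, cost)
Solver = Callable[[Scene, int], int]  # (scene, budget R) -> achieved benefit

def greedy_solver(scene: Scene, R: int) -> int:
    """Fast: take items by descending benefit/cost ratio while they fit."""
    total, remaining = 0, R
    for b, c in sorted(scene.values(), key=lambda bc: bc[0] / bc[1], reverse=True):
        if c <= remaining:
            total, remaining = total + b, remaining - c
    return total

def exact_solver(scene: Scene, R: int) -> int:
    """Optimal but slow: brute-force every feasible subset of items."""
    items = list(scene.values())
    best = 0
    for k in range(len(items) + 1):
        for combo in combinations(items, k):
            if sum(c for _, c in combo) <= R:
                best = max(best, sum(b for b, _ in combo))
    return best

class OptimizationStack:
    """Stackable components: constraint transformations applied in order,
    then whichever solver is currently configured."""
    def __init__(self, solver: Solver):
        self.solver = solver
        self.transforms: List[Callable[[Scene], Scene]] = []

    def push_transform(self, t: Callable[[Scene], Scene]) -> None:
        self.transforms.append(t)

    def set_solver(self, solver: Solver) -> None:
        self.solver = solver          # reconfigure on the fly

    def simplify(self, scene: Scene, R: int) -> int:
        for t in self.transforms:     # e.g., a scene-completeness transform
            scene = t(scene)
        return self.solver(scene, R)

scene = {"roof": (11, 5), "door": (6, 3), "window": (6, 3)}
stack = OptimizationStack(greedy_solver)
print(stack.simplify(scene, 6))   # 11: greedy takes "roof", nothing else fits
stack.set_solver(exact_solver)    # user now prefers fidelity over speed
print(stack.simplify(scene, 6))   # 12: exact picks "door" + "window"
```

Because both solvers share one signature, swapping them needs no change to the rest of the pipeline, which is the point of keeping the stack reconfigurable.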

6. Conclusions and Future Work

We have presented a simplification architecture that

enables the exploration of navigation tradeoffs in mobile

VR. The most important tradeoff we described is speed

(latency) vs. fidelity. In mobile VR, it is crucial to provide

the scene with the best fidelity given the available

resources, within certain time constraints. Given that

Because users may sometimes prefer fidelity over speed, the architecture can be reconfigured on the fly to use different optimization algorithms. We showed the impact

of the use of optimal vs. greedy algorithms in terms of

fidelity and computation time. We also showed that

certain constraints, namely scene completeness, can be

modeled as transformations of the optimization problem.

This is also enabled by the architecture through stackable

components that can be exchanged dynamically. The

architecture is generic to support different simplification

approaches and algorithms. In future work, we plan to

study other algorithms and heuristics that provide optimal

solutions, such as branch-and-bound and the heuristic

presented in [14]. Another interesting research area is the

design of interaction techniques for setting user

preferences and interacting with the simplification

architecture.

Acknowledgments

The research is supported by NSF Contract No. ANI-

01-23910, US Army CECOM Contract No. DAAB07-02-

C-P301, and by the Rutgers Center for Advanced

Information Processing (CAIP) and its corporate affiliates.

References

[1] G. Cho, and D. X. Shaw, “A Depth-First Dynamic

Programming Algorithm for the Tree Knapsack Problem,”

INFORMS J. Computing, vol. 9, no.4, 1997, pp.431-438.

[2] C. Correa, I. Marsic, and X. Sun, “Semantic Consistency Optimization in Heterogeneous Virtual Environments,” Technical Report CAIP-TR-267, September 2002. Available online at: http://www.caip.rutgers.edu/disciple/

[3] M. Deering, “Geometry Compression,” Proc. ACM SIGGRAPH Computer Graphics Annual Conference, Los Angeles, CA, 1995, pp. 13-20.

[4] C. Erikson, D. Manocha, and W. V. Baxter III, “HLODs for Faster Display of Large Static and Dynamic Environments,” Proc. ACM Symp. Interactive 3D Graphics, Atlanta, GA, 2001, pp. 111-120.

[5] T. Funkhouser and C. H. Sequin, “Adaptive Display Algorithm for Interactive Frame Rates During Visualization of Complex Virtual Environments,” Proc. ACM SIGGRAPH Computer Graphics Annual Conference, Los Angeles, CA, 1993, pp. 99-108.

[6] D. Hekmatzada, J. Meseth, and R. Klein, “Non-

Photorealistic Rendering of Complex 3D Models on

Mobile Devices,” Proc. 8th Annual Conf. of the Int’l

Assoc. for Mathematical Geology (IAMG 2002), vol. 2,

Alfred-Wegener-Stiftung, September 2002, pp. 93-98.

[7] H. Hoppe, “Progressive Meshes,” Proc. ACM SIGGRAPH

Computer Graphics Annual Conference, New Orleans, LA,

1996, pp. 99-108.

[8] D. S. Johnson and K. A. Niemi, “On Knapsacks, Partitions,

and a New Dynamic Programming Technique for Trees,”

Mathematics of Operations Research, vol.8, 1983, pp.1-14.

[9] P. W. C. Maciel and P. Shirley, “Visual Navigation of

Large Environments Using Texture Clusters,” Proc. Symp.

Interact. 3D Graphics, Monterey, CA, 1995, pp. 95-102.

[10] S. Martello and P. Toth, Knapsack Problems: Algorithms

and Computer Implementations. John Wiley & Sons, Inc,

New York, 1990.

[11] A. E. W. Mason and E. H. Blake, “Automatic Hierarchical

Level of Detail Optimization in Computer Animation,”

Comp. Graphics Forum, vol. 16, no. 3, 1997, pp. 191-200.

[12] A. E. W. Mason and E. H. Blake, “A Graphical

Representation of the State Spaces of Hierarchical Level-

of-Detail Scene Descriptions,” IEEE Trans. Visualization

and Computer Graphics, vol. 7, no. 1, 2001, pp. 70-75.

[13] W. Pasman and F. W. Jansen, “Scheduling Level of Detail

with Guaranteed Quality and Cost,” Proc. 7th Int’l Conf. on

3D Web Technology, Tempe, AZ, Feb. 2002, pp.43-51.

[14] N. Samphaiboon and T. Yamada, “Heuristic and Exact Algorithms for the Precedence-Constrained Knapsack Problem,” Journal of Optimization Theory and Applications, vol. 105, no. 3, 2000, pp. 659-676.

[15] D. X. Shaw and G. Cho, “The Critical-Item, Upper

Bounds, and a Branch-and-Bound Algorithm for the Tree

Knapsack Problem,” Networks, vol. 31, no. 4, 1998, pp.

205-216.

[Figure: time ratio (greedy / exact) plotted against R (500–20000) and world (xcity41–xcity44).]

Figure 8. Speed comparison of Exact vs. Greedy simplification.
