
Optimizing Load Distribution in Camera Networks with a Hypergraph Model of Coverage Topology

Aaron Mavrinac and Xiang Chen

Department of Electrical and Computer Engineering, University of Windsor

Email: {mavrin1,xchen}@uwindsor.ca

Abstract—A new topological model of camera network coverage, based on a weighted hypergraph representation, is introduced. The model's theoretical basis is the coverage strength model, presented in previous work and summarized here. Optimal distribution of task processing is approximated by adapting a local search heuristic for parallel machine scheduling to this hypergraph model. Simulation results are presented to demonstrate its effectiveness.

I. INTRODUCTION

Multi-camera systems have been studied extensively for a wide variety of applications. Although centralized architectures for fusing and processing data from these multiple sources are a natural extension of traditional computer vision methods, such configurations are limited in scalability and robustness. The increasingly popular distributed smart camera network [1] paradigm is the answer to this challenge. In such a system, each camera node possesses local processing capabilities, and data is increasingly abstracted (and thus increasingly compact) as it is communicated and processed farther from its original source. Zivkovic and Kleihorst [2] give an overview and analysis of smart camera node architecture illuminating the benefits of this design.

Naturally, any initial image or video processing tasks which require data only from a single node are assigned to that node. However, if the nodes themselves are also responsible for fusing and processing data from multiple sources – as they must be, in a true distributed smart camera network – it is less obvious where to assign such tasks.

Scheduling has been an active area of research for decades, and algorithms solving a variety of different problems have been used in such diverse applications as manufacturing and distributed computing [3]. Formulating an appropriate scheduling problem requires domain-specific knowledge; in our case, an understanding of the underlying nature of a multi-camera task.

The scale and performance of most tasks in multi-camera networks (indeed, in sensor networks generally) are directly related to the volume of coverage of the sensor(s) in question. In previous work [4], we developed a real-valued coverage model for multi-camera systems, inspired by task-oriented sensor planning models from the computer vision literature [5] and by coverage models used for various purposes in wireless sensor networks [6], [7]. We demonstrate that, given a set of a priori parameters of the multi-camera system and some task requirements, this model accurately describes the true coverage of a scene in the context of the task. In order that this work be self-contained, we provide in Section II a reduced but functionally complete description of the model. This provides us with a basis for a priori quantitative characterization of multi-camera tasks.

The next step is to abstract this understanding into a topological structure suitable for optimization over the network. Our first contribution is a novel topological model for camera network coverage using a hypergraph representation, described in Section III. Devarajan and Radke [8] propose the vision graph as a theoretical topological model for pairwise tasks in camera networks; it has since been constructed and employed in several such applications [9], [10]. Lobaton et al. [11], [12] recognize the inadequacy of a graph for accurately capturing topology, and generalize to a simplicial complex representation. In the context of camera networks which may be processing a coverage-bound task with data from arbitrary combinations of sensors, we contend that only the hypergraph representation is sufficiently general. Additionally, we define a hyperedge weighting function which incorporates the salient coverage information for optimization.

Our second contribution, detailed in Section IV, is the characterization of the optimal task processing distribution problem in the hypergraph framework, and the adaptation of a local search heuristic from the scheduling literature [13] which has been shown to exhibit good performance for this class of problem.

We present simulated experimental results demonstrating the method on a virtual network of 23 cameras in Section V. Finally, we give some concluding remarks in Section VI.

II. COVERAGE MODEL

A. Stimulus Space

The sensor coverage model requires the definition of a stimulus space to describe individual observable data. A visual stimulus is localized to a point in three-dimensional space, and also has a direction (normal to the surface on which the point lies, i.e. view angle). We therefore define a directional space as the stimulus space.

Definition 1: The directional space $\mathbb{D}^3 = \mathbb{R}^3 \times [0, \pi] \times [0, 2\pi)$ consists of three-dimensional Euclidean space plus direction, with elements of the form $(x, y, z, \rho, \eta)$.

We term $p \in \mathbb{D}^3$ a directional point. For convenience, we denote its spatial component $p_s = (p_x, p_y, p_z)$ and its directional component $p_d = (p_\rho, p_\eta)$.

Fig. 1. Axes and Angles of $\mathbb{D}^3$

A standard 3D pose $P : \mathbb{R}^3 \to \mathbb{R}^3$, consisting of rotation matrix $R$ and translation vector $T$, may be applied to $p \in \mathbb{D}^3$. The spatial component is transformed as usual, i.e. $P(p_s) = R p_s + T$. The direction component is transformed as follows. If $d$ is the unit vector in the direction of $p_d$, then $P(p_d) = (\arccos((Rd)_z), \operatorname{arctan2}((Rd)_y, (Rd)_x))$.
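As an illustrative sketch (not part of the original formulation), the extension of a pose to $\mathbb{D}^3$ can be coded directly, assuming the usual spherical convention that $\rho$ is the polar angle from the $z$-axis and $\eta$ the azimuth:

```python
import math

def apply_pose(R, T, p):
    """Apply pose (R, T) to a directional point p = (x, y, z, rho, eta).

    The spatial part transforms as R ps + T; the direction part is
    converted to a unit vector, rotated, and converted back to angles.
    Assumes rho is the polar angle from z and eta the azimuth.
    """
    x, y, z, rho, eta = p
    # Spatial component: standard rigid transform R ps + T.
    ps = [x, y, z]
    qs = [sum(R[i][j] * ps[j] for j in range(3)) + T[i] for i in range(3)]
    # Direction component: unit vector d in the direction of (rho, eta).
    d = [math.sin(rho) * math.cos(eta),
         math.sin(rho) * math.sin(eta),
         math.cos(rho)]
    Rd = [sum(R[i][j] * d[j] for j in range(3)) for i in range(3)]
    # Back to angles: (arccos((Rd)_z), arctan2((Rd)_y, (Rd)_x)).
    return (qs[0], qs[1], qs[2],
            math.acos(max(-1.0, min(1.0, Rd[2]))),
            math.atan2(Rd[1], Rd[0]) % (2 * math.pi))
```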

B. Coverage Strength Model

The coverage strength model of a given sensor system (which may be a single physical sensor or multiple sensors) assigns to every point in the stimulus space a measure of coverage strength.

Definition 2: A coverage strength model is a mapping $C : \mathbb{D}^3 \to [0, 1]$, for which $C(p)$, for any $p \in \mathbb{D}^3$, is the strength of coverage at $p$.

Definition 3: The set $\langle C \rangle = \{p \in \mathbb{D}^3 \mid C(p) > 0\}$ is the coverage hull of a coverage strength model $C$.

In order for the coverage strength model to offer a useful gauge of sensor system performance, it requires the context of a task.

Definition 4: A relevance model is a mapping $R : \mathbb{D}^3 \to [0, 1]$, for which $R(p)$, for any $p \in \mathbb{D}^3$, is the minimum desired coverage strength or coverage priority at $p$.

The coverage strength model is defined in part by task requirements, specified by a set of task parameters which encapsulate various properties of the a posteriori quality of sensed data. These parameters and a relevance model together fully describe a task.

Given coverage strength and/or relevance models $C_i$ and $C_j$, we define their union and intersection, respectively, as
$$C_i \cup C_j(p) = \max(C_i(p), C_j(p)) \quad (1)$$
$$C_i \cap C_j(p) = \min(C_i(p), C_j(p)) \quad (2)$$
for all $p \in \mathbb{D}^3$. This, together with Definition 3, implies that $\langle C_i \cup C_j \rangle = \langle C_i \rangle \cup \langle C_j \rangle$ and $\langle C_i \cap C_j \rangle = \langle C_i \rangle \cap \langle C_j \rangle$.

The $k$-coverage strength model for a subset of sensor systems $M \subset N$, where $|M| = k$, is
$$C_M = \bigcap_{m \in M} C_m \quad (3)$$
The $k$-coverage strength model for the network is
$$C^k_N = \bigcup_{M \in \binom{N}{k}} C_M \quad (4)$$
where each $M$ is a $k$-combination of $N$. Note that in the common case where $k = 1$, (3) and (4) reduce to $C^1_N = \bigcup_{m \in N} C_m$.
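A minimal sketch of (1)–(4) in Python, assuming discrete coverage models represented as dictionaries mapping stimulus points to strengths (this representation is our own illustration, not part of the paper):

```python
from itertools import combinations

def union(Ci, Cj):
    """(1): pointwise max of two discrete coverage models (dicts point -> strength)."""
    return {p: max(Ci.get(p, 0.0), Cj.get(p, 0.0)) for p in set(Ci) | set(Cj)}

def intersection(Ci, Cj):
    """(2): pointwise min; only points covered by both models can be nonzero."""
    return {p: min(Ci.get(p, 0.0), Cj.get(p, 0.0)) for p in set(Ci) | set(Cj)}

def k_coverage(models, k):
    """C^k_N per (3)-(4): union over all k-combinations M of the intersection C_M."""
    result = {}
    for M in combinations(sorted(models), k):
        CM = models[M[0]]
        for m in M[1:]:
            CM = intersection(CM, models[m])
        result = union(result, CM)
    return result
```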

C. Single-Camera Model

First, we present a single-camera parameterization of the coverage strength model, for which the full theoretical derivation can be found in [4].

Given a task parameter $\gamma$ indicating a margin in the image (in pixels) for full coverage, the horizontal and vertical cross-sections of the visibility component, $C_V$, are given by
$$C_{Vh}(p) = B_{[0,1]}\left[\frac{\min\left(\frac{p_x}{p_z} + \sin\alpha_{hl},\ \sin\alpha_{hr} - \frac{p_x}{p_z}\right)}{\gamma_h}\right] \quad (5)$$
$$C_{Vv}(p) = B_{[0,1]}\left[\frac{\min\left(\frac{p_y}{p_z} + \sin\alpha_{vt},\ \sin\alpha_{vb} - \frac{p_y}{p_z}\right)}{\gamma_v}\right] \quad (6)$$
for $\gamma > 0$, where $\alpha_{hl}$ and $\alpha_{hr}$ are the horizontal field-of-view angles, and $\alpha_{vt}$ and $\alpha_{vb}$ are the vertical field-of-view angles. The complete $C_V$ is then given by
$$C_V(p) = \begin{cases} \min(C_{Vh}(p), C_{Vv}(p)) & \text{if } p_z > 0, \\ 0 & \text{otherwise.} \end{cases} \quad (7)$$

The resolution component, $C_R$, is given by
$$C_R(p) = B_{[0,1]}\left[\frac{z_2 - p_z}{z_2 - z_1}\right] \quad (8)$$
for $R_1 > R_2$, where the values of $z_1$ and $z_2$ are given by (9), substituting task parameters $R_1$ (ideal resolution) and $R_2$ (minimum resolution), respectively, for $R$.
$$z_R = \frac{1}{R} \min\left(\frac{w}{2\sin(\alpha_h/2)},\ \frac{h}{2\sin(\alpha_v/2)}\right) \quad (9)$$
In the preceding equation, $\alpha_h = \alpha_{hl} + \alpha_{hr}$ and $\alpha_v = \alpha_{vt} + \alpha_{vb}$.

Given a task parameter $c_{\max}$ indicating the maximum acceptable blur circle diameter, the focus component, $C_F$, is given by
$$C_F(p) = B_{[0,1]}\left[\min\left(\frac{p_z - z_n}{z_\lhd - z_n},\ \frac{z_f - p_z}{z_f - z_\rhd}\right)\right] \quad (10)$$
for $c_{\max} > c_{\min}$, where $(z_\lhd, z_\rhd)$ and $(z_n, z_f)$ are the near and far limits of depth of field as given by (11), substituting blur circle diameters $c_{\min}$ and $c_{\max}$, respectively, for $c$.
$$z = \frac{A f z_S}{A f \pm c (z_S - f)} \quad (11)$$
In the preceding equation, $A$ is the effective aperture diameter, $f$ is the focal length, and $z_S$ is the subject distance. Generally, $c_{\min}$ is equal to the physical pixel size, yielding the depth of field for effectively perfect focus.

The direction (angle of view) component, $C_D$, is given by
$$C_D(p) = B_{[0,1]}\left[\frac{\Theta(p) - \pi + \zeta_2}{\zeta_2 - \zeta_1}\right] \quad (12)$$
where $\zeta_1, \zeta_2 \in [0, \pi/2]$ are task parameters indicating the ideal and maximum view angles, respectively, and $\Theta(p)$ is defined as
$$\Theta(p) \equiv p_\rho - \left(\frac{p_y}{r}\sin p_\eta + \frac{p_x}{r}\cos p_\eta\right)\arctan\frac{r}{p_z} \quad (13)$$
where $r = \sqrt{p_x^2 + p_y^2}$.

The full coverage strength model is simply the product of these components:
$$C(p) = C_V(p)\, C_R(p)\, C_F(p)\, C_D(p) \quad (14)$$
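The component structure of (5)–(14) can be illustrated with a small sketch: `clamp01` plays the role of $B_{[0,1]}$, and `make_CR` is a hypothetical helper implementing only the resolution component (8) from precomputed limits $z_1$ and $z_2$:

```python
def clamp01(x):
    """B_[0,1]: clamp a real value into the interval [0, 1]."""
    return max(0.0, min(1.0, x))

def make_CR(z1, z2):
    """Resolution component (8): full strength up to z1, falling to 0 at z2.

    z1 and z2 correspond to substituting the ideal and minimum
    resolutions into (9); here they are taken as given.
    """
    def CR(p):
        pz = p[2]
        return clamp01((z2 - pz) / (z2 - z1))
    return CR

def coverage(p, CV, CR, CF, CD):
    """Full single-camera coverage strength, per (14): product of components."""
    return CV(p) * CR(p) * CF(p) * CD(p)
```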

D. Multi-Camera System Model

A set of single-camera models may be placed in the context of a world coordinate frame and a scene, and then combined into multi-camera coverage models. Again, theoretical details may be found in [4].

The six degrees of freedom of a camera's world frame pose $P : \mathbb{R}^3 \to \mathbb{R}^3$ are called the extrinsic parameters of the camera [14]. As discussed in Section II-A, $P$ can be extended to $P_D : \mathbb{D}^3 \to \mathbb{D}^3$. The in-scene model for a single camera, then, is the single-camera model $C$ with its domain transformed to the world frame, defined by
$$C_s(p) = C(P_D^{-1}(p)) \quad (15)$$
for any world frame point $p \in \mathbb{D}^3$.

Given a scene model $S$ consisting of a set of plane segments (which represent opaque surfaces in the scene), the point $p_s$ is occluded iff the point of intersection between the line from $p_s$ to the camera's principal point and any plane segment in $S$ exists, is unique, and is not $p_s$.

If $V : \mathbb{R}^3 \to \{0, 1\}$ is a bivalent indicator function such that $V(p_s) = 1$ iff $p_s$ is not occluded from a given camera's viewpoint, then the in-scene model with static occlusion is defined by
$$C_o(p) = C_s(p)\, V(p_s) \quad (16)$$
for any $p \in \mathbb{D}^3$, where $p_s$ is the spatial component of $p$, and where $C_s$ is given by (15).

Finally, the $k$-ocular multi-camera system model is computed via (3) and (4).

E. Discrete Model

While it is feasible to compute the vertices of the coverage hull $\langle C_o \rangle$ of an in-scene camera coverage strength model with occlusion directly from the parameterizations in Sections II-C and II-D, the only obvious way to obtain $\langle C^o_M \rangle$, where $|M| > 1$, is to find $\bigcap_{m \in M} \langle C^o_m \rangle$. This involves finding the intersection of arbitrary, generally non-convex polytopes given by vertices, which has been shown to be NP-hard by Tiwary [15].

An arbitrarily close approximation can be achieved in the discrete domain.¹ A coverage strength model $C$ has a discrete counterpart denoted $\dot{C}$ such that $\dot{C}(p) = C(p)$ for all $p \in \dot{\mathbb{D}}^3$, where $\dot{\mathbb{D}}^3$ is a discrete subset of $\mathbb{D}^3$ (once this subset has been defined, it should be used consistently). We denote the summation $\sum_{p \in \dot{\mathbb{D}}^3} \dot{C}(p)$ as $|\dot{C}|$. Then, given $\dot{C}_i$ and $\dot{C}_j$ sampled over the same discrete subset of $\mathbb{D}^3$, $\dot{C}_i \cap \dot{C}_j$ can be computed exhaustively.

¹Incidentally, this also greatly simplifies the computation of occlusion in $C_o$ as per (16).
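A sketch of this discrete approximation, assuming models are sampled into dictionaries keyed by grid points (our own representation for illustration):

```python
def discretize(C, points):
    """Sample a continuous model C over a fixed discrete subset of D3."""
    return {p: C(p) for p in points}

def magnitude(C_d):
    """|C|: the sum of coverage strength over the discrete domain."""
    return sum(C_d.values())

def overlap(Ci_d, Cj_d):
    """|Ci ∩ Cj| computed exhaustively over a shared grid: sum of pointwise minima."""
    return sum(min(v, Cj_d.get(p, 0.0)) for p, v in Ci_d.items())
```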

III. COVERAGE TOPOLOGY

A. Mathematical Background

A hypergraph $H$ is a pair $H = (V, E)$, where $V$ is a set of vertices, and $E$ is a set of non-empty subsets of $V$ called hyperedges. If $\mathcal{P}(V)$ is the power set of $V$, then $E \subseteq \mathcal{P}(V) \setminus \emptyset$.

A weighted hypergraph $H = (V, E, w)$ also includes a weight function over its hyperedges $w : E \to \mathbb{R}^+$. An unweighted hypergraph may be interpreted as a weighted hypergraph for which $w(e) = 1$ for all $e \in E$.

The degree of a vertex in $H$, denoted $\delta_H(v)$ for some $v \in V$, is the total weight of hyperedges incident to the vertex.
$$\delta_H(v) = \sum_{e \in E} \begin{cases} w(e) & \text{if } v \in e \\ 0 & \text{otherwise} \end{cases} \quad (17)$$

Following the definition of Frank et al. [16], a directed hypergraph is a pair $D = (V, \vec{E})$, where $\vec{E}$ is a set of hyperarcs; a hyperarc is a hyperedge $e \subseteq V$ with a designated head vertex $v \in V$, denoted $e^v$. The remaining vertices $e \setminus v$ are called tail vertices. Two additional notions of vertex degree are defined: the indegree, $\delta^i_H(v)$, is the total weight of hyperarcs of which $v$ is the head vertex, and the outdegree, $\delta^o_H(v)$, is the total weight of hyperarcs of which $v$ is a tail vertex.

An orientation $\Lambda$ of an undirected hypergraph $H$ has the same vertex and hyperedge sets (and the same weight function, if applicable), but assigns a direction (head vertex) to each hyperedge. In an orientation of a simple hypergraph, if $e^v \in \vec{E}$, then $e^u \in \vec{E}$ implies $u = v$ (that is, $e$ is unique). Therefore, we omit the head vertex superscript in certain circumstances; for example, the weight of $e^v$ is denoted simply $w(e)$.
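These definitions can be sketched compactly in Python; the class and function names here are our own illustration, with hyperedges as frozensets and an orientation as a map from each hyperedge to its head vertex:

```python
class WeightedHypergraph:
    """H = (V, E, w): hyperedges are frozensets of vertices with positive weights."""

    def __init__(self, vertices, weighted_edges):
        self.V = set(vertices)
        self.w = {frozenset(e): wt for e, wt in weighted_edges}

    def degree(self, v):
        """delta_H(v), per (17): total weight of hyperedges incident to v."""
        return sum(wt for e, wt in self.w.items() if v in e)

def indegree(orientation, weights, v):
    """delta^i(v) under an orientation given as {edge: head vertex}."""
    return sum(weights[e] for e, head in orientation.items() if head == v)
```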

B. Coverage Hypergraph

The coverage hypergraph of a camera network $N$ is the hypergraph $H_C = (N, E_C, w_C)$. Its hyperedge set is defined as
$$E_C = \{M \in \mathcal{P}(N) \mid \langle C_M \cap R \rangle \neq \emptyset\} \quad (18)$$
where $C_M$ is computed by (3) for a given task, $R$ is a relevance model for the task, and $\mathcal{P}(N)$ denotes the power set of $N$. Intuitively, $M \in E_C$ indicates that nodes $M$ have mutual coverage of some region of $\mathbb{D}^3$ with respect to $R$.

Theorem 1: $E_C$ is an abstract simplicial complex; that is, for every $M \in E_C$, and every $L \subseteq M$, $L \in E_C$.

Proof: If $n \in M$, then by (3), $C_M = C_{M \setminus n} \cap C_n$. From (2), for all $p \in \mathbb{D}^3$, $C_M(p) \leq C_{M \setminus n}(p)$. Then, from Definition 3, clearly $\langle C_M \rangle \subseteq \langle C_{M \setminus n} \rangle$, and $\langle C_M \cap R \rangle \subseteq \langle C_{M \setminus n} \cap R \rangle$. Thus, for every $M \in E_C$, and every $M \setminus n \subset M$, $M \setminus n \in E_C$.

The hyperedge weight function of $H_C$, $w_C : E_C \to \mathbb{R}^+$, is defined as
$$w_C(M) = |\dot{C}_M \cap \dot{R}| \quad (19)$$
for some discrete subset $\dot{\mathbb{D}}^3$ of the stimulus space.

Fig. 2. Example Camera Network Layout with Coverage Hypergraph

Theorem 2: For any $L \subseteq M \in E_C$, $w_C(L) \geq w_C(M)$.

Proof: From the proof of Theorem 1, for all $p \in \mathbb{D}^3$, $C_M(p) \leq C_{M \setminus n}(p)$, so $|\dot{C}_M| \leq |\dot{C}_{M \setminus n}|$. Thus, for every $M \in E_C$, and every $M \setminus n \subset M$, $w_C(M \setminus n) \geq w_C(M)$.
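Theorems 1 and 2 suggest a practical way to enumerate $E_C$ and $w_C$ without testing all of $\mathcal{P}(N)$: since $E_C$ is downward closed, candidate hyperedges need only be grown one node at a time from existing hyperedges, apriori-style. A hedged sketch (our own construction, not the paper's implementation), with discrete models as dictionaries and $w_C(M) > 0$ standing in for $\langle C_M \cap R \rangle \neq \emptyset$:

```python
def coverage_hypergraph(models, R):
    """Build {hyperedge: w_C} from discrete per-camera models and relevance R.

    models: dict node -> {point: strength}; R: {point: strength}.
    Exploits Theorem 1 (E_C is downward closed): any hyperedge of size k
    is reachable by adding one node to a hyperedge of size k - 1.
    """
    def weight(M):
        # w_C(M) = |C_M ∩ R|: sum over points of the min over members and R.
        return sum(min([r] + [models[m].get(p, 0.0) for m in M])
                   for p, r in R.items())

    edges = {}
    current = [frozenset([n]) for n in models if weight([n]) > 0]
    for e in current:
        edges[e] = weight(e)
    while current:
        nxt = set()
        for e in current:
            for n in models:
                if n not in e:
                    M = e | {n}
                    if M not in edges and M not in nxt and weight(M) > 0:
                        nxt.add(M)
        for e in nxt:
            edges[e] = weight(e)
        current = list(nxt)
    return edges
```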

Consider a partial hypergraph $H^K_C = (N, E^K_C, w_C)$ of $H_C$ with hyperedge subset
$$E^K_C = \{M \in E_C \mid |M| \in K\} \quad (20)$$
where $K \subset \mathbb{Z}^+$. When $K = \{k\}$, we term this the $k$-coverage hypergraph of $N$. When $K = \{k, l\}$, we term this the $k,l$-coverage hypergraph of $N$, and so on.

Since $E_C$ is an abstract simplicial complex, the 2-coverage hypergraph $H^2_C$ is the (weighted) primal graph of $H_C$, qualitatively equivalent to the vision graph as described in most other sources. We formally define the vision graph as $H^2_C$.

IV. TASK PROCESSING DISTRIBUTION

A. Problem Statement

Consider the portion of a $k$-ocular task in camera network $N$ which involves processing data from all of $M \subseteq N$, where $|M| = k$; we shall term this an $M$-subtask. Only stimuli within $\langle C_M \rangle$ are relevant to an $M$-subtask. Given a relevance model $R$ for the task, the expected processing load for a given $M$-subtask is proportional to $|\dot{C}_M \cap \dot{R}|$. Although this conjecture is tautological given that $R$ is arbitrary, since $R$ ideally represents the distribution of the stimuli necessary to perform the task, it is reasonable to assume in general that it also reflects the distribution of the processing load incurred by said stimuli. This is supported by empirical evidence [4], [17].

Assuming that $N$ consists of smart camera nodes with equal local computational resources, the problem is to distribute the processing of all $M$-subtasks over the nodes such that the maximum load on any one node is minimized.

The set of eligible nodes to which $M$-subtasks may be assigned is restricted to $M$, for the following reasons:

1) Robustness: If a node $n \in M$ fails, the $M$-subtask can no longer be processed. Thus, assigning it to any $n \in M$ carries no risk of disrupting service for valid models.

2) Locality: In a large network, because the sensing range is finite, if $\langle C_M \rangle \neq \emptyset$, it is likely that nodes $M$ are physically proximate. Since we assume nothing about the network structure, it is sensible to keep the $M$-subtask processing node physically local for communication efficiency.

The usefulness of this restriction is especially apparent in the special case $k = 1$, allowing camera-local subtasks (image preprocessing, etc.) to be included in the accounting.

Given a $K$-ocular task, where $K \subset \mathbb{Z}^+$, this problem can be solved by finding an orientation of $H^K_C$ which minimizes the maximum weighted indegree.

B. Minimum Indegree Orientation

The minimum maximum indegree orientation problem for hypergraphs can be stated as follows. Given a simple, undirected, weighted hypergraph $H = (V, E, w)$, find an orientation $\Lambda$ of $H$ which minimizes $\max_{u \in V} [\delta^i_\Lambda(u)]$.

This is equivalent to the scheduling problem of offline makespan minimization over identical parallel machines with eligibility constraints [18]; in the three-field notation of Graham et al. [19], $P \mid M_j,\ M_j \neq M_k \text{ if } j \neq k \mid C_{\max}$. This is a special case of $P \mid M_j \mid C_{\max}$, which in turn is a special case of $R \parallel C_{\max}$ [20]. The problem is NP-hard [21], but a number of approximation algorithms and search heuristics have been proposed.

We present here a local search heuristic based on the GR/EFF descent of Piersma and Van Dijk [13]. The main differences are the use of hypergraph notation and some simplifications made possible by constraints particular to our problem.

Initialization

Suppose the given hypergraph is $H = (V, E, w)$. Let $\Lambda = (V, \vec{E}, w)$, with $\vec{E} = \emptyset$ initially.

Starting Point

Consider $E$ in any order. For each $e \in E$, add $e^u$ to $\vec{E}$ such that $\delta^i_\Lambda[u] = \min_{v \in e} \delta^i_\Lambda[v]$.

Neighbourhood Search

1) Choose $v_{\max} \in V$ such that $\delta^i_\Lambda[v_{\max}] = \max_{v \in V} \delta^i_\Lambda[v]$. Let $\mathcal{R} = \{(v, e^{v_{\max}}) \mid v \in V \setminus v_{\max},\ v \in e,\ e^{v_{\max}} \in \vec{E}\}$.

2) If $\mathcal{R} = \emptyset$, go to Step 4. Otherwise, consider any $(v, e^{v_{\max}}) \in \mathcal{R}$; remove $(v, e^{v_{\max}})$ from $\mathcal{R}$.

3) If $\delta^i_\Lambda[v] < \delta^i_\Lambda[v_{\max}] - w(e)$, replace $e^{v_{\max}}$ with $e^v$ in $\vec{E}$ and go to Step 1. Otherwise, go to Step 2.

4) Sort $V$ in nonincreasing order of indegree. Let $v_1$ and $v_2$ be its last and first elements, respectively.

5) Let $\vec{E}_1 = \{e^{v_1} \mid v_2 \in e,\ e^{v_1} \in \vec{E}\}$ and $\vec{E}_2 = \{e^{v_2} \mid v_1 \in e,\ e^{v_2} \in \vec{E}\}$. Let $\mathcal{I} = \vec{E}_1 \times \vec{E}_2$.

6) If $\mathcal{I} = \emptyset$, go to Step 8. Otherwise, consider any $(e_1^{v_1}, e_2^{v_2}) \in \mathcal{I}$; remove $(e_1^{v_1}, e_2^{v_2})$ from $\mathcal{I}$.

7) If $\max(\delta^i_\Lambda[v_1] - w(e_1) + w(e_2),\ \delta^i_\Lambda[v_2] - w(e_2) + w(e_1)) < \max(\delta^i_\Lambda[v_1], \delta^i_\Lambda[v_2])$, replace $e_1^{v_1}$ and $e_2^{v_2}$, respectively, with $e_1^{v_2}$ and $e_2^{v_1}$ in $\vec{E}$ and go to Step 4. Otherwise, go to Step 6.

8) Let $v_2$ be the next element in $V$. If $v_2 = v_1$, let $v_1$ be the previous element in $V$ and let $v_2$ be the first element in $V$. If $v_1$ is the first element of $V$, return $\Lambda$. Otherwise, go to Step 5.
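A condensed Python sketch of the descent above (our own simplified reimplementation, not the exact heuristic: the reassignment steps 1–3 and swap steps 4–7 are folded into two helper moves, and the original scan order is not preserved):

```python
def min_max_indegree_orientation(V, E):
    """Greedy start plus local search for a min-max weighted indegree orientation.

    V: iterable of vertices; E: dict {frozenset hyperedge: weight}.
    Returns a head assignment {edge: head vertex}.
    """
    indeg = {v: 0.0 for v in V}
    head = {}
    for e, w in E.items():  # starting point: orient toward a least-loaded member
        u = min(e, key=lambda v: indeg[v])
        head[e] = u
        indeg[u] += w

    def move():
        # Steps 1-3: shift one edge off the currently most-loaded head
        # if some other member of the edge is sufficiently less loaded.
        vmax = max(indeg, key=indeg.get)
        for e, h in head.items():
            if h == vmax:
                for v in e:
                    if indeg[v] < indeg[vmax] - E[e]:
                        head[e] = v
                        indeg[vmax] -= E[e]
                        indeg[v] += E[e]
                        return True
        return False

    def swap():
        # Steps 4-7: exchange the heads of two edges (each edge must
        # contain the other's head) when that lowers the larger load.
        for e1, v1 in list(head.items()):
            for e2, v2 in list(head.items()):
                if v1 != v2 and v2 in e1 and v1 in e2:
                    n1 = indeg[v1] - E[e1] + E[e2]
                    n2 = indeg[v2] - E[e2] + E[e1]
                    if max(n1, n2) < max(indeg[v1], indeg[v2]):
                        head[e1], head[e2] = v2, v1
                        indeg[v1], indeg[v2] = n1, n2
                        return True
        return False

    while move() or swap():
        pass
    return head
```

Each successful move strictly reduces the sorted load vector lexicographically, so the loop terminates.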

V. EXPERIMENTAL RESULTS

A. Description of Simulation

We test task distribution on a simulated network $N$ of 23 camera nodes arranged in a virtual environment with walls and other occlusions. Our tasks are independent of the directional dimensions $\rho$ and $\eta$; accordingly, we simplify the discussion by working exclusively in $\mathbb{R}^3$. A top view of the environment is shown in Figure 3, along with the relevance model $R$, which is uniform in $z$ from 1.5 m to 2.0 m (with the floor at 0 m, and all cameras at 2.5 m), and the locations of the cameras.

Fig. 3. Floor Plan and Relevance Model

The camera coverage strength models are derived from real parameters of a calibrated Prosilica EC-1350 1.3 MP grayscale CCD camera with a Computar M3Z1228C-MP lens. The specific task parameters used are $\gamma = 20$, $R_1 = 0.3$, $R_2 = 0.01$, and $c_{\max} = 0.008$ ($\zeta_1$ and $\zeta_2$ are unused). Extrinsic parameters are defined manually to deploy the cameras in a reasonable arrangement covering the environment (82.42% coverage performance with respect to $R$).

The camera network and environment are simulated using our Adolphus² simulation software (Figure 4).

Fig. 4. Adolphus Showing $\langle C_I \rangle$ and $\langle C_M \rangle$

The coverage hypergraph $H_C$ for $N$ and $R$ is computed over the discrete space $\dot{\mathbb{R}}^3 = \{(250x, 250y, 250z) \mid x, y, z \in \mathbb{Z}\}$, with coordinates in millimeters. Although it is too large to represent here graphically, Table I shows some statistics of the hyperedges in the complete $H_C$.

TABLE I
HYPEREDGES IN $H_C$

Edge Size    Count    Mean Weight
1            23       750.51
2            78       155.66
3            130      50.13
4            152      23.49
5            122      14.09
6            61       9.37
7            17       6.40
Total        583      71.85

For each task, events of interest are points $p \in \mathbb{R}^3$ generated randomly using $\lambda^{-1} R$ as a probability density function, where $\lambda = \iiint_{\mathbb{R}^3} R \, dx \, dy \, dz$. The detection probability for event $p$ by camera node $n$ is $C_n(p)$. Camera nodes individually detect events and are assumed to propagate their data to the appropriate nodes for processing.
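The event generation just described can be sketched with a discrete stand-in for the density $\lambda^{-1} R$; the function names here are hypothetical illustrations, not the Adolphus API:

```python
import random

def sample_events(R_points, n, seed=0):
    """Draw n event points using a discrete relevance model {point: strength}
    as an unnormalized probability mass function (stand-in for lambda^-1 R)."""
    rng = random.Random(seed)
    points = list(R_points)
    weights = [R_points[p] for p in points]
    return rng.choices(points, weights=weights, k=n)

def detect(Cn, p, rng):
    """Bernoulli detection: node n sees event p with probability Cn(p)."""
    return rng.random() < Cn.get(p, 0.0)
```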

B. Task 1: Generic Multi-View Processing

The first simulation experiment models a generic task in which each event is processed by every combination of camera nodes which detects it. Processing an event charges one unit of processing load to the node to which the combination is assigned (i.e., the vertex in $H_C$ which is the head of the edge comprising the combination).

We generated 10,000 random events and assigned their processing to nodes according to $\Lambda$, the minimum maximum weighted indegree orientation of $H_C$ approximated per the algorithm in Section IV-B. For comparison, we also assigned the same event detections using four other orientations of $H_C$: the optimal unweighted minimum maximum indegree orientation $U$, two random orientations $R_1$ and $R_2$, and a greedy orientation $G$ (edges oriented in arbitrary order to the vertex with least indegree). Figure 5 shows the maximum and standard deviation of processing loads (with a mean of 1378.39) for each strategy.

Fig. 5. Load Statistics for Task 1

The $\Lambda$ distribution yields both the least maximum load and the most consistent distribution of load over the network, with improvements of 5% and 22%, respectively, over the next best strategy tested.

²Adolphus is free software licensed under the GNU General Public License. Python source code and documentation are available at http://github.com/ezod/adolphus.

C. Task 2: Best-Pair Stereo Reconstruction

The second simulation experiment models a best-pair stereo reconstruction task. Hypothetically, upon detection of an event, camera nodes estimate their pairwise coverage of the event, then reach network-wide consensus on the pair with best coverage; the best pair then proceeds to perform a dense 3D reconstruction of the event. In our model, each estimation of pairwise coverage charges one unit of processing load to the assigned node, and each reconstruction charges five units of processing load to the assigned node for the best pair.

Fig. 6. Load Statistics for Task 2

We generated 2,000 random events and assigned their processing to nodes according to $\Lambda$, the minimum maximum weighted indegree orientation of $H^2_C$. Again, we compare this to the unweighted solution $U$, two random orientations $R_1$ and $R_2$, and a greedy orientation $G$. Figure 6 shows the maximum and standard deviation of processing loads (with a mean of 373.87) for each strategy.

Again, the $\Lambda$ distribution yields both the least maximum load and the most consistent distribution of load over the network, with improvements of 13% and 35%, respectively, over the next best strategy tested.

VI. CONCLUSIONS

The coverage hypergraph is a generalization of previous models of camera network coverage topology which fully captures node-level coverage relationships. As such, it is a useful combinatorial structure for optimization in distributed smart camera applications. We have demonstrated with simulated experiments its application to optimizing the distribution of task processing load, by adapting and applying an algorithm for a related scheduling problem.

This model is conceptually simple, but shows much promise as a powerful tool, given its strong theoretical foundation and its tractability under a large body of well-studied optimization techniques.

A. Future Work

Although the coverage strength model provides an excellent theoretical basis for defining the coverage hypergraph, in practice the necessary calibration parameters are often unavailable. We propose to construct $H_C$ probabilistically from sensor data, following approaches used to construct other topological models. Cheng et al. [9] build the vision graph by pairwise matching of digests of local features. The exclusion approach of Detmold et al. [22], [23] also builds the vision graph, starting with a complete graph and eliminating (or reducing the likelihood of) edges when an occupancy mismatch is detected. Lobaton et al. [12] construct their CN-complex by matching detection and occlusion events.

Developing such a method would also allow us to attempt an experimental application using a real camera network without calibration, with one or more tasks of a less contrived nature than those in Section V. Our results currently depend on our assumptions about computational cost and detection probability holding in practice, since in simulation we have no means by which to generate events besides the relevance and coverage strength models, which are also used to construct the hypergraph itself. Our previous work [4], [17] provides some evidence that these assumptions are generally valid, but a complete real-world application would present a more convincing case.

The particular optimization over $H_C$ presented in this work could be adapted to a variety of more complex task distribution scenarios. Multiple tasks with different computational costs could be combined into a single objective. Other problems aside from task processing distribution may require different interpretations of $H_C$ (e.g. a redefinition of the weight function) and/or different optimization approaches.

Finally, it is ultimately desirable that any such optimization algorithms be decentralized, so that they may be computed on the camera network itself. This is a non-trivial problem and certainly warrants further investigation.

ACKNOWLEDGMENT

This research was supported in part by the Natural Sciences and Engineering Research Council of Canada.

REFERENCES

[1] B. Rinner and W. Wolf, "An Introduction to Distributed Smart Cameras," Proc. IEEE, vol. 96, no. 10, pp. 1565–1575, 2008.
[2] Z. Zivkovic and R. Kleihorst, "Smart Cameras for Wireless Camera Networks: Architecture Overview," in Multi-Camera Networks: Principles and Applications, H. Aghajan and A. Cavallaro, Eds. Academic Press, 2009, ch. 21, pp. 497–510.
[3] M. L. Pinedo, Scheduling: Theory, Algorithms, and Systems, 2nd ed. Prentice-Hall, 2002.
[4] A. Mavrinac, J. L. Alarcon Herrera, and X. Chen, "A Fuzzy Model for Coverage Evaluation of Cameras and Multi-Camera Networks," in Proc. 4th ACM/IEEE Int. Conf. Distributed Smart Cameras, 2010, pp. 95–102.
[5] K. A. Tarabanis, P. K. Allen, and R. Y. Tsai, "A Survey of Sensor Planning in Computer Vision," IEEE Trans. Robotics and Automation, vol. 11, no. 1, pp. 86–104, 1995.
[6] B. Wang, Coverage Control in Sensor Networks. Springer, 2010.
[7] H. Ma and Y. Liu, "Some Problems of Directional Sensor Networks," Int. J. Sensor Networks, vol. 2, no. 1-2, pp. 44–52, 2007.
[8] D. Devarajan and R. J. Radke, "Distributed Metric Calibration of Large Camera Networks," in Proc. 1st Wkshp. on Broadband Advanced Sensor Networks, 2004.
[9] Z. Cheng, D. Devarajan, and R. J. Radke, "Determining Vision Graphs for Distributed Camera Networks Using Feature Digests," EURASIP J. Advances in Signal Processing, 2007.
[10] G. Kurillo, Z. Li, and R. Bajcsy, "Wide-Area External Multi-Camera Calibration using Vision Graphs and Virtual Calibration Object," in Proc. 2nd ACM/IEEE Int. Conf. Distributed Smart Cameras, 2008.
[11] E. J. Lobaton, S. S. Sastry, and P. Ahammad, "Building an Algebraic Topological Model of Wireless Camera Networks," in Multi-Camera Networks: Principles and Applications, H. Aghajan and A. Cavallaro, Eds. Academic Press, 2009, ch. 4, pp. 95–115.
[12] E. J. Lobaton, R. Vasudevan, R. Bajcsy, and S. Sastry, "A Distributed Topological Camera Network Representation for Tracking Applications," IEEE Trans. Image Processing, vol. 19, no. 10, pp. 2516–2529, 2010.
[13] N. Piersma and W. Van Dijk, "A Local Search Heuristic for Unrelated Parallel Machine Scheduling with Efficient Neighborhood Search," Mathematical and Computer Modelling, vol. 24, no. 9, pp. 11–19, 1996.
[14] Y. Ma, S. Soatto, J. Košecká, and S. S. Sastry, An Invitation to 3-D Computer Vision. Springer, 2004.
[15] H. R. Tiwary, "On the Hardness of Computing Intersection, Union and Minkowski Sum of Polytopes," Discrete and Computational Geometry, vol. 40, no. 3, pp. 469–479, 2008.
[16] A. Frank, T. Király, and Z. Király, "On the Orientation of Graphs and Hypergraphs," Discrete Applied Mathematics, vol. 131, no. 2, pp. 385–400, 2003.
[17] A. Mavrinac, J. L. Alarcon Herrera, and X. Chen, "Evaluating the Fuzzy Coverage Model for 3D Multi-Camera Network Applications," in Proc. 3rd Int. Conf. Intelligent Robotics and Applications, 2010, pp. 692–701.
[18] K. Lee, J. Y.-T. Leung, and M. L. Pinedo, "A Note on Graph Balancing Problems with Restrictions," Information Processing Letters, vol. 110, no. 1, pp. 24–29, 2009.
[19] R. L. Graham, E. L. Lawler, J. K. Lenstra, and A. H. G. Rinnooy Kan, "Optimization and Approximation in Deterministic Sequencing and Scheduling: A Survey," Ann. of Discrete Mathematics, vol. 5, pp. 287–326, 1979.
[20] J. Y.-T. Leung and C.-L. Li, "Scheduling with Processing Set Restrictions: A Survey," Int. J. Production Economics, vol. 116, no. 2, pp. 251–262, 2008.
[21] M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman & Co., 1979.
[22] H. Detmold, A. R. Dick, A. Van Den Hengel, A. Cichowski, R. Hill, E. Kocadag, K. Falkner, and D. S. Munro, "Topology Estimation for Thousand-Camera Surveillance Networks," in Proc. 1st ACM/IEEE Int. Conf. Distributed Smart Cameras, 2007, pp. 195–202.
[23] H. Detmold, A. R. Dick, A. Van Den Hengel, A. Cichowski, R. Hill, E. Kocadag, Y. Yarom, K. Falkner, and D. S. Munro, "Estimating Camera Overlap in Large and Growing Networks," in Proc. 2nd ACM/IEEE Int. Conf. Distributed Smart Cameras, 2008.