Optimizing Load Distribution in Camera Networks
with a Hypergraph Model of Coverage Topology
Aaron Mavrinac and Xiang Chen
Department of Electrical and Computer Engineering, University of Windsor
Email: {mavrin1,xchen}@uwindsor.ca
Abstract—A new topological model of camera network coverage, based on a weighted hypergraph representation, is introduced. The model's theoretical basis is the coverage strength model, presented in previous work and summarized here. Optimal distribution of task processing is approximated by adapting a local search heuristic for parallel machine scheduling to this hypergraph model. Simulation results are presented to demonstrate its effectiveness.
I. INTRODUCTION
Multi-camera systems have been studied extensively for a wide variety of applications. Although centralized architectures for fusing and processing data from these multiple sources are a natural extension of traditional computer vision methods, such configurations are limited in scalability and robustness. The increasingly popular distributed smart camera network [1] paradigm is the answer to this challenge. In such a system, each camera node possesses local processing capabilities, and data is increasingly abstracted (and thus increasingly compact) as it is communicated and processed farther from its original source. Zivkovic and Kleihorst [2] give an overview and analysis of smart camera node architecture illuminating the benefits of this design.
Naturally, any initial image or video processing tasks which
require data only from a single node are assigned to that node.
However, if the nodes themselves are also responsible for
fusing and processing data from multiple sources – as they
must be, in a true distributed smart camera network – it is
less obvious where to assign such tasks.
Scheduling has been an active area of research for decades, and algorithms solving a variety of different problems have been used in such diverse applications as manufacturing and distributed computing [3]. Formulating an appropriate scheduling problem requires domain-specific knowledge; in our case, an understanding of the underlying nature of a multi-camera task.
The scale and performance of most tasks in multi-camera
networks (indeed, in sensor networks generally) are directly
related to the volume of coverage of the sensor(s) in question.
In previous work [4], we developed a real-valued coverage
model for multi-camera systems, inspired by task-oriented
sensor planning models from the computer vision literature [5]
and by coverage models used for various purposes in wireless
sensor networks [6], [7]. We demonstrated that, given a set of a priori parameters of the multi-camera system and some task requirements, this model accurately describes the true coverage of a scene in the context of the task. In order that this work be self-contained, we provide in Section II a reduced but functionally complete description of the model. This provides us with a basis for a priori quantitative characterization of multi-camera tasks.
The next step is to abstract this understanding into a topological structure suitable for optimization over the network. Our first contribution is a novel topological model for camera network coverage using a hypergraph representation, described in Section III. Devarajan and Radke [8] propose the vision graph as a theoretical topological model for pairwise tasks in camera networks; it has since been constructed and employed in several such applications [9], [10]. Lobaton et al. [11], [12] recognize the inadequacy of a graph for accurately capturing topology, and generalize to a simplicial complex representation. In the context of camera networks which may be processing a coverage-bound task with data from arbitrary combinations of sensors, we contend that only the hypergraph representation is sufficiently general. Additionally, we define a hyperedge weighting function which incorporates the salient coverage information for optimization.
Our second contribution, detailed in Section IV, is the
characterization of the optimal task processing distribution
problem in the hypergraph framework, and the adaptation of a
local search heuristic from the scheduling literature [13] which
has been shown to exhibit good performance for this class of
problem.
We present simulated experimental results demonstrating
the method on a virtual network of 23 cameras in Section V.
Finally, we give some concluding remarks in Section VI.
II. COVERAGE MODEL
A. Stimulus Space
The sensor coverage model requires the definition of a
stimulus space to describe individual observable data. A visual
stimulus is localized to a point in three-dimensional space, and
also has a direction (normal to the surface on which the point
lies, i.e. view angle). We therefore define a directional space
as the stimulus space.
Definition 1: The directional space $\mathbb{D}^3 = \mathbb{R}^3 \times [0, \pi] \times [0, 2\pi)$ consists of three-dimensional Euclidean space plus direction, with elements of the form $(x, y, z, \rho, \eta)$.

We term $p \in \mathbb{D}^3$ a directional point. For convenience, we denote its spatial component $p_s = (p_x, p_y, p_z)$ and its directional component $p_d = (p_\rho, p_\eta)$.
Fig. 1. Axes and Angles of $\mathbb{D}^3$
A standard 3D pose $P: \mathbb{R}^3 \to \mathbb{R}^3$, consisting of rotation matrix $R$ and translation vector $T$, may be applied to $p \in \mathbb{D}^3$. The spatial component is transformed as usual, i.e., $P(p_s) = R p_s + T$. The directional component is transformed as follows: if $d$ is the unit vector in the direction of $p_d$, then $P(p_d) = (\arccos((Rd)_z), \operatorname{arctan2}((Rd)_y, (Rd)_x))$.
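As a concrete illustration, the following is a minimal Python sketch of this transform (Python being the language of our Adolphus software); it assumes NumPy, and the function names are ours.

```python
import numpy as np

def direction_to_unit(rho, eta):
    """Unit vector for the direction angles (rho, eta) of D^3."""
    return np.array([np.sin(rho) * np.cos(eta),
                     np.sin(rho) * np.sin(eta),
                     np.cos(rho)])

def transform_directional_point(p, R, T):
    """Apply the pose P = (R, T) to a directional point p = (x, y, z, rho, eta)."""
    ps = R @ np.asarray(p[:3], float) + T        # P(ps) = R ps + T
    d = R @ direction_to_unit(p[3], p[4])        # rotate the direction vector
    rho = np.arccos(np.clip(d[2], -1.0, 1.0))    # P(pd) = (arccos(dz), ...)
    eta = np.arctan2(d[1], d[0]) % (2 * np.pi)   # ...arctan2(dy, dx)), in [0, 2pi)
    return (*ps, rho, eta)
```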
B. Coverage Strength Model
The coverage strength model of a given sensor system
(which may be a single physical sensor or multiple sensors)
assigns to every point in the stimulus space a measure of
coverage strength.
Definition 2: A coverage strength model is a mapping $C: \mathbb{D}^3 \to [0, 1]$, for which $C(p)$, for any $p \in \mathbb{D}^3$, is the strength of coverage at $p$.

Definition 3: The set $\langle C \rangle = \{p \in \mathbb{D}^3 \mid C(p) > 0\}$ is the coverage hull of a coverage strength model $C$.

In order for the coverage strength model to offer a useful gauge of sensor system performance, it requires the context of a task.

Definition 4: A relevance model is a mapping $R: \mathbb{D}^3 \to [0, 1]$, for which $R(p)$, for any $p \in \mathbb{D}^3$, is the minimum desired coverage strength or coverage priority at $p$.

The coverage strength model is defined in part by task requirements, defined by a set of task parameters which encapsulate various properties of the a posteriori quality of sensed data. These parameters and a relevance model together fully describe a task.
Given coverage strength and/or relevance models $C_i$ and $C_j$, we define their union and intersection, respectively, as

$$(C_i \cup C_j)(p) = \max(C_i(p), C_j(p)) \qquad (1)$$

$$(C_i \cap C_j)(p) = \min(C_i(p), C_j(p)) \qquad (2)$$

for all $p \in \mathbb{D}^3$. This, together with Definition 3, implies that $\langle C_i \cup C_j \rangle = \langle C_i \rangle \cup \langle C_j \rangle$ and $\langle C_i \cap C_j \rangle = \langle C_i \rangle \cap \langle C_j \rangle$.

The $k$-coverage strength model for a subset of sensor systems $M \subseteq N$, where $|M| = k$, is

$$C_M = \bigcap_{m \in M} C_m \qquad (3)$$

The $k$-coverage strength model for the network is

$$C^k_N = \bigcup_{M \in \binom{N}{k}} C_M \qquad (4)$$

where each $M$ is a $k$-combination of $N$. Note that in the common case where $k = 1$, (3) and (4) reduce to $C^1_N = \bigcup_{m \in N} C_m$.
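The set operations (1)-(4) translate directly into code. The following is a minimal Python sketch, under the simplifying assumption that coverage models are represented as plain functions from $\mathbb{D}^3$ points to $[0, 1]$; all names are illustrative.

```python
from itertools import combinations

def union(ci, cj):
    """(Ci u Cj)(p) = max(Ci(p), Cj(p)), as in (1)."""
    return lambda p: max(ci(p), cj(p))

def intersection(ci, cj):
    """(Ci n Cj)(p) = min(Ci(p), Cj(p)), as in (2)."""
    return lambda p: min(ci(p), cj(p))

def k_coverage(models, k):
    """C^k_N: the union over all k-combinations M of the intersections C_M,
    as in (3) and (4)."""
    def ck(p):
        return max(min(m(p) for m in M) for M in combinations(models, k))
    return ck
```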
C. Single-Camera Model
First, we present a single-camera parameterization of the coverage strength model, for which the full theoretical derivation can be found in [4].
Given a task parameter $\gamma$ indicating a margin in the image (in pixels) for full coverage, the horizontal and vertical cross-sections of the visibility component, $C_V$, are given by

$$C_{Vh}(p) = B_{[0,1]}\left[\frac{\min\left(\frac{p_x}{p_z} + \sin\alpha_{hl},\ \sin\alpha_{hr} - \frac{p_x}{p_z}\right)}{\gamma_h}\right] \qquad (5)$$

$$C_{Vv}(p) = B_{[0,1]}\left[\frac{\min\left(\frac{p_y}{p_z} + \sin\alpha_{vt},\ \sin\alpha_{vb} - \frac{p_y}{p_z}\right)}{\gamma_v}\right] \qquad (6)$$

for $\gamma > 0$, where $\alpha_{hl}$ and $\alpha_{hr}$ are the horizontal field-of-view angles, and $\alpha_{vt}$ and $\alpha_{vb}$ are the vertical field-of-view angles. The complete $C_V$ is then given by

$$C_V(p) = \begin{cases} \min(C_{Vh}(p), C_{Vv}(p)) & \text{if } p_z > 0, \\ 0 & \text{otherwise.} \end{cases} \qquad (7)$$

The resolution component, $C_R$, is given by

$$C_R(p) = B_{[0,1]}\left[\frac{z_2 - p_z}{z_2 - z_1}\right] \qquad (8)$$

for $R_1 > R_2$, where the values of $z_1$ and $z_2$ are given by (9), substituting task parameters $R_1$ (ideal resolution) and $R_2$ (minimum resolution), respectively, for $R$.

$$z_R = \frac{1}{R} \min\left(\frac{w}{2\sin(\alpha_h/2)},\ \frac{h}{2\sin(\alpha_v/2)}\right) \qquad (9)$$

In the preceding equation, $\alpha_h = \alpha_{hl} + \alpha_{hr}$ and $\alpha_v = \alpha_{vt} + \alpha_{vb}$.
Given a task parameter $c_{max}$ indicating the maximum acceptable blur circle diameter, the focus component, $C_F$, is given by

$$C_F(p) = B_{[0,1]}\left[\min\left(\frac{p_z - z_n}{\hat{z}_n - z_n},\ \frac{z_f - p_z}{z_f - \hat{z}_f}\right)\right] \qquad (10)$$

for $c_{max} > c_{min}$, where $(\hat{z}_n, \hat{z}_f)$ and $(z_n, z_f)$ are the near and far limits of depth of field as given by (11), substituting blur circle diameters $c_{min}$ and $c_{max}$, respectively, for $c$.

$$z = \frac{A f z_S}{A f \pm c (z_S - f)} \qquad (11)$$

In the preceding equation, $A$ is the effective aperture diameter, $f$ is the focal length, and $z_S$ is the subject distance. Generally, $c_{min}$ is equal to the physical pixel size, yielding the depth of field for effectively perfect focus.

The direction (angle of view) component, $C_D$, is given by

$$C_D(p) = B_{[0,1]}\left[\frac{\Theta(p) - \pi + \zeta_2}{\zeta_2 - \zeta_1}\right] \qquad (12)$$

where $\zeta_1, \zeta_2 \in [0, \pi/2]$ are task parameters indicating the ideal and maximum view angles, respectively, and $\Theta(p)$ is defined as

$$\Theta(p) \equiv p_\rho - \left(\frac{p_y}{r}\sin p_\eta + \frac{p_x}{r}\cos p_\eta\right)\arctan\left(\frac{r}{p_z}\right) \qquad (13)$$

where $r = \sqrt{p_x^2 + p_y^2}$.

The full coverage strength model is simply the product of these components:

$$C(p) = C_V(p)\, C_R(p)\, C_F(p)\, C_D(p) \qquad (14)$$
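To make the parameterization concrete, the following Python sketch evaluates (5)-(14) at a single point. It is an illustrative reading of the equations above rather than a reference implementation; the parameter dictionaries and their keys are ours, and $B_{[0,1]}$ is realized as a clamp to $[0, 1]$.

```python
import math

def clamp01(x):
    """The bounding function B_[0,1]."""
    return max(0.0, min(1.0, x))

def coverage_strength(p, cam, task):
    """Evaluate the product form (14) at p = (x, y, z, rho, eta) in the
    camera frame; cam and task hold the parameters of Section II-C."""
    x, y, z, rho, eta = p
    if z <= 0:
        return 0.0
    # Visibility (5)-(7): normalized distance inside the field-of-view boundary.
    cv = min(
        clamp01(min(x / z + math.sin(cam['alpha_hl']),
                    math.sin(cam['alpha_hr']) - x / z) / task['gamma_h']),
        clamp01(min(y / z + math.sin(cam['alpha_vt']),
                    math.sin(cam['alpha_vb']) - y / z) / task['gamma_v']))
    # Resolution (8): linear falloff between depths z1 (ideal) and z2 (minimum).
    cr = clamp01((task['z2'] - z) / (task['z2'] - task['z1']))
    # Focus (10): ramps between the c_max and c_min depth-of-field limits.
    cf = clamp01(min((z - task['zn']) / (task['zn_hat'] - task['zn']),
                     (task['zf'] - z) / (task['zf'] - task['zf_hat'])))
    # Direction (12)-(13): view angle relative to the maximum and ideal angles.
    r = math.hypot(x, y)
    theta = rho if r == 0 else rho - ((y / r) * math.sin(eta) +
                                      (x / r) * math.cos(eta)) * math.atan(r / z)
    cd = clamp01((theta - math.pi + task['zeta2']) /
                 (task['zeta2'] - task['zeta1']))
    return cv * cr * cf * cd
```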
D. Multi-Camera System Model
A set of single-camera models may be placed in the context
of a world coordinate frame and a scene, and then combined
into multi-camera coverage models. Again, theoretical details
may be found in [4].
The six degrees of freedom of a camera's world frame pose $P: \mathbb{R}^3 \to \mathbb{R}^3$ are called the extrinsic parameters of the camera [14]. As discussed in Section II-A, $P$ can be extended to $P_D: \mathbb{D}^3 \to \mathbb{D}^3$. The in-scene model for a single camera, then, is the single-camera model $C$ with its domain transformed to the world frame, defined by

$$C^s(p) = C(P_D^{-1}(p)) \qquad (15)$$

for any world frame point $p \in \mathbb{D}^3$.

Given a scene model $S$ consisting of a set of plane segments (which represent opaque surfaces in the scene), the point $p_s$ is occluded iff the point of intersection between the line from $p_s$ to the camera's principal point and any plane segment in $S$ exists, is unique, and is not $p_s$.

If $V: \mathbb{R}^3 \to \{0, 1\}$ is a bivalent indicator function such that $V(p_s) = 1$ iff $p_s$ is not occluded from a given camera's viewpoint, then the in-scene model with static occlusion is defined by

$$C^o(p) = C^s(p)\, V(p_s) \qquad (16)$$

for any $p \in \mathbb{D}^3$, where $p_s$ is the spatial component of $p$, and where $C^s$ is given by (15).

Finally, the $k$-ocular multi-camera system model is computed via (3) and (4).
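The occlusion indicator $V$ in (16) reduces to a line-of-sight test against the scene model. The following sketch illustrates one possible implementation, under the assumption that the plane segments are triangulated; it uses the standard Möller-Trumbore ray/triangle test restricted to the segment from $p_s$ to the principal point, and all names are ours.

```python
import numpy as np

def segment_hits_triangle(a, b, tri, eps=1e-9):
    """True iff the open segment a->b intersects the triangle tri = (v0, v1, v2)
    (Moeller-Trumbore test, restricted to the interior of the segment)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    v0, v1, v2 = (np.asarray(v, float) for v in tri)
    d = b - a
    e1, e2 = v1 - v0, v2 - v0
    h = np.cross(d, e2)
    det = e1 @ h
    if abs(det) < eps:                   # segment parallel to the plane
        return False
    s = a - v0
    u = (s @ h) / det
    q = np.cross(s, e1)
    v = (d @ q) / det
    t = (e2 @ q) / det
    # The hit must lie inside the triangle, strictly between the endpoints.
    return 0.0 <= u <= 1.0 and v >= 0.0 and u + v <= 1.0 and eps < t < 1.0 - eps

def visibility(ps, principal_point, scene):
    """V(ps) = 1 iff no scene triangle occludes ps from the principal point."""
    return int(not any(segment_hits_triangle(ps, principal_point, tri)
                       for tri in scene))
```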
E. Discrete Model
While it is feasible to compute the vertices of the coverage hull $\langle C^o \rangle$ of an in-scene camera coverage strength model with occlusion directly from the parameterizations in Sections II-C and II-D, the only obvious way to obtain $\langle C^o_M \rangle$, where $|M| > 1$, is to find $\bigcap_{m \in M} \langle C^o_m \rangle$. This involves finding the intersection of arbitrary, generally non-convex polytopes given by vertices, which has been shown to be NP-hard by Tiwary [15].

An arbitrarily close approximation can be achieved in the discrete domain.¹ A coverage strength model $C$ has a discrete counterpart denoted $\dot{C}$ such that $\dot{C}(p) = C(p)$ for all $p \in \dot{\mathbb{D}}^3$, where $\dot{\mathbb{D}}^3$ is a discrete subset of $\mathbb{D}^3$ (once this subset has been defined, it should be used consistently). We denote the summation $\sum_{p \in \dot{\mathbb{D}}^3} \dot{C}(p)$ as $|\dot{C}|$. Then, given $\dot{C}_i$ and $\dot{C}_j$ sampled over the same discrete subset of $\mathbb{D}^3$, $\dot{C}_i \cap \dot{C}_j$ can be computed exhaustively.

¹Incidentally, this also greatly simplifies the computation of occlusion in $C^o$ as per (16).
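In the discrete domain, these operations become trivial array manipulations. A minimal sketch, assuming NumPy and a fixed common sample grid:

```python
import numpy as np

def discretize(model, points):
    """Sample a coverage model over a fixed discrete subset of D^3."""
    return np.array([model(p) for p in points])

def intersect(ci_dot, cj_dot):
    """Models sampled on the same grid intersect by elementwise minimum."""
    return np.minimum(ci_dot, cj_dot)

def measure(c_dot):
    """|C|: the sum of coverage strengths over the discrete subset."""
    return float(c_dot.sum())
```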
III. COVERAGE TOPOLOGY
A. Mathematical Background
A hypergraph $H$ is a pair $H = (V, E)$, where $V$ is a set of vertices, and $E$ is a set of non-empty subsets of $V$ called hyperedges. If $\mathcal{P}(V)$ is the power set of $V$, then $E \subseteq \mathcal{P}(V) \setminus \emptyset$.

A weighted hypergraph $H = (V, E, w)$ also includes a weight function over its hyperedges, $w: E \to \mathbb{R}^+$. An unweighted hypergraph may be interpreted as a weighted hypergraph for which $w(e) = 1$ for all $e \in E$.

The degree of a vertex in $H$, denoted $\delta_H(v)$ for some $v \in V$, is the total weight of hyperedges incident to the vertex:

$$\delta_H(v) = \sum_{e \in E} \begin{cases} w(e) & \text{if } v \in e \\ 0 & \text{otherwise} \end{cases} \qquad (17)$$

Following the definition of Frank et al. [16], a directed hypergraph is a pair $D = (V, \vec{E})$, where $\vec{E}$ is a set of hyperarcs; a hyperarc is a hyperedge $e \subseteq V$ with a designated head vertex $v \in e$, denoted $e^v$. The remaining vertices $e \setminus v$ are called tail vertices. Two additional notions of vertex degree are defined: the indegree, $\delta^i_H(v)$, is the total weight of hyperarcs of which $v$ is the head vertex, and the outdegree, $\delta^o_H(v)$, is the total weight of hyperarcs of which $v$ is a tail vertex.

An orientation $\Lambda$ of an undirected hypergraph $H$ has the same vertex and hyperedge sets (and the same weight function, if applicable), but assigns a direction (head vertex) to each hyperedge. In an orientation of a simple hypergraph, if $e^v \in \vec{E}$, then $e^u \in \vec{E}$ implies $u = v$ (that is, $e^v$ is unique). Therefore, we omit the head vertex superscript in certain circumstances; for example, the weight of $e^v$ is denoted simply $w(e)$.
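For reference, the following sketch shows one way these structures might be represented, with hyperedges as frozen sets of vertices and an orientation stored as a map from each hyperedge to its head vertex; the class and function names are ours.

```python
from collections import defaultdict

class WeightedHypergraph:
    """H = (V, E, w), with hyperedges stored as frozensets of vertices."""

    def __init__(self, vertices, weights):
        self.vertices = set(vertices)
        self.weights = dict(weights)   # frozenset hyperedge -> positive weight

    def degree(self, v):
        """delta_H(v): total weight of hyperedges incident to v, as in (17)."""
        return sum(w for e, w in self.weights.items() if v in e)

def weighted_indegrees(hypergraph, heads):
    """delta^i(v) under an orientation given as heads: hyperedge -> head vertex."""
    indeg = defaultdict(float)
    for e, v in heads.items():
        indeg[v] += hypergraph.weights[e]
    return indeg
```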
B. Coverage Hypergraph
The coverage hypergraph of a camera network $N$ is the hypergraph $H_C = (N, E_C, w_C)$. Its hyperedge set is defined as

$$E_C = \{M \in \mathcal{P}(N) \mid \langle C_M \cap R \rangle \neq \emptyset\} \qquad (18)$$

where $C_M$ is computed by (3) for a given task, $R$ is a relevance model for the task, and $\mathcal{P}(N)$ denotes the power set of $N$. Intuitively, $M \in E_C$ indicates that nodes $M$ have mutual coverage of some region of $\mathbb{D}^3$ with respect to $R$.

Theorem 1: $E_C$ is an abstract simplicial complex; that is, for every $M \in E_C$, and every $L \subseteq M$, $L \in E_C$.

Proof: If $n \in M$, then by (3), $C_M = C_{M \setminus n} \cap C_n$. From (2), for all $p \in \mathbb{D}^3$, $C_M(p) \leq C_{M \setminus n}(p)$. Then, from Definition 3, clearly $\langle C_M \rangle \subseteq \langle C_{M \setminus n} \rangle$, and $\langle C_M \cap R \rangle \subseteq \langle C_{M \setminus n} \cap R \rangle$. Thus, for every $M \in E_C$, and every $M \setminus n \subset M$, $M \setminus n \in E_C$. ∎

The hyperedge weight function of $H_C$, $w_C: E_C \to \mathbb{R}^+$, is defined as

$$w_C(M) = |\dot{C}_M \cap \dot{R}| \qquad (19)$$

for some discrete subset $\dot{\mathbb{D}}^3$ of the stimulus space.
Fig. 2. Example Camera Network Layout with Coverage Hypergraph
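As an illustration of (18) and (19), the following sketch constructs the hyperedge set and weights of $H_C$ by exhaustive enumeration over discretized models; it is a naive approach given for clarity, and the names are ours.

```python
from itertools import combinations
import numpy as np

def coverage_hypergraph(models_dot, relevance_dot):
    """Given per-camera models and a relevance model sampled on a common grid,
    return {frozenset M: w_C(M)} for every M with nonempty <C_M n R>."""
    n = len(models_dot)
    weights = {}
    for k in range(1, n + 1):
        for M in combinations(range(n), k):
            c_m = np.minimum.reduce([models_dot[m] for m in M])   # C_M, (3)
            w = float(np.minimum(c_m, relevance_dot).sum())       # w_C(M), (19)
            if w > 0.0:                                           # M in E_C, (18)
                weights[frozenset(M)] = w
    return weights
```

By Theorem 1, any superset of a subset whose weighted intersection is empty can be skipped, which prunes the exponential enumeration considerably in practice.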
Theorem 2: For any $L \subseteq M \in E_C$, $w_C(L) \geq w_C(M)$.

Proof: From the proof of Theorem 1, for all $p \in \mathbb{D}^3$, $C_M(p) \leq C_{M \setminus n}(p)$, so $|\dot{C}_M| \leq |\dot{C}_{M \setminus n}|$. Thus, for every $M \in E_C$, and every $M \setminus n \subset M$, $w_C(M \setminus n) \geq w_C(M)$. ∎

Consider a partial hypergraph $H^K_C = (N, E^K_C, w_C)$ of $H_C$ with hyperedge subset

$$E^K_C = \{M \in E_C \mid |M| \in K\} \qquad (20)$$

where $K \subset \mathbb{Z}^+$. When $K = \{k\}$, we term this the $k$-coverage hypergraph of $N$. When $K = \{k, l\}$, we term this the $k,l$-coverage hypergraph of $N$, and so on.

Since $E_C$ is an abstract simplicial complex, the 2-coverage hypergraph $H^2_C$ is the (weighted) primal graph of $H_C$, qualitatively equivalent to the vision graph as described in most other sources. We formally define the vision graph as $H^2_C$.
IV. TASK PROCESSING DISTRIBUTION
A. Problem Statement
Consider the portion of a $k$-ocular task in camera network $N$ which involves processing data from all of $M \subseteq N$, where $|M| = k$; we shall term this an $M$-subtask. Only stimuli within $\langle C_M \rangle$ are relevant to an $M$-subtask. Given a relevance model $R$ for the task, the expected processing load for a given $M$-subtask is proportional to $|\dot{C}_M \cap \dot{R}|$. Although this conjecture is tautological given that $R$ is arbitrary, since $R$ ideally represents the distribution of the stimuli necessary to perform the task, it is reasonable to assume in general that it also reflects the distribution of the processing load incurred by said stimuli. This is supported by empirical evidence [4], [17].

Assuming that $N$ consists of smart camera nodes with equal local computational resources, the problem is to distribute the processing of all $M$-subtasks over the nodes such that the maximum load on any one node is minimized.

The set of eligible nodes to which $M$-subtasks may be assigned is restricted to $M$, for the following reasons:

1) Robustness: If a node $n \in M$ fails, the $M$-subtask can no longer be processed. Thus, assigning it to any $n \in M$ carries no risk of disrupting service for valid models.
2) Locality: In a large network, because the sensing range is finite, if $\langle C_M \rangle \neq \emptyset$, it is likely that nodes $M$ are physically proximate. Since we assume nothing about the network structure, it is sensible to keep the $M$-subtask processing node physically local for communication efficiency.

The usefulness of this restriction is especially apparent in the special case $k = 1$, allowing camera-local subtasks (image preprocessing, etc.) to be included in the accounting.

Given a $K$-ocular task, where $K \subset \mathbb{Z}^+$, this problem can be solved by finding an orientation of $H^K_C$ which minimizes the maximum weighted indegree.
B. Minimum Indegree Orientation
The minimum maximum indegree orientation problem for hypergraphs can be stated as follows. Given a simple, undirected, weighted hypergraph $H = (V, E, w)$, find an orientation $\Lambda$ of $H$ which minimizes $\max_{u \in V}[\delta^i_\Lambda(u)]$.

This is equivalent to the scheduling problem of offline makespan minimization over identical parallel machines with eligibility constraints [18]; in the three-field notation of Graham et al. [19], $P \mid M_j,\ M_j \neq M_k \text{ if } j \neq k \mid C_{max}$. This is a special case of $P \mid M_j \mid C_{max}$, which in turn is a special case of $R \mid\mid C_{max}$ [20]. The problem is NP-hard [21], but a number of approximation algorithms and search heuristics have been proposed.

We present here a local search heuristic based on the GR/EFF descent of Piersma and Van Dijk [13]. The main differences are the use of hypergraph notation and some simplifications made possible by constraints particular to our problem.

Initialization: Suppose the given hypergraph is $H = (V, E, w)$. Let $\Lambda = (V, \vec{E}, w)$, with $\vec{E} = \emptyset$ initially.

Starting Point: Consider $E$ in any order. For each $e \in E$, add $e^u$ to $\vec{E}$ such that $\delta^i_\Lambda[u] = \min_{v \in e} \delta^i_\Lambda[v]$.

Neighbourhood Search:

1) Choose $v_{max} \in V$ such that $\delta^i_\Lambda[v_{max}] = \max_{v \in V} \delta^i_\Lambda[v]$. Let $\mathcal{R} = \{(v, e^{v_{max}}) \mid v \in V \setminus v_{max},\ v \in e,\ e^{v_{max}} \in \vec{E}\}$.
2) If $\mathcal{R} = \emptyset$, go to Step 4. Otherwise, consider any $(v, e^{v_{max}}) \in \mathcal{R}$; remove $(v, e^{v_{max}})$ from $\mathcal{R}$.
3) If $\delta^i_\Lambda[v] < \delta^i_\Lambda[v_{max}] - w(e)$, replace $e^{v_{max}}$ with $e^v$ in $\vec{E}$ and go to Step 1. Otherwise, go to Step 2.
4) Sort $V$ in nonincreasing order of indegree. Let $v_1$ and $v_2$ be its last and first elements, respectively.
5) Let $\vec{E}_1 = \{e^{v_1} \mid v_2 \in e,\ e^{v_1} \in \vec{E}\}$ and $\vec{E}_2 = \{e^{v_2} \mid v_1 \in e,\ e^{v_2} \in \vec{E}\}$. Let $\mathcal{I} = \vec{E}_1 \times \vec{E}_2$.
6) If $\mathcal{I} = \emptyset$, go to Step 8. Otherwise, consider any $(e_1^{v_1}, e_2^{v_2}) \in \mathcal{I}$; remove $(e_1^{v_1}, e_2^{v_2})$ from $\mathcal{I}$.
7) If $\max(\delta^i_\Lambda[v_1] - w(e_1) + w(e_2),\ \delta^i_\Lambda[v_2] - w(e_2) + w(e_1)) < \max(\delta^i_\Lambda[v_1], \delta^i_\Lambda[v_2])$, replace $e_1^{v_1}$ and $e_2^{v_2}$, respectively, with $e_1^{v_2}$ and $e_2^{v_1}$ in $\vec{E}$ and go to Step 4. Otherwise, go to Step 6.
8) Let $v_2$ be the next element in $V$. If $v_2 = v_1$, let $v_1$ be the previous element in $V$ and let $v_2$ be the first element in $V$. If $v_1$ is the first element of $V$, return $\Lambda$. Otherwise, go to Step 5.
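The following Python sketch is our reading of this descent. It is a hedged illustration rather than a verbatim transcription of the steps above; in particular, it rescans for an improving move after every accepted move instead of maintaining the explicit pointer bookkeeping of Steps 4-8, but it performs the same two neighbourhood moves and terminates at a local optimum.

```python
def _reassign_move(vertices, weights, head, indeg):
    """Steps 1-3: move a hyperedge off the most loaded vertex if this strictly
    improves the balance."""
    vmax = max(vertices, key=indeg.get)
    for e, w in weights.items():
        if head[e] != vmax:
            continue
        for v in e:
            if indeg[v] < indeg[vmax] - w:
                head[e] = v
                indeg[vmax] -= w
                indeg[v] += w
                return True
    return False

def _interchange_move(vertices, weights, head, indeg):
    """Steps 4-8: swap the heads of two hyperedges between a lightly and a
    heavily loaded vertex if this lowers the larger of their indegrees."""
    for v1 in sorted(vertices, key=indeg.get):                  # lightest first
        for v2 in sorted(vertices, key=indeg.get, reverse=True):
            if v1 == v2:
                continue
            e1s = [e for e in weights if head[e] == v1 and v2 in e]
            e2s = [e for e in weights if head[e] == v2 and v1 in e]
            for e1 in e1s:
                for e2 in e2s:
                    n1 = indeg[v1] - weights[e1] + weights[e2]
                    n2 = indeg[v2] - weights[e2] + weights[e1]
                    if max(n1, n2) < max(indeg[v1], indeg[v2]):
                        head[e1], head[e2] = v2, v1
                        indeg[v1], indeg[v2] = n1, n2
                        return True
    return False

def orient_min_max_indegree(vertices, weights):
    """Approximate a minimum maximum weighted indegree orientation of the
    hypergraph (vertices, weights), with weights mapping frozenset hyperedges
    to positive reals. Returns a map from each hyperedge to its head vertex."""
    vertices = list(vertices)
    indeg = {v: 0.0 for v in vertices}
    head = {}
    for e, w in weights.items():        # greedy starting point
        u = min(e, key=indeg.get)
        head[e] = u
        indeg[u] += w
    while (_reassign_move(vertices, weights, head, indeg) or
           _interchange_move(vertices, weights, head, indeg)):
        pass                            # descend until no move improves
    return head
```

Each accepted move strictly decreases the sorted indegree vector lexicographically, so the descent terminates after finitely many moves.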
V. EXPERIMENTAL RESULTS
A. Description of Simulation
We test task distribution on a simulated network $N$ of 23 camera nodes arranged in a virtual environment with walls and other occlusions. Our tasks are independent of the directional dimensions $\rho$ and $\eta$; accordingly, we simplify the discussion by working exclusively in $\mathbb{R}^3$. A top view of the environment is shown in Figure 3, along with the relevance model $R$, which is uniform in $z$ from 1.5 m to 2.0 m (with the floor at 0 m, and all cameras at 2.5 m), and the locations of the cameras.
Fig. 3. Floor Plan and Relevance Model
The camera coverage strength models are derived from real parameters of a calibrated Prosilica EC-1350 1.3 MP grayscale CCD camera with a Computar M3Z1228C-MP lens. The specific task parameters used are $\gamma = 20$, $R_1 = 0.3$, $R_2 = 0.01$, and $c_{max} = 0.008$ ($\zeta_1$ and $\zeta_2$ are unused). Extrinsic parameters are defined manually to deploy the cameras in a reasonable arrangement covering the environment (82.42% coverage performance with respect to $R$).

The camera network and environment are simulated using our Adolphus² simulation software (Figure 4).

²Adolphus is free software licensed under the GNU General Public License. Python source code and documentation are available at http://github.com/ezod/adolphus.
Fig. 4. Adolphus Showing $\langle C_I \rangle$ and $\langle C_M \rangle$
The coverage hypergraph $H_C$ for $N$ and $R$ is computed over the discrete space $\dot{\mathbb{R}}^3 = \{(250x, 250y, 250z) \mid x, y, z \in \mathbb{Z}\}$, with coordinates in millimeters. Although it is too large to represent here graphically, Table I shows some statistics of the hyperedges in the complete $H_C$.
TABLE I
HYPEREDGES IN $H_C$

Edge Size   Count   Mean Weight
1           23      750.51
2           78      155.66
3           130     50.13
4           152     23.49
5           122     14.09
6           61      9.37
7           17      6.40
Total       583     71.85
For each task, events of interest are points $p \in \mathbb{R}^3$ generated randomly using $\lambda^{-1} R$ as a probability density function, where $\lambda = \iiint_{\mathbb{R}^3} R \, dx\, dy\, dz$. The detection probability for event $p$ by camera node $n$ is $C_n(p)$. Camera nodes individually detect events and are assumed to propagate their data to the appropriate nodes for processing.
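For illustration, events with density proportional to $R$ can be drawn by normalizing the discretized relevance model into a probability mass function. A minimal sketch, assuming NumPy and the discrete grid of Section II-E:

```python
import numpy as np

def sample_events(points, relevance_dot, count, seed=None):
    """Draw event locations from the discrete grid with probability
    proportional to the relevance model (i.e., density lambda^-1 R)."""
    rng = np.random.default_rng(seed)
    r = np.asarray(relevance_dot, float)
    idx = rng.choice(len(points), size=count, p=r / r.sum())
    return [points[i] for i in idx]
```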
B. Task 1: Generic Multi-View Processing
The first simulation experiment models a generic task in which each event is processed by every combination of camera nodes which detects it. Processing an event charges one unit of processing load to the node to which the combination is assigned (i.e., the vertex in $H_C$ which is the head of the edge comprising the combination).
We generated 10,000 random events and assigned their processing to nodes according to $\Lambda$, the minimum maximum weighted indegree orientation of $H_C$ approximated per the algorithm in Section IV-B. For comparison, we also assigned the same event detections using four other orientations of $H_C$: the optimal unweighted minimum maximum indegree orientation $U$, two random orientations $R_1$ and $R_2$, and a greedy orientation $G$ (edges oriented in arbitrary order to the vertex with least indegree). Figure 5 shows the maximum and standard deviation of processing loads (with a mean of 1378.39) for each strategy.

Fig. 5. Load Statistics for Task 1
The $\Lambda$ distribution yields both the least maximum load and the most consistent distribution of load over the network, with improvements of 5% and 22%, respectively, over the next best strategy tested.
C. Task 2: Best-Pair Stereo Reconstruction
The second simulation experiment models a best-pair stereo
reconstruction task. Hypothetically, upon detection of an event,
camera nodes estimate their pairwise coverage of the event,
then reach network-wide consensus on the pair with best
coverage; the best pair then proceeds to perform a dense 3D
reconstruction of the event. In our model, each estimation of
pairwise coverage charges one unit of processing load to the
assigned node, and each reconstruction charges five units of
processing load to the assigned node for the best pair.
We generated 2,000 random events and assigned their processing to nodes according to $\Lambda$, the minimum maximum weighted indegree orientation of $H^2_C$. Again, we compare this to the unweighted solution $U$, two random orientations $R_1$ and $R_2$, and a greedy orientation $G$. Figure 6 shows the maximum and standard deviation of processing loads (with a mean of 373.87) for each strategy.

Fig. 6. Load Statistics for Task 2
Again, the $\Lambda$ distribution yields both the least maximum load and the most consistent distribution of load over the network, with improvements of 13% and 35%, respectively, over the next best strategy tested.
VI. CONCLUSIONS
The coverage hypergraph is a generalization of previous models of camera network coverage topology which fully captures node-level coverage relationships. As such, it is a useful combinatorial structure for optimization in distributed smart camera applications. We have demonstrated its application to optimizing the distribution of task processing load with simulated experiments, by adapting and applying an algorithm for a related scheduling problem.

This model is conceptually simple, but shows much promise as a powerful tool, given that it has a strong, reliable theoretical foundation and is tractable with a large body of well-studied optimization techniques.
A. Future Work
Although the coverage strength model provides an excellent theoretical basis for defining the coverage hypergraph, in practice the necessary calibration parameters are often unavailable. We propose to construct $H_C$ probabilistically from sensor data, following approaches used to construct other topological models. Cheng et al. [9] build the vision graph by pairwise matching of digests of local features. The exclusion approach of Detmold et al. [22], [23] also builds the vision graph, starting with a complete graph and eliminating (or reducing the likelihood of) edges when an occupancy mismatch is detected. Lobaton et al. [12] construct their CN-complex by matching detection and occlusion events.
Developing such a method would also allow us to attempt
an experimental application using a real camera network
without calibration, with one or more tasks of a less contrived
nature than those in Section V. Our results currently depend
on our assumptions about computational cost and detection
probability holding in practice, since in simulation we have no
means by which to generate events besides the relevance and
coverage strength models, which are also used to construct
the hypergraph itself. Our previous work [4], [17] provides
some evidence that these assumptions are generally valid,
but a complete real-world application would present a more
convincing case.
The particular optimization over $H_C$ presented in this work could be adapted to a variety of more complex task distribution scenarios. Multiple tasks with different computational costs could be combined into a single objective. Other problems aside from task processing distribution may require different interpretations of $H_C$ (e.g., a redefinition of the weight function) and/or different optimization approaches.
Finally, it is ultimately desirable that any such optimization
algorithms be decentralized, so that they may be computed on
the camera network itself. This is a non-trivial problem and
certainly warrants further investigation.
ACKNOWLEDGMENT
This research was supported in part by the Natural Sciences
and Engineering Research Council of Canada.
REFERENCES
[1] B. Rinner and W. Wolf, “An Introduction to Distributed Smart Cameras,”
Proc. IEEE, vol. 96, no. 10, pp. 1565–1575, 2008.
[2] Z. Zivkovic and R. Kleihorst, “Smart Cameras for Wireless Camera Networks: Architecture Overview,” in Multi-Camera Networks: Principles and Applications, H. Aghajan and A. Cavallaro, Eds. Academic Press, 2009, ch. 21, pp. 497–510.
[3] M. L. Pinedo, Scheduling: Theory, Algorithms, and Systems, 2nd ed.
Prentice-Hall, 2002.
[4] A. Mavrinac, J. L. Alarcon Herrera, and X. Chen, “A Fuzzy Model for
Coverage Evaluation of Cameras and Multi-Camera Networks,” in Proc.
4th ACM/IEEE Int. Conf. Distributed Smart Cameras, 2010, pp. 95–102.
[5] K. A. Tarabanis, P. K. Allen, and R. Y. Tsai, “A Survey of Sensor Planning in Computer Vision,” IEEE Trans. Robotics and Automation, vol. 11, no. 1, pp. 86–104, 1995.
[6] B. Wang, Coverage Control in Sensor Networks. Springer, 2010.
[7] H. Ma and Y. Liu, “Some Problems of Directional Sensor Networks,”
Int. J. Sensor Networks, vol. 2, no. 1-2, pp. 44–52, 2007.
[8] D. Devarajan and R. J. Radke, “Distributed Metric Calibration of Large
Camera Networks,” in Proc. 1st Wkshp. on Broadband Advanced Sensor
Networks, 2004.
[9] Z. Cheng, D. Devarajan, and R. J. Radke, “Determining Vision Graphs for Distributed Camera Networks Using Feature Digests,” EURASIP J. Advances in Signal Processing, 2007.
[10] G. Kurillo, Z. Li, and R. Bajcsy, “Wide-Area External Multi-Camera
Calibration using Vision Graphs and Virtual Calibration Object,” in Proc.
2nd ACM/IEEE Int. Conf. Distributed Smart Cameras, 2008.
[11] E. J. Lobaton, S. S. Sastry, and P. Ahammad, “Building an Algebraic Topological Model of Wireless Camera Networks,” in Multi-Camera Networks: Principles and Applications, H. Aghajan and A. Cavallaro, Eds. Academic Press, 2009, ch. 4, pp. 95–115.
[12] E. J. Lobaton, R. Vasudevan, R. Bajcsy, and S. Sastry, “A Distributed
Topological Camera Network Representation for Tracking Applications,”
IEEE Trans. Image Processing, vol. 19, no. 10, pp. 2516–29, 2010.
[13] N. Piersma and W. Van Dijk, “A Local Search Heuristic for Unrelated
Parallel Machine Scheduling with Efficient Neighborhood Search,”
Mathematical and Computer Modelling, vol. 24, no. 9, pp. 11–19, 1996.
[14] Y. Ma, S. Soatto, J. Košecká, and S. S. Sastry, An Invitation to 3-D Computer Vision. Springer, 2004.
[15] H. R. Tiwary, “On the Hardness of Computing Intersection, Union and Minkowski Sum of Polytopes,” Discrete and Computational Geometry, vol. 40, no. 3, pp. 469–479, 2008.
[16] A. Frank, T. Király, and Z. Király, “On the Orientation of Graphs and Hypergraphs,” Discrete Applied Mathematics, vol. 131, no. 2, pp. 385–400, 2003.
[17] A. Mavrinac, J. L. Alarcon Herrera, and X. Chen, “Evaluating the Fuzzy
Coverage Model for 3D Multi-Camera Network Applications,” in Proc.
3rd Int. Conf. Intelligent Robotics and Applications, 2010, pp. 692–701.
[18] K. Lee, J. Y.-T. Leung, and M. L. Pinedo, “A Note on Graph Balancing
Problems with Restrictions,” Information Processing Letters, vol. 110,
no. 1, pp. 24–29, 2009.
[19] R. L. Graham, E. L. Lawler, J. K. Lenstra, and A. H. G. Rinnooy
Kan, “Optimization and Approximation in Deterministic Sequencing and
Scheduling: A Survey,” Ann. of Discrete Mathematics, vol. 5, pp. 287–
326, 1979.
[20] J. Y.-T. Leung and C.-L. Li, “Scheduling with Processing Set Restric-
tions: A Survey,” Int. J. Production Economics, vol. 116, no. 2, pp.
251–262, 2008.
[21] M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide
to the Theory of NP-Completeness. W. H. Freeman & Co., 1979.
[22] H. Detmold, A. R. Dick, A. Van Den Hengel, A. Cichowski, R. Hill,
E. Kocadag, K. Falkner, and D. S. Munro, “Topology Estimation for
Thousand-Camera Surveillance Networks,” in Proc. 1st ACM/IEEE Int.
Conf. Distributed Smart Cameras, 2007, pp. 195–202.
[23] H. Detmold, A. R. Dick, A. Van Den Hengel, A. Cichowski, R. Hill,
E. Kocadag, Y. Yarom, K. Falkner, and D. S. Munro, “Estimating Cam-
era Overlap in Large and Growing Networks,” in Proc. 2nd ACM/IEEE
Int. Conf. Distributed Smart Cameras, 2008.