Sensor planning for range cameras via a coverage strength model


Jose Luis Alarcon Herrera, Aaron Mavrinac, and Xiang Chen
Abstract: A method for sensor planning based on a previously developed coverage strength model is presented. The
approach taken is known as generate-and-test: a feasible
solution is predefined and then tested using the coverage model.
The relationship between the resolution of the imaging system
and its performance is the key component to perform sensor
planning of range cameras. Experimental results are presented;
the inverse correlation between coverage performance and
measurement error demonstrates the usefulness of the model
in the sensor planning context.
I. INTRODUCTION

Sensor planning toward optimal camera placement is an important aspect of system integration in machine vision. The goal of sensor planning is to improve the performance of the vision system; performance is defined herein as the ability of the system to repeatedly complete a task under controlled conditions. Several methods have been proposed to solve this problem. Typically, a set of feasible camera configurations is defined along with some metric for performance; optimal camera placement is generally achieved by maximizing coverage. This is particularly useful for large multi-camera configurations. However, some machine vision tasks, such as industrial inspection, require monocular systems and rely more on an approach that takes the task's parameters into account in detail. As discussed in Mavrinac et al. [1], a global view of the system in which coverage is defined as a bivalent condition of visibility is not sufficient; points in the field of view can be fully covered, partially covered, or not covered at all. To express this vagueness, the model assigns coverage strength a value in the range [0, 1]. Our task of three-dimensional measurement based on
laser scanners is used primarily in industrial inspections
where the parameters involved in the scene are strictly
controlled (e.g., no external occlusion is allowed). Our
previously developed coverage model [2], [1] is well suited to
a generate-and-test approach. Our coverage metric has been
shown to closely reflect the task’s a posteriori performance
[2], [3], [1]. Currently there exists no feasible technique
for numerical optimization using this model; in this paper
we employ the generate-and-test approach to perform sensor
planning. The main purpose of the current brief is to test
the usefulness of the coverage strength model in the sensor
planning context. The experimental results are expected to provide a preliminary step toward optimal sensor placement; this direction is discussed further in Section VI.

This research was supported in part by the National Council of Science and Technology of Mexico (CONACyT) and by the Natural Sciences and Engineering Research Council of Canada (NSERC). The authors would like to acknowledge the support provided by Vista Solutions Inc.

J. L. Alarcon Herrera, A. Mavrinac and X. Chen are with the Department of Electrical & Computer Engineering, University of Windsor, 401 Sunset Ave., Windsor, Ontario, Canada, N9B 3P4.
In sensor planning and optimal camera placement, static
occlusion and dynamic occlusion present an issue for the
maximization of coverage; objects in the scene occlude
points of interest, thus preventing the cameras from imaging
the entire scene. Dynamic occlusion has been handled using
a probabilistic model by Mittal and Davis [4] and Chen and
Davis [5]. In the context of laser-based systems, the work
of Pito [6] also deals with occlusion, approaching the next-
best-view problem by focusing on minimizing occlusion.
As shown in the work of Scott et al. [7], maximizing coverage also involves achieving a certain degree of overlap
for the case of n-ocular tasks such as surface modeling and
reconstruction. In more recent work Scott [8] models the
laser scanner system in detail. Prieto et al. [9] give special
attention to the effects of the angle between the laser plane
and the optical axis of the camera. However, the authors do
not include the effects of focal length and aperture diameter
in the estimation of good camera placement.
Sensor planning requires a priori information of the system
such as camera parameters that allow the computation of
some performance metric. A performance metric is then used
to assign some meaningful value to a particular camera con-
figuration before it can be selected as a good configuration.
Ram et al. [10] developed a performance metric considering
such factors as direction of view and zoom. However, the
authors neglect distortion caused by perspective projection.
Erdem and Sclaroff [11] propose the use of a more realistic model for coverage. The work of González-Banos et al. [12] is more concerned with the accurate representation of performance. In a laser-based task, the authors parameterize visibility using conditions such as direction of view and range within the working distance of the camera. Other examples are found in the work of Angella et al. [13] and Hörster et al. [14].
The sensor planning literature shows different ways in
which coverage is modeled and parameterized; however,
most existing models are bivalent and do not always en-
capsulate all the parameters related to the overall description
of coverage. Some models are concerned only with direction
of view and zoom, such as that of Ram et al. [10]. Reed and Allen [15] provide an excellent example: working to solve the next-best-view problem, they consider not only visibility but resolution and direction as well. Their work is also an example of the generate-and-test approach.
This paper is organized as follows. In Section II, we give
an overview of the camera parameters and some concepts that
are relevant to our task. In Section III, we build the necessary
background and describe the coverage strength model: we
review the components of the model that account for the
various factors involved in the camera’s performance. Section
V describes the experimental setup and presents the results.
Finally, we present some concluding remarks and notes on
future work in Section VI.
II. CAMERA PARAMETERS

The model of a camera has two types of parameters:
intrinsic and extrinsic. The intrinsic parameters include the
focal length, the effective aperture diameter, the radial dis-
tortion coefficients, the physical pixel size, the sensor size
in pixels, and the pixel coordinates of the optical center.
The extrinsic parameters express the camera position and
orientation relative to a reference frame.
Most sensor planning research has proposed methods
and algorithms for finding good camera configurations by
choosing a solution space over the extrinsic parameters of the
camera (which can be continuous or discrete) and optimizing
the configuration. In this paper, we aim to modify not only the extrinsic parameters but also the intrinsic parameters through the use of a realistic coverage strength model (see Section III) that takes into account all of the aforementioned characteristics of the camera to achieve an accurate description of coverage.
A. Camera Calibration
A laser-based 3D imaging system is typically configured as shown in Figure 1. The two main characteristics of this configuration are the camera and the laser plane. Both the coverage strength model and the 3D measurement algorithm rely on the camera's parameters; thus, camera calibration is required.

Fig. 1. Typical Camera Setup
The calibration procedure comprises two stages. The first corrects for lens distortion: undistorted image coordinates (u', v') are calculated from the raw pixel coordinates (u, v) using Brown's lens distortion model [16],

u' = u + u_o(C_1 r^2 + C_2 r^4) + 2 C_3 u_o v_o + C_4 (r^2 + 2 u_o^2)
v' = v + v_o(C_1 r^2 + C_2 r^4) + 2 C_4 u_o v_o + C_3 (r^2 + 2 v_o^2)

where u_o = u - o_u and v_o = v - o_v, r^2 = u_o^2 + v_o^2, (o_u, o_v) are the pixel coordinates of the image projection of the optical center, and C_1 to C_4 are the lens distortion coefficients.
The second stage produces a homography H between the two-dimensional image plane and the two-dimensional laser plane, defined in homogeneous coordinates as a 3x3 matrix:

s [x_l, y_l, 1]^T = H [u', v', 1]^T

where s is a scale factor.
Mavrinac et al. [17] provide the derivation and implemen-
tation details of this calibration procedure.
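The two calibration stages can be sketched as follows. This is an illustrative sketch, not the implementation of [17]: the function names are ours, and the homography entries and distortion coefficients below are hypothetical values chosen only to exercise the code.

```python
import numpy as np

def undistort(u, v, center, C):
    """Correct raw pixel coordinates (u, v) with Brown's lens distortion
    model; center is the optical center (o_u, o_v) and C = (C1, C2, C3, C4)
    are the distortion coefficients."""
    o_u, o_v = center
    C1, C2, C3, C4 = C
    u_o, v_o = u - o_u, v - o_v        # coordinates relative to the center
    r2 = u_o ** 2 + v_o ** 2           # squared radial distance
    radial = C1 * r2 + C2 * r2 ** 2
    u_c = u + u_o * radial + 2 * C3 * u_o * v_o + C4 * (r2 + 2 * u_o ** 2)
    v_c = v + v_o * radial + 2 * C4 * u_o * v_o + C3 * (r2 + 2 * v_o ** 2)
    return u_c, v_c

def image_to_laser(u, v, H):
    """Map undistorted image coordinates to the laser plane through the
    3x3 homography H; dividing by the last component removes the scale
    factor s of the homogeneous coordinates."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]

# Hypothetical homography and coefficients, for illustration only.
H = np.array([[1.2, 0.0, -40.0],
              [0.1, 1.1, -30.0],
              [0.0, 0.0,   1.0]])
u, v = undistort(320.0, 240.0, center=(320.0, 240.0), C=(1e-7, 0.0, 0.0, 0.0))
x, y = image_to_laser(u, v, H)         # laser-plane coordinates
```

Note that a point at the optical center is left unchanged by the distortion correction, since u_o = v_o = 0 there.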
B. Measurement Resolution and Occlusion
In this paper we consider two types of resolution. The first is the optical resolution, the ability of the camera to capture in detail the object in the field of view; this is defined by the sensor's pixel size in micrometers together with the number of pixels needed to form a feature in the image. The second is the measurement resolution (also known as height resolution), which refers to the minimum change in the position of the laser line that can be detected by the camera along the direction of the height measurement.
camera placement we extend the coverage model to account
for measurement resolution. As will be made clear, the
addition of this factor yields a more accurate description
of coverage which is closely related to the a posteriori
performance of the task.
Sensor planning in laser-based tasks is directly related to the accuracy of the measurements and the completeness of the image; performance is the degree of accuracy and coverage that the system is able to achieve. The performance of laser scanner tasks is negatively affected by two types of occlusion: laser occlusion and camera occlusion [18]. The
first occurs when the laser is unable to illuminate a point on the object that needs to be visible from the camera; this is generally the case for non-convex shapes. The second takes
place when the camera is unable to image the scene due
to self-occlusion of the object of interest. Occlusion is not
addressed in this paper and is left as subject for future work.
III. COVERAGE STRENGTH MODEL

In previous work, Mavrinac et al. [2], [3], [1] developed a
coverage strength model which includes most of the camera’s
characteristics and properties; among these are the extrinsic
and intrinsic parameters as well as the optical properties
of the lens, the camera’s sensor and several intuitive task
parameters which will be described in this section.
A. General Model
The coverage strength model of a given camera system assigns to every point in the stimulus space a measure of coverage.
Definition 1: The three-dimensional directional space D^3 = R^3 x [0, pi] x [0, 2pi) consists of three-dimensional Euclidean space plus direction, with elements of the form (x, y, z, rho, eta).
Definition 2: A coverage strength model is a mapping C : D^3 -> [0, 1], for which C(p), for any p in D^3, is the strength of coverage at p.
Definition 3: A relevance model is a mapping R : D^3 -> [0, 1], for which R(p), for any p in D^3, is the minimum desired coverage strength, or coverage priority, at p.
We term p in D^3 a directional point. For convenience, we denote its spatial component p_s = (p_x, p_y) or p_s = (p_x, p_y, p_z) and its directional component p_d = (p_rho, p_eta), with rho in [0, pi] and eta in [0, 2pi).
The coverage performance of a sensor system is given by a metric m(C, R) evaluated on the discretized models, where the discretized coverage strength model is C sampled on a discrete grid of points in D^3, and the relevance model R is similarly discretized.
Here we detail the coverage strength model parametrization for cameras, which consists of four components: visibility, resolution, focus, and direction (angle of view). We omit most of the derivation, as it is covered in previous work [1]. Throughout, B_[0,1](x) = min(max(x, 0), 1) is a function that limits the value x to [0, 1].
The first component, CV, characterizes visibility. The
pinhole camera model is used to compute the angles of
the field of view of the camera. A task parameter γis
introduced to account for the partial coverage of non-point
features located near the boundaries of the field of view. γ
is measured in pixels and it reflects the expected size of the
feature’s neighborhood. The horizontal and vertical cross-
sections are given by
C_Vh(p) = B_[0,1]( min( p_x/p_z + sin(alpha_hl), sin(alpha_hr) - p_x/p_z ) / gamma_h )

C_Vv(p) = B_[0,1]( min( p_y/p_z + sin(alpha_vt), sin(alpha_vb) - p_y/p_z ) / gamma_v )

where alpha_hl and alpha_hr are the horizontal angles of view, alpha_vt and alpha_vb are the vertical angles of view of a rectilinearly projected image, and gamma_h and gamma_v are the horizontal and vertical offsets calculated from gamma (see Mavrinac et al. [1]). The complete C_V is given by

C_V(p) = min(C_Vh(p), C_Vv(p)) if p_z > 0, and C_V(p) = 0 otherwise.   (8)
The second component, CR, characterizes pixel resolution
(number of pixels per unit distance). The resolution is a func-
tion of the distance between a point and the principal point
along the optical axis. Two task parameters are introduced: R_1, the ideal pixel resolution, and R_2, the minimum resolution. The resolution component is given by

C_R(p) = B_[0,1]( (z_2 - p_z) / (z_2 - z_1) )   (9)

for R_1 > R_2, where the values of z_1 and z_2 are given by (10), substituting task parameters R_1 and R_2, respectively, for R:

z = (1/R) min( w / (2 sin(alpha_h/2)), h / (2 sin(alpha_v/2)) )   (10)

where w and h are the sensor width and height in pixels.
The third component, CF, characterizes focus (depth of
field). The task parameter cmax indicates the maximum blur
circle diameter that can be tolerated.
The component C_F is given by

C_F(p) = B_[0,1]( min( (p_z - z_n) / (z'_n - z_n), (z_f - p_z) / (z_f - z'_f) ) )   (11)

where z_n and z_f are the near and far limits of the depth of field as given by (12), substituting c_max for c; similarly, substituting c_min for c in (12) yields z'_n and z'_f:

z_{n,f} = A f z_S / ( A f +/- c (z_S - f) )   (12)

where A is the effective aperture diameter, f the focal length, and z_S the subject (focus) distance.
The fourth component, C_D, characterizes direction (angle of view). A point p is visible only if the camera lies in the half-space defined by the plane tangent to the surface at the point; the precise condition, expressed in terms of Theta(p), the angle between the feature direction p_d and the ray from p back to the camera, with r = sqrt(p_x^2 + p_y^2), is derived in [1]. The task parameters zeta_1 and zeta_2 are the ideal and maximum angles between the normal of a feature and the optical axis. The direction component is given by

C_D(p) = B_[0,1]( (Theta(p) - pi + zeta_2) / (zeta_2 - zeta_1) )

The full model is given by

C(p) = C_V(p) C_R(p) C_F(p) C_D(p)   (15)
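To make the parametrization concrete, the four components and their product can be sketched in Python as below. This is a minimal sketch of the ramp-and-clip structure, not the Adolphus implementation: the helper names, parameter bundling, and all numeric values are our own illustrative assumptions.

```python
import math

def clip01(x):
    """B_[0,1]: limit a value to the interval [0, 1]."""
    return min(max(x, 0.0), 1.0)

def C_V(p, a_hl, a_hr, a_vt, a_vb, gamma_h, gamma_v):
    """Visibility: ramps down near the field-of-view boundaries."""
    px, py, pz = p
    if pz <= 0:
        return 0.0
    c_h = clip01(min(px / pz + math.sin(a_hl), math.sin(a_hr) - px / pz) / gamma_h)
    c_v = clip01(min(py / pz + math.sin(a_vt), math.sin(a_vb) - py / pz) / gamma_v)
    return min(c_h, c_v)

def C_R(pz, z1, z2):
    """Resolution: full strength up to z1 (ideal), zero past z2 (minimum)."""
    return clip01((z2 - pz) / (z2 - z1))

def C_F(pz, zn, zf, zn_i, zf_i):
    """Focus: ramps between the c_max depth-of-field limits (zn, zf)
    and the ideal c_min limits (zn_i, zf_i)."""
    return clip01(min((pz - zn) / (zn_i - zn), (zf - pz) / (zf - zf_i)))

def C_D(theta, zeta1, zeta2):
    """Direction: ramps on Theta(p), the angle between the feature
    direction and the ray back to the camera."""
    return clip01((theta - math.pi + zeta2) / (zeta2 - zeta1))

def coverage(p, theta, params):
    """Full model C(p): the product of the four components."""
    return (C_V(p, *params['fov']) * C_R(p[2], *params['res']) *
            C_F(p[2], *params['focus']) * C_D(theta, *params['dir']))

# Illustrative parameter values (not those of the experiments).
params = {
    'fov':   (math.radians(20), math.radians(20),
              math.radians(20), math.radians(20), 0.05, 0.05),
    'res':   (0.8, 2.0),            # z1 (ideal), z2 (minimum)
    'focus': (0.5, 2.0, 0.7, 1.5),  # zn, zf, zn', zf'
    'dir':   (0.0, math.pi / 2),    # zeta1, zeta2
}
strength = coverage((0.0, 0.0, 1.0), math.pi, params)
```

For the sample point on the optical axis at depth 1.0, visibility, focus, and direction saturate at 1.0 and only the resolution ramp attenuates the product, illustrating how any single weak component drags down C(p).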
IV. SENSOR PLANNING

The following iterative procedure is an application of the coverage strength model; the objective is to select good camera configurations from a feasible solution space.
1. Based on the geometry of the scene, predefine a
discrete solution space: from an initial configuration,
iteratively change the camera parameters. For simplic-
ity these are categorized as position, orientation, and
intrinsic parameters.
2. Define the relevance model for the task. In this case the relevance model is a discretized subset of the laser plane within the operational field of view of the camera, in the same reference frame.
3. Select a camera configuration from the solution space.
4. Compute the coverage strength of the selected configuration.
5. Repeat steps 3 and 4 over the solution space and output the configuration with the highest coverage performance.
Moreover, if a change in the camera configuration is desired, such as a change in the optics (e.g., aperture or focal length), the model facilitates investigating the effect on the performance of the imaging system, eliminating the need to make any physical changes.
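The five-step procedure can be sketched as a simple search loop. The Config class, its toy strength function, and coverage_performance below are hypothetical placeholders standing in for the model of Section III and the discretized metric m(C, R); they are not the Adolphus API.

```python
import math

class Config:
    """A candidate camera configuration; here reduced to the single
    angle alpha (degrees) between the laser plane and the optical axis."""
    def __init__(self, alpha):
        self.alpha = alpha
    def strength(self, p):
        # Toy stand-in for C(p): strength grows with alpha, echoing the
        # observation that resolution improves as alpha approaches 90 deg.
        return math.sin(math.radians(self.alpha))

def coverage_performance(config, relevance):
    """Stand-in for m(C, R): relevance-weighted mean coverage strength
    over the discretized relevance model."""
    total = sum(r * config.strength(p) for p, r in relevance)
    return total / sum(r for _, r in relevance)

# Step 1: predefine a discrete solution space.
solution_space = [Config(alpha) for alpha in range(10, 90, 10)]
# Step 2: relevance model -- points of interest, desired strength 1.0.
relevance = [((x, 0.0, 1.0), 1.0) for x in (-0.1, 0.0, 0.1)]
# Steps 3-5: test every candidate and keep the best.
best = max(solution_space, key=lambda c: coverage_performance(c, relevance))
```

Because the solution space is predefined and finite, the loop is exhaustive rather than an optimization; this is exactly the generate-and-test character of the method.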
V. EXPERIMENTS

A. Apparatus
In the experiments, the camera used is the SICK-IVP
Ranger E industrial 3D camera with a laser line projector.
The camera and laser were mounted and calibrated using two
different calibration techniques: laser line calibration [17]
and full camera calibration [19].
Laser line calibration is used to produce the lookup table
required by the ranger to output calibrated images. Calibrated
images have pixel values given in millimeters with respect
to the reference frame defined during calibration.
Full calibration¹ is used to compute the camera's intrinsic and extrinsic parameters that are required to generate the coverage model. The laser line calibration generates a mapping from a two-dimensional plane to a two-dimensional plane; from this mapping, the three-dimensional pose of the camera cannot be estimated directly. Moreover, the current laser line calibration method does not compute the focal length; therefore, full calibration is required in addition to the look-up table generation. Lens distortion was corrected
in all the experiments.
With the system mounted and calibrated, several pictures of the target were taken using different camera-laser configurations. The system had to be recalibrated every time the camera and laser were rearranged. Different target positions were used in order to cover most of the field of view of the camera and thus not limit the results to a particular case; the accuracy of the measurements is not the same everywhere in the field of view, owing to perspective projection.

¹Performing two different types of calibration is only necessary for the experiments and is not normally required for sensor planning.
The target is the calibration object used for laser calibration; it has a series of triangles of known dimensions. The image processing software, developed in HALCON, takes as input the calibrated image generated using the look-up table available from calibration; the software then detects the triangles in the image and measures their height.
B. Software
In this experiment, we use our Adolphus² simulation software to compute the coverage performance for a particular
camera configuration. The model is parameterized using the
camera system. The intrinsic and extrinsic parameters used in
the model are those of the physical system. Most of the image
processing and calibration is performed using the HALCON
machine vision libraries [20].
The camera system was modeled as shown in the simulation example in Figure 2.
Fig. 2. Software Simulation
C. Task-Related Parametrization
Range cameras are very robust to defocus blur: when a profile is acquired by the camera and the laser line is extracted, the camera computes the center of gravity of the line, allowing for high accuracy even with an image that is out of focus. The focus parameter was therefore set to a relatively large value; blur circles of up to 1.0 mm are allowed.
The relevance model in this experiment is defined as the points of interest of the target, which are the crests and valleys of the triangular shape in the calibration target. The points of interest are directional points in D^3, with direction normal to the faces of the features themselves; in other words, the direction is parallel to the y-axis (see Figure 1).
The ideal angle for best resolution is zeta_1 = 0; the resolution of the camera increases as the angle alpha (see Figure 1) increases, until the optical axis of the camera becomes orthogonal to the laser plane [18]. The second parameter was selected as the angle at which the camera can no longer collect any useful information, so zeta_2 = pi/2.
²Adolphus is free software licensed under the GNU General Public License. Complete Python source code and documentation are available online.
The parameters gamma, R_1 and R_2 are measured in pixels. An estimated size of the features detected by the image processing software was selected as the value for gamma, so gamma = 6; similarly, R_1 = 5.22 and R_2 = 1.0 were selected as the ideal and minimum cutoff resolutions, respectively.
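For reference, the task parametrization described above can be gathered into a single configuration object; this is a sketch with our own key names, not an Adolphus data structure.

```python
import math

# Task parameters from the experiments (key names are our own).
task_params = {
    'c_max': 1.0,          # maximum tolerated blur circle diameter (mm)
    'gamma': 6,            # expected feature-neighborhood size (pixels)
    'R1': 5.22,            # ideal pixel resolution (pixels)
    'R2': 1.0,             # minimum cutoff pixel resolution (pixels)
    'zeta1': 0.0,          # ideal feature-normal/optical-axis angle (rad)
    'zeta2': math.pi / 2,  # maximum useful angle (rad)
}
```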
D. Results and Analysis
Using the measurement software, the data were compared with the ground truth to establish the performance of the physical system. The measured performance is then compared with the coverage strength. As an example, four configurations from the experiment data pool are shown in Table I.

TABLE I
Camera   alpha    Coverage Strength   Measurement Error (mm)
C1       53.88°   0.6122              0.1472
C2       51.18°   0.5706              0.4498
C3       34.46°   0.3275              0.8167
C4       16.91°   0.1859              0.9311

where alpha is the angle between the laser plane and the optical axis of the camera, as shown in Figure 1.
Fig. 3. Performance Correlation: measurement error vs. coverage performance
The Pearson correlation coefficient is calculated between the coverage metric and the measurement error; the correlation is r = -0.8508.
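The correlation can be reproduced for the four example configurations in Table I with a few lines of Python; note that the value over these four points alone differs from the r reported for the full experimental data pool.

```python
import math

# Values from Table I: coverage strength and measurement error, C1-C4.
coverage = [0.6122, 0.5706, 0.3275, 0.1859]
error    = [0.1472, 0.4498, 0.8167, 0.9311]

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(coverage, error)   # strongly negative: more coverage, less error
```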
The predicted performance of the system, measured through the coverage strength, is closely related to the performance of the task; it is clear that choosing camera configuration number one from the example in Table I will yield the most accurate results. Sensor planning is then possible by predefining a set of feasible camera configurations and then computing the coverage strength to select the one with the highest value.
VI. CONCLUSIONS AND FUTURE WORK

A. Conclusions
The sensor planning task for the case of visual sensors can be achieved through the selection of the best camera configuration based on the information provided by the coverage strength model. Moreover, the coverage strength model can be easily adapted according to the needs of the task: it has been shown that the model is flexible enough for a new kind of task, three-dimensional measurement, where the model was adapted to account for the height resolution.
B. Future Work
As described in Section II-B, laser scanners are highly affected by occlusion: the laser light is blocked by the object of interest, which is not known a priori. This can be seen as dynamic occlusion, which is hard to predict and include in the coverage metric. One way to approach this in future work is to develop a probabilistic model for occlusion. Another, more interesting subject is to find a suitable method for optimal sensor placement; exact solutions are not feasible because of the computational cost, as explained by Hörster et al. [14], so the challenge is to find the best approximation.
REFERENCES

[1] A. Mavrinac, X. Chen, and J. L. Alarcon-Herrera, "Coverage Strength Models for Multi-Camera Systems," IEEE Trans. Pattern Analysis and Machine Intelligence, submitted for publication.
[2] A. Mavrinac, J. L. Alarcon-Herrera, and X. Chen, "A Fuzzy Model for Coverage Evaluation of Cameras and Multi-Camera Networks," in Proc. 4th ACM/IEEE Intl. Conf. on Distributed Smart Cameras, 2010.
[3] ——, “Evaluating the Fuzzy Coverage Model for 3D Multi-Camera
Network Applications,” in Proc. Intl. Conf. on Intelligent Robotics and
Applications, 2010.
[4] A. Mittal and L. Davis, “A General Method for Sensor Planning in
Multi-Sensor Systems: Extension to Random Occlusion,” International
Journal of Computer Vision, vol. 76, pp. 31–52, 2008.
[5] X. Chen and J. Davis, “An Occlusion Metric for Selecting Robust
Camera Configurations,” Machine Vision and Applications, vol. 19,
pp. 217–222, 2008.
[6] R. Pito, “A Solution to the Next Best View Problem for Automated
Surface Acquisition,” Pattern Analysis and Machine Intelligence, IEEE
Transactions on, vol. 21, no. 10, pp. 1016–1030, 1999.
[7] W. Scott, G. Roth, and J.-F. Rivest, “View Planning With a Registration
Constraint,” in 3-D Digital Imaging and Modeling. Proceedings. Third
International Conference on, 2001, pp. 127–134.
[8] W. Scott, "Model-Based View Planning," Machine Vision and Applications, vol. 20, pp. 47–69, 2009.
[9] F. Prieto, R. Lepage, P. Boulanger, and T. Redarce, “A CAD-Based
3D Data Acquisition Strategy for Inspection,” Machine Vision and
Applications, vol. 15, pp. 76–91, 2003.
[10] S. Ram, K. R. Ramakrishnan, P. K. Atrey, V. K. Singh, and M. S.
Kankanhalli, “A Design Methodology for Selection and Placement of
Sensors in Multimedia Surveillance Systems,” in Proceedings of the
4th ACM International Workshop on Video Surveillance and Sensor
Networks, 2006, pp. 121–130.
[11] U. M. Erdem and S. Sclaroff, "Automated Camera Layout to Satisfy Task-Specific and Floor Plan-Specific Coverage Requirements," Computer Vision and Image Understanding, vol. 103, no. 3, pp. 156–169, 2006.
[12] H. González-Banos and J. C. Latombe, "A Randomized Art-Gallery Algorithm for Sensor Placement," in Proc. 17th Ann. Symp. on Computational Geometry, 2001, pp. 232–240.
[13] F. Angella, L. Reithler, and F. Gallesio, "Optimal Deployment of Cameras for Video Surveillance Systems," in Proc. IEEE Conf. on Advanced Video and Signal Based Surveillance, 2007, pp. 388–392.
[14] E. Hörster and R. Lienhart, "On the Optimal Placement of Multiple Visual Sensors," in Proc. 4th ACM Intl. Wkshp. on Video Surveillance and Sensor Networks, 2006, pp. 111–120.
[15] M. Reed and P. Allen, “Constraint-Based Sensor Planning for Scene
Modeling,” in Computational Intelligence in Robotics and Automation.
IEEE International Symposium on, 1999, pp. 131–136.
[16] D. C. Brown, “Decentering Distortion of Lenses,” Photogrammetric
Engineering, vol. 32, no. 3, pp. 444–462, 1966.
[17] A. Mavrinac, X. Chen, P. Denzinger, and M. Sirizzotti, “Calibration
of Dual Laser-Based Range Cameras for Reduced Occlusion in 3D
Imaging,” in Proc. IEEE/ASME Intl. Conf. on Advanced Intelligent
Mechatronics, 2010.
[18] Ranger E/D Reference Manual, SICK IVP, 2006.
[19] Z. Zhang, "Flexible Camera Calibration by Viewing a Plane from Unknown Orientations," in Computer Vision, The Proceedings of the Seventh IEEE International Conference on, vol. 1, 1999, pp. 666–673.
[20] HALCON. MVTec Software GmbH. [Online]. Available: http:
... For multi-camera deployment, a good survey can be found in [11] for research at the earlier stage. Some recent results have also been reported in the literature [12], [13], [15], [16] and in a survey [17]. Although some feasible or near-optimal approaches are reported, the research on this topic is far from mature in application to realistic scenarios, epically when 3-D models are involved [17]. ...
... Jiang et al. [23] consider the FOV constraint and the occlusion case in their weighted coverage model, however, resolution and focus are not included in this model. Mavrinac et al. [9] proposed a new coverage model to take into account almost all realistic constraints, which is validated and successfully applied in many scenarios, such as deployment of range cameras [12], real-time view selection for large-scale visual surveillance systems [9], and industrial inspection for 3-D objects [1]. ...
... In particular, to solve convex optimization problems formulated in (11), (12), and (17)−(28), we use the free convex optimization software package CVXOPT. 1 Actually, these convex optimization problems can also be solved by many other free or commercial software tools [29]. ...
Based on a convex optimization approach, we propose a new method of multi-camera deployment for visual coverage of a 3-D object surface. In particular, the optimal placement of a single camera is first formulated as translation and rotation convex optimization problems, respectively, over a set of covered triangle pieces on the target object. The convex optimization is recursively applied to expand the covered area of the single camera, with the initially covered triangle pieces being chosen along the object boundary for the first trial through a selection criterion. Then, the same optimization procedures are applied to place the next camera and thereafter. It is pointed out that our optimization approach guarantees that each camera is placed at the optimal pose in some sense for a group of triangles instead of a single piece. This feature, together with the selection criterion for initially covered triangles, reduces the number of operating cameras while still satisfying various constraint requirements such as resolution, field of view, blur, and occlusion. Both simulation and experimental results are presented to show superior performance of the proposed approach, comparing with the results from other existing methods.
... Scott [6] proposes an improved model making fewer assumptions about the object; his concept of verified measurability is similar to our bounded performance metric. Alarcon Herrera et al. [12] present an initial analysis of the viewpoint evaluation component using an early formulation of the coverage model of Section IV. ...
Full-text available
A semiautomatic model-based approach to the view planning problem for high-resolution active triangulation 3-D inspection systems is presented. First, a comprehensive, general, high-fidelity model of such systems is developed for the evaluation of configurations with respect to a model of task requirements, with a bounded scalar performance metric. The design process is analyzed, and the automated view planning problem is formulated only for the critically difficult aspects of design. A particle swarm optimization algorithm is applied to the latter portion, including probabilistic modeling of positioning error, using the performance metric as an objective function. The process leverages human strengths for the high-level design, refines low-level details mechanically, and provides an absolute measure of task-specific performance of the resulting design specification. The system model is validated, allowing for a reliable rapid design cycle entirely in simulation. Parameterization of the optimization algorithm is analyzed and explored empirically for performance.
... This model was developed in our previous work in Mavrinac et al. (2010). Validation of the model is provided in Alarcon-Herrera et al. (2011). For a detailed description of the coverage model and its components we refer the reader to the aforementioned sources. ...
Conference Paper
A method for PTZ camera re-conguration oriented toward tracking applications and surveillance systems is presented. The visual constraints are transformed into geometric constraints by a coverage model, and the nal PTZ congurations are averaged by a consensus algorithm. The approach is to design a distributed algorithm that enables cooperation between the cameras. Experimental results show successful camera handoff.
... It is desirable to eventually relax all three of these restrictions in further study on the topic. We also consider direct validation of the coverage model outside our scope, and direct the reader to our earlier validation work [Mavrinac et al. 2010a[Mavrinac et al. , 2010b[Mavrinac et al. , 2011. ...
Full-text available
The problem of online selection of monocular view sequences for an arbitrary task in a calibrated multi-camera network is investigated. An objective function for the quality of a view sequence is derived from a novel task-oriented, model-based instantaneous coverage quality criterion and a criterion of the smoothness of view transitions over time. The former is quantified by a priori information about the camera system, environment, and task generally available in the target application class. The latter is derived from qualitative definitions of undesirable transition effects. A scalable online algorithm with robust suboptimal performance is presented based on this objective function. Experimental results demonstrate the performance of the method—and therefore the criteria—as well as its robustness to several identified sources of nonsmoothness.
Conference Paper
Full-text available
Based on convex optimization techniques, we propose a new multi-camera deployment method for optimal visual coverage of a three-dimensional (3D) object surface. Different from existing methods, the optimal placement of a single camera is formulated as two convex optimization problems, given a set of covered triangle faces. Moreover, this idea is incorporated into a recursive framework to expand the covered area for each camera, wherein initially covered triangle faces are elegantly chosen using an importance criterion for the first recursion. By placing cameras one by one using the same method, the object surface is gradually covered by iteratively removing the covered partition of the previously deployed camera. Due to the usage of convex optimization, each camera is guaranteed to be placed at an optimal pose for a group of triangle faces other than a single one. This merit, together with the importance criterion-based selection of initially covered triangle faces, reduces the number of required cameras while satisfying various constraints including the resolution, field of view, focus and occlusion. Simulation results on two real 3D computer-aided design (CAD) models are presented to verify the effectiveness of the proposed approach.
Surveillance systems substantially enhance the security of a monitored area by providing the information security teams need to act promptly against threats or incidents.
An automatic method for solving the problem of view planning in high-resolution industrial inspection is presented. The method's goal is to maximize visual coverage and to minimize the number of cameras used for inspection. Using a CAD model of the object of interest, we define the scene points and the viewpoints, with the latter forming the solution space. The problem formulation accurately encapsulates all the vision- and task-related requirements of the design process for inspection systems. We use a graph-based approach to formulate a solution for the problem. The solution is implemented as a greedy algorithm, and the method is validated through experiments.
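The greedy selection described in this abstract can be sketched as the classic set-cover heuristic (an illustrative sketch only, not the authors' implementation; the per-viewpoint coverage sets are assumed to be precomputed from the CAD model):

```python
def greedy_view_planning(scene_points, viewpoints):
    """Greedy set cover: repeatedly pick the viewpoint that covers
    the most scene points not yet covered.

    scene_points: set of scene-point ids to be inspected
    viewpoints:   dict mapping viewpoint id -> set of scene-point ids it covers
    Returns the list of selected viewpoint ids.
    """
    uncovered = set(scene_points)
    selected = []
    while uncovered:
        # Best viewpoint = largest intersection with the uncovered set.
        best = max(viewpoints, key=lambda v: len(viewpoints[v] & uncovered))
        if not viewpoints[best] & uncovered:
            break  # remaining scene points are not visible from any viewpoint
        selected.append(best)
        uncovered -= viewpoints[best]
    return selected
```

The greedy heuristic does not guarantee a minimum camera count, but it carries the well-known logarithmic approximation bound for set cover.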
A method for PTZ camera reconfiguration is presented. The objective of this work is to improve target tracking and surveillance applications in unmanned vehicles. Pan, tilt, and zoom configurations are computed by transforming the visual constraints, given by a model of visual coverage, into geometric constraints. In the case of multiple targets, the camera configurations are computed by a consensus algorithm. The approach is defined in a multi-agent framework, allowing for scalability of the system and cooperation between the cameras. Experimental results show the performance of the approach.
A method for PTZ camera re-configuration oriented toward tracking applications and surveillance systems is presented. Pan, tilt, and zoom configurations are computed based on visual constraints given by a coverage model of the camera system. The visual constraints are transformed into geometric constraints by the coverage model, and the final pan, tilt, and zoom configurations are averaged by a consensus algorithm. The approach is defined in a multi-agent framework by designing a distributed algorithm that enables scalability of the system and cooperation between the cameras. Experimental results show successful camera handoff and target tracking.
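The consensus averaging referenced in these abstracts can be illustrated with a standard distributed-averaging iteration (a generic sketch, not the papers' exact algorithm; scalar states stand in for pan/tilt/zoom parameters):

```python
def consensus_average(values, neighbors, step=0.2, iters=200):
    """Discrete-time consensus: each agent repeatedly moves toward its
    neighbors' values; on a connected graph with a small enough step,
    all agents converge to the average of the initial values.

    values:    dict agent id -> initial scalar state
    neighbors: dict agent id -> list of neighboring agent ids
    """
    x = dict(values)
    for _ in range(iters):
        # Each agent updates using only locally available neighbor states.
        x = {i: xi + step * sum(x[j] - xi for j in neighbors[i])
             for i, xi in x.items()}
    return x
```

With symmetric neighbor relations the network average is preserved at every step, which is why the iteration settles on the mean rather than some other consensus value.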
This paper addresses the problem of how to select the optimal number of sensors and how to determine their placement in a given monitored area for multimedia surveillance systems. We propose to solve this problem by obtaining a novel performance metric, in terms of a probability measure, for accomplishing the task as a function of the set of sensors and their placement. This measure is then used to find the optimal set. The same measure can be used to analyze the degradation in the system's performance with respect to the failure of various sensors. We also build a surveillance system using the optimal set of sensors obtained based on the proposed design methodology. Experimental results show the effectiveness of the proposed design methodology in selecting the optimal set of sensors and their placement.
Systems utilizing multiple sensors are required in many domains. In this paper, we specifically concern ourselves with applications where dynamic objects appear randomly and the system is employed to obtain some user-specified characteristics of such objects. For such systems, we deal with the tasks of determining measures for evaluating their performance and of determining good sensor configurations that would maximize such measures for better system performance. We introduce a constraint in sensor planning that has not been addressed earlier: visibility in the presence of random occluding objects. Occlusion causes random loss of object capture from certain views and necessitates the use of other sensors that have visibility of this object. Two techniques are developed to analyze such visibility constraints: a probabilistic approach to determine "average" visibility rates and a deterministic approach to address worst-case scenarios. Apart from this constraint, other important constraints to be considered include image resolution, field of view, capture orientation, and algorithmic constraints such as stereo matching and background appearance. Integration of such constraints is performed via the development of a probabilistic framework that allows one to reason about different occlusion events and integrates different multi-view capture and visibility constraints in a natural way. Integration of the thus-obtained capture quality measure across the region of interest yields a measure for the effectiveness of a sensor configuration, and maximization of such a measure yields sensor configurations that are best suited for a given scenario. The approach can be customized for use in many multi-sensor applications, and our contribution is especially significant for those that involve randomly occurring objects capable of occluding each other. These include security systems for surveillance in public places, industrial automation, and traffic monitoring.
Several examples illustrate this versatility through the application of our approach to a diverse set of different, and sometimes multiple, system objectives.
A robust model-based calibration method for dual laser line active triangulation range cameras, with the goal of reducing camera occlusion via data fusion, is presented. The algorithm is split into two stages: line-based estimation of the lens distortion parameters in the individual cameras, and computation of the perspective transformation from each image to a common world frame in the laser plane using correspondences on a target with known geometry. Experimental results are presented, evaluating the accuracy of the calibration based on mean position error as well as the ability of the system to reduce camera occlusion.
This paper presents a model-based view-planning approach for automated object reconstruction or inspection using laser-scanning range sensors. Quality objectives and performance measures are defined, and the performance of the camera and positioning system is modeled statistically. A theoretical framework is presented. The method is applicable to a broad class of objects with reasonable geometry and reflectance properties. Sampling of the object surface and viewpoint space is characterized, including the effects of measurement noise and pose error. The technique generalizes to common range-camera and positioning-system designs.
This paper describes a placement strategy to compute a set of "good" locations where visual sensing will be most effective. Throughout this paper it is assumed that a polygonal 2-D map of a workspace is given as input. This polygonal map, also known as a floor plan or layout, is used to compute a set of locations where expensive sensing tasks (such as 3-D image acquisition) could be executed. A map-building robot, for example, can visit these locations in order to build a full 3-D model of the workspace. The sensor placement strategy relies on a randomized algorithm that solves a variant of the art-gallery problem [Oro87, She92, Urr97]: find the minimum set of guards inside a polygonal workspace from which the entire workspace boundary is visible. To better take into account the limitations of physical sensors, the algorithm computes a set of guards that satisfies incidence and range constraints. Although the computed set of guards is not guaranteed to have minimum size, the algorithm does compute, with high probability, a set whose size is at most a factor $O(\log(n+h)\cdot\log(c\log(n+h)))$ from the optimal size $c$, where $n$ is the number of edges in the input polygonal map and $h$ is the number of obstacles (holes) in its interior.
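The randomized strategy can be sketched as follows (an illustrative toy, not the paper's algorithm: candidate guards are drawn at random and then greedily selected by coverage; the `visible` predicate is where range and incidence constraints would be enforced):

```python
import random

def randomized_guard_placement(boundary_samples, sample_guard, visible,
                               n_candidates=200, seed=0):
    """Randomized art-gallery heuristic: draw candidate guard locations
    at random, then greedily keep whichever candidate sees the most
    not-yet-covered boundary samples.

    boundary_samples: points on the workspace boundary to be seen
    sample_guard:     callable(rng) -> a random candidate guard location
    visible:          callable(guard, point) -> bool; range and incidence
                      constraints would be enforced inside this predicate
    """
    rng = random.Random(seed)
    candidates = [sample_guard(rng) for _ in range(n_candidates)]
    # Precompute which boundary samples each candidate can see.
    coverage = [{j for j, p in enumerate(boundary_samples) if visible(g, p)}
                for g in candidates]
    uncovered = set(range(len(boundary_samples)))
    guards = []
    while uncovered:
        best = max(range(n_candidates),
                   key=lambda i: len(coverage[i] & uncovered))
        if not coverage[best] & uncovered:
            break  # remaining samples are invisible from every candidate
        guards.append(candidates[best])
        uncovered -= coverage[best]
    return guards
```

A 1-D toy suffices to exercise the sketch: boundary samples on a line, guards drawn uniformly, and visibility defined by a range limit.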
Many novel multimedia systems and applications use visual sensor arrays. An important issue in designing sensor arrays is the appropriate placement of the visual sensors such that they achieve a predefined goal. In this paper we focus on the placement with respect to maximizing coverage or achieving coverage at a certain resolution. We identify and consider four different problems: maximizing coverage subject to a given number of cameras (a) or a maximum total price of the sensor array (b), optimizing camera poses given fixed locations (c), and minimizing the cost of a sensor array given a minimally required percentage of coverage (d). To solve these problems, we propose different algorithms. Our approaches can be subdivided into algorithms which give a global optimum solution and heuristics which solve the problem within reasonable time and memory consumption at the cost of not necessarily determining the global optimum. We also present a user interface to enter and edit the spaces under analysis, the optimization problems, as well as the other setup parameters. The different algorithms are experimentally evaluated and results are presented. The results show that the algorithms work well and are suited for different practical applications. For the final paper it is planned to have the user interface running as a web service.
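Problem (a) above, maximizing coverage subject to a given number of cameras, admits a small brute-force illustration over a discretized space (a sketch for intuition only; the paper's exact algorithms and heuristics are built to scale far beyond this):

```python
from itertools import combinations

def best_k_cameras(candidate_coverage, k):
    """Exhaustive solution to problem (a): maximize the covered area
    using exactly k cameras, by checking every k-subset of candidates.

    candidate_coverage: dict camera id -> set of covered grid cells
    Returns (chosen camera ids, union of their covered cells).
    Exponential in k, so only usable for tiny instances.
    """
    best_set, best_cov = (), set()
    for subset in combinations(candidate_coverage, k):
        cov = set().union(*(candidate_coverage[c] for c in subset))
        if len(cov) > len(best_cov):
            best_set, best_cov = subset, cov
    return best_set, best_cov
```

Replacing the exhaustive loop with a greedy or sampling-based selection is the kind of trade-off the abstract's heuristics make: much faster, but without the global-optimum guarantee.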
Vision-based tracking systems for surveillance and motion capture rely on a set of cameras to sense the environment. The exact placement or configuration of these cameras can have a profound effect on the quality of tracking which is achievable. Although several factors contribute, occlusion due to moving objects within the scene itself is often the dominant source of tracking error. This work introduces a configuration quality metric based on the likelihood of dynamic occlusion. Since the exact geometry of occluders cannot be known a priori, we use a probabilistic model of occlusion. This model is extensively evaluated experimentally using hundreds of different camera configurations and found to correlate very closely with the actual probability of feature occlusion.
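A probabilistic occlusion model of this kind can be illustrated with a small Monte Carlo estimate (a toy 2-D sketch under assumed disc-shaped occluders placed uniformly at random, not the metric proposed in the paper):

```python
import random

def seg_point_dist(a, b, p):
    """Distance from point p to line segment a-b in the plane."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))  # clamp projection onto the segment
    cx, cy = ax + t * dx, ay + t * dy
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def occlusion_probability(camera, target, n_occluders, radius,
                          area=((0.0, 10.0), (0.0, 10.0)),
                          trials=2000, seed=0):
    """Monte Carlo estimate of the probability that the camera-to-target
    line of sight is blocked by uniformly placed disc-shaped occluders."""
    rng = random.Random(seed)
    (x0, x1), (y0, y1) = area
    blocked = 0
    for _ in range(trials):
        occluders = [(rng.uniform(x0, x1), rng.uniform(y0, y1))
                     for _ in range(n_occluders)]
        # The ray is blocked if any occluder disc intersects the segment.
        if any(seg_point_dist(camera, target, o) <= radius for o in occluders):
            blocked += 1
    return blocked / trials
```

Averaging such per-point estimates over the tracked region, and over candidate camera poses, yields a configuration-quality score in the same spirit as the abstract's metric.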