Calibration of Dual Laser-Based Range Cameras
for Reduced Occlusion in 3D Imaging
Aaron Mavrinac, Xiang Chen, Peter Denzinger, and Michael Sirizzotti
Abstract: A robust model-based calibration method for dual
laser line active triangulation range cameras, with the goal of
reducing camera occlusion via data fusion, is presented. The
algorithm is split into two stages: line-based estimation of the
lens distortion parameters in the individual cameras, and com-
putation of the perspective transformation from each image to a
common world frame in the laser plane using correspondences
on a target with known geometry. Experimental results are
presented, evaluating the accuracy of the calibration based on
mean position error as well as the ability of the system to reduce
camera occlusion.
I. INTRODUCTION
Active 3D vision is a popular family of methods for
obtaining robust and accurate three-dimensional digitizations
of real objects [1], [2]. One common paradigm, known as
laser line triangulation, uses a single camera to view a laser
line projected over the surface of an object at some angular
offset from its principal axis, processing the image to obtain
a cross-section profile of the object; this process can be
repeated, moving the object in small increments normal to
the laser plane, to yield a full surface scan of the object.
A major problem with these devices, particularly in inspec-
tion and metrology applications, is the occlusion of portions
of the target object, resulting in incomplete 3D data. Two
types of occlusion may occur: laser occlusion, in which the
laser is unable to illuminate an object point visible from
the camera, and camera occlusion, in which the camera is
unable to image an object point illuminated by the laser.
Such issues are typically overcome by performing multiple
scans or employing more complex systems, imposing a large
amount of overhead.
Camera occlusion occurs in any case where a portion of
the target surface faces away from the camera at a greater
angle from the horizontal than the camera itself (about the
x axis, in our convention described in Section II-A). Thus,
it is possible to mitigate this by adding a second camera
to the system at some angle on the opposite side of the
laser plane. In practice, most surface portions occluded from
one direction are visible from the other, with the occlusion
typically being caused by a height discontinuity of some sort;
eliminating this therefore usually yields nearly complete 3D
data. We seek to obtain a combined range image with all
information which would otherwise be available from two
separate scans in opposite orientations.

This research was supported by the MITACS ACCELERATE program
and the Ontario Centres of Excellence Interact initiative in collaboration
between Vista Solutions Inc. and the University of Windsor.
A. Mavrinac and X. Chen are with the Department of Electrical &
Computer Engineering, University of Windsor, 401 Sunset Ave., Windsor,
Ontario, Canada, N9B 3P4. {mavrin1,xchen}@uwindsor.ca
P. Denzinger and M. Sirizzotti are with Vista Solutions,
2835 Kew Dr., Unit #1, Windsor, Ontario, Canada, N8T 3B7.
{pdenzinger,msirizzotti}@vistasolutions.ca
The challenge, then, is to combine the data from both
sources in such a way that existing processes (e.g. for
inspection or metrology) can be applied to the more complete
data unmodified; in other words, the system should be a
drop-in replacement for the single-camera equivalent. We
present here a robust model-based calibration method for
two laser-based range cameras which allows this data to be
fused into a single 3D point cloud or range image in real
time.
Although model-based calibration of active triangulation
camera systems is essentially the same problem as standard
camera calibration, some techniques taking advantage of
the specifics of laser line triangulation have been proposed.
Reid [3] presents a method for estimating the projective
homography with the laser plane using correspondences
between the image and a set of orthogonal planes of known
geometry in the scene. Jokinen [4] presents an area-based
matching approach in which multiple profile maps from
different viewpoints are registered to refine an initial target-
based calibration.
Departing from model-based calibration, Trucco et al. [5]
present a direct calibration method which interpolates a
lookup table for the entire field of view based on a target of
known geometry, and thus implicitly models all intermediate
parameters; this is further explored in [6].
Vilac¸a et al. [7] present a complete calibration method
for two laser-based range cameras, also with the goal in
mind of reducing camera occlusion. It is similar to our
method in that it constrains lens distortion correction and
the perspective homography to the laser plane, a valid
simplification over traditional camera calibration, since sub-
sequent measurements are also constrained there. However,
our approach uses the range data directly for calibration,
which allows for implicit constraint of calibration to the
laser plane, higher accuracy (if range values can be obtained
with subpixel accuracy), a more direct line-based process for
lens distortion correction, and the use of a more practical
calibration apparatus.
In the general case, combining range data from multiple
sources is often achieved via registration algorithms (Salvi
et al. [8] present an excellent overview). While this approach
is well-studied and its various algorithms can be applied in a
diverse range of situations, the calibration approach has clear
advantages: it is completely unaffected by incomplete over-
lap, which in contrast causes severe performance degradation
or imposes additional overhead in registration algorithms;
also, a pre-computed lookup table is far less computationally
expensive than iterative registration, and allows for more
exact results. Laser-based range cameras conveniently lend
themselves to such calibration.
The remainder of this paper is organized as follows. Section II
introduces some concepts, conventions, and notation used
subsequently. The proposed calibration method is detailed in
Section III. Experimental results are presented in Section IV.
Finally, some concluding remarks are given in Section V.
II. DEFINITIONS
A. Geometry
The plane through which the laser line is projected is
defined as the xz plane in the world coordinate system
(with x horizontal and z vertical), and is termed the laser
plane. The direction a target object moves when performing
a scan is termed the transport direction; this must, of course,
have a y component. World coordinates are assumed to be
defined in some real measurement unit; we use millimetres
herein.
We assume a roughly symmetric camera configuration
(Figure 1), in which the transport direction is positive-y
(normal to the laser plane), and the cameras are placed on
opposite sides of the laser plane. Other configurations are
possible, and our method can be applied to these as well
with minimal modification.
Fig. 1. Symmetric Camera Configuration
The raw discrete two-dimensional coordinates of the data
from the camera, denoted $(u_r, v_r)$, lie in the sensor plane.
A continuous image plane, with coordinates denoted $(u, v)$,
is defined to describe corrected (or ideal) data points.
B. Range Data
We assume that the cameras in question are already able
to perform basic laser line triangulation, either on-board or
in software on a host computer, using well-studied methods
(see Section IV-A for the specifics of our experimental setup
and imaging hardware).
The range data generated by the camera from a single
image is termed a profile, and consists of an ordered set of
height values, one per camera sensor column, corresponding
to the $z$ values of object points in the laser plane. Profile
elements are points in the sensor plane, with column index
$u_r$ and height value $v_r$ (height values may be interpolated
and are thus not necessarily actual sensor row indices). An
ordered set of profiles, ostensibly taken at regular intervals
along the transport direction, is termed a scan.
Fig. 2. Range Data
III. CALIBRATION METHOD
Our calibration method is divided into two stages: correc-
tion for lens distortion and determination of a perspective
mapping from the image plane to the laser plane.
A. Lens Distortion
According to Brown's model of lens distortion [9], image
plane coordinates $(u, v)$ are computed from sensor plane
(raw pixel) coordinates $(u_r, v_r)$ as follows:

$u = u_r + u_0(K_1 r^2 + K_2 r^4 + \ldots) + \left[ P_1(r^2 + 2u_0^2) + 2P_2 u_0 v_0 \right](1 + P_3 r^2 + \ldots)$   (1)

$v = v_r + v_0(K_1 r^2 + K_2 r^4 + \ldots) + \left[ P_2(r^2 + 2v_0^2) + 2P_1 u_0 v_0 \right](1 + P_3 r^2 + \ldots)$   (2)

where $u_0 = u_r - o_u$, $v_0 = v_r - o_v$, and $r = \sqrt{u_0^2 + v_0^2}$;
$(o_u, o_v)$ is the optical center on the sensor plane, $K_i$
are radial distortion coefficients, and $P_i$ are tangential (or
decentering) distortion coefficients. For practical purposes,
we limit our model to coefficients $K_1$, $K_2$, $P_1$, and $P_2$;
experiments have shown that higher-order coefficients are
relatively insignificant in most cases [10].
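As a concrete illustration, the truncated model reduces to a few lines of code. The following is a minimal C sketch (the struct layout and names are ours, not the paper's actual implementation) mapping raw sensor coordinates to corrected image plane coordinates per Equations 1 and 2 with $P_3$ and higher-order terms dropped:

/* Sketch of the truncated Brown model (Equations 1 and 2).
 * Type and function names are illustrative. */
typedef struct {
    double k1, k2;   /* radial coefficients K1, K2 */
    double p1, p2;   /* tangential coefficients P1, P2 */
    double ou, ov;   /* optical center (o_u, o_v) */
} distortion_t;

static void undistort(const distortion_t *d, double ur, double vr,
                      double *u, double *v)
{
    double u0 = ur - d->ou;
    double v0 = vr - d->ov;
    double r2 = u0 * u0 + v0 * v0;               /* r^2 */
    double radial = d->k1 * r2 + d->k2 * r2 * r2;
    *u = ur + u0 * radial
            + d->p1 * (r2 + 2.0 * u0 * u0) + 2.0 * d->p2 * u0 * v0;
    *v = vr + v0 * radial
            + d->p2 * (r2 + 2.0 * v0 * v0) + 2.0 * d->p1 * u0 * v0;
}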
The parameters can be estimated by exploiting the well-
known fact that, in the absence of distortion, straight lines in
the three-dimensional scene map to straight lines in the two-
dimensional image [11], [12]. Line-based correction lends
itself particularly well to our case, since we have a readily-
available source of straight lines in the scene constrained to
the plane of interest.
1) Line Extraction: Line point sets are obtained by taking
a profile of any flat object. The raw profile may contain
extraneous points; these are eliminated using a two-step
process: first, all points with height below a certain threshold
are removed; second, the remaining points are fit to a linear
model using RANSAC [13]. The linear model takes the form:

$v = \alpha u + \beta$   (3)

with the origin $(o_u, o_v)$ at the optical center (using the sensor
center as an initial guess). Since, clearly, we cannot yet
obtain $u$ and $v$, we substitute the raw values $u_r$ and $v_r$,
respectively, assuming that lens distortion is negligible for
the purpose of line extraction. The RANSAC consensus set
forms the final line point set, and the model parameters $\alpha$
and $\beta$ are retained to initialize the optimization model in the
following step.
A number of lines, well-distributed over the field of view,
should be profiled. Although not necessary, it is convenient
to do so with both cameras simultaneously.
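To make the extraction step concrete, the following C sketch implements the threshold-then-RANSAC scheme described above, fitting $v = \alpha u + \beta$ to a thresholded profile. The iteration count and inlier tolerance are assumptions, and rand() stands in for the GSL random number generation used in the actual implementation:

/* Sketch of RANSAC line fitting on a thresholded profile.
 * Names and the inlier test are illustrative. */
#include <stdlib.h>
#include <math.h>

typedef struct { double u, v; } point_t;

/* Fit a line through two points; returns 0 on degenerate input. */
static int line_from_pair(point_t a, point_t b, double *alpha, double *beta)
{
    if (fabs(b.u - a.u) < 1e-9) return 0;
    *alpha = (b.v - a.v) / (b.u - a.u);
    *beta = a.v - *alpha * a.u;
    return 1;
}

/* Returns the inlier count of the best model; marks the consensus set
 * in 'inliers' (caller-allocated, size n). */
int ransac_line(const point_t *pts, int n, int iters, double tol,
                double *alpha, double *beta, int *inliers)
{
    int best = 0;
    for (int i = 0; i < iters; i++) {
        double a, b;
        point_t p1 = pts[rand() % n], p2 = pts[rand() % n];
        if (!line_from_pair(p1, p2, &a, &b)) continue;
        int count = 0;
        for (int k = 0; k < n; k++)
            if (fabs(a * pts[k].u + b - pts[k].v) < tol) count++;
        if (count > best) {
            best = count;
            *alpha = a; *beta = b;
            for (int k = 0; k < n; k++)
                inliers[k] = (fabs(a * pts[k].u + b - pts[k].v) < tol);
        }
    }
    return best;
}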
2) Parameter Optimization: Given a set of $M$ line point
sets, each composed of $N_m$ points, the objective is to find the
distortion parameters such that the total deviation in the line
equations is minimized [11]. This is achieved by minimizing
$F$, the sum of squared distances from each undistorted point
$(u, v)$ to its corresponding line:

$f = \alpha_m u - v + \beta_m$   (4)

$F = \sum_{m=1}^{M} \sum_{k=1}^{N_m} f^2$   (5)

where $u$ and $v$ are computed according to Equations 1
and 2, respectively. The set of $2M + 6$ design variables
is $\{K_1, K_2, P_1, P_2, o_u, o_v, \alpha_1 \ldots \alpha_M, \beta_1 \ldots \beta_M\}$. The
distortion parameters ($K_i$ and $P_i$) are initialized to zero, $(o_u, o_v)$
is initialized to the sensor center, and the line parameters ($\alpha_m$
and $\beta_m$) are initialized from the RANSAC model parameters
from the extraction step.
A solution can be found numerically using the Levenberg-
Marquardt algorithm [14] for nonlinear optimization (the
authors of [11] use a simpler gradient descent method,
but note problems with convergence to local minima). This
process requires the Jacobian matrix, consisting of the partial
derivatives of $f$ with respect to each parameter.
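For illustration, the objective of Equation 5 over the packed design vector might be evaluated as in the following sketch, reusing point_t, distortion_t, and undistort() from the sketches above; the parameter packing order is an assumption. A Levenberg-Marquardt solver such as GSL's consumes the individual residuals $f$ (and their partial derivatives) rather than this scalar sum, but the evaluation logic is the same:

/* Sketch of Equation 5 over the packed design vector
 * x = {K1, K2, P1, P2, ou, ov, alpha_1..alpha_M, beta_1..beta_M}. */
double objective(const double *x, const point_t *const *lines,
                 const int *counts, int m_lines)
{
    distortion_t d = { x[0], x[1], x[2], x[3], x[4], x[5] };
    double F = 0.0;
    for (int m = 0; m < m_lines; m++) {
        double alpha = x[6 + m];
        double beta = x[6 + m_lines + m];
        for (int k = 0; k < counts[m]; k++) {
            double u, v;
            undistort(&d, lines[m][k].u, lines[m][k].v, &u, &v);
            double f = alpha * u - v + beta;   /* Equation 4 */
            F += f * f;
        }
    }
    return F;
}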
B. Perspective Mapping
1) Homography: A homography between the two-dimensional
image plane and the two-dimensional laser plane
is defined by a $3 \times 3$ matrix $H$ as:

$\lambda \begin{bmatrix} x \\ z \\ 1 \end{bmatrix} = H \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}$   (6)

where $\lambda$ is a scale factor.
Given a set of laser plane point coordinates and their
corresponding image plane coordinates, this homography can
be solved linearly. A point correspondence pair $(x, z) \leftrightarrow (u, v)$ yields two linear equations:

$x(h_{31} u + h_{32} v + h_{33}) - (h_{11} u + h_{12} v + h_{13}) = 0$   (7)

$z(h_{31} u + h_{32} v + h_{33}) - (h_{21} u + h_{22} v + h_{23}) = 0$   (8)
At least five such point correspondences are required to
find H. An optimal solution to the resulting overdetermined
linear system can be found numerically using singular value
decomposition.
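A sketch of this linear solution in C, using singular value decomposition from the GNU Scientific Library (the library named in Section IV-A.3; the function and argument names here are ours): each of the $n \geq 5$ correspondences contributes the two rows of Equations 7 and 8 to a $2n \times 9$ homogeneous system $Ah = 0$, whose solution is the right singular vector for the smallest singular value.

/* Sketch of the DLT solution for H via SVD. */
#include <gsl/gsl_linalg.h>

void solve_homography(const double *u, const double *v,
                      const double *x, const double *z,
                      int n, double H[3][3])
{
    gsl_matrix *A = gsl_matrix_calloc(2 * n, 9);
    gsl_matrix *V = gsl_matrix_alloc(9, 9);
    gsl_vector *S = gsl_vector_alloc(9);
    gsl_vector *work = gsl_vector_alloc(9);

    for (int i = 0; i < n; i++) {
        /* x*(h31 u + h32 v + h33) - (h11 u + h12 v + h13) = 0 */
        gsl_matrix_set(A, 2*i, 0, -u[i]);
        gsl_matrix_set(A, 2*i, 1, -v[i]);
        gsl_matrix_set(A, 2*i, 2, -1.0);
        gsl_matrix_set(A, 2*i, 6, x[i] * u[i]);
        gsl_matrix_set(A, 2*i, 7, x[i] * v[i]);
        gsl_matrix_set(A, 2*i, 8, x[i]);
        /* z*(h31 u + h32 v + h33) - (h21 u + h22 v + h23) = 0 */
        gsl_matrix_set(A, 2*i+1, 3, -u[i]);
        gsl_matrix_set(A, 2*i+1, 4, -v[i]);
        gsl_matrix_set(A, 2*i+1, 5, -1.0);
        gsl_matrix_set(A, 2*i+1, 6, z[i] * u[i]);
        gsl_matrix_set(A, 2*i+1, 7, z[i] * v[i]);
        gsl_matrix_set(A, 2*i+1, 8, z[i]);
    }

    gsl_linalg_SV_decomp(A, V, S, work);   /* A is overwritten with U */

    /* Singular values are in descending order: the solution is the
     * last column of V, reshaped into the 3x3 matrix H. */
    for (int r = 0; r < 3; r++)
        for (int c = 0; c < 3; c++)
            H[r][c] = gsl_matrix_get(V, 3*r + c, 8);

    gsl_matrix_free(A); gsl_matrix_free(V);
    gsl_vector_free(S); gsl_vector_free(work);
}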
2) Calibration Target: Point correspondences are ob-
tained by taking a single profile of a calibration target of
known structure. The target should have a number (five or
more) of precise, well-distributed, non-collinear points, such
as sharp corners, which can be easily localized in the image
(see Section IV-A.2 for our design and a brief description of
our point detection method). The detected image plane point
$(u, v)$ is associated with the known laser plane point $(x, z)$,
defined in units of length relative to a common origin.
The profile is taken by both cameras simultaneously, so
that their independent homographies map to a common
coordinate system.
IV. EXPERIMENTAL RESULTS
A. Apparatus
1) Camera System: All experiments were performed us-
ing two SICK-IVP Ranger D industrial 3D smart cameras
with a single laser line source. These were mounted in a fixed
configuration similar to Figure 1 above a small conveyor belt,
with an encoder and photoswitch for controlling the start and
transport direction resolution of scans. Figure 3 shows the
equipment during the calibration procedure.
Fig. 3. Equipment Configuration
2) Calibration Target: We employ a calibration target
with one flat side (for the lens distortion correction stage)
and one side with a number of right-angled steps (for the
perspective mapping stage), as seen in Figure 3. Our target
was manufactured to a tolerance of 0.1 mm. This is attached
to a robotic arm to automate the positioning of the target.
Although accurate positioning is not required, rotation of the
target about the y and z world coordinate axes introduces
error and should be avoided.
Nine line profiles on the flat side are taken for use in
the lens distortion step, in a pattern spanning the field of
view. Figure 4 shows a composite of data from all nine scans
for one camera; note the significant radial distortion prior to
correction.
Point detection on the stepped side is performed by finding
the middle (lowest) step, then searching outwards for the
horizontal lines of each step, using RANSAC to extrapolate
the precise corner positions. Figure 5 shows a plot of a
typical set of results, with squares indicating the estimated
corner position.
Fig. 4. Line Profile Pattern

Fig. 5. Typical Point Detection Result
3) Software: The software implementation is developed in
ANSI C and compiled with GCC using the MinGW tools.
The SICK iCon API is used to interface with the range cam-
eras over Gigabit Ethernet. For Levenberg-Marquardt opti-
mization, singular value decomposition, and random number
generation for RANSAC, the GNU Scientific Library [15]
is employed. Range images are output in PGM (P2 ASCII)
format.
B. Accuracy
The accuracy of the sensor calibration is tested by mea-
suring a large number of points of known geometry relative
to a common origin. The mean error is computed separately
for each camera, yielding the calibration accuracy for mea-
surements on each side, and then the mean deviation for each
point from one camera to the other is computed, yielding the
relative accuracy of the pair.
The sensor in the Ranger D camera has 1536 columns and
512 rows. To interpolate the impact position of the laser line
on the sensor, a built-in algorithm using a center-of-gravity
method with a typical resolution of 1/10 pixel is employed;
results are returned in terms of 1/16 pixel, yielding an
effective sensor size of 1536×8192. Our calibration software
generates a lookup table between these sensor coordinates
and world coordinates in the laser plane, by mapping each
discrete coordinate through the distortion model and the
perspective homography.
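A sketch of the table generation loop, assuming the undistort() routine and homography matrix from the earlier sketches; the array sizes follow the 1/16-pixel effective sensor resolution described above, and the names and memory layout are illustrative (the full tables are large, so a real implementation might store floats):

/* Sketch of lookup table generation: every discrete sensor coordinate
 * is passed through the distortion model and the homography, yielding
 * world (x, z) in the laser plane. */
#define COLS 1536
#define ROWS 8192   /* 512 rows x 16 subpixel steps */

void build_lut(const distortion_t *d, double H[3][3],
               double (*lut_x)[COLS], double (*lut_z)[COLS])
{
    for (int row = 0; row < ROWS; row++) {
        for (int col = 0; col < COLS; col++) {
            double u, v;
            undistort(d, (double)col, row / 16.0, &u, &v);
            double w = H[2][0] * u + H[2][1] * v + H[2][2];  /* lambda */
            lut_x[row][col] = (H[0][0] * u + H[0][1] * v + H[0][2]) / w;
            lut_z[row][col] = (H[1][0] * u + H[1][1] * v + H[1][2]) / w;
        }
    }
}

At run time, each profile element then indexes the tables directly by column and subpixel height, so no per-point arithmetic is needed during measurement.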
For our test case with a field of view 150 mm high by
200 mm wide and the cameras at an angle of 45° to the
laser plane, the mean absolute position errors for points in
the individual cameras were measured at 0.1007 mm for the
left and 0.0968 mm for the right (on the same order as the
manufacturing tolerance for our calibration target), while the
mean relative position error between the two cameras was
measured at 0.2123 mm (note that the cameras estimate the
laser plane origin independently, hence the larger relative
error). This corresponds to subpixel accuracy in our output
images, which are sampled at 0.5 mm per pixel. Adjusting for
the relative size of field of view and camera resolution, these
results compare favorably with those of Vilaça et al. [7].
C. Occlusion Reduction
Our measurement software generates one row of a range
image from a rectified profile: the raw profiles from each
camera are transformed through their respective lookup
tables, combined, and then binned and averaged according
to a given field of view width and resolution. Where no
data is available, a value of zero is assigned. A full range
image is generated from a scan, with the aforementioned
resolution typically equal to the spacing between profiles
in the scan.
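A minimal sketch of the binning-and-averaging step for one output row, assuming the two cameras' profiles have already been mapped to world $(x, z)$ points through their lookup tables and concatenated; names and parameterization are illustrative:

/* Sketch of fusing rectified profile points into one range image row:
 * points are binned by x and averaged; empty bins stay zero. */
#include <stdlib.h>

void fuse_row(const double *x, const double *z, int n_points,
              double x_min, double res, double *row, int width)
{
    int *counts = calloc(width, sizeof(int));
    for (int j = 0; j < width; j++) row[j] = 0.0;
    for (int i = 0; i < n_points; i++) {
        int bin = (int)((x[i] - x_min) / res);
        if (bin < 0 || bin >= width) continue;  /* outside field of view */
        row[bin] += z[i];
        counts[bin]++;
    }
    for (int j = 0; j < width; j++)
        if (counts[j] > 0) row[j] /= counts[j]; /* average; else zero */
    free(counts);
}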
As an approximate quantification of camera occlusion, we
define a closed contour of the scanned object in the range
image by segmentation, and consider the proportion of pixels
within this contour with value zero (black pixels), expressed as
a percentage. We compare the occlusion in the left and right
range images (that is, the range images generated using only
data from the left and right cameras, respectively) to that in
the combined image for a number of objects, with results
shown in Table I.
TABLE I
OCCLUSION REDUCTION FOR VARIOUS OBJECTS

Object              Occl. Left   Occl. Right   Occl. Combined
Connecting Rod      8.35%        8.20%         1.22%
Pump Inserts        3.64%        3.80%         0.11%
Transmission Gear   3.10%        3.28%         0.10%
Subfloor Panel      8.37%        14.91%        0.82%
Bearing Collar      16.48%       14.62%        2.85%
Toy Bricks          17.14%       13.49%        2.49%
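The metric itself is a straightforward count; a minimal sketch, assuming a segmentation mask computed elsewhere:

/* Sketch of the occlusion metric: the fraction of zero-valued (black)
 * pixels inside the segmented object contour, as a percentage. */
double occlusion_percent(const double *image, const unsigned char *mask,
                         int n_pixels)
{
    int inside = 0, black = 0;
    for (int i = 0; i < n_pixels; i++) {
        if (!mask[i]) continue;        /* outside the object contour */
        inside++;
        if (image[i] == 0.0) black++;  /* no range data here */
    }
    return inside ? 100.0 * black / inside : 0.0;
}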
Figures 6 and 7 show the calibrated images from the left
and right cameras as well as the combined image for two
typical objects. The effect of camera occlusion can clearly
be seen in the left and right images: black regions near raised
edges show where no data exists. By contrast, these effects
are almost completely absent from the combined images.
These results highlight the problem faced in many appli-
cations as well as the impact of the solution. Regardless of
how objects are oriented, with single-camera scans, there
is in the general case some degree of camera occlusion.
When the images from two cameras are combined, this
occlusion is dramatically reduced (with minimal computation
as compared to registration).
Fig. 6. Occlusion Reduction for Connecting Rod (left, right, combined)

Fig. 7. Occlusion Reduction for Pump Inserts (left, right, combined)
V. CONCLUSIONS
We have presented a straightforward calibration method
for laser-based range cameras, which adapts the underlying
theory of proven techniques to take advantage of the planar
constraint and readily available straight line data inherent to
these systems. For the individual sensor, the calibration has
been shown to exhibit high absolute accuracy. The method
is used to calibrate dual cameras into a common reference
frame, with which we have achieved a substantial reduction
in camera occlusion with high relative accuracy.
Although we present only dual cameras in a symmetric
configuration here, the principles could easily be extended to
more cameras and different configurations, potentially result-
ing in further reduction of occlusion for some applications.
The calibration method can also be applied to single-camera
systems.
Our method is applicable to a variety of inspection,
metrology, and other tasks which might benefit from more
complete range data in a single scan. Importantly, it allows
existing methods working with range images to function
unchanged with the fused data, making the system suitable
as a drop-in replacement for a single-camera solution.
REFERENCES
[1] P. J. Besl, “Active, Optical Range Imaging Sensors,” Machine Vision
and Applications, pp. 127–152, 1988.
[2] S. K. Mada, M. L. Smith, L. N. Smith, and S. Midha, “An Overview
of Passive and Active Vision Techniques for Hand-Held 3D Data
Acquisition,” in Opto-Ireland 2002: Optical Metrology, Imaging, and
Machine Vision. SPIE, 2003, pp. 16–27.
[3] I. D. Reid, “Projective Calibration of a Laser-Stripe Range Finder,”
Image and Vision Computing, vol. 14, no. 9, pp. 659–666, 1996.
[4] O. Jokinen, “Self-Calibration of a Light Striping System by Matching
Multiple 3-D Profile Maps,” in Proc. 2nd Intl. Conf. on 3-D Digital
Imaging and Modeling, 1999, pp. 180–190.
[5] E. Trucco, R. B. Fisher, and A. W. Fitzgibbon, “Direct Calibration and
Data Consistency in 3-D Laser Scanning,” in Proc. British Machine
Vision Conf., 1994, pp. 489–498.
[6] E. Trucco, R. B. Fisher, A. W. Fitzgibbon, and D. K. Naidu, “Cal-
ibration, Data Consistency and Model Acquisition with a 3-D Laser
Striper,” Intl. Jrnl. of Computer Integrated Manufacturing, vol. 11,
no. 4, pp. 292–310, 1998.
[7] J. L. Vilaça, J. C. Fonseca, and A. M. Pinho, “Calibration Procedure
for 3D Measurement Systems Using Two Cameras and a Laser Line,”
Optics & Laser Technology, vol. 41, no. 2, pp. 112–119, 2009.
[8] J. Salvi, C. Matabosch, D. Fofi, and J. Forest, “A Review of Recent
Range Image Registration Methods with Accuracy Evaluation,” Image
and Vision Computing, vol. 25, pp. 578–596, 2007.
[9] D. C. Brown, “Decentering Distortion of Lenses,” Photogrammetric
Engineering, vol. 32, no. 3, pp. 444–462, 1966.
[10] J. Weng, P. Cohen, and M. Herniou, “Camera Calibration with
Distortion Models and Accuracy Evaluation,” IEEE Trans. on Pattern
Analysis and Machine Intelligence, vol. 14, no. 10, pp. 965–980, 1992.
[11] B. Prescott and G. F. McLean, “Line-Based Correction of Radial Lens
Distortion,” Graphical Models and Image Processing, vol. 59, no. 1,
pp. 39–47, 1997.
[12] F. Devernay and O. Faugeras, “Straight Lines Have to be Straight,”
Machine Vision and Applications, vol. 13, pp. 14–24, 2001.
[13] M. A. Fischler and R. C. Bolles, “Random Sample Consensus: A
Paradigm for Model Fitting with Applications to Image Analysis and
Automated Cartography,” Comm. of the ACM, vol. 24, no. 6, pp.
381–395, 1981.
[14] C. T. Kelley, Iterative Methods for Optimization. SIAM, 1999.
[15] M. Galassi, J. Davies, J. Theiler, B. Gough, G. Jungman, P. Alken,
M. Booth, and F. Rossi, GNU Scientific Library Reference Manual,
3rd ed. [Online]. Available: http://www.gnu.org/software/gsl/