UNCERTAINTY PROPAGATION IN ESTIMATION OF
PARTIAL 3D VELOCITY
Nuno Gonçalves and Helder Araújo
Institute of Systems and Robotics - Coimbra
Dept. of Electrical and Computers Engineering
University of Coimbra
Pinhal de Marrocos - POLO II - 3030 Coimbra
PORTUGAL
email: {nunogon, helder}@isr.uc.pt
Keywords: uncertainty propagation, 3D velocity,
rotation, translation, depth resolution
Abstract
This paper analyses two methods to compute the 3D
velocity of a navigating stereo head in the depth (Z)
direction. Both methods, which are functions of the
optical flow and disparity maps, are presented in two
approaches: differential and discrete. All the expressions
of both methods in both formulations are studied
within the scope of uncertainty propagation. This
provides a means to identify, for each method, the
critical input variables whose measurement requires
special care. Different paths (translational, rotational
and mixed) as well as different types of surfaces are
compared.
1 Introduction
Motion estimation has been studied mainly within
the framework of rigid body motion. However, in
robotics literature it is easy to find the motion esti-
mation problem also stated in a different way: the
estimation of the time-to-impact (TTI) or time-to-
collision.
This quantity yields the time needed to impact
with the nearest obstacle if the motion remains unchanged.
It can be computed with the expression:

TTI = Z / V_Z    (1)

where Z is the depth of the nearest obstacle and V_Z
is the 3D motion of the vehicle in the depth (Z) direction.
Given the depth information, the problem
becomes the estimation of V_Z.
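As a minimal numeric sketch of expression 1 (the function name and the obstacle values are hypothetical, for illustration only):

```python
def time_to_impact(Z, V_Z):
    """TTI (eq. 1): depth of the nearest obstacle divided by the
    approach speed along the optical (Z) axis."""
    if V_Z <= 0:
        raise ValueError("vehicle is not approaching the obstacle")
    return Z / V_Z

# A hypothetical obstacle 3 m away, approached at 0.5 m/s:
print(time_to_impact(3.0, 0.5))  # 6.0 seconds to impact
```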
In robotics applications it is very important to avoid
collisions with obstacles, and the TTI plays an important
role in that matter. Physiological researchers
[7] have stated that in the human (and, in general,
animal) visual system the speed of self-motion cannot
be determined visually using only the optical
flow pattern. TTI, however, can be directly measured
from the optical flow. There is no general agreement
on whether humans use this strategy to avoid collisions.
Colombo [1] points out that often the TTI is con-
fused with scaled depth (which considers only the
translational motion). This approximation is reason-
able when a narrow field of view is used but at the
image periphery gross estimation errors should be
expected. To avoid this model error, both transla-
tional and rotational components of rigid body mo-
tion should be considered.
In this paper we are interested in the computation of
the denominator of expression 1, that is, the 3D velocity
of the navigating system in the Z direction.
This velocity is a function of the rigid body translational
and rotational velocities, which are in general
unknown. Two methods ([9, 5, 2, 4]) are presented
to compute V_Z, using optical flow and disparity maps
(which provide the depth information used both for
V_Z and for the TTI) in stereo sequences. Both methods
are formulated in differential and discrete approaches.
This paper analyses those methods within the scope
of uncertainty propagation. Errors in the input variables
used to compute V_Z will inevitably corrupt
its result. Even very small errors in the optical
flow and disparity information can produce a high
level of uncertainty in the values of V_Z. This fact
reduces the accuracy and the interest of such a computation
(it also suggests that a high number of measurements
has to be made), and that is why the quantification
of the uncertainty is fundamental.
The aim of this work is to quantify the variance of
the computed values of V_Z, which provides a means
to identify the critical input variables of the methods.
Those critical factors indicate which measurements
should be made with special care.
In the next section the problem of motion estimation
is stated and in the following two sections the differ-
ential and discrete approaches are presented. Sec-
tion 5 derives the uncertainty propagation expres-
sions for both methods (two approaches) and in sec-
tion 6 some experiments are described and results
are presented. Section 7 presents some conclusions
of the work.
2 Motion Estimation
Before the description of the methods used to com-
pute the 3D velocity, we shall first introduce the no-
tations and geometry used throughout this paper.
In this paper a 3D point in space is designated by its
coordinate vector P = [X Y Z]^T, and the world coordinate
system is coincident with the cyclopean coordinate
system, that is, centred at the middle point
between the optical centres of both cameras. The
origins of the local camera coordinate systems are
the optical centres, at a distance f (the focal length)
from the image plane. Both cameras are parallel to each
other, separated by the baseline b. The flow induced
in the image planes is represented by (u_l, v_l)
for the left image plane and by (u_r, v_r) for the
right image plane.
Figure 1 shows the geometry of the stereo vision sys-
tem and the world coordinate system.
[Figure omitted: parallel stereo geometry with point P = [X Y Z], optical centres O_l and O_r, image projections (x_l, y_l) and (x_r, y_r), baseline b and focal length f]
Figure 1: World and stereo coordinate system
The model used for the 3D total velocity of a point
in space is the rigid body motion. Let V be the total
3D velocity of the point P. As any rigid body motion
can be expressed by a translational component given
by t = [t_X t_Y t_Z]^T and a rotational component given
by ω = [ω_X ω_Y ω_Z]^T, the 3D velocity is given by
V = t + ω × P.

Computing the components of the total 3D velocity
V, the following expression is obtained:

V_X = t_X + ω_Y Z − ω_Z Y
V_Y = t_Y + ω_Z X − ω_X Z
V_Z = t_Z + ω_X Y − ω_Y X    (2)

The third component of the 3D velocity, V_Z, is the
velocity of the scene points in the direction of the
optical axis, which is the quantity to be estimated
(providing a means to compute the time-to-impact).

Besides that, there are two possible approaches to
the estimation problem: differential and discrete.
The two methods in both approaches are presented
in the next sections.
3 Differential approach
In this section 3D motion estimation is considered
from a differential standpoint. The correspondences
across time are not known and the differential optical
flow is available. Two methods to estimate the 3D
velocity in the direction - are presented. The
details and proofs of those methods are available in
[3, 2, 9, 5].
- Depth Constraint
The change in the depth of a point or rigid body over
time is directly related to its velocity in 3D space. It
can be used this principle to relate the velocity in the
direction with depth.
The depth at instant of a point should be the depth
at the instant plus the displacement in the direc-
tion - . This relationship is given by the follow-
ing expression, the linear Depth Change Constraint
Equation - DCCE (first order Taylor series approxi-
mation):
(3)
where is the depth at a given time , ,
and its spatial-temporal derivatives. and
are the components of the optical flow.
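Equation 3 can be evaluated pointwise on the depth-gradient and flow maps. A minimal sketch, assuming the reconstructed form V_Z = Z_t + u Z_x + v Z_y (function name and values hypothetical):

```python
def vz_dcce(Zx, Zy, Zt, u, v):
    """Differential DCCE (eq. 3): V_Z = Z_t + u*Z_x + v*Z_y.
    Works elementwise, so the arguments may be scalars or full
    per-pixel maps (e.g. NumPy arrays)."""
    return Zt + u * Zx + v * Zy

# Hypothetical per-pixel values: depth gradients in mm/px and mm/frame,
# optical flow in px/frame.
print(vz_dcce(Zx=2.0, Zy=-1.0, Zt=5.0, u=1.5, v=0.5))  # 5 + 3 - 0.5 = 7.5
```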
- Binocular Flow Constraint

The second method to compute V_Z is based on the
differences between the flows induced by the movement
of a point in a stereo pair of images [9]. The
parallel stereo system is again used and is considered
to move rigidly with the scene.

Point P in figure 1, its projections in each image
plane ((x_l, y_l) and (x_r, y_r)) and the optical centres
(O_l and O_r) define two similar triangles, so that
the following relationship can be written:

Z = b f / (x_l − x_r)    (4)

Now, computing the temporal derivative of equation 4 yields:

V_Z = −b f (u_l − u_r) / (x_l − x_r)²    (5)
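A sketch of equation 5, assuming the reconstructed form obtained by differentiating Z = bf/(x_l − x_r) in time (names and rig values hypothetical):

```python
def vz_binocular(b, f, xl, xr, ul, ur):
    """Differential binocular-flow estimate (eq. 5): differentiating
    Z = b*f/(x_l - x_r) with respect to time gives
    V_Z = -b*f*(u_l - u_r)/(x_l - x_r)**2."""
    return -b * f * (ul - ur) / (xl - xr) ** 2

# Hypothetical parallel rig (b in mm, f in px): a disparity of 20 px
# shrinking at 0.5 px/frame means the point is receding (V_Z > 0).
print(vz_binocular(b=300.0, f=600.0, xl=10.0, xr=-10.0, ul=-0.25, ur=0.25))
# 225.0
```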
4 Discrete Formulation
This section presents the discrete versions of both
methods to compute .
In the discrete formulation of DCCE and binocular
flow methods the depth information is assumed to
be available and so the disparity in time and (
and ) are known. Feature correspondences are also
available.
- Discrete DCCE
The DCCE equation in the discrete formulation is
given by:
(6)
where .
In the discrete formulation of the DCCE equation,
the image velocities were replaced by the finite dif-
ferences of the point image coordinates.
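A minimal sketch of the discrete DCCE, assuming the reconstructed finite-difference form of equation 6 (names and values hypothetical):

```python
def vz_dcce_discrete(Z1, Z2, dt=1.0):
    """Discrete DCCE (eq. 6): the flow terms are replaced by finite
    differences of the matched feature's image coordinates, so V_Z is
    approximated by the depth difference along the correspondence:
    V_Z ≈ (Z(x+Δx, y+Δy, t+Δt) - Z(x, y, t)) / Δt."""
    return (Z2 - Z1) / dt

# Hypothetical matched feature whose depth drops from 3000 mm to
# 2950 mm over one frame interval (the point approaches the camera):
print(vz_dcce_discrete(3000.0, 2950.0))  # -50.0 mm per frame
```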
- Binocular Flow

The discrete binocular flow method equation is given
by:

V_Z ≈ (b f / d2 − b f / d1) / Δt    (7)

or, in a form easier to compute, after rearranging the
terms:

V_Z ≈ −b f (d2 − d1) / (d1 d2 Δt)    (8)
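The rearranged form of equation 8 can be sketched as follows, assuming the reconstructed disparity-to-depth relation Z = bf/d (rig values hypothetical):

```python
def vz_dv_discrete(b, f, d1, d2, dt=1.0):
    """Discrete binocular-flow estimate (eq. 8): with Z = b*f/d, the
    depth change between matched features with disparities d1 and d2
    over the interval dt is
    V_Z = (b*f/d2 - b*f/d1)/dt = -b*f*(d2 - d1)/(d1*d2*dt)."""
    return -b * f * (d2 - d1) / (d1 * d2 * dt)

# Hypothetical rig (b = 300 mm, f = 6 mm): a feature's disparity drops
# from 2.0 mm to 1.8 mm, i.e. its depth grows from 900 mm to 1000 mm.
print(vz_dv_discrete(300.0, 6.0, 2.0, 1.8))  # 100.0 mm per frame
```

Both forms (eq. 7 and eq. 8) yield the same value; the rearranged one avoids computing the two depths explicitly.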
5 Uncertainty Propagation
Given the two models for V_Z, both in the differential
and discrete approaches, it is important to analyse
the uncertainty propagation in the equations due
to uncertainty in the data inputs. As we shall see,
it is possible to determine the critical independent
variables that, in the presence of uncertainties, affect the
recovery of motion.
The first step is to define the independent variables
for each expression:

x1 = [Z_x Z_y Z_t u v]^T,  x2 = [x_l − x_r, u_l, u_r]^T,
x3 = [Z1 Z2]^T,  x4 = [d1 d2]^T    (9)

where the geometric parameters are assumed to be
known, that is, the baseline b and the focal length f.
Any noise in the values of the disparity maps, depth
data, their temporal and spatial derivatives and in the
binocular image flows affects the computation of V_Z.
To study the uncertainty propagation, the covariance
matrix of an expression that depends on an input
variable vector is computed. Let f be the function
vector to be estimated and x the vector of independent
variables. Consider an n-vector random
variable x and an m-vector random variable f, a function
of the n-vector x. Notice that the relation between
x and f is nonlinear. Considering the mean
point of the random variables and computing the first
order approximation, the covariance matrix of the function
vector can be written in the form [6]:

Λ_f = J Λ_x J^T    (10)

where Λ_x is the covariance matrix of the input variables
x, and J is the Jacobian matrix that maps
vector x to f.

It is assumed that all variables are affected by Gaussian
random white noise with zero mean and standard
deviation denoted by σ_v, where v denotes the
variable. The noise in the different variables is also assumed
to be independent, so the covariance matrix for this
input signal is given by:

Λ_x(j, k) = σ_j²  for j = k
Λ_x(j, k) = 0    for j ≠ k    (11)
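Equations 10 and 11 can be sketched numerically; the toy function and noise levels below are hypothetical, chosen only to show the mechanics:

```python
import numpy as np

def propagate(J, sigmas):
    """First-order covariance propagation (eq. 10): Lambda_f = J Lx J^T,
    where Lx is the diagonal covariance of independent zero-mean
    Gaussian noises (eq. 11) built from per-variable std. deviations."""
    Lx = np.diag(np.asarray(sigmas, dtype=float) ** 2)
    return J @ Lx @ J.T

# Toy scalar function f(x, y) = x*y evaluated at (2, 3): J = [y, x].
J = np.array([[3.0, 2.0]])
Lf = propagate(J, [0.1, 0.2])
print(float(Lf[0, 0]))  # 3^2*0.1^2 + 2^2*0.2^2 = 0.25
```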
In this study the depth is computed from the disparity
d with

Z = b f / d    (12)

and so the uncertainty analysis is within the scope of
the disparity and optical flow (differential and/or discrete).
The expressions depend on depth and on the
depth spatial and temporal gradients. So, before analysing
each equation, the uncertainty propagation in the depth
is first derived.

For the gradients of depth in relation to the variable
w (w ∈ {x, y, t}) we have:

Z_w = −(b f / d²) d_w    (13)

so that

σ²_Zw = (b f / d²)² σ²_dw    (14)

The depth covariance expression becomes:

σ²_Z = (b² f² / d⁴) σ²_d    (15)
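The depth case can be checked with the same first-order machinery; the baseline, focal length and noise level below are hypothetical:

```python
import numpy as np

# Propagating disparity noise into depth (eqs. 10 and 15): with
# Z = b*f/d, the 1x1 Jacobian is dZ/dd = -b*f/d**2, so
# sigma_Z^2 = (b*f/d**2)**2 * sigma_d^2.
b, f, d = 300.0, 6.0, 2.0   # baseline [mm], focal length [mm], disparity [mm]
sigma_d = 0.1               # std. dev. of the disparity noise [mm]

J = np.array([[-b * f / d ** 2]])   # Jacobian of Z w.r.t. d
Lx = np.array([[sigma_d ** 2]])     # 1x1 input covariance
var_Z = float((J @ Lx @ J.T)[0, 0])
print(var_Z)  # (1800/4)^2 * 0.01 = 2025.0
```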
It is now possible to concentrate attention on the
expressions of V_Z for both the DCCE and DV methods.
5.1 Depth Constraint - Differential
For the first expression, f = V_Z and x = [Z_x Z_y Z_t u v]^T.

The covariance matrix Λ_x for this input signal is
given by:

Λ_x = diag(σ²_Zx, σ²_Zy, σ²_Zt, σ²_u, σ²_v)    (16)

To compute the covariance matrix of the function
vector, equation 10 is used. It yields:

J = [u  v  1  Z_x  Z_y]    (17)

The resulting covariance matrix is a 1 × 1 matrix
given by the expression:

σ²_VZ = u² σ²_Zx + v² σ²_Zy + σ²_Zt + Z_x² σ²_u + Z_y² σ²_v    (18)

showing the dependencies on the variances of
(Z_x, Z_y, Z_t, u, v). Substituting equations 13 and 15 in
equation 18 yields:

σ²_VZ = (b² f² / d⁴)(u² σ²_dx + v² σ²_dy + σ²_dt + d_x² σ²_u + d_y² σ²_v)    (19)
5.2 Binocular Flow Constraint - Differential
Using a similar reasoning for the second method, with
d = x_l − x_r the disparity:

x = [d  u_l  u_r]^T    (20)

Λ_x = diag(σ²_d, σ²_ul, σ²_ur)    (21)

and the Jacobian matrix is:

J = [2 b f (u_l − u_r)/d³  −b f/d²  b f/d²]    (22)

The covariance matrix of the function vector, after
arranging the terms, is then:

σ²_VZ = (4 b² f² (u_l − u_r)² / d⁶) σ²_d + (b² f² / d⁴)(σ²_ul + σ²_ur)    (23)

and substituting equation 15 in equation 23 it is obtained:

σ²_VZ = (4 (u_l − u_r)² / d²) σ²_Z + (b² f² / d⁴)(σ²_ul + σ²_ur)    (24)
5.3 Depth Constraint - Discrete
In this case the independent variables vector is
given by:

x = [Z1  Z2]^T    (25)

The Jacobian matrix is straightforward in this function.
The covariance matrix, dependent on the depth,
yields:

σ²_VZ = (σ²_Z1 + σ²_Z2) / Δt²    (26)

and substituting equations 13 and 15 in equation 26
it is obtained:

σ²_VZ = (b² f² / Δt²) [σ²_d1 / d1⁴ + σ²_d2 / d2⁴]    (27)
5.4 Binocular Flow Constraint - Discrete
Using the same reasoning, the independent variables
vector for the discrete binocular flow method yields:

x = [d1  d2]^T    (28)

Calculating the Jacobian matrix and substituting it in
the first order approximation of the covariance matrix
of V_Z, it yields:

σ²_VZ = (b² f² / Δt²) (σ²_d1 / d1⁴ + σ²_d2 / d2⁴)    (29)

and putting together equation 15 and equation 29 it
is obtained:

σ²_VZ = (σ²_Z1 + σ²_Z2) / Δt²    (30)
5.5 Resolution of Depth Data
The uncertainty caused by random noise in the input
variables strongly affects the accuracy of the estimation
of V_Z. Besides that, the finite resolution of the
disparity maps can be an important source of error
and affects the estimation accuracy even more.

Figure 2 shows how the resolution of the disparity
can produce uncertainty in the position of a 3D point,
mainly in the depth coordinate.

The software used in our study to obtain the disparity
fields has a finite sub-pixel resolution. So,
some changes in the real depth of a point do not produce
any change in the disparity, and since depth is
inversely proportional to the disparity, its value is calculated
with decreasing resolution as the value of the
depth itself increases.

Let Δd_min be the minimum change in disparity. Then
the minimum change in depth that produces a change
in disparity is approximately:

ΔZ_min ≈ (Z² / b f) Δd_min    (31)

Equation 31 indicates that for near objects small
changes in depth cause large changes in the disparity,
and for distant objects the minimum change in
depth that produces a change in the disparity is very
high.
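Equation 31 can be checked numerically; the rig values below are hypothetical, not those of the paper's setup:

```python
def min_depth_change(b, f, Z, delta_d):
    """Minimum depth change producing a disparity change (eq. 31):
    with Z = b*f/d, a disparity step delta_d corresponds to roughly
    delta_Z ≈ Z**2 * delta_d / (b*f), growing with the square of depth."""
    return Z ** 2 * delta_d / (b * f)

# Hypothetical rig: b = 300 mm, f = 6 mm, minimum disparity step 0.002 mm.
for Z in (1000.0, 5000.0):  # depths in mm
    print(Z, min_depth_change(300.0, 6.0, Z, 0.002))
# The resolvable depth step grows 25x when the depth grows 5x.
```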
[Figure omitted: triangulation sketch from the left and right cameras showing the depth interval spanned by one disparity step at point P]
Figure 2: Effect of finite resolution of disparity maps
in depth
So, let us consider a realistic situation: mm,
mm, px and the pixel width
mm.
In that particular case it is obtained, for example:
However, if the resolution lowers to , for the
same case, it yields:
Figure 3: Intensity images and disparity field for the
synthetic world; (a) left image, (b) disparity.
It can be seen that the low resolution of the disparity/depth
data can produce large errors with increasing
distance to the optical centre of the camera. This
will produce significant errors in the computation
of depth field gradients, mainly for small motions
between two consecutive frames. It also means
that it will be difficult to recover motion for distant
points (unless high resolution disparity is used).

The perturbation caused by rounding/quantization
error (limited resolution) is given by the following
equation [8]:

σ² = q² / 12    (32)

where q is the minimum increment due to finite
resolution.
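Assuming equation 32 is the standard uniform-quantization noise model from [8], a one-line sketch (the quarter-pixel step is a hypothetical example):

```python
def quantization_variance(q):
    """Variance of uniform rounding/quantization noise (eq. 32):
    a minimum increment q contributes sigma^2 = q**2 / 12."""
    return q ** 2 / 12.0

# Hypothetical quarter-pixel disparity step:
print(quantization_variance(0.25))
```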
6 Experiments and Results
To analyse quantitatively the uncertainty equations
19, 24, 27 and 30, it was constructed a synthetic
world composed by several objects: front and lateral
walls, ground plane, a box on the ground, a cylin-
der and a sphere. Figure 3 shows the left image of a
synthetic stereo pair and the corresponding disparity
map.
This world was projected into two equal cameras
mounted in a virtual navigation robot with baseline
, focal length , square pixels width of
. The virtual robot performed several paths
(translational, rotational and mixed paths) and the
data stored includes: left and right images, dispar-
ity in high resolution (map of floats) and continuous
and discrete image velocities.
The uncertainty equations represent the variance of
the estimation of V_Z at each point. Given the disparity
maps, as well as their spatial and temporal gradients,
and the continuous and discrete velocities, the
uncertainty for each point can be computed using
equations 19, 24, 27 and 30 as a function of the variances
of the input variables.

For that purpose the following assumptions are
made: the variances of the differential image velocities
are equal for the x and y coordinates (σ_u = σ_v),
and the same holds for the discrete velocities; the
variances of the gradients of the disparity equal that
of the disparity itself, since the derivatives of the
disparity maps are approximated by finite-differences
equations.
The uncertainty propagation equations are then
given by:

σ²_i = C_i1 σ²_d + C_i2 σ²_v    (33)

where i = 1, ..., 4 represents one of the
methods (DCCE/DIF, DV/DIF, DCCE/DISC and
DV/DISC, respectively). C_i1 and C_i2 are
the weights of the disparity and velocity random
noises.
From the variance equations of all expressions and
from equation 31 it is clear that the distance of a
point to the optical centre is one of the most important
factors in the uncertainty value (variance).
To illustrate this, figure 4 plots the variance
value for equation 19, where darker points represent
values with low variance and lighter points
have higher variance (the display saturates above a
threshold). It can be seen that farther objects
have higher variances. The maps for the uncertainty
equations 24 to 30 are not presented since they are
very close to one another.
Figure 4: Variance map for V_Z. Lighter points have
higher variance. The value of σ_d used was calculated
from equation 32 for the resolution of the disparity
data.

Table 1 presents some values for the uncertainty coefficients.
Two points from each of four objects were chosen:
(G)round, (B)ox A, (S)phere and (W)all. The
points are sorted in increasing order of their depth.

         C11     C12     C21      C22      C31     C32      C41   C42
G      13216   256.2    81.2    11993    13275   256.2    10774     –
G      63364    1299   198.0    61532    63517    1299    57711     –
B      52768     1.6   133.8    51563    52825     1.6    48038     –
B      57902    1218   100.3    56170    57958    1218    51740     –
S      94444     0.3   312.7    94050    94516     0.3    87223     –
S     108960    85.7   308.2   108375   109100    85.7   101237     –
W     392981     1.4   485.6   392359   393179     1.4   373989     –
W     370668     1.4   576.8   369942   370843     1.4   350382     –

Table 1: Uncertainty coefficients for points at different
depths (C_i1: disparity weight; C_i2: velocity weight)

Regarding the values reported in table 1 it can be observed
that, as expected, for farther objects the
variance of the estimation values is higher. It can
also be noticed that the DCCE method is much more
sensitive to the uncertainty in the disparity map than
to the uncertainty in the velocities. The binocular
flow method (DV), however, is sensitive in the differential
approach almost only to the uncertainty in the
velocities and, in the discrete approach, to the uncertainty
in the disparity. Furthermore, the coefficients
C11, C31, C41 (dependence on the disparity) and
C22 (dependence on the velocities) present values
very close to each other. This suggests that when the
uncertainty in the disparity is similar to the uncertainty
in the velocities, the uncertainty of the estimated
values is very similar for both methods in both
formulations.
To see more explicitly the relation between the uncertainty
coefficients and the depth of the points used
to compute V_Z, figure 5 plots these uncertainty coefficients
when a sphere is moved from 2.5 meters to
5 meters under the same motion conditions.
The depth, however, isn't the only variable that influences
the uncertainty in the estimation. It is
known that the path also plays an important role in
the accuracy of the estimated V_Z. We concluded in
[2, 3] that for pure rotational paths the V_Z estimation
accuracy is poor. Furthermore, the amplitude
of the velocities also affects the results. To observe
these effects, figure 6 plots all the uncertainty coefficients
when the velocities are multiplied by factors
of 2, 4, 8, 16 and 32, in three paths: (A) pure translation
along the optical axis direction, (B) pure rotation
about the vertical axis and (C) translation along all
axes and rotation about the vertical and horizontal axes.

Regarding figure 6 it can be observed that, for the
DCCE method in both formulations, the disparity-related
uncertainty coefficients (C11 and C31) grow
with the increase of the velocities in the rotational
paths and do not vary in the translational
and mixed paths. The coefficients dependent
on the uncertainty in the velocities (C12 and
C32) stay around their mean value, neither increasing
nor decreasing. For the
differential approach of the binocular flow method,
the coefficient C22 increases for all paths, with only
slight changes in C21. In the discrete
formulation, however, it is observed that there
[Figure omitted: uncertainty coefficients plotted against the depth of the sphere (2500 to 5000 mm); panel (a) shows C11, C22, C31 and C41, panel (b) shows C12, C21 and C32]
Figure 5: Uncertainty coefficients - depth effect.
[Figure omitted: seven panels, (a) to (g), plotting the uncertainty coefficients against the velocity amplitude factor (1 to 32) for sequences A, B and C]
Figure 6: Uncertainty coefficients - effect of the amplitude
of velocities in three paths.
is a decrease of the corresponding uncertainty coefficient.
It is also noticed that the rotational path presents
higher values of uncertainty for almost all coefficients
and velocities.
7 Conclusions
In this paper the uncertainty propagation expressions
for the third component of the 3D velocity estimation
are derived. Those expressions were written as functions
of the uncertainty in the disparity map and the uncertainty
in the velocities (continuous and discrete).

From the analysis of the expressions and the plotted
results it is possible to conclude that, for the
DCCE method, both in the differential and in the discrete
formulations, the critical factor is the disparity.
There is an increasing tendency of the uncertainty
coefficients when the velocities themselves increase.
For the DV method, however, the two formulations
have distinct behaviours. For the differential one, the
critical factor is the uncertainty in the velocities, and for
the discrete one the critical factor is the uncertainty
in the disparity. The former approach presents an
increase of the uncertainty coefficients when the velocities
grow, and the discrete approach presents a
decrease in that situation.

In the DCCE method as well as in the DV method, in
both approaches, the coefficients of the critical factors
were always much bigger than the other ones. The
difference is between one and five orders of magnitude.

It was also observed that the depth of a 3D point relative
to the cameras strongly influences the uncertainty
coefficients. Those coefficients grow with a high
power (between 2 and 4) of the depth coordinate,
so farther objects have higher uncertainty,
which suggests that the estimation of V_Z is more accurate
when closer points are used.
Furthermore, some paths were compared and it was
observed that in rotational paths the uncertainty co-
efficients were bigger than in translational and even
mixed paths. This suggests that rotational motion is
more difficult to estimate.
References
[1] C. Colombo and A. Del Bimbo. Generalized
Bounds for Time to Collision from First-Order
Image Motion. In 7th IEEE International Con-
ference on Computer Vision, pages 220–226,
Corfu, Greece, September 1999. IEEE.
[2] N. Gonçalves. Estimação de movimento em
sequências de imagens estéreo - comparação
de dois métodos (Motion estimation in stereo image
sequences - a comparison of two methods). Master's thesis, Department
of Electrical and Computers Engineering of the
Faculty of Science and Technology of the Uni-
versity of Coimbra, 2002.
[3] N. Gonçalves and H. Araújo. Analysis of two
methods for the estimation of partial 3d veloc-
ity. In Proc. of the 9th International Sympo-
sium on Intelligent Robotic Systems, Toulouse,
France, 2001.
[4] N. Gonc¸alves and H. Ara´ujo. Estimation of 3d
motion from stereo images - differential and dis-
crete formulations. In Proc. of the 16th Interna-
tional Conference on Pattern Recognition, Au-
gust 2002.
[5] M. Harville, A. Rahimi, T. Darrell, G. Gordon,
and J. Woodfill. 3d pose tracking with linear
depth and brightness constraints. In Proc. IEEE
International Conference on Computer Vision,
Corfu, Greece, 1999.
[6] K. Kanatani. Statistical Optimization for Geo-
metric Computation: Theory and Practice. Ma-
chine Intelligence and Pattern Recognition - vol.
18. North-Holland - Elsevier, 1996.
[7] S. Palmer. Vision Science: Photons to Phe-
nomenology. MIT Press, 1999.
[8] K. Shanmugan. Digital and analog communica-
tion systems. John Wiley & Sons, 1979.
[9] A. M. Waxman and J. H. Duncan. Binocular im-
age flows: Steps towards stereo - motion fusion.
IEEE Trans. on Pattern Analysis and Machine
Intelligence, 8(6):715–729, Nov. 1986.