
Vision, Video and Graphics (2005)

E. Trucco, M. Chantler (Editors)

Rigorous Computing in Computer Vision

Michela Farenzena and Andrea Fusiello

Dipartimento di Informatica, Università di Verona

Strada Le Grazie 15, 37134 Verona, Italy

farenzena@sci.univr.it, andrea.fusiello@sci.univr.it

Abstract

In this paper we discuss how Interval Analysis can be used to solve some problems in Computer Vision, namely autocalibration and triangulation. The crucial property of Interval Analysis is its ability to rigorously bound the range of a function over a given domain. This makes it possible to propagate input errors with guaranteed results (used in multi-view triangulation) and to search for solutions in non-linear minimisation problems with provably correct branch-and-bound algorithms (used in autocalibration). Experiments with real calibrated images illustrate the interval approach.

Categories and Subject Descriptors (according to ACM CCS): I.4.8 [Image Processing and Computer Vision]: Scene Analysis; I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling; G.4 [Mathematical Software]: Reliability and robustness

1. Introduction

Interval analysis (IA) [Moo66] is an approach to the solution of numerical problems by performing computations on sets of reals rather than on floating point approximations to reals. IA defines methods for computing an interval that encloses the range of various elementary mathematical functions. Interval evaluations return a superset of the mathematically correct result, hence the interval approach is said to be rigorous.

There are two principal advantages of IA over classical numerical analysis. The first is that the input errors and the roundoff errors are automatically incorporated into the result interval. Thus, interval evaluation can be viewed as automatically performing both a calculation and an error analysis. The second is that IA allows one to compute provably correct upper and lower bounds on the range of a function over an interval, and this proves useful in solving global optimisation problems. Another important application of IA is the construction of verifiable constraint solvers, which return intervals that are guaranteed to contain all the real solutions.

In this paper we report our experience in applying IA tools to Computer Vision problems. IA is not a panacea, of course, but it has been strangely overlooked by the Computer Vision community. To the best of our knowledge, only [Bro83, OF87, MKP96] approached IA in the past, and they did not push this research forward. More attention has been given to IA in the Computer Graphics community, where it has been applied to problems such as ray tracing, approximating implicit curves [Sny92], and others.

On the basis of our preliminary results, however, we maintain that this is a very interesting and promising paradigm, which might challenge the probabilistic maximum-likelihood one in problems involving real data and provide a guarantee of global convergence to non-linear optimisation algorithms.

We applied IA techniques to the problem of autocalibration [FBFB04b], whose solution comes from a non-linear minimisation, and to the problem of triangulation [FBFB04a], which requires that the error in the localisation of image points is suitably taken into account. In this paper we will give an overview of IA and outline its application to these two problems.

The classical approach to autocalibration (or self-calibration), in the case of a single moving camera with constant but unknown intrinsic parameters, is based on the Kruppa equations [MF92], which have been found to be very sensitive to noise [LF97]. Other formulations (see [Fus00] for a review) avoid this instability, but all are based on a non-linear minimisation and none of the existing methods is provably convergent. On the contrary, IA algorithms [Han92] solve the optimisation problem with automatic result verification, i.e. with the guarantee that the global minimisers have been found.

In the absence of errors, triangulation is a trivial problem, involving only finding the intersection of the rays in space corresponding to back-projections of the image points. If data are perturbed, however, the rays do not intersect in a common point, and obtaining the best estimate of the 3-D point is not a trivial task. In the literature, the customary procedure is to find the "best" 3-D point in some sense [HS97, Zha98]. Thanks to IA, instead of selecting one best solution, one can enclose the set of all the possible solutions, given a bounded error affecting the image points.

Adhering to the IA paradigm, we do not model a probability distribution inside the intervals, therefore there is no preferred solution in the solution set.

2. Interval Analysis

Interval Analysis [Moo66] is an arithmetic defined on intervals, rather than on real numbers. It was first introduced for bounding the measurement errors of physical quantities for which no statistical distribution was known. In the remainder of this section we shall follow the notation used in [KNN∗], where intervals are denoted by boldface. Underscores and overscores will represent respectively lower and upper bounds of intervals. The midpoint of an interval x is denoted by mid(x). IR and IRⁿ stand respectively for the set of real intervals and the set of real interval vectors of dimension n. If f(x) is a function defined over an interval x then range(f, x) denotes the range of f(x) over x.

If x = [x̲, x̄] and y = [y̲, ȳ], a binary operation between x and y is defined in interval arithmetic as:

x ◦ y = {x ◦ y | x ∈ x ∧ y ∈ y},   ∀ ◦ ∈ {+, −, ×, ÷}.

Operationally, interval operations are defined by the min-max formula:

x ◦ y = [ min(x̲◦y̲, x̲◦ȳ, x̄◦y̲, x̄◦ȳ), max(x̲◦y̲, x̲◦ȳ, x̄◦y̲, x̄◦ȳ) ].   (1)

Interval division x/y is undefined when 0 ∈ y. However, under certain circumstances it makes sense to define such quotients in an extended arithmetic [Kea96], where division by zero is included, resulting in the interval [−∞, ∞] in the worst case.

Note that the ranges of the four elementary interval operations are exactly the ranges of the corresponding real operations, and the above definitions imply the ability to perform them with arbitrary precision. When implemented on a digital computer, however, truncation errors occur, and they may cause the resulting interval not to contain the true result. In order to preserve the guarantee that the true value always lies within the interval, the end-points must be rounded outward, i.e., the lower endpoint of the interval must be rounded down and the upper endpoint must be rounded up.
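As a concrete illustration, the min-max formula with outward rounding can be sketched in a few lines of Python (a toy sketch, not the authors' implementation: the helper names and the use of math.nextafter to widen each endpoint by one ulp are our own):

```python
# Toy interval arithmetic with outward rounding: the lower endpoint is
# rounded down and the upper endpoint up, so the computed interval is
# guaranteed to contain the exact result despite floating-point roundoff.
import math

def outward(lo, hi):
    """Widen an interval by one ulp on each side to absorb roundoff."""
    return (math.nextafter(lo, -math.inf), math.nextafter(hi, math.inf))

def iadd(x, y):
    return outward(x[0] + y[0], x[1] + y[1])

def imul(x, y):
    # Min-max formula (1): take all four end-point products
    p = [a * b for a in x for b in y]
    return outward(min(p), max(p))

def idiv(x, y):
    if y[0] <= 0.0 <= y[1]:
        return (-math.inf, math.inf)   # extended arithmetic, worst case
    p = [a / b for a in x for b in y]
    return outward(min(p), max(p))

x, y = (1.0, 2.0), (-3.0, 4.0)
lo, hi = imul(x, y)
assert lo <= -6.0 and hi >= 8.0        # encloses the exact range [-6, 8]
```

The exact range of the product is [−6, 8]; the computed enclosure is one ulp wider on each side.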

2.1. Inclusion functions

In general, for arbitrary functions, interval computation cannot produce the exact range, but only approximate it.

Definition 1 (Interval extension) [Kea96] A function f : IR → IR is said to be an interval extension of f : R → R provided

range(f, x) ⊆ f(x)

for all intervals x ⊂ IR within the domain of f.

Such a function is also called an inclusion function. So, given a function f and a domain x, the inclusion function yields a rigorous bound (or enclosure) on the exact range range(f, x).

This property is particularly suited for error propagation: if x bounds the input error on the variable x, then f(x) bounds the output error. Therefore, if the exact value is contained in the interval data, the exact value will be contained in the interval result. This approach is different from the established techniques for error propagation [Fau93, Har96, Kan93], mainly based on statistical analysis: a statistical distribution of the error need not be assumed, and the result is mathematically guaranteed to contain the exact value.

Definition 2 (Natural interval extension) Let us consider a function f computable as an arithmetic expression f, composed of a finite sequence of operations applied to constants, argument variables or intermediate results. A natural interval extension of such a function, denoted by f(x), is obtained by replacing variables with intervals and executing all arithmetic operations according to the rules above.

Similar definitions apply for interval vectors (or boxes) in IRⁿ. Some points are worth noting:

• Different expressions for the same function yield different natural interval extensions. For instance, f1(x) = x² − x and f2(x) = x(x − 1) are both natural interval extensions of the same function.

• Variable dependency: Evaluating the expression f(x) = x − x with the interval [1,2], the result is f([1,2]) = [1,2] − [1,2] = [−1,1], not 0, as expected, because the piece of information that the two intervals represent the same variable is lost.

• Overestimation: Although the ranges of interval arithmetic operations are exact, this is not so if operations are composed. For example, if x = [0,1] we have f2(x) = [0,1]([0,1] − 1) = [0,1][−1,0] = [−1,0], which strictly includes range(f, [0,1]) = [−1/4, 0]. This effect arises as a consequence of the previous two.

• Wrapping effect: This is a phenomenon intrinsic to interval computation in Rⁿ, namely the fact that the image of a box x under a map F : Rⁿ → Rⁿ is not a box, in general. Interval computation can yield, at best, the interval hull of range(F, x), i.e. the smallest box containing range(F, x) (see Fig. 1).

Figure 1: The wrapping effect.
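The dependency and overestimation points above can be reproduced with a tiny natural-interval-extension evaluator (a sketch with hypothetical helper names, ignoring outward rounding for brevity):

```python
# Demonstration of the dependency problem and of overestimation when
# interval operations are composed. Intervals are (lo, hi) tuples.

def isub(x, y):
    return (x[0] - y[1], x[1] - y[0])

def imul(x, y):
    p = [a * b for a in x for b in y]
    return (min(p), max(p))

x = (0.0, 1.0)
# f1(x) = x^2 - x, evaluated as x*x - x: the dependency between the
# two occurrences of x is lost.
f1 = isub(imul(x, x), x)
# f2(x) = x*(x - 1), a different expression for the same function.
f2 = imul(x, isub(x, (1.0, 1.0)))

print(f1)   # (-1.0, 1.0)
print(f2)   # (-1.0, 0.0)
# Both enclose the true range [-1/4, 0], with different overestimation.
assert f1 == (-1.0, 1.0) and f2 == (-1.0, 0.0)
```

Both extensions are valid enclosures, but f2 is sharper; neither attains the exact range [−1/4, 0].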

The notion of order of an inclusion function characterises how sharply interval extensions enclose the range of a function: a higher order of inclusion means that the inclusion function gives sharper bounds. It can be shown [Kea96] that the natural interval extension is first order. Higher-order inclusion functions have been defined, for example the Taylor models (see [Neu02]):

Definition 3 (Taylor Model) Let f : x ⊂ Rⁿ → R be a function that is (m + 1) times continuously partially differentiable. Let x0 be a point in x and Pm,f the m-th order Taylor polynomial of f around x0. Let Im,f be an interval such that

f(x) ∈ Pm,f(x − x0) + Im,f   ∀x ∈ x.   (2)

We call the pair (Pm,f, Im,f) an m-th order Taylor model of f [MB03].

Pm,f + Im,f encloses range(f, x) between two hypersurfaces, as in Fig. 2.


Figure 2: Example of bounding a 7th order polynomial with a 3rd order Taylor model.

The sharpness of the bounds depends on the method used to obtain the inclusion function for Pm,f. A Taylor-Bernstein form is a Taylor model where the polynomial is expressed in the Bernstein basis rather than in the canonical power basis. The advantage is that the Taylor-Bernstein form allows one to compute the exact range of the polynomial part (see [NK02]). It can be shown that a Taylor-Bernstein form of degree m has order of inclusion m + 1.

2.2. IA-based Optimisation

The ability of Interval Analysis to compute bounds on the range of functions has been most successful in global optimisation. The overall structure of the Moore-Skelboe or Hansen [Han92] branch-and-bound algorithm is:

1. store in a list L the initial interval x0 ∈ IRⁿ containing the sought minima;
2. pick an interval x from L;
3. if x is guaranteed not to contain a global minimiser, then discard it, otherwise subdivide x and store the sub-intervals in L;
4. repeat from step 2 until the widths of the intervals in L are below the desired accuracy.

The criteria used to delete intervals are based on rigorous bounds, therefore the interval containing the global minimiser is never deleted.

A problem of global optimisation algorithms based on this scheme is the so-called cluster effect [KD94]: sub-intervals containing no solutions cannot be easily eliminated if there is a local minimum nearby. As a consequence of over-estimation in range bounding, many small intervals are created by repeated splitting, whose processing may dominate the total work spent on the global search. This phenomenon occurs when the order of the inclusion function is less than three [KD94], hence with Taylor-Bernstein forms of degree ≥ 2 as inclusion functions the cluster effect is avoided.

We employed an algorithm inspired by a recently proposed global optimisation method [NK02], based on the Moore-Skelboe-Hansen branch-and-bound algorithm and Taylor-Bernstein forms for bounding the range of the objective function.

The complete optimisation scheme can be summarised as the pseudo-code reported below. A combination of several tests has been used in our implementation:

1. The cut-off test uses an upper bound f̂ of the global minimum of the objective function f to discard an interval x from L if f(x) > f̂. Any value taken by f is an upper bound for its global minimum, but the tighter the bound, the more effective the cut-off test.
2. The monotonicity test determines whether the function f has no stationary points in an entire sub-interval x. Denote the interval extension of the gradient of f over x by ∇f(x). If 0 ∉ ∇f(x) then x can be deleted.
3. The concavity test examines the concavity of f, using its Hessian matrix H. Let Hi,i(x) denote the interval extension of the i-th diagonal entry of the Hessian over x. An interval can be deleted if Hi,i(x) < 0 for some i.
4. The Interval Newton step applies one step of the interval Newton method [Kea96] to the non-linear system ∇f(x) = 0, x ∈ x. As a consequence we may validate that x contains no stationary points, in which case we discard x; otherwise we may contract or subdivide x.
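The essence of the branch-and-bound scheme can be sketched in a deliberately simplified 1-D toy (our own sketch, not the paper's implementation: only the cut-off test is used, the objective is f(x) = x², and its interval extension happens to be exact):

```python
# Toy Moore-Skelboe-style branch-and-bound minimisation of f(x) = x^2.
# Intervals are processed best-first by the lower bound of F; boxes whose
# lower bound exceeds the best known upper bound cannot contain the
# global minimiser and are discarded (cut-off test).
import heapq

def F(lo, hi):
    """Interval extension of f(x) = x^2 over [lo, hi] (exact here)."""
    cands = [lo * lo, hi * hi]
    flo = 0.0 if lo <= 0.0 <= hi else min(cands)
    return (flo, max(cands))

def minimise(lo, hi, tol=1e-6):
    flo, fhi = F(lo, hi)
    heap, best_upper, results = [(flo, lo, hi)], fhi, []
    while heap:
        fa, a, b = heapq.heappop(heap)
        if fa > best_upper:          # cut-off test: discard rigorously
            continue
        if b - a < tol:              # stop criterion: store candidate box
            results.append((a, b))
            continue
        m = 0.5 * (a + b)            # bisect and push both halves
        for c, d in ((a, m), (m, b)):
            fc, fd = F(c, d)
            best_upper = min(best_upper, fd)
            if fc <= best_upper:
                heapq.heappush(heap, (fc, c, d))
    return results

boxes = minimise(-1.0, 2.0)
# every surviving box lies near the global minimiser x = 0
assert boxes and all(a <= 1e-5 and b >= -1e-5 for a, b in boxes)
```

Because the cut-off criterion uses a rigorous lower bound, the box containing the global minimiser is never deleted, mirroring the guarantee stated above.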

GLOBAL-OPTIMISATION ALGORITHM

U ← ∅
L ← {x0}   (list of intervals sorted in order of increasing f(x))
while L ≠ ∅ do
    remove the first interval x from L
    if stop criterion then U ← U ∪ {x}
    else if (cut-off test: f(x) > f̂, or monotonicity test: 0 ∉ ∇f(x), or concavity test: Hi,i(x) < 0 for some i) then Y ← ∅
    else interval Newton step: Y ← x ∩ N(∇f; x, mid(x))
    bisect Y and insert the resulting intervals in L
    update f̂
end
return U

3. Computer Vision background

Throughout this paper we will use the general projective camera model [HZ03]. Let M = [x, y, z, 1]ᵀ be the homogeneous coordinates of a 3-D point in the world reference frame. The homogeneous coordinates of the projected point m are given by

κ m = P M   (3)

where κ is the depth of M w.r.t. the camera, and P = A[R|t] is the camera matrix, whose position and orientation are represented, respectively, by the translation vector t and the 3×3 rotation matrix R (extrinsic parameters). The matrix A contains the intrinsic parameters, and has the following form:

    | αu   γ   u0 |
A = |  0   αv  v0 | ,   (4)
    |  0   0   1  |

where αu, αv are the focal lengths in horizontal and vertical pixels, respectively, (u0, v0) are the coordinates of the principal point, given by the intersection of the optical axis with the retinal plane, and γ is the skew factor, which models non-rectangular pixels.

Two conjugate points m and m′ are related by the fundamental matrix F [LF96]:

m′ᵀ F m = 0,   (5)

which is related to the intrinsic and extrinsic parameters by

F ∼ A′⁻ᵀ ([t]× R) A⁻¹,   (6)

where ∼ denotes equality up to a scale factor. The rank of F is two and, being defined up to a scale factor, it depends upon seven parameters.

When conjugate points are in normalised coordinates (A⁻¹m), i.e. the intrinsic parameters are known, they are related by the essential matrix:

E ∼ [t]× R.   (7)

The essential matrix encodes the rigid transformation between the two cameras, and it depends upon five independent parameters: three for the rotation and two for the translation up to a scale factor.

A counting argument implies that there must exist two linearly independent constraints that characterise the essential matrix. Indeed, the essential matrix is characterised by the following Theorem [HF89, Har92]:

Theorem 1 A real 3×3 matrix E can be factored as the product of a non-zero skew-symmetric matrix and a rotation matrix if and only if E has two identical singular values and one zero singular value.

4. Autocalibration: problem formulation

In many practical cases, the intrinsic parameters are unknown and point correspondences are the only information that can be extracted from a sequence of images. In this hypothesis, called weak calibration, fundamental matrices can be obtained directly from conjugate points. Autocalibration consists in computing the intrinsic parameters or, more generally, recovering the Euclidean stratum from weakly calibrated cameras.

The autocalibration method by Mendonça and Cipolla is based on Theorem 1. They designed a cost function which takes the intrinsic parameters as arguments and the fundamental matrices as parameters, and returns a positive value proportional to the difference between the two non-zero singular values of the essential matrix. Let Fij be the fundamental matrix relating views i and j (computed from point correspondences), and let Ai and Aj be the respective (unknown) intrinsic parameter matrices. The cost function is

χ(Ai, i = 1…n) = Σ_{i=1}^{n} Σ_{j>i} wij (σ1,ij − σ2,ij) / (σ1,ij + σ2,ij),   (8)

where σ1,ij ≥ σ2,ij are the non-zero singular values of

Eij = Aiᵀ Fij Aj,   (9)

and wij are normalised weight factors.


4.1. The Huang-Faugeras cost function

The use of Eq. (8) as an optimisation criterion has been considered; however, bounding the ranges of the singular values of an interval matrix is not trivial, since it requires the solution of a min-max optimisation problem. Therefore, in the same spirit as the Mendonça-Cipolla algorithm, we minimise the following cost function,

χ(Ai, i = 1,…,n) = Σ_{i=1}^{n} Σ_{j=i+1}^{n} wij [ 2 tr((Eij Eijᵀ)²) − tr²(Eij Eijᵀ) ] / tr²(Eij Eijᵀ),   (10)

based on the Huang-Faugeras constraint:

det(E) = 0  ∧  2 tr((EEᵀ)²) − (tr(EEᵀ))² = 0,   (11)

which is equivalent to the constraint expressed by Theorem 1. Indeed, it is easy to see that

tr((EEᵀ)²) = Σ_{k=1}^{3} σk⁴(E).   (12)

Hence, the second clause of (11) can be rewritten as

2 tr((EEᵀ)²) − tr²(EEᵀ) = (σ1² − σ2²)² + σ3²(σ3² − 2(σ1² + σ2²)).   (13)

Therefore, provided that σ3 = 0, each term of the cost function expressed by (10) vanishes for σ1² = σ2², as does the corresponding term of the Mendonça-Cipolla function (8). Moreover, as the terms are always positive, we do not need to take their square, as would be required in a generic least squares problem, thereby reducing the order of the numerator and the denominator of the cost function from sixteen to eight.

The Jacobian and Hessian matrices of the cost function are derived in closed form in [FBFB03].

An enclosure A of the intrinsic parameters is obtained as the result of minimising (10) using the global optimisation algorithm described in Sec. 2.2.
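Identity (13) is easy to check numerically. The sketch below (our own, with hypothetical singular values) uses a diagonal E, whose singular values are simply the absolute values of its diagonal entries, so both sides of (13) can be computed directly:

```python
# Numerical check of the Huang-Faugeras identity (13) for E = diag(s1, s2, s3),
# using plain 3x3 list-of-lists matrices.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

def trace(A):
    return sum(A[i][i] for i in range(3))

def huang_faugeras_residual(E):
    """2 tr((E E^T)^2) - (tr(E E^T))^2, the second clause of Eq. (11)."""
    EEt = matmul(E, transpose(E))
    return 2.0 * trace(matmul(EEt, EEt)) - trace(EEt) ** 2

s1, s2, s3 = 2.0, 1.0, 0.5                 # hypothetical singular values
E = [[s1, 0, 0], [0, s2, 0], [0, 0, s3]]
lhs = huang_faugeras_residual(E)
rhs = (s1**2 - s2**2) ** 2 + s3**2 * (s3**2 - 2 * (s1**2 + s2**2))
assert abs(lhs - rhs) < 1e-9

# For a valid essential matrix (s1 = s2, s3 = 0) the residual vanishes:
assert huang_faugeras_residual([[1, 0, 0], [0, 1, 0], [0, 0, 0]]) == 0.0
```

With σ3 = 0 the residual reduces to (σ1² − σ2²)², which is exactly the quantity each term of (10) drives to zero.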

5. Triangulation: problem formulation

Let Pi, i = 1,…,n be a sequence of n known cameras and mi be the image of some unknown point M in 3-D space, both expressed in homogeneous coordinates. The problem of computing the point M given the camera matrices Pi and the image points mi is known as the triangulation problem. In the absence of errors, this problem is trivial, involving only finding the intersection point of rays in space. When data are perturbed by errors, however, the rays corresponding to back-projections of the image points do not intersect in a common point, therefore only an approximate solution can be defined. This approximation can be circumvented if one refrains from searching for one solution and computes instead a set of solutions that contains the error-free solution and can be defined precisely in terms of the error affecting the image points.

Figure 3: Interval-based triangulation.

In the case of two views, assuming that errors are bounded by rectangles B1 and B2 in the image, the solution set of triangulation is a polyhedron D2 with a diamond shape as in Fig. 3. Geometrically, D2 is obtained by intersecting the two semi-infinite pyramids defined by the two rectangles B1 and B2 and the respective camera centres.

In the general case of n views, the solution set is defined as the polyhedron formed by the intersection of the n semi-infinite pyramids generated by the intervals B1,…,Bn. Analytically, this region is defined as the set

Dn = {M : ∀i = 1,…,n ∃ mi ∈ Bi s.t. mi ≃ Pi M}.

In the following we will show how the solution set can be enclosed with an axis-aligned box using Interval Analysis.

Given the camera matrices P1 and P2, let m1 and m2 be two corresponding points. It follows that m2 lies on the epipolar line of m1, and so the two rays back-projected from the image points m1 and m2 lie in a common epipolar plane. As they lie in the same plane, they will intersect at some point. This point is the reconstructed 3-D scene point M.

The equation of the epipolar line can be derived from the equation describing the optical ray of m1:

M = [ −P3×3,1⁻¹ P·4,1 ; 1 ] + λ [ P3×3,1⁻¹ m1 ; 0 ],   λ ∈ R,   (14)

where [a ; b] denotes vertical stacking, P3×3,1 is the matrix composed by the first three rows and first three columns of P1, and P·4,1 is the fourth column of P1. The epipolar line corresponding to m1 represents the projection of the optical ray of m1 onto the image plane 2:

κ m2 = e2 + λ m′1   (15)

where

e2 = P2 [ −P3×3,1⁻¹ P·4,1 ; 1 ]   and   m′1 = P3×3,2 P3×3,1⁻¹ m1.

Analytically, the reconstructed 3-D point M can be found using Equation (15), by solving for the parameters κ and λ with the following closed-form expressions:

1/κ = ( (m2 × m′1) · (e2 × m′1) ) / ||e2 × m′1||²,

λ/κ = ( (m2 × e2) · (m′1 × e2) ) / ||m′1 × e2||².   (16)

The coordinates of M are then obtained by inserting the value of λ into Equation (14) (obviously, M can also be recovered with respect to the other camera, using the optical ray generated by m2). After doing all the substitutions, we can write a closed-form expression that relates the reconstructed point to the two conjugate image points:

M = f(m1, m2).   (17)

We chose this formulation of the triangulation, introduced by [Fau92], precisely because it leads to this closed-form expression, representing the geometric operation of intersecting rays in 3-D space. This will be a key feature for the application of Interval Analysis.
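The pointwise (non-interval) version of Eqs. (14)-(16) can be sketched as follows; the camera matrices and the point are hypothetical, chosen only so that the noise-free check has a known answer:

```python
# A minimal sketch of the closed-form two-view triangulation of
# Eqs. (14)-(16): intersect the optical rays of two conjugate points.
import numpy as np

def triangulate(P1, P2, m1, m2):
    """m1, m2 are homogeneous image points; returns the 3-D point M."""
    Q1 = P1[:, :3]                                   # P_{3x3,1}
    q1 = P1[:, 3]                                    # P_{.4,1}
    c1 = np.append(-np.linalg.solve(Q1, q1), 1.0)    # centre of camera 1
    e2 = P2 @ c1                                     # its projection in view 2
    m1p = P2[:, :3] @ np.linalg.solve(Q1, m1)        # m'_1 of Eq. (15)
    # Eq. (16): solve kappa * m2 = e2 + lambda * m'_1
    lam_over_kappa = np.dot(np.cross(m2, e2), np.cross(m1p, e2)) \
        / np.linalg.norm(np.cross(m1p, e2)) ** 2
    one_over_kappa = np.dot(np.cross(m2, m1p), np.cross(e2, m1p)) \
        / np.linalg.norm(np.cross(e2, m1p)) ** 2
    lam = lam_over_kappa / one_over_kappa
    # Eq. (14): point on the optical ray of m1 at parameter lambda
    return c1[:3] + lam * np.linalg.solve(Q1, m1)

# Noise-free check with two hypothetical cameras observing M = (1, 2, 5):
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])               # [I | 0]
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])   # [I | t]
M = np.array([1.0, 2.0, 5.0, 1.0])
m1, m2 = P1 @ M, P2 @ M
assert np.allclose(triangulate(P1, P2, m1, m2), M[:3])
```

Because this is a single closed-form expression in m1 and m2, evaluating it with interval arguments (as in Sec. 6) immediately yields an enclosure of the solution set.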

6. Interval-based triangulation

Let us consider the expression f defined in Eq. (17). If we let m1 and m2 vary in B1 and B2 respectively, then range(f, B1 × B2) describes the solution set D2. Interval Analysis gives us a way to compute an axis-aligned bounding box containing D2 by simply evaluating f(m1, m2), the natural interval extension of f, with B1 = m1 and B2 = m2.

IA guarantees that if the conjugate intervals m1 and m2 contain the exact point correspondences, then the interval result contains the exact (i.e. error-free) 3-D reconstructed point.

It may be worth noting that the result is not to be interpreted in a probabilistic or fuzzy way: no assumption is made on the statistical distribution of the error, hence no point inside the resulting 3-D interval is more probable or more important than the others.


Figure 4: Interval-based triangulation with n views.

The approach is easily extendible to the general n-views case. As defined in Sec. 5, the solution set of triangulation is the 3-D polyhedron formed by the intersection of the semi-infinite pyramids generated by back-projecting in space the intervals m1,…,mn (Fig. 4). Thanks to the associativity of intersection, Dn can be obtained by first intersecting pairs of such pyramids and then intersecting the results. Let D2^{i,j} be the solution set of the triangulation between view i and view j. Then:

Dn = ∩_{i=1,…,n; j=i+1,…,n} D2^{i,j}.   (18)

An enclosure of the solution set Dn is obtained by intersecting the n(n−1)/2 enclosures of D2^{i,j} computed with the two-view method described above. Since each enclosure contains the respective solution set D2^{i,j}, their intersection will contain Dn. Similarly, as the error-free solution is contained in each D2^{i,j}, it must be contained in Dn as well.
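Since each pairwise enclosure is an axis-aligned box, the intersection in (18) reduces to per-axis max/min of the endpoints. A small sketch (hypothetical box values, for illustration):

```python
# Intersecting axis-aligned enclosures, as in Eq. (18). A box is a list
# of (lo, hi) pairs, one per coordinate axis.

def intersect_boxes(a, b):
    """Intersection of two axis-aligned boxes; None if they are disjoint."""
    out = []
    for (alo, ahi), (blo, bhi) in zip(a, b):
        lo, hi = max(alo, blo), min(ahi, bhi)
        if lo > hi:
            return None              # empty on this axis: boxes disjoint
        out.append((lo, hi))
    return out

def enclose_n_views(pairwise_boxes):
    """Fold the n(n-1)/2 pairwise enclosures into one box containing Dn."""
    box = pairwise_boxes[0]
    for other in pairwise_boxes[1:]:
        box = intersect_boxes(box, other)
        if box is None:
            break
    return box

boxes = [[(0.0, 2.0), (0.0, 2.0), (4.0, 9.0)],
         [(0.5, 3.0), (-1.0, 1.5), (3.0, 6.0)],
         [(0.0, 1.0), (0.5, 2.5), (4.5, 8.0)]]
assert enclose_n_views(boxes) == [(0.5, 1.0), (0.5, 1.5), (4.5, 6.0)]
```

Every pairwise box contains the error-free point, so the folded intersection does too, and it is never larger than any single enclosure.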

7. Experimental results

Experimental validation of the algorithms described here and other results can be found in [FBFB04b] and on the Internet†. In this paper we only report one example of autocalibration and reconstruction.

We used the Valbonne sequence, consisting of five frames. The starting interval for the global minimisation is chosen as follows: the midpoint for (u0, v0) is the image centre and the width is 20% of the image size; the interval for the focal lengths is [300, 1700]. The average width of the elements of the intrinsic parameters matrix obtained at the end of the minimisation is about one pixel. Table 1 compares our results with those published in [ZF96], obtained with a different autocalibration algorithm.

                     αu      αv      u0      v0
Zeller & Faugeras   681.3   679.3   258.8   383.2
Our algorithm       618.5   699.2   234.1   372.8

Table 1: Midpoint of intrinsic parameters computed with our algorithm versus the result found in [ZF96].

Once the intrinsic parameters are known, the motion can be factorised out from the essential matrices [Har92], and the projection matrices recovered as in [ZF96]. This part is executed using mid(A), but at the end of the process the interval nature of A is taken into account by computing normalised coordinates in interval arithmetic: m ← A⁻¹ m.

Normalised pointwise projection matrices Pi = [R | t] are then used together with interval normalised coordinates to reconstruct the Valbonne church (Figure 5) with our interval-based triangulation.

†http://www.sci.univr.it/~fusiello/demo

Given that image points are contained in 2-pixel-wide intervals, the average side length of the 3-D boxes is about 50 cm. It is interesting to note that these boxes extend mainly along the z-axis.

8. Conclusions

In this paper we discussed how Interval Analysis can be used to solve some problems in Computer Vision, namely autocalibration and triangulation. Autocalibration consists in performing a non-linear minimisation, and triangulation requires that errors in the localisation of image points are suitably taken into account. IA allows one to propagate input errors with guaranteed results and to obtain provably correct branch-and-bound algorithms.

On the basis of our preliminary results we maintain that IA is a very interesting and powerful paradigm, which might be applied to many other problems.

Work is in progress aimed at including geometrical constraints (e.g. known angles or lengths), which, together with the rigidity of the structure, will help to reduce further the width of the solution boxes.

Acknowledgements

Arrigo Benedetti co-authored some papers in the past and introduced the authors to IA.

References

[Bro83] BROOKS R.: Model-based three-dimensional interpretations of two-dimensional images. IEEE Transactions on Pattern Analysis and Machine Intelligence 5, 2 (March 1983), 140–149.

[Fau92] FAUGERAS O.: What can be seen in three dimensions with an uncalibrated stereo rig? In Proceedings of the European Conference on Computer Vision (Santa Margherita L., 1992), pp. 563–578.

[Fau93] FAUGERAS O.: Three-Dimensional Computer Vision: A Geometric Viewpoint. The MIT Press, Cambridge, MA, 1993.

[FBFB03] FUSIELLO A., BENEDETTI A., FARENZENA M., BUSTI A.: Globally Convergent Autocalibration using Interval Analysis. Tech. rep., Dipartimento di Informatica, Università di Verona, 2003.

[FBFB04a] FARENZENA M., BUSTI A., FUSIELLO A., BENEDETTI A.: Rigorous accuracy bound for calibrated stereo reconstruction. In Proceedings of the International Conference of Pattern Recognition (Cambridge, UK, 2004), vol. IV, pp. 288–292.

[FBFB04b] FUSIELLO A., BENEDETTI A., FARENZENA M., BUSTI A.: Globally convergent autocalibration using interval analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence 26, 12 (December 2004), 1633–1638.

[Fus00] FUSIELLO A.: Uncalibrated Euclidean reconstruction: A review. Image and Vision Computing 18, 6-7 (May 2000), 555–563.

[Han92] HANSEN E. R.: Global Optimization Using Interval Analysis. Marcel Dekker, New York, 1992.

[Har92] HARTLEY R. I.: Estimation of relative camera position for uncalibrated cameras. In Proceedings of the European Conference on Computer Vision (Santa Margherita L., 1992), pp. 579–587.

[Har96] HARALICK R. M.: Propagating covariance in computer vision. In Workshop on Performance Characteristics of Vision Algorithms (Cambridge, UK, 1996), pp. 1–12.

[HF89] HUANG T. S., FAUGERAS O.: Some properties of the E matrix in two-view motion estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence 11, 12 (December 1989), 1310–1312.

[HS97] HARTLEY R. I., STURM P.: Triangulation. Computer Vision and Image Understanding 68, 2 (November 1997), 146–157.

[HZ03] HARTLEY R., ZISSERMAN A.: Multiple view geometry in computer vision, 2nd ed. Cambridge University Press, 2003.

[Kan93] KANATANI K.: Geometric Computation for Machine Vision. Oxford University Press, 1993.

[KD94] KEARFOTT B., DU K.: The cluster problem in multivariate global optimization. Journal of Global Optimization 5 (1994), 253–365.

[Kea96] KEARFOTT B.: Rigorous Global Search: Continuous Problems. Kluwer, 1996.

[KNN∗] KEARFOTT R. B., NAKAO M. T., NEUMAIER A., RUMP S. M., SHARY S. P., VAN HENTENRYCK P.: Standardized notation in interval analysis. Submitted to Reliable Computing.

[LF96] LUONG Q.-T., FAUGERAS O. D.: The fundamental matrix: Theory, algorithms, and stability analysis. International Journal of Computer Vision 17 (1996), 43–75.

[LF97] LUONG Q.-T., FAUGERAS O.: Self-calibration of a moving camera from point correspondences and fundamental matrices. International Journal of Computer Vision 22, 3 (1997), 261–289.

[MB03] MAKINO K., BERZ M.: Taylor models and other validated functional inclusion methods. International Journal of Pure and Applied Mathematics 4, 4 (2003), 379–456.


Figure 5: Interval-based reconstruction of the Valbonne church (left). To better visualise the 3-D structure, segments joining the midpoints of the intervals have been drawn. On the right a frame of the sequence is shown with the projection of the 3-D intervals overlaid.

[MF92] MAYBANK S. J., FAUGERAS O.: A theory of self-calibration of a moving camera. International Journal of Computer Vision 8, 2 (1992), 123–151.

[MKP96] MARIK R., KITTLER J., PETROU M.: Error sensitivity assessment of vision algorithms based on direct error-propagation. In Workshop on Performance Characteristics of Vision Algorithms (Cambridge, UK, 1996).

[Moo66] MOORE R. E.: Interval Analysis. Prentice-Hall, 1966.

[Neu02] NEUMAIER A.: Taylor forms - use and limits. Reliable Computing 9 (2002), 43–79.

[NK02] NATARAJ P. S. V., KOTECHA K.: An algorithm for global optimization using the Taylor-Bernstein form as an inclusion function. International Journal of Global Optimization 24 (2002), 417–436.

[OF87] ORR M. J. L., FISHER R. B.: Geometric reasoning for computer vision. Image Vision Computing 5, 3 (1987), 233–238.

[Sny92] SNYDER J. M.: Interval analysis for computer graphics. In Computer Graphics (SIGGRAPH '92 Proceedings) (July 1992), vol. 26, pp. 121–130.

[ZF96] ZELLER C., FAUGERAS O.: Camera Self-Calibration from Video Sequences: the Kruppa Equations Revisited. Research Report 2793, INRIA, February 1996.

[Zha98] ZHANG Z.: Determining the epipolar geometry and its uncertainty: A review. International Journal of Computer Vision 27, 2 (March/April 1998), 161–195.

© The Eurographics Association 2005.
