Stochastic map merging in rescue environments
Stefano Carpin and Andreas Birk
School of Engineering and Science
International University of Bremen – Germany
{s.carpin,a.birk}@iu-bremen.de
Abstract. We address the problem of merging multiple noisy maps in the rescue environment. The problem is tackled by performing a stochastic search in the space of possible map transformations, i.e. rotations and translations. The proposed technique, which performs a time-variant Gaussian random walk, turns out to be a generalization of other search techniques like hill-climbing or simulated annealing. Numerical examples of its performance while merging partial maps built by our rescue robots are provided.
Final Version:
RoboCup 2004: Robot Soccer World Cup VIII, LNAI 3276, Springer, 2005
@inbook{multimap_rcup04,
author = {Carpin, Stefano and Birk, Andreas},
title = {Stochastic map merging in rescue environments},
booktitle = {{RoboCup} 2004: Robot Soccer World Cup VIII},
editor = {Nardi, Daniele and Riedmiller, Martin and Sammut, Claude},
series = {Lecture Notes in Artificial Intelligence (LNAI)},
publisher = {Springer},
volume = {3276},
pages = {483ff},
year = {2005},
type = {Book Section}
}
1 Introduction
One of the main tasks to be carried out by robots engaged in a rescue scenario is to produce useful maps for human operators. Among the characteristics of such environments is the lack of a well defined structure, because of collapsed parts and debris. Robots have to move on uneven surfaces and to face significant skidding while operating. It follows that maps generated using odometric information and cheap proximity range sensors turn out to be very inaccurate. In the robotic systems we have developed this aspect is even more pronounced by our choice to implement simple mapping algorithms, so that they can run in real time on devices with possibly low computational power [1],[2]. One of the possible ways to overcome this problem is to use multiple robots to map the same environment. The multi-robot approach has some well known advantages in itself, most notably robustness [3]. In the rescue framework, multi-robot systems are even more appealing because of the possibility to perform a faster exploration of the inspected area, thus increasing the chances to quickly locate victims and hazards. As the goal is to gather as much information as possible, it is evident that the maps produced by different robots will only partially overlap, as the robots are likely to spread around in different regions rather than stick together for the whole mission. It is then a practical issue of enormous importance to merge such partially overlapping maps before they are used by the human operators. To solve the map matching problem we have borrowed some ideas from a randomized motion planning algorithm recently developed by one of the authors, which has turned out to work very efficiently [4],[5]. The algorithm performs a Gaussian random walk, but its novel aspect is that it updates its distribution parameters so that it can take advantage of its recent history. Section 2 formally defines the problem and describes the algorithmic machinery used to solve it, together with convergence results. Next, section 3 offers details about the implementation of the proposed technique and numerical results. A final discussion is presented in section 4.
2 Theoretical foundations
We start with the formal definition of a map.

Definition 1. Let N and M be two positive real numbers. An N×M map is a function

m : [0, N] × [0, M] → R.

We furthermore denote with I_{N×M} the set of N×M maps. Finally, for each map, a point from its domain is declared to be the reference point. The reference point of map m will be indicated as R(m).
The function m is a model of the beliefs encoded in the map. For example, one could assume that a positive value of m(x, y) is the belief that the point (x, y) in the map is free, while a negative value indicates the opposite. Moreover, the absolute value indicates the degree of belief. The important point is that we assume that if m(x, y) = 0 no information is available. From now on, for the sake of simplicity, we will assume N = M, but the whole approach also holds for N ≠ M.
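As a concrete illustration of this belief encoding, a discretized map can be held in a two-dimensional array of signed integers. The array shape, the particular values, and the reference point below are purely hypothetical, not taken from the paper's system:

```python
import numpy as np

# Hypothetical 8x8 discretized map: positive entries encode the belief
# that a cell is free, negative entries that it is occupied, and 0 means
# no information; the magnitude is the degree of belief.
N = 8
m = np.zeros((N, N), dtype=int)
m[2:6, 2:6] = 200         # a region strongly believed to be free
m[4, 4] = -150            # one cell believed to contain an obstacle
reference_point = (0, 0)  # the declared reference point R(m)

print(m[3, 3], m[4, 4], m[0, 0])  # 200 -150 0
```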
Definition 2. Let x, y and θ be three real numbers and m1 ∈ I_{N×N}. We define the {x, y, θ}-transformation to be the functional which transforms the map m1 into the map m2 obtained by the translation of R(m1) to the point (x, y), followed by a rotation of θ degrees. We will indicate it as T_{x,y,θ}, and we will write m2 = T_{x,y,θ}(m1) to indicate that m2 is obtained from m1 after the application of the given {x, y, θ}-transformation.
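Applied to a single map coordinate, such a transformation might be sketched as below. Definition 2 leaves the rotation centre implicit, so rotating about the translated reference point is an assumption of ours, and all function and variable names are illustrative:

```python
import numpy as np

def transform_point(p, x, y, theta_deg, ref=(0.0, 0.0)):
    """Apply an {x, y, theta}-transformation to the point p: translate so
    that the reference point ref lands on (x, y), then rotate by theta
    degrees about the new reference point (one plausible reading of
    Definition 2, which does not fix the rotation centre)."""
    t = np.radians(theta_deg)
    c, s = np.cos(t), np.sin(t)
    # translated point
    px, py = p[0] - ref[0] + x, p[1] - ref[1] + y
    # rotate the offset from the new reference point
    dx, dy = px - x, py - y
    return (x + c * dx - s * dy, y + s * dx + c * dy)

# rotating (1, 0) by 90 degrees about the new reference (5, 5)
print(transform_point((1.0, 0.0), 5.0, 5.0, 90.0))  # approximately (5.0, 6.0)
```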
Definition 3. A dissimilarity function ψ over I_{N×N} is a function

ψ : I_{N×N} × I_{N×N} → R+ ∪ {0}

such that
– ∀ m1 ∈ I_{N×N}, ψ(m1, m1) = 0;
– given two maps m1 and m2 and a transformation T_{x,y,θ}, ψ(m1, T_{x,y,θ}(m2)) is continuous with respect to x, y and θ.

The dissimilarity function measures how much two maps differ. In an ideal world, where robots are able to build two perfectly overlapping maps, their dissimilarity would be 0. When the maps cannot be superimposed, the ψ function returns positive values.
Having set the scene, the map matching problem can be defined as follows. Given m1 ∈ I_{N×N}, m2 ∈ I_{N×N} and a dissimilarity function ψ over I_{N×N}, determine the {x, y, θ}-transformation T_{x,y,θ} which minimizes

ψ(m1, T_{x,y,θ}(m2)).

The devised problem is clearly an optimization problem over R³. Traditional AI oriented techniques for addressing this problem include genetic algorithms, multipoint hill-climbing and simulated annealing (see for example [6]). We hereby illustrate how a recent technique developed for robot motion planning can be used to solve the same problem. In particular, we will also show that multipoint hill-climbing and simulated annealing can be seen as two special cases of this broader technique.
From now on we assume that the values x, y and θ come from a subset S ⊂ R³ which is the Cartesian product of three intervals. In symbols,

(x, y, θ) ∈ S = [a0, b0] × [a1, b1] × [a2, b2].

Also, to simplify the notation we will often indicate with s ∈ S the three parameters which identify a transformation, and we will then write T_s. Before moving into the stochastic part, we define a probability space [7] as the triplet (Ω, Γ, η), where Ω is the sample space, whose generic element is denoted ω, Γ is a σ-algebra on Ω, and η is a probability measure on Γ.
Definition 4. Let {f1, f2, . . .} be a sequence of mass distributions whose event space consists of just two events. The random selector induced by {f1, f2, . . .} over a domain D is a function

RS_k : D × D → D

which randomly selects one of its two arguments according to the mass distribution f_k.
Definition 5. Let ψ be a dissimilarity function over I_{N×N}, and RS_f be a random selector over S induced by the sequence of mass distributions {f1, f2, . . .}. The acceptance function associated with ψ and RS_f is defined as follows:

A_k : S × S → S

A_k(s1, s2) = s2 if ψ(m1, T_{s2}(m2)) < ψ(m1, T_{s1}(m2)), and A_k(s1, s2) = RS_k(s1, s2) otherwise.
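In code, the acceptance function and the underlying random selector might be sketched as follows; the scalar parameter `p_second`, standing in for the two-event mass distribution f_k, and the `cost` closure are illustrative simplifications of ours:

```python
import random

def random_selector(a, b, p_second):
    """RS_k over a domain D: returns b with probability p_second and a
    otherwise (p_second encodes the two-event mass distribution f_k)."""
    return b if random.random() < p_second else a

def accept(s1, s2, cost, p_second):
    """Acceptance function A_k: keep the candidate s2 when it lowers the
    dissimilarity, otherwise defer to the random selector."""
    if cost(s2) < cost(s1):
        return s2
    return random_selector(s1, s2, p_second)

# toy usage with a scalar "transformation" and cost(s) = s
cost = lambda s: s
print(accept(1, 0, cost, 0.0))  # a better candidate is always kept -> 0
```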
From now on the dependency of A on ψ and RS_f will be implicit, and we will not explicitly mention it. We now have the mathematical tools to define the Gaussian random walk stochastic process, which will be used to search for the optimal transformation in S.
Definition 6. Let t_start be a point in S, and let A be an acceptance function. We call Gaussian random walk the following discrete time stochastic process {T_k}, k = 0, 1, 2, 3, . . .:

T_0(ω) = t_start
T_k(ω) = A(T_{k−1}(ω), T_{k−1}(ω) + v_k(ω)),   k = 1, 2, 3, . . .   (1)

where v_k(ω) is a Gaussian vector with mean µ_k and covariance matrix Σ_k. From now on the dependence on ω will be implicit and we will omit it.
Assumption. We assume that there exist two positive real numbers ε1 and ε2 such that for each k the covariance matrix Σ_k satisfies the following inequalities:

ε1 I ≤ Σ_k ≤ ε2 I,   (2)

where the matrix inequality A ≤ B means that B − A is positive semidefinite.
The following theorem proves that the stochastic process defined in equation 1 will eventually discover the optimal transformation in S. The proof is omitted for lack of space.
Theorem 1. Let ŝ ∈ S be the element which minimizes ψ(m1, T_s(m2)), and let {T_0, T_1, . . . , T_k} be the sequence of transformations generated by the Gaussian random walk defined in equation 1. Let T_k^b be the best transformation generated among the first k elements, i.e. the one yielding the smallest value of ψ. Then for each ε > 0

lim_{k→+∞} Pr[ |ψ(m1, T_k^b(m2)) − ψ(m1, T_ŝ(m2))| > ε ] = 0   (3)
Algorithm 1 depicts the procedure used for exploring the space of possible transformations according to the stochastic process illustrated. As the optimal value of the dissimilarity is not known, in practice the algorithm is bounded to a certain number of iterations and returns the transformation producing the lowest ψ value. We wish to point out that this algorithm is a modification of the Adaptive Random Walk motion planner we have recently introduced [4]. The fundamental difference is that in motion planning one has to explore the space of configurations in order to reach a known target point, while in this case this information is not available.
1: k ← 0, t_k ← t_start, Σ_0 ← Σ_init, µ_0 ← µ_init
2: c_0 ← ψ(m1, T_{t_start}(m2))
3: loop
4:   Generate a new sample s ← t_k + v_k
5:   c_s ← ψ(m1, T_s(m2))
6:   if c_s < c_k OR RS_k(t_k, s) = s then
7:     k ← k + 1, t_k ← s, c_k ← c_s
8:     Σ_k ← Update(t_k, t_{k−1}, . . . , t_{k−M})
9:     µ_k ← Update(t_k, t_{k−1}, . . . , t_{k−M})
10:  else
11:    discard the sample s
Algorithm 1: Basic Gaussian Random Walk Exploration algorithm
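A minimal executable sketch of this exploration loop on a toy quadratic dissimilarity is given below. The zero-mean steps and the Σ_k switching rule (0.1·I after an accepted sample, 10·I otherwise) follow the choices reported later in section 3, while the gradient-based update of µ_k is omitted for brevity; this is a simplification of ours, not the authors' exact implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_random_walk(psi, t_start, n_iter=2000):
    """Sketch of Algorithm 1: a time-variant Gaussian random walk over
    transformation parameters s = (x, y, theta), minimizing psi."""
    t = np.asarray(t_start, dtype=float)
    c = psi(t)
    best_t, best_c = t.copy(), c
    sigma = 0.1                       # Sigma_k = 0.1*I after an acceptance
    for _ in range(n_iter):
        s = t + rng.normal(0.0, sigma, size=t.shape)  # candidate t_k + v_k
        cs = psi(s)
        # the "random decisor" of section 3: accept a sample with
        # probability 2(BD - psi(s))/BD, BD being the best value so far
        decisor_ok = best_c > 0 and rng.random() < 2 * (best_c - cs) / best_c
        if cs < c or decisor_ok:
            t, c = s, cs
            sigma = 0.1               # small steps: local gradient descent
            if cs < best_c:
                best_t, best_c = s.copy(), cs
        else:
            sigma = 10.0              # big jumps: look for a better region
    return best_t, best_c

# toy dissimilarity with its minimum at (1, 2, 30)
target = np.array([1.0, 2.0, 30.0])
toy_psi = lambda s: float(np.sum((s - target) ** 2))
best, value = gaussian_random_walk(toy_psi, [0.0, 0.0, 0.0])
print(value < 100.0)  # the walk closes in on the optimum
```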
3 Numerical results
The results presented in this section are based on real-world data collected with
the IUB rescue robots. A detailed description of the robots is found in [1]. We
describe how we implemented the algorithm described in section 2 and we sketch
the results we obtained. In our implementation a map is a grid of 200 by 200
elements, whose elements can assume integer values between -255 and 255. This
is actually the output of the mapping system we described in [2]. According to
such implementation, positive values indicate free space, while negative values
indicate obstacles. As anticipated, the absolute value indicates the belief, while
a 0 value indicates lack of knowledge. The function ψused for driving the search
over the space Sis defined upon a map distance function borrowed from picture
distance computation [8]. Given the maps m1and m2, the function is defined as
follows
ψ(m1, m2) = Σ_{c ∈ C} [ d(m1, m2, c) + d(m2, m1, c) ]

d(m1, m2, c) = ( Σ_{p1 : m1[p1] = c} min{ md(p1, p2) | m2[p2] = c } ) / #c(m1)

where
– C denotes the set of values assumed by m1 or m2,
– m1[p] denotes the value c of map m1 at position p = (x, y),
– md(p1, p2) = |x1 − x2| + |y1 − y2| is the Manhattan distance between p1 and p2,
– #c(m1) = #{p1 | m1[p1] = c} is the number of cells in m1 with value c.
Before computing ψ, we preprocess the maps m1 and m2, setting all positive values to 255 and all negative values to −255. In our case then C = {−255, 255}, i.e., locations mapped as unknown are neglected. A less obvious part of the linear time implementation of the picture distance function is the computation of the numerator in the d(m1, m2, c) equation. It is based on a so-called distance-map d-map_c for a value c. The distance-map is an array of the Manhattan distances to the nearest point with value c in map m2 for all positions p1 = (x1, y1):

d-map_c[x1][y1] = min{ md(p1, p2) | m2[p2] = c }

The distance-map d-map_c for a value c is used as a lookup table for the computation of the sum over all cells in m1 with value c. Figure 1 shows an example of a distance-map. Algorithm 2 gives the pseudocode for the three steps carried out
to build it, while the underlying principle is illustrated in Figure 2.

Fig. 1. A distance-map d-map_c: map cells with value c hold distance 0, and every other entry gives the Manhattan distance to the nearest cell with value c.
1: for y ← 0 to n−1 do
2:   for x ← 0 to n−1 do
3:     if M(x, y) = c then
4:       d-map_c[x][y] ← 0
5:     else
6:       d-map_c[x][y] ← ∞
7: for y ← 0 to n−1 do
8:   for x ← 0 to n−1 do
9:     h ← min(d-map_c[x−1][y] + 1, d-map_c[x][y−1] + 1)
10:    d-map_c[x][y] ← min(d-map_c[x][y], h)
11: for y ← n−1 downto 0 do
12:   for x ← n−1 downto 0 do
13:     h ← min(d-map_c[x+1][y] + 1, d-map_c[x][y+1] + 1)
14:    d-map_c[x][y] ← min(d-map_c[x][y], h)
Algorithm 2: The algorithm for computing d-map_c (entries outside the map are treated as ∞)
It can be appreciated that to build the lookup map it is necessary to scan the target map just three times. In this way it is possible to avoid the quadratic matching of each grid cell in m1 against each grid cell in m2.
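Under the assumption of square n×n integer maps, the three-pass construction of d-map_c and the resulting dissimilarity ψ can be sketched in Python as follows. The function names are ours, and values that never occur in one of the two maps are simply skipped:

```python
import numpy as np

INF = 10**9  # stands in for the infinity used in Algorithm 2

def distance_map(m, c):
    """Three-pass Manhattan distance transform: for every cell, the
    distance to the nearest cell of map m holding value c."""
    n = m.shape[0]
    d = np.where(m == c, 0, INF)           # pass 1: initialization
    for y in range(n):                     # pass 2: relax from left/top
        for x in range(n):
            if x > 0:
                d[y, x] = min(d[y, x], d[y, x - 1] + 1)
            if y > 0:
                d[y, x] = min(d[y, x], d[y - 1, x] + 1)
    for y in range(n - 1, -1, -1):         # pass 3: relax from right/bottom
        for x in range(n - 1, -1, -1):
            if x < n - 1:
                d[y, x] = min(d[y, x], d[y, x + 1] + 1)
            if y < n - 1:
                d[y, x] = min(d[y, x], d[y + 1, x] + 1)
    return d

def d_dir(m1, m2, c):
    """Directed term d(m1, m2, c): average distance from the c-cells of
    m1 to the nearest c-cell of m2, via the d-map lookup table."""
    cells = (m1 == c)
    if not cells.any() or not (m2 == c).any():
        return 0.0
    return distance_map(m2, c)[cells].sum() / cells.sum()

def psi(m1, m2, values=(-255, 255)):
    """Symmetric picture distance over the thresholded value set C."""
    return sum(d_dir(m1, m2, c) + d_dir(m2, m1, c) for c in values)

m = np.zeros((4, 4), dtype=int)
m[0, 0] = 255
print(distance_map(m, 255)[2, 3])  # Manhattan distance from (2,3) to (0,0)
print(psi(m, m))                   # identical maps have dissimilarity 0
```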
Fig. 2. The working principle for computing d-map_c: an initialization pass marks the cells with value c, followed by two relaxation steps that propagate distances from the already visited neighbor positions.
While implementing the Gaussian random walk algorithm one has to choose how to update µ_k, Σ_k and the sequence of mass distributions {f1, f2, . . .} used to accept or refuse sampled transformations which lead to an increment in the dissimilarity function ψ. For the experiments illustrated below we update µ_k at each stage to be a unit vector in the direction of the gradient. Only two different Σ_k matrices are used. If the last generated sample was accepted, Σ_k = 0.1I, where I is the 3×3 identity matrix. This choice pushes the algorithm to perform a gradient descent. If the last sample has not been accepted, Σ_k = 10I. This second choice gives the algorithm the possibility to perform big jumps when it has not been able to find a promising descent direction. The random decisor accepts a sampled transformation s with probability

2(BD − ψ(m1, T_s(m2))) / BD

where BD is the best dissimilarity value generated up to the current step. Figure 3 illustrates the result of the search procedure.
In particular, subfigure d shows the trend of the dissimilarity function for the sampled transformations generated by the algorithm. Many recurrent gradient descent stages can be observed, interleaved with exploration stages where wide spikes are generated as a consequence of samples generated far from the point currently being explored. In this example 200 iterations are enough to let the search algorithm find a transformation almost identical to the best one (which was determined by applying a brute force algorithm).
Fig. 3. Subfigures a and b illustrate the maps created by two robots while exploring two different parts of the same environment. To make the matching task more challenging the magnetic compass and the odometry system were differently calibrated. Subfigure c shows the best matching found after 200 iterations of the search algorithm.

4 Conclusions

We addressed the problem of map fusion in rescue robotics. The specific problem is to find a good matching of partially overlapping maps subject to a significant amount of noise. This means finding a suitable rotation and translation which optimize an overlapping quality index, and hence performing a search to detect the parameters which optimize that index. We introduced a theoretical framework, called the Gaussian random walk, and outlined that it generalizes some well known approaches for iterative improvement. In fact, it incorporates a few parameters that, when properly tuned, can yield techniques like hill-climbing or simulated annealing. The novel aspect of the proposed algorithm is the possibility to use time-variant random distributions, i.e. the distributions' parameters can be updated and tuned according to the already generated samples. It has in fact to be observed that most randomized approaches use stationary distributions, or distributions whose time dynamics are not influenced by the partial results already obtained.
The proposed algorithm has been applied to fuse maps produced by the robots we are currently using in the Real Rescue competition. Preliminary results confirm the effectiveness of the proposed technique, both in terms of result accuracy and computation speed.
References
1. Birk, A., Carpin, S., Kenn, H.: The IUB 2003 rescue robot team. In: RoboCup 2003.
Springer (2003)
2. Carpin, S., Kenn, H., Birk, A.: Autonomous mapping in the real robot rescue league.
In: RoboCup 2003. Springer (2003)
3. Parker, L.: Current state of the art in distributed autonomous mobile robots. In:
Parker, L., Bekey, G., Barhen, J., eds.: Distributed Autonomous Robotic Systems 4.
Springer (2000) 3–12
4. Carpin, S., Pillonetto, G.: Motion planning using adaptive random walks. IEEE
Transactions on Robotics and Automation (To appear)
5. Carpin, S., Pillonetto, G.: Learning sample distribution for randomized robot mo-
tion planning: role of history size. In: Proceedings of the 3rd International Confer-
ence on Artificial Intelligence and Applications, ACTA press (2003) 58–63
6. Russell, S., Norvig, P.: Artificial Intelligence: A Modern Approach. Prentice Hall
International (1995)
7. Papoulis, A.: Probability, Random Variables, and Stochastic Processes. McGraw-
Hill (1991)
8. Birk, A.: Learning geometric concepts with an evolutionary algorithm. In: Proc. of
The Fifth Annual Conference on Evolutionary Programming, The MIT Press, Cam-
bridge (1996)