Plane-Based Registration of Sonar Data for Underwater 3D
Mapping
Kaustubh Pathak, Andreas Birk, and Narunas Vaskevicius*
Abstract
Surface-patch-based 3D mapping in a real-world underwater scenario is presented. It is based on a six-degrees-of-freedom registration of sonar data. Planar surfaces are fitted to the sonar data and the subsequent registration method maximizes the overall geometric consistency within a search-space to determine correspondences between the planes. This approach has previously only been used on high-quality range data from sensors on land robots such as laser range finders. It is shown here that the algorithm is also applicable to very noisy, coarse sonar data. The 3D map presented is of a large underwater structure, namely the Lesumer Sperrwerk, a flood gate north of the city of Bremen, Germany. It is generated from 18 scans collected using a Tritech Eclipse sonar.
Final Version:
IEEE International Conference on Intelligent Robots
and Systems (IROS), 2010
@inproceedings{Underwater3Dmapping-IROS10,
author = {Pathak, Kaustubh and Birk, Andreas
and Vaskevicius, Narunas},
title = {Plane-Based Registration of Sonar
Data for Underwater 3D Mapping},
booktitle = {IEEE International Conference on
Intelligent Robots and Systems (IROS)},
pages = {4880--4885},
year = {2010},
type = {Conference Proceedings}
}
1. Introduction
Maps are the core world models for autonomous
mobile robots engaging in complex mission tasks.
*The authors are with the Dept. of Computer Science, Jacobs University Bremen, 28759 Bremen, Germany. [k.pathak, a.birk]@jacobs-university.de
While many successful solutions exist for 2D mapping by land robots, some even considering this a more or less solved problem [1][2], it is still a major challenge in the underwater domain [3]. There are two main reasons for this. First, high-quality, high-resolution range sensors, especially laser range finders, are available for land robots, whereas underwater range sensors produce much coarser, noisier data at lower update frequencies. Second, land robots operate in environments where many obstacles provide a basis for rich sets of natural landmarks for mapping, whereas this is rarely the case in underwater environments [4].
As a consequence, many underwater approaches to mapping rely on artificial markers, i.e., beacons at stationary positions, which have to be exactly known [5][6][7] or at least constrained, e.g., by the known depth of the ocean floor [8][9]. When natural landmarks are used in the underwater domain, they are usually highly environment-specific. Examples of application-specific landmarks are bubble plumes in shallow vent areas [10][11], complex floor topographies, e.g., along ridges [12], or visual features on visually rich ocean floors [13], especially at reefs [14] such as the Great Barrier Reef [15][16].
Previous work on underwater mapping has predominantly dealt with 2D representations, which are sufficient for a wide range of applications. For underwater systems, one may argue that ground elevation as represented in classic bathymetric maps may be sufficient [17]. But underwater robots are increasingly used not only in open-sea applications but also in more complex environments like marinas, harbors, or dams. 2D mapping in these environments may be sufficient for aiding a remote operator or for most simple tasks [18][19], but it is far from sufficient for any intelligent operation of AUVs. The work on 3D underwater mapping so far has mainly concentrated on vision-based approaches and on significant efforts to localize the vehicle [20][13].
Here, to the best of our knowledge for the first time, registration of sonar data is used to generate a 3D underwater surface-patch-based map. The registration method was introduced by the authors in [21], where typical land-robot sensors were used for its validation. Sonar data is, in contrast, much coarser and noisier.
2. Plane-Segment Extraction and Matching
The scan-matching based on plane-segments consists of the following three steps:
1. Plane extraction from raw point-clouds: This procedure is based on region-growing in a range-image scan followed by a least-squares estimation of the plane parameters. The covariances of the plane parameters are computed as well. The details may be found in the previously published work of the authors [22]; a minimal sketch of this step is given after this list.
2. Pose-registration by plane-matching: This step
consists of two substeps:
(a) Finding the correspondences between plane-segments in the two scans to be matched. These two scans may be successive samples for normal registration, or non-successive ones if a loop is being closed.
(b) After the correspondences have been decided on, finding the optimal rotation and translation which aligns the corresponding sets of planes. This gives the pose change of the robot between the scans.
3. Polygonization: This step consists of polygonizing each plane-segment by finding the boundary of each surface-patch so that the surface can be compactly described. This step is crucial for visualization of the result; however, if only pose registration is desired, it may be omitted. It is also described in [22].
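The details of the plane extraction are given in [22]; the following is only a minimal Python sketch of the least-squares fit in step 1 for one already-segmented region. The isotropic range-noise model (cf. the sigma = 1 m assumed in section 3.2) and the Gauss-Newton-style covariance approximation are our illustrative assumptions, not the authors' implementation.

# Minimal sketch: least-squares fit of a plane n.x = d to one segmented
# region of a range image, with an approximate parameter covariance.
import numpy as np

def fit_plane(points, sigma=1.0):
    """points: (N,3) array; sigma: assumed isotropic range noise [m]."""
    c = points.mean(axis=0)                 # centroid lies on the plane
    M = (points - c).T @ (points - c)       # 3x3 scatter about the centroid
    w, v = np.linalg.eigh(M)                # eigenvalues in ascending order
    n = v[:, 0]                             # normal = smallest eigenvector
    d = n @ c                               # signed distance to the origin
    # Approximate covariance of (n, d) from the Gauss-Newton Hessian of
    # sum_i (n.x_i - d)^2; pinv handles the gauge direction along (n, d).
    N = len(points)
    H = np.block([[M + N * np.outer(c, c), -N * c[:, None]],
                  [-N * c[None, :],        np.array([[float(N)]])]])
    cov = sigma**2 * np.linalg.pinv(H)
    return n, d, cov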
The registration method in the second step above
uses planar patches extracted from the range data and
maximizes the overall geometric consistency within
a search-space to determine correspondences between
planes. This method, named Minimum Uncertainty Maximum Consensus (MUMC), was introduced by the authors in [21]. The search-space is pruned using criteria such as overlap and size-similarity. For all these tests, only the plane parameter covariance matrix is employed, without the need to refer back to the original point-cloud. This makes the approach fast and reliable, although its computation-time increases with the number of planes. Finally, the covariance matrix of the solution is computed, which identifies the principal uncertainty directions. This information is indispensable for subsequent refinement processing like pose-graph-SLAM [23], although this is outside the scope of this paper.
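As an illustration of step 2(b), the pose follows in closed form from matched plane pairs: the rotation from the normals (a Wahba problem, solvable by SVD) and the translation from a linear system on the plane distances. The sketch below is an unweighted version under the convention n.x = d; MUMC [21] additionally weights both solves with the plane covariances. All names are ours.

# Recover (R, t) from K matched plane pairs (n1,d1) -> (n2,d2); K >= 3
# pairs with linearly independent normals are needed for a unique pose.
import numpy as np

def pose_from_plane_pairs(n1, d1, n2, d2):
    """n1, n2: (K,3) unit normals; d1, d2: (K,) plane distances."""
    # Rotation: minimize sum ||n2_i - R n1_i||^2 via SVD (Kabsch/Wahba).
    U, _, Vt = np.linalg.svd(n2.T @ n1)
    S = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])   # enforce det(R) = +1
    R = U @ S @ Vt
    # Translation: a plane maps as n2 = R n1, d2 = d1 + n2 . t, so solve
    # the linear least-squares system (R n1_i) . t = d2_i - d1_i.
    A = (R @ n1.T).T
    t, *_ = np.linalg.lstsq(A, d2 - d1, rcond=None)
    return R, t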
3. Experiments and Results
3.1. The Tritech Eclipse Sonar
The device used in the experiments presented here
is a Tritech Eclipse sonar. It is a multi-beam sonar with
time-delay beam-forming and electronic beam steering.
Its core acoustic sensing parameters are:
• Operating Frequency: 240 kHz
• Beam Width: 120°
• Number of Beams: 256
• Acoustic Angular Resolution: 1.5°
• Effective Angular Resolution: 0.5°
• Depth/Range Resolution: 2.5 cm
• Maximum Range: 120 m
• Minimum Focus Distance: 0.4 m
• Scan Rate: 140 Hz at 5 m, 7 Hz at 100 m
Please note that the scan rate depends on the resolution with which the scan is taken; high-resolution scans take longer.
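For orientation, the sketch below shows how the per-beam ranges of a single ping map to Cartesian coordinates in the 120° fan, using the beam count listed above. The flat-fan geometry and all names are assumptions for illustration and do not reflect the actual Eclipse interface.

# Illustration: one multibeam ping to 2D points in the fan plane.
import numpy as np

N_BEAMS = 256
FAN_DEG = 120.0
ANGLES = np.deg2rad(np.linspace(-FAN_DEG / 2, FAN_DEG / 2, N_BEAMS))

def ping_to_points(ranges):
    """ranges: (256,) per-beam ranges [m] -> (256, 2) points."""
    return np.stack([ranges * np.cos(ANGLES),    # along the sonar axis
                     ranges * np.sin(ANGLES)],   # across the fan
                    axis=1)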
The core hardware parameters are:
• Width: 342 mm
• Height: 361 mm
• Depth: 115 mm
• Weight (Wet/Dry): 9 kg / 19 kg
• Depth Rating: 2500 m
• Power Consumption: 60 W
• Nominal Supply Voltage: 20-28 VDC
3.2. A 3D Map of the Lesumer Sperrwerk
The device was used to generate 18 scans of the
Lesumer Sperrwerk, a river flood gate in the north of
Bremen, Germany (figure 1). The overall area covered
is approximately 110 m by 70 m. The sonar data is
quite noisy and error-prone. Hence, a pre-filtering using a threshold on the intensity values was done, i.e., readings with a weak echo were discarded. This reduced the noise and the overall amount of data.
Figure 1. An overview of the Lesumer Sperrwerk as seen from the river’s surface.
Figure 2. Examples of sonar scans as point clouds: (a) scan 4, top view; (b) scan 17, top view; (c) scan 4, perspective view; (d) scan 17, perspective view. As can be seen, the data is quite noisy.
Table 1. MUMC parameters (units: mm and radians). Compare with [21, Table II].

  Parameter          Value
  f_t%               50
  ε_1, c̄            10^-7, 5
  L̄_det             15
  χ̄²_ovlp           2
  χ̄²_×              5×10^5
  χ̄²_δ              10
  χ̄²_t, χ̄²_e       χ²_{1,5%} = 3.84
  κ                  6
In addition, the pre-filtering led to a significant reduction of the sonar's field of view to an opening angle of about 90° instead of 120°, as the center is illuminated most strongly by the sound; this effect is also described in the device's manual. Despite this simple pre-processing, the data is still quite noisy. Example point clouds from the scans are shown in figure 2. The overlap between consecutive scans varies between about 50 and 90 percent.
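A minimal version of this pre-filtering is sketched below, assuming per-beam intensity and angle arrays; the threshold value and the data layout are not specified in the paper and are placeholders here.

# Keep only strong echoes and the central ~90 degrees of the 120-degree fan.
import numpy as np

def prefilter_mask(intensities, angles, min_intensity=0.2,
                   max_angle=np.deg2rad(45.0)):
    strong = intensities >= min_intensity    # discard weak echoes
    central = np.abs(angles) <= max_angle    # central 90-degree opening
    return strong & central                  # boolean mask of beams to keep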
Planes are fitted to the 18 scans with the previously described method [22]. One interesting side-effect of the plane-based representation is the compression of the data. The effect is even stronger in this experiment, where pre-processed, sub-sampled point-cloud data is used. The average point-cloud size is here 126 KB, whereas the planar patches average only 24 KB, i.e., they are smaller by a factor of more than 5 (figure 3).
The data is then turned into a 3D map with our plane-registration method MUMC. During the extraction phase, the uncertainties in the planes' parameters (normal and distance to the origin) were also computed as covariance matrices. For this, a key requirement is the availability of a sensor uncertainty model. Since a sonar's measurement error depends on a wide array of effects which are hard to model, we opted for assuming a constant standard deviation of σ = 1 meter for all beams. A more accurate model would certainly improve the covariance estimates for the extracted planes.
Most of the consistency tests in MUMC [21] are based on χ²-tests in which these plane-covariances play a central role. Interestingly, we hardly changed the default thresholds of [21, Table II] for the sonar, although the defaults were derived for sensors commonly used on land robots. The parameter table is reproduced here as Table 1 to show the exact values used for the Tritech Eclipse sonar. The lack of any substantial change in these values compared to other sensors shows that the method is robust as long as the sensor error model is reasonable.
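As a schematic example of such a gate, the sketch below accepts a candidate plane pair if the Mahalanobis distance between the parameter vectors passes a χ² threshold. The actual MUMC tests in [21] are more differentiated (separate overlap, cross-consistency, and translation tests with the thresholds of Table 1); stacking (n, d) into one 4-vector is our simplification.

# Chi-square consistency gate between two planes with covariances.
import numpy as np

def planes_consistent(p1, cov1, p2, cov2, chi2_max=3.84):
    """p1, p2: 4-vectors (nx, ny, nz, d); cov1, cov2: 4x4 covariances.
    chi2_max = 3.84 is the 5% critical value for 1 DoF, as in Table 1."""
    diff = p1 - p2
    m2 = diff @ np.linalg.pinv(cov1 + cov2) @ diff   # Mahalanobis distance^2
    return m2 <= chi2_max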
Please note that no motion sensors like an Inertial Navigation System (INS) or even an attitude sensor like a gyro were required. The resulting 3D map is shown in figure 5. It corresponds very reasonably with the real structure, as shown in an overlay of a top view with Google Earth (figure 4). The map is at least suited to be used for rough path-planning on an AUV.
The plane extraction takes about 0.9 to 1.4 seconds and the polygonization of the patches, useful mainly for visualization or path-planning, takes 0.87 to 1.5 seconds. The actual registration, i.e., the plane matching, takes 8 to 56 seconds with an average of 31 seconds on a standard PC with an AMD Turion 64 X2 processor and 1 GB of RAM. Though run-time has not been the main focus of this paper, these timings are still suitable for online computation on the vehicle, especially to occasionally map larger areas for online path-planning.
4. Conclusions
We presented the registration of sonar data to generate a 3D map in a real-world underwater scenario. We employ plane-based registration, which was previously introduced by the authors and had so far only been tested on quite high-quality range data from sensors on land robots. The plane-based registration decouples the determination of rotation and translation, and it is able to compute the uncertainty in pose-registration from the uncertainties in the plane parameters.
It was shown that this recently introduced algorithm can cope even with the coarse and noisy data from a sonar. Concretely, the generation of a 3D map of a larger underwater structure in the form of a flood gate was presented. A Tritech Eclipse sonar was used to acquire 18 scans of the environment, which were successfully registered into one 3D map that corresponds well with the real structure.
Acknowledgments
The research leading to the results presented here
has received funding from the European Community’s
Seventh Framework Programme (EU FP7) under grant agreement no. 231378 "Cooperative Cognitive Control for Autonomous Underwater Vehicles (Co3-AUVs)", http://www.Co3-AUVs.eu. Our previous work on plane-based registration for 3D mapping was supported by the German Research Foundation (DFG).
Figure 3. One important fringe benefit of the plane extraction is the significant compression of the
data.
Figure 4. An overlay of the top-view of the 3D map of the Sperrwerk on an image from Google maps.
It can be seen that the map captures the real structure quite well.
Figure 5. Perspective views of the 3D map generated from the 18 registered scans. A comparison
with ground truth is shown in figure 4. Corresponding planar patches matched across two or more
scans are shown in the same color.
References
[1] S. Thrun, Robotic Mapping: A Survey. Morgan Kauf-
mann, 2002.
[2] U. Frese, “A discussion of simultaneous localization
and mapping,” Autonomous Robots, vol. 20, pp. 25–42,
2006.
[3] D. Walker, "XAUV: A modular highly maneuverable autonomous underwater vehicle," in Oceans 2007, 2007, pp. 1-4.
[4] P. Newman and H. Durrant-Whyte, "Using sonar in terrain-aided underwater navigation," in Proc. 1998 IEEE International Conference on Robotics and Automation, vol. 1, 1998, pp. 440-445.
[5] S. Williams, P. Newman, G. Dissanayake, and H. Durrant-Whyte, "Autonomous underwater simultaneous localisation and map building," in Proc. IEEE International Conference on Robotics and Automation (ICRA '00), vol. 2, 2000, pp. 1793-1798.
[6] M. Kemp, B. Hobson, J. Meyer, R. Moody, H. Pinnix, and B. Schulz, "MASA: A multi-AUV underwater search and data acquisition system," in Oceans '02 MTS/IEEE, vol. 1, 2002, pp. 311-315.
[7] D. Thomson and S. Elson, "New generation acoustic positioning systems," in Oceans '02 MTS/IEEE, vol. 3, 2002, pp. 1312-1318.
[8] E. Olson, J. J. Leonard, and S. Teller, "Robust range-only beacon localization," IEEE Journal of Oceanic Engineering, vol. 31, no. 4, pp. 949-958, 2006.
[9] P. Newman and J. Leonard, "Pure range-only sub-sea SLAM," in Proc. IEEE International Conference on Robotics and Automation (ICRA '03), vol. 2, 2003, pp. 1921-1926.
[10] T. Maki, H. Kondo, T. Ura, and T. Sakamaki, "Navigation of an autonomous underwater vehicle for photo mosaicing of shallow vent areas," in OCEANS 2006 - Asia Pacific, 2006, pp. 1-7.
[11] ——, "Photo mosaicing of Tagiri shallow vent area by the AUV 'Tri-Dog 1' using a SLAM based navigation scheme," in OCEANS 2006, 2006, pp. 1-6.
[12] I. Nygren and M. Jansson, "Terrain navigation for underwater vehicles using the correlator method," IEEE Journal of Oceanic Engineering, vol. 29, no. 3, pp. 906-915, 2004.
[13] H. Madjidi and S. Nagahdaripour, "3-D photo-mosaicking of benthic environments," in Proc. OCEANS 2003, vol. 4, 2003, pp. 2317-2318.
[14] M. Dunbabin, P. Corke, and G. Buskey, "Low-cost vision-based AUV guidance system for reef navigation," in Proc. IEEE International Conference on Robotics and Automation (ICRA '04), vol. 1, 2004, pp. 7-12.
[15] S. Williams and I. Mahon, "Simultaneous localisation and mapping on the Great Barrier Reef," in Proc. IEEE International Conference on Robotics and Automation (ICRA '04), vol. 2, 2004, pp. 1771-1776.
[16] I. Mahon and S. Williams, "SLAM using natural features in an underwater environment," in Proc. 8th Control, Automation, Robotics and Vision Conference (ICARCV 2004), vol. 3, 2004, pp. 2076-2081.
[17] D. Oskard, T.-H. Hong, and C. Shaffer, "Real-time algorithms and data structures for underwater mapping," IEEE Transactions on Systems, Man and Cybernetics, vol. 20, no. 6, pp. 1469-1475, 1990.
[18] D. Ribas, P. Ridao, J. D. Tardos, and J. Neira, "Underwater SLAM in a marina environment," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2007, pp. 1455-1460.
[19] D. Ribas, P. Ridao, J. Neira, and J. D. Tardos, "SLAM using an imaging sonar for partially structured underwater environments," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2006, pp. 5040-5045.
[20] B. A. am Ende, "3D mapping of underwater caves," IEEE Computer Graphics and Applications, vol. 21, pp. 14-20, 2001.
[21] K. Pathak, A. Birk, N. Vaskevicius, and J. Poppinga, "Fast registration based on noisy planes with unknown correspondences for 3D mapping," IEEE Transactions on Robotics, vol. 26, no. 2, pp. 1-18, March 2010.
[22] J. Poppinga, N. Vaskevicius, A. Birk, and K. Pathak,
“Fast plane detection and polygonalization in noisy 3D
range images,” in IEEE Int. Conf. on Intelligent Robots
and Systems (IROS), Nice, France, 2008.
[23] K. Pathak, A. Birk, N. Vaskevicius, M. Pfingsthorn, S. Schwertfeger, and J. Poppinga, "Online 3D SLAM by registration of large planar surface segments and closed form pose-graph relaxation," Journal of Field Robotics, Special Issue on 3D Mapping, vol. 27, no. 1, pp. 52-84, 2010. [Online]. Available: http://robotics.jacobs-university.de/publications/JFR-3D-PlaneSLAM.pdf