Plane-Based Registration of Sonar Data for Underwater 3D Mapping
Kaustubh Pathak, Andreas Birk, and Narunas Vaskevicius*
Abstract
Surface-patch-based 3D mapping in a real-world underwater scenario is presented. It is based on a 6 degrees-of-freedom registration of sonar data. Planar surfaces are fitted to the sonar data, and the subsequent registration method maximizes the overall geometric consistency within a search-space to determine correspondences between the planes. This approach has previously only been used on high-quality range data from sensors on land robots, such as laser range finders. It is shown here that the algorithm is also applicable to very noisy, coarse sonar data. The 3D map presented is of a large underwater structure, namely the Lesumer Sperrwerk, a flood gate north of the city of Bremen, Germany. It was generated from 18 scans collected using a Tritech Eclipse sonar.
Final Version:
IEEE International Conference on Intelligent Robots and Systems (IROS), 2010

@inproceedings{Underwater3Dmapping-IROS10,
  author    = {Pathak, Kaustubh and Birk, Andreas and Vaskevicius, Narunas},
  title     = {Plane-Based Registration of Sonar Data for Underwater 3D Mapping},
  booktitle = {IEEE International Conference on Intelligent Robots and Systems (IROS)},
  pages     = {4880--4885},
  year      = {2010},
  type      = {Conference Proceedings}
}
1. Introduction
Maps are the core world models for autonomous mobile robots engaging in complex mission tasks.
*The authors are with the Dept. of Computer Science, Jacobs University Bremen, 28759 Bremen, Germany. [k.pathak, a.birk]@jacobs-university.de
While many successful solutions exist for 2D mapping by land robots - some even consider this a more or less solved problem [1][2] - it is still a major challenge in the underwater domain [3]. There are two main reasons for this. First, high-quality, high-resolution range sensors - especially laser range finders - are available for land robots, whereas underwater range sensors produce much coarser, noisier data at lower update frequencies. Second, land robots operate in environments where many obstacles exist that provide a basis for rich sets of natural landmarks for mapping, whereas this is rarely the case in underwater environments [4].
As a consequence, many underwater approaches to mapping rely on artificial markers, i.e., beacons at stationary positions, which have to be exactly known [5][6][7] or at least constrained, e.g., by the known depth of the ocean floor [8][9]. When natural landmarks are used in the underwater domain, they are usually highly environment-specific. Examples of application-specific landmarks are bubble plumes in shallow vent areas [10][11], complex floor topographies, e.g., along ridges [12], or visual features on visually rich ocean floors [13], especially at reefs [14] like the Great Barrier Reef [15][16].
Previous work on underwater mapping has predominantly dealt with 2D representations, which are sufficient for a wide range of applications. For underwater systems, one may argue that ground elevation as represented in classic bathymetric maps may be sufficient [17]. But underwater robots are increasingly used not only in open-sea applications but also in more complex environments like marinas, harbors, or at dams. 2D mapping in these environments may be sufficient for aiding a remote operator or for most simple tasks [18][19], but it is far from sufficient for any intelligent operation of AUVs. The work on 3D underwater mapping so far has mainly concentrated on vision-based approaches and on significant efforts to localize the vehicle [20, 13].
Here, to our knowledge for the first time, registration of sonar data is used to generate a 3D underwater surface-patch-based map. The registration method was introduced by the authors in [21], where typical land robot sensors were used for its validation. Sonar data is, in contrast, much coarser and noisier.
2. Plane-Segment Extraction and Matching
The scan-matching based on plane-segments consists of the following three steps:
1. Plane extraction from raw point-clouds: This procedure is based on region-growing in a range-image scan followed by a least-squares estimation of the plane parameters. The covariances of the plane parameters are computed as well. The details may be found in the previously published work of the authors [22].
2. Pose-registration by plane-matching: This step consists of two substeps:
(a) Finding the correspondences between plane-segments in the two scans to be matched. These two scans may be successive samples for normal registration, or non-successive if a loop is being closed.
(b) After the correspondences have been decided on, finding the optimal rotation and translation which aligns the corresponding sets of planes. This gives the pose change of the robot between the scans.
3. Polygonization: This step polygonizes each plane-segment by finding the boundary of each surface-patch so that the surface can be compactly described. This step is crucial for visualization of the result; however, if only pose registration is desired, it may be omitted. It is also described in [22].
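The core of step 1 can be illustrated with a minimal sketch: the least-squares plane normal is the direction of least variance of the segmented points. The helper `fit_plane` below is hypothetical and omits the region-growing and covariance propagation described in [22]:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit to an Nx3 array of points.

    Returns the unit normal n and offset d of the plane n . p = d,
    plus the smallest eigenvalue of the scatter matrix as a residual
    measure. (Hypothetical helper; the authors' extraction [22] also
    grows regions in the range image and propagates covariances.)
    """
    centroid = points.mean(axis=0)
    # The eigenvector of the scatter matrix with the smallest
    # eigenvalue is the least-squares plane normal.
    scatter = np.cov((points - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(scatter)   # ascending eigenvalues
    n = eigvecs[:, 0]                            # direction of least variance
    d = float(n @ centroid)                      # plane offset: n . p = d
    return n, d, float(eigvals[0])

# Example: noisy samples of the plane z = 2
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, 200),
                       rng.uniform(-1, 1, 200),
                       2.0 + rng.normal(0, 0.01, 200)])
n, d, res = fit_plane(pts)
```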
The registration method in the second step above uses planar patches extracted from the range data and maximizes the overall geometric consistency within a search-space to determine correspondences between planes. This method, named Minimum Uncertainty Maximum Consensus (MUMC), was introduced by the authors in [21]. The search-space is pruned using criteria such as overlap and size-similarity. For all these tests, only the plane parameter covariance matrix is employed, without the need to refer back to the original point-cloud. This makes the approach fast and reliable; its computation time increases with the number of planes. Finally, the covariance matrix of the solution is computed, which identifies the principal uncertainty directions. This information is indispensable for subsequent refinement processing like pose-graph-SLAM [23], although this is outside the scope of this paper.
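The consensus idea can be sketched in simplified form. A rigid transform preserves the angle between any two plane normals, so candidate correspondences can be tested pairwise for this invariant, and the largest mutually consistent set wins. This is only a stand-in for MUMC [21], which additionally weighs each test by the plane covariances and minimizes the solution uncertainty:

```python
import numpy as np

def consistent(pair_a, pair_b, tol=0.05):
    """Rotation-invariance check: a rigid transform preserves the angle
    between two plane normals, so a pair of candidate correspondences is
    geometrically consistent only if the inter-normal angles match."""
    ang_a = np.arccos(np.clip(pair_a[0] @ pair_a[1], -1.0, 1.0))
    ang_b = np.arccos(np.clip(pair_b[0] @ pair_b[1], -1.0, 1.0))
    return abs(ang_a - ang_b) < tol

def max_consensus(normals_a, normals_b, tol=0.05):
    """Greedy search for the largest mutually consistent set of plane
    correspondences (a simplified stand-in for MUMC)."""
    candidates = [(i, j) for i in range(len(normals_a))
                         for j in range(len(normals_b))]
    best = []
    for seed in candidates:
        consensus = [seed]
        for c in candidates:
            used_a = [p[0] for p in consensus]
            used_b = [p[1] for p in consensus]
            if c[0] in used_a or c[1] in used_b:
                continue  # each plane may be matched at most once
            if all(consistent((normals_a[p[0]], normals_a[c[0]]),
                              (normals_b[p[1]], normals_b[c[1]]), tol)
                   for p in consensus):
                consensus.append(c)
        if len(consensus) > len(best):
            best = consensus
    return best
```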
3. Experiments and Results
3.1. The Tritech Eclipse Sonar
The device used in the experiments presented here is a Tritech Eclipse sonar. It is a multi-beam sonar with time-delay beam-forming and electronic beam steering. Its core acoustic sensing parameters are:
Operating Frequency: 240 kHz
Beam Width: 120°
Number of Beams: 256
Acoustic Angular Resolution: 1.5°
Effective Angular Resolution: 0.5°
Depth/Range Resolution: 2.5 cm
Maximum Range: 120 m
Minimum Focus Distance: 0.4 m
Scan Rate: 140 Hz at 5 m, 7 Hz at 100 m
Please note that the scan rate depends on the resolution with which the scan is taken; high-resolution scans take longer.
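The fan geometry implied by these parameters can be sketched as follows. The conversion assumes beams evenly spread over the 120° fan; this is a simplified geometric illustration, not the Eclipse's actual beam-forming model:

```python
import math

def beam_to_point(range_m, beam_idx, n_beams=256, fov_deg=120.0):
    """Convert a multi-beam sonar return (range, beam index) into a 2D
    point in the sensor frame, assuming evenly spaced beams across the
    fan (an illustrative assumption, not the device's calibration)."""
    # Angle of this beam within the fan, centered on the sonar axis.
    angle = math.radians((beam_idx / (n_beams - 1) - 0.5) * fov_deg)
    return (range_m * math.cos(angle), range_m * math.sin(angle))

# Centre beams point straight ahead; edge beams sit at +/-60 degrees.
x, y = beam_to_point(10.0, 127)   # near-centre beam at 10 m range
```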
The core hardware parameters are:
Width: 342 mm
Height: 361 mm
Depth: 115 mm
Weight Wet/Dry: 9 kg / 19 kg
Depth Rating: 2500 m
Power Consumption: 60 W
Nominal Supply Voltage: 20-28 VDC
3.2. A 3D Map of the Lesumer Sperrwerk
The device was used to generate 18 scans of the Lesumer Sperrwerk, a river flood gate in the north of Bremen, Germany (figure 1). The overall area covered is approximately 110 m by 70 m. The sonar data is quite noisy and error-prone. Hence, a pre-filtering using a threshold on the intensity values was done, i.e., readings with a weak echo were discarded. In addition to a reduction in noise and in the overall amount of data,
Figure 1. An overview of the Lesumer Sperrwerk as seen from the river’s surface.
(a) scan 4, top view (b) scan 17, top view
(c) scan 4, perspective view (d) scan 17, perspective view
Figure 2. Examples of sonar scans as point clouds. As can be seen, the data is quite noisy.
Table 1. MUMC Parameters (units mm and radians). Compare with [21, Table II].

    Parameter   Value
    f_t%        50
    ε₁, c̄       10⁷, 5
    L̄_det       15
    χ̄²_ovlp     2
    χ̄²_×        5×10⁵
    χ̄²_δ        10
    χ̄²_t        χ²(1 dof, 5%) = 3.84
    κ           6
it led to a significant reduction of the field of view of the sonar to an opening angle of about 90° - instead of 120° - as the center is most illuminated by sound; an effect which is also described in the device's manual. Despite this simple pre-processing, the data is still quite noisy. Example point clouds from the scans are shown in figure 2. The scans have varying amounts of overlap, ranging from about 90 to 50 percent between consecutive scans.
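The intensity-based pre-filtering described above amounts to a simple threshold on the echo strength. A minimal sketch (the tuple layout and the threshold value are illustrative assumptions; the paper does not state its exact threshold):

```python
def prefilter(returns, min_intensity):
    """Discard weak echoes: keep only returns whose intensity reaches
    the threshold. 'returns' is a list of (x, y, z, intensity) tuples;
    the threshold value is scenario-dependent and assumed here."""
    return [(x, y, z) for (x, y, z, i) in returns if i >= min_intensity]

raw = [(1.0, 0.0, 2.0, 200),   # strong echo, kept
       (1.2, 0.1, 2.1, 15),    # weak echo, discarded
       (0.9, -0.2, 1.9, 120)]  # strong echo, kept
kept = prefilter(raw, min_intensity=100)
```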
Planes are fitted to the 18 scans with the previously described method [22]. One interesting side-effect of the plane-based representation is the compression of the data. The effect is even stronger in this experiment, where pre-processed, sub-sampled point cloud data is used. The average point cloud size here is 126 KB, whereas the planar patches average only 24 KB, i.e., smaller by a factor of more than 5 (figure 3).
The data is then turned into a 3D map with our plane-registration method MUMC. During the extraction phase, the uncertainties in the planes' parameters (normal and distance to the origin) were also computed as covariance matrices. For this, a key requirement is the availability of a sensor uncertainty model. Since a sonar's measurement error depends on a wide array of effects which are hard to model, we opted for assuming a constant standard deviation of σ = 1 meter for all beams. A more accurate model would definitely improve the covariance estimates for the extracted planes. Most of the consistency tests in MUMC [21] are χ²-tests in which these plane-covariances play a central role. Interestingly, we hardly changed the default thresholds in [21, Table II] for the sonar, although the defaults were computed based on sensors commonly used on land robots. This parameter table is reproduced here in Table 1 to show the exact values used for the Tritech Eclipse sonar. The lack of any substantial change in these values compared to other sensors shows that the method is robust as long as the sensor error model used is reasonable.
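The shape of such a χ²-test can be sketched as a Mahalanobis distance between two plane parameter vectors under their combined covariance. This is a conceptual illustration of the kind of test MUMC performs, not the paper's exact formulation, and the 4-DOF 5% threshold 9.49 is an illustrative choice:

```python
import numpy as np

def plane_chi2_test(p1, C1, p2, C2, chi2_thresh=9.49):
    """Chi-square consistency test between two plane parameter vectors
    p = (nx, ny, nz, d) with covariances C1, C2. Returns the squared
    Mahalanobis distance and whether the planes are consistent at the
    chosen threshold (9.49 = chi-square, 4 dof, 5%; illustrative)."""
    diff = np.asarray(p1, dtype=float) - np.asarray(p2, dtype=float)
    S = np.asarray(C1) + np.asarray(C2)          # combined uncertainty
    m2 = float(diff @ np.linalg.solve(S, diff))  # squared Mahalanobis distance
    return m2, m2 < chi2_thresh
```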
Please note that no motion sensors like an Inertial Navigation System (INS) or even an attitude sensor like a gyro were required. The resulting 3D map is shown in figure 5. It has a very reasonable correspondence with the real structure, as shown in an overlay of a top view with Google Earth (figure 4). The map is at least suited for some rough path-planning on an AUV.
The plane extraction takes about 0.9 to 1.4 seconds, and the polygonization of the patches - useful mainly for visualization or path-planning - takes 0.87 to 1.5 seconds. The actual registration, i.e., the plane matching, takes 8 to 56 seconds with an average of 31 seconds on a standard PC with an AMD Turion 64 X2 processor and 1 GB of RAM. Though this has not been the main focus of this paper, these run-times are still suitable for online computations on the vehicle, especially to occasionally map larger areas for online path-planning.
4. Conclusions
We presented the registration of sonar data to generate a 3D map in a real-world underwater scenario. We employ plane-based registration, which was previously introduced by the authors and had so far only been tested on quite high-quality range data from sensors on land robots. The plane-based registration decouples the determination of rotation and translation, and it is able to compute the uncertainty in pose-registration from the uncertainties in the plane parameters.
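The decoupling of rotation from translation rests on the fact that plane normals are unaffected by translation, so rotation can be solved first from corresponding normals alone. A minimal sketch using the standard Kabsch/SVD solution (unweighted; the paper's method [21] additionally weights each correspondence by its plane covariance):

```python
import numpy as np

def rotation_from_normals(normals_a, normals_b):
    """Optimal rotation R with R @ a_i ≈ b_i for corresponding unit
    normals (Kabsch/SVD solution). Normals are translation-invariant,
    so this determines rotation independently of translation."""
    A = np.asarray(normals_a)          # N x 3 normals from scan A
    B = np.asarray(normals_b)          # N x 3 corresponding normals, scan B
    H = A.T @ B                        # 3x3 cross-covariance of normal sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])         # guard against reflections
    return Vt.T @ D @ U.T
```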
It was shown that this recently introduced algorithm can cope even with the coarse and noisy data from a sonar. Concretely, the generation of a 3D map of a larger underwater structure in the form of a flood gate was presented. A Tritech Eclipse sonar was used to acquire 18 scans of the environment, which were successfully registered into one 3D map that corresponds well with the real structure.
Acknowledgments
The research leading to the results presented here has received funding from the European Community's Seventh Framework Programme (EU FP7) under grant agreement n. 231378 "Cooperative Cognitive Control for Autonomous Underwater Vehicles (Co3-AUVs)", http://www.Co3-AUVs.eu. Our previous work on plane-based registration for 3D mapping was supported by the German Research Foundation (DFG).
Figure 3. One important fringe benefit of the plane extraction is the significant compression of the data.
Figure 4. An overlay of the top-view of the 3D map of the Sperrwerk on an image from Google maps.
It can be seen that the map captures the real structure quite well.
Figure 5. Perspective views of the 3D map generated from the 18 registered scans. A comparison
with ground truth is shown in figure 4. Corresponding planar patches matched across two or more
scans are shown in the same color.
References
[1] S. Thrun, Robotic Mapping: A Survey. Morgan Kaufmann, 2002.
[2] U. Frese, "A discussion of simultaneous localization and mapping," Autonomous Robots, vol. 20, pp. 25–42, 2006.
[3] D. Walker, "XAUV: A modular highly maneuverable autonomous underwater vehicle," in Oceans 2007, 2007, pp. 1–4.
[4] P. Newman and H. Durrant-Whyte, "Using sonar in terrain-aided underwater navigation," in IEEE International Conference on Robotics and Automation (ICRA), vol. 1, 1998, pp. 440–445.
[5] S. Williams, P. Newman, G. Dissanayake, and H. Durrant-Whyte, "Autonomous underwater simultaneous localisation and map building," in IEEE International Conference on Robotics and Automation (ICRA), vol. 2, 2000, pp. 1793–1798.
[6] M. Kemp, B. Hobson, J. Meyer, R. Moody, H. Pinnix, and B. Schulz, "MASA: A multi-AUV underwater search and data acquisition system," in Oceans '02 MTS/IEEE, vol. 1, 2002, pp. 311–315.
[7] D. Thomson and S. Elson, "New generation acoustic positioning systems," in Oceans '02 MTS/IEEE, vol. 3, 2002, pp. 1312–1318.
[8] E. Olson, J. J. Leonard, and S. Teller, "Robust range-only beacon localization," IEEE Journal of Oceanic Engineering, vol. 31, no. 4, pp. 949–958, 2006.
[9] P. Newman and J. Leonard, "Pure range-only sub-sea SLAM," in IEEE International Conference on Robotics and Automation (ICRA), vol. 2, 2003, pp. 1921–1926.
[10] T. Maki, H. Kondo, T. Ura, and T. Sakamaki, "Navigation of an autonomous underwater vehicle for photo mosaicing of shallow vent areas," in OCEANS 2006 - Asia Pacific, 2006, pp. 1–7.
[11] ——, "Photo mosaicing of Tagiri shallow vent area by the AUV 'Tri-Dog 1' using a SLAM based navigation scheme," in OCEANS 2006, 2006, pp. 1–6.
[12] I. Nygren and M. Jansson, "Terrain navigation for underwater vehicles using the correlator method," IEEE Journal of Oceanic Engineering, vol. 29, no. 3, pp. 906–915, 2004.
[13] H. Madjidi and S. Nagahdaripour, "3-D photo-mosaicking of benthic environments," in OCEANS 2003, vol. 4, 2003, pp. 2317–2318.
[14] M. Dunbabin, P. Corke, and G. Buskey, "Low-cost vision-based AUV guidance system for reef navigation," in IEEE International Conference on Robotics and Automation (ICRA), vol. 1, 2004, pp. 7–12.
[15] S. Williams and I. Mahon, "Simultaneous localisation and mapping on the Great Barrier Reef," in IEEE International Conference on Robotics and Automation (ICRA), vol. 2, 2004, pp. 1771–1776.
[16] I. Mahon and S. Williams, "SLAM using natural features in an underwater environment," in 8th Control, Automation, Robotics and Vision Conference (ICARCV), vol. 3, 2004, pp. 2076–2081.
[17] D. Oskard, T.-H. Hong, and C. Shaffer, "Real-time algorithms and data structures for underwater mapping," IEEE Transactions on Systems, Man and Cybernetics, vol. 20, no. 6, pp. 1469–1475, 1990.
[18] D. Ribas, P. Ridao, J. D. Tardos, and J. Neira, "Underwater SLAM in a marina environment," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2007, pp. 1455–1460.
[19] D. Ribas, P. Ridao, J. Neira, and J. D. Tardos, "SLAM using an imaging sonar for partially structured underwater environments," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2006, pp. 5040–5045.
[20] B. A. am Ende, "3D mapping of underwater caves," IEEE Computer Graphics and Applications, vol. 21, pp. 14–20, 2001.
[21] K. Pathak, A. Birk, N. Vaskevicius, and J. Poppinga, "Fast registration based on noisy planes with unknown correspondences for 3D mapping," IEEE Transactions on Robotics, vol. 26, no. 2, pp. 1–18, March 2010.
[22] J. Poppinga, N. Vaskevicius, A. Birk, and K. Pathak, "Fast plane detection and polygonalization in noisy 3D range images," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Nice, France, 2008.
[23] K. Pathak, A. Birk, N. Vaskevicius, M. Pfingsthorn, S. Schwertfeger, and J. Poppinga, "Online 3D SLAM by registration of large planar surface segments and closed form pose-graph relaxation," Journal of Field Robotics, Special Issue on 3D Mapping, vol. 27, no. 1, pp. 52–84, 2010. [Online]. Available: http://robotics.jacobs-university.de/publications/JFR-3D-PlaneSLAM.pdf
... They are usually fitted on a tripod or ROV and need to be kept stationary during the scanning process. Pathak et al. [159] used Tritech Eclipse sonar, an MBS with delayed beam forming and electronic beam steering, to generate a final 3D map after 18 scans. On the basis of the region grown in distance image scanning, the plane was extracted from the original point cloud. ...
... Pathak [159] MBS A surface-patch-based 3D mapping in actual underwater scenery was proposed. It is based on 6DOF registration of sonar data. ...
Article
Full-text available
At present, 3D reconstruction technology is being gradually applied to underwater scenes and has become a hot research direction that is vital to human ocean exploration and development. Due to the rapid development of computer vision in recent years, optical image 3D reconstruction has become the mainstream method. Therefore, this paper focuses on optical image 3D reconstruction methods in the underwater environment. However, due to the wide application of sonar in underwater 3D reconstruction, this paper also introduces and summarizes the underwater 3D reconstruction based on acoustic image and optical–acoustic image fusion methods. First, this paper uses the Citespace software to visually analyze the existing literature of underwater images and intuitively analyze the hotspots and key research directions in this field. Second, the particularity of underwater environments compared with conventional systems is introduced. Two scientific problems are emphasized by engineering problems encountered in optical image reconstruction: underwater image degradation and the calibration of underwater cameras. Then, in the main part of this paper, we focus on the underwater 3D reconstruction methods based on optical images, acoustic images and optical–acoustic image fusion, reviewing the literature and classifying the existing solutions. Finally, potential advancements in this field in the future are considered.
... This approach needs a large number of planes to produce a reliable result as it uses least square techniques and consensus approach. This algorithm was also tested with coarse and noisy data from a sonar for underwater 3D mapping (Pathak et al., 2010a). ...
Preprint
Many applications including object reconstruction, robot guidance, and scene mapping require the registration of multiple views from a scene to generate a complete geometric and appearance model of it. In real situations, transformations between views are unknown an it is necessary to apply expert inference to estimate them. In the last few years, the emergence of low-cost depth-sensing cameras has strengthened the research on this topic, motivating a plethora of new applications. Although they have enough resolution and accuracy for many applications, some situations may not be solved with general state-of-the-art registration methods due to the Signal-to-Noise ratio (SNR) and the resolution of the data provided. The problem of working with low SNR data, in general terms, may appear in any 3D system, then it is necessary to propose novel solutions in this aspect. In this paper, we propose a method, {\mu}-MAR, able to both coarse and fine register sets of 3D points provided by low-cost depth-sensing cameras, despite it is not restricted to these sensors, into a common coordinate system. The method is able to overcome the noisy data problem by means of using a model-based solution of multiplane registration. Specifically, it iteratively registers 3D markers composed by multiple planes extracted from points of multiple views of the scene. As the markers and the object of interest are static in the scenario, the transformations obtained for the markers are applied to the object in order to reconstruct it. Experiments have been performed using synthetic and real data. The synthetic data allows a qualitative and quantitative evaluation by means of visual inspection and Hausdorff distance respectively. The real data experiments show the performance of the proposal using data acquired by a Primesense Carmine RGB-D sensor. The method has been compared to several state-of-the-art methods. The ...
... Current UW depth estimation methods can be divided into active and passive [54]. Active methods, which include different kinds of sonar [42,23,8], UW laser line-scanning [45,16], range- gated imaging systems [39], and LiDAR [46] are usually bulky. Also, their performance is limited by the scattering of light in water [51]. ...
... Sonar is a typical sensor for underwater measurement and positioning. It can capture 3D point cloud images from deep water environments of tens or even hundreds of meters [8,9]. Cho et al used acoustic lens-based multi beam sonar scans underwater structures to obtain point cloud data [10]. ...
Article
Full-text available
Underwater structure inspections are essential for infrastructure maintenance, such as hydraulic facilities, bridges, and ports. Due to the influence of turbidity, dark light, and distortion, the traditional methods cannot satisfy the requirements of on-site inspection applications. This paper proposed a methodology of the point cloud data capture in the turbid underwater environment. The method consisted of an acquisition device, a distortion correction algorithm, and a parameter optimization approach. The acquisition device was designed by composing a silt-removing module, a structured light camera module, and a clear water replacement module, which can integrate with an underwater inspection robot. The underwater multi-medium plane refraction distortion model was established through analysis, and a refraction correction algorithm was provided to correct the distortion. To obtain the maximum field of view of the point cloud, the nonlinear optimization approach was used to select the medium material and thickness. After the real experiments using the Intel RealSense sr300 depth camera, maximum measuring distance could range up to 253 mm in water, the accuracy of the point cloud of the underwater target objects was ±3.77 mm, and the maximum error was 8.76%. Compared with other methods, this method was more suitable for 3D point cloud capture in the turbidity environment. Keywords: structured light, underwater imaging, underwater 3D point cloud, image distortion, underwater measurement
Article
Exploiting stronger winds at offshore farms leads to a cyclical need for maintenance due to the harsh maritime conditions. While autonomous vehicles are the prone solution for O&M procedures, sub-sea phenomena induce severe data degradation that hinders the vessel’s 3D perception. This article demonstrates a hybrid underwater imaging system that is capable of retrieving tri-dimensional information: dense and textured Photogrammetric Stereo (PS) point clouds and multiple accurate sets of points through Light Stripe Ranging (LSR), that are combined into a single dense and accurate representation. Two novel fusion algorithms are introduced in this manuscript. A Joint Masked Regression (JMR) methodology propagates sparse LSR information towards the PS point cloud, exploiting homogeneous regions around each beam projection. Regression curves then correlate depth readings from both inputs to correct the stereo-based information. On the other hand, the learning-based solution (RHEA) follows an early-fusion approach where features are conjointly learned from a coupled representation of both 3D inputs. A synthetic-to-real training scheme is employed to bypass domain-adaptation stages, enabling direct deployment in underwater contexts. Evaluation is conducted through extensive trials in simulation, controlled underwater environments, and within a real application at the ATLANTIS Coastal Testbed. Both methods estimate improved output point clouds, with RHEA achieving an average RMSE of 0.0097m - a 52.45% improvement when compared to the PS input. Performance with real underwater information proves that RHEA is robust in dealing with degraded input information; JMR is more affected by missing information, excelling when the LSR data provides a complete representation of the scenario, and struggling otherwise.
Article
This paper analyzes the influence of underwater scattering on structured light-based 3-D reconstruction techniques and improves the performance of the 3-D reconstruction under turbid water condition. Two typical structured light-based 3-D reconstruction techniques, fringe projection profilometry (FPP) and single-pixel imaging-based metrology (SIM), are selected to reconstruct the 3-D shape of underwater objects. First, we formulate the error model of underwater FPP, and introduce a Hilbert transform-based method (HT-based FPP) to compensate the measurement error. Second, we first introduce SIM for the underwater 3-D reconstruction, analyze the SNR of underwater image reconstructed by single-pixel imaging, and propose a HSI-based visible region location method to reduce the measurement time. The provided simulation and experiment verify the correctness of the theoretical analysis and the effectiveness of the proposed methods, and compare FPP, HT-based FPP and SIM under water with different turbidities.
Article
Accurate underwater depth estimation is a cornerstone of reaching autonomous underwater exploration. However, it is incredibly tricky due to the inherent attenuation character and heavy noise. Fortunately, the depth-changing trend and underwater light attenuation are closely correlated, providing powerful clues for underwater depth estimation. Rather than simulating the underwater attenuation through formulas, we propose an underwater self-supervised depth estimation neural network in our work. With the guidance of multiple constraints, which are meticulously designed based on the comprehensive analyses of underwater characters, this network can learn the depth-changing trend by itself from attenuation information in underwater monocular videos. Our detailed experiments on underwater datasets prove that the proposed framework can obtain accurate and fine-grained depth maps. We believe the work may provide an economical solution for underwater perception.
Article
Full-text available
Conference Paper
Full-text available
In this paper we describe a system for underwater navigation with AUVs in partially structured environments, such as dams, ports or marine platforms. An imaging sonar is used to obtain information about the location of planar structures present in such environments. This information is incorporated into a feature-based SLAM algorithm in a two step process: (I) the full 360deg sonar scan is undistorted (to compensate for vehicle motion), thresholded and segmented to determine which measurements correspond to planar environment features and which should be ignored; and (2) SLAM proceeds once the data association is obtained: both the vehicle motion and the measurements whose correct association has been previously determined are incorporated in the SLAM algorithm. This two step delayed SLAM process allows to robustly determine the feature and vehicle locations in the presence of large amounts of spurious or unrelated measurements that might correspond to boats, rocks, etc. Preliminary experiments show the viability of the proposed approach
Conference Paper
Full-text available
A fast but nevertheless accurate approach for surface extraction from noisy 3D point clouds is presented. It consists of two parts, namely a plane fitting and a polygonalization step. Both exploit the sequential nature of 3D data acquisition on mobile robots in form of range images. For the plane fitting, this is used to revise the standard mathematical formulation to an incremental version, which allows a linear computation. For the polygonalization, the neighborhood relation in range images is exploited. Experiments are presented using a time-of-flight range camera in form of a Swissranger SR-3000. Results include lab scenes as well as data from two runs of the rescue robot league at the RoboCup German Open 2007 with 1,414, respectively 2,343 sensor snapshots. The 36 ldr 106, respectively 59 ldr 106 points from the two point clouds are reduced to about 14ldr103, respectively 23 ldr 103 planes with only about 0.2 sec of total computation time per snapshot while the robot moves along. Uncertainty analysis of the computed plane parameters is presented as well.
Conference Paper
Full-text available
This paper describes a navigation system for autonomous underwater vehicles (AUVs) in partially structured environments, such as dams, harbors, marinas or marine platforms. A mechanical scanning imaging sonar is used to obtain information about the location of planar structures present in such environments. A modified version of the Hough transform has been developed to extract line features, together with their uncertainty, from the continuous sonar dataflow. The information obtained is incorporated into a feature-based SLAM algorithm running an Extended Kalman Filter (EKF). Simultaneously, the AUV's position estimate is provided to the feature extraction algorithm to correct the distortions that the vehicle motion produces in the acoustic images. Experiments carried out in a marina located in the Costa Brava (Spain) with the Ictineu AUV show the viability of the proposed approach.
Article
We present a robot-pose-registration algorithm, which is entirely based on large planar-surface patches extracted from point clouds sampled from a three-dimensional (3-D) sensor. This approach offers an alternative to the traditional point-to-point iterative-closest-point (ICP) algorithm, its point-to-plane variant, as well as newer grid-based algorithms, such as the 3-D normal distribution transform (NDT). The simpler case of known plane correspondences is tackled first by deriving expressions for least-squares pose estimation considering plane-parameter uncertainty computed during plane extraction. Closed-form expressions for covariances are also derived. To round off the solution, we present a new algorithm, called minimally uncertain maximal consensus (MUMC), which determines the unknown plane correspondences by maximizing geometric consistency, i.e., by minimizing the uncertainty volume in configuration space. Experimental results from three 3-D sensors, viz., the Swiss-Ranger, the University of South Florida Odetics Laser Detection and Ranging sensor, and an actuated SICK S300, are given. The first two have narrow fields of view (FOV) and moderate ranges, while the third has a much larger FOV and range. Experimental results show that this approach is not only more robust than point- or grid-based approaches in plane-rich environments, but it is also faster, requires significantly less memory, and offers a less-cluttered planar-patches-based visualization.
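One sub-step of such plane-based registration can be sketched in a few lines (a hedged illustration, not the paper's method, which additionally weights by plane-parameter uncertainty and recovers translation from the plane offsets): given matched unit normals from two scans, the least-squares rotation aligning them follows from an SVD, the classic Wahba/Kabsch construction.

```python
import numpy as np

def rotation_from_normals(normals_a, normals_b):
    """Least-squares rotation R with R @ a_i ~ b_i for matched unit
    normals (Kabsch/Wahba solution via SVD). Illustrative only."""
    A = np.asarray(normals_a, dtype=float)   # one normal per row
    B = np.asarray(normals_b, dtype=float)
    H = A.T @ B                              # correlation matrix
    U, _, Vt = np.linalg.svd(H)
    # det correction guards against a reflection solution
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])
    return Vt.T @ D @ U.T

# usage: recover a 90-degree yaw from three matched normals
Rz = np.array([[0., -1., 0.],
               [1.,  0., 0.],
               [0.,  0., 1.]])
na = np.eye(3)                 # normals in frame A
nb = (Rz @ na.T).T             # the same normals seen in frame B
R = rotation_from_normals(na, nb)
```

Note that normals alone leave translation unobservable; in the plane-based formulation the translation is constrained separately by the matched plane distances.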
Article
A fast pose-graph relaxation technique is presented for enhancing the consistency of three-dimensional (3D) maps created by registering large planar surface patches. The surface patches are extracted from point clouds sampled from a 3D range sensor. The plane-based registration method offers an alternative to the state-of-the-art algorithms and provides advantages in terms of robustness, speed, and storage. One of its features is that it results in an accurate determination of rotation, although a lack of predominant surfaces in certain directions may result in translational uncertainty in those directions. Hence, a loop-closing and relaxation problem is formulated that gains significant speed by relaxing only the translational errors and utilizes the full-translation covariance determined during pairwise registration. This leads to a fast 3D simultaneous localization and mapping suited for online operations. The approach is tested in two disaster scenarios that were mapped at the NIST 2008 Response Robot Evaluation Exercise in Disaster City, Texas. The two data sets from a collapsed car park and a flooding disaster consist of 26 and 70 3D scans, respectively. The results of these experiments show that our approach can generate 3D maps without motion estimates by odometry and that it outperforms iterative closest point–based mapping with respect to speed and robustness.
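The translation-only relaxation idea lends itself to a compact sketch (illustrative, with the paper's covariance weighting omitted for brevity): once the rotations are fixed by pairwise registration, loop closing over relative translations t_ij becomes a linear least-squares problem that can be solved in closed form.

```python
import numpy as np

def relax_translations(n_nodes, edges):
    """Translation-only pose-graph relaxation (unweighted sketch).

    edges: list of (i, j, t_ij) meaning t_j - t_i = t_ij in 3-D.
    Node 0 is softly anchored at the origin; solves A t = b by
    least squares.
    """
    rows, b = [], []
    for i, j, t_ij in edges:
        for k in range(3):                 # one scalar row per axis
            row = np.zeros(3 * n_nodes)
            row[3 * j + k] = 1.0
            row[3 * i + k] = -1.0
            rows.append(row)
            b.append(t_ij[k])
    for k in range(3):                     # anchor node 0 at origin
        row = np.zeros(3 * n_nodes)
        row[k] = 1.0
        rows.append(row)
        b.append(0.0)
    A = np.vstack(rows)
    t, *_ = np.linalg.lstsq(A, np.array(b), rcond=None)
    return t.reshape(n_nodes, 3)

# three poses in a loop with slightly inconsistent odometry:
edges = [(0, 1, [1.0, 0.0, 0.0]),
         (1, 2, [1.0, 0.0, 0.0]),
         (0, 2, [1.9, 0.0, 0.0])]   # loop closure pulls node 2 back
T = relax_translations(3, edges)
```

The residual 0.1 m inconsistency is spread evenly over the chain, which is exactly the effect a full SLAM back-end achieves here, only with covariance-weighted rather than uniform edges.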
Article
This paper aims at a discussion of the structure of the SLAM problem. The analysis is not strictly formal but based both on informal studies and mathematical derivation. The first part highlights the structure of uncertainty of an estimated map with the key result being “Certainty of Relations despite Uncertainty of Positions”. A formal proof for approximate sparsity of so-called information matrices occurring in SLAM is sketched. It supports the above mentioned characterization and provides a foundation for algorithms based on sparse information matrices. Further, issues of nonlinearity and the duality between information and covariance matrices are discussed and related to common methods for solving SLAM. Finally, three requirements concerning map quality, storage space and computation time an ideal SLAM solution should have are proposed. The current state of the art is discussed with respect to these requirements including a formal specification of the term “map quality”.
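The sparsity observation can be made concrete with a toy example (an illustration constructed here, not taken from the paper): with 1-D poses x0..x4 and only consecutive relative measurements, the information matrix H = JᵀJ is exactly tridiagonal, coupling only directly measured pairs. In full SLAM, marginalizing out landmarks makes H approximately rather than exactly sparse, which is the point the sketched proof addresses.

```python
import numpy as np

# Jacobian of the measurements z_i = x_{i+1} - x_i for 5 poses:
# each row touches exactly the two poses it relates.
n = 5
J = np.zeros((n - 1, n))
for i in range(n - 1):
    J[i, i], J[i, i + 1] = -1.0, 1.0

H = J.T @ J                     # information matrix
nonzero = np.abs(H) > 1e-12     # sparsity pattern: tridiagonal
```

Every off-tridiagonal entry of H is exactly zero here; estimators that store H instead of its dense inverse (the covariance) exploit precisely this structure.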
Conference Paper
The XAUV is a new AUV developed for rapid algorithm and sensor development. It is also designed to complement existing ship-hull inspection efforts by moving quickly around a ship, generating a coarse map. The overall philosophy is to create a vehicle modular enough to experiment with new sensors and configurations, small enough to operate in small tanks, and coded to minimize the overhead required to test new SLAM and control algorithms. Physically, the vehicle is a stacked hull design, with control and sensing components on top and a swappable battery on the bottom. It has a 2-DOF servoed sensor mount on the front, allowing the vehicle to scan a ship or wall to the side while moving forward. The mount carries a camera and a blazed array, and can be refitted to carry a novel 3D camera and/or most other small sensors. The vehicle can also be fitted with up to eight control surfaces, four in front and four in back, which will allow nimble high-speed maneuvering. It uses two vertical and two forward thrusters for normal operation, but can also be fitted with servoed thrusters (one or two DOF each) for hovering and experimental applications. The control system runs a combination of MOOS, MATLAB, and VB applications, and can be booted with whichever is best suited to the developer, with the intention of making the interface between sensor inputs, high-level code, and thruster outputs as invisible as possible. The vehicle has been fully designed and is undergoing final manufacturing and initial testing and coding.
Conference Paper
This paper proposes a navigation scheme for an Autonomous Underwater Vehicle (AUV) for photo mosaicing of a shallow vent area with bubble plumes. While bubble plumes disturb acoustic positioning systems, they can be detected by sonars, so the method takes advantage of the plumes as landmarks using a profiling sonar. By adopting the concept of Simultaneous Localization and Mapping (SLAM), the method achieves drift-free, accurate, and independent positioning without using conventional acoustic transponders. The high positioning accuracy enables complete data acquisition, as well as position-based mosaicing without relying on pictorial correlations between the photos. Some artificial acoustic reflectors are also deployed to enhance positioning performance. The scheme was implemented on the testbed AUV "Tri-Dog 1" and its performance was verified through sea experiments at the Tagiri vent area, Kagoshima Bay, Japan. The AUV succeeded in fully autonomous observation in this challenging environment, building photo mosaics of more than 300 m of the floor, including a tube-worm colony.
Conference Paper
Although underwater vent areas are scientifically important, precise photo mosaicing of them remains a hard task, for the following reasons. First, the visible range is limited by turbid water and floating particles. Second, conventional acoustic positioning is vulnerable to the bubble plumes of the vent areas. In this paper, a navigation method for an autonomous underwater vehicle (AUV) for photo mosaicing of shallow vent areas where bubbles are spouting is proposed. The method simultaneously estimates the position of the AUV and of landmarks such as bubble plumes and artificial sonar reflectors. The simultaneous localization and mapping (SLAM) based approach enables drift-free, accurate, real-time, and independent navigation in a local area, which is suitable for photo mosaicing. The proposed method has been implemented on the testbed AUV "Tri-Dog 1" and a tank experiment has been carried out to verify its performance.