
Underwater Field Equipment of a Network of Landmarks Optimized for Automatic Detection by AI

L. Beaudoin, L. Avanthey

SEAL Research Team (Sense, Explore, Analyse and Learn), EPITA, 14-16 rue Voltaire, 94270 Le Kremlin-Bicêtre, France
Équipe Acquisition et Traitement, IGN, 73 avenue de Paris, 94165 Saint-Mandé, France
ABSTRACT
To qualify the point clouds obtained by 3D reconstruction of a global study area in close-range remote sensing, control points are used whose positions have been measured in the field, essentially manually, with an instrument of known precision. In the underwater environment, equipping the field and carrying out these measurements is a complex operation because of the peculiarities of the environment. We present in this article a first step towards the automation of this task: the automatic detection of targets by a deep learning algorithm, which will serve to correctly position the control points locally, and a simplification of the manual measurements, which will serve in future work to check the results of the automatic readings.
Index Terms — GCP network, underwater artificial landmarks, detection and identification by deep learning, quick manual in situ measurements
1. INTRODUCTION
In photogrammetry, the result of a 3D reconstruction is qualified by comparing the obtained model with in situ measurements on ground control points (GCP). This approach is valid in both aerial and underwater environments. The first experiments took place in the aerial world. Bringing these techniques directly to the underwater world is not possible because of the specificities of this environment (lower visibility, diffusion and diffraction phenomena, scene dynamism, etc.).
The measurements of the network of GCP are made manually. However, the operational constraints are much stronger in the underwater environment because of the limited access time to the site for divers and the difficulty of obtaining an absolute (no GPS signal) or relative (visibility of a few meters) positioning. There is therefore great interest in a fully automatic methodology for referencing GCP. The main advantage is to reduce the cost of manual measurements while increasing the accuracy of the overall network by increasing the number of GCP.
In this article, we present our experimental results on the first steps of this methodology. First, we present our work on artificial landmarks that serve as GCP and a deep learning algorithm dedicated to the automatic detection and identification of these GCP in an underwater environment. Then we present an original and operational methodology for quick in situ manual measurements of the GCP network, which will be useful to qualify the future results of the fully automatic method. The last section presents the results obtained on several underwater acquisition campaigns.
2. CONTEXT AND STATE OF THE ART
In close-range remote sensing, due to the short observation distance, the global reconstruction of a scene is built from a collection of local sub-scenes. This is even more marked in the underwater environment, where visibility is reduced to a few meters. In situ measurements of underwater GCP concern physical parameters such as depth, the distribution of and distances between GCP, etc. These GCP are used to qualify the precision of the model.
There are therefore two precision scales: local precision (within the same sub-scene) and global precision (over the complete scene), the latter notably including slow drifts that are difficult to detect at the local scale. Thus, one can have, for example, precise local reconstructions but an imprecise global reconstruction. In the literature, the techniques used to improve precision differ depending on which of these scales one is interested in.
For local precision, the classic method consists of using standard rulers (alternating colored bands representing known distances). For sites with a high visit rate that are studied for several days, such as archaeological sites, the scene is partially or totally equipped with fixed or mobile grids [1].
For global accuracy, the most widespread method, inherited from photogrammetric practice, consists of distributing specific markers over the study area (targets weighted, screwed or sunk into the field). These targets can also be uniquely colored and numbered, like archaeological labels. Measurements are then taken: the network of GCP thus obtained makes it possible to check the coherence of the reconstructed global scene.
Fig. 1. State-of-the-art GCP and measurement methods [6, 7].
In the underwater environment (see figure 1), where no absolute positioning is directly available, it is only possible to measure relative distances between GCPs [2, 3, 4, 5]. Their depth can be measured absolutely with a sensor or relatively with graduated vertical bars and level lines. When the surface is close, it is possible to reference all these measurements in an absolute coordinate system, for example by using vertical bars equipped with a GPS sensor or by external trilateration from a fixed point.
The common problem with all of these methods is the human and logistical cost. Indeed, they make extensive use of divers, who have limited access time to the site because of meteorological, physical (the deeper the site, the shorter the in situ time), administrative (number of dives per day) or logistical constraints.
To limit in situ manipulations, the characteristic points used as GCP could be natural, but the seabed has an extraordinary diversity: it can be uniform (sand), contain too much information (reef), indistinguishable information (grass or pebbles), information of variable quality (drop-off), etc. It is therefore very difficult to make a priori assumptions about the quality and distribution of the landmarks of an area. However, it is important to master these parameters because they impact the quality and precision of the estimate. For example, a common rule is to favor angles around 60 degrees between landmarks to optimize trilateration, as illustrated in the sketch below.
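As a toy illustration of this rule (our sketch, not part of the original methodology), the following Python fragment trilaterates a point in 2D from its distances to two landmarks of known position. When the geometry degenerates towards collinearity, the half-chord term h becomes very sensitive to small distance errors, which is why angles around 60 degrees are favored.

```python
# Minimal 2D trilateration sketch: the two measured distances define two
# circles whose intersections are the candidate positions of the new point.
import math

def trilaterate_2d(p0, p1, r0, r1):
    """Intersect two circles (center, radius); returns the two candidate points."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    d = math.hypot(dx, dy)
    if d > r0 + r1 or d < abs(r0 - r1):
        raise ValueError("no intersection: inconsistent distance measurements")
    a = (r0**2 - r1**2 + d**2) / (2 * d)   # distance from p0 to the radical line
    h = math.sqrt(max(r0**2 - a**2, 0.0))  # half-chord: small h = unstable fix
    mx, my = p0[0] + a * dx / d, p0[1] + a * dy / d
    return ((mx + h * dy / d, my - h * dx / d),
            (mx - h * dy / d, my + h * dx / d))

# Example: two landmarks 4 m apart and two 4 m distance measurements give an
# equilateral geometry, i.e. exactly 60 degrees between the lines of position.
print(trilaterate_2d((0.0, 0.0), (4.0, 0.0), 4.0, 4.0))
```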
In addition, even with well-distributed natural landmarks, it is very complicated to describe them in a manner precise enough, yet concise, to reference them and find them again in the images. The problem is even more complex at greater depths, because the orientation of the lighting completely changes the perception of the scene [8]. To overcome all these difficulties, we therefore chose to use artificial landmarks rather than relying on natural ones.
3. AUTOMATIC DETECTION OF
CHARACTERISTIC POINTS
We created our own landmarks (see figure 2), based on aerial work on automatic tracking [9], feedback from the underwater work of [10] and work on target detection by computer vision suited to the underwater world [11].
Fig. 2. Artificial landmarks optimized for the establishment of an underwater network of GCP detectable by computer vision.

To reduce the time required to equip the site and improve its relevance, our landmarks allow us to estimate local and global accuracy at the same time. Each landmark is a square subdivided into 9 tiles of known size that can be used as a 2D yardstick. Its corners, its center and those of its tiles can be localized in a robust way, both in situ and on the images.
Since we want our method to be fully automatic over time, we adapted the design of our landmarks to detection and identification algorithms. The periodic repetition of the same pattern on standard rulers generates indecision in an automatic process, and the dimensions of archaeological tags are too small to ensure proper detection. Moreover, these state-of-the-art landmarks are not robust to partial masking (algae, sand, fish, etc.), which is very common underwater.
Our landmarks measure 20 × 20 cm so as to be easily detected at a 2-3 m observation distance, and the choice of attenuated colors on light tiles is the best compromise we have found: it maintains a good contrast between neighboring tiles to facilitate detection without saturating the sensors, whatever the lighting conditions. The pattern of the top, bottom, right and left central tiles is unique and uses two complementary identification strategies (squares and arcs), whereas the central number is mainly used for manual referencing. The orientation of the landmark can be deduced automatically thanks to the asymmetry of the colored tiles and the orientation of their letters.
For automatic detection, we rely on the deep learning algorithm YOLO (You Only Look Once) [12]. We use its version 3, implemented in the Darknet framework. The lower layers of the network have been trained on the ImageNet base for general object detection. This allows us to use only a small sample of manually annotated images (fewer than fifty) to specialize the last layer on the detection of our landmarks in our environments (transfer learning). We train the network over more than 1000 iterations to find the center and the width of each landmark that can appear in the images. The main advantages of YOLO over its competitors are its speed and its robustness, due to its capacity to consider the whole image as informative context and to learn generalizable representations of objects. In addition, YOLO requires relatively little on-board processing capacity, which allows its use on small robotic platforms or very compact payloads handled by divers.
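As a hedged illustration of this detection step, the sketch below runs a Darknet-trained YOLOv3 model through OpenCV's dnn module. The file names (landmarks.cfg, landmarks.weights) are placeholders, not released artifacts of this work, and treating each landmark as a class is only one plausible setup since the paper separates detection from identification.

```python
# Sketch of YOLOv3 inference on an image, assuming a Darknet cfg/weights pair.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("landmarks.cfg", "landmarks.weights")
out_layers = net.getUnconnectedOutLayersNames()

def detect_landmarks(image, conf_threshold=0.5):
    h, w = image.shape[:2]
    blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    boxes = []
    for output in net.forward(out_layers):
        for det in output:              # det = [cx, cy, bw, bh, objectness, scores...]
            scores = det[5:]
            class_id = int(np.argmax(scores))
            conf = float(det[4] * scores[class_id])
            if conf >= conf_threshold:
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                boxes.append((class_id, conf, (cx, cy, bw, bh)))  # center + size
    return boxes
```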
Being able to carry out the detections in near real time and in situ will make it possible, when equipping the field, to position the landmarks optimally by estimating the global network from local points of view. This will also make it possible to automatically index the acquired images by the numbers of the detected landmarks and thus to classify the image sequences by a neighborhood criterion rather than a purely temporal one, as sketched below. It also allows a classification according to the angle of passage (orientation of the landmarks) inside these clusters, which helps adjust the algorithm used for registration during reconstruction.
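A minimal sketch of this neighborhood indexing (our illustration; the data structure is assumed, not the authors' implementation):

```python
# Group acquired images by the identifiers of the landmarks detected in them,
# so sequences are clustered by neighborhood rather than by acquisition time.
from collections import defaultdict

def index_by_landmark(detections):
    """detections: {image_name: set of detected landmark ids}."""
    clusters = defaultdict(list)
    for image, ids in detections.items():
        for landmark_id in ids:
            clusters[landmark_id].append(image)
    return clusters

detections = {"img_0001.jpg": {9, 5}, "img_0002.jpg": {5}, "img_0003.jpg": {2}}
print(index_by_landmark(detections)[5])   # all images seeing landmark 5
```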
4. NETWORK OF GCP
To assess the quality of the GCP network during the qualification phase of the fully automatic method, we need in situ manual measurements. But unlike aerial work, it is difficult to count on an absolute positioning of each landmark, since there is no GPS referencing underwater. We have seen in the state of the art that the positioning of the landmarks is done by a succession of relative measurements (distances between the landmarks) and by trilateration between neighboring landmarks of known depth (each landmark must be connected to at least three others). The classic methodology used in underwater archeology is particularly costly in diver resources. We have therefore developed an original method to optimize the use of dive time and make the results more robust.
The originality of this manual method is to postpone the measurement of the distance itself until after the dive. For this, the diver carries strands, each distinguishable by a unique number according to a system inspired by the Inca quipu, a submersible slate and a reel of rope whose end is connected to a weight.
The diver positions the weight on the corner of a landmark and unwinds the rope to the corner of a second, nearby landmark. He then marks the distance by fixing one of the strands on the rope and notes the references on the slate: the number of the strand, the numbers of both landmarks and their corresponding corner letters (for example "2: 9A-5M"). The depth of each landmark is measured with a sensor and associated with the corresponding number. Back from the mission, the measurements are made a posteriori on the rope for each strand to build the network of GCP, as in the sketch below.
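The following sketch (our illustration, assuming the slate note format of the example above) shows how the slate records and the post-dive strand measurements combine into the edge list of the network:

```python
# Parse quipu-style slate notes such as "2: 9A-5M" (strand 2 links corner A of
# landmark 9 to corner M of landmark 5) and attach the measured strand lengths.
import re

NOTE = re.compile(r"(\d+):\s*(\d+)([A-Z])-(\d+)([A-Z])")

def build_network(slate_lines, strand_lengths):
    """slate_lines: e.g. ['2: 9A-5M']; strand_lengths: {strand_no: meters}."""
    edges = []
    for line in slate_lines:
        strand, lm1, c1, lm2, c2 = NOTE.match(line).groups()
        edges.append(((int(lm1), c1), (int(lm2), c2), strand_lengths[int(strand)]))
    return edges

print(build_network(["2: 9A-5M"], {2: 3.50}))
# [((9, 'A'), (5, 'M'), 3.5)]
```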
One of the advantages of this method is to limit errors due to human measurements in often complicated diving conditions (cold, current, visibility, etc.). Another is the possibility of keeping an analog version of the measurements, so the metrics can be checked as many times as necessary.
5. RESULTS
Fig. 3. Some examples of partially masked or cut landmarks.

In order to evaluate the results of our automatic detection algorithm, we built a database of around 5000 images of our landmarks taken under different conditions (from clear to poor visibility) at sea and in pools. Approximately 33% of these images contain one landmark, 27% contain two, and 25% three or four. The rest (15%) contain no landmark at all. On average, 15% of the landmarks in the images are cut (edges) and 35% are partially obstructed (algae, fish, etc.; see figure 3). Our deep learning algorithm detects 84% of the landmarks. Given the size of our landmarks, a GSD (ground sample distance) of 20 to 50 mm is necessary to allow detection. If we remove the landmarks that do not meet this condition, our algorithm reaches a detection rate of 90%. The false detection rate is less than 1% and the multiple detection rate for the same object is less than 2%.
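To relate these GSD figures to image content, a back-of-the-envelope helper (our illustration, not the authors' analysis) gives the apparent size of a 20 × 20 cm landmark on the image:

```python
# Apparent size of the landmark in pixels for a given ground sample distance.
def landmark_pixels(gsd_mm, landmark_mm=200):
    """Number of pixels the 20 cm landmark spans at the given GSD (mm/pixel)."""
    return landmark_mm / gsd_mm

for gsd in (50, 20, 5, 2):
    print(f"GSD {gsd} mm -> {landmark_pixels(gsd):.0f} px across")
# Per the paper's figures, detection works from roughly 4-10 px across
# (GSD 20-50 mm), while identifying the tile pattern needs roughly
# 40-100 px across (GSD 2-5 mm).
```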
About 20% of the detected landmarks are not identifiable, either because the GSD does not allow it (a GSD of 2 to 5 mm is needed for identification), because the lighting is problematic (saturation or absence of light), or because the obstruction or cut is too severe. The algorithm manages to correctly identify the identifiable landmarks with a success rate greater than 86%. Finally, we note that detection and identification are very robust to partial obstruction, since more than 85% of the landmarks in this case are correctly classified.
We equipped the field and carried out the measurements (distances and depths) between landmarks to form a network with our methodology on three study areas located on the Mediterranean coast: Cap de Nice (300 m² area, depths from 4 to 15 m, pebbles, rocks, posidonia meadows and scree), Cap Ferrat (60 m², depths from 1.5 to 4 m, rocks and posidonia meadow) and Collioure (120 m², depths from 2 to 3 m, posidonia meadow, dead matte, rocks, broken pipes and gravel).
In the first area, we experimented with several configurations of the GCP network, starting from the distributions in circles or ellipses recommended by [1], to which secondary landmarks are added inside. We observed that a compromise between spatial and vertical equidistribution (marking the different depth levels) makes it possible to extract rich and very useful information for the analysis phase on the three-dimensional morphology of the study area.
The landmarks are placed on the seabed with a 1 kg weight so that they are not moved by swell or currents during the mission. To equip the Cap de Nice area, we estimate that around thirty landmarks are necessary, while ten are more than enough for the Collioure and Cap Ferrat areas, which corresponds on average to one landmark per 10 m².
On average over all the tested areas, a single diver needs about 1 minute to take a complete measurement (less than 15 m in length), including note taking (figure 4). The same measurements were made with the classic tape measure method: two divers are then mobilized and it takes more than 2 minutes on average per measurement.

Fig. 4. In situ manual measurements of the network of underwater landmarks.

Fig. 5. Collioure (left) and Cap Ferrat (right) GCP networks.
With such a method, we can expect measurement errors of around 2% of the distance (i.e. an accuracy of ±8 cm for landmarks 4 m apart), which is comparable with the results usually obtained with tape measures [13]. The reconstructed diagrams of the Collioure and Cap Ferrat networks are presented in figure 5; a sketch of such a reconstruction from the distance measurements is given below.
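Such diagrams can be recovered from the pairwise distances by a small nonlinear least-squares adjustment. The sketch below is our illustration (with made-up distances, SciPy assumed available), not the authors' tool; it fixes the gauge by pinning the first landmark at the origin and the second on the x-axis, leaving only a reflection ambiguity.

```python
# Estimate 2D landmark positions from measured pairwise distances.
import numpy as np
from scipy.optimize import least_squares

def adjust_network(n_points, edges):
    """edges: list of (i, j, measured_distance) with landmark indices i, j."""
    def unpack(x):
        pts = np.zeros((n_points, 2))
        pts[1, 0] = x[0]                 # landmark 1 pinned to the +x axis
        pts[2:] = x[1:].reshape(-1, 2)   # remaining landmarks free
        return pts

    def residuals(x):
        pts = unpack(x)
        return [np.linalg.norm(pts[i] - pts[j]) - d for i, j, d in edges]

    x0 = np.full(2 * n_points - 3, 2.0)  # crude initial guess
    return unpack(least_squares(residuals, x0).x)

# Illustrative distances in meters (not the paper's measurements).
edges = [(0, 1, 3.50), (1, 2, 3.04), (0, 2, 3.22)]
print(adjust_network(3, edges))
```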
6. CONCLUSION
We have proposed in this article a methodology to optimize the creation of networks of underwater GCP, both in speed and in quality, and to prepare the automation of the process. We have shown that a landmark design appropriate to underwater operational conditions allows automatic detection and identification on images in near real time by a deep learning algorithm, with a success rate greater than 77%. Most of the failures are linked either to the lighting (saturation) or to an observation distance that is too large (GSD insufficient for identification). The robustness of detection and identification by our algorithm to partial obstructions is greater than 86%. This point is important since, as we have seen, half of the landmarks in the underwater environment are affected (15% cut plus 35% partially obstructed).
We have also proposed a method to simplify in situ manual measurements, which consists in recording distance information via ropes and performing the metric measurements offsite. The procedure, less costly in time, can be carried out by a reduced team with a precision equal to conventional methods.
7. REFERENCES
[1] A. Bowens, Underwater Archaeology: The NAS Guide to Principles and Practice, 2011.
[2] P. Drap et al., "A Photogrammetric Process Driven by an Expert System: A New Approach for Underwater Archaeological Surveying Applied to the 'Grand Ribaud F' Etruscan Wreck," in CVPR, 2003, vol. 1.
[3] J. Henderson et al., "Mapping Submerged Archaeological Sites using Stereo-Vision Photogrammetry," International Journal of Nautical Archaeology, vol. 42, no. 2, pp. 243-256, 2013.
[4] S. Rubin et al., "Scuba Surveys to Assess Effects of Elwha Dam Removal on Shallow, Subtidal Benthic Communities," in Elwha River Science Symposium, 2011, pp. 41-43.
[5] D. Skarlatos et al., "Precision Potential of Underwater Networks for Archaeological Excavation Through Trilateration and Photogrammetry," ISPRS, vol. XLII-2/W10, pp. 175-180, 2019.
[6] E. Diamanti et al., "Geometric Documentation of Underwater Archaeological Sites," Geoinformatics FCE CTU, vol. 11, pp. 37-48, 2013.
[7] C. Balletti et al., "Underwater Photogrammetry and 3D Reconstruction of Marble Cargos Shipwreck," ISPRS, vol. XL-5/W5, 2015.
[8] S. Williams et al., "Repeated AUV Surveying of Urchin Barrens in North Eastern Tasmania," in ICRA, 2010, pp. 293-299.
[9] N. A. Matthews, "Aerial and Close-Range Photogrammetric Technology: Providing Resource Documentation, Interpretation, and Preservation," Tech. Rep., 2008.
[10] D. Skarlatos et al., "Photogrammetric Approaches for the Archaeological Mapping of the Mazotos Shipwreck," in STIAC, 2010.
[11] L. Avanthey et al., "Light-Weight Tools to Perform Local Dense 3D Reconstruction of Shallow Water Seabed," Sensors, vol. 16, no. 5, pp. 712-742, 2016.
[12] J. Redmon et al., "YOLOv3: An Incremental Improvement," arXiv:1804.02767, 2018.
[13] P. Holt, "An Assessment of Quality in Underwater Archaeological Surveys Using Tape Measurements," International Journal of Nautical Archaeology, vol. 32, no. 2, pp. 246-251, 2003.