IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING (AUTHOR’S MANUSCRIPT) 1
Non-Parametric Empirical Depth Regression for
Bathymetric Mapping in Coastal Waters
Jared Kibele and Nick T. Shears
Abstract—Existing empirical methods for estimation of
bathymetry from multispectral satellite imagery are based on
simplified radiative transfer models that assume transformed
radiance values will have a linear relationship with depth. How-
ever, application of these methods in temperate coastal waters of
New Zealand demonstrates that this assumption does not always
hold true and consequently existing methods perform poorly. A
new purely empirical method based on a non-parametric nearest
neighbor regression is proposed and applied to WorldView-
2 and WorldView-3 imagery of temperate reefs dominated by
submerged kelp forests interspersed with other bottom types of
varying albedo including reef devoid of kelp and large patches
of sand. Multibeam sonar data are used to train and validate
the model and results are compared with those from a widely
used linear empirical method. Free and open source Python
code was developed for the implementation of both methods
and is presented for use. Given sufficient training data, the
proposed method provided greater accuracy (0.8m RMSE) than
the linear empirical method (2.2m RMSE) and depth errors were
less dependent on bottom type. The proposed method has great
potential as an efficient and inexpensive method for the estimation
of high spatial resolution bathymetry over large areas in a wide
range of coastal environments.
Index Terms—Bathymetry mapping, depth estimation,
WorldView-2 (WV2), K Nearest Neighbor (KNN), free and open
source software (FOSS).
I. INTRODUCTION
BATHYMETRY maps are essential for navigation [1],
resource management [2], [3], [4], [5], and as an interme-
diate product used for mapping of coastal marine habitats [6],
[7], [8]. While the estimation of bathymetry using multispec-
tral satellite imagery may not provide the accuracy attainable
by boat-based acoustic methods, it does offer a number of
advantages. Image-based methods can provide bathymetry
estimates from the intertidal and shallow subtidal zone (e.g. <
5 m depth), where the collection of sonar data is problematic,
down to depths of around 20 m depending on a number
of factors including water clarity and bottom reflectance.
Synthetic Aperture Radar (SAR) can provide detailed shallow-water bathymetry when combined with acoustic methods in areas with relatively strong currents [9] but is less reliable when currents are weak. The primary advantages of image-based methods are their comparatively low cost and the large spatial scales over which bathymetry information can be obtained from readily available imagery. Correspondingly, this provides a potentially efficient and more accessible approach to bathymetric mapping for researchers and resource managers.

This is the author's manuscript, and may differ from the print version in layout and page numbering. Manuscript received January 14, 2016; revised June 17, 2016 and August 2, 2016; accepted August 2, 2016. This work was supported in part by the Auckland Council, in part by the DigitalGlobe Foundation for the imagery grant, in part by the Leigh Marine Laboratory for funding the multibeam survey, and Waikato University along with Discovery Marine Ltd. for carrying it out. The work of N. T. Shears was supported by the Royal Society of New Zealand Rutherford Discovery Fellowship under Grant RDFUOA1103.

The authors are with the Leigh Marine Laboratory, University of Auckland, Auckland 1010, New Zealand (e-mail: jkibele@gmail.com; n.shears@auckland.ac.nz).

Digital Object Identifier of print version: 10.1109/JSTARS.2016.2598152
Many methods have been described in the literature for
the estimation of depth from satellite imagery [10], [11].
These methods however have one or more of the following
drawbacks: 1) They are complex, difficult to implement and
require specialized skills [11], 2) are dependent on field
measurements with costly equipment [12], and 3) are limited
in applicability to a narrow range of environmental conditions
(e.g. clear water and/or uniform bottom reflectance) [1]. There-
fore current methods do not provide a widely accessible and
cost effective means of obtaining bathymetric data over large
spatial scales. Despite previous efforts aimed at making these
methods more accessible to workers outside the field of remote
sensing [13], there is still considerable specialized knowledge
required to choose and implement an effective method. The
difficulty of implementation is due in part to the complexity
of the methods, but is also compounded by the paucity of free
and open source software (FOSS) or even proprietary software
available for use. As it stands, the selection and application
of existing methods for depth estimation requires specialized
skills and knowledge that greatly restrict their use.
This paper proposes a method that can be carried out
easily and at low cost without extensive knowledge of optical
remote sensing. The method is based on K nearest neighbor
regression (KNN), a non-parametric method commonly used
in machine learning [14]. Only 2 inputs are required: the
multispectral image itself and a set of known depths to train
the KNN model. This method is fully empirical in that it is not
based on modeling the complex physics of the transmission
of light through water. The only assumption inherent in this
method is that radiance across the bands of the image will
have a similar relationship to depth in training pixels (the
subset with known depths) as in the unknown pixels. Conse-
quently, this method is potentially capable of producing useful
results across a broader range of environmental conditions
than previous methods based on simplified physical models.
Furthermore, atmospheric correction is not required and should
have little effect. A free and open source Python library is
provided for application of the proposed method and related
tasks (http://jkibele.github.io/OpticalRS/).
In this study, the proposed method will be applied to
WorldView-2 (WV2) and WorldView-3 (WV3) imagery of
Cape Rodney in north eastern New Zealand (Fig 1) using
Fig. 1. RGB composite of the WV2 imagery of the study region in northeastern New Zealand overlaid with the footprint of the multibeam survey: depths less than 20m are outlined in red and those greater than 20m are outlined and hatched in black. An area of sandy bottom is outlined and hatched in yellow and a small area of kelp-covered bottom is shown in green.
depth information from a multibeam sonar survey to both
train the model and for accuracy assessment. The resulting
bathymetry is then compared with that from a physics-based
empirical depth estimation method [15] applied to the same
data. Lyzenga’s method was chosen because it has been widely
used and can be applied, like the proposed KNN method, using
imagery and a subset of known depths as the only required
inputs.
This paper is laid out as follows: Section II describes the
satellite imagery and the multibeam bathymetric data used
in this study. Section III describes the preprocessing of the
imagery and depth data. Section IV describes the application
of the proposed KNN method and the application of Lyzenga’s
depth estimation method to the same data and presents the
resulting bathymetry from both methods. Section V presents
an assessment and comparison of the performance of the two
methods in regard to overall accuracy, spatial distribution of
error, sensitivity to quantity of training points, and estimate
performance relative to maximum depth. Section VI discusses
the implications of the results and provides final conclusions.
II. DATA ACQUISITION
The study area is the waters in and around the Cape
Rodney to Okakari Point (CROP) marine reserve, northeastern
New Zealand. CROP is New Zealand’s oldest marine reserve
[16] and has been the subject of multiple habitat mapping
studies since its inception in 1977 [17], [18], [19]. CROP is
located on an open coast in the outer Hauraki Gulf away from
large riverine inputs of sediment, with mean total suspended
solid (TSS) measurements of 4.13 mg/L [20], Secchi depths
typically between 6.5 and 10 meters (1st and 3rd quartile)
(Shears, unpublished data), and chlorophyll-a concentrations
with a mean of 0.87 µg/L (Leigh Marine Laboratory, unpublished data). These waters are relatively clear for a temperate reef environment, but they represent a challenge compared to the coral reef environments where optical remote sensing methods are more frequently employed.
Australia’s Great Barrier Reef, for example, has average
Secchi depths of 11.5 meters and an average chlorophyll
concentration of 0.32 µg/L [21] with suspended particulate
matter (SPM) concentrations near 2 mg/L [22]. Bottom types
within the study area range from high albedo terrigenous sands
[23] to dense stands of the kelp Ecklonia radiata with very
low albedo. A range of other habitats form bottom types of
intermediate albedo [17], [24].
A. Multibeam Sonar
The multibeam depth data were collected by Discovery
Marine Ltd. (www.dmlsurveys.co.nz) in May 2014. The survey
was completed using a RS Sonic 2022 multibeam echosounder
on board the 7m survey vessel Pandora. Positioning was via a
Trimble RTK GNSS and Posmov motion compensator. Sound
velocity profiles were recorded with an AML minos SVP.
Data were acquired with QINSy 8.1 navigation software and
processed with QINSy 8.1 and Fledermaus 7.4. Tidal data were
recorded at 5 minute intervals for the duration of the survey
and the bathymetric data were reduced to Auckland MSL 1946
vertical datum and supplied to the authors as xyz point data.
B. Multispectral Imagery
The 8-band WV2 and WV3 imagery for this study was
supplied by the DigitalGlobe Foundation under an imagery
grant. The WV2 image was acquired by the satellite at 22:53 UTC on 18 January 2014. Solar azimuth and elevation were 66.1° and 60.1° respectively. Satellite azimuth and elevation were 46.2° and 63.8°, and the off-nadir view angle was 23.1°. The imagery was geometrically corrected and orthorectified by DigitalGlobe and delivered at the 'Standard 2A' product level [25].
The WV3 image was acquired by the satellite at 22:15 UTC on 12 January 2015. Solar azimuth and elevation were 72.6° and 57.5° respectively. Satellite azimuth and elevation were 62.8° and 72.8°, and the off-nadir view angle was 15.6°. The imagery was geometrically corrected and orthorectified by DigitalGlobe and delivered at the 'Standard 2A' product level [25].
III. DATA PREPROCESSING
WV2 imagery, WV3 imagery, and multibeam data sets
required some preprocessing before the KNN and Lyzenga
[26], [15] depth estimation methods could be applied.
A. Multibeam Sonar
Multibeam depths needed adjustment for tide and to be
matched to the imagery in terms of resolution and projection.
The xyz point data were converted to GeoTiff format using
GRASS [27], reprojected to UTM zone 60 south, and corrected
to chart datum using data from Land Information New Zealand
(LINZ) [28]. Additional data from LINZ were then used
to calculate the height of tide above chart datum at image
acquisition time. This was found to be 2.34m and 1.74m
for the WV2 and WV3 images respectively. These values
were added to chart datum depths to create depth data sets
tailored to each image, and the resulting GeoTiffs were then
downsampled to match the spatial resolution of the WV2
imagery using QGIS [29].
B. Multispectral Imagery
In order to facilitate a direct comparison, the higher spatial
resolution (1.4m) WV3 imagery was downsampled (using
bilinear resampling) to match the WV2 imagery (2.0m) using
GDAL warp [30]. The WV2 and WV3 imagery required minor
preprocessing that was common to both depth estimation
methods and some additional preprocessing for the Lyzenga
method. The land was masked from the images by thresholding
the NIR2 band and eliminating unmasked clumps with fewer than 2000 pixels. Integer digital number (DN) values were then converted to floating point, rescaled to the interval [0, 1], and denoised using bilateral filtering [31] as implemented in the scikit-image Python library [32]. These denoised images became the input for the KNN method, from which radiance values (L in Eq. 4) were derived.
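These common preprocessing steps (land masking via the NIR2 band, rescaling to [0, 1], and bilateral denoising) can be sketched with scikit-image as below. This is an illustrative sketch, not the OpticalRS code; the NIR2 threshold and the function name are placeholder assumptions.

```python
import numpy as np
from skimage.morphology import remove_small_objects
from skimage.restoration import denoise_bilateral

def preprocess(dn_bands, nir2, nir2_threshold=0.1):
    """Rescale DN values to [0, 1], mask land, and denoise each band.

    dn_bands: (H, W, 8) integer digital numbers; nir2: (H, W) NIR2 band.
    The threshold is a hypothetical value chosen for illustration.
    """
    # Convert integer DN values to floating point on [0, 1].
    img = dn_bands.astype(np.float64)
    img /= img.max()

    # Land mask: bright NIR2 pixels, keeping only clumps of >= 2000 pixels.
    land = remove_small_objects(nir2 > nir2_threshold, min_size=2000)

    # Bilateral filtering denoises each band while preserving edges.
    denoised = np.stack(
        [denoise_bilateral(img[..., b]) for b in range(img.shape[-1])],
        axis=-1,
    )
    denoised[land] = np.nan  # exclude land pixels from depth estimation
    return denoised
```

Because bilateral filtering is edge-preserving, it suppresses sensor noise without blurring the boundaries between bottom types.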
Additional preprocessing for the Lyzenga method was based
on the steps outlined in [15] but modified slightly to suit
the imagery used in this study. Due to the low reflectance
of kelp dominated bottoms and relatively high water column
backscatter, the Lyzenga et al. method of selecting shallow-
water pixels based on thresholding the blue and green bands
was not used. Instead, only pixels with known depths (from
the multibeam survey) were used. There was very little sun
glint apparent in the WV2 imagery, but the sun glint removal
algorithm described in section III of [15] was applied in order
to replicate the original methods as closely as possible. After
downsampling and denoising, so little sun glint was apparent
in the WV3 image that this step was deemed unnecessary.
Deep-water pixels in each image were designated by calculat-
ing pixel brightness and using a 3×3 moving window to mask
pixels with fewer than 50% of neighboring pixels above the
10th percentile of brightness. These deep-water pixels were
used to calculate the deep-water means and standard deviations
specific to each image. Transformation according to equation
7 from [33],
X_i = ln(L_i - L_si)    (1)

(where L_i is the radiance in band i and L_si is the average deep-water pixel radiance in band i) resulted in too many undefined values because L_si > L_i for many pixels in both scenes. This problem has been previously noted [34], [35] and will be discussed in detail later. In order to proceed with the WV2 image, the mean deep-water radiance minus 2 standard deviations (L′_si) was used in place of the mean deep-water radiance (L_si) [36], [37], so instead of Equation 1, transformed radiance (X_i) was calculated according to equation 1 from [36]:

X_i = ln(L_i - L′_si)    (2)
For the WV3 image, 4 standard deviations were subtracted from
mean deep-water radiance because 2 standard deviations left
too many pixels undefined. For complete details on imagery
preprocessing, including the Python code used, refer to the
OpticalRS documentation (http://jkibele.github.io/OpticalRS/).
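In code, the modified transformation amounts to subtracting a per-band offset (the deep-water mean minus some number of standard deviations) before taking the logarithm. The following is a minimal NumPy sketch, not the OpticalRS implementation; `n_std` would be 2 for the WV2 image and 4 for the WV3 image.

```python
import numpy as np

def transformed_radiance(bands, deep_mask, n_std=2.0):
    """Apply Eq. 2: X_i = ln(L_i - (mean_i - n_std * sd_i)).

    bands: (H, W, N) radiance array; deep_mask: boolean (H, W) array
    marking optically deep-water pixels. Pixels where the argument of
    the log is non-positive come out as NaN (the "undefined values"
    discussed above).
    """
    deep = bands[deep_mask]  # (n_deep_pixels, N)
    offset = deep.mean(axis=0) - n_std * deep.std(axis=0)
    with np.errstate(invalid="ignore", divide="ignore"):
        x = np.log(bands - offset)  # offset broadcasts across bands
    x[~np.isfinite(x)] = np.nan
    return x
```

Increasing `n_std` lowers the offset and so leaves fewer pixels undefined, at the cost of departing further from the original transformation of Eq. 1.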
IV. DEPTH ESTIMATION
This section describes the KNN method and the Lyzenga
method, their application, and then presents the resulting
bathymetry generated by both methods. Both estimation meth-
ods were implemented using the Python programming lan-
guage. The code used is available as part of the OpticalRS
Python library (http://jkibele.github.io/OpticalRS/).
Fig. 2. Multibeam measured depth compared to KNN and Lyzenga Method
depths estimated from WV3 imagery.
A. KNN Regression Depth Estimation
KNN regression is a method for estimating a continuous
dependent variable based on a number (K) of the nearest
training examples in feature space [38]. KNN methods make
no assumptions about the distribution or linearity of data.
All training data are retained in memory, and a prediction for an unknown dependent variable is produced by averaging the dependent-variable values of the K nearest training examples, as determined by a distance metric in the space of the independent variables [39]. Consequently, the KNN regression method is incapable of making predictions beyond the range of the training data provided.
In the case at hand, the continuous dependent variable is the depth and the independent variables are the radiance values in the 8 bands of the WV2 image. When the KNN model is trained, each training pixel (those with known depth) is retained in memory as a sample (L) with 8 independent variables (L_1, L_2, ..., L_8) and a known depth (z). Once the model has been trained, the depth estimate (ẑ) for a pixel with radiance values L′ is calculated as:

ẑ = (1/K) Σ_{i=1}^{K} z_i    (3)

for the K (K = 5 for this study) nearest values of z as determined by the Euclidean distance (d) between L′ and the training samples. d is calculated as:

d(L, L′) = [ Σ_{i=1}^{n} (L_i - L′_i)^2 ]^{1/2}    (4)

Fig. 3. Estimated vs. measured depths for the KNN model and the Lyzenga model. WV3 estimates are shown across the top panel and WV2 estimates on the bottom. Some estimates of the Lyzenga method lie below the extent of the y axes.
Stated simply, once the KNN model is supplied with training
data, unknown depths are estimated to be the average depth of
the K (5) most similar (in terms of radiance across all bands)
pixels with known depths, and all estimated depths will lie
between the minimum and maximum depth of the training
points.
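Equations 3 and 4 describe a standard K-nearest-neighbor regressor. The paper's implementation lives in the OpticalRS library; a functionally equivalent sketch using scikit-learn might look like this:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def knn_depth(train_radiance, train_depths, query_radiance, k=5):
    """Estimate depth as the mean depth of the k nearest training pixels,
    with distance measured in radiance space (Eq. 3 and 4).

    train_radiance, query_radiance: (n, 8) arrays of per-band radiance;
    train_depths: length-n array of known depths.
    """
    model = KNeighborsRegressor(n_neighbors=k, metric="euclidean")
    model.fit(train_radiance, train_depths)
    return model.predict(query_radiance)
```

Because each prediction is an average of training depths, the estimates can never fall outside the range of the training data, which produces the clipped appearance noted in Fig. 3.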
B. Lyzenga Depth Estimation
The Lyzenga method for depth estimation assumes that
scattering effects can be removed and that a linear relationship
between depth and transformed radiance can be achieved using
the transformation in Equation 1 such that:
ẑ = h_0 + Σ_{i=1}^{N} h_i X_i    (5)
for N bands of a multispectral image. Using a training set of pixel transformed radiance values and known depths, h_i values can be obtained via linear regression [26]. With 8-band WV2 imagery N can be anywhere between 1 and 8. Although the use of all 8 bands (N = 8) resulted in a modest (0.1m) improvement in RMSE, N = 2 was used in this study for consistency with most other applications of the method. Previous efforts have been made to find generalized h_i parameters that work across different images, but more accurate results are obtained by deriving h_i parameters specific to the image under consideration [15]. The image-specific approach was employed in this study.
The following steps are used to train the Lyzenga depth estimation model [15]. First, the optimal pair of bands is selected. For each of the (8 choose 2) = 28 possible combinations, ordinary least squares (OLS) regression is conducted with the 2 selected X_i values as independent variables and corresponding depths as the dependent variable. The band pair with the highest r² value is selected as the optimal combination. Then the h_i (intercept and slopes) parameters are determined (via OLS) and recorded for this band combination.
Once the h_i parameters have been determined, the model can be said to be trained, and depth estimation for the remaining pixels is simply a matter of executing Equation 5 with the determined parameters. When N = 2, the equation takes the form:

ẑ = h_0 + h_i X_i + h_j X_j    (6)

for the band combination of bands i and j.
For the WV2 training data used in this study the optimal band combination (r² = 0.63) was found to be bands 2 and 3 (478 nm and 546 nm). The h_i parameters were determined via OLS and resulted in the following depth estimation model:

ẑ = 17.08 + 16.06 X_2 - 16.16 X_3    (7)

For the WV3 training data used in this study the optimal band combination (r² = 0.63) was also found to be bands 2 and 3. The h_i parameters were determined via OLS and resulted in the following depth estimation model:

ẑ = -8.83 + 10.11 X_2 - 17.46 X_3    (8)
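The training procedure above (exhaustive band-pair search, OLS fit for each pair, highest r² wins) can be sketched with NumPy. This is an illustrative version under the paper's stated procedure, not the OpticalRS code:

```python
import itertools
import numpy as np

def fit_lyzenga(x_bands, depths):
    """Return (i, j, h) for the band pair whose OLS fit of Eq. 6
    (z_hat = h0 + hi*Xi + hj*Xj) gives the highest r^2.

    x_bands: (n_pixels, N) transformed radiance values X_i;
    depths: length-n array of known depths.
    """
    best = None
    for i, j in itertools.combinations(range(x_bands.shape[1]), 2):
        # Design matrix: intercept column plus the two selected bands.
        A = np.column_stack([np.ones(len(depths)), x_bands[:, i], x_bands[:, j]])
        h, *_ = np.linalg.lstsq(A, depths, rcond=None)
        r2 = 1.0 - (depths - A @ h).var() / depths.var()
        if best is None or r2 > best[0]:
            best = (r2, i, j, h)
    _, i, j, h = best
    return i, j, h
```

With 8 bands the loop fits only 28 regressions, so the exhaustive search is cheap even for large training sets.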
C. Depth estimation comparison
To compare the efficacy of the KNN and the Lyzenga
methods, each was applied to the WV2 and WV3 input data.
Multibeam depths at image acquisition time (z) were masked where greater than 20 m (Fig. 1) and WV radiance values (L for the KNN method and X (Equation 2) for the Lyzenga method) were masked to match. This resulted in 644,953
and 699,001 pixels with known depth and radiance for the
WV2 and WV3 images respectively. The difference is due to
tide height differences. Each model was trained with 300,000
pixels and used to estimate the depth of all unmasked pixels.
The resultant bathymetry maps produced by the two methods
using the WV3 imagery are shown in Fig. 2 along with
the Multibeam depths for visual comparison. The results for
the WV2 image are, visually, very similar. The RMSE was
calculated for the pixels not used in model training (Fig.
3). The KNN method provided a better approximation than
the Lyzenga method as indicated by the RMSE values for
both WV2 (KNN: 1.54m, Lyzenga: 2.54m) and WV3 (KNN:
0.79m, Lyzenga: 2.22m). In contrast to the Lyzenga depth
estimates, the KNN estimated depths do not exceed 20m,
giving the KNN scatter plots in Fig. 3 a clipped appearance.
This is due to the fact that KNN regression will not predict
values beyond the range of training data.
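The comparison procedure (train each model on a random subset of pixels, then compute RMSE on the held-out remainder) can be written generically; `fit_predict` here is a hypothetical callable standing in for either depth estimator, not a function from OpticalRS:

```python
import numpy as np

def rmse(z_true, z_est):
    """Root-mean-square error between measured and estimated depths."""
    diff = np.asarray(z_est, dtype=float) - np.asarray(z_true, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2)))

def holdout_rmse(fit_predict, radiance, depths, n_train=300_000, seed=0):
    """Train on n_train randomly chosen pixels, score RMSE on the rest.

    fit_predict(train_X, train_z, test_X) -> estimated test depths.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(depths))
    train, test = idx[:n_train], idx[n_train:]
    z_hat = fit_predict(radiance[train], depths[train], radiance[test])
    return rmse(depths[test], z_hat)
```

Scoring only the pixels excluded from training avoids the optimistic bias that evaluating on the training pixels would introduce, which matters especially for KNN since it memorizes its training data.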
V. ACCURACY ASSESSMENT
Further testing was conducted to compare the performance
of the methods across variations in: A) bottom albedo, B) size
of training set, and C) maximum depth.
A. Sensitivity to Variation in Bottom Albedo
To assess differences in response to varying bottom albedo,
the spatial distribution of error was compared between meth-
ods. Errors were calculated for all available pixels for both
estimation methods (Fig. 4). Visual inspection of Fig. 4
indicates that errors from the WV2 image were related to
bottom albedo (or habitat, e.g. kelp vs. sand) in both models,
but more so for the Lyzenga model. Errors were isolated for the
kelp and sand areas identified in Fig. 1 and analysis confirmed
this impression. The kelp and sand areas were manually
outlined based on unpublished data collected using Benthic
Photo Survey [40] as well as previous habitat mapping efforts
[17], [18], [19]. For the KNN method the average errors were
0.05m and 0.81m in the sand and kelp areas respectively. For
the Lyzenga method average errors were -1.55m and 2.19m.
Visual inspection of the WV3 prediction errors in Fig. 4
indicates that Lyzenga method errors are related to differences
in bottom albedo but KNN method errors show no discernible
pattern. For the WV3 image the average KNN errors were
-0.01m and 0.05m for the sand and kelp respectively while
the Lyzenga method average errors were -0.14m and -0.72m.
The much larger difference in average errors between bottom types for the Lyzenga method in both images indicates that the KNN method is much less sensitive to variations in bottom
albedo.
B. Sensitivity to Number of Training Points
To investigate the relative sensitivity to the size of the
training set for each model, training and estimation procedures
were carried out repeatedly with varying proportions of data
points assigned to training and evaluation. The number of
training points ranged from 10 to 500k (80% of the full data set) on a 15-step logarithmic scale. Each model was trained
and tested (on the remaining non-training points) ten times at
each training set quantity. The randomization procedure for
the selection of training points was altered each time. Figure
5 displays the resulting RMSE values for the WV3 image as a function of training set size for both models.
Initially, the accuracy of both models improves very quickly
with additional training points (Fig. 5). The Lyzenga model ap-
proaches its maximum accuracy (approximately 2.2m RMSE)
with as few as 100 training points. However, the accuracy
Fig. 4. Estimated depth minus multibeam depth for the two different models.
Fig. 5. Depth estimation model error from WV3 imagery as a function of
training set size on a logarithmic scale.
of the KNN method continues to improve while the Lyzenga
method levels off. At around 150 to 200 training points, the
KNN method provides more accurate depth estimates. With
1000 training points, the KNN RMSE was better by more
than 0.5m compared to the Lyzenga method. The results for
the WV2 image followed the same pattern and differed only
in that the maximum accuracies obtained by both methods were somewhat lower, as indicated in Fig. 3. The shapes of the
curves and their crossing point were very similar.
C. Performance at Varying Maximum Depths
The WV3 image and corresponding depth data were re-
stricted to various maximums between 5 and 30 meters.
These depth-restricted data sets were randomly partitioned
into training sets of 1500 points with the remaining points
comprising a corresponding test set. A KNN model was trained
Fig. 6. RMSE and mean error ± standard error of the mean for each method
across different maximum depths using WV3 imagery.
and tested with each set, and the RMSE, mean error, and standard error were calculated for each depth increment. The
entire procedure was then repeated for the Lyzenga model.
The KNN method provided better accuracy at all depth
ranges (Fig. 6) and this advantage (measured as Lyzenga
RMSE minus KNN RMSE) increased as the maximum depth increased, from approximately 0.3m at a maximum depth of 5m to 1.6m at a maximum of 30m. The mean error for
the Lyzenga method stayed within 0.16m of 0, while the mean error for the KNN method dropped as low as 0.25m below 0 with increasing maximum depth. The same analysis conducted with the WV2 results yielded very similar results.
Fig. 7. Depth vs. transformed radiance for WV3 bands 1-4. Yellow represents
pixels over sand bottom. Green represents pixels over kelp bottom. Lowess
curves are included to illustrate the trends.
VI. DISCUSSION
The KNN method used in this study provided a better
estimation of depth (WV3 RMSE = 0.79m, WV2 RMSE =
1.54m) than the Lyzenga method (WV3 RMSE = 2.22m, WV2
RMSE = 2.54m) under the conditions represented in the study
region and images analysed. In previous studies the Lyzenga
method yielded lower RMSE values (ranging from 0.7m to
2.4m) [26], [15], [41] than those reported in this study. The
higher RMSE in this study is likely due to differences in
environmental conditions, including higher levels of suspended
solids and chlorophyll, and the presence of a bottom type with
very low albedo. Specifically, the low radiance values over
kelp and the relatively high reflectance of the water column
in the blue bands (band 1: 427 nm and band 2: 478 nm)
resulted in what Ji et al. called "over deduction" [34]. The
modified transformation expressed in Eq. 2 makes it possible
to apply the Lyzenga method where it would not otherwise
be possible, but it does not address the underlying limitation
of Lyzenga’s transformation (Eq. 1). The Lyzenga method is
based on a quasi-single-scattering approximation (QSSA) of
the radiative transfer equation (RTE) [33] and predicated on
the transformation of observed radiance producing a negative
linear relationship with depth across all bottom types. The
dependence on this transformation represents an inherent as-
sumption about the range of bottom albedo and the optical
properties of the water where the depth is to be estimated.
In the current study, this linear relationship was achieved
for some bottom types but not others because these inherent
assumptions were violated by the environmental conditions
(low bottom albedo and reflective water column in the blue
bands) in the study area.
The relationship between transformed radiance (according
to Eq. 2) and depth for WV3 pixels over both sand and kelp
are shown in Fig. 7 (see Fig. 1 for locations of kelp and
sand). The sand pixels generally follow the negative linear
relationship that the Lyzenga method is based on [33], but
the kelp pixels do not. Large brown macroalgae such as
the kelp Ecklonia radiata have very low reflectance in the
wavelengths of WV2 (and WV3) bands 1 and 2 [42]. QSSA
predicts that, due to the larger relative influence of water column
scattering effects, radiance will increase with depth rather
than decrease once the bottom albedo drops below a certain
threshold [43] (see Fig. 6 and 7). The threshold varies with
wavelength, water clarity, and other factors in a manner that
is beyond the scope of this paper, but it is useful to note
that the threshold increases with backscatter (i.e. increased
turbidity and/or chlorophyll). Most wavelength and bottom
albedo combinations were above this threshold for the WV2
and WV3 imagery used in this study, but the albedo of kelp
was below for bands 1 and 2 (Fig. 7). Examination of the blue
bands on their own corroborates this relationship; the shallow kelp-dominated areas appear much darker than optically deep water when band 1 (or 2) is viewed as a grayscale image.
Consequently, the radiance transformation did not consistently
produce the expected negative linear relationship with depth
across all bottom types, and the Lyzenga method did not
perform as well in this study as it has elsewhere.
The KNN method produced more accurate results in the
present study compared to the Lyzenga method. However,
unlike the Lyzenga method, KNN does not lend itself to
physical interpretations of sources of error. Likely sources of
error include sensor noise, water column heterogeneity, and
insufficient bottom type separability. There was less sensor
noise visible in the WV3 image than in the WV2 image
and it is likely that this, in addition to possible differences
in sediment load, contributed to the higher accuracy obtained
using the WV3 image. This method has not yet been tested in
less turbid water with a smaller range of bottom albedo (e.g.
tropical coral reefs), but it seems likely that it would perform at least as well, if not better, under such conditions. No effort was made in this
study to stratify the sampling of training points over depth
or bottom type. It is however likely that such stratification
could improve the performance of both methods, particularly
when training sets are small. It must also be noted that
the Lyzenga method can estimate depths beyond the range
of training data provided to it, while the KNN method cannot. In some situations, such as the present study, where the
range of training depths is known to closely approximate the
range of true depths, this difference proves advantageous to
the KNN method in terms of accuracy, but where estimation
beyond the range of available training data is required, the
Lyzenga method is more appropriate and may provide better
overall accuracy. The Lyzenga method has some potential
for generalized use (resulting in reduced accuracy) without
training for individual images [15], [44]. The KNN method,
being entirely empirical, must be trained for each image
separately.
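As a hedged sketch of the per-image training just described (not the published implementation), scikit-learn's KNeighborsRegressor can be fit directly on band values; the two pseudo-bands, the noise level, and k = 5 below are assumptions for illustration. The example also shows the extrapolation limit noted above: KNN estimates are averages of training depths, so they can never leave the training depth range.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(1)

# Training set: geolocated depths paired with two pseudo-band radiances
# (illustrative exponential decay plus noise; not real WV2/WV3 values).
train_depth = rng.uniform(2.0, 18.0, 300)
train_bands = np.column_stack([
    np.exp(-0.08 * train_depth) + 0.02 * rng.standard_normal(300),
    np.exp(-0.15 * train_depth) + 0.02 * rng.standard_normal(300),
])

knn = KNeighborsRegressor(n_neighbors=5).fit(train_bands, train_depth)

# A pixel darker than anything in training (i.e. deeper water): the KNN
# estimate is a mean of 5 training depths, so it stays inside
# [train_depth.min(), train_depth.max()] no matter the input.
estimate = knn.predict(np.array([[0.0, 0.0]]))[0]
```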
The KNN regression method presented here requires less
image preprocessing and can, with sufficient training data,
estimate bathymetry in difficult environmental conditions with
much less error than the widely used Lyzenga method. The
KNN estimates for depths less than 20 m had an RMSE that
was over 1.4 m lower than that of the Lyzenga method for
the WV3 image and approximately 1 m lower for the WV2
image (Fig. 3). Errors were noticeably less influenced by
bottom type in the KNN method (Fig. 4), and that impression
was confirmed by the examination of average errors over two
different bottom types. Both methods showed similar decreases
in accuracy with increasing maximum depth (Fig. 6), but the
KNN method was consistently more accurate. The only way
in which the Lyzenga method performed better than the KNN
method was when small numbers (<150) of training points
were used (Fig. 5).
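The gist of this comparison can be sketched on synthetic, noise-free data in which the log-linear assumption is deliberately broken (here via an illustrative depth-dependent attenuation term). All constants are assumptions, and the linear fit stands in for the Lyzenga regression step; none of the numbers reproduce results from this study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(2)
depth = rng.uniform(1.0, 20.0, 1000)

# Radiance with an extra quadratic attenuation term, so the transformed
# value ln(L - L_deep) is quadratic, not linear, in depth.
L = 50.0 + 400.0 * np.exp(-0.1 * depth - 0.002 * depth ** 2)
X = np.log(L - 50.0).reshape(-1, 1)

X_tr, X_te = X[:700], X[700:]
d_tr, d_te = depth[:700], depth[700:]

def rmse(pred):
    return float(np.sqrt(np.mean((pred - d_te) ** 2)))

# Linear regression on the transformed band vs. nonparametric KNN.
rmse_lin = rmse(LinearRegression().fit(X_tr, d_tr).predict(X_te))
rmse_knn = rmse(KNeighborsRegressor(n_neighbors=5).fit(X_tr, d_tr).predict(X_te))
```

With ample training points the nonparametric fit tracks the curved depth–radiance relationship that the straight line cannot, mirroring the pattern reported above.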
This study demonstrates that the KNN method can outper-
form the Lyzenga method in conditions where the inherent
assumptions of Lyzenga’s method are violated. Further study
is necessary, via numerical simulation with the RTE or real-world
data collected under a range of conditions, to assess the
wider applicability of the KNN method. This method and
its FOSS implementation could prove to be a valuable tool
for researchers and resource managers who require low cost
bathymetric estimates of optically shallow waters. The only
data requirements are multispectral imagery and a set of
geolocated depth measurements from the field. As proof of
concept, this study used multibeam sonar depths, but these
field data can also be collected with relatively inexpensive
and easily portable equipment [40]. This is critical where
resources for bathymetry mapping are limited and in locations
where other methods such as lidar or multibeam sonar are
not practical or appropriate. This method helps to expand
the already considerable value of high resolution multispectral
satellite imagery for marine applications [45], [46], [47], [48],
and could prove especially valuable as a source of depth data
for water column correction [49] to aid in the mapping of
submerged habitats [50], [51], [52].
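To make the data requirement concrete, pairing geolocated depth soundings with image pixels reduces to a coordinate-to-index lookup. The helper below is a hypothetical NumPy-only sketch (the function name, the north-up grid layout, and nearest-pixel sampling are all assumptions, not part of the published implementation).

```python
import numpy as np

def training_pairs(image, origin, pixel_size, coords, depths):
    """Pair depth soundings with pixel spectra from a north-up image.

    image: array of shape (bands, rows, cols)
    origin: (x0, y0) map coordinates of the top-left corner
    pixel_size: (dx, dy), with dy negative for a north-up raster
    coords: list of (x, y) map coordinates of the soundings
    """
    x0, y0 = origin
    dx, dy = pixel_size
    xs = np.array([c[0] for c in coords])
    ys = np.array([c[1] for c in coords])
    cols = np.floor((xs - x0) / dx).astype(int)  # nearest-pixel (floor) lookup
    rows = np.floor((ys - y0) / dy).astype(int)
    return image[:, rows, cols].T, np.asarray(depths, dtype=float)

# Toy 2-band, 4x4 image with 10 m pixels, top-left corner at (1000, 2000).
img = np.arange(32, dtype=float).reshape(2, 4, 4)
X, y = training_pairs(img, (1000.0, 2000.0), (10.0, -10.0),
                      [(1005.0, 1995.0), (1035.0, 1965.0)], [3.2, 7.5])
```

The returned X (one row of band values per sounding) and y (depths) are exactly the inputs a regressor such as KNN needs for training.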
ACKNOWLEDGMENT
The authors would like to thank Auckland Council for
funding this research, DigitalGlobe Foundation for the imagery
grant, Leigh Marine Laboratory for funding the multibeam
survey, and Waikato University along with Discovery Marine
Ltd. for carrying it out. Additional funding was provided
by the Royal Society of New Zealand Rutherford Discovery
Fellowship to NTS (RDFUOA1103). Thanks to Prof. John
Philip Matthews and an anonymous reviewer for their helpful
comments. Thanks also to the open source software commu-
nity and particularly to those behind QGIS [29], GDAL, Scikit-
learn [53], Scikit-image [32], and IPython [54].
REFERENCES
[1] A. H. Benny and G. J. Dawson, “Satellite Imagery as an Aid to
Bathymetric Charting in the Red Sea,” The Cartographic Journal,
vol. 20, no. 1, pp. 5–16, Jun. 1983.
[2] D. L. Jupp, K. K. Mayo, D. A. Kuchler, D. V. R. Claasen, R. A.
Kenchington, and P. R. Guerin, “Remote sensing for planning and
managing the great barrier reef of Australia,” Photogrammetria, vol. 40,
no. 1, pp. 21–42, Sep. 1985.
[3] E. Saarman, M. Gleason, J. Ugoretz, S. Airamé, M. Carr, E. Fox,
A. Frimodig, T. Mason, and J. Vasques, “The role of science in support-
ing marine protected area network planning and design in California,”
Ocean & Coastal Management, 2012.
[4] A. Jordan, M. Lawler, V. Halley, and N. Barrett, “Seabed habitat map-
ping in the Kent Group of islands and its role in Marine protected area
planning,” Aquatic Conservation: Marine and Freshwater Ecosystems,
vol. 15, no. 1, pp. 51–70, 2005.
[5] M. Merrifield, W. McClintock, C. Burt, E. Fox, P. Serpa, C. Steinback,
and M. Gleason, “MarineMap: A web-based platform for collaborative
marine protected area planning,” Ocean & Coastal Management, 2012.
[6] P. N. Bierwirth, T. J. Lee, and R. V. Burne, “Shallow Sea-Floor Reflectance and Water Depth Derived by Unmixing Multispectral Imagery,” Photogrammetric Engineering and Remote Sensing, vol. 59, no. 3, Mar. 1993.
[7] T. Sagawa, E. Boisnier, T. Komatsu, K. B. Mustapha, A. Hattour,
N. Kosaka, and S. Miyazaki, “Using bottom surface reflectance to map
coastal marine areas: a new application method for Lyzenga’s model,”
International Journal of Remote Sensing, vol. 31, no. 12, pp. 3051–3064,
2010.
[8] F. Eugenio, J. Marcello, and J. Martin, “High-Resolution Maps of
Bathymetry and Benthic Habitats in Shallow-Water Environments Using
Multispectral Remote Sensing Imagery,” IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 7, pp. 3539–3549, Jul. 2015.
[9] H. Wensink and W. Alpers, “SAR-Based Bathymetry,” in Encyclopedia
of Remote Sensing, ser. Encyclopedia of Earth Sciences Series, E. G.
Njoku, Ed. Springer New York, 2014, pp. 719–722.
[10] F. C. Polcyn, W. L. Brown, and I. J. Sattinger, “The Measurement
of Water Depth by Remote Sensing Techniques,” The University of
Michigan, Ann Arbor, Willow Run Laboratories, Tech. Rep. 8973-26-F,
Oct. 1970.
[11] A. G. Dekker, S. R. Phinn, J. Anstee, P. Bissett, V. E. Brando, B. Casey,
P. Fearns, J. Hedley, W. Klonowski, and Z. P. Lee, “Intercomparison
of shallow water bathymetry, hydro-optics, and benthos mapping techniques in Australian and Caribbean coastal environments,” Limnology
and Oceanography: Methods, vol. 9, pp. 396–425, 2011.
[12] Z. Lee, K. L. Carder, C. D. Mobley, R. G. Steward, and J. S. Patch,
“Hyperspectral remote sensing for shallow waters. 2. Deriving bottom
depths and water properties by optimization,” Applied Optics, vol. 38,
no. 18, pp. 3831–3843, 1999.
[13] E. P. Green, P. J. Mumby, A. Edwards, and C. Clark, “Remote Sensing
Handbook for Tropical Coastal Management,” 2005.
[14] N. S. Altman, “An Introduction to Kernel and Nearest-Neighbor Non-
parametric Regression,” The American Statistician, vol. 46, no. 3, pp.
175–185, Aug. 1992.
[15] D. R. Lyzenga, N. Malinas, and F. Tanis, “Multispectral bathymetry
using a simple physically based algorithm,” IEEE Transactions on Geoscience and Remote Sensing, vol. 44, no. 8, pp. 2251–2259, Aug. 2006.
[16] W. J. Ballantine and D. P. Gordon, “New Zealand’s first marine reserve,
Cape Rodney to Okakari point, Leigh,” Biological Conservation, vol. 15,
no. 4, pp. 273–280, Jun. 1979.
[17] T. Ayling, “Okakari Point to Cape Rodney marine reserve: a biolog-
ical survey,” Leigh Marine Laboratory, University of Auckland, New
Zealand, Technical Report, 1978.
[18] D. M. Parsons, N. T. Shears, R. C. Babcock, and T. R. Haggitt,
“Fine-scale habitat change in a marine reserve, mapped using radio-
acoustically positioned video transects,” Marine and Freshwater Re-
search, vol. 55, no. 3, pp. 257–265, 2004.
[19] K. Leleu, B. Remy-Zephir, R. Grace, and M. J. Costello, “Mapping
habitats in a marine reserve showed how a 30-year trophic cascade
altered ecosystem structure,” Biological Conservation, vol. 155, pp. 193–
201, Oct. 2012.
[20] B. M. Seers and N. T. Shears, “Spatio-temporal patterns in coastal
turbidity – Long-term trends and drivers of variation across an estuarine-
open coast gradient,” Estuarine, Coastal and Shelf Science, vol. 154, pp.
137–151, Mar. 2015.
[21] G. De’ath and K. Fabricius, “Water quality as a regional driver of
coral biodiversity and macroalgae on the Great Barrier Reef,” Ecological
Applications, vol. 20, no. 3, pp. 840–850, 2010.
[22] R. van Woesik, T. Tomascik, and S. Blake, “Coral assemblages and
physico-chemical characteristics of the Whitsunday Islands: evidence of
recent community changes,” Marine and Freshwater Research, vol. 50,
no. 5, p. 427, 1999.
[23] D. P. Gordon, Cape Rodney to Okakari Point Marine Reserve : review of
knowledge and bibliography to December 1976, ser. Tane. Supplement
; v. 22 (1976). Auckland, NZ: Auckland University Field Club, 1976.
[24] N. T. Shears, R. C. Babcock, C. A. J. Duffy, and J. W. Walker,
“Validation of qualitative habitat descriptors commonly used to classify
subtidal reef assemblages in north-eastern New Zealand,” New Zealand
Journal of Marine and Freshwater Research, vol. 38, no. 4, pp. 743–752,
2004.
[25] T. Updike and C. Comp, “Radiometric Use of WorldView-2 Imagery,”
2010.
[26] D. R. Lyzenga, “Shallow-water bathymetry using combined lidar and
passive multispectral scanner data,” International Journal of Remote
Sensing, vol. 6, no. 1, pp. 115–125, 1985.
[27] GRASS Development Team, “Geographic resources analysis support
system (GRASS) software, version 6.4.0. Open Source Geospatial
Foundation,” 2010.
[28] R. Baker and M. Watkins, Guidance Notes for the Determination of
Mean High Water Mark for Land Title Surveys. Professional Development Committee of the New Zealand Institute of Surveyors, 1991.
[29] Quantum GIS Development Team, “Quantum GIS Geographic Information System. Open Source Geospatial Foundation Project,” 2011.
[30] GDAL Development Team, “GDAL Geospatial Data Abstraction Li-
brary,” 2016.
[31] C. Tomasi and R. Manduchi, “Bilateral filtering for gray and color
images,” in Sixth International Conference on Computer Vision. IEEE, 1998, pp. 839–846.
[32] S. van der Walt, J. L. Schönberger, J. Nunez-Iglesias, F. Boulogne,
J. D. Warner, N. Yager, E. Gouillart, and T. Yu, “scikit-image: image
processing in Python,” PeerJ, vol. 2, p. e453, Jun. 2014.
[33] D. R. Lyzenga, “Passive remote sensing techniques for mapping water
depth and bottom features,” Applied Optics, vol. 17, no. 3, pp. 379–383,
Feb. 1978.
[34] W. Ji, D. Civco, and W. Kennard, “Satellite remote bathymetry: a new
mechanism for modeling,” Photogrammetric Engineering and Remote
Sensing, vol. 58, no. 5, pp. 545–549, 1992.
[35] R. P. Stumpf, K. Holderied, and M. Sinclair, “Determination of water
depth with high-resolution satellite imagery over variable bottom types,”
Limnology and Oceanography, vol. 48, pp. 547–556, 2003.
[36] R. A. Armstrong, “Remote sensing of submerged vegetation canopies for
biomass estimation,” International Journal of Remote Sensing, vol. 14,
no. 3, pp. 621–627, Feb. 1993.
[37] D. Schweizer, R. A. Armstrong, and J. Posada, “Remote sensing
characterization of benthic habitats and submerged vegetation biomass
in Los Roques Archipelago National Park, Venezuela,” International
Journal of Remote Sensing, vol. 26, no. 12, pp. 2657–2667, 2005.
[38] R. D. King, C. Feng, and A. Sutherland, “Statlog: Comparison of
Classification Algorithms on Large Real-World Problems,” Applied
Artificial Intelligence, vol. 9, no. 3, pp. 289–333, May 1995.
[39] S. B. Imandoust and M. Bolandraftar, “Application of K-Nearest Neigh-
bor (KNN) Approach for Predicting Economic Events: Theoretical
Background,” Int. Journal of Engineering Research and Applications,
vol. 3, no. 5, pp. 605–610, 2013.
[40] J. Kibele, “Benthic Photo Survey: Software for Geotagging, Depth-
tagging, and Classifying Photos from Survey Data and Producing
Shapefiles for Habitat Mapping in GIS,” Journal of Open Research
Software, vol. 4, no. 1, Mar. 2016.
[41] H. Su, H. Liu, and W. D. Heyman, “Automated derivation of bathymetric
information from multi-spectral satellite imagery using a non-linear
inversion model,” Marine Geodesy, vol. 31, no. 4, pp. 281–298, 2008.
[42] P. J. Werdell and C. Roesler, “Remote assessment of benthic substrate
composition in shallow waters using multispectral reflectance,” Limnology and Oceanography, vol. 48, pp. 557–567, 2003.
[43] H. R. Gordon and W. R. McCluney, “Estimation of the Depth of Sunlight
Penetration in the Sea for Remote Sensing,” Applied Optics, vol. 14,
no. 2, p. 413, Feb. 1975.
[44] A. Kanno, Y. Tanaka, A. Kurosawa, and M. Sekine, “Generalized
Lyzenga’s Predictor of Shallow Water Depth for Multispectral Satellite
Imagery,” Marine Geodesy, vol. 36, no. 4, pp. 365–376, Dec. 2013.
[45] J. P. Matthews and Y. Yoshikawa, “Synergistic surface current mapping
by spaceborne stereo imaging and coastal HF radar,” Geophysical
Research Letters, vol. 39, no. 17, p. L17606, Sep. 2012.
[46] J. Xu and D. Zhao, “Review of coral reef ecosystem remote sensing,”
Acta Ecologica Sinica, vol. 34, no. 1, pp. 19–25, Feb. 2014.
[47] A. Hommersom, M. R. Wernand, S. Peters, and J. d. Boer, “A review
on substances and processes relevant for optical remote sensing of
extremely turbid marine areas, with a focus on the Wadden Sea,”
Helgoland Marine Research, vol. 64, no. 2, pp. 75–92, Jun. 2010.
[48] M. A. Hamel and S. Andréfouët, “Using very high resolution remote sensing for the management of coral reef fisheries: Review and perspectives,”
Marine Pollution Bulletin, vol. 60, no. 9, pp. 1397–1405, Sep. 2010.
[49] M. L. Zoffoli, R. Frouin, and M. Kampel, “Water Column Correction
for Coral Reef Studies by Remote Sensing,” Sensors, vol. 14, no. 9, pp.
16 881–16 931, Sep. 2014.
[50] P. J. Mumby, C. D. Clark, E. P. Green, and A. J. Edwards, “Benefits of
water column correction and contextual editing for mapping coral reefs,”
International Journal of Remote Sensing, vol. 19, no. 1, pp. 203–210,
1998.
[51] T. Sagawa, A. Mikami, M. N. Aoki, and T. Komatsu, “Mapping seaweed
forests with IKONOS image based on bottom surface reflectance,” in Proc. SPIE, vol. 8525, Nov. 2012, p. 85250Q.
[52] A. Minghelli-Roman and C. Dupouy, “Correction of the Water Column
Attenuation: Application to the Seabed Mapping of the Lagoon of New
Caledonia Using MERIS Images,” IEEE Journal of Selected Topics in
Applied Earth Observations and Remote Sensing, vol. Early Access
Online, 2014.
[53] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion,
O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, and V. Dubourg,
“Scikit-learn: Machine learning in Python,” The Journal of Machine
Learning Research, vol. 12, pp. 2825–2830, 2011.
[54] F. Perez and B. Granger, “IPython: A System for Interactive Scientific
Computing,” Computing in Science & Engineering, vol. 9, no. 3, pp. 21–29, Jun. 2007.
Jared Kibele left a career in software development
to pursue marine science. He received an associate’s
degree in marine science and technology from Mon-
terey Peninsula College in 2004 and a bachelor’s
degree in Marine Biology from University of Cali-
fornia, Santa Cruz in 2006. In 2007 he was employed
by the Pacific States Marine Fisheries Commission
as a GIS analyst for California’s Marine Life Pro-
tection Act Initiative (MLPA). From there he went
to University of California, Santa Barbara where he
became the Senior GIS Analyst for MarineMap, a
web-based decision-support tool for the MLPA. 2011 was spent sailing from
California to New Zealand with his wife aboard their 31 foot ketch called
Architeuthis. In 2012 he began his PhD research at University of Auckland’s
Leigh Marine Lab under supervisor Nick Shears.
Nick Shears received a BSc in Biological Sciences
and a PhD in Marine Science from the University of
Auckland in 1997 and 2003 respectively. He carried
out a postdoctoral fellowship at the University of
California, Santa Barbara from 2006 to 2009 before
returning to the University of Auckland as a Re-
search Fellow. In 2011 he was awarded a Ruther-
ford Discovery Fellowship and since 2012 has been
a Senior Lecturer at the University of Auckland’s
Leigh Marine Laboratory. His research has a strong
application to marine management and conservation,
focusing on the ecology of rocky reefs, monitoring and mapping changes in
marine habitats, and understanding the impact of human activities on marine
ecosystems.
... Xie et al. employed the SVM, BP, and RF to achieve an overall root mean square error (RMSE) below 1.5 m [34]. Other models, including the light gradient boosting machine (LGBM) [35] and the k-nearest neighbor (KNN) [36], were also applied to SDB methods and produced reliable bathymetric predictions. However, few studies adequately quantified the relative accuracy and reliability between numerous machine learning and classical approaches under the same scenarios. ...
... The KNN model is a nonparametric machine learning algorithm that estimates a continuous dependent variable using K nearest training samples from the feature space [3,36,57]. Its principle is to apply methods, such as Euclidean distance and Manhattan distance, to calculate the distance between the current point to be classified and the known points. ...
Article
Full-text available
Satellite-derived bathymetry (SDB) techniques are increasingly valuable for deriving high-quality bathymetric maps of coral reefs. Investigating the performance of the related SDB algorithms in purely spaceborne active–passive fusion bathymetry contributes to formulating reliable bathymetric strategies, particularly for areas such as the Spratly Islands, where in situ observations are exceptionally scarce. In this study, we took Anda Reef as a case study and evaluated the performance of eight common SDB approaches by integrating Sentinel-2 images with Ice, Cloud, and Land Elevation Satellite-2 (ICESat-2). The bathymetric maps were generated using two classical and six machine-learning algorithms, which were then validated with measured sonar data. The results illustrated that all models accurately estimated the depth of coral reefs in the 0–20 m range. The classical algorithms (Lyzenga and Stumpf) exhibited a mean absolute error (MAE), root mean square error (RMSE), and mean absolute percentage error (MAPE) of less than 0.990 m, 1.386 m, and 11.173%, respectively. The machine learning algorithms generally outperformed the classical algorithms in accuracy and bathymetric detail, with a coefficient of determination (R2) ranging from 0.94 to 0.96 and an RMSE ranging from 1.034 m to 1.202 m. The multilayer perceptron (MLP) achieved the highest accuracy and consistency with an RMSE of as low as 1.034 m, followed by the k-nearest neighbor (KNN) (1.070 m). Our results provide a practical reference for selecting SDB algorithms to accurately obtain shallow water bathymetry in subsequent studies.
... where "Z" is the satellite-derived depth value, "N" is the number of bands, "α 0 " and "α i " are the coefficients obtained as a result of calibration, "i" is the number of data points (1, 2..., N), "R(λi)" is the perceived reflectance value for the band "i", and "R ∞ " is the deep-water reflectance value for the band "i". In the literature, green and blue bands are preferred due to their high level of water penetration [64,65]. ...
Article
Full-text available
Satellite-derived bathymetry (SDB) is the process of estimating water depth in shallow coastal and inland waters using satellite imagery. Recent advances in technology and data processing have led to improvements in the accuracy and availability of SDB. The increased availability of free optical satellite sensors, such as Landsat missions and Sentinel 2 satellites, has increased the quantity and frequency of SDB research and mapping efforts. In addition, machine learning (ML)- and deep learning (DL)-based algorithms, which can learn to identify features that are indicative of water depth, such as color or texture variations, have started to be used for extracting bathymetry information from satellite imagery. This study aims to produce an initial optical image-based SBD map of Horseshoe Island’s shallow coasts and to perform a comprehensive and comparative evaluation with Landsat 8 and Sentinel 2 satellite images. Our research considers the performance of empirical SDB models (classical, ML-based, and DL-based) and the effects of the atmospheric correction methods ACOLITE, iCOR, and ATCOR. For all band combinations and depth intervals, the ML-based random forest and XGBoost models delivered the highest performance and best fitting ability by achieving the lowest error with MAEs smaller than 1 m up to 10 m depth and a maximum correlation of R2 around 0.80. These models are followed by the DL-based ANN and CNN models. Nonetheless, the non-linearity of the reflectance–depth connection was significantly reduced by the ML-based models. Furthermore, Landsat 8 showed better performance for 10–20 m depth intervals and in the entire range of (0–20 m), while Sentinel 2 was slightly better up to 10 m depth intervals. Lastly, ACOLITE, iCOR, and ATCOR provided reliable and consistent results for SDB, where ACOLITE provided the highest automation.
... Bathymetry data (1 m resolution) for the majority of the study site, extending up to ~ 1.2 km from the coastline, were available from a prior study 39 (Fig. 1). Lower resolution (20 m) bathymetry data were obtained from NIWA for the area which the higher resolution data did not cover 40 . ...
Article
Full-text available
Anthropogenic stressors, such as plastics and fishing, are putting coastal habitats under immense pressure. However, sound pollution from small boats has received little attention given the importance of sound in the various life history strategies of many marine animals. By combining passive acoustic monitoring, propagation modelling, and hearing threshold data, the impact of small-boat sound on the listening spaces of four coastal species was determined. Listening space reductions (LSR) were greater for fishes compared to crustaceans, for which LSR varied by day and night, due to their greater hearing abilities. Listening space also varied by sound modality for the two fish species, highlighting the importance of considering both sound pressure and particle motion. The theoretical results demonstrate that boat sound hinders the ability of fishes to perceive acoustic cues, advocating for future field-based research on acoustic cues, and highlighting the need for effective mitigation and management of small-boat sound within coastal areas worldwide.
... Several papers are based on the work by Lyzenga [8,9] and consider various applications [10,11,12,13,14]. Empirical approaches do not always need absolute radiometric or atmospheric adjustments [11,15]. Moreover, they can handle different datasets as long as the bottoms of the water pools are similar [16]. ...
Conference Paper
Full-text available
We present a machine learning approach that uses a custom Convolutional Neural Network (CNN) for estimating the depth of water pools from multispectral drone imagery. Using drones to obtain this information offers a cheaper, timely, and more accurate solution when compared to alternative methods, such as manual inspection. This information, in turn, represents an asset to identify potential breeding sites of mosquito larvae, which grow only in shallow water pools. As a significant part of the world's population is affected by mosquito-borne viral infections, including Dengue and Zika, identifying mosquito breeding sites is key to control their spread. Experiments with 5-band drone imagery show that our CNN-based approach is able to measure shallow water depths accurately up to a root mean square error of less than 0.5~cm, outperforming state-of-the-art Random Forest methods and empirical approaches.
... In this study, images were only screened to remove those with cloud cover, as the compositing method addressed the other artifacts. Previous research suggested that machinelearning models are more affected by environmental variables than the inversion models with empirical calibration, as the one used in this study (Kibele and Shears, 2016;Duan et al., 2022). Li et al. (2019) applied the band-ratio algorithm (Stumpf et al., 2003) for depth retrieval and generated a manually selected water attenuation index for pristine viewing conditions in offshore waters, which may not always represent the water attenuation conditions in shallow areas. ...
Article
Full-text available
Monitoring the complex seafloor morphology that drives the functioning of shallow coastal ecosystems is vital for assessing marine activities. Satellite-derived bathymetry (SDB) can provide a crucial dataset for creating the bathymetry maps needed to understand hazards and impacts produced by climate change in vulnerable coastal zones. SDB is effective in clear water, but still has limitations in application to areas with some turbidity. Here, using the twin satellites Sentinel-2A/B, we integrate water quality information from the satellite with a multi-temporal compositing method to demonstrate a potential for comprehensively operational bathymetric mapping over a range of environments. The automated compositing method diminishes the turbidity impact in addition to inferring the maximum detectable depth and removing optically deep-water areas. Examining a wide range of conditions along the Caribbean and eastern coast of the U.S. shows detailed bathymetry as deep as 30 m at 10 m spatial resolution with median errors <1 m when compared to high-resolution lidar surveys. These results demonstrate that the model adopted can provide useful bathymetry in areas that do not have consistently clear water and can be extended across multiple geographic regions and optical conditions at local, regional, and national scales.
... These are implemented in various contexts [21][22][23][24][25], and rely on the availability of groundtruth depth measurements for model calibration compared to the analytical methods. The empirical methods do not necessarily require absolute radiometric and atmospheric corrections [22,26], and depending on model performance they can be applied on datasets with similar seafloor types [27]. In contrast, analytical methods account for any seafloor type included as model input [28][29][30][31]. ...
Article
Full-text available
Short-term changes in shallow bathymetry affect the coastal zone, and therefore their monitoring is an essential task in coastal planning projects. This study provides a novel approach for monitoring shallow bathymetry changes based on drone multispectral imagery. Particularly, we apply a shallow water inversion algorithm on two composite multispectral datasets, being acquired five months apart in a small Mediterranean sandy embayment (Chania, Greece). Initially, we perform radiometric corrections using proprietary software, and following that we combine the bands from standard and multispectral cameras, resulting in a six-band composite image suitable for applying the shallow water inversion algorithm. Bathymetry inversion results showed good correlation and low errors (<0.3 m) with sonar measurements collected with an uncrewed surface vehicle (USV). Bathymetry maps and true-color orthomosaics assist in identifying morphobathymetric features representing crescentic bars with rip channel systems. The temporal bathymetry and true-color data reveal important erosional and depositional patterns, which were developed under the impact of winter storms. Furthermore, bathymetric profiles show that the crescentic bar appears to migrate across and along-shore over the 5-months period. Drone-based multispectral imagery proves to be an important and cost-effective tool for shallow seafloor mapping and monitoring when it is combined with shallow water analytical models.
... These are implemented in various contexts [21][22][23][24][25], and rely on the availability of groundtruth depth measurements for model calibration compared to the analytical methods. The empirical methods do not necessarily require absolute radiometric and atmospheric corrections [22,26], and depending on model performance they can be applied on datasets with similar seafloor types [27]. In contrast, analytical methods account for any seafloor type included as model input [28][29][30][31]. ...
Preprint
Full-text available
Short-term changes in shallow bathymetry affect the coastal zone and therefore their monitoring is an essential task in coastal planning projects. This study provides a novel approach for monitoring shallow bathymetry change based on drone multispectral imagery. Particularly we apply a shallow water inversion algorithm on two composite multispectral datasets being acquired five months apart in a small Mediterranean sandy embayment (Chania, Greece). Initially, we perform radiometric corrections using proprietary software and following we combine the bands from standard and multispectral cameras resulting in a six-band composite image suitable for applying the shallow water inversion algorithm. Bathymetry inversion results showed good correlation and low errors (< 0.3m) with sonar measurements collected with an uncrewed surface vehicle (USV). Bathymetry maps and true-color orthomosaics assist in identifying morphobathymetric features representing crescentic bars with rip channel systems. The temporal bathymetry and true-color data reveal important erosional and depositional patterns, which were developed under the impact of winter storms. Furthermore, bathymetric profiles show that the crescentic bar appears to migrate across and along-shore over the 5-months period. Drone-based multispectral imagery proves to be an important and cost-effective tool for shallow seafloor mapping and monitoring when it is combined with shallow water analytical models.
Article
Full-text available
Accurate bathymetric data in shallow water is of increasing importance for navigation safety, coastal management, and marine transportation. Satellite-derived bathymetry (SDB) is widely accepted as an effective alternative to conventional acoustic measurements in coastal areas, providing high spatial and temporal resolution combined with extensive repetitive coverage. Many previous empirical SDB approaches are unsuitable for precision bathymetry mapping in various scenarios, due to the assumption of homogeneous bottom over the whole region, as well as the neglect of various interfering factors (e.g., turbidity) causing radiation attenuation. Therefore, this study proposes a bottom-type adaption-based SDB approach (BA-SDB). Under the consideration of multiple factors including suspended particulates and phytoplankton, it uses a particle swarm optimization improved LightGBM algorithm (PSO-LightGBM) to derive depth of each pre-segmented bottom type. Based on multispectral images of high spatial resolution and in situ observations of airborne laser bathymetry and multi-beam echo sounder, the proposed approach is applied in shallow water around Yuanzhi Island, and achieves the highest accuracy with an RMSE value of 0.85 m compared to log-ratio, multi-band, and classical machine learning methods. The results of this study show that the introduction of water-environment parameters improves the performance of the machine learning model for bathymetric mapping.
Article
Satellite-derived bathymetry (SDB), an important technology in marine geodesy, is advantageous because of its wide coverage, low cost, and short revisit cycle. At present, several different kinds of SDB methods exist, and their inversion accuracy is affected by algorithm performance, band selection, and sample distribution, among other factors. But these factors have not been adequately quantified and compared. In the present study, we evaluate the performances and highlight the best scenarios for applying the six classical empirical methods including the log-transformed single band (LSB), band ratio (BR), Lyzenga polynomial (LP), support vector regression (SVR), third order polynomial (TOP), and back propagation (BP) neural network. The results reveal that the number of training samples is important for the empirical SDB methods, and the TOP and BP methods need more training samples than other methods. Compared to the robust BR and LP methods, the TOP and BP methods can obtain high accuracy but are severely influenced by incomplete samples. In addition, experiments that prove the local minimum (poor robustness) problem of the BP method exist and cannot be ignored in the bathymetry field. The present study highlights the most suitable method for obtaining reliable SDB results and their applicability.
Article
Blue-economy, maritime, and exploration activities rely on precise knowledge of coastal bathymetry. Recent trends in Satellite-Derived Bathymetry (SDB) research have focused on various estimation methods using very-high-resolution satellite imagery and in situ data, but mostly in clear, transparent water. The Indian coastal region mostly has murky water, and the application of SDB techniques is further constrained near river mouths and deltas by the presence of numerous underwater rocks and the rich sediment carried by rivers. This study analyzes SDB in two study areas characterized as turbid, sediment-laden, and complex due to the presence of numerous underwater rocks. The objective of the research is to analyze and compare the performance of several univariate Machine Learning (ML) regression algorithms for SDB estimation using ASTER, Landsat-8, and Sentinel-2A spectral bands together with high-resolution in situ bathymetric data. The study evaluates ordinary linear regression, three robust linear methods (Huber, RANSAC, and Theil-Sen), and a non-linear Gaussian Process Regression algorithm, analyzing their efficacy in SDB estimation from a single spectral band of satellite imagery. The applied non-linear regression model estimated SDB with R² = 0.87, RMSE = 1.77 m, and MAE = 1.27 m to a depth of 30 m at site A, and R² = 0.91, RMSE = 1.51 m, and MAE = 1.17 m to a depth of 22 m at site B. The advantages offered by this approach include a minimal set of required parameters, short processing time, and the potential to be an alternative to hydrographic surveys.
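Of the robust regressors listed, Theil-Sen is the simplest to state: the slope is the median of all pairwise slopes, which makes the fit tolerant of outliers such as depth soundings over unmapped rocks. A self-contained sketch (in practice one would use scikit-learn's `TheilSenRegressor` rather than this O(n²) version):

```python
from itertools import combinations
from statistics import median

def theil_sen(xs, zs):
    """Theil-Sen fit: slope = median of pairwise slopes, robust to outliers.

    xs: predictor values (e.g. a single log-transformed band).
    zs: corresponding in situ depths.
    """
    slopes = [(zs[j] - zs[i]) / (xs[j] - xs[i])
              for i, j in combinations(range(len(xs)), 2)
              if xs[j] != xs[i]]
    slope = median(slopes)
    intercept = median(z - slope * x for x, z in zip(xs, zs))
    return slope, intercept
```

Because the median ignores the tails, a handful of bad soundings barely move the fitted line, unlike ordinary least squares.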
Article
Full-text available
Photo survey techniques are common for resource management, ecological research, and remote sensing but current data processing methods are cumbersome and inefficient. The Benthic Photo Survey (BPS) software described here was created to simplify the data processing and management tasks associated with photo surveys of underwater habitats. BPS is free and open source software written in Python with a QT graphical user interface. BPS takes a GPS log and jpeg images acquired by a diver or drop camera and assigns the GPS position to each photo based on time-stamps (i.e. geotagging). Depth and temperature can be assigned in a similar fashion (i.e. depth-tagging) using log files from an inexpensive consumer grade depth / temperature logger that can be attached to the camera. BPS provides the user with a simple interface to assign quantitative habitat and substrate classifications to each photo. Location, depth, temperature, habitat, and substrate data are all stored with the jpeg metadata (EXIF). BPS can then export all of these data in a spatially explicit point shapefile format for use in GIS. BPS greatly reduces the time and skill required to turn photos into usable data thereby making photo survey methods more efficient and cost effective. BPS can also be used, as is, for other photo sampling techniques in terrestrial and aquatic environments and the open source code base offers numerous opportunities for expansion and customization.
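The geotagging step that BPS automates — match each photo's EXIF timestamp to the nearest GPS fix in a track log — can be sketched in a few lines. This is a minimal illustration of the general technique, not BPS's actual implementation; the function name and the 30-second tolerance are assumptions:

```python
import bisect
from datetime import datetime, timedelta

def geotag(photo_times, gps_log, max_gap=timedelta(seconds=30)):
    """Assign each photo the GPS fix nearest in time.

    photo_times: photo EXIF timestamps (datetime objects).
    gps_log: list of (datetime, lat, lon), sorted by time.
    Returns a (lat, lon) per photo, or None if no fix is within max_gap.
    """
    log_times = [t for t, _, _ in gps_log]
    tagged = []
    for pt in photo_times:
        i = bisect.bisect_left(log_times, pt)
        # The nearest fix is either just before or just after the photo time.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(gps_log)]
        best = min(candidates, key=lambda j: abs(log_times[j] - pt))
        if abs(log_times[best] - pt) <= max_gap:
            tagged.append(gps_log[best][1:])
        else:
            tagged.append(None)
    return tagged
```

Depth-tagging from a depth/temperature logger works the same way, with (datetime, depth, temp) records in place of GPS fixes.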
Article
Full-text available
Coastlines, shoals, and reefs are some of the most dynamic and constantly changing regions of the globe. The emergence of high-resolution satellites with new spectral channels, such as the WorldView-2, increases the amount of data available, thereby improving the determination of coastal management parameters. Water-leaving radiance is very difficult to determine accurately, since it is often small compared to the reflected radiance from other sources such as atmospheric and water surface scattering. Hence, the atmospheric correction has proven to be a very important step in the processing of high-resolution images for coastal applications. On the other hand, specular reflection of solar radiation on nonflat water surfaces is a serious confounding factor for bathymetry and for obtaining the seafloor albedo with high precision in shallow-water environments. This paper describes, at first, an optimal atmospheric correction model, as well as an improved algorithm for sunglint removal based on combined physical and image processing techniques. Then, using the corrected multispectral data, an efficient multichannel physics-based algorithm has been implemented, which is capable of solving through optimization the radiative transfer model of seawater for bathymetry retrieval, unmixing the water intrinsic optical properties, depth, and seafloor albedo contributions. Finally, for the mapping of benthic features, a supervised classification methodology has been implemented, combining seafloor-type normalized indexes and support vector machine techniques. Results of atmospheric correction, remote bathymetry, and benthic habitat mapping of shallow-water environments have been validated with in situ data and available bionomic profiles providing excellent accuracy.
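The sunglint-removal step described above is commonly implemented as a regression-based correction: over a glint-affected sample (e.g. deep water), each visible band is regressed against the NIR band, and the per-pixel glint is subtracted as slope × (NIR − min NIR). A minimal sketch of that widely used scheme (not this paper's exact algorithm, which combines physical and image-processing techniques):

```python
def deglint(band, nir, sample_idx):
    """Regression-based sunglint removal for one visible band (sketch).

    band, nir: per-pixel reflectances for a visible band and the NIR band.
    sample_idx: indices of deep-water pixels used to fit the glint slope.
    Corrected value: R_i - b * (R_NIR - min(R_NIR over the sample)).
    """
    xs = [nir[i] for i in sample_idx]
    ys = [band[i] for i in sample_idx]
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    b = (n * sum(x * y for x, y in zip(xs, ys)) - sx * sy) / \
        (n * sum(x * x for x in xs) - sx * sx)
    nir_min = min(xs)
    return [r - b * (rn - nir_min) for r, rn in zip(band, nir)]
```

Any variation in the visible band that co-varies with NIR over optically deep water is attributed to surface glint and removed, leaving the water-leaving signal.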
Conference Paper
Full-text available
Seaweed forests are important habitats for many fishery species. However, declines in seaweed forests have been reported all over Japan. Mapping and monitoring the distribution of seaweed forests is necessary for understanding their present status and taking measures for their conservation. Since traditional diver visual observation is not efficient for large-scale mapping, an alternative method is required. Although satellite remote sensing is one such noteworthy method, only a few studies have been conducted, probably due to two main problems with mapping seaweed forests by remote sensing. The first is the difficulty of collecting field truth data. The second is the light attenuation effect of the water column, which makes analysis more difficult. We applied an efficient method to overcome these two problems. We selected the seaweed beds off Shimoda on the Izu Peninsula, Japan, as a study area. An IKONOS satellite image was used for analysis because its high spatial and radiometric resolutions are practical for seaweed mapping. We measured spectral reflectance profiles of seaweed and substrates in the study area. The results revealed the wavelength bands effective for distinguishing seaweeds from other substrates. Truth data for satellite image analysis and evaluation were collected in the field using a boat and an aquatic video camera, which allowed us to collect a large amount of truth data in a short time. Satellite image analysis was conducted using radiometric correction for the water column and maximum likelihood classification. The overall accuracy from the error matrix reached 97.9%. The results indicate the usefulness of the method for seaweed forest mapping.
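The water-column radiometric correction referred to above is typically Lyzenga's depth-invariant index: for a band pair, ln(L_i − L_si) − (k_i/k_j)·ln(L_j − L_sj) is approximately constant over depth for a given bottom type, so classification can proceed on the index rather than on raw radiance. A minimal sketch, assuming deep-water radiances and the attenuation-coefficient ratio are already known:

```python
import math

def depth_invariant_index(Li, Lj, Lsi, Lsj, k_ratio):
    """Lyzenga-style depth-invariant bottom index for a band pair (sketch).

    Li, Lj: observed radiances in bands i and j.
    Lsi, Lsj: deep-water (infinite-depth) radiances for each band.
    k_ratio: ratio of effective attenuation coefficients k_i / k_j,
             usually estimated from pixels of one bottom type at many depths.
    """
    return math.log(Li - Lsi) - k_ratio * math.log(Lj - Lsj)
```

With exponential attenuation, the depth term cancels between the two log-radiances, so pixels of the same substrate at different depths map to (nearly) the same index value.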
Chapter
Although microwaves cannot penetrate the water body, SAR can yield information on shallow-water bathymetry or underwater bottom topography. This is achieved indirectly, by sensing variations in sea surface roughness over the bathymetry when a strong current (usually a tidal current) is present, as SAR is very sensitive to small variations in small-scale sea surface roughness. At present, it is not possible to reliably invert SAR images, which are essentially sea surface roughness maps, into bathymetric maps using theoretical models. However, if SAR images are combined with acoustic sounding data acquired from ships, they are of great value for generating bathymetric maps and can thus greatly reduce the cost of updating depth charts in tidal areas.
Article
A new mechanism for remote bathymetric modeling that overcomes inherent deficiencies of previous remote bathymetric techniques is presented. Spectral observations indicate that the emergent radiance from water is dominated by water-column scattering rather than by bottom reflection, unless the water is very shallow and transparent or overlies a highly reflective bottom. Observation of this phenomenon has made it possible to develop a remote bathymetric model based on water-column scattering that can be applied to turbid and somewhat deep coastal waters. - from Authors
Article
Multispectral satellite remote sensing can predict shallow-water depth distribution inexpensively and exhaustively, but it requires many in situ measurements for calibration. To extend its feasibility, we improved a recently developed technique to obtain, for the first time, a generalized predictor of depth. We used six WorldView-2 images and obtained a predictor that yielded a 0.648 m root-mean-square error against a dataset with a 5.544 m standard deviation of depth. The predictor can be used with as few as two pixels of known depth per image, or with no depth data if only relative depth is needed.
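The "two pixels with known depth" claim corresponds to a simple two-point linear rescaling: a generalized predictor supplies relative depths, and two control depths fix the scale and offset. A hypothetical sketch of that calibration step (the function and argument names are illustrative, not from the paper):

```python
def two_point_calibration(rel_depths, known):
    """Rescale a relative-depth map to absolute depth from two control pixels.

    rel_depths: per-pixel relative depths from a generalized predictor.
    known: two (pixel_index, true_depth_m) pairs.
    Fits z = a * rel + b exactly through the two control points.
    """
    (i1, z1), (i2, z2) = known
    r1, r2 = rel_depths[i1], rel_depths[i2]
    a = (z2 - z1) / (r2 - r1)
    b = z1 - a * r1
    return [a * r + b for r in rel_depths]
```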
Article
Science, resource management, and defense need algorithms capable of using airborne or satellite imagery to accurately map bathymetry, water quality, and substrate composition in optically shallow waters. Although a variety of inversion algorithms are available, there has been limited assessment of performance and no work has been published comparing their accuracy and efficiency. This paper compares the absolute and relative accuracies and computational efficiencies of one empirical and five radiative-transfer-based published approaches applied to coastal sites at Lee Stocking Island in the Bahamas and Moreton Bay in eastern Australia. These sites have published airborne hyperspectral data and field data. The assessment showed that (1) radiative-transfer-based methods were more accurate than the empirical approach for bathymetric retrieval, and the accuracies and processing times were inversely related to the complexity of the models used; (2) all inversion methods provided moderately accurate retrievals of bathymetry, water column inherent optical properties, and benthic reflectance in waters less than 13 m deep with homogeneous to heterogeneous benthic/substrate covers; (3) slightly higher accuracy retrievals were obtained from locally parameterized methods; and (4) no method compared here can be considered optimal for all situations. The results provide a guide to the conditions where each approach may be used (available image and field data and processing capability). A re-analysis of these same or additional sites with satellite hyperspectral data with lower spatial and radiometric resolution, but higher temporal resolution, would be instructive for establishing guidelines for repeatable regional- to global-scale shallow-water mapping approaches.