IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING (AUTHOR’S MANUSCRIPT)
Non-Parametric Empirical Depth Regression for
Bathymetric Mapping in Coastal Waters
Jared Kibele and Nick T. Shears
Abstract—Existing empirical methods for estimation of
bathymetry from multispectral satellite imagery are based on
simplified radiative transfer models that assume transformed
radiance values will have a linear relationship with depth. How-
ever, application of these methods in temperate coastal waters of
New Zealand demonstrates that this assumption does not always
hold true and consequently existing methods perform poorly. A
new purely empirical method based on a non-parametric nearest
neighbor regression is proposed and applied to WorldView-
2 and WorldView-3 imagery of temperate reefs dominated by
submerged kelp forests interspersed with other bottom types of
varying albedo including reef devoid of kelp and large patches
of sand. Multibeam sonar data are used to train and validate
the model and results are compared with those from a widely
used linear empirical method. Free and open source Python
code was developed for the implementation of both methods
and is presented for use. Given sufficient training data, the
proposed method provided greater accuracy (0.8m RMSE) than
the linear empirical method (2.2m RMSE) and depth errors were
less dependent on bottom type. The proposed method has great
potential as an efficient and inexpensive means of estimating
high spatial resolution bathymetry over large areas in a wide
range of coastal environments.
Index Terms—Bathymetry mapping, depth estimation,
WorldView-2 (WV2), K Nearest Neighbor (KNN), free and open
source software (FOSS).
I. INTRODUCTION
BATHYMETRY maps are essential for navigation [1],
resource management [2], [3], [4], [5], and as an interme-
diate product used for mapping of coastal marine habitats [6],
[7], [8]. While the estimation of bathymetry using multispec-
tral satellite imagery may not provide the accuracy attainable
by boat-based acoustic methods, it does offer a number of
advantages. Image-based methods can provide bathymetry
estimates from the intertidal and shallow subtidal zone (e.g. <
5 m depth), where the collection of sonar data is problematic,
down to depths of around 20 m, depending on a number
of factors including water clarity and bottom reflectance.
Synthetic Aperture Radar (SAR) can provide detailed shallow-
water bathymetry when combined with acoustic methods in
areas with relatively strong currents [9] but is less reliable
when currents are weak. The primary advantages of image-
based methods are their comparatively low cost and the large
spatial scales over which bathymetry information can be
obtained from readily available imagery. Correspondingly, this
provides a potentially efficient and more accessible approach to
bathymetric mapping for researchers and resource managers.
Many methods have been described in the literature for
the estimation of depth from satellite imagery [10], [11].
These methods, however, have one or more of the following
drawbacks: 1) they are complex, difficult to implement, and
require specialized skills [11], 2) they depend on field
measurements made with costly equipment [12], and 3) they are
limited in applicability to a narrow range of environmental conditions
(e.g. clear water and/or uniform bottom reflectance) [1]. There-
fore current methods do not provide a widely accessible and
cost effective means of obtaining bathymetric data over large
spatial scales. Despite previous efforts aimed at making these
methods more accessible to workers outside the field of remote
sensing [13], there is still considerable specialized knowledge
required to choose and implement an effective method. The
difficulty of implementation is due in part to the complexity
of the methods, but is also compounded by the paucity of free
and open source software (FOSS) or even proprietary software
available for use. As it stands, the selection and application
of existing methods for depth estimation requires specialized
skills and knowledge that greatly restrict their use.
This paper proposes a method that can be carried out
easily and at low cost without extensive knowledge of optical
remote sensing. The method is based on K nearest neighbor
regression (KNN), a non-parametric method commonly used
in machine learning [14]. Only 2 inputs are required: the
multispectral image itself and a set of known depths to train
the KNN model. This method is fully empirical in that it is not
based on modeling the complex physics of the transmission
of light through water. The only assumption inherent in this
method is that radiance across the bands of the image will
have a similar relationship to depth in training pixels (the
subset with known depths) as in the unknown pixels. Conse-
quently, this method is potentially capable of producing useful
results across a broader range of environmental conditions
than previous methods based on simplified physical models.
Furthermore, atmospheric correction is not required and, if
applied, should have little effect. A free and open source Python library is
provided for application of the proposed method and related
tasks (http://jkibele.github.io/OpticalRS/).
In this study, the proposed method will be applied to
WorldView-2 (WV2) and WorldView-3 (WV3) imagery of
Cape Rodney in northeastern New Zealand (Fig. 1), using
depth information from a multibeam sonar survey both to
train the model and to assess accuracy.

Fig. 1. RGB composite of the WV2 imagery of the study region in northeastern New Zealand, overlaid with the footprint of the multibeam survey: depths
less than 20 m are outlined in red and depths greater than 20 m are outlined and hatched with black. An area of sandy bottom is outlined and hatched in
yellow and a small area of kelp-covered bottom is shown in green.

The resulting
bathymetry is then compared with that from a physics-based
empirical depth estimation method [15] applied to the same
data. Lyzenga’s method was chosen because it has been widely
used and can be applied, like the proposed KNN method, using
imagery and a subset of known depths as the only required
inputs.
This paper is laid out as follows: Section II describes the
satellite imagery and the multibeam bathymetric data used
in this study. Section III describes the preprocessing of the
imagery and depth data. Section IV describes the application
of the proposed KNN method and the application of Lyzenga’s
depth estimation method to the same data and presents the
resulting bathymetry from both methods. Section V presents
an assessment and comparison of the performance of the two
methods in regard to overall accuracy, spatial distribution of
error, sensitivity to quantity of training points, and estimate
performance relative to maximum depth. Section VI discusses
the implications of the results and provides final conclusions.
II. DATA ACQUISITION
The study area is the waters in and around the Cape
Rodney to Okakari Point (CROP) marine reserve, northeastern
New Zealand. CROP is New Zealand’s oldest marine reserve
[16] and has been the subject of multiple habitat mapping
studies since its inception in 1977 [17], [18], [19]. CROP is
located on an open coast in the outer Hauraki Gulf away from
large riverine inputs of sediment, with mean total suspended
solid (TSS) measurements of 4.13 mg/L [20], Secchi depths
typically between 6.5 and 10 meters (1st and 3rd quartile)
(Shears, unpublished data), and chlorophyll-a concentrations
with a mean of 0.87 µg/L (Leigh Marine Laboratory, unpub-
lished data). These waters are relatively clear for a temperate
reef environment but represent a challenge for optical remote
sensing methods compared to coral reef environments where
optical remote sensing methods are more frequently employed.
Australia’s Great Barrier Reef, for example, has average
Secchi depths of 11.5 meters and an average chlorophyll
concentration of 0.32 µg/L [21] with suspended particulate
matter (SPM) concentrations near 2 mg/L [22]. Bottom types
within the study area range from high albedo terrigenous sands
[23] to dense stands of the kelp Ecklonia radiata with very
low albedo. A range of other habitats form bottom types of
intermediate albedo [17], [24].
A. Multibeam Sonar
The multibeam depth data were collected by Discovery
Marine Ltd. (www.dmlsurveys.co.nz) in May 2014. The survey
was completed using an R2Sonic 2022 multibeam echosounder
on board the 7 m survey vessel Pandora. Positioning was via a
Trimble RTK GNSS and a POS MV motion compensator. Sound
velocity profiles were recorded with an AML Minos SVP.
Data were acquired with QINSy 8.1 navigation software and
processed with QINSy 8.1 and Fledermaus 7.4. Tidal data were
recorded at 5 minute intervals for the duration of the survey
and the bathymetric data were reduced to Auckland MSL 1946
vertical datum and supplied to the authors as xyz point data.
B. Multispectral Imagery
The 8-band WV2 and WV3 imagery for this study was
supplied by the DigitalGlobe Foundation under an imagery
grant. The WV2 image was acquired by the satellite at 22:53
UTC on 18 January 2014. Solar azimuth and elevation were
66.1° and 60.1° respectively. Satellite azimuth and elevation
were 46.2° and 63.8°, and the off-nadir view angle was 23.1°.
The imagery was geometrically corrected and orthorectified
by DigitalGlobe and delivered at the ’Standard 2A’ product
level [25].
The WV3 image was acquired by the satellite at 22:15 UTC
on 12 January 2015. Solar azimuth and elevation were 72.6°
and 57.5° respectively. Satellite azimuth and elevation were
62.8° and 72.8°, and the off-nadir view angle was 15.6°.
The imagery was geometrically corrected and orthorectified
by DigitalGlobe and delivered at the ’Standard 2A’ product
level [25].
III. DATA PREPROCESSING
WV2 imagery, WV3 imagery, and multibeam data sets
required some preprocessing before the KNN and Lyzenga
[26], [15] depth estimation methods could be applied.
A. Multibeam Sonar
Multibeam depths needed adjustment for tide and to be
matched to the imagery in terms of resolution and projection.
The xyz point data were converted to GeoTiff format using
GRASS [27], reprojected to UTM zone 60 south, and corrected
to chart datum using data from Land Information New Zealand
(LINZ) [28]. Additional data from LINZ were then used
to calculate the height of tide above chart datum at image
acquisition time. This was found to be 2.34m and 1.74m
for the WV2 and WV3 images respectively. These values
were added to chart datum depths to create depth data sets
tailored to each image, and the resulting GeoTiffs were then
downsampled to match the spatial resolution of the WV2
imagery using QGIS [29].
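These two adjustments (adding the tide height at acquisition time to the chart-datum depths, then resampling to the image grid) can be reproduced with any standard FOSS GIS stack. The following is a minimal sketch using the GDAL Python bindings instead of the GRASS/QGIS workflow described above; the file names, nodata handling, and use of gdal.Warp are illustrative assumptions, not the authors' exact code.

```python
# Illustrative sketch: adjust chart-datum depths by the tide height at image
# acquisition time and resample to the 2 m WV2 grid. File names are placeholders
# and the input GeoTiff is assumed to already be in UTM zone 60 south.
import numpy as np
from osgeo import gdal

TIDE_ABOVE_CHART_DATUM = 2.34  # metres above chart datum for the WV2 scene

src = gdal.Open("multibeam_chart_datum.tif")
band = src.GetRasterBand(1)
depths = band.ReadAsArray().astype(np.float32)
nodata = band.GetNoDataValue()
valid = depths != nodata if nodata is not None else np.isfinite(depths)
depths[valid] += TIDE_ABOVE_CHART_DATUM  # depth below sea surface at acquisition

drv = gdal.GetDriverByName("GTiff")
out = drv.Create("multibeam_wv2_tide.tif", src.RasterXSize, src.RasterYSize,
                 1, gdal.GDT_Float32)
out.SetGeoTransform(src.GetGeoTransform())
out.SetProjection(src.GetProjection())
out.GetRasterBand(1).WriteArray(depths)
if nodata is not None:
    out.GetRasterBand(1).SetNoDataValue(nodata)
out.FlushCache()

# Downsample to the 2.0 m WV2 pixel grid with bilinear resampling
# (equivalent to a gdalwarp call).
gdal.Warp("multibeam_2m.tif", "multibeam_wv2_tide.tif",
          xRes=2.0, yRes=2.0, resampleAlg="bilinear")
```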
B. Multispectral Imagery
In order to facilitate a direct comparison, the higher spatial
resolution (1.4m) WV3 imagery was downsampled (using
bilinear resampling) to match the WV2 imagery (2.0m) using
GDAL warp [30]. The WV2 and WV3 imagery required minor
preprocessing that was common to both depth estimation
methods and some additional preprocessing for the Lyzenga
method. The land was masked from the images by thresholding
the NIR2 band and eliminating unmasked clumps with fewer
than 2000 pixels. Then integer digital number (DN) values
were converted to floating point, rescaled to the
interval [0, 1], and denoised using bilateral filtering [31] as
implemented in the scikit-image Python library [32]. These
denoised images became the input for the KNN method, from
which radiance values ($L$ in Eq. 4) were derived.
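A minimal sketch of this common preprocessing using scikit-image [32] is given below. The array layout (rows by columns by 8 bands, with NIR2 last), the NIR2 threshold, and the bilateral filter parameters are illustrative assumptions rather than the values used in OpticalRS, and the `channel_axis` argument assumes a recent scikit-image release.

```python
# Sketch of the common preprocessing, assuming an (rows, cols, 8) array of
# digital numbers `img_dn` with NIR2 as the last band. Threshold and filter
# parameters are illustrative.
import numpy as np
from skimage.morphology import remove_small_objects
from skimage.restoration import denoise_bilateral

def preprocess(img_dn, nir2_threshold=200, min_clump_px=2000):
    # Land mask: bright NIR2 pixels, keeping only clumps of >= 2000 pixels.
    land = remove_small_objects(img_dn[..., -1] > nir2_threshold,
                                min_size=min_clump_px)
    # Convert integer DNs to floats and rescale to [0, 1].
    img = img_dn.astype(np.float64) / float(img_dn.max())
    # Edge-preserving denoising with a bilateral filter [31], [32].
    img = denoise_bilateral(img, sigma_color=0.05, sigma_spatial=1,
                            channel_axis=-1)
    img[land] = np.nan  # mask land pixels
    return img, land
```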
Additional preprocessing for the Lyzenga method was based
on the steps outlined in [15] but modified slightly to suit
the imagery used in this study. Due to the low reflectance
of kelp dominated bottoms and relatively high water column
backscatter, the Lyzenga et al. method of selecting shallow-
water pixels based on thresholding the blue and green bands
was not used. Instead, only pixels with known depths (from
the multibeam survey) were used. There was very little sun
glint apparent in the WV2 imagery, but the sun glint removal
algorithm described in section III of [15] was applied in order
to replicate the original methods as closely as possible. After
downsampling and denoising, so little sun glint was apparent
in the WV3 image that this step was deemed unnecessary.
Deep-water pixels in each image were designated by calculating
pixel brightness and using a 3×3 moving window to mask
pixels with fewer than 50% of neighboring pixels above the
10th percentile of brightness. These deep-water pixels were
used to calculate the deep-water means and standard deviations
specific to each image. Transformation according to equation
7 from [33],

$X_i = \ln(L_i - L_{si})$   (1)

(where $L_i$ is the radiance in band $i$ and $L_{si}$ is the average deep-water
pixel radiance in band $i$) resulted in too many undefined
values because $L_{si} > L_i$ for many pixels in both scenes.
This problem has been previously noted [34], [35] and will
be discussed in detail later. In order to proceed with the WV2
image, the mean deep-water radiance minus 2 standard deviations
($L_{i\infty}$) was used in place of the mean deep-water radiance ($L_{si}$)
[36], [37], so instead of equation 1, transformed radiance ($X_i$)
was calculated according to equation 1 from [36],

$X_i = \ln(L_i - L_{i\infty})$   (2)
For the WV3 image, 4 standard deviations were subtracted from
the mean deep-water radiance because 2 standard deviations left
too many pixels undefined. For complete details on imagery
preprocessing, including the Python code used, refer to the
OpticalRS documentation (http://jkibele.github.io/OpticalRS/).
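For reference, the transformation of Eq. 2 reduces to subtracting a per-band deep-water offset and taking the natural log of the positive residuals. The sketch below is illustrative and is not the OpticalRS implementation; `img` is assumed to be the masked (rows, cols, bands) radiance array and `deep_mask` a boolean mask of the deep-water pixels selected as described above.

```python
# Sketch of the radiance transformation in Eq. (2); variable names are
# illustrative. `n_std` is 2 for the WV2 image and 4 for the WV3 image.
import numpy as np

def transform_radiance(img, deep_mask, n_std=2.0):
    deep = img[deep_mask]                                  # (n_deep, bands)
    L_inf = deep.mean(axis=0) - n_std * deep.std(axis=0)   # per-band offset
    diff = img - L_inf                                     # broadcast over bands
    # X_i is undefined (NaN) wherever L_i <= L_i_infinity.
    return np.log(np.where(diff > 0, diff, np.nan))
```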
IV. DEPTH ESTIMATION
This section describes the KNN method and the Lyzenga
method, their application, and then presents the resulting
bathymetry generated by both methods. Both estimation meth-
ods were implemented using the Python programming lan-
guage. The code used is available as part of the OpticalRS
Python library (http://jkibele.github.io/OpticalRS/).
Fig. 2. Multibeam measured depth compared to KNN and Lyzenga Method
depths estimated from WV3 imagery.
A. KNN Regression Depth Estimation
KNN regression is a method for estimating a continuous
dependent variable based on a number (K) of the nearest
training examples in feature space [38]. KNN methods make
no assumptions about the distribution or linearity of data.
All training data are retained in memory, and a prediction for
a new point is produced by averaging the dependent-variable
values of the K nearest training samples, where nearness is
measured with a distance metric in the space of independent
variables [39]. Consequently, the KNN regression
method is incapable of making predictions beyond the range
of training data provided.
In the case at hand, the continuous dependent variable
is the depth and the independent variables are the radiance
values in the 8 bands of the WV2 image. When the KNN
model is trained, each training pixel (those with known depth)
is retained in memory as a sample (L) with 8 independent
variables ($L_1, L_2, \ldots, L_8$) and a known depth (z). Once the
model has been trained, the depth estimate ($\hat{z}$) for a pixel
with radiance values $L'$ is calculated as:

$\hat{z} = \frac{1}{K}\sum_{i=1}^{K} z_i$   (3)

for the K (K = 5 for this study) nearest values of z as
determined by the Euclidean distance (d) between $L'$ and the
training samples. d is calculated as:

$d(L, L') = \sqrt{\sum_{i=1}^{n} (L_i - L'_i)^2}$   (4)

Fig. 3. Estimated vs. measured depths for the KNN model and the Lyzenga
model. WV3 estimates are shown across the top panel and WV2 estimates
on the bottom. Some estimates of the Lyzenga method lie below the extent
of the y axes.
Stated simply, once the KNN model is supplied with training
data, unknown depths are estimated to be the average depth of
the K (5) most similar (in terms of radiance across all bands)
pixels with known depths, and all estimated depths will lie
between the minimum and maximum depth of the training
points.
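A minimal sketch of this regression using scikit-learn's `KNeighborsRegressor` [53], which defaults to the Euclidean metric and uniform averaging of Eqs. 3 and 4, is given below. The array names (`img`, `depth`) are illustrative assumptions and do not reflect the OpticalRS API.

```python
# Sketch of the KNN depth regression with scikit-learn [53]. `img` is the
# denoised (rows, cols, 8) radiance array and `depth` holds multibeam depths
# with NaN where no sounding exists; both names are illustrative.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

pixels = img.reshape(-1, img.shape[-1])          # one row of radiances per pixel
z = depth.ravel()
water = ~np.isnan(pixels).any(axis=1)            # unmasked (non-land) pixels
train = water & ~np.isnan(z)                     # pixels with known depth

knn = KNeighborsRegressor(n_neighbors=5)         # K = 5, Euclidean, uniform mean
knn.fit(pixels[train], z[train])

z_hat = np.full(z.shape, np.nan)
z_hat[water] = knn.predict(pixels[water])        # Eq. (3) for every water pixel
depth_est = z_hat.reshape(depth.shape)
```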
B. Lyzenga Depth Estimation
The Lyzenga method for depth estimation assumes that
scattering effects can be removed and that a linear relationship
between depth and transformed radiance can be achieved using
the transformation in Equation 1 such that:
$\hat{z} = h_0 + \sum_{i=1}^{N} h_i X_i$   (5)
for N bands of a multispectral image. Using a training set
of pixel transformed radiance values and known depths, the $h_i$
values can be obtained via linear regression [26]. With 8-band
WV2 imagery, N can be anywhere between 1 and 8.
Although the use of all 8 bands (N = 8) resulted in a
modest (0.1 m) improvement in RMSE, N = 2 was used in
this study for consistency with most other applications of the
method. Previous efforts have been made to find generalized $h_i$
parameters that work across different images, but more accurate
results are obtained by deriving $h_i$ parameters specific to the
image under consideration [15]. The image-specific approach
was employed in this study.
The following steps are used to train the Lyzenga depth
estimation model [15]. First, the optimal pair of bands is
selected. For each of the $\binom{8}{2} = 28$ possible combinations,
ordinary least squares (OLS) regression is conducted with the 2 selected
$X_i$ values as independent variables and corresponding depths
as the dependent variable. The band pair with the highest $r^2$
value is selected as the optimal combination. Then the $h_i$
(intercept and slopes) parameters are determined (via OLS)
and recorded for this band combination.
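A sketch of this training step, implemented directly with NumPy rather than the OpticalRS functions, is shown below; the variable names are illustrative, and `X` is assumed to hold the transformed radiance of Eq. 2.

```python
# Sketch of the Lyzenga training step: exhaustive search over the 28 band
# pairs for the OLS fit with the highest r^2. Names are illustrative.
import itertools
import numpy as np

def fit_lyzenga(X, depth, train):
    Xt = X.reshape(-1, X.shape[-1])[train.ravel()]
    z = depth.ravel()[train.ravel()]
    best = None
    for i, j in itertools.combinations(range(Xt.shape[-1]), 2):
        A = np.column_stack([np.ones_like(z), Xt[:, i], Xt[:, j]])
        ok = np.isfinite(A).all(axis=1)                      # drop undefined X
        h, *_ = np.linalg.lstsq(A[ok], z[ok], rcond=None)    # h = (h0, hi, hj)
        pred = A[ok] @ h
        r2 = 1.0 - ((z[ok] - pred) ** 2).sum() / ((z[ok] - z[ok].mean()) ** 2).sum()
        if best is None or r2 > best[0]:
            best = (r2, (i, j), h)
    return best  # (r^2, optimal band pair, regression parameters)
```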
Once the $h_i$ parameters have been determined, the model
can be said to be trained, and depth estimation for the remaining
pixels is simply a matter of executing equation 5 with the
determined parameters. When N = 2, the equation takes the
form:

$\hat{z} = h_0 + h_i X_i + h_j X_j$   (6)

for the band combination of bands $i$ and $j$.
For the WV2 training data used in this study, the optimal
band combination ($r^2 = 0.63$) was found to be bands 2 and
3 (478 nm and 546 nm). The $h_i$ parameters were determined
via OLS and resulted in the following depth estimation model:

$\hat{z} = 17.08 + 16.06 X_2 - 16.16 X_3$   (7)

For the WV3 training data used in this study, the optimal
band combination ($r^2 = 0.63$) was also found to be bands
2 and 3. The $h_i$ parameters were determined via OLS and
resulted in the following depth estimation model:

$\hat{z} = -8.83 + 10.11 X_2 - 17.46 X_3$   (8)
C. Depth estimation comparison
To compare the efficacy of the KNN and the Lyzenga
methods, each was applied to the WV2 and WV3 input data.
Multibeam depths at image acquisition time (z) were masked
where greater than 20 m (Fig. 1) and WV radiance values (L
for the KNN method and $X$ (Equation 2) for the Lyzenga
method) were masked to match. This resulted in 644,953
and 699,001 pixels with known depth and radiance for the
WV2 and WV3 images respectively. The difference is due to
tide height differences. Each model was trained with 300,000
pixels and used to estimate the depth of all unmasked pixels.
The resultant bathymetry maps produced by the two methods
using the WV3 imagery are shown in Fig. 2 along with
the Multibeam depths for visual comparison. The results for
the WV2 image are, visually, very similar. The RMSE was
calculated for the pixels not used in model training (Fig.
3). The KNN method provided a better approximation than
the Lyzenga method as indicated by the RMSE values for
both WV2 (KNN: 1.54m, Lyzenga: 2.54m) and WV3 (KNN:
0.79m, Lyzenga: 2.22m). In contrast to the Lyzenga depth
estimates, the KNN estimated depths do not exceed 20m,
giving the KNN scatter plots in Fig. 3 a clipped appearance.
This is due to the fact that KNN regression will not predict
values beyond the range of training data.
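For reference, the comparison amounts to a random split of the known-depth pixels followed by an RMSE calculation on the held-out set. A sketch, reusing the illustrative `pixels`, `z`, and `train` arrays assumed in the KNN sketch above:

```python
# Sketch of the accuracy comparison: hold out all but 300,000 of the known-
# depth pixels and compute RMSE on the held-out set. Names are illustrative.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
known = np.flatnonzero(train)            # pixels with known depth <= 20 m
rng.shuffle(known)
tr, te = known[:300_000], known[300_000:]

knn = KNeighborsRegressor(n_neighbors=5).fit(pixels[tr], z[tr])
resid = knn.predict(pixels[te]) - z[te]
rmse = float(np.sqrt(np.mean(resid ** 2)))
```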
V. ACCURACY ASSESSMENT
Further testing was conducted to compare the performance
of the methods across variations in: A) bottom albedo, B) size
of training set, and C) maximum depth.
A. Sensitivity to Variation in Bottom Albedo
To assess differences in response to varying bottom albedo,
the spatial distribution of error was compared between meth-
ods. Errors were calculated for all available pixels for both
estimation methods (Fig. 4). Visual inspection of Fig. 4
indicates that errors from the WV2 image were related to
bottom albedo (or habitat, e.g. kelp vs. sand) in both models,
but more so for the Lyzenga model. Errors were isolated for the
kelp and sand areas identified in Fig. 1 and analysis confirmed
this impression. The kelp and sand areas were manually
outlined based on unpublished data collected using Benthic
Photo Survey [40] as well as previous habitat mapping efforts
[17], [18], [19]. For the KNN method the average errors were
0.05m and 0.81m in the sand and kelp areas respectively. For
the Lyzenga method average errors were -1.55m and 2.19m.
Visual inspection of the WV3 prediction errors in Fig. 4
indicates that Lyzenga method errors are related to differences
in bottom albedo but KNN method errors show no discernible
pattern. For the WV3 image the average KNN errors were
-0.01m and 0.05m for the sand and kelp respectively while
the Lyzenga method average errors were -0.14m and -0.72m.
The much larger difference between bottom types in the average
errors for the Lyzenga method, for both images, indicates that
the KNN method is much less sensitive to variations in bottom
albedo.
B. Sensitivity to Number of Training Points
To investigate the relative sensitivity to the size of the
training set for each model, training and estimation procedures
were carried out repeatedly with varying proportions of data
points assigned to training and evaluation. The number of
training points ranged from 10 to 500k (80% of the full data
set) on a 15-step logarithmic scale. Each model was trained
and tested (on the remaining non-training points) ten times at
each training set size. The randomization procedure for the
selection of training points was altered each time. Figure
5 displays the resulting RMSE values for the WV3 image as
a function of training set size for both models.
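This experiment can be expressed as a loop over logarithmically spaced training-set sizes with repeated random splits. The sketch below is illustrative and again reuses the array names assumed in the earlier sketches:

```python
# Sketch of the training-set-size experiment: 15 sizes from 10 to 500k on a
# log scale, ten random splits per size, RMSE on the remaining pixels.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

sizes = np.unique(np.logspace(1, np.log10(500_000), 15).astype(int))
rng = np.random.default_rng(0)
rmse_by_size = {}
for n in sizes:
    scores = []
    for _ in range(10):
        known = rng.permutation(np.flatnonzero(train))
        tr, te = known[:n], known[n:]
        knn = KNeighborsRegressor(n_neighbors=5).fit(pixels[tr], z[tr])
        resid = knn.predict(pixels[te]) - z[te]
        scores.append(np.sqrt(np.mean(resid ** 2)))
    rmse_by_size[int(n)] = float(np.mean(scores))
```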
Initially, the accuracy of both models improves very quickly
with additional training points (Fig. 5). The Lyzenga model ap-
proaches its maximum accuracy (approximately 2.2m RMSE)
with as few as 100 training points.

Fig. 4. Estimated depth minus multibeam depth for the two different models.

Fig. 5. Depth estimation model error from WV3 imagery as a function of
training set size on a logarithmic scale.

However, the accuracy
of the KNN method continues to improve while the Lyzenga
method levels off. At around 150 - 200 training points, the
KNN method provides more accurate depth estimates. With
1000 training points, the KNN RMSE was better by more
than 0.5 m compared to the Lyzenga method. The results for
the WV2 image followed the same pattern and differed only
in that the maximum accuracies obtained by both methods
were somewhat lower, as indicated in Fig. 3. The shapes of the
curves and their crossing point were very similar.
C. Performance at Varying Maximum Depths
The WV3 image and corresponding depth data were restricted
to various maximum depths between 5 and 30 meters.
These depth-restricted data sets were randomly partitioned
into training sets of 1500 points with the remaining points
comprising a corresponding test set. A KNN model was trained
and tested with each set, and the RMSE, mean error, and
standard error were calculated for each depth increment. The
entire procedure was then repeated for the Lyzenga model.

Fig. 6. RMSE and mean error ± standard error of the mean for each method
across different maximum depths using WV3 imagery.
The KNN method provided better accuracy at all depth
ranges (Fig. 6) and this advantage (measured as Lyzenga
RMSE minus KNN RMSE) increased as the maximum depth
increased from approximately 0.3m with a maximum depth
of 5m to 1.6m with a maximum of 30m. The mean error for
the Lyzenga method stayed within 0.16 m of 0, while the mean
error for the KNN method dropped as low as 0.25 m below 0 with
increasing maximum depth. The same analysis conducted with
the WV2 data yielded very similar results.
Fig. 7. Depth vs. transformed radiance for WV3 bands 1-4. Yellow represents
pixels over sand bottom. Green represents pixels over kelp bottom. Lowess
curves are included to illustrate the trends.
VI. DISCUSSION
The KNN method used in this study provided a better
estimation of depth (WV3 RMSE = 0.79m, WV2 RMSE =
1.54m) than the Lyzenga method (WV3 RMSE = 2.22m, WV2
RMSE = 2.54m) under the conditions represented in the study
region and images analysed. In previous studies the Lyzenga
method yielded lower RMSE values (ranging from 0.7m to
2.4m) [26], [15], [41] than those reported in this study. The
higher RMSE in this study is likely due to differences in
environmental conditions, including higher levels of suspended
solids and chlorophyll, and the presence of a bottom type with
very low albedo. Specifically, the low radiance values over
kelp and the relatively high reflectance of the water column
in the blue bands (band 1: 427 nm and band 2: 478 nm)
resulted in what Ji et al. called “over deduction” [34]. The
modified transformation expressed in Eq. 2 makes it possible
to apply the Lyzenga method where it would not otherwise
be possible, but it does not address the underlying limitation
of Lyzenga’s transformation (Eq. 1). The Lyzenga method is
based on a quasi-single-scattering approximation (QSSA) of
the radiative transfer equation (RTE) [33] and predicated on
the transformation of observed radiance producing a negative
linear relationship with depth across all bottom types. The
dependence on this transformation represents an inherent as-
sumption about the range of bottom albedo and the optical
properties of the water where the depth is to be estimated.
In the current study, this linear relationship was achieved
for some bottom types but not others because these inherent
assumptions were violated by the environmental conditions
(low bottom albedo and reflective water column in the blue
bands) in the study area.
The relationship between transformed radiance (according
to Eq. 2) and depth for WV3 pixels over both sand and kelp
are shown in Fig. 7 (see Fig. 1 for locations of kelp and
sand). The sand pixels generally follow the negative linear
relationship that the Lyzenga method is based on [33], but
the kelp pixels do not. Large brown macroalgae such as
the kelp Ecklonia radiata have very low reflectance in the
wavelengths of WV2 (and WV3) bands 1 and 2 [42]. QSSA
predicts that, due to larger relative influence of water column
scattering effects, radiance will increase with depth rather
than decrease once the bottom albedo drops below a certain
threshold [43] (see Fig. 6 and 7). The threshold varies with
wavelength, water clarity, and other factors in a manner that
is beyond the scope of this paper, but it is useful to note
that the threshold increases with backscatter (i.e. increased
turbidity and/or chlorophyll). Most wavelength and bottom
albedo combinations were above this threshold for the WV2
and WV3 imagery used in this study, but the albedo of kelp
was below for bands 1 and 2 (Fig. 7). Examination of the blue
bands on their own corroborates this relationship; the shallow
kelp-dominated areas appear much darker than optically deep
water when band 1 (or 2) is viewed as a grayscale image.
Consequently, the radiance transformation did not consistently
produce the expected negative linear relationship with depth
across all bottom types, and the Lyzenga method did not
perform as well in this study as it has elsewhere.
The KNN method produced more accurate results in the
present study compared to the Lyzenga method. However,
unlike the Lyzenga method, KNN does not lend itself to
physical interpretations of sources of error. Likely sources of
error include sensor noise, water column heterogeneity, and
insufficient bottom type separability. There was less sensor
noise visible in the WV3 image than in the WV2 image
and it is likely that this, in addition to possible differences
in sediment load, contributed to the higher accuracy obtained
using the WV3 image. This method has not yet been tested in
less turbid water with a smaller range of bottom albedo (e.g.
tropical coral reefs), but it seems likely that it will perform
at least as well and likely better in less turbid conditions than
those present in the current study. No effort was made in this
study to stratify the sampling of training points over depth
or bottom type. It is however likely that such stratification
could improve the performance of both methods, particularly
when training sets are small. It must also be noted that
the Lyzenga method can estimate depths beyond the range
of training data provided to it, while the KNN method cannot.
In some situations, such as the present study, where the
range of training depths is known to closely approximate the
range of true depths, this difference proves advantageous to
the KNN method in terms of accuracy, but where estimation
beyond the range of available training data is required, the
Lyzenga method is more appropriate and may provide better
overall accuracy. The Lyzenga method has some potential
for generalized use (resulting in reduced accuracy) without
training for individual images [15], [44]. The KNN method,
being entirely empirical, must be trained for each image
separately.
The KNN regression method presented here requires less
image preprocessing and can, with sufficient training data,
estimate bathymetry in difficult environmental conditions with
much less error than the widely used Lyzenga method. The
KNN estimates for depths less than 20m had an RMSE that
was over 1.4m lower than that of the Lyzenga method for
the WV3 image and approximately 1m lower for the WV2
image (Fig. 3). Errors were noticeably less influenced by
bottom type in the KNN method (Fig. 4), and that impression
was confirmed by the examination of average errors over 2
different bottom types. Both methods showed similar decreases
in accuracy with increasing maximum depth (Fig. 6), but the
KNN method was consistently more accurate. The only way
in which the Lyzenga method performed better than the KNN
method was when small numbers (<150) of training points
were used (Fig. 5).
This study demonstrates that the KNN method can outper-
form the Lyzenga method in conditions where the inherent
assumptions of Lyzenga’s method are violated. Further study
is necessary, via numerical simulation with RTE or real world
data collected under a range of conditions, to assess the
wider applicability of the KNN method. This method and
its FOSS implementation could prove to be a valuable tool
for researchers and resource managers who require low-cost
bathymetric estimates of optically shallow waters. The only
data requirements are multispectral imagery and a set of
geolocated depth measurements from the field. As proof of
concept, this study used multibeam sonar depths, but these
field data can also be collected with relatively inexpensive
and easily portable equipment [40]. This is critical where
resources for bathymetry mapping are limited and in locations
where other methods such as lidar or multibeam sonar are
not practical or appropriate. This method helps to expand
the already considerable value of high resolution multispectral
satellite imagery for marine applications [45], [46], [47], [48],
and could prove especially valuable as a source of depth data
for water column correction [49] to aid in the mapping of
submerged habitats [50], [51], [52].
ACKNOWLEDGMENT
The authors would like to thank Auckland Council for
funding this research, DigitalGlobe Foundation for the imagery
grant, Leigh Marine Laboratory for funding the multibeam
survey, and Waikato University along with Discovery Marine
Ltd. for carrying it out. Additional funding was provided
by the Royal Society of New Zealand Rutherford Discovery
Fellowship to NTS (RDFUOA1103). Thanks to Prof. John
Philip Matthews and an anonymous reviewer for their helpful
comments. Thanks also to the open source software commu-
nity and particularly to those behind QGIS [29], GDAL, Scikit-
learn [53], Scikit-image [32], and IPython [54].
REFERENCES
[1] A. H. Benny and G. J. Dawson, “Satellite Imagery as an Aid to
Bathymetric Charting in the Red Sea,” The Cartographic Journal,
vol. 20, no. 1, pp. 5–16, Jun. 1983.
[2] D. L. Jupp, K. K. Mayo, D. A. Kuchler, D. V. R. Claasen, R. A.
Kenchington, and P. R. Guerin, “Remote sensing for planning and
managing the great barrier reef of Australia,” Photogrammetria, vol. 40,
no. 1, pp. 21–42, Sep. 1985.
[3] E. Saarman, M. Gleason, J. Ugoretz, S. Airam, M. Carr, E. Fox,
A. Frimodig, T. Mason, and J. Vasques, “The role of science in support-
ing marine protected area network planning and design in California,”
Ocean & Coastal Management, 2012.
[4] A. Jordan, M. Lawler, V. Halley, and N. Barrett, “Seabed habitat map-
ping in the Kent Group of islands and its role in Marine protected area
planning,” Aquatic Conservation: Marine and Freshwater Ecosystems,
vol. 15, no. 1, pp. 51–70, 2005.
[5] M. Merrifield, W. McClintock, C. Burt, E. Fox, P. Serpa, C. Steinback,
and M. Gleason, “MarineMap: A web-based platform for collaborative
marine protected area planning,” Ocean & Coastal Management, 2012.
[6] P. N. Bierwirth, T. J. Lee, and R. V. Burne, “Shallow Sea-Floor Re-
flectance and Water Depth Derived by Unmixing Multispectral Imagery,”
Photogrammetric Engineering and Remote Sensing; (United States), vol.
59:3, Mar. 1993.
[7] T. Sagawa, E. Boisnier, T. Komatsu, K. B. Mustapha, A. Hattour,
N. Kosaka, and S. Miyazaki, “Using bottom surface reflectance to map
coastal marine areas: a new application method for Lyzenga’s model,”
International Journal of Remote Sensing, vol. 31, no. 12, pp. 3051–3064,
2010.
[8] F. Eugenio, J. Marcello, and J. Martin, “High-Resolution Maps of
Bathymetry and Benthic Habitats in Shallow-Water Environments Using
Multispectral Remote Sensing Imagery,” IEEE Transactions on Geo-
science and Remote Sensing, vol. 53, no. 7, pp. 3539–3549, Jul. 2015.
[9] H. Wensink and W. Alpers, “SAR-Based Bathymetry,” in Encyclopedia
of Remote Sensing, ser. Encyclopedia of Earth Sciences Series, E. G.
Njoku, Ed. Springer New York, 2014, pp. 719–722.
[10] F. C. Polcyn, W. L. Brown, and I. J. Sattinger, “The Measurement
of Water Depth by Remote Sensing Techniques,” The University of
Michigan, Ann Arbor, Willow Run Laboratories, Tech. Rep. 8973-26-F,
Oct. 1970.
[11] A. G. Dekker, S. R. Phinn, J. Anstee, P. Bissett, V. E. Brando, B. Casey,
P. Fearns, J. Hedley, W. Klonowski, and Z. P. Lee, “Intercomparison
of shallow water bathymetry, hydro-optics, and benthos mapping tech-
niques in Australian and Caribbean coastal environments,” Limnology
and Oceanography: Methods, vol. 9, pp. 396–425, 2011.
[12] Z. Lee, K. L. Carder, C. D. Mobley, R. G. Steward, and J. S. Patch,
“Hyperspectral remote sensing for shallow waters. 2. Deriving bottom
depths and water properties by optimization,” Applied Optics, vol. 38,
no. 18, pp. 3831–3843, 1999.
[13] E. P. Green, P. J. Mumby, A. Edwards, and C. Clark, “Remote Sensing
Handbook for Tropical Coastal Management,” 2005.
[14] N. S. Altman, “An Introduction to Kernel and Nearest-Neighbor Non-
parametric Regression,” The American Statistician, vol. 46, no. 3, pp.
175–185, Aug. 1992.
[15] D. R. Lyzenga, N. Malinas, and F. Tanis, “Multispectral bathymetry
using a simple physically based algorithm,” Geoscience and Remote
Sensing, IEEE Transactions on, vol. 44, no. 8, pp. 2251 –2259, Aug.
2006.
[16] W. J. Ballantine and D. P. Gordon, “New Zealand’s first marine reserve,
Cape Rodney to Okakari point, Leigh,” Biological Conservation, vol. 15,
no. 4, pp. 273–280, Jun. 1979.
[17] T. Ayling, “Okakari Point to Cape Rodney marine reserve: a biolog-
ical survey,” Leigh Marine Laboratory, University of Auckland, New
Zealand, Technical Report, 1978.
[18] D. M. Parsons, N. T. Shears, R. C. Babcock, and T. R. Haggitt,
“Fine-scale habitat change in a marine reserve, mapped using radio-
acoustically positioned video transects,” Marine and Freshwater Re-
search, vol. 55, no. 3, pp. 257–265, 2004.
[19] K. Leleu, B. Remy-Zephir, R. Grace, and M. J. Costello, “Mapping
habitats in a marine reserve showed how a 30-year trophic cascade
altered ecosystem structure,” Biological Conservation, vol. 155, pp. 193–
201, Oct. 2012.
[20] B. M. Seers and N. T. Shears, “Spatio-temporal patterns in coastal
turbidity – Long-term trends and drivers of variation across an estuarine-
open coast gradient,” Estuarine, Coastal and Shelf Science, vol. 154, pp.
137–151, Mar. 2015.
[21] G. De’ath and K. Fabricius, “Water quality as a regional driver of
coral biodiversity and macroalgae on the Great Barrier Reef,” Ecological
Applications, vol. 20, no. 3, pp. 840–850, 2010.
[22] R. van Woesik, T. Tomascik, and S. Blake, “Coral assemblages and
physico-chemical characteristics of the Whitsunday Islands: evidence of
recent community changes,” Marine and Freshwater Research, vol. 50,
no. 5, p. 427, 1999.
[23] D. P. Gordon, Cape Rodney to Okakari Point Marine Reserve: review of
knowledge and bibliography to December 1976, ser. Tane. Supplement;
v. 22 (1976). Auckland, NZ: Auckland University Field Club, 1976.
[24] N. T. Shears, R. C. Babcock, C. A. J. Duffy, and J. W. Walker,
“Validation of qualitative habitat descriptors commonly used to classify
subtidal reef assemblages in north-eastern New Zealand,” New Zealand
Journal of Marine and Freshwater Research, vol. 38, no. 4, pp. 743–752,
2004.
[25] T. Updike and C. Comp, “Radiometric Use of WorldView-2 Imagery,”
2010.
[26] D. R. Lyzenga, “Shallow-water bathymetry using combined lidar and
passive multispectral scanner data,” International Journal of Remote
Sensing, vol. 6, no. 1, pp. 115–125, 1985.
[27] GRASS Development Team, “Geographic resources analysis support
system (GRASS) software, version 6.4.0. Open Source Geospatial
Foundation,” 2010.
[28] R. Baker and M. Watkins, Guidance Notes for the Determination of
Mean High Water Mark for Land Title Surveys. Professional Develop-
ment Committee of the New Zealand Institute of Surveyors, 1991.
[29] Quantum GIS Development Team, “Quantum GIS Geographic Information
System,” Open Source Geospatial Foundation Project, 2011.
[30] GDAL Development Team, “GDAL Geospatial Data Abstraction Li-
brary,” 2016.
[31] C. Tomasi and R. Manduchi, “Bilateral filtering for gray and color
images,” in Computer Vision, 1998. Sixth International Conference on.
IEEE, 1998, pp. 839–846.
S. van der Walt, J. L. Schönberger, J. Nunez-Iglesias, F. Boulogne,
J. D. Warner, N. Yager, E. Gouillart, and T. Yu, “scikit-image: image
processing in Python,” PeerJ, vol. 2, p. e453, Jun. 2014.
[33] D. R. Lyzenga, “Passive remote sensing techniques for mapping water
depth and bottom features,” Applied Optics, vol. 17, no. 3, pp. 379–383,
Feb. 1978.
[34] W. Ji, D. Civco, and W. Kennard, “Satellite remote bathymetry: a new
mechanism for modeling,” Photogrammetric Engineering and Remote
Sensing, vol. 58, no. 5, pp. 545–549, 1992.
[35] R. P. Stumpf, K. Holderied, and M. Sinclair, “Determination of water
depth with high-resolution satellite imagery over variable bottom types,”
Limnology and Oceanography, vol. 48, pp. 547–556, 2003.
[36] R. A. Armstrong, “Remote sensing of submerged vegetation canopies for
biomass estimation,” International Journal of Remote Sensing, vol. 14,
no. 3, pp. 621–627, Feb. 1993.
[37] D. Schweizer, R. A. Armstrong, and J. Posada, “Remote sensing
characterization of benthic habitats and submerged vegetation biomass
in Los Roques Archipelago National Park, Venezuela,” International
Journal of Remote Sensing, vol. 26, no. 12, pp. 2657–2667, 2005.
[38] R. D. King, C. Feng, and A. Sutherland, “Statlog: Comparison of
Classification Algorithms on Large Real-World Problems,” Applied
Artificial Intelligence, vol. 9, no. 3, pp. 289–333, May 1995.
[39] S. B. Imandoust and M. Bolandraftar, “Application of K-Nearest Neigh-
bor (KNN) Approach for Predicting Economic Events: Theoretical
Background,” Int. Journal of Engineering Research and Applications,
vol. 3, no. 5, pp. 605–610, 2013.
[40] J. Kibele, “Benthic Photo Survey: Software for Geotagging, Depth-
tagging, and Classifying Photos from Survey Data and Producing
Shapefiles for Habitat Mapping in GIS,” Journal of Open Research
Software, vol. 4, no. 1, Mar. 2016.
[41] H. Su, H. Liu, and W. D. Heyman, “Automated derivation of bathymetric
information from multi-spectral satellite imagery using a non-linear
inversion model,” Marine Geodesy, vol. 31, no. 4, pp. 281–298, 2008.
[42] P. J. Werdell and C. Roesler, “Remote assessment of benthic substrate
composition in shallow waters using multispectral reflectance,” Limnol-
ogy and Oceanography, vol. 48, pp. 557–567, 2003.
[43] H. R. Gordon and W. R. McCluney, “Estimation of the Depth of Sunlight
Penetration in the Sea for Remote Sensing,” Applied Optics, vol. 14,
no. 2, p. 413, Feb. 1975.
[44] A. Kanno, Y. Tanaka, A. Kurosawa, and M. Sekine, “Generalized
Lyzenga’s Predictor of Shallow Water Depth for Multispectral Satellite
Imagery,” Marine Geodesy, vol. 36, no. 4, pp. 365–376, Dec. 2013.
[45] J. P. Matthews and Y. Yoshikawa, “Synergistic surface current mapping
by spaceborne stereo imaging and coastal HF radar,” Geophysical
Research Letters, vol. 39, no. 17, p. L17606, Sep. 2012.
[46] J. Xu and D. Zhao, “Review of coral reef ecosystem remote sensing,”
Acta Ecologica Sinica, vol. 34, no. 1, pp. 19–25, Feb. 2014.
[47] A. Hommersom, M. R. Wernand, S. Peters, and J. d. Boer, “A review
on substances and processes relevant for optical remote sensing of
extremely turbid marine areas, with a focus on the Wadden Sea,”
Helgoland Marine Research, vol. 64, no. 2, pp. 75–92, Jun. 2010.
[48] M. A. Hamel and S. Andréfouët, “Using very high resolution remote sens-
ing for the management of coral reef fisheries: Review and perspectives,”
Marine Pollution Bulletin, vol. 60, no. 9, pp. 1397–1405, Sep. 2010.
[49] M. L. Zoffoli, R. Frouin, and M. Kampel, “Water Column Correction
for Coral Reef Studies by Remote Sensing,” Sensors, vol. 14, no. 9, pp.
16 881–16 931, Sep. 2014.
[50] P. J. Mumby, C. D. Clark, E. P. Green, and A. J. Edwards, “Benefits of
water column correction and contextual editing for mapping coral reefs,”
International Journal of Remote Sensing, vol. 19, no. 1, pp. 203–210,
1998.
[51] T. Sagawa, A. Mikami, M. N. Aoki, and T. Komatsu, “Mapping seaweed
forests with IKONOS image based on bottom surface reflectance,” Nov.
2012, pp. 85 250Q–85 250Q.
[52] A. Minghelli-Roman and C. Dupouy, “Correction of the Water Column
Attenuation: Application to the Seabed Mapping of the Lagoon of New
Caledonia Using MERIS Images,” IEEE Journal of Selected Topics in
Applied Earth Observations and Remote Sensing, vol. Early Access
Online, 2014.
[53] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion,
O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, and V. Dubourg,
“Scikit-learn: Machine learning in Python,” The Journal of Machine
Learning Research, vol. 12, pp. 2825–2830, 2011.
[54] F. Perez and B. Granger, “IPython: A System for Interactive Scientific
Computing,” Computing in Science Engineering, vol. 9, no. 3, pp. 21
–29, Jun. 2007.
Jared Kibele left a career in software development
to pursue marine science. He received an associate’s
degree in marine science and technology from Mon-
terey Peninsula College in 2004 and a bachelor’s
degree in Marine Biology from University of Cali-
fornia, Santa Cruz in 2006. In 2007 he was employed
by the Pacific States Marine Fisheries Commission
as a GIS analyst for California’s Marine Life Pro-
tection Act Initiative (MLPA). From there he went
to University of California, Santa Barbara where he
became the Senior GIS Analyst for MarineMap; a
web-based decision-support tool for the MLPA. 2011 was spent sailing from
California to New Zealand with his wife aboard their 31 foot ketch called
Architeuthis. In 2012 he began his PhD research at University of Auckland’s
Leigh Marine Lab under supervisor Nick Shears.
Nick Shears received a BSc in Biological Sciences
and a PhD in Marine Science from the University of
Auckland in 1997 and 2003 respectively. He carried
out a postdoctoral fellowship at the University of
California Santa Barbara from 2006-2009 before
returning to the University of Auckland as a Re-
search Fellow. In 2011 he was awarded a Ruther-
ford Discovery Fellowship and since 2012 has been
a Senior Lecturer at the University of Auckland’s
Leigh Marine Laboratory. His research has a strong
application to marine management and conservation,
focusing on the ecology of rocky reefs, monitoring and mapping changes in
marine habitats, and understanding the impact of human activities on marine
ecosystems.